[Mlir-commits] [mlir] 86771d0 - Introduce a ConditionallySpeculatable op interface
Sanjoy Das
llvmlistbot at llvm.org
Wed Oct 12 10:56:24 PDT 2022
Author: Sanjoy Das
Date: 2022-10-12T10:56:12-07:00
New Revision: 86771d0b65ee13242f89b8dfdf3c66f738eae4e5
URL: https://github.com/llvm/llvm-project/commit/86771d0b65ee13242f89b8dfdf3c66f738eae4e5
DIFF: https://github.com/llvm/llvm-project/commit/86771d0b65ee13242f89b8dfdf3c66f738eae4e5.diff
LOG: Introduce a ConditionallySpeculatable op interface
This patch takes the first step towards a more principled modeling of undefined behavior in MLIR as discussed in the following discourse threads:
1. https://discourse.llvm.org/t/semantics-modeling-undefined-behavior-and-side-effects/4812
2. https://discourse.llvm.org/t/rfc-mark-tensor-dim-and-memref-dim-as-side-effecting/65729
This patch in particular does the following:
1. Introduces a ConditionallySpeculatable OpInterface that dynamically determines whether an Operation can be speculated.
2. Re-defines `NoSideEffect` to allow undefined behavior, making it necessary but not sufficient for speculation. Also renames it to `NoMemoryEffect`.
3. Makes LICM respect the above semantics.
4. Changes all ops currently tagged with `NoSideEffect` to additionally implement ConditionallySpeculatable and mark themselves as always speculatable; this combined trait is named `Pure`, which keeps this change NFC.
For out of tree dialects:
1. Replace `NoSideEffect` with `Pure` if the operation does not have any memory effects, undefined behavior or infinite loops.
2. Replace `NoSideEffect` with `NoMemoryEffect` otherwise.
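As a sketch of this migration (the op names below are made up for illustration and are not real MLIR ops), the two cases look like this in ODS:

```tablegen
// Case 1: the op has no memory effects, no undefined behavior, and no
// infinite loops, so it can be tagged Pure (memory-effect free *and*
// always speculatable).
def MyAddOp : My_Op<"add", [Pure]> {
  let summary = "element-wise addition; never traps, so fully speculatable";
}

// Case 2: the op is memory-effect free but may have undefined behavior
// (e.g. an out-of-bounds access), so it keeps only NoMemoryEffect and
// must not be blindly speculated.
def MyExtractOp : My_Op<"extract", [NoMemoryEffect]> {
  let summary = "element extraction; UB on an out-of-bounds index";
}
```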
The next steps in this process are (I'm proposing to do these in upcoming patches):
1. Update operations like `tensor.dim`, `memref.dim`, `scf.for`, `affine.for` to implement a correct hook for `ConditionallySpeculatable`. I'm also happy to update ops in other dialects if the respective dialect owners would like me to and can give me some pointers.
2. Update other passes that speculate operations to consult `ConditionallySpeculatable` in addition to `NoMemoryEffect`. I could not find any other than LICM on a quick skim, but I could have missed some.
3. Add some documentation / FAQs detailing the differences between side effects, undefined behavior, and speculatability.
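To make next step 1 concrete, here is a hypothetical sketch of how an op like `tensor.dim` could opt into the dynamic hook rather than the always-speculatable `Pure` trait; the exact spelling is an assumption on my part, not something this patch implements:

```tablegen
// Hypothetical: implement ConditionallySpeculatable with a C++ hook
// instead of marking the op always speculatable.
def Tensor_DimOp : Tensor_Op<"dim", [ConditionallySpeculatable,
                                     NoMemoryEffect]> {
  let extraClassDeclaration = [{
    // Speculatable only when the dimension index is statically known to
    // be in bounds; otherwise the op may have undefined behavior and
    // must stay where it is.
    ::mlir::Speculation::Speculatability getSpeculatability();
  }];
}
```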
Reviewed By: rriddle, mehdi_amini
Differential Revision: https://reviews.llvm.org/D135505
Added:
Modified:
mlir/docs/OpDefinitions.md
mlir/docs/Tutorials/QuickstartRewrites.md
mlir/docs/Tutorials/Toy/Ch-3.md
mlir/docs/Tutorials/Toy/Ch-4.md
mlir/examples/standalone/include/Standalone/StandaloneOps.td
mlir/examples/toy/Ch2/include/toy/Ops.td
mlir/examples/toy/Ch3/include/toy/Ops.td
mlir/examples/toy/Ch4/include/toy/Ops.td
mlir/examples/toy/Ch5/include/toy/Ops.td
mlir/examples/toy/Ch6/include/toy/Ops.td
mlir/examples/toy/Ch7/include/toy/Ops.td
mlir/include/mlir/Dialect/AMDGPU/AMDGPU.td
mlir/include/mlir/Dialect/AMX/AMX.td
mlir/include/mlir/Dialect/Affine/IR/AffineOps.td
mlir/include/mlir/Dialect/Arith/IR/ArithOps.td
mlir/include/mlir/Dialect/ArmNeon/ArmNeon.td
mlir/include/mlir/Dialect/ArmSVE/ArmSVE.td
mlir/include/mlir/Dialect/Async/IR/AsyncOps.td
mlir/include/mlir/Dialect/Bufferization/IR/BufferizationOps.td
mlir/include/mlir/Dialect/Complex/IR/ComplexOps.td
mlir/include/mlir/Dialect/ControlFlow/IR/ControlFlowOps.td
mlir/include/mlir/Dialect/EmitC/IR/EmitC.td
mlir/include/mlir/Dialect/Func/IR/FuncOps.td
mlir/include/mlir/Dialect/GPU/IR/GPUOps.td
mlir/include/mlir/Dialect/LLVMIR/LLVMIntrinsicOps.td
mlir/include/mlir/Dialect/LLVMIR/LLVMOpBase.td
mlir/include/mlir/Dialect/LLVMIR/LLVMOps.td
mlir/include/mlir/Dialect/LLVMIR/NVVMOps.td
mlir/include/mlir/Dialect/LLVMIR/ROCDLOps.td
mlir/include/mlir/Dialect/Linalg/IR/LinalgOps.td
mlir/include/mlir/Dialect/MLProgram/IR/MLProgramOps.td
mlir/include/mlir/Dialect/Math/IR/MathOps.td
mlir/include/mlir/Dialect/MemRef/IR/MemRefOps.td
mlir/include/mlir/Dialect/NVGPU/IR/NVGPU.td
mlir/include/mlir/Dialect/OpenMP/OpenMPOps.td
mlir/include/mlir/Dialect/PDL/IR/PDLOps.td
mlir/include/mlir/Dialect/PDLInterp/IR/PDLInterpOps.td
mlir/include/mlir/Dialect/Quant/QuantOps.td
mlir/include/mlir/Dialect/SCF/IR/SCFOps.td
mlir/include/mlir/Dialect/SPIRV/IR/SPIRVArithmeticOps.td
mlir/include/mlir/Dialect/SPIRV/IR/SPIRVBitOps.td
mlir/include/mlir/Dialect/SPIRV/IR/SPIRVCLOps.td
mlir/include/mlir/Dialect/SPIRV/IR/SPIRVCastOps.td
mlir/include/mlir/Dialect/SPIRV/IR/SPIRVCompositeOps.td
mlir/include/mlir/Dialect/SPIRV/IR/SPIRVControlFlowOps.td
mlir/include/mlir/Dialect/SPIRV/IR/SPIRVCooperativeMatrixOps.td
mlir/include/mlir/Dialect/SPIRV/IR/SPIRVGLOps.td
mlir/include/mlir/Dialect/SPIRV/IR/SPIRVGroupOps.td
mlir/include/mlir/Dialect/SPIRV/IR/SPIRVImageOps.td
mlir/include/mlir/Dialect/SPIRV/IR/SPIRVJointMatrixOps.td
mlir/include/mlir/Dialect/SPIRV/IR/SPIRVLogicalOps.td
mlir/include/mlir/Dialect/SPIRV/IR/SPIRVMatrixOps.td
mlir/include/mlir/Dialect/SPIRV/IR/SPIRVMemoryOps.td
mlir/include/mlir/Dialect/SPIRV/IR/SPIRVMiscOps.td
mlir/include/mlir/Dialect/SPIRV/IR/SPIRVNonUniformOps.td
mlir/include/mlir/Dialect/SPIRV/IR/SPIRVStructureOps.td
mlir/include/mlir/Dialect/Shape/IR/ShapeOps.td
mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorOps.td
mlir/include/mlir/Dialect/Tensor/IR/TensorOps.td
mlir/include/mlir/Dialect/Tosa/IR/TosaOps.td
mlir/include/mlir/Dialect/Tosa/IR/TosaUtilOps.td
mlir/include/mlir/Dialect/Transform/IR/TransformOps.td
mlir/include/mlir/Dialect/Vector/IR/VectorOps.td
mlir/include/mlir/Dialect/X86Vector/X86Vector.td
mlir/include/mlir/IR/BuiltinOps.td
mlir/include/mlir/Interfaces/SideEffectInterfaceBase.td
mlir/include/mlir/Interfaces/SideEffectInterfaces.h
mlir/include/mlir/Interfaces/SideEffectInterfaces.td
mlir/include/mlir/Transforms/SideEffectUtils.h
mlir/lib/Analysis/AliasAnalysis/LocalAliasAnalysis.cpp
mlir/lib/Dialect/Affine/Utils/Utils.cpp
mlir/lib/Dialect/MemRef/IR/MemRefOps.cpp
mlir/lib/Dialect/Vector/Transforms/VectorDistribute.cpp
mlir/lib/Dialect/Vector/Transforms/VectorTransferOpTransforms.cpp
mlir/lib/Interfaces/SideEffectInterfaces.cpp
mlir/lib/Transforms/ControlFlowSink.cpp
mlir/lib/Transforms/Utils/LoopInvariantCodeMotionUtils.cpp
mlir/lib/Transforms/Utils/SideEffectUtils.cpp
mlir/test/Transforms/loop-invariant-code-motion.mlir
mlir/test/lib/Dialect/Test/TestOps.td
mlir/test/lib/Dialect/Transform/TestTransformDialectExtension.td
mlir/test/mlir-tblgen/gen-dialect-doc.td
mlir/test/mlir-tblgen/llvm-intrinsics.td
mlir/tools/mlir-tblgen/LLVMIRIntrinsicGen.cpp
Removed:
################################################################################
diff --git a/mlir/docs/OpDefinitions.md b/mlir/docs/OpDefinitions.md
index 88dac15e2ba0f..38376aa0ca2be 100644
--- a/mlir/docs/OpDefinitions.md
+++ b/mlir/docs/OpDefinitions.md
@@ -106,7 +106,7 @@ An operation is defined by specializing the `Op` class with concrete contents
for all the fields it requires. For example, `tf.AvgPool` is defined as
```tablegen
-def TF_AvgPoolOp : TF_Op<"AvgPool", [NoSideEffect]> {
+def TF_AvgPoolOp : TF_Op<"AvgPool", [NoMemoryEffect]> {
let summary = "Performs average pooling on the input.";
let description = [{
diff --git a/mlir/docs/Tutorials/QuickstartRewrites.md b/mlir/docs/Tutorials/QuickstartRewrites.md
index 69fe1ead41cb3..dd6bb4a831f0c 100644
--- a/mlir/docs/Tutorials/QuickstartRewrites.md
+++ b/mlir/docs/Tutorials/QuickstartRewrites.md
@@ -45,7 +45,7 @@ operations are generated from. To define an operation one needs to specify:
```tablegen
def TFL_LeakyReluOp: TFL_Op<TFL_Dialect, "leaky_relu",
- [NoSideEffect, SameValueType]>,
+ [NoMemoryEffect, SameValueType]>,
Results<(outs Tensor)> {
let arguments = (ins
F32Tensor:$x,
diff --git a/mlir/docs/Tutorials/Toy/Ch-3.md b/mlir/docs/Tutorials/Toy/Ch-3.md
index 19968eb088ed1..08e2d4036a70c 100644
--- a/mlir/docs/Tutorials/Toy/Ch-3.md
+++ b/mlir/docs/Tutorials/Toy/Ch-3.md
@@ -144,10 +144,10 @@ eliminated. That is not ideal! What happened is that our pattern replaced the
last transform with the function input and left behind the now dead transpose
input. The Canonicalizer knows to clean up dead operations; however, MLIR
conservatively assumes that operations may have side-effects. We can fix this by
-adding a new trait, `NoSideEffect`, to our `TransposeOp`:
+adding a new trait, `NoMemoryEffect`, to our `TransposeOp`:
```tablegen
-def TransposeOp : Toy_Op<"transpose", [NoSideEffect]> {...}
+def TransposeOp : Toy_Op<"transpose", [NoMemoryEffect]> {...}
```
Let's retry now `toyc-ch3 test/transpose_transpose.toy -emit=mlir -opt`:
diff --git a/mlir/docs/Tutorials/Toy/Ch-4.md b/mlir/docs/Tutorials/Toy/Ch-4.md
index 8497ab8fffe73..9584a169f60e3 100644
--- a/mlir/docs/Tutorials/Toy/Ch-4.md
+++ b/mlir/docs/Tutorials/Toy/Ch-4.md
@@ -222,7 +222,7 @@ casts between two different shapes.
```tablegen
def CastOp : Toy_Op<"cast", [
DeclareOpInterfaceMethods<CastOpInterface>,
- NoSideEffect,
+ NoMemoryEffect,
SameOperandsAndResultShape]
> {
let summary = "shape cast operation";
diff --git a/mlir/examples/standalone/include/Standalone/StandaloneOps.td b/mlir/examples/standalone/include/Standalone/StandaloneOps.td
index 1098dc57b690b..6c9da3149cb39 100644
--- a/mlir/examples/standalone/include/Standalone/StandaloneOps.td
+++ b/mlir/examples/standalone/include/Standalone/StandaloneOps.td
@@ -13,7 +13,7 @@ include "Standalone/StandaloneDialect.td"
include "mlir/Interfaces/InferTypeOpInterface.td"
include "mlir/Interfaces/SideEffectInterfaces.td"
-def Standalone_FooOp : Standalone_Op<"foo", [NoSideEffect,
+def Standalone_FooOp : Standalone_Op<"foo", [Pure,
SameOperandsAndResultType]> {
let summary = "Illustrates how to define an operation.";
let description = [{
diff --git a/mlir/examples/toy/Ch2/include/toy/Ops.td b/mlir/examples/toy/Ch2/include/toy/Ops.td
index 6462787381680..f98915f7cd091 100644
--- a/mlir/examples/toy/Ch2/include/toy/Ops.td
+++ b/mlir/examples/toy/Ch2/include/toy/Ops.td
@@ -43,9 +43,9 @@ class Toy_Op<string mnemonic, list<Trait> traits = []> :
// We define a toy operation by inheriting from our base 'Toy_Op' class above.
// Here we provide the mnemonic and a list of traits for the operation. The
-// constant operation is marked as 'NoSideEffect' as it is a pure operation
+// constant operation is marked as 'Pure' as it is a pure operation
// and may be removed if dead.
-def ConstantOp : Toy_Op<"constant", [NoSideEffect]> {
+def ConstantOp : Toy_Op<"constant", [Pure]> {
// Provide a summary and description for this operation. This can be used to
// auto-generate documentation of the operations within our dialect.
let summary = "constant";
@@ -265,7 +265,7 @@ def ReshapeOp : Toy_Op<"reshape"> {
// ReturnOp
//===----------------------------------------------------------------------===//
-def ReturnOp : Toy_Op<"return", [NoSideEffect, HasParent<"FuncOp">,
+def ReturnOp : Toy_Op<"return", [Pure, HasParent<"FuncOp">,
Terminator]> {
let summary = "return operation";
let description = [{
diff --git a/mlir/examples/toy/Ch3/include/toy/Ops.td b/mlir/examples/toy/Ch3/include/toy/Ops.td
index 580826bb4cb65..8525255a1f5d4 100644
--- a/mlir/examples/toy/Ch3/include/toy/Ops.td
+++ b/mlir/examples/toy/Ch3/include/toy/Ops.td
@@ -42,9 +42,9 @@ class Toy_Op<string mnemonic, list<Trait> traits = []> :
// We define a toy operation by inheriting from our base 'Toy_Op' class above.
// Here we provide the mnemonic and a list of traits for the operation. The
-// constant operation is marked as 'NoSideEffect' as it is a pure operation
+// constant operation is marked as 'Pure' as it is a pure operation
// and may be removed if dead.
-def ConstantOp : Toy_Op<"constant", [NoSideEffect]> {
+def ConstantOp : Toy_Op<"constant", [Pure]> {
// Provide a summary and description for this operation. This can be used to
// auto-generate documentation of the operations within our dialect.
let summary = "constant";
@@ -88,7 +88,7 @@ def ConstantOp : Toy_Op<"constant", [NoSideEffect]> {
// AddOp
//===----------------------------------------------------------------------===//
-def AddOp : Toy_Op<"add", [NoSideEffect]> {
+def AddOp : Toy_Op<"add", [Pure]> {
let summary = "element-wise addition operation";
let description = [{
The "add" operation performs element-wise addition between two tensors.
@@ -199,7 +199,7 @@ def GenericCallOp : Toy_Op<"generic_call"> {
// MulOp
//===----------------------------------------------------------------------===//
-def MulOp : Toy_Op<"mul", [NoSideEffect]> {
+def MulOp : Toy_Op<"mul", [Pure]> {
let summary = "element-wise multiplication operation";
let description = [{
The "mul" operation performs element-wise multiplication between two
@@ -239,7 +239,7 @@ def PrintOp : Toy_Op<"print"> {
// ReshapeOp
//===----------------------------------------------------------------------===//
-def ReshapeOp : Toy_Op<"reshape", [NoSideEffect]> {
+def ReshapeOp : Toy_Op<"reshape", [Pure]> {
let summary = "tensor reshape operation";
let description = [{
Reshape operation is transforming its input tensor into a new tensor with
@@ -267,7 +267,7 @@ def ReshapeOp : Toy_Op<"reshape", [NoSideEffect]> {
// ReturnOp
//===----------------------------------------------------------------------===//
-def ReturnOp : Toy_Op<"return", [NoSideEffect, HasParent<"FuncOp">,
+def ReturnOp : Toy_Op<"return", [Pure, HasParent<"FuncOp">,
Terminator]> {
let summary = "return operation";
let description = [{
@@ -309,7 +309,7 @@ def ReturnOp : Toy_Op<"return", [NoSideEffect, HasParent<"FuncOp">,
// TransposeOp
//===----------------------------------------------------------------------===//
-def TransposeOp : Toy_Op<"transpose", [NoSideEffect]> {
+def TransposeOp : Toy_Op<"transpose", [Pure]> {
let summary = "transpose operation";
let arguments = (ins F64Tensor:$input);
diff --git a/mlir/examples/toy/Ch4/include/toy/Ops.td b/mlir/examples/toy/Ch4/include/toy/Ops.td
index 2bdae97bdd4db..bf1d41fade843 100644
--- a/mlir/examples/toy/Ch4/include/toy/Ops.td
+++ b/mlir/examples/toy/Ch4/include/toy/Ops.td
@@ -45,9 +45,9 @@ class Toy_Op<string mnemonic, list<Trait> traits = []> :
// We define a toy operation by inheriting from our base 'Toy_Op' class above.
// Here we provide the mnemonic and a list of traits for the operation. The
-// constant operation is marked as 'NoSideEffect' as it is a pure operation
+// constant operation is marked as 'Pure' as it is a pure operation
// and may be removed if dead.
-def ConstantOp : Toy_Op<"constant", [NoSideEffect]> {
+def ConstantOp : Toy_Op<"constant", [Pure]> {
// Provide a summary and description for this operation. This can be used to
// auto-generate documentation of the operations within our dialect.
let summary = "constant";
@@ -92,7 +92,7 @@ def ConstantOp : Toy_Op<"constant", [NoSideEffect]> {
//===----------------------------------------------------------------------===//
def AddOp : Toy_Op<"add",
- [NoSideEffect, DeclareOpInterfaceMethods<ShapeInferenceOpInterface>]> {
+ [Pure, DeclareOpInterfaceMethods<ShapeInferenceOpInterface>]> {
let summary = "element-wise addition operation";
let description = [{
The "add" operation performs element-wise addition between two tensors.
@@ -118,7 +118,7 @@ def AddOp : Toy_Op<"add",
def CastOp : Toy_Op<"cast", [
DeclareOpInterfaceMethods<CastOpInterface>,
DeclareOpInterfaceMethods<ShapeInferenceOpInterface>,
- NoSideEffect,
+ Pure,
SameOperandsAndResultShape
]> {
let summary = "shape cast operation";
@@ -231,7 +231,7 @@ def GenericCallOp : Toy_Op<"generic_call",
//===----------------------------------------------------------------------===//
def MulOp : Toy_Op<"mul",
- [NoSideEffect, DeclareOpInterfaceMethods<ShapeInferenceOpInterface>]> {
+ [Pure, DeclareOpInterfaceMethods<ShapeInferenceOpInterface>]> {
let summary = "element-wise multiplication operation";
let description = [{
The "mul" operation performs element-wise multiplication between two
@@ -271,7 +271,7 @@ def PrintOp : Toy_Op<"print"> {
// ReshapeOp
//===----------------------------------------------------------------------===//
-def ReshapeOp : Toy_Op<"reshape", [NoSideEffect]> {
+def ReshapeOp : Toy_Op<"reshape", [Pure]> {
let summary = "tensor reshape operation";
let description = [{
Reshape operation is transforming its input tensor into a new tensor with
@@ -299,7 +299,7 @@ def ReshapeOp : Toy_Op<"reshape", [NoSideEffect]> {
// ReturnOp
//===----------------------------------------------------------------------===//
-def ReturnOp : Toy_Op<"return", [NoSideEffect, HasParent<"FuncOp">,
+def ReturnOp : Toy_Op<"return", [Pure, HasParent<"FuncOp">,
Terminator]> {
let summary = "return operation";
let description = [{
@@ -342,7 +342,7 @@ def ReturnOp : Toy_Op<"return", [NoSideEffect, HasParent<"FuncOp">,
//===----------------------------------------------------------------------===//
def TransposeOp : Toy_Op<"transpose",
- [NoSideEffect, DeclareOpInterfaceMethods<ShapeInferenceOpInterface>]> {
+ [Pure, DeclareOpInterfaceMethods<ShapeInferenceOpInterface>]> {
let summary = "transpose operation";
let arguments = (ins F64Tensor:$input);
diff --git a/mlir/examples/toy/Ch5/include/toy/Ops.td b/mlir/examples/toy/Ch5/include/toy/Ops.td
index cd51e53c2c897..1123dd92baf7a 100644
--- a/mlir/examples/toy/Ch5/include/toy/Ops.td
+++ b/mlir/examples/toy/Ch5/include/toy/Ops.td
@@ -45,9 +45,9 @@ class Toy_Op<string mnemonic, list<Trait> traits = []> :
// We define a toy operation by inheriting from our base 'Toy_Op' class above.
// Here we provide the mnemonic and a list of traits for the operation. The
-// constant operation is marked as 'NoSideEffect' as it is a pure operation
+// constant operation is marked as 'Pure' as it is a pure operation
// and may be removed if dead.
-def ConstantOp : Toy_Op<"constant", [NoSideEffect]> {
+def ConstantOp : Toy_Op<"constant", [Pure]> {
// Provide a summary and description for this operation. This can be used to
// auto-generate documentation of the operations within our dialect.
let summary = "constant";
@@ -92,7 +92,7 @@ def ConstantOp : Toy_Op<"constant", [NoSideEffect]> {
//===----------------------------------------------------------------------===//
def AddOp : Toy_Op<"add",
- [NoSideEffect, DeclareOpInterfaceMethods<ShapeInferenceOpInterface>]> {
+ [Pure, DeclareOpInterfaceMethods<ShapeInferenceOpInterface>]> {
let summary = "element-wise addition operation";
let description = [{
The "add" operation performs element-wise addition between two tensors.
@@ -118,7 +118,7 @@ def AddOp : Toy_Op<"add",
def CastOp : Toy_Op<"cast", [
DeclareOpInterfaceMethods<CastOpInterface>,
DeclareOpInterfaceMethods<ShapeInferenceOpInterface>,
- NoSideEffect,
+ Pure,
SameOperandsAndResultShape
]> {
let summary = "shape cast operation";
@@ -231,7 +231,7 @@ def GenericCallOp : Toy_Op<"generic_call",
//===----------------------------------------------------------------------===//
def MulOp : Toy_Op<"mul",
- [NoSideEffect, DeclareOpInterfaceMethods<ShapeInferenceOpInterface>]> {
+ [Pure, DeclareOpInterfaceMethods<ShapeInferenceOpInterface>]> {
let summary = "element-wise multiplication operation";
let description = [{
The "mul" operation performs element-wise multiplication between two
@@ -272,7 +272,7 @@ def PrintOp : Toy_Op<"print"> {
// ReshapeOp
//===----------------------------------------------------------------------===//
-def ReshapeOp : Toy_Op<"reshape", [NoSideEffect]> {
+def ReshapeOp : Toy_Op<"reshape", [Pure]> {
let summary = "tensor reshape operation";
let description = [{
Reshape operation is transforming its input tensor into a new tensor with
@@ -300,7 +300,7 @@ def ReshapeOp : Toy_Op<"reshape", [NoSideEffect]> {
// ReturnOp
//===----------------------------------------------------------------------===//
-def ReturnOp : Toy_Op<"return", [NoSideEffect, HasParent<"FuncOp">,
+def ReturnOp : Toy_Op<"return", [Pure, HasParent<"FuncOp">,
Terminator]> {
let summary = "return operation";
let description = [{
@@ -343,7 +343,7 @@ def ReturnOp : Toy_Op<"return", [NoSideEffect, HasParent<"FuncOp">,
//===----------------------------------------------------------------------===//
def TransposeOp : Toy_Op<"transpose",
- [NoSideEffect, DeclareOpInterfaceMethods<ShapeInferenceOpInterface>]> {
+ [Pure, DeclareOpInterfaceMethods<ShapeInferenceOpInterface>]> {
let summary = "transpose operation";
let arguments = (ins F64Tensor:$input);
diff --git a/mlir/examples/toy/Ch6/include/toy/Ops.td b/mlir/examples/toy/Ch6/include/toy/Ops.td
index be432bf1d25de..9eab6c2353bed 100644
--- a/mlir/examples/toy/Ch6/include/toy/Ops.td
+++ b/mlir/examples/toy/Ch6/include/toy/Ops.td
@@ -45,9 +45,9 @@ class Toy_Op<string mnemonic, list<Trait> traits = []> :
// We define a toy operation by inheriting from our base 'Toy_Op' class above.
// Here we provide the mnemonic and a list of traits for the operation. The
-// constant operation is marked as 'NoSideEffect' as it is a pure operation
+// constant operation is marked as 'Pure' as it is a pure operation
// and may be removed if dead.
-def ConstantOp : Toy_Op<"constant", [NoSideEffect]> {
+def ConstantOp : Toy_Op<"constant", [Pure]> {
// Provide a summary and description for this operation. This can be used to
// auto-generate documentation of the operations within our dialect.
let summary = "constant";
@@ -92,7 +92,7 @@ def ConstantOp : Toy_Op<"constant", [NoSideEffect]> {
//===----------------------------------------------------------------------===//
def AddOp : Toy_Op<"add",
- [NoSideEffect, DeclareOpInterfaceMethods<ShapeInferenceOpInterface>]> {
+ [Pure, DeclareOpInterfaceMethods<ShapeInferenceOpInterface>]> {
let summary = "element-wise addition operation";
let description = [{
The "add" operation performs element-wise addition between two tensors.
@@ -118,7 +118,7 @@ def AddOp : Toy_Op<"add",
def CastOp : Toy_Op<"cast", [
DeclareOpInterfaceMethods<CastOpInterface>,
DeclareOpInterfaceMethods<ShapeInferenceOpInterface>,
- NoSideEffect,
+ Pure,
SameOperandsAndResultShape
]> {
let summary = "shape cast operation";
@@ -231,7 +231,7 @@ def GenericCallOp : Toy_Op<"generic_call",
//===----------------------------------------------------------------------===//
def MulOp : Toy_Op<"mul",
- [NoSideEffect, DeclareOpInterfaceMethods<ShapeInferenceOpInterface>]> {
+ [Pure, DeclareOpInterfaceMethods<ShapeInferenceOpInterface>]> {
let summary = "element-wise multiplication operation";
let description = [{
The "mul" operation performs element-wise multiplication between two
@@ -272,7 +272,7 @@ def PrintOp : Toy_Op<"print"> {
// ReshapeOp
//===----------------------------------------------------------------------===//
-def ReshapeOp : Toy_Op<"reshape", [NoSideEffect]> {
+def ReshapeOp : Toy_Op<"reshape", [Pure]> {
let summary = "tensor reshape operation";
let description = [{
Reshape operation is transforming its input tensor into a new tensor with
@@ -300,7 +300,7 @@ def ReshapeOp : Toy_Op<"reshape", [NoSideEffect]> {
// ReturnOp
//===----------------------------------------------------------------------===//
-def ReturnOp : Toy_Op<"return", [NoSideEffect, HasParent<"FuncOp">,
+def ReturnOp : Toy_Op<"return", [Pure, HasParent<"FuncOp">,
Terminator]> {
let summary = "return operation";
let description = [{
@@ -343,7 +343,7 @@ def ReturnOp : Toy_Op<"return", [NoSideEffect, HasParent<"FuncOp">,
//===----------------------------------------------------------------------===//
def TransposeOp : Toy_Op<"transpose",
- [NoSideEffect, DeclareOpInterfaceMethods<ShapeInferenceOpInterface>]> {
+ [Pure, DeclareOpInterfaceMethods<ShapeInferenceOpInterface>]> {
let summary = "transpose operation";
let arguments = (ins F64Tensor:$input);
diff --git a/mlir/examples/toy/Ch7/include/toy/Ops.td b/mlir/examples/toy/Ch7/include/toy/Ops.td
index 3ec211ac72579..5a58434f70d62 100644
--- a/mlir/examples/toy/Ch7/include/toy/Ops.td
+++ b/mlir/examples/toy/Ch7/include/toy/Ops.td
@@ -63,10 +63,10 @@ def Toy_Type : AnyTypeOf<[F64Tensor, Toy_StructType]>;
// We define a toy operation by inheriting from our base 'Toy_Op' class above.
// Here we provide the mnemonic and a list of traits for the operation. The
-// constant operation is marked as 'NoSideEffect' as it is a pure operation
+// constant operation is marked as 'Pure' as it is a pure operation
// and may be removed if dead.
def ConstantOp : Toy_Op<"constant",
- [ConstantLike, NoSideEffect,
+ [ConstantLike, Pure,
DeclareOpInterfaceMethods<ShapeInferenceOpInterface>]> {
// Provide a summary and description for this operation. This can be used to
// auto-generate documentation of the operations within our dialect.
@@ -115,7 +115,7 @@ def ConstantOp : Toy_Op<"constant",
//===----------------------------------------------------------------------===//
def AddOp : Toy_Op<"add",
- [NoSideEffect, DeclareOpInterfaceMethods<ShapeInferenceOpInterface>]> {
+ [Pure, DeclareOpInterfaceMethods<ShapeInferenceOpInterface>]> {
let summary = "element-wise addition operation";
let description = [{
The "add" operation performs element-wise addition between two tensors.
@@ -141,7 +141,7 @@ def AddOp : Toy_Op<"add",
def CastOp : Toy_Op<"cast", [
DeclareOpInterfaceMethods<CastOpInterface>,
DeclareOpInterfaceMethods<ShapeInferenceOpInterface>,
- NoSideEffect,
+ Pure,
SameOperandsAndResultShape
]> {
let summary = "shape cast operation";
@@ -255,7 +255,7 @@ def GenericCallOp : Toy_Op<"generic_call",
//===----------------------------------------------------------------------===//
def MulOp : Toy_Op<"mul",
- [NoSideEffect, DeclareOpInterfaceMethods<ShapeInferenceOpInterface>]> {
+ [Pure, DeclareOpInterfaceMethods<ShapeInferenceOpInterface>]> {
let summary = "element-wise multiplication operation";
let description = [{
The "mul" operation performs element-wise multiplication between two
@@ -296,7 +296,7 @@ def PrintOp : Toy_Op<"print"> {
// ReshapeOp
//===----------------------------------------------------------------------===//
-def ReshapeOp : Toy_Op<"reshape", [NoSideEffect]> {
+def ReshapeOp : Toy_Op<"reshape", [Pure]> {
let summary = "tensor reshape operation";
let description = [{
Reshape operation is transforming its input tensor into a new tensor with
@@ -324,7 +324,7 @@ def ReshapeOp : Toy_Op<"reshape", [NoSideEffect]> {
// ReturnOp
//===----------------------------------------------------------------------===//
-def ReturnOp : Toy_Op<"return", [NoSideEffect, HasParent<"FuncOp">,
+def ReturnOp : Toy_Op<"return", [Pure, HasParent<"FuncOp">,
Terminator]> {
let summary = "return operation";
let description = [{
@@ -366,7 +366,7 @@ def ReturnOp : Toy_Op<"return", [NoSideEffect, HasParent<"FuncOp">,
// StructAccessOp
//===----------------------------------------------------------------------===//
-def StructAccessOp : Toy_Op<"struct_access", [NoSideEffect]> {
+def StructAccessOp : Toy_Op<"struct_access", [Pure]> {
let summary = "struct access";
let description = [{
Access the Nth element of a value returning a struct type.
@@ -395,7 +395,7 @@ def StructAccessOp : Toy_Op<"struct_access", [NoSideEffect]> {
// StructConstantOp
//===----------------------------------------------------------------------===//
-def StructConstantOp : Toy_Op<"struct_constant", [ConstantLike, NoSideEffect]> {
+def StructConstantOp : Toy_Op<"struct_constant", [ConstantLike, Pure]> {
let summary = "struct constant";
let description = [{
Constant operation turns a literal struct value into an SSA value. The data
@@ -424,7 +424,7 @@ def StructConstantOp : Toy_Op<"struct_constant", [ConstantLike, NoSideEffect]> {
//===----------------------------------------------------------------------===//
def TransposeOp : Toy_Op<"transpose",
- [NoSideEffect, DeclareOpInterfaceMethods<ShapeInferenceOpInterface>]> {
+ [Pure, DeclareOpInterfaceMethods<ShapeInferenceOpInterface>]> {
let summary = "transpose operation";
let arguments = (ins F64Tensor:$input);
diff --git a/mlir/include/mlir/Dialect/AMDGPU/AMDGPU.td b/mlir/include/mlir/Dialect/AMDGPU/AMDGPU.td
index c5d776b1f7837..91b62bb85c494 100644
--- a/mlir/include/mlir/Dialect/AMDGPU/AMDGPU.td
+++ b/mlir/include/mlir/Dialect/AMDGPU/AMDGPU.td
@@ -216,7 +216,7 @@ def MFMAOutTypes : AnyTypeOf<[F64,
def AMDGPU_MFMAOp :
AMDGPU_Op<"mfma", [AllTypesMatch<["sourceA", "sourceB"]>,
AllTypesMatch<["destC", "destD"]>,
- NoSideEffect]>,
+ Pure]>,
Arguments<(ins
I32Attr:$m,
I32Attr:$n,
diff --git a/mlir/include/mlir/Dialect/AMX/AMX.td b/mlir/include/mlir/Dialect/AMX/AMX.td
index f665672d143d1..8c64a58f4d78d 100644
--- a/mlir/include/mlir/Dialect/AMX/AMX.td
+++ b/mlir/include/mlir/Dialect/AMX/AMX.td
@@ -78,7 +78,7 @@ class AMX_IntrOp<string mnemonic, int numResults, list<Trait> traits = []> :
// Tile reset.
//
-def TileZeroOp : AMX_Op<"tile_zero", [NoSideEffect]> {
+def TileZeroOp : AMX_Op<"tile_zero", [Pure]> {
let summary = "tile zero operation";
let description = [{
Zeroes the destination tile, with the shape defined by the 2-dim
@@ -106,7 +106,7 @@ def TileZeroOp : AMX_Op<"tile_zero", [NoSideEffect]> {
// Tile memory operations.
//
-def TileLoadOp : AMX_Op<"tile_load", [NoSideEffect]> {
+def TileLoadOp : AMX_Op<"tile_load", [Pure]> {
let summary = "tile load operation";
let description = [{
Loads a tile from memory defined by a base and indices, with the
@@ -171,7 +171,8 @@ def TileStoreOp : AMX_Op<"tile_store"> {
// Tile arithmetic operations.
//
-def TileMulFOp : AMX_Op<"tile_mulf", [NoSideEffect, AllTypesMatch<["acc", "res"]>]> {
+def TileMulFOp : AMX_Op<"tile_mulf", [
+ Pure, AllTypesMatch<["acc", "res"]>]> {
let summary = "tile multiplication operation (floating-point)";
let description = [{
Multiplies a "m x k" tile with a "k x n" tile and accumulates the results
@@ -206,7 +207,8 @@ def TileMulFOp : AMX_Op<"tile_mulf", [NoSideEffect, AllTypesMatch<["acc", "res"]
let hasVerifier = 1;
}
-def TileMulIOp : AMX_Op<"tile_muli", [NoSideEffect, AllTypesMatch<["acc", "res"]>]> {
+def TileMulIOp : AMX_Op<"tile_muli", [
+ Pure, AllTypesMatch<["acc", "res"]>]> {
let summary = "tile multiplication operation (integer)";
let description = [{
Multiplies a "m x k" tile with a "k x n" tile and accumulates the results
diff --git a/mlir/include/mlir/Dialect/Affine/IR/AffineOps.td b/mlir/include/mlir/Dialect/Affine/IR/AffineOps.td
index 5015046173bd4..df0ce36829c4c 100644
--- a/mlir/include/mlir/Dialect/Affine/IR/AffineOps.td
+++ b/mlir/include/mlir/Dialect/Affine/IR/AffineOps.td
@@ -34,7 +34,7 @@ class Affine_Op<string mnemonic, list<Trait> traits = []> :
def ImplicitAffineTerminator
: SingleBlockImplicitTerminator<"AffineYieldOp">;
-def AffineApplyOp : Affine_Op<"apply", [NoSideEffect]> {
+def AffineApplyOp : Affine_Op<"apply", [Pure]> {
let summary = "affine apply operation";
let description = [{
The affine.apply operation applies an [affine mapping](#affine-expressions)
@@ -105,8 +105,8 @@ def AffineApplyOp : Affine_Op<"apply", [NoSideEffect]> {
}
def AffineForOp : Affine_Op<"for",
- [AutomaticAllocationScope, ImplicitAffineTerminator, RecursiveSideEffects,
- DeclareOpInterfaceMethods<LoopLikeOpInterface,
+ [AutomaticAllocationScope, ImplicitAffineTerminator, RecursivelySpeculatable,
+ RecursiveMemoryEffects, DeclareOpInterfaceMethods<LoopLikeOpInterface,
["getSingleInductionVar", "getSingleLowerBound", "getSingleStep",
"getSingleUpperBound"]>,
DeclareOpInterfaceMethods<RegionBranchOpInterface,
@@ -349,8 +349,8 @@ def AffineForOp : Affine_Op<"for",
}
def AffineIfOp : Affine_Op<"if",
- [ImplicitAffineTerminator, RecursiveSideEffects,
- NoRegionArguments]> {
+ [ImplicitAffineTerminator, RecursivelySpeculatable,
+ RecursiveMemoryEffects, NoRegionArguments]> {
let summary = "if-then-else operation";
let description = [{
Syntax:
@@ -571,7 +571,7 @@ class AffineMinMaxOpBase<string mnemonic, list<Trait> traits = []> :
let hasVerifier = 1;
}
-def AffineMinOp : AffineMinMaxOpBase<"min", [NoSideEffect]> {
+def AffineMinOp : AffineMinMaxOpBase<"min", [Pure]> {
let summary = "min operation";
let description = [{
Syntax:
@@ -595,7 +595,7 @@ def AffineMinOp : AffineMinMaxOpBase<"min", [NoSideEffect]> {
}];
}
-def AffineMaxOp : AffineMinMaxOpBase<"max", [NoSideEffect]> {
+def AffineMaxOp : AffineMinMaxOpBase<"max", [Pure]> {
let summary = "max operation";
let description = [{
The "max" operation computes the maximum value result from a multi-result
@@ -610,8 +610,9 @@ def AffineMaxOp : AffineMinMaxOpBase<"max", [NoSideEffect]> {
}
def AffineParallelOp : Affine_Op<"parallel",
- [AutomaticAllocationScope, ImplicitAffineTerminator, RecursiveSideEffects,
- DeclareOpInterfaceMethods<LoopLikeOpInterface>, MemRefsNormalizable]> {
+ [AutomaticAllocationScope, ImplicitAffineTerminator, RecursivelySpeculatable,
+ RecursiveMemoryEffects, DeclareOpInterfaceMethods<LoopLikeOpInterface>,
+ MemRefsNormalizable]> {
let summary = "multi-index parallel band operation";
let description = [{
The "affine.parallel" operation represents a hyper-rectangular affine
@@ -902,7 +903,7 @@ def AffineStoreOp : AffineStoreOpBase<"store"> {
let hasVerifier = 1;
}
-def AffineYieldOp : Affine_Op<"yield", [NoSideEffect, Terminator, ReturnLike,
+def AffineYieldOp : Affine_Op<"yield", [Pure, Terminator, ReturnLike,
MemRefsNormalizable]> {
let summary = "Yield values to parent operation";
let description = [{
@@ -1064,7 +1065,7 @@ def AffineVectorStoreOp : AffineStoreOpBase<"vector_store"> {
//===----------------------------------------------------------------------===//
def AffineDelinearizeIndexOp : Affine_Op<"delinearize_index",
- [NoSideEffect]> {
+ [Pure]> {
let summary = "delinearize an index";
let description = [{
The `affine.delinearize_index` operation takes a single index value and
diff --git a/mlir/include/mlir/Dialect/Arith/IR/ArithOps.td b/mlir/include/mlir/Dialect/Arith/IR/ArithOps.td
index e15c1f934a250..692338eb8370e 100644
--- a/mlir/include/mlir/Dialect/Arith/IR/ArithOps.td
+++ b/mlir/include/mlir/Dialect/Arith/IR/ArithOps.td
@@ -21,7 +21,7 @@ include "mlir/IR/OpAsmInterface.td"
// Base class for Arith dialect ops. Ops in this dialect have no side
// effects and can be applied element-wise to vectors and tensors.
class Arith_Op<string mnemonic, list<Trait> traits = []> :
- Op<Arith_Dialect, mnemonic, traits # [NoSideEffect,
+ Op<Arith_Dialect, mnemonic, traits # [Pure,
DeclareOpInterfaceMethods<VectorUnrollOpInterface>] #
ElementwiseMappable.traits>;
@@ -127,7 +127,7 @@ class Arith_CompareOpOfAnyRank<string mnemonic, list<Trait> traits = []> :
//===----------------------------------------------------------------------===//
def Arith_ConstantOp : Op<Arith_Dialect, "constant",
- [ConstantLike, NoSideEffect,
+ [ConstantLike, Pure,
DeclareOpInterfaceMethods<OpAsmOpInterface, ["getAsmResultNames"]>,
AllTypesMatch<["value", "result"]>,
DeclareOpInterfaceMethods<InferIntRangeInterface>]> {
diff --git a/mlir/include/mlir/Dialect/ArmNeon/ArmNeon.td b/mlir/include/mlir/Dialect/ArmNeon/ArmNeon.td
index f3ce22d3c6570..3cf5939fbc723 100644
--- a/mlir/include/mlir/Dialect/ArmNeon/ArmNeon.td
+++ b/mlir/include/mlir/Dialect/ArmNeon/ArmNeon.td
@@ -64,7 +64,7 @@ class ArmNeon_OverloadedOperandsWithOneResultIntrOp<string mnemonic,
: ArmNeon_IntrOp<mnemonic, [0], overloadedOperands, 1, traits>;
def SMullOp : ArmNeon_OverloadedOneResultIntrOp<"smull", [
- NoSideEffect,
+ Pure,
AllTypesMatch<["a", "b"]>,
TypesMatchWith<
"res has same vector shape and element bitwidth scaled by 2 as a",
@@ -93,7 +93,7 @@ def SMullOp : ArmNeon_OverloadedOneResultIntrOp<"smull", [
}
def SdotOp : ArmNeon_OverloadedOperandsWithOneResultIntrOp<"sdot",[1], [
- NoSideEffect,
+ Pure,
AllTypesMatch<["b", "c"]>,
AllTypesMatch<["a", "res"]>,
TypesMatchWith<"res has the same number of elements as operand b",
@@ -126,7 +126,7 @@ class ArmNeon_2dOp<string mnemonic, list<Trait> traits = []>
/*traits=*/traits>;
def Sdot2dOp : ArmNeon_2dOp<"sdot", [
- NoSideEffect,
+ Pure,
AllTypesMatch<["b", "c"]>,
AllTypesMatch<["a", "res"]>,
PredOpTrait<
diff --git a/mlir/include/mlir/Dialect/ArmSVE/ArmSVE.td b/mlir/include/mlir/Dialect/ArmSVE/ArmSVE.td
index d3ff0584f4b19..5a6435a35cae3 100644
--- a/mlir/include/mlir/Dialect/ArmSVE/ArmSVE.td
+++ b/mlir/include/mlir/Dialect/ArmSVE/ArmSVE.td
@@ -94,7 +94,7 @@ class ScalableMaskedIOp<string mnemonic, string op_description,
}
def SdotOp : ArmSVE_Op<"sdot",
- [NoSideEffect,
+ [Pure,
AllTypesMatch<["src1", "src2"]>,
AllTypesMatch<["acc", "dst"]>,
]> {
@@ -125,7 +125,7 @@ def SdotOp : ArmSVE_Op<"sdot",
}
def SmmlaOp : ArmSVE_Op<"smmla",
- [NoSideEffect,
+ [Pure,
AllTypesMatch<["src1", "src2"]>,
AllTypesMatch<["acc", "dst"]>,
]> {
@@ -157,7 +157,7 @@ def SmmlaOp : ArmSVE_Op<"smmla",
}
def UdotOp : ArmSVE_Op<"udot",
- [NoSideEffect,
+ [Pure,
AllTypesMatch<["src1", "src2"]>,
AllTypesMatch<["acc", "dst"]>,
]> {
@@ -188,7 +188,7 @@ def UdotOp : ArmSVE_Op<"udot",
}
def UmmlaOp : ArmSVE_Op<"ummla",
- [NoSideEffect,
+ [Pure,
AllTypesMatch<["src1", "src2"]>,
AllTypesMatch<["acc", "dst"]>,
]> {
diff --git a/mlir/include/mlir/Dialect/Async/IR/AsyncOps.td b/mlir/include/mlir/Dialect/Async/IR/AsyncOps.td
index 0830e95fb15ff..d0584ef34b8e0 100644
--- a/mlir/include/mlir/Dialect/Async/IR/AsyncOps.td
+++ b/mlir/include/mlir/Dialect/Async/IR/AsyncOps.td
@@ -101,7 +101,7 @@ def Async_ExecuteOp :
def Async_YieldOp :
Async_Op<"yield", [
- HasParent<"ExecuteOp">, NoSideEffect, Terminator,
+ HasParent<"ExecuteOp">, Pure, Terminator,
DeclareOpInterfaceMethods<RegionBranchTerminatorOpInterface>]> {
let summary = "terminator for Async execute operation";
let description = [{
@@ -156,7 +156,7 @@ def Async_AwaitOp : Async_Op<"await"> {
}];
}
-def Async_CreateGroupOp : Async_Op<"create_group", [NoSideEffect]> {
+def Async_CreateGroupOp : Async_Op<"create_group", [Pure]> {
let summary = "creates an empty async group";
let description = [{
The `async.create_group` allocates an empty async group. Async tokens or
diff --git a/mlir/include/mlir/Dialect/Bufferization/IR/BufferizationOps.td b/mlir/include/mlir/Dialect/Bufferization/IR/BufferizationOps.td
index 13e82f40fea1d..bfdafafd3764d 100644
--- a/mlir/include/mlir/Dialect/Bufferization/IR/BufferizationOps.td
+++ b/mlir/include/mlir/Dialect/Bufferization/IR/BufferizationOps.td
@@ -345,7 +345,7 @@ def Bufferization_ToMemrefOp : Bufferization_Op<"to_memref", [
BufferizableOpInterface,
SameOperandsAndResultShape,
SameOperandsAndResultElementType,
- NoSideEffect,
+ Pure,
TypesMatchWith<"type of 'tensor' is the tensor equivalent of 'memref'",
"memref", "tensor",
"memref::getTensorTypeFromMemRefType($_self)">
diff --git a/mlir/include/mlir/Dialect/Complex/IR/ComplexOps.td b/mlir/include/mlir/Dialect/Complex/IR/ComplexOps.td
index defdf376772c1..b8d258e36a5f3 100644
--- a/mlir/include/mlir/Dialect/Complex/IR/ComplexOps.td
+++ b/mlir/include/mlir/Dialect/Complex/IR/ComplexOps.td
@@ -21,7 +21,7 @@ class Complex_Op<string mnemonic, list<Trait> traits = []>
// floating-point element type. These operations take two operands and return
// one result, all of which must be complex numbers of the same type.
class ComplexArithmeticOp<string mnemonic, list<Trait> traits = []> :
- Complex_Op<mnemonic, traits # [NoSideEffect, SameOperandsAndResultType,
+ Complex_Op<mnemonic, traits # [Pure, SameOperandsAndResultType,
Elementwise]> {
let arguments = (ins Complex<AnyFloat>:$lhs, Complex<AnyFloat>:$rhs);
let results = (outs Complex<AnyFloat>:$result);
@@ -32,7 +32,7 @@ class ComplexArithmeticOp<string mnemonic, list<Trait> traits = []> :
// floating-point element type. These operations take one operand and return
// one result; the operand must be a complex number.
class ComplexUnaryOp<string mnemonic, list<Trait> traits = []> :
- Complex_Op<mnemonic, traits # [NoSideEffect, Elementwise]> {
+ Complex_Op<mnemonic, traits # [Pure, Elementwise]> {
let arguments = (ins Complex<AnyFloat>:$complex);
let assemblyFormat = "$complex attr-dict `:` type($complex)";
}
@@ -100,7 +100,7 @@ def Atan2Op : ComplexArithmeticOp<"atan2"> {
//===----------------------------------------------------------------------===//
def ConstantOp : Complex_Op<"constant", [
- ConstantLike, NoSideEffect,
+ ConstantLike, Pure,
DeclareOpInterfaceMethods<OpAsmOpInterface, ["getAsmResultNames"]>
]> {
let summary = "complex number constant operation";
@@ -154,7 +154,7 @@ def CosOp : ComplexUnaryOp<"cos", [SameOperandsAndResultType]> {
//===----------------------------------------------------------------------===//
def CreateOp : Complex_Op<"create",
- [NoSideEffect,
+ [Pure,
AllTypesMatch<["real", "imaginary"]>,
TypesMatchWith<"complex element type matches real operand type",
"complex", "real",
@@ -203,7 +203,7 @@ def DivOp : ComplexArithmeticOp<"div"> {
//===----------------------------------------------------------------------===//
def EqualOp : Complex_Op<"eq",
- [NoSideEffect, AllTypesMatch<["lhs", "rhs"]>, Elementwise]> {
+ [Pure, AllTypesMatch<["lhs", "rhs"]>, Elementwise]> {
let summary = "computes whether two complex values are equal";
let description = [{
The `eq` op takes two complex numbers and returns whether they are equal.
@@ -378,7 +378,7 @@ def NegOp : ComplexUnaryOp<"neg", [SameOperandsAndResultType]> {
//===----------------------------------------------------------------------===//
def NotEqualOp : Complex_Op<"neq",
- [NoSideEffect, AllTypesMatch<["lhs", "rhs"]>, Elementwise]> {
+ [Pure, AllTypesMatch<["lhs", "rhs"]>, Elementwise]> {
let summary = "computes whether two complex values are not equal";
let description = [{
The `neq` op takes two complex numbers and returns whether they are not
diff --git a/mlir/include/mlir/Dialect/ControlFlow/IR/ControlFlowOps.td b/mlir/include/mlir/Dialect/ControlFlow/IR/ControlFlowOps.td
index 549745561970e..b396b2c2c1413 100644
--- a/mlir/include/mlir/Dialect/ControlFlow/IR/ControlFlowOps.td
+++ b/mlir/include/mlir/Dialect/ControlFlow/IR/ControlFlowOps.td
@@ -64,7 +64,7 @@ def AssertOp : CF_Op<"assert"> {
def BranchOp : CF_Op<"br", [
DeclareOpInterfaceMethods<BranchOpInterface, ["getSuccessorForOperands"]>,
- NoSideEffect, Terminator
+ Pure, Terminator
]> {
let summary = "branch operation";
let description = [{
@@ -113,7 +113,7 @@ def BranchOp : CF_Op<"br", [
def CondBranchOp : CF_Op<"cond_br",
[AttrSizedOperandSegments,
DeclareOpInterfaceMethods<BranchOpInterface, ["getSuccessorForOperands"]>,
- NoSideEffect, Terminator]> {
+ Pure, Terminator]> {
let summary = "conditional branch operation";
let description = [{
The `cond_br` terminator operation represents a conditional branch on a
@@ -228,7 +228,7 @@ def CondBranchOp : CF_Op<"cond_br",
def SwitchOp : CF_Op<"switch",
[AttrSizedOperandSegments,
DeclareOpInterfaceMethods<BranchOpInterface, ["getSuccessorForOperands"]>,
- NoSideEffect, Terminator]> {
+ Pure, Terminator]> {
let summary = "switch operation";
let description = [{
The `switch` terminator operation represents a switch on a signless integer
diff --git a/mlir/include/mlir/Dialect/EmitC/IR/EmitC.td b/mlir/include/mlir/Dialect/EmitC/IR/EmitC.td
index 813898aa08d08..69863641bbd9c 100644
--- a/mlir/include/mlir/Dialect/EmitC/IR/EmitC.td
+++ b/mlir/include/mlir/Dialect/EmitC/IR/EmitC.td
@@ -148,7 +148,7 @@ def EmitC_ConstantOp : EmitC_Op<"constant", [ConstantLike]> {
}
def EmitC_IncludeOp
- : EmitC_Op<"include", [NoSideEffect, HasParent<"ModuleOp">]> {
+ : EmitC_Op<"include", [Pure, HasParent<"ModuleOp">]> {
let summary = "Include operation";
let description = [{
The `include` operation allows to define a source file inclusion via the
diff --git a/mlir/include/mlir/Dialect/Func/IR/FuncOps.td b/mlir/include/mlir/Dialect/Func/IR/FuncOps.td
index fbae2449258fd..01298280f7a98 100644
--- a/mlir/include/mlir/Dialect/Func/IR/FuncOps.td
+++ b/mlir/include/mlir/Dialect/Func/IR/FuncOps.td
@@ -165,7 +165,7 @@ def CallIndirectOp : Func_Op<"call_indirect", [
//===----------------------------------------------------------------------===//
def ConstantOp : Func_Op<"constant",
- [ConstantLike, NoSideEffect,
+ [ConstantLike, Pure,
DeclareOpInterfaceMethods<OpAsmOpInterface, ["getAsmResultNames"]>]> {
let summary = "constant";
let description = [{
@@ -327,7 +327,7 @@ def FuncOp : Func_Op<"func", [
// ReturnOp
//===----------------------------------------------------------------------===//
-def ReturnOp : Func_Op<"return", [NoSideEffect, HasParent<"FuncOp">,
+def ReturnOp : Func_Op<"return", [Pure, HasParent<"FuncOp">,
MemRefsNormalizable, ReturnLike, Terminator]> {
let summary = "Function return operation";
let description = [{
diff --git a/mlir/include/mlir/Dialect/GPU/IR/GPUOps.td b/mlir/include/mlir/Dialect/GPU/IR/GPUOps.td
index f1d894a59455c..f5dedb04b0778 100644
--- a/mlir/include/mlir/Dialect/GPU/IR/GPUOps.td
+++ b/mlir/include/mlir/Dialect/GPU/IR/GPUOps.td
@@ -45,7 +45,7 @@ def GPU_DimensionAttr : EnumAttr<GPU_Dialect, GPU_Dimension, "dim">;
class GPU_IndexOp<string mnemonic, list<Trait> traits = []> :
GPU_Op<mnemonic, !listconcat(traits, [
- NoSideEffect, DeclareOpInterfaceMethods<InferIntRangeInterface>])>,
+ Pure, DeclareOpInterfaceMethods<InferIntRangeInterface>])>,
Arguments<(ins GPU_DimensionAttr:$dimension)>, Results<(outs Index)> {
let assemblyFormat = "$dimension attr-dict";
}
@@ -100,7 +100,7 @@ def GPU_ThreadIdOp : GPU_IndexOp<"thread_id"> {
}
def GPU_LaneIdOp : GPU_Op<"lane_id", [
- NoSideEffect, DeclareOpInterfaceMethods<InferIntRangeInterface>]> {
+ Pure, DeclareOpInterfaceMethods<InferIntRangeInterface>]> {
let description = [{
Returns the lane id within the subgroup (warp/wave).
@@ -114,7 +114,7 @@ def GPU_LaneIdOp : GPU_Op<"lane_id", [
}
def GPU_SubgroupIdOp : GPU_Op<"subgroup_id", [
- NoSideEffect, DeclareOpInterfaceMethods<InferIntRangeInterface>]>,
+ Pure, DeclareOpInterfaceMethods<InferIntRangeInterface>]>,
Arguments<(ins)>, Results<(outs Index:$result)> {
let description = [{
Returns the subgroup id, i.e. the index of the current subgroup within the
@@ -146,7 +146,7 @@ def GPU_GlobalIdOp : GPU_IndexOp<"global_id"> {
def GPU_NumSubgroupsOp : GPU_Op<"num_subgroups", [
- NoSideEffect, DeclareOpInterfaceMethods<InferIntRangeInterface>]>,
+ Pure, DeclareOpInterfaceMethods<InferIntRangeInterface>]>,
Arguments<(ins)>, Results<(outs Index:$result)> {
let description = [{
Returns the number of subgroups within a workgroup.
@@ -162,7 +162,7 @@ def GPU_NumSubgroupsOp : GPU_Op<"num_subgroups", [
}
def GPU_SubgroupSizeOp : GPU_Op<"subgroup_size", [
- NoSideEffect, DeclareOpInterfaceMethods<InferIntRangeInterface>]>,
+ Pure, DeclareOpInterfaceMethods<InferIntRangeInterface>]>,
Arguments<(ins)>, Results<(outs Index:$result)> {
let description = [{
Returns the number of threads within a subgroup.
@@ -612,7 +612,7 @@ def GPU_PrintfOp : GPU_Op<"printf", [MemoryEffects<[MemWrite]>]>,
}];
}
-def GPU_ReturnOp : GPU_Op<"return", [HasParent<"GPUFuncOp">, NoSideEffect,
+def GPU_ReturnOp : GPU_Op<"return", [HasParent<"GPUFuncOp">, Pure,
Terminator]>,
Arguments<(ins Variadic<AnyType>:$operands)>, Results<(outs)> {
let summary = "Terminator for GPU functions.";
@@ -629,7 +629,7 @@ def GPU_ReturnOp : GPU_Op<"return", [HasParent<"GPUFuncOp">, NoSideEffect,
}
def GPU_TerminatorOp : GPU_Op<"terminator", [HasParent<"LaunchOp">,
- NoSideEffect, Terminator]>,
+ Pure, Terminator]>,
Arguments<(ins)>, Results<(outs)> {
let summary = "Terminator for GPU launch regions.";
let description = [{
@@ -641,7 +641,7 @@ def GPU_TerminatorOp : GPU_Op<"terminator", [HasParent<"LaunchOp">,
let assemblyFormat = "attr-dict";
}
-def GPU_YieldOp : GPU_Op<"yield", [NoSideEffect, Terminator]>,
+def GPU_YieldOp : GPU_Op<"yield", [Pure, Terminator]>,
Arguments<(ins Variadic<AnyType>:$values)> {
let summary = "GPU yield operation";
let description = [{
@@ -761,7 +761,7 @@ def I32OrF32 : TypeConstraint<Or<[I32.predicate, F32.predicate]>,
"i32 or f32">;
def GPU_ShuffleOp : GPU_Op<
- "shuffle", [NoSideEffect, AllTypesMatch<["value", "shuffleResult"]>]>,
+ "shuffle", [Pure, AllTypesMatch<["value", "shuffleResult"]>]>,
Arguments<(ins I32OrF32:$value, I32:$offset, I32:$width,
GPU_ShuffleModeAttr:$mode)>,
Results<(outs I32OrF32:$shuffleResult, I1:$valid)> {
@@ -1164,7 +1164,7 @@ def GPU_SubgroupMmaStoreMatrixOp : GPU_Op<"subgroup_mma_store_matrix",
}
def GPU_SubgroupMmaComputeOp : GPU_Op<"subgroup_mma_compute",
- [NoSideEffect, AllTypesMatch<["opC", "res"]>]>{
+ [Pure, AllTypesMatch<["opC", "res"]>]>{
let summary = "GPU warp synchronous matrix multiply accumulate";
@@ -1202,7 +1202,7 @@ def GPU_SubgroupMmaComputeOp : GPU_Op<"subgroup_mma_compute",
}
def GPU_SubgroupMmaConstantMatrixOp : GPU_Op<"subgroup_mma_constant_matrix",
- [NoSideEffect,
+ [Pure,
TypesMatchWith<"value type matches element type of mma_matrix",
"res", "value",
"$_self.cast<gpu::MMAMatrixType>().getElementType()">]>{
@@ -1267,7 +1267,7 @@ def MMAElementWiseAttr : EnumAttr<GPU_Dialect, MMAElementWise,
"mma_element_wise">;
def GPU_SubgroupMmaElementwiseOp : GPU_Op<"subgroup_mma_elementwise",
- [NoSideEffect,
+ [Pure,
AllTypesMatch<["args"]>]>{
let summary = "GPU warp elementwise operation on a matrix";
diff --git a/mlir/include/mlir/Dialect/LLVMIR/LLVMIntrinsicOps.td b/mlir/include/mlir/Dialect/LLVMIR/LLVMIntrinsicOps.td
index 2146182677620..75e1050b06c63 100644
--- a/mlir/include/mlir/Dialect/LLVMIR/LLVMIntrinsicOps.td
+++ b/mlir/include/mlir/Dialect/LLVMIR/LLVMIntrinsicOps.td
@@ -12,35 +12,35 @@ include "mlir/Interfaces/InferTypeOpInterface.td"
class LLVM_UnaryIntrinsicOp<string func, list<Trait> traits = []> :
LLVM_OneResultIntrOp<func, [], [0],
- !listconcat([NoSideEffect, SameOperandsAndResultType], traits)> {
+ !listconcat([Pure, SameOperandsAndResultType], traits)> {
let arguments = (ins LLVM_Type:$in);
}
class LLVM_BinarySameArgsIntrinsicOp<string func, list<Trait> traits = []> :
LLVM_OneResultIntrOp<func, [], [0],
- !listconcat([NoSideEffect, SameOperandsAndResultType], traits)> {
+ !listconcat([Pure, SameOperandsAndResultType], traits)> {
let arguments = (ins LLVM_Type:$a, LLVM_Type:$b);
}
class LLVM_BinaryIntrinsicOp<string func, list<Trait> traits = []> :
LLVM_OneResultIntrOp<func, [], [0,1],
- !listconcat([NoSideEffect], traits)> {
+ !listconcat([Pure], traits)> {
let arguments = (ins LLVM_Type:$a, LLVM_Type:$b);
}
class LLVM_TernarySameArgsIntrinsicOp<string func, list<Trait> traits = []> :
LLVM_OneResultIntrOp<func, [], [0],
- !listconcat([NoSideEffect, SameOperandsAndResultType], traits)> {
+ !listconcat([Pure, SameOperandsAndResultType], traits)> {
let arguments = (ins LLVM_Type:$a, LLVM_Type:$b, LLVM_Type:$c);
}
class LLVM_CountZerosIntrinsicOp<string func, list<Trait> traits = []> :
LLVM_OneResultIntrOp<func, [], [0],
- !listconcat([NoSideEffect], traits)> {
+ !listconcat([Pure], traits)> {
let arguments = (ins LLVM_Type:$in, I1:$zero_undefined);
}
-def LLVM_AbsOp : LLVM_OneResultIntrOp<"abs", [], [0], [NoSideEffect]> {
+def LLVM_AbsOp : LLVM_OneResultIntrOp<"abs", [], [0], [Pure]> {
let arguments = (ins LLVM_Type:$in, I1:$is_int_min_poison);
}
@@ -389,7 +389,7 @@ def LLVM_MatrixTransposeOp : LLVM_Op<"intr.matrix.transpose"> {
/// Create a llvm.get.active.lane.mask to set a mask up to a given position.
def LLVM_GetActiveLaneMaskOp
- : LLVM_OneResultIntrOp<"get.active.lane.mask", [0], [0], [NoSideEffect]> {
+ : LLVM_OneResultIntrOp<"get.active.lane.mask", [0], [0], [Pure]> {
let arguments = (ins LLVM_Type:$base, LLVM_Type:$n);
let assemblyFormat = "$base `,` $n attr-dict `:` "
"type($base) `,` type($n) `to` type($res)";
@@ -468,7 +468,7 @@ def LLVM_vscale : LLVM_IntrOp<"vscale", [0], [], [], 1>;
/// Create a call to stepvector intrinsic.
def LLVM_StepVectorOp
- : LLVM_IntrOp<"experimental.stepvector", [0], [], [NoSideEffect], 1> {
+ : LLVM_IntrOp<"experimental.stepvector", [0], [], [Pure], 1> {
let arguments = (ins);
let results = (outs LLVM_Type:$res);
let assemblyFormat = "attr-dict `:` type($res)";
@@ -477,7 +477,7 @@ def LLVM_StepVectorOp
/// Create a call to vector.insert intrinsic
def LLVM_vector_insert
: LLVM_Op<"intr.vector.insert",
- [NoSideEffect, AllTypesMatch<["dstvec", "res"]>,
+ [Pure, AllTypesMatch<["dstvec", "res"]>,
PredOpTrait<"vectors are not bigger than 2^17 bits.", And<[
CPred<"getSrcVectorBitWidth() <= 131072">,
CPred<"getDstVectorBitWidth() <= 131072">
@@ -512,7 +512,7 @@ def LLVM_vector_insert
/// Create a call to vector.extract intrinsic
def LLVM_vector_extract
: LLVM_Op<"intr.vector.extract",
- [NoSideEffect,
+ [Pure,
PredOpTrait<"vectors are not bigger than 2^17 bits.", And<[
CPred<"getSrcVectorBitWidth() <= 131072">,
CPred<"getResVectorBitWidth() <= 131072">
@@ -548,7 +548,7 @@ def LLVM_vector_extract
//
class LLVM_VPBinaryBase<string mnem, Type element>
- : LLVM_OneResultIntrOp<"vp." # mnem, [0], [], [NoSideEffect]>,
+ : LLVM_OneResultIntrOp<"vp." # mnem, [0], [], [Pure]>,
Arguments<(ins LLVM_VectorOf<element>:$lhs, LLVM_VectorOf<element>:$rhs,
LLVM_VectorOf<I1>:$mask, I32:$evl)>;
@@ -557,14 +557,14 @@ class LLVM_VPBinaryI<string mnem> : LLVM_VPBinaryBase<mnem, AnyInteger>;
class LLVM_VPBinaryF<string mnem> : LLVM_VPBinaryBase<mnem, AnyFloat>;
class LLVM_VPUnaryBase<string mnem, Type element>
- : LLVM_OneResultIntrOp<"vp." # mnem, [0], [], [NoSideEffect]>,
+ : LLVM_OneResultIntrOp<"vp." # mnem, [0], [], [Pure]>,
Arguments<(ins LLVM_VectorOf<element>:$op,
LLVM_VectorOf<I1>:$mask, I32:$evl)>;
class LLVM_VPUnaryF<string mnem> : LLVM_VPUnaryBase<mnem, AnyFloat>;
class LLVM_VPTernaryBase<string mnem, Type element>
- : LLVM_OneResultIntrOp<"vp." # mnem, [0], [], [NoSideEffect]>,
+ : LLVM_OneResultIntrOp<"vp." # mnem, [0], [], [Pure]>,
Arguments<(ins LLVM_VectorOf<element>:$op1, LLVM_VectorOf<element>:$op2,
LLVM_VectorOf<element>:$op3, LLVM_VectorOf<I1>:$mask,
I32:$evl)>;
@@ -572,7 +572,7 @@ class LLVM_VPTernaryBase<string mnem, Type element>
class LLVM_VPTernaryF<string mnem> : LLVM_VPTernaryBase<mnem, AnyFloat>;
class LLVM_VPReductionBase<string mnem, Type element>
- : LLVM_OneResultIntrOp<"vp.reduce." # mnem, [], [1], [NoSideEffect]>,
+ : LLVM_OneResultIntrOp<"vp.reduce." # mnem, [], [1], [Pure]>,
    Arguments<(ins element:$start_value, LLVM_VectorOf<element>:$val,
LLVM_VectorOf<I1>:$mask, I32:$evl)>;
@@ -581,12 +581,12 @@ class LLVM_VPReductionI<string mnem> : LLVM_VPReductionBase<mnem, AnyInteger>;
class LLVM_VPReductionF<string mnem> : LLVM_VPReductionBase<mnem, AnyFloat>;
class LLVM_VPSelectBase<string mnem>
- : LLVM_OneResultIntrOp<"vp." # mnem, [], [1], [NoSideEffect]>,
+ : LLVM_OneResultIntrOp<"vp." # mnem, [], [1], [Pure]>,
Arguments<(ins LLVM_VectorOf<I1>:$cond, LLVM_AnyVector:$true_val,
LLVM_AnyVector:$false_val, I32:$evl)>;
class LLVM_VPCastBase<string mnem, Type element>
- : LLVM_OneResultIntrOp<"vp." # mnem, [0], [0], [NoSideEffect]>,
+ : LLVM_OneResultIntrOp<"vp." # mnem, [0], [0], [Pure]>,
Arguments<(ins LLVM_VectorOf<element>:$src,
LLVM_VectorOf<I1>:$mask, I32:$evl)>;
diff --git a/mlir/include/mlir/Dialect/LLVMIR/LLVMOpBase.td b/mlir/include/mlir/Dialect/LLVMIR/LLVMOpBase.td
index 9b80606a0e163..5275739f750ea 100644
--- a/mlir/include/mlir/Dialect/LLVMIR/LLVMOpBase.td
+++ b/mlir/include/mlir/Dialect/LLVMIR/LLVMOpBase.td
@@ -378,7 +378,7 @@ class LLVM_IntrOp<string mnem, list<int> overloadedResults,
//
// Sample use: derive an entry from this class and populate the fields.
//
-// def LLVM_Name : LLVM_ZeroResultIntrOp<"name", [0], [NoSideEffect]>,
+// def LLVM_Name : LLVM_ZeroResultIntrOp<"name", [0], [Pure]>,
// Arguments<(ins LLVM_Type, LLVM_Type)>;
//
// The mnemonic will be prefixed with "llvm.intr.", where the "llvm." part comes
@@ -405,14 +405,14 @@ class LLVM_OneResultIntrOp<string mnem, list<int> overloadedResults = [],
// LLVM vector reduction over a single vector.
class LLVM_VectorReduction<string mnem>
: LLVM_OneResultIntrOp<"vector.reduce." # mnem,
- [], [0], [NoSideEffect]>,
+ [], [0], [Pure]>,
Arguments<(ins LLVM_Type)>;
// LLVM vector reduction over a single vector, with an initial value,
// and with permission to reassociate the reduction operations.
class LLVM_VectorReductionAcc<string mnem>
: LLVM_OpBase<LLVM_Dialect, "intr.vector.reduce." # mnem,
- [NoSideEffect]>,
+ [Pure]>,
Results<(outs LLVM_Type:$res)>,
Arguments<(ins LLVM_Type, LLVM_Type,
DefaultValuedAttr<BoolAttr, "false">:$reassoc)> {
diff --git a/mlir/include/mlir/Dialect/LLVMIR/LLVMOps.td b/mlir/include/mlir/Dialect/LLVMIR/LLVMOps.td
index b04ab7176e94c..acc56a932a08b 100644
--- a/mlir/include/mlir/Dialect/LLVMIR/LLVMOps.td
+++ b/mlir/include/mlir/Dialect/LLVMIR/LLVMOps.td
@@ -208,7 +208,7 @@ class LLVM_TerminatorOp<string mnemonic, list<Trait> traits = []> :
class LLVM_ArithmeticOpBase<Type type, string mnemonic,
string instName, list<Trait> traits = []> :
LLVM_Op<mnemonic,
- !listconcat([NoSideEffect, SameOperandsAndResultType], traits)>,
+ !listconcat([Pure, SameOperandsAndResultType], traits)>,
LLVM_Builder<"$res = builder.Create" # instName # "($lhs, $rhs);"> {
dag commonArgs = (ins LLVM_ScalarOrVectorOf<type>:$lhs,
LLVM_ScalarOrVectorOf<type>:$rhs);
@@ -237,7 +237,7 @@ class LLVM_FloatArithmeticOp<string mnemonic, string instName,
class LLVM_UnaryFloatArithmeticOp<Type type, string mnemonic,
string instName, list<Trait> traits = []> :
LLVM_Op<mnemonic,
- !listconcat([NoSideEffect, SameOperandsAndResultType, DeclareOpInterfaceMethods<FastmathFlagsInterface>], traits)>,
+ !listconcat([Pure, SameOperandsAndResultType, DeclareOpInterfaceMethods<FastmathFlagsInterface>], traits)>,
LLVM_Builder<"$res = builder.Create" # instName # "($operand);"> {
let arguments = (ins type:$operand, DefaultValuedAttr<LLVM_FMFAttr, "{}">:$fastmathFlags);
let results = (outs type:$res);
@@ -285,7 +285,7 @@ def ICmpPredicate : I64EnumAttr<
}
// Other integer operations.
-def LLVM_ICmpOp : LLVM_Op<"icmp", [NoSideEffect]> {
+def LLVM_ICmpOp : LLVM_Op<"icmp", [Pure]> {
let arguments = (ins ICmpPredicate:$predicate,
AnyTypeOf<[LLVM_ScalarOrVectorOf<AnyInteger>, LLVM_ScalarOrVectorOf<LLVM_AnyPointer>]>:$lhs,
AnyTypeOf<[LLVM_ScalarOrVectorOf<AnyInteger>, LLVM_ScalarOrVectorOf<LLVM_AnyPointer>]>:$rhs);
@@ -330,7 +330,7 @@ def FCmpPredicate : I64EnumAttr<
// Other floating-point operations.
def LLVM_FCmpOp : LLVM_Op<"fcmp", [
- NoSideEffect, DeclareOpInterfaceMethods<FastmathFlagsInterface>]> {
+ Pure, DeclareOpInterfaceMethods<FastmathFlagsInterface>]> {
let arguments = (ins FCmpPredicate:$predicate,
LLVM_ScalarOrVectorOf<LLVM_AnyFloat>:$lhs,
LLVM_ScalarOrVectorOf<LLVM_AnyFloat>:$rhs,
@@ -438,7 +438,7 @@ def LLVM_AllocaOp : LLVM_Op<"alloca">, MemoryOpWithAlignmentBase {
let hasVerifier = 1;
}
-def LLVM_GEPOp : LLVM_Op<"getelementptr", [NoSideEffect]> {
+def LLVM_GEPOp : LLVM_Op<"getelementptr", [Pure]> {
let arguments = (ins LLVM_ScalarOrVectorOf<LLVM_AnyPointer>:$base,
Variadic<LLVM_ScalarOrVectorOf<AnyInteger>>:$dynamicIndices,
DenseI32ArrayAttr:$rawConstantIndices,
@@ -583,7 +583,7 @@ def LLVM_StoreOp : LLVM_Op<"store">, MemoryOpWithAlignmentAndAttributes {
// Casts.
class LLVM_CastOp<string mnemonic, string instName, Type type,
Type resultType, list<Trait> traits = []> :
- LLVM_Op<mnemonic, !listconcat([NoSideEffect], traits)>,
+ LLVM_Op<mnemonic, !listconcat([Pure], traits)>,
LLVM_Builder<"$res = builder.Create" # instName # "($arg, $_resultType);"> {
let arguments = (ins type:$arg);
let results = (outs resultType:$res);
@@ -733,7 +733,7 @@ def LLVM_CallOp : LLVM_Op<"call",
// ExtractElementOp
//===----------------------------------------------------------------------===//
-def LLVM_ExtractElementOp : LLVM_Op<"extractelement", [NoSideEffect,
+def LLVM_ExtractElementOp : LLVM_Op<"extractelement", [Pure,
TypesMatchWith<"result type matches vector element type", "vector", "res",
"LLVM::getVectorElementType($_self)">]> {
let summary = "Extract an element from an LLVM vector.";
@@ -764,7 +764,7 @@ def LLVM_ExtractElementOp : LLVM_Op<"extractelement", [NoSideEffect,
// ExtractValueOp
//===----------------------------------------------------------------------===//
-def LLVM_ExtractValueOp : LLVM_Op<"extractvalue", [NoSideEffect]> {
+def LLVM_ExtractValueOp : LLVM_Op<"extractvalue", [Pure]> {
let summary = "Extract a value from an LLVM struct.";
let arguments = (ins LLVM_AnyAggregate:$container, DenseI64ArrayAttr:$position);
@@ -792,7 +792,7 @@ def LLVM_ExtractValueOp : LLVM_Op<"extractvalue", [NoSideEffect]> {
// InsertElementOp
//===----------------------------------------------------------------------===//
-def LLVM_InsertElementOp : LLVM_Op<"insertelement", [NoSideEffect,
+def LLVM_InsertElementOp : LLVM_Op<"insertelement", [Pure,
TypesMatchWith<"argument type matches vector element type", "vector",
"value", "LLVM::getVectorElementType($_self)">,
AllTypesMatch<["res", "vector"]>]> {
@@ -824,7 +824,7 @@ def LLVM_InsertElementOp : LLVM_Op<"insertelement", [NoSideEffect,
//===----------------------------------------------------------------------===//
def LLVM_InsertValueOp : LLVM_Op<
- "insertvalue", [NoSideEffect, AllTypesMatch<["container", "res"]>]> {
+ "insertvalue", [Pure, AllTypesMatch<["container", "res"]>]> {
let summary = "Insert a value into an LLVM struct.";
let arguments = (ins LLVM_AnyAggregate:$container, LLVM_PrimitiveType:$value,
@@ -850,7 +850,7 @@ def LLVM_InsertValueOp : LLVM_Op<
//===----------------------------------------------------------------------===//
def LLVM_ShuffleVectorOp : LLVM_Op<"shufflevector",
- [NoSideEffect, AllTypesMatch<["v1", "v2"]>]> {
+ [Pure, AllTypesMatch<["v1", "v2"]>]> {
let summary = "Construct a permutation of two vectors.";
let arguments = (ins LLVM_AnyVector:$v1, LLVM_AnyVector:$v2,
@@ -878,7 +878,7 @@ def LLVM_ShuffleVectorOp : LLVM_Op<"shufflevector",
// Misc operations.
def LLVM_SelectOp
: LLVM_Op<"select",
- [NoSideEffect, AllTypesMatch<["trueValue", "falseValue", "res"]>]>,
+ [Pure, AllTypesMatch<["trueValue", "falseValue", "res"]>]>,
LLVM_Builder<
"$res = builder.CreateSelect($condition, $trueValue, $falseValue);"> {
let arguments = (ins LLVM_ScalarOrVectorOf<I1>:$condition,
@@ -905,7 +905,7 @@ def LLVM_FreezeOp : LLVM_Op<"freeze", [SameOperandsAndResultType]> {
// Terminators.
def LLVM_BrOp : LLVM_TerminatorOp<"br",
- [DeclareOpInterfaceMethods<BranchOpInterface>, NoSideEffect]> {
+ [DeclareOpInterfaceMethods<BranchOpInterface>, Pure]> {
let arguments = (ins Variadic<LLVM_Type>:$destOperands);
let successors = (successor AnySuccessor:$dest);
let assemblyFormat = [{
@@ -915,7 +915,7 @@ def LLVM_BrOp : LLVM_TerminatorOp<"br",
}
def LLVM_CondBrOp : LLVM_TerminatorOp<"cond_br",
[AttrSizedOperandSegments, DeclareOpInterfaceMethods<BranchOpInterface>,
- NoSideEffect]> {
+ Pure]> {
let arguments = (ins I1:$condition,
Variadic<LLVM_Type>:$trueDestOperands,
Variadic<LLVM_Type>:$falseDestOperands,
@@ -955,7 +955,7 @@ def LLVM_CondBrOp : LLVM_TerminatorOp<"cond_br",
// ReturnOp
//===----------------------------------------------------------------------===//
-def LLVM_ReturnOp : LLVM_TerminatorOp<"return", [NoSideEffect]> {
+def LLVM_ReturnOp : LLVM_TerminatorOp<"return", [Pure]> {
let arguments = (ins Optional<LLVM_Type>:$arg);
let assemblyFormat = "attr-dict ($arg^ `:` type($arg))?";
@@ -1000,7 +1000,7 @@ def LLVM_UnreachableOp : LLVM_TerminatorOp<"unreachable"> {
def LLVM_SwitchOp : LLVM_TerminatorOp<"switch",
[AttrSizedOperandSegments, DeclareOpInterfaceMethods<BranchOpInterface>,
- NoSideEffect]> {
+ Pure]> {
let arguments = (ins
AnyInteger:$value,
Variadic<AnyType>:$defaultOperands,
@@ -1116,7 +1116,7 @@ def UnnamedAddr : LLVM_EnumAttr<
}
def LLVM_AddressOfOp : LLVM_Op<"mlir.addressof",
- [NoSideEffect, DeclareOpInterfaceMethods<SymbolUserOpInterface>]> {
+ [Pure, DeclareOpInterfaceMethods<SymbolUserOpInterface>]> {
let arguments = (ins FlatSymbolRefAttr:$global_name);
let results = (outs LLVM_AnyPointer:$res);
@@ -1578,7 +1578,7 @@ def LLVM_LLVMFuncOp : LLVM_Op<"func", [
}
def LLVM_NullOp
- : LLVM_Op<"mlir.null", [NoSideEffect]>,
+ : LLVM_Op<"mlir.null", [Pure]>,
LLVM_Builder<"$res = llvm::ConstantPointerNull::get("
" cast<llvm::PointerType>($_resultType));"> {
let summary = "Defines a value containing a null pointer to LLVM type.";
@@ -1604,7 +1604,7 @@ def LLVM_NullOp
let assemblyFormat = "attr-dict `:` type($res)";
}
-def LLVM_UndefOp : LLVM_Op<"mlir.undef", [NoSideEffect]>,
+def LLVM_UndefOp : LLVM_Op<"mlir.undef", [Pure]>,
LLVM_Builder<"$res = llvm::UndefValue::get($_resultType);"> {
let summary = "Creates an undefined value of LLVM dialect type.";
let description = [{
@@ -1626,7 +1626,7 @@ def LLVM_UndefOp : LLVM_Op<"mlir.undef", [NoSideEffect]>,
}
def LLVM_ConstantOp
- : LLVM_Op<"mlir.constant", [NoSideEffect, ConstantLike]>,
+ : LLVM_Op<"mlir.constant", [Pure, ConstantLike]>,
LLVM_Builder<[{$res = getLLVMConstant($_resultType, $value, $_location,
moduleTranslation);}]>
{
diff --git a/mlir/include/mlir/Dialect/LLVMIR/NVVMOps.td b/mlir/include/mlir/Dialect/LLVMIR/NVVMOps.td
index 9a94cf745b7db..6851b23cc997f 100644
--- a/mlir/include/mlir/Dialect/LLVMIR/NVVMOps.td
+++ b/mlir/include/mlir/Dialect/LLVMIR/NVVMOps.td
@@ -73,7 +73,7 @@ class NVVM_IntrOp<string mnem, list<Trait> traits,
//===----------------------------------------------------------------------===//
class NVVM_SpecialRegisterOp<string mnemonic, list<Trait> traits = []> :
- NVVM_IntrOp<mnemonic, !listconcat(traits, [NoSideEffect]), 1> {
+ NVVM_IntrOp<mnemonic, !listconcat(traits, [Pure]), 1> {
let arguments = (ins);
let assemblyFormat = "attr-dict `:` type($res)";
}
@@ -105,7 +105,7 @@ def NVVM_GridDimZOp : NVVM_SpecialRegisterOp<"read.ptx.sreg.nctaid.z">;
// NVVM approximate op definitions
//===----------------------------------------------------------------------===//
-def NVVM_RcpApproxFtzF32Op : NVVM_IntrOp<"rcp.approx.ftz.f", [NoSideEffect], 1> {
+def NVVM_RcpApproxFtzF32Op : NVVM_IntrOp<"rcp.approx.ftz.f", [Pure], 1> {
let arguments = (ins F32:$arg);
let results = (outs F32:$res);
let assemblyFormat = "$arg attr-dict `:` type($res)";
diff --git a/mlir/include/mlir/Dialect/LLVMIR/ROCDLOps.td b/mlir/include/mlir/Dialect/LLVMIR/ROCDLOps.td
index b99d514f15b72..fa6b5171547df 100644
--- a/mlir/include/mlir/Dialect/LLVMIR/ROCDLOps.td
+++ b/mlir/include/mlir/Dialect/LLVMIR/ROCDLOps.td
@@ -47,7 +47,7 @@ class ROCDL_Op<string mnemonic, list<Trait> traits = []> :
class ROCDL_SpecialRegisterOp<string mnemonic,
list<Trait> traits = []> :
- ROCDL_Op<mnemonic, !listconcat(traits, [NoSideEffect])>,
+ ROCDL_Op<mnemonic, !listconcat(traits, [Pure])>,
Results<(outs LLVM_Type:$res)>, Arguments<(ins)> {
string llvmBuilder = "$res = createIntrinsicCall(builder,"
# "llvm::Intrinsic::amdgcn_" # !subst(".","_", mnemonic) # ");";
@@ -56,7 +56,7 @@ class ROCDL_SpecialRegisterOp<string mnemonic,
class ROCDL_DeviceFunctionOp<string mnemonic, string device_function,
int parameter, list<Trait> traits = []> :
- ROCDL_Op<mnemonic, !listconcat(traits, [NoSideEffect])>,
+ ROCDL_Op<mnemonic, !listconcat(traits, [Pure])>,
Results<(outs LLVM_Type:$res)>, Arguments<(ins)> {
string llvmBuilder = "$res = createDeviceFunctionCall(builder, \""
# device_function # "\", " # parameter # ");";
diff --git a/mlir/include/mlir/Dialect/Linalg/IR/LinalgOps.td b/mlir/include/mlir/Dialect/Linalg/IR/LinalgOps.td
index aab039ee40c67..d9c1eec9ea959 100644
--- a/mlir/include/mlir/Dialect/Linalg/IR/LinalgOps.td
+++ b/mlir/include/mlir/Dialect/Linalg/IR/LinalgOps.td
@@ -24,7 +24,7 @@ include "mlir/Interfaces/ViewLikeInterface.td"
class Linalg_Op<string mnemonic, list<Trait> traits = []> :
Op<Linalg_Dialect, mnemonic, traits>;
-def Linalg_YieldOp : Linalg_Op<"yield", [NoSideEffect, ReturnLike, Terminator]>,
+def Linalg_YieldOp : Linalg_Op<"yield", [Pure, ReturnLike, Terminator]>,
Arguments<(ins Variadic<AnyType>:$values)> {
let summary = "Linalg yield operation";
let description = [{
@@ -43,7 +43,7 @@ def Linalg_YieldOp : Linalg_Op<"yield", [NoSideEffect, ReturnLike, Terminator]>,
let hasVerifier = 1;
}
-def Linalg_IndexOp : Linalg_Op<"index", [NoSideEffect]>,
+def Linalg_IndexOp : Linalg_Op<"index", [Pure]>,
Arguments<(ins ConfinedAttr<I64Attr, [IntMinValue<0>]>:$dim)>,
Results<(outs Index:$result)> {
let summary = "linalg index operation";
diff --git a/mlir/include/mlir/Dialect/MLProgram/IR/MLProgramOps.td b/mlir/include/mlir/Dialect/MLProgram/IR/MLProgramOps.td
index 69b1eab379b3c..82a559aab0664 100644
--- a/mlir/include/mlir/Dialect/MLProgram/IR/MLProgramOps.td
+++ b/mlir/include/mlir/Dialect/MLProgram/IR/MLProgramOps.td
@@ -203,7 +203,7 @@ def MLProgram_GlobalLoadOp : MLProgram_Op<"global_load", [
//===----------------------------------------------------------------------===//
def MLProgram_GlobalLoadConstOp : MLProgram_Op<"global_load_const", [
- NoSideEffect,
+ Pure,
DeclareOpInterfaceMethods<SymbolUserOpInterface>
]> {
let summary = "Direct load a constant value from a global";
@@ -443,7 +443,7 @@ def MLProgram_SubgraphOp : MLProgram_Op<"subgraph", [
//===----------------------------------------------------------------------===//
def MLProgram_OutputOp : MLProgram_Op<"output", [
- NoSideEffect, HasParent<"SubgraphOp">, ReturnLike, Terminator
+ Pure, HasParent<"SubgraphOp">, ReturnLike, Terminator
]> {
let summary = "Outputs values from a subgraph function";
let description = [{
@@ -469,7 +469,7 @@ def MLProgram_OutputOp : MLProgram_Op<"output", [
//===----------------------------------------------------------------------===//
def MLProgram_ReturnOp : MLProgram_Op<"return", [
- NoSideEffect, HasParent<"FuncOp">, ReturnLike, Terminator
+ Pure, HasParent<"FuncOp">, ReturnLike, Terminator
]> {
let summary = "Returns values from a `func` function";
let description = [{
@@ -495,7 +495,7 @@ def MLProgram_ReturnOp : MLProgram_Op<"return", [
//===----------------------------------------------------------------------===//
def MLProgram_TokenOp : MLProgram_Op<"token", [
- NoSideEffect
+ Pure
]> {
let summary = "Produces a new token value";
let description = [{
diff --git a/mlir/include/mlir/Dialect/Math/IR/MathOps.td b/mlir/include/mlir/Dialect/Math/IR/MathOps.td
index 149a0da2bb775..99e209000c0f5 100644
--- a/mlir/include/mlir/Dialect/Math/IR/MathOps.td
+++ b/mlir/include/mlir/Dialect/Math/IR/MathOps.td
@@ -17,7 +17,7 @@ include "mlir/Interfaces/SideEffectInterfaces.td"
// Base class for math dialect ops. Ops in this dialect have no side effects and
// can be applied element-wise to vectors and tensors.
class Math_Op<string mnemonic, list<Trait> traits = []> :
- Op<Math_Dialect, mnemonic, traits # [NoSideEffect,
+ Op<Math_Dialect, mnemonic, traits # [Pure,
DeclareOpInterfaceMethods<VectorUnrollOpInterface>] #
ElementwiseMappable.traits>;
diff --git a/mlir/include/mlir/Dialect/MemRef/IR/MemRefOps.td b/mlir/include/mlir/Dialect/MemRef/IR/MemRefOps.td
index 1381b342ce808..c94a531019209 100644
--- a/mlir/include/mlir/Dialect/MemRef/IR/MemRefOps.td
+++ b/mlir/include/mlir/Dialect/MemRef/IR/MemRefOps.td
@@ -327,7 +327,7 @@ def MemRef_AllocaScopeOp : MemRef_Op<"alloca_scope",
[AutomaticAllocationScope,
DeclareOpInterfaceMethods<RegionBranchOpInterface>,
SingleBlockImplicitTerminator<"AllocaScopeReturnOp">,
- RecursiveSideEffects,
+ RecursiveMemoryEffects,
NoRegionArguments]> {
let summary = "explicitly delimited scope for stack allocation";
let description = [{
@@ -375,7 +375,7 @@ def MemRef_AllocaScopeOp : MemRef_Op<"alloca_scope",
def MemRef_AllocaScopeReturnOp : MemRef_Op<"alloca_scope.return",
[HasParent<"AllocaScopeOp">,
- NoSideEffect,
+ Pure,
ReturnLike,
Terminator]> {
let summary = "terminator for alloca_scope operation";
@@ -404,7 +404,7 @@ def MemRef_CastOp : MemRef_Op<"cast", [
DeclareOpInterfaceMethods<CastOpInterface>,
DeclareOpInterfaceMethods<OpAsmOpInterface, ["getAsmResultNames"]>,
MemRefsNormalizable,
- NoSideEffect,
+ Pure,
SameOperandsAndResultShape,
ViewLikeOpInterface
]> {
@@ -544,7 +544,7 @@ def MemRef_DeallocOp : MemRef_Op<"dealloc", [MemRefsNormalizable]> {
def MemRef_DimOp : MemRef_Op<"dim", [
DeclareOpInterfaceMethods<OpAsmOpInterface, ["getAsmResultNames"]>,
MemRefsNormalizable,
- NoSideEffect,
+ Pure,
ShapedDimOpInterface]> {
let summary = "dimension index operation";
let description = [{
@@ -809,7 +809,7 @@ def MemRef_DmaWaitOp : MemRef_Op<"dma_wait"> {
def MemRef_ExtractAlignedPointerAsIndexOp :
MemRef_Op<"extract_aligned_pointer_as_index", [
DeclareOpInterfaceMethods<OpAsmOpInterface, ["getAsmResultNames"]>,
- NoSideEffect,
+ Pure,
SameVariadicResultSize]> {
let summary = "Extracts a memref's underlying aligned pointer as an index";
let description = [{
@@ -849,7 +849,7 @@ def MemRef_ExtractAlignedPointerAsIndexOp :
def MemRef_ExtractStridedMetadataOp : MemRef_Op<"extract_strided_metadata", [
DeclareOpInterfaceMethods<OpAsmOpInterface, ["getAsmResultNames"]>,
- NoSideEffect,
+ Pure,
SameVariadicResultSize]> {
let summary = "Extracts a buffer base with offset and strides";
let description = [{
@@ -974,7 +974,7 @@ def GenericAtomicRMWOp : MemRef_Op<"generic_atomic_rmw", [
def AtomicYieldOp : MemRef_Op<"atomic_yield", [
HasParent<"GenericAtomicRMWOp">,
- NoSideEffect,
+ Pure,
Terminator
]> {
let summary = "yield operation for GenericAtomicRMWOp";
@@ -993,7 +993,7 @@ def AtomicYieldOp : MemRef_Op<"atomic_yield", [
//===----------------------------------------------------------------------===//
def MemRef_GetGlobalOp : MemRef_Op<"get_global",
- [NoSideEffect, DeclareOpInterfaceMethods<SymbolUserOpInterface>]> {
+ [Pure, DeclareOpInterfaceMethods<SymbolUserOpInterface>]> {
let summary = "get the memref pointing to a global variable";
let description = [{
The `memref.get_global` operation retrieves the memref pointing to a
@@ -1209,7 +1209,7 @@ def MemRef_ReinterpretCastOp
DeclareOpInterfaceMethods<OpAsmOpInterface, ["getAsmResultNames"]>,
AttrSizedOperandSegments,
MemRefsNormalizable,
- NoSideEffect,
+ Pure,
OffsetSizeAndStrideOpInterface,
ViewLikeOpInterface
]> {
@@ -1305,7 +1305,7 @@ def MemRef_ReinterpretCastOp
// RankOp
//===----------------------------------------------------------------------===//
-def MemRef_RankOp : MemRef_Op<"rank", [NoSideEffect]> {
+def MemRef_RankOp : MemRef_Op<"rank", [Pure]> {
let summary = "rank operation";
let description = [{
The `memref.rank` operation takes a memref operand and returns its rank.
@@ -1331,7 +1331,7 @@ def MemRef_RankOp : MemRef_Op<"rank", [NoSideEffect]> {
def MemRef_ReshapeOp: MemRef_Op<"reshape", [
DeclareOpInterfaceMethods<OpAsmOpInterface, ["getAsmResultNames"]>,
- NoSideEffect,
+ Pure,
ViewLikeOpInterface]> {
let summary = "memref reshape operation";
let description = [{
@@ -1396,7 +1396,7 @@ def MemRef_ReshapeOp: MemRef_Op<"reshape", [
class MemRef_ReassociativeReshapeOp<string mnemonic, list<Trait> traits = []> :
MemRef_Op<mnemonic, !listconcat(traits,
- [NoSideEffect, ViewLikeOpInterface])>,
+ [Pure, ViewLikeOpInterface])>,
Arguments<(ins AnyStridedMemRef:$src, IndexListArrayAttr:$reassociation)>,
Results<(outs AnyStridedMemRef:$result)>{
@@ -1681,7 +1681,7 @@ def SubViewOp : MemRef_OpWithOffsetSizesAndStrides<"subview", [
DeclareOpInterfaceMethods<ViewLikeOpInterface>,
AttrSizedOperandSegments,
OffsetSizeAndStrideOpInterface,
- NoSideEffect
+ Pure
]> {
let summary = "memref subview operation";
let description = [{
@@ -1966,7 +1966,7 @@ def TensorStoreOp : MemRef_Op<"tensor_store",
def MemRef_TransposeOp : MemRef_Op<"transpose", [
DeclareOpInterfaceMethods<OpAsmOpInterface, ["getAsmResultNames"]>,
- NoSideEffect]>,
+ Pure]>,
Arguments<(ins AnyStridedMemRef:$in, AffineMapAttr:$permutation)>,
Results<(outs AnyStridedMemRef)> {
let summary = "`transpose` produces a new strided memref (metadata-only)";
@@ -2003,7 +2003,7 @@ def MemRef_TransposeOp : MemRef_Op<"transpose", [
def MemRef_ViewOp : MemRef_Op<"view", [
DeclareOpInterfaceMethods<OpAsmOpInterface, ["getAsmResultNames"]>,
DeclareOpInterfaceMethods<ViewLikeOpInterface>,
- NoSideEffect]> {
+ Pure]> {
let summary = "memref view operation";
let description = [{
The "view" operation extracts an N-D contiguous memref with empty layout map
diff --git a/mlir/include/mlir/Dialect/NVGPU/IR/NVGPU.td b/mlir/include/mlir/Dialect/NVGPU/IR/NVGPU.td
index d4b1c84fa2531..e638aa66db42b 100644
--- a/mlir/include/mlir/Dialect/NVGPU/IR/NVGPU.td
+++ b/mlir/include/mlir/Dialect/NVGPU/IR/NVGPU.td
@@ -98,7 +98,7 @@ def NVGPU_LdMatrixOp : NVGPU_Op<"ldmatrix", [
}
def NVGPU_MmaSyncOp : NVGPU_Op<"mma.sync", [
- NoSideEffect,
+ Pure,
PredOpTrait<"matrixA and matrixB have same element type",
TCopVTEtIsSameAs<0, 1>>]> {
let description = [{
diff --git a/mlir/include/mlir/Dialect/OpenMP/OpenMPOps.td b/mlir/include/mlir/Dialect/OpenMP/OpenMPOps.td
index a1d9df716e7ab..800da7c1e4448 100644
--- a/mlir/include/mlir/Dialect/OpenMP/OpenMPOps.td
+++ b/mlir/include/mlir/Dialect/OpenMP/OpenMPOps.td
@@ -49,7 +49,7 @@ def OpenMP_PointerLikeType : TypeAlias<OpenMP_PointerLikeTypeInterface,
def ParallelOp : OpenMP_Op<"parallel", [
AutomaticAllocationScope, AttrSizedOperandSegments,
DeclareOpInterfaceMethods<OutlineableOpenMPOpInterface>,
- RecursiveSideEffects, ReductionClauseInterface]> {
+ RecursiveMemoryEffects, ReductionClauseInterface]> {
let summary = "parallel construct";
let description = [{
The parallel construct includes a region of code which is to be executed
@@ -113,7 +113,7 @@ def ParallelOp : OpenMP_Op<"parallel", [
let hasVerifier = 1;
}
-def TerminatorOp : OpenMP_Op<"terminator", [Terminator, NoSideEffect]> {
+def TerminatorOp : OpenMP_Op<"terminator", [Terminator, Pure]> {
let summary = "terminator for OpenMP regions";
let description = [{
A terminator operation for regions that appear in the body of OpenMP
@@ -249,7 +249,7 @@ def SingleOp : OpenMP_Op<"single", [AttrSizedOperandSegments]> {
def WsLoopOp : OpenMP_Op<"wsloop", [AttrSizedOperandSegments,
AllTypesMatch<["lowerBound", "upperBound", "step"]>,
- RecursiveSideEffects, ReductionClauseInterface]> {
+ RecursiveMemoryEffects, ReductionClauseInterface]> {
let summary = "worksharing-loop construct";
let description = [{
The worksharing-loop construct specifies that the iterations of the loop(s)
@@ -435,7 +435,7 @@ def SimdLoopOp : OpenMP_Op<"simdloop", [AttrSizedOperandSegments,
def YieldOp : OpenMP_Op<"yield",
- [NoSideEffect, ReturnLike, Terminator,
+ [Pure, ReturnLike, Terminator,
ParentOneOf<["WsLoopOp", "ReductionDeclareOp",
"AtomicUpdateOp", "SimdLoopOp"]>]> {
let summary = "loop yield and termination operation";
@@ -543,7 +543,7 @@ def TaskOp : OpenMP_Op<"task", [AttrSizedOperandSegments,
}
def TaskLoopOp : OpenMP_Op<"taskloop", [AttrSizedOperandSegments,
- AutomaticAllocationScope, RecursiveSideEffects,
+ AutomaticAllocationScope, RecursiveMemoryEffects,
AllTypesMatch<["lowerBound", "upperBound", "step"]>,
ReductionClauseInterface]> {
let summary = "taskloop construct";
@@ -1097,7 +1097,7 @@ def AtomicWriteOp : OpenMP_Op<"atomic.write"> {
def AtomicUpdateOp : OpenMP_Op<"atomic.update",
[SingleBlockImplicitTerminator<"YieldOp">,
- RecursiveSideEffects]> {
+ RecursiveMemoryEffects]> {
let summary = "performs an atomic update";
diff --git a/mlir/include/mlir/Dialect/PDL/IR/PDLOps.td b/mlir/include/mlir/Dialect/PDL/IR/PDLOps.td
index c92cf4712cc05..fbe991a61a4cf 100644
--- a/mlir/include/mlir/Dialect/PDL/IR/PDLOps.td
+++ b/mlir/include/mlir/Dialect/PDL/IR/PDLOps.td
@@ -481,7 +481,7 @@ def PDL_ReplaceOp : PDL_Op<"replace", [
// pdl::ResultOp
//===----------------------------------------------------------------------===//
-def PDL_ResultOp : PDL_Op<"result", [NoSideEffect]> {
+def PDL_ResultOp : PDL_Op<"result", [Pure]> {
let summary = "Extract a result from an operation";
let description = [{
`pdl.result` operations extract result edges from an operation node within
@@ -513,7 +513,7 @@ def PDL_ResultOp : PDL_Op<"result", [NoSideEffect]> {
// pdl::ResultsOp
//===----------------------------------------------------------------------===//
-def PDL_ResultsOp : PDL_Op<"results", [NoSideEffect]> {
+def PDL_ResultsOp : PDL_Op<"results", [Pure]> {
let summary = "Extract a result group from an operation";
let description = [{
`pdl.results` operations extract a result group from an operation within a
diff --git a/mlir/include/mlir/Dialect/PDLInterp/IR/PDLInterpOps.td b/mlir/include/mlir/Dialect/PDLInterp/IR/PDLInterpOps.td
index f6a7c9a81c62c..659bfbcac8605 100644
--- a/mlir/include/mlir/Dialect/PDLInterp/IR/PDLInterpOps.td
+++ b/mlir/include/mlir/Dialect/PDLInterp/IR/PDLInterpOps.td
@@ -144,7 +144,7 @@ def PDLInterp_ApplyRewriteOp : PDLInterp_Op<"apply_rewrite"> {
//===----------------------------------------------------------------------===//
def PDLInterp_AreEqualOp
- : PDLInterp_PredicateOp<"are_equal", [NoSideEffect, SameTypeOperands]> {
+ : PDLInterp_PredicateOp<"are_equal", [Pure, SameTypeOperands]> {
let summary = "Check if two positional values or ranges are equivalent";
let description = [{
`pdl_interp.are_equal` operations compare two positional values for
@@ -166,7 +166,7 @@ def PDLInterp_AreEqualOp
// pdl_interp::BranchOp
//===----------------------------------------------------------------------===//
-def PDLInterp_BranchOp : PDLInterp_Op<"branch", [NoSideEffect, Terminator]> {
+def PDLInterp_BranchOp : PDLInterp_Op<"branch", [Pure, Terminator]> {
let summary = "General branch operation";
let description = [{
`pdl_interp.branch` operations expose general branch functionality to the
@@ -189,7 +189,7 @@ def PDLInterp_BranchOp : PDLInterp_Op<"branch", [NoSideEffect, Terminator]> {
//===----------------------------------------------------------------------===//
def PDLInterp_CheckAttributeOp
- : PDLInterp_PredicateOp<"check_attribute", [NoSideEffect]> {
+ : PDLInterp_PredicateOp<"check_attribute", [Pure]> {
let summary = "Check the value of an `Attribute`";
let description = [{
`pdl_interp.check_attribute` operations compare the value of a given
@@ -214,7 +214,7 @@ def PDLInterp_CheckAttributeOp
//===----------------------------------------------------------------------===//
def PDLInterp_CheckOperandCountOp
- : PDLInterp_PredicateOp<"check_operand_count", [NoSideEffect]> {
+ : PDLInterp_PredicateOp<"check_operand_count", [Pure]> {
let summary = "Check the number of operands of an `Operation`";
let description = [{
`pdl_interp.check_operand_count` operations compare the number of operands
@@ -248,7 +248,7 @@ def PDLInterp_CheckOperandCountOp
//===----------------------------------------------------------------------===//
def PDLInterp_CheckOperationNameOp
- : PDLInterp_PredicateOp<"check_operation_name", [NoSideEffect]> {
+ : PDLInterp_PredicateOp<"check_operation_name", [Pure]> {
let summary = "Check the OperationName of an `Operation`";
let description = [{
`pdl_interp.check_operation_name` operations compare the name of a given
@@ -271,7 +271,7 @@ def PDLInterp_CheckOperationNameOp
//===----------------------------------------------------------------------===//
def PDLInterp_CheckResultCountOp
- : PDLInterp_PredicateOp<"check_result_count", [NoSideEffect]> {
+ : PDLInterp_PredicateOp<"check_result_count", [Pure]> {
let summary = "Check the number of results of an `Operation`";
let description = [{
`pdl_interp.check_result_count` operations compare the number of results
@@ -305,7 +305,7 @@ def PDLInterp_CheckResultCountOp
//===----------------------------------------------------------------------===//
def PDLInterp_CheckTypeOp
- : PDLInterp_PredicateOp<"check_type", [NoSideEffect]> {
+ : PDLInterp_PredicateOp<"check_type", [Pure]> {
let summary = "Compare a type to a known value";
let description = [{
`pdl_interp.check_type` operations compare a type with a statically known
@@ -328,7 +328,7 @@ def PDLInterp_CheckTypeOp
//===----------------------------------------------------------------------===//
def PDLInterp_CheckTypesOp
- : PDLInterp_PredicateOp<"check_types", [NoSideEffect]> {
+ : PDLInterp_PredicateOp<"check_types", [Pure]> {
let summary = "Compare a range of types to a range of known values";
let description = [{
`pdl_interp.check_types` operations compare a range of types with a
@@ -352,7 +352,7 @@ def PDLInterp_CheckTypesOp
//===----------------------------------------------------------------------===//
def PDLInterp_ContinueOp
- : PDLInterp_Op<"continue", [NoSideEffect, HasParent<"ForEachOp">,
+ : PDLInterp_Op<"continue", [Pure, HasParent<"ForEachOp">,
Terminator]> {
let summary = "Breaks the current iteration";
let description = [{
@@ -375,7 +375,7 @@ def PDLInterp_ContinueOp
//===----------------------------------------------------------------------===//
def PDLInterp_CreateAttributeOp
- : PDLInterp_Op<"create_attribute", [NoSideEffect]> {
+ : PDLInterp_Op<"create_attribute", [Pure]> {
let summary = "Create an interpreter handle to a constant `Attribute`";
let description = [{
`pdl_interp.create_attribute` operations generate a handle within the
@@ -453,7 +453,7 @@ def PDLInterp_CreateOperationOp
// pdl_interp::CreateTypeOp
//===----------------------------------------------------------------------===//
-def PDLInterp_CreateTypeOp : PDLInterp_Op<"create_type", [NoSideEffect]> {
+def PDLInterp_CreateTypeOp : PDLInterp_Op<"create_type", [Pure]> {
let summary = "Create an interpreter handle to a constant `Type`";
let description = [{
`pdl_interp.create_type` operations generate a handle within the interpreter
@@ -481,7 +481,7 @@ def PDLInterp_CreateTypeOp : PDLInterp_Op<"create_type", [NoSideEffect]> {
// pdl_interp::CreateTypesOp
//===----------------------------------------------------------------------===//
-def PDLInterp_CreateTypesOp : PDLInterp_Op<"create_types", [NoSideEffect]> {
+def PDLInterp_CreateTypesOp : PDLInterp_Op<"create_types", [Pure]> {
let summary = "Create an interpreter handle to a range of constant `Type`s";
let description = [{
`pdl_interp.create_types` operations generate a handle within the
@@ -533,7 +533,7 @@ def PDLInterp_EraseOp : PDLInterp_Op<"erase"> {
//===----------------------------------------------------------------------===//
def PDLInterp_ExtractOp
- : PDLInterp_Op<"extract", [NoSideEffect,
+ : PDLInterp_Op<"extract", [Pure,
TypesMatchWith<
"`range` is a PDL range whose element type matches type of `result`",
"result", "range", "pdl::RangeType::get($_self)">]> {
@@ -569,7 +569,7 @@ def PDLInterp_ExtractOp
//===----------------------------------------------------------------------===//
def PDLInterp_FinalizeOp
- : PDLInterp_Op<"finalize", [NoSideEffect, Terminator]> {
+ : PDLInterp_Op<"finalize", [Pure, Terminator]> {
let summary = "Finalize a pattern match or rewrite sequence";
let description = [{
`pdl_interp.finalize` is used to denote the termination of a match or
@@ -681,7 +681,7 @@ def PDLInterp_FuncOp : PDLInterp_Op<"func", [
// pdl_interp::GetAttributeOp
//===----------------------------------------------------------------------===//
-def PDLInterp_GetAttributeOp : PDLInterp_Op<"get_attribute", [NoSideEffect]> {
+def PDLInterp_GetAttributeOp : PDLInterp_Op<"get_attribute", [Pure]> {
let summary = "Get a specified attribute value from an `Operation`";
let description = [{
`pdl_interp.get_attribute` operations try to get a specific attribute from
@@ -705,7 +705,7 @@ def PDLInterp_GetAttributeOp : PDLInterp_Op<"get_attribute", [NoSideEffect]> {
//===----------------------------------------------------------------------===//
def PDLInterp_GetAttributeTypeOp
- : PDLInterp_Op<"get_attribute_type", [NoSideEffect]> {
+ : PDLInterp_Op<"get_attribute_type", [Pure]> {
let summary = "Get the result type of a specified `Attribute`";
let description = [{
`pdl_interp.get_attribute_type` operations get the resulting type of a
@@ -734,7 +734,7 @@ def PDLInterp_GetAttributeTypeOp
//===----------------------------------------------------------------------===//
def PDLInterp_GetDefiningOpOp
- : PDLInterp_Op<"get_defining_op", [NoSideEffect]> {
+ : PDLInterp_Op<"get_defining_op", [Pure]> {
let summary = "Get the defining operation of a `Value`";
let description = [{
`pdl_interp.get_defining_op` operations try to get the defining operation
@@ -758,7 +758,7 @@ def PDLInterp_GetDefiningOpOp
// pdl_interp::GetOperandOp
//===----------------------------------------------------------------------===//
-def PDLInterp_GetOperandOp : PDLInterp_Op<"get_operand", [NoSideEffect]> {
+def PDLInterp_GetOperandOp : PDLInterp_Op<"get_operand", [Pure]> {
let summary = "Get a specified operand from an `Operation`";
let description = [{
`pdl_interp.get_operand` operations try to get a specific operand from an
@@ -782,7 +782,7 @@ def PDLInterp_GetOperandOp : PDLInterp_Op<"get_operand", [NoSideEffect]> {
// pdl_interp::GetOperandsOp
//===----------------------------------------------------------------------===//
-def PDLInterp_GetOperandsOp : PDLInterp_Op<"get_operands", [NoSideEffect]> {
+def PDLInterp_GetOperandsOp : PDLInterp_Op<"get_operands", [Pure]> {
let summary = "Get a specified operand group from an `Operation`";
let description = [{
`pdl_interp.get_operands` operations try to get a specific operand
@@ -825,7 +825,7 @@ def PDLInterp_GetOperandsOp : PDLInterp_Op<"get_operands", [NoSideEffect]> {
// pdl_interp::GetResultOp
//===----------------------------------------------------------------------===//
-def PDLInterp_GetResultOp : PDLInterp_Op<"get_result", [NoSideEffect]> {
+def PDLInterp_GetResultOp : PDLInterp_Op<"get_result", [Pure]> {
let summary = "Get a specified result from an `Operation`";
let description = [{
`pdl_interp.get_result` operations try to get a specific result from an
@@ -849,7 +849,7 @@ def PDLInterp_GetResultOp : PDLInterp_Op<"get_result", [NoSideEffect]> {
// pdl_interp::GetResultsOp
//===----------------------------------------------------------------------===//
-def PDLInterp_GetResultsOp : PDLInterp_Op<"get_results", [NoSideEffect]> {
+def PDLInterp_GetResultsOp : PDLInterp_Op<"get_results", [Pure]> {
let summary = "Get a specified result group from an `Operation`";
let description = [{
`pdl_interp.get_results` operations try to get a specific result group
@@ -898,7 +898,7 @@ def PDLInterp_GetResultsOp : PDLInterp_Op<"get_results", [NoSideEffect]> {
//===----------------------------------------------------------------------===//
def PDLInterp_GetUsersOp
- : PDLInterp_Op<"get_users", [NoSideEffect]> {
+ : PDLInterp_Op<"get_users", [Pure]> {
let summary = "Get the users of a `Value`";
let description = [{
`pdl_interp.get_users` extracts the users that accept this value. In the
@@ -933,7 +933,7 @@ def PDLInterp_GetUsersOp
// pdl_interp::GetValueTypeOp
//===----------------------------------------------------------------------===//
-def PDLInterp_GetValueTypeOp : PDLInterp_Op<"get_value_type", [NoSideEffect,
+def PDLInterp_GetValueTypeOp : PDLInterp_Op<"get_value_type", [Pure,
TypesMatchWith<"`value` type matches arity of `result`",
"result", "value", "getGetValueTypeOpValueType($_self)">]> {
let summary = "Get the result type of a specified `Value`";
@@ -973,7 +973,7 @@ def PDLInterp_GetValueTypeOp : PDLInterp_Op<"get_value_type", [NoSideEffect,
//===----------------------------------------------------------------------===//
def PDLInterp_IsNotNullOp
- : PDLInterp_PredicateOp<"is_not_null", [NoSideEffect]> {
+ : PDLInterp_PredicateOp<"is_not_null", [Pure]> {
let summary = "Check if a positional value is non-null";
let description = [{
`pdl_interp.is_not_null` operations check that a positional value or range
@@ -1061,7 +1061,7 @@ def PDLInterp_ReplaceOp : PDLInterp_Op<"replace"> {
//===----------------------------------------------------------------------===//
def PDLInterp_SwitchAttributeOp
- : PDLInterp_SwitchOp<"switch_attribute", [NoSideEffect]> {
+ : PDLInterp_SwitchOp<"switch_attribute", [Pure]> {
let summary = "Switch on the value of an `Attribute`";
let description = [{
`pdl_interp.switch_attribute` operations compare the value of a given
@@ -1094,7 +1094,7 @@ def PDLInterp_SwitchAttributeOp
//===----------------------------------------------------------------------===//
def PDLInterp_SwitchOperandCountOp
- : PDLInterp_SwitchOp<"switch_operand_count", [NoSideEffect]> {
+ : PDLInterp_SwitchOp<"switch_operand_count", [Pure]> {
let summary = "Switch on the operand count of an `Operation`";
let description = [{
`pdl_interp.switch_operand_count` operations compare the operand count of a
@@ -1128,7 +1128,7 @@ def PDLInterp_SwitchOperandCountOp
//===----------------------------------------------------------------------===//
def PDLInterp_SwitchOperationNameOp
- : PDLInterp_SwitchOp<"switch_operation_name", [NoSideEffect]> {
+ : PDLInterp_SwitchOp<"switch_operation_name", [Pure]> {
let summary = "Switch on the OperationName of an `Operation`";
let description = [{
`pdl_interp.switch_operation_name` operations compare the name of a given
@@ -1166,7 +1166,7 @@ def PDLInterp_SwitchOperationNameOp
//===----------------------------------------------------------------------===//
def PDLInterp_SwitchResultCountOp
- : PDLInterp_SwitchOp<"switch_result_count", [NoSideEffect]> {
+ : PDLInterp_SwitchOp<"switch_result_count", [Pure]> {
let summary = "Switch on the result count of an `Operation`";
let description = [{
`pdl_interp.switch_result_count` operations compare the result count of a
@@ -1199,7 +1199,7 @@ def PDLInterp_SwitchResultCountOp
// pdl_interp::SwitchTypeOp
//===----------------------------------------------------------------------===//
-def PDLInterp_SwitchTypeOp : PDLInterp_SwitchOp<"switch_type", [NoSideEffect]> {
+def PDLInterp_SwitchTypeOp : PDLInterp_SwitchOp<"switch_type", [Pure]> {
let summary = "Switch on a `Type` value";
let description = [{
`pdl_interp.switch_type` operations compare a type with a set of statically
@@ -1238,7 +1238,7 @@ def PDLInterp_SwitchTypeOp : PDLInterp_SwitchOp<"switch_type", [NoSideEffect]> {
//===----------------------------------------------------------------------===//
def PDLInterp_SwitchTypesOp : PDLInterp_SwitchOp<"switch_types",
- [NoSideEffect]> {
+ [Pure]> {
let summary = "Switch on a range of `Type` values";
let description = [{
`pdl_interp.switch_types` operations compare a range of types with a set of
diff --git a/mlir/include/mlir/Dialect/Quant/QuantOps.td b/mlir/include/mlir/Dialect/Quant/QuantOps.td
index de26f018b2e77..da0d1e9106394 100644
--- a/mlir/include/mlir/Dialect/Quant/QuantOps.td
+++ b/mlir/include/mlir/Dialect/Quant/QuantOps.td
@@ -47,7 +47,7 @@ class quant_Op<string mnemonic, list<Trait> traits> :
// (where the operand and result type are not quantized) at all points where
// it is legal to use a quantized representation (but is not known to be
// acceptable).
-def quant_QuantizeCastOp : quant_Op<"qcast", [NoSideEffect]> {
+def quant_QuantizeCastOp : quant_Op<"qcast", [Pure]> {
let arguments = (ins quant_RealValueType:$arg);
let results = (outs quant_RealValueType);
}
@@ -62,7 +62,7 @@ def quant_QuantizeCastOp : quant_Op<"qcast", [NoSideEffect]> {
// Especially early in transformation, it is common to have dcasts on
// all operands to ops that must operate with the expressed type (typically
// math ops prior to lowering to target-specific, quantized kernels).
-def quant_DequantizeCastOp : quant_Op<"dcast", [NoSideEffect]> {
+def quant_DequantizeCastOp : quant_Op<"dcast", [Pure]> {
let arguments = (ins quant_RealValueType:$arg);
let results = (outs quant_RealValueType);
}
@@ -78,7 +78,7 @@ def quant_DequantizeCastOp : quant_Op<"dcast", [NoSideEffect]> {
// i8 -> !quant<"uniform[i8:f32]{1.0}">
// tensor<4xi8> -> tensor<4x!quant<"uniform[i8:f32]{1.0}">>
// vector<4xi8> -> vector<4x!quant<"uniform[i8:f32]{1.0}">>
-def quant_StorageCastOp : quant_Op<"scast", [NoSideEffect]> {
+def quant_StorageCastOp : quant_Op<"scast", [Pure]> {
let arguments = (ins quant_RealOrStorageValueType:$arg);
let results = (outs quant_RealOrStorageValueType);
let hasFolder = 1;
diff --git a/mlir/include/mlir/Dialect/SCF/IR/SCFOps.td b/mlir/include/mlir/Dialect/SCF/IR/SCFOps.td
index b7bd4fd5586cb..be7dd8e3a2544 100644
--- a/mlir/include/mlir/Dialect/SCF/IR/SCFOps.td
+++ b/mlir/include/mlir/Dialect/SCF/IR/SCFOps.td
@@ -37,7 +37,7 @@ class SCF_Op<string mnemonic, list<Trait> traits = []> :
def ConditionOp : SCF_Op<"condition", [
HasParent<"WhileOp">,
DeclareOpInterfaceMethods<RegionBranchTerminatorOpInterface>,
- NoSideEffect,
+ Pure,
Terminator
]> {
let summary = "loop continuation condition";
@@ -121,7 +121,7 @@ def ForOp : SCF_Op<"for",
"getSingleUpperBound"]>,
DeclareOpInterfaceMethods<RegionBranchOpInterface>,
SingleBlockImplicitTerminator<"scf::YieldOp">,
- RecursiveSideEffects]> {
+ RecursiveMemoryEffects]> {
let summary = "for operation";
let description = [{
The "scf.for" operation represents a loop taking 3 SSA value as operands
@@ -345,7 +345,7 @@ def ForOp : SCF_Op<"for",
def ForeachThreadOp : SCF_Op<"foreach_thread", [
AttrSizedOperandSegments,
SingleBlockImplicitTerminator<"scf::PerformConcurrentlyOp">,
- RecursiveSideEffects,
+ RecursiveMemoryEffects,
AutomaticAllocationScope,
]> {
let summary = "evaluate a block multiple times in parallel";
@@ -545,7 +545,7 @@ def ForeachThreadOp : SCF_Op<"foreach_thread", [
//===----------------------------------------------------------------------===//
def PerformConcurrentlyOp : SCF_Op<"foreach_thread.perform_concurrently", [
- NoSideEffect,
+ Pure,
Terminator,
DeclareOpInterfaceMethods<ParallelCombiningOpInterface>,
HasParent<"ForeachThreadOp">,
@@ -589,7 +589,7 @@ def IfOp : SCF_Op<"if",
[DeclareOpInterfaceMethods<RegionBranchOpInterface,
["getNumRegionInvocations",
"getRegionInvocationBounds"]>,
- SingleBlockImplicitTerminator<"scf::YieldOp">, RecursiveSideEffects,
+ SingleBlockImplicitTerminator<"scf::YieldOp">, RecursiveMemoryEffects,
NoRegionArguments]> {
let summary = "if-then-else operation";
let description = [{
@@ -687,7 +687,7 @@ def ParallelOp : SCF_Op<"parallel",
[AutomaticAllocationScope,
AttrSizedOperandSegments,
DeclareOpInterfaceMethods<LoopLikeOpInterface>,
- RecursiveSideEffects,
+ RecursiveMemoryEffects,
SingleBlockImplicitTerminator<"scf::YieldOp">]> {
let summary = "parallel for operation";
let description = [{
@@ -826,7 +826,7 @@ def ReduceOp : SCF_Op<"reduce", [HasParent<"ParallelOp">]> {
//===----------------------------------------------------------------------===//
def ReduceReturnOp :
- SCF_Op<"reduce.return", [HasParent<"ReduceOp">, NoSideEffect,
+ SCF_Op<"reduce.return", [HasParent<"ReduceOp">, Pure,
Terminator]> {
let summary = "terminator for reduce operation";
let description = [{
@@ -850,7 +850,7 @@ def ReduceReturnOp :
def WhileOp : SCF_Op<"while",
[DeclareOpInterfaceMethods<RegionBranchOpInterface>,
- RecursiveSideEffects]> {
+ RecursiveMemoryEffects]> {
let summary = "a generic 'while' loop";
let description = [{
This operation represents a generic "while"/"do-while" loop that keeps
@@ -986,7 +986,7 @@ def WhileOp : SCF_Op<"while",
// YieldOp
//===----------------------------------------------------------------------===//
-def YieldOp : SCF_Op<"yield", [NoSideEffect, ReturnLike, Terminator,
+def YieldOp : SCF_Op<"yield", [Pure, ReturnLike, Terminator,
ParentOneOf<["ExecuteRegionOp, ForOp",
"IfOp, ParallelOp, WhileOp"]>]> {
let summary = "loop yield and termination operation";
diff --git a/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVArithmeticOps.td b/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVArithmeticOps.td
index 96d57e2de59f8..50b5c4932f0ca 100644
--- a/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVArithmeticOps.td
+++ b/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVArithmeticOps.td
@@ -23,7 +23,7 @@ class SPIRV_ArithmeticBinaryOp<string mnemonic, Type type,
// Operands type same as result type.
SPIRV_BinaryOp<mnemonic, type, type,
!listconcat(traits,
- [NoSideEffect, SameOperandsAndResultType])> {
+ [Pure, SameOperandsAndResultType])> {
// In addition to normal types arithmetic instructions can support cooperative
// matrix.
let arguments = (ins
@@ -42,7 +42,7 @@ class SPIRV_ArithmeticUnaryOp<string mnemonic, Type type,
// Operand type same as result type.
SPIRV_UnaryOp<mnemonic, type, type,
!listconcat(traits,
- [NoSideEffect, SameOperandsAndResultType])>;
+ [Pure, SameOperandsAndResultType])>;
// -----
@@ -312,7 +312,7 @@ def SPIRV_IAddOp : SPIRV_ArithmeticBinaryOp<"IAdd",
def SPIRV_IAddCarryOp : SPIRV_BinaryOp<"IAddCarry",
SPIRV_AnyStruct, SPIRV_Integer,
- [Commutative, NoSideEffect]> {
+ [Commutative, Pure]> {
let summary = [{
Integer addition of Operand 1 and Operand 2, including the carry.
}];
@@ -448,7 +448,7 @@ def SPIRV_ISubOp : SPIRV_ArithmeticBinaryOp<"ISub",
// -----
def SPIRV_ISubBorrowOp : SPIRV_BinaryOp<"ISubBorrow", SPIRV_AnyStruct, SPIRV_Integer,
- [NoSideEffect]> {
+ [Pure]> {
let summary = [{
Result is the unsigned integer subtraction of Operand 2 from Operand 1,
and what it needed to borrow.
@@ -680,7 +680,7 @@ def SPIRV_UDivOp : SPIRV_ArithmeticBinaryOp<"UDiv",
// -----
-def SPIRV_VectorTimesScalarOp : SPIRV_Op<"VectorTimesScalar", [NoSideEffect]> {
+def SPIRV_VectorTimesScalarOp : SPIRV_Op<"VectorTimesScalar", [Pure]> {
let summary = "Scale a floating-point vector.";
let description = [{
diff --git a/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVBitOps.td b/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVBitOps.td
index bda55c07861de..5e487aac663eb 100644
--- a/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVBitOps.td
+++ b/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVBitOps.td
@@ -21,13 +21,13 @@ class SPIRV_BitBinaryOp<string mnemonic, list<Trait> traits = []> :
// All the operands type used in bit instructions are SPIRV_Integer.
SPIRV_BinaryOp<mnemonic, SPIRV_Integer, SPIRV_Integer,
!listconcat(traits,
- [NoSideEffect, SameOperandsAndResultType])> {
+ [Pure, SameOperandsAndResultType])> {
let assemblyFormat = "operands attr-dict `:` type($result)";
}
class SPIRV_BitFieldExtractOp<string mnemonic, list<Trait> traits = []> :
SPIRV_Op<mnemonic, !listconcat(traits,
- [NoSideEffect, AllTypesMatch<["base", "result"]>])> {
+ [Pure, AllTypesMatch<["base", "result"]>])> {
let arguments = (ins
SPIRV_ScalarOrVectorOf<SPIRV_Integer>:$base,
SPIRV_Integer:$offset,
@@ -48,12 +48,12 @@ class SPIRV_BitFieldExtractOp<string mnemonic, list<Trait> traits = []> :
class SPIRV_BitUnaryOp<string mnemonic, list<Trait> traits = []> :
SPIRV_UnaryOp<mnemonic, SPIRV_Integer, SPIRV_Integer,
!listconcat(traits,
- [NoSideEffect, SameOperandsAndResultType])>;
+ [Pure, SameOperandsAndResultType])>;
class SPIRV_ShiftOp<string mnemonic, list<Trait> traits = []> :
SPIRV_BinaryOp<mnemonic, SPIRV_Integer, SPIRV_Integer,
!listconcat(traits,
- [NoSideEffect, SameOperandsAndResultShape,
+ [Pure, SameOperandsAndResultShape,
AllTypesMatch<["operand1", "result"]>])> {
let assemblyFormat = [{
operands attr-dict `:` type($operand1) `,` type($operand2)
@@ -101,7 +101,7 @@ def SPIRV_BitCountOp : SPIRV_BitUnaryOp<"BitCount", []> {
// -----
def SPIRV_BitFieldInsertOp : SPIRV_Op<"BitFieldInsert",
- [NoSideEffect, AllTypesMatch<["base", "insert", "result"]>]> {
+ [Pure, AllTypesMatch<["base", "insert", "result"]>]> {
let summary = [{
Make a copy of an object, with a modified bit field that comes from
another object.
diff --git a/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVCLOps.td b/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVCLOps.td
index 8f6171b4dd5a8..187622e55d35c 100644
--- a/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVCLOps.td
+++ b/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVCLOps.td
@@ -34,7 +34,7 @@ class SPIRV_CLOp<string mnemonic, int opcode, list<Trait> traits = []> :
// Base class for OpenCL unary ops.
class SPIRV_CLUnaryOp<string mnemonic, int opcode, Type resultType,
Type operandType, list<Trait> traits = []> :
- SPIRV_CLOp<mnemonic, opcode, !listconcat([NoSideEffect], traits)> {
+ SPIRV_CLOp<mnemonic, opcode, !listconcat([Pure], traits)> {
let arguments = (ins
SPIRV_ScalarOrVectorOf<operandType>:$operand
@@ -59,7 +59,7 @@ class SPIRV_CLUnaryArithmeticOp<string mnemonic, int opcode, Type type,
// Base class for OpenCL binary ops.
class SPIRV_CLBinaryOp<string mnemonic, int opcode, Type resultType,
Type operandType, list<Trait> traits = []> :
- SPIRV_CLOp<mnemonic, opcode, !listconcat([NoSideEffect], traits)> {
+ SPIRV_CLOp<mnemonic, opcode, !listconcat([Pure], traits)> {
let arguments = (ins
SPIRV_ScalarOrVectorOf<operandType>:$lhs,
@@ -85,7 +85,7 @@ class SPIRV_CLBinaryArithmeticOp<string mnemonic, int opcode, Type type,
// Base class for OpenCL binary ops.
class SPIRV_CLTernaryOp<string mnemonic, int opcode, Type resultType,
Type operandType, list<Trait> traits = []> :
- SPIRV_CLOp<mnemonic, opcode, !listconcat([NoSideEffect], traits)> {
+ SPIRV_CLOp<mnemonic, opcode, !listconcat([Pure], traits)> {
let arguments = (ins
SPIRV_ScalarOrVectorOf<operandType>:$x,
diff --git a/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVCastOps.td b/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVCastOps.td
index ab0683fca47cb..c985c6e94e19e 100644
--- a/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVCastOps.td
+++ b/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVCastOps.td
@@ -21,7 +21,7 @@ class SPIRV_CastOp<string mnemonic, Type resultType, Type operandType,
list<Trait> traits = []> :
SPIRV_Op<mnemonic,
!listconcat(traits,
- [NoSideEffect, SameOperandsAndResultShape])> {
+ [Pure, SameOperandsAndResultShape])> {
let arguments = (ins
SPIRV_ScalarOrVectorOrCoopMatrixOf<operandType>:$operand
);
@@ -36,7 +36,7 @@ class SPIRV_CastOp<string mnemonic, Type resultType, Type operandType,
// -----
-def SPIRV_BitcastOp : SPIRV_Op<"Bitcast", [NoSideEffect]> {
+def SPIRV_BitcastOp : SPIRV_Op<"Bitcast", [Pure]> {
let summary = "Bit pattern-preserving type conversion.";
let description = [{
@@ -332,7 +332,7 @@ def SPIRV_UConvertOp : SPIRV_CastOp<"UConvert",
}
// -----
-def SPIRV_PtrCastToGenericOp : SPIRV_Op<"PtrCastToGeneric", [NoSideEffect]> {
+def SPIRV_PtrCastToGenericOp : SPIRV_Op<"PtrCastToGeneric", [Pure]> {
let summary = "Convert a pointer’s Storage Class to Generic.";
let description = [{
@@ -375,7 +375,7 @@ def SPIRV_PtrCastToGenericOp : SPIRV_Op<"PtrCastToGeneric", [NoSideEffect]> {
// -----
-def SPIRV_GenericCastToPtrOp : SPIRV_Op<"GenericCastToPtr", [NoSideEffect]> {
+def SPIRV_GenericCastToPtrOp : SPIRV_Op<"GenericCastToPtr", [Pure]> {
let summary = "Convert a pointer’s Storage Class to a non-Generic class.";
let description = [{
@@ -418,7 +418,7 @@ def SPIRV_GenericCastToPtrOp : SPIRV_Op<"GenericCastToPtr", [NoSideEffect]> {
// -----
-def SPIRV_GenericCastToPtrExplicitOp : SPIRV_Op<"GenericCastToPtrExplicit", [NoSideEffect]> {
+def SPIRV_GenericCastToPtrExplicitOp : SPIRV_Op<"GenericCastToPtrExplicit", [Pure]> {
let summary = [{
Attempts to explicitly convert Pointer to Storage storage-class pointer
value.
diff --git a/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVCompositeOps.td b/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVCompositeOps.td
index b382c342c09f9..7147e522cdd7f 100644
--- a/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVCompositeOps.td
+++ b/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVCompositeOps.td
@@ -19,7 +19,7 @@ include "mlir/Interfaces/SideEffectInterfaces.td"
// -----
-def SPIRV_CompositeConstructOp : SPIRV_Op<"CompositeConstruct", [NoSideEffect]> {
+def SPIRV_CompositeConstructOp : SPIRV_Op<"CompositeConstruct", [Pure]> {
let summary = [{
Construct a new composite object from a set of constituent objects.
}];
@@ -73,7 +73,7 @@ def SPIRV_CompositeConstructOp : SPIRV_Op<"CompositeConstruct", [NoSideEffect]>
// -----
def SPIRV_CompositeExtractOp : SPIRV_Op<"CompositeExtract",
- [NoSideEffect, UsableInSpecConstantOp]> {
+ [Pure, UsableInSpecConstantOp]> {
let summary = "Extract a part of a composite object.";
let description = [{
@@ -124,7 +124,7 @@ def SPIRV_CompositeExtractOp : SPIRV_Op<"CompositeExtract",
// -----
def SPIRV_CompositeInsertOp : SPIRV_Op<"CompositeInsert",
- [NoSideEffect, UsableInSpecConstantOp]> {
+ [Pure, UsableInSpecConstantOp]> {
let summary = [{
Make a copy of a composite object, while modifying one part of it.
}];
@@ -176,7 +176,7 @@ def SPIRV_CompositeInsertOp : SPIRV_Op<"CompositeInsert",
// -----
def SPIRV_VectorExtractDynamicOp : SPIRV_Op<"VectorExtractDynamic", [
- NoSideEffect,
+ Pure,
TypesMatchWith<"type of 'result' matches element type of 'vector'",
"vector", "result",
"$_self.cast<mlir::VectorType>().getElementType()">]> {
@@ -224,7 +224,7 @@ def SPIRV_VectorExtractDynamicOp : SPIRV_Op<"VectorExtractDynamic", [
// -----
def SPIRV_VectorInsertDynamicOp : SPIRV_Op<"VectorInsertDynamic", [
- NoSideEffect,
+ Pure,
TypesMatchWith<
"type of 'component' matches element type of 'vector'",
"vector", "component",
@@ -288,7 +288,7 @@ def SPIRV_VectorInsertDynamicOp : SPIRV_Op<"VectorInsertDynamic", [
// -----
def SPIRV_VectorShuffleOp : SPIRV_Op<"VectorShuffle", [
- NoSideEffect, AllElementTypesMatch<["vector1", "vector2", "result"]>]> {
+ Pure, AllElementTypesMatch<["vector1", "vector2", "result"]>]> {
let summary = [{
Select arbitrary components from two vectors to make a new vector.
}];
diff --git a/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVControlFlowOps.td b/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVControlFlowOps.td
index 24dbf56de2a06..21a83bf290397 100644
--- a/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVControlFlowOps.td
+++ b/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVControlFlowOps.td
@@ -22,7 +22,7 @@ include "mlir/Interfaces/SideEffectInterfaces.td"
// -----
def SPIRV_BranchOp : SPIRV_Op<"Branch", [
- DeclareOpInterfaceMethods<BranchOpInterface>, InFunctionScope, NoSideEffect,
+ DeclareOpInterfaceMethods<BranchOpInterface>, InFunctionScope, Pure,
Terminator]> {
let summary = "Unconditional branch to target block.";
@@ -79,7 +79,7 @@ def SPIRV_BranchOp : SPIRV_Op<"Branch", [
def SPIRV_BranchConditionalOp : SPIRV_Op<"BranchConditional", [
AttrSizedOperandSegments, DeclareOpInterfaceMethods<BranchOpInterface>,
- InFunctionScope, NoSideEffect, Terminator]> {
+ InFunctionScope, Pure, Terminator]> {
let summary = [{
If Condition is true, branch to true block, otherwise branch to false
block.
@@ -313,7 +313,7 @@ def SPIRV_LoopOp : SPIRV_Op<"mlir.loop", [InFunctionScope]> {
// -----
-def SPIRV_MergeOp : SPIRV_Op<"mlir.merge", [NoSideEffect, Terminator]> {
+def SPIRV_MergeOp : SPIRV_Op<"mlir.merge", [Pure, Terminator]> {
let summary = "A special terminator for merging a structured selection/loop.";
let description = [{
@@ -337,7 +337,7 @@ def SPIRV_MergeOp : SPIRV_Op<"mlir.merge", [NoSideEffect, Terminator]> {
// -----
-def SPIRV_ReturnOp : SPIRV_Op<"Return", [InFunctionScope, NoSideEffect,
+def SPIRV_ReturnOp : SPIRV_Op<"Return", [InFunctionScope, Pure,
Terminator]> {
let summary = "Return with no value from a function with void return type.";
@@ -382,7 +382,7 @@ def SPIRV_UnreachableOp : SPIRV_Op<"Unreachable", [InFunctionScope, Terminator]>
// -----
-def SPIRV_ReturnValueOp : SPIRV_Op<"ReturnValue", [InFunctionScope, NoSideEffect,
+def SPIRV_ReturnValueOp : SPIRV_Op<"ReturnValue", [InFunctionScope, Pure,
Terminator]> {
let summary = "Return a value from a function.";
diff --git a/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVCooperativeMatrixOps.td b/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVCooperativeMatrixOps.td
index 9b39a4b1f7eef..22ee3fb301a2d 100644
--- a/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVCooperativeMatrixOps.td
+++ b/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVCooperativeMatrixOps.td
@@ -16,7 +16,7 @@
// -----
def SPIRV_NVCooperativeMatrixLengthOp : SPIRV_NvVendorOp<"CooperativeMatrixLength",
- [NoSideEffect]> {
+ [Pure]> {
let summary = "See extension SPV_NV_cooperative_matrix";
let description = [{
@@ -137,7 +137,7 @@ def SPIRV_NVCooperativeMatrixLoadOp : SPIRV_NvVendorOp<"CooperativeMatrixLoad",
// -----
def SPIRV_NVCooperativeMatrixMulAddOp : SPIRV_NvVendorOp<"CooperativeMatrixMulAdd",
- [NoSideEffect, AllTypesMatch<["c", "result"]>]> {
+ [Pure, AllTypesMatch<["c", "result"]>]> {
let summary = "See extension SPV_NV_cooperative_matrix";
let description = [{
diff --git a/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVGLOps.td b/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVGLOps.td
index c59717ee3311e..3dd5219592b87 100644
--- a/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVGLOps.td
+++ b/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVGLOps.td
@@ -35,7 +35,7 @@ class SPIRV_GLOp<string mnemonic, int opcode, list<Trait> traits = []> :
// Base class for GL unary ops.
class SPIRV_GLUnaryOp<string mnemonic, int opcode, Type resultType,
Type operandType, list<Trait> traits = []> :
- SPIRV_GLOp<mnemonic, opcode, !listconcat([NoSideEffect], traits)> {
+ SPIRV_GLOp<mnemonic, opcode, !listconcat([Pure], traits)> {
let arguments = (ins
SPIRV_ScalarOrVectorOf<operandType>:$operand
@@ -60,7 +60,7 @@ class SPIRV_GLUnaryArithmeticOp<string mnemonic, int opcode, Type type,
// Base class for GL binary ops.
class SPIRV_GLBinaryOp<string mnemonic, int opcode, Type resultType,
Type operandType, list<Trait> traits = []> :
- SPIRV_GLOp<mnemonic, opcode, !listconcat([NoSideEffect], traits)> {
+ SPIRV_GLOp<mnemonic, opcode, !listconcat([Pure], traits)> {
let arguments = (ins
SPIRV_ScalarOrVectorOf<operandType>:$lhs,
@@ -86,7 +86,7 @@ class SPIRV_GLBinaryArithmeticOp<string mnemonic, int opcode, Type type,
// Base class for GL ternary ops.
class SPIRV_GLTernaryArithmeticOp<string mnemonic, int opcode, Type type,
list<Trait> traits = []> :
- SPIRV_GLOp<mnemonic, opcode, !listconcat([NoSideEffect], traits)> {
+ SPIRV_GLOp<mnemonic, opcode, !listconcat([Pure], traits)> {
let arguments = (ins
SPIRV_ScalarOrVectorOf<type>:$x,
@@ -1077,7 +1077,7 @@ def SPIRV_GLFmaOp : SPIRV_GLTernaryArithmeticOp<"Fma", 50, SPIRV_Float> {
// ----
-def SPIRV_GLFrexpStructOp : SPIRV_GLOp<"FrexpStruct", 52, [NoSideEffect]> {
+def SPIRV_GLFrexpStructOp : SPIRV_GLOp<"FrexpStruct", 52, [Pure]> {
let summary = "Splits x into two components such that x = significand * 2^exponent";
let description = [{
@@ -1132,7 +1132,7 @@ def SPIRV_GLFrexpStructOp : SPIRV_GLOp<"FrexpStruct", 52, [NoSideEffect]> {
def SPIRV_GLLdexpOp :
SPIRV_GLOp<"Ldexp", 53, [
- NoSideEffect, AllTypesMatch<["x", "y"]>]> {
+ Pure, AllTypesMatch<["x", "y"]>]> {
let summary = "Builds y such that y = significand * 2^exponent";
let description = [{
@@ -1184,7 +1184,7 @@ def SPIRV_GLLdexpOp :
def SPIRV_GLFMixOp :
SPIRV_GLOp<"FMix", 46, [
- NoSideEffect, AllTypesMatch<["x", "y", "a", "result"]>]> {
+ Pure, AllTypesMatch<["x", "y", "a", "result"]>]> {
let summary = "Builds the linear blend of x and y";
let description = [{
diff --git a/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVGroupOps.td b/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVGroupOps.td
index 1382ceb4cac88..8c43107a0dc94 100644
--- a/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVGroupOps.td
+++ b/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVGroupOps.td
@@ -18,7 +18,7 @@
// -----
def SPIRV_GroupBroadcastOp : SPIRV_Op<"GroupBroadcast",
- [NoSideEffect,
+ [Pure,
AllTypesMatch<["value", "result"]>]> {
let summary = [{
Broadcast the Value of the invocation identified by the local id LocalId
diff --git a/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVImageOps.td b/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVImageOps.td
index d2fa9b450a698..5d7338d7813bf 100644
--- a/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVImageOps.td
+++ b/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVImageOps.td
@@ -19,7 +19,7 @@ include "mlir/Interfaces/SideEffectInterfaces.td"
// -----
-def SPIRV_ImageDrefGatherOp : SPIRV_Op<"ImageDrefGather", [NoSideEffect]> {
+def SPIRV_ImageDrefGatherOp : SPIRV_Op<"ImageDrefGather", [Pure]> {
let summary = "Gathers the requested depth-comparison from four texels.";
let description = [{
@@ -86,7 +86,7 @@ def SPIRV_ImageDrefGatherOp : SPIRV_Op<"ImageDrefGather", [NoSideEffect]> {
// -----
-def SPIRV_ImageQuerySizeOp : SPIRV_Op<"ImageQuerySize", [NoSideEffect]> {
+def SPIRV_ImageQuerySizeOp : SPIRV_Op<"ImageQuerySize", [Pure]> {
let summary = "Query the dimensions of Image, with no level of detail.";
let description = [{
@@ -144,7 +144,7 @@ def SPIRV_ImageQuerySizeOp : SPIRV_Op<"ImageQuerySize", [NoSideEffect]> {
// -----
def SPIRV_ImageOp : SPIRV_Op<"Image",
- [NoSideEffect,
+ [Pure,
TypesMatchWith<"type of 'result' matches image type of 'sampledimage'",
"sampledimage", "result",
"$_self.cast<spirv::SampledImageType>().getImageType()">]> {
diff --git a/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVJointMatrixOps.td b/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVJointMatrixOps.td
index b4d5ac0f94ff6..50bf353c75b1e 100644
--- a/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVJointMatrixOps.td
+++ b/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVJointMatrixOps.td
@@ -16,7 +16,7 @@
// -----
def SPIRV_INTELJointMatrixWorkItemLengthOp : SPIRV_IntelVendorOp<"JointMatrixWorkItemLength",
- [NoSideEffect]> {
+ [Pure]> {
let summary = "See extension SPV_INTEL_joint_matrix";
let description = [{
@@ -120,7 +120,7 @@ def SPIRV_INTELJointMatrixLoadOp : SPIRV_IntelVendorOp<"JointMatrixLoad", []> {
// -----
def SPIRV_INTELJointMatrixMadOp : SPIRV_IntelVendorOp<"JointMatrixMad",
- [NoSideEffect, AllTypesMatch<["c", "result"]>]> {
+ [Pure, AllTypesMatch<["c", "result"]>]> {
let summary = "See extension SPV_INTEL_joint_matrix";
let description = [{
diff --git a/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVLogicalOps.td b/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVLogicalOps.td
index efad16206d8a8..0abe52396d8c1 100644
--- a/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVLogicalOps.td
+++ b/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVLogicalOps.td
@@ -22,7 +22,7 @@ class SPIRV_LogicalBinaryOp<string mnemonic, Type operandsType,
// Result type is SPIRV_Bool.
SPIRV_BinaryOp<mnemonic, SPIRV_Bool, operandsType,
!listconcat(traits, [
- NoSideEffect, SameTypeOperands,
+ Pure, SameTypeOperands,
SameOperandsAndResultShape,
TypesMatchWith<"type of result to correspond to the `i1` "
"equivalent of the operand",
@@ -42,7 +42,7 @@ class SPIRV_LogicalUnaryOp<string mnemonic, Type operandType,
// Result type is SPIRV_Bool.
SPIRV_UnaryOp<mnemonic, SPIRV_Bool, operandType,
!listconcat(traits, [
- NoSideEffect, SameTypeOperands, SameOperandsAndResultShape,
+ Pure, SameTypeOperands, SameOperandsAndResultShape,
TypesMatchWith<"type of result to correspond to the `i1` "
"equivalent of the operand",
"operand", "result",
@@ -952,7 +952,7 @@ def SPIRV_SLessThanEqualOp : SPIRV_LogicalBinaryOp<"SLessThanEqual",
// -----
def SPIRV_SelectOp : SPIRV_Op<"Select",
- [NoSideEffect,
+ [Pure,
AllTypesMatch<["true_value", "false_value", "result"]>,
UsableInSpecConstantOp]> {
let summary = [{
diff --git a/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVMatrixOps.td b/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVMatrixOps.td
index 988016b64a5a0..93ba8940061b6 100644
--- a/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVMatrixOps.td
+++ b/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVMatrixOps.td
@@ -16,7 +16,7 @@ include "mlir/Interfaces/SideEffectInterfaces.td"
// -----
-def SPIRV_MatrixTimesMatrixOp : SPIRV_Op<"MatrixTimesMatrix", [NoSideEffect]> {
+def SPIRV_MatrixTimesMatrixOp : SPIRV_Op<"MatrixTimesMatrix", [Pure]> {
let summary = "Linear-algebraic multiply of LeftMatrix X RightMatrix.";
let description = [{
@@ -70,7 +70,7 @@ def SPIRV_MatrixTimesMatrixOp : SPIRV_Op<"MatrixTimesMatrix", [NoSideEffect]> {
// -----
-def SPIRV_MatrixTimesScalarOp : SPIRV_Op<"MatrixTimesScalar", [NoSideEffect]> {
+def SPIRV_MatrixTimesScalarOp : SPIRV_Op<"MatrixTimesScalar", [Pure]> {
let summary = "Scale a floating-point matrix.";
let description = [{
@@ -132,7 +132,7 @@ def SPIRV_MatrixTimesScalarOp : SPIRV_Op<"MatrixTimesScalar", [NoSideEffect]> {
// -----
-def SPIRV_TransposeOp : SPIRV_Op<"Transpose", [NoSideEffect]> {
+def SPIRV_TransposeOp : SPIRV_Op<"Transpose", [Pure]> {
let summary = "Transpose a matrix.";
let description = [{
diff --git a/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVMemoryOps.td b/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVMemoryOps.td
index e34e36320fa5e..925891dc69873 100644
--- a/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVMemoryOps.td
+++ b/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVMemoryOps.td
@@ -18,7 +18,7 @@ include "mlir/Dialect/SPIRV/IR/SPIRVBase.td"
// -----
-def SPIRV_AccessChainOp : SPIRV_Op<"AccessChain", [NoSideEffect]> {
+def SPIRV_AccessChainOp : SPIRV_Op<"AccessChain", [Pure]> {
let summary = "Create a pointer into a composite object.";
let description = [{
@@ -132,7 +132,7 @@ def SPIRV_CopyMemoryOp : SPIRV_Op<"CopyMemory", []> {
// -----
-def SPIRV_InBoundsPtrAccessChainOp : SPIRV_Op<"InBoundsPtrAccessChain", [NoSideEffect]> {
+def SPIRV_InBoundsPtrAccessChainOp : SPIRV_Op<"InBoundsPtrAccessChain", [Pure]> {
let summary = [{
Has the same semantics as OpPtrAccessChain, with the addition that the
resulting pointer is known to point within the base object.
@@ -235,7 +235,7 @@ def SPIRV_LoadOp : SPIRV_Op<"Load", []> {
// -----
-def SPIRV_PtrAccessChainOp : SPIRV_Op<"PtrAccessChain", [NoSideEffect]> {
+def SPIRV_PtrAccessChainOp : SPIRV_Op<"PtrAccessChain", [Pure]> {
let summary = [{
Has the same semantics as OpAccessChain, with the addition of the
Element operand.
diff --git a/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVMiscOps.td b/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVMiscOps.td
index afda6a2837113..702ad88bf720d 100644
--- a/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVMiscOps.td
+++ b/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVMiscOps.td
@@ -56,7 +56,7 @@ def SPIRV_KHRAssumeTrueOp : SPIRV_KhrVendorOp<"AssumeTrue", []> {
// -----
-def SPIRV_UndefOp : SPIRV_Op<"Undef", [NoSideEffect]> {
+def SPIRV_UndefOp : SPIRV_Op<"Undef", [Pure]> {
let summary = "Make an intermediate object whose value is undefined.";
let description = [{
diff --git a/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVNonUniformOps.td b/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVNonUniformOps.td
index 7e5d7ca3af267..8e900bef63ea1 100644
--- a/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVNonUniformOps.td
+++ b/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVNonUniformOps.td
@@ -92,7 +92,7 @@ def SPIRV_GroupNonUniformBallotOp : SPIRV_Op<"GroupNonUniformBallot", []> {
// -----
def SPIRV_GroupNonUniformBroadcastOp : SPIRV_Op<"GroupNonUniformBroadcast",
- [NoSideEffect, AllTypesMatch<["value", "result"]>]> {
+ [Pure, AllTypesMatch<["value", "result"]>]> {
let summary = [{
Result is the Value of the invocation identified by the id Id to all
active invocations in the group.
@@ -667,7 +667,7 @@ def SPIRV_GroupNonUniformSMinOp : SPIRV_GroupNonUniformArithmeticOp<"GroupNonUni
// -----
def SPIRV_GroupNonUniformShuffleOp : SPIRV_Op<"GroupNonUniformShuffle",
- [NoSideEffect, AllTypesMatch<["value", "result"]>]> {
+ [Pure, AllTypesMatch<["value", "result"]>]> {
let summary = [{
Result is the Value of the invocation identified by the id Id.
}];
@@ -719,7 +719,7 @@ def SPIRV_GroupNonUniformShuffleOp : SPIRV_Op<"GroupNonUniformShuffle",
// -----
def SPIRV_GroupNonUniformShuffleDownOp : SPIRV_Op<"GroupNonUniformShuffleDown",
- [NoSideEffect, AllTypesMatch<["value", "result"]>]> {
+ [Pure, AllTypesMatch<["value", "result"]>]> {
let summary = [{
Result is the Value of the invocation identified by the current
invocation’s id within the group + Delta.
@@ -774,7 +774,7 @@ def SPIRV_GroupNonUniformShuffleDownOp : SPIRV_Op<"GroupNonUniformShuffleDown",
// -----
def SPIRV_GroupNonUniformShuffleUpOp : SPIRV_Op<"GroupNonUniformShuffleUp",
- [NoSideEffect, AllTypesMatch<["value", "result"]>]> {
+ [Pure, AllTypesMatch<["value", "result"]>]> {
let summary = [{
Result is the Value of the invocation identified by the current
invocation’s id within the group - Delta.
@@ -828,7 +828,7 @@ def SPIRV_GroupNonUniformShuffleUpOp : SPIRV_Op<"GroupNonUniformShuffleUp",
// -----
def SPIRV_GroupNonUniformShuffleXorOp : SPIRV_Op<"GroupNonUniformShuffleXor",
- [NoSideEffect, AllTypesMatch<["value", "result"]>]> {
+ [Pure, AllTypesMatch<["value", "result"]>]> {
let summary = [{
Result is the Value of the invocation identified by the current
invocation’s id within the group xor’ed with Mask.
diff --git a/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVStructureOps.td b/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVStructureOps.td
index d22759e3e0d6c..1c0c3745665f0 100644
--- a/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVStructureOps.td
+++ b/mlir/include/mlir/Dialect/SPIRV/IR/SPIRVStructureOps.td
@@ -27,7 +27,7 @@ include "mlir/Interfaces/SideEffectInterfaces.td"
def SPIRV_AddressOfOp : SPIRV_Op<"mlir.addressof",
[DeclareOpInterfaceMethods<OpAsmOpInterface, ["getAsmResultNames"]>,
- InFunctionScope, NoSideEffect]> {
+ InFunctionScope, Pure]> {
let summary = "Get the address of a global variable.";
let description = [{
@@ -75,7 +75,7 @@ def SPIRV_AddressOfOp : SPIRV_Op<"mlir.addressof",
def SPIRV_ConstantOp : SPIRV_Op<"Constant",
[ConstantLike,
DeclareOpInterfaceMethods<OpAsmOpInterface, ["getAsmResultNames"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = [{
Declare a new integer-type or floating-point-type scalar constant.
}];
@@ -520,7 +520,7 @@ def SPIRV_ModuleOp : SPIRV_Op<"module",
// -----
-def SPIRV_ReferenceOfOp : SPIRV_Op<"mlir.referenceof", [NoSideEffect]> {
+def SPIRV_ReferenceOfOp : SPIRV_Op<"mlir.referenceof", [Pure]> {
let summary = "Reference a specialization constant.";
let description = [{
@@ -669,7 +669,7 @@ def SPIRV_SpecConstantCompositeOp : SPIRV_Op<"SpecConstantComposite", [
// -----
def SPIRV_SpecConstantOperationOp : SPIRV_Op<"SpecConstantOperation", [
- NoSideEffect, InFunctionScope,
+ Pure, InFunctionScope,
SingleBlockImplicitTerminator<"YieldOp">]> {
let summary = [{
Declare a new specialization constant that results from doing an operation.
@@ -760,7 +760,7 @@ def SPIRV_SpecConstantOperationOp : SPIRV_Op<"SpecConstantOperation", [
// -----
def SPIRV_YieldOp : SPIRV_Op<"mlir.yield", [
- HasParent<"SpecConstantOperationOp">, NoSideEffect, Terminator]> {
+ HasParent<"SpecConstantOperationOp">, Pure, Terminator]> {
let summary = [{
Yields the result computed in `spirv.SpecConstantOperation`'s
region back to the parent op.
diff --git a/mlir/include/mlir/Dialect/Shape/IR/ShapeOps.td b/mlir/include/mlir/Dialect/Shape/IR/ShapeOps.td
index 8503b9d5633d6..6c0e7841d306c 100644
--- a/mlir/include/mlir/Dialect/Shape/IR/ShapeOps.td
+++ b/mlir/include/mlir/Dialect/Shape/IR/ShapeOps.td
@@ -32,7 +32,7 @@ class Shape_Op<string mnemonic, list<Trait> traits = []> :
Op<ShapeDialect, mnemonic, traits>;
def Shape_AddOp : Shape_Op<"add",
- [Commutative, NoSideEffect,
+ [Commutative, Pure,
DeclareOpInterfaceMethods<InferTypeOpInterface>]> {
let summary = "Addition of sizes and indices";
let description = [{
@@ -61,7 +61,7 @@ def Shape_AddOp : Shape_Op<"add",
let hasVerifier = 1;
}
-def Shape_BroadcastOp : Shape_Op<"broadcast", [Commutative, NoSideEffect]> {
+def Shape_BroadcastOp : Shape_Op<"broadcast", [Commutative, Pure]> {
let summary = "Returns the broadcasted output shape of two or more inputs";
let description = [{
Returns the broadcasted shape for input shapes or extent tensors. The rest
@@ -108,7 +108,7 @@ def Shape_BroadcastOp : Shape_Op<"broadcast", [Commutative, NoSideEffect]> {
}
def Shape_ConstShapeOp : Shape_Op<"const_shape",
- [ConstantLike, NoSideEffect, DeclareOpInterfaceMethods<InferTypeOpInterface>]> {
+ [ConstantLike, Pure, DeclareOpInterfaceMethods<InferTypeOpInterface>]> {
let summary = "Creates a constant shape or extent tensor";
let description = [{
Creates a constant shape or extent tensor. The individual extents are given
@@ -136,7 +136,7 @@ def Shape_ConstShapeOp : Shape_Op<"const_shape",
def Shape_ConstSizeOp : Shape_Op<"const_size", [
ConstantLike,
- NoSideEffect,
+ Pure,
DeclareOpInterfaceMethods<OpAsmOpInterface, ["getAsmResultNames"]>
]> {
let summary = "Creates a constant of type `shape.size`";
@@ -157,7 +157,7 @@ def Shape_ConstSizeOp : Shape_Op<"const_size", [
let hasFolder = 1;
}
-def Shape_DivOp : Shape_Op<"div", [NoSideEffect,
+def Shape_DivOp : Shape_Op<"div", [Pure,
DeclareOpInterfaceMethods<InferTypeOpInterface>]> {
let summary = "Division of sizes and indices";
let description = [{
@@ -194,7 +194,7 @@ def Shape_DivOp : Shape_Op<"div", [NoSideEffect,
}];
}
-def Shape_ShapeEqOp : Shape_Op<"shape_eq", [NoSideEffect, Commutative]> {
+def Shape_ShapeEqOp : Shape_Op<"shape_eq", [Pure, Commutative]> {
let summary = "Returns whether the input shapes or extent tensors are equal";
let description = [{
Takes one or more shape or extent tensor operands and determines whether
@@ -217,7 +217,7 @@ def Shape_ShapeEqOp : Shape_Op<"shape_eq", [NoSideEffect, Commutative]> {
let hasFolder = 1;
}
-def Shape_FromExtentsOp : Shape_Op<"from_extents", [NoSideEffect]> {
+def Shape_FromExtentsOp : Shape_Op<"from_extents", [Pure]> {
let summary = "Creates a shape from extents";
let description = [{
Creates a shape from multiple SSA values representing the extents of
@@ -238,7 +238,7 @@ def Shape_FromExtentsOp : Shape_Op<"from_extents", [NoSideEffect]> {
let hasFolder = 1;
}
-def Shape_FromExtentTensorOp : Shape_Op<"from_extent_tensor", [NoSideEffect]> {
+def Shape_FromExtentTensorOp : Shape_Op<"from_extent_tensor", [Pure]> {
let summary = "Creates a shape from a tensor of extents";
let description = [{
Creates a shape from a 1D integral tensor of extents. The rank of the
@@ -286,7 +286,7 @@ def Shape_IsBroadcastableOp : Shape_Op<"is_broadcastable", [Commutative]> {
}
def Shape_RankOp : Shape_Op<"rank",
- [NoSideEffect, DeclareOpInterfaceMethods<InferTypeOpInterface>]> {
+ [Pure, DeclareOpInterfaceMethods<InferTypeOpInterface>]> {
let summary = "Gets the rank of a shape";
let description = [{
Returns the rank of the shape or extent tensor, i.e. the number of extents.
@@ -309,7 +309,7 @@ def Shape_RankOp : Shape_Op<"rank",
}
def Shape_ToExtentTensorOp : Shape_Op<"to_extent_tensor", [
- DeclareOpInterfaceMethods<CastOpInterface>, NoSideEffect
+ DeclareOpInterfaceMethods<CastOpInterface>, Pure
]> {
let summary = "Creates a dimension tensor from a shape";
let description = [{
@@ -329,7 +329,7 @@ def Shape_ToExtentTensorOp : Shape_Op<"to_extent_tensor", [
}
def Shape_DimOp : Shape_Op<"dim",
- [NoSideEffect, DeclareOpInterfaceMethods<InferTypeOpInterface>]> {
+ [Pure, DeclareOpInterfaceMethods<InferTypeOpInterface>]> {
let summary = "Gets the specified extent from the shape of a shaped input";
let description = [{
Gets the extent indexed by `dim` from the shape of the `value` operand. If
@@ -364,7 +364,7 @@ def Shape_DimOp : Shape_Op<"dim",
}
def Shape_GetExtentOp : Shape_Op<"get_extent",
- [NoSideEffect, DeclareOpInterfaceMethods<InferTypeOpInterface>]> {
+ [Pure, DeclareOpInterfaceMethods<InferTypeOpInterface>]> {
let summary = "Gets the specified extent from a shape or extent tensor";
let description = [{
Gets the extent indexed by `dim` from the `shape` operand. If the shape is
@@ -393,7 +393,7 @@ def Shape_GetExtentOp : Shape_Op<"get_extent",
let hasVerifier = 1;
}
-def Shape_IndexToSizeOp : Shape_Op<"index_to_size", [NoSideEffect]> {
+def Shape_IndexToSizeOp : Shape_Op<"index_to_size", [Pure]> {
let summary = "Converts a standard index to a shape size";
let description = [{
Converts a standard index to a `shape.size`. This operation and its
@@ -413,7 +413,7 @@ def Shape_IndexToSizeOp : Shape_Op<"index_to_size", [NoSideEffect]> {
}
def Shape_MaxOp : Shape_Op<"max",
- [Commutative, NoSideEffect,
+ [Commutative, Pure,
DeclareOpInterfaceMethods<InferTypeOpInterface>]> {
let summary = "Elementwise maximum";
let description = [{
@@ -487,7 +487,7 @@ def Shape_MeetOp : Shape_Op<"meet",
}
def Shape_MinOp : Shape_Op<"min",
- [Commutative, NoSideEffect,
+ [Commutative, Pure,
DeclareOpInterfaceMethods<InferTypeOpInterface>]> {
let summary = "Elementwise minimum";
let description = [{
@@ -514,7 +514,7 @@ def Shape_MinOp : Shape_Op<"min",
}
def Shape_MulOp : Shape_Op<"mul",
- [Commutative, NoSideEffect,
+ [Commutative, Pure,
DeclareOpInterfaceMethods<InferTypeOpInterface>]> {
let summary = "Multiplication of sizes and indices";
let description = [{
@@ -544,7 +544,7 @@ def Shape_MulOp : Shape_Op<"mul",
}
def Shape_NumElementsOp : Shape_Op<"num_elements",
- [NoSideEffect, DeclareOpInterfaceMethods<InferTypeOpInterface>]> {
+ [Pure, DeclareOpInterfaceMethods<InferTypeOpInterface>]> {
let summary = "Returns the number of elements for a given shape";
let description = [{
Returns the number of elements for a given shape which is the product of its
@@ -615,7 +615,7 @@ def Shape_ReduceOp : Shape_Op<"reduce",
}
def Shape_ShapeOfOp : Shape_Op<"shape_of",
- [NoSideEffect, DeclareOpInterfaceMethods<InferTypeOpInterface>]> {
+ [Pure, DeclareOpInterfaceMethods<InferTypeOpInterface>]> {
let summary = "Returns shape of a value or shaped type operand";
let description = [{
@@ -639,7 +639,7 @@ def Shape_ShapeOfOp : Shape_Op<"shape_of",
}];
}
-def Shape_ValueOfOp : Shape_Op<"value_of", [NoSideEffect]> {
+def Shape_ValueOfOp : Shape_Op<"value_of", [Pure]> {
let summary = "Returns value of a !shape.value_shape operand";
let description = [{
@@ -655,7 +655,7 @@ def Shape_ValueOfOp : Shape_Op<"value_of", [NoSideEffect]> {
}
def Shape_SizeToIndexOp : Shape_Op<"size_to_index", [
- DeclareOpInterfaceMethods<CastOpInterface>, NoSideEffect
+ DeclareOpInterfaceMethods<CastOpInterface>, Pure
]> {
let summary = "Casts between index types of the shape and standard dialect";
let description = [{
@@ -674,7 +674,7 @@ def Shape_SizeToIndexOp : Shape_Op<"size_to_index", [
let hasCanonicalizer = 1;
}
-def Shape_ValueAsShapeOp : Shape_Op<"value_as_shape", [NoSideEffect]> {
+def Shape_ValueAsShapeOp : Shape_Op<"value_as_shape", [Pure]> {
let summary = "Returns value as a shape";
let description = [{
@@ -699,7 +699,7 @@ def Shape_ValueAsShapeOp : Shape_Op<"value_as_shape", [NoSideEffect]> {
let assemblyFormat = "$arg attr-dict `:` type($arg) `->` type($result)";
}
-def Shape_WithOp : Shape_Op<"with_shape", [NoSideEffect]> {
+def Shape_WithOp : Shape_Op<"with_shape", [Pure]> {
let summary = "Returns ValueShape with given shape";
let description = [{
Returns ValueShape with the shape updated to match the shape operand. That
@@ -744,7 +744,7 @@ def Shape_WithOp : Shape_Op<"with_shape", [NoSideEffect]> {
def Shape_YieldOp : Shape_Op<"yield",
[HasParent<"ReduceOp, FunctionLibraryOp">,
- NoSideEffect,
+ Pure,
ReturnLike,
Terminator]> {
let summary = "Returns the value to parent op";
@@ -774,7 +774,7 @@ def Shape_DebugPrintOp : Shape_Op<"debug_print", []> {
let results = (outs Shape_ShapeOrSizeType:$output);
}
-def Shape_SplitAtOp : Shape_Op<"split_at", [NoSideEffect]> {
+def Shape_SplitAtOp : Shape_Op<"split_at", [Pure]> {
let summary = "Splits a shape at a given index";
let description = [{
Splits a shape at a given dimension `index`, returning two shapes.
@@ -806,7 +806,7 @@ def Shape_SplitAtOp : Shape_Op<"split_at", [NoSideEffect]> {
let hasFolder = 1;
}
-def Shape_ConcatOp : Shape_Op<"concat", [NoSideEffect]> {
+def Shape_ConcatOp : Shape_Op<"concat", [Pure]> {
let summary = "Concatenates two shapes";
let description = [{
Creates a shape whose dimensions consist of first the dimensions from `lhs`
@@ -834,7 +834,7 @@ def Shape_ConcatOp : Shape_Op<"concat", [NoSideEffect]> {
// TODO: Move the code below and witnesses to a different file.
def Shape_AnyOp : Shape_Op<"any", [Commutative,
- NoSideEffect]> {
+ Pure]> {
let summary = "Return any combination of the input shapes";
let description = [{
This operation takes multiple input shapes or extent tensors and returns
@@ -859,7 +859,7 @@ def Shape_AnyOp : Shape_Op<"any", [Commutative,
let hasFolder = 1;
}
-def Shape_AssumingAllOp : Shape_Op<"assuming_all", [Commutative, NoSideEffect]> {
+def Shape_AssumingAllOp : Shape_Op<"assuming_all", [Commutative, Pure]> {
let summary = "Return a logical AND of all witnesses";
let description = [{
Used to simplify constraints as any single failing precondition is enough
@@ -893,7 +893,7 @@ def Shape_AssumingAllOp : Shape_Op<"assuming_all", [Commutative, NoSideEffect]>
def Shape_AssumingOp : Shape_Op<"assuming", [
SingleBlockImplicitTerminator<"AssumingYieldOp">,
DeclareOpInterfaceMethods<RegionBranchOpInterface>,
- RecursiveSideEffects]> {
+ RecursiveMemoryEffects]> {
let summary = "Execute the region";
let description = [{
Executes the region assuming all witnesses are true.
@@ -925,7 +925,7 @@ def Shape_AssumingOp : Shape_Op<"assuming", [
}
def Shape_AssumingYieldOp : Shape_Op<"assuming_yield",
- [NoSideEffect, ReturnLike, Terminator, HasParent<"AssumingOp">]> {
+ [Pure, ReturnLike, Terminator, HasParent<"AssumingOp">]> {
let summary = "Yield operation";
let description = [{
This yield operation represents a return operation within the
@@ -996,7 +996,7 @@ def Shape_CstrEqOp : Shape_Op<"cstr_eq", [Commutative]> {
let hasFolder = 1;
}
-def Shape_ConstWitnessOp : Shape_Op<"const_witness", [ConstantLike, NoSideEffect]> {
+def Shape_ConstWitnessOp : Shape_Op<"const_witness", [ConstantLike, Pure]> {
let summary = "An operation that returns a statically known witness value";
let description = [{
This operation represents a statically known witness result. This can be
@@ -1165,7 +1165,7 @@ def Shape_FuncOp : Shape_Op<"func",
}
def Shape_ReturnOp : Shape_Op<"return",
- [NoSideEffect, HasParent<"FuncOp">, ReturnLike, Terminator]> {
+ [Pure, HasParent<"FuncOp">, ReturnLike, Terminator]> {
let summary = "Shape function return operation";
let description = [{
The `shape.return` operation represents a return operation within a function.
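
The hunks above apply one mechanical substitution across the Shape dialect, and an out-of-tree dialect would migrate the same way. A minimal sketch of the before/after (the op name and dialect are hypothetical, for illustration only):

```tablegen
// Before: NoSideEffect implied both "no memory effects" and "speculatable".
def MyDialect_AddOp : MyDialect_Op<"add", [NoSideEffect]>;

// After: use `Pure` (= NoMemoryEffect + always speculatable) only when the
// op has no memory effects, no undefined behavior, and no infinite loops;
// otherwise use `NoMemoryEffect` alone.
def MyDialect_AddOp : MyDialect_Op<"add", [Pure]>;
```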
diff --git a/mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorOps.td b/mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorOps.td
index 549f6c83441a5..fef73ada1743f 100644
--- a/mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorOps.td
+++ b/mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorOps.td
@@ -25,7 +25,7 @@ class SparseTensor_Op<string mnemonic, list<Trait> traits = []>
// Sparse Tensor Operations.
//===----------------------------------------------------------------------===//
-def SparseTensor_NewOp : SparseTensor_Op<"new", [NoSideEffect]>,
+def SparseTensor_NewOp : SparseTensor_Op<"new", [Pure]>,
Arguments<(ins AnyType:$source)>,
Results<(outs AnySparseTensor:$result)> {
string summary = "Materializes a new sparse tensor from given source";
@@ -49,7 +49,7 @@ def SparseTensor_NewOp : SparseTensor_Op<"new", [NoSideEffect]>,
}
def SparseTensor_ConvertOp : SparseTensor_Op<"convert",
- [NoSideEffect, SameOperandsAndResultElementType]>,
+ [Pure, SameOperandsAndResultElementType]>,
Arguments<(ins AnyTensor:$source)>,
Results<(outs AnyTensor:$dest)> {
  string summary = "Converts between different tensor types";
@@ -90,7 +90,7 @@ def SparseTensor_ConvertOp : SparseTensor_Op<"convert",
let hasVerifier = 1;
}
-def SparseTensor_ToPointersOp : SparseTensor_Op<"pointers", [NoSideEffect]>,
+def SparseTensor_ToPointersOp : SparseTensor_Op<"pointers", [Pure]>,
Arguments<(ins AnySparseTensor:$tensor, IndexAttr:$dimension)>,
Results<(outs AnyStridedMemRefOfRank<1>:$result)> {
let summary = "Extracts pointers array at given dimension from a tensor";
@@ -114,7 +114,7 @@ def SparseTensor_ToPointersOp : SparseTensor_Op<"pointers", [NoSideEffect]>,
let hasVerifier = 1;
}
-def SparseTensor_ToIndicesOp : SparseTensor_Op<"indices", [NoSideEffect]>,
+def SparseTensor_ToIndicesOp : SparseTensor_Op<"indices", [Pure]>,
Arguments<(ins AnySparseTensor:$tensor, IndexAttr:$dimension)>,
Results<(outs AnyStridedMemRefOfRank<1>:$result)> {
let summary = "Extracts indices array at given dimension from a tensor";
@@ -138,7 +138,7 @@ def SparseTensor_ToIndicesOp : SparseTensor_Op<"indices", [NoSideEffect]>,
let hasVerifier = 1;
}
-def SparseTensor_ToValuesOp : SparseTensor_Op<"values", [NoSideEffect]>,
+def SparseTensor_ToValuesOp : SparseTensor_Op<"values", [Pure]>,
Arguments<(ins AnySparseTensor:$tensor)>,
Results<(outs AnyStridedMemRefOfRank<1>:$result)> {
let summary = "Extracts numerical values array from a tensor";
@@ -161,7 +161,7 @@ def SparseTensor_ToValuesOp : SparseTensor_Op<"values", [NoSideEffect]>,
let hasVerifier = 1;
}
-def SparseTensor_ConcatenateOp : SparseTensor_Op<"concatenate", [NoSideEffect]>,
+def SparseTensor_ConcatenateOp : SparseTensor_Op<"concatenate", [Pure]>,
Arguments<(ins Variadic<AnyRankedTensor>:$inputs, IndexAttr:$dimension)>,
Results<(outs AnyRankedTensor:$result)> {
@@ -467,7 +467,7 @@ def SparseTensor_SortOp : SparseTensor_Op<"sort", [AttrSizedOperandSegments]>,
// Sparse Tensor Syntax Operations.
//===----------------------------------------------------------------------===//
-def SparseTensor_BinaryOp : SparseTensor_Op<"binary", [NoSideEffect]>,
+def SparseTensor_BinaryOp : SparseTensor_Op<"binary", [Pure]>,
Arguments<(ins AnyType:$x, AnyType:$y, UnitAttr:$left_identity, UnitAttr:$right_identity)>,
Results<(outs AnyType:$output)> {
let summary = "Binary set operation utilized within linalg.generic";
@@ -581,7 +581,7 @@ def SparseTensor_BinaryOp : SparseTensor_Op<"binary", [NoSideEffect]>,
let hasVerifier = 1;
}
-def SparseTensor_UnaryOp : SparseTensor_Op<"unary", [NoSideEffect]>,
+def SparseTensor_UnaryOp : SparseTensor_Op<"unary", [Pure]>,
Arguments<(ins AnyType:$x)>,
Results<(outs AnyType:$output)> {
let summary = "Unary set operation utilized within linalg.generic";
@@ -659,7 +659,7 @@ def SparseTensor_UnaryOp : SparseTensor_Op<"unary", [NoSideEffect]>,
let hasVerifier = 1;
}
-def SparseTensor_ReduceOp : SparseTensor_Op<"reduce", [NoSideEffect, SameOperandsAndResultType]>,
+def SparseTensor_ReduceOp : SparseTensor_Op<"reduce", [Pure, SameOperandsAndResultType]>,
Arguments<(ins AnyType:$x, AnyType:$y, AnyType:$identity)>,
Results<(outs AnyType:$output)> {
let summary = "Custom reduction operation utilized within linalg.generic";
@@ -708,7 +708,7 @@ def SparseTensor_ReduceOp : SparseTensor_Op<"reduce", [NoSideEffect, SameOperand
let hasVerifier = 1;
}
-def SparseTensor_SelectOp : SparseTensor_Op<"select", [NoSideEffect, SameOperandsAndResultType]>,
+def SparseTensor_SelectOp : SparseTensor_Op<"select", [Pure, SameOperandsAndResultType]>,
Arguments<(ins AnyType:$x)>,
Results<(outs AnyType:$output)> {
let summary = "Select operation utilized within linalg.generic";
@@ -768,7 +768,7 @@ def SparseTensor_SelectOp : SparseTensor_Op<"select", [NoSideEffect, SameOperand
let hasVerifier = 1;
}
-def SparseTensor_YieldOp : SparseTensor_Op<"yield", [NoSideEffect, Terminator]>,
+def SparseTensor_YieldOp : SparseTensor_Op<"yield", [Pure, Terminator]>,
Arguments<(ins Optional<AnyType>:$result)> {
let summary = "Yield from sparse_tensor set-like operations";
let description = [{
diff --git a/mlir/include/mlir/Dialect/Tensor/IR/TensorOps.td b/mlir/include/mlir/Dialect/Tensor/IR/TensorOps.td
index 0a784e4a06a9c..bdc24fa0675e1 100644
--- a/mlir/include/mlir/Dialect/Tensor/IR/TensorOps.td
+++ b/mlir/include/mlir/Dialect/Tensor/IR/TensorOps.td
@@ -49,7 +49,7 @@ class Tensor_OpWithOffsetSizesAndStrides<string mnemonic,
def Tensor_CastOp : Tensor_Op<"cast", [
DeclareOpInterfaceMethods<CastOpInterface>,
DeclareOpInterfaceMethods<OpAsmOpInterface, ["getAsmResultNames"]>,
- NoSideEffect
+ Pure
]> {
let summary = "tensor cast operation";
let description = [{
@@ -87,7 +87,7 @@ def Tensor_CastOp : Tensor_Op<"cast", [
def Tensor_DimOp : Tensor_Op<"dim", [
DeclareOpInterfaceMethods<OpAsmOpInterface, ["getAsmResultNames"]>,
- NoSideEffect,
+ Pure,
ShapedDimOpInterface]> {
let summary = "dimension index operation";
let description = [{
@@ -147,7 +147,7 @@ def Tensor_DimOp : Tensor_Op<"dim", [
//===----------------------------------------------------------------------===//
def Tensor_EmptyOp : Tensor_Op<"empty",
- [NoSideEffect,
+ [Pure,
DeclareOpInterfaceMethods<ReifyRankedShapedTypeOpInterface>]> {
let summary = "empty tensor operation";
@@ -207,7 +207,7 @@ def Tensor_EmptyOp : Tensor_Op<"empty",
def Tensor_ExtractOp : Tensor_Op<"extract", [
DeclareOpInterfaceMethods<OpAsmOpInterface, ["getAsmResultNames"]>,
- NoSideEffect,
+ Pure,
TypesMatchWith<"result type matches element type of tensor",
"tensor", "result",
"$_self.cast<ShapedType>().getElementType()">]> {
@@ -252,7 +252,7 @@ def Tensor_ExtractSliceOp : Tensor_OpWithOffsetSizesAndStrides<"extract_slice",
DeclareOpInterfaceMethods<OpAsmOpInterface, ["getAsmResultNames"]>,
DeclareOpInterfaceMethods<ReifyRankedShapedTypeOpInterface>,
AttrSizedOperandSegments,
- NoSideEffect,
+ Pure,
OffsetSizeAndStrideOpInterface
]> {
let summary = "extract slice operation";
@@ -446,7 +446,7 @@ def Tensor_ExtractSliceOp : Tensor_OpWithOffsetSizesAndStrides<"extract_slice",
def Tensor_FromElementsOp : Tensor_Op<"from_elements", [
DeclareOpInterfaceMethods<OpAsmOpInterface, ["getAsmResultNames"]>,
- NoSideEffect,
+ Pure,
TypesMatchWith<"operand types match result element type",
"result", "elements", "SmallVector<Type, 2>("
"$_self.cast<ShapedType>().getNumElements(), "
@@ -492,7 +492,7 @@ def Tensor_FromElementsOp : Tensor_Op<"from_elements", [
def Tensor_GatherOp : Tensor_Op<"gather", [
DeclareOpInterfaceMethods<OpAsmOpInterface, ["getAsmResultNames"]>,
- NoSideEffect
+ Pure
]> {
let summary = "gather a subset of a tensor at specified indices";
let description = [{
@@ -631,7 +631,7 @@ def Tensor_GatherOp : Tensor_Op<"gather", [
def Tensor_GenerateOp : Tensor_Op<"generate", [
DeclareOpInterfaceMethods<OpAsmOpInterface, ["getAsmResultNames"]>,
- RecursiveSideEffects,
+ RecursiveMemoryEffects,
DeclareOpInterfaceMethods<ReifyRankedShapedTypeOpInterface>,
SingleBlockImplicitTerminator<"mlir::tensor::YieldOp">]> {
let summary = "Creates a dynamically sized tensor from elements";
@@ -678,7 +678,7 @@ def Tensor_GenerateOp : Tensor_Op<"generate", [
def Tensor_InsertOp : Tensor_Op<"insert", [
DeclareOpInterfaceMethods<OpAsmOpInterface, ["getAsmResultNames"]>,
- NoSideEffect,
+ Pure,
TypesMatchWith<"result type matches type of dest",
"dest", "result",
"$_self.cast<ShapedType>()">,
@@ -733,7 +733,7 @@ def Tensor_InsertSliceOp : Tensor_OpWithOffsetSizesAndStrides<"insert_slice", [
DeclareOpInterfaceMethods<OpAsmOpInterface, ["getAsmResultNames"]>,
DeclareOpInterfaceMethods<ReifyRankedShapedTypeOpInterface>,
AttrSizedOperandSegments,
- NoSideEffect,
+ Pure,
OffsetSizeAndStrideOpInterface,
TypesMatchWith<"expected result type to match dest type",
"dest", "result", "$_self">
@@ -872,7 +872,7 @@ def Tensor_InsertSliceOp : Tensor_OpWithOffsetSizesAndStrides<"insert_slice", [
def Tensor_RankOp : Tensor_Op<"rank", [
DeclareOpInterfaceMethods<OpAsmOpInterface, ["getAsmResultNames"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "rank operation";
let description = [{
The `tensor.rank` operation takes a tensor operand and returns its rank.
@@ -898,7 +898,7 @@ def Tensor_RankOp : Tensor_Op<"rank", [
def Tensor_ReshapeOp: Tensor_Op<"reshape", [
DeclareOpInterfaceMethods<OpAsmOpInterface, ["getAsmResultNames"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "tensor reshape operation";
let description = [{
The `reshape` operation converts a tensor from one type to an equivalent
@@ -963,7 +963,7 @@ def Tensor_ReshapeOp: Tensor_Op<"reshape", [
class Tensor_ReassociativeReshapeOp<string mnemonic, list<Trait> traits = []> :
Tensor_Op<mnemonic, !listconcat(traits, [
DeclareOpInterfaceMethods<OpAsmOpInterface, ["getAsmResultNames"]>,
- NoSideEffect])>,
+ Pure])>,
Arguments<(ins AnyTensor:$src, IndexListArrayAttr:$reassociation)>,
Results<(outs AnyTensor:$result)> {
@@ -1116,7 +1116,7 @@ def Tensor_CollapseShapeOp : Tensor_ReassociativeReshapeOp<"collapse_shape"> {
def Tensor_PadOp : Tensor_Op<"pad", [
DeclareOpInterfaceMethods<OpAsmOpInterface, ["getAsmResultNames"]>,
AttrSizedOperandSegments,
- NoSideEffect,
+ Pure,
SingleBlockImplicitTerminator<"mlir::tensor::YieldOp">]> {
let summary = "tensor pad operation";
let description = [{
@@ -1338,7 +1338,7 @@ def Tensor_ParallelInsertSliceOp : Tensor_Op<"parallel_insert_slice", [
This op does not create a new value, which allows maintaining a clean
separation between the subset and full tensor.
- Note that we cannot mark this operation as pure (NoSideEffects), even
+ Note that we cannot mark this operation as pure (Pure), even
though it has no side effects, because it will get DCEd during
canonicalization.
@@ -1459,7 +1459,7 @@ def Tensor_ParallelInsertSliceOp : Tensor_Op<"parallel_insert_slice", [
def Tensor_ScatterOp : Tensor_Op<"scatter", [
DeclareOpInterfaceMethods<OpAsmOpInterface, ["getAsmResultNames"]>,
- NoSideEffect
+ Pure
]> {
let summary =
"scatter a tensor into a destination tensor at specified indices";
@@ -1600,7 +1600,7 @@ def Tensor_ScatterOp : Tensor_Op<"scatter", [
def Tensor_SplatOp : Tensor_Op<"splat", [
DeclareOpInterfaceMethods<OpAsmOpInterface, ["getAsmResultNames"]>,
- NoSideEffect,
+ Pure,
TypesMatchWith<"operand type matches element type of result",
"aggregate", "input",
"$_self.cast<TensorType>().getElementType()">
@@ -1647,7 +1647,7 @@ def Tensor_SplatOp : Tensor_Op<"splat", [
//===----------------------------------------------------------------------===//
def Tensor_YieldOp : Tensor_Op<"yield",
- [NoSideEffect, ReturnLike, Terminator,
+ [Pure, ReturnLike, Terminator,
HasParent<"::mlir::tensor::GenerateOp, ::mlir::tensor::PadOp">]> {
let summary = "Yield a value from a region";
let description = [{
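
Note that in the hunks above, region-holding ops such as `shape.assuming` and `tensor.generate` receive `RecursiveMemoryEffects` rather than `Pure`, since their effects are whatever the ops nested in their regions do. A hedged sketch of that pattern (the op itself is hypothetical; the trait spellings are the ones used in this patch):

```tablegen
// A region-holding op: its memory effects are derived recursively from its
// body, so it is tagged RecursiveMemoryEffects instead of Pure.
def MyDialect_ExecuteRegionOp : MyDialect_Op<"execute_region",
    [RecursiveMemoryEffects]> {
  let regions = (region SizedRegion<1>:$body);
}
```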
diff --git a/mlir/include/mlir/Dialect/Tosa/IR/TosaOps.td b/mlir/include/mlir/Dialect/Tosa/IR/TosaOps.td
index 30dc14bb13ac9..fc8b44e7cccde 100644
--- a/mlir/include/mlir/Dialect/Tosa/IR/TosaOps.td
+++ b/mlir/include/mlir/Dialect/Tosa/IR/TosaOps.td
@@ -35,7 +35,7 @@ include "mlir/Dialect/Tosa/IR/TosaOpBase.td"
def Tosa_ArgMaxOp : Tosa_Op<"argmax", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "Perform argmax on the input.";
let description = [{
@@ -59,7 +59,7 @@ def Tosa_ArgMaxOp : Tosa_Op<"argmax", [
def Tosa_AvgPool2dOp : Tosa_Op<"avg_pool2d", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect]> {
+ Pure]> {
   let summary = "Performs average pooling on the input.";
let description = [{
@@ -91,7 +91,7 @@ def Tosa_AvgPool2dOp : Tosa_Op<"avg_pool2d", [
def Tosa_Conv2DOp : Tosa_Op<"conv2d", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "2D Convolution Operator";
let description = [{
@@ -124,7 +124,7 @@ def Tosa_Conv2DOp : Tosa_Op<"conv2d", [
def Tosa_Conv3DOp : Tosa_Op<"conv3d", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "3D Convolution operator";
let description = [{
@@ -156,7 +156,7 @@ def Tosa_Conv3DOp : Tosa_Op<"conv3d", [
def Tosa_DepthwiseConv2DOp : Tosa_Op<"depthwise_conv2d", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "Depthwise 2D Convolution operator";
let description = [{
@@ -189,7 +189,7 @@ def Tosa_DepthwiseConv2DOp : Tosa_Op<"depthwise_conv2d", [
def Tosa_FullyConnectedOp : Tosa_Op<"fully_connected", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "Fully Connected operator";
let description = [{
@@ -217,7 +217,7 @@ def Tosa_FullyConnectedOp : Tosa_Op<"fully_connected", [
def Tosa_MatMulOp : Tosa_Op<"matmul", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "Matrix multiplication with bias";
let description = [{
@@ -245,7 +245,7 @@ def Tosa_MatMulOp : Tosa_Op<"matmul", [
def Tosa_MaxPool2dOp : Tosa_Op<"max_pool2d", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "Performs max pooling on the input.";
let description = [{
@@ -276,7 +276,7 @@ def Tosa_MaxPool2dOp : Tosa_Op<"max_pool2d", [
def Tosa_TransposeConv2DOp : Tosa_Op<"transpose_conv2d", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "Transpose 2D Convolution operator.";
let description = [{
@@ -313,7 +313,7 @@ def Tosa_TransposeConv2DOp : Tosa_Op<"transpose_conv2d", [
def Tosa_ClampOp : Tosa_Op<"clamp", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "Computes clamp(features, min, max).";
let description = [{
@@ -345,7 +345,7 @@ def Tosa_ClampOp : Tosa_Op<"clamp", [
def Tosa_SigmoidOp : Tosa_Op<"sigmoid", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "Computes elementwise sigmoid of input.";
let description = [{
@@ -371,7 +371,7 @@ def Tosa_SigmoidOp : Tosa_Op<"sigmoid", [
def Tosa_TanhOp : Tosa_Op<"tanh", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "Computes elementwise hyperbolic tangent of input";
let description = [{
@@ -402,7 +402,7 @@ def Tosa_TanhOp : Tosa_Op<"tanh", [
def Tosa_AddOp : Tosa_Op<"add", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- ResultsBroadcastableShape, NoSideEffect, Commutative]> {
+ ResultsBroadcastableShape, Pure, Commutative]> {
let summary = "Elementwise addition operator";
let description = [{
@@ -429,7 +429,7 @@ def Tosa_AddOp : Tosa_Op<"add", [
def Tosa_ArithmeticRightShiftOp : Tosa_Op<"arithmetic_right_shift", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- ResultsBroadcastableShape, NoSideEffect]> {
+ ResultsBroadcastableShape, Pure]> {
let summary = "Elementwise Arithmetic Right Shift";
let description = [{
@@ -454,7 +454,7 @@ def Tosa_ArithmeticRightShiftOp : Tosa_Op<"arithmetic_right_shift", [
def Tosa_BitwiseAndOp : Tosa_Op<"bitwise_and", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- ResultsBroadcastableShape, NoSideEffect, Commutative]> {
+ ResultsBroadcastableShape, Pure, Commutative]> {
let summary = "Bitwise AND operator";
let description = [{
@@ -478,7 +478,7 @@ def Tosa_BitwiseAndOp : Tosa_Op<"bitwise_and", [
def Tosa_BitwiseOrOp : Tosa_Op<"bitwise_or", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- ResultsBroadcastableShape, NoSideEffect, Commutative]> {
+ ResultsBroadcastableShape, Pure, Commutative]> {
let summary = "Bitwise OR operator";
let description = [{
@@ -502,7 +502,7 @@ def Tosa_BitwiseOrOp : Tosa_Op<"bitwise_or", [
def Tosa_BitwiseXorOp : Tosa_Op<"bitwise_xor", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- ResultsBroadcastableShape, NoSideEffect, Commutative]> {
+ ResultsBroadcastableShape, Pure, Commutative]> {
let summary = "Bitwise XOR operator";
let description = [{
@@ -526,7 +526,7 @@ def Tosa_BitwiseXorOp : Tosa_Op<"bitwise_xor", [
def Tosa_DivOp : Tosa_Op<"div", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- ResultsBroadcastableShape, NoSideEffect]> {
+ ResultsBroadcastableShape, Pure]> {
let summary = "Integer divide operator";
let description = [{
@@ -552,7 +552,7 @@ def Tosa_DivOp : Tosa_Op<"div", [
def Tosa_LogicalAndOp : Tosa_Op<"logical_and", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- ResultsBroadcastableShape, Commutative, NoSideEffect]> {
+ ResultsBroadcastableShape, Commutative, Pure]> {
let summary = "Returns the truth value of x AND y element-wise.";
let description = [{
@@ -576,7 +576,7 @@ def Tosa_LogicalAndOp : Tosa_Op<"logical_and", [
def Tosa_LogicalLeftShiftOp : Tosa_Op<"logical_left_shift", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- ResultsBroadcastableShape, NoSideEffect]> {
+ ResultsBroadcastableShape, Pure]> {
let summary = "Elementwise Logical Left Shift";
let description = [{
@@ -600,7 +600,7 @@ def Tosa_LogicalLeftShiftOp : Tosa_Op<"logical_left_shift", [
def Tosa_LogicalRightShiftOp : Tosa_Op<"logical_right_shift", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- ResultsBroadcastableShape, NoSideEffect]> {
+ ResultsBroadcastableShape, Pure]> {
let summary = "Elementwise Logical Right Shift";
let description = [{
@@ -624,7 +624,7 @@ def Tosa_LogicalRightShiftOp : Tosa_Op<"logical_right_shift", [
def Tosa_LogicalOrOp : Tosa_Op<"logical_or", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- ResultsBroadcastableShape, Commutative, NoSideEffect]> {
+ ResultsBroadcastableShape, Commutative, Pure]> {
let summary = "Returns the truth value of x OR y element-wise.";
let description = [{
@@ -648,7 +648,7 @@ def Tosa_LogicalOrOp : Tosa_Op<"logical_or", [
def Tosa_LogicalXorOp : Tosa_Op<"logical_xor", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- ResultsBroadcastableShape, Commutative, NoSideEffect]> {
+ ResultsBroadcastableShape, Commutative, Pure]> {
let summary = "Returns the truth value of x XOR y element-wise.";
let description = [{
@@ -672,7 +672,7 @@ def Tosa_LogicalXorOp : Tosa_Op<"logical_xor", [
def Tosa_MaximumOp : Tosa_Op<"maximum", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- ResultsBroadcastableShape, NoSideEffect, Commutative]> {
+ ResultsBroadcastableShape, Pure, Commutative]> {
let summary = "Elementwise Maximum";
let description = [{
@@ -696,7 +696,7 @@ def Tosa_MaximumOp : Tosa_Op<"maximum", [
def Tosa_MinimumOp : Tosa_Op<"minimum", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- ResultsBroadcastableShape, NoSideEffect, Commutative]> {
+ ResultsBroadcastableShape, Pure, Commutative]> {
let summary = "Elementwise Minimum";
let description = [{
@@ -720,7 +720,7 @@ def Tosa_MinimumOp : Tosa_Op<"minimum", [
def Tosa_MulOp : Tosa_Op<"mul", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- ResultsBroadcastableShape, NoSideEffect, Commutative]> {
+ ResultsBroadcastableShape, Pure, Commutative]> {
let summary = "Multiplication operator";
let description = [{
@@ -748,7 +748,7 @@ def Tosa_MulOp : Tosa_Op<"mul", [
def Tosa_PowOp : Tosa_Op<"pow", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- ResultsBroadcastableShape, NoSideEffect]> {
+ ResultsBroadcastableShape, Pure]> {
let summary = "Computes the power of one value to another.";
let description = [{
@@ -772,7 +772,7 @@ def Tosa_PowOp : Tosa_Op<"pow", [
def Tosa_SubOp : Tosa_Op<"sub", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- ResultsBroadcastableShape, NoSideEffect]> {
+ ResultsBroadcastableShape, Pure]> {
let summary = "Elementwise subtraction operator";
let description = [{
@@ -798,7 +798,7 @@ def Tosa_SubOp : Tosa_Op<"sub", [
def Tosa_TableOp : Tosa_Op<"table", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "Table lookup op";
let description = [{
@@ -841,7 +841,7 @@ def Tosa_TableOp : Tosa_Op<"table", [
def Tosa_AbsOp : Tosa_Op<"abs", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "Elementwise abs op";
let description = [{
@@ -863,7 +863,7 @@ def Tosa_AbsOp : Tosa_Op<"abs", [
def Tosa_BitwiseNotOp : Tosa_Op<"bitwise_not", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- ResultsBroadcastableShape, NoSideEffect]> {
+ ResultsBroadcastableShape, Pure]> {
let summary = "Bitwise NOT operator";
let description = [{
@@ -885,7 +885,7 @@ def Tosa_BitwiseNotOp : Tosa_Op<"bitwise_not", [
def Tosa_CeilOp : Tosa_Op<"ceil", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "Elementwise ceil op";
let description = [{
@@ -907,7 +907,7 @@ def Tosa_CeilOp : Tosa_Op<"ceil", [
def Tosa_ClzOp : Tosa_Op<"clz", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "Elementwise count leading zero op";
let description = [{
@@ -929,7 +929,7 @@ def Tosa_ClzOp : Tosa_Op<"clz", [
def Tosa_ExpOp : Tosa_Op<"exp", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "Elementwise exp op";
let description = [{
@@ -951,7 +951,7 @@ def Tosa_ExpOp : Tosa_Op<"exp", [
def Tosa_FloorOp : Tosa_Op<"floor", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "Elementwise floor op";
let description = [{
@@ -973,7 +973,7 @@ def Tosa_FloorOp : Tosa_Op<"floor", [
def Tosa_LogOp : Tosa_Op<"log", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "Elementwise log op";
let description = [{
@@ -995,7 +995,7 @@ def Tosa_LogOp : Tosa_Op<"log", [
def Tosa_LogicalNotOp : Tosa_Op<"logical_not", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect, SameOperandsAndResultType]> {
+ Pure, SameOperandsAndResultType]> {
let summary = "Returns the truth value of NOT x element-wise.";
let description = [{
@@ -1017,7 +1017,7 @@ def Tosa_LogicalNotOp : Tosa_Op<"logical_not", [
def Tosa_NegateOp : Tosa_Op<"negate", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "Elementwise negate op";
let description = [{
@@ -1042,7 +1042,7 @@ def Tosa_NegateOp : Tosa_Op<"negate", [
def Tosa_ReciprocalOp : Tosa_Op<"reciprocal", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "Elementwise reciprocal op";
let description = [{
@@ -1065,7 +1065,7 @@ def Tosa_ReciprocalOp : Tosa_Op<"reciprocal", [
def Tosa_RsqrtOp : Tosa_Op<"rsqrt", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "Elementwise 1/sqrt op";
let description = [{
@@ -1093,7 +1093,7 @@ def Tosa_RsqrtOp : Tosa_Op<"rsqrt", [
//===----------------------------------------------------------------------===//
def Tosa_SelectOp : Tosa_Op<"select", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
- ["inferReturnTypeComponents"]>, NoSideEffect]> {
+ ["inferReturnTypeComponents"]>, Pure]> {
let summary = "Elementwise select operator";
let description = [{
@@ -1122,7 +1122,7 @@ def Tosa_SelectOp : Tosa_Op<"select", [
// Operator: equal
//===----------------------------------------------------------------------===//
def Tosa_EqualOp : Tosa_Op<"equal", [InferTensorType, ResultsBroadcastableShape,
- Commutative, NoSideEffect]> {
+ Commutative, Pure]> {
let summary = "Returns the truth value of (x == y) element-wise.";
let description = [{
@@ -1153,7 +1153,7 @@ def Tosa_EqualOp : Tosa_Op<"equal", [InferTensorType, ResultsBroadcastableShape,
def Tosa_GreaterOp : Tosa_Op<"greater", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- ResultsBroadcastableShape, NoSideEffect]> {
+ ResultsBroadcastableShape, Pure]> {
let summary = "Returns the truth value of (x > y) element-wise.";
let description = [{
@@ -1178,7 +1178,7 @@ def Tosa_GreaterOp : Tosa_Op<"greater", [
def Tosa_GreaterEqualOp : Tosa_Op<"greater_equal", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- ResultsBroadcastableShape, NoSideEffect]> {
+ ResultsBroadcastableShape, Pure]> {
let summary = "Returns the truth value of (x >= y) element-wise.";
let description = [{
@@ -1208,7 +1208,7 @@ def Tosa_GreaterEqualOp : Tosa_Op<"greater_equal", [
def Tosa_ReduceAllOp : Tosa_Op<"reduce_all", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "Reduce All operator";
let description = [{
@@ -1233,7 +1233,7 @@ def Tosa_ReduceAllOp : Tosa_Op<"reduce_all", [
def Tosa_ReduceAnyOp : Tosa_Op<"reduce_any", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "Reduce Any operator";
let description = [{
@@ -1258,7 +1258,7 @@ def Tosa_ReduceAnyOp : Tosa_Op<"reduce_any", [
def Tosa_ReduceMaxOp : Tosa_Op<"reduce_max", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "Reduce Max operator";
let description = [{
@@ -1283,7 +1283,7 @@ def Tosa_ReduceMaxOp : Tosa_Op<"reduce_max", [
def Tosa_ReduceMinOp : Tosa_Op<"reduce_min", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "Reduce Min operator";
let description = [{
@@ -1308,7 +1308,7 @@ def Tosa_ReduceMinOp : Tosa_Op<"reduce_min", [
def Tosa_ReduceProdOp : Tosa_Op<"reduce_prod", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "Reduce Prod operator";
let description = [{
@@ -1333,7 +1333,7 @@ def Tosa_ReduceProdOp : Tosa_Op<"reduce_prod", [
def Tosa_ReduceSumOp : Tosa_Op<"reduce_sum", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "Reduce Sum operator";
let description = [{
@@ -1363,7 +1363,7 @@ def Tosa_ReduceSumOp : Tosa_Op<"reduce_sum", [
def Tosa_ConcatOp : Tosa_Op<"concat", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "Concatenates tensors along one dimension.";
let description = [{
@@ -1389,7 +1389,7 @@ def Tosa_ConcatOp : Tosa_Op<"concat", [
def Tosa_PadOp : Tosa_Op<"pad", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "Pads a tensor with value specified.";
let description = [{
@@ -1420,7 +1420,7 @@ def Tosa_PadOp : Tosa_Op<"pad", [
def Tosa_ReshapeOp: Tosa_Op<"reshape", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "Reshape operator";
let description = [{
@@ -1447,7 +1447,7 @@ def Tosa_ReshapeOp: Tosa_Op<"reshape", [
//===----------------------------------------------------------------------===//
def Tosa_ReverseOp: Tosa_Op<"reverse", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
- ["inferReturnTypeComponents"]>, NoSideEffect]> {
+ ["inferReturnTypeComponents"]>, Pure]> {
let summary = "Reverse operator";
let description = [{
@@ -1473,7 +1473,7 @@ def Tosa_ReverseOp: Tosa_Op<"reverse", [
//===----------------------------------------------------------------------===//
def Tosa_SliceOp: Tosa_Op<"slice", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
- ["inferReturnTypeComponents"]>, NoSideEffect]> {
+ ["inferReturnTypeComponents"]>, Pure]> {
let summary = "Slice operator";
let description = [{
@@ -1501,7 +1501,7 @@ def Tosa_SliceOp: Tosa_Op<"slice", [
def Tosa_TileOp: Tosa_Op<"tile", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "Tile operator";
let description = [{
@@ -1525,7 +1525,7 @@ def Tosa_TileOp: Tosa_Op<"tile", [
def Tosa_TransposeOp : Tosa_Op<"transpose", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "Transpose operator";
let description = [{
@@ -1556,7 +1556,7 @@ def Tosa_TransposeOp : Tosa_Op<"transpose", [
def Tosa_GatherOp : Tosa_Op<"gather", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "Gather operation,";
let description = [{
@@ -1580,7 +1580,7 @@ def Tosa_GatherOp : Tosa_Op<"gather", [
def Tosa_ScatterOp : Tosa_Op<"scatter", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "Scatter operation,";
let description = [{
@@ -1610,7 +1610,7 @@ def Tosa_ScatterOp : Tosa_Op<"scatter", [
def Tosa_ResizeOp : Tosa_Op<"resize", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "Resize operation, supports various resize/upsample modes";
@@ -1646,7 +1646,7 @@ def Tosa_ResizeOp : Tosa_Op<"resize", [
//===----------------------------------------------------------------------===//
// Operator: cast
//===----------------------------------------------------------------------===//
-def Tosa_CastOp: Tosa_Op<"cast", [NoSideEffect,
+def Tosa_CastOp: Tosa_Op<"cast", [Pure,
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>]> {
@@ -1688,7 +1688,7 @@ def Tosa_CastOp: Tosa_Op<"cast", [NoSideEffect,
//===----------------------------------------------------------------------===//
// Operator: rescale
//===----------------------------------------------------------------------===//
-def Tosa_RescaleOp: Tosa_Op<"rescale", [NoSideEffect,
+def Tosa_RescaleOp: Tosa_Op<"rescale", [Pure,
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>]> {
let summary = "Tosa rescale operator";
@@ -1736,7 +1736,7 @@ def Tosa_RescaleOp: Tosa_Op<"rescale", [NoSideEffect,
//===----------------------------------------------------------------------===//
// Operator: const
//===----------------------------------------------------------------------===//
-def Tosa_ConstOp : Tosa_Op<"const", [ConstantLike, NoSideEffect,
+def Tosa_ConstOp : Tosa_Op<"const", [ConstantLike, Pure,
FirstAttrDerivedResultType]> {
let summary = "Constant op.";
@@ -1758,7 +1758,7 @@ def Tosa_ConstOp : Tosa_Op<"const", [ConstantLike, NoSideEffect,
//===----------------------------------------------------------------------===//
// Operator: identity
//===----------------------------------------------------------------------===//
-def Tosa_IdentityOp: Tosa_Op<"identity", [NoSideEffect,
+def Tosa_IdentityOp: Tosa_Op<"identity", [Pure,
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>]> {
let summary = "Identity operator";
@@ -1820,7 +1820,7 @@ def Tosa_IfOp : Tosa_Op<"cond_if", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
SingleBlockImplicitTerminator<"YieldOp">,
- RecursiveSideEffects]> {
+ RecursiveMemoryEffects]> {
let summary = "Conditional if operator";
let description = [{
@@ -1854,7 +1854,7 @@ def Tosa_WhileOp : Tosa_Op<"while_loop", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
SingleBlockImplicitTerminator<"YieldOp">,
- RecursiveSideEffects]> {
+ RecursiveMemoryEffects]> {
let summary = "output = input; While (Cond(output)) {output = Body(output)}";
let description = [{
diff --git a/mlir/include/mlir/Dialect/Tosa/IR/TosaUtilOps.td b/mlir/include/mlir/Dialect/Tosa/IR/TosaUtilOps.td
index 9afe29e174a05..a04e5d6c96eec 100644
--- a/mlir/include/mlir/Dialect/Tosa/IR/TosaUtilOps.td
+++ b/mlir/include/mlir/Dialect/Tosa/IR/TosaUtilOps.td
@@ -24,7 +24,7 @@ include "mlir/Dialect/Tosa/IR/TosaInterfaces.td"
include "mlir/Dialect/Tosa/IR/TosaTypesBase.td"
include "mlir/Dialect/Tosa/IR/TosaOpBase.td"
-def Tosa_ApplyScaleOp: Tosa_Op<"apply_scale", [NoSideEffect] # ElementwiseMappable.traits> {
+def Tosa_ApplyScaleOp: Tosa_Op<"apply_scale", [Pure] # ElementwiseMappable.traits> {
let summary = "Rescale scalar operator for Tosa tensor operators";
let description = [{
@@ -53,7 +53,7 @@ def Tosa_ApplyScaleOp: Tosa_Op<"apply_scale", [NoSideEffect] # ElementwiseMappab
//===----------------------------------------------------------------------===//
def Tosa_YieldOp : Tosa_Op<"yield", [
Terminator,
- NoSideEffect]> {
+ Pure]> {
let summary = "yield operator";
let description = [{
diff --git a/mlir/include/mlir/Dialect/Transform/IR/TransformOps.td b/mlir/include/mlir/Dialect/Transform/IR/TransformOps.td
index 955c09a5e0c79..bc20edfed77d1 100644
--- a/mlir/include/mlir/Dialect/Transform/IR/TransformOps.td
+++ b/mlir/include/mlir/Dialect/Transform/IR/TransformOps.td
@@ -380,7 +380,7 @@ def SequenceOp : TransformDialectOp<"sequence",
def WithPDLPatternsOp : TransformDialectOp<"with_pdl_patterns",
[DeclareOpInterfaceMethods<TransformOpInterface>, NoTerminator,
- OpAsmOpInterface, PossibleTopLevelTransformOpTrait, RecursiveSideEffects,
+ OpAsmOpInterface, PossibleTopLevelTransformOpTrait, RecursiveMemoryEffects,
SymbolTable]> {
let summary = "Contains PDL patterns available for use in transforms";
let description = [{
diff --git a/mlir/include/mlir/Dialect/Vector/IR/VectorOps.td b/mlir/include/mlir/Dialect/Vector/IR/VectorOps.td
index 1019ffe52a371..c6a8f8243adea 100644
--- a/mlir/include/mlir/Dialect/Vector/IR/VectorOps.td
+++ b/mlir/include/mlir/Dialect/Vector/IR/VectorOps.td
@@ -83,7 +83,7 @@ def IteratorTypeArrayAttr : TypedArrayAttrBase<IteratorTypeEnum,
// than the current set: {*, +}.
def Vector_ContractionOp :
Vector_Op<"contract", [
- NoSideEffect,
+ Pure,
PredOpTrait<"lhs and rhs have same element type", TCopVTEtIsSameAs<0, 1>>,
PredOpTrait<"third operand acc and result have same element type",
TCresVTEtIsSameAsOpBase<0, 2>>,
@@ -281,7 +281,7 @@ def Vector_ContractionOp :
}
def Vector_ReductionOp :
- Vector_Op<"reduction", [NoSideEffect,
+ Vector_Op<"reduction", [Pure,
PredOpTrait<"source operand and result have same element type",
TCresVTEtIsSameAsOpBase<0, 0>>,
DeclareOpInterfaceMethods<MaskableOpInterface>,
@@ -332,7 +332,7 @@ def Vector_ReductionOp :
}
def Vector_MultiDimReductionOp :
- Vector_Op<"multi_reduction", [NoSideEffect,
+ Vector_Op<"multi_reduction", [Pure,
AllTypesMatch<["dest", "acc"]>,
PredOpTrait<"source operand and result have same element type",
TCresVTEtIsSameAsOpBase<0, 0>>,
@@ -403,7 +403,7 @@ def Vector_MultiDimReductionOp :
}
def Vector_BroadcastOp :
- Vector_Op<"broadcast", [NoSideEffect,
+ Vector_Op<"broadcast", [Pure,
PredOpTrait<"source operand and result have same element type",
TCresVTEtIsSameAsOpBase<0, 0>>]>,
Arguments<(ins AnyType:$source)>,
@@ -451,7 +451,7 @@ def Vector_BroadcastOp :
}
def Vector_ShuffleOp :
- Vector_Op<"shuffle", [NoSideEffect,
+ Vector_Op<"shuffle", [Pure,
PredOpTrait<"first operand v1 and result have same element type",
TCresVTEtIsSameAsOpBase<0, 0>>,
PredOpTrait<"second operand v2 and result have same element type",
@@ -517,7 +517,7 @@ def Vector_ShuffleOp :
}
def Vector_ExtractElementOp :
- Vector_Op<"extractelement", [NoSideEffect,
+ Vector_Op<"extractelement", [Pure,
TypesMatchWith<"result type matches element type of vector operand",
"vector", "result",
"$_self.cast<ShapedType>().getElementType()">]>,
@@ -565,7 +565,7 @@ def Vector_ExtractElementOp :
}
def Vector_ExtractOp :
- Vector_Op<"extract", [NoSideEffect,
+ Vector_Op<"extract", [Pure,
PredOpTrait<"operand and result have same element type",
TCresVTEtIsSameAsOpBase<0, 0>>,
DeclareOpInterfaceMethods<InferTypeOpInterface>]>,
@@ -604,7 +604,7 @@ def Vector_ExtractOp :
def Vector_FMAOp :
Op<Vector_Dialect, "fma", [
- NoSideEffect, AllTypesMatch<["lhs", "rhs", "acc", "result"]>,
+ Pure, AllTypesMatch<["lhs", "rhs", "acc", "result"]>,
DeclareOpInterfaceMethods<VectorUnrollOpInterface, ["getShapeForUnroll"]>
] # ElementwiseMappable.traits>,
Arguments<(ins AnyVectorOfAnyRank:$lhs,
@@ -634,7 +634,7 @@ def Vector_FMAOp :
}
def Vector_InsertElementOp :
- Vector_Op<"insertelement", [NoSideEffect,
+ Vector_Op<"insertelement", [Pure,
TypesMatchWith<"source operand type matches element type of result",
"result", "source",
"$_self.cast<ShapedType>().getElementType()">,
@@ -682,7 +682,7 @@ def Vector_InsertElementOp :
}
def Vector_InsertOp :
- Vector_Op<"insert", [NoSideEffect,
+ Vector_Op<"insert", [Pure,
PredOpTrait<"source operand and result have same element type",
TCresVTEtIsSameAsOpBase<0, 0>>,
AllTypesMatch<["dest", "res"]>]>,
@@ -725,7 +725,7 @@ def Vector_InsertOp :
}
def Vector_InsertStridedSliceOp :
- Vector_Op<"insert_strided_slice", [NoSideEffect,
+ Vector_Op<"insert_strided_slice", [Pure,
PredOpTrait<"operand #0 and result have same element type",
TCresVTEtIsSameAsOpBase<0, 0>>,
AllTypesMatch<["dest", "res"]>]>,
@@ -784,7 +784,7 @@ def Vector_InsertStridedSliceOp :
}
def Vector_OuterProductOp :
- Vector_Op<"outerproduct", [NoSideEffect,
+ Vector_Op<"outerproduct", [Pure,
PredOpTrait<"lhs operand and result have same element type",
TCresVTEtIsSameAsOpBase<0, 0>>,
PredOpTrait<"rhs operand and result have same element type",
@@ -875,7 +875,7 @@ def Vector_OuterProductOp :
// TODO: Add transformation which decomposes ReshapeOp into an optimized
// sequence of vector rotate/shuffle/select operations.
def Vector_ReshapeOp :
- Vector_Op<"reshape", [AttrSizedOperandSegments, NoSideEffect]>,
+ Vector_Op<"reshape", [AttrSizedOperandSegments, Pure]>,
Arguments<(ins AnyVector:$vector, Variadic<Index>:$input_shape,
Variadic<Index>:$output_shape,
I64ArrayAttr:$fixed_vector_sizes)>,
@@ -995,7 +995,7 @@ def Vector_ReshapeOp :
}
def Vector_ExtractStridedSliceOp :
- Vector_Op<"extract_strided_slice", [NoSideEffect,
+ Vector_Op<"extract_strided_slice", [Pure,
PredOpTrait<"operand and result have same element type",
TCresVTEtIsSameAsOpBase<0, 0>>]>,
Arguments<(ins AnyVector:$vector, I64ArrayAttr:$offsets,
@@ -1911,7 +1911,7 @@ def Vector_CompressStoreOp :
}
def Vector_ShapeCastOp :
- Vector_Op<"shape_cast", [NoSideEffect]>,
+ Vector_Op<"shape_cast", [Pure]>,
Arguments<(ins AnyVector:$source)>,
Results<(outs AnyVector:$result)> {
let summary = "shape_cast casts between vector shapes";
@@ -1963,7 +1963,7 @@ def Vector_ShapeCastOp :
}
def Vector_BitCastOp :
- Vector_Op<"bitcast", [NoSideEffect, AllRanksMatch<["source", "result"]>]>,
+ Vector_Op<"bitcast", [Pure, AllRanksMatch<["source", "result"]>]>,
Arguments<(ins AnyVectorOfAnyRank:$source)>,
Results<(outs AnyVectorOfAnyRank:$result)>{
let summary = "bitcast casts between vectors";
@@ -2003,7 +2003,7 @@ def Vector_BitCastOp :
}
def Vector_TypeCastOp :
- Vector_Op<"type_cast", [NoSideEffect, ViewLikeOpInterface]>,
+ Vector_Op<"type_cast", [Pure, ViewLikeOpInterface]>,
Arguments<(ins StaticShapeMemRefOf<[AnyType]>:$memref)>,
Results<(outs AnyMemRef:$result)> {
let summary = "type_cast op converts a scalar memref to a vector memref";
@@ -2050,7 +2050,7 @@ def Vector_TypeCastOp :
}
def Vector_ConstantMaskOp :
- Vector_Op<"constant_mask", [NoSideEffect]>,
+ Vector_Op<"constant_mask", [Pure]>,
Arguments<(ins I64ArrayAttr:$mask_dim_sizes)>,
Results<(outs VectorOfAnyRankOf<[I1]>)> {
let summary = "creates a constant vector mask";
@@ -2092,7 +2092,7 @@ def Vector_ConstantMaskOp :
}
def Vector_CreateMaskOp :
- Vector_Op<"create_mask", [NoSideEffect]>,
+ Vector_Op<"create_mask", [Pure]>,
Arguments<(ins Variadic<Index>:$operands)>,
Results<(outs VectorOfAnyRankOf<[I1]>)> {
let summary = "creates a vector mask";
@@ -2131,7 +2131,7 @@ def Vector_CreateMaskOp :
}
def Vector_MaskOp : Vector_Op<"mask", [
- SingleBlockImplicitTerminator<"vector::YieldOp">, RecursiveSideEffects,
+ SingleBlockImplicitTerminator<"vector::YieldOp">, RecursiveMemoryEffects,
NoRegionArguments
]> {
let summary = "Predicates a maskable vector operation";
@@ -2203,7 +2203,7 @@ def Vector_MaskOp : Vector_Op<"mask", [
}
def Vector_TransposeOp :
- Vector_Op<"transpose", [NoSideEffect,
+ Vector_Op<"transpose", [Pure,
DeclareOpInterfaceMethods<VectorUnrollOpInterface, ["getShapeForUnroll"]>,
PredOpTrait<"operand and result have same element type",
TCresVTEtIsSameAsOpBase<0, 0>>]>,
@@ -2300,7 +2300,7 @@ def Vector_PrintOp :
/// This may seem redundant with vector.contract but it serves the purposes of
/// more progressive lowering and localized type conversion on the path:
/// `vector<...x...xf32> -> vector<...xf32> -> !llvm<... x float>`.
-def Vector_MatmulOp : Vector_Op<"matrix_multiply", [NoSideEffect,
+def Vector_MatmulOp : Vector_Op<"matrix_multiply", [Pure,
PredOpTrait<"lhs operand and result have same element type",
TCresVTEtIsSameAsOpBase<0, 0>>,
PredOpTrait<"rhs operand and result have same element type",
@@ -2364,7 +2364,7 @@ def Vector_MatmulOp : Vector_Op<"matrix_multiply", [NoSideEffect,
/// This may seem redundant with vector.transpose but it serves the purposes of
/// more progressive lowering and localized type conversion on the path:
/// `vector<...x...xf32> -> vector<...xf32> -> !llvm<... x float>`.
-def Vector_FlatTransposeOp : Vector_Op<"flat_transpose", [NoSideEffect,
+def Vector_FlatTransposeOp : Vector_Op<"flat_transpose", [Pure,
PredOpTrait<"source operand and result have same element type",
TCresVTEtIsSameAsOpBase<0, 0>>]>,
Arguments<(
@@ -2407,7 +2407,7 @@ def Vector_FlatTransposeOp : Vector_Op<"flat_transpose", [NoSideEffect,
//===----------------------------------------------------------------------===//
def Vector_SplatOp : Vector_Op<"splat", [
- NoSideEffect,
+ Pure,
TypesMatchWith<"operand type matches element type of result",
"aggregate", "input",
"$_self.cast<VectorType>().getElementType()">
@@ -2450,7 +2450,7 @@ def Vector_SplatOp : Vector_Op<"splat", [
// call to the function. For that, it might be useful to have a
// 'vector.scale.global' and a 'vector.scale.local' operation.
def VectorScaleOp : Vector_Op<"vscale",
- [NoSideEffect]> {
+ [Pure]> {
let summary = "Load vector scale size";
let description = [{
The `vscale` op returns the scale of the scalable vectors, a positive
@@ -2473,7 +2473,7 @@ def VectorScaleOp : Vector_Op<"vscale",
//===----------------------------------------------------------------------===//
def Vector_ScanOp :
- Vector_Op<"scan", [NoSideEffect,
+ Vector_Op<"scan", [Pure,
AllTypesMatch<["source", "dest"]>,
AllTypesMatch<["initial_value", "accumulated_value"]> ]>,
Arguments<(ins Vector_CombiningKindAttr:$kind,
@@ -2528,7 +2528,7 @@ def Vector_ScanOp :
}
def Vector_YieldOp : Vector_Op<"yield", [
- NoSideEffect, ReturnLike, Terminator]> {
+ Pure, ReturnLike, Terminator]> {
let summary = "Terminates and yields values from vector regions.";
let description = [{
"vector.yield" yields an SSA value from the Vector dialect op region and
@@ -2552,7 +2552,7 @@ def Vector_YieldOp : Vector_Op<"yield", [
def Vector_WarpExecuteOnLane0Op : Vector_Op<"warp_execute_on_lane_0",
[DeclareOpInterfaceMethods<RegionBranchOpInterface, ["areTypesCompatible"]>,
SingleBlockImplicitTerminator<"vector::YieldOp">,
- RecursiveSideEffects]> {
+ RecursiveMemoryEffects]> {
let summary = "Executes operations in the associated region on thread #0 of a"
"SPMD program";
let description = [{
diff --git a/mlir/include/mlir/Dialect/X86Vector/X86Vector.td b/mlir/include/mlir/Dialect/X86Vector/X86Vector.td
index 03fa89ef899a6..483e9236c9e9b 100644
--- a/mlir/include/mlir/Dialect/X86Vector/X86Vector.td
+++ b/mlir/include/mlir/Dialect/X86Vector/X86Vector.td
@@ -54,7 +54,7 @@ class AVX512_IntrOverloadedOp<string mnemonic,
// MaskCompressOp
//----------------------------------------------------------------------------//
-def MaskCompressOp : AVX512_Op<"mask.compress", [NoSideEffect,
+def MaskCompressOp : AVX512_Op<"mask.compress", [Pure,
// TODO: Support optional arguments in `AllTypesMatch`. "type($src)" could
// then be removed from assemblyFormat.
AllTypesMatch<["a", "dst"]>,
@@ -91,7 +91,7 @@ def MaskCompressOp : AVX512_Op<"mask.compress", [NoSideEffect,
}
def MaskCompressIntrOp : AVX512_IntrOverloadedOp<"mask.compress", [
- NoSideEffect,
+ Pure,
AllTypesMatch<["a", "src", "res"]>,
TypesMatchWith<"`k` has the same number of bits as elements in `res`",
"res", "k",
@@ -109,7 +109,7 @@ def MaskCompressIntrOp : AVX512_IntrOverloadedOp<"mask.compress", [
// MaskRndScaleOp
//----------------------------------------------------------------------------//
-def MaskRndScaleOp : AVX512_Op<"mask.rndscale", [NoSideEffect,
+def MaskRndScaleOp : AVX512_Op<"mask.rndscale", [Pure,
AllTypesMatch<["src", "a", "dst"]>,
TypesMatchWith<"imm has the same number of bits as elements in dst",
"dst", "imm",
@@ -142,7 +142,7 @@ def MaskRndScaleOp : AVX512_Op<"mask.rndscale", [NoSideEffect,
}
def MaskRndScalePSIntrOp : AVX512_IntrOp<"mask.rndscale.ps.512", 1, [
- NoSideEffect,
+ Pure,
AllTypesMatch<["src", "a", "res"]>]> {
let arguments = (ins VectorOfLengthAndType<[16], [F32]>:$src,
I32:$k,
@@ -152,7 +152,7 @@ def MaskRndScalePSIntrOp : AVX512_IntrOp<"mask.rndscale.ps.512", 1, [
}
def MaskRndScalePDIntrOp : AVX512_IntrOp<"mask.rndscale.pd.512", 1, [
- NoSideEffect,
+ Pure,
AllTypesMatch<["src", "a", "res"]>]> {
let arguments = (ins VectorOfLengthAndType<[8], [F64]>:$src,
I32:$k,
@@ -165,7 +165,7 @@ def MaskRndScalePDIntrOp : AVX512_IntrOp<"mask.rndscale.pd.512", 1, [
// MaskScaleFOp
//----------------------------------------------------------------------------//
-def MaskScaleFOp : AVX512_Op<"mask.scalef", [NoSideEffect,
+def MaskScaleFOp : AVX512_Op<"mask.scalef", [Pure,
AllTypesMatch<["src", "a", "b", "dst"]>,
TypesMatchWith<"k has the same number of bits as elements in dst",
"dst", "k",
@@ -199,7 +199,7 @@ def MaskScaleFOp : AVX512_Op<"mask.scalef", [NoSideEffect,
}
def MaskScaleFPSIntrOp : AVX512_IntrOp<"mask.scalef.ps.512", 1, [
- NoSideEffect,
+ Pure,
AllTypesMatch<["src", "a", "b", "res"]>]> {
let arguments = (ins VectorOfLengthAndType<[16], [F32]>:$src,
VectorOfLengthAndType<[16], [F32]>:$a,
@@ -209,7 +209,7 @@ def MaskScaleFPSIntrOp : AVX512_IntrOp<"mask.scalef.ps.512", 1, [
}
def MaskScaleFPDIntrOp : AVX512_IntrOp<"mask.scalef.pd.512", 1, [
- NoSideEffect,
+ Pure,
AllTypesMatch<["src", "a", "b", "res"]>]> {
let arguments = (ins VectorOfLengthAndType<[8], [F64]>:$src,
VectorOfLengthAndType<[8], [F64]>:$a,
@@ -222,7 +222,7 @@ def MaskScaleFPDIntrOp : AVX512_IntrOp<"mask.scalef.pd.512", 1, [
// Vp2IntersectOp
//----------------------------------------------------------------------------//
-def Vp2IntersectOp : AVX512_Op<"vp2intersect", [NoSideEffect,
+def Vp2IntersectOp : AVX512_Op<"vp2intersect", [Pure,
AllTypesMatch<["a", "b"]>,
TypesMatchWith<"k1 has the same number of bits as elements in a",
"a", "k1",
@@ -260,13 +260,13 @@ def Vp2IntersectOp : AVX512_Op<"vp2intersect", [NoSideEffect,
}
def Vp2IntersectDIntrOp : AVX512_IntrOp<"vp2intersect.d.512", 2, [
- NoSideEffect]> {
+ Pure]> {
let arguments = (ins VectorOfLengthAndType<[16], [I32]>:$a,
VectorOfLengthAndType<[16], [I32]>:$b);
}
def Vp2IntersectQIntrOp : AVX512_IntrOp<"vp2intersect.q.512", 2, [
- NoSideEffect]> {
+ Pure]> {
let arguments = (ins VectorOfLengthAndType<[8], [I64]>:$a,
VectorOfLengthAndType<[8], [I64]>:$b);
}
@@ -295,14 +295,14 @@ class AVX_IntrOp<string mnemonic, int numResults, list<Trait> traits = []> :
// AVX Rsqrt
//----------------------------------------------------------------------------//
-def RsqrtOp : AVX_Op<"rsqrt", [NoSideEffect, SameOperandsAndResultType]> {
+def RsqrtOp : AVX_Op<"rsqrt", [Pure, SameOperandsAndResultType]> {
let summary = "Rsqrt";
let arguments = (ins VectorOfLengthAndType<[8], [F32]>:$a);
let results = (outs VectorOfLengthAndType<[8], [F32]>:$b);
let assemblyFormat = "$a attr-dict `:` type($a)";
}
-def RsqrtIntrOp : AVX_IntrOp<"rsqrt.ps.256", 1, [NoSideEffect,
+def RsqrtIntrOp : AVX_IntrOp<"rsqrt.ps.256", 1, [Pure,
SameOperandsAndResultType]> {
let arguments = (ins VectorOfLengthAndType<[8], [F32]>:$a);
}
@@ -311,7 +311,7 @@ def RsqrtIntrOp : AVX_IntrOp<"rsqrt.ps.256", 1, [NoSideEffect,
// AVX Dot
//----------------------------------------------------------------------------//
-def DotOp : AVX_LowOp<"dot", [NoSideEffect, SameOperandsAndResultType]> {
+def DotOp : AVX_LowOp<"dot", [Pure, SameOperandsAndResultType]> {
let summary = "Dot";
let description = [{
Computes the 4-way dot products of the lower and higher parts of the source
@@ -335,7 +335,7 @@ def DotOp : AVX_LowOp<"dot", [NoSideEffect, SameOperandsAndResultType]> {
let assemblyFormat = "$a `,` $b attr-dict `:` type($res)";
}
-def DotIntrOp : AVX_IntrOp<"dp.ps.256", 1, [NoSideEffect,
+def DotIntrOp : AVX_IntrOp<"dp.ps.256", 1, [Pure,
AllTypesMatch<["a", "b", "res"]>]> {
let arguments = (ins VectorOfLengthAndType<[8], [F32]>:$a,
VectorOfLengthAndType<[8], [F32]>:$b, I8:$c);
diff --git a/mlir/include/mlir/IR/BuiltinOps.td b/mlir/include/mlir/IR/BuiltinOps.td
index 621a71e32c7f8..72ce4cb5ecd87 100644
--- a/mlir/include/mlir/IR/BuiltinOps.td
+++ b/mlir/include/mlir/IR/BuiltinOps.td
@@ -99,7 +99,7 @@ def ModuleOp : Builtin_Op<"module", [
//===----------------------------------------------------------------------===//
def UnrealizedConversionCastOp : Builtin_Op<"unrealized_conversion_cast", [
- DeclareOpInterfaceMethods<CastOpInterface>, NoSideEffect
+ DeclareOpInterfaceMethods<CastOpInterface>, Pure
]> {
let summary = "An unrealized conversion from one set of types to another";
let description = [{
diff --git a/mlir/include/mlir/Interfaces/SideEffectInterfaceBase.td b/mlir/include/mlir/Interfaces/SideEffectInterfaceBase.td
index 0bb9ad6c7febb..bddbaaddfb8d7 100644
--- a/mlir/include/mlir/Interfaces/SideEffectInterfaceBase.td
+++ b/mlir/include/mlir/Interfaces/SideEffectInterfaceBase.td
@@ -98,7 +98,7 @@ class EffectOpInterfaceBase<string name, string baseEffect>
static bool hasNoEffect(Operation *op) {
if (auto interface = dyn_cast<}] # name # [{>(op))
return interface.hasNoEffect();
- return op->hasTrait<::mlir::OpTrait::HasRecursiveSideEffects>();
+ return op->hasTrait<::mlir::OpTrait::HasRecursiveMemoryEffects>();
}
/// Collect all of the effect instances that operate on the provided value
diff --git a/mlir/include/mlir/Interfaces/SideEffectInterfaces.h b/mlir/include/mlir/Interfaces/SideEffectInterfaces.h
index 4b219e71aa756..3be924ad772ee 100644
--- a/mlir/include/mlir/Interfaces/SideEffectInterfaces.h
+++ b/mlir/include/mlir/Interfaces/SideEffectInterfaces.h
@@ -194,18 +194,65 @@ class EffectInstance {
};
} // namespace SideEffects
+namespace Speculation {
+/// This enum is returned from the `getSpeculatability` method in the
+/// `ConditionallySpeculatable` op interface.
+enum class Speculatability {
+ /// The Operation in question cannot be speculatively executed. This could be
+ /// because it may invoke undefined behavior or have other side effects.
+ NotSpeculatable,
+
+ /// The Operation in question can be speculatively executed. It does not have
+ /// any side effects or undefined behavior.
+ Speculatable,
+
+ /// The Operation in question can be speculatively executed if all the
+ /// operations in all attached regions can also be speculatively executed.
+ RecursivelySpeculatable,
+};
+
+constexpr auto NotSpeculatable = Speculatability::NotSpeculatable;
+constexpr auto Speculatable = Speculatability::Speculatable;
+constexpr auto RecursivelySpeculatable =
+ Speculatability::RecursivelySpeculatable;
+} // namespace Speculation
+
//===----------------------------------------------------------------------===//
// SideEffect Traits
//===----------------------------------------------------------------------===//
namespace OpTrait {
-/// This trait indicates that the side effects of an operation includes the
+/// This trait indicates that the memory effects of an operation include the
/// effects of operations nested within its regions. If the operation has no
/// derived effects interfaces, the operation itself can be assumed to have no
-/// side effects.
+/// memory effects.
template <typename ConcreteType>
-class HasRecursiveSideEffects
- : public TraitBase<ConcreteType, HasRecursiveSideEffects> {};
+class HasRecursiveMemoryEffects
+ : public TraitBase<ConcreteType, HasRecursiveMemoryEffects> {};
+
+/// This trait marks an op (which must be tagged as implementing the
+/// ConditionallySpeculatable interface) as being recursively speculatable.
+/// This means that said op can be speculated only if all the operations in
+/// all the regions attached to the op can be speculated.
+template <typename ConcreteType>
+struct RecursivelySpeculatableImplTrait
+ : public TraitBase<ConcreteType, RecursivelySpeculatableImplTrait> {
+
+ Speculation::Speculatability getSpeculatability() {
+ return Speculation::RecursivelySpeculatable;
+ }
+};
+
+/// This trait marks an op (which must be tagged as implementing the
+/// ConditionallySpeculatable interface) as being always speculatable.
+template <typename ConcreteType>
+struct AlwaysSpeculatableImplTrait
+ : public TraitBase<ConcreteType, AlwaysSpeculatableImplTrait> {
+
+ Speculation::Speculatability getSpeculatability() {
+ return Speculation::Speculatable;
+ }
+};
} // namespace OpTrait
//===----------------------------------------------------------------------===//
diff --git a/mlir/include/mlir/Interfaces/SideEffectInterfaces.td b/mlir/include/mlir/Interfaces/SideEffectInterfaces.td
index 1c12a5a56e7f7..eb810c2cfada1 100644
--- a/mlir/include/mlir/Interfaces/SideEffectInterfaces.td
+++ b/mlir/include/mlir/Interfaces/SideEffectInterfaces.td
@@ -77,9 +77,59 @@ def MemWrite : MemWrite<DefaultResource>;
// Effect Traits
//===----------------------------------------------------------------------===//
-// Op has no side effect.
-def NoSideEffect : MemoryEffects<[]>;
+// Op has no effect on memory but may have undefined behavior.
+def NoMemoryEffect : MemoryEffects<[]>;
+
// Op has recursively computed side effects.
-def RecursiveSideEffects : NativeOpTrait<"HasRecursiveSideEffects">;
+def RecursiveMemoryEffects : NativeOpTrait<"HasRecursiveMemoryEffects">;
+
+//===----------------------------------------------------------------------===//
+// Speculation
+//===----------------------------------------------------------------------===//
+
+// Used to inject an implementation of getSpeculatability. Users should not use
+// this directly.
+def RecursivelySpeculatableImplTrait
+ : NativeOpTrait<"RecursivelySpeculatableImplTrait">;
+
+// Used to inject an implementation of getSpeculatability. Users should not use
+// this directly.
+def AlwaysSpeculatableImplTrait
+ : NativeOpTrait<"AlwaysSpeculatableImplTrait">;
+
+// This op interface enables Op authors to inject custom logic to determine
+// whether an Operation can be speculatively executed. Ops that implement this
+// interface need to implement the custom logic in the `getSpeculatability` method.
+// For instance, the `getSpeculatability` for a specific op may check the attributes
+// or input types to determine whether that specific Operation is speculatable.
+def ConditionallySpeculatable : OpInterface<"ConditionallySpeculatable"> {
+ let description = [{
+ An interface used to query information about the speculatability of an
+ operation.
+ }];
+ let cppNamespace = "::mlir";
+
+ let methods = [
+ InterfaceMethod<[{
+ Returns a value indicating whether the specific operation in question can
+ be speculatively executed. Please see the documentation on the
+ Speculatability enum to know how to interpret the return value.
+ }],
+ "::mlir::Speculation::Speculatability", "getSpeculatability", (ins)>
+ ];
+}
+
+// Marks an Operation as always speculatable.
+def AlwaysSpeculatable : TraitList<[
+ ConditionallySpeculatable, AlwaysSpeculatableImplTrait]>;
+
+// Marks an Operation as speculatable only if all the operations in all attached
+// regions are also speculatable.
+def RecursivelySpeculatable : TraitList<[
+ ConditionallySpeculatable, RecursivelySpeculatableImplTrait]>;
+
+// Always speculatable operation that does not touch memory. These operations
+// are always legal to hoist or sink.
+def Pure : TraitList<[AlwaysSpeculatable, NoMemoryEffect]>;
#endif // MLIR_INTERFACES_SIDEEFFECTS
diff --git a/mlir/include/mlir/Transforms/SideEffectUtils.h b/mlir/include/mlir/Transforms/SideEffectUtils.h
index 5c53a99d53d5f..4c797a7f30827 100644
--- a/mlir/include/mlir/Transforms/SideEffectUtils.h
+++ b/mlir/include/mlir/Transforms/SideEffectUtils.h
@@ -13,17 +13,24 @@ namespace mlir {
class Operation;
-/// Returns true if the given operation is side-effect free.
+/// Returns true if the given operation is free of memory effects.
///
-/// An operation is side-effect free if its implementation of
+/// An operation is free of memory effects if its implementation of
/// `MemoryEffectOpInterface` indicates that it has no memory effects. For
-/// example, it may implement `NoSideEffect` in ODS. Alternatively, if the
-/// operation `HasRecursiveSideEffects`, then it is side-effect free if all of
-/// its nested operations are side-effect free.
+/// example, it may implement `NoMemoryEffect` in ODS. Alternatively, if the
+/// operation has the `HasRecursiveMemoryEffects` trait, then it is free of
+/// memory effects if all of its nested operations are free of memory effects.
///
-/// If the operation has both, then it is side-effect free if both conditions
-/// are satisfied.
-bool isSideEffectFree(Operation *op);
+/// If the operation has both, then it is free of memory effects if both
+/// conditions are satisfied.
+bool isMemoryEffectFree(Operation *op);
+
+/// Returns true if the given operation is speculatable, i.e. has no undefined
+/// behavior or other side effects.
+///
+/// An operation can indicate that it is speculatable by implementing the
+/// getSpeculatability hook in the ConditionallySpeculatable op interface.
+bool isSpeculatable(Operation *op);
} // end namespace mlir
diff --git a/mlir/lib/Analysis/AliasAnalysis/LocalAliasAnalysis.cpp b/mlir/lib/Analysis/AliasAnalysis/LocalAliasAnalysis.cpp
index 196312d4f6a55..b529b075fa34d 100644
--- a/mlir/lib/Analysis/AliasAnalysis/LocalAliasAnalysis.cpp
+++ b/mlir/lib/Analysis/AliasAnalysis/LocalAliasAnalysis.cpp
@@ -348,7 +348,7 @@ AliasResult LocalAliasAnalysis::alias(Value lhs, Value rhs) {
ModRefResult LocalAliasAnalysis::getModRef(Operation *op, Value location) {
// Check to see if this operation relies on nested side effects.
- if (op->hasTrait<OpTrait::HasRecursiveSideEffects>()) {
+ if (op->hasTrait<OpTrait::HasRecursiveMemoryEffects>()) {
// TODO: To check recursive operations we need to check all of the nested
// operations, which can result in a quadratic number of queries. We should
// introduce some caching of some kind to help alleviate this, especially as
diff --git a/mlir/lib/Dialect/Affine/Utils/Utils.cpp b/mlir/lib/Dialect/Affine/Utils/Utils.cpp
index 2f077cc3bae0c..53ab1130fddaa 100644
--- a/mlir/lib/Dialect/Affine/Utils/Utils.cpp
+++ b/mlir/lib/Dialect/Affine/Utils/Utils.cpp
@@ -743,7 +743,7 @@ bool mlir::hasNoInterveningEffect(Operation *start, T memOp) {
return;
}
- if (op->hasTrait<OpTrait::HasRecursiveSideEffects>()) {
+ if (op->hasTrait<OpTrait::HasRecursiveMemoryEffects>()) {
// Recurse into the regions for this op and check whether the internal
// operations may have the side effect `EffectType` on memOp.
for (Region &region : op->getRegions())
diff --git a/mlir/lib/Dialect/MemRef/IR/MemRefOps.cpp b/mlir/lib/Dialect/MemRef/IR/MemRefOps.cpp
index 26814aba3913c..292eb4618aac6 100644
--- a/mlir/lib/Dialect/MemRef/IR/MemRefOps.cpp
+++ b/mlir/lib/Dialect/MemRef/IR/MemRefOps.cpp
@@ -378,7 +378,7 @@ static bool isGuaranteedAutomaticAllocation(Operation *op) {
static bool isOpItselfPotentialAutomaticAllocation(Operation *op) {
// This op itself doesn't create a stack allocation,
// the inner allocation should be handled separately.
- if (op->hasTrait<OpTrait::HasRecursiveSideEffects>())
+ if (op->hasTrait<OpTrait::HasRecursiveMemoryEffects>())
return false;
MemoryEffectOpInterface interface = dyn_cast<MemoryEffectOpInterface>(op);
if (!interface)
diff --git a/mlir/lib/Dialect/Vector/Transforms/VectorDistribute.cpp b/mlir/lib/Dialect/Vector/Transforms/VectorDistribute.cpp
index 3d916d0b5698a..3c4f20fd5f9c2 100644
--- a/mlir/lib/Dialect/Vector/Transforms/VectorDistribute.cpp
+++ b/mlir/lib/Dialect/Vector/Transforms/VectorDistribute.cpp
@@ -224,7 +224,7 @@ static WarpExecuteOnLane0Op moveRegionToNewWarpOpAndAppendReturns(
static bool canBeHoisted(Operation *op,
function_ref<bool(Value)> definedOutside) {
return llvm::all_of(op->getOperands(), definedOutside) &&
- isSideEffectFree(op) && op->getNumRegions() == 0;
+ isMemoryEffectFree(op) && op->getNumRegions() == 0;
}
/// Return a value yielded by `warpOp` which satisfies the filter lambda
@@ -555,7 +555,7 @@ struct WarpOpTransferWrite : public OpRewritePattern<vector::TransferWriteOp> {
// There must be no op with a side effect after writeOp.
Operation *nextOp = writeOp.getOperation();
while ((nextOp = nextOp->getNextNode()))
- if (!isSideEffectFree(nextOp))
+ if (!isMemoryEffectFree(nextOp))
return failure();
if (!llvm::all_of(writeOp->getOperands(), [&](Value value) {
diff --git a/mlir/lib/Dialect/Vector/Transforms/VectorTransferOpTransforms.cpp b/mlir/lib/Dialect/Vector/Transforms/VectorTransferOpTransforms.cpp
index 893ca290aec82..7ab8e8ad02537 100644
--- a/mlir/lib/Dialect/Vector/Transforms/VectorTransferOpTransforms.cpp
+++ b/mlir/lib/Dialect/Vector/Transforms/VectorTransferOpTransforms.cpp
@@ -114,7 +114,7 @@ void TransferOptimization::deadStoreOp(vector::TransferWriteOp write) {
users.append(subView->getUsers().begin(), subView->getUsers().end());
continue;
}
- if (isSideEffectFree(user))
+ if (isMemoryEffectFree(user))
continue;
if (user == write.getOperation())
continue;
@@ -200,7 +200,7 @@ void TransferOptimization::storeToLoadForwarding(vector::TransferReadOp read) {
users.append(subView->getUsers().begin(), subView->getUsers().end());
continue;
}
- if (isSideEffectFree(user) || isa<vector::TransferReadOp>(user))
+ if (isMemoryEffectFree(user) || isa<vector::TransferReadOp>(user))
continue;
if (auto write = dyn_cast<vector::TransferWriteOp>(user)) {
// If there is a write, but we can prove that it is disjoint we can ignore
diff --git a/mlir/lib/Interfaces/SideEffectInterfaces.cpp b/mlir/lib/Interfaces/SideEffectInterfaces.cpp
index b6884f076a151..e1669ec64f725 100644
--- a/mlir/lib/Interfaces/SideEffectInterfaces.cpp
+++ b/mlir/lib/Interfaces/SideEffectInterfaces.cpp
@@ -47,7 +47,8 @@ static bool wouldOpBeTriviallyDeadImpl(Operation *rootOp) {
// If the operation has recursive effects, push all of the nested operations
// on to the stack to consider.
- bool hasRecursiveEffects = op->hasTrait<OpTrait::HasRecursiveSideEffects>();
+ bool hasRecursiveEffects =
+ op->hasTrait<OpTrait::HasRecursiveMemoryEffects>();
if (hasRecursiveEffects) {
for (Region &region : op->getRegions()) {
for (auto &block : region) {
diff --git a/mlir/lib/Transforms/ControlFlowSink.cpp b/mlir/lib/Transforms/ControlFlowSink.cpp
index 0fcdb9c51fa66..1000a79c7162a 100644
--- a/mlir/lib/Transforms/ControlFlowSink.cpp
+++ b/mlir/lib/Transforms/ControlFlowSink.cpp
@@ -44,7 +44,7 @@ void ControlFlowSink::runOnOperation() {
// Sink side-effect free operations.
numSunk = controlFlowSink(
regionsToSink, domInfo,
- [](Operation *op, Region *) { return isSideEffectFree(op); },
+ [](Operation *op, Region *) { return isMemoryEffectFree(op); },
[](Operation *op, Region *region) {
// Move the operation to the beginning of the region's entry block.
// This guarantees the preservation of SSA dominance of all of the
diff --git a/mlir/lib/Transforms/Utils/LoopInvariantCodeMotionUtils.cpp b/mlir/lib/Transforms/Utils/LoopInvariantCodeMotionUtils.cpp
index e7a6bd7b4b616..770255f251b7b 100644
--- a/mlir/lib/Transforms/Utils/LoopInvariantCodeMotionUtils.cpp
+++ b/mlir/lib/Transforms/Utils/LoopInvariantCodeMotionUtils.cpp
@@ -100,6 +100,8 @@ size_t mlir::moveLoopInvariantCode(LoopLikeOpInterface loopLike) {
[&](Value value, Region *) {
return loopLike.isDefinedOutsideOfLoop(value);
},
- [&](Operation *op, Region *) { return isSideEffectFree(op); },
+ [&](Operation *op, Region *) {
+ return isMemoryEffectFree(op) && isSpeculatable(op);
+ },
[&](Operation *op, Region *) { loopLike.moveOutOfLoop(op); });
}
diff --git a/mlir/lib/Transforms/Utils/SideEffectUtils.cpp b/mlir/lib/Transforms/Utils/SideEffectUtils.cpp
index 69165d6edd5f6..cb981be9357a5 100644
--- a/mlir/lib/Transforms/Utils/SideEffectUtils.cpp
+++ b/mlir/lib/Transforms/Utils/SideEffectUtils.cpp
@@ -12,15 +12,15 @@
using namespace mlir;
-bool mlir::isSideEffectFree(Operation *op) {
+bool mlir::isMemoryEffectFree(Operation *op) {
if (auto memInterface = dyn_cast<MemoryEffectOpInterface>(op)) {
// If the op has side-effects, it cannot be moved.
if (!memInterface.hasNoEffect())
return false;
// If the op does not have recursive side effects, then it can be moved.
- if (!op->hasTrait<OpTrait::HasRecursiveSideEffects>())
+ if (!op->hasTrait<OpTrait::HasRecursiveMemoryEffects>())
return true;
- } else if (!op->hasTrait<OpTrait::HasRecursiveSideEffects>()) {
+ } else if (!op->hasTrait<OpTrait::HasRecursiveMemoryEffects>()) {
// Otherwise, if the op does not implement the memory effect interface and
// it does not have recursive side effects, then it cannot be known that the
// op is moveable.
@@ -30,7 +30,29 @@ bool mlir::isSideEffectFree(Operation *op) {
// Recurse into the regions and ensure that all nested ops can also be moved.
for (Region &region : op->getRegions())
for (Operation &op : region.getOps())
- if (!isSideEffectFree(&op))
+ if (!isMemoryEffectFree(&op))
return false;
return true;
}
+
+bool mlir::isSpeculatable(Operation *op) {
+ auto conditionallySpeculatable = dyn_cast<ConditionallySpeculatable>(op);
+ if (!conditionallySpeculatable)
+ return false;
+
+ switch (conditionallySpeculatable.getSpeculatability()) {
+ case Speculation::RecursivelySpeculatable:
+ for (Region &region : op->getRegions()) {
+ for (Operation &op : region.getOps())
+ if (!isSpeculatable(&op))
+ return false;
+ }
+ return true;
+
+ case Speculation::Speculatable:
+ return true;
+
+ case Speculation::NotSpeculatable:
+ return false;
+ }
+}
diff --git a/mlir/test/Transforms/loop-invariant-code-motion.mlir b/mlir/test/Transforms/loop-invariant-code-motion.mlir
index 6ea6cdf8f2757..0b74c81b6de13 100644
--- a/mlir/test/Transforms/loop-invariant-code-motion.mlir
+++ b/mlir/test/Transforms/loop-invariant-code-motion.mlir
@@ -426,3 +426,80 @@ func.func @test_invariant_cycle_not_hoisted() {
} : () -> ()
return
}
+
+// -----
+
+// CHECK-LABEL: test_always_speculatable_op
+func.func @test_always_speculatable_op(%lb: index, %ub: index, %step: index) {
+ // CHECK: test.always_speculatable_op
+ // CHECK-NEXT: scf.for
+ scf.for %i = %lb to %ub step %step {
+ %val = "test.always_speculatable_op"() : () -> i32
+ }
+
+ return
+}
+
+// CHECK-LABEL: test_never_speculatable_op
+func.func @test_never_speculatable_op(%lb: index, %ub: index, %step: index) {
+ // CHECK: scf.for
+ // CHECK-NEXT: test.never_speculatable_op
+ scf.for %i = %lb to %ub step %step {
+ %val = "test.never_speculatable_op"() : () -> i32
+ }
+
+ return
+}
+
+// CHECK-LABEL: test_conditionally_speculatable_op_success
+func.func @test_conditionally_speculatable_op_success(%lb: index, %ub: index, %step: index) {
+ // CHECK: test.conditionally_speculatable_op
+ // CHECK-NEXT: scf.for
+ scf.for %i = %lb to %ub step %step {
+ %const_val = arith.constant 5 : i32
+ %val = "test.conditionally_speculatable_op"(%const_val) : (i32) -> i32
+ }
+
+ return
+}
+
+// CHECK-LABEL: test_conditionally_speculatable_op_failure
+func.func @test_conditionally_speculatable_op_failure(%lb: index, %ub: index, %step: index, %arg: i32) {
+ // CHECK: scf.for
+ // CHECK-NEXT: test.conditionally_speculatable_op
+ %const_5 = arith.constant 5 : i32
+ %non_const = arith.addi %arg, %const_5 : i32
+ scf.for %i = %lb to %ub step %step {
+ %val = "test.conditionally_speculatable_op"(%non_const) : (i32) -> i32
+ }
+
+ return
+}
+
+// CHECK-LABEL: test_recursively_speculatable_op_success
+func.func @test_recursively_speculatable_op_success(%lb: index, %ub: index, %step: index, %arg: i32) {
+ // CHECK: test.recursively_speculatable_op
+ // CHECK: scf.for
+ scf.for %i = %lb to %ub step %step {
+ %val = "test.recursively_speculatable_op"()({
+ %result = arith.addi %arg, %arg : i32
+ test.region_yield %result : i32
+ }) : () -> i32
+ }
+
+ return
+}
+
+// CHECK-LABEL: test_recursively_speculatable_op_failure
+func.func @test_recursively_speculatable_op_failure(%lb: index, %ub: index, %step: index, %arg: i32) {
+ // CHECK: scf.for
+ // CHECK-NEXT: test.recursively_speculatable_op
+ scf.for %i = %lb to %ub step %step {
+ %val = "test.recursively_speculatable_op"()({
+ %result = "test.never_speculatable_op"() : () -> i32
+ test.region_yield %result : i32
+ }) : () -> i32
+ }
+
+ return
+}
diff --git a/mlir/test/lib/Dialect/Test/TestOps.td b/mlir/test/lib/Dialect/Test/TestOps.td
index 85df4198f172a..1fca43ddff411 100644
--- a/mlir/test/lib/Dialect/Test/TestOps.td
+++ b/mlir/test/lib/Dialect/Test/TestOps.td
@@ -1183,7 +1183,7 @@ def TestOpWithRegionPattern : TEST_Op<"op_with_region_pattern"> {
let hasCanonicalizer = 1;
}
-def TestOpConstant : TEST_Op<"constant", [ConstantLike, NoSideEffect]> {
+def TestOpConstant : TEST_Op<"constant", [ConstantLike, NoMemoryEffect]> {
let arguments = (ins AnyAttr:$value);
let results = (outs AnyType);
@@ -1197,8 +1197,8 @@ def : Pat<(OpR $input1, (ConstantLikeMatcher I32Attr:$input2)),
(OpS:$unused $input1, $input2)>;
// Op for testing trivial removal via folding of op with inner ops and no uses.
-def TestOpWithRegionFoldNoSideEffect : TEST_Op<
- "op_with_region_fold_no_side_effect", [NoSideEffect]> {
+def TestOpWithRegionFoldNoMemoryEffect : TEST_Op<
+ "op_with_region_fold_no_side_effect", [NoMemoryEffect]> {
let regions = (region SizedRegion<1>:$region);
}
@@ -1238,28 +1238,28 @@ def TestCommutative2Op : TEST_Op<"op_commutative2", [Commutative]> {
def TestIdempotentTraitOp
: TEST_Op<"op_idempotent_trait",
- [SameOperandsAndResultType, NoSideEffect, Idempotent]> {
+ [SameOperandsAndResultType, NoMemoryEffect, Idempotent]> {
let arguments = (ins I32:$op1);
let results = (outs I32);
}
def TestIdempotentTraitBinaryOp
: TEST_Op<"op_idempotent_trait_binary",
- [SameOperandsAndResultType, NoSideEffect, Idempotent]> {
+ [SameOperandsAndResultType, NoMemoryEffect, Idempotent]> {
let arguments = (ins I32:$op1, I32:$op2);
let results = (outs I32);
}
def TestInvolutionTraitNoOperationFolderOp
: TEST_Op<"op_involution_trait_no_operation_fold",
- [SameOperandsAndResultType, NoSideEffect, Involution]> {
+ [SameOperandsAndResultType, NoMemoryEffect, Involution]> {
let arguments = (ins I32:$op1);
let results = (outs I32);
}
def TestInvolutionTraitFailingOperationFolderOp
: TEST_Op<"op_involution_trait_failing_operation_fold",
- [SameOperandsAndResultType, NoSideEffect, Involution]> {
+ [SameOperandsAndResultType, NoMemoryEffect, Involution]> {
let arguments = (ins I32:$op1);
let results = (outs I32);
let hasFolder = 1;
@@ -1267,7 +1267,7 @@ def TestInvolutionTraitFailingOperationFolderOp
def TestInvolutionTraitSuccesfulOperationFolderOp
: TEST_Op<"op_involution_trait_succesful_operation_fold",
- [SameOperandsAndResultType, NoSideEffect, Involution]> {
+ [SameOperandsAndResultType, NoMemoryEffect, Involution]> {
let arguments = (ins I32:$op1);
let results = (outs I32);
let hasFolder = 1;
@@ -2543,7 +2543,7 @@ def CopyOp : TEST_Op<"copy", [CopyOpInterface]> {
//===----------------------------------------------------------------------===//
def RegionYieldOp : TEST_Op<"region_yield",
- [NoSideEffect, ReturnLike, Terminator]> {
+ [Pure, ReturnLike, Terminator]> {
let description = [{
This operation is used in a region and yields the corresponding type for
that operation.
@@ -2599,7 +2599,7 @@ def TensorBasedOp : TEST_Op<"tensor_based", []> {
//===----------------------------------------------------------------------===//
def RegionIfYieldOp : TEST_Op<"region_if_yield",
- [NoSideEffect, ReturnLike, Terminator]> {
+ [NoMemoryEffect, ReturnLike, Terminator]> {
let arguments = (ins Variadic<AnyType>:$results);
let assemblyFormat = [{
$results `:` type($results) attr-dict
@@ -2610,7 +2610,7 @@ def RegionIfOp : TEST_Op<"region_if",
[DeclareOpInterfaceMethods<RegionBranchOpInterface,
["getRegionInvocationBounds"]>,
SingleBlockImplicitTerminator<"RegionIfYieldOp">,
- RecursiveSideEffects]> {
+ RecursiveMemoryEffects]> {
let description =[{
Represents an abstract if-then-else-join pattern. In this context, the then
and else regions jump to the join region, which finally returns to its
@@ -2641,7 +2641,7 @@ def RegionIfOp : TEST_Op<"region_if",
def AnyCondOp : TEST_Op<"any_cond",
[DeclareOpInterfaceMethods<RegionBranchOpInterface,
["getRegionInvocationBounds"]>,
- RecursiveSideEffects]> {
+ RecursiveMemoryEffects]> {
let results = (outs Variadic<AnyType>:$results);
let regions = (region AnyRegion:$region);
}
@@ -2960,8 +2960,8 @@ def TestVerifiersOp : TEST_Op<"verifiers",
// Test loop op with a graph region.
def TestGraphLoopOp : TEST_Op<"graph_loop",
- [LoopLikeOpInterface, NoSideEffect,
- RecursiveSideEffects, SingleBlock,
+ [LoopLikeOpInterface, NoMemoryEffect,
+ RecursivelySpeculatable, SingleBlock,
RegionKindInterface, HasOnlyGraphRegion]> {
let arguments = (ins Variadic<AnyType>:$args);
let results = (outs Variadic<AnyType>:$rets);
@@ -2981,7 +2981,7 @@ def TestGraphLoopOp : TEST_Op<"graph_loop",
//===----------------------------------------------------------------------===//
def TestWithBoundsOp : TEST_Op<"with_bounds",
[DeclareOpInterfaceMethods<InferIntRangeInterface>,
- NoSideEffect]> {
+ NoMemoryEffect]> {
let arguments = (ins IndexAttr:$umin,
IndexAttr:$umax,
IndexAttr:$smin,
@@ -3005,7 +3005,7 @@ def TestWithBoundsRegionOp : TEST_Op<"with_bounds_region",
def TestIncrementOp : TEST_Op<"increment",
[DeclareOpInterfaceMethods<InferIntRangeInterface>,
- NoSideEffect]> {
+ NoMemoryEffect]> {
let arguments = (ins Index:$value);
let results = (outs Index:$result);
@@ -3023,4 +3023,65 @@ def TestReflectBoundsOp : TEST_Op<"reflect_bounds",
let assemblyFormat = "attr-dict $value";
}
+
+//===----------------------------------------------------------------------===//
+// Test ConditionallySpeculatable
+//===----------------------------------------------------------------------===//
+
+def ConditionallySpeculatableOp : TEST_Op<"conditionally_speculatable_op",
+ [ConditionallySpeculatable, NoMemoryEffect]> {
+ let description = [{
+ Op used to test conditional speculation. This op can be speculatively
+ executed if the input to it is an `arith.constant`.
+ }];
+
+ let arguments = (ins I32:$input);
+ let results = (outs I32:$result);
+
+ let extraClassDeclaration = [{
+ ::mlir::Speculation::Speculatability getSpeculatability();
+ }];
+
+ let extraClassDefinition = [{
+ ::mlir::Speculation::Speculatability
+ ConditionallySpeculatableOp::getSpeculatability() {
+ Operation* definingOp = getInput().getDefiningOp();
+ return definingOp && isa<::mlir::arith::ConstantOp>(definingOp) ?
+ ::mlir::Speculation::Speculatable : ::mlir::Speculation::NotSpeculatable;
+ }
+ }];
+}
+
+def PureOp : TEST_Op<"always_speculatable_op", [Pure]> {
+ let description = [{
+ Op used to test conditional speculation. This op can always be
+ speculatively executed.
+ }];
+ let results = (outs I32:$result);
+}
+
+def NeverSpeculatableOp : TEST_Op<"never_speculatable_op", [ConditionallySpeculatable]> {
+ let description = [{
+ Op used to test conditional speculation. This op can never be
+ speculatively executed.
+ }];
+ let results = (outs I32:$result);
+
+ let extraClassDeclaration = [{
+ ::mlir::Speculation::Speculatability getSpeculatability() {
+ return ::mlir::Speculation::NotSpeculatable;
+ }
+ }];
+}
+
+def RecursivelySpeculatableOp : TEST_Op<"recursively_speculatable_op", [
+ RecursivelySpeculatable, RecursiveMemoryEffects]> {
+ let description = [{
+ Op used to test conditional speculation. This op can be speculatively
+ executed only if all the ops in the attached region can be.
+ }];
+ let results = (outs I32:$result);
+ let regions = (region SizedRegion<1>:$body);
+}
+
#endif // TEST_OPS
diff --git a/mlir/test/lib/Dialect/Transform/TestTransformDialectExtension.td b/mlir/test/lib/Dialect/Transform/TestTransformDialectExtension.td
index 84e3868d771c2..aaec014ac660a 100644
--- a/mlir/test/lib/Dialect/Transform/TestTransformDialectExtension.td
+++ b/mlir/test/lib/Dialect/Transform/TestTransformDialectExtension.td
@@ -80,7 +80,7 @@ def TestPrintRemarkAtOperandOp
def TestAddTestExtensionOp
: Op<Transform_Dialect, "test_add_test_extension",
[DeclareOpInterfaceMethods<TransformOpInterface>,
- NoSideEffect]> {
+ NoMemoryEffect]> {
let arguments = (ins StrAttr:$message);
let assemblyFormat = "$message attr-dict";
let cppNamespace = "::mlir::test";
@@ -108,7 +108,7 @@ def TestRemapOperandPayloadToSelfOp
def TestRemoveTestExtensionOp
: Op<Transform_Dialect, "test_remove_test_extension",
[DeclareOpInterfaceMethods<TransformOpInterface>,
- NoSideEffect]> {
+ NoMemoryEffect]> {
let assemblyFormat = "attr-dict";
let cppNamespace = "::mlir::test";
}
diff --git a/mlir/test/mlir-tblgen/gen-dialect-doc.td b/mlir/test/mlir-tblgen/gen-dialect-doc.td
index 1ca80960fea39..6c34db8df1463 100644
--- a/mlir/test/mlir-tblgen/gen-dialect-doc.td
+++ b/mlir/test/mlir-tblgen/gen-dialect-doc.td
@@ -13,7 +13,7 @@ def Test_Dialect : Dialect {
}];
let cppNamespace = "NS";
}
-def AOp : Op<Test_Dialect, "a", [NoSideEffect, SingleBlockImplicitTerminator<"YieldOp">]>;
+def AOp : Op<Test_Dialect, "a", [NoMemoryEffect, SingleBlockImplicitTerminator<"YieldOp">]>;
def TestAttr : DialectAttr<Test_Dialect, CPred<"true">> {
let summary = "attribute summary";
@@ -31,7 +31,7 @@ def TestType : DialectType<Test_Dialect, CPred<"true">> {
// CHECK-NOT: [TOC]
// CHECK: Traits: SingleBlockImplicitTerminator<YieldOp>
-// CHECK: Interfaces: NoSideEffect (MemoryEffectOpInterface)
+// CHECK: Interfaces: NoMemoryEffect (MemoryEffectOpInterface)
// CHECK: Effects: MemoryEffects::Effect{}
// CHECK: ## Attribute constraint definition
diff --git a/mlir/test/mlir-tblgen/llvm-intrinsics.td b/mlir/test/mlir-tblgen/llvm-intrinsics.td
index 1019e36e3b2ee..b3fbe289cb346 100644
--- a/mlir/test/mlir-tblgen/llvm-intrinsics.td
+++ b/mlir/test/mlir-tblgen/llvm-intrinsics.td
@@ -21,7 +21,7 @@
// match the result type.
// CHECK: [1]
// It has no side effects.
-// CHECK: [NoSideEffect]
+// CHECK: [NoMemoryEffect]
// It has a result.
// CHECK: 1,
// It does not require an access group.
@@ -42,7 +42,7 @@
// GROUPS-LABEL: def LLVM_ptrmask
// GROUPS: LLVM_IntrOp<"ptrmask
// It has no side effects.
-// GROUPS: [NoSideEffect]
+// GROUPS: [NoMemoryEffect]
// It has a result.
// GROUPS: 1,
// It requires generation of an access group LLVM metadata.
@@ -64,7 +64,7 @@
// ALIAS-LABEL: def LLVM_ptrmask
// ALIAS: LLVM_IntrOp<"ptrmask
// It has no side effects.
-// ALIAS: [NoSideEffect]
+// ALIAS: [NoMemoryEffect]
// It has a result.
// ALIAS: 1,
// It does not require an access group.
diff --git a/mlir/tools/mlir-tblgen/LLVMIRIntrinsicGen.cpp b/mlir/tools/mlir-tblgen/LLVMIRIntrinsicGen.cpp
index 646527d1bdcc8..d3f81554f4248 100644
--- a/mlir/tools/mlir-tblgen/LLVMIRIntrinsicGen.cpp
+++ b/mlir/tools/mlir-tblgen/LLVMIRIntrinsicGen.cpp
@@ -211,7 +211,7 @@ static bool emitIntrinsic(const llvm::Record &record, llvm::raw_ostream &os) {
if (intr.isCommutative())
traits.push_back("Commutative");
if (!intr.hasSideEffects())
- traits.push_back("NoSideEffect");
+ traits.push_back("NoMemoryEffect");
// Prepare strings for operands.
llvm::SmallVector<llvm::StringRef, 8> operands(intr.getNumOperands(),