[Mlir-commits] [mlir] 2ee5586 - [mlir][vector] Make the in_bounds attribute mandatory (#97049)
llvmlistbot at llvm.org
Tue Jul 16 08:49:55 PDT 2024
Author: Andrzej Warzyński
Date: 2024-07-16T16:49:52+01:00
New Revision: 2ee5586ac7d8424b51790b143dbc6e2105bf99bc
URL: https://github.com/llvm/llvm-project/commit/2ee5586ac7d8424b51790b143dbc6e2105bf99bc
DIFF: https://github.com/llvm/llvm-project/commit/2ee5586ac7d8424b51790b143dbc6e2105bf99bc.diff
LOG: [mlir][vector] Make the in_bounds attribute mandatory (#97049)
At the moment, the in_bounds attribute has two confusing/contradictory
properties:
1. It is both optional _and_ has an effective default value.
2. The default value is "out-of-bounds" for non-broadcast dims and
"in-bounds" for broadcast dims.
(see the `isDimInBounds` vector interface method for an example of this
"default" behaviour [1]).
This PR aims to clarify the logic surrounding the `in_bounds` attribute
by:
* making the attribute mandatory (i.e. it is always present),
* always setting the default value to "out of bounds" (that's
consistent with the current behaviour for the most common cases); see
the sketch below.
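For illustration, here is a minimal sketch under the new semantics
(operand names such as `%A`, `%i`, `%j` and `%pad` are made up); the two
forms below are equivalent, and the printer uses the shorter one whenever
every dimension is out-of-bounds:
```mlir
// Explicit form: both dimensions carry the default value, "out of bounds".
%0 = vector.transfer_read %A[%i, %j], %pad {in_bounds = [false, false]}
    : memref<?x?xf32>, vector<4x8xf32>

// Equivalent form: the parser fills in the default, and the printer omits
// the attribute when all entries are `false`.
%1 = vector.transfer_read %A[%i, %j], %pad
    : memref<?x?xf32>, vector<4x8xf32>
```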
#### Broadcast dimensions in tests
As per [2], broadcast dimensions require the corresponding
`in_bounds` entry to be `true`:
```
vector.transfer_read op requires broadcast dimensions to be in-bounds
```
The changes in this PR mean that we can no longer rely on the
default value in cases like the following (dim 0 is a broadcast dim):
```mlir
%read = vector.transfer_read %A[%base1, %base2], %f, %mask
{permutation_map = affine_map<(d0, d1) -> (0, d1)>} :
memref<?x?xf32>, vector<4x9xf32>
```
Instead, the broadcast dimension has to be explicitly marked as "in
bounds":
```mlir
%read = vector.transfer_read %A[%base1, %base2], %f, %mask
{in_bounds = [true, false], permutation_map = affine_map<(d0, d1) -> (0, d1)>} :
memref<?x?xf32>, vector<4x9xf32>
```
All tests with broadcast dims are updated accordingly.
#### Changes in "SuperVectorize.cpp" and "Vectorization.cpp"
The following patterns in "Vectorization.cpp" are updated to explicitly
set the `in_bounds` attribute to `false`:
* `LinalgCopyVTRForwardingPattern` and `LinalgCopyVTWForwardingPattern`
Also, `vectorizeAffineLoad` (from "SuperVectorize.cpp") and
`vectorizeAsLinalgGeneric` (from "Vectorization.cpp") are updated to
make sure that the xfer Ops created by these hooks mark the dimensions
corresponding to broadcast dims as "in bounds". Otherwise, the Op
verifier would complain.
Note that there is no mechanism to verify whether the corresponding
memory accesses are indeed in bounds. Still, this is consistent with the
current behaviour, where the broadcast dim would be implicitly assumed
to be "in bounds".
[1]
https://github.com/llvm/llvm-project/blob/4145ad2bac4bb99d5034d60c74bb2789f6c6e802/mlir/include/mlir/Interfaces/VectorInterfaces.td#L243-L246
[2]
https://mlir.llvm.org/docs/Dialects/Vector/#vectortransfer_read-vectortransferreadop
Added:
Modified:
mlir/include/mlir/Dialect/Vector/IR/VectorOps.td
mlir/include/mlir/IR/AffineMap.h
mlir/include/mlir/Interfaces/VectorInterfaces.td
mlir/lib/Dialect/Affine/Transforms/SuperVectorize.cpp
mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp
mlir/lib/Dialect/Vector/IR/VectorOps.cpp
mlir/lib/Dialect/Vector/Transforms/LowerVectorMask.cpp
mlir/lib/Dialect/Vector/Transforms/LowerVectorTransfer.cpp
mlir/lib/IR/AffineMap.cpp
mlir/test/Conversion/VectorToSCF/vector-to-scf.mlir
mlir/test/Dialect/Affine/SuperVectorize/vectorize_1d.mlir
mlir/test/Dialect/Affine/SuperVectorize/vectorize_2d.mlir
mlir/test/Dialect/Affine/SuperVectorize/vectorize_affine_apply.mlir
mlir/test/Dialect/Linalg/hoisting.mlir
mlir/test/Dialect/Linalg/vectorization.mlir
mlir/test/Dialect/Vector/invalid.mlir
mlir/test/Dialect/Vector/ops.mlir
mlir/test/Dialect/Vector/vector-transfer-collapse-inner-most-dims.mlir
mlir/test/Dialect/Vector/vector-transfer-permutation-lowering.mlir
mlir/test/Dialect/Vector/vector-transfer-to-vector-load-store.mlir
mlir/test/Dialect/Vector/vector-transfer-unroll.mlir
mlir/test/Integration/Dialect/Vector/CPU/transfer-read-1d.mlir
mlir/test/Integration/Dialect/Vector/CPU/transfer-read-2d.mlir
mlir/test/Integration/Dialect/Vector/CPU/transfer-read-3d.mlir
mlir/test/python/dialects/vector.py
Removed:
################################################################################
diff --git a/mlir/include/mlir/Dialect/Vector/IR/VectorOps.td b/mlir/include/mlir/Dialect/Vector/IR/VectorOps.td
index 44efb7f8bd3d4..2019eb5a9fd7f 100644
--- a/mlir/include/mlir/Dialect/Vector/IR/VectorOps.td
+++ b/mlir/include/mlir/Dialect/Vector/IR/VectorOps.td
@@ -1363,7 +1363,7 @@ def Vector_TransferReadOp :
AffineMapAttr:$permutation_map,
AnyType:$padding,
Optional<VectorOf<[I1]>>:$mask,
- OptionalAttr<BoolArrayAttr>:$in_bounds)>,
+ BoolArrayAttr:$in_bounds)>,
Results<(outs AnyVectorOfAnyRank:$vector)> {
let summary = "Reads a supervector from memory into an SSA vector value.";
@@ -1401,16 +1401,19 @@ def Vector_TransferReadOp :
permutation or broadcasting. Elements whose corresponding mask element is
`0` are masked out and replaced with `padding`.
- An optional boolean array attribute `in_bounds` specifies for every vector
- dimension if the transfer is guaranteed to be within the source bounds. If
- specified, the `in_bounds` array length has to be equal to the vector rank.
- If set to "false", accesses (including the starting point) may run
+ For every vector dimension, the boolean array attribute `in_bounds`
+ specifies if the transfer is guaranteed to be within the source bounds. If
+ set to "false", accesses (including the starting point) may run
out-of-bounds along the respective vector dimension as the index increases.
- Broadcast dimensions must always be in-bounds. In absence of the attribute,
- accesses along all vector dimensions (except for broadcasts) may run
- out-of-bounds. A `vector.transfer_read` can be lowered to a simple load if
- all dimensions are specified to be within bounds and no `mask` was
- specified. Note that non-vector dimensions *must* always be in-bounds.
+ Non-vector and broadcast dimensions *must* always be in-bounds. The
+ `in_bounds` array length has to be equal to the vector rank. This attribute
+ has a default value: `false` (i.e. "out-of-bounds"). When skipped in the
+ textual IR, the default value is assumed. Similarly, the OP printer will
+ omit this attribute when all dimensions are out-of-bounds (i.e. the default
+ value is used).
+
+ A `vector.transfer_read` can be lowered to a simple load if all dimensions
+ are specified to be within bounds and no `mask` was specified.
This operation is called 'read' by opposition to 'load' because the
super-vector granularity is generally not representable with a single
@@ -1607,7 +1610,7 @@ def Vector_TransferWriteOp :
Variadic<Index>:$indices,
AffineMapAttr:$permutation_map,
Optional<VectorOf<[I1]>>:$mask,
- OptionalAttr<BoolArrayAttr>:$in_bounds)>,
+ BoolArrayAttr:$in_bounds)>,
Results<(outs Optional<AnyRankedTensor>:$result)> {
let summary = "The vector.transfer_write op writes a supervector to memory.";
@@ -1643,15 +1646,19 @@ def Vector_TransferWriteOp :
any permutation. Elements whose corresponding mask element is `0` are
masked out.
- An optional boolean array attribute `in_bounds` specifies for every vector
- dimension if the transfer is guaranteed to be within the source bounds. If
- specified, the `in_bounds` array length has to be equal to the vector rank.
- If set to "false", accesses (including the starting point) may run
+ For every vector dimension, the boolean array attribute `in_bounds`
+ specifies if the transfer is guaranteed to be within the source bounds. If
+ set to "false", accesses (including the starting point) may run
out-of-bounds along the respective vector dimension as the index increases.
- In absence of the attribute, accesses along all vector dimensions may run
- out-of-bounds. A `vector.transfer_write` can be lowered to a simple store if
- all dimensions are specified to be within bounds and no `mask` was
- specified. Note that non-vector dimensions *must* always be in-bounds.
+ Non-vector and broadcast dimensions *must* always be in-bounds. The
+ `in_bounds` array length has to be equal to the vector rank. This attribute
+ has a default value: `false` (i.e. "out-of-bounds"). When skipped in the
+ textual IR, the default value is assumed. Similarly, the OP printer will
+ omit this attribute when all dimensions are out-of-bounds (i.e. the default
+ value is used).
+
+ A `vector.transfer_write` can be lowered to a simple store if all
+ dimensions are specified to be within bounds and no `mask` was specified.
This operation is called 'write' by opposition to 'store' because the
super-vector granularity is generally not representable with a single
diff --git a/mlir/include/mlir/IR/AffineMap.h b/mlir/include/mlir/IR/AffineMap.h
index 264c1c8308e78..676da6d176497 100644
--- a/mlir/include/mlir/IR/AffineMap.h
+++ b/mlir/include/mlir/IR/AffineMap.h
@@ -146,6 +146,14 @@ class AffineMap {
/// affine map (d0, ..., dn) -> (dp, ..., dn) on the most minor dimensions.
bool isMinorIdentity() const;
+ /// Returns the list of broadcast dimensions (i.e. dims indicated by value 0
+ /// in the result).
+ /// Ex:
+ /// * (d0, d1, d2) -> (0, d1) gives [0]
+ /// * (d0, d1, d2) -> (d2, d1) gives []
+ /// * (d0, d1, d2, d4) -> (d0, 0, d1, 0) gives [1, 3]
+ SmallVector<unsigned> getBroadcastDims() const;
+
/// Returns true if this affine map is a minor identity up to broadcasted
/// dimensions which are indicated by value 0 in the result. If
/// `broadcastedDims` is not null, it will be populated with the indices of
diff --git a/mlir/include/mlir/Interfaces/VectorInterfaces.td b/mlir/include/mlir/Interfaces/VectorInterfaces.td
index 781d6d3e3f813..7ea62c2ae2ab1 100644
--- a/mlir/include/mlir/Interfaces/VectorInterfaces.td
+++ b/mlir/include/mlir/Interfaces/VectorInterfaces.td
@@ -98,7 +98,7 @@ def VectorTransferOpInterface : OpInterface<"VectorTransferOpInterface"> {
dimension whether it is in-bounds or not. (Broadcast dimensions are
always in-bounds).
}],
- /*retTy=*/"::std::optional<::mlir::ArrayAttr>",
+ /*retTy=*/"::mlir::ArrayAttr",
/*methodName=*/"getInBounds",
/*args=*/(ins)
>,
@@ -240,9 +240,7 @@ def VectorTransferOpInterface : OpInterface<"VectorTransferOpInterface"> {
bool isDimInBounds(unsigned dim) {
if ($_op.isBroadcastDim(dim))
return true;
- if (!$_op.getInBounds())
- return false;
- auto inBounds = ::llvm::cast<::mlir::ArrayAttr>(*$_op.getInBounds());
+ auto inBounds = $_op.getInBounds();
return ::llvm::cast<::mlir::BoolAttr>(inBounds[dim]).getValue();
}
diff --git a/mlir/lib/Dialect/Affine/Transforms/SuperVectorize.cpp b/mlir/lib/Dialect/Affine/Transforms/SuperVectorize.cpp
index 71e9648a5e00f..6bb8dfecba0ec 100644
--- a/mlir/lib/Dialect/Affine/Transforms/SuperVectorize.cpp
+++ b/mlir/lib/Dialect/Affine/Transforms/SuperVectorize.cpp
@@ -1223,8 +1223,19 @@ static Operation *vectorizeAffineLoad(AffineLoadOp loadOp,
LLVM_DEBUG(dbgs() << "\n[early-vect]+++++ permutationMap: ");
LLVM_DEBUG(permutationMap.print(dbgs()));
+ // Make sure that the in_bounds attribute corresponding to a broadcast dim
+ // is set to `true` - that's required by the xfer Op.
+ // FIXME: We're not verifying whether the corresponding access is in bounds.
+ // TODO: Use masking instead.
+ SmallVector<unsigned> broadcastedDims = permutationMap.getBroadcastDims();
+ SmallVector<bool> inBounds(vectorType.getRank(), false);
+
+ for (auto idx : broadcastedDims)
+ inBounds[idx] = true;
+
auto transfer = state.builder.create<vector::TransferReadOp>(
- loadOp.getLoc(), vectorType, loadOp.getMemRef(), indices, permutationMap);
+ loadOp.getLoc(), vectorType, loadOp.getMemRef(), indices, permutationMap,
+ inBounds);
// Register replacement for future uses in the scope.
state.registerOpVectorReplacement(loadOp, transfer);
diff --git a/mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp b/mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp
index a4c0508d0d8fa..68ee915cca3f4 100644
--- a/mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp
+++ b/mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp
@@ -1343,8 +1343,17 @@ vectorizeAsLinalgGeneric(RewriterBase &rewriter, VectorizationState &state,
SmallVector<Value> indices(linalgOp.getShape(opOperand).size(), zero);
+ // Make sure that the in_bounds attribute corresponding to a broadcast dim
+ // is `true`
+ SmallVector<unsigned> broadcastedDims = readMap.getBroadcastDims();
+ SmallVector<bool> inBounds(readType.getRank(), false);
+
+ for (auto idx : broadcastedDims)
+ inBounds[idx] = true;
+
Operation *read = rewriter.create<vector::TransferReadOp>(
- loc, readType, opOperand->get(), indices, readMap);
+ loc, readType, opOperand->get(), indices, readMap,
+ ArrayRef<bool>(inBounds));
read = state.maskOperation(rewriter, read, linalgOp, maskingMap);
Value readValue = read->getResult(0);
@@ -2681,11 +2690,12 @@ LogicalResult LinalgCopyVTRForwardingPattern::matchAndRewrite(
// The `masked` attribute is only valid on this padded buffer.
// When forwarding to vector.transfer_read, the attribute must be reset
// conservatively.
+ auto vectorType = xferOp.getVectorType();
Value res = rewriter.create<vector::TransferReadOp>(
- xferOp.getLoc(), xferOp.getVectorType(), in, xferOp.getIndices(),
+ xferOp.getLoc(), vectorType, in, xferOp.getIndices(),
xferOp.getPermutationMapAttr(), xferOp.getPadding(), xferOp.getMask(),
- // in_bounds is explicitly reset
- /*inBoundsAttr=*/ArrayAttr());
+ rewriter.getBoolArrayAttr(
+ SmallVector<bool>(vectorType.getRank(), false)));
if (maybeFillOp)
rewriter.eraseOp(maybeFillOp);
@@ -2739,11 +2749,12 @@ LogicalResult LinalgCopyVTWForwardingPattern::matchAndRewrite(
// The `masked` attribute is only valid on this padded buffer.
// When forwarding to vector.transfer_write, the attribute must be reset
// conservatively.
+ auto vector = xferOp.getVector();
rewriter.create<vector::TransferWriteOp>(
- xferOp.getLoc(), xferOp.getVector(), out, xferOp.getIndices(),
+ xferOp.getLoc(), vector, out, xferOp.getIndices(),
xferOp.getPermutationMapAttr(), xferOp.getMask(),
- // in_bounds is explicitly reset
- /*inBoundsAttr=*/ArrayAttr());
+ rewriter.getBoolArrayAttr(
+ SmallVector<bool>(vector.getType().getRank(), false)));
rewriter.eraseOp(copyOp);
rewriter.eraseOp(xferOp);
diff --git a/mlir/lib/Dialect/Vector/IR/VectorOps.cpp b/mlir/lib/Dialect/Vector/IR/VectorOps.cpp
index 55bace2e35f44..df3a59ed80ad4 100644
--- a/mlir/lib/Dialect/Vector/IR/VectorOps.cpp
+++ b/mlir/lib/Dialect/Vector/IR/VectorOps.cpp
@@ -3818,7 +3818,8 @@ void TransferReadOp::build(OpBuilder &builder, OperationState &result,
auto permutationMapAttr = AffineMapAttr::get(permutationMap);
auto inBoundsAttr = (inBounds && !inBounds.value().empty())
? builder.getBoolArrayAttr(inBounds.value())
- : ArrayAttr();
+ : builder.getBoolArrayAttr(
+ SmallVector<bool>(vectorType.getRank(), false));
build(builder, result, vectorType, source, indices, permutationMapAttr,
inBoundsAttr);
}
@@ -3833,7 +3834,8 @@ void TransferReadOp::build(OpBuilder &builder, OperationState &result,
auto permutationMapAttr = AffineMapAttr::get(permutationMap);
auto inBoundsAttr = (inBounds && !inBounds.value().empty())
? builder.getBoolArrayAttr(inBounds.value())
- : ArrayAttr();
+ : builder.getBoolArrayAttr(
+ SmallVector<bool>(vectorType.getRank(), false));
build(builder, result, vectorType, source, indices, permutationMapAttr,
padding,
/*mask=*/Value(), inBoundsAttr);
@@ -3951,17 +3953,15 @@ verifyTransferOp(VectorTransferOpInterface op, ShapedType shapedType,
<< inferredMaskType << ") and mask operand type (" << maskType
<< ") don't match";
- if (inBounds) {
- if (permutationMap.getNumResults() != static_cast<int64_t>(inBounds.size()))
- return op->emitOpError("expects the optional in_bounds attr of same rank "
- "as permutation_map results: ")
- << AffineMapAttr::get(permutationMap)
- << " vs inBounds of size: " << inBounds.size();
- for (unsigned int i = 0; i < permutationMap.getNumResults(); ++i)
- if (isa<AffineConstantExpr>(permutationMap.getResult(i)) &&
- !llvm::cast<BoolAttr>(inBounds.getValue()[i]).getValue())
- return op->emitOpError("requires broadcast dimensions to be in-bounds");
- }
+ if (permutationMap.getNumResults() != static_cast<int64_t>(inBounds.size()))
+ return op->emitOpError("expects the in_bounds attr of same rank "
+ "as permutation_map results: ")
+ << AffineMapAttr::get(permutationMap)
+ << " vs inBounds of size: " << inBounds.size();
+ for (unsigned int i = 0, e = permutationMap.getNumResults(); i < e; ++i)
+ if (isa<AffineConstantExpr>(permutationMap.getResult(i)) &&
+ !llvm::cast<BoolAttr>(inBounds.getValue()[i]).getValue())
+ return op->emitOpError("requires broadcast dimensions to be in-bounds");
return success();
}
@@ -4037,6 +4037,13 @@ ParseResult TransferReadOp::parse(OpAsmParser &parser, OperationState &result) {
} else {
permMap = llvm::cast<AffineMapAttr>(permMapAttr).getValue();
}
+ auto inBoundsAttrName = TransferReadOp::getInBoundsAttrName(result.name);
+ Attribute inBoundsAttr = result.attributes.get(inBoundsAttrName);
+ if (!inBoundsAttr) {
+ result.addAttribute(inBoundsAttrName,
+ builder.getBoolArrayAttr(
+ SmallVector<bool>(permMap.getNumResults(), false)));
+ }
if (parser.resolveOperand(sourceInfo, shapedType, result.operands) ||
parser.resolveOperands(indexInfo, indexType, result.operands) ||
parser.resolveOperand(paddingInfo, shapedType.getElementType(),
@@ -4081,8 +4088,7 @@ LogicalResult TransferReadOp::verify() {
if (failed(verifyTransferOp(cast<VectorTransferOpInterface>(getOperation()),
shapedType, vectorType, maskType,
- inferredMaskType, permutationMap,
- getInBounds() ? *getInBounds() : ArrayAttr())))
+ inferredMaskType, permutationMap, getInBounds())))
return failure();
if (auto sourceVectorElementType =
@@ -4355,9 +4361,11 @@ void TransferWriteOp::build(OpBuilder &builder, OperationState &result,
AffineMap permutationMap,
std::optional<ArrayRef<bool>> inBounds) {
auto permutationMapAttr = AffineMapAttr::get(permutationMap);
- auto inBoundsAttr = (inBounds && !inBounds.value().empty())
- ? builder.getBoolArrayAttr(inBounds.value())
- : ArrayAttr();
+ auto inBoundsAttr =
+ (inBounds && !inBounds.value().empty())
+ ? builder.getBoolArrayAttr(inBounds.value())
+ : builder.getBoolArrayAttr(SmallVector<bool>(
+ llvm::cast<VectorType>(vector.getType()).getRank(), false));
build(builder, result, vector, dest, indices, permutationMapAttr,
/*mask=*/Value(), inBoundsAttr);
}
@@ -4410,6 +4418,13 @@ ParseResult TransferWriteOp::parse(OpAsmParser &parser,
} else {
permMap = llvm::cast<AffineMapAttr>(permMapAttr).getValue();
}
+ auto inBoundsAttrName = TransferWriteOp::getInBoundsAttrName(result.name);
+ Attribute inBoundsAttr = result.attributes.get(inBoundsAttrName);
+ if (!inBoundsAttr) {
+ result.addAttribute(inBoundsAttrName,
+ builder.getBoolArrayAttr(
+ SmallVector<bool>(permMap.getNumResults(), false)));
+ }
if (parser.resolveOperand(vectorInfo, vectorType, result.operands) ||
parser.resolveOperand(sourceInfo, shapedType, result.operands) ||
parser.resolveOperands(indexInfo, indexType, result.operands))
@@ -4463,8 +4478,7 @@ LogicalResult TransferWriteOp::verify() {
if (failed(verifyTransferOp(cast<VectorTransferOpInterface>(getOperation()),
shapedType, vectorType, maskType,
- inferredMaskType, permutationMap,
- getInBounds() ? *getInBounds() : ArrayAttr())))
+ inferredMaskType, permutationMap, getInBounds())))
return failure();
return verifyPermutationMap(permutationMap,
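For reference, a sketch of IR that this verifier rejects (names made up):
with the attribute now mandatory, a broadcast dimension can no longer fall
back to an implicit default and must be spelled out as `true`:
```mlir
// error: 'vector.transfer_read' op requires broadcast dimensions to be in-bounds
%r = vector.transfer_read %A[%i, %j], %pad
    {in_bounds = [false, false],
     permutation_map = affine_map<(d0, d1) -> (0, d1)>}
    : memref<?x?xf32>, vector<4x9xf32>
```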
diff --git a/mlir/lib/Dialect/Vector/Transforms/LowerVectorMask.cpp b/mlir/lib/Dialect/Vector/Transforms/LowerVectorMask.cpp
index f53bb5157eb37..dfeb7bc53adad 100644
--- a/mlir/lib/Dialect/Vector/Transforms/LowerVectorMask.cpp
+++ b/mlir/lib/Dialect/Vector/Transforms/LowerVectorMask.cpp
@@ -224,7 +224,7 @@ struct MaskedTransferReadOpPattern
rewriter.replaceOpWithNewOp<TransferReadOp>(
maskingOp.getOperation(), readOp.getVectorType(), readOp.getSource(),
readOp.getIndices(), readOp.getPermutationMap(), readOp.getPadding(),
- maskingOp.getMask(), readOp.getInBounds().value_or(ArrayAttr()));
+ maskingOp.getMask(), readOp.getInBounds());
return success();
}
};
@@ -246,7 +246,7 @@ struct MaskedTransferWriteOpPattern
rewriter.replaceOpWithNewOp<TransferWriteOp>(
maskingOp.getOperation(), resultType, writeOp.getVector(),
writeOp.getSource(), writeOp.getIndices(), writeOp.getPermutationMap(),
- maskingOp.getMask(), writeOp.getInBounds().value_or(ArrayAttr()));
+ maskingOp.getMask(), writeOp.getInBounds());
return success();
}
};
diff --git a/mlir/lib/Dialect/Vector/Transforms/LowerVectorTransfer.cpp b/mlir/lib/Dialect/Vector/Transforms/LowerVectorTransfer.cpp
index c31c51489ecc9..b3c6dec47f6be 100644
--- a/mlir/lib/Dialect/Vector/Transforms/LowerVectorTransfer.cpp
+++ b/mlir/lib/Dialect/Vector/Transforms/LowerVectorTransfer.cpp
@@ -133,9 +133,7 @@ struct TransferReadPermutationLowering
// Transpose in_bounds attribute.
ArrayAttr newInBoundsAttr =
- op.getInBounds() ? inverseTransposeInBoundsAttr(
- rewriter, op.getInBounds().value(), permutation)
- : ArrayAttr();
+ inverseTransposeInBoundsAttr(rewriter, op.getInBounds(), permutation);
// Generate new transfer_read operation.
VectorType newReadType = VectorType::get(
@@ -208,9 +206,7 @@ struct TransferWritePermutationLowering
// Transpose in_bounds attribute.
ArrayAttr newInBoundsAttr =
- op.getInBounds() ? inverseTransposeInBoundsAttr(
- rewriter, op.getInBounds().value(), permutation)
- : ArrayAttr();
+ inverseTransposeInBoundsAttr(rewriter, op.getInBounds(), permutation);
// Generate new transfer_write operation.
Value newVec = rewriter.create<vector::TransposeOp>(
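To illustrate the effect of these two patterns on a small case (a
hand-written sketch, not taken verbatim from the tests): the permutation
lowerings now unconditionally transpose the `in_bounds` array along with
the map:
```mlir
// Input: a transposed read; vector dim 0 reads along memref dim 1.
%0 = vector.transfer_read %A[%i, %j], %pad
    {in_bounds = [true, false],
     permutation_map = affine_map<(d0, d1) -> (d1, d0)>}
    : memref<?x?xf32>, vector<4x8xf32>

// After TransferReadPermutationLowering: a minor-identity read with the
// in_bounds entries permuted into memref order, followed by a transpose.
%r = vector.transfer_read %A[%i, %j], %pad {in_bounds = [false, true]}
    : memref<?x?xf32>, vector<8x4xf32>
%t = vector.transpose %r, [1, 0] : vector<8x4xf32> to vector<4x8xf32>
```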
diff --git a/mlir/lib/IR/AffineMap.cpp b/mlir/lib/IR/AffineMap.cpp
index 62f595299afe2..859fb8ebc10e8 100644
--- a/mlir/lib/IR/AffineMap.cpp
+++ b/mlir/lib/IR/AffineMap.cpp
@@ -158,6 +158,19 @@ bool AffineMap::isMinorIdentity() const {
getMinorIdentityMap(getNumDims(), getNumResults(), getContext());
}
+SmallVector<unsigned> AffineMap::getBroadcastDims() const {
+ SmallVector<unsigned> broadcastedDims;
+ for (const auto &[resIdx, expr] : llvm::enumerate(getResults())) {
+ if (auto constExpr = dyn_cast<AffineConstantExpr>(expr)) {
+ if (constExpr.getValue() != 0)
+ continue;
+ broadcastedDims.push_back(resIdx);
+ }
+ }
+
+ return broadcastedDims;
+}
+
/// Returns true if this affine map is a minor identity up to broadcasted
/// dimensions which are indicated by value 0 in the result.
bool AffineMap::isMinorIdentityWithBroadcasting(
diff --git a/mlir/test/Conversion/VectorToSCF/vector-to-scf.mlir b/mlir/test/Conversion/VectorToSCF/vector-to-scf.mlir
index e1babdd2f1f63..3f4e70a6835af 100644
--- a/mlir/test/Conversion/VectorToSCF/vector-to-scf.mlir
+++ b/mlir/test/Conversion/VectorToSCF/vector-to-scf.mlir
@@ -133,7 +133,7 @@ func.func @materialize_read(%M: index, %N: index, %O: index, %P: index) {
affine.for %i1 = 0 to %N {
affine.for %i2 = 0 to %O {
affine.for %i3 = 0 to %P step 5 {
- %f = vector.transfer_read %A[%i0, %i1, %i2, %i3], %f0 {permutation_map = affine_map<(d0, d1, d2, d3) -> (d3, 0, d0)>} : memref<?x?x?x?xf32>, vector<5x4x3xf32>
+ %f = vector.transfer_read %A[%i0, %i1, %i2, %i3], %f0 {in_bounds = [false, true, false], permutation_map = affine_map<(d0, d1, d2, d3) -> (d3, 0, d0)>} : memref<?x?x?x?xf32>, vector<5x4x3xf32>
// Add a dummy use to prevent dead code elimination from removing
// transfer read ops.
"dummy_use"(%f) : (vector<5x4x3xf32>) -> ()
@@ -507,7 +507,7 @@ func.func @transfer_read_with_tensor(%arg: tensor<f32>) -> vector<1xf32> {
// CHECK-NEXT: %[[RESULT:.*]] = vector.broadcast %[[EXTRACTED]] : f32 to vector<1xf32>
// CHECK-NEXT: return %[[RESULT]] : vector<1xf32>
%f0 = arith.constant 0.0 : f32
- %0 = vector.transfer_read %arg[], %f0 {permutation_map = affine_map<()->(0)>} :
+ %0 = vector.transfer_read %arg[], %f0 {in_bounds = [true], permutation_map = affine_map<()->(0)>} :
tensor<f32>, vector<1xf32>
return %0: vector<1xf32>
}
@@ -746,7 +746,7 @@ func.func @cannot_lower_transfer_read_with_leading_scalable(%arg0: memref<?x4xf3
func.func @does_not_crash_on_unpack_one_dim(%subview: memref<1x1x1x1xi32>, %mask: vector<1x1xi1>) -> vector<1x1x1x1xi32> {
%c0 = arith.constant 0 : index
%c0_i32 = arith.constant 0 : i32
- %3 = vector.transfer_read %subview[%c0, %c0, %c0, %c0], %c0_i32, %mask {permutation_map = #map1}
+ %3 = vector.transfer_read %subview[%c0, %c0, %c0, %c0], %c0_i32, %mask {in_bounds = [false, true, true, false], permutation_map = #map1}
: memref<1x1x1x1xi32>, vector<1x1x1x1xi32>
return %3 : vector<1x1x1x1xi32>
}
diff --git a/mlir/test/Dialect/Affine/SuperVectorize/vectorize_1d.mlir b/mlir/test/Dialect/Affine/SuperVectorize/vectorize_1d.mlir
index 9244604128cb7..0a077624d18f8 100644
--- a/mlir/test/Dialect/Affine/SuperVectorize/vectorize_1d.mlir
+++ b/mlir/test/Dialect/Affine/SuperVectorize/vectorize_1d.mlir
@@ -22,7 +22,7 @@ func.func @vec1d_1(%A : memref<?x?xf32>, %B : memref<?x?x?xf32>) {
// CHECK-NEXT: %{{.*}} = affine.apply #[[$map_id1]](%[[C0]])
// CHECK-NEXT: %{{.*}} = affine.apply #[[$map_id1]](%[[C0]])
// CHECK-NEXT: %{{.*}} = arith.constant 0.0{{.*}}: f32
-// CHECK-NEXT: {{.*}} = vector.transfer_read %{{.*}}[%{{.*}}, %{{.*}}], %{{.*}} {permutation_map = #[[$map_proj_d0d1_0]]} : memref<?x?xf32>, vector<128xf32>
+// CHECK-NEXT: {{.*}} = vector.transfer_read %{{.*}}[%{{.*}}, %{{.*}}], %{{.*}} {in_bounds = [true], permutation_map = #[[$map_proj_d0d1_0]]} : memref<?x?xf32>, vector<128xf32>
affine.for %i0 = 0 to %M { // vectorized due to scalar -> vector
%a0 = affine.load %A[%c0, %c0] : memref<?x?xf32>
}
@@ -425,7 +425,7 @@ func.func @vec_rejected_8(%A : memref<?x?xf32>, %B : memref<?x?x?xf32>) {
// CHECK: %{{.*}} = affine.apply #[[$map_id1]](%{{.*}})
// CHECK: %{{.*}} = affine.apply #[[$map_id1]](%{{.*}})
// CHECK: %{{.*}} = arith.constant 0.0{{.*}}: f32
-// CHECK: {{.*}} = vector.transfer_read %{{.*}}[%{{.*}}, %{{.*}}], %{{.*}} {permutation_map = #[[$map_proj_d0d1_0]]} : memref<?x?xf32>, vector<128xf32>
+// CHECK: {{.*}} = vector.transfer_read %{{.*}}[%{{.*}}, %{{.*}}], %{{.*}} {in_bounds = [true], permutation_map = #[[$map_proj_d0d1_0]]} : memref<?x?xf32>, vector<128xf32>
affine.for %i17 = 0 to %M { // not vectorized, the 1-D pattern that matched %{{.*}} in DFS post-order prevents vectorizing %{{.*}}
affine.for %i18 = 0 to %M { // vectorized due to scalar -> vector
%a18 = affine.load %A[%c0, %c0] : memref<?x?xf32>
@@ -459,7 +459,7 @@ func.func @vec_rejected_9(%A : memref<?x?xf32>, %B : memref<?x?x?xf32>) {
// CHECK: %{{.*}} = affine.apply #[[$map_id1]](%{{.*}})
// CHECK-NEXT: %{{.*}} = affine.apply #[[$map_id1]](%{{.*}})
// CHECK-NEXT: %{{.*}} = arith.constant 0.0{{.*}}: f32
-// CHECK-NEXT: {{.*}} = vector.transfer_read %{{.*}}[%{{.*}}, %{{.*}}], %{{.*}} {permutation_map = #[[$map_proj_d0d1_0]]} : memref<?x?xf32>, vector<128xf32>
+// CHECK-NEXT: {{.*}} = vector.transfer_read %{{.*}}[%{{.*}}, %{{.*}}], %{{.*}} {in_bounds = [true], permutation_map = #[[$map_proj_d0d1_0]]} : memref<?x?xf32>, vector<128xf32>
affine.for %i17 = 0 to %M { // not vectorized, the 1-D pattern that matched %i18 in DFS post-order prevents vectorizing %{{.*}}
affine.for %i18 = 0 to %M { // vectorized due to scalar -> vector
%a18 = affine.load %A[%c0, %c0] : memref<?x?xf32>
diff --git a/mlir/test/Dialect/Affine/SuperVectorize/vectorize_2d.mlir b/mlir/test/Dialect/Affine/SuperVectorize/vectorize_2d.mlir
index 83916e755363b..eb5120a49e3d4 100644
--- a/mlir/test/Dialect/Affine/SuperVectorize/vectorize_2d.mlir
+++ b/mlir/test/Dialect/Affine/SuperVectorize/vectorize_2d.mlir
@@ -123,8 +123,8 @@ func.func @vectorize_matmul(%arg0: memref<?x?xf32>, %arg1: memref<?x?xf32>, %arg
// VECT: affine.for %[[I2:.*]] = #[[$map_id1]](%[[C0]]) to #[[$map_id1]](%[[M]]) step 4 {
// VECT-NEXT: affine.for %[[I3:.*]] = #[[$map_id1]](%[[C0]]) to #[[$map_id1]](%[[N]]) step 8 {
// VECT-NEXT: affine.for %[[I4:.*]] = #[[$map_id1]](%[[C0]]) to #[[$map_id1]](%[[K]]) {
- // VECT: %[[A:.*]] = vector.transfer_read %{{.*}}[%[[I4]], %[[I3]]], %{{.*}} {permutation_map = #[[$map_proj_d0d1_zerod1]]} : memref<?x?xf32>, vector<4x8xf32>
- // VECT: %[[B:.*]] = vector.transfer_read %{{.*}}[%[[I2]], %[[I4]]], %{{.*}} {permutation_map = #[[$map_proj_d0d1_d0zero]]} : memref<?x?xf32>, vector<4x8xf32>
+ // VECT: %[[A:.*]] = vector.transfer_read %{{.*}}[%[[I4]], %[[I3]]], %{{.*}} {in_bounds = [true, false], permutation_map = #[[$map_proj_d0d1_zerod1]]} : memref<?x?xf32>, vector<4x8xf32>
+ // VECT: %[[B:.*]] = vector.transfer_read %{{.*}}[%[[I2]], %[[I4]]], %{{.*}} {in_bounds = [false, true], permutation_map = #[[$map_proj_d0d1_d0zero]]} : memref<?x?xf32>, vector<4x8xf32>
// VECT-NEXT: %[[C:.*]] = arith.mulf %[[B]], %[[A]] : vector<4x8xf32>
// VECT: %[[D:.*]] = vector.transfer_read %{{.*}}[%[[I2]], %[[I3]]], %{{.*}} : memref<?x?xf32>, vector<4x8xf32>
// VECT-NEXT: %[[E:.*]] = arith.addf %[[D]], %[[C]] : vector<4x8xf32>
diff --git a/mlir/test/Dialect/Affine/SuperVectorize/vectorize_affine_apply.mlir b/mlir/test/Dialect/Affine/SuperVectorize/vectorize_affine_apply.mlir
index 15a7133cf0f65..16ade6455d697 100644
--- a/mlir/test/Dialect/Affine/SuperVectorize/vectorize_affine_apply.mlir
+++ b/mlir/test/Dialect/Affine/SuperVectorize/vectorize_affine_apply.mlir
@@ -141,7 +141,7 @@ func.func @affine_map_with_expr_2(%arg0: memref<8x12x16xf32>, %arg1: memref<8x24
// CHECK-NEXT: %[[S1:.*]] = affine.apply #[[$MAP_ID4]](%[[ARG3]], %[[ARG4]], %[[I0]])
// CHECK-NEXT: %[[S2:.*]] = affine.apply #[[$MAP_ID5]](%[[ARG3]], %[[ARG4]], %[[I0]])
// CHECK-NEXT: %[[CST:.*]] = arith.constant 0.000000e+00 : f32
-// CHECK-NEXT: %[[S3:.*]] = vector.transfer_read %[[ARG0]][%[[S0]], %[[S1]], %[[S2]]], %[[CST]] {permutation_map = #[[$MAP_ID6]]} : memref<8x12x16xf32>, vector<8xf32>
+// CHECK-NEXT: %[[S3:.*]] = vector.transfer_read %[[ARG0]][%[[S0]], %[[S1]], %[[S2]]], %[[CST]] {in_bounds = [true], permutation_map = #[[$MAP_ID6]]} : memref<8x12x16xf32>, vector<8xf32>
// CHECK-NEXT: vector.transfer_write %[[S3]], %[[ARG1]][%[[ARG3]], %[[ARG4]], %[[ARG5]]] : vector<8xf32>, memref<8x24x48xf32>
// CHECK-NEXT: }
// CHECK-NEXT: }
diff --git a/mlir/test/Dialect/Linalg/hoisting.mlir b/mlir/test/Dialect/Linalg/hoisting.mlir
index 241b8a486c012..44c15c272bb3e 100644
--- a/mlir/test/Dialect/Linalg/hoisting.mlir
+++ b/mlir/test/Dialect/Linalg/hoisting.mlir
@@ -200,7 +200,7 @@ func.func @hoist_vector_transfer_pairs_in_affine_loops(%memref0: memref<64x64xi3
affine.for %arg3 = 0 to 64 {
affine.for %arg4 = 0 to 64 step 16 {
affine.for %arg5 = 0 to 64 {
- %0 = vector.transfer_read %memref0[%arg3, %arg5], %c0_i32 {permutation_map = affine_map<(d0, d1) -> (0)>} : memref<64x64xi32>, vector<16xi32>
+ %0 = vector.transfer_read %memref0[%arg3, %arg5], %c0_i32 {in_bounds = [true], permutation_map = affine_map<(d0, d1) -> (0)>} : memref<64x64xi32>, vector<16xi32>
%1 = vector.transfer_read %memref1[%arg5, %arg4], %c0_i32 : memref<64x64xi32>, vector<16xi32>
%2 = vector.transfer_read %memref2[%arg3, %arg4], %c0_i32 : memref<64x64xi32>, vector<16xi32>
%3 = arith.muli %0, %1 : vector<16xi32>
diff --git a/mlir/test/Dialect/Linalg/vectorization.mlir b/mlir/test/Dialect/Linalg/vectorization.mlir
index bbeccc7fecd68..783149971f0d6 100644
--- a/mlir/test/Dialect/Linalg/vectorization.mlir
+++ b/mlir/test/Dialect/Linalg/vectorization.mlir
@@ -130,7 +130,7 @@ func.func @vectorize_dynamic_1d_broadcast(%arg0: tensor<?xf32>,
// CHECK-LABEL: @vectorize_dynamic_1d_broadcast
// CHECK: %[[VAL_3:.*]] = arith.constant 0 : index
// CHECK: %[[VAL_4:.*]] = tensor.dim %{{.*}}, %[[VAL_3]] : tensor<?xf32>
-// CHECK: %[[VAL_7:.*]] = vector.transfer_read %{{.*}} {permutation_map = #{{.*}}} : tensor<?xf32>, vector<4xf32>
+// CHECK: %[[VAL_7:.*]] = vector.transfer_read %{{.*}} {in_bounds = {{.*}}, permutation_map = #{{.*}}} : tensor<?xf32>, vector<4xf32>
// CHECK: %[[VAL_9:.*]] = vector.create_mask %[[VAL_4]] : vector<4xi1>
// CHECK: %[[VAL_10:.*]] = vector.mask %[[VAL_9]] { vector.transfer_read %{{.*}} {in_bounds = [true]} : tensor<?xf32>, vector<4xf32> } : vector<4xi1> -> vector<4xf32>
// CHECK: %[[VAL_12:.*]] = vector.mask %[[VAL_9]] { vector.transfer_read %{{.*}} {in_bounds = [true]} : tensor<?xf32>, vector<4xf32> } : vector<4xi1> -> vector<4xf32>
diff --git a/mlir/test/Dialect/Vector/invalid.mlir b/mlir/test/Dialect/Vector/invalid.mlir
index db169a6c1f8ae..208982a3e0e7b 100644
--- a/mlir/test/Dialect/Vector/invalid.mlir
+++ b/mlir/test/Dialect/Vector/invalid.mlir
@@ -478,7 +478,7 @@ func.func @test_vector.transfer_read(%arg0: memref<?x?xvector<2x3xf32>>) {
%c3 = arith.constant 3 : index
%f0 = arith.constant 0.0 : f32
%vf0 = vector.splat %f0 : vector<2x3xf32>
- // expected-error at +1 {{ expects the optional in_bounds attr of same rank as permutation_map results: affine_map<(d0, d1) -> (d0, d1)>}}
+ // expected-error at +1 {{ expects the in_bounds attr of same rank as permutation_map results: affine_map<(d0, d1) -> (d0, d1)>}}
%0 = vector.transfer_read %arg0[%c3, %c3], %vf0 {in_bounds = [true], permutation_map = affine_map<(d0, d1)->(d0, d1)>} : memref<?x?xvector<2x3xf32>>, vector<1x1x2x3xf32>
}
diff --git a/mlir/test/Dialect/Vector/ops.mlir b/mlir/test/Dialect/Vector/ops.mlir
index 531e2db636431..7e578452b82cc 100644
--- a/mlir/test/Dialect/Vector/ops.mlir
+++ b/mlir/test/Dialect/Vector/ops.mlir
@@ -70,7 +70,7 @@ func.func @vector_transfer_ops(%arg0: memref<?x?xf32>,
// CHECK: vector.transfer_read %{{.*}}[%[[C3]], %[[C3]]], %{{.*}}, %{{.*}} : memref<?x?xf32>, vector<5xf32>
%8 = vector.transfer_read %arg0[%c3, %c3], %f0, %m : memref<?x?xf32>, vector<5xf32>
// CHECK: vector.transfer_read %{{.*}}[%[[C3]], %[[C3]], %[[C3]]], %{{.*}}, %{{.*}} : memref<?x?x?xf32>, vector<5x4x8xf32>
- %9 = vector.transfer_read %arg4[%c3, %c3, %c3], %f0, %m2 {permutation_map = affine_map<(d0, d1, d2)->(d1, d0, 0)>} : memref<?x?x?xf32>, vector<5x4x8xf32>
+ %9 = vector.transfer_read %arg4[%c3, %c3, %c3], %f0, %m2 {in_bounds = [false, false, true], permutation_map = affine_map<(d0, d1, d2)->(d1, d0, 0)>} : memref<?x?x?xf32>, vector<5x4x8xf32>
// CHECK: vector.transfer_write
vector.transfer_write %0, %arg0[%c3, %c3] {permutation_map = affine_map<(d0, d1)->(d0)>} : vector<128xf32>, memref<?x?xf32>
diff --git a/mlir/test/Dialect/Vector/vector-transfer-collapse-inner-most-dims.mlir b/mlir/test/Dialect/Vector/vector-transfer-collapse-inner-most-dims.mlir
index bc3c3c014c4d2..3f36756fe95d7 100644
--- a/mlir/test/Dialect/Vector/vector-transfer-collapse-inner-most-dims.mlir
+++ b/mlir/test/Dialect/Vector/vector-transfer-collapse-inner-most-dims.mlir
@@ -191,7 +191,6 @@ func.func @negative_contiguous_inner_most_non_zero_idx_out_of_bounds(%src: memre
// CHECK-NOT: memref.shape_cast
// CHECK: vector.transfer_read
-
// -----
func.func @contiguous_inner_most_dim_with_subview(%src: memref<1000x1xf32>, %i:index, %ii:index) -> (vector<4x1xf32>) {
diff --git a/mlir/test/Dialect/Vector/vector-transfer-permutation-lowering.mlir b/mlir/test/Dialect/Vector/vector-transfer-permutation-lowering.mlir
index 35418b38df9b2..b9dcb2b55e52e 100644
--- a/mlir/test/Dialect/Vector/vector-transfer-permutation-lowering.mlir
+++ b/mlir/test/Dialect/Vector/vector-transfer-permutation-lowering.mlir
@@ -228,7 +228,7 @@ func.func @permutation_with_mask_xfer_read_scalable(%mem: memref<?x?xf32>, %dim_
func.func @masked_permutation_xfer_read_fixed_width(%arg0: tensor<?x1xf32>, %mask : vector<4x1xi1>) {
%cst = arith.constant 0.000000e+00 : f32
%c0 = arith.constant 0 : index
- %3 = vector.mask %mask { vector.transfer_read %arg0[%c0, %c0], %cst {permutation_map = affine_map<(d0, d1) -> (d1, 0, d0)>} : tensor<?x1xf32>, vector<1x4x4xf32> } : vector<4x1xi1> -> vector<1x4x4xf32>
+ %3 = vector.mask %mask { vector.transfer_read %arg0[%c0, %c0], %cst {in_bounds = [false, true, false], permutation_map = affine_map<(d0, d1) -> (d1, 0, d0)>} : tensor<?x1xf32>, vector<1x4x4xf32> } : vector<4x1xi1> -> vector<1x4x4xf32>
call @test.some_use(%3) : (vector<1x4x4xf32>) -> ()
return
}
diff --git a/mlir/test/Dialect/Vector/vector-transfer-to-vector-load-store.mlir b/mlir/test/Dialect/Vector/vector-transfer-to-vector-load-store.mlir
index d169e6d5878e2..2b7906d4dd40c 100644
--- a/mlir/test/Dialect/Vector/vector-transfer-to-vector-load-store.mlir
+++ b/mlir/test/Dialect/Vector/vector-transfer-to-vector-load-store.mlir
@@ -28,7 +28,7 @@ func.func @vector_transfer_ops_0d_tensor(%M: tensor<f32>) -> vector<1xf32> {
// CHECK-NEXT: %[[S:.*]] = tensor.extract %[[SOURCE]][] : tensor<f32>
// CHECK-NEXT: %[[V:.*]] = vector.broadcast %[[S]] : f32 to vector<1xf32>
- %0 = vector.transfer_read %M[], %f0 {permutation_map = affine_map<()->(0)>} :
+ %0 = vector.transfer_read %M[], %f0 {in_bounds = [true], permutation_map = affine_map<()->(0)>} :
tensor<f32>, vector<1xf32>
// CHECK-NEXT: return %[[V]]
@@ -296,8 +296,8 @@ func.func @transfer_read_permutations(%arg0 : memref<?x?xf32>, %arg1 : memref<?x
// CHECK: %[[MASK1:.*]] = vector.splat %{{.*}} : vector<16x14xi1>
%mask1 = vector.splat %m : vector<16x14xi1>
- %1 = vector.transfer_read %arg1[%c0, %c0, %c0, %c0], %cst, %mask1 {permutation_map = #map1} : memref<?x?x?x?xf32>, vector<7x14x8x16xf32>
-// CHECK: vector.transfer_read {{.*}} %[[MASK1]] {permutation_map = #[[$MAP0]]} : memref<?x?x?x?xf32>, vector<16x14x7x8xf32>
+ %1 = vector.transfer_read %arg1[%c0, %c0, %c0, %c0], %cst, %mask1 {in_bounds = [true, false, true, false], permutation_map = #map1} : memref<?x?x?x?xf32>, vector<7x14x8x16xf32>
+// CHECK: vector.transfer_read {{.*}} %[[MASK1]] {in_bounds = [false, false, true, true], permutation_map = #[[$MAP0]]} : memref<?x?x?x?xf32>, vector<16x14x7x8xf32>
// CHECK: vector.transpose %{{.*}}, [2, 1, 3, 0] : vector<16x14x7x8xf32> to vector<7x14x8x16xf32>
// CHECK: %[[MASK3:.*]] = vector.splat %{{.*}} : vector<14x7xi1>
@@ -307,12 +307,12 @@ func.func @transfer_read_permutations(%arg0 : memref<?x?xf32>, %arg1 : memref<?x
// CHECK: vector.broadcast %{{.*}} : vector<14x16x7xf32> to vector<8x14x16x7xf32>
// CHECK: vector.transpose %{{.*}}, [3, 1, 0, 2] : vector<8x14x16x7xf32> to vector<7x14x8x16xf32>
- %3 = vector.transfer_read %arg0[%c0, %c0], %cst {permutation_map = #map3} : memref<?x?xf32>, vector<7x14x8x16xf32>
+ %3 = vector.transfer_read %arg0[%c0, %c0], %cst {in_bounds = [false, false, true, true], permutation_map = #map3} : memref<?x?xf32>, vector<7x14x8x16xf32>
// CHECK: vector.transfer_read %{{.*}}[%[[C0]], %[[C0]]], %[[CF0]] : memref<?x?xf32>, vector<14x7xf32>
// CHECK: vector.broadcast %{{.*}} : vector<14x7xf32> to vector<8x16x14x7xf32>
// CHECK: vector.transpose %{{.*}}, [3, 2, 0, 1] : vector<8x16x14x7xf32> to vector<7x14x8x16xf32>
- %4 = vector.transfer_read %arg0[%c0, %c0], %cst {permutation_map = #map4} : memref<?x?xf32>, vector<7x14x8x16xf32>
+ %4 = vector.transfer_read %arg0[%c0, %c0], %cst {in_bounds = [true, false, true, false], permutation_map = #map4} : memref<?x?xf32>, vector<7x14x8x16xf32>
// CHECK: vector.transfer_read %{{.*}}[%[[C0]], %[[C0]]], %[[CF0]] : memref<?x?xf32>, vector<16x14xf32>
// CHECK: vector.broadcast %{{.*}} : vector<16x14xf32> to vector<7x8x16x14xf32>
// CHECK: vector.transpose %{{.*}}, [0, 3, 1, 2] : vector<7x8x16x14xf32> to vector<7x14x8x16xf32>
@@ -321,7 +321,7 @@ func.func @transfer_read_permutations(%arg0 : memref<?x?xf32>, %arg1 : memref<?x
// CHECK: vector.transfer_read %{{.*}}[%[[C0]], %[[C0]], %[[C0]], %[[C0]]], %[[CF0]] : memref<?x?x?x?xf32>, vector<16x14x7x8xf32>
// CHECK: vector.transpose %{{.*}}, [2, 1, 3, 0] : vector<16x14x7x8xf32> to vector<7x14x8x16xf32>
- %6 = vector.transfer_read %arg0[%c0, %c0], %cst {permutation_map = #map6} : memref<?x?xf32>, vector<8xf32>
+ %6 = vector.transfer_read %arg0[%c0, %c0], %cst {in_bounds = [true], permutation_map = #map6} : memref<?x?xf32>, vector<8xf32>
// CHECK: memref.load %{{.*}}[%[[C0]], %[[C0]]] : memref<?x?xf32>
// CHECK: vector.broadcast %{{.*}} : f32 to vector<8xf32>
diff --git a/mlir/test/Dialect/Vector/vector-transfer-unroll.mlir b/mlir/test/Dialect/Vector/vector-transfer-unroll.mlir
index 578d845a27ad4..eb0db736d5da5 100644
--- a/mlir/test/Dialect/Vector/vector-transfer-unroll.mlir
+++ b/mlir/test/Dialect/Vector/vector-transfer-unroll.mlir
@@ -199,7 +199,7 @@ func.func @transfer_read_unroll_permutation(%arg0 : memref<6x4xf32>) -> vector<4
func.func @transfer_read_unroll_broadcast(%arg0 : memref<6x4xf32>) -> vector<6x4xf32> {
%c0 = arith.constant 0 : index
%cf0 = arith.constant 0.0 : f32
- %0 = vector.transfer_read %arg0[%c0, %c0], %cf0 {permutation_map = #map0} : memref<6x4xf32>, vector<6x4xf32>
+ %0 = vector.transfer_read %arg0[%c0, %c0], %cf0 {in_bounds = [true, false], permutation_map = #map0} : memref<6x4xf32>, vector<6x4xf32>
return %0 : vector<6x4xf32>
}
@@ -226,7 +226,7 @@ func.func @transfer_read_unroll_broadcast(%arg0 : memref<6x4xf32>) -> vector<6x4
func.func @transfer_read_unroll_broadcast_permuation(%arg0 : memref<6x4xf32>) -> vector<4x6xf32> {
%c0 = arith.constant 0 : index
%cf0 = arith.constant 0.0 : f32
- %0 = vector.transfer_read %arg0[%c0, %c0], %cf0 {permutation_map = #map0} : memref<6x4xf32>, vector<4x6xf32>
+ %0 = vector.transfer_read %arg0[%c0, %c0], %cf0 {in_bounds = [true, false], permutation_map = #map0} : memref<6x4xf32>, vector<4x6xf32>
return %0 : vector<4x6xf32>
}
diff --git a/mlir/test/Integration/Dialect/Vector/CPU/transfer-read-1d.mlir b/mlir/test/Integration/Dialect/Vector/CPU/transfer-read-1d.mlir
index 8a98d39e657f2..12b0511d486ea 100644
--- a/mlir/test/Integration/Dialect/Vector/CPU/transfer-read-1d.mlir
+++ b/mlir/test/Integration/Dialect/Vector/CPU/transfer-read-1d.mlir
@@ -82,7 +82,7 @@ func.func @transfer_read_1d_broadcast(
%A : memref<?x?xf32>, %base1 : index, %base2 : index) {
%fm42 = arith.constant -42.0: f32
%f = vector.transfer_read %A[%base1, %base2], %fm42
- {permutation_map = affine_map<(d0, d1) -> (0)>}
+ {in_bounds = [true], permutation_map = affine_map<(d0, d1) -> (0)>}
: memref<?x?xf32>, vector<9xf32>
vector.print %f: vector<9xf32>
return
diff --git a/mlir/test/Integration/Dialect/Vector/CPU/transfer-read-2d.mlir b/mlir/test/Integration/Dialect/Vector/CPU/transfer-read-2d.mlir
index cb8a8ce8ab0b0..9f8849fa9a148 100644
--- a/mlir/test/Integration/Dialect/Vector/CPU/transfer-read-2d.mlir
+++ b/mlir/test/Integration/Dialect/Vector/CPU/transfer-read-2d.mlir
@@ -57,7 +57,7 @@ func.func @transfer_read_2d_mask_broadcast(
%fm42 = arith.constant -42.0: f32
%mask = arith.constant dense<[1, 0, 1, 0, 1, 1, 1, 0, 1]> : vector<9xi1>
%f = vector.transfer_read %A[%base1, %base2], %fm42, %mask
- {permutation_map = affine_map<(d0, d1) -> (0, d1)>} :
+ {in_bounds = [true, false], permutation_map = affine_map<(d0, d1) -> (0, d1)>} :
memref<?x?xf32>, vector<4x9xf32>
vector.print %f: vector<4x9xf32>
return
@@ -69,7 +69,7 @@ func.func @transfer_read_2d_mask_transpose_broadcast_last_dim(
%fm42 = arith.constant -42.0: f32
%mask = arith.constant dense<[1, 0, 1, 1]> : vector<4xi1>
%f = vector.transfer_read %A[%base1, %base2], %fm42, %mask
- {permutation_map = affine_map<(d0, d1) -> (d1, 0)>} :
+ {in_bounds = [false, true], permutation_map = affine_map<(d0, d1) -> (d1, 0)>} :
memref<?x?xf32>, vector<4x9xf32>
vector.print %f: vector<4x9xf32>
return
@@ -91,7 +91,7 @@ func.func @transfer_read_2d_broadcast(
%A : memref<?x?xf32>, %base1: index, %base2: index) {
%fm42 = arith.constant -42.0: f32
%f = vector.transfer_read %A[%base1, %base2], %fm42
- {permutation_map = affine_map<(d0, d1) -> (d1, 0)>} :
+ {in_bounds = [false, true], permutation_map = affine_map<(d0, d1) -> (d1, 0)>} :
memref<?x?xf32>, vector<4x9xf32>
vector.print %f: vector<4x9xf32>
return
diff --git a/mlir/test/Integration/Dialect/Vector/CPU/transfer-read-3d.mlir b/mlir/test/Integration/Dialect/Vector/CPU/transfer-read-3d.mlir
index 4aecca3d6891e..466afeec459b4 100644
--- a/mlir/test/Integration/Dialect/Vector/CPU/transfer-read-3d.mlir
+++ b/mlir/test/Integration/Dialect/Vector/CPU/transfer-read-3d.mlir
@@ -32,7 +32,7 @@ func.func @transfer_read_3d_broadcast(%A : memref<?x?x?x?xf32>,
%o: index, %a: index, %b: index, %c: index) {
%fm42 = arith.constant -42.0: f32
%f = vector.transfer_read %A[%o, %a, %b, %c], %fm42
- {permutation_map = affine_map<(d0, d1, d2, d3) -> (d1, 0, d3)>}
+ {in_bounds = [false, true, false], permutation_map = affine_map<(d0, d1, d2, d3) -> (d1, 0, d3)>}
: memref<?x?x?x?xf32>, vector<2x5x3xf32>
vector.print %f: vector<2x5x3xf32>
return
@@ -43,7 +43,7 @@ func.func @transfer_read_3d_mask_broadcast(
%fm42 = arith.constant -42.0: f32
%mask = arith.constant dense<[0, 1]> : vector<2xi1>
%f = vector.transfer_read %A[%o, %a, %b, %c], %fm42, %mask
- {permutation_map = affine_map<(d0, d1, d2, d3) -> (d1, 0, 0)>}
+ {in_bounds = [false, true, true], permutation_map = affine_map<(d0, d1, d2, d3) -> (d1, 0, 0)>}
: memref<?x?x?x?xf32>, vector<2x5x3xf32>
vector.print %f: vector<2x5x3xf32>
return
diff --git a/mlir/test/python/dialects/vector.py b/mlir/test/python/dialects/vector.py
index dafb2bfde8982..77eaf94a830d9 100644
--- a/mlir/test/python/dialects/vector.py
+++ b/mlir/test/python/dialects/vector.py
@@ -51,10 +51,16 @@ def testTransferReadOp():
with InsertionPoint(f.add_entry_block()):
A, zero, padding, mask = f.arguments
vector.TransferReadOp(
- vector_type, A, [zero, zero], identity_map_attr, padding, mask=mask
+ vector_type,
+ A,
+ [zero, zero],
+ identity_map_attr,
+ padding,
+ [False, False],
+ mask=mask,
)
vector.TransferReadOp(
- vector_type, A, [zero, zero], identity_map_attr, padding
+ vector_type, A, [zero, zero], identity_map_attr, padding, [False, False]
)
func.ReturnOp([])