[Mlir-commits] [mlir] [mlir][vector] Update syntax and representation of insert/extract_strided_slice (PR #101850)

Benjamin Maxwell llvmlistbot at llvm.org
Sat Aug 3 14:09:53 PDT 2024


https://github.com/MacDue created https://github.com/llvm/llvm-project/pull/101850

This commit updates the representation of both `extract_strided_slice` and `insert_strided_slice` to use primitive arrays of `int64_t`, rather than `ArrayAttr`s of `IntegerAttr`s. This eliminates a lot of boilerplate conversion between `IntegerAttr` and `int64_t`.

Previously, the offsets, strides, and sizes lived in the attribute dictionary (with no special syntax), so simply replacing the attribute types with `DenseI64ArrayAttr` would already be a syntax break.

Since a syntax break is largely unavoidable, this commit also tackles a long-standing TODO:

```mlir
// TODO: Evolve to a range form syntax similar to:
%1 = vector.extract_strided_slice %0[0:2:1][2:4:1]
  : vector<4x8x16xf32> to vector<2x4x16xf32>
```

This is done by introducing a new `StridedSliceAttr` attribute that can be used for both operations, with syntax based on the above example (see the attribute documentation in `VectorAttributes.td` for a full syntax overview).


With this:

`extract_strided_slice` goes from:
```mlir
%1 = vector.extract_strided_slice %0
     {offsets = [0, 2], sizes = [2, 4], strides = [1, 1]}
     : vector<4x8x16xf32> to vector<2x4x16xf32>
```
To:
```mlir
%1 = vector.extract_strided_slice %0[0:2:1][2:4:1]
     : vector<4x8x16xf32> to vector<2x4x16xf32>
```
(matching the TODO)
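The new bracket syntax is mechanical to parse. As a rough illustration (a hypothetical Python sketch, not the actual C++ `StridedSliceAttr::parse` implementation, and covering only the extract-style `[offset:size:stride]` groups):

```python
import re

def parse_strided_slice(text):
    """Parse an extract-style strided-slice string like "[0:2:1][2:4:1]"
    into (offsets, sizes, strides). Each bracket group is offset:size:stride."""
    offsets, sizes, strides = [], [], []
    for group in re.findall(r"\[([^\]]+)\]", text):
        parts = [int(p) for p in group.split(":")]
        if len(parts) != 3:
            raise ValueError("expected offset:size:stride")
        offset, size, stride = parts
        offsets.append(offset)
        sizes.append(size)
        strides.append(stride)
    return offsets, sizes, strides
```

For example, `parse_strided_slice("[0:2:1][2:4:1]")` recovers the same `offsets = [0, 2]`, `sizes = [2, 4]`, `strides = [1, 1]` that the old attribute-dictionary form spelled out explicitly.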

---

And `insert_strided_slice` goes from:
```mlir
%2 = vector.insert_strided_slice %0, %1
     {offsets = [0, 0, 2], strides = [1, 1]}
     : vector<2x4xf32> into vector<16x4x8xf32>
```

To:
```mlir
%2 = vector.insert_strided_slice %0, %1[0][0:1][2:1]
     : vector<2x4xf32> into vector<16x4x8xf32>
```
(inspired by the TODO)
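In both the old and new forms, the result type of `extract_strided_slice` is inferred the same way: the leading dimensions come from the slice sizes and the remaining dimensions are inherited from the source vector. A minimal Python sketch of that rule (mirroring what `inferStridedSliceOpResultType` does, shapes only, ignoring scalable dims and element types):

```python
def infer_extract_result_shape(source_shape, sizes):
    """Leading dims are the slice sizes; trailing dims are inherited
    unchanged from the source vector shape."""
    assert len(sizes) <= len(source_shape)
    return list(sizes) + list(source_shape[len(sizes):])
```

So slicing `vector<4x8x16xf32>` with sizes `[2, 4]` yields `vector<2x4x16xf32>`, matching the examples above.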

---

Almost all test changes were done automatically via `auto-upgrade-insert-extract-slice.py`, available at: https://gist.github.com/MacDue/ca84d3ec19cf83ae71aab2be8f09c3c5 (use at your own risk).

This PR is split into multiple commits to make the changes easier to follow.
- The first commit contains the code changes
- The second commit contains the **automatic** test changes
- The final commit contains the manual test changes
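The automatic upgrade is essentially a textual rewrite from the attribute-dictionary form to the bracket form. This is not the linked script, just a simplified sketch of the transformation it automates (assuming the `offsets`/`sizes`/`strides` dictionary sits on a single line):

```python
import re

def upgrade_extract(line):
    """Rewrite old-style extract_strided_slice attribute syntax into the
    new bracket syntax. Illustration only; real IR may split the
    dictionary across lines or carry extra attributes."""
    m = re.search(
        r"\{offsets = \[([^\]]*)\], sizes = \[([^\]]*)\], strides = \[([^\]]*)\]\}",
        line)
    if not m:
        return line  # nothing to upgrade on this line
    offsets, sizes, strides = (
        [int(v) for v in g.split(",")] for g in m.groups())
    # One [offset:size:stride] group per sliced dimension.
    slices = "".join(f"[{o}:{z}:{s}]" for o, z, s in zip(offsets, sizes, strides))
    return line[:m.start()].rstrip() + slices + line[m.end():]
```

Running it over the old extract example above produces the new single-line form shown earlier.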




From 33f76a11d9f5883d2e87f417c484d29a5a10a71e Mon Sep 17 00:00:00 2001
From: MacDue <macdue at dueutil.tech>
Date: Sat, 3 Aug 2024 18:19:20 +0100
Subject: [PATCH 1/3] [mlir][vector] Update syntax and representation of
 insert/extract_strided_slice

This commit updates the representation of both `extract_strided_slice`
and `insert_strided_slice` to primitive arrays of int64_ts, rather than
ArrayAttrs of IntegerAttrs. This prevents a lot of boilerplate
conversions between IntegerAttr and int64_t.

Because previously the offsets, strides, and sizes were in the
attribute dictionary (with no special syntax), simply replacing the
attribute types with `DenseI64ArrayAttr` would be a syntax break.

Since a break is unavoidable, this commit also tackles a long-standing
TODO:

```mlir
// TODO: Evolve to a range form syntax similar to:
%1 = vector.extract_strided_slice %0[0:2:1][2:4:1]
  : vector<4x8x16xf32> to vector<2x4x16xf32>
```

This is done by introducing a new `StridedSliceAttr` attribute that can
be used for both operations, with syntax based on the above example.

See the attribute documentation in `VectorAttributes.td` for a full
overview.
---
 .../Dialect/Vector/IR/VectorAttributes.td     |  64 ++++
 .../mlir/Dialect/Vector/IR/VectorOps.td       |  39 +-
 .../Conversion/VectorToGPU/VectorToGPU.cpp    |  11 +-
 .../VectorToSPIRV/VectorToSPIRV.cpp           |  13 +-
 .../Dialect/Arith/Transforms/IntNarrowing.cpp |   5 +-
 mlir/lib/Dialect/Vector/IR/VectorOps.cpp      | 333 ++++++++++--------
 .../Vector/Transforms/LowerVectorScan.cpp     |   9 +-
 .../Transforms/VectorDropLeadUnitDim.cpp      |  24 +-
 ...sertExtractStridedSliceRewritePatterns.cpp |  72 ++--
 .../Vector/Transforms/VectorLinearize.cpp     |  19 +-
 .../Vector/Transforms/VectorTransforms.cpp    |  49 +--
 11 files changed, 333 insertions(+), 305 deletions(-)

diff --git a/mlir/include/mlir/Dialect/Vector/IR/VectorAttributes.td b/mlir/include/mlir/Dialect/Vector/IR/VectorAttributes.td
index 0f08f61d7b257..7fa20b950e7c6 100644
--- a/mlir/include/mlir/Dialect/Vector/IR/VectorAttributes.td
+++ b/mlir/include/mlir/Dialect/Vector/IR/VectorAttributes.td
@@ -16,6 +16,11 @@
 include "mlir/Dialect/Vector/IR/Vector.td"
 include "mlir/IR/EnumAttr.td"
 
+class Vector_Attr<string attrName, string attrMnemonic, list<Trait> traits = []>
+    : AttrDef<Vector_Dialect, attrName, traits> {
+  let mnemonic = attrMnemonic;
+}
+
 // The "kind" of combining function for contractions and reductions.
 def COMBINING_KIND_ADD : I32BitEnumAttrCaseBit<"ADD", 0, "add">;
 def COMBINING_KIND_MUL : I32BitEnumAttrCaseBit<"MUL", 1, "mul">;
@@ -82,4 +87,63 @@ def Vector_PrintPunctuation : EnumAttr<Vector_Dialect, PrintPunctuation, "punctu
   let assemblyFormat = "`<` $value `>`";
 }
 
+def Vector_StridedSliceAttr : Vector_Attr<"StridedSlice", "strided_slice">
+{
+  let summary = "strided vector slice";
+
+  let description = [{
+    An attribute that represents a strided slice of a vector.
+
+    *Syntax:*
+
+    ```
+    offset = integer-literal
+    stride = integer-literal
+    size = integer-literal
+    offset-list = offset (`,` offset)*
+
+    // Without sizes (used for insert_strided_slice)
+    strided-slice-without-sizes = (`[` offset-list `]`)? (`[` offset `:` stride `]`)+
+
+    // With sizes (used for extract_strided_slice)
+    strided-slice-with-sizes = (`[` offset `:` size `:` stride `]`)+
+    ```
+
+    *Examples:*
+
+    Without sizes:
+
+    `[0:1][4:2]`
+
+    - The first dimension starts at offset 0 and is strided by 1
+    - The second dimension starts at offset 4 and is strided by 2
+
+    `[0, 1, 2][3:1][4:8]`
+
+    - The first three dimensions are indexed without striding (offsets 0, 1, 2)
+    - The fourth dimension starts at offset 3 and is strided by 1
+    - The fifth dimension starts at offset 4 and is strided by 8
+
+    With sizes (used for extract_strided_slice):
+
+    `[0:2:4][2:4:3]`
+
+    - The first dimension starts at offset 0, has size 2, and is strided by 4
+    - The second dimension starts at offset 2, has size 4, and is strided by 3
+  }];
+
+  let parameters = (ins
+    ArrayRefParameter<"int64_t">:$offsets,
+    OptionalArrayRefParameter<"int64_t">:$sizes,
+    ArrayRefParameter<"int64_t">:$strides
+  );
+
+  let builders = [AttrBuilder<(ins "ArrayRef<int64_t>":$offsets, "ArrayRef<int64_t>":$strides), [{
+      return $_get($_ctxt, offsets, ArrayRef<int64_t>{}, strides);
+    }]>
+  ];
+
+  let hasCustomAssemblyFormat = 1;
+}
+
 #endif // MLIR_DIALECT_VECTOR_IR_VECTOR_ATTRIBUTES
diff --git a/mlir/include/mlir/Dialect/Vector/IR/VectorOps.td b/mlir/include/mlir/Dialect/Vector/IR/VectorOps.td
index 434ff3956c250..45edb75c1989a 100644
--- a/mlir/include/mlir/Dialect/Vector/IR/VectorOps.td
+++ b/mlir/include/mlir/Dialect/Vector/IR/VectorOps.td
@@ -1040,8 +1040,8 @@ def Vector_InsertStridedSliceOp :
     PredOpTrait<"operand #0 and result have same element type",
                  TCresVTEtIsSameAsOpBase<0, 0>>,
     AllTypesMatch<["dest", "res"]>]>,
-    Arguments<(ins AnyVector:$source, AnyVector:$dest, I64ArrayAttr:$offsets,
-               I64ArrayAttr:$strides)>,
+    Arguments<(ins AnyVector:$source, AnyVector:$dest,
+               Vector_StridedSliceAttr:$strided_slice)>,
     Results<(outs AnyVector:$res)> {
   let summary = "strided_slice operation";
   let description = [{
@@ -1059,14 +1059,13 @@ def Vector_InsertStridedSliceOp :
     Example:
 
     ```mlir
-    %2 = vector.insert_strided_slice %0, %1
-        {offsets = [0, 0, 2], strides = [1, 1]}:
-      vector<2x4xf32> into vector<16x4x8xf32>
+    %2 = vector.insert_strided_slice %0, %1[0][0:1][2:1]
+      : vector<2x4xf32> into vector<16x4x8xf32>
     ```
   }];
 
   let assemblyFormat = [{
-    $source `,` $dest attr-dict `:` type($source) `into` type($dest)
+    $source `,` $dest `` $strided_slice attr-dict `:` type($source) `into` type($dest)
   }];
 
   let builders = [
@@ -1081,10 +1080,13 @@ def Vector_InsertStridedSliceOp :
       return ::llvm::cast<VectorType>(getDest().getType());
     }
     bool hasNonUnitStrides() {
-      return llvm::any_of(getStrides(), [](Attribute attr) {
-        return ::llvm::cast<IntegerAttr>(attr).getInt() != 1;
+      return llvm::any_of(getStrides(), [](int64_t stride) {
+        return stride != 1;
       });
     }
+
+    ArrayRef<int64_t> getOffsets() { return getStridedSlice().getOffsets(); }
+    ArrayRef<int64_t> getStrides() { return getStridedSlice().getStrides(); }
   }];
 
   let hasFolder = 1;
@@ -1298,8 +1300,7 @@ def Vector_ExtractStridedSliceOp :
   Vector_Op<"extract_strided_slice", [Pure,
     PredOpTrait<"operand and result have same element type",
                  TCresVTEtIsSameAsOpBase<0, 0>>]>,
-    Arguments<(ins AnyVector:$vector, I64ArrayAttr:$offsets,
-               I64ArrayAttr:$sizes, I64ArrayAttr:$strides)>,
+    Arguments<(ins AnyVector:$vector, Vector_StridedSliceAttr:$strided_slice)>,
     Results<(outs AnyVector)> {
   let summary = "extract_strided_slice operation";
   let description = [{
@@ -1316,13 +1317,8 @@ def Vector_ExtractStridedSliceOp :
     Example:
 
     ```mlir
-    %1 = vector.extract_strided_slice %0
-        {offsets = [0, 2], sizes = [2, 4], strides = [1, 1]}:
-      vector<4x8x16xf32> to vector<2x4x16xf32>
-
-    // TODO: Evolve to a range form syntax similar to:
     %1 = vector.extract_strided_slice %0[0:2:1][2:4:1]
-      vector<4x8x16xf32> to vector<2x4x16xf32>
+      : vector<4x8x16xf32> to vector<2x4x16xf32>
     ```
   }];
   let builders = [
@@ -1333,17 +1329,20 @@ def Vector_ExtractStridedSliceOp :
     VectorType getSourceVectorType() {
       return ::llvm::cast<VectorType>(getVector().getType());
     }
-    void getOffsets(SmallVectorImpl<int64_t> &results);
     bool hasNonUnitStrides() {
-      return llvm::any_of(getStrides(), [](Attribute attr) {
-        return ::llvm::cast<IntegerAttr>(attr).getInt() != 1;
+      return llvm::any_of(getStrides(), [](int64_t stride) {
+        return stride != 1;
       });
     }
+
+    ArrayRef<int64_t> getOffsets() { return getStridedSlice().getOffsets(); }
+    ArrayRef<int64_t> getSizes() { return getStridedSlice().getSizes(); }
+    ArrayRef<int64_t> getStrides() { return getStridedSlice().getStrides(); }
   }];
   let hasCanonicalizer = 1;
   let hasFolder = 1;
   let hasVerifier = 1;
-  let assemblyFormat = "$vector attr-dict `:` type($vector) `to` type(results)";
+  let assemblyFormat = "$vector `` $strided_slice attr-dict `:` type($vector) `to` type(results)";
 }
 
 // TODO: Tighten semantics so that masks and inbounds can't be used
diff --git a/mlir/lib/Conversion/VectorToGPU/VectorToGPU.cpp b/mlir/lib/Conversion/VectorToGPU/VectorToGPU.cpp
index 0150ff667e4ef..a2647e2b647c1 100644
--- a/mlir/lib/Conversion/VectorToGPU/VectorToGPU.cpp
+++ b/mlir/lib/Conversion/VectorToGPU/VectorToGPU.cpp
@@ -940,12 +940,6 @@ convertTransferWriteToStores(RewriterBase &rewriter, vector::TransferWriteOp op,
   return success();
 }
 
-static void populateFromInt64AttrArray(ArrayAttr arrayAttr,
-                                       SmallVectorImpl<int64_t> &results) {
-  for (auto attr : arrayAttr)
-    results.push_back(cast<IntegerAttr>(attr).getInt());
-}
-
 static LogicalResult
 convertExtractStridedSlice(RewriterBase &rewriter,
                            vector::ExtractStridedSliceOp op,
@@ -996,11 +990,8 @@ convertExtractStridedSlice(RewriterBase &rewriter,
   auto sourceVector = it->second;
 
   // offset and sizes at warp-level of onwership.
-  SmallVector<int64_t> offsets;
-  populateFromInt64AttrArray(op.getOffsets(), offsets);
+  ArrayRef<int64_t> offsets = op.getOffsets();
 
-  SmallVector<int64_t> sizes;
-  populateFromInt64AttrArray(op.getSizes(), sizes);
   ArrayRef<int64_t> warpVectorShape = op.getSourceVectorType().getShape();
 
   // Compute offset in vector registers. Note that the mma.sync vector registers
diff --git a/mlir/lib/Conversion/VectorToSPIRV/VectorToSPIRV.cpp b/mlir/lib/Conversion/VectorToSPIRV/VectorToSPIRV.cpp
index 21b8858989839..4d4e5ebb4f428 100644
--- a/mlir/lib/Conversion/VectorToSPIRV/VectorToSPIRV.cpp
+++ b/mlir/lib/Conversion/VectorToSPIRV/VectorToSPIRV.cpp
@@ -46,9 +46,6 @@ static uint64_t getFirstIntValue(ValueRange values) {
 static uint64_t getFirstIntValue(ArrayRef<Attribute> attr) {
   return cast<IntegerAttr>(attr[0]).getInt();
 }
-static uint64_t getFirstIntValue(ArrayAttr attr) {
-  return (*attr.getAsValueRange<IntegerAttr>().begin()).getZExtValue();
-}
 static uint64_t getFirstIntValue(ArrayRef<OpFoldResult> foldResults) {
   auto attr = foldResults[0].dyn_cast<Attribute>();
   if (attr)
@@ -187,9 +184,9 @@ struct VectorExtractStridedSliceOpConvert final
     if (!dstType)
       return failure();
 
-    uint64_t offset = getFirstIntValue(extractOp.getOffsets());
-    uint64_t size = getFirstIntValue(extractOp.getSizes());
-    uint64_t stride = getFirstIntValue(extractOp.getStrides());
+    int64_t offset = extractOp.getOffsets().front();
+    int64_t size = extractOp.getSizes().front();
+    int64_t stride = extractOp.getStrides().front();
     if (stride != 1)
       return failure();
 
@@ -323,10 +320,10 @@ struct VectorInsertStridedSliceOpConvert final
     Value srcVector = adaptor.getOperands().front();
     Value dstVector = adaptor.getOperands().back();
 
-    uint64_t stride = getFirstIntValue(insertOp.getStrides());
+    uint64_t stride = insertOp.getStrides().front();
     if (stride != 1)
       return failure();
-    uint64_t offset = getFirstIntValue(insertOp.getOffsets());
+    uint64_t offset = insertOp.getOffsets().front();
 
     if (isa<spirv::ScalarType>(srcVector.getType())) {
       assert(!isa<spirv::ScalarType>(dstVector.getType()));
diff --git a/mlir/lib/Dialect/Arith/Transforms/IntNarrowing.cpp b/mlir/lib/Dialect/Arith/Transforms/IntNarrowing.cpp
index e2d42e961c576..941644e1116fc 100644
--- a/mlir/lib/Dialect/Arith/Transforms/IntNarrowing.cpp
+++ b/mlir/lib/Dialect/Arith/Transforms/IntNarrowing.cpp
@@ -550,11 +550,8 @@ struct ExtensionOverExtractStridedSlice final
     if (failed(ext))
       return failure();
 
-    VectorType origTy = op.getType();
-    VectorType extractTy =
-        origTy.cloneWith(origTy.getShape(), ext->getInElementType());
     Value newExtract = rewriter.create<vector::ExtractStridedSliceOp>(
-        op.getLoc(), extractTy, ext->getIn(), op.getOffsets(), op.getSizes(),
+        op.getLoc(), ext->getIn(), op.getOffsets(), op.getSizes(),
         op.getStrides());
     ext->recreateAndReplace(rewriter, op, newExtract);
     return success();
diff --git a/mlir/lib/Dialect/Vector/IR/VectorOps.cpp b/mlir/lib/Dialect/Vector/IR/VectorOps.cpp
index 5047bd925d4c5..dda6b916176fa 100644
--- a/mlir/lib/Dialect/Vector/IR/VectorOps.cpp
+++ b/mlir/lib/Dialect/Vector/IR/VectorOps.cpp
@@ -1340,13 +1340,6 @@ LogicalResult vector::ExtractOp::verify() {
   return success();
 }
 
-template <typename IntType>
-static SmallVector<IntType> extractVector(ArrayAttr arrayAttr) {
-  return llvm::to_vector<4>(llvm::map_range(
-      arrayAttr.getAsRange<IntegerAttr>(),
-      [](IntegerAttr attr) { return static_cast<IntType>(attr.getInt()); }));
-}
-
 /// Fold the result of chains of ExtractOp in place by simply concatenating the
 /// positions.
 static LogicalResult foldExtractOpFromExtractChain(ExtractOp extractOp) {
@@ -1770,8 +1763,7 @@ static Value foldExtractFromExtractStrided(ExtractOp extractOp) {
     return Value();
 
   // Trim offsets for dimensions fully extracted.
-  auto sliceOffsets =
-      extractVector<int64_t>(extractStridedSliceOp.getOffsets());
+  SmallVector<int64_t> sliceOffsets(extractStridedSliceOp.getOffsets());
   while (!sliceOffsets.empty()) {
     size_t lastOffset = sliceOffsets.size() - 1;
     if (sliceOffsets.back() != 0 ||
@@ -1825,12 +1817,10 @@ static Value foldExtractStridedOpFromInsertChain(ExtractOp extractOp) {
                              insertOp.getSourceVectorType().getRank();
     if (destinationRank > insertOp.getSourceVectorType().getRank())
       return Value();
-    auto insertOffsets = extractVector<int64_t>(insertOp.getOffsets());
+    ArrayRef<int64_t> insertOffsets = insertOp.getOffsets();
     ArrayRef<int64_t> extractOffsets = extractOp.getStaticPosition();
 
-    if (llvm::any_of(insertOp.getStrides(), [](Attribute attr) {
-          return llvm::cast<IntegerAttr>(attr).getInt() != 1;
-        }))
+    if (insertOp.hasNonUnitStrides())
       return Value();
     bool disjoint = false;
     SmallVector<int64_t, 4> offsetDiffs;
@@ -2899,6 +2889,95 @@ OpFoldResult vector::InsertOp::fold(FoldAdaptor adaptor) {
   return {};
 }
 
+//===----------------------------------------------------------------------===//
+// StridedSliceAttr
+//===----------------------------------------------------------------------===//
+
+Attribute StridedSliceAttr::parse(AsmParser &parser, Type attrType) {
+  SmallVector<int64_t> offsets;
+  SmallVector<int64_t> sizes;
+  SmallVector<int64_t> strides;
+  bool parsedNonStridedOffsets = false;
+  while (succeeded(parser.parseOptionalLSquare())) {
+    int64_t offset = 0;
+    if (parser.parseInteger(offset))
+      return {};
+    if (parser.parseOptionalColon()) {
+      // Case 1: [Offset, ...]
+      if (!strides.empty() || parsedNonStridedOffsets) {
+        parser.emitError(parser.getCurrentLocation(),
+                         "expected slice stride or size");
+        return {};
+      }
+      offsets.push_back(offset);
+      if (succeeded(parser.parseOptionalComma())) {
+        if (parser.parseCommaSeparatedList(
+                AsmParser::Delimiter::None, [&]() -> ParseResult {
+                  if (parser.parseInteger(offset))
+                    return failure();
+                  offsets.push_back(offset);
+                  return success();
+                })) {
+          return {};
+        }
+      }
+      if (parser.parseRSquare())
+        return {};
+      parsedNonStridedOffsets = true;
+      continue;
+    }
+    int64_t sizeOrStride = 0;
+    if (parser.parseInteger(sizeOrStride)) {
+      parser.emitError(parser.getCurrentLocation(),
+                       "expected slice stride or size");
+      return {};
+    }
+    if (parser.parseOptionalColon()) {
+      // Case 2: [Offset:Stride]
+      if (!sizes.empty() || parser.parseRSquare()) {
+        parser.emitError(parser.getCurrentLocation(), "expected slice size");
+        return {};
+      }
+      offsets.push_back(offset);
+      strides.push_back(sizeOrStride);
+      continue;
+    }
+    // Case 3: [Offset:Size:Stride]
+    if (sizes.size() < strides.size()) {
+      parser.emitError(parser.getCurrentLocation(), "unexpected slice size");
+      return {};
+    }
+    int64_t stride = 0;
+    if (parser.parseInteger(stride) || parser.parseRSquare()) {
+      parser.emitError(parser.getCurrentLocation(), "expected slice stride");
+      return {};
+    }
+    offsets.push_back(offset);
+    sizes.push_back(sizeOrStride);
+    strides.push_back(stride);
+  }
+  return StridedSliceAttr::get(parser.getContext(), offsets, sizes, strides);
+}
+
+void StridedSliceAttr::print(AsmPrinter &printer) const {
+  ArrayRef<int64_t> offsets = getOffsets();
+  ArrayRef<int64_t> sizes = getSizes();
+  ArrayRef<int64_t> strides = getStrides();
+  int nonStridedOffsets = offsets.size() - strides.size();
+  if (nonStridedOffsets > 0) {
+    printer << '[';
+    llvm::interleaveComma(offsets.take_front(nonStridedOffsets), printer);
+    printer << ']';
+  }
+  for (int d = nonStridedOffsets, e = offsets.size(); d < e; ++d) {
+    int strideIdx = d - nonStridedOffsets;
+    printer << '[' << offsets[d] << ':';
+    if (!sizes.empty())
+      printer << sizes[strideIdx] << ':';
+    printer << strides[strideIdx] << ']';
+  }
+}
+
 //===----------------------------------------------------------------------===//
 // InsertStridedSliceOp
 //===----------------------------------------------------------------------===//
@@ -2907,26 +2986,8 @@ void InsertStridedSliceOp::build(OpBuilder &builder, OperationState &result,
                                  Value source, Value dest,
                                  ArrayRef<int64_t> offsets,
                                  ArrayRef<int64_t> strides) {
-  result.addOperands({source, dest});
-  auto offsetsAttr = getVectorSubscriptAttr(builder, offsets);
-  auto stridesAttr = getVectorSubscriptAttr(builder, strides);
-  result.addTypes(dest.getType());
-  result.addAttribute(InsertStridedSliceOp::getOffsetsAttrName(result.name),
-                      offsetsAttr);
-  result.addAttribute(InsertStridedSliceOp::getStridesAttrName(result.name),
-                      stridesAttr);
-}
-
-// TODO: Should be moved to Tablegen ConfinedAttr attributes.
-template <typename OpType>
-static LogicalResult isIntegerArrayAttrSmallerThanShape(OpType op,
-                                                        ArrayAttr arrayAttr,
-                                                        ArrayRef<int64_t> shape,
-                                                        StringRef attrName) {
-  if (arrayAttr.size() > shape.size())
-    return op.emitOpError("expected ")
-           << attrName << " attribute of rank no greater than vector rank";
-  return success();
+  build(builder, result, source, dest,
+        StridedSliceAttr::get(builder.getContext(), offsets, strides));
 }
 
 // Returns true if all integers in `arrayAttr` are in the half-open [min, max}
@@ -2934,16 +2995,15 @@ static LogicalResult isIntegerArrayAttrSmallerThanShape(OpType op,
 // Otherwise, the admissible interval is [min, max].
 template <typename OpType>
 static LogicalResult
-isIntegerArrayAttrConfinedToRange(OpType op, ArrayAttr arrayAttr, int64_t min,
-                                  int64_t max, StringRef attrName,
-                                  bool halfOpen = true) {
-  for (auto attr : arrayAttr) {
-    auto val = llvm::cast<IntegerAttr>(attr).getInt();
+isIntArrayConfinedToRange(OpType op, ArrayRef<int64_t> array, int64_t min,
+                          int64_t max, StringRef arrayName,
+                          bool halfOpen = true) {
+  for (int64_t val : array) {
     auto upper = max;
     if (!halfOpen)
       upper += 1;
     if (val < min || val >= upper)
-      return op.emitOpError("expected ") << attrName << " to be confined to ["
+      return op.emitOpError("expected ") << arrayName << " to be confined to ["
                                          << min << ", " << upper << ")";
   }
   return success();
@@ -2954,13 +3014,12 @@ isIntegerArrayAttrConfinedToRange(OpType op, ArrayAttr arrayAttr, int64_t min,
 // Otherwise, the admissible interval is [min, max].
 template <typename OpType>
 static LogicalResult
-isIntegerArrayAttrConfinedToShape(OpType op, ArrayAttr arrayAttr,
-                                  ArrayRef<int64_t> shape, StringRef attrName,
-                                  bool halfOpen = true, int64_t min = 0) {
-  for (auto [index, attrDimPair] :
-       llvm::enumerate(llvm::zip_first(arrayAttr, shape))) {
-    int64_t val = llvm::cast<IntegerAttr>(std::get<0>(attrDimPair)).getInt();
-    int64_t max = std::get<1>(attrDimPair);
+isIntArrayConfinedToShape(OpType op, ArrayRef<int64_t> array,
+                          ArrayRef<int64_t> shape, StringRef attrName,
+                          bool halfOpen = true, int64_t min = 0) {
+  for (auto [index, dimPair] : llvm::enumerate(llvm::zip_first(array, shape))) {
+    int64_t val, max;
+    std::tie(val, max) = dimPair;
     if (!halfOpen)
       max += 1;
     if (val < min || val >= max)
@@ -2977,40 +3036,32 @@ isIntegerArrayAttrConfinedToShape(OpType op, ArrayAttr arrayAttr,
 // If `halfOpen` is true then the admissible interval is [min, max). Otherwise,
 // the admissible interval is [min, max].
 template <typename OpType>
-static LogicalResult isSumOfIntegerArrayAttrConfinedToShape(
-    OpType op, ArrayAttr arrayAttr1, ArrayAttr arrayAttr2,
-    ArrayRef<int64_t> shape, StringRef attrName1, StringRef attrName2,
+static LogicalResult isSumOfIntArrayConfinedToShape(
+    OpType op, ArrayRef<int64_t> array1, ArrayRef<int64_t> array2,
+    ArrayRef<int64_t> shape, StringRef arrayName1, StringRef arrayName2,
     bool halfOpen = true, int64_t min = 1) {
-  assert(arrayAttr1.size() <= shape.size());
-  assert(arrayAttr2.size() <= shape.size());
-  for (auto [index, it] :
-       llvm::enumerate(llvm::zip(arrayAttr1, arrayAttr2, shape))) {
-    auto val1 = llvm::cast<IntegerAttr>(std::get<0>(it)).getInt();
-    auto val2 = llvm::cast<IntegerAttr>(std::get<1>(it)).getInt();
-    int64_t max = std::get<2>(it);
+  assert(array1.size() <= shape.size());
+  assert(array2.size() <= shape.size());
+  for (auto [index, it] : llvm::enumerate(llvm::zip(array1, array2, shape))) {
+    int64_t val1, val2, max;
+    std::tie(val1, val2, max) = it;
     if (!halfOpen)
       max += 1;
     if (val1 + val2 < 0 || val1 + val2 >= max)
       return op.emitOpError("expected sum(")
-             << attrName1 << ", " << attrName2 << ") dimension " << index
+             << arrayName1 << ", " << arrayName2 << ") dimension " << index
              << " to be confined to [" << min << ", " << max << ")";
   }
   return success();
 }
 
-static ArrayAttr makeI64ArrayAttr(ArrayRef<int64_t> values,
-                                  MLIRContext *context) {
-  auto attrs = llvm::map_range(values, [context](int64_t v) -> Attribute {
-    return IntegerAttr::get(IntegerType::get(context, 64), APInt(64, v));
-  });
-  return ArrayAttr::get(context, llvm::to_vector<8>(attrs));
-}
-
 LogicalResult InsertStridedSliceOp::verify() {
   auto sourceVectorType = getSourceVectorType();
   auto destVectorType = getDestVectorType();
-  auto offsets = getOffsetsAttr();
-  auto strides = getStridesAttr();
+  auto offsets = getOffsets();
+  auto strides = getStrides();
+  if (!getStridedSlice().getSizes().empty())
+    return emitOpError("slice sizes not supported");
   if (offsets.size() != static_cast<unsigned>(destVectorType.getRank()))
     return emitOpError(
         "expected offsets of same size as destination vector rank");
@@ -3025,18 +3076,14 @@ LogicalResult InsertStridedSliceOp::verify() {
   SmallVector<int64_t, 4> sourceShapeAsDestShape(
       destShape.size() - sourceShape.size(), 0);
   sourceShapeAsDestShape.append(sourceShape.begin(), sourceShape.end());
-  auto offName = InsertStridedSliceOp::getOffsetsAttrName();
-  auto stridesName = InsertStridedSliceOp::getStridesAttrName();
-  if (failed(isIntegerArrayAttrConfinedToShape(*this, offsets, destShape,
-                                               offName)) ||
-      failed(isIntegerArrayAttrConfinedToRange(*this, strides, /*min=*/1,
-                                               /*max=*/1, stridesName,
-                                               /*halfOpen=*/false)) ||
-      failed(isSumOfIntegerArrayAttrConfinedToShape(
-          *this, offsets,
-          makeI64ArrayAttr(sourceShapeAsDestShape, getContext()), destShape,
-          offName, "source vector shape",
-          /*halfOpen=*/false, /*min=*/1)))
+  if (failed(isIntArrayConfinedToShape(*this, offsets, destShape, "offsets")) ||
+      failed(isIntArrayConfinedToRange(*this, strides, /*min=*/1,
+                                       /*max=*/1, "strides",
+                                       /*halfOpen=*/false)) ||
+      failed(isSumOfIntArrayConfinedToShape(*this, offsets,
+                                            sourceShapeAsDestShape, destShape,
+                                            "offsets", "source vector shape",
+                                            /*halfOpen=*/false, /*min=*/1)))
     return failure();
 
   unsigned rankDiff = destShape.size() - sourceShape.size();
@@ -3161,7 +3208,7 @@ class InsertStridedSliceConstantFolder final
     VectorType sliceVecTy = sourceValue.getType();
     ArrayRef<int64_t> sliceShape = sliceVecTy.getShape();
     int64_t rankDifference = destTy.getRank() - sliceVecTy.getRank();
-    SmallVector<int64_t, 4> offsets = getI64SubArray(op.getOffsets());
+    ArrayRef<int64_t> offsets = op.getOffsets();
     SmallVector<int64_t, 4> destStrides = computeStrides(destTy.getShape());
 
     // Calcualte the destination element indices by enumerating all slice
@@ -3398,14 +3445,15 @@ void ReshapeOp::getFixedVectorSizes(SmallVectorImpl<int64_t> &results) {
 //   2. Add sizes from 'vectorType' for remaining dims.
 // Scalable flags are inherited from 'vectorType'.
 static Type inferStridedSliceOpResultType(VectorType vectorType,
-                                          ArrayAttr offsets, ArrayAttr sizes,
-                                          ArrayAttr strides) {
+                                          ArrayRef<int64_t> offsets,
+                                          ArrayRef<int64_t> sizes,
+                                          ArrayRef<int64_t> strides) {
   assert(offsets.size() == sizes.size() && offsets.size() == strides.size());
   SmallVector<int64_t, 4> shape;
   shape.reserve(vectorType.getRank());
   unsigned idx = 0;
   for (unsigned e = offsets.size(); idx < e; ++idx)
-    shape.push_back(llvm::cast<IntegerAttr>(sizes[idx]).getInt());
+    shape.push_back(sizes[idx]);
   for (unsigned e = vectorType.getShape().size(); idx < e; ++idx)
     shape.push_back(vectorType.getShape()[idx]);
 
@@ -3418,51 +3466,49 @@ void ExtractStridedSliceOp::build(OpBuilder &builder, OperationState &result,
                                   ArrayRef<int64_t> sizes,
                                   ArrayRef<int64_t> strides) {
   result.addOperands(source);
-  auto offsetsAttr = getVectorSubscriptAttr(builder, offsets);
-  auto sizesAttr = getVectorSubscriptAttr(builder, sizes);
-  auto stridesAttr = getVectorSubscriptAttr(builder, strides);
-  result.addTypes(
-      inferStridedSliceOpResultType(llvm::cast<VectorType>(source.getType()),
-                                    offsetsAttr, sizesAttr, stridesAttr));
-  result.addAttribute(ExtractStridedSliceOp::getOffsetsAttrName(result.name),
-                      offsetsAttr);
-  result.addAttribute(ExtractStridedSliceOp::getSizesAttrName(result.name),
-                      sizesAttr);
-  result.addAttribute(ExtractStridedSliceOp::getStridesAttrName(result.name),
-                      stridesAttr);
+  auto stridedSliceAttr =
+      StridedSliceAttr::get(builder.getContext(), offsets, sizes, strides);
+  result.addTypes(inferStridedSliceOpResultType(
+      llvm::cast<VectorType>(source.getType()), offsets, sizes, strides));
+  result.addAttribute(
+      ExtractStridedSliceOp::getStridedSliceAttrName(result.name),
+      stridedSliceAttr);
 }
 
 LogicalResult ExtractStridedSliceOp::verify() {
   auto type = getSourceVectorType();
-  auto offsets = getOffsetsAttr();
-  auto sizes = getSizesAttr();
-  auto strides = getStridesAttr();
+  auto offsets = getOffsets();
+  auto sizes = getSizes();
+  auto strides = getStrides();
   if (offsets.size() != sizes.size() || offsets.size() != strides.size())
     return emitOpError(
         "expected offsets, sizes and strides attributes of same size");
 
   auto shape = type.getShape();
-  auto offName = getOffsetsAttrName();
-  auto sizesName = getSizesAttrName();
-  auto stridesName = getStridesAttrName();
-  if (failed(
-          isIntegerArrayAttrSmallerThanShape(*this, offsets, shape, offName)) ||
-      failed(
-          isIntegerArrayAttrSmallerThanShape(*this, sizes, shape, sizesName)) ||
-      failed(isIntegerArrayAttrSmallerThanShape(*this, strides, shape,
-                                                stridesName)) ||
-      failed(
-          isIntegerArrayAttrConfinedToShape(*this, offsets, shape, offName)) ||
-      failed(isIntegerArrayAttrConfinedToShape(*this, sizes, shape, sizesName,
-                                               /*halfOpen=*/false,
-                                               /*min=*/1)) ||
-      failed(isIntegerArrayAttrConfinedToRange(*this, strides, /*min=*/1,
-                                               /*max=*/1, stridesName,
-                                               /*halfOpen=*/false)) ||
-      failed(isSumOfIntegerArrayAttrConfinedToShape(*this, offsets, sizes,
-                                                    shape, offName, sizesName,
-                                                    /*halfOpen=*/false)))
+  auto isIntArraySmallerThanShape = [&](ArrayRef<int64_t> array,
+                                        StringRef arrayName) -> LogicalResult {
+    if (array.size() > shape.size())
+      return emitOpError("expected ")
+             << arrayName << " to have rank no greater than vector rank";
+    return success();
+  };
+
+  if (failed(isIntArraySmallerThanShape(offsets, "offsets")) ||
+      failed(isIntArraySmallerThanShape(sizes, "sizes")) ||
+      failed(isIntArraySmallerThanShape(strides, "strides")) ||
+      failed(isIntArrayConfinedToShape(*this, offsets, shape, "offsets")) ||
+      failed(isIntArrayConfinedToShape(*this, sizes, shape, "sizes",
+                                       /*halfOpen=*/false,
+                                       /*min=*/1)) ||
+      failed(isIntArrayConfinedToRange(*this, strides, /*min=*/1,
+                                       /*max=*/1, "strides",
+                                       /*halfOpen=*/false)) ||
+      failed(isSumOfIntArrayConfinedToShape(*this, offsets, sizes, shape,
+                                            "offsets", "sizes",
+                                            /*halfOpen=*/false))) {
     return failure();
+  }
 
   auto resultType = inferStridedSliceOpResultType(getSourceVectorType(),
                                                   offsets, sizes, strides);
@@ -3472,7 +3518,7 @@ LogicalResult ExtractStridedSliceOp::verify() {
   for (unsigned idx = 0; idx < sizes.size(); ++idx) {
     if (type.getScalableDims()[idx]) {
       auto inputDim = type.getShape()[idx];
-      auto inputSize = llvm::cast<IntegerAttr>(sizes[idx]).getInt();
+      auto inputSize = sizes[idx];
       if (inputDim != inputSize)
         return emitOpError("expected size at idx=")
                << idx
@@ -3490,20 +3536,16 @@ LogicalResult ExtractStridedSliceOp::verify() {
 // extracted vector is a subset of one of the vector inserted.
 static LogicalResult
 foldExtractStridedOpFromInsertChain(ExtractStridedSliceOp op) {
-  // Helper to extract integer out of ArrayAttr.
-  auto getElement = [](ArrayAttr array, int idx) {
-    return llvm::cast<IntegerAttr>(array[idx]).getInt();
-  };
-  ArrayAttr extractOffsets = op.getOffsets();
-  ArrayAttr extractStrides = op.getStrides();
-  ArrayAttr extractSizes = op.getSizes();
+  ArrayRef<int64_t> extractOffsets = op.getOffsets();
+  ArrayRef<int64_t> extractStrides = op.getStrides();
+  ArrayRef<int64_t> extractSizes = op.getSizes();
   auto insertOp = op.getVector().getDefiningOp<InsertStridedSliceOp>();
   while (insertOp) {
     if (op.getSourceVectorType().getRank() !=
         insertOp.getSourceVectorType().getRank())
       return failure();
-    ArrayAttr insertOffsets = insertOp.getOffsets();
-    ArrayAttr insertStrides = insertOp.getStrides();
+    ArrayRef<int64_t> insertOffsets = insertOp.getOffsets();
+    ArrayRef<int64_t> insertStrides = insertOp.getStrides();
     // If the rank of extract is greater than the rank of insert, we are likely
     // extracting a partial chunk of the vector inserted.
     if (extractOffsets.size() > insertOffsets.size())
@@ -3512,12 +3554,12 @@ foldExtractStridedOpFromInsertChain(ExtractStridedSliceOp op) {
     bool disjoint = false;
     SmallVector<int64_t, 4> offsetDiffs;
     for (unsigned dim = 0, e = extractOffsets.size(); dim < e; ++dim) {
-      if (getElement(extractStrides, dim) != getElement(insertStrides, dim))
+      if (extractStrides[dim] != insertStrides[dim])
         return failure();
-      int64_t start = getElement(insertOffsets, dim);
+      int64_t start = insertOffsets[dim];
       int64_t end = start + insertOp.getSourceVectorType().getDimSize(dim);
-      int64_t offset = getElement(extractOffsets, dim);
-      int64_t size = getElement(extractSizes, dim);
+      int64_t offset = extractOffsets[dim];
+      int64_t size = extractSizes[dim];
       // Check if the start of the extract offset is in the interval inserted.
       if (start <= offset && offset < end) {
         // If the extract interval overlaps but is not fully included we may
@@ -3535,7 +3577,9 @@ foldExtractStridedOpFromInsertChain(ExtractStridedSliceOp op) {
       op.setOperand(insertOp.getSource());
-      // OpBuilder is only used as a helper to build an I64ArrayAttr.
-      OpBuilder b(op.getContext());
-      op.setOffsetsAttr(b.getI64ArrayAttr(offsetDiffs));
+      auto stridedSliceAttr = StridedSliceAttr::get(
+          op.getContext(), offsetDiffs, op.getSizes(), op.getStrides());
+      op.setStridedSliceAttr(stridedSliceAttr);
       return success();
     }
     // If the chunk extracted is disjoint from the chunk inserted, keep looking
@@ -3558,11 +3602,6 @@ OpFoldResult ExtractStridedSliceOp::fold(FoldAdaptor adaptor) {
     return getResult();
   return {};
 }
-
-void ExtractStridedSliceOp::getOffsets(SmallVectorImpl<int64_t> &results) {
-  populateFromInt64AttrArray(getOffsets(), results);
-}
-
 namespace {
 
 // Pattern to rewrite an ExtractStridedSliceOp(ConstantMaskOp) to
@@ -3586,11 +3625,8 @@ class StridedSliceConstantMaskFolder final
     // Gather constant mask dimension sizes.
     ArrayRef<int64_t> maskDimSizes = constantMaskOp.getMaskDimSizes();
     // Gather strided slice offsets and sizes.
-    SmallVector<int64_t, 4> sliceOffsets;
-    populateFromInt64AttrArray(extractStridedSliceOp.getOffsets(),
-                               sliceOffsets);
-    SmallVector<int64_t, 4> sliceSizes;
-    populateFromInt64AttrArray(extractStridedSliceOp.getSizes(), sliceSizes);
+    ArrayRef<int64_t> sliceOffsets = extractStridedSliceOp.getOffsets();
+    ArrayRef<int64_t> sliceSizes = extractStridedSliceOp.getSizes();
 
     // Compute slice of vector mask region.
     SmallVector<int64_t, 4> sliceMaskDimSizes;
@@ -3682,10 +3718,10 @@ class StridedSliceNonSplatConstantFolder final
 
     // Expand offsets and sizes to match the vector rank.
     SmallVector<int64_t, 4> offsets(sliceRank, 0);
-    copy(getI64SubArray(extractStridedSliceOp.getOffsets()), offsets.begin());
+    copy(extractStridedSliceOp.getOffsets(), offsets.begin());
 
     SmallVector<int64_t, 4> sizes(sourceShape.begin(), sourceShape.end());
-    copy(getI64SubArray(extractStridedSliceOp.getSizes()), sizes.begin());
+    copy(extractStridedSliceOp.getSizes(), sizes.begin());
 
     // Calculate the slice elements by enumerating all slice positions and
     // linearizing them. The enumeration order is lexicographic which yields a
@@ -3748,10 +3784,9 @@ class StridedSliceBroadcast final
     bool isScalarSrc = (srcRank == 0 || srcVecType.getNumElements() == 1);
     if (!lowerDimMatch && !isScalarSrc) {
       source = rewriter.create<ExtractStridedSliceOp>(
-          op->getLoc(), source,
-          getI64SubArray(op.getOffsets(), /* dropFront=*/rankDiff),
-          getI64SubArray(op.getSizes(), /* dropFront=*/rankDiff),
-          getI64SubArray(op.getStrides(), /* dropFront=*/rankDiff));
+          op->getLoc(), source, op.getOffsets().drop_front(rankDiff),
+          op.getSizes().drop_front(rankDiff),
+          op.getStrides().drop_front(rankDiff));
     }
     rewriter.replaceOpWithNewOp<BroadcastOp>(op, op.getType(), source);
     return success();
diff --git a/mlir/lib/Dialect/Vector/Transforms/LowerVectorScan.cpp b/mlir/lib/Dialect/Vector/Transforms/LowerVectorScan.cpp
index 92fddc13d6333..46d61f2a5ddcc 100644
--- a/mlir/lib/Dialect/Vector/Transforms/LowerVectorScan.cpp
+++ b/mlir/lib/Dialect/Vector/Transforms/LowerVectorScan.cpp
@@ -130,23 +130,16 @@ struct ScanToArithOps : public OpRewritePattern<vector::ScanOp> {
     VectorType initialValueType = scanOp.getInitialValueType();
     int64_t initialValueRank = initialValueType.getRank();
 
-    SmallVector<int64_t> reductionShape(destShape.begin(), destShape.end());
-    reductionShape[reductionDim] = 1;
-    VectorType reductionType = VectorType::get(reductionShape, elType);
     SmallVector<int64_t> offsets(destRank, 0);
     SmallVector<int64_t> strides(destRank, 1);
     SmallVector<int64_t> sizes(destShape.begin(), destShape.end());
     sizes[reductionDim] = 1;
-    ArrayAttr scanSizes = rewriter.getI64ArrayAttr(sizes);
-    ArrayAttr scanStrides = rewriter.getI64ArrayAttr(strides);
 
     Value lastOutput, lastInput;
     for (int i = 0; i < destShape[reductionDim]; i++) {
       offsets[reductionDim] = i;
-      ArrayAttr scanOffsets = rewriter.getI64ArrayAttr(offsets);
       Value input = rewriter.create<vector::ExtractStridedSliceOp>(
-          loc, reductionType, scanOp.getSource(), scanOffsets, scanSizes,
-          scanStrides);
+          loc, scanOp.getSource(), offsets, sizes, strides);
       Value output;
       if (i == 0) {
         if (inclusive) {
diff --git a/mlir/lib/Dialect/Vector/Transforms/VectorDropLeadUnitDim.cpp b/mlir/lib/Dialect/Vector/Transforms/VectorDropLeadUnitDim.cpp
index 42ac717b44c4b..0b8a2ab6b2fa0 100644
--- a/mlir/lib/Dialect/Vector/Transforms/VectorDropLeadUnitDim.cpp
+++ b/mlir/lib/Dialect/Vector/Transforms/VectorDropLeadUnitDim.cpp
@@ -71,11 +71,6 @@ struct CastAwayExtractStridedSliceLeadingOneDim
     int64_t dropCount = oldSrcType.getRank() - newSrcType.getRank();
 
     VectorType oldDstType = extractOp.getType();
-    VectorType newDstType =
-        VectorType::get(oldDstType.getShape().drop_front(dropCount),
-                        oldDstType.getElementType(),
-                        oldDstType.getScalableDims().drop_front(dropCount));
-
     Location loc = extractOp.getLoc();
 
     Value newSrcVector = rewriter.create<vector::ExtractOp>(
@@ -83,15 +78,12 @@ struct CastAwayExtractStridedSliceLeadingOneDim
 
     // The offsets/sizes/strides can have fewer elements than the input
     // vector's rank: they are meant for the leading dimensions.
-    auto newOffsets = rewriter.getArrayAttr(
-        extractOp.getOffsets().getValue().drop_front(dropCount));
-    auto newSizes = rewriter.getArrayAttr(
-        extractOp.getSizes().getValue().drop_front(dropCount));
-    auto newStrides = rewriter.getArrayAttr(
-        extractOp.getStrides().getValue().drop_front(dropCount));
+    auto newOffsets = extractOp.getOffsets().drop_front(dropCount);
+    auto newSizes = extractOp.getSizes().drop_front(dropCount);
+    auto newStrides = extractOp.getStrides().drop_front(dropCount);
 
     auto newExtractOp = rewriter.create<vector::ExtractStridedSliceOp>(
-        loc, newDstType, newSrcVector, newOffsets, newSizes, newStrides);
+        loc, newSrcVector, newOffsets, newSizes, newStrides);
 
     rewriter.replaceOpWithNewOp<vector::BroadcastOp>(extractOp, oldDstType,
                                                      newExtractOp);
@@ -126,13 +118,11 @@ struct CastAwayInsertStridedSliceLeadingOneDim
     Value newDstVector = rewriter.create<vector::ExtractOp>(
         loc, insertOp.getDest(), splatZero(dstDropCount));
 
-    auto newOffsets = rewriter.getArrayAttr(
-        insertOp.getOffsets().getValue().take_back(newDstType.getRank()));
-    auto newStrides = rewriter.getArrayAttr(
-        insertOp.getStrides().getValue().take_back(newSrcType.getRank()));
+    auto newOffsets = insertOp.getOffsets().take_back(newDstType.getRank());
+    auto newStrides = insertOp.getStrides().take_back(newSrcType.getRank());
 
     auto newInsertOp = rewriter.create<vector::InsertStridedSliceOp>(
-        loc, newDstType, newSrcVector, newDstVector, newOffsets, newStrides);
+        loc, newSrcVector, newDstVector, newOffsets, newStrides);
 
     rewriter.replaceOpWithNewOp<vector::BroadcastOp>(insertOp, oldDstType,
                                                      newInsertOp);
diff --git a/mlir/lib/Dialect/Vector/Transforms/VectorInsertExtractStridedSliceRewritePatterns.cpp b/mlir/lib/Dialect/Vector/Transforms/VectorInsertExtractStridedSliceRewritePatterns.cpp
index ec2ef3fc7501c..4de58ed7526a9 100644
--- a/mlir/lib/Dialect/Vector/Transforms/VectorInsertExtractStridedSliceRewritePatterns.cpp
+++ b/mlir/lib/Dialect/Vector/Transforms/VectorInsertExtractStridedSliceRewritePatterns.cpp
@@ -63,7 +63,7 @@ class DecomposeDifferentRankInsertStridedSlice
     auto srcType = op.getSourceVectorType();
     auto dstType = op.getDestVectorType();
 
-    if (op.getOffsets().getValue().empty())
+    if (op.getOffsets().empty())
       return failure();
 
     auto loc = op.getLoc();
@@ -76,21 +76,17 @@ class DecomposeDifferentRankInsertStridedSlice
     // Extract / insert the subvector of matching rank and InsertStridedSlice
     // on it.
     Value extracted = rewriter.create<ExtractOp>(
-        loc, op.getDest(),
-        getI64SubArray(op.getOffsets(), /*dropFront=*/0,
-                       /*dropBack=*/rankRest));
+        loc, op.getDest(), op.getOffsets().drop_back(rankRest));
 
     // A different pattern will kick in for InsertStridedSlice with matching
     // ranks.
     auto stridedSliceInnerOp = rewriter.create<InsertStridedSliceOp>(
-        loc, op.getSource(), extracted,
-        getI64SubArray(op.getOffsets(), /*dropFront=*/rankDiff),
-        getI64SubArray(op.getStrides(), /*dropFront=*/0));
-
-    rewriter.replaceOpWithNewOp<InsertOp>(
-        op, stridedSliceInnerOp.getResult(), op.getDest(),
-        getI64SubArray(op.getOffsets(), /*dropFront=*/0,
-                       /*dropBack=*/rankRest));
+        loc, op.getSource(), extracted, op.getOffsets().drop_front(rankDiff),
+        op.getStrides());
+
+    rewriter.replaceOpWithNewOp<InsertOp>(op, stridedSliceInnerOp.getResult(),
+                                          op.getDest(),
+                                          op.getOffsets().drop_back(rankRest));
     return success();
   }
 };
@@ -119,7 +115,7 @@ class ConvertSameRankInsertStridedSliceIntoShuffle
     auto srcType = op.getSourceVectorType();
     auto dstType = op.getDestVectorType();
 
-    if (op.getOffsets().getValue().empty())
+    if (op.getOffsets().empty())
       return failure();
 
     int64_t srcRank = srcType.getRank();
@@ -133,11 +129,9 @@ class ConvertSameRankInsertStridedSliceIntoShuffle
       return success();
     }
 
-    int64_t offset =
-        cast<IntegerAttr>(op.getOffsets().getValue().front()).getInt();
+    int64_t offset = op.getOffsets().front();
     int64_t size = srcType.getShape().front();
-    int64_t stride =
-        cast<IntegerAttr>(op.getStrides().getValue().front()).getInt();
+    int64_t stride = op.getStrides().front();
 
     auto loc = op.getLoc();
     Value res = op.getDest();
@@ -181,9 +175,8 @@ class ConvertSameRankInsertStridedSliceIntoShuffle
         // 3. Reduce the problem to lowering a new InsertStridedSlice op with
         // smaller rank.
         extractedSource = rewriter.create<InsertStridedSliceOp>(
-            loc, extractedSource, extractedDest,
-            getI64SubArray(op.getOffsets(), /* dropFront=*/1),
-            getI64SubArray(op.getStrides(), /* dropFront=*/1));
+            loc, extractedSource, extractedDest, op.getOffsets().drop_front(1),
+            op.getStrides().drop_front(1));
       }
       // 4. Insert the extractedSource into the res vector.
       res = insertOne(rewriter, loc, extractedSource, res, off);
@@ -205,18 +198,16 @@ class Convert1DExtractStridedSliceIntoShuffle
                                 PatternRewriter &rewriter) const override {
     auto dstType = op.getType();
 
-    assert(!op.getOffsets().getValue().empty() && "Unexpected empty offsets");
+    assert(!op.getOffsets().empty() && "Unexpected empty offsets");
 
-    int64_t offset =
-        cast<IntegerAttr>(op.getOffsets().getValue().front()).getInt();
-    int64_t size = cast<IntegerAttr>(op.getSizes().getValue().front()).getInt();
-    int64_t stride =
-        cast<IntegerAttr>(op.getStrides().getValue().front()).getInt();
+    int64_t offset = op.getOffsets().front();
+    int64_t size = op.getSizes().front();
+    int64_t stride = op.getStrides().front();
 
     assert(dstType.getElementType().isSignlessIntOrIndexOrFloat());
 
     // Single offset can be more efficiently shuffled.
-    if (op.getOffsets().getValue().size() != 1)
+    if (op.getOffsets().size() != 1)
       return failure();
 
     SmallVector<int64_t, 4> offsets;
@@ -248,14 +239,12 @@ class Convert1DExtractStridedSliceIntoExtractInsertChain final
       return failure();
 
     // Only handle 1-D cases.
-    if (op.getOffsets().getValue().size() != 1)
+    if (op.getOffsets().size() != 1)
       return failure();
 
-    int64_t offset =
-        cast<IntegerAttr>(op.getOffsets().getValue().front()).getInt();
-    int64_t size = cast<IntegerAttr>(op.getSizes().getValue().front()).getInt();
-    int64_t stride =
-        cast<IntegerAttr>(op.getStrides().getValue().front()).getInt();
+    int64_t offset = op.getOffsets().front();
+    int64_t size = op.getSizes().front();
+    int64_t stride = op.getStrides().front();
 
     Location loc = op.getLoc();
     SmallVector<Value> elements;
@@ -294,13 +283,11 @@ class DecomposeNDExtractStridedSlice
                                 PatternRewriter &rewriter) const override {
     auto dstType = op.getType();
 
-    assert(!op.getOffsets().getValue().empty() && "Unexpected empty offsets");
+    assert(!op.getOffsets().empty() && "Unexpected empty offsets");
 
-    int64_t offset =
-        cast<IntegerAttr>(op.getOffsets().getValue().front()).getInt();
-    int64_t size = cast<IntegerAttr>(op.getSizes().getValue().front()).getInt();
-    int64_t stride =
-        cast<IntegerAttr>(op.getStrides().getValue().front()).getInt();
+    int64_t offset = op.getOffsets().front();
+    int64_t size = op.getSizes().front();
+    int64_t stride = op.getStrides().front();
 
     auto loc = op.getLoc();
     auto elemType = dstType.getElementType();
@@ -308,7 +295,7 @@ class DecomposeNDExtractStridedSlice
 
     // Single offset can be more efficiently shuffled. It's handled in
     // Convert1DExtractStridedSliceIntoShuffle.
-    if (op.getOffsets().getValue().size() == 1)
+    if (op.getOffsets().size() == 1)
       return failure();
 
     // Extract/insert on a lower ranked extract strided slice op.
@@ -319,9 +306,8 @@ class DecomposeNDExtractStridedSlice
          off += stride, ++idx) {
       Value one = extractOne(rewriter, loc, op.getVector(), off);
       Value extracted = rewriter.create<ExtractStridedSliceOp>(
-          loc, one, getI64SubArray(op.getOffsets(), /* dropFront=*/1),
-          getI64SubArray(op.getSizes(), /* dropFront=*/1),
-          getI64SubArray(op.getStrides(), /* dropFront=*/1));
+          loc, one, op.getOffsets().drop_front(), op.getSizes().drop_front(),
+          op.getStrides().drop_front());
       res = insertOne(rewriter, loc, extracted, res, idx);
     }
     rewriter.replaceOp(op, res);
diff --git a/mlir/lib/Dialect/Vector/Transforms/VectorLinearize.cpp b/mlir/lib/Dialect/Vector/Transforms/VectorLinearize.cpp
index 868397f2daaae..dbcdc6ea8f31a 100644
--- a/mlir/lib/Dialect/Vector/Transforms/VectorLinearize.cpp
+++ b/mlir/lib/Dialect/Vector/Transforms/VectorLinearize.cpp
@@ -160,10 +160,10 @@ struct LinearizeVectorExtractStridedSlice final
       return rewriter.notifyMatchFailure(
           extractOp, "Can't flatten since targetBitWidth <= OpSize");
 
-    ArrayAttr offsets = extractOp.getOffsets();
-    ArrayAttr sizes = extractOp.getSizes();
-    ArrayAttr strides = extractOp.getStrides();
-    if (!isConstantIntValue(strides[0], 1))
+    ArrayRef<int64_t> offsets = extractOp.getOffsets();
+    ArrayRef<int64_t> sizes = extractOp.getSizes();
+    ArrayRef<int64_t> strides = extractOp.getStrides();
+    if (strides[0] != 1)
       return rewriter.notifyMatchFailure(
           extractOp, "Strided slice with stride != 1 is not supported.");
     Value srcVector = adaptor.getVector();
@@ -185,8 +185,8 @@ struct LinearizeVectorExtractStridedSlice final
     }
     // Get total number of extracted slices.
     int64_t nExtractedSlices = 1;
-    for (Attribute size : sizes) {
-      nExtractedSlices *= cast<IntegerAttr>(size).getInt();
+    for (int64_t size : sizes) {
+      nExtractedSlices *= size;
     }
     // Compute the strides of the source vector considering first k dimensions.
     llvm::SmallVector<int64_t, 4> sourceStrides(kD, extractGranularitySize);
@@ -202,8 +202,7 @@ struct LinearizeVectorExtractStridedSlice final
     llvm::SmallVector<int64_t, 4> extractedStrides(kD, 1);
     // Compute extractedStrides.
     for (int i = kD - 2; i >= 0; --i) {
-      extractedStrides[i] =
-          extractedStrides[i + 1] * cast<IntegerAttr>(sizes[i + 1]).getInt();
+      extractedStrides[i] = extractedStrides[i + 1] * sizes[i + 1];
     }
     // Iterate over all extracted slices from 0 to nExtractedSlices - 1
     // and compute the multi-dimensional index and the corresponding linearized
@@ -220,9 +219,7 @@ struct LinearizeVectorExtractStridedSlice final
       // i.e. shift the multiDimIndex by the offsets.
       int64_t linearizedIndex = 0;
       for (int64_t j = 0; j < kD; ++j) {
-        linearizedIndex +=
-            (cast<IntegerAttr>(offsets[j]).getInt() + multiDimIndex[j]) *
-            sourceStrides[j];
+        linearizedIndex += (offsets[j] + multiDimIndex[j]) * sourceStrides[j];
       }
       // Fill the indices array from linearizedIndex to linearizedIndex +
       // extractGranularitySize.
diff --git a/mlir/lib/Dialect/Vector/Transforms/VectorTransforms.cpp b/mlir/lib/Dialect/Vector/Transforms/VectorTransforms.cpp
index 6777e589795c8..75820162dd9d5 100644
--- a/mlir/lib/Dialect/Vector/Transforms/VectorTransforms.cpp
+++ b/mlir/lib/Dialect/Vector/Transforms/VectorTransforms.cpp
@@ -548,13 +548,6 @@ struct ReorderElementwiseOpsOnTranspose final
   }
 };
 
-// Returns the values in `arrayAttr` as an integer vector.
-static SmallVector<int64_t> getIntValueVector(ArrayAttr arrayAttr) {
-  return llvm::to_vector<4>(
-      llvm::map_range(arrayAttr.getAsRange<IntegerAttr>(),
-                      [](IntegerAttr attr) { return attr.getInt(); }));
-}
-
 // Shuffles vector.bitcast op after vector.extract op.
 //
 // This transforms IR like:
@@ -661,8 +654,7 @@ struct BubbleDownBitCastForStridedSliceExtract
       return failure();
 
     // Only accept all one strides for now.
-    if (llvm::any_of(extractOp.getStrides().getAsValueRange<IntegerAttr>(),
-                     [](const APInt &val) { return !val.isOne(); }))
+    if (extractOp.hasNonUnitStrides())
       return failure();
 
     unsigned rank = extractOp.getSourceVectorType().getRank();
@@ -673,34 +665,24 @@ struct BubbleDownBitCastForStridedSliceExtract
     // are selecting the full range for the last bitcasted dimension; other
     // dimensions aren't affected. Otherwise, we need to scale down the last
     // dimension's offset given we are extracting from less elements now.
-    ArrayAttr newOffsets = extractOp.getOffsets();
+    SmallVector<int64_t> newOffsets(extractOp.getOffsets());
     if (newOffsets.size() == rank) {
-      SmallVector<int64_t> offsets = getIntValueVector(newOffsets);
-      if (offsets.back() % expandRatio != 0)
+      if (newOffsets.back() % expandRatio != 0)
         return failure();
-      offsets.back() = offsets.back() / expandRatio;
-      newOffsets = rewriter.getI64ArrayAttr(offsets);
+      newOffsets.back() = newOffsets.back() / expandRatio;
     }
 
     // Similarly for sizes.
-    ArrayAttr newSizes = extractOp.getSizes();
+    SmallVector<int64_t> newSizes(extractOp.getSizes());
     if (newSizes.size() == rank) {
-      SmallVector<int64_t> sizes = getIntValueVector(newSizes);
-      if (sizes.back() % expandRatio != 0)
+      if (newSizes.back() % expandRatio != 0)
         return failure();
-      sizes.back() = sizes.back() / expandRatio;
-      newSizes = rewriter.getI64ArrayAttr(sizes);
+      newSizes.back() = newSizes.back() / expandRatio;
     }
 
-    SmallVector<int64_t> dims =
-        llvm::to_vector<4>(cast<VectorType>(extractOp.getType()).getShape());
-    dims.back() = dims.back() / expandRatio;
-    VectorType newExtractType =
-        VectorType::get(dims, castSrcType.getElementType());
-
     auto newExtractOp = rewriter.create<vector::ExtractStridedSliceOp>(
-        extractOp.getLoc(), newExtractType, castOp.getSource(), newOffsets,
-        newSizes, extractOp.getStrides());
+        extractOp.getLoc(), castOp.getSource(), newOffsets, newSizes,
+        extractOp.getStrides());
 
     rewriter.replaceOpWithNewOp<vector::BitCastOp>(
         extractOp, extractOp.getType(), newExtractOp);
@@ -818,8 +800,7 @@ struct BubbleUpBitCastForStridedSliceInsert
       return failure();
 
     // Only accept all one strides for now.
-    if (llvm::any_of(insertOp.getStrides().getAsValueRange<IntegerAttr>(),
-                     [](const APInt &val) { return !val.isOne(); }))
+    if (insertOp.hasNonUnitStrides())
       return failure();
 
     unsigned rank = insertOp.getSourceVectorType().getRank();
@@ -836,13 +817,11 @@ struct BubbleUpBitCastForStridedSliceInsert
     if (insertOp.getSourceVectorType().getNumElements() % numElements != 0)
       return failure();
 
-    ArrayAttr newOffsets = insertOp.getOffsets();
+    SmallVector<int64_t> newOffsets(insertOp.getOffsets());
     assert(newOffsets.size() == rank);
-    SmallVector<int64_t> offsets = getIntValueVector(newOffsets);
-    if (offsets.back() % shrinkRatio != 0)
+    if (newOffsets.back() % shrinkRatio != 0)
       return failure();
-    offsets.back() = offsets.back() / shrinkRatio;
-    newOffsets = rewriter.getI64ArrayAttr(offsets);
+    newOffsets.back() = newOffsets.back() / shrinkRatio;
 
     SmallVector<int64_t> srcDims =
         llvm::to_vector<4>(insertOp.getSourceVectorType().getShape());
@@ -863,7 +842,7 @@ struct BubbleUpBitCastForStridedSliceInsert
         bitcastOp.getLoc(), newCastDstType, insertOp.getDest());
 
     rewriter.replaceOpWithNewOp<vector::InsertStridedSliceOp>(
-        bitcastOp, bitcastOp.getType(), newCastSrcOp, newCastDstOp, newOffsets,
+        bitcastOp, newCastSrcOp, newCastDstOp, newOffsets,
         insertOp.getStrides());
 
     return success();

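The second patch below updates the check tests with the gist script linked in its commit message. As a rough illustration of the kind of rewrite involved, here is a minimal, hypothetical sketch (not the actual script, which also handles `insert_strided_slice`, multi-line attribute dictionaries, and FileCheck captures) that converts a single-line `extract_strided_slice` from the old attribute-dictionary syntax to the new per-dimension range form:

```python
import re

# Old syntax:  {offsets = [0, 2], sizes = [2, 4], strides = [1, 1]}
# New syntax:  [0:2:1][2:4:1]   (one [offset:size:stride] per sliced dim)
OLD_EXTRACT_SYNTAX = re.compile(
    r"\s*\{offsets = \[([^\]]*)\], sizes = \[([^\]]*)\], "
    r"strides = \[([^\]]*)\]\}")

def upgrade_extract_syntax(line: str) -> str:
    """Rewrite one line of MLIR from the old attribute-dictionary syntax
    to the new range form (no-op if the line does not match)."""
    def repl(m: re.Match) -> str:
        offsets, sizes, strides = (
            [v.strip() for v in g.split(",")] for g in m.groups())
        # Emit one [offset:size:stride] group per sliced dimension.
        return "".join(
            f"[{o}:{s}:{st}]" for o, s, st in zip(offsets, sizes, strides))
    return OLD_EXTRACT_SYNTAX.sub(repl, line)
```
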
>From 1b343e032a70e45d4aaf2c45c3b47ac782c462f4 Mon Sep 17 00:00:00 2001
From: MacDue <macdue at dueutil.tech>
Date: Sat, 3 Aug 2024 19:34:17 +0100
Subject: [PATCH 2/3] Automatically upgrade MLIR check tests

All changes in this commit were done automatically with the following
Python script:

https://gist.github.com/MacDue/ca84d3ec19cf83ae71aab2be8f09c3c5
---
 .../ArithToAMDGPU/8-bit-float-saturation.mlir |   2 +-
 .../ArithToAMDGPU/8-bit-floats.mlir           |  30 +--
 .../func-signature-vector-unroll.mlir         | 108 +++++-----
 .../Conversion/ConvertToSPIRV/vector.mlir     |   2 +-
 ...fold-arith-vector-to-mma-ops-mma-sync.mlir |   8 +-
 .../vector-to-mma-ops-mma-sync.mlir           |  24 +--
 .../VectorToLLVM/vector-to-llvm.mlir          |  20 +-
 .../VectorToSPIRV/vector-to-spirv.mlir        |   8 +-
 mlir/test/Dialect/Arith/emulate-wide-int.mlir |  38 ++--
 mlir/test/Dialect/Arith/int-narrowing.mlir    |  32 +--
 .../Dialect/ArmNeon/lower-to-arm-neon.mlir    | 166 +++++++-------
 mlir/test/Dialect/ArmNeon/roundtrip.mlir      |   4 +-
 .../Dialect/GPU/subgroup-redule-lowering.mlir |  12 +-
 .../vectorize-conv-masked-and-scalable.mlir   |  20 +-
 .../Linalg/vectorize-convolution-flatten.mlir |  38 ++--
 .../Dialect/Linalg/vectorize-convolution.mlir | 192 ++++++++---------
 mlir/test/Dialect/Vector/canonicalize.mlir    | 126 +++++------
 mlir/test/Dialect/Vector/invalid.mlir         |  34 +--
 mlir/test/Dialect/Vector/linearize.mlir       |   6 +-
 mlir/test/Dialect/Vector/ops.mlir             |  16 +-
 .../Vector/vector-break-down-bitcast.mlir     |  24 +--
 ...tract-to-matrix-intrinsics-transforms.mlir |  16 +-
 .../vector-dropleadunitdim-transforms.mlir    |  20 +-
 ...vector-extract-strided-slice-lowering.mlir |   4 +-
 .../Vector/vector-scan-transforms.mlir        |  40 ++--
 ...vector-shape-cast-lowering-transforms.mlir |   8 +-
 .../Vector/vector-transfer-unroll.mlir        | 204 +++++++++---------
 .../Dialect/Vector/vector-transforms.mlir     |  72 +++----
 .../Dialect/Vector/vector-unroll-options.mlir |  52 ++---
 .../Dialect/Vector/CPU/contraction.mlir       |   4 +-
 .../Vector/CPU/extract-strided-slice.mlir     |   2 +-
 .../Vector/CPU/insert-strided-slice.mlir      |   8 +-
 .../Dialect/Vector/CPU/transpose.mlir         |   4 +-
 33 files changed, 672 insertions(+), 672 deletions(-)

diff --git a/mlir/test/Conversion/ArithToAMDGPU/8-bit-float-saturation.mlir b/mlir/test/Conversion/ArithToAMDGPU/8-bit-float-saturation.mlir
index c7f39440a349b..f8abf88c019a6 100644
--- a/mlir/test/Conversion/ArithToAMDGPU/8-bit-float-saturation.mlir
+++ b/mlir/test/Conversion/ArithToAMDGPU/8-bit-float-saturation.mlir
@@ -46,7 +46,7 @@ func.func @scalar_trunc(%v: f16) -> f8E5M2FNUZ {
 // CHECK: [[F0:%.+]] = vector.extract [[SATURATED]][0]
 // CHECK: [[F1:%.+]] = vector.extract [[SATURATED]][1]
 // CHECK: [[W0:%.+]] = amdgpu.packed_trunc_2xfp8 [[F0]], [[F1]] into undef[word 0] : f32 to vector<4xf8E4M3FNUZ>
-// CHECK: [[W:%.+]] = vector.extract_strided_slice [[W0]] {offsets = [0], sizes = [2], strides = [1]} : vector<4xf8E4M3FNUZ> to vector<2xf8E4M3FNUZ>
+// CHECK: [[W:%.+]] = vector.extract_strided_slice [[W0]][0:2:1] : vector<4xf8E4M3FNUZ> to vector<2xf8E4M3FNUZ>
 // CHECK: return [[W]] : vector<2xf8E4M3FNUZ>
 func.func @vector_trunc_short(%v: vector<2xf32>) -> vector<2xf8E4M3FNUZ> {
   %w = arith.truncf %v : vector<2xf32> to vector<2xf8E4M3FNUZ>
diff --git a/mlir/test/Conversion/ArithToAMDGPU/8-bit-floats.mlir b/mlir/test/Conversion/ArithToAMDGPU/8-bit-floats.mlir
index 26a222a4a788e..16a6733c8db32 100644
--- a/mlir/test/Conversion/ArithToAMDGPU/8-bit-floats.mlir
+++ b/mlir/test/Conversion/ArithToAMDGPU/8-bit-floats.mlir
@@ -34,7 +34,7 @@ func.func @vector_ext_short(%v: vector<2xf8E5M2FNUZ>) -> vector<2xf64> {
 
 // CHECK-LABEL: func.func @vector_ext_long
 // CHECK-SAME: ([[V:%.+]]: vector<9xf8E4M3FNUZ>)
-// CHECK: [[V0:%.+]] = vector.extract_strided_slice [[V]] {offsets = [0], sizes = [4], strides = [1]}
+// CHECK: [[V0:%.+]] = vector.extract_strided_slice [[V]][0:4:1]
 // CHECK: [[F0:%.+]] = amdgpu.ext_packed_fp8 [[V0]][0]
 // CHECK: [[W0:%.+]] = vector.insert [[F0]]
 // CHECK: [[F1:%.+]] = amdgpu.ext_packed_fp8 [[V0]][1]
@@ -44,7 +44,7 @@ func.func @vector_ext_short(%v: vector<2xf8E5M2FNUZ>) -> vector<2xf64> {
 // CHECK: [[F3:%.+]] = amdgpu.ext_packed_fp8 [[V0]][3]
 // CHECK: [[W3:%.+]] = vector.insert [[F3]], [[W2]]
 
-// CHECK: [[V1:%.+]] = vector.extract_strided_slice [[V]] {offsets = [4], sizes = [4], strides = [1]} : vector<9xf8E4M3FNUZ> to vector<4xf8E4M3FNUZ>
+// CHECK: [[V1:%.+]] = vector.extract_strided_slice [[V]][4:4:1] : vector<9xf8E4M3FNUZ> to vector<4xf8E4M3FNUZ>
 // CHECK: [[F4:%.+]] = amdgpu.ext_packed_fp8 [[V1]][0]
 // CHECK: [[W4:%.+]] = vector.insert [[F4]], [[W3]]
 // CHECK: [[F5:%.+]] = amdgpu.ext_packed_fp8 [[V1]][1]
@@ -54,7 +54,7 @@ func.func @vector_ext_short(%v: vector<2xf8E5M2FNUZ>) -> vector<2xf64> {
 // CHECK: [[F7:%.+]] = amdgpu.ext_packed_fp8 [[V1]][3]
 // CHECK: [[W7:%.+]] = vector.insert [[F7]], [[W6]]
 
-// CHECK: [[V2:%.+]] = vector.extract_strided_slice [[V]] {offsets = [8], sizes = [1], strides = [1]} : vector<9xf8E4M3FNUZ> to vector<1xf8E4M3FNUZ>
+// CHECK: [[V2:%.+]] = vector.extract_strided_slice [[V]][8:1:1] : vector<9xf8E4M3FNUZ> to vector<1xf8E4M3FNUZ>
 // CHECK: [[F8:%.+]] = amdgpu.ext_packed_fp8 [[V2]][0]
 // CHECK: [[W8:%.+]] = vector.insert [[F8]], [[W7]]
 // CHECK: return [[W8]]
@@ -87,7 +87,7 @@ func.func @scalar_trunc(%v: f16) -> f8E5M2FNUZ {
 // CHECK: [[V1:%.+]] = vector.extract [[V]][1]
 // CHECK: [[F1:%.+]] = arith.truncf [[V1]] : f64 to f32
 // CHECK: [[W0:%.+]] = amdgpu.packed_trunc_2xfp8 [[F0]], [[F1]] into undef[word 0] : f32 to vector<4xf8E5M2FNUZ>
-// CHECK: [[W:%.+]] = vector.extract_strided_slice [[W0]] {offsets = [0], sizes = [2], strides = [1]} : vector<4xf8E5M2FNUZ> to vector<2xf8E5M2FNUZ>
+// CHECK: [[W:%.+]] = vector.extract_strided_slice [[W0]][0:2:1] : vector<4xf8E5M2FNUZ> to vector<2xf8E5M2FNUZ>
 // CHECK: return [[W]] : vector<2xf8E5M2FNUZ>
 func.func @vector_trunc_short(%v: vector<2xf64>) -> vector<2xf8E5M2FNUZ> {
   %w = arith.truncf %v : vector<2xf64> to vector<2xf8E5M2FNUZ>
@@ -101,15 +101,15 @@ func.func @vector_trunc_short(%v: vector<2xf64>) -> vector<2xf8E5M2FNUZ> {
 // CHECK: [[ZEROES:%.+]] = arith.constant dense<0.000000e+00> : vector<9xf8E4M3FNUZ>
 // CHECK: [[T0:%.+]] = amdgpu.packed_trunc_2xfp8 %{{.+}}, %{{.+}} into undef[word 0]
 // CHECK: [[T1:%.+]] = amdgpu.packed_trunc_2xfp8 %{{.+}}, %{{.+}} into [[T0]][word 1]
-// CHECK: [[W0:%.+]] = vector.insert_strided_slice [[T1]], [[ZEROES]] {offsets = [0], strides = [1]}
+// CHECK: [[W0:%.+]] = vector.insert_strided_slice [[T1]], [[ZEROES]][0:1]
 
 // CHECK: [[T2:%.+]] = amdgpu.packed_trunc_2xfp8 %{{.+}}, %{{.+}} into undef[word 0]
 // CHECK: [[T3:%.+]] = amdgpu.packed_trunc_2xfp8 %{{.+}}, %{{.+}} into [[T2]][word 1]
-// CHECK: [[W1:%.+]] = vector.insert_strided_slice [[T3]], [[W0]] {offsets = [4], strides = [1]}
+// CHECK: [[W1:%.+]] = vector.insert_strided_slice [[T3]], [[W0]][4:1]
 
 // CHECK: [[T4:%.+]] = amdgpu.packed_trunc_2xfp8 %{{.+}}, undef into undef[word 0]
-// CHECK: [[T4_SHORT:%.+]] = vector.extract_strided_slice [[T4]] {offsets = [0], sizes = [1], strides = [1]}
-// CHECK: [[W:%.+]] = vector.insert_strided_slice [[T4_SHORT]], [[W1]] {offsets = [8], strides = [1]}
+// CHECK: [[T4_SHORT:%.+]] = vector.extract_strided_slice [[T4]][0:1:1]
+// CHECK: [[W:%.+]] = vector.insert_strided_slice [[T4_SHORT]], [[W1]][8:1]
 // CHECK: return [[W]]
 func.func @vector_trunc_long(%v: vector<9xf32>) -> vector<9xf8E4M3FNUZ> {
   %w = arith.truncf %v : vector<9xf32> to vector<9xf8E4M3FNUZ>
@@ -123,15 +123,15 @@ func.func @vector_trunc_long(%v: vector<9xf32>) -> vector<9xf8E4M3FNUZ> {
 // CHECK: [[ZEROES:%.+]] = arith.constant dense<0.000000e+00> : vector<9xf8E4M3FNUZ>
 // CHECK: [[T0:%.+]] = amdgpu.packed_trunc_2xfp8 %{{.+}}, %{{.+}} into undef[word 0]
 // CHECK: [[T1:%.+]] = amdgpu.packed_trunc_2xfp8 %{{.+}}, %{{.+}} into [[T0]][word 1]
-// CHECK: [[W0:%.+]] = vector.insert_strided_slice [[T1]], [[ZEROES]] {offsets = [0], strides = [1]}
+// CHECK: [[W0:%.+]] = vector.insert_strided_slice [[T1]], [[ZEROES]][0:1]
 
 // CHECK: [[T2:%.+]] = amdgpu.packed_trunc_2xfp8 %{{.+}}, %{{.+}} into undef[word 0]
 // CHECK: [[T3:%.+]] = amdgpu.packed_trunc_2xfp8 %{{.+}}, %{{.+}} into [[T2]][word 1]
-// CHECK: [[W1:%.+]] = vector.insert_strided_slice [[T3]], [[W0]] {offsets = [4], strides = [1]}
+// CHECK: [[W1:%.+]] = vector.insert_strided_slice [[T3]], [[W0]][4:1]
 
 // CHECK: [[T4:%.+]] = amdgpu.packed_trunc_2xfp8 %{{.+}}, undef into undef[word 0]
-// CHECK: [[T4_SHORT:%.+]] = vector.extract_strided_slice [[T4]] {offsets = [0], sizes = [1], strides = [1]}
-// CHECK: [[W:%.+]] = vector.insert_strided_slice [[T4_SHORT]], [[W1]] {offsets = [8], strides = [1]}
+// CHECK: [[T4_SHORT:%.+]] = vector.extract_strided_slice [[T4]][0:1:1]
+// CHECK: [[W:%.+]] = vector.insert_strided_slice [[T4_SHORT]], [[W1]][8:1]
 // CHECK: [[RE:%.+]] = vector.shape_cast [[W]] : vector<9xf8E4M3FNUZ> to vector<1x9xf8E4M3FNUZ>
 // CHECK: return [[RE]]
 func.func @vector_trunc_long_2d(%v: vector<1x9xf32>) -> vector<1x9xf8E4M3FNUZ> {
@@ -144,7 +144,7 @@ func.func @vector_trunc_long_2d(%v: vector<1x9xf32>) -> vector<1x9xf8E4M3FNUZ> {
 // CHECK-LABEL: func.func @vector_ext_long_2d
 // CHECK-SAME: ([[V:%.+]]: vector<1x9xf8E4M3FNUZ>)
 // CHECK: [[CAST:%.+]] = vector.shape_cast [[V]] : vector<1x9xf8E4M3FNUZ> to vector<9xf8E4M3FNUZ>
-// CHECK: [[V0:%.+]] = vector.extract_strided_slice [[CAST]] {offsets = [0], sizes = [4], strides = [1]}
+// CHECK: [[V0:%.+]] = vector.extract_strided_slice [[CAST]][0:4:1]
 // CHECK: [[F0:%.+]] = amdgpu.ext_packed_fp8 [[V0]][0]
 // CHECK: [[W0:%.+]] = vector.insert [[F0]]
 // CHECK: [[F1:%.+]] = amdgpu.ext_packed_fp8 [[V0]][1]
@@ -154,7 +154,7 @@ func.func @vector_trunc_long_2d(%v: vector<1x9xf32>) -> vector<1x9xf8E4M3FNUZ> {
 // CHECK: [[F3:%.+]] = amdgpu.ext_packed_fp8 [[V0]][3]
 // CHECK: [[W3:%.+]] = vector.insert [[F3]], [[W2]]
 
-// CHECK: [[V1:%.+]] = vector.extract_strided_slice [[CAST]] {offsets = [4], sizes = [4], strides = [1]} : vector<9xf8E4M3FNUZ> to vector<4xf8E4M3FNUZ>
+// CHECK: [[V1:%.+]] = vector.extract_strided_slice [[CAST]][4:4:1] : vector<9xf8E4M3FNUZ> to vector<4xf8E4M3FNUZ>
 // CHECK: [[F4:%.+]] = amdgpu.ext_packed_fp8 [[V1]][0]
 // CHECK: [[W4:%.+]] = vector.insert [[F4]], [[W3]]
 // CHECK: [[F5:%.+]] = amdgpu.ext_packed_fp8 [[V1]][1]
@@ -164,7 +164,7 @@ func.func @vector_trunc_long_2d(%v: vector<1x9xf32>) -> vector<1x9xf8E4M3FNUZ> {
 // CHECK: [[F7:%.+]] = amdgpu.ext_packed_fp8 [[V1]][3]
 // CHECK: [[W7:%.+]] = vector.insert [[F7]], [[W6]]
 
-// CHECK: [[V2:%.+]] = vector.extract_strided_slice [[CAST]] {offsets = [8], sizes = [1], strides = [1]} : vector<9xf8E4M3FNUZ> to vector<1xf8E4M3FNUZ>
+// CHECK: [[V2:%.+]] = vector.extract_strided_slice [[CAST]][8:1:1] : vector<9xf8E4M3FNUZ> to vector<1xf8E4M3FNUZ>
 // CHECK: [[F8:%.+]] = amdgpu.ext_packed_fp8 [[V2]][0]
 // CHECK: [[W8:%.+]] = vector.insert [[F8]], [[W7]]
 // CHECK: [[CAST:%.+]] = vector.shape_cast [[W8]] : vector<9xf32> to vector<1x9xf32>
diff --git a/mlir/test/Conversion/ConvertToSPIRV/func-signature-vector-unroll.mlir b/mlir/test/Conversion/ConvertToSPIRV/func-signature-vector-unroll.mlir
index c018ccb924983..d49b5c81afbff 100644
--- a/mlir/test/Conversion/ConvertToSPIRV/func-signature-vector-unroll.mlir
+++ b/mlir/test/Conversion/ConvertToSPIRV/func-signature-vector-unroll.mlir
@@ -22,16 +22,16 @@ func.func @simple_vector_4(%arg0 : vector<4xi32>) -> vector<4xi32> {
 // CHECK-SAME: (%[[ARG0:.+]]: vector<1xi32>, %[[ARG1:.+]]: vector<1xi32>, %[[ARG2:.+]]: vector<1xi32>, %[[ARG3:.+]]: vector<1xi32>, %[[ARG4:.+]]: vector<1xi32>)
 func.func @simple_vector_5(%arg0 : vector<5xi32>) -> vector<5xi32> {
   // CHECK: %[[CST:.*]] = arith.constant dense<0> : vector<5xi32>
-  // CHECK: %[[INSERT0:.*]] = vector.insert_strided_slice %[[ARG0]], %[[CST]] {offsets = [0], strides = [1]} : vector<1xi32> into vector<5xi32>
-  // CHECK: %[[INSERT1:.*]] = vector.insert_strided_slice %[[ARG1]], %[[INSERT0]] {offsets = [1], strides = [1]} : vector<1xi32> into vector<5xi32>
-  // CHECK: %[[INSERT2:.*]] = vector.insert_strided_slice %[[ARG2]], %[[INSERT1]] {offsets = [2], strides = [1]} : vector<1xi32> into vector<5xi32>
-  // CHECK: %[[INSERT3:.*]] = vector.insert_strided_slice %[[ARG3]], %[[INSERT2]] {offsets = [3], strides = [1]} : vector<1xi32> into vector<5xi32>
-  // CHECK: %[[INSERT4:.*]] = vector.insert_strided_slice %[[ARG4]], %[[INSERT3]] {offsets = [4], strides = [1]} : vector<1xi32> into vector<5xi32>
-  // CHECK: %[[EXTRACT0:.*]] = vector.extract_strided_slice %[[INSERT4]] {offsets = [0], sizes = [1], strides = [1]} : vector<5xi32> to vector<1xi32>
-  // CHECK: %[[EXTRACT1:.*]] = vector.extract_strided_slice %[[INSERT4]] {offsets = [1], sizes = [1], strides = [1]} : vector<5xi32> to vector<1xi32>
-  // CHECK: %[[EXTRACT2:.*]] = vector.extract_strided_slice %[[INSERT4]] {offsets = [2], sizes = [1], strides = [1]} : vector<5xi32> to vector<1xi32>
-  // CHECK: %[[EXTRACT3:.*]] = vector.extract_strided_slice %[[INSERT4]] {offsets = [3], sizes = [1], strides = [1]} : vector<5xi32> to vector<1xi32>
-  // CHECK: %[[EXTRACT4:.*]] = vector.extract_strided_slice %[[INSERT4]] {offsets = [4], sizes = [1], strides = [1]} : vector<5xi32> to vector<1xi32>
+  // CHECK: %[[INSERT0:.*]] = vector.insert_strided_slice %[[ARG0]], %[[CST]][0:1] : vector<1xi32> into vector<5xi32>
+  // CHECK: %[[INSERT1:.*]] = vector.insert_strided_slice %[[ARG1]], %[[INSERT0]][1:1] : vector<1xi32> into vector<5xi32>
+  // CHECK: %[[INSERT2:.*]] = vector.insert_strided_slice %[[ARG2]], %[[INSERT1]][2:1] : vector<1xi32> into vector<5xi32>
+  // CHECK: %[[INSERT3:.*]] = vector.insert_strided_slice %[[ARG3]], %[[INSERT2]][3:1] : vector<1xi32> into vector<5xi32>
+  // CHECK: %[[INSERT4:.*]] = vector.insert_strided_slice %[[ARG4]], %[[INSERT3]][4:1] : vector<1xi32> into vector<5xi32>
+  // CHECK: %[[EXTRACT0:.*]] = vector.extract_strided_slice %[[INSERT4]][0:1:1] : vector<5xi32> to vector<1xi32>
+  // CHECK: %[[EXTRACT1:.*]] = vector.extract_strided_slice %[[INSERT4]][1:1:1] : vector<5xi32> to vector<1xi32>
+  // CHECK: %[[EXTRACT2:.*]] = vector.extract_strided_slice %[[INSERT4]][2:1:1] : vector<5xi32> to vector<1xi32>
+  // CHECK: %[[EXTRACT3:.*]] = vector.extract_strided_slice %[[INSERT4]][3:1:1] : vector<5xi32> to vector<1xi32>
+  // CHECK: %[[EXTRACT4:.*]] = vector.extract_strided_slice %[[INSERT4]][4:1:1] : vector<5xi32> to vector<1xi32>
   // CHECK: return %[[EXTRACT0]], %[[EXTRACT1]], %[[EXTRACT2]], %[[EXTRACT3]], %[[EXTRACT4]] : vector<1xi32>, vector<1xi32>, vector<1xi32>, vector<1xi32>, vector<1xi32>
   return %arg0 : vector<5xi32>
 }
@@ -42,10 +42,10 @@ func.func @simple_vector_5(%arg0 : vector<5xi32>) -> vector<5xi32> {
 // CHECK-SAME: (%[[ARG0:.+]]: vector<3xi32>, %[[ARG1:.+]]: vector<3xi32>)
 func.func @simple_vector_6(%arg0 : vector<6xi32>) -> vector<6xi32> {
   // CHECK: %[[CST:.*]] = arith.constant dense<0> : vector<6xi32>
-  // CHECK: %[[INSERT0:.*]] = vector.insert_strided_slice %[[ARG0]], %[[CST]] {offsets = [0], strides = [1]} : vector<3xi32> into vector<6xi32>
-  // CHECK: %[[INSERT1:.*]] = vector.insert_strided_slice %[[ARG1]], %[[INSERT0]] {offsets = [3], strides = [1]} : vector<3xi32> into vector<6xi32>
-  // CHECK: %[[EXTRACT0:.*]] = vector.extract_strided_slice %[[INSERT1]] {offsets = [0], sizes = [3], strides = [1]} : vector<6xi32> to vector<3xi32>
-  // CHECK: %[[EXTRACT1:.*]] = vector.extract_strided_slice %[[INSERT1]] {offsets = [3], sizes = [3], strides = [1]} : vector<6xi32> to vector<3xi32>
+  // CHECK: %[[INSERT0:.*]] = vector.insert_strided_slice %[[ARG0]], %[[CST]][0:1] : vector<3xi32> into vector<6xi32>
+  // CHECK: %[[INSERT1:.*]] = vector.insert_strided_slice %[[ARG1]], %[[INSERT0]][3:1] : vector<3xi32> into vector<6xi32>
+  // CHECK: %[[EXTRACT0:.*]] = vector.extract_strided_slice %[[INSERT1]][0:3:1] : vector<6xi32> to vector<3xi32>
+  // CHECK: %[[EXTRACT1:.*]] = vector.extract_strided_slice %[[INSERT1]][3:3:1] : vector<6xi32> to vector<3xi32>
   // CHECK: return %[[EXTRACT0]], %[[EXTRACT1]] : vector<3xi32>, vector<3xi32>
   return %arg0 : vector<6xi32>
 }
@@ -56,10 +56,10 @@ func.func @simple_vector_6(%arg0 : vector<6xi32>) -> vector<6xi32> {
 // CHECK-SAME: (%[[ARG0:.+]]: vector<4xi32>, %[[ARG1:.+]]: vector<4xi32>)
 func.func @simple_vector_8(%arg0 : vector<8xi32>) -> vector<8xi32> {
   // CHECK: %[[CST:.*]] = arith.constant dense<0> : vector<8xi32>
-  // CHECK: %[[INSERT0:.*]] = vector.insert_strided_slice %[[ARG0]], %[[CST]] {offsets = [0], strides = [1]} : vector<4xi32> into vector<8xi32>
-  // CHECK: %[[INSERT1:.*]] = vector.insert_strided_slice %[[ARG1]], %[[INSERT0]] {offsets = [4], strides = [1]} : vector<4xi32> into vector<8xi32>
-  // CHECK: %[[EXTRACT0:.*]] = vector.extract_strided_slice %[[INSERT1]] {offsets = [0], sizes = [4], strides = [1]} : vector<8xi32> to vector<4xi32>
-  // CHECK: %[[EXTRACT1:.*]] = vector.extract_strided_slice %[[INSERT1]] {offsets = [4], sizes = [4], strides = [1]} : vector<8xi32> to vector<4xi32>
+  // CHECK: %[[INSERT0:.*]] = vector.insert_strided_slice %[[ARG0]], %[[CST]][0:1] : vector<4xi32> into vector<8xi32>
+  // CHECK: %[[INSERT1:.*]] = vector.insert_strided_slice %[[ARG1]], %[[INSERT0]][4:1] : vector<4xi32> into vector<8xi32>
+  // CHECK: %[[EXTRACT0:.*]] = vector.extract_strided_slice %[[INSERT1]][0:4:1] : vector<8xi32> to vector<4xi32>
+  // CHECK: %[[EXTRACT1:.*]] = vector.extract_strided_slice %[[INSERT1]][4:4:1] : vector<8xi32> to vector<4xi32>
   // CHECK: return %[[EXTRACT0]], %[[EXTRACT1]] : vector<4xi32>, vector<4xi32>
   return %arg0 : vector<8xi32>
 }
@@ -70,17 +70,17 @@ func.func @simple_vector_8(%arg0 : vector<8xi32>) -> vector<8xi32> {
 // CHECK-SAME: (%[[ARG0:.+]]: vector<4xi32>, %[[ARG1:.+]]: vector<4xi32>, %[[ARG2:.+]]: vector<4xi32>, %[[ARG3:.+]]: vector<4xi32>)
 func.func @simple_vector_2d(%arg0 : vector<4x4xi32>) -> vector<4x4xi32> {
   // CHECK: %[[CST:.*]] = arith.constant dense<0> : vector<4x4xi32>
-  // CHECK: %[[INSERT0:.*]] = vector.insert_strided_slice %[[ARG0]], %[[CST]] {offsets = [0, 0], strides = [1]} : vector<4xi32> into vector<4x4xi32>
-  // CHECK: %[[INSERT1:.*]] = vector.insert_strided_slice %[[ARG1]], %[[INSERT0]] {offsets = [1, 0], strides = [1]} : vector<4xi32> into vector<4x4xi32>
-  // CHECK: %[[INSERT2:.*]] = vector.insert_strided_slice %[[ARG2]], %[[INSERT1]] {offsets = [2, 0], strides = [1]} : vector<4xi32> into vector<4x4xi32>
-  // CHECK: %[[INSERT3:.*]] = vector.insert_strided_slice %[[ARG3]], %[[INSERT2]] {offsets = [3, 0], strides = [1]} : vector<4xi32> into vector<4x4xi32>
-  // CHECK: %[[EXTRACT0:.*]] = vector.extract_strided_slice %[[INSERT3]] {offsets = [0, 0], sizes = [1, 4], strides = [1, 1]} : vector<4x4xi32> to vector<1x4xi32>
+  // CHECK: %[[INSERT0:.*]] = vector.insert_strided_slice %[[ARG0]], %[[CST]][0][0:1] : vector<4xi32> into vector<4x4xi32>
+  // CHECK: %[[INSERT1:.*]] = vector.insert_strided_slice %[[ARG1]], %[[INSERT0]][1][0:1] : vector<4xi32> into vector<4x4xi32>
+  // CHECK: %[[INSERT2:.*]] = vector.insert_strided_slice %[[ARG2]], %[[INSERT1]][2][0:1] : vector<4xi32> into vector<4x4xi32>
+  // CHECK: %[[INSERT3:.*]] = vector.insert_strided_slice %[[ARG3]], %[[INSERT2]][3][0:1] : vector<4xi32> into vector<4x4xi32>
+  // CHECK: %[[EXTRACT0:.*]] = vector.extract_strided_slice %[[INSERT3]][0:1:1][0:4:1] : vector<4x4xi32> to vector<1x4xi32>
   // CHECK: %[[EXTRACT0_1:.*]] = vector.extract %[[EXTRACT0]][0] : vector<4xi32> from vector<1x4xi32>
-  // CHECK: %[[EXTRACT1:.*]] = vector.extract_strided_slice %[[INSERT3]] {offsets = [1, 0], sizes = [1, 4], strides = [1, 1]} : vector<4x4xi32> to vector<1x4xi32>
+  // CHECK: %[[EXTRACT1:.*]] = vector.extract_strided_slice %[[INSERT3]][1:1:1][0:4:1] : vector<4x4xi32> to vector<1x4xi32>
   // CHECK: %[[EXTRACT1_1:.*]] = vector.extract %[[EXTRACT1]][0] : vector<4xi32> from vector<1x4xi32>
-  // CHECK: %[[EXTRACT2:.*]] = vector.extract_strided_slice %[[INSERT3]] {offsets = [2, 0], sizes = [1, 4], strides = [1, 1]} : vector<4x4xi32> to vector<1x4xi32>
+  // CHECK: %[[EXTRACT2:.*]] = vector.extract_strided_slice %[[INSERT3]][2:1:1][0:4:1] : vector<4x4xi32> to vector<1x4xi32>
   // CHECK: %[[EXTRACT2_1:.*]] = vector.extract %[[EXTRACT2]][0] : vector<4xi32> from vector<1x4xi32>
-  // CHECK: %[[EXTRACT3:.*]] = vector.extract_strided_slice %[[INSERT3]] {offsets = [3, 0], sizes = [1, 4], strides = [1, 1]} : vector<4x4xi32> to vector<1x4xi32>
+  // CHECK: %[[EXTRACT3:.*]] = vector.extract_strided_slice %[[INSERT3]][3:1:1][0:4:1] : vector<4x4xi32> to vector<1x4xi32>
   // CHECK: %[[EXTRACT3_1:.*]] = vector.extract %[[EXTRACT3]][0] : vector<4xi32> from vector<1x4xi32>
   // CHECK: return %[[EXTRACT0_1]], %[[EXTRACT1_1]], %[[EXTRACT2_1]], %[[EXTRACT3_1]] : vector<4xi32>, vector<4xi32>, vector<4xi32>, vector<4xi32>
   return %arg0 : vector<4x4xi32>
@@ -93,14 +93,14 @@ func.func @simple_vector_2d(%arg0 : vector<4x4xi32>) -> vector<4x4xi32> {
 func.func @vector_6and8(%arg0 : vector<6xi32>, %arg1 : vector<8xi32>) -> (vector<6xi32>, vector<8xi32>) {
   // CHECK: %[[CST:.*]] = arith.constant dense<0> : vector<8xi32>
   // CHECK: %[[CST0:.*]] = arith.constant dense<0> : vector<6xi32>
-  // CHECK: %[[INSERT0:.*]] = vector.insert_strided_slice %[[ARG0]], %[[CST0]] {offsets = [0], strides = [1]} : vector<3xi32> into vector<6xi32>
-  // CHECK: %[[INSERT1:.*]] = vector.insert_strided_slice %[[ARG1]], %[[INSERT0]] {offsets = [3], strides = [1]} : vector<3xi32> into vector<6xi32>
-  // CHECK: %[[INSERT2:.*]] = vector.insert_strided_slice %[[ARG2]], %[[CST]] {offsets = [0], strides = [1]} : vector<4xi32> into vector<8xi32>
-  // CHECK: %[[INSERT3:.*]] = vector.insert_strided_slice %[[ARG3]], %[[INSERT2]] {offsets = [4], strides = [1]} : vector<4xi32> into vector<8xi32>
-  // CHECK: %[[EXTRACT0:.*]] = vector.extract_strided_slice %[[INSERT1]] {offsets = [0], sizes = [3], strides = [1]} : vector<6xi32> to vector<3xi32>
-  // CHECK: %[[EXTRACT1:.*]] = vector.extract_strided_slice %[[INSERT1]] {offsets = [3], sizes = [3], strides = [1]} : vector<6xi32> to vector<3xi32>
-  // CHECK: %[[EXTRACT2:.*]] = vector.extract_strided_slice %[[INSERT3]] {offsets = [0], sizes = [4], strides = [1]} : vector<8xi32> to vector<4xi32>
-  // CHECK: %[[EXTRACT3:.*]] = vector.extract_strided_slice %[[INSERT3]] {offsets = [4], sizes = [4], strides = [1]} : vector<8xi32> to vector<4xi32>
+  // CHECK: %[[INSERT0:.*]] = vector.insert_strided_slice %[[ARG0]], %[[CST0]][0:1] : vector<3xi32> into vector<6xi32>
+  // CHECK: %[[INSERT1:.*]] = vector.insert_strided_slice %[[ARG1]], %[[INSERT0]][3:1] : vector<3xi32> into vector<6xi32>
+  // CHECK: %[[INSERT2:.*]] = vector.insert_strided_slice %[[ARG2]], %[[CST]][0:1] : vector<4xi32> into vector<8xi32>
+  // CHECK: %[[INSERT3:.*]] = vector.insert_strided_slice %[[ARG3]], %[[INSERT2]][4:1] : vector<4xi32> into vector<8xi32>
+  // CHECK: %[[EXTRACT0:.*]] = vector.extract_strided_slice %[[INSERT1]][0:3:1] : vector<6xi32> to vector<3xi32>
+  // CHECK: %[[EXTRACT1:.*]] = vector.extract_strided_slice %[[INSERT1]][3:3:1] : vector<6xi32> to vector<3xi32>
+  // CHECK: %[[EXTRACT2:.*]] = vector.extract_strided_slice %[[INSERT3]][0:4:1] : vector<8xi32> to vector<4xi32>
+  // CHECK: %[[EXTRACT3:.*]] = vector.extract_strided_slice %[[INSERT3]][4:4:1] : vector<8xi32> to vector<4xi32>
   // CHECK: return %[[EXTRACT0]], %[[EXTRACT1]], %[[EXTRACT2]], %[[EXTRACT3]] : vector<3xi32>, vector<3xi32>, vector<4xi32>, vector<4xi32>
   return %arg0, %arg1 : vector<6xi32>, vector<8xi32>
 }
@@ -111,10 +111,10 @@ func.func @vector_6and8(%arg0 : vector<6xi32>, %arg1 : vector<8xi32>) -> (vector
 // CHECK-SAME: (%[[ARG0:.+]]: vector<3xi32>, %[[ARG1:.+]]: vector<4xi32>, %[[ARG2:.+]]: vector<4xi32>)
 func.func @vector_3and8(%arg0 : vector<3xi32>, %arg1 : vector<8xi32>) -> (vector<3xi32>, vector<8xi32>) {
   // CHECK: %[[CST:.*]] = arith.constant dense<0> : vector<8xi32>
-  // CHECK: %[[INSERT0:.*]] = vector.insert_strided_slice %[[ARG1]], %[[CST]] {offsets = [0], strides = [1]} : vector<4xi32> into vector<8xi32>
-  // CHECK: %[[INSERT1:.*]] = vector.insert_strided_slice %[[ARG2]], %[[INSERT0]] {offsets = [4], strides = [1]} : vector<4xi32> into vector<8xi32>
-  // CHECK: %[[EXTRACT0:.*]] = vector.extract_strided_slice %[[INSERT1]] {offsets = [0], sizes = [4], strides = [1]} : vector<8xi32> to vector<4xi32>
-  // CHECK: %[[EXTRACT1:.*]] = vector.extract_strided_slice %[[INSERT1]] {offsets = [4], sizes = [4], strides = [1]} : vector<8xi32> to vector<4xi32>
+  // CHECK: %[[INSERT0:.*]] = vector.insert_strided_slice %[[ARG1]], %[[CST]][0:1] : vector<4xi32> into vector<8xi32>
+  // CHECK: %[[INSERT1:.*]] = vector.insert_strided_slice %[[ARG2]], %[[INSERT0]][4:1] : vector<4xi32> into vector<8xi32>
+  // CHECK: %[[EXTRACT0:.*]] = vector.extract_strided_slice %[[INSERT1]][0:4:1] : vector<8xi32> to vector<4xi32>
+  // CHECK: %[[EXTRACT1:.*]] = vector.extract_strided_slice %[[INSERT1]][4:4:1] : vector<8xi32> to vector<4xi32>
   // CHECK: return %[[ARG0]], %[[EXTRACT0]], %[[EXTRACT1]] : vector<3xi32>, vector<4xi32>, vector<4xi32>
   return %arg0, %arg1 : vector<3xi32>, vector<8xi32>
 }
@@ -125,10 +125,10 @@ func.func @vector_3and8(%arg0 : vector<3xi32>, %arg1 : vector<8xi32>) -> (vector
 // CHECK-SAME: (%[[ARG0:.+]]: vector<4xi32>, %[[ARG1:.+]]: vector<4xi32>, %[[ARG2:.+]]: vector<3xi32>, %[[ARG3:.+]]: i32)
 func.func @scalar_vector(%arg0 : vector<8xi32>, %arg1 : vector<3xi32>, %arg2 : i32) -> (vector<8xi32>, vector<3xi32>, i32) {
   // CHECK: %[[CST:.*]] = arith.constant dense<0> : vector<8xi32>
-  // CHECK: %[[INSERT0:.*]] = vector.insert_strided_slice %[[ARG0]], %[[CST]] {offsets = [0], strides = [1]} : vector<4xi32> into vector<8xi32>
-  // CHECK: %[[INSERT1:.*]] = vector.insert_strided_slice %[[ARG1]], %[[INSERT0]] {offsets = [4], strides = [1]} : vector<4xi32> into vector<8xi32>
-  // CHECK: %[[EXTRACT0:.*]] = vector.extract_strided_slice %[[INSERT1]] {offsets = [0], sizes = [4], strides = [1]} : vector<8xi32> to vector<4xi32>
-  // CHECK: %[[EXTRACT1:.*]] = vector.extract_strided_slice %[[INSERT1]] {offsets = [4], sizes = [4], strides = [1]} : vector<8xi32> to vector<4xi32>
+  // CHECK: %[[INSERT0:.*]] = vector.insert_strided_slice %[[ARG0]], %[[CST]][0:1] : vector<4xi32> into vector<8xi32>
+  // CHECK: %[[INSERT1:.*]] = vector.insert_strided_slice %[[ARG1]], %[[INSERT0]][4:1] : vector<4xi32> into vector<8xi32>
+  // CHECK: %[[EXTRACT0:.*]] = vector.extract_strided_slice %[[INSERT1]][0:4:1] : vector<8xi32> to vector<4xi32>
+  // CHECK: %[[EXTRACT1:.*]] = vector.extract_strided_slice %[[INSERT1]][4:4:1] : vector<8xi32> to vector<4xi32>
   // CHECK: return %[[EXTRACT0]], %[[EXTRACT1]], %[[ARG2]], %[[ARG3]] : vector<4xi32>, vector<4xi32>, vector<3xi32>, i32
   return %arg0, %arg1, %arg2 : vector<8xi32>, vector<3xi32>, i32
 }
@@ -139,17 +139,17 @@ func.func @scalar_vector(%arg0 : vector<8xi32>, %arg1 : vector<3xi32>, %arg2 : i
 // CHECK-SAME: (%[[ARG0:.+]]: vector<3xi32>, %[[ARG1:.+]]: vector<3xi32>, %[[ARG2:.+]]: vector<3xi32>, %[[ARG3:.+]]: vector<3xi32>, %[[ARG4:.+]]: vector<4xi32>)
 func.func @vector_2dand1d(%arg0 : vector<2x6xi32>, %arg1 : vector<4xi32>) -> (vector<2x6xi32>, vector<4xi32>) {
   // CHECK: %[[CST:.*]] = arith.constant dense<0> : vector<2x6xi32>
-  // CHECK: %[[INSERT0:.*]] = vector.insert_strided_slice %[[ARG0]], %[[CST]] {offsets = [0, 0], strides = [1]} : vector<3xi32> into vector<2x6xi32>
-  // CHECK: %[[INSERT1:.*]] = vector.insert_strided_slice %[[ARG1]], %[[INSERT0]] {offsets = [0, 3], strides = [1]} : vector<3xi32> into vector<2x6xi32>
-  // CHECK: %[[INSERT2:.*]] = vector.insert_strided_slice %[[ARG2]], %[[INSERT1]] {offsets = [1, 0], strides = [1]} : vector<3xi32> into vector<2x6xi32>
-  // CHECK: %[[INSERT3:.*]] = vector.insert_strided_slice %[[ARG3]], %[[INSERT2]] {offsets = [1, 3], strides = [1]} : vector<3xi32> into vector<2x6xi32>
-  // CHECK: %[[EXTRACT0:.*]]  = vector.extract_strided_slice %[[INSERT3]] {offsets = [0, 0], sizes = [1, 3], strides = [1, 1]} : vector<2x6xi32> to vector<1x3xi32>
+  // CHECK: %[[INSERT0:.*]] = vector.insert_strided_slice %[[ARG0]], %[[CST]][0][0:1] : vector<3xi32> into vector<2x6xi32>
+  // CHECK: %[[INSERT1:.*]] = vector.insert_strided_slice %[[ARG1]], %[[INSERT0]][0][3:1] : vector<3xi32> into vector<2x6xi32>
+  // CHECK: %[[INSERT2:.*]] = vector.insert_strided_slice %[[ARG2]], %[[INSERT1]][1][0:1] : vector<3xi32> into vector<2x6xi32>
+  // CHECK: %[[INSERT3:.*]] = vector.insert_strided_slice %[[ARG3]], %[[INSERT2]][1][3:1] : vector<3xi32> into vector<2x6xi32>
+  // CHECK: %[[EXTRACT0:.*]]  = vector.extract_strided_slice %[[INSERT3]][0:1:1][0:3:1] : vector<2x6xi32> to vector<1x3xi32>
   // CHECK: %[[EXTRACT0_1:.*]]  = vector.extract %[[EXTRACT0]][0] : vector<3xi32> from vector<1x3xi32>
-  // CHECK: %[[EXTRACT1:.*]]  = vector.extract_strided_slice %[[INSERT3]] {offsets = [0, 3], sizes = [1, 3], strides = [1, 1]} : vector<2x6xi32> to vector<1x3xi32>
+  // CHECK: %[[EXTRACT1:.*]]  = vector.extract_strided_slice %[[INSERT3]][0:1:1][3:3:1] : vector<2x6xi32> to vector<1x3xi32>
   // CHECK: %[[EXTRACT1_1:.*]]  = vector.extract %[[EXTRACT1]][0] : vector<3xi32> from vector<1x3xi32>
-  // CHECK: %[[EXTRACT2:.*]]  = vector.extract_strided_slice %[[INSERT3]] {offsets = [1, 0], sizes = [1, 3], strides = [1, 1]} : vector<2x6xi32> to vector<1x3xi32>
+  // CHECK: %[[EXTRACT2:.*]]  = vector.extract_strided_slice %[[INSERT3]][1:1:1][0:3:1] : vector<2x6xi32> to vector<1x3xi32>
   // CHECK: %[[EXTRACT2_1:.*]]  = vector.extract %[[EXTRACT2]][0] : vector<3xi32> from vector<1x3xi32>
-  // CHECK: %[[EXTRACT3:.*]]  = vector.extract_strided_slice %[[INSERT3]] {offsets = [1, 3], sizes = [1, 3], strides = [1, 1]} : vector<2x6xi32> to vector<1x3xi32>
+  // CHECK: %[[EXTRACT3:.*]]  = vector.extract_strided_slice %[[INSERT3]][1:1:1][3:3:1] : vector<2x6xi32> to vector<1x3xi32>
   // CHECK: %[[EXTRACT3_1:.*]]  = vector.extract %[[EXTRACT3]][0] : vector<3xi32> from vector<1x3xi32>
   // CHECK: return %[[EXTRACT0_1]], %[[EXTRACT1_1]], %[[EXTRACT2_1]], %[[EXTRACT3_1]], %[[ARG4]] : vector<3xi32>, vector<3xi32>, vector<3xi32>, vector<3xi32>, vector<4xi32>
   return %arg0, %arg1 : vector<2x6xi32>, vector<4xi32>
@@ -161,10 +161,10 @@ func.func @vector_2dand1d(%arg0 : vector<2x6xi32>, %arg1 : vector<4xi32>) -> (ve
 // CHECK-SAME: (%[[ARG0:.+]]: vector<4xi32>, %[[ARG1:.+]]: vector<4xi32>, %[[ARG2:.+]]: vector<4xi32>, %[[ARG3:.+]]: vector<4xi32>, %[[ARG4:.+]]: i32)
 func.func @reduction(%arg0 : vector<8xi32>, %arg1 : vector<8xi32>, %arg2 : i32) -> (i32) {
   // CHECK: %[[CST:.*]] = arith.constant dense<0> : vector<8xi32>
-  // CHECK: %[[INSERT0:.*]] = vector.insert_strided_slice %[[ARG0]], %[[CST]] {offsets = [0], strides = [1]} : vector<4xi32> into vector<8xi32>
-  // CHECK: %[[INSERT1:.*]] = vector.insert_strided_slice %[[ARG1]], %[[INSERT0]] {offsets = [4], strides = [1]} : vector<4xi32> into vector<8xi32>
-  // CHECK: %[[INSERT2:.*]] = vector.insert_strided_slice %[[ARG2]], %[[CST]] {offsets = [0], strides = [1]} : vector<4xi32> into vector<8xi32>
-  // CHECK: %[[INSERT3:.*]] = vector.insert_strided_slice %[[ARG3]], %[[INSERT2]] {offsets = [4], strides = [1]} : vector<4xi32> into vector<8xi32>
+  // CHECK: %[[INSERT0:.*]] = vector.insert_strided_slice %[[ARG0]], %[[CST]][0:1] : vector<4xi32> into vector<8xi32>
+  // CHECK: %[[INSERT1:.*]] = vector.insert_strided_slice %[[ARG1]], %[[INSERT0]][4:1] : vector<4xi32> into vector<8xi32>
+  // CHECK: %[[INSERT2:.*]] = vector.insert_strided_slice %[[ARG2]], %[[CST]][0:1] : vector<4xi32> into vector<8xi32>
+  // CHECK: %[[INSERT3:.*]] = vector.insert_strided_slice %[[ARG3]], %[[INSERT2]][4:1] : vector<4xi32> into vector<8xi32>
   // CHECK: %[[ADDI:.*]] = arith.addi %[[INSERT1]], %[[INSERT3]] : vector<8xi32>
   // CHECK: %[[REDUCTION:.*]] = vector.reduction <add>, %[[ADDI]] : vector<8xi32> into i32
   // CHECK: %[[RET:.*]] = arith.addi %[[REDUCTION]], %[[ARG4]] : i32
diff --git a/mlir/test/Conversion/ConvertToSPIRV/vector.mlir b/mlir/test/Conversion/ConvertToSPIRV/vector.mlir
index e369eadca5730..a3f186b66afeb 100644
--- a/mlir/test/Conversion/ConvertToSPIRV/vector.mlir
+++ b/mlir/test/Conversion/ConvertToSPIRV/vector.mlir
@@ -158,7 +158,7 @@ func.func @insert_element_0d_vector(%scalar: f32, %vector : vector<f32>) -> vect
 //       CHECK:   %[[RET:.*]] = spirv.CompositeInsert %[[SUB]], %[[FULL]][2 : i32] : f32 into vector<3xf32>
 //       CHECK:   spirv.ReturnValue %[[RET]] : vector<3xf32>
 func.func @insert_size1_vector(%arg0 : vector<1xf32>, %arg1: vector<3xf32>) -> vector<3xf32> {
-  %1 = vector.insert_strided_slice %arg0, %arg1 {offsets = [2], strides = [1]} : vector<1xf32> into vector<3xf32>
+  %1 = vector.insert_strided_slice %arg0, %arg1[2:1] : vector<1xf32> into vector<3xf32>
   return %1 : vector<3xf32>
 }
 
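For reference, the mechanical rewrite applied throughout the test diffs above follows one rule per op: every `extract_strided_slice` dimension becomes `[offset:size:stride]`, while for `insert_strided_slice` the leading offset-only dimensions become `[offset]` and the trailing strided dimensions become `[offset:stride]`. A minimal Python sketch of that mapping (an illustrative helper, not the actual `auto-upgrade-insert-extract-slice.py` script linked in the PR description):

```python
def extract_slice_syntax(offsets, sizes, strides):
    """Render the new vector.extract_strided_slice suffix.

    Every sliced dimension carries an offset, size, and stride,
    so each one renders as [offset:size:stride].
    """
    return "".join(f"[{o}:{s}:{st}]" for o, s, st in zip(offsets, sizes, strides))


def insert_slice_syntax(offsets, strides):
    """Render the new vector.insert_strided_slice suffix.

    `offsets` has one entry per destination dimension, but `strides`
    only covers the trailing source dimensions; the leading
    offset-only dims render as [offset], the rest as [offset:stride].
    """
    lead = len(offsets) - len(strides)
    parts = [f"[{o}]" for o in offsets[:lead]]
    parts += [f"[{o}:{st}]" for o, st in zip(offsets[lead:], strides)]
    return "".join(parts)
```

For example, the extract op from the PR description (`offsets = [0, 2], sizes = [2, 4], strides = [1, 1]`) renders as `[0:2:1][2:4:1]`, and the insert op (`offsets = [0, 0, 2], strides = [1, 1]`) renders as `[0][0:1][2:1]`, matching the updated CHECK lines above.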
diff --git a/mlir/test/Conversion/VectorToGPU/fold-arith-vector-to-mma-ops-mma-sync.mlir b/mlir/test/Conversion/VectorToGPU/fold-arith-vector-to-mma-ops-mma-sync.mlir
index 0afaa19d59d15..ed1916053d78a 100644
--- a/mlir/test/Conversion/VectorToGPU/fold-arith-vector-to-mma-ops-mma-sync.mlir
+++ b/mlir/test/Conversion/VectorToGPU/fold-arith-vector-to-mma-ops-mma-sync.mlir
@@ -25,18 +25,18 @@ func.func @m16n8k16_mmasync16816_f16_f16_f32_row_row_row(%arg0: memref<42x32xf16
   %B = vector.transfer_read %arg1[%c0, %c0], %cst_f16 {permutation_map = #map0, in_bounds = [true, true]} : memref<32x64xf16, #gpu.address_space<workgroup>>, vector<16x16xf16>
   %C = vector.transfer_read %arg2[%c0, %c0], %cst_f32 {in_bounds = [true, true]} : memref<42x64xf32, #gpu.address_space<workgroup>>, vector<16x16xf32>
 
-  %B0 = vector.extract_strided_slice %B {offsets = [0, 0], sizes = [8, 16], strides = [1, 1]} : vector<16x16xf16> to vector<8x16xf16>
+  %B0 = vector.extract_strided_slice %B[0:8:1][0:16:1] : vector<16x16xf16> to vector<8x16xf16>
   %B0_f32 = arith.extf %B0 : vector<8x16xf16> to vector<8x16xf32>
-  %C0 = vector.extract_strided_slice %C {offsets = [0, 0], sizes = [16, 8], strides = [1, 1]} : vector<16x16xf32> to vector<16x8xf32>
+  %C0 = vector.extract_strided_slice %C[0:16:1][0:8:1] : vector<16x16xf32> to vector<16x8xf32>
   
   // CHECK-DAG: nvgpu.mma.sync({{.*}}) {mmaShape = [16, 8, 16]} : (vector<4x2xf16>, vector<2x2xf16>, vector<2x2xf32>) -> vector<2x2xf32>
   %D0 = vector.contract {indexing_maps = [#map1, #map2, #map3], iterator_types = ["parallel", "parallel", "reduction"], kind = #vector.kind<add>} %A_f32, %B0_f32, %C0 : vector<16x16xf32>, vector<8x16xf32> into vector<16x8xf32>
   vector.transfer_write %D0, %arg2[%c0, %c0] {in_bounds = [true, true]} : vector<16x8xf32>, memref<42x64xf32, #gpu.address_space<workgroup>>
 
 
-  %B1 = vector.extract_strided_slice %B {offsets = [8, 0], sizes = [8, 16], strides = [1, 1]} : vector<16x16xf16> to vector<8x16xf16>
+  %B1 = vector.extract_strided_slice %B[8:8:1][0:16:1] : vector<16x16xf16> to vector<8x16xf16>
   %B1_f32 = arith.extf %B1 : vector<8x16xf16> to vector<8x16xf32>
-  %C1 = vector.extract_strided_slice %C {offsets = [0, 8], sizes = [16, 8], strides = [1, 1]} : vector<16x16xf32> to vector<16x8xf32>
+  %C1 = vector.extract_strided_slice %C[0:16:1][8:8:1] : vector<16x16xf32> to vector<16x8xf32>
 
   // CHECK-DAG: nvgpu.mma.sync({{.*}}) {mmaShape = [16, 8, 16]} : (vector<4x2xf16>, vector<2x2xf16>, vector<2x2xf32>) -> vector<2x2xf32>
   %D1 = vector.contract {indexing_maps = [#map1, #map2, #map3], iterator_types = ["parallel", "parallel", "reduction"], kind = #vector.kind<add>} %A_f32, %B1_f32, %C1 : vector<16x16xf32>, vector<8x16xf32> into vector<16x8xf32>
diff --git a/mlir/test/Conversion/VectorToGPU/vector-to-mma-ops-mma-sync.mlir b/mlir/test/Conversion/VectorToGPU/vector-to-mma-ops-mma-sync.mlir
index 912f7fba59e60..bdb8a7c881aa4 100644
--- a/mlir/test/Conversion/VectorToGPU/vector-to-mma-ops-mma-sync.mlir
+++ b/mlir/test/Conversion/VectorToGPU/vector-to-mma-ops-mma-sync.mlir
@@ -232,19 +232,19 @@ func.func @m16n16k16_mmasync16816_fp16_f16_row_row_row(%arg0: memref<42x32xf16,
   // CHECK-DAG: [[fragmentC:%.*]] = nvgpu.ldmatrix %arg2[[[m_coord]], [[n_coord]]] {numTiles = 4 : i32, transpose = false}
   %C = vector.transfer_read %arg2[%c0, %c0], %cst {in_bounds = [true, true]} : memref<42x64xf16, #gpu.address_space<workgroup>>, vector<16x16xf16>
 
-  // CHECK-DAG: [[fragmentB0:%.+]] = vector.extract_strided_slice [[fragmentB]] {offsets = [0, 0], sizes = [2, 2], strides = [1, 1]} : vector<4x2xf16> to vector<2x2xf16>
-  // CHECK-DAG: [[fragmentC0:%.+]] = vector.extract_strided_slice [[fragmentC]] {offsets = [0, 0], sizes = [2, 2], strides = [1, 1]} : vector<4x2xf16> to vector<2x2xf16>
+  // CHECK-DAG: [[fragmentB0:%.+]] = vector.extract_strided_slice [[fragmentB]][0:2:1][0:2:1] : vector<4x2xf16> to vector<2x2xf16>
+  // CHECK-DAG: [[fragmentC0:%.+]] = vector.extract_strided_slice [[fragmentC]][0:2:1][0:2:1] : vector<4x2xf16> to vector<2x2xf16>
   // CHECK: nvgpu.mma.sync([[fragmentA]], [[fragmentB0]], [[fragmentC0]]) {mmaShape = [16, 8, 16]} : (vector<4x2xf16>, vector<2x2xf16>, vector<2x2xf16>) -> vector<2x2xf16>
-  %B0 = vector.extract_strided_slice %B {offsets = [0, 0], sizes = [8, 16], strides = [1, 1]} : vector<16x16xf16> to vector<8x16xf16>
-  %C0 = vector.extract_strided_slice %C {offsets = [0, 0], sizes = [16, 8], strides = [1, 1]} : vector<16x16xf16> to vector<16x8xf16>
+  %B0 = vector.extract_strided_slice %B[0:8:1][0:16:1] : vector<16x16xf16> to vector<8x16xf16>
+  %C0 = vector.extract_strided_slice %C[0:16:1][0:8:1] : vector<16x16xf16> to vector<16x8xf16>
   %D0 = vector.contract {indexing_maps = [#map1, #map2, #map3], iterator_types = ["parallel", "parallel", "reduction"], kind = #vector.kind<add>} %A, %B0, %C0 : vector<16x16xf16>, vector<8x16xf16> into vector<16x8xf16>
   vector.transfer_write %D0, %arg2[%c0, %c0] {in_bounds = [true, true]} : vector<16x8xf16>, memref<42x64xf16, #gpu.address_space<workgroup>>
 
-  // CHECK-DAG: [[fragmentB1:%.+]] = vector.extract_strided_slice [[fragmentB]] {offsets = [2, 0], sizes = [2, 2], strides = [1, 1]} : vector<4x2xf16> to vector<2x2xf16>
-  // CHECK-DAG: [[fragmentC1:%.+]] = vector.extract_strided_slice [[fragmentC]] {offsets = [2, 0], sizes = [2, 2], strides = [1, 1]} : vector<4x2xf16> to vector<2x2xf16>
+  // CHECK-DAG: [[fragmentB1:%.+]] = vector.extract_strided_slice [[fragmentB]][2:2:1][0:2:1] : vector<4x2xf16> to vector<2x2xf16>
+  // CHECK-DAG: [[fragmentC1:%.+]] = vector.extract_strided_slice [[fragmentC]][2:2:1][0:2:1] : vector<4x2xf16> to vector<2x2xf16>
   // CHECK: nvgpu.mma.sync([[fragmentA]], [[fragmentB1]], [[fragmentC1]]) {mmaShape = [16, 8, 16]} : (vector<4x2xf16>, vector<2x2xf16>, vector<2x2xf16>) -> vector<2x2xf16>
-  %B1 = vector.extract_strided_slice %B {offsets = [8, 0], sizes = [8, 16], strides = [1, 1]} : vector<16x16xf16> to vector<8x16xf16>
-  %C1 = vector.extract_strided_slice %C {offsets = [0, 8], sizes = [16, 8], strides = [1, 1]} : vector<16x16xf16> to vector<16x8xf16>
+  %B1 = vector.extract_strided_slice %B[8:8:1][0:16:1] : vector<16x16xf16> to vector<8x16xf16>
+  %C1 = vector.extract_strided_slice %C[0:16:1][8:8:1] : vector<16x16xf16> to vector<16x8xf16>
   %D1 = vector.contract {indexing_maps = [#map1, #map2, #map3], iterator_types = ["parallel", "parallel", "reduction"], kind = #vector.kind<add>} %A, %B1, %C1 : vector<16x16xf16>, vector<8x16xf16> into vector<16x8xf16>
   vector.transfer_write %D1, %arg2[%c0, %c0] {in_bounds = [true, true]} : vector<16x8xf16>, memref<42x64xf16, #gpu.address_space<workgroup>>
 
@@ -288,11 +288,11 @@ func.func @multi_dim_m16n8k16_fp16_row_row_row(%arg0: memref<4x32x1x32xf16, #gpu
   // CHECK-DAG: [[fragmentC:%.*]] = nvgpu.ldmatrix %arg2[[[c0]], [[m_coord]], [[n_coord]]] {numTiles = 4 : i32, transpose = false}
   %C = vector.transfer_read %arg2[%c0, %c0, %c0], %cst {in_bounds = [true, true]} : memref<1x32x40xf16, #gpu.address_space<workgroup>>, vector<16x16xf16>
 
-  // CHECK-DAG: [[fragmentB0:%.+]] = vector.extract_strided_slice [[fragmentB]] {offsets = [0, 0], sizes = [2, 2], strides = [1, 1]} : vector<4x2xf16> to vector<2x2xf16>
-  // CHECK-DAG: [[fragmentC0:%.+]] = vector.extract_strided_slice [[fragmentC]] {offsets = [0, 0], sizes = [2, 2], strides = [1, 1]} : vector<4x2xf16> to vector<2x2xf16>
+  // CHECK-DAG: [[fragmentB0:%.+]] = vector.extract_strided_slice [[fragmentB]][0:2:1][0:2:1] : vector<4x2xf16> to vector<2x2xf16>
+  // CHECK-DAG: [[fragmentC0:%.+]] = vector.extract_strided_slice [[fragmentC]][0:2:1][0:2:1] : vector<4x2xf16> to vector<2x2xf16>
   // CHECK: nvgpu.mma.sync([[fragmentA]], [[fragmentB0]], [[fragmentC0]]) {mmaShape = [16, 8, 16]} : (vector<4x2xf16>, vector<2x2xf16>, vector<2x2xf16>) -> vector<2x2xf16>
-  %B0 = vector.extract_strided_slice %B {offsets = [0, 0], sizes = [8, 16], strides = [1, 1]} : vector<16x16xf16> to vector<8x16xf16>
-  %C0 = vector.extract_strided_slice %C {offsets = [0, 0], sizes = [16, 8], strides = [1, 1]} : vector<16x16xf16> to vector<16x8xf16>
+  %B0 = vector.extract_strided_slice %B[0:8:1][0:16:1] : vector<16x16xf16> to vector<8x16xf16>
+  %C0 = vector.extract_strided_slice %C[0:16:1][0:8:1] : vector<16x16xf16> to vector<16x8xf16>
   %D0 = vector.contract {indexing_maps = [#map1, #map2, #map3], iterator_types = ["parallel", "parallel", "reduction"], kind = #vector.kind<add>} %A, %B0, %C0 : vector<16x16xf16>, vector<8x16xf16> into vector<16x8xf16>
   vector.transfer_write %D0, %arg2[%c0, %c0, %c0] {in_bounds = [true, true]} : vector<16x8xf16>, memref<1x32x40xf16, #gpu.address_space<workgroup>>
 
diff --git a/mlir/test/Conversion/VectorToLLVM/vector-to-llvm.mlir b/mlir/test/Conversion/VectorToLLVM/vector-to-llvm.mlir
index c310954b906e4..647d3c8291870 100644
--- a/mlir/test/Conversion/VectorToLLVM/vector-to-llvm.mlir
+++ b/mlir/test/Conversion/VectorToLLVM/vector-to-llvm.mlir
@@ -1107,7 +1107,7 @@ func.func @vector_print_string() {
 // -----
 
 func.func @extract_strided_slice1(%arg0: vector<4xf32>) -> vector<2xf32> {
-  %0 = vector.extract_strided_slice %arg0 {offsets = [2], sizes = [2], strides = [1]} : vector<4xf32> to vector<2xf32>
+  %0 = vector.extract_strided_slice %arg0[2:2:1] : vector<4xf32> to vector<2xf32>
   return %0 : vector<2xf32>
 }
 // CHECK-LABEL: @extract_strided_slice1(
@@ -1118,7 +1118,7 @@ func.func @extract_strided_slice1(%arg0: vector<4xf32>) -> vector<2xf32> {
 // -----
 
 func.func @extract_strided_index_slice1(%arg0: vector<4xindex>) -> vector<2xindex> {
-  %0 = vector.extract_strided_slice %arg0 {offsets = [2], sizes = [2], strides = [1]} : vector<4xindex> to vector<2xindex>
+  %0 = vector.extract_strided_slice %arg0[2:2:1] : vector<4xindex> to vector<2xindex>
   return %0 : vector<2xindex>
 }
 // CHECK-LABEL: @extract_strided_index_slice1(
@@ -1131,7 +1131,7 @@ func.func @extract_strided_index_slice1(%arg0: vector<4xindex>) -> vector<2xinde
 // -----
 
 func.func @extract_strided_slice2(%arg0: vector<4x8xf32>) -> vector<2x8xf32> {
-  %0 = vector.extract_strided_slice %arg0 {offsets = [2], sizes = [2], strides = [1]} : vector<4x8xf32> to vector<2x8xf32>
+  %0 = vector.extract_strided_slice %arg0[2:2:1] : vector<4x8xf32> to vector<2x8xf32>
   return %0 : vector<2x8xf32>
 }
 // CHECK-LABEL: @extract_strided_slice2(
@@ -1148,7 +1148,7 @@ func.func @extract_strided_slice2(%arg0: vector<4x8xf32>) -> vector<2x8xf32> {
 // -----
 
 func.func @extract_strided_slice3(%arg0: vector<4x8xf32>) -> vector<2x2xf32> {
-  %0 = vector.extract_strided_slice %arg0 {offsets = [2, 2], sizes = [2, 2], strides = [1, 1]} : vector<4x8xf32> to vector<2x2xf32>
+  %0 = vector.extract_strided_slice %arg0[2:2:1][2:2:1] : vector<4x8xf32> to vector<2x2xf32>
   return %0 : vector<2x2xf32>
 }
 // CHECK-LABEL: @extract_strided_slice3(
@@ -1168,7 +1168,7 @@ func.func @extract_strided_slice3(%arg0: vector<4x8xf32>) -> vector<2x2xf32> {
 // -----
 
 func.func @extract_strided_slice_scalable(%arg0 : vector<1x4x[4]xi32>) -> vector<1x1x[4]xi32> {
-  %0 = vector.extract_strided_slice %arg0 {offsets = [0, 3, 0], sizes = [1, 1, 4], strides = [1, 1, 1]} : vector<1x4x[4]xi32> to vector<1x1x[4]xi32>
+  %0 = vector.extract_strided_slice %arg0[0:1:1][3:1:1][0:4:1] : vector<1x4x[4]xi32> to vector<1x1x[4]xi32>
   return %0 : vector<1x1x[4]xi32>
 }
 
@@ -1190,7 +1190,7 @@ func.func @extract_strided_slice_scalable(%arg0 : vector<1x4x[4]xi32>) -> vector
 // -----
 
 func.func @insert_strided_slice1(%b: vector<4x4xf32>, %c: vector<4x4x4xf32>) -> vector<4x4x4xf32> {
-  %0 = vector.insert_strided_slice %b, %c {offsets = [2, 0, 0], strides = [1, 1]} : vector<4x4xf32> into vector<4x4x4xf32>
+  %0 = vector.insert_strided_slice %b, %c[2][0:1][0:1] : vector<4x4xf32> into vector<4x4x4xf32>
   return %0 : vector<4x4x4xf32>
 }
 // CHECK-LABEL: @insert_strided_slice1
@@ -1200,7 +1200,7 @@ func.func @insert_strided_slice1(%b: vector<4x4xf32>, %c: vector<4x4x4xf32>) ->
 // -----
 
 func.func @insert_strided_index_slice1(%b: vector<4x4xindex>, %c: vector<4x4x4xindex>) -> vector<4x4x4xindex> {
-  %0 = vector.insert_strided_slice %b, %c {offsets = [2, 0, 0], strides = [1, 1]} : vector<4x4xindex> into vector<4x4x4xindex>
+  %0 = vector.insert_strided_slice %b, %c[2][0:1][0:1] : vector<4x4xindex> into vector<4x4x4xindex>
   return %0 : vector<4x4x4xindex>
 }
 // CHECK-LABEL: @insert_strided_index_slice1(
@@ -1210,7 +1210,7 @@ func.func @insert_strided_index_slice1(%b: vector<4x4xindex>, %c: vector<4x4x4xi
 // -----
 
 func.func @insert_strided_slice2(%a: vector<2x2xf32>, %b: vector<4x4xf32>) -> vector<4x4xf32> {
-  %0 = vector.insert_strided_slice %a, %b {offsets = [2, 2], strides = [1, 1]} : vector<2x2xf32> into vector<4x4xf32>
+  %0 = vector.insert_strided_slice %a, %b[2:1][2:1] : vector<2x2xf32> into vector<4x4xf32>
   return %0 : vector<4x4xf32>
 }
 
@@ -1235,7 +1235,7 @@ func.func @insert_strided_slice2(%a: vector<2x2xf32>, %b: vector<4x4xf32>) -> ve
 // -----
 
 func.func @insert_strided_slice3(%arg0: vector<2x4xf32>, %arg1: vector<16x4x8xf32>) -> vector<16x4x8xf32> {
-  %0 = vector.insert_strided_slice %arg0, %arg1 {offsets = [0, 0, 2], strides = [1, 1]}:
+  %0 = vector.insert_strided_slice %arg0, %arg1[0][0:1][2:1]:
         vector<2x4xf32> into vector<16x4x8xf32>
   return %0 : vector<16x4x8xf32>
 }
@@ -1255,7 +1255,7 @@ func.func @insert_strided_slice3(%arg0: vector<2x4xf32>, %arg1: vector<16x4x8xf3
 // -----
 
 func.func @insert_strided_slice_scalable(%arg0 : vector<1x1x[4]xi32>, %arg1: vector<1x4x[4]xi32>) -> vector<1x4x[4]xi32> {
-  %0 = vector.insert_strided_slice %arg0, %arg1 {offsets = [0, 3, 0], strides = [1, 1, 1]} : vector<1x1x[4]xi32> into vector<1x4x[4]xi32>
+  %0 = vector.insert_strided_slice %arg0, %arg1[0:1][3:1][0:1] : vector<1x1x[4]xi32> into vector<1x4x[4]xi32>
   return %0 : vector<1x4x[4]xi32>
 }
 // CHECK-LABEL:   func.func @insert_strided_slice_scalable(
diff --git a/mlir/test/Conversion/VectorToSPIRV/vector-to-spirv.mlir b/mlir/test/Conversion/VectorToSPIRV/vector-to-spirv.mlir
index dd0ed77470a25..0ac79e15d0256 100644
--- a/mlir/test/Conversion/VectorToSPIRV/vector-to-spirv.mlir
+++ b/mlir/test/Conversion/VectorToSPIRV/vector-to-spirv.mlir
@@ -282,8 +282,8 @@ func.func @extract_element_0d_vector(%arg0 : f32) -> f32 {
 //       CHECK:   spirv.VectorShuffle [1 : i32, 2 : i32] %[[ARG]], %[[ARG]] : vector<4xf32>, vector<4xf32> -> vector<2xf32>
 //       CHECK:   spirv.CompositeExtract %[[ARG]][1 : i32] : vector<4xf32>
 func.func @extract_strided_slice(%arg0: vector<4xf32>) -> (vector<2xf32>, vector<1xf32>) {
-  %0 = vector.extract_strided_slice %arg0 {offsets = [1], sizes = [2], strides = [1]} : vector<4xf32> to vector<2xf32>
-  %1 = vector.extract_strided_slice %arg0 {offsets = [1], sizes = [1], strides = [1]} : vector<4xf32> to vector<1xf32>
+  %0 = vector.extract_strided_slice %arg0[1:2:1] : vector<4xf32> to vector<2xf32>
+  %1 = vector.extract_strided_slice %arg0[1:1:1] : vector<4xf32> to vector<1xf32>
   return %0, %1 : vector<2xf32>, vector<1xf32>
 }
 
@@ -354,7 +354,7 @@ func.func @insert_element_0d_vector(%scalar: f32, %vector : vector<f32>) -> vect
 //  CHECK-SAME: %[[PART:.+]]: vector<2xf32>, %[[ALL:.+]]: vector<4xf32>
 //       CHECK:   spirv.VectorShuffle [0 : i32, 4 : i32, 5 : i32, 3 : i32] %[[ALL]], %[[PART]] : vector<4xf32>, vector<2xf32> -> vector<4xf32>
 func.func @insert_strided_slice(%arg0: vector<2xf32>, %arg1: vector<4xf32>) -> vector<4xf32> {
-  %0 = vector.insert_strided_slice %arg0, %arg1 {offsets = [1], strides = [1]} : vector<2xf32> into vector<4xf32>
+  %0 = vector.insert_strided_slice %arg0, %arg1[1:1] : vector<2xf32> into vector<4xf32>
   return %0 : vector<4xf32>
 }
 
@@ -365,7 +365,7 @@ func.func @insert_strided_slice(%arg0: vector<2xf32>, %arg1: vector<4xf32>) -> v
 //       CHECK:   %[[S:.+]] = builtin.unrealized_conversion_cast %[[SUB]]
 //       CHECK:   spirv.CompositeInsert %[[S]], %[[FULL]][2 : i32] : f32 into vector<3xf32>
 func.func @insert_size1_vector(%arg0 : vector<1xf32>, %arg1: vector<3xf32>) -> vector<3xf32> {
-  %1 = vector.insert_strided_slice %arg0, %arg1 {offsets = [2], strides = [1]} : vector<1xf32> into vector<3xf32>
+  %1 = vector.insert_strided_slice %arg0, %arg1[2:1] : vector<1xf32> into vector<3xf32>
   return %1 : vector<3xf32>
 }
 
diff --git a/mlir/test/Dialect/Arith/emulate-wide-int.mlir b/mlir/test/Dialect/Arith/emulate-wide-int.mlir
index ed08779c10266..ee3972351cb25 100644
--- a/mlir/test/Dialect/Arith/emulate-wide-int.mlir
+++ b/mlir/test/Dialect/Arith/emulate-wide-int.mlir
@@ -114,16 +114,16 @@ func.func @addi_scalar_a_b(%a : i64, %b : i64) -> i64 {
 
 // CHECK-LABEL: func @addi_vector_a_b
 // CHECK-SAME:    ([[ARG0:%.+]]: vector<4x2xi32>, [[ARG1:%.+]]: vector<4x2xi32>) -> vector<4x2xi32>
-// CHECK-NEXT:    [[LOW0:%.+]]   = vector.extract_strided_slice [[ARG0]] {offsets = [0, 0], sizes = [4, 1], strides = [1, 1]} : vector<4x2xi32> to vector<4x1xi32>
-// CHECK-NEXT:    [[HIGH0:%.+]]  = vector.extract_strided_slice [[ARG0]] {offsets = [0, 1], sizes = [4, 1], strides = [1, 1]} : vector<4x2xi32> to vector<4x1xi32>
-// CHECK-NEXT:    [[LOW1:%.+]]   = vector.extract_strided_slice [[ARG1]] {offsets = [0, 0], sizes = [4, 1], strides = [1, 1]} : vector<4x2xi32> to vector<4x1xi32>
-// CHECK-NEXT:    [[HIGH1:%.+]]  = vector.extract_strided_slice [[ARG1]] {offsets = [0, 1], sizes = [4, 1], strides = [1, 1]} : vector<4x2xi32> to vector<4x1xi32>
+// CHECK-NEXT:    [[LOW0:%.+]]   = vector.extract_strided_slice [[ARG0]][0:4:1][0:1:1] : vector<4x2xi32> to vector<4x1xi32>
+// CHECK-NEXT:    [[HIGH0:%.+]]  = vector.extract_strided_slice [[ARG0]][0:4:1][1:1:1] : vector<4x2xi32> to vector<4x1xi32>
+// CHECK-NEXT:    [[LOW1:%.+]]   = vector.extract_strided_slice [[ARG1]][0:4:1][0:1:1] : vector<4x2xi32> to vector<4x1xi32>
+// CHECK-NEXT:    [[HIGH1:%.+]]  = vector.extract_strided_slice [[ARG1]][0:4:1][1:1:1] : vector<4x2xi32> to vector<4x1xi32>
 // CHECK-NEXT:    [[SUM_L:%.+]], [[CB:%.+]] = arith.addui_extended [[LOW0]], [[LOW1]] : vector<4x1xi32>, vector<4x1xi1>
 // CHECK-NEXT:    [[CARRY:%.+]]  = arith.extui [[CB]] : vector<4x1xi1> to vector<4x1xi32>
 // CHECK-NEXT:    [[SUM_H0:%.+]] = arith.addi [[CARRY]], [[HIGH0]] : vector<4x1xi32>
 // CHECK-NEXT:    [[SUM_H1:%.+]] = arith.addi [[SUM_H0]], [[HIGH1]] : vector<4x1xi32>
-// CHECK:         [[INS0:%.+]]   = vector.insert_strided_slice [[SUM_L]], {{%.+}} {offsets = [0, 0], strides = [1, 1]} : vector<4x1xi32> into vector<4x2xi32>
-// CHECK-NEXT:    [[INS1:%.+]]   = vector.insert_strided_slice [[SUM_H1]], [[INS0]] {offsets = [0, 1], strides = [1, 1]} : vector<4x1xi32> into vector<4x2xi32>
+// CHECK:         [[INS0:%.+]]   = vector.insert_strided_slice [[SUM_L]], {{%.+}}[0:1][0:1] : vector<4x1xi32> into vector<4x2xi32>
+// CHECK-NEXT:    [[INS1:%.+]]   = vector.insert_strided_slice [[SUM_H1]], [[INS0]][0:1][1:1] : vector<4x1xi32> into vector<4x2xi32>
 // CHECK-NEXT:    return [[INS1]] : vector<4x2xi32>
 func.func @addi_vector_a_b(%a : vector<4xi64>, %b : vector<4xi64>) -> vector<4xi64> {
     %x = arith.addi %a, %b : vector<4xi64>
@@ -147,10 +147,10 @@ func.func @cmpi_eq_scalar(%a : i64, %b : i64) -> i1 {
 
 // CHECK-LABEL: func.func @cmpi_eq_vector
 // CHECK-SAME:    ([[ARG0:%.+]]: vector<3x2xi32>, [[ARG1:%.+]]: vector<3x2xi32>) -> vector<3xi1>
-// CHECK-NEXT:    [[LOW0:%.+]]  = vector.extract_strided_slice [[ARG0]] {offsets = [0, 0], sizes = [3, 1], strides = [1, 1]} : vector<3x2xi32> to vector<3x1xi32>
-// CHECK-NEXT:    [[HIGH0:%.+]] = vector.extract_strided_slice [[ARG0]] {offsets = [0, 1], sizes = [3, 1], strides = [1, 1]} : vector<3x2xi32> to vector<3x1xi32>
-// CHECK-NEXT:    [[LOW1:%.+]]  = vector.extract_strided_slice [[ARG1]] {offsets = [0, 0], sizes = [3, 1], strides = [1, 1]} : vector<3x2xi32> to vector<3x1xi32>
-// CHECK-NEXT:    [[HIGH1:%.+]] = vector.extract_strided_slice [[ARG1]] {offsets = [0, 1], sizes = [3, 1], strides = [1, 1]} : vector<3x2xi32> to vector<3x1xi32>
+// CHECK-NEXT:    [[LOW0:%.+]]  = vector.extract_strided_slice [[ARG0]][0:3:1][0:1:1] : vector<3x2xi32> to vector<3x1xi32>
+// CHECK-NEXT:    [[HIGH0:%.+]] = vector.extract_strided_slice [[ARG0]][0:3:1][1:1:1] : vector<3x2xi32> to vector<3x1xi32>
+// CHECK-NEXT:    [[LOW1:%.+]]  = vector.extract_strided_slice [[ARG1]][0:3:1][0:1:1] : vector<3x2xi32> to vector<3x1xi32>
+// CHECK-NEXT:    [[HIGH1:%.+]] = vector.extract_strided_slice [[ARG1]][0:3:1][1:1:1] : vector<3x2xi32> to vector<3x1xi32>
 // CHECK-NEXT:    [[CLOW:%.+]]  = arith.cmpi eq, [[LOW0]], [[LOW1]] : vector<3x1xi32>
 // CHECK-NEXT:    [[CHIGH:%.+]] = arith.cmpi eq, [[HIGH0]], [[HIGH1]] : vector<3x1xi32>
 // CHECK-NEXT:    [[RES:%.+]]   = arith.andi [[CLOW]], [[CHIGH]] : vector<3x1xi1>
@@ -324,8 +324,8 @@ func.func @extsi_scalar(%a : i16) -> i64 {
 // CHECK-NEXT:    [[CMP:%.+]]   = arith.cmpi slt, [[EXT]], [[CSTE]] : vector<3x1xi32>
 // CHECK-NEXT:    [[HIGH:%.+]]  = arith.extsi [[CMP]] : vector<3x1xi1> to vector<3x1xi32>
 // CHECK-NEXT:    [[CSTZ:%.+]]  = arith.constant dense<0> : vector<3x2xi32>
-// CHECK-NEXT:    [[INS0:%.+]]  = vector.insert_strided_slice [[EXT]], [[CSTZ]] {offsets = [0, 0], strides = [1, 1]} : vector<3x1xi32> into vector<3x2xi32>
-// CHECK-NEXT:    [[INS1:%.+]]  = vector.insert_strided_slice [[HIGH]], [[INS0]] {offsets = [0, 1], strides = [1, 1]} : vector<3x1xi32> into vector<3x2xi32>
+// CHECK-NEXT:    [[INS0:%.+]]  = vector.insert_strided_slice [[EXT]], [[CSTZ]][0:1][0:1] : vector<3x1xi32> into vector<3x2xi32>
+// CHECK-NEXT:    [[INS1:%.+]]  = vector.insert_strided_slice [[HIGH]], [[INS0]][0:1][1:1] : vector<3x1xi32> into vector<3x2xi32>
 // CHECK-NEXT:    return [[INS1]] : vector<3x2xi32>
 func.func @extsi_vector(%a : vector<3xi16>) -> vector<3xi64> {
     %r = arith.extsi %a : vector<3xi16> to vector<3xi64>
@@ -358,7 +358,7 @@ func.func @extui_scalar2(%a : i32) -> i64 {
 // CHECK-NEXT:    [[SHAPE:%.+]] = vector.shape_cast [[ARG]] : vector<3xi16> to vector<3x1xi16>
 // CHECK-NEXT:    [[EXT:%.+]]   = arith.extui [[SHAPE]] : vector<3x1xi16> to vector<3x1xi32>
 // CHECK-NEXT:    [[CST:%.+]]   = arith.constant dense<0> : vector<3x2xi32>
-// CHECK-NEXT:    [[INS0:%.+]]  = vector.insert_strided_slice [[EXT]], [[CST]] {offsets = [0, 0], strides = [1, 1]} : vector<3x1xi32> into vector<3x2xi32>
+// CHECK-NEXT:    [[INS0:%.+]]  = vector.insert_strided_slice [[EXT]], [[CST]][0:1][0:1] : vector<3x1xi32> into vector<3x2xi32>
 // CHECK:         return [[INS0]] : vector<3x2xi32>
 func.func @extui_vector(%a : vector<3xi16>) -> vector<3xi64> {
     %r = arith.extui %a : vector<3xi16> to vector<3xi64>
@@ -377,7 +377,7 @@ func.func @index_cast_int_to_index_scalar(%a : i64) -> index {
 
 // CHECK-LABEL: func @index_cast_int_to_index_vector
 // CHECK-SAME:    ([[ARG:%.+]]: vector<3x2xi32>) -> vector<3xindex>
-// CHECK-NEXT:    [[EXT:%.+]]   = vector.extract_strided_slice [[ARG]] {offsets = [0, 0], sizes = [3, 1], strides = [1, 1]} : vector<3x2xi32> to vector<3x1xi32>
+// CHECK-NEXT:    [[EXT:%.+]]   = vector.extract_strided_slice [[ARG]][0:3:1][0:1:1] : vector<3x2xi32> to vector<3x1xi32>
 // CHECK-NEXT:    [[SHAPE:%.+]] = vector.shape_cast [[EXT]] : vector<3x1xi32> to vector<3xi32>
 // CHECK-NEXT:    [[RES:%.+]]   = arith.index_cast [[SHAPE]] : vector<3xi32> to vector<3xindex>
 // CHECK-NEXT:    return [[RES]] : vector<3xindex>
@@ -398,7 +398,7 @@ func.func @index_castui_int_to_index_scalar(%a : i64) -> index {
 
 // CHECK-LABEL: func @index_castui_int_to_index_vector
 // CHECK-SAME:    ([[ARG:%.+]]: vector<3x2xi32>) -> vector<3xindex>
-// CHECK-NEXT:    [[EXT:%.+]]   = vector.extract_strided_slice [[ARG]] {offsets = [0, 0], sizes = [3, 1], strides = [1, 1]} : vector<3x2xi32> to vector<3x1xi32>
+// CHECK-NEXT:    [[EXT:%.+]]   = vector.extract_strided_slice [[ARG]][0:3:1][0:1:1] : vector<3x2xi32> to vector<3x1xi32>
 // CHECK-NEXT:    [[SHAPE:%.+]] = vector.shape_cast [[EXT]] : vector<3x1xi32> to vector<3xi32>
 // CHECK-NEXT:    [[RES:%.+]]   = arith.index_castui [[SHAPE]] : vector<3xi32> to vector<3xindex>
 // CHECK-NEXT:    return [[RES]] : vector<3xindex>
@@ -454,7 +454,7 @@ func.func @index_castui_index_to_int_scalar(%a : index) -> i64 {
 // CHECK-NEXT:    [[CAST:%.+]]  = arith.index_castui [[ARG]] : vector<3xindex> to vector<3xi32>
 // CHECK-NEXT:    [[SHAPE:%.+]] = vector.shape_cast [[CAST]] : vector<3xi32> to vector<3x1xi32>
 // CHECK-NEXT:    [[CST:%.+]]   = arith.constant dense<0> : vector<3x2xi32>
-// CHECK-NEXT:    [[RES:%.+]]   = vector.insert_strided_slice [[SHAPE]], [[CST]] {offsets = [0, 0], strides = [1, 1]} : vector<3x1xi32> into vector<3x2xi32>
+// CHECK-NEXT:    [[RES:%.+]]   = vector.insert_strided_slice [[SHAPE]], [[CST]][0:1][0:1] : vector<3x1xi32> into vector<3x2xi32>
 // CHECK-NEXT:    return [[RES]] : vector<3x2xi32>
 func.func @index_castui_index_to_int_vector(%a : vector<3xindex>) -> vector<3xi64> {
     %r = arith.index_castui %a : vector<3xindex> to vector<3xi64>
@@ -482,7 +482,7 @@ func.func @trunci_scalar2(%a : i64) -> i16 {
 
 // CHECK-LABEL: func @trunci_vector
 // CHECK-SAME:    ([[ARG:%.+]]: vector<3x2xi32>) -> vector<3xi16>
-// CHECK-NEXT:    [[EXTR:%.+]]  = vector.extract_strided_slice [[ARG]] {offsets = [0, 0], sizes = [3, 1], strides = [1, 1]} : vector<3x2xi32> to vector<3x1xi32>
+// CHECK-NEXT:    [[EXTR:%.+]]  = vector.extract_strided_slice [[ARG]][0:3:1][0:1:1] : vector<3x2xi32> to vector<3x1xi32>
 // CHECK-NEXT:    [[SHAPE:%.+]] = vector.shape_cast [[EXTR]] : vector<3x1xi32> to vector<3xi32>
 // CHECK-NEXT:    [[TRNC:%.+]]  = arith.trunci [[SHAPE]] : vector<3xi32> to vector<3xi16>
 // CHECK-NEXT:    return [[TRNC]] : vector<3xi16>
@@ -929,8 +929,8 @@ func.func @uitofp_i64_f64(%a : i64) -> f64 {
 
 // CHECK-LABEL: func @uitofp_i64_f64_vector
 // CHECK-SAME:    ([[ARG:%.+]]: vector<3x2xi32>) -> vector<3xf64>
-// CHECK-NEXT:    [[EXTLOW:%.+]] = vector.extract_strided_slice [[ARG]] {offsets = [0, 0], sizes = [3, 1], strides = [1, 1]} : vector<3x2xi32> to vector<3x1xi32>
-// CHECK-NEXT:    [[EXTHI:%.+]]  = vector.extract_strided_slice [[ARG]] {offsets = [0, 1], sizes = [3, 1], strides = [1, 1]} : vector<3x2xi32> to vector<3x1xi32>
+// CHECK-NEXT:    [[EXTLOW:%.+]] = vector.extract_strided_slice [[ARG]][0:3:1][0:1:1] : vector<3x2xi32> to vector<3x1xi32>
+// CHECK-NEXT:    [[EXTHI:%.+]]  = vector.extract_strided_slice [[ARG]][0:3:1][1:1:1] : vector<3x2xi32> to vector<3x1xi32>
 // CHECK-NEXT:    [[LOW:%.+]]    = vector.shape_cast [[EXTLOW]] : vector<3x1xi32> to vector<3xi32>
 // CHECK-NEXT:    [[HI:%.+]]     = vector.shape_cast [[EXTHI]] : vector<3x1xi32> to vector<3xi32>
 // CHECK-NEXT:    [[CST0:%.+]]   = arith.constant dense<0> : vector<3xi32>
diff --git a/mlir/test/Dialect/Arith/int-narrowing.mlir b/mlir/test/Dialect/Arith/int-narrowing.mlir
index 153c0a8576262..8d26fee61df0a 100644
--- a/mlir/test/Dialect/Arith/int-narrowing.mlir
+++ b/mlir/test/Dialect/Arith/int-narrowing.mlir
@@ -655,49 +655,49 @@ func.func @extui_over_extractelement_3xi16(%a: vector<3xi16>, %pos: i32) -> f16
 
 // CHECK-LABEL: func.func @extsi_over_extract_strided_slice_1d
 // CHECK-SAME:    (%[[ARG:.+]]: vector<3xi16>)
-// CHECK-NEXT:    %[[EXTR:.+]] = vector.extract_strided_slice %[[ARG]] {offsets = [1], sizes = [2], strides = [1]} : vector<3xi16> to vector<2xi16>
+// CHECK-NEXT:    %[[EXTR:.+]] = vector.extract_strided_slice %[[ARG]][1:2:1] : vector<3xi16> to vector<2xi16>
 // CHECK-NEXT:    %[[RET:.+]]  = arith.extsi %[[EXTR]] : vector<2xi16> to vector<2xi32>
 // CHECK-NEXT:    return %[[RET]] : vector<2xi32>
 func.func @extsi_over_extract_strided_slice_1d(%a: vector<3xi16>) -> vector<2xi32> {
   %b = arith.extsi %a : vector<3xi16> to vector<3xi32>
   %c = vector.extract_strided_slice %b
-   {offsets = [1], sizes = [2], strides = [1]} : vector<3xi32> to vector<2xi32>
+  [1:2:1] : vector<3xi32> to vector<2xi32>
   return %c : vector<2xi32>
 }
 
 // CHECK-LABEL: func.func @extui_over_extract_strided_slice_1d
 // CHECK-SAME:    (%[[ARG:.+]]: vector<3xi16>)
-// CHECK-NEXT:    %[[EXTR:.+]] = vector.extract_strided_slice %[[ARG]] {offsets = [1], sizes = [2], strides = [1]} : vector<3xi16> to vector<2xi16>
+// CHECK-NEXT:    %[[EXTR:.+]] = vector.extract_strided_slice %[[ARG]][1:2:1] : vector<3xi16> to vector<2xi16>
 // CHECK-NEXT:    %[[RET:.+]]  = arith.extui %[[EXTR]] : vector<2xi16> to vector<2xi32>
 // CHECK-NEXT:    return %[[RET]] : vector<2xi32>
 func.func @extui_over_extract_strided_slice_1d(%a: vector<3xi16>) -> vector<2xi32> {
   %b = arith.extui %a : vector<3xi16> to vector<3xi32>
   %c = vector.extract_strided_slice %b
-   {offsets = [1], sizes = [2], strides = [1]} : vector<3xi32> to vector<2xi32>
+  [1:2:1] : vector<3xi32> to vector<2xi32>
   return %c : vector<2xi32>
 }
 
 // CHECK-LABEL: func.func @extsi_over_extract_strided_slice_2d
 // CHECK-SAME:    (%[[ARG:.+]]: vector<2x3xi16>)
-// CHECK-NEXT:    %[[EXTR:.+]] = vector.extract_strided_slice %arg0 {offsets = [1, 1], sizes = [1, 2], strides = [1, 1]} : vector<2x3xi16> to vector<1x2xi16>
+// CHECK-NEXT:    %[[EXTR:.+]] = vector.extract_strided_slice %arg0[1:1:1][1:2:1] : vector<2x3xi16> to vector<1x2xi16>
 // CHECK-NEXT:    %[[RET:.+]]  = arith.extsi %[[EXTR]] : vector<1x2xi16> to vector<1x2xi32>
 // CHECK-NEXT:    return %[[RET]] : vector<1x2xi32>
 func.func @extsi_over_extract_strided_slice_2d(%a: vector<2x3xi16>) -> vector<1x2xi32> {
   %b = arith.extsi %a : vector<2x3xi16> to vector<2x3xi32>
   %c = vector.extract_strided_slice %b
-   {offsets = [1, 1], sizes = [1, 2], strides = [1, 1]} : vector<2x3xi32> to vector<1x2xi32>
+  [1:1:1][1:2:1] : vector<2x3xi32> to vector<1x2xi32>
   return %c : vector<1x2xi32>
 }
 
 // CHECK-LABEL: func.func @extui_over_extract_strided_slice_2d
 // CHECK-SAME:    (%[[ARG:.+]]: vector<2x3xi16>)
-// CHECK-NEXT:    %[[EXTR:.+]] = vector.extract_strided_slice %arg0 {offsets = [1, 1], sizes = [1, 2], strides = [1, 1]} : vector<2x3xi16> to vector<1x2xi16>
+// CHECK-NEXT:    %[[EXTR:.+]] = vector.extract_strided_slice %arg0[1:1:1][1:2:1] : vector<2x3xi16> to vector<1x2xi16>
 // CHECK-NEXT:    %[[RET:.+]]  = arith.extui %[[EXTR]] : vector<1x2xi16> to vector<1x2xi32>
 // CHECK-NEXT:    return %[[RET]] : vector<1x2xi32>
 func.func @extui_over_extract_strided_slice_2d(%a: vector<2x3xi16>) -> vector<1x2xi32> {
   %b = arith.extui %a : vector<2x3xi16> to vector<2x3xi32>
   %c = vector.extract_strided_slice %b
-   {offsets = [1, 1], sizes = [1, 2], strides = [1, 1]} : vector<2x3xi32> to vector<1x2xi32>
+  [1:1:1][1:2:1] : vector<2x3xi32> to vector<1x2xi32>
   return %c : vector<1x2xi32>
 }
 
@@ -851,26 +851,26 @@ func.func @extui_over_insertelement_3xi16_cst_i16(%a: i8, %pos: i32) -> vector<3
 // CHECK-LABEL: func.func @extsi_over_insert_strided_slice_1d
 // CHECK-SAME:    (%[[ARG0:.+]]: vector<3xi16>, %[[ARG1:.+]]: vector<2xi16>)
 // CHECK-NEXT:    %[[INS:.+]] = vector.insert_strided_slice %[[ARG1]], %[[ARG0]]
-// CHECK-SAME:                    {offsets = [1], strides = [1]} : vector<2xi16> into vector<3xi16>
+// CHECK-SAME:                   [1:1] : vector<2xi16> into vector<3xi16>
 // CHECK-NEXT:    %[[RET:.+]] = arith.extsi %[[INS]] : vector<3xi16> to vector<3xi32>
 // CHECK-NEXT:    return %[[RET]] : vector<3xi32>
 func.func @extsi_over_insert_strided_slice_1d(%a: vector<3xi16>, %b: vector<2xi16>) -> vector<3xi32> {
   %c = arith.extsi %a : vector<3xi16> to vector<3xi32>
   %d = arith.extsi %b : vector<2xi16> to vector<2xi32>
-  %e = vector.insert_strided_slice %d, %c {offsets = [1], strides = [1]} : vector<2xi32> into vector<3xi32>
+  %e = vector.insert_strided_slice %d, %c[1:1] : vector<2xi32> into vector<3xi32>
   return %e : vector<3xi32>
 }
 
 // CHECK-LABEL: func.func @extui_over_insert_strided_slice_1d
 // CHECK-SAME:    (%[[ARG0:.+]]: vector<3xi16>, %[[ARG1:.+]]: vector<2xi16>)
 // CHECK-NEXT:    %[[INS:.+]] = vector.insert_strided_slice %[[ARG1]], %[[ARG0]]
-// CHECK-SAME:                    {offsets = [1], strides = [1]} : vector<2xi16> into vector<3xi16>
+// CHECK-SAME:                   [1:1] : vector<2xi16> into vector<3xi16>
 // CHECK-NEXT:    %[[RET:.+]] = arith.extui %[[INS]] : vector<3xi16> to vector<3xi32>
 // CHECK-NEXT:    return %[[RET]] : vector<3xi32>
 func.func @extui_over_insert_strided_slice_1d(%a: vector<3xi16>, %b: vector<2xi16>) -> vector<3xi32> {
   %c = arith.extui %a : vector<3xi16> to vector<3xi32>
   %d = arith.extui %b : vector<2xi16> to vector<2xi32>
-  %e = vector.insert_strided_slice %d, %c {offsets = [1], strides = [1]} : vector<2xi32> into vector<3xi32>
+  %e = vector.insert_strided_slice %d, %c[1:1] : vector<2xi32> into vector<3xi32>
   return %e : vector<3xi32>
 }
 
@@ -881,13 +881,13 @@ func.func @extui_over_insert_strided_slice_1d(%a: vector<3xi16>, %b: vector<2xi1
 // CHECK-NEXT:    %[[SRCE:.+]] = arith.extsi %[[ARG]] : vector<1x2xi8> to vector<1x2xi32>
 // CHECK-NEXT:    %[[SRCT:.+]] = arith.trunci %[[SRCE]] : vector<1x2xi32> to vector<1x2xi16>
 // CHECK-NEXT:    %[[INS:.+]] = vector.insert_strided_slice %[[SRCT]], %[[CST]]
-// CHECK-SAME:                    {offsets = [0, 1], strides = [1, 1]} : vector<1x2xi16> into vector<2x3xi16>
+// CHECK-SAME:                   [0:1][1:1] : vector<1x2xi16> into vector<2x3xi16>
 // CHECK-NEXT:    %[[RET:.+]]  = arith.extsi %[[INS]] : vector<2x3xi16> to vector<2x3xi32>
 // CHECK-NEXT:    return %[[RET]] : vector<2x3xi32>
 func.func @extsi_over_insert_strided_slice_cst_2d(%a: vector<1x2xi8>) -> vector<2x3xi32> {
   %cst = arith.constant dense<[[-1, 128, 0], [-129, 42, 1337]]> : vector<2x3xi32>
   %d = arith.extsi %a : vector<1x2xi8> to vector<1x2xi32>
-  %e = vector.insert_strided_slice %d, %cst {offsets = [0, 1], strides = [1, 1]} : vector<1x2xi32> into vector<2x3xi32>
+  %e = vector.insert_strided_slice %d, %cst[0:1][1:1] : vector<1x2xi32> into vector<2x3xi32>
   return %e : vector<2x3xi32>
 }
 
@@ -898,13 +898,13 @@ func.func @extsi_over_insert_strided_slice_cst_2d(%a: vector<1x2xi8>) -> vector<
 // CHECK-NEXT:    %[[SRCE:.+]] = arith.extui %[[ARG]] : vector<1x2xi8> to vector<1x2xi32>
 // CHECK-NEXT:    %[[SRCT:.+]] = arith.trunci %[[SRCE]] : vector<1x2xi32> to vector<1x2xi16>
 // CHECK-NEXT:    %[[INS:.+]] = vector.insert_strided_slice %[[SRCT]], %[[CST]]
-// CHECK-SAME:                    {offsets = [0, 1], strides = [1, 1]} : vector<1x2xi16> into vector<2x3xi16>
+// CHECK-SAME:                   [0:1][1:1] : vector<1x2xi16> into vector<2x3xi16>
 // CHECK-NEXT:    %[[RET:.+]]  = arith.extui %[[INS]] : vector<2x3xi16> to vector<2x3xi32>
 // CHECK-NEXT:    return %[[RET]] : vector<2x3xi32>
 func.func @extui_over_insert_strided_slice_cst_2d(%a: vector<1x2xi8>) -> vector<2x3xi32> {
   %cst = arith.constant dense<[[1, 128, 0], [256, 42, 1337]]> : vector<2x3xi32>
   %d = arith.extui %a : vector<1x2xi8> to vector<1x2xi32>
-  %e = vector.insert_strided_slice %d, %cst {offsets = [0, 1], strides = [1, 1]} : vector<1x2xi32> into vector<2x3xi32>
+  %e = vector.insert_strided_slice %d, %cst[0:1][1:1] : vector<1x2xi32> into vector<2x3xi32>
   return %e : vector<2x3xi32>
 }
 
diff --git a/mlir/test/Dialect/ArmNeon/lower-to-arm-neon.mlir b/mlir/test/Dialect/ArmNeon/lower-to-arm-neon.mlir
index 297be91e77283..25fb697a329a5 100644
--- a/mlir/test/Dialect/ArmNeon/lower-to-arm-neon.mlir
+++ b/mlir/test/Dialect/ArmNeon/lower-to-arm-neon.mlir
@@ -46,42 +46,42 @@ func.func @vector_arm_neon_without_extsi(%lhs: vector<2x8xi32>, %rhs: vector<2x8
 // CHECK-LABEL: vector_arm_neon_unroll
 // CHECK-SAME: %[[VAL_0:.*]]: vector<4x8xi8>, %[[VAL_1:.*]]: vector<4x8xi8>, %[[VAL_2:.*]]: vector<4x4xi32>
 // CHECK-DAG:  %[[VAL_3:.*]] = arith.constant dense<0> : vector<4x4xi32>
-// CHECK-DAG:  %[[VAL_4:.*]] = vector.extract_strided_slice %[[VAL_0]] {offsets = [0, 0], sizes = [2, 8], strides = [1, 1]} : vector<4x8xi8> to vector<2x8xi8>
-// CHECK-DAG:  %[[VAL_5:.*]] = vector.extract_strided_slice %[[VAL_1]] {offsets = [0, 0], sizes = [2, 8], strides = [1, 1]} : vector<4x8xi8> to vector<2x8xi8>
-// CHECK-DAG:  %[[VAL_6:.*]] = vector.extract_strided_slice %[[VAL_2]] {offsets = [0, 0], sizes = [2, 2], strides = [1, 1]} : vector<4x4xi32> to vector<2x2xi32>
+// CHECK-DAG:  %[[VAL_4:.*]] = vector.extract_strided_slice %[[VAL_0]][0:2:1][0:8:1] : vector<4x8xi8> to vector<2x8xi8>
+// CHECK-DAG:  %[[VAL_5:.*]] = vector.extract_strided_slice %[[VAL_1]][0:2:1][0:8:1] : vector<4x8xi8> to vector<2x8xi8>
+// CHECK-DAG:  %[[VAL_6:.*]] = vector.extract_strided_slice %[[VAL_2]][0:2:1][0:2:1] : vector<4x4xi32> to vector<2x2xi32>
 // CHECK-DAG:  %[[VAL_7:.*]] = vector.shape_cast %[[VAL_4]] : vector<2x8xi8> to vector<16xi8>
 // CHECK-DAG:  %[[VAL_8:.*]] = vector.shape_cast %[[VAL_5]] : vector<2x8xi8> to vector<16xi8>
 // CHECK-DAG:  %[[VAL_9:.*]] = vector.shape_cast %[[VAL_6]] : vector<2x2xi32> to vector<4xi32>
 // CHECK-DAG:  %[[VAL_10:.*]] = arm_neon.intr.smmla %[[VAL_9]], %[[VAL_7]], %[[VAL_8]] : vector<16xi8> to vector<4xi32>
 // CHECK-DAG:  %[[VAL_11:.*]] = vector.shape_cast %[[VAL_10]] : vector<4xi32> to vector<2x2xi32>
-// CHECK-DAG:  %[[VAL_12:.*]] = vector.insert_strided_slice %[[VAL_11]], %[[VAL_3]] {offsets = [0, 0], strides = [1, 1]} : vector<2x2xi32> into vector<4x4xi32>
-// CHECK-DAG:  %[[VAL_13:.*]] = vector.extract_strided_slice %[[VAL_0]] {offsets = [0, 0], sizes = [2, 8], strides = [1, 1]} : vector<4x8xi8> to vector<2x8xi8>
-// CHECK-DAG:  %[[VAL_14:.*]] = vector.extract_strided_slice %[[VAL_1]] {offsets = [2, 0], sizes = [2, 8], strides = [1, 1]} : vector<4x8xi8> to vector<2x8xi8>
-// CHECK-DAG:  %[[VAL_15:.*]] = vector.extract_strided_slice %[[VAL_2]] {offsets = [0, 2], sizes = [2, 2], strides = [1, 1]} : vector<4x4xi32> to vector<2x2xi32>
+// CHECK-DAG:  %[[VAL_12:.*]] = vector.insert_strided_slice %[[VAL_11]], %[[VAL_3]][0:1][0:1] : vector<2x2xi32> into vector<4x4xi32>
+// CHECK-DAG:  %[[VAL_13:.*]] = vector.extract_strided_slice %[[VAL_0]][0:2:1][0:8:1] : vector<4x8xi8> to vector<2x8xi8>
+// CHECK-DAG:  %[[VAL_14:.*]] = vector.extract_strided_slice %[[VAL_1]][2:2:1][0:8:1] : vector<4x8xi8> to vector<2x8xi8>
+// CHECK-DAG:  %[[VAL_15:.*]] = vector.extract_strided_slice %[[VAL_2]][0:2:1][2:2:1] : vector<4x4xi32> to vector<2x2xi32>
 // CHECK-DAG:  %[[VAL_16:.*]] = vector.shape_cast %[[VAL_13]] : vector<2x8xi8> to vector<16xi8>
 // CHECK-DAG:  %[[VAL_17:.*]] = vector.shape_cast %[[VAL_14]] : vector<2x8xi8> to vector<16xi8>
 // CHECK-DAG:  %[[VAL_18:.*]] = vector.shape_cast %[[VAL_15]] : vector<2x2xi32> to vector<4xi32>
 // CHECK-DAG:  %[[VAL_19:.*]] = arm_neon.intr.smmla %[[VAL_18]], %[[VAL_16]], %[[VAL_17]] : vector<16xi8> to vector<4xi32>
 // CHECK-DAG:  %[[VAL_20:.*]] = vector.shape_cast %[[VAL_19]] : vector<4xi32> to vector<2x2xi32>
-// CHECK-DAG:  %[[VAL_21:.*]] = vector.insert_strided_slice %[[VAL_20]], %[[VAL_12]] {offsets = [0, 2], strides = [1, 1]} : vector<2x2xi32> into vector<4x4xi32>
-// CHECK-DAG:  %[[VAL_22:.*]] = vector.extract_strided_slice %[[VAL_0]] {offsets = [2, 0], sizes = [2, 8], strides = [1, 1]} : vector<4x8xi8> to vector<2x8xi8>
-// CHECK-DAG:  %[[VAL_23:.*]] = vector.extract_strided_slice %[[VAL_1]] {offsets = [0, 0], sizes = [2, 8], strides = [1, 1]} : vector<4x8xi8> to vector<2x8xi8>
-// CHECK-DAG:  %[[VAL_24:.*]] = vector.extract_strided_slice %[[VAL_2]] {offsets = [2, 0], sizes = [2, 2], strides = [1, 1]} : vector<4x4xi32> to vector<2x2xi32>
+// CHECK-DAG:  %[[VAL_21:.*]] = vector.insert_strided_slice %[[VAL_20]], %[[VAL_12]][0:1][2:1] : vector<2x2xi32> into vector<4x4xi32>
+// CHECK-DAG:  %[[VAL_22:.*]] = vector.extract_strided_slice %[[VAL_0]][2:2:1][0:8:1] : vector<4x8xi8> to vector<2x8xi8>
+// CHECK-DAG:  %[[VAL_23:.*]] = vector.extract_strided_slice %[[VAL_1]][0:2:1][0:8:1] : vector<4x8xi8> to vector<2x8xi8>
+// CHECK-DAG:  %[[VAL_24:.*]] = vector.extract_strided_slice %[[VAL_2]][2:2:1][0:2:1] : vector<4x4xi32> to vector<2x2xi32>
 // CHECK-DAG:  %[[VAL_25:.*]] = vector.shape_cast %[[VAL_22]] : vector<2x8xi8> to vector<16xi8>
 // CHECK-DAG:  %[[VAL_26:.*]] = vector.shape_cast %[[VAL_23]] : vector<2x8xi8> to vector<16xi8>
 // CHECK-DAG:  %[[VAL_27:.*]] = vector.shape_cast %[[VAL_24]] : vector<2x2xi32> to vector<4xi32>
 // CHECK-DAG:  %[[VAL_28:.*]] = arm_neon.intr.smmla %[[VAL_27]], %[[VAL_25]], %[[VAL_26]] : vector<16xi8> to vector<4xi32>
 // CHECK-DAG:  %[[VAL_29:.*]] = vector.shape_cast %[[VAL_28]] : vector<4xi32> to vector<2x2xi32>
-// CHECK-DAG:  %[[VAL_30:.*]] = vector.insert_strided_slice %[[VAL_29]], %[[VAL_21]] {offsets = [2, 0], strides = [1, 1]} : vector<2x2xi32> into vector<4x4xi32>
-// CHECK-DAG:  %[[VAL_31:.*]] = vector.extract_strided_slice %[[VAL_0]] {offsets = [2, 0], sizes = [2, 8], strides = [1, 1]} : vector<4x8xi8> to vector<2x8xi8>
-// CHECK-DAG:  %[[VAL_32:.*]] = vector.extract_strided_slice %[[VAL_1]] {offsets = [2, 0], sizes = [2, 8], strides = [1, 1]} : vector<4x8xi8> to vector<2x8xi8>
-// CHECK-DAG:  %[[VAL_33:.*]] = vector.extract_strided_slice %[[VAL_2]] {offsets = [2, 2], sizes = [2, 2], strides = [1, 1]} : vector<4x4xi32> to vector<2x2xi32>
+// CHECK-DAG:  %[[VAL_30:.*]] = vector.insert_strided_slice %[[VAL_29]], %[[VAL_21]][2:1][0:1] : vector<2x2xi32> into vector<4x4xi32>
+// CHECK-DAG:  %[[VAL_31:.*]] = vector.extract_strided_slice %[[VAL_0]][2:2:1][0:8:1] : vector<4x8xi8> to vector<2x8xi8>
+// CHECK-DAG:  %[[VAL_32:.*]] = vector.extract_strided_slice %[[VAL_1]][2:2:1][0:8:1] : vector<4x8xi8> to vector<2x8xi8>
+// CHECK-DAG:  %[[VAL_33:.*]] = vector.extract_strided_slice %[[VAL_2]][2:2:1][2:2:1] : vector<4x4xi32> to vector<2x2xi32>
 // CHECK-DAG:  %[[VAL_34:.*]] = vector.shape_cast %[[VAL_31]] : vector<2x8xi8> to vector<16xi8>
 // CHECK-DAG:  %[[VAL_35:.*]] = vector.shape_cast %[[VAL_32]] : vector<2x8xi8> to vector<16xi8>
 // CHECK-DAG:  %[[VAL_36:.*]] = vector.shape_cast %[[VAL_33]] : vector<2x2xi32> to vector<4xi32>
 // CHECK-DAG:  %[[VAL_37:.*]] = arm_neon.intr.smmla %[[VAL_36]], %[[VAL_34]], %[[VAL_35]] : vector<16xi8> to vector<4xi32>
 // CHECK-DAG:  %[[VAL_38:.*]] = vector.shape_cast %[[VAL_37]] : vector<4xi32> to vector<2x2xi32>
-// CHECK-DAG:  %[[VAL_39:.*]] = vector.insert_strided_slice %[[VAL_38]], %[[VAL_30]] {offsets = [2, 2], strides = [1, 1]} : vector<2x2xi32> into vector<4x4xi32>
+// CHECK-DAG:  %[[VAL_39:.*]] = vector.insert_strided_slice %[[VAL_38]], %[[VAL_30]][2:1][2:1] : vector<2x2xi32> into vector<4x4xi32>
 // CHECK-DAG:  return %[[VAL_39]] : vector<4x4xi32>
 // CHECK-DAG:  }
 func.func @vector_arm_neon_unroll(%lhs: vector<4x8xi8>, %rhs: vector<4x8xi8>, %acc : vector<4x4xi32>) -> vector<4x4xi32> {
@@ -99,22 +99,22 @@ func.func @vector_arm_neon_unroll(%lhs: vector<4x8xi8>, %rhs: vector<4x8xi8>, %a
 // CHECK-SAME:                                                       %[[VAL_2:.*]]: vector<4x2xi32>) -> vector<4x2xi32> {
 // CHECK-DAG:  %[[VAL_3:.*]] = arith.constant dense<0> : vector<4x2xi32>
 // CHECK-DAG:  %[[VAL_4:.*]] = arith.extsi %[[VAL_1]] : vector<2x8xi4> to vector<2x8xi8>
-// CHECK-DAG:  %[[VAL_5:.*]] = vector.extract_strided_slice %[[VAL_0]] {offsets = [0, 0], sizes = [2, 8], strides = [1, 1]} : vector<4x8xi8> to vector<2x8xi8>
-// CHECK-DAG:  %[[VAL_6:.*]] = vector.extract_strided_slice %[[VAL_2]] {offsets = [0, 0], sizes = [2, 2], strides = [1, 1]} : vector<4x2xi32> to vector<2x2xi32>
+// CHECK-DAG:  %[[VAL_5:.*]] = vector.extract_strided_slice %[[VAL_0]][0:2:1][0:8:1] : vector<4x8xi8> to vector<2x8xi8>
+// CHECK-DAG:  %[[VAL_6:.*]] = vector.extract_strided_slice %[[VAL_2]][0:2:1][0:2:1] : vector<4x2xi32> to vector<2x2xi32>
 // CHECK-DAG:  %[[VAL_7:.*]] = vector.shape_cast %[[VAL_5]] : vector<2x8xi8> to vector<16xi8>
 // CHECK-DAG:  %[[VAL_8:.*]] = vector.shape_cast %[[VAL_4]] : vector<2x8xi8> to vector<16xi8>
 // CHECK-DAG:  %[[VAL_9:.*]] = vector.shape_cast %[[VAL_6]] : vector<2x2xi32> to vector<4xi32>
 // CHECK-DAG:  %[[VAL_10:.*]] = arm_neon.intr.smmla %[[VAL_9]], %[[VAL_7]], %[[VAL_8]] : vector<16xi8> to vector<4xi32>
 // CHECK-DAG:  %[[VAL_11:.*]] = vector.shape_cast %[[VAL_10]] : vector<4xi32> to vector<2x2xi32>
-// CHECK-DAG:  %[[VAL_12:.*]] = vector.insert_strided_slice %[[VAL_11]], %[[VAL_3]] {offsets = [0, 0], strides = [1, 1]} : vector<2x2xi32> into vector<4x2xi32>
-// CHECK-DAG:  %[[VAL_13:.*]] = vector.extract_strided_slice %[[VAL_0]] {offsets = [2, 0], sizes = [2, 8], strides = [1, 1]} : vector<4x8xi8> to vector<2x8xi8>
-// CHECK-DAG:  %[[VAL_14:.*]] = vector.extract_strided_slice %[[VAL_2]] {offsets = [2, 0], sizes = [2, 2], strides = [1, 1]} : vector<4x2xi32> to vector<2x2xi32>
+// CHECK-DAG:  %[[VAL_12:.*]] = vector.insert_strided_slice %[[VAL_11]], %[[VAL_3]][0:1][0:1] : vector<2x2xi32> into vector<4x2xi32>
+// CHECK-DAG:  %[[VAL_13:.*]] = vector.extract_strided_slice %[[VAL_0]][2:2:1][0:8:1] : vector<4x8xi8> to vector<2x8xi8>
+// CHECK-DAG:  %[[VAL_14:.*]] = vector.extract_strided_slice %[[VAL_2]][2:2:1][0:2:1] : vector<4x2xi32> to vector<2x2xi32>
 // CHECK-DAG:  %[[VAL_15:.*]] = vector.shape_cast %[[VAL_13]] : vector<2x8xi8> to vector<16xi8>
 // CHECK-DAG:  %[[VAL_16:.*]] = vector.shape_cast %[[VAL_4]] : vector<2x8xi8> to vector<16xi8>
 // CHECK-DAG:  %[[VAL_17:.*]] = vector.shape_cast %[[VAL_14]] : vector<2x2xi32> to vector<4xi32>
 // CHECK-DAG:  %[[VAL_18:.*]] = arm_neon.intr.smmla %[[VAL_17]], %[[VAL_15]], %[[VAL_16]] : vector<16xi8> to vector<4xi32>
 // CHECK-DAG:  %[[VAL_19:.*]] = vector.shape_cast %[[VAL_18]] : vector<4xi32> to vector<2x2xi32>
-// CHECK-DAG:  %[[VAL_20:.*]] = vector.insert_strided_slice %[[VAL_19]], %[[VAL_12]] {offsets = [2, 0], strides = [1, 1]} : vector<2x2xi32> into vector<4x2xi32>
+// CHECK-DAG:  %[[VAL_20:.*]] = vector.insert_strided_slice %[[VAL_19]], %[[VAL_12]][2:1][0:1] : vector<2x2xi32> into vector<4x2xi32>
 // CHECK-DAG:  return %[[VAL_20]] : vector<4x2xi32>
 // CHECK-DAG:  }
 func.func @vector_arm_neon_mixed_unroll(%lhs: vector<4x8xi8>, %rhs: vector<2x8xi4>, %acc : vector<4x2xi32>) -> vector<4x2xi32> {
@@ -144,50 +144,50 @@ func.func @vector_arm_neon_unroll_incompatible_shape(%lhs: vector<4x12xi8>, %rhs
 // CHECK:  %[[VAL_3:.*]] = arith.constant dense<0> : vector<2x2xi32>
 // CHECK:  %[[VAL_4:.*]] = arith.constant dense<0> : vector<2x8xi8>
 // CHECK:  %[[VAL_5:.*]] = arith.constant dense<0> : vector<8xi32>
-// CHECK:  %[[VAL_6:.*]] = vector.extract_strided_slice %[[VAL_1]] {offsets = [0, 0], sizes = [2, 8], strides = [1, 1]} : vector<8x8xi8> to vector<2x8xi8>
-// CHECK:  %[[VAL_7:.*]] = vector.extract_strided_slice %[[VAL_2]] {offsets = [0], sizes = [2], strides = [1]} : vector<8xi32> to vector<2xi32>
-// CHECK:  %[[VAL_8:.*]] = vector.insert_strided_slice %[[VAL_0]], %[[VAL_4]] {offsets = [0, 0], strides = [1]} : vector<8xi8> into vector<2x8xi8>
-// CHECK:  %[[VAL_9:.*]] = vector.insert_strided_slice %[[VAL_7]], %[[VAL_3]] {offsets = [0, 0], strides = [1]} : vector<2xi32> into vector<2x2xi32>
+// CHECK:  %[[VAL_6:.*]] = vector.extract_strided_slice %[[VAL_1]][0:2:1][0:8:1] : vector<8x8xi8> to vector<2x8xi8>
+// CHECK:  %[[VAL_7:.*]] = vector.extract_strided_slice %[[VAL_2]][0:2:1] : vector<8xi32> to vector<2xi32>
+// CHECK:  %[[VAL_8:.*]] = vector.insert_strided_slice %[[VAL_0]], %[[VAL_4]][0][0:1] : vector<8xi8> into vector<2x8xi8>
+// CHECK:  %[[VAL_9:.*]] = vector.insert_strided_slice %[[VAL_7]], %[[VAL_3]][0][0:1] : vector<2xi32> into vector<2x2xi32>
 // CHECK:  %[[VAL_10:.*]] = vector.shape_cast %[[VAL_8]] : vector<2x8xi8> to vector<16xi8>
 // CHECK:  %[[VAL_11:.*]] = vector.shape_cast %[[VAL_6]] : vector<2x8xi8> to vector<16xi8>
 // CHECK:  %[[VAL_12:.*]] = vector.shape_cast %[[VAL_9]] : vector<2x2xi32> to vector<4xi32>
 // CHECK:  %[[VAL_13:.*]] = arm_neon.intr.smmla %[[VAL_12]], %[[VAL_10]], %[[VAL_11]] : vector<16xi8> to vector<4xi32>
 // CHECK:  %[[VAL_14:.*]] = vector.shape_cast %[[VAL_13]] : vector<4xi32> to vector<2x2xi32>
 // CHECK:  %[[VAL_15:.*]] = vector.extract %[[VAL_14]][0] : vector<2xi32> from vector<2x2xi32>
-// CHECK:  %[[VAL_16:.*]] = vector.insert_strided_slice %[[VAL_15]], %[[VAL_5]] {offsets = [0], strides = [1]} : vector<2xi32> into vector<8xi32>
-// CHECK:  %[[VAL_17:.*]] = vector.extract_strided_slice %[[VAL_1]] {offsets = [2, 0], sizes = [2, 8], strides = [1, 1]} : vector<8x8xi8> to vector<2x8xi8>
-// CHECK:  %[[VAL_18:.*]] = vector.extract_strided_slice %[[VAL_2]] {offsets = [2], sizes = [2], strides = [1]} : vector<8xi32> to vector<2xi32>
-// CHECK:  %[[VAL_19:.*]] = vector.insert_strided_slice %[[VAL_0]], %[[VAL_4]] {offsets = [0, 0], strides = [1]} : vector<8xi8> into vector<2x8xi8>
-// CHECK:  %[[VAL_20:.*]] = vector.insert_strided_slice %[[VAL_18]], %[[VAL_3]] {offsets = [0, 0], strides = [1]} : vector<2xi32> into vector<2x2xi32>
+// CHECK:  %[[VAL_16:.*]] = vector.insert_strided_slice %[[VAL_15]], %[[VAL_5]][0:1] : vector<2xi32> into vector<8xi32>
+// CHECK:  %[[VAL_17:.*]] = vector.extract_strided_slice %[[VAL_1]][2:2:1][0:8:1] : vector<8x8xi8> to vector<2x8xi8>
+// CHECK:  %[[VAL_18:.*]] = vector.extract_strided_slice %[[VAL_2]][2:2:1] : vector<8xi32> to vector<2xi32>
+// CHECK:  %[[VAL_19:.*]] = vector.insert_strided_slice %[[VAL_0]], %[[VAL_4]][0][0:1] : vector<8xi8> into vector<2x8xi8>
+// CHECK:  %[[VAL_20:.*]] = vector.insert_strided_slice %[[VAL_18]], %[[VAL_3]][0][0:1] : vector<2xi32> into vector<2x2xi32>
 // CHECK:  %[[VAL_21:.*]] = vector.shape_cast %[[VAL_19]] : vector<2x8xi8> to vector<16xi8>
 // CHECK:  %[[VAL_22:.*]] = vector.shape_cast %[[VAL_17]] : vector<2x8xi8> to vector<16xi8>
 // CHECK:  %[[VAL_23:.*]] = vector.shape_cast %[[VAL_20]] : vector<2x2xi32> to vector<4xi32>
 // CHECK:  %[[VAL_24:.*]] = arm_neon.intr.smmla %[[VAL_23]], %[[VAL_21]], %[[VAL_22]] : vector<16xi8> to vector<4xi32>
 // CHECK:  %[[VAL_25:.*]] = vector.shape_cast %[[VAL_24]] : vector<4xi32> to vector<2x2xi32>
 // CHECK:  %[[VAL_26:.*]] = vector.extract %[[VAL_25]][0] : vector<2xi32> from vector<2x2xi32>
-// CHECK:  %[[VAL_27:.*]] = vector.insert_strided_slice %[[VAL_26]], %[[VAL_16]] {offsets = [2], strides = [1]} : vector<2xi32> into vector<8xi32>
-// CHECK:  %[[VAL_28:.*]] = vector.extract_strided_slice %[[VAL_1]] {offsets = [4, 0], sizes = [2, 8], strides = [1, 1]} : vector<8x8xi8> to vector<2x8xi8>
-// CHECK:  %[[VAL_29:.*]] = vector.extract_strided_slice %[[VAL_2]] {offsets = [4], sizes = [2], strides = [1]} : vector<8xi32> to vector<2xi32>
-// CHECK:  %[[VAL_30:.*]] = vector.insert_strided_slice %[[VAL_0]], %[[VAL_4]] {offsets = [0, 0], strides = [1]} : vector<8xi8> into vector<2x8xi8>
-// CHECK:  %[[VAL_31:.*]] = vector.insert_strided_slice %[[VAL_29]], %[[VAL_3]] {offsets = [0, 0], strides = [1]} : vector<2xi32> into vector<2x2xi32>
+// CHECK:  %[[VAL_27:.*]] = vector.insert_strided_slice %[[VAL_26]], %[[VAL_16]][2:1] : vector<2xi32> into vector<8xi32>
+// CHECK:  %[[VAL_28:.*]] = vector.extract_strided_slice %[[VAL_1]][4:2:1][0:8:1] : vector<8x8xi8> to vector<2x8xi8>
+// CHECK:  %[[VAL_29:.*]] = vector.extract_strided_slice %[[VAL_2]][4:2:1] : vector<8xi32> to vector<2xi32>
+// CHECK:  %[[VAL_30:.*]] = vector.insert_strided_slice %[[VAL_0]], %[[VAL_4]][0][0:1] : vector<8xi8> into vector<2x8xi8>
+// CHECK:  %[[VAL_31:.*]] = vector.insert_strided_slice %[[VAL_29]], %[[VAL_3]][0][0:1] : vector<2xi32> into vector<2x2xi32>
 // CHECK:  %[[VAL_32:.*]] = vector.shape_cast %[[VAL_30]] : vector<2x8xi8> to vector<16xi8>
 // CHECK:  %[[VAL_33:.*]] = vector.shape_cast %[[VAL_28]] : vector<2x8xi8> to vector<16xi8>
 // CHECK:  %[[VAL_34:.*]] = vector.shape_cast %[[VAL_31]] : vector<2x2xi32> to vector<4xi32>
 // CHECK:  %[[VAL_35:.*]] = arm_neon.intr.smmla %[[VAL_34]], %[[VAL_32]], %[[VAL_33]] : vector<16xi8> to vector<4xi32>
 // CHECK:  %[[VAL_36:.*]] = vector.shape_cast %[[VAL_35]] : vector<4xi32> to vector<2x2xi32>
 // CHECK:  %[[VAL_37:.*]] = vector.extract %[[VAL_36]][0] : vector<2xi32> from vector<2x2xi32>
-// CHECK:  %[[VAL_38:.*]] = vector.insert_strided_slice %[[VAL_37]], %[[VAL_27]] {offsets = [4], strides = [1]} : vector<2xi32> into vector<8xi32>
-// CHECK:  %[[VAL_39:.*]] = vector.extract_strided_slice %[[VAL_1]] {offsets = [6, 0], sizes = [2, 8], strides = [1, 1]} : vector<8x8xi8> to vector<2x8xi8>
-// CHECK:  %[[VAL_40:.*]] = vector.extract_strided_slice %[[VAL_2]] {offsets = [6], sizes = [2], strides = [1]} : vector<8xi32> to vector<2xi32>
-// CHECK:  %[[VAL_41:.*]] = vector.insert_strided_slice %[[VAL_0]], %[[VAL_4]] {offsets = [0, 0], strides = [1]} : vector<8xi8> into vector<2x8xi8>
-// CHECK:  %[[VAL_42:.*]] = vector.insert_strided_slice %[[VAL_40]], %[[VAL_3]] {offsets = [0, 0], strides = [1]} : vector<2xi32> into vector<2x2xi32>
+// CHECK:  %[[VAL_38:.*]] = vector.insert_strided_slice %[[VAL_37]], %[[VAL_27]][4:1] : vector<2xi32> into vector<8xi32>
+// CHECK:  %[[VAL_39:.*]] = vector.extract_strided_slice %[[VAL_1]][6:2:1][0:8:1] : vector<8x8xi8> to vector<2x8xi8>
+// CHECK:  %[[VAL_40:.*]] = vector.extract_strided_slice %[[VAL_2]][6:2:1] : vector<8xi32> to vector<2xi32>
+// CHECK:  %[[VAL_41:.*]] = vector.insert_strided_slice %[[VAL_0]], %[[VAL_4]][0][0:1] : vector<8xi8> into vector<2x8xi8>
+// CHECK:  %[[VAL_42:.*]] = vector.insert_strided_slice %[[VAL_40]], %[[VAL_3]][0][0:1] : vector<2xi32> into vector<2x2xi32>
 // CHECK:  %[[VAL_43:.*]] = vector.shape_cast %[[VAL_41]] : vector<2x8xi8> to vector<16xi8>
 // CHECK:  %[[VAL_44:.*]] = vector.shape_cast %[[VAL_39]] : vector<2x8xi8> to vector<16xi8>
 // CHECK:  %[[VAL_45:.*]] = vector.shape_cast %[[VAL_42]] : vector<2x2xi32> to vector<4xi32>
 // CHECK:  %[[VAL_46:.*]] = arm_neon.intr.smmla %[[VAL_45]], %[[VAL_43]], %[[VAL_44]] : vector<16xi8> to vector<4xi32>
 // CHECK:  %[[VAL_47:.*]] = vector.shape_cast %[[VAL_46]] : vector<4xi32> to vector<2x2xi32>
 // CHECK:  %[[VAL_48:.*]] = vector.extract %[[VAL_47]][0] : vector<2xi32> from vector<2x2xi32>
-// CHECK:  %[[VAL_49:.*]] = vector.insert_strided_slice %[[VAL_48]], %[[VAL_38]] {offsets = [6], strides = [1]} : vector<2xi32> into vector<8xi32>
+// CHECK:  %[[VAL_49:.*]] = vector.insert_strided_slice %[[VAL_48]], %[[VAL_38]][6:1] : vector<2xi32> into vector<8xi32>
 // CHECK:  return %[[VAL_49]] : vector<8xi32>
 // CHECK:  }
 func.func @vector_arm_neon_vecmat_unroll(%lhs: vector<8xi8>, %rhs: vector<8x8xi8>, %acc : vector<8xi32>) -> vector<8xi32> {
@@ -206,50 +206,50 @@ func.func @vector_arm_neon_vecmat_unroll(%lhs: vector<8xi8>, %rhs: vector<8x8xi8
 // CHECK:  %[[VAL_3:.*]] = arith.constant dense<0> : vector<2x2xi32>
 // CHECK:  %[[VAL_4:.*]] = arith.constant dense<0> : vector<2x8xi8>
 // CHECK:  %[[VAL_5:.*]] = arith.constant dense<0> : vector<1x8xi32>
-// CHECK:  %[[VAL_6:.*]] = vector.extract_strided_slice %[[VAL_1]] {offsets = [0, 0], sizes = [2, 8], strides = [1, 1]} : vector<8x8xi8> to vector<2x8xi8>
-// CHECK:  %[[VAL_7:.*]] = vector.extract_strided_slice %[[VAL_2]] {offsets = [0, 0], sizes = [1, 2], strides = [1, 1]} : vector<1x8xi32> to vector<1x2xi32>
-// CHECK:  %[[VAL_8:.*]] = vector.insert_strided_slice %[[VAL_0]], %[[VAL_4]] {offsets = [0, 0], strides = [1, 1]} : vector<1x8xi8> into vector<2x8xi8>
-// CHECK:  %[[VAL_9:.*]] = vector.insert_strided_slice %[[VAL_7]], %[[VAL_3]] {offsets = [0, 0], strides = [1, 1]} : vector<1x2xi32> into vector<2x2xi32>
+// CHECK:  %[[VAL_6:.*]] = vector.extract_strided_slice %[[VAL_1]][0:2:1][0:8:1] : vector<8x8xi8> to vector<2x8xi8>
+// CHECK:  %[[VAL_7:.*]] = vector.extract_strided_slice %[[VAL_2]][0:1:1][0:2:1] : vector<1x8xi32> to vector<1x2xi32>
+// CHECK:  %[[VAL_8:.*]] = vector.insert_strided_slice %[[VAL_0]], %[[VAL_4]][0:1][0:1] : vector<1x8xi8> into vector<2x8xi8>
+// CHECK:  %[[VAL_9:.*]] = vector.insert_strided_slice %[[VAL_7]], %[[VAL_3]][0:1][0:1] : vector<1x2xi32> into vector<2x2xi32>
 // CHECK:  %[[VAL_10:.*]] = vector.shape_cast %[[VAL_8]] : vector<2x8xi8> to vector<16xi8>
 // CHECK:  %[[VAL_11:.*]] = vector.shape_cast %[[VAL_6]] : vector<2x8xi8> to vector<16xi8>
 // CHECK:  %[[VAL_12:.*]] = vector.shape_cast %[[VAL_9]] : vector<2x2xi32> to vector<4xi32>
 // CHECK:  %[[VAL_13:.*]] = arm_neon.intr.smmla %[[VAL_12]], %[[VAL_10]], %[[VAL_11]] : vector<16xi8> to vector<4xi32>
 // CHECK:  %[[VAL_14:.*]] = vector.shape_cast %[[VAL_13]] : vector<4xi32> to vector<2x2xi32>
 // CHECK:  %[[VAL_15:.*]] = vector.extract %[[VAL_14]][0] : vector<2xi32> from vector<2x2xi32>
-// CHECK:  %[[VAL_16:.*]] = vector.insert_strided_slice %[[VAL_15]], %[[VAL_5]] {offsets = [0, 0], strides = [1]} : vector<2xi32> into vector<1x8xi32>
-// CHECK:  %[[VAL_17:.*]] = vector.extract_strided_slice %[[VAL_1]] {offsets = [2, 0], sizes = [2, 8], strides = [1, 1]} : vector<8x8xi8> to vector<2x8xi8>
-// CHECK:  %[[VAL_18:.*]] = vector.extract_strided_slice %[[VAL_2]] {offsets = [0, 2], sizes = [1, 2], strides = [1, 1]} : vector<1x8xi32> to vector<1x2xi32>
-// CHECK:  %[[VAL_19:.*]] = vector.insert_strided_slice %[[VAL_0]], %[[VAL_4]] {offsets = [0, 0], strides = [1, 1]} : vector<1x8xi8> into vector<2x8xi8>
-// CHECK:  %[[VAL_20:.*]] = vector.insert_strided_slice %[[VAL_18]], %[[VAL_3]] {offsets = [0, 0], strides = [1, 1]} : vector<1x2xi32> into vector<2x2xi32>
+// CHECK:  %[[VAL_16:.*]] = vector.insert_strided_slice %[[VAL_15]], %[[VAL_5]][0][0:1] : vector<2xi32> into vector<1x8xi32>
+// CHECK:  %[[VAL_17:.*]] = vector.extract_strided_slice %[[VAL_1]][2:2:1][0:8:1] : vector<8x8xi8> to vector<2x8xi8>
+// CHECK:  %[[VAL_18:.*]] = vector.extract_strided_slice %[[VAL_2]][0:1:1][2:2:1] : vector<1x8xi32> to vector<1x2xi32>
+// CHECK:  %[[VAL_19:.*]] = vector.insert_strided_slice %[[VAL_0]], %[[VAL_4]][0:1][0:1] : vector<1x8xi8> into vector<2x8xi8>
+// CHECK:  %[[VAL_20:.*]] = vector.insert_strided_slice %[[VAL_18]], %[[VAL_3]][0:1][0:1] : vector<1x2xi32> into vector<2x2xi32>
 // CHECK:  %[[VAL_21:.*]] = vector.shape_cast %[[VAL_19]] : vector<2x8xi8> to vector<16xi8>
 // CHECK:  %[[VAL_22:.*]] = vector.shape_cast %[[VAL_17]] : vector<2x8xi8> to vector<16xi8>
 // CHECK:  %[[VAL_23:.*]] = vector.shape_cast %[[VAL_20]] : vector<2x2xi32> to vector<4xi32>
 // CHECK:  %[[VAL_24:.*]] = arm_neon.intr.smmla %[[VAL_23]], %[[VAL_21]], %[[VAL_22]] : vector<16xi8> to vector<4xi32>
 // CHECK:  %[[VAL_25:.*]] = vector.shape_cast %[[VAL_24]] : vector<4xi32> to vector<2x2xi32>
 // CHECK:  %[[VAL_26:.*]] = vector.extract %[[VAL_25]][0] : vector<2xi32> from vector<2x2xi32>
-// CHECK:  %[[VAL_27:.*]] = vector.insert_strided_slice %[[VAL_26]], %[[VAL_16]] {offsets = [0, 2], strides = [1]} : vector<2xi32> into vector<1x8xi32>
-// CHECK:  %[[VAL_28:.*]] = vector.extract_strided_slice %[[VAL_1]] {offsets = [4, 0], sizes = [2, 8], strides = [1, 1]} : vector<8x8xi8> to vector<2x8xi8>
-// CHECK:  %[[VAL_29:.*]] = vector.extract_strided_slice %[[VAL_2]] {offsets = [0, 4], sizes = [1, 2], strides = [1, 1]} : vector<1x8xi32> to vector<1x2xi32>
-// CHECK:  %[[VAL_30:.*]] = vector.insert_strided_slice %[[VAL_0]], %[[VAL_4]] {offsets = [0, 0], strides = [1, 1]} : vector<1x8xi8> into vector<2x8xi8>
-// CHECK:  %[[VAL_31:.*]] = vector.insert_strided_slice %[[VAL_29]], %[[VAL_3]] {offsets = [0, 0], strides = [1, 1]} : vector<1x2xi32> into vector<2x2xi32>
+// CHECK:  %[[VAL_27:.*]] = vector.insert_strided_slice %[[VAL_26]], %[[VAL_16]][0][2:1] : vector<2xi32> into vector<1x8xi32>
+// CHECK:  %[[VAL_28:.*]] = vector.extract_strided_slice %[[VAL_1]][4:2:1][0:8:1] : vector<8x8xi8> to vector<2x8xi8>
+// CHECK:  %[[VAL_29:.*]] = vector.extract_strided_slice %[[VAL_2]][0:1:1][4:2:1] : vector<1x8xi32> to vector<1x2xi32>
+// CHECK:  %[[VAL_30:.*]] = vector.insert_strided_slice %[[VAL_0]], %[[VAL_4]][0:1][0:1] : vector<1x8xi8> into vector<2x8xi8>
+// CHECK:  %[[VAL_31:.*]] = vector.insert_strided_slice %[[VAL_29]], %[[VAL_3]][0:1][0:1] : vector<1x2xi32> into vector<2x2xi32>
 // CHECK:  %[[VAL_32:.*]] = vector.shape_cast %[[VAL_30]] : vector<2x8xi8> to vector<16xi8>
 // CHECK:  %[[VAL_33:.*]] = vector.shape_cast %[[VAL_28]] : vector<2x8xi8> to vector<16xi8>
 // CHECK:  %[[VAL_34:.*]] = vector.shape_cast %[[VAL_31]] : vector<2x2xi32> to vector<4xi32>
 // CHECK:  %[[VAL_35:.*]] = arm_neon.intr.smmla %[[VAL_34]], %[[VAL_32]], %[[VAL_33]] : vector<16xi8> to vector<4xi32>
 // CHECK:  %[[VAL_36:.*]] = vector.shape_cast %[[VAL_35]] : vector<4xi32> to vector<2x2xi32>
 // CHECK:  %[[VAL_37:.*]] = vector.extract %[[VAL_36]][0] : vector<2xi32> from vector<2x2xi32>
-// CHECK:  %[[VAL_38:.*]] = vector.insert_strided_slice %[[VAL_37]], %[[VAL_27]] {offsets = [0, 4], strides = [1]} : vector<2xi32> into vector<1x8xi32>
-// CHECK:  %[[VAL_39:.*]] = vector.extract_strided_slice %[[VAL_1]] {offsets = [6, 0], sizes = [2, 8], strides = [1, 1]} : vector<8x8xi8> to vector<2x8xi8>
-// CHECK:  %[[VAL_40:.*]] = vector.extract_strided_slice %[[VAL_2]] {offsets = [0, 6], sizes = [1, 2], strides = [1, 1]} : vector<1x8xi32> to vector<1x2xi32>
-// CHECK:  %[[VAL_41:.*]] = vector.insert_strided_slice %[[VAL_0]], %[[VAL_4]] {offsets = [0, 0], strides = [1, 1]} : vector<1x8xi8> into vector<2x8xi8>
-// CHECK:  %[[VAL_42:.*]] = vector.insert_strided_slice %[[VAL_40]], %[[VAL_3]] {offsets = [0, 0], strides = [1, 1]} : vector<1x2xi32> into vector<2x2xi32>
+// CHECK:  %[[VAL_38:.*]] = vector.insert_strided_slice %[[VAL_37]], %[[VAL_27]][0][4:1] : vector<2xi32> into vector<1x8xi32>
+// CHECK:  %[[VAL_39:.*]] = vector.extract_strided_slice %[[VAL_1]][6:2:1][0:8:1] : vector<8x8xi8> to vector<2x8xi8>
+// CHECK:  %[[VAL_40:.*]] = vector.extract_strided_slice %[[VAL_2]][0:1:1][6:2:1] : vector<1x8xi32> to vector<1x2xi32>
+// CHECK:  %[[VAL_41:.*]] = vector.insert_strided_slice %[[VAL_0]], %[[VAL_4]][0:1][0:1] : vector<1x8xi8> into vector<2x8xi8>
+// CHECK:  %[[VAL_42:.*]] = vector.insert_strided_slice %[[VAL_40]], %[[VAL_3]][0:1][0:1] : vector<1x2xi32> into vector<2x2xi32>
 // CHECK:  %[[VAL_43:.*]] = vector.shape_cast %[[VAL_41]] : vector<2x8xi8> to vector<16xi8>
 // CHECK:  %[[VAL_44:.*]] = vector.shape_cast %[[VAL_39]] : vector<2x8xi8> to vector<16xi8>
 // CHECK:  %[[VAL_45:.*]] = vector.shape_cast %[[VAL_42]] : vector<2x2xi32> to vector<4xi32>
 // CHECK:  %[[VAL_46:.*]] = arm_neon.intr.smmla %[[VAL_45]], %[[VAL_43]], %[[VAL_44]] : vector<16xi8> to vector<4xi32>
 // CHECK:  %[[VAL_47:.*]] = vector.shape_cast %[[VAL_46]] : vector<4xi32> to vector<2x2xi32>
 // CHECK:  %[[VAL_48:.*]] = vector.extract %[[VAL_47]][0] : vector<2xi32> from vector<2x2xi32>
-// CHECK:  %[[VAL_49:.*]] = vector.insert_strided_slice %[[VAL_48]], %[[VAL_38]] {offsets = [0, 6], strides = [1]} : vector<2xi32> into vector<1x8xi32>
+// CHECK:  %[[VAL_49:.*]] = vector.insert_strided_slice %[[VAL_48]], %[[VAL_38]][0][6:1] : vector<2xi32> into vector<1x8xi32>
 // CHECK:  return %[[VAL_49]] : vector<1x8xi32>
 // CHECK:  }
 func.func @vector_arm_neon_vecmat_unroll_leading_dim(%lhs: vector<1x8xi8>, %rhs: vector<8x8xi8>, %acc : vector<1x8xi32>) -> vector<1x8xi32> {
@@ -278,14 +278,14 @@ func.func @vector_arm_neon_matvec(%lhs: vector<8x8xi8>, %rhs: vector<8xi8>, %acc
 // CHECK-SAME: %[[VAL_1:.*]]: vector<2x16xi4>,
 // CHECK-SAME: %[[VAL_2:.*]]: vector<2x2xi32>) -> vector<2x2xi32> {
 // CHECK:  %[[VAL_3:.*]] = arith.extsi %[[VAL_1]] : vector<2x16xi4> to vector<2x16xi8>
-// CHECK:  %[[VAL_4:.*]] = vector.extract_strided_slice %[[VAL_0]] {offsets = [0, 0], sizes = [2, 8], strides = [1, 1]} : vector<2x16xi8> to vector<2x8xi8>
-// CHECK:  %[[VAL_5:.*]] = vector.extract_strided_slice %[[VAL_3]] {offsets = [0, 0], sizes = [2, 8], strides = [1, 1]} : vector<2x16xi8> to vector<2x8xi8>
+// CHECK:  %[[VAL_4:.*]] = vector.extract_strided_slice %[[VAL_0]][0:2:1][0:8:1] : vector<2x16xi8> to vector<2x8xi8>
+// CHECK:  %[[VAL_5:.*]] = vector.extract_strided_slice %[[VAL_3]][0:2:1][0:8:1] : vector<2x16xi8> to vector<2x8xi8>
 // CHECK:  %[[VAL_6:.*]] = vector.shape_cast %[[VAL_4]] : vector<2x8xi8> to vector<16xi8>
 // CHECK:  %[[VAL_7:.*]] = vector.shape_cast %[[VAL_5]] : vector<2x8xi8> to vector<16xi8>
 // CHECK:  %[[VAL_8:.*]] = vector.shape_cast %[[VAL_2]] : vector<2x2xi32> to vector<4xi32>
 // CHECK:  %[[KACC_0:.*]] = arm_neon.intr.smmla %[[VAL_8]], %[[VAL_6]], %[[VAL_7]] : vector<16xi8> to vector<4xi32>
-// CHECK:  %[[VAL_10:.*]] = vector.extract_strided_slice %[[VAL_0]] {offsets = [0, 8], sizes = [2, 8], strides = [1, 1]} : vector<2x16xi8> to vector<2x8xi8>
-// CHECK:  %[[VAL_11:.*]] = vector.extract_strided_slice %[[VAL_3]] {offsets = [0, 8], sizes = [2, 8], strides = [1, 1]} : vector<2x16xi8> to vector<2x8xi8>
+// CHECK:  %[[VAL_10:.*]] = vector.extract_strided_slice %[[VAL_0]][0:2:1][8:8:1] : vector<2x16xi8> to vector<2x8xi8>
+// CHECK:  %[[VAL_11:.*]] = vector.extract_strided_slice %[[VAL_3]][0:2:1][8:8:1] : vector<2x16xi8> to vector<2x8xi8>
 // CHECK:  %[[VAL_12:.*]] = vector.shape_cast %[[VAL_10]] : vector<2x8xi8> to vector<16xi8>
 // CHECK:  %[[VAL_13:.*]] = vector.shape_cast %[[VAL_11]] : vector<2x8xi8> to vector<16xi8>
 // CHECK:  %[[KACC_1:.*]] = arm_neon.intr.smmla %[[KACC_0]], %[[VAL_12]], %[[VAL_13]] : vector<16xi8> to vector<4xi32>
@@ -309,44 +309,44 @@ func.func @vector_arm_neon_k_unroll(%lhs: vector<2x16xi8>, %rhs: vector<2x16xi4>
 // CHECK:  %[[VAL_4:.*]] = arith.constant dense<0> : vector<2x8xi8>
 // CHECK:  %[[VAL_5:.*]] = arith.constant dense<0> : vector<1x2xi32>
 // CHECK:  %[[VAL_6:.*]] = arith.extsi %[[VAL_1]] : vector<2x32xi4> to vector<2x32xi8>
-// CHECK:  %[[VAL_7:.*]] = vector.extract_strided_slice %[[VAL_0]] {offsets = [0, 0], sizes = [1, 8], strides = [1, 1]} : vector<1x32xi8> to vector<1x8xi8>
-// CHECK:  %[[VAL_8:.*]] = vector.extract_strided_slice %[[VAL_6]] {offsets = [0, 0], sizes = [2, 8], strides = [1, 1]} : vector<2x32xi8> to vector<2x8xi8>
-// CHECK:  %[[VAL_9:.*]] = vector.insert_strided_slice %[[VAL_7]], %[[VAL_4]] {offsets = [0, 0], strides = [1, 1]} : vector<1x8xi8> into vector<2x8xi8>
-// CHECK:  %[[VAL_10:.*]] = vector.insert_strided_slice %[[VAL_2]], %[[VAL_3]] {offsets = [0, 0], strides = [1, 1]} : vector<1x2xi32> into vector<2x2xi32>
+// CHECK:  %[[VAL_7:.*]] = vector.extract_strided_slice %[[VAL_0]][0:1:1][0:8:1] : vector<1x32xi8> to vector<1x8xi8>
+// CHECK:  %[[VAL_8:.*]] = vector.extract_strided_slice %[[VAL_6]][0:2:1][0:8:1] : vector<2x32xi8> to vector<2x8xi8>
+// CHECK:  %[[VAL_9:.*]] = vector.insert_strided_slice %[[VAL_7]], %[[VAL_4]][0:1][0:1] : vector<1x8xi8> into vector<2x8xi8>
+// CHECK:  %[[VAL_10:.*]] = vector.insert_strided_slice %[[VAL_2]], %[[VAL_3]][0:1][0:1] : vector<1x2xi32> into vector<2x2xi32>
 // CHECK:  %[[VAL_11:.*]] = vector.shape_cast %[[VAL_9]] : vector<2x8xi8> to vector<16xi8>
 // CHECK:  %[[VAL_12:.*]] = vector.shape_cast %[[VAL_8]] : vector<2x8xi8> to vector<16xi8>
 // CHECK:  %[[VAL_13:.*]] = vector.shape_cast %[[VAL_10]] : vector<2x2xi32> to vector<4xi32>
 // CHECK:  %[[KACC_0:.*]] = arm_neon.intr.smmla %[[VAL_13]], %[[VAL_11]], %[[VAL_12]] : vector<16xi8> to vector<4xi32>
 // CHECK:  %[[VAL_15:.*]] = vector.shape_cast %[[KACC_0]] : vector<4xi32> to vector<2x2xi32>
 // CHECK:  %[[VAL_16:.*]] = vector.extract %[[VAL_15]][0] : vector<2xi32> from vector<2x2xi32>
-// CHECK:  %[[VAL_17:.*]] = vector.insert_strided_slice %[[VAL_16]], %[[VAL_5]] {offsets = [0, 0], strides = [1]} : vector<2xi32> into vector<1x2xi32>
-// CHECK:  %[[VAL_18:.*]] = vector.extract_strided_slice %[[VAL_0]] {offsets = [0, 8], sizes = [1, 8], strides = [1, 1]} : vector<1x32xi8> to vector<1x8xi8>
-// CHECK:  %[[VAL_19:.*]] = vector.extract_strided_slice %[[VAL_6]] {offsets = [0, 8], sizes = [2, 8], strides = [1, 1]} : vector<2x32xi8> to vector<2x8xi8>
-// CHECK:  %[[VAL_20:.*]] = vector.insert_strided_slice %[[VAL_18]], %[[VAL_4]] {offsets = [0, 0], strides = [1, 1]} : vector<1x8xi8> into vector<2x8xi8>
+// CHECK:  %[[VAL_17:.*]] = vector.insert_strided_slice %[[VAL_16]], %[[VAL_5]][0][0:1] : vector<2xi32> into vector<1x2xi32>
+// CHECK:  %[[VAL_18:.*]] = vector.extract_strided_slice %[[VAL_0]][0:1:1][8:8:1] : vector<1x32xi8> to vector<1x8xi8>
+// CHECK:  %[[VAL_19:.*]] = vector.extract_strided_slice %[[VAL_6]][0:2:1][8:8:1] : vector<2x32xi8> to vector<2x8xi8>
+// CHECK:  %[[VAL_20:.*]] = vector.insert_strided_slice %[[VAL_18]], %[[VAL_4]][0:1][0:1] : vector<1x8xi8> into vector<2x8xi8>
 // CHECK:  %[[VAL_21:.*]] = vector.shape_cast %[[VAL_20]] : vector<2x8xi8> to vector<16xi8>
 // CHECK:  %[[VAL_22:.*]] = vector.shape_cast %[[VAL_19]] : vector<2x8xi8> to vector<16xi8>
 // CHECK:  %[[KACC_1:.*]] = arm_neon.intr.smmla %[[KACC_0]], %[[VAL_21]], %[[VAL_22]] : vector<16xi8> to vector<4xi32>
 // CHECK:  %[[VAL_24:.*]] = vector.shape_cast %[[KACC_1]] : vector<4xi32> to vector<2x2xi32>
 // CHECK:  %[[VAL_25:.*]] = vector.extract %[[VAL_24]][0] : vector<2xi32> from vector<2x2xi32>
-// CHECK:  %[[VAL_26:.*]] = vector.insert_strided_slice %[[VAL_25]], %[[VAL_17]] {offsets = [0, 0], strides = [1]} : vector<2xi32> into vector<1x2xi32>
-// CHECK:  %[[VAL_27:.*]] = vector.extract_strided_slice %[[VAL_0]] {offsets = [0, 16], sizes = [1, 8], strides = [1, 1]} : vector<1x32xi8> to vector<1x8xi8>
-// CHECK:  %[[VAL_28:.*]] = vector.extract_strided_slice %[[VAL_6]] {offsets = [0, 16], sizes = [2, 8], strides = [1, 1]} : vector<2x32xi8> to vector<2x8xi8>
-// CHECK:  %[[VAL_29:.*]] = vector.insert_strided_slice %[[VAL_27]], %[[VAL_4]] {offsets = [0, 0], strides = [1, 1]} : vector<1x8xi8> into vector<2x8xi8>
+// CHECK:  %[[VAL_26:.*]] = vector.insert_strided_slice %[[VAL_25]], %[[VAL_17]][0][0:1] : vector<2xi32> into vector<1x2xi32>
+// CHECK:  %[[VAL_27:.*]] = vector.extract_strided_slice %[[VAL_0]][0:1:1][16:8:1] : vector<1x32xi8> to vector<1x8xi8>
+// CHECK:  %[[VAL_28:.*]] = vector.extract_strided_slice %[[VAL_6]][0:2:1][16:8:1] : vector<2x32xi8> to vector<2x8xi8>
+// CHECK:  %[[VAL_29:.*]] = vector.insert_strided_slice %[[VAL_27]], %[[VAL_4]][0:1][0:1] : vector<1x8xi8> into vector<2x8xi8>
 // CHECK:  %[[VAL_30:.*]] = vector.shape_cast %[[VAL_29]] : vector<2x8xi8> to vector<16xi8>
 // CHECK:  %[[VAL_31:.*]] = vector.shape_cast %[[VAL_28]] : vector<2x8xi8> to vector<16xi8>
 // CHECK:  %[[KACC_2:.*]] = arm_neon.intr.smmla %[[KACC_1]], %[[VAL_30]], %[[VAL_31]] : vector<16xi8> to vector<4xi32>
 // CHECK:  %[[VAL_33:.*]] = vector.shape_cast %[[KACC_2]] : vector<4xi32> to vector<2x2xi32>
 // CHECK:  %[[VAL_34:.*]] = vector.extract %[[VAL_33]][0] : vector<2xi32> from vector<2x2xi32>
-// CHECK:  %[[VAL_35:.*]] = vector.insert_strided_slice %[[VAL_34]], %[[VAL_26]] {offsets = [0, 0], strides = [1]} : vector<2xi32> into vector<1x2xi32>
-// CHECK:  %[[VAL_36:.*]] = vector.extract_strided_slice %[[VAL_0]] {offsets = [0, 24], sizes = [1, 8], strides = [1, 1]} : vector<1x32xi8> to vector<1x8xi8>
-// CHECK:  %[[VAL_37:.*]] = vector.extract_strided_slice %[[VAL_6]] {offsets = [0, 24], sizes = [2, 8], strides = [1, 1]} : vector<2x32xi8> to vector<2x8xi8>
-// CHECK:  %[[VAL_38:.*]] = vector.insert_strided_slice %[[VAL_36]], %[[VAL_4]] {offsets = [0, 0], strides = [1, 1]} : vector<1x8xi8> into vector<2x8xi8>
+// CHECK:  %[[VAL_35:.*]] = vector.insert_strided_slice %[[VAL_34]], %[[VAL_26]][0][0:1] : vector<2xi32> into vector<1x2xi32>
+// CHECK:  %[[VAL_36:.*]] = vector.extract_strided_slice %[[VAL_0]][0:1:1][24:8:1] : vector<1x32xi8> to vector<1x8xi8>
+// CHECK:  %[[VAL_37:.*]] = vector.extract_strided_slice %[[VAL_6]][0:2:1][24:8:1] : vector<2x32xi8> to vector<2x8xi8>
+// CHECK:  %[[VAL_38:.*]] = vector.insert_strided_slice %[[VAL_36]], %[[VAL_4]][0:1][0:1] : vector<1x8xi8> into vector<2x8xi8>
 // CHECK:  %[[VAL_39:.*]] = vector.shape_cast %[[VAL_38]] : vector<2x8xi8> to vector<16xi8>
 // CHECK:  %[[VAL_40:.*]] = vector.shape_cast %[[VAL_37]] : vector<2x8xi8> to vector<16xi8>
 // CHECK:  %[[KACC_3:.*]] = arm_neon.intr.smmla %[[KACC_2]], %[[VAL_39]], %[[VAL_40]] : vector<16xi8> to vector<4xi32>
 // CHECK:  %[[VAL_42:.*]] = vector.shape_cast %[[KACC_3]] : vector<4xi32> to vector<2x2xi32>
 // CHECK:  %[[VAL_43:.*]] = vector.extract %[[VAL_42]][0] : vector<2xi32> from vector<2x2xi32>
-// CHECK:  %[[VAL_44:.*]] = vector.insert_strided_slice %[[VAL_43]], %[[VAL_35]] {offsets = [0, 0], strides = [1]} : vector<2xi32> into vector<1x2xi32>
+// CHECK:  %[[VAL_44:.*]] = vector.insert_strided_slice %[[VAL_43]], %[[VAL_35]][0][0:1] : vector<2xi32> into vector<1x2xi32>
 // CHECK:  return %[[VAL_44]] : vector<1x2xi32>
 func.func @vector_arm_neon_k_unroll_vecmat(%lhs: vector<1x32xi8>, %rhs: vector<2x32xi4>, %acc : vector<1x2xi32>) -> vector<1x2xi32> {
   %lhs_extsi = arith.extsi %lhs : vector<1x32xi8> to vector<1x32xi32>
diff --git a/mlir/test/Dialect/ArmNeon/roundtrip.mlir b/mlir/test/Dialect/ArmNeon/roundtrip.mlir
index b5df0ffa8105c..f7b3e9257f566 100644
--- a/mlir/test/Dialect/ArmNeon/roundtrip.mlir
+++ b/mlir/test/Dialect/ArmNeon/roundtrip.mlir
@@ -5,12 +5,12 @@ func.func @arm_neon_smull(%a: vector<8xi8>, %b: vector<8xi8>)
     -> (vector<8xi16>, vector<4xi32>, vector<2xi64>) {
   // CHECK: arm_neon.intr.smull {{.*}}: vector<8xi8> to vector<8xi16>
   %0 = arm_neon.intr.smull %a, %b : vector<8xi8> to vector<8xi16>
-  %00 = vector.extract_strided_slice %0 {offsets = [3], sizes = [4], strides = [1]}:
+  %00 = vector.extract_strided_slice %0[3:4:1]:
     vector<8xi16> to vector<4xi16>
 
   // CHECK: arm_neon.intr.smull {{.*}}: vector<4xi16> to vector<4xi32>
   %1 = arm_neon.intr.smull %00, %00 : vector<4xi16> to vector<4xi32>
-  %11 = vector.extract_strided_slice %1 {offsets = [1], sizes = [2], strides = [1]}:
+  %11 = vector.extract_strided_slice %1[1:2:1]:
     vector<4xi32> to vector<2xi32>
 
   // CHECK: arm_neon.intr.smull {{.*}}: vector<2xi32> to vector<2xi64>
diff --git a/mlir/test/Dialect/GPU/subgroup-redule-lowering.mlir b/mlir/test/Dialect/GPU/subgroup-redule-lowering.mlir
index f04a01ffe75d3..2ae6fcca81ce3 100644
--- a/mlir/test/Dialect/GPU/subgroup-redule-lowering.mlir
+++ b/mlir/test/Dialect/GPU/subgroup-redule-lowering.mlir
@@ -16,12 +16,12 @@ gpu.module @kernels {
   // CHECK-SHFL-LABEL: gpu.func @kernel0(
   gpu.func @kernel0(%arg0: vector<5xf16>) kernel {
     // CHECK-SUB: %[[VZ:.+]] = arith.constant dense<0.0{{.*}}> : vector<5xf16>
-    // CHECK-SUB: %[[E0:.+]] = vector.extract_strided_slice %[[ARG0]] {offsets = [0], sizes = [2], strides = [1]} : vector<5xf16> to vector<2xf16>
+    // CHECK-SUB: %[[E0:.+]] = vector.extract_strided_slice %[[ARG0]][0:2:1] : vector<5xf16> to vector<2xf16>
     // CHECK-SUB: %[[R0:.+]] = gpu.subgroup_reduce add %[[E0]] : (vector<2xf16>) -> vector<2xf16>
-    // CHECK-SUB: %[[V0:.+]] = vector.insert_strided_slice %[[R0]], %[[VZ]] {offsets = [0], strides = [1]} : vector<2xf16> into vector<5xf16>
-    // CHECK-SUB: %[[E1:.+]] = vector.extract_strided_slice %[[ARG0]] {offsets = [2], sizes = [2], strides = [1]} : vector<5xf16> to vector<2xf16>
+    // CHECK-SUB: %[[V0:.+]] = vector.insert_strided_slice %[[R0]], %[[VZ]][0:1] : vector<2xf16> into vector<5xf16>
+    // CHECK-SUB: %[[E1:.+]] = vector.extract_strided_slice %[[ARG0]][2:2:1] : vector<5xf16> to vector<2xf16>
     // CHECK-SUB: %[[R1:.+]] = gpu.subgroup_reduce add %[[E1]] : (vector<2xf16>) -> vector<2xf16>
-    // CHECK-SUB: %[[V1:.+]] = vector.insert_strided_slice %[[R1]], %[[V0]] {offsets = [2], strides = [1]} : vector<2xf16> into vector<5xf16>
+    // CHECK-SUB: %[[V1:.+]] = vector.insert_strided_slice %[[R1]], %[[V0]][2:1] : vector<2xf16> into vector<5xf16>
     // CHECK-SUB: %[[E2:.+]] = vector.extract %[[ARG0]][4] : f16 from vector<5xf16>
     // CHECK-SUB: %[[R2:.+]] = gpu.subgroup_reduce add %[[E2]] : (f16) -> f16
     // CHECK-SUB: %[[V2:.+]] = vector.insert %[[R2]], %[[V1]] [4] : f16 into vector<5xf16>
@@ -168,7 +168,7 @@ gpu.module @kernels {
   // CHECK-SHFL-SAME:    %[[ARG0:.+]]: vector<3xi8>)
   gpu.func @kernel6(%arg0: vector<3xi8>) kernel {
     // CHECK-SHFL: %[[CZ:.+]] = arith.constant dense<0> : vector<4xi8>
-    // CHECK-SHFL: %[[V0:.+]] = vector.insert_strided_slice %[[ARG0]], %[[CZ]] {offsets = [0], strides = [1]} : vector<3xi8> into vector<4xi8>
+    // CHECK-SHFL: %[[V0:.+]] = vector.insert_strided_slice %[[ARG0]], %[[CZ]][0:1] : vector<3xi8> into vector<4xi8>
     // CHECK-SHFL: %[[BC0:.+]] = vector.bitcast %[[V0]] : vector<4xi8> to vector<1xi32>
     // CHECK-SHFL: %[[I0:.+]] = vector.extract %[[BC0]][0] : i32 from vector<1xi32>
     // CHECK-SHFL: %[[S0:.+]], %{{.+}} = gpu.shuffle xor %[[I0]], {{.+}} : i32
@@ -178,7 +178,7 @@ gpu.module @kernels {
     // CHECK-SHFL: %[[BC2:.+]] = vector.bitcast %[[ADD0]] : vector<4xi8> to vector<1xi32>
     // CHECK-SHFL: %[[I1:.+]] = vector.extract %[[BC2]][0] : i32 from vector<1xi32>
     // CHECK-SHFL-COUNT-4: gpu.shuffle xor
-    // CHECK-SHFL: %[[ESS:.+]] = vector.extract_strided_slice %{{.+}} {offsets = [0], sizes = [3], strides = [1]} : vector<4xi8> to vector<3xi8>
+    // CHECK-SHFL: %[[ESS:.+]] = vector.extract_strided_slice %{{.+}}[0:3:1] : vector<4xi8> to vector<3xi8>
     // CHECK-SHFL: "test.consume"(%[[ESS]]) : (vector<3xi8>) -> ()
     %sum0 = gpu.subgroup_reduce add %arg0 : (vector<3xi8>) -> (vector<3xi8>)
     "test.consume"(%sum0) : (vector<3xi8>) -> ()
diff --git a/mlir/test/Dialect/Linalg/vectorize-conv-masked-and-scalable.mlir b/mlir/test/Dialect/Linalg/vectorize-conv-masked-and-scalable.mlir
index 4964a8d2e0db8..9bf25ca1b5a14 100644
--- a/mlir/test/Dialect/Linalg/vectorize-conv-masked-and-scalable.mlir
+++ b/mlir/test/Dialect/Linalg/vectorize-conv-masked-and-scalable.mlir
@@ -48,13 +48,13 @@ module attributes {transform.with_named_sequence} {
 // CHECK:           %[[VEC_OUT:.*]] = vector.mask %[[MASK_OUT]] { vector.transfer_read %[[OUTPUT]]{{\[}}%[[C0]], %[[C0]], %[[C0]]], %[[PAD]] {in_bounds = [true, true, true]} : tensor<1x8x?xi8>, vector<1x8x4xi8> } : vector<1x8x4xi1> -> vector<1x8x4xi8>
 
 /// Convolution
-// CHECK:           %[[IN_1:.*]] = vector.extract_strided_slice %[[VEC_IN]] {offsets = [0, 0, 0], sizes = [1, 8, 4], strides = [1, 1, 1]} : vector<1x8x4xi8> to vector<1x8x4xi8>
+// CHECK:           %[[IN_1:.*]] = vector.extract_strided_slice %[[VEC_IN]][0:1:1][0:8:1][0:4:1] : vector<1x8x4xi8> to vector<1x8x4xi8>
 // CHECK:           %[[FLT_1:.*]] = vector.extract %[[VEC_FLT]][0] : vector<4xi8> from vector<1x4xi8>
-// CHECK:           %[[OUT_1:.*]] = vector.extract_strided_slice %[[VEC_OUT]] {offsets = [0, 0, 0], sizes = [1, 8, 4], strides = [1, 1, 1]} : vector<1x8x4xi8> to vector<1x8x4xi8>
+// CHECK:           %[[OUT_1:.*]] = vector.extract_strided_slice %[[VEC_OUT]][0:1:1][0:8:1][0:4:1] : vector<1x8x4xi8> to vector<1x8x4xi8>
 // CHECK:           %[[FLT_1_B:.*]] = vector.broadcast %[[FLT_1]] : vector<4xi8> to vector<1x8x4xi8>
 // CHECK:           %[[MULI:.*]] = arith.muli %[[IN_1]], %[[FLT_1_B]] : vector<1x8x4xi8>
 // CHECK:           %[[ADDI:.*]] = arith.addi %[[MULI]], %[[OUT_1]] : vector<1x8x4xi8>
-// CHECK:           %[[OUT_INS:.*]] = vector.insert_strided_slice %[[ADDI]], %[[VEC_OUT]] {offsets = [0, 0, 0], strides = [1, 1, 1]} : vector<1x8x4xi8> into vector<1x8x4xi8>
+// CHECK:           %[[OUT_INS:.*]] = vector.insert_strided_slice %[[ADDI]], %[[VEC_OUT]][0:1][0:1][0:1] : vector<1x8x4xi8> into vector<1x8x4xi8>
 // CHECK:           %[[OUT:.*]] = vector.mask %[[MASK_OUT]] { vector.transfer_write %[[OUT_INS]], %[[OUTPUT]]{{\[}}%[[C0]], %[[C0]], %[[C0]]] {in_bounds = [true, true, true]} : vector<1x8x4xi8>, tensor<1x8x?xi8> } : vector<1x8x4xi1> -> tensor<1x8x?xi8>
 // CHECK:           return %[[OUT]] : tensor<1x8x?xi8>
 
@@ -110,13 +110,13 @@ module attributes {transform.with_named_sequence} {
 // CHECK:           %[[VEC_OUT:.*]] = vector.mask %[[MASK_OUT]] { vector.transfer_read %[[OUTPUT]]{{\[}}%[[C0]], %[[C0]], %[[C0]]], %[[PAD]] {in_bounds = [true, true, true]} : tensor<1x8x?xi8>, vector<1x8x[4]xi8> } : vector<1x8x[4]xi1> -> vector<1x8x[4]xi8>
 
 /// Convolution
-// CHECK:           %[[IN_1:.*]] = vector.extract_strided_slice %[[VEC_IN]] {offsets = [0, 0, 0], sizes = [1, 8, 4], strides = [1, 1, 1]} : vector<1x8x[4]xi8> to vector<1x8x[4]xi8>
+// CHECK:           %[[IN_1:.*]] = vector.extract_strided_slice %[[VEC_IN]][0:1:1][0:8:1][0:4:1] : vector<1x8x[4]xi8> to vector<1x8x[4]xi8>
 // CHECK:           %[[FLT_1:.*]] = vector.extract %[[VEC_FLT]][0] : vector<[4]xi8> from vector<1x[4]xi8>
-// CHECK:           %[[OUT_1:.*]] = vector.extract_strided_slice %[[VEC_OUT]] {offsets = [0, 0, 0], sizes = [1, 8, 4], strides = [1, 1, 1]} : vector<1x8x[4]xi8> to vector<1x8x[4]xi8>
+// CHECK:           %[[OUT_1:.*]] = vector.extract_strided_slice %[[VEC_OUT]][0:1:1][0:8:1][0:4:1] : vector<1x8x[4]xi8> to vector<1x8x[4]xi8>
 // CHECK:           %[[FLT_1_B:.*]] = vector.broadcast %[[FLT_1]] : vector<[4]xi8> to vector<1x8x[4]xi8>
 // CHECK:           %[[MULI:.*]] = arith.muli %[[IN_1]], %[[FLT_1_B]] : vector<1x8x[4]xi8>
 // CHECK:           %[[ADDI:.*]] = arith.addi %[[MULI]], %[[OUT_1]] : vector<1x8x[4]xi8>
-// CHECK:           %[[OUT_INS:.*]] = vector.insert_strided_slice %[[ADDI]], %[[VEC_OUT]] {offsets = [0, 0, 0], strides = [1, 1, 1]} : vector<1x8x[4]xi8> into vector<1x8x[4]xi8>
+// CHECK:           %[[OUT_INS:.*]] = vector.insert_strided_slice %[[ADDI]], %[[VEC_OUT]][0:1][0:1][0:1] : vector<1x8x[4]xi8> into vector<1x8x[4]xi8>
 // CHECK:           %[[OUT:.*]] = vector.mask %[[MASK_OUT]] { vector.transfer_write %[[OUT_INS]], %[[OUTPUT]]{{\[}}%[[C0]], %[[C0]], %[[C0]]] {in_bounds = [true, true, true]} : vector<1x8x[4]xi8>, tensor<1x8x?xi8> } : vector<1x8x[4]xi1> -> tensor<1x8x?xi8>
 // CHECK:           return %[[OUT]] : tensor<1x8x?xi8>
 
@@ -172,14 +172,14 @@ module attributes {transform.with_named_sequence} {
 // CHECK:           %[[VEC_OUT:.*]] = vector.mask %[[MASK_OUT]] { vector.transfer_read %[[OUTPUT]]{{\[}}%[[C0]], %[[C0]], %[[C0]]], %[[PAD]] {in_bounds = [true, true, true]} : memref<3x2x?xf32>, vector<3x2x[4]xf32> } : vector<3x2x[4]xi1> -> vector<3x2x[4]xf32>
 
 /// Convolution
-// CHECK:           %[[IN_1:.*]] = vector.extract_strided_slice %[[VEC_IN]] {offsets = [0, 0, 0], sizes = [3, 2, 4], strides = [1, 1, 1]} : vector<3x4x[4]xf32> to vector<3x2x[4]xf32>
-// CHECK:           %[[IN_2:.*]] = vector.extract_strided_slice %[[VEC_IN]] {offsets = [0, 2, 0], sizes = [3, 2, 4], strides = [1, 1, 1]} : vector<3x4x[4]xf32> to vector<3x2x[4]xf32>
+// CHECK:           %[[IN_1:.*]] = vector.extract_strided_slice %[[VEC_IN]][0:3:1][0:2:1][0:4:1] : vector<3x4x[4]xf32> to vector<3x2x[4]xf32>
+// CHECK:           %[[IN_2:.*]] = vector.extract_strided_slice %[[VEC_IN]][0:3:1][2:2:1][0:4:1] : vector<3x4x[4]xf32> to vector<3x2x[4]xf32>
 // CHECK:           %[[FLT_1:.*]] = vector.extract %[[VEC_FLT]][0] : vector<[4]xf32> from vector<2x[4]xf32>
 // CHECK:           %[[FLT_2:.*]] = vector.extract %[[VEC_FLT]][1] : vector<[4]xf32> from vector<2x[4]xf32>
-// CHECK:           %[[OUT_1:.*]] = vector.extract_strided_slice %[[VEC_OUT]] {offsets = [0, 0, 0], sizes = [3, 2, 4], strides = [1, 1, 1]} : vector<3x2x[4]xf32> to vector<3x2x[4]xf32>
+// CHECK:           %[[OUT_1:.*]] = vector.extract_strided_slice %[[VEC_OUT]][0:3:1][0:2:1][0:4:1] : vector<3x2x[4]xf32> to vector<3x2x[4]xf32>
 // CHECK:           %[[FLT_1_B:.*]] = vector.broadcast %[[FLT_1]] : vector<[4]xf32> to vector<3x2x[4]xf32>
 // CHECK:           %[[FMA_1:.*]] = vector.fma %[[IN_1]], %[[FLT_1_B]], %[[OUT_1]] : vector<3x2x[4]xf32>
 // CHECK:           %[[FLT_2_B:.*]] = vector.broadcast %[[FLT_2]] : vector<[4]xf32> to vector<3x2x[4]xf32>
 // CHECK:           %[[FMA_2:.*]] = vector.fma %[[IN_2]], %[[FLT_2_B]], %[[FMA_1]] : vector<3x2x[4]xf32>
-// CHECK:           %[[OUT_INS:.*]] = vector.insert_strided_slice %[[FMA_2]], %[[VEC_OUT]] {offsets = [0, 0, 0], strides = [1, 1, 1]} : vector<3x2x[4]xf32> into vector<3x2x[4]xf32>
+// CHECK:           %[[OUT_INS:.*]] = vector.insert_strided_slice %[[FMA_2]], %[[VEC_OUT]][0:1][0:1][0:1] : vector<3x2x[4]xf32> into vector<3x2x[4]xf32>
 // CHECK:           vector.mask %[[MASK_OUT]] { vector.transfer_write %[[OUT_INS]], %[[OUTPUT]]{{\[}}%[[C0]], %[[C0]], %[[C0]]] {in_bounds = [true, true, true]} : vector<3x2x[4]xf32>, memref<3x2x?xf32> } : vector<3x2x[4]xi1>
diff --git a/mlir/test/Dialect/Linalg/vectorize-convolution-flatten.mlir b/mlir/test/Dialect/Linalg/vectorize-convolution-flatten.mlir
index afb59cb26188a..75e7a4b8ea981 100644
--- a/mlir/test/Dialect/Linalg/vectorize-convolution-flatten.mlir
+++ b/mlir/test/Dialect/Linalg/vectorize-convolution-flatten.mlir
@@ -70,9 +70,9 @@ func.func @depthwise_conv1d_nwc_wc_3x5x4xf32_memref_dillation_2(%input: memref<3
 //      CHECK-DAG:  %[[V_OUTPUT_R:.+]] = vector.transfer_read %[[OUTPUT]][%[[C0]], %[[C0]], %[[C0]]]
 
 //      CHECK:   %[[V_INPUT_0:.+]] = vector.extract_strided_slice %[[V_INPUT_R]]
-// CHECK-SAME:     {offsets = [0, 0, 0], sizes = [3, 2, 4], strides = [1, 1, 1]} : vector<3x4x4xf32> to vector<3x2x4xf32>
+// CHECK-SAME:    [0:3:1][0:2:1][0:4:1] : vector<3x4x4xf32> to vector<3x2x4xf32>
 //      CHECK:   %[[V_INPUT_1:.+]] = vector.extract_strided_slice %[[V_INPUT_R]]
-// CHECK-SAME:     {offsets = [0, 2, 0], sizes = [3, 2, 4], strides = [1, 1, 1]} : vector<3x4x4xf32> to vector<3x2x4xf32>
+// CHECK-SAME:    [0:3:1][2:2:1][0:4:1] : vector<3x4x4xf32> to vector<3x2x4xf32>
 
 //      CHECK:  %[[V_FILTER_0:.+]] = vector.extract %[[V_FILTER_R]][0] : vector<4xf32> from vector<2x4xf32>
 //      CHECK:  %[[V_FILTER_1:.+]] = vector.extract %[[V_FILTER_R]][1] : vector<4xf32> from vector<2x4xf32>
@@ -130,9 +130,9 @@ func.func @depthwise_conv1d_nwc_wc_3x5x4xi8_memref_dilation_2(%input: memref<3x5
 //      CHECK-DAG:  %[[V_OUTPUT_R:.+]] = vector.transfer_read %[[OUTPUT]][%[[C0]], %[[C0]], %[[C0]]]
 
 //      CHECK:   %[[V_INPUT_0:.+]] = vector.extract_strided_slice %[[V_INPUT_R]]
-// CHECK-SAME:     {offsets = [0, 0, 0], sizes = [3, 2, 4], strides = [1, 1, 1]} : vector<3x4x4xi8> to vector<3x2x4xi8>
+// CHECK-SAME:    [0:3:1][0:2:1][0:4:1] : vector<3x4x4xi8> to vector<3x2x4xi8>
 //      CHECK:   %[[V_INPUT_1:.+]] = vector.extract_strided_slice %[[V_INPUT_R]]
-// CHECK-SAME:     {offsets = [0, 2, 0], sizes = [3, 2, 4], strides = [1, 1, 1]} : vector<3x4x4xi8> to vector<3x2x4xi8>
+// CHECK-SAME:    [0:3:1][2:2:1][0:4:1] : vector<3x4x4xi8> to vector<3x2x4xi8>
 
 //      CHECK:  %[[V_FILTER_0:.+]] = vector.extract %[[V_FILTER_R]][0] : vector<4xi8> from vector<2x4xi8>
 //      CHECK:  %[[V_FILTER_1:.+]] = vector.extract %[[V_FILTER_R]][1] : vector<4xi8> from vector<2x4xi8>
@@ -196,34 +196,34 @@ func.func @depthwise_conv1d_nwc_wc_3x9x4xi8_tensor_stride_2(%input: tensor<3x9x4
 // CHECK:           %[[V_OUTPUT_R:.*]] = vector.transfer_read %[[OUTPUT]][%[[C0_IDX]], %[[C0_IDX]], %[[C0_IDX]]], %[[C0_I8]]
 
 // CHECK:           %[[V_INPUT_0:.*]] = vector.extract_strided_slice %[[V_INPUT_R]]
-// CHECK-SAME:        {offsets = [0, 0, 0], sizes = [3, 1, 4], strides = [1, 1, 1]} : vector<3x7x4xi8> to vector<3x1x4xi8>
+// CHECK-SAME:       [0:3:1][0:1:1][0:4:1] : vector<3x7x4xi8> to vector<3x1x4xi8>
 // CHECK:           %[[V_INPUT_1:.*]] = vector.extract_strided_slice %[[V_INPUT_R]]
-// CHECK-SAME:        {offsets = [0, 2, 0], sizes = [3, 1, 4], strides = [1, 1, 1]} : vector<3x7x4xi8> to vector<3x1x4xi8>
+// CHECK-SAME:       [0:3:1][2:1:1][0:4:1] : vector<3x7x4xi8> to vector<3x1x4xi8>
 // CHECK:           %[[V_INPUT_2:.*]] = vector.extract_strided_slice %[[V_INPUT_R]] 
-// CHECK-SAME:        {offsets = [0, 4, 0], sizes = [3, 1, 4], strides = [1, 1, 1]} : vector<3x7x4xi8> to vector<3x1x4xi8>
+// CHECK-SAME:       [0:3:1][4:1:1][0:4:1] : vector<3x7x4xi8> to vector<3x1x4xi8>
 // CHECK:           %[[V_INPUT_3:.*]] = vector.extract_strided_slice %[[V_INPUT_R]]
-// CHECK-SAME:        {offsets = [0, 1, 0], sizes = [3, 1, 4], strides = [1, 1, 1]} : vector<3x7x4xi8> to vector<3x1x4xi8>
+// CHECK-SAME:       [0:3:1][1:1:1][0:4:1] : vector<3x7x4xi8> to vector<3x1x4xi8>
 // CHECK:           %[[V_INPUT_4:.*]] = vector.extract_strided_slice %[[V_INPUT_R]]
-// CHECK-SAME:        {offsets = [0, 3, 0], sizes = [3, 1, 4], strides = [1, 1, 1]} : vector<3x7x4xi8> to vector<3x1x4xi8>
+// CHECK-SAME:       [0:3:1][3:1:1][0:4:1] : vector<3x7x4xi8> to vector<3x1x4xi8>
 // CHECK:           %[[V_INPUT_5:.*]] = vector.extract_strided_slice %[[V_INPUT_R]]
-// CHECK-SAME:        {offsets = [0, 5, 0], sizes = [3, 1, 4], strides = [1, 1, 1]} : vector<3x7x4xi8> to vector<3x1x4xi8>
+// CHECK-SAME:       [0:3:1][5:1:1][0:4:1] : vector<3x7x4xi8> to vector<3x1x4xi8>
 // CHECK:           %[[V_INPUT_6:.*]] = vector.extract_strided_slice %[[V_INPUT_R]]
-// CHECK-SAME:        {offsets = [0, 2, 0], sizes = [3, 1, 4], strides = [1, 1, 1]} : vector<3x7x4xi8> to vector<3x1x4xi8>
+// CHECK-SAME:       [0:3:1][2:1:1][0:4:1] : vector<3x7x4xi8> to vector<3x1x4xi8>
 // CHECK:           %[[V_INPUT_7:.*]] = vector.extract_strided_slice %[[V_INPUT_R]]
-// CHECK-SAME:        {offsets = [0, 4, 0], sizes = [3, 1, 4], strides = [1, 1, 1]} : vector<3x7x4xi8> to vector<3x1x4xi8>
+// CHECK-SAME:       [0:3:1][4:1:1][0:4:1] : vector<3x7x4xi8> to vector<3x1x4xi8>
 // CHECK:           %[[V_INPUT_8:.*]] = vector.extract_strided_slice %[[V_INPUT_R]]
-// CHECK-SAME:        {offsets = [0, 6, 0], sizes = [3, 1, 4], strides = [1, 1, 1]} : vector<3x7x4xi8> to vector<3x1x4xi8>
+// CHECK-SAME:       [0:3:1][6:1:1][0:4:1] : vector<3x7x4xi8> to vector<3x1x4xi8>
 
 // CHECK:           %[[V_FILTER_0:.*]] = vector.extract %[[V_FILTER_R]][0] : vector<4xi8> from vector<3x4xi8>
 // CHECK:           %[[V_FILTER_1:.*]] = vector.extract %[[V_FILTER_R]][1] : vector<4xi8> from vector<3x4xi8>
 // CHECK:           %[[V_FILTER_2:.*]] = vector.extract %[[V_FILTER_R]][2] : vector<4xi8> from vector<3x4xi8>
 
 // CHECK:           %[[V_OUTPUT_0:.*]] = vector.extract_strided_slice %[[V_OUTPUT_R]]
-// CHECK-SAME:        {offsets = [0, 0, 0], sizes = [3, 1, 4], strides = [1, 1, 1]} : vector<3x3x4xi8> to vector<3x1x4xi8>
+// CHECK-SAME:       [0:3:1][0:1:1][0:4:1] : vector<3x3x4xi8> to vector<3x1x4xi8>
 // CHECK:           %[[V_OUTPUT_1:.*]] = vector.extract_strided_slice %[[V_OUTPUT_R]]
-// CHECK-SAME:       {offsets = [0, 1, 0], sizes = [3, 1, 4], strides = [1, 1, 1]} : vector<3x3x4xi8> to vector<3x1x4xi8>
+// CHECK-SAME:      [0:3:1][1:1:1][0:4:1] : vector<3x3x4xi8> to vector<3x1x4xi8>
 // CHECK:           %[[V_OUTPUT_2:.*]] = vector.extract_strided_slice %[[V_OUTPUT_R]]
-// CHECK-SAME:        {offsets = [0, 2, 0], sizes = [3, 1, 4], strides = [1, 1, 1]} : vector<3x3x4xi8> to vector<3x1x4xi8>
+// CHECK-SAME:       [0:3:1][2:1:1][0:4:1] : vector<3x3x4xi8> to vector<3x1x4xi8>
 
 /// w == 0, kw == 0
 // CHECK:           %[[VAL_23:.*]] = vector.shape_cast %[[V_INPUT_0]] : vector<3x1x4xi8> to vector<3x4xi8>
@@ -287,11 +287,11 @@ func.func @depthwise_conv1d_nwc_wc_3x9x4xi8_tensor_stride_2(%input: tensor<3x9x4
 // Write the result back.
 // CHECK:           %[[VAL_73:.*]] = vector.shape_cast %[[VAL_72]] : vector<3x4xi8> to vector<3x1x4xi8>
 // CHECK:           %[[VAL_74:.*]] = vector.insert_strided_slice %[[VAL_61]], %[[V_OUTPUT_R]]
-// CHECK-SAME:        {offsets = [0, 0, 0], strides = [1, 1, 1]} : vector<3x1x4xi8> into vector<3x3x4xi8>
+// CHECK-SAME:       [0:1][0:1][0:1] : vector<3x1x4xi8> into vector<3x3x4xi8>
 // CHECK:           %[[VAL_75:.*]] = vector.insert_strided_slice %[[VAL_67]], %[[VAL_74]]
-// CHECK-SAME:        {offsets = [0, 1, 0], strides = [1, 1, 1]} : vector<3x1x4xi8> into vector<3x3x4xi8>
+// CHECK-SAME:       [0:1][1:1][0:1] : vector<3x1x4xi8> into vector<3x3x4xi8>
 // CHECK:           %[[VAL_76:.*]] = vector.insert_strided_slice %[[VAL_73]], %[[VAL_75]]
-// CHECK-SAME:        {offsets = [0, 2, 0], strides = [1, 1, 1]} : vector<3x1x4xi8> into vector<3x3x4xi8>
+// CHECK-SAME:       [0:1][2:1][0:1] : vector<3x1x4xi8> into vector<3x3x4xi8>
 // CHECK:           %[[VAL_77:.*]] = vector.transfer_write %[[VAL_76]], %[[OUTPUT]][%[[C0_IDX]], %[[C0_IDX]], %[[C0_IDX]]]
 
 module attributes {transform.with_named_sequence} {
diff --git a/mlir/test/Dialect/Linalg/vectorize-convolution.mlir b/mlir/test/Dialect/Linalg/vectorize-convolution.mlir
index 93e36a69567bd..9d6e0c69a87d8 100644
--- a/mlir/test/Dialect/Linalg/vectorize-convolution.mlir
+++ b/mlir/test/Dialect/Linalg/vectorize-convolution.mlir
@@ -24,16 +24,16 @@ func.func @conv1d_nwc_4x2x8_memref(%input: memref<4x6x3xf32>, %filter: memref<1x
 //  CHECK-DAG:  %[[V_OUTPUT_R:.+]] = vector.transfer_read %[[OUTPUT]][%[[C0]], %[[C0]], %[[C0]]], %[[F0]]
 
 //      CHECK:   %[[V_INPUT_0:.+]] = vector.extract_strided_slice %[[V_INPUT_R]]
-// CHECK-SAME:     {offsets = [0, 0, 0], sizes = [4, 1, 3], strides = [1, 1, 1]} : vector<4x4x3xf32> to vector<4x1x3xf32>
+// CHECK-SAME:    [0:4:1][0:1:1][0:3:1] : vector<4x4x3xf32> to vector<4x1x3xf32>
 //      CHECK:   %[[V_INPUT_1:.+]] = vector.extract_strided_slice %[[V_INPUT_R]]
-// CHECK-SAME:     {offsets = [0, 3, 0], sizes = [4, 1, 3], strides = [1, 1, 1]} : vector<4x4x3xf32> to vector<4x1x3xf32>
+// CHECK-SAME:    [0:4:1][3:1:1][0:3:1] : vector<4x4x3xf32> to vector<4x1x3xf32>
 
 //      CHECK:    %[[V_FILTER:.+]] = vector.extract %[[V_FILTER_R]][0] : vector<3x8xf32> from vector<1x3x8xf32>
 
 //      CHECK:  %[[V_OUTPUT_0:.+]] = vector.extract_strided_slice %[[V_OUTPUT_R]]
-// CHECK-SAME:     {offsets = [0, 0, 0], sizes = [4, 1, 8], strides = [1, 1, 1]} : vector<4x2x8xf32> to vector<4x1x8xf32>
+// CHECK-SAME:    [0:4:1][0:1:1][0:8:1] : vector<4x2x8xf32> to vector<4x1x8xf32>
 //      CHECK:  %[[V_OUTPUT_1:.+]] = vector.extract_strided_slice %[[V_OUTPUT_R]]
-// CHECK-SAME:     {offsets = [0, 1, 0], sizes = [4, 1, 8], strides = [1, 1, 1]} : vector<4x2x8xf32> to vector<4x1x8xf32>
+// CHECK-SAME:    [0:4:1][1:1:1][0:8:1] : vector<4x2x8xf32> to vector<4x1x8xf32>
 
 /// w == 0, kw == 0
 //      CHECK:   %[[CONTRACT_0:.+]] = vector.contract {
@@ -51,10 +51,10 @@ func.func @conv1d_nwc_4x2x8_memref(%input: memref<4x6x3xf32>, %filter: memref<1x
 
 /// w == 0, kw == 0
 //      CHECK:   %[[RES_0:.+]] = vector.insert_strided_slice %[[CONTRACT_0]], %[[V_OUTPUT_R]]
-// CHECK-SAME:     {offsets = [0, 0, 0], strides = [1, 1, 1]} : vector<4x1x8xf32> into vector<4x2x8xf32>
+// CHECK-SAME:    [0:1][0:1][0:1] : vector<4x1x8xf32> into vector<4x2x8xf32>
 /// w == 1, kw == 0
 //      CHECK:   %[[RES_1:.+]] = vector.insert_strided_slice %[[CONTRACT_1]], %[[RES_0]]
-// CHECK-SAME:     {offsets = [0, 1, 0], strides = [1, 1, 1]} : vector<4x1x8xf32> into vector<4x2x8xf32>
+// CHECK-SAME:    [0:1][1:1][0:1] : vector<4x1x8xf32> into vector<4x2x8xf32>
 
 // Write the result back in one shot.
 //      CHECK:   vector.transfer_write %[[RES_1]], %[[OUTPUT]][%[[C0]], %[[C0]], %[[C0]]]
@@ -88,16 +88,16 @@ func.func @conv1d_nwc_4x2x8_i8i8i32_memref(%input: memref<4x6x3xi8>, %filter: me
 //  CHECK-DAG:  %[[V_OUTPUT_R:.+]] = vector.transfer_read %[[OUTPUT]][%[[C0]], %[[C0]], %[[C0]]], %[[C0_I32]]
 
 //      CHECK:   %[[V_INPUT_0:.+]] = vector.extract_strided_slice %[[V_INPUT_R]]
-// CHECK-SAME:     {offsets = [0, 0, 0], sizes = [4, 1, 3], strides = [1, 1, 1]} : vector<4x4x3xi8> to vector<4x1x3xi8>
+// CHECK-SAME:    [0:4:1][0:1:1][0:3:1] : vector<4x4x3xi8> to vector<4x1x3xi8>
 //      CHECK:   %[[V_INPUT_1:.+]] = vector.extract_strided_slice %[[V_INPUT_R]]
-// CHECK-SAME:     {offsets = [0, 3, 0], sizes = [4, 1, 3], strides = [1, 1, 1]} : vector<4x4x3xi8> to vector<4x1x3xi8>
+// CHECK-SAME:    [0:4:1][3:1:1][0:3:1] : vector<4x4x3xi8> to vector<4x1x3xi8>
 
 //      CHECK:    %[[V_FILTER:.+]] = vector.extract %[[V_FILTER_R]][0] : vector<3x8xi8> from vector<1x3x8xi8>
 
 //      CHECK:  %[[V_OUTPUT_0:.+]] = vector.extract_strided_slice %[[V_OUTPUT_R]]
-// CHECK-SAME:     {offsets = [0, 0, 0], sizes = [4, 1, 8], strides = [1, 1, 1]} : vector<4x2x8xi32> to vector<4x1x8xi32>
+// CHECK-SAME:    [0:4:1][0:1:1][0:8:1] : vector<4x2x8xi32> to vector<4x1x8xi32>
 //      CHECK:  %[[V_OUTPUT_1:.+]] = vector.extract_strided_slice %[[V_OUTPUT_R]]
-// CHECK-SAME:     {offsets = [0, 1, 0], sizes = [4, 1, 8], strides = [1, 1, 1]} : vector<4x2x8xi32> to vector<4x1x8xi32>
+// CHECK-SAME:    [0:4:1][1:1:1][0:8:1] : vector<4x2x8xi32> to vector<4x1x8xi32>
 
 /// w == 0, kw == 0
 //      CHECK:   %[[EXT_LHS_0:.+]] = arith.extsi %[[V_INPUT_0]] : vector<4x1x3xi8> to vector<4x1x3xi32>
@@ -119,10 +119,10 @@ func.func @conv1d_nwc_4x2x8_i8i8i32_memref(%input: memref<4x6x3xi8>, %filter: me
 
 /// w == 0, kw == 0
 //      CHECK:   %[[RES_0:.+]] = vector.insert_strided_slice %[[CONTRACT_0]], %[[V_OUTPUT_R]]
-// CHECK-SAME:     {offsets = [0, 0, 0], strides = [1, 1, 1]} : vector<4x1x8xi32> into vector<4x2x8xi32>
+// CHECK-SAME:    [0:1][0:1][0:1] : vector<4x1x8xi32> into vector<4x2x8xi32>
 /// w == 1, kw == 0
 //      CHECK:   %[[RES_1:.+]] = vector.insert_strided_slice %[[CONTRACT_1]], %[[RES_0]]
-// CHECK-SAME:     {offsets = [0, 1, 0], strides = [1, 1, 1]} : vector<4x1x8xi32> into vector<4x2x8xi32>
+// CHECK-SAME:    [0:1][1:1][0:1] : vector<4x1x8xi32> into vector<4x2x8xi32>
 
 // Write the result back in one shot.
 //      CHECK:   vector.transfer_write %[[RES_1]], %[[OUTPUT]][%[[C0]], %[[C0]], %[[C0]]]
@@ -153,21 +153,21 @@ func.func @conv1d_nwc_4x2x8_memref(%input: memref<4x6x3xf32>, %filter: memref<2x
 //  CHECK-DAG:   %[[V_OUTPUT_R:.+]] = vector.transfer_read %[[OUTPUT]][%[[C0]], %[[C0]], %[[C0]]], %[[F0]]
 
 //      CHECK:   %[[V_INPUT_0:.+]] = vector.extract_strided_slice %[[V_INPUT_R]]
-// CHECK-SAME:     {offsets = [0, 0, 0], sizes = [4, 1, 3], strides = [1, 1, 1]} : vector<4x6x3xf32> to vector<4x1x3xf32>
+// CHECK-SAME:    [0:4:1][0:1:1][0:3:1] : vector<4x6x3xf32> to vector<4x1x3xf32>
 //      CHECK:   %[[V_INPUT_1:.+]] = vector.extract_strided_slice %[[V_INPUT_R]]
-// CHECK-SAME:     {offsets = [0, 3, 0], sizes = [4, 1, 3], strides = [1, 1, 1]} : vector<4x6x3xf32> to vector<4x1x3xf32>
+// CHECK-SAME:    [0:4:1][3:1:1][0:3:1] : vector<4x6x3xf32> to vector<4x1x3xf32>
 //      CHECK:   %[[V_INPUT_2:.+]] = vector.extract_strided_slice %[[V_INPUT_R]]
-// CHECK-SAME:     {offsets = [0, 2, 0], sizes = [4, 1, 3], strides = [1, 1, 1]} : vector<4x6x3xf32> to vector<4x1x3xf32>
+// CHECK-SAME:    [0:4:1][2:1:1][0:3:1] : vector<4x6x3xf32> to vector<4x1x3xf32>
 //      CHECK:   %[[V_INPUT_3:.+]] = vector.extract_strided_slice %[[V_INPUT_R]]
-// CHECK-SAME:     {offsets = [0, 5, 0], sizes = [4, 1, 3], strides = [1, 1, 1]} : vector<4x6x3xf32> to vector<4x1x3xf32>
+// CHECK-SAME:    [0:4:1][5:1:1][0:3:1] : vector<4x6x3xf32> to vector<4x1x3xf32>
 
 //      CHECK:  %[[V_FILTER_0:.+]] = vector.extract %[[V_FILTER_R]][0] : vector<3x8xf32> from vector<2x3x8xf32>
 //      CHECK:  %[[V_FILTER_1:.+]] = vector.extract %[[V_FILTER_R]][1] : vector<3x8xf32> from vector<2x3x8xf32>
 
 //      CHECK:  %[[V_OUTPUT_0:.+]] = vector.extract_strided_slice %[[V_OUTPUT_R]]
-// CHECK-SAME:     {offsets = [0, 0, 0], sizes = [4, 1, 8], strides = [1, 1, 1]} : vector<4x2x8xf32> to vector<4x1x8xf32>
+// CHECK-SAME:    [0:4:1][0:1:1][0:8:1] : vector<4x2x8xf32> to vector<4x1x8xf32>
 //      CHECK:  %[[V_OUTPUT_1:.+]] = vector.extract_strided_slice %[[V_OUTPUT_R]]
-// CHECK-SAME:     {offsets = [0, 1, 0], sizes = [4, 1, 8], strides = [1, 1, 1]} : vector<4x2x8xf32> to vector<4x1x8xf32>
+// CHECK-SAME:    [0:4:1][1:1:1][0:8:1] : vector<4x2x8xf32> to vector<4x1x8xf32>
 
 /// w == 0, kw == 0
 //      CHECK:   %[[CONTRACT_0:.+]] = vector.contract {
@@ -196,10 +196,10 @@ func.func @conv1d_nwc_4x2x8_memref(%input: memref<4x6x3xf32>, %filter: memref<2x
 
 /// w == 0, kw == 0
 //      CHECK:   %[[RES_0:.+]] = vector.insert_strided_slice %[[CONTRACT_2]], %[[V_OUTPUT_R]]
-// CHECK-SAME:     {offsets = [0, 0, 0], strides = [1, 1, 1]} : vector<4x1x8xf32> into vector<4x2x8xf32>
+// CHECK-SAME:    [0:1][0:1][0:1] : vector<4x1x8xf32> into vector<4x2x8xf32>
 /// w == 1, kw == 0
 //      CHECK:   %[[RES_1:.+]] = vector.insert_strided_slice %[[CONTRACT_3]], %[[RES_0]]
-// CHECK-SAME:     {offsets = [0, 1, 0], strides = [1, 1, 1]} : vector<4x1x8xf32> into vector<4x2x8xf32>
+// CHECK-SAME:    [0:1][1:1][0:1] : vector<4x1x8xf32> into vector<4x2x8xf32>
 
 // Write the result back in one shot.
 //      CHECK:   vector.transfer_write %[[RES_1]], %[[OUTPUT]][%[[C0]], %[[C0]], %[[C0]]]
@@ -230,9 +230,9 @@ func.func @conv1d_nwc_4x2x8_memref(%input: memref<4x6x3xf32>, %filter: memref<2x
 //  CHECK-DAG:  %[[V_OUTPUT_R:.+]] = vector.transfer_read %[[OUTPUT]][%[[C0]], %[[C0]], %[[C0]]], %[[F0]]
 
 //      CHECK:   %[[V_INPUT_0:.+]] = vector.extract_strided_slice %[[V_INPUT_R]]
-// CHECK-SAME:     {offsets = [0, 0, 0], sizes = [4, 2, 3], strides = [1, 1, 1]} : vector<4x4x3xf32> to vector<4x2x3xf32>
+// CHECK-SAME:    [0:4:1][0:2:1][0:3:1] : vector<4x4x3xf32> to vector<4x2x3xf32>
 //      CHECK:   %[[V_INPUT_1:.+]] = vector.extract_strided_slice %[[V_INPUT_R]]
-// CHECK-SAME:     {offsets = [0, 2, 0], sizes = [4, 2, 3], strides = [1, 1, 1]} : vector<4x4x3xf32> to vector<4x2x3xf32>
+// CHECK-SAME:    [0:4:1][2:2:1][0:3:1] : vector<4x4x3xf32> to vector<4x2x3xf32>
 
 //      CHECK:  %[[V_FILTER_0:.+]] = vector.extract %[[V_FILTER_R]][0] : vector<3x8xf32> from vector<2x3x8xf32>
 //      CHECK:  %[[V_FILTER_1:.+]] = vector.extract %[[V_FILTER_R]][1] : vector<3x8xf32> from vector<2x3x8xf32>
@@ -284,16 +284,16 @@ func.func @conv1d_ncw_4x8x2_memref(%input: memref<4x3x6xf32>, %filter: memref<8x
 //  CHECK-DAG:  %[[V_OUTPUT_R:.+]] = vector.transpose %[[V_NWC_OUTPUT_R]], [0, 2, 1]
 
 //      CHECK:   %[[V_INPUT_0:.+]] = vector.extract_strided_slice %[[V_INPUT_R]]
-// CHECK-SAME:     {offsets = [0, 0, 0], sizes = [4, 1, 3], strides = [1, 1, 1]} : vector<4x4x3xf32> to vector<4x1x3xf32>
+// CHECK-SAME:    [0:4:1][0:1:1][0:3:1] : vector<4x4x3xf32> to vector<4x1x3xf32>
 //      CHECK:   %[[V_INPUT_1:.+]] = vector.extract_strided_slice %[[V_INPUT_R]]
-// CHECK-SAME:     {offsets = [0, 3, 0], sizes = [4, 1, 3], strides = [1, 1, 1]} : vector<4x4x3xf32> to vector<4x1x3xf32>
+// CHECK-SAME:    [0:4:1][3:1:1][0:3:1] : vector<4x4x3xf32> to vector<4x1x3xf32>
 
 //      CHECK:    %[[V_FILTER:.+]] = vector.extract %[[V_FILTER_R]][0] : vector<3x8xf32> from vector<1x3x8xf32>
 
 //      CHECK:  %[[V_OUTPUT_0:.+]] = vector.extract_strided_slice %[[V_OUTPUT_R]]
-// CHECK-SAME:     {offsets = [0, 0, 0], sizes = [4, 1, 8], strides = [1, 1, 1]} : vector<4x2x8xf32> to vector<4x1x8xf32>
+// CHECK-SAME:    [0:4:1][0:1:1][0:8:1] : vector<4x2x8xf32> to vector<4x1x8xf32>
 //      CHECK:  %[[V_OUTPUT_1:.+]] = vector.extract_strided_slice %[[V_OUTPUT_R]]
-// CHECK-SAME:     {offsets = [0, 1, 0], sizes = [4, 1, 8], strides = [1, 1, 1]} : vector<4x2x8xf32> to vector<4x1x8xf32>
+// CHECK-SAME:    [0:4:1][1:1:1][0:8:1] : vector<4x2x8xf32> to vector<4x1x8xf32>
 
 /// w == 0, kw == 0
 //      CHECK:   %[[CONTRACT_0:.+]] = vector.contract {
@@ -311,10 +311,10 @@ func.func @conv1d_ncw_4x8x2_memref(%input: memref<4x3x6xf32>, %filter: memref<8x
 
 /// w == 0, kw == 0
 //      CHECK:   %[[RES_0:.+]] = vector.insert_strided_slice %[[CONTRACT_0]], %[[V_OUTPUT_R]]
-// CHECK-SAME:     {offsets = [0, 0, 0], strides = [1, 1, 1]} : vector<4x1x8xf32> into vector<4x2x8xf32>
+// CHECK-SAME:    [0:1][0:1][0:1] : vector<4x1x8xf32> into vector<4x2x8xf32>
 /// w == 1, kw == 0
 //      CHECK:   %[[RES_1:.+]] = vector.insert_strided_slice %[[CONTRACT_1]], %[[RES_0]]
-// CHECK-SAME:     {offsets = [0, 1, 0], strides = [1, 1, 1]} : vector<4x1x8xf32> into vector<4x2x8xf32>
+// CHECK-SAME:    [0:1][1:1][0:1] : vector<4x1x8xf32> into vector<4x2x8xf32>
 
 /// Transpose result to ncw format.
 //  CHECK:  %[[RES_2:.+]] = vector.transpose %[[RES_1]], [0, 2, 1]
@@ -353,21 +353,21 @@ func.func @conv1d_ncw_4x8x2_memref(%input: memref<4x3x6xf32>, %filter: memref<8x
 //  CHECK-DAG:  %[[V_OUTPUT_R:.+]] = vector.transpose %[[V_NWC_OUTPUT_R]], [0, 2, 1]
 
 //      CHECK:   %[[V_INPUT_0:.+]] = vector.extract_strided_slice %[[V_INPUT_R]]
-// CHECK-SAME:     {offsets = [0, 0, 0], sizes = [4, 1, 3], strides = [1, 1, 1]} : vector<4x6x3xf32> to vector<4x1x3xf32>
+// CHECK-SAME:    [0:4:1][0:1:1][0:3:1] : vector<4x6x3xf32> to vector<4x1x3xf32>
 //      CHECK:   %[[V_INPUT_1:.+]] = vector.extract_strided_slice %[[V_INPUT_R]]
-// CHECK-SAME:     {offsets = [0, 3, 0], sizes = [4, 1, 3], strides = [1, 1, 1]} : vector<4x6x3xf32> to vector<4x1x3xf32>
+// CHECK-SAME:    [0:4:1][3:1:1][0:3:1] : vector<4x6x3xf32> to vector<4x1x3xf32>
 //      CHECK:   %[[V_INPUT_2:.+]] = vector.extract_strided_slice %[[V_INPUT_R]]
-// CHECK-SAME:     {offsets = [0, 2, 0], sizes = [4, 1, 3], strides = [1, 1, 1]} : vector<4x6x3xf32> to vector<4x1x3xf32>
+// CHECK-SAME:    [0:4:1][2:1:1][0:3:1] : vector<4x6x3xf32> to vector<4x1x3xf32>
 //      CHECK:   %[[V_INPUT_3:.+]] = vector.extract_strided_slice %[[V_INPUT_R]]
-// CHECK-SAME:     {offsets = [0, 5, 0], sizes = [4, 1, 3], strides = [1, 1, 1]} : vector<4x6x3xf32> to vector<4x1x3xf32>
+// CHECK-SAME:    [0:4:1][5:1:1][0:3:1] : vector<4x6x3xf32> to vector<4x1x3xf32>
 
 //      CHECK:  %[[V_FILTER_0:.+]] = vector.extract %[[V_FILTER_R]][0] : vector<3x8xf32> from vector<2x3x8xf32>
 //      CHECK:  %[[V_FILTER_1:.+]] = vector.extract %[[V_FILTER_R]][1] : vector<3x8xf32> from vector<2x3x8xf32>
 
 //      CHECK:  %[[V_OUTPUT_0:.+]] = vector.extract_strided_slice %[[V_OUTPUT_R]]
-// CHECK-SAME:     {offsets = [0, 0, 0], sizes = [4, 1, 8], strides = [1, 1, 1]} : vector<4x2x8xf32> to vector<4x1x8xf32>
+// CHECK-SAME:    [0:4:1][0:1:1][0:8:1] : vector<4x2x8xf32> to vector<4x1x8xf32>
 //      CHECK:  %[[V_OUTPUT_1:.+]] = vector.extract_strided_slice %[[V_OUTPUT_R]]
-// CHECK-SAME:     {offsets = [0, 1, 0], sizes = [4, 1, 8], strides = [1, 1, 1]} : vector<4x2x8xf32> to vector<4x1x8xf32>
+// CHECK-SAME:    [0:4:1][1:1:1][0:8:1] : vector<4x2x8xf32> to vector<4x1x8xf32>
 
 /// w == 0, kw == 0
 //      CHECK:   %[[CONTRACT_0:.+]] = vector.contract {
@@ -396,10 +396,10 @@ func.func @conv1d_ncw_4x8x2_memref(%input: memref<4x3x6xf32>, %filter: memref<8x
 
 /// w == 0, kw == 0
 //      CHECK:   %[[RES_0:.+]] = vector.insert_strided_slice %[[CONTRACT_2]], %[[V_OUTPUT_R]]
-// CHECK-SAME:     {offsets = [0, 0, 0], strides = [1, 1, 1]} : vector<4x1x8xf32> into vector<4x2x8xf32>
+// CHECK-SAME:    [0:1][0:1][0:1] : vector<4x1x8xf32> into vector<4x2x8xf32>
 /// w == 1, kw == 0
 //      CHECK:   %[[RES_1:.+]] = vector.insert_strided_slice %[[CONTRACT_3]], %[[RES_0]]
-// CHECK-SAME:     {offsets = [0, 1, 0], strides = [1, 1, 1]} : vector<4x1x8xf32> into vector<4x2x8xf32>
+// CHECK-SAME:    [0:1][1:1][0:1] : vector<4x1x8xf32> into vector<4x2x8xf32>
 
 /// Transpose result to ncw format.
 //  CHECK:  %[[RES_2:.+]] = vector.transpose %[[RES_1]], [0, 2, 1]
@@ -438,9 +438,9 @@ func.func @conv1d_ncw_4x8x2_memref(%input: memref<4x3x6xf32>, %filter: memref<8x
 //  CHECK-DAG:  %[[V_OUTPUT_R:.+]] = vector.transpose %[[V_NWC_OUTPUT_R]], [0, 2, 1]
 
 //      CHECK:   %[[V_INPUT_0:.+]] = vector.extract_strided_slice %[[V_INPUT_R]]
-// CHECK-SAME:     {offsets = [0, 0, 0], sizes = [4, 2, 3], strides = [1, 1, 1]} : vector<4x4x3xf32> to vector<4x2x3xf32>
+// CHECK-SAME:    [0:4:1][0:2:1][0:3:1] : vector<4x4x3xf32> to vector<4x2x3xf32>
 //      CHECK:   %[[V_INPUT_1:.+]] = vector.extract_strided_slice %[[V_INPUT_R]]
-// CHECK-SAME:     {offsets = [0, 2, 0], sizes = [4, 2, 3], strides = [1, 1, 1]} : vector<4x4x3xf32> to vector<4x2x3xf32>
+// CHECK-SAME:    [0:4:1][2:2:1][0:3:1] : vector<4x4x3xf32> to vector<4x2x3xf32>
 
 //      CHECK:  %[[V_FILTER_0:.+]] = vector.extract %[[V_FILTER_R]][0] : vector<3x8xf32> from vector<2x3x8xf32>
 //      CHECK:  %[[V_FILTER_1:.+]] = vector.extract %[[V_FILTER_R]][1] : vector<3x8xf32> from vector<2x3x8xf32>
@@ -485,13 +485,13 @@ func.func @conv1d_8_tensor(%input: tensor<11xf32>, %filter: tensor<4xf32>, %outp
 //  CHECK-DAG:  %[[V_OUTPUT_R:.+]] = vector.transfer_read %[[OUTPUT]][%[[C0]]], %[[F0]]
 
 //      CHECK:   %[[V_INPUT_0:.+]] = vector.extract_strided_slice %[[V_INPUT_R]]
-// CHECK-SAME:     {offsets = [0], sizes = [8], strides = [1]} : vector<11xf32> to vector<8xf32>
+// CHECK-SAME:    [0:8:1] : vector<11xf32> to vector<8xf32>
 //      CHECK:   %[[V_INPUT_1:.+]] = vector.extract_strided_slice %[[V_INPUT_R]]
-// CHECK-SAME:     {offsets = [1], sizes = [8], strides = [1]} : vector<11xf32> to vector<8xf32>
+// CHECK-SAME:    [1:8:1] : vector<11xf32> to vector<8xf32>
 //      CHECK:   %[[V_INPUT_2:.+]] = vector.extract_strided_slice %[[V_INPUT_R]]
-// CHECK-SAME:     {offsets = [2], sizes = [8], strides = [1]} : vector<11xf32> to vector<8xf32>
+// CHECK-SAME:    [2:8:1] : vector<11xf32> to vector<8xf32>
 //      CHECK:   %[[V_INPUT_3:.+]] = vector.extract_strided_slice %[[V_INPUT_R]]
-// CHECK-SAME:     {offsets = [3], sizes = [8], strides = [1]} : vector<11xf32> to vector<8xf32>
+// CHECK-SAME:    [3:8:1] : vector<11xf32> to vector<8xf32>
 
 //      CHECK:  %[[V_FILTER_0:.+]] = vector.extract %[[V_FILTER_R]][0] : f32 from vector<4xf32>
 //      CHECK:  %[[V_FILTER_1:.+]] = vector.extract %[[V_FILTER_R]][1] : f32 from vector<4xf32>
@@ -540,9 +540,9 @@ func.func @depthwise_conv1d_nwc_wc_3x5x4xf32_memref(%input: memref<3x5x4xf32>, %
 //      CHECK:  %[[V_OUTPUT_R:.+]] = vector.transfer_read %[[OUTPUT]][%[[C0]], %[[C0]], %[[C0]]]
 
 //      CHECK:   %[[V_INPUT_0:.+]] = vector.extract_strided_slice %[[V_INPUT_R]]
-// CHECK-SAME:     {offsets = [0, 0, 0], sizes = [3, 2, 4], strides = [1, 1, 1]} : vector<3x4x4xf32> to vector<3x2x4xf32>
+// CHECK-SAME:    [0:3:1][0:2:1][0:4:1] : vector<3x4x4xf32> to vector<3x2x4xf32>
 //      CHECK:   %[[V_INPUT_1:.+]] = vector.extract_strided_slice %[[V_INPUT_R]]
-// CHECK-SAME:     {offsets = [0, 2, 0], sizes = [3, 2, 4], strides = [1, 1, 1]} : vector<3x4x4xf32> to vector<3x2x4xf32>
+// CHECK-SAME:    [0:3:1][2:2:1][0:4:1] : vector<3x4x4xf32> to vector<3x2x4xf32>
 
 //      CHECK:  %[[V_FILTER_0:.+]] = vector.extract %[[V_FILTER_R]][0] : vector<4xf32> from vector<2x4xf32>
 //      CHECK:  %[[V_FILTER_1:.+]] = vector.extract %[[V_FILTER_R]][1] : vector<4xf32> from vector<2x4xf32>
@@ -580,9 +580,9 @@ func.func @depthwise_conv1d_nwc_wc_3x5x4xi8_memref(%input: memref<3x5x4xi8>, %fi
 //      CHECK:  %[[V_OUTPUT_R:.+]] = vector.transfer_read %[[OUTPUT]][%[[C0]], %[[C0]], %[[C0]]]
 
 //      CHECK:   %[[V_INPUT_0:.+]] = vector.extract_strided_slice %[[V_INPUT_R]]
-// CHECK-SAME:     {offsets = [0, 0, 0], sizes = [3, 2, 4], strides = [1, 1, 1]} : vector<3x4x4xi8> to vector<3x2x4xi8>
+// CHECK-SAME:    [0:3:1][0:2:1][0:4:1] : vector<3x4x4xi8> to vector<3x2x4xi8>
 //      CHECK:   %[[V_INPUT_1:.+]] = vector.extract_strided_slice %[[V_INPUT_R]]
-// CHECK-SAME:     {offsets = [0, 2, 0], sizes = [3, 2, 4], strides = [1, 1, 1]} : vector<3x4x4xi8> to vector<3x2x4xi8>
+// CHECK-SAME:    [0:3:1][2:2:1][0:4:1] : vector<3x4x4xi8> to vector<3x2x4xi8>
 
 //      CHECK:  %[[V_FILTER_0:.+]] = vector.extract %[[V_FILTER_R]][0] : vector<4xi8> from vector<2x4xi8>
 //      CHECK:  %[[V_FILTER_1:.+]] = vector.extract %[[V_FILTER_R]][1] : vector<4xi8> from vector<2x4xi8>
@@ -670,14 +670,14 @@ func.func @pooling_nwc_sum_memref_1_2_1_3(%input: memref<4x4x3xf32>, %filter: me
 // CHECK-DAG: %[[Vcst:.+]] = arith.constant 0.000000e+00 : f32
 // CHECK: %[[V0:.+]] = vector.transfer_read %[[INPUT]][%[[Vc0]], %[[Vc0]], %[[Vc0]]], %[[Vcst]] {in_bounds = [true, true, true]} : memref<4x4x3xf32>, vector<4x4x3xf32>
 // CHECK: %[[V1:.+]] = vector.transfer_read %[[OUTPUT]][%[[Vc0]], %[[Vc0]], %[[Vc0]]], %[[Vcst]] {in_bounds = [true, true, true]} : memref<4x2x3xf32>, vector<4x2x3xf32>
-// CHECK: %[[V2:.+]] = vector.extract_strided_slice %[[V0]] {offsets = [0, 0, 0], sizes = [4, 1, 3], strides = [1, 1, 1]} : vector<4x4x3xf32> to vector<4x1x3xf32>
-// CHECK: %[[V3:.+]] = vector.extract_strided_slice %[[V0]] {offsets = [0, 3, 0], sizes = [4, 1, 3], strides = [1, 1, 1]} : vector<4x4x3xf32> to vector<4x1x3xf32>
-// CHECK: %[[V4:.+]] = vector.extract_strided_slice %[[V1]] {offsets = [0, 0, 0], sizes = [4, 1, 3], strides = [1, 1, 1]} : vector<4x2x3xf32> to vector<4x1x3xf32>
-// CHECK: %[[V5:.+]] = vector.extract_strided_slice %[[V1]] {offsets = [0, 1, 0], sizes = [4, 1, 3], strides = [1, 1, 1]} : vector<4x2x3xf32> to vector<4x1x3xf32>
+// CHECK: %[[V2:.+]] = vector.extract_strided_slice %[[V0]][0:4:1][0:1:1][0:3:1] : vector<4x4x3xf32> to vector<4x1x3xf32>
+// CHECK: %[[V3:.+]] = vector.extract_strided_slice %[[V0]][0:4:1][3:1:1][0:3:1] : vector<4x4x3xf32> to vector<4x1x3xf32>
+// CHECK: %[[V4:.+]] = vector.extract_strided_slice %[[V1]][0:4:1][0:1:1][0:3:1] : vector<4x2x3xf32> to vector<4x1x3xf32>
+// CHECK: %[[V5:.+]] = vector.extract_strided_slice %[[V1]][0:4:1][1:1:1][0:3:1] : vector<4x2x3xf32> to vector<4x1x3xf32>
 // CHECK: %[[V6:.+]] = arith.addf %[[V2]], %[[V4]] : vector<4x1x3xf32>
 // CHECK: %[[V7:.+]] = arith.addf %[[V3]], %[[V5]] : vector<4x1x3xf32>
-// CHECK: %[[V8:.+]] = vector.insert_strided_slice %[[V6]], %[[V1]] {offsets = [0, 0, 0], strides = [1, 1, 1]} : vector<4x1x3xf32> into vector<4x2x3xf32>
-// CHECK: %[[V9:.+]] = vector.insert_strided_slice %[[V7]], %[[V8]] {offsets = [0, 1, 0], strides = [1, 1, 1]} : vector<4x1x3xf32> into vector<4x2x3xf32>
+// CHECK: %[[V8:.+]] = vector.insert_strided_slice %[[V6]], %[[V1]][0:1][0:1][0:1] : vector<4x1x3xf32> into vector<4x2x3xf32>
+// CHECK: %[[V9:.+]] = vector.insert_strided_slice %[[V7]], %[[V8]][0:1][1:1][0:1] : vector<4x1x3xf32> into vector<4x2x3xf32>
 // CHECK: vector.transfer_write %[[V9]], %[[OUTPUT]][%[[Vc0]], %[[Vc0]], %[[Vc0]]] {in_bounds = [true, true, true]} : vector<4x2x3xf32>, memref<4x2x3xf32>
 
 // -----
@@ -696,14 +696,14 @@ func.func @pooling_nwc_max_memref_1_2_1_3(%input: memref<4x4x3xf32>, %filter: me
 // CHECK-DAG: %[[Vcst:.+]] = arith.constant 0.000000e+00 : f32
 // CHECK: %[[V0:.+]] = vector.transfer_read %[[INPUT]][%[[Vc0]], %[[Vc0]], %[[Vc0]]], %[[Vcst]] {in_bounds = [true, true, true]} : memref<4x4x3xf32>, vector<4x4x3xf32>
 // CHECK: %[[V1:.+]] = vector.transfer_read %[[OUTPUT]][%[[Vc0]], %[[Vc0]], %[[Vc0]]], %[[Vcst]] {in_bounds = [true, true, true]} : memref<4x2x3xf32>, vector<4x2x3xf32>
-// CHECK: %[[V2:.+]] = vector.extract_strided_slice %[[V0]] {offsets = [0, 0, 0], sizes = [4, 1, 3], strides = [1, 1, 1]} : vector<4x4x3xf32> to vector<4x1x3xf32>
-// CHECK: %[[V3:.+]] = vector.extract_strided_slice %[[V0]] {offsets = [0, 3, 0], sizes = [4, 1, 3], strides = [1, 1, 1]} : vector<4x4x3xf32> to vector<4x1x3xf32>
-// CHECK: %[[V4:.+]] = vector.extract_strided_slice %[[V1]] {offsets = [0, 0, 0], sizes = [4, 1, 3], strides = [1, 1, 1]} : vector<4x2x3xf32> to vector<4x1x3xf32>
-// CHECK: %[[V5:.+]] = vector.extract_strided_slice %[[V1]] {offsets = [0, 1, 0], sizes = [4, 1, 3], strides = [1, 1, 1]} : vector<4x2x3xf32> to vector<4x1x3xf32>
+// CHECK: %[[V2:.+]] = vector.extract_strided_slice %[[V0]][0:4:1][0:1:1][0:3:1] : vector<4x4x3xf32> to vector<4x1x3xf32>
+// CHECK: %[[V3:.+]] = vector.extract_strided_slice %[[V0]][0:4:1][3:1:1][0:3:1] : vector<4x4x3xf32> to vector<4x1x3xf32>
+// CHECK: %[[V4:.+]] = vector.extract_strided_slice %[[V1]][0:4:1][0:1:1][0:3:1] : vector<4x2x3xf32> to vector<4x1x3xf32>
+// CHECK: %[[V5:.+]] = vector.extract_strided_slice %[[V1]][0:4:1][1:1:1][0:3:1] : vector<4x2x3xf32> to vector<4x1x3xf32>
 // CHECK: %[[V6:.+]] = arith.maximumf %[[V2]], %[[V4]] : vector<4x1x3xf32>
 // CHECK: %[[V7:.+]] = arith.maximumf %[[V3]], %[[V5]] : vector<4x1x3xf32>
-// CHECK: %[[V8:.+]] = vector.insert_strided_slice %[[V6]], %[[V1]] {offsets = [0, 0, 0], strides = [1, 1, 1]} : vector<4x1x3xf32> into vector<4x2x3xf32>
-// CHECK: %[[V9:.+]] = vector.insert_strided_slice %[[V7]], %[[V8]] {offsets = [0, 1, 0], strides = [1, 1, 1]} : vector<4x1x3xf32> into vector<4x2x3xf32>
+// CHECK: %[[V8:.+]] = vector.insert_strided_slice %[[V6]], %[[V1]][0:1][0:1][0:1] : vector<4x1x3xf32> into vector<4x2x3xf32>
+// CHECK: %[[V9:.+]] = vector.insert_strided_slice %[[V7]], %[[V8]][0:1][1:1][0:1] : vector<4x1x3xf32> into vector<4x2x3xf32>
 // CHECK: vector.transfer_write %[[V9]], %[[OUTPUT]][%[[Vc0]], %[[Vc0]], %[[Vc0]]] {in_bounds = [true, true, true]} : vector<4x2x3xf32>, memref<4x2x3xf32>
 
 // -----
@@ -725,16 +725,16 @@ func.func @pooling_nwc_sum_i8i8i32_memref_1_2_1_3(%input: memref<4x4x3xi8>, %fil
 // CHECK-DAG: %[[Vc0_i32:.+]] = arith.constant 0 : i32
 // CHECK: %[[V0:.+]] = vector.transfer_read %[[INPUT]][%[[Vc0]], %[[Vc0]], %[[Vc0]]], %[[Vc0_i8]] {in_bounds = [true, true, true]} : memref<4x4x3xi8>, vector<4x4x3xi8>
 // CHECK: %[[V1:.+]] = vector.transfer_read %[[OUTPUT]][%[[Vc0]], %[[Vc0]], %[[Vc0]]], %[[Vc0_i32]] {in_bounds = [true, true, true]} : memref<4x2x3xi32>, vector<4x2x3xi32>
-// CHECK: %[[V2:.+]] = vector.extract_strided_slice %[[V0]] {offsets = [0, 0, 0], sizes = [4, 1, 3], strides = [1, 1, 1]} : vector<4x4x3xi8> to vector<4x1x3xi8>
-// CHECK: %[[V3:.+]] = vector.extract_strided_slice %[[V0]] {offsets = [0, 3, 0], sizes = [4, 1, 3], strides = [1, 1, 1]} : vector<4x4x3xi8> to vector<4x1x3xi8>
-// CHECK: %[[V4:.+]] = vector.extract_strided_slice %[[V1]] {offsets = [0, 0, 0], sizes = [4, 1, 3], strides = [1, 1, 1]} : vector<4x2x3xi32> to vector<4x1x3xi32>
-// CHECK: %[[V5:.+]] = vector.extract_strided_slice %[[V1]] {offsets = [0, 1, 0], sizes = [4, 1, 3], strides = [1, 1, 1]} : vector<4x2x3xi32> to vector<4x1x3xi32>
+// CHECK: %[[V2:.+]] = vector.extract_strided_slice %[[V0]][0:4:1][0:1:1][0:3:1] : vector<4x4x3xi8> to vector<4x1x3xi8>
+// CHECK: %[[V3:.+]] = vector.extract_strided_slice %[[V0]][0:4:1][3:1:1][0:3:1] : vector<4x4x3xi8> to vector<4x1x3xi8>
+// CHECK: %[[V4:.+]] = vector.extract_strided_slice %[[V1]][0:4:1][0:1:1][0:3:1] : vector<4x2x3xi32> to vector<4x1x3xi32>
+// CHECK: %[[V5:.+]] = vector.extract_strided_slice %[[V1]][0:4:1][1:1:1][0:3:1] : vector<4x2x3xi32> to vector<4x1x3xi32>
 // CHECK: %[[V6:.+]] = arith.extsi %[[V2]] : vector<4x1x3xi8> to vector<4x1x3xi32>
 // CHECK: %[[V7:.+]] = arith.addi %[[V6]], %[[V4]] : vector<4x1x3xi32>
 // CHECK: %[[V8:.+]] = arith.extsi %[[V3]] : vector<4x1x3xi8> to vector<4x1x3xi32>
 // CHECK: %[[V9:.+]] = arith.addi %[[V8]], %[[V5]] : vector<4x1x3xi32>
-// CHECK: %[[V10:.+]] = vector.insert_strided_slice %[[V7]], %[[V1]] {offsets = [0, 0, 0], strides = [1, 1, 1]} : vector<4x1x3xi32> into vector<4x2x3xi32>
-// CHECK: %[[V11:.+]] = vector.insert_strided_slice %[[V9]], %[[V10]] {offsets = [0, 1, 0], strides = [1, 1, 1]} : vector<4x1x3xi32> into vector<4x2x3xi32>
+// CHECK: %[[V10:.+]] = vector.insert_strided_slice %[[V7]], %[[V1]][0:1][0:1][0:1] : vector<4x1x3xi32> into vector<4x2x3xi32>
+// CHECK: %[[V11:.+]] = vector.insert_strided_slice %[[V9]], %[[V10]][0:1][1:1][0:1] : vector<4x1x3xi32> into vector<4x2x3xi32>
 // CHECK: vector.transfer_write %[[V11]], %[[OUTPUT]][%[[Vc0]], %[[Vc0]], %[[Vc0]]] {in_bounds = [true, true, true]} : vector<4x2x3xi32>, memref<4x2x3xi32>
 // CHECK: return
 
@@ -757,16 +757,16 @@ func.func @pooling_nwc_max_i8i8i32_memref_1_2_1_3(%input: memref<4x4x3xi8>, %fil
 // CHECK-DAG: %[[Vc0_i32:.+]] = arith.constant 0 : i32
 // CHECK: %[[V0:.+]] = vector.transfer_read %[[INPUT]][%[[Vc0]], %[[Vc0]], %[[Vc0]]], %[[Vc0_i8]] {in_bounds = [true, true, true]} : memref<4x4x3xi8>, vector<4x4x3xi8>
 // CHECK: %[[V1:.+]] = vector.transfer_read %[[OUTPUT]][%[[Vc0]], %[[Vc0]], %[[Vc0]]], %[[Vc0_i32]] {in_bounds = [true, true, true]} : memref<4x2x3xi32>, vector<4x2x3xi32>
-// CHECK: %[[V2:.+]] = vector.extract_strided_slice %[[V0]] {offsets = [0, 0, 0], sizes = [4, 1, 3], strides = [1, 1, 1]} : vector<4x4x3xi8> to vector<4x1x3xi8>
-// CHECK: %[[V3:.+]] = vector.extract_strided_slice %[[V0]] {offsets = [0, 3, 0], sizes = [4, 1, 3], strides = [1, 1, 1]} : vector<4x4x3xi8> to vector<4x1x3xi8>
-// CHECK: %[[V4:.+]] = vector.extract_strided_slice %[[V1]] {offsets = [0, 0, 0], sizes = [4, 1, 3], strides = [1, 1, 1]} : vector<4x2x3xi32> to vector<4x1x3xi32>
-// CHECK: %[[V5:.+]] = vector.extract_strided_slice %[[V1]] {offsets = [0, 1, 0], sizes = [4, 1, 3], strides = [1, 1, 1]} : vector<4x2x3xi32> to vector<4x1x3xi32>
+// CHECK: %[[V2:.+]] = vector.extract_strided_slice %[[V0]][0:4:1][0:1:1][0:3:1] : vector<4x4x3xi8> to vector<4x1x3xi8>
+// CHECK: %[[V3:.+]] = vector.extract_strided_slice %[[V0]][0:4:1][3:1:1][0:3:1] : vector<4x4x3xi8> to vector<4x1x3xi8>
+// CHECK: %[[V4:.+]] = vector.extract_strided_slice %[[V1]][0:4:1][0:1:1][0:3:1] : vector<4x2x3xi32> to vector<4x1x3xi32>
+// CHECK: %[[V5:.+]] = vector.extract_strided_slice %[[V1]][0:4:1][1:1:1][0:3:1] : vector<4x2x3xi32> to vector<4x1x3xi32>
 // CHECK: %[[V6:.+]] = arith.extsi %[[V2]] : vector<4x1x3xi8> to vector<4x1x3xi32>
 // CHECK: %[[V7:.+]] = arith.maxsi %[[V6]], %[[V4]] : vector<4x1x3xi32>
 // CHECK: %[[V8:.+]] = arith.extsi %[[V3]] : vector<4x1x3xi8> to vector<4x1x3xi32>
 // CHECK: %[[V9:.+]] = arith.maxsi %[[V8]], %[[V5]] : vector<4x1x3xi32>
-// CHECK: %[[V10:.+]] = vector.insert_strided_slice %[[V7]], %[[V1]] {offsets = [0, 0, 0], strides = [1, 1, 1]} : vector<4x1x3xi32> into vector<4x2x3xi32>
-// CHECK: %[[V11:.+]] = vector.insert_strided_slice %[[V9]], %[[V10]] {offsets = [0, 1, 0], strides = [1, 1, 1]} : vector<4x1x3xi32> into vector<4x2x3xi32>
+// CHECK: %[[V10:.+]] = vector.insert_strided_slice %[[V7]], %[[V1]][0:1][0:1][0:1] : vector<4x1x3xi32> into vector<4x2x3xi32>
+// CHECK: %[[V11:.+]] = vector.insert_strided_slice %[[V9]], %[[V10]][0:1][1:1][0:1] : vector<4x1x3xi32> into vector<4x2x3xi32>
 // CHECK: vector.transfer_write %[[V11]], %[[OUTPUT]][%[[Vc0]], %[[Vc0]], %[[Vc0]]] {in_bounds = [true, true, true]} : vector<4x2x3xi32>, memref<4x2x3xi32>
 // CHECK: return
 
@@ -786,18 +786,18 @@ func.func @pooling_nwc_sum_memref_2_2_2_3(%input: memref<4x6x3xf32>, %filter: me
 // CHECK-DAG: %[[Vcst:.+]] = arith.constant 0.000000e+00 : f32
 // CHECK: %[[V0:.+]] = vector.transfer_read %[[INPUT]][%[[Vc0]], %[[Vc0]], %[[Vc0]]], %[[Vcst]] {in_bounds = [true, true, true]} : memref<4x6x3xf32>, vector<4x6x3xf32>
 // CHECK: %[[V1:.+]] = vector.transfer_read %[[OUTPUT]][%[[Vc0]], %[[Vc0]], %[[Vc0]]], %[[Vcst]] {in_bounds = [true, true, true]} : memref<4x2x3xf32>, vector<4x2x3xf32>
-// CHECK: %[[V2:.+]] = vector.extract_strided_slice %[[V0]] {offsets = [0, 0, 0], sizes = [4, 1, 3], strides = [1, 1, 1]} : vector<4x6x3xf32> to vector<4x1x3xf32>
-// CHECK: %[[V3:.+]] = vector.extract_strided_slice %[[V0]] {offsets = [0, 3, 0], sizes = [4, 1, 3], strides = [1, 1, 1]} : vector<4x6x3xf32> to vector<4x1x3xf32>
-// CHECK: %[[V4:.+]] = vector.extract_strided_slice %[[V0]] {offsets = [0, 2, 0], sizes = [4, 1, 3], strides = [1, 1, 1]} : vector<4x6x3xf32> to vector<4x1x3xf32>
-// CHECK: %[[V5:.+]] = vector.extract_strided_slice %[[V0]] {offsets = [0, 5, 0], sizes = [4, 1, 3], strides = [1, 1, 1]} : vector<4x6x3xf32> to vector<4x1x3xf32>
-// CHECK: %[[V6:.+]] = vector.extract_strided_slice %[[V1]] {offsets = [0, 0, 0], sizes = [4, 1, 3], strides = [1, 1, 1]} : vector<4x2x3xf32> to vector<4x1x3xf32>
-// CHECK: %[[V7:.+]] = vector.extract_strided_slice %[[V1]] {offsets = [0, 1, 0], sizes = [4, 1, 3], strides = [1, 1, 1]} : vector<4x2x3xf32> to vector<4x1x3xf32>
+// CHECK: %[[V2:.+]] = vector.extract_strided_slice %[[V0]][0:4:1][0:1:1][0:3:1] : vector<4x6x3xf32> to vector<4x1x3xf32>
+// CHECK: %[[V3:.+]] = vector.extract_strided_slice %[[V0]][0:4:1][3:1:1][0:3:1] : vector<4x6x3xf32> to vector<4x1x3xf32>
+// CHECK: %[[V4:.+]] = vector.extract_strided_slice %[[V0]][0:4:1][2:1:1][0:3:1] : vector<4x6x3xf32> to vector<4x1x3xf32>
+// CHECK: %[[V5:.+]] = vector.extract_strided_slice %[[V0]][0:4:1][5:1:1][0:3:1] : vector<4x6x3xf32> to vector<4x1x3xf32>
+// CHECK: %[[V6:.+]] = vector.extract_strided_slice %[[V1]][0:4:1][0:1:1][0:3:1] : vector<4x2x3xf32> to vector<4x1x3xf32>
+// CHECK: %[[V7:.+]] = vector.extract_strided_slice %[[V1]][0:4:1][1:1:1][0:3:1] : vector<4x2x3xf32> to vector<4x1x3xf32>
 // CHECK: %[[V8:.+]] = arith.addf %[[V2]], %[[V6]] : vector<4x1x3xf32>
 // CHECK: %[[V9:.+]] = arith.addf %[[V3]], %[[V7]] : vector<4x1x3xf32>
 // CHECK: %[[V10:.+]] = arith.addf %[[V4]], %[[V8]] : vector<4x1x3xf32>
 // CHECK: %[[V11:.+]] = arith.addf %[[V5]], %[[V9]] : vector<4x1x3xf32>
-// CHECK: %[[V12:.+]] = vector.insert_strided_slice %[[V10]], %[[V1]] {offsets = [0, 0, 0], strides = [1, 1, 1]} : vector<4x1x3xf32> into vector<4x2x3xf32>
-// CHECK: %[[V13:.+]] = vector.insert_strided_slice %[[V11]], %[[V12]] {offsets = [0, 1, 0], strides = [1, 1, 1]} : vector<4x1x3xf32> into vector<4x2x3xf32>
+// CHECK: %[[V12:.+]] = vector.insert_strided_slice %[[V10]], %[[V1]][0:1][0:1][0:1] : vector<4x1x3xf32> into vector<4x2x3xf32>
+// CHECK: %[[V13:.+]] = vector.insert_strided_slice %[[V11]], %[[V12]][0:1][1:1][0:1] : vector<4x1x3xf32> into vector<4x2x3xf32>
 // CHECK: vector.transfer_write %[[V13:.+]], %[[OUTPUT]][%[[Vc0]], %[[Vc0]], %[[Vc0]]] {in_bounds = [true, true, true]} : vector<4x2x3xf32>, memref<4x2x3xf32>
 
 
@@ -819,14 +819,14 @@ func.func @pooling_ncw_sum_memref_1_2_1_3(%input: memref<4x3x4xf32>, %filter: me
 // CHECK: %[[V1:.+]] = vector.transfer_read %[[OUTPUT]][%[[Vc0]], %[[Vc0]], %[[Vc0]]], %[[Vcst]] {in_bounds = [true, true, true]} : memref<4x3x2xf32>, vector<4x3x2xf32>
 // CHECK: %[[V2:.+]] = vector.transpose %[[V0]], [0, 2, 1] : vector<4x3x4xf32> to vector<4x4x3xf32>
 // CHECK: %[[V3:.+]] = vector.transpose %[[V1]], [0, 2, 1] : vector<4x3x2xf32> to vector<4x2x3xf32>
-// CHECK: %[[V4:.+]] = vector.extract_strided_slice %[[V2]] {offsets = [0, 0, 0], sizes = [4, 1, 3], strides = [1, 1, 1]} : vector<4x4x3xf32> to vector<4x1x3xf32>
-// CHECK: %[[V5:.+]] = vector.extract_strided_slice %[[V2]] {offsets = [0, 3, 0], sizes = [4, 1, 3], strides = [1, 1, 1]} : vector<4x4x3xf32> to vector<4x1x3xf32>
-// CHECK: %[[V6:.+]] = vector.extract_strided_slice %[[V3]] {offsets = [0, 0, 0], sizes = [4, 1, 3], strides = [1, 1, 1]} : vector<4x2x3xf32> to vector<4x1x3xf32>
-// CHECK: %[[V7:.+]] = vector.extract_strided_slice %[[V3]] {offsets = [0, 1, 0], sizes = [4, 1, 3], strides = [1, 1, 1]} : vector<4x2x3xf32> to vector<4x1x3xf32>
+// CHECK: %[[V4:.+]] = vector.extract_strided_slice %[[V2]][0:4:1][0:1:1][0:3:1] : vector<4x4x3xf32> to vector<4x1x3xf32>
+// CHECK: %[[V5:.+]] = vector.extract_strided_slice %[[V2]][0:4:1][3:1:1][0:3:1] : vector<4x4x3xf32> to vector<4x1x3xf32>
+// CHECK: %[[V6:.+]] = vector.extract_strided_slice %[[V3]][0:4:1][0:1:1][0:3:1] : vector<4x2x3xf32> to vector<4x1x3xf32>
+// CHECK: %[[V7:.+]] = vector.extract_strided_slice %[[V3]][0:4:1][1:1:1][0:3:1] : vector<4x2x3xf32> to vector<4x1x3xf32>
 // CHECK: %[[V8:.+]] = arith.addf %[[V4]], %[[V6]] : vector<4x1x3xf32>
 // CHECK: %[[V9:.+]] = arith.addf %[[V5]], %[[V7]] : vector<4x1x3xf32>
-// CHECK: %[[V10:.+]] = vector.insert_strided_slice %[[V8]], %[[V3]] {offsets = [0, 0, 0], strides = [1, 1, 1]} : vector<4x1x3xf32> into vector<4x2x3xf32>
-// CHECK: %[[V11:.+]] = vector.insert_strided_slice %[[V9]], %[[V10]] {offsets = [0, 1, 0], strides = [1, 1, 1]} : vector<4x1x3xf32> into vector<4x2x3xf32>
+// CHECK: %[[V10:.+]] = vector.insert_strided_slice %[[V8]], %[[V3]][0:1][0:1][0:1] : vector<4x1x3xf32> into vector<4x2x3xf32>
+// CHECK: %[[V11:.+]] = vector.insert_strided_slice %[[V9]], %[[V10]][0:1][1:1][0:1] : vector<4x1x3xf32> into vector<4x2x3xf32>
 // CHECK: %[[V12:.+]] = vector.transpose %[[V11]], [0, 2, 1] : vector<4x2x3xf32> to vector<4x3x2xf32>
 // CHECK: vector.transfer_write %[[V12:.+]], %[[OUTPUT]][%[[Vc0]], %[[Vc0]], %[[Vc0]]] {in_bounds = [true, true, true]} : vector<4x3x2xf32>, memref<4x3x2xf32>
 
@@ -868,8 +868,8 @@ func.func @pooling_nwc_sum_memref_2_2_2_1(%input: memref<4x4x3xf32>, %filter: me
 // CHECK-DAG: %[[Vcst:.+]] = arith.constant 0.000000e+00 : f32
 // CHECK: %[[V0:.+]] = vector.transfer_read %[[INPUT]][%[[Vc0]], %[[Vc0]], %[[Vc0]]], %[[Vcst]] {in_bounds = [true, true, true]} : memref<4x4x3xf32>, vector<4x4x3xf32>
 // CHECK: %[[V1:.+]] = vector.transfer_read %[[OUTPUT]][%[[Vc0]], %[[Vc0]], %[[Vc0]]], %[[Vcst]] {in_bounds = [true, true, true]} : memref<4x2x3xf32>, vector<4x2x3xf32>
-// CHECK: %[[V2:.+]] = vector.extract_strided_slice %[[V0]] {offsets = [0, 0, 0], sizes = [4, 2, 3], strides = [1, 1, 1]} : vector<4x4x3xf32> to vector<4x2x3xf32>
-// CHECK: %[[V3:.+]] = vector.extract_strided_slice %[[V0]] {offsets = [0, 2, 0], sizes = [4, 2, 3], strides = [1, 1, 1]} : vector<4x4x3xf32> to vector<4x2x3xf32>
+// CHECK: %[[V2:.+]] = vector.extract_strided_slice %[[V0]][0:4:1][0:2:1][0:3:1] : vector<4x4x3xf32> to vector<4x2x3xf32>
+// CHECK: %[[V3:.+]] = vector.extract_strided_slice %[[V0]][0:4:1][2:2:1][0:3:1] : vector<4x4x3xf32> to vector<4x2x3xf32>
 // CHECK: %[[V4:.+]] = arith.addf %[[V2]], %[[V1]] : vector<4x2x3xf32>
 // CHECK: %[[V5:.+]] = arith.addf %[[V3]], %[[V4]] : vector<4x2x3xf32>
 // CHECK: vector.transfer_write %[[V5:.+]], %[[OUTPUT]][%[[Vc0]], %[[Vc0]], %[[Vc0]]] {in_bounds = [true, true, true]} : vector<4x2x3xf32>, memref<4x2x3xf32>
@@ -893,18 +893,18 @@ func.func @pooling_ncw_sum_memref_2_2_2_3(%input: memref<4x3x6xf32>, %filter: me
 // CHECK: %[[V1:.+]] = vector.transfer_read %[[OUTPUT]][%[[Vc0]], %[[Vc0]], %[[Vc0]]], %[[Vcst]] {in_bounds = [true, true, true]} : memref<4x3x2xf32>, vector<4x3x2xf32>
 // CHECK: %[[V2:.+]] = vector.transpose %[[V0]], [0, 2, 1] : vector<4x3x6xf32> to vector<4x6x3xf32>
 // CHECK: %[[V3:.+]] = vector.transpose %[[V1]], [0, 2, 1] : vector<4x3x2xf32> to vector<4x2x3xf32>
-// CHECK: %[[V4:.+]] = vector.extract_strided_slice %[[V2]] {offsets = [0, 0, 0], sizes = [4, 1, 3], strides = [1, 1, 1]} : vector<4x6x3xf32> to vector<4x1x3xf32>
-// CHECK: %[[V5:.+]] = vector.extract_strided_slice %[[V2]] {offsets = [0, 3, 0], sizes = [4, 1, 3], strides = [1, 1, 1]} : vector<4x6x3xf32> to vector<4x1x3xf32>
-// CHECK: %[[V6:.+]] = vector.extract_strided_slice %[[V2]] {offsets = [0, 2, 0], sizes = [4, 1, 3], strides = [1, 1, 1]} : vector<4x6x3xf32> to vector<4x1x3xf32>
-// CHECK: %[[V7:.+]] = vector.extract_strided_slice %[[V2]] {offsets = [0, 5, 0], sizes = [4, 1, 3], strides = [1, 1, 1]} : vector<4x6x3xf32> to vector<4x1x3xf32>
-// CHECK: %[[V8:.+]] = vector.extract_strided_slice %[[V3]] {offsets = [0, 0, 0], sizes = [4, 1, 3], strides = [1, 1, 1]} : vector<4x2x3xf32> to vector<4x1x3xf32>
-// CHECK: %[[V9:.+]] = vector.extract_strided_slice %[[V3]] {offsets = [0, 1, 0], sizes = [4, 1, 3], strides = [1, 1, 1]} : vector<4x2x3xf32> to vector<4x1x3xf32>
+// CHECK: %[[V4:.+]] = vector.extract_strided_slice %[[V2]][0:4:1][0:1:1][0:3:1] : vector<4x6x3xf32> to vector<4x1x3xf32>
+// CHECK: %[[V5:.+]] = vector.extract_strided_slice %[[V2]][0:4:1][3:1:1][0:3:1] : vector<4x6x3xf32> to vector<4x1x3xf32>
+// CHECK: %[[V6:.+]] = vector.extract_strided_slice %[[V2]][0:4:1][2:1:1][0:3:1] : vector<4x6x3xf32> to vector<4x1x3xf32>
+// CHECK: %[[V7:.+]] = vector.extract_strided_slice %[[V2]][0:4:1][5:1:1][0:3:1] : vector<4x6x3xf32> to vector<4x1x3xf32>
+// CHECK: %[[V8:.+]] = vector.extract_strided_slice %[[V3]][0:4:1][0:1:1][0:3:1] : vector<4x2x3xf32> to vector<4x1x3xf32>
+// CHECK: %[[V9:.+]] = vector.extract_strided_slice %[[V3]][0:4:1][1:1:1][0:3:1] : vector<4x2x3xf32> to vector<4x1x3xf32>
 // CHECK: %[[V10:.+]] = arith.addf %[[V4]], %[[V8]] : vector<4x1x3xf32>
 // CHECK: %[[V11:.+]] = arith.addf %[[V5]], %[[V9]] : vector<4x1x3xf32>
 // CHECK: %[[V12:.+]] = arith.addf %[[V6]], %[[V10]] : vector<4x1x3xf32>
 // CHECK: %[[V13:.+]] = arith.addf %[[V7]], %[[V11]] : vector<4x1x3xf32>
-// CHECK: %[[V14:.+]] = vector.insert_strided_slice %[[V12]], %[[V3]] {offsets = [0, 0, 0], strides = [1, 1, 1]} : vector<4x1x3xf32> into vector<4x2x3xf32>
-// CHECK: %[[V15:.+]] = vector.insert_strided_slice %[[V13]], %[[V14]] {offsets = [0, 1, 0], strides = [1, 1, 1]} : vector<4x1x3xf32> into vector<4x2x3xf32>
+// CHECK: %[[V14:.+]] = vector.insert_strided_slice %[[V12]], %[[V3]][0:1][0:1][0:1] : vector<4x1x3xf32> into vector<4x2x3xf32>
+// CHECK: %[[V15:.+]] = vector.insert_strided_slice %[[V13]], %[[V14]][0:1][1:1][0:1] : vector<4x1x3xf32> into vector<4x2x3xf32>
 // CHECK: %[[V16:.+]] = vector.transpose %[[V15]], [0, 2, 1] : vector<4x2x3xf32> to vector<4x3x2xf32>
 // CHECK: vector.transfer_write %[[V16:.+]], %[[OUTPUT]][%[[Vc0]], %[[Vc0]], %[[Vc0]]] {in_bounds = [true, true, true]} : vector<4x3x2xf32>, memref<4x3x2xf32>
 
@@ -926,8 +926,8 @@ func.func @pooling_ncw_sum_memref_2_3_2_1(%input: memref<4x2x5xf32>, %filter: me
 // CHECK: %[[V1:.+]] = vector.transfer_read %[[OUTPUT]][%[[Vc0]], %[[Vc0]], %[[Vc0]]], %[[Vcst]] {in_bounds = [true, true, true]} : memref<4x2x3xf32>, vector<4x2x3xf32>
 // CHECK: %[[V2:.+]] = vector.transpose %[[V0]], [0, 2, 1] : vector<4x2x5xf32> to vector<4x5x2xf32>
 // CHECK: %[[V3:.+]] = vector.transpose %[[V1]], [0, 2, 1] : vector<4x2x3xf32> to vector<4x3x2xf32>
-// CHECK: %[[V4:.+]] = vector.extract_strided_slice %[[V2]] {offsets = [0, 0, 0], sizes = [4, 3, 2], strides = [1, 1, 1]} : vector<4x5x2xf32> to vector<4x3x2xf32>
-// CHECK: %[[V5:.+]] = vector.extract_strided_slice %[[V2]] {offsets = [0, 2, 0], sizes = [4, 3, 2], strides = [1, 1, 1]} : vector<4x5x2xf32> to vector<4x3x2xf32>
+// CHECK: %[[V4:.+]] = vector.extract_strided_slice %[[V2]][0:4:1][0:3:1][0:2:1] : vector<4x5x2xf32> to vector<4x3x2xf32>
+// CHECK: %[[V5:.+]] = vector.extract_strided_slice %[[V2]][0:4:1][2:3:1][0:2:1] : vector<4x5x2xf32> to vector<4x3x2xf32>
 // CHECK: %[[V6:.+]] = arith.addf %[[V4]], %[[V3]] : vector<4x3x2xf32>
 // CHECK: %[[V7:.+]] = arith.addf %[[V5]], %[[V6]] : vector<4x3x2xf32>
 // CHECK: %[[V8:.+]] = vector.transpose %[[V7]], [0, 2, 1] : vector<4x3x2xf32> to vector<4x2x3xf32>
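(All of the hunks above follow one mechanical mapping from the old attribute form to the new bracketed form. Below is a minimal Python sketch of that mapping — not the actual `auto-upgrade-insert-extract-slice.py` script linked in the PR description, just an illustration of the rule: `extract_strided_slice` renders one `[offset:size:stride]` group per dimension, while `insert_strided_slice` renders offset-only `[offset]` groups for the leading destination dims and `[offset:stride]` for the trailing source dims.)

```python
def slice_syntax(offsets, strides, sizes=None):
    """Render the new bracketed strided-slice syntax.

    - extract_strided_slice: sizes is given; emit [offset:size:stride]
      for each sliced dimension.
    - insert_strided_slice: sizes is None; offsets has destination rank,
      strides has source rank. The leading (dest_rank - src_rank) dims
      get offset-only [offset] groups; the rest get [offset:stride].
    """
    if sizes is not None:
        return "".join(
            f"[{o}:{sz}:{st}]" for o, sz, st in zip(offsets, sizes, strides)
        )
    lead = len(offsets) - len(strides)
    groups = [f"[{o}]" for o in offsets[:lead]]
    groups += [f"[{o}:{st}]" for o, st in zip(offsets[lead:], strides)]
    return "".join(groups)


# Examples from the PR description:
# extract: {offsets = [0, 2], sizes = [2, 4], strides = [1, 1]}
print(slice_syntax([0, 2], [1, 1], sizes=[2, 4]))  # [0:2:1][2:4:1]
# insert: {offsets = [0, 0, 2], strides = [1, 1]}
print(slice_syntax([0, 0, 2], [1, 1]))             # [0][0:1][2:1]
```

This matches the rewrites seen in the hunks above, e.g. `{offsets = [0, 1, 0], strides = [1, 1, 1]}` becoming `[0:1][1:1][0:1]`.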
diff --git a/mlir/test/Dialect/Vector/canonicalize.mlir b/mlir/test/Dialect/Vector/canonicalize.mlir
index e71a6eb02ea46..3395e7aed4ce8 100644
--- a/mlir/test/Dialect/Vector/canonicalize.mlir
+++ b/mlir/test/Dialect/Vector/canonicalize.mlir
@@ -201,7 +201,7 @@ func.func @constant_mask_transpose_to_transposed_constant_mask() -> (vector<2x3x
 func.func @extract_strided_slice_of_constant_mask() -> (vector<2x2xi1>) {
   %0 = vector.constant_mask [2, 2] : vector<4x3xi1>
   %1 = vector.extract_strided_slice %0
-    {offsets = [0, 0], sizes = [2, 2], strides = [1, 1]}
+   [0:2:1][0:2:1]
       : vector<4x3xi1> to vector<2x2xi1>
   // CHECK: vector.constant_mask [2, 2] : vector<2x2xi1>
   return %1 : vector<2x2xi1>
@@ -212,7 +212,7 @@ func.func @extract_strided_slice_of_constant_mask() -> (vector<2x2xi1>) {
 func.func @extract_strided_slice_of_constant_mask() -> (vector<2x2xi1>) {
   %0 = vector.constant_mask [2, 2] : vector<4x3xi1>
   %1 = vector.extract_strided_slice %0
-    {offsets = [1, 0], sizes = [2, 2], strides = [1, 1]}
+   [1:2:1][0:2:1]
       : vector<4x3xi1> to vector<2x2xi1>
   // CHECK: vector.constant_mask [1, 2] : vector<2x2xi1>
   return %1 : vector<2x2xi1>
@@ -223,7 +223,7 @@ func.func @extract_strided_slice_of_constant_mask() -> (vector<2x2xi1>) {
 func.func @extract_strided_slice_of_constant_mask() -> (vector<2x2xi1>) {
   %0 = vector.constant_mask [2, 2] : vector<4x3xi1>
   %1 = vector.extract_strided_slice %0
-    {offsets = [0, 1], sizes = [2, 2], strides = [1, 1]}
+   [0:2:1][1:2:1]
       : vector<4x3xi1> to vector<2x2xi1>
   // CHECK: vector.constant_mask [2, 1] : vector<2x2xi1>
   return %1 : vector<2x2xi1>
@@ -234,7 +234,7 @@ func.func @extract_strided_slice_of_constant_mask() -> (vector<2x2xi1>) {
 func.func @extract_strided_slice_of_constant_mask() -> (vector<2x2xi1>) {
   %0 = vector.constant_mask [2, 2] : vector<4x3xi1>
   %1 = vector.extract_strided_slice %0
-    {offsets = [2, 0], sizes = [2, 2], strides = [1, 1]}
+   [2:2:1][0:2:1]
       : vector<4x3xi1> to vector<2x2xi1>
   // CHECK: vector.constant_mask [0, 0] : vector<2x2xi1>
   return %1 : vector<2x2xi1>
@@ -245,7 +245,7 @@ func.func @extract_strided_slice_of_constant_mask() -> (vector<2x2xi1>) {
 func.func @extract_strided_slice_of_constant_mask() -> (vector<2x1xi1>) {
   %0 = vector.constant_mask [2, 2] : vector<4x3xi1>
   %1 = vector.extract_strided_slice %0
-    {offsets = [0, 2], sizes = [2, 1], strides = [1, 1]}
+   [0:2:1][2:1:1]
       : vector<4x3xi1> to vector<2x1xi1>
   // CHECK: vector.constant_mask [0, 0] : vector<2x1xi1>
   return %1 : vector<2x1xi1>
@@ -256,7 +256,7 @@ func.func @extract_strided_slice_of_constant_mask() -> (vector<2x1xi1>) {
 func.func @extract_strided_slice_of_constant_mask() -> (vector<2x1xi1>) {
   %0 = vector.constant_mask [2, 2] : vector<4x3xi1>
   %1 = vector.extract_strided_slice %0
-    {offsets = [0, 1], sizes = [2, 1], strides = [1, 1]}
+   [0:2:1][1:1:1]
       : vector<4x3xi1> to vector<2x1xi1>
   // CHECK: vector.constant_mask [2, 1] : vector<2x1xi1>
   return %1 : vector<2x1xi1>
@@ -267,7 +267,7 @@ func.func @extract_strided_slice_of_constant_mask() -> (vector<2x1xi1>) {
 func.func @extract_strided_slice_of_constant_mask() -> (vector<2x1xi1>) {
   %0 = vector.constant_mask [2, 2] : vector<4x3xi1>
   %1 = vector.extract_strided_slice %0
-    {offsets = [1, 1], sizes = [2, 1], strides = [1, 1]}
+   [1:2:1][1:1:1]
       : vector<4x3xi1> to vector<2x1xi1>
   // CHECK: vector.constant_mask [1, 1] : vector<2x1xi1>
   return %1 : vector<2x1xi1>
@@ -280,7 +280,7 @@ func.func @extract_strided_slice_of_constant_mask() -> (vector<2x1xi1>) {
 //  CHECK-NEXT:   return %[[ARG]] : vector<4x3xi1>
 func.func @extract_strided_fold(%arg : vector<4x3xi1>) -> (vector<4x3xi1>) {
   %0 = vector.extract_strided_slice %arg
-    {offsets = [0, 0], sizes = [4, 3], strides = [1, 1]}
+   [0:4:1][0:3:1]
       : vector<4x3xi1> to vector<4x3xi1>
   return %0 : vector<4x3xi1>
 }
@@ -292,10 +292,10 @@ func.func @extract_strided_fold(%arg : vector<4x3xi1>) -> (vector<4x3xi1>) {
 //  CHECK-NEXT:   return %[[ARG]] : vector<4x4xf32>
 func.func @extract_strided_fold_insert(%a: vector<4x4xf32>, %b: vector<8x16xf32>)
   -> (vector<4x4xf32>) {
-  %0 = vector.insert_strided_slice %a, %b {offsets = [2, 2], strides = [1, 1]}
+  %0 = vector.insert_strided_slice %a, %b[2:1][2:1]
     : vector<4x4xf32> into vector<8x16xf32>
   %1 = vector.extract_strided_slice %0
-    {offsets = [2, 2], sizes = [4, 4], strides = [1, 1]}
+   [2:4:1][2:4:1]
       : vector<8x16xf32> to vector<4x4xf32>
   return %1 : vector<4x4xf32>
 }
@@ -306,15 +306,15 @@ func.func @extract_strided_fold_insert(%a: vector<4x4xf32>, %b: vector<8x16xf32>
 // CHECK-LABEL: extract_strided_fold_insert
 //  CHECK-SAME: (%[[ARG0:.*]]: vector<6x4xf32>
 //  CHECK-NEXT:   %[[EXT:.*]] = vector.extract_strided_slice %[[ARG0]]
-//  CHECK-SAME:     {offsets = [0, 0], sizes = [4, 4], strides = [1, 1]}
+//  CHECK-SAME:    [0:4:1][0:4:1]
 //  CHECK-SAME:       : vector<6x4xf32> to vector<4x4xf32>
 //  CHECK-NEXT:   return %[[EXT]] : vector<4x4xf32>
 func.func @extract_strided_fold_insert(%a: vector<6x4xf32>, %b: vector<8x16xf32>)
   -> (vector<4x4xf32>) {
-  %0 = vector.insert_strided_slice %a, %b {offsets = [2, 2], strides = [1, 1]}
+  %0 = vector.insert_strided_slice %a, %b[2:1][2:1]
     : vector<6x4xf32> into vector<8x16xf32>
   %1 = vector.extract_strided_slice %0
-    {offsets = [2, 2], sizes = [4, 4], strides = [1, 1]}
+   [2:4:1][2:4:1]
       : vector<8x16xf32> to vector<4x4xf32>
   return %1 : vector<4x4xf32>
 }
@@ -325,18 +325,18 @@ func.func @extract_strided_fold_insert(%a: vector<6x4xf32>, %b: vector<8x16xf32>
 // CHECK-LABEL: extract_strided_fold_negative
 //  CHECK-SAME: (%[[ARG0:.*]]: vector<4x4xf32>, %[[ARG1:.*]]: vector<8x16xf32>
 //       CHECK:   %[[INS:.*]] = vector.insert_strided_slice %[[ARG0]], %[[ARG1]]
-//  CHECK-SAME:     {offsets = [2, 2], strides = [1, 1]}
+//  CHECK-SAME:    [2:1][2:1]
 //  CHECK-SAME:       : vector<4x4xf32> into vector<8x16xf32>
 //       CHECK:   %[[EXT:.*]] = vector.extract_strided_slice %[[INS]]
-//  CHECK-SAME:     {offsets = [2, 2], sizes = [6, 4], strides = [1, 1]}
+//  CHECK-SAME:    [2:6:1][2:4:1]
 //  CHECK-SAME:       : vector<8x16xf32> to vector<6x4xf32>
 //  CHECK-NEXT:   return %[[EXT]] : vector<6x4xf32>
 func.func @extract_strided_fold_negative(%a: vector<4x4xf32>, %b: vector<8x16xf32>)
   -> (vector<6x4xf32>) {
-  %0 = vector.insert_strided_slice %a, %b {offsets = [2, 2], strides = [1, 1]}
+  %0 = vector.insert_strided_slice %a, %b[2:1][2:1]
     : vector<4x4xf32> into vector<8x16xf32>
   %1 = vector.extract_strided_slice %0
-    {offsets = [2, 2], sizes = [6, 4], strides = [1, 1]}
+   [2:6:1][2:4:1]
       : vector<8x16xf32> to vector<6x4xf32>
   return %1 : vector<6x4xf32>
 }
@@ -347,17 +347,17 @@ func.func @extract_strided_fold_negative(%a: vector<4x4xf32>, %b: vector<8x16xf3
 // CHECK-LABEL: extract_strided_fold_insert
 //  CHECK-SAME: (%[[ARG0:.*]]: vector<2x8xf32>, %[[ARG1:.*]]: vector<1x4xf32>,
 //  CHECK-NEXT:   %[[EXT:.*]] = vector.extract_strided_slice %[[ARG1]]
-//  CHECK-SAME:     {offsets = [0, 0], sizes = [1, 1], strides = [1, 1]}
+//  CHECK-SAME:    [0:1:1][0:1:1]
 //  CHECK-SAME:       : vector<1x4xf32> to vector<1x1xf32>
 //  CHECK-NEXT:   return %[[EXT]] : vector<1x1xf32>
 func.func @extract_strided_fold_insert(%a: vector<2x8xf32>, %b: vector<1x4xf32>,
                                   %c : vector<1x4xf32>) -> (vector<1x1xf32>) {
-  %0 = vector.insert_strided_slice %b, %a {offsets = [0, 1], strides = [1, 1]}
+  %0 = vector.insert_strided_slice %b, %a[0:1][1:1]
     : vector<1x4xf32> into vector<2x8xf32>
-  %1 = vector.insert_strided_slice %c, %0 {offsets = [1, 0], strides = [1, 1]}
+  %1 = vector.insert_strided_slice %c, %0[1:1][0:1]
     : vector<1x4xf32> into vector<2x8xf32>
   %2 = vector.extract_strided_slice %1
-      {offsets = [0, 1], sizes = [1, 1], strides = [1, 1]}
+     [0:1:1][1:1:1]
         : vector<2x8xf32> to vector<1x1xf32>
   return %2 : vector<1x1xf32>
 }
@@ -1025,10 +1025,10 @@ func.func @extract_strided_constant() -> (vector<12x2xf32>, vector<2x13x3xi32>)
   %cst = arith.constant dense<2.000000e+00> : vector<29x7xf32>
   %cst_1 = arith.constant dense<1> : vector<4x37x9xi32>
   %0 = vector.extract_strided_slice %cst
-    {offsets = [2, 3], sizes = [12, 2], strides = [1, 1]}
+   [2:12:1][3:2:1]
       : vector<29x7xf32> to vector<12x2xf32>
   %1 = vector.extract_strided_slice %cst_1
-    {offsets = [1, 2, 5], sizes = [2, 13, 3], strides = [1, 1, 1]}
+   [1:2:1][2:13:1][5:3:1]
       : vector<4x37x9xi32> to vector<2x13x3xi32>
   return %0, %1 : vector<12x2xf32>, vector<2x13x3xi32>
 }
@@ -1041,7 +1041,7 @@ func.func @extract_strided_constant() -> (vector<12x2xf32>, vector<2x13x3xi32>)
 func.func @extract_strided_broadcast(%arg0: vector<4xf16>) -> vector<2x4xf16> {
  %0 = vector.broadcast %arg0 : vector<4xf16> to vector<16x4xf16>
  %1 = vector.extract_strided_slice %0
-  {offsets = [0, 0], sizes = [2, 4], strides = [1, 1]} :
+ [0:2:1][0:4:1] :
   vector<16x4xf16> to vector<2x4xf16>
   return %1 : vector<2x4xf16>
 }
@@ -1049,13 +1049,13 @@ func.func @extract_strided_broadcast(%arg0: vector<4xf16>) -> vector<2x4xf16> {
 // -----
 
 // CHECK-LABEL: extract_strided_broadcast2
-//       CHECK:   %[[E:.*]] = vector.extract_strided_slice %{{.*}} {offsets = [0], sizes = [2], strides = [1]} : vector<4xf16> to vector<2xf16>
+//       CHECK:   %[[E:.*]] = vector.extract_strided_slice %{{.*}}[0:2:1] : vector<4xf16> to vector<2xf16>
 //  CHECK-NEXT:   %[[B:.*]] = vector.broadcast %[[E]] : vector<2xf16> to vector<2x2xf16>
 //  CHECK-NEXT:   return %[[B]] : vector<2x2xf16>
 func.func @extract_strided_broadcast2(%arg0: vector<4xf16>) -> vector<2x2xf16> {
  %0 = vector.broadcast %arg0 : vector<4xf16> to vector<16x4xf16>
  %1 = vector.extract_strided_slice %0
-  {offsets = [0, 0], sizes = [2, 2], strides = [1, 1]} :
+ [0:2:1][0:2:1] :
   vector<16x4xf16> to vector<2x2xf16>
   return %1 : vector<2x2xf16>
 }
@@ -1069,7 +1069,7 @@ func.func @extract_strided_broadcast2(%arg0: vector<4xf16>) -> vector<2x2xf16> {
 func.func @extract_strided_broadcast3(%arg0: vector<1xf32>) -> vector<1x4xf32> {
  %0 = vector.broadcast %arg0 : vector<1xf32> to vector<1x8xf32>
  %1 = vector.extract_strided_slice %0
-      {offsets = [0, 4], sizes = [1, 4], strides = [1, 1]}
+     [0:1:1][4:4:1]
       : vector<1x8xf32> to vector<1x4xf32>
   return %1 : vector<1x4xf32>
 }
@@ -1083,7 +1083,7 @@ func.func @extract_strided_broadcast3(%arg0: vector<1xf32>) -> vector<1x4xf32> {
 func.func @extract_strided_broadcast4(%arg0: f32) -> vector<1x4xf32> {
  %0 = vector.broadcast %arg0 : f32 to vector<1x8xf32>
  %1 = vector.extract_strided_slice %0
-      {offsets = [0, 4], sizes = [1, 4], strides = [1, 1]}
+     [0:1:1][4:4:1]
       : vector<1x8xf32> to vector<1x4xf32>
   return %1 : vector<1x4xf32>
 }
@@ -1589,7 +1589,7 @@ func.func @masked_vector_multi_reduction_unit_dimensions_single_elem(%source: ve
 // CHECK-LABEL: func @insert_strided_slice_full_range
 //  CHECK-SAME: %[[SOURCE:.+]]: vector<16x16xf16>, %{{.+}}: vector<16x16xf16>
 func.func @insert_strided_slice_full_range(%source: vector<16x16xf16>, %dest: vector<16x16xf16>) -> vector<16x16xf16> {
-  %0 = vector.insert_strided_slice %source, %dest {offsets = [0, 0], strides = [1, 1]} : vector<16x16xf16> into vector<16x16xf16>
+  %0 = vector.insert_strided_slice %source, %dest[0:1][0:1] : vector<16x16xf16> into vector<16x16xf16>
   // CHECK: return %[[SOURCE]]
   return %0: vector<16x16xf16>
 }
@@ -1602,7 +1602,7 @@ func.func @insert_strided_slice_full_range(%source: vector<16x16xf16>, %dest: ve
 func.func @extract_strided_splat(%arg0: f16) -> vector<2x4xf16> {
  %0 = vector.splat %arg0 : vector<16x4xf16>
  %1 = vector.extract_strided_slice %0
-  {offsets = [1, 0], sizes = [2, 4], strides = [1, 1]} :
+ [1:2:1][0:4:1] :
   vector<16x4xf16> to vector<2x4xf16>
   return %1 : vector<2x4xf16>
 }
@@ -1741,11 +1741,11 @@ func.func @extract_splat_vector_3d_constant() -> (vector<2xi32>, vector<2xi32>,
 func.func @extract_strided_slice_1d_constant() -> (vector<3xi32>, vector<2xi32>, vector<1xi32>) {
   %cst = arith.constant dense<[0, 1, 2]> : vector<3xi32>
   %a = vector.extract_strided_slice %cst
-   {offsets = [0], sizes = [3], strides = [1]} : vector<3xi32> to vector<3xi32>
+  [0:3:1] : vector<3xi32> to vector<3xi32>
   %b = vector.extract_strided_slice %cst
-   {offsets = [1], sizes = [2], strides = [1]} : vector<3xi32> to vector<2xi32>
+  [1:2:1] : vector<3xi32> to vector<2xi32>
   %c = vector.extract_strided_slice %cst
-   {offsets = [2], sizes = [1], strides = [1]} : vector<3xi32> to vector<1xi32>
+  [2:1:1] : vector<3xi32> to vector<1xi32>
   return %a, %b, %c : vector<3xi32>, vector<2xi32>, vector<1xi32>
 }
 
@@ -1759,11 +1759,11 @@ func.func @extract_strided_slice_1d_constant() -> (vector<3xi32>, vector<2xi32>,
 func.func @extract_strided_slice_2d_constant() -> (vector<1x1xi32>, vector<1x2xi32>, vector<2x2xi32>) {
   %cst = arith.constant dense<[[0, 1, 2], [3, 4, 5]]> : vector<2x3xi32>
   %a = vector.extract_strided_slice %cst
-   {offsets = [0, 0], sizes = [1, 1], strides = [1, 1]} : vector<2x3xi32> to vector<1x1xi32>
+  [0:1:1][0:1:1] : vector<2x3xi32> to vector<1x1xi32>
   %b = vector.extract_strided_slice %cst
-   {offsets = [1, 1], sizes = [1, 2], strides = [1, 1]} : vector<2x3xi32> to vector<1x2xi32>
+  [1:1:1][1:2:1] : vector<2x3xi32> to vector<1x2xi32>
   %c = vector.extract_strided_slice %cst
-   {offsets = [0, 1], sizes = [2, 2], strides = [1, 1]} : vector<2x3xi32> to vector<2x2xi32>
+  [0:2:1][1:2:1] : vector<2x3xi32> to vector<2x2xi32>
   return %a, %b, %c : vector<1x1xi32>, vector<1x2xi32>, vector<2x2xi32>
 }
 
@@ -1778,13 +1778,13 @@ func.func @extract_strided_slice_2d_constant() -> (vector<1x1xi32>, vector<1x2xi
 func.func @extract_strided_slice_3d_constant() -> (vector<1x2x2xi32>, vector<1x1x2xi32>, vector<2x1x2xi32>, vector<1x1x1xi32>) {
   %cst = arith.constant dense<[[[0, 1], [2, 3]], [[4, 5], [6, 7]], [[8, 9], [10, 11]]]> : vector<3x2x2xi32>
   %a = vector.extract_strided_slice %cst
-   {offsets = [2], sizes = [1], strides = [1]} : vector<3x2x2xi32> to vector<1x2x2xi32>
+  [2:1:1] : vector<3x2x2xi32> to vector<1x2x2xi32>
   %b = vector.extract_strided_slice %cst
-   {offsets = [0, 1], sizes = [1, 1], strides = [1, 1]} : vector<3x2x2xi32> to vector<1x1x2xi32>
+  [0:1:1][1:1:1] : vector<3x2x2xi32> to vector<1x1x2xi32>
   %c = vector.extract_strided_slice %cst
-   {offsets = [1, 1, 0], sizes = [2, 1, 2], strides = [1, 1, 1]} : vector<3x2x2xi32> to vector<2x1x2xi32>
+  [1:2:1][1:1:1][0:2:1] : vector<3x2x2xi32> to vector<2x1x2xi32>
   %d = vector.extract_strided_slice %cst
-   {offsets = [2, 1, 1], sizes = [1, 1, 1], strides = [1, 1, 1]} : vector<3x2x2xi32> to vector<1x1x1xi32>
+  [2:1:1][1:1:1][1:1:1] : vector<3x2x2xi32> to vector<1x1x1xi32>
   return %a, %b, %c, %d : vector<1x2x2xi32>, vector<1x1x2xi32>, vector<2x1x2xi32>, vector<1x1x1xi32>
 }
 
@@ -1796,7 +1796,7 @@ func.func @extract_strided_slice_3d_constant() -> (vector<1x2x2xi32>, vector<1x1
 //       CHECK: return %[[V]] : vector<4xf16>
 func.func @extract_extract_strided(%arg0: vector<32x16x4xf16>) -> vector<4xf16> {
  %1 = vector.extract_strided_slice %arg0
-  {offsets = [7, 3], sizes = [10, 8], strides = [1, 1]} :
+ [7:10:1][3:8:1] :
   vector<32x16x4xf16> to vector<10x8x4xf16>
   %2 = vector.extract %1[2, 4] : vector<4xf16> from vector<10x8x4xf16>
   return %2 : vector<4xf16>
@@ -1810,7 +1810,7 @@ func.func @extract_extract_strided(%arg0: vector<32x16x4xf16>) -> vector<4xf16>
 //       CHECK: return %[[V]] : f32
 func.func @extract_insert_strided(%a: vector<6x4xf32>, %b: vector<8x16xf32>)
   -> f32 {
-  %0 = vector.insert_strided_slice %a, %b {offsets = [2, 2], strides = [1, 1]}
+  %0 = vector.insert_strided_slice %a, %b[2:1][2:1]
     : vector<6x4xf32> into vector<8x16xf32>
   %2 = vector.extract %0[2, 4] : f32 from vector<8x16xf32>
   return %2 : f32
@@ -1824,7 +1824,7 @@ func.func @extract_insert_strided(%a: vector<6x4xf32>, %b: vector<8x16xf32>)
 //       CHECK: return %[[V]] : f32
 func.func @extract_insert_rank_reduce(%a: vector<4xf32>, %b: vector<8x16xf32>)
   -> f32 {
-  %0 = vector.insert_strided_slice %a, %b {offsets = [2, 2], strides = [1]}
+  %0 = vector.insert_strided_slice %a, %b[2][2:1]
     : vector<4xf32> into vector<8x16xf32>
   %2 = vector.extract %0[2, 4] : f32 from vector<8x16xf32>
   return %2 : f32
@@ -1837,7 +1837,7 @@ func.func @extract_insert_rank_reduce(%a: vector<4xf32>, %b: vector<8x16xf32>)
 //       CHECK: vector.extract
 func.func @extract_insert_negative(%a: vector<2x15xf32>, %b: vector<12x8x16xf32>)
   -> vector<16xf32> {
-  %0 = vector.insert_strided_slice %a, %b {offsets = [4, 2, 0], strides = [1, 1]}
+  %0 = vector.insert_strided_slice %a, %b[4][2:1][0:1]
     : vector<2x15xf32> into vector<12x8x16xf32>
   %2 = vector.extract %0[4, 2] : vector<16xf32> from vector<12x8x16xf32>
   return %2 : vector<16xf32>
@@ -1851,9 +1851,9 @@ func.func @extract_insert_negative(%a: vector<2x15xf32>, %b: vector<12x8x16xf32>
 //       CHECK: return %[[V]] : vector<16xf32>
 func.func @extract_insert_chain(%a: vector<2x16xf32>, %b: vector<12x8x16xf32>, %c: vector<2x16xf32>)
   -> vector<16xf32> {
-  %0 = vector.insert_strided_slice %c, %b {offsets = [4, 2, 0], strides = [1, 1]}
+  %0 = vector.insert_strided_slice %c, %b[4][2:1][0:1]
     : vector<2x16xf32> into vector<12x8x16xf32>
-  %1 = vector.insert_strided_slice %a, %0 {offsets = [0, 2, 0], strides = [1, 1]}
+  %1 = vector.insert_strided_slice %a, %0[0][2:1][0:1]
     : vector<2x16xf32> into vector<12x8x16xf32>
   %2 = vector.extract %1[4, 2] : vector<16xf32> from vector<12x8x16xf32>
   return %2 : vector<16xf32>
@@ -1879,7 +1879,7 @@ func.func @extract_from_extract_chain_should_not_fold_dynamic_extracts(%v: vecto
 //       CHECK: return %[[V]] : vector<4xf32>
 func.func @extract_extract_strided2(%A: vector<2x4xf32>)
   -> (vector<4xf32>) {
- %0 = vector.extract_strided_slice %A {offsets = [1, 0], sizes = [1, 4], strides = [1, 1]} : vector<2x4xf32> to vector<1x4xf32>
+ %0 = vector.extract_strided_slice %A[1:1:1][0:4:1] : vector<2x4xf32> to vector<1x4xf32>
  %1 = vector.extract %0[0] : vector<4xf32> from vector<1x4xf32>
  return %1 : vector<4xf32>
 }
@@ -2283,7 +2283,7 @@ func.func @bitcast(%a: vector<4x8xf32>) -> vector<4x16xi16> {
 func.func @insert_strided_slice_splat(%x: f32) -> (vector<8x16xf32>) {
   %splat0 = vector.splat %x : vector<4x4xf32>
   %splat1 = vector.splat %x : vector<8x16xf32>
-  %0 = vector.insert_strided_slice %splat0, %splat1 {offsets = [2, 2], strides = [1, 1]}
+  %0 = vector.insert_strided_slice %splat0, %splat1[2:1][2:1]
     : vector<4x4xf32> into vector<8x16xf32>
   return %0 : vector<8x16xf32>
 }
@@ -2295,9 +2295,9 @@ func.func @insert_strided_slice_splat(%x: f32) -> (vector<8x16xf32>) {
 //  CHECK-SAME: (%[[ARG:.*]]: vector<8x16xf32>)
 //  CHECK-NEXT:   return %[[ARG]] : vector<8x16xf32>
 func.func @insert_extract_strided_slice(%x: vector<8x16xf32>) -> (vector<8x16xf32>) {
-  %0 = vector.extract_strided_slice %x {offsets = [0, 8], sizes = [2, 4], strides = [1, 1]}
+  %0 = vector.extract_strided_slice %x[0:2:1][8:4:1]
         : vector<8x16xf32> to vector<2x4xf32>
-  %1 = vector.insert_strided_slice %0, %x {offsets = [0, 8], strides = [1, 1]}
+  %1 = vector.insert_strided_slice %0, %x[0:1][8:1]
         : vector<2x4xf32> into vector<8x16xf32>
   return %1 : vector<8x16xf32>
 }
@@ -2317,11 +2317,11 @@ func.func @insert_strided_1d_constant() ->
   %cst_1 = arith.constant dense<4> : vector<1xi32>
   %cst_2 = arith.constant dense<[5, 6]> : vector<2xi32>
   %cst_3 = arith.constant dense<[7, 8, 9]> : vector<3xi32>
-  %a = vector.insert_strided_slice %cst_1, %vcst {offsets = [0], strides = [1]} : vector<1xi32> into vector<3xi32>
-  %b = vector.insert_strided_slice %cst_1, %vcst {offsets = [2], strides = [1]} : vector<1xi32> into vector<3xi32>
-  %c = vector.insert_strided_slice %cst_2, %vcst {offsets = [0], strides = [1]} : vector<2xi32> into vector<3xi32>
-  %d = vector.insert_strided_slice %cst_2, %vcst {offsets = [1], strides = [1]} : vector<2xi32> into vector<3xi32>
-  %e = vector.insert_strided_slice %cst_3, %vcst {offsets = [0], strides = [1]} : vector<3xi32> into vector<3xi32>
+  %a = vector.insert_strided_slice %cst_1, %vcst[0:1] : vector<1xi32> into vector<3xi32>
+  %b = vector.insert_strided_slice %cst_1, %vcst[2:1] : vector<1xi32> into vector<3xi32>
+  %c = vector.insert_strided_slice %cst_2, %vcst[0:1] : vector<2xi32> into vector<3xi32>
+  %d = vector.insert_strided_slice %cst_2, %vcst[1:1] : vector<2xi32> into vector<3xi32>
+  %e = vector.insert_strided_slice %cst_3, %vcst[0:1] : vector<3xi32> into vector<3xi32>
   return %a, %b, %c, %d, %e : vector<3xi32>, vector<3xi32>, vector<3xi32>, vector<3xi32>, vector<3xi32>
 }
 
@@ -2342,13 +2342,13 @@ func.func @insert_strided_2d_constant() ->
   %cst_1 = arith.constant dense<9> : vector<1xi32>
   %cst_2 = arith.constant dense<[18, 19]> : vector<2xi32>
   %cst_3 = arith.constant dense<[[28, 29], [38, 39]]> : vector<2x2xi32>
-  %a = vector.insert_strided_slice %cst_1, %vcst {offsets = [1, 0], strides = [1]} : vector<1xi32> into vector<3x2xi32>
-  %b = vector.insert_strided_slice %cst_1, %vcst {offsets = [2, 1], strides = [1]} : vector<1xi32> into vector<3x2xi32>
-  %c = vector.insert_strided_slice %cst_2, %vcst {offsets = [0, 0], strides = [1]} : vector<2xi32> into vector<3x2xi32>
-  %d = vector.insert_strided_slice %cst_2, %vcst {offsets = [1, 0], strides = [1]} : vector<2xi32> into vector<3x2xi32>
-  %e = vector.insert_strided_slice %cst_2, %vcst {offsets = [2, 0], strides = [1]} : vector<2xi32> into vector<3x2xi32>
-  %f = vector.insert_strided_slice %cst_3, %vcst {offsets = [0, 0], strides = [1, 1]} : vector<2x2xi32> into vector<3x2xi32>
-  %g = vector.insert_strided_slice %cst_3, %vcst {offsets = [1, 0], strides = [1, 1]} : vector<2x2xi32> into vector<3x2xi32>
+  %a = vector.insert_strided_slice %cst_1, %vcst[1][0:1] : vector<1xi32> into vector<3x2xi32>
+  %b = vector.insert_strided_slice %cst_1, %vcst[2][1:1] : vector<1xi32> into vector<3x2xi32>
+  %c = vector.insert_strided_slice %cst_2, %vcst[0][0:1] : vector<2xi32> into vector<3x2xi32>
+  %d = vector.insert_strided_slice %cst_2, %vcst[1][0:1] : vector<2xi32> into vector<3x2xi32>
+  %e = vector.insert_strided_slice %cst_2, %vcst[2][0:1] : vector<2xi32> into vector<3x2xi32>
+  %f = vector.insert_strided_slice %cst_3, %vcst[0:1][0:1] : vector<2x2xi32> into vector<3x2xi32>
+  %g = vector.insert_strided_slice %cst_3, %vcst[1:1][0:1] : vector<2x2xi32> into vector<3x2xi32>
   return %a, %b, %c, %d, %e, %f, %g :
     vector<3x2xi32>, vector<3x2xi32>, vector<3x2xi32>, vector<3x2xi32>, vector<3x2xi32>, vector<3x2xi32>, vector<3x2xi32>
 }
@@ -2422,7 +2422,7 @@ func.func @extract_strided_slice_of_constant_mask() -> vector<5x7xi1>{
   %c4 = arith.constant 4 : index
   %c10 = arith.constant 10 : index
   %mask = vector.create_mask %c10, %c4 : vector<12x7xi1>
-  %res = vector.extract_strided_slice %mask {offsets = [3], sizes = [5], strides = [1]} : vector<12x7xi1> to vector<5x7xi1>
+  %res = vector.extract_strided_slice %mask[3:5:1] : vector<12x7xi1> to vector<5x7xi1>
   return %res : vector<5x7xi1>
 }
 
diff --git a/mlir/test/Dialect/Vector/invalid.mlir b/mlir/test/Dialect/Vector/invalid.mlir
index 00914c1d1baf6..1cb6b83a2e4f1 100644
--- a/mlir/test/Dialect/Vector/invalid.mlir
+++ b/mlir/test/Dialect/Vector/invalid.mlir
@@ -619,49 +619,49 @@ func.func @test_vector.transfer_write(%arg0: memref<?xf32>, %arg1: vector<7xf32>
 
 func.func @insert_strided_slice(%a: vector<4x4xf32>, %b: vector<4x8x16xf32>) {
   // expected-error at +1 {{expected offsets of same size as destination vector rank}}
-  %1 = vector.insert_strided_slice %a, %b {offsets = [100], strides = [1, 1]} : vector<4x4xf32> into vector<4x8x16xf32>
+  %1 = vector.insert_strided_slice %a, %b[100:1][100:1] : vector<4x4xf32> into vector<4x8x16xf32>
 }
 
 // -----
 
 func.func @insert_strided_slice(%a: vector<4x4xf32>, %b: vector<4x8x16xf32>) {
   // expected-error at +1 {{expected strides of same size as source vector rank}}
-  %1 = vector.insert_strided_slice %a, %b {offsets = [2, 2, 2], strides = [1]} : vector<4x4xf32> into vector<4x8x16xf32>
+  %1 = vector.insert_strided_slice %a, %b[2, 2][2:1] : vector<4x4xf32> into vector<4x8x16xf32>
 }
 
 // -----
 
 func.func @insert_strided_slice(%a: vector<4x4xf32>, %b: vector<4x8x16xf32>) {
   // expected-error at +1 {{expected source rank to be no greater than destination rank}}
-  %1 = vector.insert_strided_slice %b, %a {offsets = [2, 2], strides = [1, 1, 1]} : vector<4x8x16xf32> into vector<4x4xf32>
+  %1 = vector.insert_strided_slice %b, %a[2:1][2:1][2:1] : vector<4x8x16xf32> into vector<4x4xf32>
 }
 
 // -----
 
 func.func @insert_strided_slice(%a: vector<4x4xf32>, %b: vector<4x8x16xf32>) {
   // expected-error at +1 {{op expected offsets dimension 0 to be confined to [0, 4)}}
-  %1 = vector.insert_strided_slice %a, %b {offsets = [100,100,100], strides = [1, 1]} : vector<4x4xf32> into vector<4x8x16xf32>
+  %1 = vector.insert_strided_slice %a, %b[100][100:1][100:1] : vector<4x4xf32> into vector<4x8x16xf32>
 }
 
 // -----
 
 func.func @insert_strided_slice(%a: vector<4x4xf32>, %b: vector<4x8x16xf32>) {
   // expected-error at +1 {{op expected strides to be confined to [1, 2)}}
-  %1 = vector.insert_strided_slice %a, %b {offsets = [2, 2, 2], strides = [100, 100]} : vector<4x4xf32> into vector<4x8x16xf32>
+  %1 = vector.insert_strided_slice %a, %b[2][2:100][2:100] : vector<4x4xf32> into vector<4x8x16xf32>
 }
 
 // -----
 
 func.func @insert_strided_slice(%a: vector<4x4xf32>, %b: vector<4x8x16xf32>) {
   // expected-error at +1 {{op expected sum(offsets, source vector shape) dimension 1 to be confined to [1, 9)}}
-  %1 = vector.insert_strided_slice %a, %b {offsets = [2, 7, 2], strides = [1, 1]} : vector<4x4xf32> into vector<4x8x16xf32>
+  %1 = vector.insert_strided_slice %a, %b[2][7:1][2:1] : vector<4x4xf32> into vector<4x8x16xf32>
 }
 
 // -----
 
 func.func @insert_strided_slice_scalable(%a : vector<1x1x[2]xi32>, %b: vector<1x4x[4]xi32>) -> vector<1x4x[4]xi32> {
   // expected-error at +1 {{op expected size at idx=2 to match the corresponding base size from the input vector (2 vs 4)}}
-  %0 = vector.insert_strided_slice %a, %b {offsets = [0, 3, 0], strides = [1, 1, 1]} : vector<1x1x[2]xi32> into vector<1x4x[4]xi32>
+  %0 = vector.insert_strided_slice %a, %b[0:1][3:1][0:1] : vector<1x1x[2]xi32> into vector<1x4x[4]xi32>
   return %0 : vector<1x4x[4]xi32>
 }
 
@@ -669,7 +669,7 @@ func.func @insert_strided_slice_scalable(%a : vector<1x1x[2]xi32>, %b: vector<1x
 
 func.func @insert_strided_slice_scalable(%a : vector<1x1x4xi32>, %b: vector<1x4x[4]xi32>) -> vector<1x4x[4]xi32> {
   // expected-error at +1 {{op mismatching scalable flags (at source vector idx=2)}}
-  %0 = vector.insert_strided_slice %a, %b {offsets = [0, 3, 0], strides = [1, 1, 1]} : vector<1x1x4xi32> into vector<1x4x[4]xi32>
+  %0 = vector.insert_strided_slice %a, %b[0:1][3:1][0:1] : vector<1x1x4xi32> into vector<1x4x[4]xi32>
   return %0 : vector<1x4x[4]xi32>
 }
 
@@ -677,42 +677,42 @@ func.func @insert_strided_slice_scalable(%a : vector<1x1x4xi32>, %b: vector<1x4x
 
 func.func @extract_strided_slice(%arg0: vector<4x8x16xf32>) {
   // expected-error at +1 {{expected offsets, sizes and strides attributes of same size}}
-  %1 = vector.extract_strided_slice %arg0 {offsets = [100], sizes = [2, 2], strides = [1, 1]} : vector<4x8x16xf32> to vector<2x2x16xf32>
+  %1 = vector.extract_strided_slice %arg0[100:2:1][100:2:1] : vector<4x8x16xf32> to vector<2x2x16xf32>
 }
 
 // -----
 
 func.func @extract_strided_slice(%arg0: vector<4x8x16xf32>) {
   // expected-error at +1 {{expected offsets attribute of rank no greater than vector rank}}
-  %1 = vector.extract_strided_slice %arg0 {offsets = [2, 2, 2, 2], sizes = [2, 2, 2, 2], strides = [1, 1, 1, 1]} : vector<4x8x16xf32> to vector<2x2x16xf32>
+  %1 = vector.extract_strided_slice %arg0[2:2:1][2:2:1][2:2:1][2:2:1] : vector<4x8x16xf32> to vector<2x2x16xf32>
 }
 
 // -----
 
 func.func @extract_strided_slice(%arg0: vector<4x8x16xf32>) {
   // expected-error at +1 {{op expected offsets dimension 0 to be confined to [0, 4)}}
-  %1 = vector.extract_strided_slice %arg0 {offsets = [100], sizes = [100], strides = [100]} : vector<4x8x16xf32> to vector<100x8x16xf32>
+  %1 = vector.extract_strided_slice %arg0[100:100:100] : vector<4x8x16xf32> to vector<100x8x16xf32>
 }
 
 // -----
 
 func.func @extract_strided_slice(%arg0: vector<4x8x16xf32>) {
   // expected-error at +1 {{op expected sizes dimension 0 to be confined to [1, 5)}}
-  %1 = vector.extract_strided_slice %arg0 {offsets = [2], sizes = [100], strides = [100]} : vector<4x8x16xf32> to vector<100x8x16xf32>
+  %1 = vector.extract_strided_slice %arg0[2:100:100] : vector<4x8x16xf32> to vector<100x8x16xf32>
 }
 
 // -----
 
 func.func @extract_strided_slice(%arg0: vector<4x8x16xf32>) {
   // expected-error at +1 {{op expected strides to be confined to [1, 2)}}
-  %1 = vector.extract_strided_slice %arg0 {offsets = [2], sizes = [1], strides = [100]} : vector<4x8x16xf32> to vector<1x8x16xf32>
+  %1 = vector.extract_strided_slice %arg0[2:1:100] : vector<4x8x16xf32> to vector<1x8x16xf32>
 }
 
 // -----
 
 func.func @extract_strided_slice_scalable(%arg0 : vector<1x4x[4]xi32>) -> vector<1x1x[2]xi32> {
     // expected-error at +1 {{op expected size at idx=2 to match the corresponding base size from the input vector (2 vs 4)}}
-    %1 = vector.extract_strided_slice %arg0 {offsets = [0, 3, 0], sizes = [1, 1, 2], strides = [1, 1, 1]} : vector<1x4x[4]xi32> to vector<1x1x[2]xi32>
+    %1 = vector.extract_strided_slice %arg0[0:1:1][3:1:1][0:2:1] : vector<1x4x[4]xi32> to vector<1x1x[2]xi32>
     return %1 : vector<1x1x[2]xi32>
   }
 
@@ -720,21 +720,21 @@ func.func @extract_strided_slice_scalable(%arg0 : vector<1x4x[4]xi32>) -> vector
 
 func.func @extract_strided_slice(%arg0: vector<4x8x16xf32>) {
   // expected-error at +1 {{op expected strides to be confined to [1, 2)}}
-  %1 = vector.extract_strided_slice %arg0 {offsets = [2], sizes = [1], strides = [100]} : vector<4x8x16xf32> to vector<1x8x16xf32>
+  %1 = vector.extract_strided_slice %arg0[2:1:100] : vector<4x8x16xf32> to vector<1x8x16xf32>
 }
 
 // -----
 
 func.func @extract_strided_slice(%arg0: vector<4x8x16xf32>) {
   // expected-error at +1 {{op expected sum(offsets, sizes) dimension 0 to be confined to [1, 5)}}
-  %1 = vector.extract_strided_slice %arg0 {offsets = [2], sizes = [3], strides = [1]} : vector<4x8x16xf32> to vector<3x8x16xf32>
+  %1 = vector.extract_strided_slice %arg0[2:3:1] : vector<4x8x16xf32> to vector<3x8x16xf32>
 }
 
 // -----
 
 func.func @extract_strided_slice(%arg0: vector<4x8x16xf32>) {
   // expected-error at +1 {{op expected result type to be 'vector<2x8x16xf32>'}}
-  %1 = vector.extract_strided_slice %arg0 {offsets = [2], sizes = [2], strides = [1]} : vector<4x8x16xf32> to vector<3x1xf32>
+  %1 = vector.extract_strided_slice %arg0[2:2:1] : vector<4x8x16xf32> to vector<3x1xf32>
 }
 
 // -----
diff --git a/mlir/test/Dialect/Vector/linearize.mlir b/mlir/test/Dialect/Vector/linearize.mlir
index 916e3e5fd2529..59b7b7b58adfb 100644
--- a/mlir/test/Dialect/Vector/linearize.mlir
+++ b/mlir/test/Dialect/Vector/linearize.mlir
@@ -170,7 +170,7 @@ func.func @test_extract_strided_slice_1(%arg0 : vector<4x8xf32>) -> vector<2x2xf
   // BW-128: %[[RES:.*]] = vector.shape_cast %[[SHUFFLE]] : vector<4xf32> to vector<2x2xf32>
   // BW-128: return %[[RES]] : vector<2x2xf32>
 
-  // BW-0: %[[RES:.*]] = vector.extract_strided_slice %[[ARG:.*]] {offsets = [0, 4], sizes = [2, 2], strides = [1, 1]} : vector<4x8xf32> to vector<2x2xf32>
+  // BW-0: %[[RES:.*]] = vector.extract_strided_slice %[[ARG:.*]][0:2:1][4:2:1] : vector<4x8xf32> to vector<2x2xf32>
   // BW-0: return %[[RES]] : vector<2x2xf32>
   %0 = vector.extract_strided_slice %arg0 { sizes = [2, 2], strides = [1, 1], offsets = [0, 4]}
      : vector<4x8xf32> to vector<2x2xf32>
@@ -182,7 +182,7 @@ func.func @test_extract_strided_slice_1(%arg0 : vector<4x8xf32>) -> vector<2x2xf
 func.func @test_extract_strided_slice_1_scalable(%arg0: vector<4x[8]xf32>) -> vector<2x[8]xf32> {  
   // ALL-NOT: vector.shuffle
   // ALL-NOT: vector.shape_cast
-  // ALL: %[[RES:.*]] = vector.extract_strided_slice %[[VAL_0]] {offsets = [1, 0], sizes = [2, 8], strides = [1, 1]} : vector<4x[8]xf32> to vector<2x[8]xf32>
+  // ALL: %[[RES:.*]] = vector.extract_strided_slice %[[VAL_0]][1:2:1][0:8:1] : vector<4x[8]xf32> to vector<2x[8]xf32>
   %0 = vector.extract_strided_slice %arg0 { sizes = [2, 8], strides = [1, 1], offsets = [1, 0] } : vector<4x[8]xf32> to vector<2x[8]xf32>
   // ALL: return %[[RES]] : vector<2x[8]xf32>
   return %0 : vector<2x[8]xf32>
@@ -204,7 +204,7 @@ func.func @test_extract_strided_slice_2(%arg0 : vector<2x8x2xf32>) -> vector<1x4
   // BW-128: %[[RES:.*]] = vector.shape_cast %[[SHUFFLE]] : vector<8xf32> to vector<1x4x2xf32>
   // BW-128: return %[[RES]] : vector<1x4x2xf32>
 
-  // BW-0: %[[RES:.*]] = vector.extract_strided_slice %[[ORIG_ARG]] {offsets = [1, 2], sizes = [1, 4], strides = [1, 1]} : vector<2x8x2xf32> to vector<1x4x2xf32>
+  // BW-0: %[[RES:.*]] = vector.extract_strided_slice %[[ORIG_ARG]][1:1:1][2:4:1] : vector<2x8x2xf32> to vector<1x4x2xf32>
   // BW-0: return %[[RES]] : vector<1x4x2xf32>
   %0 = vector.extract_strided_slice %arg0 { offsets = [1, 2], strides = [1, 1], sizes = [1, 4] }
     : vector<2x8x2xf32> to vector<1x4x2xf32>
diff --git a/mlir/test/Dialect/Vector/ops.mlir b/mlir/test/Dialect/Vector/ops.mlir
index 7e578452b82cc..179e90477cd53 100644
--- a/mlir/test/Dialect/Vector/ops.mlir
+++ b/mlir/test/Dialect/Vector/ops.mlir
@@ -314,29 +314,29 @@ func.func @outerproduct_scalable(%arg0 : vector<[4]xf32>, %arg1 : vector<[8]xf32
 
 // CHECK-LABEL: @insert_strided_slice
 func.func @insert_strided_slice(%a: vector<4x4xf32>, %b: vector<4x8x16xf32>) {
-  // CHECK: vector.insert_strided_slice %{{.*}}, %{{.*}} {offsets = [2, 2, 2], strides = [1, 1]} : vector<4x4xf32> into vector<4x8x16xf32>
-  %1 = vector.insert_strided_slice %a, %b {offsets = [2, 2, 2], strides = [1, 1]} : vector<4x4xf32> into vector<4x8x16xf32>
+  // CHECK: vector.insert_strided_slice %{{.*}}, %{{.*}}[2][2:1][2:1] : vector<4x4xf32> into vector<4x8x16xf32>
+  %1 = vector.insert_strided_slice %a, %b[2][2:1][2:1] : vector<4x4xf32> into vector<4x8x16xf32>
   return
 }
 
 // CHECK-LABEL: @insert_strided_slice_scalable
 func.func @insert_strided_slice_scalable(%a: vector<4x[16]xf32>, %b: vector<4x8x[16]xf32>) {
-  // CHECK: vector.insert_strided_slice %{{.*}}, %{{.*}} {offsets = [2, 2, 0], strides = [1, 1]} : vector<4x[16]xf32> into vector<4x8x[16]xf32>
-  %1 = vector.insert_strided_slice %a, %b {offsets = [2, 2, 0], strides = [1, 1]} : vector<4x[16]xf32> into vector<4x8x[16]xf32>
+  // CHECK: vector.insert_strided_slice %{{.*}}, %{{.*}}[2][2:1][0:1] : vector<4x[16]xf32> into vector<4x8x[16]xf32>
+  %1 = vector.insert_strided_slice %a, %b[2][2:1][0:1] : vector<4x[16]xf32> into vector<4x8x[16]xf32>
   return
 }
 
 // CHECK-LABEL: @extract_strided_slice
 func.func @extract_strided_slice(%arg0: vector<4x8x16xf32>) -> vector<2x2x16xf32> {
-  // CHECK: vector.extract_strided_slice %{{.*}} {offsets = [2, 2], sizes = [2, 2], strides = [1, 1]} : vector<4x8x16xf32>
-  %1 = vector.extract_strided_slice %arg0 {offsets = [2, 2], sizes = [2, 2], strides = [1, 1]} : vector<4x8x16xf32> to vector<2x2x16xf32>
+  // CHECK: vector.extract_strided_slice %{{.*}}[2:2:1][2:2:1] : vector<4x8x16xf32>
+  %1 = vector.extract_strided_slice %arg0[2:2:1][2:2:1] : vector<4x8x16xf32> to vector<2x2x16xf32>
   return %1: vector<2x2x16xf32>
 }
 
 // CHECK-LABEL: @extract_strided_slice_scalable
 func.func @extract_strided_slice_scalable(%arg0: vector<4x[8]x16xf32>) -> vector<2x[8]x16xf32> {
-  // CHECK: vector.extract_strided_slice %{{.*}} {offsets = [2, 0], sizes = [2, 8], strides = [1, 1]} : vector<4x[8]x16xf32>
-  %1 = vector.extract_strided_slice %arg0 {offsets = [2, 0], sizes = [2, 8], strides = [1, 1]} : vector<4x[8]x16xf32> to vector<2x[8]x16xf32>
+  // CHECK: vector.extract_strided_slice %{{.*}}[2:2:1][0:8:1] : vector<4x[8]x16xf32>
+  %1 = vector.extract_strided_slice %arg0[2:2:1][0:8:1] : vector<4x[8]x16xf32> to vector<2x[8]x16xf32>
   return %1: vector<2x[8]x16xf32>
 }
 
diff --git a/mlir/test/Dialect/Vector/vector-break-down-bitcast.mlir b/mlir/test/Dialect/Vector/vector-break-down-bitcast.mlir
index fbb2f7605e649..0096568db5241 100644
--- a/mlir/test/Dialect/Vector/vector-break-down-bitcast.mlir
+++ b/mlir/test/Dialect/Vector/vector-break-down-bitcast.mlir
@@ -8,12 +8,12 @@ func.func @bitcast_f16_to_f32(%input: vector<8xf16>) -> vector<4xf32> {
 }
 
 // CHECK: %[[INIT:.+]] = arith.constant dense<0.000000e+00> : vector<4xf32>
-// CHECK: %[[EXTRACT0:.+]] = vector.extract_strided_slice %[[INPUT]] {offsets = [0], sizes = [4], strides = [1]} : vector<8xf16> to vector<4xf16>
+// CHECK: %[[EXTRACT0:.+]] = vector.extract_strided_slice %[[INPUT]][0:4:1] : vector<8xf16> to vector<4xf16>
 // CHECK: %[[CAST0:.+]] = vector.bitcast %[[EXTRACT0]] : vector<4xf16> to vector<2xf32>
-// CHECK: %[[INSERT0:.+]] = vector.insert_strided_slice %[[CAST0]], %[[INIT]] {offsets = [0], strides = [1]} : vector<2xf32> into vector<4xf32>
-// CHECK: %[[EXTRACT1:.+]] = vector.extract_strided_slice %[[INPUT]] {offsets = [4], sizes = [4], strides = [1]} : vector<8xf16> to vector<4xf16>
+// CHECK: %[[INSERT0:.+]] = vector.insert_strided_slice %[[CAST0]], %[[INIT]][0:1] : vector<2xf32> into vector<4xf32>
+// CHECK: %[[EXTRACT1:.+]] = vector.extract_strided_slice %[[INPUT]][4:4:1] : vector<8xf16> to vector<4xf16>
 // CHECK: %[[CAST1:.+]] = vector.bitcast %[[EXTRACT1]] : vector<4xf16> to vector<2xf32>
-// CHECK: %[[INSERT1:.+]] = vector.insert_strided_slice %[[CAST1]], %[[INSERT0]] {offsets = [2], strides = [1]} : vector<2xf32> into vector<4xf32>
+// CHECK: %[[INSERT1:.+]] = vector.insert_strided_slice %[[CAST1]], %[[INSERT0]][2:1] : vector<2xf32> into vector<4xf32>
 // CHECK: return %[[INSERT1]]
 
 // -----
@@ -26,16 +26,16 @@ func.func @bitcast_i8_to_i32(%input: vector<16xi8>) -> vector<4xi32> {
 }
 
 // CHECK: %[[INIT:.+]] = arith.constant dense<0> : vector<4xi32>
-// CHECK: %[[EXTRACT0:.+]] = vector.extract_strided_slice %[[INPUT]] {offsets = [0], sizes = [4], strides = [1]} : vector<16xi8> to vector<4xi8>
+// CHECK: %[[EXTRACT0:.+]] = vector.extract_strided_slice %[[INPUT]][0:4:1] : vector<16xi8> to vector<4xi8>
 // CHECK: %[[CAST0:.+]] = vector.bitcast %[[EXTRACT0]] : vector<4xi8> to vector<1xi32>
-// CHECK: %[[INSERT0:.+]] = vector.insert_strided_slice %[[CAST0]], %[[INIT]] {offsets = [0], strides = [1]} : vector<1xi32> into vector<4xi32>
-// CHECK: %[[EXTRACT1:.+]] = vector.extract_strided_slice %[[INPUT]] {offsets = [4], sizes = [4], strides = [1]} : vector<16xi8> to vector<4xi8>
+// CHECK: %[[INSERT0:.+]] = vector.insert_strided_slice %[[CAST0]], %[[INIT]][0:1] : vector<1xi32> into vector<4xi32>
+// CHECK: %[[EXTRACT1:.+]] = vector.extract_strided_slice %[[INPUT]][4:4:1] : vector<16xi8> to vector<4xi8>
 // CHECK: %[[CAST1:.+]] = vector.bitcast %[[EXTRACT1]] : vector<4xi8> to vector<1xi32>
-// CHECK: %[[INSERT1:.+]] = vector.insert_strided_slice %[[CAST1]], %[[INSERT0]] {offsets = [1], strides = [1]} : vector<1xi32> into vector<4xi32>
-// CHECK: %[[EXTRACT2:.+]] = vector.extract_strided_slice %[[INPUT]] {offsets = [8], sizes = [4], strides = [1]} : vector<16xi8> to vector<4xi8>
+// CHECK: %[[INSERT1:.+]] = vector.insert_strided_slice %[[CAST1]], %[[INSERT0]][1:1] : vector<1xi32> into vector<4xi32>
+// CHECK: %[[EXTRACT2:.+]] = vector.extract_strided_slice %[[INPUT]][8:4:1] : vector<16xi8> to vector<4xi8>
 // CHECK: %[[CAST2:.+]] = vector.bitcast %[[EXTRACT2]] : vector<4xi8> to vector<1xi32>
-// CHECK: %[[INSERT2:.+]] = vector.insert_strided_slice %[[CAST2]], %[[INSERT1]] {offsets = [2], strides = [1]} : vector<1xi32> into vector<4xi32>
-// CHECK: %[[EXTRACT3:.+]] = vector.extract_strided_slice %[[INPUT]] {offsets = [12], sizes = [4], strides = [1]} : vector<16xi8> to vector<4xi8>
+// CHECK: %[[INSERT2:.+]] = vector.insert_strided_slice %[[CAST2]], %[[INSERT1]][2:1] : vector<1xi32> into vector<4xi32>
+// CHECK: %[[EXTRACT3:.+]] = vector.extract_strided_slice %[[INPUT]][12:4:1] : vector<16xi8> to vector<4xi8>
 // CHECK: %[[CAST3:.+]] = vector.bitcast %[[EXTRACT3]] : vector<4xi8> to vector<1xi32>
-// CHECK: %[[INSERT3:.+]] = vector.insert_strided_slice %[[CAST3]], %[[INSERT2]] {offsets = [3], strides = [1]} : vector<1xi32> into vector<4xi32>
+// CHECK: %[[INSERT3:.+]] = vector.insert_strided_slice %[[CAST3]], %[[INSERT2]][3:1] : vector<1xi32> into vector<4xi32>
 // CHECK: return %[[INSERT3]]
diff --git a/mlir/test/Dialect/Vector/vector-contract-to-matrix-intrinsics-transforms.mlir b/mlir/test/Dialect/Vector/vector-contract-to-matrix-intrinsics-transforms.mlir
index 78cf82e1ab6c1..6a4fadc2281f5 100644
--- a/mlir/test/Dialect/Vector/vector-contract-to-matrix-intrinsics-transforms.mlir
+++ b/mlir/test/Dialect/Vector/vector-contract-to-matrix-intrinsics-transforms.mlir
@@ -18,21 +18,21 @@
 //  CHECK-DAG:  %[[vcst_0:.*]] = arith.constant dense<0.000000e+00> : vector<12xf32>
 //  CHECK-DAG:  %[[vcst_1:.*]] = arith.constant dense<0.000000e+00> : vector<2x3xf32>
 //      CHECK:  %[[a0:.*]] = vector.extract %[[A]][0] : vector<4xf32> from vector<2x4xf32>
-//      CHECK:  %[[a1:.*]] = vector.insert_strided_slice %[[a0]], %[[vcst]] {offsets = [0], strides = [1]} : vector<4xf32> into vector<8xf32>
+//      CHECK:  %[[a1:.*]] = vector.insert_strided_slice %[[a0]], %[[vcst]][0:1] : vector<4xf32> into vector<8xf32>
 //      CHECK:  %[[a2:.*]] = vector.extract %[[A]][1] : vector<4xf32> from vector<2x4xf32>
-//      CHECK:  %[[a3:.*]] = vector.insert_strided_slice %[[a2]], %[[a1]] {offsets = [4], strides = [1]} : vector<4xf32> into vector<8xf32>
+//      CHECK:  %[[a3:.*]] = vector.insert_strided_slice %[[a2]], %[[a1]][4:1] : vector<4xf32> into vector<8xf32>
 //      CHECK:  %[[b0:.*]] = vector.extract %[[B]][0] : vector<3xf32> from vector<4x3xf32>
-//      CHECK:  %[[b1:.*]] = vector.insert_strided_slice %[[b0]], %[[vcst_0]] {offsets = [0], strides = [1]} : vector<3xf32> into vector<12xf32>
+//      CHECK:  %[[b1:.*]] = vector.insert_strided_slice %[[b0]], %[[vcst_0]][0:1] : vector<3xf32> into vector<12xf32>
 //      CHECK:  %[[b2:.*]] = vector.extract %[[B]][1] : vector<3xf32> from vector<4x3xf32>
-//      CHECK:  %[[b3:.*]] = vector.insert_strided_slice %[[b2]], %[[b1]] {offsets = [3], strides = [1]} : vector<3xf32> into vector<12xf32>
+//      CHECK:  %[[b3:.*]] = vector.insert_strided_slice %[[b2]], %[[b1]][3:1] : vector<3xf32> into vector<12xf32>
 //      CHECK:  %[[b4:.*]] = vector.extract %[[B]][2] : vector<3xf32> from vector<4x3xf32>
-//      CHECK:  %[[b5:.*]] = vector.insert_strided_slice %[[b4]], %[[b3]] {offsets = [6], strides = [1]} : vector<3xf32> into vector<12xf32>
+//      CHECK:  %[[b5:.*]] = vector.insert_strided_slice %[[b4]], %[[b3]][6:1] : vector<3xf32> into vector<12xf32>
 //      CHECK:  %[[b6:.*]] = vector.extract %[[B]][3] : vector<3xf32> from vector<4x3xf32>
-//      CHECK:  %[[b7:.*]] = vector.insert_strided_slice %[[b6]], %[[b5]] {offsets = [9], strides = [1]} : vector<3xf32> into vector<12xf32>
+//      CHECK:  %[[b7:.*]] = vector.insert_strided_slice %[[b6]], %[[b5]][9:1] : vector<3xf32> into vector<12xf32>
 //      CHECK:  %[[mm1:.*]] = vector.matrix_multiply %[[a3]], %[[b7]] {lhs_columns = 4 : i32, lhs_rows = 2 : i32, rhs_columns = 3 : i32} : (vector<8xf32>, vector<12xf32>) -> vector<6xf32>
-//      CHECK:  %[[mm2:.*]] = vector.extract_strided_slice %[[mm1]] {offsets = [0], sizes = [3], strides = [1]} : vector<6xf32> to vector<3xf32>
+//      CHECK:  %[[mm2:.*]] = vector.extract_strided_slice %[[mm1]][0:3:1] : vector<6xf32> to vector<3xf32>
 //      CHECK:  %[[mm3:.*]] = vector.insert %[[mm2]], %[[vcst_1]] [0] : vector<3xf32> into vector<2x3xf32>
-//      CHECK:  %[[mm4:.*]] = vector.extract_strided_slice %[[mm1]] {offsets = [3], sizes = [3], strides = [1]} : vector<6xf32> to vector<3xf32>
+//      CHECK:  %[[mm4:.*]] = vector.extract_strided_slice %[[mm1]][3:3:1] : vector<6xf32> to vector<3xf32>
 //      CHECK:  %[[mm5:.*]] = vector.insert %[[mm4]], %[[mm3]] [1] : vector<3xf32> into vector<2x3xf32>
 //      CHECK:  %[[mm6:.*]] = arith.addf %[[C]], %[[mm5]] : vector<2x3xf32>
 func.func @matmul(%arg0: vector<2x4xf32>,
diff --git a/mlir/test/Dialect/Vector/vector-dropleadunitdim-transforms.mlir b/mlir/test/Dialect/Vector/vector-dropleadunitdim-transforms.mlir
index 9526d610e490e..ec57f86999205 100644
--- a/mlir/test/Dialect/Vector/vector-dropleadunitdim-transforms.mlir
+++ b/mlir/test/Dialect/Vector/vector-dropleadunitdim-transforms.mlir
@@ -254,8 +254,8 @@ func.func @cast_away_contraction_does_not_transpose_leading_unit_dims(%lhs: vect
 // CHECK-LABEL: func @cast_away_extract_strided_slice_leading_one_dims
 func.func @cast_away_extract_strided_slice_leading_one_dims(%arg0: vector<1x8x8xf16>) -> vector<1x1x8xf16> {
   // CHECK:     %[[SRC:.+]] = vector.extract %{{.*}}[0] : vector<8x8xf16> from vector<1x8x8xf16>
-  // CHECK: %[[EXTRACT:.+]] = vector.extract_strided_slice %[[SRC]] {offsets = [4], sizes = [1], strides = [1]} : vector<8x8xf16> to vector<1x8xf16>
-  %0 = vector.extract_strided_slice %arg0 {offsets = [0, 4], sizes = [1, 1], strides = [1, 1]} : vector<1x8x8xf16> to vector<1x1x8xf16>
+  // CHECK: %[[EXTRACT:.+]] = vector.extract_strided_slice %[[SRC]][4:1:1] : vector<8x8xf16> to vector<1x8xf16>
+  %0 = vector.extract_strided_slice %arg0[0:1:1][4:1:1] : vector<1x8x8xf16> to vector<1x1x8xf16>
   // CHECK:     %[[RET:.+]] = vector.broadcast %[[EXTRACT]] : vector<1x8xf16> to vector<1x1x8xf16>
   // CHECK: return %[[RET]]
   return %0: vector<1x1x8xf16>
@@ -264,8 +264,8 @@ func.func @cast_away_extract_strided_slice_leading_one_dims(%arg0: vector<1x8x8x
 // CHECK-LABEL: func @cast_away_extract_strided_slice_leading_one_dims_scalable
 func.func @cast_away_extract_strided_slice_leading_one_dims_scalable(%arg0: vector<1x8x[8]xf16>) -> vector<1x1x[8]xf16> {
   // CHECK:     %[[SRC:.+]] = vector.extract %{{.*}}[0] : vector<8x[8]xf16> from vector<1x8x[8]xf16>
-  // CHECK: %[[EXTRACT:.+]] = vector.extract_strided_slice %[[SRC]] {offsets = [4], sizes = [1], strides = [1]} : vector<8x[8]xf16> to vector<1x[8]xf16>
-  %0 = vector.extract_strided_slice %arg0 {offsets = [0, 4], sizes = [1, 1], strides = [1, 1]} : vector<1x8x[8]xf16> to vector<1x1x[8]xf16>
+  // CHECK: %[[EXTRACT:.+]] = vector.extract_strided_slice %[[SRC]][4:1:1] : vector<8x[8]xf16> to vector<1x[8]xf16>
+  %0 = vector.extract_strided_slice %arg0[0:1:1][4:1:1] : vector<1x8x[8]xf16> to vector<1x1x[8]xf16>
   // CHECK:     %[[RET:.+]] = vector.broadcast %[[EXTRACT]] : vector<1x[8]xf16> to vector<1x1x[8]xf16>
   // CHECK: return %[[RET]]
   return %0: vector<1x1x[8]xf16>
@@ -275,8 +275,8 @@ func.func @cast_away_extract_strided_slice_leading_one_dims_scalable(%arg0: vect
 func.func @cast_away_insert_strided_slice_leading_one_dims(%arg0: vector<1x8xf16>, %arg1: vector<1x8x8xf16>) -> vector<1x8x8xf16> {
   // CHECK:    %[[SRC:.+]] = vector.extract %{{.*}}[0] : vector<8xf16> from vector<1x8xf16>
   // CHECK:    %[[DST:.+]] = vector.extract %{{.*}}[0] : vector<8x8xf16> from vector<1x8x8xf16>
-  // CHECK: %[[INSERT:.+]] = vector.insert_strided_slice %[[SRC]], %[[DST]] {offsets = [0, 0], strides = [1]} : vector<8xf16> into vector<8x8xf16>
-  %0 = vector.insert_strided_slice %arg0, %arg1 {offsets = [0, 0, 0], strides = [1, 1]} : vector<1x8xf16> into vector<1x8x8xf16>
+  // CHECK: %[[INSERT:.+]] = vector.insert_strided_slice %[[SRC]], %[[DST]][0][0:1] : vector<8xf16> into vector<8x8xf16>
+  %0 = vector.insert_strided_slice %arg0, %arg1[0][0:1][0:1] : vector<1x8xf16> into vector<1x8x8xf16>
   // CHECK:    %[[RET:.+]] = vector.broadcast %[[INSERT]] : vector<8x8xf16> to vector<1x8x8xf16>
   // CHECK: return %[[RET]]
   return %0: vector<1x8x8xf16>
@@ -286,8 +286,8 @@ func.func @cast_away_insert_strided_slice_leading_one_dims(%arg0: vector<1x8xf16
 func.func @cast_away_insert_strided_slice_leading_one_dims_scalable(%arg0: vector<1x[8]xf16>, %arg1: vector<1x8x[8]xf16>) -> vector<1x8x[8]xf16> {
   // CHECK:    %[[SRC:.+]] = vector.extract %{{.*}}[0] : vector<[8]xf16> from vector<1x[8]xf16>
   // CHECK:    %[[DST:.+]] = vector.extract %{{.*}}[0] : vector<8x[8]xf16> from vector<1x8x[8]xf16>
-  // CHECK: %[[INSERT:.+]] = vector.insert_strided_slice %[[SRC]], %[[DST]] {offsets = [0, 0], strides = [1]} : vector<[8]xf16> into vector<8x[8]xf16>
-  %0 = vector.insert_strided_slice %arg0, %arg1 {offsets = [0, 0, 0], strides = [1, 1]} : vector<1x[8]xf16> into vector<1x8x[8]xf16>
+  // CHECK: %[[INSERT:.+]] = vector.insert_strided_slice %[[SRC]], %[[DST]][0][0:1] : vector<[8]xf16> into vector<8x[8]xf16>
+  %0 = vector.insert_strided_slice %arg0, %arg1[0][0:1][0:1] : vector<1x[8]xf16> into vector<1x8x[8]xf16>
   // CHECK:    %[[RET:.+]] = vector.broadcast %[[INSERT]] : vector<8x[8]xf16> to vector<1x8x[8]xf16>
   // CHECK: return %[[RET]]
   return %0: vector<1x8x[8]xf16>
@@ -298,7 +298,7 @@ func.func @cast_away_insert_strided_slice_leading_one_dims_scalable(%arg0: vecto
 func.func @cast_away_insert_strided_slice_leading_one_dims_one_element(%arg0: vector<1x1xf16>, %arg1: vector<1x1x1xf16>) -> vector<1x1x1xf16> {
   // CHECK: %[[EXT:.+]] = vector.extract %{{.*}}[0] : vector<1xf16> from vector<1x1xf16>
   // CHECK: %[[B:.+]] = vector.broadcast %[[EXT]] : vector<1xf16> to vector<1x1x1xf16>
-  %0 = vector.insert_strided_slice %arg0, %arg1 {offsets = [0, 0, 0], strides = [1, 1]} : vector<1x1xf16> into vector<1x1x1xf16>
+  %0 = vector.insert_strided_slice %arg0, %arg1[0][0:1][0:1] : vector<1x1xf16> into vector<1x1x1xf16>
   // CHECK: return %[[B]]
   return %0: vector<1x1x1xf16>
 }
@@ -308,7 +308,7 @@ func.func @cast_away_insert_strided_slice_leading_one_dims_one_element(%arg0: ve
 func.func @cast_away_insert_strided_slice_leading_one_dims_one_element_scalable(%arg0: vector<1x[1]xf16>, %arg1: vector<1x1x[1]xf16>) -> vector<1x1x[1]xf16> {
   // CHECK: %[[EXT:.+]] = vector.extract %{{.*}}[0] : vector<[1]xf16> from vector<1x[1]xf16>
   // CHECK: %[[B:.+]] = vector.broadcast %[[EXT]] : vector<[1]xf16> to vector<1x1x[1]xf16>
-  %0 = vector.insert_strided_slice %arg0, %arg1 {offsets = [0, 0, 0], strides = [1, 1]} : vector<1x[1]xf16> into vector<1x1x[1]xf16>
+  %0 = vector.insert_strided_slice %arg0, %arg1[0][0:1][0:1] : vector<1x[1]xf16> into vector<1x1x[1]xf16>
   // CHECK: return %[[B]]
   return %0: vector<1x1x[1]xf16>
 }
diff --git a/mlir/test/Dialect/Vector/vector-extract-strided-slice-lowering.mlir b/mlir/test/Dialect/Vector/vector-extract-strided-slice-lowering.mlir
index d840b204e5288..ee86e3f000267 100644
--- a/mlir/test/Dialect/Vector/vector-extract-strided-slice-lowering.mlir
+++ b/mlir/test/Dialect/Vector/vector-extract-strided-slice-lowering.mlir
@@ -3,7 +3,7 @@
 // CHECK-LABEL: func.func @extract_strided_slice_1D
 //  CHECK-SAME: (%[[INPUT:.+]]: vector<8xf16>)
 func.func @extract_strided_slice_1D(%input: vector<8xf16>) -> vector<4xf16> {
-  %0 = vector.extract_strided_slice %input {offsets = [1], sizes = [4], strides = [1]} : vector<8xf16> to vector<4xf16>
+  %0 = vector.extract_strided_slice %input[1:4:1] : vector<8xf16> to vector<4xf16>
   return %0: vector<4xf16>
 }
 
@@ -24,6 +24,6 @@ func.func @extract_strided_slice_1D(%input: vector<8xf16>) -> vector<4xf16> {
 // CHECK-LABEL: func.func @extract_strided_slice_2D
 func.func @extract_strided_slice_2D(%input: vector<1x8xf16>) -> vector<1x4xf16> {
   // CHECK: vector.extract_strided_slice
-  %0 = vector.extract_strided_slice %input {offsets = [0, 1], sizes = [1, 4], strides = [1, 1]} : vector<1x8xf16> to vector<1x4xf16>
+  %0 = vector.extract_strided_slice %input[0:1:1][1:4:1] : vector<1x8xf16> to vector<1x4xf16>
   return %0: vector<1x4xf16>
 }
diff --git a/mlir/test/Dialect/Vector/vector-scan-transforms.mlir b/mlir/test/Dialect/Vector/vector-scan-transforms.mlir
index 1d8f440e0fb03..68c23ac0627d6 100644
--- a/mlir/test/Dialect/Vector/vector-scan-transforms.mlir
+++ b/mlir/test/Dialect/Vector/vector-scan-transforms.mlir
@@ -4,11 +4,11 @@
 // CHECK-SAME: %[[ARG0:.*]]: vector<2xi32>,
 // CHECK-SAME: %[[ARG1:.*]]: vector<i32>
 // CHECK:      %[[A:.*]] = arith.constant dense<0> : vector<2xi32>
-// CHECK:      %[[B:.*]] = vector.extract_strided_slice %[[ARG0]] {offsets = [0], sizes = [1], strides = [1]} : vector<2xi32> to vector<1xi32>
-// CHECK:      %[[C:.*]] = vector.insert_strided_slice %[[B]], %[[A]] {offsets = [0], strides = [1]} : vector<1xi32> into vector<2xi32>
-// CHECK:      %[[D:.*]] = vector.extract_strided_slice %[[ARG0]] {offsets = [1], sizes = [1], strides = [1]} : vector<2xi32> to vector<1xi32>
+// CHECK:      %[[B:.*]] = vector.extract_strided_slice %[[ARG0]][0:1:1] : vector<2xi32> to vector<1xi32>
+// CHECK:      %[[C:.*]] = vector.insert_strided_slice %[[B]], %[[A]][0:1] : vector<1xi32> into vector<2xi32>
+// CHECK:      %[[D:.*]] = vector.extract_strided_slice %[[ARG0]][1:1:1] : vector<2xi32> to vector<1xi32>
 // CHECK:      %[[E:.*]] = arith.addi %[[B]], %[[D]] : vector<1xi32>
-// CHECK:      %[[F:.*]] = vector.insert_strided_slice %[[E]], %[[C]] {offsets = [1], strides = [1]} : vector<1xi32> into vector<2xi32>
+// CHECK:      %[[F:.*]] = vector.insert_strided_slice %[[E]], %[[C]][1:1] : vector<1xi32> into vector<2xi32>
 // CHECK:      %[[G:.*]] = vector.extract %[[E]][0] : i32 from vector<1xi32>
 // CHECK:      %[[H:.*]] = vector.broadcast %[[G]] : i32 to vector<i32>
 // CHECK:      return %[[F]], %[[H]] : vector<2xi32>, vector<i32>
@@ -22,11 +22,11 @@ func.func @scan1d_inc(%arg0 : vector<2xi32>, %arg1 : vector<i32>) -> (vector<2xi
 // CHECK-SAME: %[[ARG0:.*]]: vector<2xi32>,
 // CHECK-SAME: %[[ARG1:.*]]: vector<i32>
 // CHECK:      %[[A:.*]] = arith.constant dense<0> : vector<2xi32>
-// CHECK:      %[[B:.*]] = vector.extract_strided_slice %[[ARG0]] {offsets = [0], sizes = [1], strides = [1]} : vector<2xi32> to vector<1xi32>
+// CHECK:      %[[B:.*]] = vector.extract_strided_slice %[[ARG0]][0:1:1] : vector<2xi32> to vector<1xi32>
 // CHECK:      %[[C:.*]] = vector.broadcast %[[ARG1]] : vector<i32> to vector<1xi32>
-// CHECK:      %[[D:.*]] = vector.insert_strided_slice %[[C]], %[[A]] {offsets = [0], strides = [1]} : vector<1xi32> into vector<2xi32>
+// CHECK:      %[[D:.*]] = vector.insert_strided_slice %[[C]], %[[A]][0:1] : vector<1xi32> into vector<2xi32>
 // CHECK:      %[[E:.*]] = arith.addi %[[C]], %[[B]] : vector<1xi32>
-// CHECK:      %[[F:.*]] = vector.insert_strided_slice %[[E]], %[[D]] {offsets = [1], strides = [1]} : vector<1xi32> into vector<2xi32>
+// CHECK:      %[[F:.*]] = vector.insert_strided_slice %[[E]], %[[D]][1:1] : vector<1xi32> into vector<2xi32>
 // CHECK:      %[[G:.*]] = vector.extract %[[E]][0] : i32 from vector<1xi32>
 // CHECK:      %[[H:.*]] = vector.broadcast %[[G]] : i32 to vector<i32>
 // CHECK:      return %[[F]], %[[H]] : vector<2xi32>, vector<i32>
@@ -40,11 +40,11 @@ func.func @scan1d_exc(%arg0 : vector<2xi32>, %arg1 : vector<i32>) -> (vector<2xi
 // CHECK-SAME: %[[ARG0:.*]]: vector<2x3xi32>,
 // CHECK-SAME: %[[ARG1:.*]]: vector<3xi32>
 // CHECK:      %[[A:.*]] = arith.constant dense<0> : vector<2x3xi32>
-// CHECK:      %[[B:.*]] = vector.extract_strided_slice %[[ARG0]] {offsets = [0, 0], sizes = [1, 3], strides = [1, 1]} : vector<2x3xi32> to vector<1x3xi32>
-// CHECK:      %[[C:.*]] = vector.insert_strided_slice %[[B]], %[[A]] {offsets = [0, 0], strides = [1, 1]} : vector<1x3xi32> into vector<2x3xi32>
-// CHECK:      %[[D:.*]] = vector.extract_strided_slice %[[ARG0]] {offsets = [1, 0], sizes = [1, 3], strides = [1, 1]} : vector<2x3xi32> to vector<1x3xi32>
+// CHECK:      %[[B:.*]] = vector.extract_strided_slice %[[ARG0]][0:1:1][0:3:1] : vector<2x3xi32> to vector<1x3xi32>
+// CHECK:      %[[C:.*]] = vector.insert_strided_slice %[[B]], %[[A]][0:1][0:1] : vector<1x3xi32> into vector<2x3xi32>
+// CHECK:      %[[D:.*]] = vector.extract_strided_slice %[[ARG0]][1:1:1][0:3:1] : vector<2x3xi32> to vector<1x3xi32>
 // CHECK:      %[[E:.*]] = arith.muli %[[B]], %[[D]] : vector<1x3xi32>
-// CHECK:      %[[F:.*]] = vector.insert_strided_slice %[[E]], %[[C]] {offsets = [1, 0], strides = [1, 1]} : vector<1x3xi32> into vector<2x3xi32>
+// CHECK:      %[[F:.*]] = vector.insert_strided_slice %[[E]], %[[C]][1:1][0:1] : vector<1x3xi32> into vector<2x3xi32>
 // CHECK:      %[[G:.*]] = vector.shape_cast %[[E]] : vector<1x3xi32> to vector<3xi32>
 // CHECK:      return %[[F]], %[[G]] : vector<2x3xi32>, vector<3xi32>
 func.func @scan2d_mul_dim0(%arg0 : vector<2x3xi32>, %arg1 : vector<3xi32>) -> (vector<2x3xi32>, vector<3xi32>) {
@@ -57,14 +57,14 @@ func.func @scan2d_mul_dim0(%arg0 : vector<2x3xi32>, %arg1 : vector<3xi32>) -> (v
 // CHECK-SAME: %[[ARG0:.*]]: vector<2x3xi32>,
 // CHECK-SAME: %[[ARG1:.*]]: vector<2xi32>
 // CHECK:      %[[A:.*]] = arith.constant dense<0> : vector<2x3xi32>
-// CHECK:      %[[B:.*]] = vector.extract_strided_slice %[[ARG0]] {offsets = [0, 0], sizes = [2, 1], strides = [1, 1]} : vector<2x3xi32> to vector<2x1xi32>
-// CHECK:      %[[C:.*]] = vector.insert_strided_slice %[[B]], %[[A]] {offsets = [0, 0], strides = [1, 1]} : vector<2x1xi32> into vector<2x3xi32>
-// CHECK:      %[[D:.*]] = vector.extract_strided_slice %[[ARG0]] {offsets = [0, 1], sizes = [2, 1], strides = [1, 1]} : vector<2x3xi32> to vector<2x1xi32>
+// CHECK:      %[[B:.*]] = vector.extract_strided_slice %[[ARG0]][0:2:1][0:1:1] : vector<2x3xi32> to vector<2x1xi32>
+// CHECK:      %[[C:.*]] = vector.insert_strided_slice %[[B]], %[[A]][0:1][0:1] : vector<2x1xi32> into vector<2x3xi32>
+// CHECK:      %[[D:.*]] = vector.extract_strided_slice %[[ARG0]][0:2:1][1:1:1] : vector<2x3xi32> to vector<2x1xi32>
 // CHECK:      %[[E:.*]] = arith.muli %[[B]], %[[D]] : vector<2x1xi32>
-// CHECK:      %[[F:.*]] = vector.insert_strided_slice %[[E]], %[[C]] {offsets = [0, 1], strides = [1, 1]} : vector<2x1xi32> into vector<2x3xi32>
-// CHECK:      %[[G:.*]] = vector.extract_strided_slice %[[ARG0]] {offsets = [0, 2], sizes = [2, 1], strides = [1, 1]} : vector<2x3xi32> to vector<2x1xi32>
+// CHECK:      %[[F:.*]] = vector.insert_strided_slice %[[E]], %[[C]][0:1][1:1] : vector<2x1xi32> into vector<2x3xi32>
+// CHECK:      %[[G:.*]] = vector.extract_strided_slice %[[ARG0]][0:2:1][2:1:1] : vector<2x3xi32> to vector<2x1xi32>
 // CHECK:      %[[H:.*]] = arith.muli %[[E]], %[[G]] : vector<2x1xi32>
-// CHECK:      %[[I:.*]] = vector.insert_strided_slice %[[H]], %[[F]] {offsets = [0, 2], strides = [1, 1]} : vector<2x1xi32> into vector<2x3xi32>
+// CHECK:      %[[I:.*]] = vector.insert_strided_slice %[[H]], %[[F]][0:1][2:1] : vector<2x1xi32> into vector<2x3xi32>
 // CHECK:      %[[J:.*]] = vector.shape_cast %[[H]] : vector<2x1xi32> to vector<2xi32>
 // CHECK:      return %[[I]], %[[J]] : vector<2x3xi32>, vector<2xi32>
 func.func @scan2d_mul_dim1(%arg0 : vector<2x3xi32>, %arg1 : vector<2xi32>) -> (vector<2x3xi32>, vector<2xi32>) {
@@ -77,11 +77,11 @@ func.func @scan2d_mul_dim1(%arg0 : vector<2x3xi32>, %arg1 : vector<2xi32>) -> (v
 // CHECK-SAME: %[[ARG0:.*]]: vector<4x2x3xf32>,
 // CHECK-SAME: %[[ARG1:.*]]: vector<4x3xf32>
 // CHECK:      %[[A:.*]] = arith.constant dense<0.000000e+00> : vector<4x2x3xf32>
-// CHECK:      %[[B:.*]] = vector.extract_strided_slice %[[ARG0]] {offsets = [0, 0, 0], sizes = [4, 1, 3], strides = [1, 1, 1]} : vector<4x2x3xf32> to vector<4x1x3xf32>
+// CHECK:      %[[B:.*]] = vector.extract_strided_slice %[[ARG0]][0:4:1][0:1:1][0:3:1] : vector<4x2x3xf32> to vector<4x1x3xf32>
 // CHECK:      %[[C:.*]] = vector.shape_cast %[[ARG1]] : vector<4x3xf32> to vector<4x1x3xf32>
-// CHECK:      %[[D:.*]] = vector.insert_strided_slice %[[C]], %[[A]] {offsets = [0, 0, 0], strides = [1, 1, 1]} : vector<4x1x3xf32> into vector<4x2x3xf32>
+// CHECK:      %[[D:.*]] = vector.insert_strided_slice %[[C]], %[[A]][0:1][0:1][0:1] : vector<4x1x3xf32> into vector<4x2x3xf32>
 // CHECK:      %[[E:.*]] = arith.mulf %[[C]], %[[B]] : vector<4x1x3xf32>
-// CHECK:      %[[F:.*]] = vector.insert_strided_slice %[[E]], %[[D]] {offsets = [0, 1, 0], strides = [1, 1, 1]} : vector<4x1x3xf32> into vector<4x2x3xf32>
+// CHECK:      %[[F:.*]] = vector.insert_strided_slice %[[E]], %[[D]][0:1][1:1][0:1] : vector<4x1x3xf32> into vector<4x2x3xf32>
 // CHECK:      %[[G:.*]] = vector.shape_cast %[[E]] : vector<4x1x3xf32> to vector<4x3xf32>
 // CHECK:      return %[[F]], %[[G]] : vector<4x2x3xf32>, vector<4x3xf32>
 func.func @scan3d_mul_dim1(%arg0 : vector<4x2x3xf32>, %arg1 : vector<4x3xf32>) -> (vector<4x2x3xf32>, vector<4x3xf32>) {
diff --git a/mlir/test/Dialect/Vector/vector-shape-cast-lowering-transforms.mlir b/mlir/test/Dialect/Vector/vector-shape-cast-lowering-transforms.mlir
index f2f1211fd70ee..741e42fb23049 100644
--- a/mlir/test/Dialect/Vector/vector-shape-cast-lowering-transforms.mlir
+++ b/mlir/test/Dialect/Vector/vector-shape-cast-lowering-transforms.mlir
@@ -27,26 +27,26 @@ func.func @shape_casts(%a: vector<2x2xf32>) -> (vector<4xf32>, vector<2x2xf32>)
   // CHECK: %[[ex0:.*]] = vector.extract %{{.*}}[0] : vector<2xf32> from vector<2x2xf32>
   //
   // CHECK: %[[in0:.*]] = vector.insert_strided_slice %[[ex0]], %[[cst]]
-  // CHECK-SAME: {offsets = [0], strides = [1]} : vector<2xf32> into vector<4xf32>
+  // CHECK-SAME:[0:1] : vector<2xf32> into vector<4xf32>
   //
   // CHECK: %[[ex1:.*]] = vector.extract %{{.*}}[1] : vector<2xf32> from vector<2x2xf32>
   //
   // CHECK: %[[in2:.*]] = vector.insert_strided_slice %[[ex1]], %[[in0]]
-  // CHECK-SAME: {offsets = [2], strides = [1]} : vector<2xf32> into vector<4xf32>
+  // CHECK-SAME:[2:1] : vector<2xf32> into vector<4xf32>
   //
   %0 = vector.shape_cast %a : vector<2x2xf32> to vector<4xf32>
   // CHECK: %[[add:.*]] = arith.addf %[[in2]], %[[in2]] : vector<4xf32>
   %r0 = arith.addf %0, %0: vector<4xf32>
   //
   // CHECK: %[[ss0:.*]] = vector.extract_strided_slice %[[add]]
-  // CHECK-SAME: {offsets = [0], sizes = [2], strides = [1]} :
+  // CHECK-SAME:[0:2:1] :
   // CHECK-SAME: vector<4xf32> to vector<2xf32>
   //
   // CHECK: %[[res0:.*]] = vector.insert %[[ss0]], %[[cst22]] [0] :
   // CHECK-SAME: vector<2xf32> into vector<2x2xf32>
   //
   // CHECK: %[[s2:.*]] = vector.extract_strided_slice %[[add]]
-  // CHECK-SAME: {offsets = [2], sizes = [2], strides = [1]} :
+  // CHECK-SAME:[2:2:1] :
   // CHECK-SAME: vector<4xf32> to vector<2xf32>
   //
   // CHECK: %[[res1:.*]] = vector.insert %[[s2]], %[[res0]] [1] :
diff --git a/mlir/test/Dialect/Vector/vector-transfer-unroll.mlir b/mlir/test/Dialect/Vector/vector-transfer-unroll.mlir
index eb0db736d5da5..55b1c979192aa 100644
--- a/mlir/test/Dialect/Vector/vector-transfer-unroll.mlir
+++ b/mlir/test/Dialect/Vector/vector-transfer-unroll.mlir
@@ -5,26 +5,26 @@
 //       CHECK-DAG:   %[[C2:.*]] = arith.constant 2 : index
 //       CHECK-DAG:   %[[C0:.*]] = arith.constant 0 : index
 //       CHECK:   %[[VTR0:.*]] = vector.transfer_read {{.*}}[%[[C0]], %[[C0]]], %{{.*}} : memref<4x4xf32>, vector<2x2xf32>
-//  CHECK-NEXT:   %[[VEC0:.*]] = vector.insert_strided_slice %[[VTR0]], %{{.*}} {offsets = [0, 0], strides = [1, 1]} : vector<2x2xf32> into vector<4x4xf32>
+//  CHECK-NEXT:   %[[VEC0:.*]] = vector.insert_strided_slice %[[VTR0]], %{{.*}}[0:1][0:1] : vector<2x2xf32> into vector<4x4xf32>
 //  CHECK-NEXT:   %[[VTR1:.*]] = vector.transfer_read {{.*}}[%[[C0]], %[[C2]]], %{{.*}} : memref<4x4xf32>, vector<2x2xf32>
-//  CHECK-NEXT:   %[[VEC1:.*]] = vector.insert_strided_slice %[[VTR1]], %[[VEC0]] {offsets = [0, 2], strides = [1, 1]} : vector<2x2xf32> into vector<4x4xf32>
+//  CHECK-NEXT:   %[[VEC1:.*]] = vector.insert_strided_slice %[[VTR1]], %[[VEC0]][0:1][2:1] : vector<2x2xf32> into vector<4x4xf32>
 //  CHECK-NEXT:   %[[VTR2:.*]] = vector.transfer_read {{.*}}[%[[C2]], %[[C0]]], %{{.*}} : memref<4x4xf32>, vector<2x2xf32>
-//  CHECK-NEXT:   %[[VEC2:.*]] = vector.insert_strided_slice %[[VTR2]], %[[VEC1]] {offsets = [2, 0], strides = [1, 1]} : vector<2x2xf32> into vector<4x4xf32>
+//  CHECK-NEXT:   %[[VEC2:.*]] = vector.insert_strided_slice %[[VTR2]], %[[VEC1]][2:1][0:1] : vector<2x2xf32> into vector<4x4xf32>
 //  CHECK-NEXT:   %[[VTR3:.*]] = vector.transfer_read {{.*}}[%[[C2]], %[[C2]]], %{{.*}} : memref<4x4xf32>, vector<2x2xf32>
-//  CHECK-NEXT:   %[[VEC3:.*]] = vector.insert_strided_slice %[[VTR3]], %[[VEC2]] {offsets = [2, 2], strides = [1, 1]} : vector<2x2xf32> into vector<4x4xf32>
+//  CHECK-NEXT:   %[[VEC3:.*]] = vector.insert_strided_slice %[[VTR3]], %[[VEC2]][2:1][2:1] : vector<2x2xf32> into vector<4x4xf32>
 //  CHECK-NEXT:   return %[[VEC3]] : vector<4x4xf32>
 
 // ORDER-LABEL: func @transfer_read_unroll
 //       ORDER-DAG:   %[[C2:.*]] = arith.constant 2 : index
 //       ORDER-DAG:   %[[C0:.*]] = arith.constant 0 : index
 //       ORDER:   %[[VTR0:.*]] = vector.transfer_read {{.*}}[%[[C0]], %[[C0]]], %{{.*}} : memref<4x4xf32>, vector<2x2xf32>
-//  ORDER-NEXT:   %[[VEC0:.*]] = vector.insert_strided_slice %[[VTR0]], %{{.*}} {offsets = [0, 0], strides = [1, 1]} : vector<2x2xf32> into vector<4x4xf32>
+//  ORDER-NEXT:   %[[VEC0:.*]] = vector.insert_strided_slice %[[VTR0]], %{{.*}}[0:1][0:1] : vector<2x2xf32> into vector<4x4xf32>
 //  ORDER-NEXT:   %[[VTR1:.*]] = vector.transfer_read {{.*}}[%[[C2]], %[[C0]]], %{{.*}} : memref<4x4xf32>, vector<2x2xf32>
-//  ORDER-NEXT:   %[[VEC1:.*]] = vector.insert_strided_slice %[[VTR1]], %[[VEC0]] {offsets = [2, 0], strides = [1, 1]} : vector<2x2xf32> into vector<4x4xf32>
+//  ORDER-NEXT:   %[[VEC1:.*]] = vector.insert_strided_slice %[[VTR1]], %[[VEC0]][2:1][0:1] : vector<2x2xf32> into vector<4x4xf32>
 //  ORDER-NEXT:   %[[VTR2:.*]] = vector.transfer_read {{.*}}[%[[C0]], %[[C2]]], %{{.*}} : memref<4x4xf32>, vector<2x2xf32>
-//  ORDER-NEXT:   %[[VEC2:.*]] = vector.insert_strided_slice %[[VTR2]], %[[VEC1]] {offsets = [0, 2], strides = [1, 1]} : vector<2x2xf32> into vector<4x4xf32>
+//  ORDER-NEXT:   %[[VEC2:.*]] = vector.insert_strided_slice %[[VTR2]], %[[VEC1]][0:1][2:1] : vector<2x2xf32> into vector<4x4xf32>
 //  ORDER-NEXT:   %[[VTR3:.*]] = vector.transfer_read {{.*}}[%[[C2]], %[[C2]]], %{{.*}} : memref<4x4xf32>, vector<2x2xf32>
-//  ORDER-NEXT:   %[[VEC3:.*]] = vector.insert_strided_slice %[[VTR3]], %[[VEC2]] {offsets = [2, 2], strides = [1, 1]} : vector<2x2xf32> into vector<4x4xf32>
+//  ORDER-NEXT:   %[[VEC3:.*]] = vector.insert_strided_slice %[[VTR3]], %[[VEC2]][2:1][2:1] : vector<2x2xf32> into vector<4x4xf32>
 //  ORDER-NEXT:   return %[[VEC3]] : vector<4x4xf32>
 
 func.func @transfer_read_unroll(%arg0 : memref<4x4xf32>) -> vector<4x4xf32> {
@@ -37,26 +37,26 @@ func.func @transfer_read_unroll(%arg0 : memref<4x4xf32>) -> vector<4x4xf32> {
 // CHECK-LABEL: func @transfer_write_unroll
 //       CHECK-DAG:   %[[C2:.*]] = arith.constant 2 : index
 //       CHECK-DAG:   %[[C0:.*]] = arith.constant 0 : index
-//       CHECK:   %[[S0:.*]] = vector.extract_strided_slice %{{.*}} {offsets = [0, 0], sizes = [2, 2], strides = [1, 1]} : vector<4x4xf32> to vector<2x2xf32>
+//       CHECK:   %[[S0:.*]] = vector.extract_strided_slice %{{.*}}[0:2:1][0:2:1] : vector<4x4xf32> to vector<2x2xf32>
 //  CHECK-NEXT:   vector.transfer_write %[[S0]], {{.*}}[%[[C0]], %[[C0]]] {{.*}} : vector<2x2xf32>, memref<4x4xf32>
-//  CHECK-NEXT:   %[[S1:.*]] = vector.extract_strided_slice %{{.*}} {offsets = [0, 2], sizes = [2, 2], strides = [1, 1]} : vector<4x4xf32> to vector<2x2xf32>
+//  CHECK-NEXT:   %[[S1:.*]] = vector.extract_strided_slice %{{.*}}[0:2:1][2:2:1] : vector<4x4xf32> to vector<2x2xf32>
 //  CHECK-NEXT:   vector.transfer_write %[[S1]], {{.*}}[%[[C0]], %[[C2]]] {{.*}} : vector<2x2xf32>, memref<4x4xf32>
-//  CHECK-NEXT:   %[[S2:.*]] = vector.extract_strided_slice %{{.*}} {offsets = [2, 0], sizes = [2, 2], strides = [1, 1]} : vector<4x4xf32> to vector<2x2xf32>
+//  CHECK-NEXT:   %[[S2:.*]] = vector.extract_strided_slice %{{.*}}[2:2:1][0:2:1] : vector<4x4xf32> to vector<2x2xf32>
 //  CHECK-NEXT:   vector.transfer_write %[[S2]], {{.*}}[%[[C2]], %[[C0]]] {{.*}} : vector<2x2xf32>, memref<4x4xf32>
-//  CHECK-NEXT:   %[[S3:.*]] = vector.extract_strided_slice %{{.*}} {offsets = [2, 2], sizes = [2, 2], strides = [1, 1]} : vector<4x4xf32> to vector<2x2xf32>
+//  CHECK-NEXT:   %[[S3:.*]] = vector.extract_strided_slice %{{.*}}[2:2:1][2:2:1] : vector<4x4xf32> to vector<2x2xf32>
 //  CHECK-NEXT:   vector.transfer_write %[[S3]], {{.*}}[%[[C2]], %[[C2]]] {{.*}} : vector<2x2xf32>, memref<4x4xf32>
 //  CHECK-NEXT:   return
 
 // ORDER-LABEL: func @transfer_write_unroll
 //       ORDER-DAG:   %[[C2:.*]] = arith.constant 2 : index
 //       ORDER-DAG:   %[[C0:.*]] = arith.constant 0 : index
-//       ORDER:   %[[S0:.*]] = vector.extract_strided_slice %{{.*}} {offsets = [0, 0], sizes = [2, 2], strides = [1, 1]} : vector<4x4xf32> to vector<2x2xf32>
+//       ORDER:   %[[S0:.*]] = vector.extract_strided_slice %{{.*}}[0:2:1][0:2:1] : vector<4x4xf32> to vector<2x2xf32>
 //  ORDER-NEXT:   vector.transfer_write %[[S0]], {{.*}}[%[[C0]], %[[C0]]] {{.*}} : vector<2x2xf32>, memref<4x4xf32>
-//  ORDER-NEXT:   %[[S1:.*]] = vector.extract_strided_slice %{{.*}} {offsets = [2, 0], sizes = [2, 2], strides = [1, 1]} : vector<4x4xf32> to vector<2x2xf32>
+//  ORDER-NEXT:   %[[S1:.*]] = vector.extract_strided_slice %{{.*}}[2:2:1][0:2:1] : vector<4x4xf32> to vector<2x2xf32>
 //  ORDER-NEXT:   vector.transfer_write %[[S1]], {{.*}}[%[[C2]], %[[C0]]] {{.*}} : vector<2x2xf32>, memref<4x4xf32>
-//  ORDER-NEXT:   %[[S2:.*]] = vector.extract_strided_slice %{{.*}} {offsets = [0, 2], sizes = [2, 2], strides = [1, 1]} : vector<4x4xf32> to vector<2x2xf32>
+//  ORDER-NEXT:   %[[S2:.*]] = vector.extract_strided_slice %{{.*}}[0:2:1][2:2:1] : vector<4x4xf32> to vector<2x2xf32>
 //  ORDER-NEXT:   vector.transfer_write %[[S2]], {{.*}}[%[[C0]], %[[C2]]] {{.*}} : vector<2x2xf32>, memref<4x4xf32>
-//  ORDER-NEXT:   %[[S3:.*]] = vector.extract_strided_slice %{{.*}} {offsets = [2, 2], sizes = [2, 2], strides = [1, 1]} : vector<4x4xf32> to vector<2x2xf32>
+//  ORDER-NEXT:   %[[S3:.*]] = vector.extract_strided_slice %{{.*}}[2:2:1][2:2:1] : vector<4x4xf32> to vector<2x2xf32>
 //  ORDER-NEXT:   vector.transfer_write %[[S3]], {{.*}}[%[[C2]], %[[C2]]] {{.*}} : vector<2x2xf32>, memref<4x4xf32>
 //  ORDER-NEXT:   return
 
@@ -91,13 +91,13 @@ func.func @transfer_readwrite_unroll(%arg0 : memref<4x4xf32>) {
 //       CHECK-DAG:   %[[C2:.*]] = arith.constant 2 : index
 //       CHECK-DAG:   %[[C0:.*]] = arith.constant 0 : index
 //       CHECK:   %[[VTR0:.*]] = vector.transfer_read {{.*}}[%[[C0]], %[[C0]]], %{{.*}} : tensor<4x4xf32>, vector<2x2xf32>
-//  CHECK-NEXT:   %[[VEC0:.*]] = vector.insert_strided_slice %[[VTR0]], %{{.*}} {offsets = [0, 0], strides = [1, 1]} : vector<2x2xf32> into vector<4x4xf32>
+//  CHECK-NEXT:   %[[VEC0:.*]] = vector.insert_strided_slice %[[VTR0]], %{{.*}}[0:1][0:1] : vector<2x2xf32> into vector<4x4xf32>
 //  CHECK-NEXT:   %[[VTR1:.*]] = vector.transfer_read {{.*}}[%[[C0]], %[[C2]]], %{{.*}} : tensor<4x4xf32>, vector<2x2xf32>
-//  CHECK-NEXT:   %[[VEC1:.*]] = vector.insert_strided_slice %[[VTR1]], %[[VEC0]] {offsets = [0, 2], strides = [1, 1]} : vector<2x2xf32> into vector<4x4xf32>
+//  CHECK-NEXT:   %[[VEC1:.*]] = vector.insert_strided_slice %[[VTR1]], %[[VEC0]][0:1][2:1] : vector<2x2xf32> into vector<4x4xf32>
 //  CHECK-NEXT:   %[[VTR2:.*]] = vector.transfer_read {{.*}}[%[[C2]], %[[C0]]], %{{.*}} : tensor<4x4xf32>, vector<2x2xf32>
-//  CHECK-NEXT:   %[[VEC2:.*]] = vector.insert_strided_slice %[[VTR2]], %[[VEC1]] {offsets = [2, 0], strides = [1, 1]} : vector<2x2xf32> into vector<4x4xf32>
+//  CHECK-NEXT:   %[[VEC2:.*]] = vector.insert_strided_slice %[[VTR2]], %[[VEC1]][2:1][0:1] : vector<2x2xf32> into vector<4x4xf32>
 //  CHECK-NEXT:   %[[VTR3:.*]] = vector.transfer_read {{.*}}[%[[C2]], %[[C2]]], %{{.*}} : tensor<4x4xf32>, vector<2x2xf32>
-//  CHECK-NEXT:   %[[VEC3:.*]] = vector.insert_strided_slice %[[VTR3]], %[[VEC2]] {offsets = [2, 2], strides = [1, 1]} : vector<2x2xf32> into vector<4x4xf32>
+//  CHECK-NEXT:   %[[VEC3:.*]] = vector.insert_strided_slice %[[VTR3]], %[[VEC2]][2:1][2:1] : vector<2x2xf32> into vector<4x4xf32>
 //  CHECK-NEXT:   return %[[VEC3]] : vector<4x4xf32>
 
 func.func @transfer_read_unroll_tensor(%arg0 : tensor<4x4xf32>) -> vector<4x4xf32> {
@@ -110,13 +110,13 @@ func.func @transfer_read_unroll_tensor(%arg0 : tensor<4x4xf32>) -> vector<4x4xf3
 // CHECK-LABEL: func @transfer_write_unroll_tensor
 //       CHECK-DAG:   %[[C2:.*]] = arith.constant 2 : index
 //       CHECK-DAG:   %[[C0:.*]] = arith.constant 0 : index
-//       CHECK:   %[[S0:.*]] = vector.extract_strided_slice %{{.*}} {offsets = [0, 0], sizes = [2, 2], strides = [1, 1]} : vector<4x4xf32> to vector<2x2xf32>
+//       CHECK:   %[[S0:.*]] = vector.extract_strided_slice %{{.*}}[0:2:1][0:2:1] : vector<4x4xf32> to vector<2x2xf32>
 //  CHECK-NEXT:   %[[VTW0:.*]] = vector.transfer_write %[[S0]], {{.*}}[%[[C0]], %[[C0]]] {{.*}} : vector<2x2xf32>, tensor<4x4xf32>
-//  CHECK-NEXT:   %[[S1:.*]] = vector.extract_strided_slice %{{.*}} {offsets = [0, 2], sizes = [2, 2], strides = [1, 1]} : vector<4x4xf32> to vector<2x2xf32>
+//  CHECK-NEXT:   %[[S1:.*]] = vector.extract_strided_slice %{{.*}}[0:2:1][2:2:1] : vector<4x4xf32> to vector<2x2xf32>
 //  CHECK-NEXT:   %[[VTW1:.*]] = vector.transfer_write %[[S1]], %[[VTW0]][%[[C0]], %[[C2]]] {{.*}} : vector<2x2xf32>, tensor<4x4xf32>
-//  CHECK-NEXT:   %[[S2:.*]] = vector.extract_strided_slice %{{.*}} {offsets = [2, 0], sizes = [2, 2], strides = [1, 1]} : vector<4x4xf32> to vector<2x2xf32>
+//  CHECK-NEXT:   %[[S2:.*]] = vector.extract_strided_slice %{{.*}}[2:2:1][0:2:1] : vector<4x4xf32> to vector<2x2xf32>
 //  CHECK-NEXT:   %[[VTW2:.*]] = vector.transfer_write %[[S2]], %[[VTW1]][%[[C2]], %[[C0]]] {{.*}} : vector<2x2xf32>, tensor<4x4xf32>
-//  CHECK-NEXT:   %[[S3:.*]] = vector.extract_strided_slice %{{.*}} {offsets = [2, 2], sizes = [2, 2], strides = [1, 1]} : vector<4x4xf32> to vector<2x2xf32>
+//  CHECK-NEXT:   %[[S3:.*]] = vector.extract_strided_slice %{{.*}}[2:2:1][2:2:1] : vector<4x4xf32> to vector<2x2xf32>
 //  CHECK-NEXT:   %[[VTW3:.*]] = vector.transfer_write %[[S3]], %[[VTW2]][%[[C2]], %[[C2]]] {{.*}} : vector<2x2xf32>, tensor<4x4xf32>
 //  CHECK-NEXT:   return %[[VTW3]] : tensor<4x4xf32>
 
@@ -157,17 +157,17 @@ func.func @transfer_readwrite_unroll_tensor(%arg0 : tensor<4x4xf32>, %arg1 : ten
 //       CHECK-DAG:   %[[C2:.*]] = arith.constant 2 : index
 //       CHECK-DAG:   %[[C0:.*]] = arith.constant 0 : index
 //       CHECK:   %[[VTR0:.*]] = vector.transfer_read {{.*}}[%[[C0]], %[[C0]]], %{{.*}} : memref<6x4xf32>, vector<2x2xf32>
-//  CHECK-NEXT:   %[[VEC0:.*]] = vector.insert_strided_slice %[[VTR0]], %{{.*}} {offsets = [0, 0], strides = [1, 1]} : vector<2x2xf32> into vector<4x6xf32>
+//  CHECK-NEXT:   %[[VEC0:.*]] = vector.insert_strided_slice %[[VTR0]], %{{.*}}[0:1][0:1] : vector<2x2xf32> into vector<4x6xf32>
 //  CHECK-NEXT:   %[[VTR1:.*]] = vector.transfer_read {{.*}}[%[[C2]], %[[C0]]], %{{.*}} : memref<6x4xf32>, vector<2x2xf32>
-//  CHECK-NEXT:   %[[VEC1:.*]] = vector.insert_strided_slice %[[VTR1]], %[[VEC0]] {offsets = [0, 2], strides = [1, 1]} : vector<2x2xf32> into vector<4x6xf32>
+//  CHECK-NEXT:   %[[VEC1:.*]] = vector.insert_strided_slice %[[VTR1]], %[[VEC0]][0:1][2:1] : vector<2x2xf32> into vector<4x6xf32>
 //  CHECK-NEXT:   %[[VTR2:.*]] = vector.transfer_read {{.*}}[%[[C4]], %[[C0]]], %{{.*}} : memref<6x4xf32>, vector<2x2xf32>
-//  CHECK-NEXT:   %[[VEC2:.*]] = vector.insert_strided_slice %[[VTR2]], %[[VEC1]] {offsets = [0, 4], strides = [1, 1]} : vector<2x2xf32> into vector<4x6xf32>
+//  CHECK-NEXT:   %[[VEC2:.*]] = vector.insert_strided_slice %[[VTR2]], %[[VEC1]][0:1][4:1] : vector<2x2xf32> into vector<4x6xf32>
 //  CHECK-NEXT:   %[[VTR3:.*]] = vector.transfer_read {{.*}}[%[[C0]], %[[C2]]], %{{.*}} : memref<6x4xf32>, vector<2x2xf32>
-//  CHECK-NEXT:   %[[VEC3:.*]] = vector.insert_strided_slice %[[VTR3]], %[[VEC2]] {offsets = [2, 0], strides = [1, 1]} : vector<2x2xf32> into vector<4x6xf32>
+//  CHECK-NEXT:   %[[VEC3:.*]] = vector.insert_strided_slice %[[VTR3]], %[[VEC2]][2:1][0:1] : vector<2x2xf32> into vector<4x6xf32>
 //  CHECK-NEXT:   %[[VTR4:.*]] = vector.transfer_read {{.*}}[%[[C2]], %[[C2]]], %{{.*}} : memref<6x4xf32>, vector<2x2xf32>
-//  CHECK-NEXT:   %[[VEC4:.*]] = vector.insert_strided_slice %[[VTR4]], %[[VEC3]] {offsets = [2, 2], strides = [1, 1]} : vector<2x2xf32> into vector<4x6xf32>
+//  CHECK-NEXT:   %[[VEC4:.*]] = vector.insert_strided_slice %[[VTR4]], %[[VEC3]][2:1][2:1] : vector<2x2xf32> into vector<4x6xf32>
 //  CHECK-NEXT:   %[[VTR5:.*]] = vector.transfer_read {{.*}}[%[[C4]], %[[C2]]], %{{.*}} : memref<6x4xf32>, vector<2x2xf32>
-//  CHECK-NEXT:   %[[VEC5:.*]] = vector.insert_strided_slice %[[VTR5]], %[[VEC4]] {offsets = [2, 4], strides = [1, 1]} : vector<2x2xf32> into vector<4x6xf32>
+//  CHECK-NEXT:   %[[VEC5:.*]] = vector.insert_strided_slice %[[VTR5]], %[[VEC4]][2:1][4:1] : vector<2x2xf32> into vector<4x6xf32>
 //  CHECK-NEXT:   return %[[VEC5]] : vector<4x6xf32>
 #map0 = affine_map<(d0, d1) -> (d1, d0)>
 func.func @transfer_read_unroll_permutation(%arg0 : memref<6x4xf32>) -> vector<4x6xf32> {
@@ -183,17 +183,17 @@ func.func @transfer_read_unroll_permutation(%arg0 : memref<6x4xf32>) -> vector<4
 //       CHECK-DAG:   %[[C2:.*]] = arith.constant 2 : index
 //       CHECK-DAG:   %[[C0:.*]] = arith.constant 0 : index
 //       CHECK:   %[[VTR0:.*]] = vector.transfer_read {{.*}}[%[[C0]], %[[C0]]], %{{.*}} : memref<6x4xf32>, vector<2x2xf32>
-//  CHECK-NEXT:   %[[VEC0:.*]] = vector.insert_strided_slice %[[VTR0]], %{{.*}} {offsets = [0, 0], strides = [1, 1]} : vector<2x2xf32> into vector<6x4xf32>
+//  CHECK-NEXT:   %[[VEC0:.*]] = vector.insert_strided_slice %[[VTR0]], %{{.*}}[0:1][0:1] : vector<2x2xf32> into vector<6x4xf32>
 //  CHECK-NEXT:   %[[VTR1:.*]] = vector.transfer_read {{.*}}[%[[C0]], %[[C2]]], %{{.*}} : memref<6x4xf32>, vector<2x2xf32>
-//  CHECK-NEXT:   %[[VEC1:.*]] = vector.insert_strided_slice %[[VTR1]], %[[VEC0]] {offsets = [0, 2], strides = [1, 1]} : vector<2x2xf32> into vector<6x4xf32>
+//  CHECK-NEXT:   %[[VEC1:.*]] = vector.insert_strided_slice %[[VTR1]], %[[VEC0]][0:1][2:1] : vector<2x2xf32> into vector<6x4xf32>
 //  CHECK-NEXT:   %[[VTR2:.*]] = vector.transfer_read {{.*}}[%[[C0]], %[[C0]]], %{{.*}} : memref<6x4xf32>, vector<2x2xf32>
-//  CHECK-NEXT:   %[[VEC2:.*]] = vector.insert_strided_slice %[[VTR2]], %[[VEC1]] {offsets = [2, 0], strides = [1, 1]} : vector<2x2xf32> into vector<6x4xf32>
+//  CHECK-NEXT:   %[[VEC2:.*]] = vector.insert_strided_slice %[[VTR2]], %[[VEC1]][2:1][0:1] : vector<2x2xf32> into vector<6x4xf32>
 //  CHECK-NEXT:   %[[VTR3:.*]] = vector.transfer_read {{.*}}[%[[C0]], %[[C2]]], %{{.*}} : memref<6x4xf32>, vector<2x2xf32>
-//  CHECK-NEXT:   %[[VEC3:.*]] = vector.insert_strided_slice %[[VTR3]], %[[VEC2]] {offsets = [2, 2], strides = [1, 1]} : vector<2x2xf32> into vector<6x4xf32>
+//  CHECK-NEXT:   %[[VEC3:.*]] = vector.insert_strided_slice %[[VTR3]], %[[VEC2]][2:1][2:1] : vector<2x2xf32> into vector<6x4xf32>
 //  CHECK-NEXT:   %[[VTR4:.*]] = vector.transfer_read {{.*}}[%[[C0]], %[[C0]]], %{{.*}} : memref<6x4xf32>, vector<2x2xf32>
-//  CHECK-NEXT:   %[[VEC4:.*]] = vector.insert_strided_slice %[[VTR4]], %[[VEC3]] {offsets = [4, 0], strides = [1, 1]} : vector<2x2xf32> into vector<6x4xf32>
+//  CHECK-NEXT:   %[[VEC4:.*]] = vector.insert_strided_slice %[[VTR4]], %[[VEC3]][4:1][0:1] : vector<2x2xf32> into vector<6x4xf32>
 //  CHECK-NEXT:   %[[VTR5:.*]] = vector.transfer_read {{.*}}[%[[C0]], %[[C2]]], %{{.*}} : memref<6x4xf32>, vector<2x2xf32>
-//  CHECK-NEXT:   %[[VEC5:.*]] = vector.insert_strided_slice %[[VTR5]], %[[VEC4]] {offsets = [4, 2], strides = [1, 1]} : vector<2x2xf32> into vector<6x4xf32>
+//  CHECK-NEXT:   %[[VEC5:.*]] = vector.insert_strided_slice %[[VTR5]], %[[VEC4]][4:1][2:1] : vector<2x2xf32> into vector<6x4xf32>
 //  CHECK-NEXT:   return %[[VEC5]] : vector<6x4xf32>
 #map0 = affine_map<(d0, d1) -> (0, d1)>
 func.func @transfer_read_unroll_broadcast(%arg0 : memref<6x4xf32>) -> vector<6x4xf32> {
@@ -210,17 +210,17 @@ func.func @transfer_read_unroll_broadcast(%arg0 : memref<6x4xf32>) -> vector<6x4
 //       CHECK-DAG:   %[[C2:.*]] = arith.constant 2 : index
 //       CHECK-DAG:   %[[C0:.*]] = arith.constant 0 : index
 //       CHECK:   %[[VTR0:.*]] = vector.transfer_read {{.*}}[%[[C0]], %[[C0]]], %{{.*}} : memref<6x4xf32>, vector<2x2xf32>
-//  CHECK-NEXT:   %[[VEC0:.*]] = vector.insert_strided_slice %[[VTR0]], %{{.*}} {offsets = [0, 0], strides = [1, 1]} : vector<2x2xf32> into vector<4x6xf32>
+//  CHECK-NEXT:   %[[VEC0:.*]] = vector.insert_strided_slice %[[VTR0]], %{{.*}}[0:1][0:1] : vector<2x2xf32> into vector<4x6xf32>
 //  CHECK-NEXT:   %[[VTR1:.*]] = vector.transfer_read {{.*}}[%[[C2]], %[[C0]]], %{{.*}} : memref<6x4xf32>, vector<2x2xf32>
-//  CHECK-NEXT:   %[[VEC1:.*]] = vector.insert_strided_slice %[[VTR1]], %[[VEC0]] {offsets = [0, 2], strides = [1, 1]} : vector<2x2xf32> into vector<4x6xf32>
+//  CHECK-NEXT:   %[[VEC1:.*]] = vector.insert_strided_slice %[[VTR1]], %[[VEC0]][0:1][2:1] : vector<2x2xf32> into vector<4x6xf32>
 //  CHECK-NEXT:   %[[VTR2:.*]] = vector.transfer_read {{.*}}[%[[C4]], %[[C0]]], %{{.*}} : memref<6x4xf32>, vector<2x2xf32>
-//  CHECK-NEXT:   %[[VEC2:.*]] = vector.insert_strided_slice %[[VTR2]], %[[VEC1]] {offsets = [0, 4], strides = [1, 1]} : vector<2x2xf32> into vector<4x6xf32>
+//  CHECK-NEXT:   %[[VEC2:.*]] = vector.insert_strided_slice %[[VTR2]], %[[VEC1]][0:1][4:1] : vector<2x2xf32> into vector<4x6xf32>
 //  CHECK-NEXT:   %[[VTR3:.*]] = vector.transfer_read {{.*}}[%[[C0]], %[[C0]]], %{{.*}} : memref<6x4xf32>, vector<2x2xf32>
-//  CHECK-NEXT:   %[[VEC3:.*]] = vector.insert_strided_slice %[[VTR3]], %[[VEC2]] {offsets = [2, 0], strides = [1, 1]} : vector<2x2xf32> into vector<4x6xf32>
+//  CHECK-NEXT:   %[[VEC3:.*]] = vector.insert_strided_slice %[[VTR3]], %[[VEC2]][2:1][0:1] : vector<2x2xf32> into vector<4x6xf32>
 //  CHECK-NEXT:   %[[VTR4:.*]] = vector.transfer_read {{.*}}[%[[C2]], %[[C0]]], %{{.*}} : memref<6x4xf32>, vector<2x2xf32>
-//  CHECK-NEXT:   %[[VEC4:.*]] = vector.insert_strided_slice %[[VTR4]], %[[VEC3]] {offsets = [2, 2], strides = [1, 1]} : vector<2x2xf32> into vector<4x6xf32>
+//  CHECK-NEXT:   %[[VEC4:.*]] = vector.insert_strided_slice %[[VTR4]], %[[VEC3]][2:1][2:1] : vector<2x2xf32> into vector<4x6xf32>
 //  CHECK-NEXT:   %[[VTR5:.*]] = vector.transfer_read {{.*}}[%[[C4]], %[[C0]]], %{{.*}} : memref<6x4xf32>, vector<2x2xf32>
-//  CHECK-NEXT:   %[[VEC5:.*]] = vector.insert_strided_slice %[[VTR5]], %[[VEC4]] {offsets = [2, 4], strides = [1, 1]} : vector<2x2xf32> into vector<4x6xf32>
+//  CHECK-NEXT:   %[[VEC5:.*]] = vector.insert_strided_slice %[[VTR5]], %[[VEC4]][2:1][4:1] : vector<2x2xf32> into vector<4x6xf32>
 //  CHECK-NEXT:   return %[[VEC5]] : vector<4x6xf32>
 #map0 = affine_map<(d0, d1) -> (0, d0)>
 func.func @transfer_read_unroll_broadcast_permuation(%arg0 : memref<6x4xf32>) -> vector<4x6xf32> {
@@ -237,17 +237,17 @@ func.func @transfer_read_unroll_broadcast_permuation(%arg0 : memref<6x4xf32>) ->
 //       CHECK-DAG:   %[[C2:.*]] = arith.constant 2 : index
 //       CHECK-DAG:   %[[C0:.*]] = arith.constant 0 : index
 //       CHECK:   %[[VTR0:.*]] = vector.transfer_read {{.*}}[%[[C0]], %[[C0]], %[[C0]]], %{{.*}} : memref<?x?x?xf32>, vector<2x2xf32>
-//  CHECK-NEXT:   %[[VEC0:.*]] = vector.insert_strided_slice %[[VTR0]], %{{.*}} {offsets = [0, 0], strides = [1, 1]} : vector<2x2xf32> into vector<6x4xf32>
+//  CHECK-NEXT:   %[[VEC0:.*]] = vector.insert_strided_slice %[[VTR0]], %{{.*}}[0:1][0:1] : vector<2x2xf32> into vector<6x4xf32>
 //  CHECK-NEXT:   %[[VTR1:.*]] = vector.transfer_read {{.*}}[%[[C2]], %[[C0]], %[[C0]]], %{{.*}} : memref<?x?x?xf32>, vector<2x2xf32>
-//  CHECK-NEXT:   %[[VEC1:.*]] = vector.insert_strided_slice %[[VTR1]], %[[VEC0]] {offsets = [0, 2], strides = [1, 1]} : vector<2x2xf32> into vector<6x4xf32>
+//  CHECK-NEXT:   %[[VEC1:.*]] = vector.insert_strided_slice %[[VTR1]], %[[VEC0]][0:1][2:1] : vector<2x2xf32> into vector<6x4xf32>
 //  CHECK-NEXT:   %[[VTR2:.*]] = vector.transfer_read {{.*}}[%[[C0]], %[[C0]], %[[C2]]], %{{.*}} : memref<?x?x?xf32>, vector<2x2xf32>
-//  CHECK-NEXT:   %[[VEC2:.*]] = vector.insert_strided_slice %[[VTR2]], %[[VEC1]] {offsets = [2, 0], strides = [1, 1]} : vector<2x2xf32> into vector<6x4xf32>
+//  CHECK-NEXT:   %[[VEC2:.*]] = vector.insert_strided_slice %[[VTR2]], %[[VEC1]][2:1][0:1] : vector<2x2xf32> into vector<6x4xf32>
 //  CHECK-NEXT:   %[[VTR3:.*]] = vector.transfer_read {{.*}}[%[[C2]], %[[C0]], %[[C2]]], %{{.*}} : memref<?x?x?xf32>, vector<2x2xf32>
-//  CHECK-NEXT:   %[[VEC3:.*]] = vector.insert_strided_slice %[[VTR3]], %[[VEC2]] {offsets = [2, 2], strides = [1, 1]} : vector<2x2xf32> into vector<6x4xf32>
+//  CHECK-NEXT:   %[[VEC3:.*]] = vector.insert_strided_slice %[[VTR3]], %[[VEC2]][2:1][2:1] : vector<2x2xf32> into vector<6x4xf32>
 //  CHECK-NEXT:   %[[VTR4:.*]] = vector.transfer_read {{.*}}[%[[C0]], %[[C0]], %[[C4]]], %{{.*}} : memref<?x?x?xf32>, vector<2x2xf32>
-//  CHECK-NEXT:   %[[VEC4:.*]] = vector.insert_strided_slice %[[VTR4]], %[[VEC3]] {offsets = [4, 0], strides = [1, 1]} : vector<2x2xf32> into vector<6x4xf32>
+//  CHECK-NEXT:   %[[VEC4:.*]] = vector.insert_strided_slice %[[VTR4]], %[[VEC3]][4:1][0:1] : vector<2x2xf32> into vector<6x4xf32>
 //  CHECK-NEXT:   %[[VTR5:.*]] = vector.transfer_read {{.*}}[%[[C2]], %[[C0]], %[[C4]]], %{{.*}} : memref<?x?x?xf32>, vector<2x2xf32>
-//  CHECK-NEXT:   %[[VEC5:.*]] = vector.insert_strided_slice %[[VTR5]], %[[VEC4]] {offsets = [4, 2], strides = [1, 1]} : vector<2x2xf32> into vector<6x4xf32>
+//  CHECK-NEXT:   %[[VEC5:.*]] = vector.insert_strided_slice %[[VTR5]], %[[VEC4]][4:1][2:1] : vector<2x2xf32> into vector<6x4xf32>
 //  CHECK-NEXT:   return %[[VEC5]] : vector<6x4xf32>
 
 // ORDER-LABEL: func @transfer_read_unroll_different_rank
@@ -255,17 +255,17 @@ func.func @transfer_read_unroll_broadcast_permuation(%arg0 : memref<6x4xf32>) ->
 //       ORDER-DAG:   %[[C2:.*]] = arith.constant 2 : index
 //       ORDER-DAG:   %[[C0:.*]] = arith.constant 0 : index
 //       ORDER:   %[[VTR0:.*]] = vector.transfer_read {{.*}}[%[[C0]], %[[C0]], %[[C0]]], %{{.*}} : memref<?x?x?xf32>, vector<2x2xf32>
-//  ORDER-NEXT:   %[[VEC0:.*]] = vector.insert_strided_slice %[[VTR0]], %{{.*}} {offsets = [0, 0], strides = [1, 1]} : vector<2x2xf32> into vector<6x4xf32>
+//  ORDER-NEXT:   %[[VEC0:.*]] = vector.insert_strided_slice %[[VTR0]], %{{.*}}[0:1][0:1] : vector<2x2xf32> into vector<6x4xf32>
 //  ORDER-NEXT:   %[[VTR1:.*]] = vector.transfer_read {{.*}}[%[[C0]], %[[C0]], %[[C2]]], %{{.*}} : memref<?x?x?xf32>, vector<2x2xf32>
-//  ORDER-NEXT:   %[[VEC1:.*]] = vector.insert_strided_slice %[[VTR1]], %[[VEC0]] {offsets = [2, 0], strides = [1, 1]} : vector<2x2xf32> into vector<6x4xf32>
+//  ORDER-NEXT:   %[[VEC1:.*]] = vector.insert_strided_slice %[[VTR1]], %[[VEC0]][2:1][0:1] : vector<2x2xf32> into vector<6x4xf32>
 //  ORDER-NEXT:   %[[VTR2:.*]] = vector.transfer_read {{.*}}[%[[C0]], %[[C0]], %[[C4]]], %{{.*}} : memref<?x?x?xf32>, vector<2x2xf32>
-//  ORDER-NEXT:   %[[VEC2:.*]] = vector.insert_strided_slice %[[VTR2]], %[[VEC1]] {offsets = [4, 0], strides = [1, 1]} : vector<2x2xf32> into vector<6x4xf32>
+//  ORDER-NEXT:   %[[VEC2:.*]] = vector.insert_strided_slice %[[VTR2]], %[[VEC1]][4:1][0:1] : vector<2x2xf32> into vector<6x4xf32>
 //  ORDER-NEXT:   %[[VTR3:.*]] = vector.transfer_read {{.*}}[%[[C2]], %[[C0]], %[[C0]]], %{{.*}} : memref<?x?x?xf32>, vector<2x2xf32>
-//  ORDER-NEXT:   %[[VEC3:.*]] = vector.insert_strided_slice %[[VTR3]], %[[VEC2]] {offsets = [0, 2], strides = [1, 1]} : vector<2x2xf32> into vector<6x4xf32>
+//  ORDER-NEXT:   %[[VEC3:.*]] = vector.insert_strided_slice %[[VTR3]], %[[VEC2]][0:1][2:1] : vector<2x2xf32> into vector<6x4xf32>
 //  ORDER-NEXT:   %[[VTR4:.*]] = vector.transfer_read {{.*}}[%[[C2]], %[[C0]], %[[C2]]], %{{.*}} : memref<?x?x?xf32>, vector<2x2xf32>
-//  ORDER-NEXT:   %[[VEC4:.*]] = vector.insert_strided_slice %[[VTR4]], %[[VEC3]] {offsets = [2, 2], strides = [1, 1]} : vector<2x2xf32> into vector<6x4xf32>
+//  ORDER-NEXT:   %[[VEC4:.*]] = vector.insert_strided_slice %[[VTR4]], %[[VEC3]][2:1][2:1] : vector<2x2xf32> into vector<6x4xf32>
 //  ORDER-NEXT:   %[[VTR5:.*]] = vector.transfer_read {{.*}}[%[[C2]], %[[C0]], %[[C4]]], %{{.*}} : memref<?x?x?xf32>, vector<2x2xf32>
-//  ORDER-NEXT:   %[[VEC5:.*]] = vector.insert_strided_slice %[[VTR5]], %[[VEC4]] {offsets = [4, 2], strides = [1, 1]} : vector<2x2xf32> into vector<6x4xf32>
+//  ORDER-NEXT:   %[[VEC5:.*]] = vector.insert_strided_slice %[[VTR5]], %[[VEC4]][4:1][2:1] : vector<2x2xf32> into vector<6x4xf32>
 //  ORDER-NEXT:   return %[[VEC5]] : vector<6x4xf32>
 
 #map0 = affine_map<(d0, d1, d2) -> (d2, d0)>
@@ -284,36 +284,36 @@ func.func @transfer_read_unroll_different_rank(%arg0 : memref<?x?x?xf32>) -> vec
 //  CHECK-SAME:           %[[ARG2:.*]]: vector<6x4xi1>
 //  CHECK-SAME:           %[[ARG3:.*]]: vector<6x4xf32>
 //  CHECK-DAG:   %[[C0:.*]] = arith.constant 0 : index
-//       CHECK:   %[[IDX0:.*]] = vector.extract_strided_slice %[[ARG1]] {offsets = [0, 0], sizes = [2, 2], strides = [1, 1]} : vector<6x4xindex> to vector<2x2xindex>
-//  CHECK-NEXT:   %[[MASK0:.*]] = vector.extract_strided_slice %[[ARG2]] {offsets = [0, 0], sizes = [2, 2], strides = [1, 1]} : vector<6x4xi1> to vector<2x2xi1>
-//  CHECK-NEXT:   %[[PASS0:.*]] = vector.extract_strided_slice %[[ARG3]] {offsets = [0, 0], sizes = [2, 2], strides = [1, 1]} : vector<6x4xf32> to vector<2x2xf32>
+//       CHECK:   %[[IDX0:.*]] = vector.extract_strided_slice %[[ARG1]][0:2:1][0:2:1] : vector<6x4xindex> to vector<2x2xindex>
+//  CHECK-NEXT:   %[[MASK0:.*]] = vector.extract_strided_slice %[[ARG2]][0:2:1][0:2:1] : vector<6x4xi1> to vector<2x2xi1>
+//  CHECK-NEXT:   %[[PASS0:.*]] = vector.extract_strided_slice %[[ARG3]][0:2:1][0:2:1] : vector<6x4xf32> to vector<2x2xf32>
 //  CHECK-NEXT:   %[[VGT0:.*]] = vector.gather {{.*}}[%[[C0]], %[[C0]], %[[C0]]] [%[[IDX0]]], %[[MASK0]], %[[PASS0]] : memref<?x?x?xf32>, vector<2x2xindex>, vector<2x2xi1>, vector<2x2xf32> into vector<2x2xf32>
-//  CHECK-NEXT:   %[[VEC0:.*]] = vector.insert_strided_slice %[[VGT0]], %{{.*}} {offsets = [0, 0], strides = [1, 1]} : vector<2x2xf32> into vector<6x4xf32>
-//  CHECK-NEXT:   %[[IDX1:.*]] = vector.extract_strided_slice %[[ARG1]] {offsets = [0, 2], sizes = [2, 2], strides = [1, 1]} : vector<6x4xindex> to vector<2x2xindex>
-//  CHECK-NEXT:   %[[MASK1:.*]] = vector.extract_strided_slice %[[ARG2]] {offsets = [0, 2], sizes = [2, 2], strides = [1, 1]} : vector<6x4xi1> to vector<2x2xi1>
-//  CHECK-NEXT:   %[[PASS1:.*]] = vector.extract_strided_slice %[[ARG3]] {offsets = [0, 2], sizes = [2, 2], strides = [1, 1]} : vector<6x4xf32> to vector<2x2xf32>
+//  CHECK-NEXT:   %[[VEC0:.*]] = vector.insert_strided_slice %[[VGT0]], %{{.*}}[0:1][0:1] : vector<2x2xf32> into vector<6x4xf32>
+//  CHECK-NEXT:   %[[IDX1:.*]] = vector.extract_strided_slice %[[ARG1]][0:2:1][2:2:1] : vector<6x4xindex> to vector<2x2xindex>
+//  CHECK-NEXT:   %[[MASK1:.*]] = vector.extract_strided_slice %[[ARG2]][0:2:1][2:2:1] : vector<6x4xi1> to vector<2x2xi1>
+//  CHECK-NEXT:   %[[PASS1:.*]] = vector.extract_strided_slice %[[ARG3]][0:2:1][2:2:1] : vector<6x4xf32> to vector<2x2xf32>
 //  CHECK-NEXT:   %[[VGT1:.*]] = vector.gather {{.*}}[%[[C0]], %[[C0]], %[[C0]]] [%[[IDX1]]], %[[MASK1]], %[[PASS1]] : memref<?x?x?xf32>, vector<2x2xindex>, vector<2x2xi1>, vector<2x2xf32> into vector<2x2xf32>
-//  CHECK-NEXT:   %[[VEC1:.*]] = vector.insert_strided_slice %[[VGT1]], %[[VEC0]] {offsets = [0, 2], strides = [1, 1]} : vector<2x2xf32> into vector<6x4xf32>
-//  CHECK-NEXT:   %[[IDX2:.*]] = vector.extract_strided_slice %[[ARG1]] {offsets = [2, 0], sizes = [2, 2], strides = [1, 1]} : vector<6x4xindex> to vector<2x2xindex>
-//  CHECK-NEXT:   %[[MASK2:.*]] = vector.extract_strided_slice %[[ARG2]] {offsets = [2, 0], sizes = [2, 2], strides = [1, 1]} : vector<6x4xi1> to vector<2x2xi1>
-//  CHECK-NEXT:   %[[PASS2:.*]] = vector.extract_strided_slice %[[ARG3]] {offsets = [2, 0], sizes = [2, 2], strides = [1, 1]} : vector<6x4xf32> to vector<2x2xf32>
+//  CHECK-NEXT:   %[[VEC1:.*]] = vector.insert_strided_slice %[[VGT1]], %[[VEC0]][0:1][2:1] : vector<2x2xf32> into vector<6x4xf32>
+//  CHECK-NEXT:   %[[IDX2:.*]] = vector.extract_strided_slice %[[ARG1]][2:2:1][0:2:1] : vector<6x4xindex> to vector<2x2xindex>
+//  CHECK-NEXT:   %[[MASK2:.*]] = vector.extract_strided_slice %[[ARG2]][2:2:1][0:2:1] : vector<6x4xi1> to vector<2x2xi1>
+//  CHECK-NEXT:   %[[PASS2:.*]] = vector.extract_strided_slice %[[ARG3]][2:2:1][0:2:1] : vector<6x4xf32> to vector<2x2xf32>
 //  CHECK-NEXT:   %[[VGT2:.*]] = vector.gather {{.*}}[%[[C0]], %[[C0]], %[[C0]]] [%[[IDX2]]], %[[MASK2]], %[[PASS2]] : memref<?x?x?xf32>, vector<2x2xindex>, vector<2x2xi1>, vector<2x2xf32> into vector<2x2xf32>
-//  CHECK-NEXT:   %[[VEC2:.*]] = vector.insert_strided_slice %[[VGT2]], %[[VEC1]] {offsets = [2, 0], strides = [1, 1]} : vector<2x2xf32> into vector<6x4xf32>
-//  CHECK-NEXT:   %[[IDX3:.*]] = vector.extract_strided_slice %[[ARG1]] {offsets = [2, 2], sizes = [2, 2], strides = [1, 1]} : vector<6x4xindex> to vector<2x2xindex>
-//  CHECK-NEXT:   %[[MASK3:.*]] = vector.extract_strided_slice %[[ARG2]] {offsets = [2, 2], sizes = [2, 2], strides = [1, 1]} : vector<6x4xi1> to vector<2x2xi1>
-//  CHECK-NEXT:   %[[PASS3:.*]] = vector.extract_strided_slice %[[ARG3]] {offsets = [2, 2], sizes = [2, 2], strides = [1, 1]} : vector<6x4xf32> to vector<2x2xf32>
+//  CHECK-NEXT:   %[[VEC2:.*]] = vector.insert_strided_slice %[[VGT2]], %[[VEC1]][2:1][0:1] : vector<2x2xf32> into vector<6x4xf32>
+//  CHECK-NEXT:   %[[IDX3:.*]] = vector.extract_strided_slice %[[ARG1]][2:2:1][2:2:1] : vector<6x4xindex> to vector<2x2xindex>
+//  CHECK-NEXT:   %[[MASK3:.*]] = vector.extract_strided_slice %[[ARG2]][2:2:1][2:2:1] : vector<6x4xi1> to vector<2x2xi1>
+//  CHECK-NEXT:   %[[PASS3:.*]] = vector.extract_strided_slice %[[ARG3]][2:2:1][2:2:1] : vector<6x4xf32> to vector<2x2xf32>
 //  CHECK-NEXT:   %[[VGT3:.*]] = vector.gather {{.*}}[%[[C0]], %[[C0]], %[[C0]]] [%[[IDX3]]], %[[MASK3]], %[[PASS3]] : memref<?x?x?xf32>, vector<2x2xindex>, vector<2x2xi1>, vector<2x2xf32> into vector<2x2xf32>
-//  CHECK-NEXT:   %[[VEC3:.*]] = vector.insert_strided_slice %[[VGT3]], %[[VEC2]] {offsets = [2, 2], strides = [1, 1]} : vector<2x2xf32> into vector<6x4xf32>
-//  CHECK-NEXT:   %[[IDX4:.*]] = vector.extract_strided_slice %[[ARG1]] {offsets = [4, 0], sizes = [2, 2], strides = [1, 1]} : vector<6x4xindex> to vector<2x2xindex>
-//  CHECK-NEXT:   %[[MASK4:.*]] = vector.extract_strided_slice %[[ARG2]] {offsets = [4, 0], sizes = [2, 2], strides = [1, 1]} : vector<6x4xi1> to vector<2x2xi1>
-//  CHECK-NEXT:   %[[PASS4:.*]] = vector.extract_strided_slice %[[ARG3]] {offsets = [4, 0], sizes = [2, 2], strides = [1, 1]} : vector<6x4xf32> to vector<2x2xf32>
+//  CHECK-NEXT:   %[[VEC3:.*]] = vector.insert_strided_slice %[[VGT3]], %[[VEC2]][2:1][2:1] : vector<2x2xf32> into vector<6x4xf32>
+//  CHECK-NEXT:   %[[IDX4:.*]] = vector.extract_strided_slice %[[ARG1]][4:2:1][0:2:1] : vector<6x4xindex> to vector<2x2xindex>
+//  CHECK-NEXT:   %[[MASK4:.*]] = vector.extract_strided_slice %[[ARG2]][4:2:1][0:2:1] : vector<6x4xi1> to vector<2x2xi1>
+//  CHECK-NEXT:   %[[PASS4:.*]] = vector.extract_strided_slice %[[ARG3]][4:2:1][0:2:1] : vector<6x4xf32> to vector<2x2xf32>
 //  CHECK-NEXT:   %[[VGT4:.*]] = vector.gather {{.*}}[%[[C0]], %[[C0]], %[[C0]]] [%[[IDX4]]], %[[MASK4]], %[[PASS4]] : memref<?x?x?xf32>, vector<2x2xindex>, vector<2x2xi1>, vector<2x2xf32> into vector<2x2xf32>
-//  CHECK-NEXT:   %[[VEC4:.*]] = vector.insert_strided_slice %[[VGT4]], %[[VEC3]] {offsets = [4, 0], strides = [1, 1]} : vector<2x2xf32> into vector<6x4xf32>
-//  CHECK-NEXT:   %[[IDX5:.*]] = vector.extract_strided_slice %[[ARG1]] {offsets = [4, 2], sizes = [2, 2], strides = [1, 1]} : vector<6x4xindex> to vector<2x2xindex>
-//  CHECK-NEXT:   %[[MASK5:.*]] = vector.extract_strided_slice %[[ARG2]] {offsets = [4, 2], sizes = [2, 2], strides = [1, 1]} : vector<6x4xi1> to vector<2x2xi1>
-//  CHECK-NEXT:   %[[PASS5:.*]] = vector.extract_strided_slice %[[ARG3]] {offsets = [4, 2], sizes = [2, 2], strides = [1, 1]} : vector<6x4xf32> to vector<2x2xf32>
+//  CHECK-NEXT:   %[[VEC4:.*]] = vector.insert_strided_slice %[[VGT4]], %[[VEC3]][4:1][0:1] : vector<2x2xf32> into vector<6x4xf32>
+//  CHECK-NEXT:   %[[IDX5:.*]] = vector.extract_strided_slice %[[ARG1]][4:2:1][2:2:1] : vector<6x4xindex> to vector<2x2xindex>
+//  CHECK-NEXT:   %[[MASK5:.*]] = vector.extract_strided_slice %[[ARG2]][4:2:1][2:2:1] : vector<6x4xi1> to vector<2x2xi1>
+//  CHECK-NEXT:   %[[PASS5:.*]] = vector.extract_strided_slice %[[ARG3]][4:2:1][2:2:1] : vector<6x4xf32> to vector<2x2xf32>
 //  CHECK-NEXT:   %[[VGT5:.*]] = vector.gather {{.*}}[%[[C0]], %[[C0]], %[[C0]]] [%[[IDX5]]], %[[MASK5]], %[[PASS5]] : memref<?x?x?xf32>, vector<2x2xindex>, vector<2x2xi1>, vector<2x2xf32> into vector<2x2xf32>
-//  CHECK-NEXT:   %[[VEC5:.*]] = vector.insert_strided_slice %[[VGT5]], %[[VEC4]] {offsets = [4, 2], strides = [1, 1]} : vector<2x2xf32> into vector<6x4xf32>
+//  CHECK-NEXT:   %[[VEC5:.*]] = vector.insert_strided_slice %[[VGT5]], %[[VEC4]][4:1][2:1] : vector<2x2xf32> into vector<6x4xf32>
 //  CHECK-NEXT:   return %[[VEC5]] : vector<6x4xf32>
 
 // ORDER-LABEL: func @vector_gather_unroll
@@ -322,36 +322,36 @@ func.func @transfer_read_unroll_different_rank(%arg0 : memref<?x?x?xf32>) -> vec
 //  ORDER-SAME:           %[[ARG2:.*]]: vector<6x4xi1>
 //  ORDER-SAME:           %[[ARG3:.*]]: vector<6x4xf32>
 //  ORDER-DAG:   %[[C0:.*]] = arith.constant 0 : index
-//       ORDER:   %[[IDX0:.*]] = vector.extract_strided_slice %[[ARG1]] {offsets = [0, 0], sizes = [2, 2], strides = [1, 1]} : vector<6x4xindex> to vector<2x2xindex>
-//  ORDER-NEXT:   %[[MASK0:.*]] = vector.extract_strided_slice %[[ARG2]] {offsets = [0, 0], sizes = [2, 2], strides = [1, 1]} : vector<6x4xi1> to vector<2x2xi1>
-//  ORDER-NEXT:   %[[PASS0:.*]] = vector.extract_strided_slice %[[ARG3]] {offsets = [0, 0], sizes = [2, 2], strides = [1, 1]} : vector<6x4xf32> to vector<2x2xf32>
+//       ORDER:   %[[IDX0:.*]] = vector.extract_strided_slice %[[ARG1]][0:2:1][0:2:1] : vector<6x4xindex> to vector<2x2xindex>
+//  ORDER-NEXT:   %[[MASK0:.*]] = vector.extract_strided_slice %[[ARG2]][0:2:1][0:2:1] : vector<6x4xi1> to vector<2x2xi1>
+//  ORDER-NEXT:   %[[PASS0:.*]] = vector.extract_strided_slice %[[ARG3]][0:2:1][0:2:1] : vector<6x4xf32> to vector<2x2xf32>
 //  ORDER-NEXT:   %[[VGT0:.*]] = vector.gather {{.*}}[%[[C0]], %[[C0]], %[[C0]]] [%[[IDX0]]], %[[MASK0]], %[[PASS0]] : memref<?x?x?xf32>, vector<2x2xindex>, vector<2x2xi1>, vector<2x2xf32> into vector<2x2xf32>
-//  ORDER-NEXT:   %[[VEC0:.*]] = vector.insert_strided_slice %[[VGT0]], %{{.*}} {offsets = [0, 0], strides = [1, 1]} : vector<2x2xf32> into vector<6x4xf32>
-//  ORDER-NEXT:   %[[IDX1:.*]] = vector.extract_strided_slice %[[ARG1]] {offsets = [2, 0], sizes = [2, 2], strides = [1, 1]} : vector<6x4xindex> to vector<2x2xindex>
-//  ORDER-NEXT:   %[[MASK1:.*]] = vector.extract_strided_slice %[[ARG2]] {offsets = [2, 0], sizes = [2, 2], strides = [1, 1]} : vector<6x4xi1> to vector<2x2xi1>
-//  ORDER-NEXT:   %[[PASS1:.*]] = vector.extract_strided_slice %[[ARG3]] {offsets = [2, 0], sizes = [2, 2], strides = [1, 1]} : vector<6x4xf32> to vector<2x2xf32>
+//  ORDER-NEXT:   %[[VEC0:.*]] = vector.insert_strided_slice %[[VGT0]], %{{.*}}[0:1][0:1] : vector<2x2xf32> into vector<6x4xf32>
+//  ORDER-NEXT:   %[[IDX1:.*]] = vector.extract_strided_slice %[[ARG1]][2:2:1][0:2:1] : vector<6x4xindex> to vector<2x2xindex>
+//  ORDER-NEXT:   %[[MASK1:.*]] = vector.extract_strided_slice %[[ARG2]][2:2:1][0:2:1] : vector<6x4xi1> to vector<2x2xi1>
+//  ORDER-NEXT:   %[[PASS1:.*]] = vector.extract_strided_slice %[[ARG3]][2:2:1][0:2:1] : vector<6x4xf32> to vector<2x2xf32>
 //  ORDER-NEXT:   %[[VGT1:.*]] = vector.gather {{.*}}[%[[C0]], %[[C0]], %[[C0]]] [%[[IDX1]]], %[[MASK1]], %[[PASS1]] : memref<?x?x?xf32>, vector<2x2xindex>, vector<2x2xi1>, vector<2x2xf32> into vector<2x2xf32>
-//  ORDER-NEXT:   %[[VEC1:.*]] = vector.insert_strided_slice %[[VGT1]], %[[VEC0]] {offsets = [2, 0], strides = [1, 1]} : vector<2x2xf32> into vector<6x4xf32>
-//  ORDER-NEXT:   %[[IDX2:.*]] = vector.extract_strided_slice %[[ARG1]] {offsets = [4, 0], sizes = [2, 2], strides = [1, 1]} : vector<6x4xindex> to vector<2x2xindex>
-//  ORDER-NEXT:   %[[MASK2:.*]] = vector.extract_strided_slice %[[ARG2]] {offsets = [4, 0], sizes = [2, 2], strides = [1, 1]} : vector<6x4xi1> to vector<2x2xi1>
-//  ORDER-NEXT:   %[[PASS2:.*]] = vector.extract_strided_slice %[[ARG3]] {offsets = [4, 0], sizes = [2, 2], strides = [1, 1]} : vector<6x4xf32> to vector<2x2xf32>
+//  ORDER-NEXT:   %[[VEC1:.*]] = vector.insert_strided_slice %[[VGT1]], %[[VEC0]][2:1][0:1] : vector<2x2xf32> into vector<6x4xf32>
+//  ORDER-NEXT:   %[[IDX2:.*]] = vector.extract_strided_slice %[[ARG1]][4:2:1][0:2:1] : vector<6x4xindex> to vector<2x2xindex>
+//  ORDER-NEXT:   %[[MASK2:.*]] = vector.extract_strided_slice %[[ARG2]][4:2:1][0:2:1] : vector<6x4xi1> to vector<2x2xi1>
+//  ORDER-NEXT:   %[[PASS2:.*]] = vector.extract_strided_slice %[[ARG3]][4:2:1][0:2:1] : vector<6x4xf32> to vector<2x2xf32>
 //  ORDER-NEXT:   %[[VGT2:.*]] = vector.gather {{.*}}[%[[C0]], %[[C0]], %[[C0]]] [%[[IDX2]]], %[[MASK2]], %[[PASS2]] : memref<?x?x?xf32>, vector<2x2xindex>, vector<2x2xi1>, vector<2x2xf32> into vector<2x2xf32>
-//  ORDER-NEXT:   %[[VEC2:.*]] = vector.insert_strided_slice %[[VGT2]], %[[VEC1]] {offsets = [4, 0], strides = [1, 1]} : vector<2x2xf32> into vector<6x4xf32>
-//  ORDER-NEXT:   %[[IDX3:.*]] = vector.extract_strided_slice %[[ARG1]] {offsets = [0, 2], sizes = [2, 2], strides = [1, 1]} : vector<6x4xindex> to vector<2x2xindex>
-//  ORDER-NEXT:   %[[MASK3:.*]] = vector.extract_strided_slice %[[ARG2]] {offsets = [0, 2], sizes = [2, 2], strides = [1, 1]} : vector<6x4xi1> to vector<2x2xi1>
-//  ORDER-NEXT:   %[[PASS3:.*]] = vector.extract_strided_slice %[[ARG3]] {offsets = [0, 2], sizes = [2, 2], strides = [1, 1]} : vector<6x4xf32> to vector<2x2xf32>
+//  ORDER-NEXT:   %[[VEC2:.*]] = vector.insert_strided_slice %[[VGT2]], %[[VEC1]][4:1][0:1] : vector<2x2xf32> into vector<6x4xf32>
+//  ORDER-NEXT:   %[[IDX3:.*]] = vector.extract_strided_slice %[[ARG1]][0:2:1][2:2:1] : vector<6x4xindex> to vector<2x2xindex>
+//  ORDER-NEXT:   %[[MASK3:.*]] = vector.extract_strided_slice %[[ARG2]][0:2:1][2:2:1] : vector<6x4xi1> to vector<2x2xi1>
+//  ORDER-NEXT:   %[[PASS3:.*]] = vector.extract_strided_slice %[[ARG3]][0:2:1][2:2:1] : vector<6x4xf32> to vector<2x2xf32>
 //  ORDER-NEXT:   %[[VGT3:.*]] = vector.gather {{.*}}[%[[C0]], %[[C0]], %[[C0]]] [%[[IDX3]]], %[[MASK3]], %[[PASS3]] : memref<?x?x?xf32>, vector<2x2xindex>, vector<2x2xi1>, vector<2x2xf32> into vector<2x2xf32>
-//  ORDER-NEXT:   %[[VEC3:.*]] = vector.insert_strided_slice %[[VGT3]], %[[VEC2]] {offsets = [0, 2], strides = [1, 1]} : vector<2x2xf32> into vector<6x4xf32>
-//  ORDER-NEXT:   %[[IDX4:.*]] = vector.extract_strided_slice %[[ARG1]] {offsets = [2, 2], sizes = [2, 2], strides = [1, 1]} : vector<6x4xindex> to vector<2x2xindex>
-//  ORDER-NEXT:   %[[MASK4:.*]] = vector.extract_strided_slice %[[ARG2]] {offsets = [2, 2], sizes = [2, 2], strides = [1, 1]} : vector<6x4xi1> to vector<2x2xi1>
-//  ORDER-NEXT:   %[[PASS4:.*]] = vector.extract_strided_slice %[[ARG3]] {offsets = [2, 2], sizes = [2, 2], strides = [1, 1]} : vector<6x4xf32> to vector<2x2xf32>
+//  ORDER-NEXT:   %[[VEC3:.*]] = vector.insert_strided_slice %[[VGT3]], %[[VEC2]][0:1][2:1] : vector<2x2xf32> into vector<6x4xf32>
+//  ORDER-NEXT:   %[[IDX4:.*]] = vector.extract_strided_slice %[[ARG1]][2:2:1][2:2:1] : vector<6x4xindex> to vector<2x2xindex>
+//  ORDER-NEXT:   %[[MASK4:.*]] = vector.extract_strided_slice %[[ARG2]][2:2:1][2:2:1] : vector<6x4xi1> to vector<2x2xi1>
+//  ORDER-NEXT:   %[[PASS4:.*]] = vector.extract_strided_slice %[[ARG3]][2:2:1][2:2:1] : vector<6x4xf32> to vector<2x2xf32>
 //  ORDER-NEXT:   %[[VGT4:.*]] = vector.gather {{.*}}[%[[C0]], %[[C0]], %[[C0]]] [%[[IDX4]]], %[[MASK4]], %[[PASS4]] : memref<?x?x?xf32>, vector<2x2xindex>, vector<2x2xi1>, vector<2x2xf32> into vector<2x2xf32>
-//  ORDER-NEXT:   %[[VEC4:.*]] = vector.insert_strided_slice %[[VGT4]], %[[VEC3]] {offsets = [2, 2], strides = [1, 1]} : vector<2x2xf32> into vector<6x4xf32>
-//  ORDER-NEXT:   %[[IDX5:.*]] = vector.extract_strided_slice %[[ARG1]] {offsets = [4, 2], sizes = [2, 2], strides = [1, 1]} : vector<6x4xindex> to vector<2x2xindex>
-//  ORDER-NEXT:   %[[MASK5:.*]] = vector.extract_strided_slice %[[ARG2]] {offsets = [4, 2], sizes = [2, 2], strides = [1, 1]} : vector<6x4xi1> to vector<2x2xi1>
-//  ORDER-NEXT:   %[[PASS5:.*]] = vector.extract_strided_slice %[[ARG3]] {offsets = [4, 2], sizes = [2, 2], strides = [1, 1]} : vector<6x4xf32> to vector<2x2xf32>
+//  ORDER-NEXT:   %[[VEC4:.*]] = vector.insert_strided_slice %[[VGT4]], %[[VEC3]][2:1][2:1] : vector<2x2xf32> into vector<6x4xf32>
+//  ORDER-NEXT:   %[[IDX5:.*]] = vector.extract_strided_slice %[[ARG1]][4:2:1][2:2:1] : vector<6x4xindex> to vector<2x2xindex>
+//  ORDER-NEXT:   %[[MASK5:.*]] = vector.extract_strided_slice %[[ARG2]][4:2:1][2:2:1] : vector<6x4xi1> to vector<2x2xi1>
+//  ORDER-NEXT:   %[[PASS5:.*]] = vector.extract_strided_slice %[[ARG3]][4:2:1][2:2:1] : vector<6x4xf32> to vector<2x2xf32>
 //  ORDER-NEXT:   %[[VGT5:.*]] = vector.gather {{.*}}[%[[C0]], %[[C0]], %[[C0]]] [%[[IDX5]]], %[[MASK5]], %[[PASS5]] : memref<?x?x?xf32>, vector<2x2xindex>, vector<2x2xi1>, vector<2x2xf32> into vector<2x2xf32>
-//  ORDER-NEXT:   %[[VEC5:.*]] = vector.insert_strided_slice %[[VGT5]], %[[VEC4]] {offsets = [4, 2], strides = [1, 1]} : vector<2x2xf32> into vector<6x4xf32>
+//  ORDER-NEXT:   %[[VEC5:.*]] = vector.insert_strided_slice %[[VGT5]], %[[VEC4]][4:1][2:1] : vector<2x2xf32> into vector<6x4xf32>
 //  ORDER-NEXT:   return %[[VEC5]] : vector<6x4xf32>
 
 func.func @vector_gather_unroll(%arg0 : memref<?x?x?xf32>,
diff --git a/mlir/test/Dialect/Vector/vector-transforms.mlir b/mlir/test/Dialect/Vector/vector-transforms.mlir
index eda6a5cc40d99..3999f2cb8b4b3 100644
--- a/mlir/test/Dialect/Vector/vector-transforms.mlir
+++ b/mlir/test/Dialect/Vector/vector-transforms.mlir
@@ -3,14 +3,14 @@
 // CHECK-DAG: #[[MAP1:map[0-9]*]] = affine_map<(d0, d1, d2) -> (d1, d2)>
 
 // CHECK-LABEL: func @add4x2
-//      CHECK: %[[S1:.*]] = vector.extract_strided_slice %{{.*}} {offsets = [0, 0], sizes = [2, 2], strides = [1, 1]} : vector<4x2xf32> to vector<2x2xf32>
-// CHECK-NEXT: %[[S2:.*]] = vector.extract_strided_slice %{{.*}} {offsets = [0, 0], sizes = [2, 2], strides = [1, 1]} : vector<4x2xf32> to vector<2x2xf32>
+//      CHECK: %[[S1:.*]] = vector.extract_strided_slice %{{.*}}[0:2:1][0:2:1] : vector<4x2xf32> to vector<2x2xf32>
+// CHECK-NEXT: %[[S2:.*]] = vector.extract_strided_slice %{{.*}}[0:2:1][0:2:1] : vector<4x2xf32> to vector<2x2xf32>
 // CHECK-NEXT: %[[A1:.*]] = arith.addf %[[S1]], %[[S2]] : vector<2x2xf32>
-// CHECK-NEXT: %[[VEC0:.*]] = vector.insert_strided_slice %[[A1]], %{{.*}} {offsets = [0, 0], strides = [1, 1]} : vector<2x2xf32> into vector<4x2xf32>
-// CHECK-NEXT: %[[S3:.*]] = vector.extract_strided_slice %{{.*}} {offsets = [2, 0], sizes = [2, 2], strides = [1, 1]} : vector<4x2xf32> to vector<2x2xf32>
-// CHECK-NEXT: %[[S4:.*]] = vector.extract_strided_slice %{{.*}} {offsets = [2, 0], sizes = [2, 2], strides = [1, 1]} : vector<4x2xf32> to vector<2x2xf32>
+// CHECK-NEXT: %[[VEC0:.*]] = vector.insert_strided_slice %[[A1]], %{{.*}}[0:1][0:1] : vector<2x2xf32> into vector<4x2xf32>
+// CHECK-NEXT: %[[S3:.*]] = vector.extract_strided_slice %{{.*}}[2:2:1][0:2:1] : vector<4x2xf32> to vector<2x2xf32>
+// CHECK-NEXT: %[[S4:.*]] = vector.extract_strided_slice %{{.*}}[2:2:1][0:2:1] : vector<4x2xf32> to vector<2x2xf32>
 // CHECK-NEXT: %[[A2:.*]] = arith.addf %[[S3]], %[[S4]] : vector<2x2xf32>
-// CHECK-NEXT: %[[VEC1:.*]] = vector.insert_strided_slice %[[A2]], %[[VEC0]] {offsets = [2, 0], strides = [1, 1]} : vector<2x2xf32> into vector<4x2xf32>
+// CHECK-NEXT: %[[VEC1:.*]] = vector.insert_strided_slice %[[A2]], %[[VEC0]][2:1][0:1] : vector<2x2xf32> into vector<4x2xf32>
 // CHECK-NEXT: return %[[VEC1:.*]] : vector<4x2xf32>
 
 func.func @add4x2(%0: vector<4x2xf32>) -> vector<4x2xf32> {
@@ -51,40 +51,40 @@ func.func @cast_away_leading_one_dim_scalable(%arg0: vector<1x[4]x1xf32>, %arg1:
 }
 
 // CHECK-LABEL: func @add4x4
-//      CHECK: %[[S1:.*]] = vector.extract_strided_slice %{{.*}} {offsets = [0, 0], sizes = [2, 2], strides = [1, 1]} : vector<4x4xf32> to vector<2x2xf32>
-// CHECK-NEXT: %[[S2:.*]] = vector.extract_strided_slice %{{.*}} {offsets = [0, 0], sizes = [2, 2], strides = [1, 1]} : vector<4x4xf32> to vector<2x2xf32>
+//      CHECK: %[[S1:.*]] = vector.extract_strided_slice %{{.*}}[0:2:1][0:2:1] : vector<4x4xf32> to vector<2x2xf32>
+// CHECK-NEXT: %[[S2:.*]] = vector.extract_strided_slice %{{.*}}[0:2:1][0:2:1] : vector<4x4xf32> to vector<2x2xf32>
 
 // CHECK-NEXT: %[[A1:.*]] = arith.addf %[[S1]], %[[S2]] : vector<2x2xf32>
 
-// CHECK-NEXT: %[[S3:.*]] = vector.extract_strided_slice %{{.*}} {offsets = [0, 2], sizes = [2, 2], strides = [1, 1]} : vector<4x4xf32> to vector<2x2xf32>
-// CHECK-NEXT: %[[S4:.*]] = vector.extract_strided_slice %{{.*}} {offsets = [0, 2], sizes = [2, 2], strides = [1, 1]} : vector<4x4xf32> to vector<2x2xf32>
+// CHECK-NEXT: %[[S3:.*]] = vector.extract_strided_slice %{{.*}}[0:2:1][2:2:1] : vector<4x4xf32> to vector<2x2xf32>
+// CHECK-NEXT: %[[S4:.*]] = vector.extract_strided_slice %{{.*}}[0:2:1][2:2:1] : vector<4x4xf32> to vector<2x2xf32>
 
 // CHECK-NEXT: %[[A2:.*]] = arith.addf %[[S3]], %[[S4]] : vector<2x2xf32>
 
-// CHECK-NEXT: %[[S5:.*]] = vector.extract_strided_slice %{{.*}} {offsets = [2, 0], sizes = [2, 2], strides = [1, 1]} : vector<4x4xf32> to vector<2x2xf32>
-// CHECK-NEXT: %[[S6:.*]] = vector.extract_strided_slice %{{.*}} {offsets = [2, 0], sizes = [2, 2], strides = [1, 1]} : vector<4x4xf32> to vector<2x2xf32>
+// CHECK-NEXT: %[[S5:.*]] = vector.extract_strided_slice %{{.*}}[2:2:1][0:2:1] : vector<4x4xf32> to vector<2x2xf32>
+// CHECK-NEXT: %[[S6:.*]] = vector.extract_strided_slice %{{.*}}[2:2:1][0:2:1] : vector<4x4xf32> to vector<2x2xf32>
 // CHECK-NEXT: %[[A3:.*]] = arith.addf %[[S5]], %[[S6]] : vector<2x2xf32>
 
-// CHECK-NEXT: %[[S7:.*]] = vector.extract_strided_slice %{{.*}} {offsets = [2, 2], sizes = [2, 2], strides = [1, 1]} : vector<4x4xf32> to vector<2x2xf32>
-// CHECK-NEXT: %[[S8:.*]] = vector.extract_strided_slice %{{.*}} {offsets = [2, 2], sizes = [2, 2], strides = [1, 1]} : vector<4x4xf32> to vector<2x2xf32>
+// CHECK-NEXT: %[[S7:.*]] = vector.extract_strided_slice %{{.*}}[2:2:1][2:2:1] : vector<4x4xf32> to vector<2x2xf32>
+// CHECK-NEXT: %[[S8:.*]] = vector.extract_strided_slice %{{.*}}[2:2:1][2:2:1] : vector<4x4xf32> to vector<2x2xf32>
 // CHECK-NEXT: %[[A4:.*]] = arith.addf %[[S7]], %[[S8]] : vector<2x2xf32>
 
-// CHECK-NEXT: %[[S9:.*]] = vector.extract_strided_slice %{{.*}} {offsets = [0, 0], sizes = [2, 2], strides = [1, 1]} : vector<4x4xf32> to vector<2x2xf32>
+// CHECK-NEXT: %[[S9:.*]] = vector.extract_strided_slice %{{.*}}[0:2:1][0:2:1] : vector<4x4xf32> to vector<2x2xf32>
 // CHECK-NEXT: %[[A5:.*]] = arith.addf %[[S9]], %[[A1]] : vector<2x2xf32>
-// CHECK-NEXT: %[[R1:.*]] = vector.insert_strided_slice %[[A5]], %{{.*}} {offsets = [0, 0], strides = [1, 1]} : vector<2x2xf32> into vector<4x4xf32>
+// CHECK-NEXT: %[[R1:.*]] = vector.insert_strided_slice %[[A5]], %{{.*}}[0:1][0:1] : vector<2x2xf32> into vector<4x4xf32>
 
 
-// CHECK-NEXT: %[[S11:.*]] = vector.extract_strided_slice %{{.*}} {offsets = [0, 2], sizes = [2, 2], strides = [1, 1]} : vector<4x4xf32> to vector<2x2xf32>
+// CHECK-NEXT: %[[S11:.*]] = vector.extract_strided_slice %{{.*}}[0:2:1][2:2:1] : vector<4x4xf32> to vector<2x2xf32>
 // CHECK-NEXT: %[[A6:.*]] = arith.addf %[[S11]], %[[A2]] : vector<2x2xf32>
-// CHECK-NEXT: %[[R2:.*]] = vector.insert_strided_slice %[[A6]], %[[R1]] {offsets = [0, 2], strides = [1, 1]} : vector<2x2xf32> into vector<4x4xf32>
+// CHECK-NEXT: %[[R2:.*]] = vector.insert_strided_slice %[[A6]], %[[R1]][0:1][2:1] : vector<2x2xf32> into vector<4x4xf32>
 
-// CHECK-NEXT: %[[S13:.*]] = vector.extract_strided_slice %{{.*}} {offsets = [2, 0], sizes = [2, 2], strides = [1, 1]} : vector<4x4xf32> to vector<2x2xf32>
+// CHECK-NEXT: %[[S13:.*]] = vector.extract_strided_slice %{{.*}}[2:2:1][0:2:1] : vector<4x4xf32> to vector<2x2xf32>
 // CHECK-NEXT: %[[A7:.*]] = arith.addf %[[S13]], %[[A3]] : vector<2x2xf32>
-// CHECK-NEXT: %[[R3:.*]] = vector.insert_strided_slice %[[A7]], %[[R2]] {offsets = [2, 0], strides = [1, 1]} : vector<2x2xf32> into vector<4x4xf32>
+// CHECK-NEXT: %[[R3:.*]] = vector.insert_strided_slice %[[A7]], %[[R2]][2:1][0:1] : vector<2x2xf32> into vector<4x4xf32>
 
-// CHECK-NEXT: %[[S15:.*]] = vector.extract_strided_slice %{{.*}} {offsets = [2, 2], sizes = [2, 2], strides = [1, 1]} : vector<4x4xf32> to vector<2x2xf32>
+// CHECK-NEXT: %[[S15:.*]] = vector.extract_strided_slice %{{.*}}[2:2:1][2:2:1] : vector<4x4xf32> to vector<2x2xf32>
 // CHECK-NEXT: %[[A8:.*]] = arith.addf %[[S15]], %[[A4]] : vector<2x2xf32>
-// CHECK-NEXT: %[[R4:.*]] = vector.insert_strided_slice %[[A8]], %[[R3]] {offsets = [2, 2], strides = [1, 1]} : vector<2x2xf32> into vector<4x4xf32>
+// CHECK-NEXT: %[[R4:.*]] = vector.insert_strided_slice %[[A8]], %[[R3]][2:1][2:1] : vector<2x2xf32> into vector<4x4xf32>
 
 // CHECK-NEXT: return %[[R4]] : vector<4x4xf32>
 
@@ -302,10 +302,10 @@ func.func @bubble_down_bitcast_in_extract(%src: vector<4xf32>) -> (f16, f16) {
 // CHECK-LABEL: func @bubble_down_bitcast_in_strided_slice_extract
 //  CHECK-SAME: %[[SRC:.+]]: vector<4xf32>
 func.func @bubble_down_bitcast_in_strided_slice_extract(%arg0: vector<4xf32>) -> vector<4xf16> {
-  // CHECK: %[[EXTRACT:.+]] = vector.extract_strided_slice %[[SRC]] {offsets = [2], sizes = [2], strides = [1]} : vector<4xf32> to vector<2xf32>
+  // CHECK: %[[EXTRACT:.+]] = vector.extract_strided_slice %[[SRC]][2:2:1] : vector<4xf32> to vector<2xf32>
   // CHECK: %[[CAST:.+]] = vector.bitcast %[[EXTRACT]] : vector<2xf32> to vector<4xf16>
   %cast = vector.bitcast %arg0: vector<4xf32> to vector<8xf16>
-  %0 = vector.extract_strided_slice %cast {offsets = [4], sizes = [4], strides = [1]} : vector<8xf16> to vector<4xf16>
+  %0 = vector.extract_strided_slice %cast[4:4:1] : vector<8xf16> to vector<4xf16>
   // CHECK: return %[[CAST]]
   return %0: vector<4xf16>
 }
@@ -313,10 +313,10 @@ func.func @bubble_down_bitcast_in_strided_slice_extract(%arg0: vector<4xf32>) ->
 // CHECK-LABEL: func @bubble_down_bitcast_in_strided_slice_extract_full_last_dim
 //  CHECK-SAME: %[[SRC:.+]]: vector<4x2xf32>
 func.func @bubble_down_bitcast_in_strided_slice_extract_full_last_dim(%arg0: vector<4x2xf32>) -> vector<2x4xf16> {
-  // CHECK: %[[EXTRACT:.+]] = vector.extract_strided_slice %[[SRC]] {offsets = [1], sizes = [2], strides = [1]} : vector<4x2xf32> to vector<2x2xf32>
+  // CHECK: %[[EXTRACT:.+]] = vector.extract_strided_slice %[[SRC]][1:2:1] : vector<4x2xf32> to vector<2x2xf32>
   // CHECK: %[[CAST:.+]] = vector.bitcast %[[EXTRACT]] : vector<2x2xf32> to vector<2x4xf16>
   %cast = vector.bitcast %arg0: vector<4x2xf32> to vector<4x4xf16>
-  %0 = vector.extract_strided_slice %cast {offsets = [1], sizes = [2], strides = [1]} : vector<4x4xf16> to vector<2x4xf16>
+  %0 = vector.extract_strided_slice %cast[1:2:1] : vector<4x4xf16> to vector<2x4xf16>
   // CHECK: return %[[CAST]]
   return %0: vector<2x4xf16>
 }
@@ -326,7 +326,7 @@ func.func @bubble_down_bitcast_in_strided_slice_extract_odd_offset(%arg0: vector
   // CHECK: vector.bitcast
   // CHECK-NEXT: vector.extract_strided_slice
   %cast = vector.bitcast %arg0: vector<4xf32> to vector<8xf16>
-  %0 = vector.extract_strided_slice %cast {offsets = [3], sizes = [4], strides = [1]} : vector<8xf16> to vector<4xf16>
+  %0 = vector.extract_strided_slice %cast[3:4:1] : vector<8xf16> to vector<4xf16>
   return %0: vector<4xf16>
 }
 
@@ -335,7 +335,7 @@ func.func @bubble_down_bitcast_in_strided_slice_extract_odd_size(%arg0: vector<4
   // CHECK: vector.bitcast
   // CHECK-NEXT: vector.extract_strided_slice
   %cast = vector.bitcast %arg0: vector<4xf32> to vector<8xf16>
-  %0 = vector.extract_strided_slice %cast {offsets = [0], sizes = [3], strides = [1]} : vector<8xf16> to vector<3xf16>
+  %0 = vector.extract_strided_slice %cast[0:3:1] : vector<8xf16> to vector<3xf16>
   return %0: vector<3xf16>
 }
 
@@ -390,10 +390,10 @@ func.func @bubble_up_bitcast_in_strided_slice_insert(%dst: vector<8xf16>, %src1:
   // CHECK-DAG: %[[CAST_SRC1:.+]] = vector.bitcast %[[SRC1]] : vector<4xf16> to vector<2xf32>
   // CHECK-DAG: %[[CAST_SRC2:.+]] = vector.bitcast %[[SRC2]] : vector<4xf16> to vector<2xf32>
   // CHECK-DAG: %[[CAST_DST:.+]] = vector.bitcast %[[DST]] : vector<8xf16> to vector<4xf32>
-  // CHECK: %[[INSERT1:.+]] = vector.insert_strided_slice %[[CAST_SRC1]], %[[CAST_DST]] {offsets = [0], strides = [1]} : vector<2xf32> into vector<4xf32>
-  // CHECK: %[[INSERT2:.+]] = vector.insert_strided_slice %[[CAST_SRC2]], %[[INSERT1]] {offsets = [2], strides = [1]} : vector<2xf32> into vector<4xf32>
-  %0 = vector.insert_strided_slice %src1, %dst {offsets = [0], strides = [1]} : vector<4xf16> into vector<8xf16>
-  %1 = vector.insert_strided_slice %src2, %0   {offsets = [4], strides = [1]} : vector<4xf16> into vector<8xf16>
+  // CHECK: %[[INSERT1:.+]] = vector.insert_strided_slice %[[CAST_SRC1]], %[[CAST_DST]][0:1] : vector<2xf32> into vector<4xf32>
+  // CHECK: %[[INSERT2:.+]] = vector.insert_strided_slice %[[CAST_SRC2]], %[[INSERT1]][2:1] : vector<2xf32> into vector<4xf32>
+  %0 = vector.insert_strided_slice %src1, %dst[0:1] : vector<4xf16> into vector<8xf16>
+  %1 = vector.insert_strided_slice %src2, %0  [4:1] : vector<4xf16> into vector<8xf16>
   %cast = vector.bitcast %1: vector<8xf16> to vector<4xf32>
   // CHECK: return %[[INSERT2]]
   return %cast: vector<4xf32>
@@ -403,7 +403,7 @@ func.func @bubble_up_bitcast_in_strided_slice_insert(%dst: vector<8xf16>, %src1:
 func.func @bubble_up_bitcast_in_strided_slice_insert_odd_offset(%dst: vector<8xf16>, %src: vector<4xf16>) -> vector<4xf32> {
   // CHECK: vector.insert_strided_slice
   // CHECK-NEXT: vector.bitcast
-  %0 = vector.insert_strided_slice %src, %dst {offsets = [3], strides = [1]} : vector<4xf16> into vector<8xf16>
+  %0 = vector.insert_strided_slice %src, %dst[3:1] : vector<4xf16> into vector<8xf16>
   %cast = vector.bitcast %0: vector<8xf16> to vector<4xf32>
   return %cast: vector<4xf32>
 }
@@ -412,7 +412,7 @@ func.func @bubble_up_bitcast_in_strided_slice_insert_odd_offset(%dst: vector<8xf
 func.func @bubble_up_bitcast_in_strided_slice_insert_different_rank(%dst: vector<16x4x8xf16>, %src: vector<2x4xf16>) -> vector<16x4x4xf32> {
   // CHECK: vector.insert_strided_slice
   // CHECK-NEXT: vector.bitcast
-  %0 = vector.insert_strided_slice %src, %dst {offsets = [0, 0, 2], strides = [1, 1]} : vector<2x4xf16> into vector<16x4x8xf16>
+  %0 = vector.insert_strided_slice %src, %dst[0][0:1][2:1] : vector<2x4xf16> into vector<16x4x8xf16>
   %cast = vector.bitcast %0: vector<16x4x8xf16> to vector<16x4x4xf32>
   return %cast: vector<16x4x4xf32>
 }
@@ -421,7 +421,7 @@ func.func @bubble_up_bitcast_in_strided_slice_insert_different_rank(%dst: vector
 func.func @bubble_up_bitcast_in_strided_slice_insert_odd_shape(%dst: vector<2xf16>, %src: vector<1xf16>) -> vector<1xf32> {
   // CHECK: vector.insert_strided_slice
   // CHECK-NEXT: vector.bitcast
-  %0 = vector.insert_strided_slice %src, %dst {offsets = [0], strides = [1]} : vector<1xf16> into vector<2xf16>
+  %0 = vector.insert_strided_slice %src, %dst[0:1] : vector<1xf16> into vector<2xf16>
   %cast = vector.bitcast %0: vector<2xf16> to vector<1xf32>
   return %cast: vector<1xf32>
 }
@@ -430,7 +430,7 @@ func.func @bubble_up_bitcast_in_strided_slice_insert_odd_shape(%dst: vector<2xf1
 func.func @bubble_up_bitcast_in_strided_slice_insert_larger_odd_shape(%dst: vector<8xf16>, %src: vector<3xf16>) -> vector<4xf32> {
   // CHECK: vector.insert_strided_slice
   // CHECK-NEXT: vector.bitcast
-  %0 = vector.insert_strided_slice %src, %dst {offsets = [0], strides = [1]} : vector<3xf16> into vector<8xf16>
+  %0 = vector.insert_strided_slice %src, %dst[0:1] : vector<3xf16> into vector<8xf16>
   %cast = vector.bitcast %0: vector<8xf16> to vector<4xf32>
   return %cast: vector<4xf32>
 }
diff --git a/mlir/test/Dialect/Vector/vector-unroll-options.mlir b/mlir/test/Dialect/Vector/vector-unroll-options.mlir
index c51fc755dffa8..349d998d70e1d 100644
--- a/mlir/test/Dialect/Vector/vector-unroll-options.mlir
+++ b/mlir/test/Dialect/Vector/vector-unroll-options.mlir
@@ -194,22 +194,22 @@ func.func @vector_multi_reduction(%v : vector<4x6xf32>, %acc: vector<4xf32>) ->
 }
 // CHECK-LABEL: func @vector_multi_reduction
 //       CHECK:   %[[V0:.*]] = arith.constant dense<0.000000e+00> : vector<4xf32>
-//       CHECK:   %[[E0:.*]] = vector.extract_strided_slice %{{.*}} {offsets = [0, 0], sizes = [2, 2], strides = [1, 1]} : vector<4x6xf32> to vector<2x2xf32>
-//       CHECK:   %[[ACC0:.*]] = vector.extract_strided_slice %{{.*}} {offsets = [0], sizes = [2], strides = [1]} : vector<4xf32> to vector<2xf32>
+//       CHECK:   %[[E0:.*]] = vector.extract_strided_slice %{{.*}}[0:2:1][0:2:1] : vector<4x6xf32> to vector<2x2xf32>
+//       CHECK:   %[[ACC0:.*]] = vector.extract_strided_slice %{{.*}}[0:2:1] : vector<4xf32> to vector<2xf32>
 //       CHECK:   %[[R0:.*]] = vector.multi_reduction <add>, %[[E0]], %[[ACC0]] [1] : vector<2x2xf32> to vector<2xf32>
-//       CHECK:   %[[E1:.*]] = vector.extract_strided_slice %{{.*}} {offsets = [0, 2], sizes = [2, 2], strides = [1, 1]} : vector<4x6xf32> to vector<2x2xf32>
+//       CHECK:   %[[E1:.*]] = vector.extract_strided_slice %{{.*}}[0:2:1][2:2:1] : vector<4x6xf32> to vector<2x2xf32>
 //       CHECK:   %[[R1:.*]] = vector.multi_reduction <add>, %[[E1]], %[[R0]] [1] : vector<2x2xf32> to vector<2xf32>
-//       CHECK:   %[[E2:.*]] = vector.extract_strided_slice %{{.*}} {offsets = [0, 4], sizes = [2, 2], strides = [1, 1]} : vector<4x6xf32> to vector<2x2xf32>
+//       CHECK:   %[[E2:.*]] = vector.extract_strided_slice %{{.*}}[0:2:1][4:2:1] : vector<4x6xf32> to vector<2x2xf32>
 //       CHECK:   %[[R2:.*]] = vector.multi_reduction <add>, %[[E2]], %[[R1]] [1] : vector<2x2xf32> to vector<2xf32>
-//       CHECK:   %[[E3:.*]] = vector.extract_strided_slice %{{.*}} {offsets = [2, 0], sizes = [2, 2], strides = [1, 1]} : vector<4x6xf32> to vector<2x2xf32>
-//       CHECK:   %[[ACC1:.*]] = vector.extract_strided_slice %{{.*}} {offsets = [2], sizes = [2], strides = [1]} : vector<4xf32> to vector<2xf32>
+//       CHECK:   %[[E3:.*]] = vector.extract_strided_slice %{{.*}}[2:2:1][0:2:1] : vector<4x6xf32> to vector<2x2xf32>
+//       CHECK:   %[[ACC1:.*]] = vector.extract_strided_slice %{{.*}}[2:2:1] : vector<4xf32> to vector<2xf32>
 //       CHECK:   %[[R3:.*]] = vector.multi_reduction <add>, %[[E3]], %[[ACC1]] [1] : vector<2x2xf32> to vector<2xf32>
-//       CHECK:   %[[E4:.*]] = vector.extract_strided_slice %{{.*}} {offsets = [2, 2], sizes = [2, 2], strides = [1, 1]} : vector<4x6xf32> to vector<2x2xf32>
+//       CHECK:   %[[E4:.*]] = vector.extract_strided_slice %{{.*}}[2:2:1][2:2:1] : vector<4x6xf32> to vector<2x2xf32>
 //       CHECK:   %[[R4:.*]] = vector.multi_reduction <add>, %[[E4]], %[[R3]] [1] : vector<2x2xf32> to vector<2xf32>
-//       CHECK:   %[[E5:.*]] = vector.extract_strided_slice %{{.*}} {offsets = [2, 4], sizes = [2, 2], strides = [1, 1]} : vector<4x6xf32> to vector<2x2xf32>
+//       CHECK:   %[[E5:.*]] = vector.extract_strided_slice %{{.*}}[2:2:1][4:2:1] : vector<4x6xf32> to vector<2x2xf32>
 //       CHECK:   %[[R5:.*]] = vector.multi_reduction <add>, %[[E5]], %[[R4]] [1] : vector<2x2xf32> to vector<2xf32>
-//       CHECK:   %[[V1:.*]] = vector.insert_strided_slice %[[R2]], %[[V0]] {offsets = [0], strides = [1]} : vector<2xf32> into vector<4xf32>
-//       CHECK:   %[[V2:.*]] = vector.insert_strided_slice %[[R5]], %[[V1]] {offsets = [2], strides = [1]} : vector<2xf32> into vector<4xf32>
+//       CHECK:   %[[V1:.*]] = vector.insert_strided_slice %[[R2]], %[[V0]][0:1] : vector<2xf32> into vector<4xf32>
+//       CHECK:   %[[V2:.*]] = vector.insert_strided_slice %[[R5]], %[[V1]][2:1] : vector<2xf32> into vector<4xf32>
 //       CHECK:   return %[[V2]] : vector<4xf32>
 
 
@@ -238,30 +238,30 @@ func.func @vector_tranpose(%v : vector<2x4x3x8xf32>) -> vector<2x3x8x4xf32> {
 }
 // CHECK-LABEL: func @vector_tranpose
 //       CHECK:   %[[VI:.*]] = arith.constant dense<0.000000e+00> : vector<2x3x8x4xf32>
-//       CHECK:   %[[E0:.*]] = vector.extract_strided_slice %{{.*}} {offsets = [0, 0, 0, 0], sizes = [1, 2, 3, 4], strides = [1, 1, 1, 1]} : vector<2x4x3x8xf32> to vector<1x2x3x4xf32>
+//       CHECK:   %[[E0:.*]] = vector.extract_strided_slice %{{.*}}[0:1:1][0:2:1][0:3:1][0:4:1] : vector<2x4x3x8xf32> to vector<1x2x3x4xf32>
 //       CHECK:   %[[T0:.*]] = vector.transpose %[[E0]], [0, 2, 3, 1] : vector<1x2x3x4xf32> to vector<1x3x4x2xf32>
-//       CHECK:   %[[V0:.*]] = vector.insert_strided_slice %[[T0]], %[[VI]] {offsets = [0, 0, 0, 0], strides = [1, 1, 1, 1]} : vector<1x3x4x2xf32> into vector<2x3x8x4xf32>
-//       CHECK:   %[[E1:.*]] = vector.extract_strided_slice %{{.*}} {offsets = [0, 2, 0, 0], sizes = [1, 2, 3, 4], strides = [1, 1, 1, 1]} : vector<2x4x3x8xf32> to vector<1x2x3x4xf32>
+//       CHECK:   %[[V0:.*]] = vector.insert_strided_slice %[[T0]], %[[VI]][0:1][0:1][0:1][0:1] : vector<1x3x4x2xf32> into vector<2x3x8x4xf32>
+//       CHECK:   %[[E1:.*]] = vector.extract_strided_slice %{{.*}}[0:1:1][2:2:1][0:3:1][0:4:1] : vector<2x4x3x8xf32> to vector<1x2x3x4xf32>
 //       CHECK:   %[[T1:.*]] = vector.transpose %[[E1]], [0, 2, 3, 1] : vector<1x2x3x4xf32> to vector<1x3x4x2xf32>
-//       CHECK:   %[[V1:.*]] = vector.insert_strided_slice %[[T1]], %[[V0]] {offsets = [0, 0, 0, 2], strides = [1, 1, 1, 1]} : vector<1x3x4x2xf32> into vector<2x3x8x4xf32>
-//       CHECK:   %[[E2:.*]] = vector.extract_strided_slice %{{.*}} {offsets = [0, 0, 0, 4], sizes = [1, 2, 3, 4], strides = [1, 1, 1, 1]} : vector<2x4x3x8xf32> to vector<1x2x3x4xf32>
+//       CHECK:   %[[V1:.*]] = vector.insert_strided_slice %[[T1]], %[[V0]][0:1][0:1][0:1][2:1] : vector<1x3x4x2xf32> into vector<2x3x8x4xf32>
+//       CHECK:   %[[E2:.*]] = vector.extract_strided_slice %{{.*}}[0:1:1][0:2:1][0:3:1][4:4:1] : vector<2x4x3x8xf32> to vector<1x2x3x4xf32>
 //       CHECK:   %[[T2:.*]] = vector.transpose %[[E2]], [0, 2, 3, 1] : vector<1x2x3x4xf32> to vector<1x3x4x2xf32>
-//       CHECK:   %[[V2:.*]] = vector.insert_strided_slice %[[T2]], %[[V1]] {offsets = [0, 0, 4, 0], strides = [1, 1, 1, 1]} : vector<1x3x4x2xf32> into vector<2x3x8x4xf32>
-//       CHECK:   %[[E3:.*]] = vector.extract_strided_slice %{{.*}} {offsets = [0, 2, 0, 4], sizes = [1, 2, 3, 4], strides = [1, 1, 1, 1]} : vector<2x4x3x8xf32> to vector<1x2x3x4xf32>
+//       CHECK:   %[[V2:.*]] = vector.insert_strided_slice %[[T2]], %[[V1]][0:1][0:1][4:1][0:1] : vector<1x3x4x2xf32> into vector<2x3x8x4xf32>
+//       CHECK:   %[[E3:.*]] = vector.extract_strided_slice %{{.*}}[0:1:1][2:2:1][0:3:1][4:4:1] : vector<2x4x3x8xf32> to vector<1x2x3x4xf32>
 //       CHECK:   %[[T3:.*]] = vector.transpose %[[E3]], [0, 2, 3, 1] : vector<1x2x3x4xf32> to vector<1x3x4x2xf32>
-//       CHECK:   %[[V3:.*]] = vector.insert_strided_slice %[[T3]], %[[V2]] {offsets = [0, 0, 4, 2], strides = [1, 1, 1, 1]} : vector<1x3x4x2xf32> into vector<2x3x8x4xf32>
-//       CHECK:   %[[E4:.*]] = vector.extract_strided_slice %{{.*}} {offsets = [1, 0, 0, 0], sizes = [1, 2, 3, 4], strides = [1, 1, 1, 1]} : vector<2x4x3x8xf32> to vector<1x2x3x4xf32>
+//       CHECK:   %[[V3:.*]] = vector.insert_strided_slice %[[T3]], %[[V2]][0:1][0:1][4:1][2:1] : vector<1x3x4x2xf32> into vector<2x3x8x4xf32>
+//       CHECK:   %[[E4:.*]] = vector.extract_strided_slice %{{.*}}[1:1:1][0:2:1][0:3:1][0:4:1] : vector<2x4x3x8xf32> to vector<1x2x3x4xf32>
 //       CHECK:   %[[T4:.*]] = vector.transpose %[[E4]], [0, 2, 3, 1] : vector<1x2x3x4xf32> to vector<1x3x4x2xf32>
-//       CHECK:   %[[V4:.*]] = vector.insert_strided_slice %[[T4]], %[[V3]] {offsets = [1, 0, 0, 0], strides = [1, 1, 1, 1]} : vector<1x3x4x2xf32> into vector<2x3x8x4xf32>
-//       CHECK:   %[[E5:.*]] = vector.extract_strided_slice %{{.*}} {offsets = [1, 2, 0, 0], sizes = [1, 2, 3, 4], strides = [1, 1, 1, 1]} : vector<2x4x3x8xf32> to vector<1x2x3x4xf32>
+//       CHECK:   %[[V4:.*]] = vector.insert_strided_slice %[[T4]], %[[V3]][1:1][0:1][0:1][0:1] : vector<1x3x4x2xf32> into vector<2x3x8x4xf32>
+//       CHECK:   %[[E5:.*]] = vector.extract_strided_slice %{{.*}}[1:1:1][2:2:1][0:3:1][0:4:1] : vector<2x4x3x8xf32> to vector<1x2x3x4xf32>
 //       CHECK:   %[[T5:.*]] = vector.transpose %[[E5]], [0, 2, 3, 1] : vector<1x2x3x4xf32> to vector<1x3x4x2xf32>
-//       CHECK:   %[[V5:.*]] = vector.insert_strided_slice %[[T5]], %[[V4]] {offsets = [1, 0, 0, 2], strides = [1, 1, 1, 1]} : vector<1x3x4x2xf32> into vector<2x3x8x4xf32>
-//       CHECK:   %[[E6:.*]] = vector.extract_strided_slice %{{.*}} {offsets = [1, 0, 0, 4], sizes = [1, 2, 3, 4], strides = [1, 1, 1, 1]} : vector<2x4x3x8xf32> to vector<1x2x3x4xf32>
+//       CHECK:   %[[V5:.*]] = vector.insert_strided_slice %[[T5]], %[[V4]][1:1][0:1][0:1][2:1] : vector<1x3x4x2xf32> into vector<2x3x8x4xf32>
+//       CHECK:   %[[E6:.*]] = vector.extract_strided_slice %{{.*}}[1:1:1][0:2:1][0:3:1][4:4:1] : vector<2x4x3x8xf32> to vector<1x2x3x4xf32>
 //       CHECK:   %[[T6:.*]] = vector.transpose %[[E6]], [0, 2, 3, 1] : vector<1x2x3x4xf32> to vector<1x3x4x2xf32>
-//       CHECK:   %[[V6:.*]] = vector.insert_strided_slice %[[T6]], %[[V5]] {offsets = [1, 0, 4, 0], strides = [1, 1, 1, 1]} : vector<1x3x4x2xf32> into vector<2x3x8x4xf32>
-//       CHECK:   %[[E7:.*]] = vector.extract_strided_slice %{{.*}} {offsets = [1, 2, 0, 4], sizes = [1, 2, 3, 4], strides = [1, 1, 1, 1]} : vector<2x4x3x8xf32> to vector<1x2x3x4xf32>
+//       CHECK:   %[[V6:.*]] = vector.insert_strided_slice %[[T6]], %[[V5]][1:1][0:1][4:1][0:1] : vector<1x3x4x2xf32> into vector<2x3x8x4xf32>
+//       CHECK:   %[[E7:.*]] = vector.extract_strided_slice %{{.*}}[1:1:1][2:2:1][0:3:1][4:4:1] : vector<2x4x3x8xf32> to vector<1x2x3x4xf32>
 //       CHECK:   %[[T7:.*]] = vector.transpose %[[E7]], [0, 2, 3, 1] : vector<1x2x3x4xf32> to vector<1x3x4x2xf32>
-//       CHECK:   %[[V7:.*]] = vector.insert_strided_slice %[[T7]], %[[V6]] {offsets = [1, 0, 4, 2], strides = [1, 1, 1, 1]} : vector<1x3x4x2xf32> into vector<2x3x8x4xf32>
+//       CHECK:   %[[V7:.*]] = vector.insert_strided_slice %[[T7]], %[[V6]][1:1][0:1][4:1][2:1] : vector<1x3x4x2xf32> into vector<2x3x8x4xf32>
 //       CHECK:   return %[[V7]] : vector<2x3x8x4xf32>
 
 // -----
diff --git a/mlir/test/Integration/Dialect/Vector/CPU/contraction.mlir b/mlir/test/Integration/Dialect/Vector/CPU/contraction.mlir
index ad35ff65b1157..1a1b3fc5562c5 100644
--- a/mlir/test/Integration/Dialect/Vector/CPU/contraction.mlir
+++ b/mlir/test/Integration/Dialect/Vector/CPU/contraction.mlir
@@ -183,8 +183,8 @@ func.func @entry() {
   %10 = vector.insert %b, %9[1] : vector<2xf32> into vector<3x2xf32>
   %C = vector.insert %c, %10[2] : vector<2xf32> into vector<3x2xf32>
   %cst = arith.constant dense<0.000000e+00> : vector<2x4xf32>
-  %11 = vector.insert_strided_slice %A, %cst {offsets = [0, 0], strides = [1, 1]} : vector<2x2xf32> into vector<2x4xf32>
-  %D = vector.insert_strided_slice %B, %11 {offsets = [0, 2], strides = [1, 1]} : vector<2x2xf32> into vector<2x4xf32>
+  %11 = vector.insert_strided_slice %A, %cst[0:1][0:1] : vector<2x2xf32> into vector<2x4xf32>
+  %D = vector.insert_strided_slice %B, %11[0:1][2:1] : vector<2x2xf32> into vector<2x4xf32>
 
   vector.print %A : vector<2x2xf32>
   vector.print %B : vector<2x2xf32>
diff --git a/mlir/test/Integration/Dialect/Vector/CPU/extract-strided-slice.mlir b/mlir/test/Integration/Dialect/Vector/CPU/extract-strided-slice.mlir
index 47c3211b8c487..0661fd416f708 100644
--- a/mlir/test/Integration/Dialect/Vector/CPU/extract-strided-slice.mlir
+++ b/mlir/test/Integration/Dialect/Vector/CPU/extract-strided-slice.mlir
@@ -20,7 +20,7 @@ func.func @entry() {
   %a3 = vector.insert %v3, %a2[2, 1] : vector<8xf32> into vector<4x4x8xf32>
   %a4 = vector.insert %v4, %a3[2, 2] : vector<8xf32> into vector<4x4x8xf32>
 
-  %ss = vector.extract_strided_slice %a4 {offsets = [1, 1], sizes = [2, 2], strides = [1, 1]} : vector<4x4x8xf32> to vector<2x2x8xf32>
+  %ss = vector.extract_strided_slice %a4[1:2:1][1:2:1] : vector<4x4x8xf32> to vector<2x2x8xf32>
 
   vector.print %ss : vector<2x2x8xf32>
   //
diff --git a/mlir/test/Integration/Dialect/Vector/CPU/insert-strided-slice.mlir b/mlir/test/Integration/Dialect/Vector/CPU/insert-strided-slice.mlir
index 91cf95a6ec376..4e0da4d7de0fe 100644
--- a/mlir/test/Integration/Dialect/Vector/CPU/insert-strided-slice.mlir
+++ b/mlir/test/Integration/Dialect/Vector/CPU/insert-strided-slice.mlir
@@ -13,10 +13,10 @@ func.func @entry() {
   %v3 = vector.broadcast %f3 : f32 to vector<4x4xf32>
   %v4 = vector.broadcast %f4 : f32 to vector<1xf32>
 
-  %s1 = vector.insert_strided_slice %v1, %v3 {offsets = [2, 0], strides = [1]} : vector<4xf32> into vector<4x4xf32>
-  %s2 = vector.insert_strided_slice %v2, %s1 {offsets = [1, 1], strides = [1]} : vector<3xf32> into vector<4x4xf32>
-  %s3 = vector.insert_strided_slice %v2, %s2 {offsets = [0, 0], strides = [1]} : vector<3xf32> into vector<4x4xf32>
-  %s4 = vector.insert_strided_slice %v4, %s3 {offsets = [3, 3], strides = [1]} : vector<1xf32> into vector<4x4xf32>
+  %s1 = vector.insert_strided_slice %v1, %v3[2][0:1] : vector<4xf32> into vector<4x4xf32>
+  %s2 = vector.insert_strided_slice %v2, %s1[1][1:1] : vector<3xf32> into vector<4x4xf32>
+  %s3 = vector.insert_strided_slice %v2, %s2[0][0:1] : vector<3xf32> into vector<4x4xf32>
+  %s4 = vector.insert_strided_slice %v4, %s3[3][3:1] : vector<1xf32> into vector<4x4xf32>
 
   vector.print %v3 : vector<4x4xf32>
   vector.print %s1 : vector<4x4xf32>
diff --git a/mlir/test/Integration/Dialect/Vector/CPU/transpose.mlir b/mlir/test/Integration/Dialect/Vector/CPU/transpose.mlir
index 11327ee2c9988..78a200e6fb759 100644
--- a/mlir/test/Integration/Dialect/Vector/CPU/transpose.mlir
+++ b/mlir/test/Integration/Dialect/Vector/CPU/transpose.mlir
@@ -34,8 +34,8 @@ func.func @entry() {
   %10 = vector.insert %b, %9[1] : vector<2xf32> into vector<3x2xf32>
   %C = vector.insert %c, %10[2] : vector<2xf32> into vector<3x2xf32>
   %cst = arith.constant dense<0.000000e+00> : vector<2x4xf32>
-  %11 = vector.insert_strided_slice %A, %cst {offsets = [0, 0], strides = [1, 1]} : vector<2x2xf32> into vector<2x4xf32>
-  %D = vector.insert_strided_slice %B, %11 {offsets = [0, 2], strides = [1, 1]} : vector<2x2xf32> into vector<2x4xf32>
+  %11 = vector.insert_strided_slice %A, %cst[0:1][0:1] : vector<2x2xf32> into vector<2x4xf32>
+  %D = vector.insert_strided_slice %B, %11[0:1][2:1] : vector<2x2xf32> into vector<2x4xf32>
 
   vector.print %A : vector<2x2xf32>
   vector.print %B : vector<2x2xf32>

From bfb9af9c68d4cb2ecb0174a33c69e9d0b26293ea Mon Sep 17 00:00:00 2001
From: MacDue <macdue at dueutil.tech>
Date: Sat, 3 Aug 2024 21:24:04 +0100
Subject: [PATCH 3/3] Manually upgrade three tests (with non-standard
 syntax/checks)

---
 mlir/test/Dialect/Vector/invalid.mlir         | 11 +--
 mlir/test/Dialect/Vector/linearize.mlir       | 10 +--
 .../Dialect/Vector/vector-unroll-options.mlir | 88 +++++++++----------
 3 files changed, 50 insertions(+), 59 deletions(-)

diff --git a/mlir/test/Dialect/Vector/invalid.mlir b/mlir/test/Dialect/Vector/invalid.mlir
index 1cb6b83a2e4f1..51c23e1000d5a 100644
--- a/mlir/test/Dialect/Vector/invalid.mlir
+++ b/mlir/test/Dialect/Vector/invalid.mlir
@@ -631,13 +631,6 @@ func.func @insert_strided_slice(%a: vector<4x4xf32>, %b: vector<4x8x16xf32>) {
 
 // -----
 
-func.func @insert_strided_slice(%a: vector<4x4xf32>, %b: vector<4x8x16xf32>) {
-  // expected-error@+1 {{expected source rank to be no greater than destination rank}}
-  %1 = vector.insert_strided_slice %b, %a[2:1][2:1][2:1] : vector<4x8x16xf32> into vector<4x4xf32>
-}
-
-// -----
-
 func.func @insert_strided_slice(%a: vector<4x4xf32>, %b: vector<4x8x16xf32>) {
   // expected-error@+1 {{op expected offsets dimension 0 to be confined to [0, 4)}}
   %1 = vector.insert_strided_slice %a, %b[100][100:1][100:1] : vector<4x4xf32> into vector<4x8x16xf32>
@@ -677,13 +670,13 @@ func.func @insert_strided_slice_scalable(%a : vector<1x1x4xi32>, %b: vector<1x4x
 
 func.func @extract_strided_slice(%arg0: vector<4x8x16xf32>) {
   // expected-error@+1 {{expected offsets, sizes and strides attributes of same size}}
-  %1 = vector.extract_strided_slice %arg0[100:2:1][100:2:1] : vector<4x8x16xf32> to vector<2x2x16xf32>
+  %1 = vector.extract_strided_slice %arg0[100][4:2:1][0:2:1] : vector<4x8x16xf32> to vector<2x2x16xf32>
 }
 
 // -----
 
 func.func @extract_strided_slice(%arg0: vector<4x8x16xf32>) {
-  // expected-error@+1 {{expected offsets attribute of rank no greater than vector rank}}
+  // expected-error@+1 {{op expected offsets to have rank no greater than vector rank}}
   %1 = vector.extract_strided_slice %arg0[2:2:1][2:2:1][2:2:1][2:2:1] : vector<4x8x16xf32> to vector<2x2x16xf32>
 }
 
diff --git a/mlir/test/Dialect/Vector/linearize.mlir b/mlir/test/Dialect/Vector/linearize.mlir
index 59b7b7b58adfb..adc200706cc63 100644
--- a/mlir/test/Dialect/Vector/linearize.mlir
+++ b/mlir/test/Dialect/Vector/linearize.mlir
@@ -172,18 +172,17 @@ func.func @test_extract_strided_slice_1(%arg0 : vector<4x8xf32>) -> vector<2x2xf
 
   // BW-0: %[[RES:.*]] = vector.extract_strided_slice %[[ARG:.*]][0:2:1][4:2:1] : vector<4x8xf32> to vector<2x2xf32>
   // BW-0: return %[[RES]] : vector<2x2xf32>
-  %0 = vector.extract_strided_slice %arg0 { sizes = [2, 2], strides = [1, 1], offsets = [0, 4]}
-     : vector<4x8xf32> to vector<2x2xf32>
+  %0 = vector.extract_strided_slice %arg0[0:2:1][4:2:1] : vector<4x8xf32> to vector<2x2xf32>
   return %0 : vector<2x2xf32>
 }
 
 // ALL-LABEL:   func.func @test_extract_strided_slice_1_scalable(
 // ALL-SAME:    %[[VAL_0:.*]]: vector<4x[8]xf32>) -> vector<2x[8]xf32> {
-func.func @test_extract_strided_slice_1_scalable(%arg0: vector<4x[8]xf32>) -> vector<2x[8]xf32> {  
+func.func @test_extract_strided_slice_1_scalable(%arg0: vector<4x[8]xf32>) -> vector<2x[8]xf32> {
   // ALL-NOT: vector.shuffle
   // ALL-NOT: vector.shape_cast
   // ALL: %[[RES:.*]] = vector.extract_strided_slice %[[VAL_0]][1:2:1][0:8:1] : vector<4x[8]xf32> to vector<2x[8]xf32>
-  %0 = vector.extract_strided_slice %arg0 { sizes = [2, 8], strides = [1, 1], offsets = [1, 0] } : vector<4x[8]xf32> to vector<2x[8]xf32>
+  %0 = vector.extract_strided_slice %arg0[1:2:1][0:8:1] : vector<4x[8]xf32> to vector<2x[8]xf32>
   // ALL: return %[[RES]] : vector<2x[8]xf32>
   return %0 : vector<2x[8]xf32>
 }
@@ -206,8 +205,7 @@ func.func @test_extract_strided_slice_2(%arg0 : vector<2x8x2xf32>) -> vector<1x4
 
   // BW-0: %[[RES:.*]] = vector.extract_strided_slice %[[ORIG_ARG]][1:1:1][2:4:1] : vector<2x8x2xf32> to vector<1x4x2xf32>
   // BW-0: return %[[RES]] : vector<1x4x2xf32>
-  %0 = vector.extract_strided_slice %arg0 { offsets = [1, 2], strides = [1, 1], sizes = [1, 4] }
-    : vector<2x8x2xf32> to vector<1x4x2xf32>
+  %0 = vector.extract_strided_slice %arg0[1:1:1][2:4:1] : vector<2x8x2xf32> to vector<1x4x2xf32>
   return %0 : vector<1x4x2xf32>
 }
 
diff --git a/mlir/test/Dialect/Vector/vector-unroll-options.mlir b/mlir/test/Dialect/Vector/vector-unroll-options.mlir
index 349d998d70e1d..28d141e06bc22 100644
--- a/mlir/test/Dialect/Vector/vector-unroll-options.mlir
+++ b/mlir/test/Dialect/Vector/vector-unroll-options.mlir
@@ -16,66 +16,66 @@ func.func @vector_contract_f32(%lhs : vector<8x4xf32>, %rhs : vector<8x4xf32>,
 // CHECK-SAME: [[arg0:%.+]]: vector<8x4xf32>, [[arg1:%.+]]: vector<8x4xf32>, [[arg2:%.+]]: vector<8x8xf32>
 
 //       CHECK:   [[a:%.+]] = vector.extract_strided_slice [[arg0]]
-//  CHECK-SAME:   offsets = [0, 0]
+//  CHECK-SAME:   [0:{{.*}}][0:{{.*}}]
 //       CHECK:   [[b:%.+]] = vector.extract_strided_slice [[arg1]]
-//  CHECK-SAME:   offsets = [0, 0]
+//  CHECK-SAME:   [0:{{.*}}][0:{{.*}}]
 //       CHECK:   [[c:%.+]] = vector.extract_strided_slice [[arg2]]
-//  CHECK-SAME:   offsets = [0, 0]
+//  CHECK-SAME:   [0:{{.*}}][0:{{.*}}]
 //       CHECK:   [[accum1:%.+]] = vector.contract {{{.*}}} [[a]], [[b]], [[c]]
 //  CHECK-SAME:     vector<4x2xf32>, vector<4x2xf32> into vector<4x4xf32>
 
 //       CHECK:   [[a:%.+]] = vector.extract_strided_slice [[arg0]]
-//  CHECK-SAME:   offsets = [0, 2]
+//  CHECK-SAME:   [0:{{.*}}][2:{{.*}}]
 //       CHECK:   [[b:%.+]] = vector.extract_strided_slice [[arg1]]
-//  CHECK-SAME:   offsets = [0, 2]
+//  CHECK-SAME:   [0:{{.*}}][2:{{.*}}]
 //       CHECK:   [[accum2:%.+]] = vector.contract {{{.*}}} [[a]], [[b]], [[accum1]]
 //  CHECK-SAME:     vector<4x2xf32>, vector<4x2xf32> into vector<4x4xf32>
 
 //       CHECK:   [[a:%.+]] = vector.extract_strided_slice [[arg0]]
-//  CHECK-SAME:   offsets = [0, 0]
+//  CHECK-SAME:   [0:{{.*}}][0:{{.*}}]
 //       CHECK:   [[b:%.+]] = vector.extract_strided_slice [[arg1]]
-//  CHECK-SAME:   offsets = [4, 0]
+//  CHECK-SAME:   [4:{{.*}}][0:{{.*}}]
 //       CHECK:   [[c:%.+]] = vector.extract_strided_slice [[arg2]]
-//  CHECK-SAME:   offsets = [0, 4]
+//  CHECK-SAME:   [0:{{.*}}][4:{{.*}}]
 //       CHECK:   [[accum3:%.+]] = vector.contract {{{.*}}} [[a]], [[b]], [[c]]
 //  CHECK-SAME:     vector<4x2xf32>, vector<4x2xf32> into vector<4x4xf32>
 
 //       CHECK:   [[a:%.+]] = vector.extract_strided_slice [[arg0]]
-//  CHECK-SAME:   offsets = [0, 2]
+//  CHECK-SAME:   [0:{{.*}}][2:{{.*}}]
 //       CHECK:   [[b:%.+]] = vector.extract_strided_slice [[arg1]]
-//  CHECK-SAME:   offsets = [4, 2]
+//  CHECK-SAME:   [4:{{.*}}][2:{{.*}}]
 //       CHECK:   [[accum4:%.+]] = vector.contract {{{.*}}} [[a]], [[b]], [[accum3]]
 //  CHECK-SAME:     vector<4x2xf32>, vector<4x2xf32> into vector<4x4xf32>
 
 //       CHECK:   [[a:%.+]] = vector.extract_strided_slice [[arg0]]
-//  CHECK-SAME:   offsets = [4, 0]
+//  CHECK-SAME:   [4:{{.*}}][0:{{.*}}]
 //       CHECK:   [[b:%.+]] = vector.extract_strided_slice [[arg1]]
-//  CHECK-SAME:   offsets = [0, 0]
+//  CHECK-SAME:   [0:{{.*}}][0:{{.*}}]
 //       CHECK:   [[c:%.+]] = vector.extract_strided_slice [[arg2]]
-//  CHECK-SAME:   offsets = [4, 0]
+//  CHECK-SAME:   [4:{{.*}}][0:{{.*}}]
 //       CHECK:   [[accum5:%.+]] = vector.contract {{{.*}}} [[a]], [[b]], [[c]]
 //  CHECK-SAME:     vector<4x2xf32>, vector<4x2xf32> into vector<4x4xf32>
 
 //       CHECK:   [[a:%.+]] = vector.extract_strided_slice [[arg0]]
-//  CHECK-SAME:   offsets = [4, 2]
+//  CHECK-SAME:   [4:{{.*}}][2:{{.*}}]
 //       CHECK:   [[b:%.+]] = vector.extract_strided_slice [[arg1]]
-//  CHECK-SAME:   offsets = [0, 2]
+//  CHECK-SAME:   [0:{{.*}}][2:{{.*}}]
 //       CHECK:   [[accum6:%.+]] = vector.contract {{{.*}}} [[a]], [[b]], [[accum5]]
 //  CHECK-SAME:     vector<4x2xf32>, vector<4x2xf32> into vector<4x4xf32>
 
 //       CHECK:   [[a:%.+]] = vector.extract_strided_slice [[arg0]]
-//  CHECK-SAME:   offsets = [4, 0]
+//  CHECK-SAME:   [4:{{.*}}][0:{{.*}}]
 //       CHECK:   [[b:%.+]] = vector.extract_strided_slice [[arg1]]
-//  CHECK-SAME:   offsets = [4, 0]
+//  CHECK-SAME:   [4:{{.*}}][0:{{.*}}]
 //       CHECK:   [[c:%.+]] = vector.extract_strided_slice [[arg2]]
-//  CHECK-SAME:   offsets = [4, 4]
+//  CHECK-SAME:   [4:{{.*}}][4:{{.*}}]
 //       CHECK:   [[accum7:%.+]] = vector.contract {{{.*}}} [[a]], [[b]], [[c]]
 //  CHECK-SAME:     vector<4x2xf32>, vector<4x2xf32> into vector<4x4xf32>
 
 //       CHECK:   [[a:%.+]] = vector.extract_strided_slice [[arg0]]
-//  CHECK-SAME:   offsets = [4, 2]
+//  CHECK-SAME:   [4:{{.*}}][2:{{.*}}]
 //       CHECK:   [[b:%.+]] = vector.extract_strided_slice [[arg1]]
-//  CHECK-SAME:   offsets = [4, 2]
+//  CHECK-SAME:   [4:{{.*}}][2:{{.*}}]
 //       CHECK:   [[accum8:%.+]] = vector.contract {{{.*}}} [[a]], [[b]], [[accum7]]
 //  CHECK-SAME:     vector<4x2xf32>, vector<4x2xf32> into vector<4x4xf32>
 
@@ -85,66 +85,66 @@ func.func @vector_contract_f32(%lhs : vector<8x4xf32>, %rhs : vector<8x4xf32>,
 // ORDER-SAME: [[arg0:%.+]]: vector<8x4xf32>, [[arg1:%.+]]: vector<8x4xf32>, [[arg2:%.+]]: vector<8x8xf32>
 
 //       ORDER:   [[a:%.+]] = vector.extract_strided_slice [[arg0]]
-//  ORDER-SAME:   offsets = [0, 0]
+//  ORDER-SAME:   [0:{{.*}}][0:{{.*}}]
 //       ORDER:   [[b:%.+]] = vector.extract_strided_slice [[arg1]]
-//  ORDER-SAME:   offsets = [0, 0]
+//  ORDER-SAME:   [0:{{.*}}][0:{{.*}}]
 //       ORDER:   [[c:%.+]] = vector.extract_strided_slice [[arg2]]
-//  ORDER-SAME:   offsets = [0, 0]
+//  ORDER-SAME:   [0:{{.*}}][0:{{.*}}]
 //       ORDER:   [[accum1:%.+]] = vector.contract {{{.*}}} [[a]], [[b]], [[c]]
 //  ORDER-SAME:     vector<4x2xf32>, vector<4x2xf32> into vector<4x4xf32>
 
 //       ORDER:   [[a:%.+]] = vector.extract_strided_slice [[arg0]]
-//  ORDER-SAME:   offsets = [0, 0]
+//  ORDER-SAME:   [0:{{.*}}][0:{{.*}}]
 //       ORDER:   [[b:%.+]] = vector.extract_strided_slice [[arg1]]
-//  ORDER-SAME:   offsets = [4, 0]
+//  ORDER-SAME:   [4:{{.*}}][0:{{.*}}]
 //       ORDER:   [[c:%.+]] = vector.extract_strided_slice [[arg2]]
-//  ORDER-SAME:   offsets = [0, 4]
+//  ORDER-SAME:   [0:{{.*}}][4:{{.*}}]
 //       ORDER:   [[accum2:%.+]] = vector.contract {{{.*}}} [[a]], [[b]], [[c]]
 //  ORDER-SAME:     vector<4x2xf32>, vector<4x2xf32> into vector<4x4xf32>
 
 //       ORDER:   [[a:%.+]] = vector.extract_strided_slice [[arg0]]
-//  ORDER-SAME:   offsets = [4, 0]
+//  ORDER-SAME:   [4:{{.*}}][0:{{.*}}]
 //       ORDER:   [[b:%.+]] = vector.extract_strided_slice [[arg1]]
-//  ORDER-SAME:   offsets = [0, 0]
+//  ORDER-SAME:   [0:{{.*}}][0:{{.*}}]
 //       ORDER:   [[c:%.+]] = vector.extract_strided_slice [[arg2]]
-//  ORDER-SAME:   offsets = [4, 0]
+//  ORDER-SAME:   [4:{{.*}}][0:{{.*}}]
 //       ORDER:   [[accum3:%.+]] = vector.contract {{{.*}}} [[a]], [[b]], [[c]]
 //  ORDER-SAME:     vector<4x2xf32>, vector<4x2xf32> into vector<4x4xf32>
 
 //       ORDER:   [[a:%.+]] = vector.extract_strided_slice [[arg0]]
-//  ORDER-SAME:   offsets = [4, 0]
+//  ORDER-SAME:   [4:{{.*}}][0:{{.*}}]
 //       ORDER:   [[b:%.+]] = vector.extract_strided_slice [[arg1]]
-//  ORDER-SAME:   offsets = [4, 0]
+//  ORDER-SAME:   [4:{{.*}}][0:{{.*}}]
 //       ORDER:   [[c:%.+]] = vector.extract_strided_slice [[arg2]]
-//  ORDER-SAME:   offsets = [4, 4]
+//  ORDER-SAME:   [4:{{.*}}][4:{{.*}}]
 //       ORDER:   [[accum4:%.+]] = vector.contract {{{.*}}} [[a]], [[b]], [[c]]
 //  ORDER-SAME:     vector<4x2xf32>, vector<4x2xf32> into vector<4x4xf32>
 
 //       ORDER:   [[a:%.+]] = vector.extract_strided_slice [[arg0]]
-//  ORDER-SAME:   offsets = [0, 2]
+//  ORDER-SAME:   [0:{{.*}}][2:{{.*}}]
 //       ORDER:   [[b:%.+]] = vector.extract_strided_slice [[arg1]]
-//  ORDER-SAME:   offsets = [0, 2]
+//  ORDER-SAME:   [0:{{.*}}][2:{{.*}}]
 //       ORDER:   [[accum5:%.+]] = vector.contract {{{.*}}} [[a]], [[b]], [[accum1]]
 //  ORDER-SAME:     vector<4x2xf32>, vector<4x2xf32> into vector<4x4xf32>
 
 //       ORDER:   [[a:%.+]] = vector.extract_strided_slice [[arg0]]
-//  ORDER-SAME:   offsets = [0, 2]
+//  ORDER-SAME:   [0:{{.*}}][2:{{.*}}]
 //       ORDER:   [[b:%.+]] = vector.extract_strided_slice [[arg1]]
-//  ORDER-SAME:   offsets = [4, 2]
+//  ORDER-SAME:   [4:{{.*}}][2:{{.*}}]
 //       ORDER:   [[accum6:%.+]] = vector.contract {{{.*}}} [[a]], [[b]], [[accum2]]
 //  ORDER-SAME:     vector<4x2xf32>, vector<4x2xf32> into vector<4x4xf32>
 
 //       ORDER:   [[a:%.+]] = vector.extract_strided_slice [[arg0]]
-//  ORDER-SAME:   offsets = [4, 2]
+//  ORDER-SAME:   [4:{{.*}}][2:{{.*}}]
 //       ORDER:   [[b:%.+]] = vector.extract_strided_slice [[arg1]]
-//  ORDER-SAME:   offsets = [0, 2]
+//  ORDER-SAME:   [0:{{.*}}][2:{{.*}}]
 //       ORDER:   [[accum7:%.+]] = vector.contract {{{.*}}} [[a]], [[b]], [[accum3]]
 //  ORDER-SAME:     vector<4x2xf32>, vector<4x2xf32> into vector<4x4xf32>
 
 //       ORDER:   [[a:%.+]] = vector.extract_strided_slice [[arg0]]
-//  ORDER-SAME:   offsets = [4, 2]
+//  ORDER-SAME:   [4:{{.*}}][2:{{.*}}]
 //       ORDER:   [[b:%.+]] = vector.extract_strided_slice [[arg1]]
-//  ORDER-SAME:   offsets = [4, 2]
+//  ORDER-SAME:   [4:{{.*}}][2:{{.*}}]
 //       ORDER:   [[accum8:%.+]] = vector.contract {{{.*}}} [[a]], [[b]], [[accum4]]
 //  ORDER-SAME:     vector<4x2xf32>, vector<4x2xf32> into vector<4x4xf32>
 
@@ -219,15 +219,15 @@ func.func @vector_reduction(%v : vector<8xf32>) -> f32 {
 }
 // CHECK-LABEL: func @vector_reduction(
 //  CHECK-SAME:     %[[v:.*]]: vector<8xf32>
-//       CHECK:   %[[s0:.*]] = vector.extract_strided_slice %[[v]] {offsets = [0], sizes = [2]
+//       CHECK:   %[[s0:.*]] = vector.extract_strided_slice %[[v]][0:2:
 //       CHECK:   %[[r0:.*]] = vector.reduction <add>, %[[s0]]
-//       CHECK:   %[[s1:.*]] = vector.extract_strided_slice %[[v]] {offsets = [2], sizes = [2]
+//       CHECK:   %[[s1:.*]] = vector.extract_strided_slice %[[v]][2:2:
 //       CHECK:   %[[r1:.*]] = vector.reduction <add>, %[[s1]]
 //       CHECK:   %[[add1:.*]] = arith.addf %[[r0]], %[[r1]]
-//       CHECK:   %[[s2:.*]] = vector.extract_strided_slice %[[v]] {offsets = [4], sizes = [2]
+//       CHECK:   %[[s2:.*]] = vector.extract_strided_slice %[[v]][4:2
 //       CHECK:   %[[r2:.*]] = vector.reduction <add>, %[[s2]]
 //       CHECK:   %[[add2:.*]] = arith.addf %[[add1]], %[[r2]]
-//       CHECK:   %[[s3:.*]] = vector.extract_strided_slice %[[v]] {offsets = [6], sizes = [2]
+//       CHECK:   %[[s3:.*]] = vector.extract_strided_slice %[[v]][6:2
 //       CHECK:   %[[r3:.*]] = vector.reduction <add>, %[[s3]]
 //       CHECK:   %[[add3:.*]] = arith.addf %[[add2]], %[[r3]]
 //       CHECK:   return %[[add3]]


