[Mlir-commits] [mlir] bd30a79 - [mlir] use built-in vector types instead of LLVM dialect types when possible

Alex Zinenko llvmlistbot at llvm.org
Tue Jan 12 01:08:41 PST 2021


Author: Alex Zinenko
Date: 2021-01-12T10:04:28+01:00
New Revision: bd30a796fc4b51750248ccba29cd6fb1f61859f5

URL: https://github.com/llvm/llvm-project/commit/bd30a796fc4b51750248ccba29cd6fb1f61859f5
DIFF: https://github.com/llvm/llvm-project/commit/bd30a796fc4b51750248ccba29cd6fb1f61859f5.diff

LOG: [mlir] use built-in vector types instead of LLVM dialect types when possible

Continue the convergence between LLVM dialect and built-in types by using the
built-in vector type whenever possible, that is, for fixed vectors of built-in
integers and built-in floats. The LLVM dialect vector type is still in use for
pointers, less frequent floating point types that do not have a built-in
equivalent, and scalable vectors. However, the top-level `LLVMVectorType` class
has been removed in favor of free functions capable of inspecting both built-in
and LLVM dialect vector types: `LLVM::getVectorElementType`,
`LLVM::getVectorNumElements` and `LLVM::getFixedVectorType`. Additional work is
necessary to design and implement the extensions to built-in types so as to
remove the `LLVMFixedVectorType` entirely.
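
To illustrate the resulting split (a sketch based on the description above; the
exact SSA names are invented):

```mlir
// Fixed vectors of built-in integers and floats now use the built-in type.
%a = llvm.mlir.undef : vector<4xf32>
%b = llvm.mlir.undef : vector<8xi64>

// Pointers, floats without a built-in equivalent, and scalable vectors
// still use the LLVM dialect vector type.
%c = llvm.mlir.undef : !llvm.vec<4 x ptr<i8>>
%d = llvm.mlir.undef : !llvm.vec<4 x fp128>
%e = llvm.mlir.undef : !llvm.vec<? x 4 x i32>
```

The free functions `LLVM::getVectorElementType`, `LLVM::getVectorNumElements`
and `LLVM::getFixedVectorType` are intended to work uniformly across both
families of types, so client code no longer needs to cast to a common
`LLVMVectorType` base class.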

Note that the default output format for the built-in vectors does not have
whitespace around the `x` separator, e.g., `vector<4xf32>` as opposed to the
LLVM dialect vector type format that does, e.g., `!llvm.vec<4 x fp128>`. This
required changing the FileCheck patterns in several tests.
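
For example, a typical pattern update looks like this (hypothetical test line,
shown only to illustrate the whitespace difference):

```mlir
// Before: CHECK: !llvm.vec<4 x f32>
// After:  CHECK: vector<4xf32>
%0 = llvm.mlir.undef : vector<4xf32>
```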

Reviewed By: mehdi_amini, silvas

Differential Revision: https://reviews.llvm.org/D94405

Added: 
    

Modified: 
    mlir/docs/ConversionToLLVMDialect.md
    mlir/docs/Dialects/LLVM.md
    mlir/docs/SPIRVToLLVMDialectConversion.md
    mlir/include/mlir/Dialect/LLVMIR/LLVMOpBase.td
    mlir/include/mlir/Dialect/LLVMIR/LLVMOps.td
    mlir/include/mlir/Dialect/LLVMIR/LLVMTypes.h
    mlir/integration_test/Dialect/LLVMIR/CPU/test-vector-reductions-fp.mlir
    mlir/integration_test/Dialect/LLVMIR/CPU/test-vector-reductions-int.mlir
    mlir/lib/Conversion/SPIRVToLLVM/SPIRVToLLVM.cpp
    mlir/lib/Conversion/StandardToLLVM/StandardToLLVM.cpp
    mlir/lib/Conversion/VectorToLLVM/ConvertVectorToLLVM.cpp
    mlir/lib/Conversion/VectorToROCDL/VectorToROCDL.cpp
    mlir/lib/Dialect/LLVMIR/IR/LLVMDialect.cpp
    mlir/lib/Dialect/LLVMIR/IR/LLVMTypeSyntax.cpp
    mlir/lib/Dialect/LLVMIR/IR/LLVMTypes.cpp
    mlir/lib/Dialect/LLVMIR/IR/NVVMDialect.cpp
    mlir/lib/Dialect/LLVMIR/IR/ROCDLDialect.cpp
    mlir/lib/Target/LLVMIR/ConvertFromLLVMIR.cpp
    mlir/lib/Target/LLVMIR/TypeTranslation.cpp
    mlir/test/Conversion/ArmNeonToLLVM/convert-to-llvm.mlir
    mlir/test/Conversion/SPIRVToLLVM/arithmetic-ops-to-llvm.mlir
    mlir/test/Conversion/SPIRVToLLVM/bitwise-ops-to-llvm.mlir
    mlir/test/Conversion/SPIRVToLLVM/cast-ops-to-llvm.mlir
    mlir/test/Conversion/SPIRVToLLVM/comparison-ops-to-llvm.mlir
    mlir/test/Conversion/SPIRVToLLVM/constant-op-to-llvm.mlir
    mlir/test/Conversion/SPIRVToLLVM/func-ops-to-llvm.mlir
    mlir/test/Conversion/SPIRVToLLVM/glsl-ops-to-llvm.mlir
    mlir/test/Conversion/SPIRVToLLVM/logical-ops-to-llvm.mlir
    mlir/test/Conversion/SPIRVToLLVM/memory-ops-to-llvm.mlir
    mlir/test/Conversion/SPIRVToLLVM/misc-ops-to-llvm.mlir
    mlir/test/Conversion/SPIRVToLLVM/shift-ops-to-llvm.mlir
    mlir/test/Conversion/SPIRVToLLVM/spirv-types-to-llvm.mlir
    mlir/test/Conversion/StandardToLLVM/convert-to-llvmir.mlir
    mlir/test/Conversion/StandardToLLVM/standard-to-llvm.mlir
    mlir/test/Conversion/VectorToLLVM/vector-mask-to-llvm.mlir
    mlir/test/Conversion/VectorToLLVM/vector-reduction-to-llvm.mlir
    mlir/test/Conversion/VectorToLLVM/vector-to-llvm.mlir
    mlir/test/Conversion/VectorToROCDL/vector-to-rocdl.mlir
    mlir/test/Dialect/LLVMIR/dialect-cast.mlir
    mlir/test/Dialect/LLVMIR/invalid.mlir
    mlir/test/Dialect/LLVMIR/nvvm.mlir
    mlir/test/Dialect/LLVMIR/rocdl.mlir
    mlir/test/Dialect/LLVMIR/roundtrip.mlir
    mlir/test/Dialect/LLVMIR/types-invalid.mlir
    mlir/test/Dialect/LLVMIR/types.mlir
    mlir/test/Target/arm-neon.mlir
    mlir/test/Target/arm-sve.mlir
    mlir/test/Target/avx512.mlir
    mlir/test/Target/import.ll
    mlir/test/Target/llvmir-intrinsics.mlir
    mlir/test/Target/llvmir-types.mlir
    mlir/test/Target/llvmir.mlir
    mlir/test/Target/nvvmir.mlir
    mlir/test/Target/rocdl.mlir

Removed: 
    


################################################################################
diff  --git a/mlir/docs/ConversionToLLVMDialect.md b/mlir/docs/ConversionToLLVMDialect.md
index d0ea746853b1..d36b4498272c 100644
--- a/mlir/docs/ConversionToLLVMDialect.md
+++ b/mlir/docs/ConversionToLLVMDialect.md
@@ -48,8 +48,8 @@ size with element type converted using these conversion rules. In the
 n-dimensional case, MLIR vectors are converted to (n-1)-dimensional array types
 of one-dimensional vectors.
 
-For example, `vector<4 x f32>` converts to `!llvm.vec<4 x f32>` and `vector<4 x
-8 x 16 x f32>` converts to `!llvm.array<4 x array<8 x vec<16 x f32>>>`.
+For example, `vector<4xf32>` converts to `vector<4xf32>` and `vector<4 x 8 x 16
+x f32>` converts to `!llvm.array<4 x array<8 x vec<16 x f32>>>`.
 
 ### Ranked Memref Types
 

diff  --git a/mlir/docs/Dialects/LLVM.md b/mlir/docs/Dialects/LLVM.md
index 3d91588b2d21..d232ffab148c 100644
--- a/mlir/docs/Dialects/LLVM.md
+++ b/mlir/docs/Dialects/LLVM.md
@@ -127,7 +127,7 @@ Examples:
 %3 = llvm.mlir.constant(42 : i32) : i32
 
 // Splat dense vector constant.
-%3 = llvm.mlir.constant(dense<1.0> : vector<4xf32>) : !llvm.vec<4 x f32>
+%3 = llvm.mlir.constant(dense<1.0> : vector<4xf32>) : vector<4xf32>
 ```
 
 Note that constants use built-in types within the initializer definition: MLIR
@@ -274,7 +274,7 @@ Vectors cannot be nested and only 1D vectors are supported. Scalable vectors are
 still considered 1D. Their syntax is as follows:
 
 ```
-  llvm-vec-type ::= `!llvm.vec<` (`?` `x`)? integer-literal `x` llvm-type `>`
+  llvm-vec-type ::= `vector<` (`?` `x`)? integer-literal `x` llvm-type `>`
 ```
 
 Internally, fixed vector types are represented as `LLVMFixedVectorType` and

diff  --git a/mlir/docs/SPIRVToLLVMDialectConversion.md b/mlir/docs/SPIRVToLLVMDialectConversion.md
index 30188f692b43..de291ac22c43 100644
--- a/mlir/docs/SPIRVToLLVMDialectConversion.md
+++ b/mlir/docs/SPIRVToLLVMDialectConversion.md
@@ -34,9 +34,9 @@ SPIR-V Dialect | LLVM Dialect
 
 ### Vector types
 
-SPIR-V Dialect                       | LLVM Dialect
-:----------------------------------: | :----------------------------------:
-`vector<<count> x <scalar-type>>`    | `!llvm.vec<<count> x <scalar-type>>`
+SPIR-V Dialect                    | LLVM Dialect
+:-------------------------------: | :-------------------------------:
+`vector<<count> x <scalar-type>>` | `vector<<count> x <scalar-type>>`
 
 ### Pointer types
 
@@ -188,11 +188,11 @@ to note:
 
     ```mlir
     // Broadcasting offset
-    %offset0 = llvm.mlir.undef : !llvm.vec<2 x i8>
+    %offset0 = llvm.mlir.undef : vector<2xi8>
     %zero = llvm.mlir.constant(0 : i32) : i32
-    %offset1 = llvm.insertelement %offset, %offset0[%zero : i32] : !llvm.vec<2 x i8>
+    %offset1 = llvm.insertelement %offset, %offset0[%zero : i32] : vector<2xi8>
     %one = llvm.mlir.constant(1 : i32) : i32
-    %vec_offset = llvm.insertelement  %offset, %offset1[%one : i32] : !llvm.vec<2 x i8>
+    %vec_offset = llvm.insertelement  %offset, %offset1[%one : i32] : vector<2xi8>
 
     // Broadcasting count
     // ...
@@ -205,7 +205,7 @@ to note:
 
     ```mlir
     // Zero extending offset after broadcasting
-    %res_offset = llvm.zext %vec_offset: !llvm.vec<2 x i8> to !llvm.vec<2 x i32>
+    %res_offset = llvm.zext %vec_offset: vector<2xi8> to vector<2xi32>
     ```
 
     Also, note that if the bitwidth of `offset` or `count` is greater than the
@@ -534,7 +534,7 @@ Also, at the moment initialization is only possible via `spv.constant`.
 ```mlir
 // Conversion of VariableOp without initialization
                                                                %size = llvm.mlir.constant(1 : i32) : i32
-%res = spv.Variable : !spv.ptr<vector<3xf32>, Function>   =>   %res  = llvm.alloca  %size x !llvm.vec<3 x f32> : (i32) -> !llvm.ptr<vec<3 x f32>>
+%res = spv.Variable : !spv.ptr<vector<3xf32>, Function>   =>   %res  = llvm.alloca  %size x vector<3xf32> : (i32) -> !llvm.ptr<vec<3 x f32>>
 
 // Conversion of VariableOp with initialization
                                                                %c    = llvm.mlir.constant(0 : i64) : i64
@@ -610,7 +610,7 @@ cover all possible corner cases.
 // %0 = llvm.mlir.constant(0 : i8) : i8
 %0 = spv.constant  0 : i8
 
-// %1 = llvm.mlir.constant(dense<[2, 3, 4]> : vector<3xi32>) : !llvm.vec<3 x i32>
+// %1 = llvm.mlir.constant(dense<[2, 3, 4]> : vector<3xi32>) : vector<3xi32>
 %1 = spv.constant dense<[2, 3, 4]> : vector<3xui32>
 ```
 

diff  --git a/mlir/include/mlir/Dialect/LLVMIR/LLVMOpBase.td b/mlir/include/mlir/Dialect/LLVMIR/LLVMOpBase.td
index 0d3f5322531d..0ef223c4b023 100644
--- a/mlir/include/mlir/Dialect/LLVMIR/LLVMOpBase.td
+++ b/mlir/include/mlir/Dialect/LLVMIR/LLVMOpBase.td
@@ -126,8 +126,8 @@ def LLVM_AnyNonAggregate : Type<Neg<LLVM_AnyAggregate.predicate>,
                                "LLVM non-aggregate type">;
 
 // Type constraint accepting any LLVM vector type.
-def LLVM_AnyVector : Type<CPred<"$_self.isa<::mlir::LLVM::LLVMVectorType>()">,
-                         "LLVM vector type">;
+def LLVM_AnyVector : Type<CPred<"::mlir::LLVM::isCompatibleVectorType($_self)">,
+                         "LLVM dialect-compatible vector type">;
 
 // Type constraint accepting an LLVM vector type with an additional constraint
 // on the vector element type.
@@ -135,9 +135,9 @@ class LLVM_VectorOf<Type element> : Type<
   And<[LLVM_AnyVector.predicate,
        SubstLeaves<
          "$_self",
-         "$_self.cast<::mlir::LLVM::LLVMVectorType>().getElementType()",
+         "::mlir::LLVM::getVectorElementType($_self)",
          element.predicate>]>,
-  "LLVM vector of " # element.summary>;
+  "LLVM dialect-compatible vector of " # element.summary>;
 
 // Type constraint accepting a constrained type, or a vector of such types.
 class LLVM_ScalarOrVectorOf<Type element> :

diff  --git a/mlir/include/mlir/Dialect/LLVMIR/LLVMOps.td b/mlir/include/mlir/Dialect/LLVMIR/LLVMOps.td
index ce91dffe861c..cb2eede3040e 100644
--- a/mlir/include/mlir/Dialect/LLVMIR/LLVMOps.td
+++ b/mlir/include/mlir/Dialect/LLVMIR/LLVMOps.td
@@ -555,10 +555,10 @@ def LLVM_ShuffleVectorOp : LLVM_Op<"shufflevector", [NoSideEffect]> {
     OpBuilderDAG<(ins "Value":$v1, "Value":$v2, "ArrayAttr":$mask,
       CArg<"ArrayRef<NamedAttribute>", "{}">:$attrs)>];
   let verifier = [{
-    auto wrappedVectorType1 = v1().getType().cast<LLVMVectorType>();
-    auto wrappedVectorType2 = v2().getType().cast<LLVMVectorType>();
-    if (wrappedVectorType1.getElementType() !=
-        wrappedVectorType2.getElementType())
+    auto type1 = v1().getType();
+    auto type2 = v2().getType();
+    if (::mlir::LLVM::getVectorElementType(type1) !=
+            ::mlir::LLVM::getVectorElementType(type2))
       return emitOpError("expected matching LLVM IR Dialect element types");
     return success();
   }];
@@ -1111,7 +1111,7 @@ def LLVM_ConstantOp
     %2 = llvm.mlir.constant(42.0 : f32) : f32
 
     // Splat dense vector constant.
-    %3 = llvm.mlir.constant(dense<1.0> : vector<4xf32>) : !llvm.vec<4 x f32>
+    %3 = llvm.mlir.constant(dense<1.0> : vector<4xf32>) : vector<4xf32>
     ```
   }];
 

diff  --git a/mlir/include/mlir/Dialect/LLVMIR/LLVMTypes.h b/mlir/include/mlir/Dialect/LLVMIR/LLVMTypes.h
index 3cd1733b8d52..f21dd1de995e 100644
--- a/mlir/include/mlir/Dialect/LLVMIR/LLVMTypes.h
+++ b/mlir/include/mlir/Dialect/LLVMIR/LLVMTypes.h
@@ -317,12 +317,11 @@ class LLVMVectorType : public Type {
 /// LLVM dialect fixed vector type, represents a sequence of elements of known
 /// length that can be processed as one.
 class LLVMFixedVectorType
-    : public Type::TypeBase<LLVMFixedVectorType, LLVMVectorType,
+    : public Type::TypeBase<LLVMFixedVectorType, Type,
                             detail::LLVMTypeAndSizeStorage> {
 public:
   /// Inherit base constructor.
   using Base::Base;
-  using LLVMVectorType::verifyConstructionInvariants;
 
   /// Gets or creates a fixed vector type containing `numElements` of
   /// `elementType` in the same context as `elementType`.
@@ -330,8 +329,21 @@ class LLVMFixedVectorType
   static LLVMFixedVectorType getChecked(Location loc, Type elementType,
                                         unsigned numElements);
 
+  /// Checks if the given type can be used in a vector type. This type supports
+  /// only a subset of LLVM dialect types that don't have a built-in
+  /// counter-part, e.g., pointers.
+  static bool isValidElementType(Type type);
+
+  /// Returns the element type of the vector.
+  Type getElementType();
+
   /// Returns the number of elements in the fixed vector.
   unsigned getNumElements();
+
+  /// Verifies that the type about to be constructed is well-formed.
+  static LogicalResult verifyConstructionInvariants(Location loc,
+                                                    Type elementType,
+                                                    unsigned numElements);
 };
 
 //===----------------------------------------------------------------------===//
@@ -342,12 +354,11 @@ class LLVMFixedVectorType
 /// unknown length that is known to be divisible by some constant. These
 /// elements can be processed as one in SIMD context.
 class LLVMScalableVectorType
-    : public Type::TypeBase<LLVMScalableVectorType, LLVMVectorType,
+    : public Type::TypeBase<LLVMScalableVectorType, Type,
                             detail::LLVMTypeAndSizeStorage> {
 public:
   /// Inherit base constructor.
   using Base::Base;
-  using LLVMVectorType::verifyConstructionInvariants;
 
   /// Gets or creates a scalable vector type containing a non-zero multiple of
   /// `minNumElements` of `elementType` in the same context as `elementType`.
@@ -355,10 +366,21 @@ class LLVMScalableVectorType
   static LLVMScalableVectorType getChecked(Location loc, Type elementType,
                                            unsigned minNumElements);
 
+  /// Checks if the given type can be used in a vector type.
+  static bool isValidElementType(Type type);
+
+  /// Returns the element type of the vector.
+  Type getElementType();
+
   /// Returns the scaling factor of the number of elements in the vector. The
   /// vector contains at least the resulting number of elements, or any non-zero
   /// multiple of this number.
   unsigned getMinNumElements();
+
+  /// Verifies that the type about to be constructed is well-formed.
+  static LogicalResult verifyConstructionInvariants(Location loc,
+                                                    Type elementType,
+                                                    unsigned minNumElements);
 };
 
 //===----------------------------------------------------------------------===//
@@ -384,9 +406,26 @@ bool isCompatibleType(Type type);
 /// the LLVM dialect.
 bool isCompatibleFloatingPointType(Type type);
 
+/// Returns `true` if the given type is a vector type compatible with the LLVM
+/// dialect. Compatible types include 1D built-in vector types of built-in
+/// integers and floating-point values, LLVM dialect fixed vector types of LLVM
+/// dialect pointers and LLVM dialect scalable vector types.
+bool isCompatibleVectorType(Type type);
+
+/// Returns the element type of any vector type compatible with the LLVM
+/// dialect.
+Type getVectorElementType(Type type);
+
+/// Returns the element count of any LLVM-compatible vector type.
+llvm::ElementCount getVectorNumElements(Type type);
+
+/// Creates an LLVM dialect-compatible type with the given element type and
+/// length.
+Type getFixedVectorType(Type elementType, unsigned numElements);
+
 /// Returns the size of the given primitive LLVM dialect-compatible type
 /// (including vectors) in bits, for example, the size of i16 is 16 and
-/// the size of !llvm.vec<4 x i16> is 64. Returns 0 for non-primitive
+/// the size of vector<4xi16> is 64. Returns 0 for non-primitive
 /// (aggregates such as struct) or types that don't have a size (such as void).
 llvm::TypeSize getPrimitiveTypeSizeInBits(Type type);
 

diff  --git a/mlir/integration_test/Dialect/LLVMIR/CPU/test-vector-reductions-fp.mlir b/mlir/integration_test/Dialect/LLVMIR/CPU/test-vector-reductions-fp.mlir
index 1d076e64ba2b..9d390fe950e1 100644
--- a/mlir/integration_test/Dialect/LLVMIR/CPU/test-vector-reductions-fp.mlir
+++ b/mlir/integration_test/Dialect/LLVMIR/CPU/test-vector-reductions-fp.mlir
@@ -12,74 +12,74 @@ module {
     %1 = llvm.mlir.constant(2.000000e+00 : f32) : f32
     %2 = llvm.mlir.constant(3.000000e+00 : f32) : f32
     %3 = llvm.mlir.constant(4.000000e+00 : f32) : f32
-    %4 = llvm.mlir.undef : !llvm.vec<4 x f32>
+    %4 = llvm.mlir.undef : vector<4xf32>
     %5 = llvm.mlir.constant(0 : index) : i64
-    %6 = llvm.insertelement %0, %4[%5 : i64] : !llvm.vec<4 x f32>
+    %6 = llvm.insertelement %0, %4[%5 : i64] : vector<4xf32>
     %7 = llvm.shufflevector %6, %4 [0 : i32, 0 : i32, 0 : i32, 0 : i32]
-        : !llvm.vec<4 x f32>, !llvm.vec<4 x f32>
+        : vector<4xf32>, vector<4xf32>
     %8 = llvm.mlir.constant(1 : i64) : i64
-    %9 = llvm.insertelement %1, %7[%8 : i64] : !llvm.vec<4 x f32>
+    %9 = llvm.insertelement %1, %7[%8 : i64] : vector<4xf32>
     %10 = llvm.mlir.constant(2 : i64) : i64
-    %11 = llvm.insertelement %2, %9[%10 : i64] : !llvm.vec<4 x f32>
+    %11 = llvm.insertelement %2, %9[%10 : i64] : vector<4xf32>
     %12 = llvm.mlir.constant(3 : i64) : i64
-    %v = llvm.insertelement %3, %11[%12 : i64] : !llvm.vec<4 x f32>
+    %v = llvm.insertelement %3, %11[%12 : i64] : vector<4xf32>
 
     %max = "llvm.intr.vector.reduce.fmax"(%v)
-        : (!llvm.vec<4 x f32>) -> f32
+        : (vector<4xf32>) -> f32
     llvm.call @printF32(%max) : (f32) -> ()
     llvm.call @printNewline() : () -> ()
     // CHECK: 4
 
     %min = "llvm.intr.vector.reduce.fmin"(%v)
-        : (!llvm.vec<4 x f32>) -> f32
+        : (vector<4xf32>) -> f32
     llvm.call @printF32(%min) : (f32) -> ()
     llvm.call @printNewline() : () -> ()
     // CHECK: 1
 
     %add1 = "llvm.intr.vector.reduce.fadd"(%0, %v)
-        : (f32, !llvm.vec<4 x f32>) -> f32
+        : (f32, vector<4xf32>) -> f32
     llvm.call @printF32(%add1) : (f32) -> ()
     llvm.call @printNewline() : () -> ()
     // CHECK: 11
 
     %add1r = "llvm.intr.vector.reduce.fadd"(%0, %v)
-        {reassoc = true} : (f32, !llvm.vec<4 x f32>) -> f32
+        {reassoc = true} : (f32, vector<4xf32>) -> f32
     llvm.call @printF32(%add1r) : (f32) -> ()
     llvm.call @printNewline() : () -> ()
     // CHECK: 11
 
     %add2 = "llvm.intr.vector.reduce.fadd"(%1, %v)
-        : (f32, !llvm.vec<4 x f32>) -> f32
+        : (f32, vector<4xf32>) -> f32
     llvm.call @printF32(%add2) : (f32) -> ()
     llvm.call @printNewline() : () -> ()
     // CHECK: 12
 
     %add2r = "llvm.intr.vector.reduce.fadd"(%1, %v)
-        {reassoc = true} : (f32, !llvm.vec<4 x f32>) -> f32
+        {reassoc = true} : (f32, vector<4xf32>) -> f32
     llvm.call @printF32(%add2r) : (f32) -> ()
     llvm.call @printNewline() : () -> ()
     // CHECK: 12
 
     %mul1 = "llvm.intr.vector.reduce.fmul"(%0, %v)
-        : (f32, !llvm.vec<4 x f32>) -> f32
+        : (f32, vector<4xf32>) -> f32
     llvm.call @printF32(%mul1) : (f32) -> ()
     llvm.call @printNewline() : () -> ()
     // CHECK: 24
 
     %mul1r = "llvm.intr.vector.reduce.fmul"(%0, %v)
-        {reassoc = true} : (f32, !llvm.vec<4 x f32>) -> f32
+        {reassoc = true} : (f32, vector<4xf32>) -> f32
     llvm.call @printF32(%mul1r) : (f32) -> ()
     llvm.call @printNewline() : () -> ()
     // CHECK: 24
 
     %mul2 = "llvm.intr.vector.reduce.fmul"(%1, %v)
-        : (f32, !llvm.vec<4 x f32>) -> f32
+        : (f32, vector<4xf32>) -> f32
     llvm.call @printF32(%mul2) : (f32) -> ()
     llvm.call @printNewline() : () -> ()
     // CHECK: 48
 
     %mul2r = "llvm.intr.vector.reduce.fmul"(%1, %v)
-        {reassoc = true} : (f32, !llvm.vec<4 x f32>) -> f32
+        {reassoc = true} : (f32, vector<4xf32>) -> f32
     llvm.call @printF32(%mul2r) : (f32) -> ()
     llvm.call @printNewline() : () -> ()
     // CHECK: 48

diff  --git a/mlir/integration_test/Dialect/LLVMIR/CPU/test-vector-reductions-int.mlir b/mlir/integration_test/Dialect/LLVMIR/CPU/test-vector-reductions-int.mlir
index 181e2e3ce0fc..74e8667bf2ec 100644
--- a/mlir/integration_test/Dialect/LLVMIR/CPU/test-vector-reductions-int.mlir
+++ b/mlir/integration_test/Dialect/LLVMIR/CPU/test-vector-reductions-int.mlir
@@ -12,68 +12,68 @@ module {
     %1 = llvm.mlir.constant(2 : i64) : i64
     %2 = llvm.mlir.constant(3 : i64) : i64
     %3 = llvm.mlir.constant(4 : i64) : i64
-    %4 = llvm.mlir.undef : !llvm.vec<4 x i64>
+    %4 = llvm.mlir.undef : vector<4xi64>
     %5 = llvm.mlir.constant(0 : index) : i64
-    %6 = llvm.insertelement %0, %4[%5 : i64] : !llvm.vec<4 x i64>
+    %6 = llvm.insertelement %0, %4[%5 : i64] : vector<4xi64>
     %7 = llvm.shufflevector %6, %4 [0 : i64, 0 : i64, 0 : i64, 0 : i64]
-        : !llvm.vec<4 x i64>, !llvm.vec<4 x i64>
+        : vector<4xi64>, vector<4xi64>
     %8 = llvm.mlir.constant(1 : i64) : i64
-    %9 = llvm.insertelement %1, %7[%8 : i64] : !llvm.vec<4 x i64>
+    %9 = llvm.insertelement %1, %7[%8 : i64] : vector<4xi64>
     %10 = llvm.mlir.constant(2 : i64) : i64
-    %11 = llvm.insertelement %2, %9[%10 : i64] : !llvm.vec<4 x i64>
+    %11 = llvm.insertelement %2, %9[%10 : i64] : vector<4xi64>
     %12 = llvm.mlir.constant(3 : i64) : i64
-    %v = llvm.insertelement %3, %11[%12 : i64] : !llvm.vec<4 x i64>
+    %v = llvm.insertelement %3, %11[%12 : i64] : vector<4xi64>
 
     %add = "llvm.intr.vector.reduce.add"(%v)
-        : (!llvm.vec<4 x i64>) -> i64
+        : (vector<4xi64>) -> i64
     llvm.call @printI64(%add) : (i64) -> ()
     llvm.call @printNewline() : () -> ()
     // CHECK: 10
 
     %and = "llvm.intr.vector.reduce.and"(%v)
-        : (!llvm.vec<4 x i64>) -> i64
+        : (vector<4xi64>) -> i64
     llvm.call @printI64(%and) : (i64) -> ()
     llvm.call @printNewline() : () -> ()
     // CHECK: 0
 
     %mul = "llvm.intr.vector.reduce.mul"(%v)
-        : (!llvm.vec<4 x i64>) -> i64
+        : (vector<4xi64>) -> i64
     llvm.call @printI64(%mul) : (i64) -> ()
     llvm.call @printNewline() : () -> ()
     // CHECK: 24
 
     %or = "llvm.intr.vector.reduce.or"(%v)
-        : (!llvm.vec<4 x i64>) -> i64
+        : (vector<4xi64>) -> i64
     llvm.call @printI64(%or) : (i64) -> ()
     llvm.call @printNewline() : () -> ()
     // CHECK: 7
 
     %smax = "llvm.intr.vector.reduce.smax"(%v)
-        : (!llvm.vec<4 x i64>) -> i64
+        : (vector<4xi64>) -> i64
     llvm.call @printI64(%smax) : (i64) -> ()
     llvm.call @printNewline() : () -> ()
     // CHECK: 4
 
     %smin = "llvm.intr.vector.reduce.smin"(%v)
-        : (!llvm.vec<4 x i64>) -> i64
+        : (vector<4xi64>) -> i64
     llvm.call @printI64(%smin) : (i64) -> ()
     llvm.call @printNewline() : () -> ()
     // CHECK: 1
 
     %umax = "llvm.intr.vector.reduce.umax"(%v)
-        : (!llvm.vec<4 x i64>) -> i64
+        : (vector<4xi64>) -> i64
     llvm.call @printI64(%umax) : (i64) -> ()
     llvm.call @printNewline() : () -> ()
     // CHECK: 4
 
     %umin = "llvm.intr.vector.reduce.umin"(%v)
-        : (!llvm.vec<4 x i64>) -> i64
+        : (vector<4xi64>) -> i64
     llvm.call @printI64(%umin) : (i64) -> ()
     llvm.call @printNewline() : () -> ()
     // CHECK: 1
 
     %xor = "llvm.intr.vector.reduce.xor"(%v)
-        : (!llvm.vec<4 x i64>) -> i64
+        : (vector<4xi64>) -> i64
     llvm.call @printI64(%xor) : (i64) -> ()
     llvm.call @printNewline() : () -> ()
     // CHECK: 4

diff  --git a/mlir/lib/Conversion/SPIRVToLLVM/SPIRVToLLVM.cpp b/mlir/lib/Conversion/SPIRVToLLVM/SPIRVToLLVM.cpp
index f60ba96e3a20..0c868cb549b7 100644
--- a/mlir/lib/Conversion/SPIRVToLLVM/SPIRVToLLVM.cpp
+++ b/mlir/lib/Conversion/SPIRVToLLVM/SPIRVToLLVM.cpp
@@ -66,8 +66,8 @@ static unsigned getBitWidth(Type type) {
 
 /// Returns the bit width of LLVMType integer or vector.
 static unsigned getLLVMTypeBitWidth(Type type) {
-  auto vectorType = type.dyn_cast<LLVM::LLVMVectorType>();
-  return (vectorType ? vectorType.getElementType() : type)
+  return (LLVM::isCompatibleVectorType(type) ? LLVM::getVectorElementType(type)
+                                             : type)
       .cast<IntegerType>()
       .getWidth();
 }

diff  --git a/mlir/lib/Conversion/StandardToLLVM/StandardToLLVM.cpp b/mlir/lib/Conversion/StandardToLLVM/StandardToLLVM.cpp
index 512273347e4b..c2dea0c99540 100644
--- a/mlir/lib/Conversion/StandardToLLVM/StandardToLLVM.cpp
+++ b/mlir/lib/Conversion/StandardToLLVM/StandardToLLVM.cpp
@@ -390,16 +390,16 @@ Type LLVMTypeConverter::convertMemRefToBarePtr(BaseMemRefType type) {
   return LLVM::LLVMPointerType::get(elementType, type.getMemorySpace());
 }
 
-// Convert an n-D vector type to an LLVM vector type via (n-1)-D array type when
-// n > 1.
-// For example, `vector<4 x f32>` converts to `!llvm.type<"<4 x f32>">` and
-// `vector<4 x 8 x 16 f32>` converts to `!llvm."[4 x [8 x <16 x f32>]]">`.
+/// Convert an n-D vector type to an LLVM vector type via (n-1)-D array type
+/// when n > 1. For example, `vector<4 x f32>` remains as is while,
+/// `vector<4x8x16xf32>` converts to `!llvm.array<4xarray<8 x vector<16xf32>>>`.
 Type LLVMTypeConverter::convertVectorType(VectorType type) {
   auto elementType = unwrap(convertType(type.getElementType()));
   if (!elementType)
     return {};
-  Type vectorType =
-      LLVM::LLVMFixedVectorType::get(elementType, type.getShape().back());
+  Type vectorType = VectorType::get(type.getShape().back(), elementType);
+  assert(LLVM::isCompatibleVectorType(vectorType) &&
+         "expected vector type compatible with the LLVM dialect");
   auto shape = type.getShape();
   for (int i = shape.size() - 2; i >= 0; --i)
     vectorType = LLVM::LLVMArrayType::get(vectorType, shape[i]);
@@ -1500,7 +1500,7 @@ static NDVectorTypeInfo extractNDVectorTypeInfo(VectorType vectorType,
         llvmTy.cast<LLVM::LLVMArrayType>().getNumElements());
     llvmTy = llvmTy.cast<LLVM::LLVMArrayType>().getElementType();
   }
-  if (!llvmTy.isa<LLVM::LLVMVectorType>())
+  if (!LLVM::isCompatibleVectorType(llvmTy))
     return info;
   info.llvmVectorTy = llvmTy;
   return info;
@@ -2484,7 +2484,7 @@ struct RsqrtOpLowering : public ConvertOpToLLVMPattern<RsqrtOp> {
 
     if (!operandType.isa<LLVM::LLVMArrayType>()) {
       LLVM::ConstantOp one;
-      if (operandType.isa<LLVM::LLVMVectorType>()) {
+      if (LLVM::isCompatibleVectorType(operandType)) {
         one = rewriter.create<LLVM::ConstantOp>(
             loc, operandType,
             SplatElementsAttr::get(resultType.cast<ShapedType>(), floatOne));
@@ -2505,8 +2505,7 @@ struct RsqrtOpLowering : public ConvertOpToLLVMPattern<RsqrtOp> {
         [&](Type llvmVectorTy, ValueRange operands) {
           auto splatAttr = SplatElementsAttr::get(
               mlir::VectorType::get(
-                  {llvmVectorTy.cast<LLVM::LLVMFixedVectorType>()
-                       .getNumElements()},
+                  {LLVM::getVectorNumElements(llvmVectorTy).getFixedValue()},
                   floatType),
               floatOne);
           auto one =

diff  --git a/mlir/lib/Conversion/VectorToLLVM/ConvertVectorToLLVM.cpp b/mlir/lib/Conversion/VectorToLLVM/ConvertVectorToLLVM.cpp
index 5dd0b028767a..9e4c8bd127fa 100644
--- a/mlir/lib/Conversion/VectorToLLVM/ConvertVectorToLLVM.cpp
+++ b/mlir/lib/Conversion/VectorToLLVM/ConvertVectorToLLVM.cpp
@@ -182,7 +182,7 @@ static LogicalResult getIndexedPtrs(ConversionPatternRewriter &rewriter,
   if (failed(getBase(rewriter, loc, memref, memRefType, base)))
     return failure();
   auto pType = MemRefDescriptor(memref).getElementPtrType();
-  auto ptrsType = LLVM::LLVMFixedVectorType::get(pType, vType.getDimSize(0));
+  auto ptrsType = LLVM::getFixedVectorType(pType, vType.getDimSize(0));
   ptrs = rewriter.create<LLVM::GEPOp>(loc, ptrsType, base, indices);
   return success();
 }
@@ -192,8 +192,7 @@ static LogicalResult getIndexedPtrs(ConversionPatternRewriter &rewriter,
 // used when source/dst memrefs are not on address space 0.
 static Value castDataPtr(ConversionPatternRewriter &rewriter, Location loc,
                          Value ptr, MemRefType memRefType, Type vt) {
-  auto pType =
-      LLVM::LLVMPointerType::get(vt.template cast<LLVM::LLVMFixedVectorType>());
+  auto pType = LLVM::LLVMPointerType::get(vt);
   if (memRefType.getMemorySpace() == 0)
     return rewriter.create<LLVM::BitcastOp>(loc, pType, ptr);
   return rewriter.create<LLVM::AddrSpaceCastOp>(loc, pType, ptr);
@@ -1226,7 +1225,7 @@ class VectorTransferConversion : public ConvertOpToLLVMPattern<ConcreteOp> {
     //
     // TODO: when the leaf transfer rank is k > 1, we need the last `k`
     //       dimensions here.
-    unsigned vecWidth = vtp.getNumElements();
+    unsigned vecWidth = LLVM::getVectorNumElements(vtp).getFixedValue();
     unsigned lastIndex = llvm::size(xferOp.indices()) - 1;
     Value off = xferOp.indices()[lastIndex];
     Value dim = rewriter.create<DimOp>(loc, xferOp.source(), lastIndex);

diff  --git a/mlir/lib/Conversion/VectorToROCDL/VectorToROCDL.cpp b/mlir/lib/Conversion/VectorToROCDL/VectorToROCDL.cpp
index d27f097a3baa..005e7b30ea7c 100644
--- a/mlir/lib/Conversion/VectorToROCDL/VectorToROCDL.cpp
+++ b/mlir/lib/Conversion/VectorToROCDL/VectorToROCDL.cpp
@@ -78,9 +78,8 @@ class VectorTransferConversion : public ConvertOpToLLVMPattern<ConcreteOp> {
     auto toLLVMTy = [&](Type t) {
       return this->getTypeConverter()->convertType(t);
     };
-    auto vecTy = toLLVMTy(xferOp.getVectorType())
-                     .template cast<LLVM::LLVMFixedVectorType>();
-    unsigned vecWidth = vecTy.getNumElements();
+    auto vecTy = toLLVMTy(xferOp.getVectorType());
+    unsigned vecWidth = LLVM::getVectorNumElements(vecTy).getFixedValue();
     Location loc = xferOp->getLoc();
 
     // The backend result vector scalarization have trouble scalarize
@@ -120,18 +119,13 @@ class VectorTransferConversion : public ConvertOpToLLVMPattern<ConcreteOp> {
     // to it.
     Type i64Ty = rewriter.getIntegerType(64);
     Value i64x2Ty = rewriter.create<LLVM::BitcastOp>(
-        loc,
-        LLVM::LLVMFixedVectorType::get(toLLVMTy(i64Ty).template cast<Type>(),
-                                       2),
-        constConfig);
+        loc, LLVM::getFixedVectorType(toLLVMTy(i64Ty), 2), constConfig);
     Value dataPtrAsI64 = rewriter.create<LLVM::PtrToIntOp>(
         loc, toLLVMTy(i64Ty).template cast<Type>(), dataPtr);
     Value zero = this->createIndexConstant(rewriter, loc, 0);
     Value dwordConfig = rewriter.create<LLVM::InsertElementOp>(
-        loc,
-        LLVM::LLVMFixedVectorType::get(toLLVMTy(i64Ty).template cast<Type>(),
-                                       2),
-        i64x2Ty, dataPtrAsI64, zero);
+        loc, LLVM::getFixedVectorType(toLLVMTy(i64Ty), 2), i64x2Ty,
+        dataPtrAsI64, zero);
     dwordConfig =
         rewriter.create<LLVM::BitcastOp>(loc, toLLVMTy(i32Vecx4), dwordConfig);
 

diff  --git a/mlir/lib/Dialect/LLVMIR/IR/LLVMDialect.cpp b/mlir/lib/Dialect/LLVMIR/IR/LLVMDialect.cpp
index 3c9329ff0eb5..b895c904d623 100644
--- a/mlir/lib/Dialect/LLVMIR/IR/LLVMDialect.cpp
+++ b/mlir/lib/Dialect/LLVMIR/IR/LLVMDialect.cpp
@@ -150,9 +150,9 @@ static ParseResult parseCmpOp(OpAsmParser &parser, OperationState &result) {
   if (!isCompatibleType(type))
     return parser.emitError(trailingTypeLoc,
                             "expected LLVM dialect-compatible type");
-  if (auto vecArgType = type.dyn_cast<LLVM::LLVMFixedVectorType>())
-    resultType =
-        LLVMFixedVectorType::get(resultType, vecArgType.getNumElements());
+  if (LLVM::isCompatibleVectorType(type))
+    resultType = LLVM::getFixedVectorType(
+        resultType, LLVM::getVectorNumElements(type).getFixedValue());
   assert(!type.isa<LLVM::LLVMScalableVectorType>() &&
          "unhandled scalable vector");
 
@@ -913,8 +913,8 @@ static ParseResult parseCallOp(OpAsmParser &parser, OperationState &result) {
 void LLVM::ExtractElementOp::build(OpBuilder &b, OperationState &result,
                                    Value vector, Value position,
                                    ArrayRef<NamedAttribute> attrs) {
-  auto vectorType = vector.getType().cast<LLVM::LLVMVectorType>();
-  auto llvmType = vectorType.getElementType();
+  auto vectorType = vector.getType();
+  auto llvmType = LLVM::getVectorElementType(vectorType);
   build(b, result, llvmType, vector, position);
   result.addAttributes(attrs);
 }
@@ -941,11 +941,10 @@ static ParseResult parseExtractElementOp(OpAsmParser &parser,
       parser.resolveOperand(vector, type, result.operands) ||
       parser.resolveOperand(position, positionType, result.operands))
     return failure();
-  auto vectorType = type.dyn_cast<LLVM::LLVMVectorType>();
-  if (!vectorType)
+  if (!LLVM::isCompatibleVectorType(type))
     return parser.emitError(
-        loc, "expected LLVM IR dialect vector type for operand #1");
-  result.addTypes(vectorType.getElementType());
+        loc, "expected LLVM dialect-compatible vector type for operand #1");
+  result.addTypes(LLVM::getVectorElementType(type));
   return success();
 }
 
@@ -1057,11 +1056,10 @@ static ParseResult parseInsertElementOp(OpAsmParser &parser,
       parser.parseColonType(vectorType))
     return failure();
 
-  auto llvmVectorType = vectorType.dyn_cast<LLVM::LLVMVectorType>();
-  if (!llvmVectorType)
+  if (!LLVM::isCompatibleVectorType(vectorType))
     return parser.emitError(
-        loc, "expected LLVM IR dialect vector type for operand #1");
-  Type valueType = llvmVectorType.getElementType();
+        loc, "expected LLVM dialect-compatible vector type for operand #1");
+  Type valueType = LLVM::getVectorElementType(vectorType);
   if (!valueType)
     return failure();
 
@@ -1278,21 +1276,8 @@ static LogicalResult verifyCast(DialectCastOp op, Type llvmType, Type type,
 
   // Vectors are compatible if they are 1D non-scalable, and their element types
   // are compatible.
-  if (auto vectorType = type.dyn_cast<VectorType>()) {
-    if (vectorType.getRank() != 1)
-      return op->emitOpError("only 1-d vector is allowed");
-
-    auto llvmVector = llvmType.dyn_cast<LLVMFixedVectorType>();
-    if (!llvmVector)
-      return op->emitOpError("only fixed-sized vector is allowed");
-
-    if (vectorType.getDimSize(0) != llvmVector.getNumElements())
-      return op->emitOpError(
-          "invalid cast between vectors with mismatching sizes");
-
-    return verifyCast(op, llvmVector.getElementType(),
-                      vectorType.getElementType(), /*isElement=*/true);
-  }
+  if (auto vectorType = type.dyn_cast<VectorType>())
+    return op.emitOpError("vector types should not be cast");
 
   if (auto memrefType = type.dyn_cast<MemRefType>()) {
     // Bare pointer convention: statically-shaped memref is compatible with an
@@ -1543,9 +1528,9 @@ static LogicalResult verify(GlobalOp op) {
 void LLVM::ShuffleVectorOp::build(OpBuilder &b, OperationState &result,
                                   Value v1, Value v2, ArrayAttr mask,
                                   ArrayRef<NamedAttribute> attrs) {
-  auto containerType = v1.getType().cast<LLVM::LLVMVectorType>();
-  auto vType =
-      LLVMFixedVectorType::get(containerType.getElementType(), mask.size());
+  auto containerType = v1.getType();
+  auto vType = LLVM::getFixedVectorType(
+      LLVM::getVectorElementType(containerType), mask.size());
   build(b, result, vType, v1, v2, mask);
   result.addAttributes(attrs);
 }
@@ -1575,12 +1560,11 @@ static ParseResult parseShuffleVectorOp(OpAsmParser &parser,
       parser.resolveOperand(v1, typeV1, result.operands) ||
       parser.resolveOperand(v2, typeV2, result.operands))
     return failure();
-  auto containerType = typeV1.dyn_cast<LLVM::LLVMVectorType>();
-  if (!containerType)
+  if (!LLVM::isCompatibleVectorType(typeV1))
     return parser.emitError(
         loc, "expected LLVM IR dialect vector type for operand #1");
-  auto vType =
-      LLVMFixedVectorType::get(containerType.getElementType(), maskAttr.size());
+  auto vType = LLVM::getFixedVectorType(LLVM::getVectorElementType(typeV1),
+                                        maskAttr.size());
   result.addTypes(vType);
   return success();
 }

diff --git a/mlir/lib/Dialect/LLVMIR/IR/LLVMTypeSyntax.cpp b/mlir/lib/Dialect/LLVMIR/IR/LLVMTypeSyntax.cpp
index 18a4262bcaf8..3ff69006da42 100644
--- a/mlir/lib/Dialect/LLVMIR/IR/LLVMTypeSyntax.cpp
+++ b/mlir/lib/Dialect/LLVMIR/IR/LLVMTypeSyntax.cpp
@@ -24,8 +24,7 @@ using namespace mlir::LLVM;
 /// internal functions to avoid getting a verbose `!llvm` prefix. Otherwise
 /// prints it as usual.
 static void dispatchPrint(DialectAsmPrinter &printer, Type type) {
-  if (isCompatibleType(type) && !type.isa<IntegerType>() &&
-      !type.isa<FloatType>())
+  if (isCompatibleType(type) && !type.isa<IntegerType, FloatType, VectorType>())
     return mlir::LLVM::detail::printType(type, printer);
   printer.printType(type);
 }
@@ -43,7 +42,8 @@ static StringRef getTypeKeyword(Type type) {
       .Case<LLVMMetadataType>([&](Type) { return "metadata"; })
       .Case<LLVMFunctionType>([&](Type) { return "func"; })
       .Case<LLVMPointerType>([&](Type) { return "ptr"; })
-      .Case<LLVMVectorType>([&](Type) { return "vec"; })
+      .Case<LLVMFixedVectorType, LLVMScalableVectorType>(
+          [&](Type) { return "vec"; })
       .Case<LLVMArrayType>([&](Type) { return "array"; })
       .Case<LLVMStructType>([&](Type) { return "struct"; })
       .Default([](Type) -> StringRef {
@@ -236,7 +236,7 @@ static LLVMPointerType parsePointerType(DialectAsmParser &parser) {
 /// Parses an LLVM dialect vector type.
 ///   llvm-type ::= `vec<` `? x`? integer `x` llvm-type `>`
 /// Supports both fixed and scalable vectors.
-static LLVMVectorType parseVectorType(DialectAsmParser &parser) {
+static Type parseVectorType(DialectAsmParser &parser) {
   SmallVector<int64_t, 2> dims;
   llvm::SMLoc dimPos;
   Type elementType;
@@ -244,7 +244,7 @@ static LLVMVectorType parseVectorType(DialectAsmParser &parser) {
   if (parser.parseLess() || parser.getCurrentLocation(&dimPos) ||
       parser.parseDimensionList(dims, /*allowDynamic=*/true) ||
       dispatchParse(parser, elementType) || parser.parseGreater())
-    return LLVMVectorType();
+    return Type();
 
   // We parsed a generic dimension list, but vectors only support two forms:
   //  - single non-dynamic entry in the list (fixed vector);
@@ -255,12 +255,14 @@ static LLVMVectorType parseVectorType(DialectAsmParser &parser) {
       (dims.size() == 2 && dims[1] == -1)) {
     parser.emitError(dimPos)
         << "expected '? x <integer> x <type>' or '<integer> x <type>'";
-    return LLVMVectorType();
+    return Type();
   }
 
   bool isScalable = dims.size() == 2;
   if (isScalable)
     return LLVMScalableVectorType::getChecked(loc, elementType, dims[1]);
+  if (elementType.isSignlessIntOrFloat())
+    return VectorType::getChecked(loc, dims, elementType);
   return LLVMFixedVectorType::getChecked(loc, elementType, dims[0]);
 }
 

diff --git a/mlir/lib/Dialect/LLVMIR/IR/LLVMTypes.cpp b/mlir/lib/Dialect/LLVMIR/IR/LLVMTypes.cpp
index ce6f052eb871..ace7194011ac 100644
--- a/mlir/lib/Dialect/LLVMIR/IR/LLVMTypes.cpp
+++ b/mlir/lib/Dialect/LLVMIR/IR/LLVMTypes.cpp
@@ -236,38 +236,15 @@ LogicalResult LLVMStructType::verifyConstructionInvariants(Location loc,
 // Vector types.
 //===----------------------------------------------------------------------===//
 
-bool LLVMVectorType::isValidElementType(Type type) {
-  if (auto intType = type.dyn_cast<IntegerType>())
-    return intType.isSignless();
-  return type.isa<LLVMPointerType>() ||
-         mlir::LLVM::isCompatibleFloatingPointType(type);
-}
-
-/// Support type casting functionality.
-bool LLVMVectorType::classof(Type type) {
-  return type.isa<LLVMFixedVectorType, LLVMScalableVectorType>();
-}
-
-Type LLVMVectorType::getElementType() {
-  // Both derived classes share the implementation type.
-  return static_cast<detail::LLVMTypeAndSizeStorage *>(impl)->elementType;
-}
-
-llvm::ElementCount LLVMVectorType::getElementCount() {
-  // Both derived classes share the implementation type.
-  return llvm::ElementCount::get(
-      static_cast<detail::LLVMTypeAndSizeStorage *>(impl)->numElements,
-      isa<LLVMScalableVectorType>());
-}
-
 /// Verifies that the type about to be constructed is well-formed.
-LogicalResult
-LLVMVectorType::verifyConstructionInvariants(Location loc, Type elementType,
-                                             unsigned numElements) {
+template <typename VecTy>
+static LogicalResult verifyVectorConstructionInvariants(Location loc,
+                                                        Type elementType,
+                                                        unsigned numElements) {
   if (numElements == 0)
     return emitError(loc, "the number of vector elements must be positive");
 
-  if (!isValidElementType(elementType))
+  if (!VecTy::isValidElementType(elementType))
     return emitError(loc, "invalid vector element type");
 
   return success();
@@ -286,10 +263,29 @@ LLVMFixedVectorType LLVMFixedVectorType::getChecked(Location loc,
   return Base::getChecked(loc, elementType, numElements);
 }
 
+Type LLVMFixedVectorType::getElementType() {
+  return static_cast<detail::LLVMTypeAndSizeStorage *>(impl)->elementType;
+}
+
 unsigned LLVMFixedVectorType::getNumElements() {
   return getImpl()->numElements;
 }
 
+bool LLVMFixedVectorType::isValidElementType(Type type) {
+  return type
+      .isa<LLVMPointerType, LLVMX86FP80Type, LLVMFP128Type, LLVMPPCFP128Type>();
+}
+
+LogicalResult LLVMFixedVectorType::verifyConstructionInvariants(
+    Location loc, Type elementType, unsigned numElements) {
+  return verifyVectorConstructionInvariants<LLVMFixedVectorType>(
+      loc, elementType, numElements);
+}
+
+//===----------------------------------------------------------------------===//
+// LLVMScalableVectorType.
+//===----------------------------------------------------------------------===//
+
 LLVMScalableVectorType LLVMScalableVectorType::get(Type elementType,
                                                    unsigned minNumElements) {
   assert(elementType && "expected non-null subtype");
@@ -303,10 +299,27 @@ LLVMScalableVectorType::getChecked(Location loc, Type elementType,
   return Base::getChecked(loc, elementType, minNumElements);
 }
 
+Type LLVMScalableVectorType::getElementType() {
+  return static_cast<detail::LLVMTypeAndSizeStorage *>(impl)->elementType;
+}
+
 unsigned LLVMScalableVectorType::getMinNumElements() {
   return getImpl()->numElements;
 }
 
+bool LLVMScalableVectorType::isValidElementType(Type type) {
+  if (auto intType = type.dyn_cast<IntegerType>())
+    return intType.isSignless();
+
+  return isCompatibleFloatingPointType(type) || type.isa<LLVMPointerType>();
+}
+
+LogicalResult LLVMScalableVectorType::verifyConstructionInvariants(
+    Location loc, Type elementType, unsigned numElements) {
+  return verifyVectorConstructionInvariants<LLVMScalableVectorType>(
+      loc, elementType, numElements);
+}
+
 //===----------------------------------------------------------------------===//
 // Utility functions.
 //===----------------------------------------------------------------------===//
@@ -316,6 +329,10 @@ bool mlir::LLVM::isCompatibleType(Type type) {
   if (auto intType = type.dyn_cast<IntegerType>())
     return intType.isSignless();
 
+  // 1D vector types are compatible if their element types are.
+  if (auto vecType = type.dyn_cast<VectorType>())
+    return vecType.getRank() == 1 && isCompatibleType(vecType.getElementType());
+
   // clang-format off
   return type.isa<
       BFloat16Type,
@@ -331,7 +348,8 @@ bool mlir::LLVM::isCompatibleType(Type type) {
       LLVMPointerType,
       LLVMStructType,
       LLVMTokenType,
-      LLVMVectorType,
+      LLVMFixedVectorType,
+      LLVMScalableVectorType,
       LLVMVoidType,
       LLVMX86FP80Type,
       LLVMX86MMXType
@@ -344,6 +362,55 @@ bool mlir::LLVM::isCompatibleFloatingPointType(Type type) {
                   LLVMFP128Type, LLVMPPCFP128Type, LLVMX86FP80Type>();
 }
 
+bool mlir::LLVM::isCompatibleVectorType(Type type) {
+  if (type.isa<LLVMFixedVectorType, LLVMScalableVectorType>())
+    return true;
+
+  if (auto vecType = type.dyn_cast<VectorType>()) {
+    if (vecType.getRank() != 1)
+      return false;
+    Type elementType = vecType.getElementType();
+    if (auto intType = elementType.dyn_cast<IntegerType>())
+      return intType.isSignless();
+    return elementType
+        .isa<BFloat16Type, Float16Type, Float32Type, Float64Type>();
+  }
+  return false;
+}
+
+Type mlir::LLVM::getVectorElementType(Type type) {
+  return llvm::TypeSwitch<Type, Type>(type)
+      .Case<LLVMFixedVectorType, LLVMScalableVectorType, VectorType>(
+          [](auto ty) { return ty.getElementType(); })
+      .Default([](Type) -> Type {
+        llvm_unreachable("incompatible with LLVM vector type");
+      });
+}
+
+llvm::ElementCount mlir::LLVM::getVectorNumElements(Type type) {
+  return llvm::TypeSwitch<Type, llvm::ElementCount>(type)
+      .Case<LLVMFixedVectorType, VectorType>([](auto ty) {
+        return llvm::ElementCount::getFixed(ty.getNumElements());
+      })
+      .Case([](LLVMScalableVectorType ty) {
+        return llvm::ElementCount::getScalable(ty.getMinNumElements());
+      })
+      .Default([](Type) -> llvm::ElementCount {
+        llvm_unreachable("incompatible with LLVM vector type");
+      });
+}
+
+Type mlir::LLVM::getFixedVectorType(Type elementType, unsigned numElements) {
+  bool useLLVM = LLVMFixedVectorType::isValidElementType(elementType);
+  bool useBuiltIn = VectorType::isValidElementType(elementType);
+  (void)useBuiltIn;
+  assert((useLLVM ^ useBuiltIn) && "expected LLVM-compatible fixed-vector type "
+                                   "to be either builtin or LLVM dialect type");
+  if (useLLVM)
+    return LLVMFixedVectorType::get(elementType, numElements);
+  return VectorType::get(numElements, elementType);
+}
+
 llvm::TypeSize mlir::LLVM::getPrimitiveTypeSizeInBits(Type type) {
   assert(isCompatibleType(type) &&
          "expected a type compatible with the LLVM dialect");
@@ -360,15 +427,19 @@ llvm::TypeSize mlir::LLVM::getPrimitiveTypeSizeInBits(Type type) {
       .Case<LLVMX86FP80Type>([](Type) { return llvm::TypeSize::Fixed(80); })
       .Case<LLVMPPCFP128Type, LLVMFP128Type>(
           [](Type) { return llvm::TypeSize::Fixed(128); })
-      .Case<LLVMVectorType>([](LLVMVectorType t) {
+      .Case<LLVMFixedVectorType>([](LLVMFixedVectorType t) {
+        llvm::TypeSize elementSize =
+            getPrimitiveTypeSizeInBits(t.getElementType());
+        return llvm::TypeSize(elementSize.getFixedSize() * t.getNumElements(),
+                              elementSize.isScalable());
+      })
+      .Case<VectorType>([](VectorType t) {
+        assert(isCompatibleVectorType(t) &&
+               "expected vector type compatible with the LLVM dialect");
         llvm::TypeSize elementSize =
             getPrimitiveTypeSizeInBits(t.getElementType());
-        llvm::ElementCount elementCount = t.getElementCount();
-        assert(!elementSize.isScalable() &&
-               "vector type should have fixed-width elements");
-        return llvm::TypeSize(elementSize.getFixedSize() *
-                                  elementCount.getKnownMinValue(),
-                              elementCount.isScalable());
+        return llvm::TypeSize(elementSize.getFixedSize() * t.getNumElements(),
+                              elementSize.isScalable());
       })
       .Default([](Type ty) {
         assert((ty.isa<LLVMVoidType, LLVMLabelType, LLVMMetadataType,

diff --git a/mlir/lib/Dialect/LLVMIR/IR/NVVMDialect.cpp b/mlir/lib/Dialect/LLVMIR/IR/NVVMDialect.cpp
index 8bfbbf857b70..06e7378d5fe1 100644
--- a/mlir/lib/Dialect/LLVMIR/IR/NVVMDialect.cpp
+++ b/mlir/lib/Dialect/LLVMIR/IR/NVVMDialect.cpp
@@ -87,7 +87,7 @@ static ParseResult parseNVVMVoteBallotOp(OpAsmParser &parser,
 static LogicalResult verify(MmaOp op) {
   MLIRContext *context = op.getContext();
   auto f16Ty = Float16Type::get(context);
-  auto f16x2Ty = LLVM::LLVMFixedVectorType::get(f16Ty, 2);
+  auto f16x2Ty = LLVM::getFixedVectorType(f16Ty, 2);
   auto f32Ty = Float32Type::get(context);
   auto f16x2x4StructTy = LLVM::LLVMStructType::getLiteral(
       context, {f16x2Ty, f16x2Ty, f16x2Ty, f16x2Ty});

diff --git a/mlir/lib/Dialect/LLVMIR/IR/ROCDLDialect.cpp b/mlir/lib/Dialect/LLVMIR/IR/ROCDLDialect.cpp
index da1b73ac565a..1cdceaf78904 100644
--- a/mlir/lib/Dialect/LLVMIR/IR/ROCDLDialect.cpp
+++ b/mlir/lib/Dialect/LLVMIR/IR/ROCDLDialect.cpp
@@ -48,7 +48,7 @@ static ParseResult parseROCDLMubufLoadOp(OpAsmParser &parser,
   MLIRContext *context = parser.getBuilder().getContext();
   auto int32Ty = IntegerType::get(context, 32);
   auto int1Ty = IntegerType::get(context, 1);
-  auto i32x4Ty = LLVM::LLVMFixedVectorType::get(int32Ty, 4);
+  auto i32x4Ty = LLVM::getFixedVectorType(int32Ty, 4);
   return parser.resolveOperands(ops,
                                 {i32x4Ty, int32Ty, int32Ty, int1Ty, int1Ty},
                                 parser.getNameLoc(), result.operands);
@@ -67,7 +67,7 @@ static ParseResult parseROCDLMubufStoreOp(OpAsmParser &parser,
   MLIRContext *context = parser.getBuilder().getContext();
   auto int32Ty = IntegerType::get(context, 32);
   auto int1Ty = IntegerType::get(context, 1);
-  auto i32x4Ty = LLVM::LLVMFixedVectorType::get(int32Ty, 4);
+  auto i32x4Ty = LLVM::getFixedVectorType(int32Ty, 4);
 
   if (parser.resolveOperands(ops,
                              {type, i32x4Ty, int32Ty, int32Ty, int1Ty, int1Ty},

diff --git a/mlir/lib/Target/LLVMIR/ConvertFromLLVMIR.cpp b/mlir/lib/Target/LLVMIR/ConvertFromLLVMIR.cpp
index 93a587bf8e64..863ab021301f 100644
--- a/mlir/lib/Target/LLVMIR/ConvertFromLLVMIR.cpp
+++ b/mlir/lib/Target/LLVMIR/ConvertFromLLVMIR.cpp
@@ -176,13 +176,13 @@ Type Importer::getStdTypeForAttr(Type type) {
     return type;
 
   // LLVM vectors can only contain scalars.
-  if (auto vectorType = type.dyn_cast<LLVM::LLVMVectorType>()) {
-    auto numElements = vectorType.getElementCount();
+  if (LLVM::isCompatibleVectorType(type)) {
+    auto numElements = LLVM::getVectorNumElements(type);
     if (numElements.isScalable()) {
       emitError(unknownLoc) << "scalable vectors not supported";
       return nullptr;
     }
-    Type elementType = getStdTypeForAttr(vectorType.getElementType());
+    Type elementType = getStdTypeForAttr(LLVM::getVectorElementType(type));
     if (!elementType)
       return nullptr;
     return VectorType::get(numElements.getKnownMinValue(), elementType);
@@ -200,16 +200,16 @@ Type Importer::getStdTypeForAttr(Type type) {
 
     // If the innermost type is a vector, use the multi-dimensional vector as
     // attribute type.
-    if (auto vectorType =
-            arrayType.getElementType().dyn_cast<LLVMVectorType>()) {
-      auto numElements = vectorType.getElementCount();
+    if (LLVM::isCompatibleVectorType(arrayType.getElementType())) {
+      auto numElements = LLVM::getVectorNumElements(arrayType.getElementType());
       if (numElements.isScalable()) {
         emitError(unknownLoc) << "scalable vectors not supported";
         return nullptr;
       }
       shape.push_back(numElements.getKnownMinValue());
 
-      Type elementType = getStdTypeForAttr(vectorType.getElementType());
+      Type elementType = getStdTypeForAttr(
+          LLVM::getVectorElementType(arrayType.getElementType()));
       if (!elementType)
         return nullptr;
       return VectorType::get(shape, elementType);

diff --git a/mlir/lib/Target/LLVMIR/TypeTranslation.cpp b/mlir/lib/Target/LLVMIR/TypeTranslation.cpp
index bb773f09cf2c..50f4836fba28 100644
--- a/mlir/lib/Target/LLVMIR/TypeTranslation.cpp
+++ b/mlir/lib/Target/LLVMIR/TypeTranslation.cpp
@@ -72,7 +72,8 @@ class TypeToLLVMIRTranslatorImpl {
             })
             .Case<LLVM::LLVMArrayType, IntegerType, LLVM::LLVMFunctionType,
                   LLVM::LLVMPointerType, LLVM::LLVMStructType,
-                  LLVM::LLVMFixedVectorType, LLVM::LLVMScalableVectorType>(
+                  LLVM::LLVMFixedVectorType, LLVM::LLVMScalableVectorType,
+                  VectorType>(
                 [this](auto type) { return this->translate(type); })
             .Default([](Type t) -> llvm::Type * {
               llvm_unreachable("unknown LLVM dialect type");
@@ -132,6 +133,14 @@ class TypeToLLVMIRTranslatorImpl {
     return structType;
   }
 
+  /// Translates the given built-in vector type compatible with LLVM.
+  llvm::Type *translate(VectorType type) {
+    assert(LLVM::isCompatibleVectorType(type) &&
+           "expected vector type compatible with the LLVM dialect");
+    return llvm::FixedVectorType::get(translateType(type.getElementType()),
+                                      type.getNumElements());
+  }
+
   /// Translates the given fixed-vector type.
   llvm::Type *translate(LLVM::LLVMFixedVectorType type) {
     return llvm::FixedVectorType::get(translateType(type.getElementType()),
@@ -285,8 +294,8 @@ class TypeFromLLVMIRTranslatorImpl {
 
   /// Translates the given fixed-vector type.
   Type translate(llvm::FixedVectorType *type) {
-    return LLVM::LLVMFixedVectorType::get(translateType(type->getElementType()),
-                                          type->getNumElements());
+    return LLVM::getFixedVectorType(translateType(type->getElementType()),
+                                    type->getNumElements());
   }
 
   /// Translates the given scalable-vector type.

diff --git a/mlir/test/Conversion/ArmNeonToLLVM/convert-to-llvm.mlir b/mlir/test/Conversion/ArmNeonToLLVM/convert-to-llvm.mlir
index fe56052fe734..d95abf4dd50e 100644
--- a/mlir/test/Conversion/ArmNeonToLLVM/convert-to-llvm.mlir
+++ b/mlir/test/Conversion/ArmNeonToLLVM/convert-to-llvm.mlir
@@ -3,17 +3,17 @@
 // CHECK-LABEL: arm_neon_smull
 func @arm_neon_smull(%a: vector<8xi8>, %b: vector<8xi8>)
     -> (vector<8xi16>, vector<4xi32>, vector<2xi64>) {
-  // CHECK: arm_neon.smull{{.*}}: (!llvm.vec<8 x i8>, !llvm.vec<8 x i8>) -> !llvm.vec<8 x i16>
+  // CHECK: arm_neon.smull{{.*}}: (vector<8xi8>, vector<8xi8>) -> vector<8xi16>
   %0 = arm_neon.smull %a, %b : vector<8xi8> to vector<8xi16>
   %00 = vector.extract_strided_slice %0 {offsets = [3], sizes = [4], strides = [1]}:
     vector<8xi16> to vector<4xi16>
 
-  // CHECK: arm_neon.smull{{.*}}: (!llvm.vec<4 x i16>, !llvm.vec<4 x i16>) -> !llvm.vec<4 x i32>
+  // CHECK: arm_neon.smull{{.*}}: (vector<4xi16>, vector<4xi16>) -> vector<4xi32>
   %1 = arm_neon.smull %00, %00 : vector<4xi16> to vector<4xi32>
   %11 = vector.extract_strided_slice %1 {offsets = [1], sizes = [2], strides = [1]}:
     vector<4xi32> to vector<2xi32>
 
-  // CHECK: arm_neon.smull{{.*}}: (!llvm.vec<2 x i32>, !llvm.vec<2 x i32>) -> !llvm.vec<2 x i64>
+  // CHECK: arm_neon.smull{{.*}}: (vector<2xi32>, vector<2xi32>) -> vector<2xi64>
   %2 = arm_neon.smull %11, %11 : vector<2xi32> to vector<2xi64>
 
   return %0, %1, %2 : vector<8xi16>, vector<4xi32>, vector<2xi64>

diff --git a/mlir/test/Conversion/SPIRVToLLVM/arithmetic-ops-to-llvm.mlir b/mlir/test/Conversion/SPIRVToLLVM/arithmetic-ops-to-llvm.mlir
index 0e8dfab78855..5a1a3eb6209b 100644
--- a/mlir/test/Conversion/SPIRVToLLVM/arithmetic-ops-to-llvm.mlir
+++ b/mlir/test/Conversion/SPIRVToLLVM/arithmetic-ops-to-llvm.mlir
@@ -13,7 +13,7 @@ spv.func @iadd_scalar(%arg0: i32, %arg1: i32) "None" {
 
 // CHECK-LABEL: @iadd_vector
 spv.func @iadd_vector(%arg0: vector<4xi64>, %arg1: vector<4xi64>) "None" {
-  // CHECK: llvm.add %{{.*}}, %{{.*}} : !llvm.vec<4 x i64>
+  // CHECK: llvm.add %{{.*}}, %{{.*}} : vector<4xi64>
   %0 = spv.IAdd %arg0, %arg1 : vector<4xi64>
   spv.Return
 }
@@ -31,7 +31,7 @@ spv.func @isub_scalar(%arg0: i8, %arg1: i8) "None" {
 
 // CHECK-LABEL: @isub_vector
 spv.func @isub_vector(%arg0: vector<2xi16>, %arg1: vector<2xi16>) "None" {
-  // CHECK: llvm.sub %{{.*}}, %{{.*}} : !llvm.vec<2 x i16>
+  // CHECK: llvm.sub %{{.*}}, %{{.*}} : vector<2xi16>
   %0 = spv.ISub %arg0, %arg1 : vector<2xi16>
   spv.Return
 }
@@ -49,7 +49,7 @@ spv.func @imul_scalar(%arg0: i32, %arg1: i32) "None" {
 
 // CHECK-LABEL: @imul_vector
 spv.func @imul_vector(%arg0: vector<3xi32>, %arg1: vector<3xi32>) "None" {
-  // CHECK: llvm.mul %{{.*}}, %{{.*}} : !llvm.vec<3 x i32>
+  // CHECK: llvm.mul %{{.*}}, %{{.*}} : vector<3xi32>
   %0 = spv.IMul %arg0, %arg1 : vector<3xi32>
   spv.Return
 }
@@ -67,7 +67,7 @@ spv.func @fadd_scalar(%arg0: f16, %arg1: f16) "None" {
 
 // CHECK-LABEL: @fadd_vector
 spv.func @fadd_vector(%arg0: vector<4xf32>, %arg1: vector<4xf32>) "None" {
-  // CHECK: llvm.fadd %{{.*}}, %{{.*}} : !llvm.vec<4 x f32>
+  // CHECK: llvm.fadd %{{.*}}, %{{.*}} : vector<4xf32>
   %0 = spv.FAdd %arg0, %arg1 : vector<4xf32>
   spv.Return
 }
@@ -85,7 +85,7 @@ spv.func @fsub_scalar(%arg0: f32, %arg1: f32) "None" {
 
 // CHECK-LABEL: @fsub_vector
 spv.func @fsub_vector(%arg0: vector<2xf32>, %arg1: vector<2xf32>) "None" {
-  // CHECK: llvm.fsub %{{.*}}, %{{.*}} : !llvm.vec<2 x f32>
+  // CHECK: llvm.fsub %{{.*}}, %{{.*}} : vector<2xf32>
   %0 = spv.FSub %arg0, %arg1 : vector<2xf32>
   spv.Return
 }
@@ -103,7 +103,7 @@ spv.func @fdiv_scalar(%arg0: f32, %arg1: f32) "None" {
 
 // CHECK-LABEL: @fdiv_vector
 spv.func @fdiv_vector(%arg0: vector<3xf64>, %arg1: vector<3xf64>) "None" {
-  // CHECK: llvm.fdiv %{{.*}}, %{{.*}} : !llvm.vec<3 x f64>
+  // CHECK: llvm.fdiv %{{.*}}, %{{.*}} : vector<3xf64>
   %0 = spv.FDiv %arg0, %arg1 : vector<3xf64>
   spv.Return
 }
@@ -121,7 +121,7 @@ spv.func @fmul_scalar(%arg0: f32, %arg1: f32) "None" {
 
 // CHECK-LABEL: @fmul_vector
 spv.func @fmul_vector(%arg0: vector<2xf32>, %arg1: vector<2xf32>) "None" {
-  // CHECK: llvm.fmul %{{.*}}, %{{.*}} : !llvm.vec<2 x f32>
+  // CHECK: llvm.fmul %{{.*}}, %{{.*}} : vector<2xf32>
   %0 = spv.FMul %arg0, %arg1 : vector<2xf32>
   spv.Return
 }
@@ -139,7 +139,7 @@ spv.func @frem_scalar(%arg0: f32, %arg1: f32) "None" {
 
 // CHECK-LABEL: @frem_vector
 spv.func @frem_vector(%arg0: vector<3xf64>, %arg1: vector<3xf64>) "None" {
-  // CHECK: llvm.frem %{{.*}}, %{{.*}} : !llvm.vec<3 x f64>
+  // CHECK: llvm.frem %{{.*}}, %{{.*}} : vector<3xf64>
   %0 = spv.FRem %arg0, %arg1 : vector<3xf64>
   spv.Return
 }
@@ -157,7 +157,7 @@ spv.func @fneg_scalar(%arg: f64) "None" {
 
 // CHECK-LABEL: @fneg_vector
 spv.func @fneg_vector(%arg: vector<2xf32>) "None" {
-  // CHECK: llvm.fneg %{{.*}} : !llvm.vec<2 x f32>
+  // CHECK: llvm.fneg %{{.*}} : vector<2xf32>
   %0 = spv.FNegate %arg : vector<2xf32>
   spv.Return
 }
@@ -175,7 +175,7 @@ spv.func @udiv_scalar(%arg0: i32, %arg1: i32) "None" {
 
 // CHECK-LABEL: @udiv_vector
 spv.func @udiv_vector(%arg0: vector<3xi64>, %arg1: vector<3xi64>) "None" {
-  // CHECK: llvm.udiv %{{.*}}, %{{.*}} : !llvm.vec<3 x i64>
+  // CHECK: llvm.udiv %{{.*}}, %{{.*}} : vector<3xi64>
   %0 = spv.UDiv %arg0, %arg1 : vector<3xi64>
   spv.Return
 }
@@ -193,7 +193,7 @@ spv.func @umod_scalar(%arg0: i32, %arg1: i32) "None" {
 
 // CHECK-LABEL: @umod_vector
 spv.func @umod_vector(%arg0: vector<3xi64>, %arg1: vector<3xi64>) "None" {
-  // CHECK: llvm.urem %{{.*}}, %{{.*}} : !llvm.vec<3 x i64>
+  // CHECK: llvm.urem %{{.*}}, %{{.*}} : vector<3xi64>
   %0 = spv.UMod %arg0, %arg1 : vector<3xi64>
   spv.Return
 }
@@ -211,7 +211,7 @@ spv.func @sdiv_scalar(%arg0: i16, %arg1: i16) "None" {
 
 // CHECK-LABEL: @sdiv_vector
 spv.func @sdiv_vector(%arg0: vector<2xi64>, %arg1: vector<2xi64>) "None" {
-  // CHECK: llvm.sdiv %{{.*}}, %{{.*}} : !llvm.vec<2 x i64>
+  // CHECK: llvm.sdiv %{{.*}}, %{{.*}} : vector<2xi64>
   %0 = spv.SDiv %arg0, %arg1 : vector<2xi64>
   spv.Return
 }
@@ -229,7 +229,7 @@ spv.func @srem_scalar(%arg0: i32, %arg1: i32) "None" {
 
 // CHECK-LABEL: @srem_vector
 spv.func @srem_vector(%arg0: vector<4xi32>, %arg1: vector<4xi32>) "None" {
-  // CHECK: llvm.srem %{{.*}}, %{{.*}} : !llvm.vec<4 x i32>
+  // CHECK: llvm.srem %{{.*}}, %{{.*}} : vector<4xi32>
   %0 = spv.SRem %arg0, %arg1 : vector<4xi32>
   spv.Return
 }

diff --git a/mlir/test/Conversion/SPIRVToLLVM/bitwise-ops-to-llvm.mlir b/mlir/test/Conversion/SPIRVToLLVM/bitwise-ops-to-llvm.mlir
index db1ac3a6d4d0..488dd7ea37d8 100644
--- a/mlir/test/Conversion/SPIRVToLLVM/bitwise-ops-to-llvm.mlir
+++ b/mlir/test/Conversion/SPIRVToLLVM/bitwise-ops-to-llvm.mlir
@@ -13,7 +13,7 @@ spv.func @bitcount_scalar(%arg0: i16) "None" {
 
 // CHECK-LABEL: @bitcount_vector
 spv.func @bitcount_vector(%arg0: vector<3xi32>) "None" {
-  // CHECK: "llvm.intr.ctpop"(%{{.*}}) : (!llvm.vec<3 x i32>) -> !llvm.vec<3 x i32>
+  // CHECK: "llvm.intr.ctpop"(%{{.*}}) : (vector<3xi32>) -> vector<3xi32>
   %0 = spv.BitCount %arg0: vector<3xi32>
   spv.Return
 }
@@ -31,7 +31,7 @@ spv.func @bitreverse_scalar(%arg0: i64) "None" {
 
 // CHECK-LABEL: @bitreverse_vector
 spv.func @bitreverse_vector(%arg0: vector<4xi32>) "None" {
-  // CHECK: "llvm.intr.bitreverse"(%{{.*}}) : (!llvm.vec<4 x i32>) -> !llvm.vec<4 x i32>
+  // CHECK: "llvm.intr.bitreverse"(%{{.*}}) : (vector<4xi32>) -> vector<4xi32>
   %0 = spv.BitReverse %arg0: vector<4xi32>
   spv.Return
 }
@@ -90,26 +90,26 @@ spv.func @bitfield_insert_scalar_greater_bit_width(%base: i16, %insert: i16, %of
 }
 
 // CHECK-LABEL: @bitfield_insert_vector
-//  CHECK-SAME: %[[BASE:.*]]: !llvm.vec<2 x i32>, %[[INSERT:.*]]: !llvm.vec<2 x i32>, %[[OFFSET:.*]]: i32, %[[COUNT:.*]]: i32
+//  CHECK-SAME: %[[BASE:.*]]: vector<2xi32>, %[[INSERT:.*]]: vector<2xi32>, %[[OFFSET:.*]]: i32, %[[COUNT:.*]]: i32
 spv.func @bitfield_insert_vector(%base: vector<2xi32>, %insert: vector<2xi32>, %offset: i32, %count: i32) "None" {
-  // CHECK: %[[OFFSET_V0:.*]] = llvm.mlir.undef : !llvm.vec<2 x i32>
+  // CHECK: %[[OFFSET_V0:.*]] = llvm.mlir.undef : vector<2xi32>
   // CHECK: %[[ZERO:.*]] = llvm.mlir.constant(0 : i32) : i32
-  // CHECK: %[[OFFSET_V1:.*]] = llvm.insertelement %[[OFFSET]], %[[OFFSET_V0]][%[[ZERO]] : i32] : !llvm.vec<2 x i32>
+  // CHECK: %[[OFFSET_V1:.*]] = llvm.insertelement %[[OFFSET]], %[[OFFSET_V0]][%[[ZERO]] : i32] : vector<2xi32>
   // CHECK: %[[ONE:.*]] = llvm.mlir.constant(1 : i32) : i32
-  // CHECK: %[[OFFSET_V2:.*]] = llvm.insertelement  %[[OFFSET]], %[[OFFSET_V1]][%[[ONE]] : i32] : !llvm.vec<2 x i32>
-  // CHECK: %[[COUNT_V0:.*]] = llvm.mlir.undef : !llvm.vec<2 x i32>
+  // CHECK: %[[OFFSET_V2:.*]] = llvm.insertelement  %[[OFFSET]], %[[OFFSET_V1]][%[[ONE]] : i32] : vector<2xi32>
+  // CHECK: %[[COUNT_V0:.*]] = llvm.mlir.undef : vector<2xi32>
   // CHECK: %[[ZERO:.*]] = llvm.mlir.constant(0 : i32) : i32
-  // CHECK: %[[COUNT_V1:.*]] = llvm.insertelement %[[COUNT]], %[[COUNT_V0]][%[[ZERO]] : i32] : !llvm.vec<2 x i32>
+  // CHECK: %[[COUNT_V1:.*]] = llvm.insertelement %[[COUNT]], %[[COUNT_V0]][%[[ZERO]] : i32] : vector<2xi32>
   // CHECK: %[[ONE:.*]] = llvm.mlir.constant(1 : i32) : i32
-  // CHECK: %[[COUNT_V2:.*]] = llvm.insertelement %[[COUNT]], %[[COUNT_V1]][%[[ONE]] : i32] : !llvm.vec<2 x i32>
-  // CHECK: %[[MINUS_ONE:.*]] = llvm.mlir.constant(dense<-1> : vector<2xi32>) : !llvm.vec<2 x i32>
-  // CHECK: %[[T0:.*]] = llvm.shl %[[MINUS_ONE]], %[[COUNT_V2]] : !llvm.vec<2 x i32>
-  // CHECK: %[[T1:.*]] = llvm.xor %[[T0]], %[[MINUS_ONE]] : !llvm.vec<2 x i32>
-  // CHECK: %[[T2:.*]] = llvm.shl %[[T1]], %[[OFFSET_V2]] : !llvm.vec<2 x i32>
-  // CHECK: %[[MASK:.*]] = llvm.xor %[[T2]], %[[MINUS_ONE]] : !llvm.vec<2 x i32>
-  // CHECK: %[[NEW_BASE:.*]] = llvm.and %[[BASE]], %[[MASK]] : !llvm.vec<2 x i32>
-  // CHECK: %[[SHIFTED_INSERT:.*]] = llvm.shl %[[INSERT]], %[[OFFSET_V2]] : !llvm.vec<2 x i32>
-  // CHECK: llvm.or %[[NEW_BASE]], %[[SHIFTED_INSERT]] : !llvm.vec<2 x i32>
+  // CHECK: %[[COUNT_V2:.*]] = llvm.insertelement %[[COUNT]], %[[COUNT_V1]][%[[ONE]] : i32] : vector<2xi32>
+  // CHECK: %[[MINUS_ONE:.*]] = llvm.mlir.constant(dense<-1> : vector<2xi32>) : vector<2xi32>
+  // CHECK: %[[T0:.*]] = llvm.shl %[[MINUS_ONE]], %[[COUNT_V2]] : vector<2xi32>
+  // CHECK: %[[T1:.*]] = llvm.xor %[[T0]], %[[MINUS_ONE]] : vector<2xi32>
+  // CHECK: %[[T2:.*]] = llvm.shl %[[T1]], %[[OFFSET_V2]] : vector<2xi32>
+  // CHECK: %[[MASK:.*]] = llvm.xor %[[T2]], %[[MINUS_ONE]] : vector<2xi32>
+  // CHECK: %[[NEW_BASE:.*]] = llvm.and %[[BASE]], %[[MASK]] : vector<2xi32>
+  // CHECK: %[[SHIFTED_INSERT:.*]] = llvm.shl %[[INSERT]], %[[OFFSET_V2]] : vector<2xi32>
+  // CHECK: llvm.or %[[NEW_BASE]], %[[SHIFTED_INSERT]] : vector<2xi32>
   %0 = spv.BitFieldInsert %base, %insert, %offset, %count : vector<2xi32>, i32, i32
   spv.Return
 }
@@ -162,24 +162,24 @@ spv.func @bitfield_sextract_scalar_greater_bit_width(%base: i32, %offset: i64, %
 }
 
 // CHECK-LABEL: @bitfield_sextract_vector
-//  CHECK-SAME: %[[BASE:.*]]: !llvm.vec<2 x i32>, %[[OFFSET:.*]]: i32, %[[COUNT:.*]]: i32
+//  CHECK-SAME: %[[BASE:.*]]: vector<2xi32>, %[[OFFSET:.*]]: i32, %[[COUNT:.*]]: i32
 spv.func @bitfield_sextract_vector(%base: vector<2xi32>, %offset: i32, %count: i32) "None" {
-  // CHECK: %[[OFFSET_V0:.*]] = llvm.mlir.undef : !llvm.vec<2 x i32>
+  // CHECK: %[[OFFSET_V0:.*]] = llvm.mlir.undef : vector<2xi32>
   // CHECK: %[[ZERO:.*]] = llvm.mlir.constant(0 : i32) : i32
-  // CHECK: %[[OFFSET_V1:.*]] = llvm.insertelement %[[OFFSET]], %[[OFFSET_V0]][%[[ZERO]] : i32] : !llvm.vec<2 x i32>
+  // CHECK: %[[OFFSET_V1:.*]] = llvm.insertelement %[[OFFSET]], %[[OFFSET_V0]][%[[ZERO]] : i32] : vector<2xi32>
   // CHECK: %[[ONE:.*]] = llvm.mlir.constant(1 : i32) : i32
-  // CHECK: %[[OFFSET_V2:.*]] = llvm.insertelement  %[[OFFSET]], %[[OFFSET_V1]][%[[ONE]] : i32] : !llvm.vec<2 x i32>
-  // CHECK: %[[COUNT_V0:.*]] = llvm.mlir.undef : !llvm.vec<2 x i32>
+  // CHECK: %[[OFFSET_V2:.*]] = llvm.insertelement  %[[OFFSET]], %[[OFFSET_V1]][%[[ONE]] : i32] : vector<2xi32>
+  // CHECK: %[[COUNT_V0:.*]] = llvm.mlir.undef : vector<2xi32>
   // CHECK: %[[ZERO:.*]] = llvm.mlir.constant(0 : i32) : i32
-  // CHECK: %[[COUNT_V1:.*]] = llvm.insertelement %[[COUNT]], %[[COUNT_V0]][%[[ZERO]] : i32] : !llvm.vec<2 x i32>
+  // CHECK: %[[COUNT_V1:.*]] = llvm.insertelement %[[COUNT]], %[[COUNT_V0]][%[[ZERO]] : i32] : vector<2xi32>
   // CHECK: %[[ONE:.*]] = llvm.mlir.constant(1 : i32) : i32
-  // CHECK: %[[COUNT_V2:.*]] = llvm.insertelement %[[COUNT]], %[[COUNT_V1]][%[[ONE]] : i32] : !llvm.vec<2 x i32>
-  // CHECK: %[[SIZE:.*]] = llvm.mlir.constant(dense<32> : vector<2xi32>) : !llvm.vec<2 x i32>
-  // CHECK: %[[T0:.*]] = llvm.add %[[COUNT_V2]], %[[OFFSET_V2]] : !llvm.vec<2 x i32>
-  // CHECK: %[[T1:.*]] = llvm.sub %[[SIZE]], %[[T0]] : !llvm.vec<2 x i32>
-  // CHECK: %[[SHIFTED_LEFT:.*]] = llvm.shl %[[BASE]], %[[T1]] : !llvm.vec<2 x i32>
-  // CHECK: %[[T2:.*]] = llvm.add %[[OFFSET_V2]], %[[T1]] : !llvm.vec<2 x i32>
-  // CHECK: llvm.ashr %[[SHIFTED_LEFT]], %[[T2]] : !llvm.vec<2 x i32>
+  // CHECK: %[[COUNT_V2:.*]] = llvm.insertelement %[[COUNT]], %[[COUNT_V1]][%[[ONE]] : i32] : vector<2xi32>
+  // CHECK: %[[SIZE:.*]] = llvm.mlir.constant(dense<32> : vector<2xi32>) : vector<2xi32>
+  // CHECK: %[[T0:.*]] = llvm.add %[[COUNT_V2]], %[[OFFSET_V2]] : vector<2xi32>
+  // CHECK: %[[T1:.*]] = llvm.sub %[[SIZE]], %[[T0]] : vector<2xi32>
+  // CHECK: %[[SHIFTED_LEFT:.*]] = llvm.shl %[[BASE]], %[[T1]] : vector<2xi32>
+  // CHECK: %[[T2:.*]] = llvm.add %[[OFFSET_V2]], %[[T1]] : vector<2xi32>
+  // CHECK: llvm.ashr %[[SHIFTED_LEFT]], %[[T2]] : vector<2xi32>
   %0 = spv.BitFieldSExtract %base, %offset, %count : vector<2xi32>, i32, i32
   spv.Return
 }
@@ -228,23 +228,23 @@ spv.func @bitfield_uextract_scalar_greater_bit_width(%base: i8, %offset: i16, %c
 }
 
 // CHECK-LABEL: @bitfield_uextract_vector
-//  CHECK-SAME: %[[BASE:.*]]: !llvm.vec<2 x i32>, %[[OFFSET:.*]]: i32, %[[COUNT:.*]]: i32
+//  CHECK-SAME: %[[BASE:.*]]: vector<2xi32>, %[[OFFSET:.*]]: i32, %[[COUNT:.*]]: i32
 spv.func @bitfield_uextract_vector(%base: vector<2xi32>, %offset: i32, %count: i32) "None" {
-  // CHECK: %[[OFFSET_V0:.*]] = llvm.mlir.undef : !llvm.vec<2 x i32>
+  // CHECK: %[[OFFSET_V0:.*]] = llvm.mlir.undef : vector<2xi32>
   // CHECK: %[[ZERO:.*]] = llvm.mlir.constant(0 : i32) : i32
-  // CHECK: %[[OFFSET_V1:.*]] = llvm.insertelement %[[OFFSET]], %[[OFFSET_V0]][%[[ZERO]] : i32] : !llvm.vec<2 x i32>
+  // CHECK: %[[OFFSET_V1:.*]] = llvm.insertelement %[[OFFSET]], %[[OFFSET_V0]][%[[ZERO]] : i32] : vector<2xi32>
   // CHECK: %[[ONE:.*]] = llvm.mlir.constant(1 : i32) : i32
-  // CHECK: %[[OFFSET_V2:.*]] = llvm.insertelement  %[[OFFSET]], %[[OFFSET_V1]][%[[ONE]] : i32] : !llvm.vec<2 x i32>
-  // CHECK: %[[COUNT_V0:.*]] = llvm.mlir.undef : !llvm.vec<2 x i32>
+  // CHECK: %[[OFFSET_V2:.*]] = llvm.insertelement  %[[OFFSET]], %[[OFFSET_V1]][%[[ONE]] : i32] : vector<2xi32>
+  // CHECK: %[[COUNT_V0:.*]] = llvm.mlir.undef : vector<2xi32>
   // CHECK: %[[ZERO:.*]] = llvm.mlir.constant(0 : i32) : i32
-  // CHECK: %[[COUNT_V1:.*]] = llvm.insertelement %[[COUNT]], %[[COUNT_V0]][%[[ZERO]] : i32] : !llvm.vec<2 x i32>
+  // CHECK: %[[COUNT_V1:.*]] = llvm.insertelement %[[COUNT]], %[[COUNT_V0]][%[[ZERO]] : i32] : vector<2xi32>
   // CHECK: %[[ONE:.*]] = llvm.mlir.constant(1 : i32) : i32
-  // CHECK: %[[COUNT_V2:.*]] = llvm.insertelement %[[COUNT]], %[[COUNT_V1]][%[[ONE]] : i32] : !llvm.vec<2 x i32>
-  // CHECK: %[[MINUS_ONE:.*]] = llvm.mlir.constant(dense<-1> : vector<2xi32>) : !llvm.vec<2 x i32>
-  // CHECK: %[[T0:.*]] = llvm.shl %[[MINUS_ONE]], %[[COUNT_V2]] : !llvm.vec<2 x i32>
-  // CHECK: %[[MASK:.*]] = llvm.xor %[[T0]], %[[MINUS_ONE]] : !llvm.vec<2 x i32>
-  // CHECK: %[[SHIFTED_BASE:.*]] = llvm.lshr %[[BASE]], %[[OFFSET_V2]] : !llvm.vec<2 x i32>
-  // CHECK: llvm.and %[[SHIFTED_BASE]], %[[MASK]] : !llvm.vec<2 x i32>
+  // CHECK: %[[COUNT_V2:.*]] = llvm.insertelement %[[COUNT]], %[[COUNT_V1]][%[[ONE]] : i32] : vector<2xi32>
+  // CHECK: %[[MINUS_ONE:.*]] = llvm.mlir.constant(dense<-1> : vector<2xi32>) : vector<2xi32>
+  // CHECK: %[[T0:.*]] = llvm.shl %[[MINUS_ONE]], %[[COUNT_V2]] : vector<2xi32>
+  // CHECK: %[[MASK:.*]] = llvm.xor %[[T0]], %[[MINUS_ONE]] : vector<2xi32>
+  // CHECK: %[[SHIFTED_BASE:.*]] = llvm.lshr %[[BASE]], %[[OFFSET_V2]] : vector<2xi32>
+  // CHECK: llvm.and %[[SHIFTED_BASE]], %[[MASK]] : vector<2xi32>
   %0 = spv.BitFieldUExtract %base, %offset, %count : vector<2xi32>, i32, i32
   spv.Return
 }
@@ -262,7 +262,7 @@ spv.func @bitwise_and_scalar(%arg0: i32, %arg1: i32) "None" {
 
 // CHECK-LABEL: @bitwise_and_vector
 spv.func @bitwise_and_vector(%arg0: vector<4xi64>, %arg1: vector<4xi64>) "None" {
-  // CHECK: llvm.and %{{.*}}, %{{.*}} : !llvm.vec<4 x i64>
+  // CHECK: llvm.and %{{.*}}, %{{.*}} : vector<4xi64>
   %0 = spv.BitwiseAnd %arg0, %arg1 : vector<4xi64>
   spv.Return
 }
@@ -280,7 +280,7 @@ spv.func @bitwise_or_scalar(%arg0: i64, %arg1: i64) "None" {
 
 // CHECK-LABEL: @bitwise_or_vector
 spv.func @bitwise_or_vector(%arg0: vector<3xi8>, %arg1: vector<3xi8>) "None" {
-  // CHECK: llvm.or %{{.*}}, %{{.*}} : !llvm.vec<3 x i8>
+  // CHECK: llvm.or %{{.*}}, %{{.*}} : vector<3xi8>
   %0 = spv.BitwiseOr %arg0, %arg1 : vector<3xi8>
   spv.Return
 }
@@ -298,7 +298,7 @@ spv.func @bitwise_xor_scalar(%arg0: i32, %arg1: i32) "None" {
 
 // CHECK-LABEL: @bitwise_xor_vector
 spv.func @bitwise_xor_vector(%arg0: vector<2xi16>, %arg1: vector<2xi16>) "None" {
-  // CHECK: llvm.xor %{{.*}}, %{{.*}} : !llvm.vec<2 x i16>
+  // CHECK: llvm.xor %{{.*}}, %{{.*}} : vector<2xi16>
   %0 = spv.BitwiseXor %arg0, %arg1 : vector<2xi16>
   spv.Return
 }
@@ -317,8 +317,8 @@ spv.func @not_scalar(%arg0: i32) "None" {
 
 // CHECK-LABEL: @not_vector
 spv.func @not_vector(%arg0: vector<2xi16>) "None" {
-  // CHECK: %[[CONST:.*]] = llvm.mlir.constant(dense<-1> : vector<2xi16>) : !llvm.vec<2 x i16>
-  // CHECK: llvm.xor %{{.*}}, %[[CONST]] : !llvm.vec<2 x i16>
+  // CHECK: %[[CONST:.*]] = llvm.mlir.constant(dense<-1> : vector<2xi16>) : vector<2xi16>
+  // CHECK: llvm.xor %{{.*}}, %[[CONST]] : vector<2xi16>
   %0 = spv.Not %arg0 : vector<2xi16>
   spv.Return
 }

diff  --git a/mlir/test/Conversion/SPIRVToLLVM/cast-ops-to-llvm.mlir b/mlir/test/Conversion/SPIRVToLLVM/cast-ops-to-llvm.mlir
index 8f67a5fcab70..bfabc8c7d0c6 100644
--- a/mlir/test/Conversion/SPIRVToLLVM/cast-ops-to-llvm.mlir
+++ b/mlir/test/Conversion/SPIRVToLLVM/cast-ops-to-llvm.mlir
@@ -13,28 +13,28 @@ spv.func @bitcast_float_to_integer_scalar(%arg0 : f32) "None" {
 
 // CHECK-LABEL: @bitcast_float_to_integer_vector
 spv.func @bitcast_float_to_integer_vector(%arg0 : vector<3xf32>) "None" {
-  // CHECK: {{.*}} = llvm.bitcast {{.*}} : !llvm.vec<3 x f32> to !llvm.vec<3 x i32>
+  // CHECK: {{.*}} = llvm.bitcast {{.*}} : vector<3xf32> to vector<3xi32>
   %0 = spv.Bitcast %arg0: vector<3xf32> to vector<3xi32>
   spv.Return
 }
 
 // CHECK-LABEL: @bitcast_vector_to_scalar
 spv.func @bitcast_vector_to_scalar(%arg0 : vector<2xf32>) "None" {
-  // CHECK: {{.*}} = llvm.bitcast {{.*}} : !llvm.vec<2 x f32> to i64
+  // CHECK: {{.*}} = llvm.bitcast {{.*}} : vector<2xf32> to i64
   %0 = spv.Bitcast %arg0: vector<2xf32> to i64
   spv.Return
 }
 
 // CHECK-LABEL: @bitcast_scalar_to_vector
 spv.func @bitcast_scalar_to_vector(%arg0 : f64) "None" {
-  // CHECK: {{.*}} = llvm.bitcast {{.*}} : f64 to !llvm.vec<2 x i32>
+  // CHECK: {{.*}} = llvm.bitcast {{.*}} : f64 to vector<2xi32>
   %0 = spv.Bitcast %arg0: f64 to vector<2xi32>
   spv.Return
 }
 
 // CHECK-LABEL: @bitcast_vector_to_vector
 spv.func @bitcast_vector_to_vector(%arg0 : vector<4xf32>) "None" {
-  // CHECK: {{.*}} = llvm.bitcast {{.*}} : !llvm.vec<4 x f32> to !llvm.vec<2 x i64>
+  // CHECK: {{.*}} = llvm.bitcast {{.*}} : vector<4xf32> to vector<2xi64>
   %0 = spv.Bitcast %arg0: vector<4xf32> to vector<2xi64>
   spv.Return
 }
@@ -59,7 +59,7 @@ spv.func @convert_float_to_signed_scalar(%arg0: f32) "None" {
 
 // CHECK-LABEL: @convert_float_to_signed_vector
 spv.func @convert_float_to_signed_vector(%arg0: vector<2xf32>) "None" {
-  // CHECK: llvm.fptosi %{{.*}} : !llvm.vec<2 x f32> to !llvm.vec<2 x i32>
+  // CHECK: llvm.fptosi %{{.*}} : vector<2xf32> to vector<2xi32>
     %0 = spv.ConvertFToS %arg0: vector<2xf32> to vector<2xi32>
   spv.Return
 }
@@ -77,7 +77,7 @@ spv.func @convert_float_to_unsigned_scalar(%arg0: f32) "None" {
 
 // CHECK-LABEL: @convert_float_to_unsigned_vector
 spv.func @convert_float_to_unsigned_vector(%arg0: vector<2xf32>) "None" {
-  // CHECK: llvm.fptoui %{{.*}} : !llvm.vec<2 x f32> to !llvm.vec<2 x i32>
+  // CHECK: llvm.fptoui %{{.*}} : vector<2xf32> to vector<2xi32>
     %0 = spv.ConvertFToU %arg0: vector<2xf32> to vector<2xi32>
   spv.Return
 }
@@ -95,7 +95,7 @@ spv.func @convert_signed_to_float_scalar(%arg0: i32) "None" {
 
 // CHECK-LABEL: @convert_signed_to_float_vector
 spv.func @convert_signed_to_float_vector(%arg0: vector<3xi32>) "None" {
-  // CHECK: llvm.sitofp %{{.*}} : !llvm.vec<3 x i32> to !llvm.vec<3 x f32>
+  // CHECK: llvm.sitofp %{{.*}} : vector<3xi32> to vector<3xf32>
     %0 = spv.ConvertSToF %arg0: vector<3xi32> to vector<3xf32>
   spv.Return
 }
@@ -113,7 +113,7 @@ spv.func @convert_unsigned_to_float_scalar(%arg0: i32) "None" {
 
 // CHECK-LABEL: @convert_unsigned_to_float_vector
 spv.func @convert_unsigned_to_float_vector(%arg0: vector<3xi32>) "None" {
-  // CHECK: llvm.uitofp %{{.*}} : !llvm.vec<3 x i32> to !llvm.vec<3 x f32>
+  // CHECK: llvm.uitofp %{{.*}} : vector<3xi32> to vector<3xf32>
     %0 = spv.ConvertUToF %arg0: vector<3xi32> to vector<3xf32>
   spv.Return
 }
@@ -134,10 +134,10 @@ spv.func @fconvert_scalar(%arg0: f32, %arg1: f64) "None" {
 
 // CHECK-LABEL: @fconvert_vector
 spv.func @fconvert_vector(%arg0: vector<2xf32>, %arg1: vector<2xf64>) "None" {
-  // CHECK: llvm.fpext %{{.*}} : !llvm.vec<2 x f32> to !llvm.vec<2 x f64>
+  // CHECK: llvm.fpext %{{.*}} : vector<2xf32> to vector<2xf64>
   %0 = spv.FConvert %arg0: vector<2xf32> to vector<2xf64>
 
-  // CHECK: llvm.fptrunc %{{.*}} : !llvm.vec<2 x f64> to !llvm.vec<2 x f32>
+  // CHECK: llvm.fptrunc %{{.*}} : vector<2xf64> to vector<2xf32>
   %1 = spv.FConvert %arg1: vector<2xf64> to vector<2xf32>
   spv.Return
 }
@@ -158,10 +158,10 @@ spv.func @sconvert_scalar(%arg0: i32, %arg1: i64) "None" {
 
 // CHECK-LABEL: @sconvert_vector
 spv.func @sconvert_vector(%arg0: vector<3xi32>, %arg1: vector<3xi64>) "None" {
-  // CHECK: llvm.sext %{{.*}} : !llvm.vec<3 x i32> to !llvm.vec<3 x i64>
+  // CHECK: llvm.sext %{{.*}} : vector<3xi32> to vector<3xi64>
   %0 = spv.SConvert %arg0: vector<3xi32> to vector<3xi64>
 
-  // CHECK: llvm.trunc %{{.*}} : !llvm.vec<3 x i64> to !llvm.vec<3 x i32>
+  // CHECK: llvm.trunc %{{.*}} : vector<3xi64> to vector<3xi32>
   %1 = spv.SConvert %arg1: vector<3xi64> to vector<3xi32>
   spv.Return
 }
@@ -182,10 +182,10 @@ spv.func @uconvert_scalar(%arg0: i32, %arg1: i64) "None" {
 
 // CHECK-LABEL: @uconvert_vector
 spv.func @uconvert_vector(%arg0: vector<3xi32>, %arg1: vector<3xi64>) "None" {
-  // CHECK: llvm.zext %{{.*}} : !llvm.vec<3 x i32> to !llvm.vec<3 x i64>
+  // CHECK: llvm.zext %{{.*}} : vector<3xi32> to vector<3xi64>
   %0 = spv.UConvert %arg0: vector<3xi32> to vector<3xi64>
 
-  // CHECK: llvm.trunc %{{.*}} : !llvm.vec<3 x i64> to !llvm.vec<3 x i32>
+  // CHECK: llvm.trunc %{{.*}} : vector<3xi64> to vector<3xi32>
   %1 = spv.UConvert %arg1: vector<3xi64> to vector<3xi32>
   spv.Return
 }

diff  --git a/mlir/test/Conversion/SPIRVToLLVM/comparison-ops-to-llvm.mlir b/mlir/test/Conversion/SPIRVToLLVM/comparison-ops-to-llvm.mlir
index 632136c5ede0..355e313d9f0c 100644
--- a/mlir/test/Conversion/SPIRVToLLVM/comparison-ops-to-llvm.mlir
+++ b/mlir/test/Conversion/SPIRVToLLVM/comparison-ops-to-llvm.mlir
@@ -13,7 +13,7 @@ spv.func @i_equal_scalar(%arg0: i32, %arg1: i32) "None" {
 
 // CHECK-LABEL: @i_equal_vector
 spv.func @i_equal_vector(%arg0: vector<4xi64>, %arg1: vector<4xi64>) "None" {
-  // CHECK: llvm.icmp "eq" %{{.*}}, %{{.*}} : !llvm.vec<4 x i64>
+  // CHECK: llvm.icmp "eq" %{{.*}}, %{{.*}} : vector<4xi64>
   %0 = spv.IEqual %arg0, %arg1 : vector<4xi64>
   spv.Return
 }
@@ -31,7 +31,7 @@ spv.func @i_not_equal_scalar(%arg0: i64, %arg1: i64) "None" {
 
 // CHECK-LABEL: @i_not_equal_vector
 spv.func @i_not_equal_vector(%arg0: vector<2xi64>, %arg1: vector<2xi64>) "None" {
-  // CHECK: llvm.icmp "ne" %{{.*}}, %{{.*}} : !llvm.vec<2 x i64>
+  // CHECK: llvm.icmp "ne" %{{.*}}, %{{.*}} : vector<2xi64>
   %0 = spv.INotEqual %arg0, %arg1 : vector<2xi64>
   spv.Return
 }
@@ -49,7 +49,7 @@ spv.func @s_greater_than_equal_scalar(%arg0: i64, %arg1: i64) "None" {
 
 // CHECK-LABEL: @s_greater_than_equal_vector
 spv.func @s_greater_than_equal_vector(%arg0: vector<2xi64>, %arg1: vector<2xi64>) "None" {
-  // CHECK: llvm.icmp "sge" %{{.*}}, %{{.*}} : !llvm.vec<2 x i64>
+  // CHECK: llvm.icmp "sge" %{{.*}}, %{{.*}} : vector<2xi64>
   %0 = spv.SGreaterThanEqual %arg0, %arg1 : vector<2xi64>
   spv.Return
 }
@@ -67,7 +67,7 @@ spv.func @s_greater_than_scalar(%arg0: i64, %arg1: i64) "None" {
 
 // CHECK-LABEL: @s_greater_than_vector
 spv.func @s_greater_than_vector(%arg0: vector<2xi64>, %arg1: vector<2xi64>) "None" {
-  // CHECK: llvm.icmp "sgt" %{{.*}}, %{{.*}} : !llvm.vec<2 x i64>
+  // CHECK: llvm.icmp "sgt" %{{.*}}, %{{.*}} : vector<2xi64>
   %0 = spv.SGreaterThan %arg0, %arg1 : vector<2xi64>
   spv.Return
 }
@@ -85,7 +85,7 @@ spv.func @s_less_than_equal_scalar(%arg0: i64, %arg1: i64) "None" {
 
 // CHECK-LABEL: @s_less_than_equal_vector
 spv.func @s_less_than_equal_vector(%arg0: vector<2xi64>, %arg1: vector<2xi64>) "None" {
-  // CHECK: llvm.icmp "sle" %{{.*}}, %{{.*}} : !llvm.vec<2 x i64>
+  // CHECK: llvm.icmp "sle" %{{.*}}, %{{.*}} : vector<2xi64>
   %0 = spv.SLessThanEqual %arg0, %arg1 : vector<2xi64>
   spv.Return
 }
@@ -103,7 +103,7 @@ spv.func @s_less_than_scalar(%arg0: i64, %arg1: i64) "None" {
 
 // CHECK-LABEL: @s_less_than_vector
 spv.func @s_less_than_vector(%arg0: vector<2xi64>, %arg1: vector<2xi64>) "None" {
-  // CHECK: llvm.icmp "slt" %{{.*}}, %{{.*}} : !llvm.vec<2 x i64>
+  // CHECK: llvm.icmp "slt" %{{.*}}, %{{.*}} : vector<2xi64>
   %0 = spv.SLessThan %arg0, %arg1 : vector<2xi64>
   spv.Return
 }
@@ -121,7 +121,7 @@ spv.func @u_greater_than_equal_scalar(%arg0: i64, %arg1: i64) "None" {
 
 // CHECK-LABEL: @u_greater_than_equal_vector
 spv.func @u_greater_than_equal_vector(%arg0: vector<2xi64>, %arg1: vector<2xi64>) "None" {
-  // CHECK: llvm.icmp "uge" %{{.*}}, %{{.*}} : !llvm.vec<2 x i64>
+  // CHECK: llvm.icmp "uge" %{{.*}}, %{{.*}} : vector<2xi64>
   %0 = spv.UGreaterThanEqual %arg0, %arg1 : vector<2xi64>
   spv.Return
 }
@@ -139,7 +139,7 @@ spv.func @u_greater_than_scalar(%arg0: i64, %arg1: i64) "None" {
 
 // CHECK-LABEL: @u_greater_than_vector
 spv.func @u_greater_than_vector(%arg0: vector<2xi64>, %arg1: vector<2xi64>) "None" {
-  // CHECK: llvm.icmp "ugt" %{{.*}}, %{{.*}} : !llvm.vec<2 x i64>
+  // CHECK: llvm.icmp "ugt" %{{.*}}, %{{.*}} : vector<2xi64>
   %0 = spv.UGreaterThan %arg0, %arg1 : vector<2xi64>
   spv.Return
 }
@@ -157,7 +157,7 @@ spv.func @u_less_than_equal_scalar(%arg0: i64, %arg1: i64) "None" {
 
 // CHECK-LABEL: @u_less_than_equal_vector
 spv.func @u_less_than_equal_vector(%arg0: vector<2xi64>, %arg1: vector<2xi64>) "None" {
-  // CHECK: llvm.icmp "ule" %{{.*}}, %{{.*}} : !llvm.vec<2 x i64>
+  // CHECK: llvm.icmp "ule" %{{.*}}, %{{.*}} : vector<2xi64>
   %0 = spv.ULessThanEqual %arg0, %arg1 : vector<2xi64>
   spv.Return
 }
@@ -175,7 +175,7 @@ spv.func @u_less_than_scalar(%arg0: i64, %arg1: i64) "None" {
 
 // CHECK-LABEL: @u_less_than_vector
 spv.func @u_less_than_vector(%arg0: vector<2xi64>, %arg1: vector<2xi64>) "None" {
-  // CHECK: llvm.icmp "ult" %{{.*}}, %{{.*}} : !llvm.vec<2 x i64>
+  // CHECK: llvm.icmp "ult" %{{.*}}, %{{.*}} : vector<2xi64>
   %0 = spv.ULessThan %arg0, %arg1 : vector<2xi64>
   spv.Return
 }
@@ -193,7 +193,7 @@ spv.func @f_ord_equal_scalar(%arg0: f32, %arg1: f32) "None" {
 
 // CHECK-LABEL: @f_ord_equal_vector
 spv.func @f_ord_equal_vector(%arg0: vector<4xf64>, %arg1: vector<4xf64>) "None" {
-  // CHECK: llvm.fcmp "oeq" %{{.*}}, %{{.*}} : !llvm.vec<4 x f64>
+  // CHECK: llvm.fcmp "oeq" %{{.*}}, %{{.*}} : vector<4xf64>
   %0 = spv.FOrdEqual %arg0, %arg1 : vector<4xf64>
   spv.Return
 }
@@ -211,7 +211,7 @@ spv.func @f_ord_greater_than_equal_scalar(%arg0: f64, %arg1: f64) "None" {
 
 // CHECK-LABEL: @f_ord_greater_than_equal_vector
 spv.func @f_ord_greater_than_equal_vector(%arg0: vector<2xf64>, %arg1: vector<2xf64>) "None" {
-  // CHECK: llvm.fcmp "oge" %{{.*}}, %{{.*}} : !llvm.vec<2 x f64>
+  // CHECK: llvm.fcmp "oge" %{{.*}}, %{{.*}} : vector<2xf64>
   %0 = spv.FOrdGreaterThanEqual %arg0, %arg1 : vector<2xf64>
   spv.Return
 }
@@ -229,7 +229,7 @@ spv.func @f_ord_greater_than_scalar(%arg0: f64, %arg1: f64) "None" {
 
 // CHECK-LABEL: @f_ord_greater_than_vector
 spv.func @f_ord_greater_than_vector(%arg0: vector<2xf64>, %arg1: vector<2xf64>) "None" {
-  // CHECK: llvm.fcmp "ogt" %{{.*}}, %{{.*}} : !llvm.vec<2 x f64>
+  // CHECK: llvm.fcmp "ogt" %{{.*}}, %{{.*}} : vector<2xf64>
   %0 = spv.FOrdGreaterThan %arg0, %arg1 : vector<2xf64>
   spv.Return
 }
@@ -247,7 +247,7 @@ spv.func @f_ord_less_than_scalar(%arg0: f64, %arg1: f64) "None" {
 
 // CHECK-LABEL: @f_ord_less_than_vector
 spv.func @f_ord_less_than_vector(%arg0: vector<2xf64>, %arg1: vector<2xf64>) "None" {
-  // CHECK: llvm.fcmp "olt" %{{.*}}, %{{.*}} : !llvm.vec<2 x f64>
+  // CHECK: llvm.fcmp "olt" %{{.*}}, %{{.*}} : vector<2xf64>
   %0 = spv.FOrdLessThan %arg0, %arg1 : vector<2xf64>
   spv.Return
 }
@@ -265,7 +265,7 @@ spv.func @f_ord_less_than_equal_scalar(%arg0: f64, %arg1: f64) "None" {
 
 // CHECK-LABEL: @f_ord_less_than_equal_vector
 spv.func @f_ord_less_than_equal_vector(%arg0: vector<2xf64>, %arg1: vector<2xf64>) "None" {
-  // CHECK: llvm.fcmp "ole" %{{.*}}, %{{.*}} : !llvm.vec<2 x f64>
+  // CHECK: llvm.fcmp "ole" %{{.*}}, %{{.*}} : vector<2xf64>
   %0 = spv.FOrdLessThanEqual %arg0, %arg1 : vector<2xf64>
   spv.Return
 }
@@ -283,7 +283,7 @@ spv.func @f_ord_not_equal_scalar(%arg0: f32, %arg1: f32) "None" {
 
 // CHECK-LABEL: @f_ord_not_equal_vector
 spv.func @f_ord_not_equal_vector(%arg0: vector<4xf64>, %arg1: vector<4xf64>) "None" {
-  // CHECK: llvm.fcmp "one" %{{.*}}, %{{.*}} : !llvm.vec<4 x f64>
+  // CHECK: llvm.fcmp "one" %{{.*}}, %{{.*}} : vector<4xf64>
   %0 = spv.FOrdNotEqual %arg0, %arg1 : vector<4xf64>
   spv.Return
 }
@@ -301,7 +301,7 @@ spv.func @f_unord_equal_scalar(%arg0: f32, %arg1: f32) "None" {
 
 // CHECK-LABEL: @f_unord_equal_vector
 spv.func @f_unord_equal_vector(%arg0: vector<4xf64>, %arg1: vector<4xf64>) "None" {
-  // CHECK: llvm.fcmp "ueq" %{{.*}}, %{{.*}} : !llvm.vec<4 x f64>
+  // CHECK: llvm.fcmp "ueq" %{{.*}}, %{{.*}} : vector<4xf64>
   %0 = spv.FUnordEqual %arg0, %arg1 : vector<4xf64>
   spv.Return
 }
@@ -319,7 +319,7 @@ spv.func @f_unord_greater_than_equal_scalar(%arg0: f64, %arg1: f64) "None" {
 
 // CHECK-LABEL: @f_unord_greater_than_equal_vector
 spv.func @f_unord_greater_than_equal_vector(%arg0: vector<2xf64>, %arg1: vector<2xf64>) "None" {
-  // CHECK: llvm.fcmp "uge" %{{.*}}, %{{.*}} : !llvm.vec<2 x f64>
+  // CHECK: llvm.fcmp "uge" %{{.*}}, %{{.*}} : vector<2xf64>
   %0 = spv.FUnordGreaterThanEqual %arg0, %arg1 : vector<2xf64>
   spv.Return
 }
@@ -337,7 +337,7 @@ spv.func @f_unord_greater_than_scalar(%arg0: f64, %arg1: f64) "None" {
 
 // CHECK-LABEL: @f_unord_greater_than_vector
 spv.func @f_unord_greater_than_vector(%arg0: vector<2xf64>, %arg1: vector<2xf64>) "None" {
-  // CHECK: llvm.fcmp "ugt" %{{.*}}, %{{.*}} : !llvm.vec<2 x f64>
+  // CHECK: llvm.fcmp "ugt" %{{.*}}, %{{.*}} : vector<2xf64>
   %0 = spv.FUnordGreaterThan %arg0, %arg1 : vector<2xf64>
   spv.Return
 }
@@ -355,7 +355,7 @@ spv.func @f_unord_less_than_scalar(%arg0: f64, %arg1: f64) "None" {
 
 // CHECK-LABEL: @f_unord_less_than_vector
 spv.func @f_unord_less_than_vector(%arg0: vector<2xf64>, %arg1: vector<2xf64>) "None" {
-  // CHECK: llvm.fcmp "ult" %{{.*}}, %{{.*}} : !llvm.vec<2 x f64>
+  // CHECK: llvm.fcmp "ult" %{{.*}}, %{{.*}} : vector<2xf64>
   %0 = spv.FUnordLessThan %arg0, %arg1 : vector<2xf64>
   spv.Return
 }
@@ -373,7 +373,7 @@ spv.func @f_unord_less_than_equal_scalar(%arg0: f64, %arg1: f64) "None" {
 
 // CHECK-LABEL: @f_unord_less_than_equal_vector
 spv.func @f_unord_less_than_equal_vector(%arg0: vector<2xf64>, %arg1: vector<2xf64>) "None" {
-  // CHECK: llvm.fcmp "ule" %{{.*}}, %{{.*}} : !llvm.vec<2 x f64>
+  // CHECK: llvm.fcmp "ule" %{{.*}}, %{{.*}} : vector<2xf64>
   %0 = spv.FUnordLessThanEqual %arg0, %arg1 : vector<2xf64>
   spv.Return
 }
@@ -391,7 +391,7 @@ spv.func @f_unord_not_equal_scalar(%arg0: f32, %arg1: f32) "None" {
 
 // CHECK-LABEL: @f_unord_not_equal_vector
 spv.func @f_unord_not_equal_vector(%arg0: vector<4xf64>, %arg1: vector<4xf64>) "None" {
-  // CHECK: llvm.fcmp "une" %{{.*}}, %{{.*}} : !llvm.vec<4 x f64>
+  // CHECK: llvm.fcmp "une" %{{.*}}, %{{.*}} : vector<4xf64>
   %0 = spv.FUnordNotEqual %arg0, %arg1 : vector<4xf64>
   spv.Return
 }

diff  --git a/mlir/test/Conversion/SPIRVToLLVM/constant-op-to-llvm.mlir b/mlir/test/Conversion/SPIRVToLLVM/constant-op-to-llvm.mlir
index 949aa0376d14..0ab431b3ac24 100644
--- a/mlir/test/Conversion/SPIRVToLLVM/constant-op-to-llvm.mlir
+++ b/mlir/test/Conversion/SPIRVToLLVM/constant-op-to-llvm.mlir
@@ -15,9 +15,9 @@ spv.func @bool_constant_scalar() "None" {
 
 // CHECK-LABEL: @bool_constant_vector
 spv.func @bool_constant_vector() "None" {
-  // CHECK: llvm.mlir.constant(dense<[true, false]> : vector<2xi1>) : !llvm.vec<2 x i1>
+  // CHECK: llvm.mlir.constant(dense<[true, false]> : vector<2xi1>) : vector<2xi1>
   %0 = spv.constant dense<[true, false]> : vector<2xi1>
-  // CHECK: llvm.mlir.constant(dense<false> : vector<3xi1>) : !llvm.vec<3 x i1>
+  // CHECK: llvm.mlir.constant(dense<false> : vector<3xi1>) : vector<3xi1>
   %1 = spv.constant dense<false> : vector<3xi1>
   spv.Return
 }
@@ -35,11 +35,11 @@ spv.func @integer_constant_scalar() "None" {
 
 // CHECK-LABEL: @integer_constant_vector
 spv.func @integer_constant_vector() "None" {
-  // CHECK: llvm.mlir.constant(dense<[2, 3]> : vector<2xi32>) : !llvm.vec<2 x i32>
+  // CHECK: llvm.mlir.constant(dense<[2, 3]> : vector<2xi32>) : vector<2xi32>
   %0 = spv.constant dense<[2, 3]> : vector<2xi32>
-  // CHECK: llvm.mlir.constant(dense<-4> : vector<2xi32>) : !llvm.vec<2 x i32>
+  // CHECK: llvm.mlir.constant(dense<-4> : vector<2xi32>) : vector<2xi32>
   %1 = spv.constant dense<-4> : vector<2xsi32>
-  // CHECK: llvm.mlir.constant(dense<[2, 3, 4]> : vector<3xi32>) : !llvm.vec<3 x i32>
+  // CHECK: llvm.mlir.constant(dense<[2, 3, 4]> : vector<3xi32>) : vector<3xi32>
   %2 = spv.constant dense<[2, 3, 4]> : vector<3xui32>
   spv.Return
 }
@@ -55,7 +55,7 @@ spv.func @float_constant_scalar() "None" {
 
 // CHECK-LABEL: @float_constant_vector
 spv.func @float_constant_vector() "None" {
-  // CHECK: llvm.mlir.constant(dense<[2.000000e+00, 3.000000e+00]> : vector<2xf32>) : !llvm.vec<2 x f32>
+  // CHECK: llvm.mlir.constant(dense<[2.000000e+00, 3.000000e+00]> : vector<2xf32>) : vector<2xf32>
   %0 = spv.constant dense<[2.000000e+00, 3.000000e+00]> : vector<2xf32>
   spv.Return
 }

diff  --git a/mlir/test/Conversion/SPIRVToLLVM/func-ops-to-llvm.mlir b/mlir/test/Conversion/SPIRVToLLVM/func-ops-to-llvm.mlir
index 0928b0fa6c4a..e196a304a872 100644
--- a/mlir/test/Conversion/SPIRVToLLVM/func-ops-to-llvm.mlir
+++ b/mlir/test/Conversion/SPIRVToLLVM/func-ops-to-llvm.mlir
@@ -54,7 +54,7 @@ spv.func @scalar_types(%arg0: i32, %arg1: i1, %arg2: f64, %arg3: f32) "None" {
   spv.Return
 }
 
-// CHECK-LABEL: llvm.func @vector_types(%arg0: !llvm.vec<2 x i64>, %arg1: !llvm.vec<2 x i64>) -> !llvm.vec<2 x i64>
+// CHECK-LABEL: llvm.func @vector_types(%arg0: vector<2xi64>, %arg1: vector<2xi64>) -> vector<2xi64>
 spv.func @vector_types(%arg0: vector<2xi64>, %arg1: vector<2xi64>) -> vector<2xi64> "None" {
   %0 = spv.IAdd %arg0, %arg1 : vector<2xi64>
   spv.ReturnValue %0 : vector<2xi64>
@@ -65,12 +65,12 @@ spv.func @vector_types(%arg0: vector<2xi64>, %arg1: vector<2xi64>) -> vector<2xi
 //===----------------------------------------------------------------------===//
 
 // CHECK-LABEL: llvm.func @function_calls
-// CHECK-SAME: %[[ARG0:.*]]: i32, %[[ARG1:.*]]: i1, %[[ARG2:.*]]: f64, %[[ARG3:.*]]: !llvm.vec<2 x i64>, %[[ARG4:.*]]: !llvm.vec<2 x f32>
+// CHECK-SAME: %[[ARG0:.*]]: i32, %[[ARG1:.*]]: i1, %[[ARG2:.*]]: f64, %[[ARG3:.*]]: vector<2xi64>, %[[ARG4:.*]]: vector<2xf32>
 spv.func @function_calls(%arg0: i32, %arg1: i1, %arg2: f64, %arg3: vector<2xi64>, %arg4: vector<2xf32>) "None" {
   // CHECK: llvm.call @void_1() : () -> ()
-  // CHECK: llvm.call @void_2(%[[ARG3]]) : (!llvm.vec<2 x i64>) -> ()
+  // CHECK: llvm.call @void_2(%[[ARG3]]) : (vector<2xi64>) -> ()
   // CHECK: llvm.call @value_scalar(%[[ARG0]], %[[ARG1]], %[[ARG2]]) : (i32, i1, f64) -> i32
-  // CHECK: llvm.call @value_vector(%[[ARG3]], %[[ARG4]]) : (!llvm.vec<2 x i64>, !llvm.vec<2 x f32>) -> !llvm.vec<2 x f32>
+  // CHECK: llvm.call @value_vector(%[[ARG3]], %[[ARG4]]) : (vector<2xi64>, vector<2xf32>) -> vector<2xf32>
   spv.FunctionCall @void_1() : () -> ()
   spv.FunctionCall @void_2(%arg3) : (vector<2xi64>) -> ()
   %0 = spv.FunctionCall @value_scalar(%arg0, %arg1, %arg2) : (i32, i1, f64) -> i32

diff  --git a/mlir/test/Conversion/SPIRVToLLVM/glsl-ops-to-llvm.mlir b/mlir/test/Conversion/SPIRVToLLVM/glsl-ops-to-llvm.mlir
index 62d0dec74060..c9243f660fba 100644
--- a/mlir/test/Conversion/SPIRVToLLVM/glsl-ops-to-llvm.mlir
+++ b/mlir/test/Conversion/SPIRVToLLVM/glsl-ops-to-llvm.mlir
@@ -8,7 +8,7 @@
 spv.func @ceil(%arg0: f32, %arg1: vector<3xf16>) "None" {
   // CHECK: "llvm.intr.ceil"(%{{.*}}) : (f32) -> f32
   %0 = spv.GLSL.Ceil %arg0 : f32
-  // CHECK: "llvm.intr.ceil"(%{{.*}}) : (!llvm.vec<3 x f16>) -> !llvm.vec<3 x f16>
+  // CHECK: "llvm.intr.ceil"(%{{.*}}) : (vector<3xf16>) -> vector<3xf16>
   %1 = spv.GLSL.Ceil %arg1 : vector<3xf16>
   spv.Return
 }
@@ -21,7 +21,7 @@ spv.func @ceil(%arg0: f32, %arg1: vector<3xf16>) "None" {
 spv.func @cos(%arg0: f32, %arg1: vector<3xf16>) "None" {
   // CHECK: "llvm.intr.cos"(%{{.*}}) : (f32) -> f32
   %0 = spv.GLSL.Cos %arg0 : f32
-  // CHECK: "llvm.intr.cos"(%{{.*}}) : (!llvm.vec<3 x f16>) -> !llvm.vec<3 x f16>
+  // CHECK: "llvm.intr.cos"(%{{.*}}) : (vector<3xf16>) -> vector<3xf16>
   %1 = spv.GLSL.Cos %arg1 : vector<3xf16>
   spv.Return
 }
@@ -34,7 +34,7 @@ spv.func @cos(%arg0: f32, %arg1: vector<3xf16>) "None" {
 spv.func @exp(%arg0: f32, %arg1: vector<3xf16>) "None" {
   // CHECK: "llvm.intr.exp"(%{{.*}}) : (f32) -> f32
   %0 = spv.GLSL.Exp %arg0 : f32
-  // CHECK: "llvm.intr.exp"(%{{.*}}) : (!llvm.vec<3 x f16>) -> !llvm.vec<3 x f16>
+  // CHECK: "llvm.intr.exp"(%{{.*}}) : (vector<3xf16>) -> vector<3xf16>
   %1 = spv.GLSL.Exp %arg1 : vector<3xf16>
   spv.Return
 }
@@ -47,7 +47,7 @@ spv.func @exp(%arg0: f32, %arg1: vector<3xf16>) "None" {
 spv.func @fabs(%arg0: f32, %arg1: vector<3xf16>) "None" {
   // CHECK: "llvm.intr.fabs"(%{{.*}}) : (f32) -> f32
   %0 = spv.GLSL.FAbs %arg0 : f32
-  // CHECK: "llvm.intr.fabs"(%{{.*}}) : (!llvm.vec<3 x f16>) -> !llvm.vec<3 x f16>
+  // CHECK: "llvm.intr.fabs"(%{{.*}}) : (vector<3xf16>) -> vector<3xf16>
   %1 = spv.GLSL.FAbs %arg1 : vector<3xf16>
   spv.Return
 }
@@ -60,7 +60,7 @@ spv.func @fabs(%arg0: f32, %arg1: vector<3xf16>) "None" {
 spv.func @floor(%arg0: f32, %arg1: vector<3xf16>) "None" {
   // CHECK: "llvm.intr.floor"(%{{.*}}) : (f32) -> f32
   %0 = spv.GLSL.Floor %arg0 : f32
-  // CHECK: "llvm.intr.floor"(%{{.*}}) : (!llvm.vec<3 x f16>) -> !llvm.vec<3 x f16>
+  // CHECK: "llvm.intr.floor"(%{{.*}}) : (vector<3xf16>) -> vector<3xf16>
   %1 = spv.GLSL.Floor %arg1 : vector<3xf16>
   spv.Return
 }
@@ -73,7 +73,7 @@ spv.func @floor(%arg0: f32, %arg1: vector<3xf16>) "None" {
 spv.func @fmax(%arg0: f32, %arg1: vector<3xf16>) "None" {
   // CHECK: "llvm.intr.maxnum"(%{{.*}}, %{{.*}}) : (f32, f32) -> f32
   %0 = spv.GLSL.FMax %arg0, %arg0 : f32
-  // CHECK: "llvm.intr.maxnum"(%{{.*}}, %{{.*}}) : (!llvm.vec<3 x f16>, !llvm.vec<3 x f16>) -> !llvm.vec<3 x f16>
+  // CHECK: "llvm.intr.maxnum"(%{{.*}}, %{{.*}}) : (vector<3xf16>, vector<3xf16>) -> vector<3xf16>
   %1 = spv.GLSL.FMax %arg1, %arg1 : vector<3xf16>
   spv.Return
 }
@@ -86,7 +86,7 @@ spv.func @fmax(%arg0: f32, %arg1: vector<3xf16>) "None" {
 spv.func @fmin(%arg0: f32, %arg1: vector<3xf16>) "None" {
   // CHECK: "llvm.intr.minnum"(%{{.*}}, %{{.*}}) : (f32, f32) -> f32
   %0 = spv.GLSL.FMin %arg0, %arg0 : f32
-  // CHECK: "llvm.intr.minnum"(%{{.*}}, %{{.*}}) : (!llvm.vec<3 x f16>, !llvm.vec<3 x f16>) -> !llvm.vec<3 x f16>
+  // CHECK: "llvm.intr.minnum"(%{{.*}}, %{{.*}}) : (vector<3xf16>, vector<3xf16>) -> vector<3xf16>
   %1 = spv.GLSL.FMin %arg1, %arg1 : vector<3xf16>
   spv.Return
 }
@@ -99,7 +99,7 @@ spv.func @fmin(%arg0: f32, %arg1: vector<3xf16>) "None" {
 spv.func @log(%arg0: f32, %arg1: vector<3xf16>) "None" {
   // CHECK: "llvm.intr.log"(%{{.*}}) : (f32) -> f32
   %0 = spv.GLSL.Log %arg0 : f32
-  // CHECK: "llvm.intr.log"(%{{.*}}) : (!llvm.vec<3 x f16>) -> !llvm.vec<3 x f16>
+  // CHECK: "llvm.intr.log"(%{{.*}}) : (vector<3xf16>) -> vector<3xf16>
   %1 = spv.GLSL.Log %arg1 : vector<3xf16>
   spv.Return
 }
@@ -112,7 +112,7 @@ spv.func @log(%arg0: f32, %arg1: vector<3xf16>) "None" {
 spv.func @sin(%arg0: f32, %arg1: vector<3xf16>) "None" {
   // CHECK: "llvm.intr.sin"(%{{.*}}) : (f32) -> f32
   %0 = spv.GLSL.Sin %arg0 : f32
-  // CHECK: "llvm.intr.sin"(%{{.*}}) : (!llvm.vec<3 x f16>) -> !llvm.vec<3 x f16>
+  // CHECK: "llvm.intr.sin"(%{{.*}}) : (vector<3xf16>) -> vector<3xf16>
   %1 = spv.GLSL.Sin %arg1 : vector<3xf16>
   spv.Return
 }
@@ -125,7 +125,7 @@ spv.func @sin(%arg0: f32, %arg1: vector<3xf16>) "None" {
 spv.func @smax(%arg0: i16, %arg1: vector<3xi32>) "None" {
   // CHECK: "llvm.intr.smax"(%{{.*}}, %{{.*}}) : (i16, i16) -> i16
   %0 = spv.GLSL.SMax %arg0, %arg0 : i16
-  // CHECK: "llvm.intr.smax"(%{{.*}}, %{{.*}}) : (!llvm.vec<3 x i32>, !llvm.vec<3 x i32>) -> !llvm.vec<3 x i32>
+  // CHECK: "llvm.intr.smax"(%{{.*}}, %{{.*}}) : (vector<3xi32>, vector<3xi32>) -> vector<3xi32>
   %1 = spv.GLSL.SMax %arg1, %arg1 : vector<3xi32>
   spv.Return
 }
@@ -138,7 +138,7 @@ spv.func @smax(%arg0: i16, %arg1: vector<3xi32>) "None" {
 spv.func @smin(%arg0: i16, %arg1: vector<3xi32>) "None" {
   // CHECK: "llvm.intr.smin"(%{{.*}}, %{{.*}}) : (i16, i16) -> i16
   %0 = spv.GLSL.SMin %arg0, %arg0 : i16
-  // CHECK: "llvm.intr.smin"(%{{.*}}, %{{.*}}) : (!llvm.vec<3 x i32>, !llvm.vec<3 x i32>) -> !llvm.vec<3 x i32>
+  // CHECK: "llvm.intr.smin"(%{{.*}}, %{{.*}}) : (vector<3xi32>, vector<3xi32>) -> vector<3xi32>
   %1 = spv.GLSL.SMin %arg1, %arg1 : vector<3xi32>
   spv.Return
 }
@@ -151,7 +151,7 @@ spv.func @smin(%arg0: i16, %arg1: vector<3xi32>) "None" {
 spv.func @sqrt(%arg0: f32, %arg1: vector<3xf16>) "None" {
   // CHECK: "llvm.intr.sqrt"(%{{.*}}) : (f32) -> f32
   %0 = spv.GLSL.Sqrt %arg0 : f32
-  // CHECK: "llvm.intr.sqrt"(%{{.*}}) : (!llvm.vec<3 x f16>) -> !llvm.vec<3 x f16>
+  // CHECK: "llvm.intr.sqrt"(%{{.*}}) : (vector<3xf16>) -> vector<3xf16>
   %1 = spv.GLSL.Sqrt %arg1 : vector<3xf16>
   spv.Return
 }

diff --git a/mlir/test/Conversion/SPIRVToLLVM/logical-ops-to-llvm.mlir b/mlir/test/Conversion/SPIRVToLLVM/logical-ops-to-llvm.mlir
index a61fb7316fb3..d052a3bf406c 100644
--- a/mlir/test/Conversion/SPIRVToLLVM/logical-ops-to-llvm.mlir
+++ b/mlir/test/Conversion/SPIRVToLLVM/logical-ops-to-llvm.mlir
@@ -13,7 +13,7 @@ spv.func @logical_equal_scalar(%arg0: i1, %arg1: i1) "None" {
 
 // CHECK-LABEL: @logical_equal_vector
 spv.func @logical_equal_vector(%arg0: vector<4xi1>, %arg1: vector<4xi1>) "None" {
-  // CHECK: llvm.icmp "eq" %{{.*}}, %{{.*}} : !llvm.vec<4 x i1>
+  // CHECK: llvm.icmp "eq" %{{.*}}, %{{.*}} : vector<4xi1>
   %0 = spv.LogicalEqual %arg0, %arg0 : vector<4xi1>
   spv.Return
 }
@@ -31,7 +31,7 @@ spv.func @logical_not_equal_scalar(%arg0: i1, %arg1: i1) "None" {
 
 // CHECK-LABEL: @logical_not_equal_vector
 spv.func @logical_not_equal_vector(%arg0: vector<4xi1>, %arg1: vector<4xi1>) "None" {
-  // CHECK: llvm.icmp "ne" %{{.*}}, %{{.*}} : !llvm.vec<4 x i1>
+  // CHECK: llvm.icmp "ne" %{{.*}}, %{{.*}} : vector<4xi1>
   %0 = spv.LogicalNotEqual %arg0, %arg0 : vector<4xi1>
   spv.Return
 }
@@ -50,8 +50,8 @@ spv.func @logical_not_scalar(%arg0: i1) "None" {
 
 // CHECK-LABEL: @logical_not_vector
 spv.func @logical_not_vector(%arg0: vector<4xi1>) "None" {
-  // CHECK: %[[CONST:.*]] = llvm.mlir.constant(dense<true> : vector<4xi1>) : !llvm.vec<4 x i1>
-  // CHECK: llvm.xor %{{.*}}, %[[CONST]] : !llvm.vec<4 x i1>
+  // CHECK: %[[CONST:.*]] = llvm.mlir.constant(dense<true> : vector<4xi1>) : vector<4xi1>
+  // CHECK: llvm.xor %{{.*}}, %[[CONST]] : vector<4xi1>
   %0 = spv.LogicalNot %arg0 : vector<4xi1>
   spv.Return
 }
@@ -69,7 +69,7 @@ spv.func @logical_and_scalar(%arg0: i1, %arg1: i1) "None" {
 
 // CHECK-LABEL: @logical_and_vector
 spv.func @logical_and_vector(%arg0: vector<4xi1>, %arg1: vector<4xi1>) "None" {
-  // CHECK: llvm.and %{{.*}}, %{{.*}} : !llvm.vec<4 x i1>
+  // CHECK: llvm.and %{{.*}}, %{{.*}} : vector<4xi1>
   %0 = spv.LogicalAnd %arg0, %arg0 : vector<4xi1>
   spv.Return
 }
@@ -87,7 +87,7 @@ spv.func @logical_or_scalar(%arg0: i1, %arg1: i1) "None" {
 
 // CHECK-LABEL: @logical_or_vector
 spv.func @logical_or_vector(%arg0: vector<4xi1>, %arg1: vector<4xi1>) "None" {
-  // CHECK: llvm.or %{{.*}}, %{{.*}} : !llvm.vec<4 x i1>
+  // CHECK: llvm.or %{{.*}}, %{{.*}} : vector<4xi1>
   %0 = spv.LogicalOr %arg0, %arg0 : vector<4xi1>
   spv.Return
 }

diff --git a/mlir/test/Conversion/SPIRVToLLVM/memory-ops-to-llvm.mlir b/mlir/test/Conversion/SPIRVToLLVM/memory-ops-to-llvm.mlir
index ccf8068320ea..6aab710220e7 100644
--- a/mlir/test/Conversion/SPIRVToLLVM/memory-ops-to-llvm.mlir
+++ b/mlir/test/Conversion/SPIRVToLLVM/memory-ops-to-llvm.mlir
@@ -184,17 +184,17 @@ spv.func @variable_scalar_with_initialization() "None" {
 // CHECK-LABEL: @variable_vector
 spv.func @variable_vector() "None" {
   // CHECK: %[[SIZE:.*]] = llvm.mlir.constant(1 : i32) : i32
-  // CHECK: llvm.alloca  %[[SIZE]] x !llvm.vec<3 x f32> : (i32) -> !llvm.ptr<vec<3 x f32>>
+  // CHECK: llvm.alloca  %[[SIZE]] x vector<3xf32> : (i32) -> !llvm.ptr<vector<3xf32>>
   %0 = spv.Variable : !spv.ptr<vector<3xf32>, Function>
   spv.Return
 }
 
 // CHECK-LABEL: @variable_vector_with_initialization
 spv.func @variable_vector_with_initialization() "None" {
-  // CHECK: %[[VALUE:.*]] = llvm.mlir.constant(dense<false> : vector<3xi1>) : !llvm.vec<3 x i1>
+  // CHECK: %[[VALUE:.*]] = llvm.mlir.constant(dense<false> : vector<3xi1>) : vector<3xi1>
   // CHECK: %[[SIZE:.*]] = llvm.mlir.constant(1 : i32) : i32
-  // CHECK: %[[ALLOCATED:.*]] = llvm.alloca %[[SIZE]] x !llvm.vec<3 x i1> : (i32) -> !llvm.ptr<vec<3 x i1>>
-  // CHECK: llvm.store %[[VALUE]], %[[ALLOCATED]] : !llvm.ptr<vec<3 x i1>>
+  // CHECK: %[[ALLOCATED:.*]] = llvm.alloca %[[SIZE]] x vector<3xi1> : (i32) -> !llvm.ptr<vector<3xi1>>
+  // CHECK: llvm.store %[[VALUE]], %[[ALLOCATED]] : !llvm.ptr<vector<3xi1>>
   %c = spv.constant dense<false> : vector<3xi1>
   %0 = spv.Variable init(%c) : !spv.ptr<vector<3xi1>, Function>
   spv.Return

diff --git a/mlir/test/Conversion/SPIRVToLLVM/misc-ops-to-llvm.mlir b/mlir/test/Conversion/SPIRVToLLVM/misc-ops-to-llvm.mlir
index b95c3ff6b003..47263c079c8f 100644
--- a/mlir/test/Conversion/SPIRVToLLVM/misc-ops-to-llvm.mlir
+++ b/mlir/test/Conversion/SPIRVToLLVM/misc-ops-to-llvm.mlir
@@ -14,7 +14,7 @@ spv.func @composite_extract_array(%arg: !spv.array<4x!spv.array<4xf32>>) "None"
 // CHECK-LABEL: @composite_extract_vector
 spv.func @composite_extract_vector(%arg: vector<3xf32>) "None" {
   // CHECK: %[[ZERO:.*]] = llvm.mlir.constant(0 : i32) : i32
-  // CHECK: llvm.extractelement %{{.*}}[%[[ZERO]] : i32] : !llvm.vec<3 x f32>
+  // CHECK: llvm.extractelement %{{.*}}[%[[ZERO]] : i32] : vector<3xf32>
   %0 = spv.CompositeExtract %arg[0 : i32] : vector<3xf32>
   spv.Return
 }
@@ -33,7 +33,7 @@ spv.func @composite_insert_struct(%arg0: i32, %arg1: !spv.struct<(f32, !spv.arra
 // CHECK-LABEL: @composite_insert_vector
 spv.func @composite_insert_vector(%arg0: vector<3xf32>, %arg1: f32) "None" {
   // CHECK: %[[ONE:.*]] = llvm.mlir.constant(1 : i32) : i32
-  // CHECK: llvm.insertelement %{{.*}}, %{{.*}}[%[[ONE]] : i32] : !llvm.vec<3 x f32>
+  // CHECK: llvm.insertelement %{{.*}}, %{{.*}}[%[[ONE]] : i32] : vector<3xf32>
   %0 = spv.CompositeInsert %arg1, %arg0[1 : i32] : f32 into vector<3xf32>
   spv.Return
 }
@@ -44,7 +44,7 @@ spv.func @composite_insert_vector(%arg0: vector<3xf32>, %arg1: f32) "None" {
 
 // CHECK-LABEL: @select_scalar
 spv.func @select_scalar(%arg0: i1, %arg1: vector<3xi32>, %arg2: f32) "None" {
-  // CHECK: llvm.select %{{.*}}, %{{.*}}, %{{.*}} : i1, !llvm.vec<3 x i32>
+  // CHECK: llvm.select %{{.*}}, %{{.*}}, %{{.*}} : i1, vector<3xi32>
   %0 = spv.Select %arg0, %arg1, %arg1 : i1, vector<3xi32>
   // CHECK: llvm.select %{{.*}}, %{{.*}}, %{{.*}} : i1, f32
   %1 = spv.Select %arg0, %arg2, %arg2 : i1, f32
@@ -53,7 +53,7 @@ spv.func @select_scalar(%arg0: i1, %arg1: vector<3xi32>, %arg2: f32) "None" {
 
 // CHECK-LABEL: @select_vector
 spv.func @select_vector(%arg0: vector<2xi1>, %arg1: vector<2xi32>) "None" {
-  // CHECK: llvm.select %{{.*}}, %{{.*}}, %{{.*}} : !llvm.vec<2 x i1>, !llvm.vec<2 x i32>
+  // CHECK: llvm.select %{{.*}}, %{{.*}}, %{{.*}} : vector<2xi1>, vector<2xi32>
   %0 = spv.Select %arg0, %arg1, %arg1 : vector<2xi1>, vector<2xi32>
   spv.Return
 }
@@ -119,7 +119,7 @@ spv.func @undef_scalar() "None" {
 
 // CHECK-LABEL: @undef_vector
 spv.func @undef_vector() "None" {
-  // CHECK: llvm.mlir.undef : !llvm.vec<2 x i32>
+  // CHECK: llvm.mlir.undef : vector<2xi32>
   %0 = spv.undef : vector<2xi32>
   spv.Return
 }

diff --git a/mlir/test/Conversion/SPIRVToLLVM/shift-ops-to-llvm.mlir b/mlir/test/Conversion/SPIRVToLLVM/shift-ops-to-llvm.mlir
index c28328385dcf..9b76409dbd47 100644
--- a/mlir/test/Conversion/SPIRVToLLVM/shift-ops-to-llvm.mlir
+++ b/mlir/test/Conversion/SPIRVToLLVM/shift-ops-to-llvm.mlir
@@ -24,18 +24,18 @@ spv.func @shift_right_arithmetic_scalar(%arg0: i32, %arg1: si32, %arg2 : i16, %a
 
 // CHECK-LABEL: @shift_right_arithmetic_vector
 spv.func @shift_right_arithmetic_vector(%arg0: vector<4xi64>, %arg1: vector<4xui64>, %arg2: vector<4xi32>, %arg3: vector<4xui32>) "None" {
-  // CHECK: llvm.ashr %{{.*}}, %{{.*}} : !llvm.vec<4 x i64>
+  // CHECK: llvm.ashr %{{.*}}, %{{.*}} : vector<4xi64>
   %0 = spv.ShiftRightArithmetic %arg0, %arg0 : vector<4xi64>, vector<4xi64>
 
-  // CHECK: llvm.ashr %{{.*}}, %{{.*}} : !llvm.vec<4 x i64>
+  // CHECK: llvm.ashr %{{.*}}, %{{.*}} : vector<4xi64>
   %1 = spv.ShiftRightArithmetic %arg0, %arg1 : vector<4xi64>, vector<4xui64>
 
-  // CHECK: %[[SEXT:.*]] = llvm.sext %{{.*}} : !llvm.vec<4 x i32> to !llvm.vec<4 x i64>
-  // CHECK: llvm.ashr %{{.*}}, %[[SEXT]] : !llvm.vec<4 x i64>
+  // CHECK: %[[SEXT:.*]] = llvm.sext %{{.*}} : vector<4xi32> to vector<4xi64>
+  // CHECK: llvm.ashr %{{.*}}, %[[SEXT]] : vector<4xi64>
   %2 = spv.ShiftRightArithmetic %arg0, %arg2 : vector<4xi64>,  vector<4xi32>
 
-  // CHECK: %[[ZEXT:.*]] = llvm.zext %{{.*}} : !llvm.vec<4 x i32> to !llvm.vec<4 x i64>
-  // CHECK: llvm.ashr %{{.*}}, %[[ZEXT]] : !llvm.vec<4 x i64>
+  // CHECK: %[[ZEXT:.*]] = llvm.zext %{{.*}} : vector<4xi32> to vector<4xi64>
+  // CHECK: llvm.ashr %{{.*}}, %[[ZEXT]] : vector<4xi64>
   %3 = spv.ShiftRightArithmetic %arg0, %arg3 : vector<4xi64>, vector<4xui32>
   spv.Return
 }
@@ -64,18 +64,18 @@ spv.func @shift_right_logical_scalar(%arg0: i32, %arg1: si32, %arg2 : si16, %arg
 
 // CHECK-LABEL: @shift_right_logical_vector
 spv.func @shift_right_logical_vector(%arg0: vector<4xi64>, %arg1: vector<4xsi64>, %arg2: vector<4xi32>, %arg3: vector<4xui32>) "None" {
-  // CHECK: llvm.lshr %{{.*}}, %{{.*}} : !llvm.vec<4 x i64>
+  // CHECK: llvm.lshr %{{.*}}, %{{.*}} : vector<4xi64>
   %0 = spv.ShiftRightLogical %arg0, %arg0 : vector<4xi64>, vector<4xi64>
 
-  // CHECK: llvm.lshr %{{.*}}, %{{.*}} : !llvm.vec<4 x i64>
+  // CHECK: llvm.lshr %{{.*}}, %{{.*}} : vector<4xi64>
   %1 = spv.ShiftRightLogical %arg0, %arg1 : vector<4xi64>, vector<4xsi64>
 
-  // CHECK: %[[SEXT:.*]] = llvm.sext %{{.*}} : !llvm.vec<4 x i32> to !llvm.vec<4 x i64>
-  // CHECK: llvm.lshr %{{.*}}, %[[SEXT]] : !llvm.vec<4 x i64>
+  // CHECK: %[[SEXT:.*]] = llvm.sext %{{.*}} : vector<4xi32> to vector<4xi64>
+  // CHECK: llvm.lshr %{{.*}}, %[[SEXT]] : vector<4xi64>
   %2 = spv.ShiftRightLogical %arg0, %arg2 : vector<4xi64>,  vector<4xi32>
 
-  // CHECK: %[[ZEXT:.*]] = llvm.zext %{{.*}} : !llvm.vec<4 x i32> to !llvm.vec<4 x i64>
-  // CHECK: llvm.lshr %{{.*}}, %[[ZEXT]] : !llvm.vec<4 x i64>
+  // CHECK: %[[ZEXT:.*]] = llvm.zext %{{.*}} : vector<4xi32> to vector<4xi64>
+  // CHECK: llvm.lshr %{{.*}}, %[[ZEXT]] : vector<4xi64>
   %3 = spv.ShiftRightLogical %arg0, %arg3 : vector<4xi64>, vector<4xui32>
   spv.Return
 }
@@ -104,18 +104,18 @@ spv.func @shift_left_logical_scalar(%arg0: i32, %arg1: si32, %arg2 : i16, %arg3
 
 // CHECK-LABEL: @shift_left_logical_vector
 spv.func @shift_left_logical_vector(%arg0: vector<4xi64>, %arg1: vector<4xsi64>, %arg2: vector<4xi32>, %arg3: vector<4xui32>) "None" {
-  // CHECK: llvm.shl %{{.*}}, %{{.*}} : !llvm.vec<4 x i64>
+  // CHECK: llvm.shl %{{.*}}, %{{.*}} : vector<4xi64>
   %0 = spv.ShiftLeftLogical %arg0, %arg0 : vector<4xi64>, vector<4xi64>
 
-  // CHECK: llvm.shl %{{.*}}, %{{.*}} : !llvm.vec<4 x i64>
+  // CHECK: llvm.shl %{{.*}}, %{{.*}} : vector<4xi64>
   %1 = spv.ShiftLeftLogical %arg0, %arg1 : vector<4xi64>, vector<4xsi64>
 
-  // CHECK: %[[SEXT:.*]] = llvm.sext %{{.*}} : !llvm.vec<4 x i32> to !llvm.vec<4 x i64>
-  // CHECK: llvm.shl %{{.*}}, %[[SEXT]] : !llvm.vec<4 x i64>
+  // CHECK: %[[SEXT:.*]] = llvm.sext %{{.*}} : vector<4xi32> to vector<4xi64>
+  // CHECK: llvm.shl %{{.*}}, %[[SEXT]] : vector<4xi64>
   %2 = spv.ShiftLeftLogical %arg0, %arg2 : vector<4xi64>,  vector<4xi32>
 
-  // CHECK: %[[ZEXT:.*]] = llvm.zext %{{.*}} : !llvm.vec<4 x i32> to !llvm.vec<4 x i64>
-  // CHECK: llvm.shl %{{.*}}, %[[ZEXT]] : !llvm.vec<4 x i64>
+  // CHECK: %[[ZEXT:.*]] = llvm.zext %{{.*}} : vector<4xi32> to vector<4xi64>
+  // CHECK: llvm.shl %{{.*}}, %[[ZEXT]] : vector<4xi64>
   %3 = spv.ShiftLeftLogical %arg0, %arg3 : vector<4xi64>, vector<4xui32>
   spv.Return
 }

diff --git a/mlir/test/Conversion/SPIRVToLLVM/spirv-types-to-llvm.mlir b/mlir/test/Conversion/SPIRVToLLVM/spirv-types-to-llvm.mlir
index f65ee5dc86f5..1d573f3f2505 100644
--- a/mlir/test/Conversion/SPIRVToLLVM/spirv-types-to-llvm.mlir
+++ b/mlir/test/Conversion/SPIRVToLLVM/spirv-types-to-llvm.mlir
@@ -4,7 +4,7 @@
 // Array type
 //===----------------------------------------------------------------------===//
 
-// CHECK-LABEL: @array(!llvm.array<16 x f32>, !llvm.array<32 x vec<4 x f32>>)
+// CHECK-LABEL: @array(!llvm.array<16 x f32>, !llvm.array<32 x vector<4xf32>>)
 spv.func @array(!spv.array<16 x f32>, !spv.array< 32 x vector<4xf32> >) "None"
 
 // CHECK-LABEL: @array_with_natural_stride(!llvm.array<16 x f32>)
@@ -17,14 +17,14 @@ spv.func @array_with_natural_stride(!spv.array<16 x f32, stride=4>) "None"
 // CHECK-LABEL: @pointer_scalar(!llvm.ptr<i1>, !llvm.ptr<f32>)
 spv.func @pointer_scalar(!spv.ptr<i1, Uniform>, !spv.ptr<f32, Private>) "None"
 
-// CHECK-LABEL: @pointer_vector(!llvm.ptr<vec<4 x i32>>)
+// CHECK-LABEL: @pointer_vector(!llvm.ptr<vector<4xi32>>)
 spv.func @pointer_vector(!spv.ptr<vector<4xi32>, Function>) "None"
 
 //===----------------------------------------------------------------------===//
 // Runtime array type
 //===----------------------------------------------------------------------===//
 
-// CHECK-LABEL: @runtime_array_vector(!llvm.array<0 x vec<4 x f32>>)
+// CHECK-LABEL: @runtime_array_vector(!llvm.array<0 x vector<4xf32>>)
 spv.func @runtime_array_vector(!spv.rtarray< vector<4xf32> >) "None"
 
 // CHECK-LABEL: @runtime_array_scalar(!llvm.array<0 x f32>)

diff --git a/mlir/test/Conversion/StandardToLLVM/convert-to-llvmir.mlir b/mlir/test/Conversion/StandardToLLVM/convert-to-llvmir.mlir
index 304d62c3935d..0b334031aba9 100644
--- a/mlir/test/Conversion/StandardToLLVM/convert-to-llvmir.mlir
+++ b/mlir/test/Conversion/StandardToLLVM/convert-to-llvmir.mlir
@@ -487,35 +487,35 @@ func @multireturn_caller() {
   return
 }
 
-// CHECK-LABEL: llvm.func @vector_ops(%arg0: !llvm.vec<4 x f32>, %arg1: !llvm.vec<4 x i1>, %arg2: !llvm.vec<4 x i64>, %arg3: !llvm.vec<4 x i64>) -> !llvm.vec<4 x f32> {
+// CHECK-LABEL: llvm.func @vector_ops(%arg0: vector<4xf32>, %arg1: vector<4xi1>, %arg2: vector<4xi64>, %arg3: vector<4xi64>) -> vector<4xf32> {
 func @vector_ops(%arg0: vector<4xf32>, %arg1: vector<4xi1>, %arg2: vector<4xi64>, %arg3: vector<4xi64>) -> vector<4xf32> {
-// CHECK-NEXT:  %0 = llvm.mlir.constant(dense<4.200000e+01> : vector<4xf32>) : !llvm.vec<4 x f32>
+// CHECK-NEXT:  %0 = llvm.mlir.constant(dense<4.200000e+01> : vector<4xf32>) : vector<4xf32>
   %0 = constant dense<42.> : vector<4xf32>
-// CHECK-NEXT:  %1 = llvm.fadd %arg0, %0 : !llvm.vec<4 x f32>
+// CHECK-NEXT:  %1 = llvm.fadd %arg0, %0 : vector<4xf32>
   %1 = addf %arg0, %0 : vector<4xf32>
-// CHECK-NEXT:  %2 = llvm.sdiv %arg2, %arg2 : !llvm.vec<4 x i64>
+// CHECK-NEXT:  %2 = llvm.sdiv %arg2, %arg2 : vector<4xi64>
   %3 = divi_signed %arg2, %arg2 : vector<4xi64>
-// CHECK-NEXT:  %3 = llvm.udiv %arg2, %arg2 : !llvm.vec<4 x i64>
+// CHECK-NEXT:  %3 = llvm.udiv %arg2, %arg2 : vector<4xi64>
   %4 = divi_unsigned %arg2, %arg2 : vector<4xi64>
-// CHECK-NEXT:  %4 = llvm.srem %arg2, %arg2 : !llvm.vec<4 x i64>
+// CHECK-NEXT:  %4 = llvm.srem %arg2, %arg2 : vector<4xi64>
   %5 = remi_signed %arg2, %arg2 : vector<4xi64>
-// CHECK-NEXT:  %5 = llvm.urem %arg2, %arg2 : !llvm.vec<4 x i64>
+// CHECK-NEXT:  %5 = llvm.urem %arg2, %arg2 : vector<4xi64>
   %6 = remi_unsigned %arg2, %arg2 : vector<4xi64>
-// CHECK-NEXT:  %6 = llvm.fdiv %arg0, %0 : !llvm.vec<4 x f32>
+// CHECK-NEXT:  %6 = llvm.fdiv %arg0, %0 : vector<4xf32>
   %7 = divf %arg0, %0 : vector<4xf32>
-// CHECK-NEXT:  %7 = llvm.frem %arg0, %0 : !llvm.vec<4 x f32>
+// CHECK-NEXT:  %7 = llvm.frem %arg0, %0 : vector<4xf32>
   %8 = remf %arg0, %0 : vector<4xf32>
-// CHECK-NEXT:  %8 = llvm.and %arg2, %arg3 : !llvm.vec<4 x i64>
+// CHECK-NEXT:  %8 = llvm.and %arg2, %arg3 : vector<4xi64>
   %9 = and %arg2, %arg3 : vector<4xi64>
-// CHECK-NEXT:  %9 = llvm.or %arg2, %arg3 : !llvm.vec<4 x i64>
+// CHECK-NEXT:  %9 = llvm.or %arg2, %arg3 : vector<4xi64>
   %10 = or %arg2, %arg3 : vector<4xi64>
-// CHECK-NEXT:  %10 = llvm.xor %arg2, %arg3 : !llvm.vec<4 x i64>
+// CHECK-NEXT:  %10 = llvm.xor %arg2, %arg3 : vector<4xi64>
   %11 = xor %arg2, %arg3 : vector<4xi64>
-// CHECK-NEXT:  %11 = llvm.shl %arg2, %arg2 : !llvm.vec<4 x i64>
+// CHECK-NEXT:  %11 = llvm.shl %arg2, %arg2 : vector<4xi64>
   %12 = shift_left %arg2, %arg2 : vector<4xi64>
-// CHECK-NEXT:  %12 = llvm.ashr %arg2, %arg2 : !llvm.vec<4 x i64>
+// CHECK-NEXT:  %12 = llvm.ashr %arg2, %arg2 : vector<4xi64>
   %13 = shift_right_signed %arg2, %arg2 : vector<4xi64>
-// CHECK-NEXT:  %13 = llvm.lshr %arg2, %arg2 : !llvm.vec<4 x i64>
+// CHECK-NEXT:  %13 = llvm.lshr %arg2, %arg2 : vector<4xi64>
   %14 = shift_right_unsigned %arg2, %arg2 : vector<4xi64>
   return %1 : vector<4xf32>
 }
@@ -597,17 +597,17 @@ func @sitofp(%arg0 : i32, %arg1 : i64) {
 // Checking conversion of integer vectors to floating point vector types.
 // CHECK-LABEL: @sitofp_vector
 func @sitofp_vector(%arg0 : vector<2xi16>, %arg1 : vector<2xi32>, %arg2 : vector<2xi64>) {
-// CHECK-NEXT: = llvm.sitofp {{.*}} : !llvm.vec<2 x i16> to !llvm.vec<2 x f32>
+// CHECK-NEXT: = llvm.sitofp {{.*}} : vector<2xi16> to vector<2xf32>
   %0 = sitofp %arg0: vector<2xi16> to vector<2xf32>
-// CHECK-NEXT: = llvm.sitofp {{.*}} : !llvm.vec<2 x i16> to !llvm.vec<2 x f64>
+// CHECK-NEXT: = llvm.sitofp {{.*}} : vector<2xi16> to vector<2xf64>
   %1 = sitofp %arg0: vector<2xi16> to vector<2xf64>
-// CHECK-NEXT: = llvm.sitofp {{.*}} : !llvm.vec<2 x i32> to !llvm.vec<2 x f32>
+// CHECK-NEXT: = llvm.sitofp {{.*}} : vector<2xi32> to vector<2xf32>
   %2 = sitofp %arg1: vector<2xi32> to vector<2xf32>
-// CHECK-NEXT: = llvm.sitofp {{.*}} : !llvm.vec<2 x i32> to !llvm.vec<2 x f64>
+// CHECK-NEXT: = llvm.sitofp {{.*}} : vector<2xi32> to vector<2xf64>
   %3 = sitofp %arg1: vector<2xi32> to vector<2xf64>
-// CHECK-NEXT: = llvm.sitofp {{.*}} : !llvm.vec<2 x i64> to !llvm.vec<2 x f32>
+// CHECK-NEXT: = llvm.sitofp {{.*}} : vector<2xi64> to vector<2xf32>
   %4 = sitofp %arg2: vector<2xi64> to vector<2xf32>
-// CHECK-NEXT: = llvm.sitofp {{.*}} : !llvm.vec<2 x i64> to !llvm.vec<2 x f64>
+// CHECK-NEXT: = llvm.sitofp {{.*}} : vector<2xi64> to vector<2xf64>
   %5 = sitofp %arg2: vector<2xi64> to vector<2xf64>
   return
 }
@@ -641,11 +641,11 @@ func @fpext(%arg0 : f16, %arg1 : f32) {
 // Checking conversion of integer types to floating point.
 // CHECK-LABEL: @fpext
 func @fpext_vector(%arg0 : vector<2xf16>, %arg1 : vector<2xf32>) {
-// CHECK-NEXT: = llvm.fpext {{.*}} : !llvm.vec<2 x f16> to !llvm.vec<2 x f32>
+// CHECK-NEXT: = llvm.fpext {{.*}} : vector<2xf16> to vector<2xf32>
   %0 = fpext %arg0: vector<2xf16> to vector<2xf32>
-// CHECK-NEXT: = llvm.fpext {{.*}} : !llvm.vec<2 x f16> to !llvm.vec<2 x f64>
+// CHECK-NEXT: = llvm.fpext {{.*}} : vector<2xf16> to vector<2xf64>
   %1 = fpext %arg0: vector<2xf16> to vector<2xf64>
-// CHECK-NEXT: = llvm.fpext {{.*}} : !llvm.vec<2 x f32> to !llvm.vec<2 x f64>
+// CHECK-NEXT: = llvm.fpext {{.*}} : vector<2xf32> to vector<2xf64>
   %2 = fpext %arg1: vector<2xf32> to vector<2xf64>
   return
 }
@@ -667,17 +667,17 @@ func @fptosi(%arg0 : f32, %arg1 : f64) {
 // Checking conversion of floating point vectors to integer vector types.
 // CHECK-LABEL: @fptosi_vector
 func @fptosi_vector(%arg0 : vector<2xf16>, %arg1 : vector<2xf32>, %arg2 : vector<2xf64>) {
-// CHECK-NEXT: = llvm.fptosi {{.*}} : !llvm.vec<2 x f16> to !llvm.vec<2 x i32>
+// CHECK-NEXT: = llvm.fptosi {{.*}} : vector<2xf16> to vector<2xi32>
   %0 = fptosi %arg0: vector<2xf16> to vector<2xi32>
-// CHECK-NEXT: = llvm.fptosi {{.*}} : !llvm.vec<2 x f16> to !llvm.vec<2 x i64>
+// CHECK-NEXT: = llvm.fptosi {{.*}} : vector<2xf16> to vector<2xi64>
   %1 = fptosi %arg0: vector<2xf16> to vector<2xi64>
-// CHECK-NEXT: = llvm.fptosi {{.*}} : !llvm.vec<2 x f32> to !llvm.vec<2 x i32>
+// CHECK-NEXT: = llvm.fptosi {{.*}} : vector<2xf32> to vector<2xi32>
   %2 = fptosi %arg1: vector<2xf32> to vector<2xi32>
-// CHECK-NEXT: = llvm.fptosi {{.*}} : !llvm.vec<2 x f32> to !llvm.vec<2 x i64>
+// CHECK-NEXT: = llvm.fptosi {{.*}} : vector<2xf32> to vector<2xi64>
   %3 = fptosi %arg1: vector<2xf32> to vector<2xi64>
-// CHECK-NEXT: = llvm.fptosi {{.*}} : !llvm.vec<2 x f64> to !llvm.vec<2 x i32>
+// CHECK-NEXT: = llvm.fptosi {{.*}} : vector<2xf64> to vector<2xi32>
   %4 = fptosi %arg2: vector<2xf64> to vector<2xi32>
-// CHECK-NEXT: = llvm.fptosi {{.*}} : !llvm.vec<2 x f64> to !llvm.vec<2 x i64>
+// CHECK-NEXT: = llvm.fptosi {{.*}} : vector<2xf64> to vector<2xi64>
   %5 = fptosi %arg2: vector<2xf64> to vector<2xi64>
   return
 }
@@ -699,17 +699,17 @@ func @fptoui(%arg0 : f32, %arg1 : f64) {
 // Checking conversion of floating point vectors to integer vector types.
 // CHECK-LABEL: @fptoui_vector
 func @fptoui_vector(%arg0 : vector<2xf16>, %arg1 : vector<2xf32>, %arg2 : vector<2xf64>) {
-// CHECK-NEXT: = llvm.fptoui {{.*}} : !llvm.vec<2 x f16> to !llvm.vec<2 x i32>
+// CHECK-NEXT: = llvm.fptoui {{.*}} : vector<2xf16> to vector<2xi32>
   %0 = fptoui %arg0: vector<2xf16> to vector<2xi32>
-// CHECK-NEXT: = llvm.fptoui {{.*}} : !llvm.vec<2 x f16> to !llvm.vec<2 x i64>
+// CHECK-NEXT: = llvm.fptoui {{.*}} : vector<2xf16> to vector<2xi64>
   %1 = fptoui %arg0: vector<2xf16> to vector<2xi64>
-// CHECK-NEXT: = llvm.fptoui {{.*}} : !llvm.vec<2 x f32> to !llvm.vec<2 x i32>
+// CHECK-NEXT: = llvm.fptoui {{.*}} : vector<2xf32> to vector<2xi32>
   %2 = fptoui %arg1: vector<2xf32> to vector<2xi32>
-// CHECK-NEXT: = llvm.fptoui {{.*}} : !llvm.vec<2 x f32> to !llvm.vec<2 x i64>
+// CHECK-NEXT: = llvm.fptoui {{.*}} : vector<2xf32> to vector<2xi64>
   %3 = fptoui %arg1: vector<2xf32> to vector<2xi64>
-// CHECK-NEXT: = llvm.fptoui {{.*}} : !llvm.vec<2 x f64> to !llvm.vec<2 x i32>
+// CHECK-NEXT: = llvm.fptoui {{.*}} : vector<2xf64> to vector<2xi32>
   %4 = fptoui %arg2: vector<2xf64> to vector<2xi32>
-// CHECK-NEXT: = llvm.fptoui {{.*}} : !llvm.vec<2 x f64> to !llvm.vec<2 x i64>
+// CHECK-NEXT: = llvm.fptoui {{.*}} : vector<2xf64> to vector<2xi64>
   %5 = fptoui %arg2: vector<2xf64> to vector<2xi64>
   return
 }
@@ -717,17 +717,17 @@ func @fptoui_vector(%arg0 : vector<2xf16>, %arg1 : vector<2xf32>, %arg2 : vector
 // Checking conversion of integer vectors to floating point vector types.
 // CHECK-LABEL: @uitofp_vector
 func @uitofp_vector(%arg0 : vector<2xi16>, %arg1 : vector<2xi32>, %arg2 : vector<2xi64>) {
-// CHECK-NEXT: = llvm.uitofp {{.*}} : !llvm.vec<2 x i16> to !llvm.vec<2 x f32>
+// CHECK-NEXT: = llvm.uitofp {{.*}} : vector<2xi16> to vector<2xf32>
   %0 = uitofp %arg0: vector<2xi16> to vector<2xf32>
-// CHECK-NEXT: = llvm.uitofp {{.*}} : !llvm.vec<2 x i16> to !llvm.vec<2 x f64>
+// CHECK-NEXT: = llvm.uitofp {{.*}} : vector<2xi16> to vector<2xf64>
   %1 = uitofp %arg0: vector<2xi16> to vector<2xf64>
-// CHECK-NEXT: = llvm.uitofp {{.*}} : !llvm.vec<2 x i32> to !llvm.vec<2 x f32>
+// CHECK-NEXT: = llvm.uitofp {{.*}} : vector<2xi32> to vector<2xf32>
   %2 = uitofp %arg1: vector<2xi32> to vector<2xf32>
-// CHECK-NEXT: = llvm.uitofp {{.*}} : !llvm.vec<2 x i32> to !llvm.vec<2 x f64>
+// CHECK-NEXT: = llvm.uitofp {{.*}} : vector<2xi32> to vector<2xf64>
   %3 = uitofp %arg1: vector<2xi32> to vector<2xf64>
-// CHECK-NEXT: = llvm.uitofp {{.*}} : !llvm.vec<2 x i64> to !llvm.vec<2 x f32>
+// CHECK-NEXT: = llvm.uitofp {{.*}} : vector<2xi64> to vector<2xf32>
   %4 = uitofp %arg2: vector<2xi64> to vector<2xf32>
-// CHECK-NEXT: = llvm.uitofp {{.*}} : !llvm.vec<2 x i64> to !llvm.vec<2 x f64>
+// CHECK-NEXT: = llvm.uitofp {{.*}} : vector<2xi64> to vector<2xf64>
   %5 = uitofp %arg2: vector<2xi64> to vector<2xf64>
   return
 }
@@ -747,11 +747,11 @@ func @fptrunc(%arg0 : f32, %arg1 : f64) {
 // Checking conversion of integer types to floating point.
 // CHECK-LABEL: @fptrunc
 func @fptrunc_vector(%arg0 : vector<2xf32>, %arg1 : vector<2xf64>) {
-// CHECK-NEXT: = llvm.fptrunc {{.*}} : !llvm.vec<2 x f32> to !llvm.vec<2 x f16>
+// CHECK-NEXT: = llvm.fptrunc {{.*}} : vector<2xf32> to vector<2xf16>
   %0 = fptrunc %arg0: vector<2xf32> to vector<2xf16>
-// CHECK-NEXT: = llvm.fptrunc {{.*}} : !llvm.vec<2 x f64> to !llvm.vec<2 x f16>
+// CHECK-NEXT: = llvm.fptrunc {{.*}} : vector<2xf64> to vector<2xf16>
   %1 = fptrunc %arg1: vector<2xf64> to vector<2xf16>
-// CHECK-NEXT: = llvm.fptrunc {{.*}} : !llvm.vec<2 x f64> to !llvm.vec<2 x f32>
+// CHECK-NEXT: = llvm.fptrunc {{.*}} : vector<2xf64> to vector<2xf32>
   %2 = fptrunc %arg1: vector<2xf64> to vector<2xf32>
   return
 }
@@ -831,40 +831,40 @@ func @vec_bin(%arg0: vector<2x2x2xf32>) -> vector<2x2x2xf32> {
   %0 = addf %arg0, %arg0 : vector<2x2x2xf32>
   return %0 : vector<2x2x2xf32>
 
-//  CHECK-NEXT: llvm.mlir.undef : !llvm.array<2 x array<2 x vec<2 x f32>>>
+//  CHECK-NEXT: llvm.mlir.undef : !llvm.array<2 x array<2 x vector<2xf32>>>
 
 // This block appears 2x2 times
-//  CHECK-NEXT: llvm.extractvalue %{{.*}}[0, 0] : !llvm.array<2 x array<2 x vec<2 x f32>>>
-//  CHECK-NEXT: llvm.extractvalue %{{.*}}[0, 0] : !llvm.array<2 x array<2 x vec<2 x f32>>>
-//  CHECK-NEXT: llvm.fadd %{{.*}} : !llvm.vec<2 x f32>
-//  CHECK-NEXT: llvm.insertvalue %{{.*}}[0, 0] : !llvm.array<2 x array<2 x vec<2 x f32>>>
+//  CHECK-NEXT: llvm.extractvalue %{{.*}}[0, 0] : !llvm.array<2 x array<2 x vector<2xf32>>>
+//  CHECK-NEXT: llvm.extractvalue %{{.*}}[0, 0] : !llvm.array<2 x array<2 x vector<2xf32>>>
+//  CHECK-NEXT: llvm.fadd %{{.*}} : vector<2xf32>
+//  CHECK-NEXT: llvm.insertvalue %{{.*}}[0, 0] : !llvm.array<2 x array<2 x vector<2xf32>>>
 
 // We check the proper indexing of extract/insert in the remaining 3 positions.
-//       CHECK: llvm.extractvalue %{{.*}}[0, 1] : !llvm.array<2 x array<2 x vec<2 x f32>>>
-//       CHECK: llvm.insertvalue %{{.*}}[0, 1] : !llvm.array<2 x array<2 x vec<2 x f32>>>
-//       CHECK: llvm.extractvalue %{{.*}}[1, 0] : !llvm.array<2 x array<2 x vec<2 x f32>>>
-//       CHECK: llvm.insertvalue %{{.*}}[1, 0] : !llvm.array<2 x array<2 x vec<2 x f32>>>
-//       CHECK: llvm.extractvalue %{{.*}}[1, 1] : !llvm.array<2 x array<2 x vec<2 x f32>>>
-//       CHECK: llvm.insertvalue %{{.*}}[1, 1] : !llvm.array<2 x array<2 x vec<2 x f32>>>
+//       CHECK: llvm.extractvalue %{{.*}}[0, 1] : !llvm.array<2 x array<2 x vector<2xf32>>>
+//       CHECK: llvm.insertvalue %{{.*}}[0, 1] : !llvm.array<2 x array<2 x vector<2xf32>>>
+//       CHECK: llvm.extractvalue %{{.*}}[1, 0] : !llvm.array<2 x array<2 x vector<2xf32>>>
+//       CHECK: llvm.insertvalue %{{.*}}[1, 0] : !llvm.array<2 x array<2 x vector<2xf32>>>
+//       CHECK: llvm.extractvalue %{{.*}}[1, 1] : !llvm.array<2 x array<2 x vector<2xf32>>>
+//       CHECK: llvm.insertvalue %{{.*}}[1, 1] : !llvm.array<2 x array<2 x vector<2xf32>>>
 
 // And we're done
 //   CHECK-NEXT: return
 }
 
 // CHECK-LABEL: @splat
-// CHECK-SAME: %[[A:arg[0-9]+]]: !llvm.vec<4 x f32>
+// CHECK-SAME: %[[A:arg[0-9]+]]: vector<4xf32>
 // CHECK-SAME: %[[ELT:arg[0-9]+]]: f32
 func @splat(%a: vector<4xf32>, %b: f32) -> vector<4xf32> {
   %vb = splat %b : vector<4xf32>
   %r = mulf %a, %vb : vector<4xf32>
   return %r : vector<4xf32>
 }
-// CHECK-NEXT: %[[UNDEF:[0-9]+]] = llvm.mlir.undef : !llvm.vec<4 x f32>
+// CHECK-NEXT: %[[UNDEF:[0-9]+]] = llvm.mlir.undef : vector<4xf32>
 // CHECK-NEXT: %[[ZERO:[0-9]+]] = llvm.mlir.constant(0 : i32) : i32
-// CHECK-NEXT: %[[V:[0-9]+]] = llvm.insertelement %[[ELT]], %[[UNDEF]][%[[ZERO]] : i32] : !llvm.vec<4 x f32>
+// CHECK-NEXT: %[[V:[0-9]+]] = llvm.insertelement %[[ELT]], %[[UNDEF]][%[[ZERO]] : i32] : vector<4xf32>
 // CHECK-NEXT: %[[SPLAT:[0-9]+]] = llvm.shufflevector %[[V]], %[[UNDEF]] [0 : i32, 0 : i32, 0 : i32, 0 : i32]
-// CHECK-NEXT: %[[SCALE:[0-9]+]] = llvm.fmul %[[A]], %[[SPLAT]] : !llvm.vec<4 x f32>
-// CHECK-NEXT: llvm.return %[[SCALE]] : !llvm.vec<4 x f32>
+// CHECK-NEXT: %[[SCALE:[0-9]+]] = llvm.fmul %[[A]], %[[SPLAT]] : vector<4xf32>
+// CHECK-NEXT: llvm.return %[[SCALE]] : vector<4xf32>
 
 // CHECK-LABEL: func @view(
 // CHECK: %[[ARG0:.*]]: i64, %[[ARG1:.*]]: i64, %[[ARG2:.*]]: i64
@@ -1357,24 +1357,6 @@ func @assume_alignment(%0 : memref<4x4xf16>) {
 
 // -----
 
-// CHECK-LABEL: func @mlir_cast_to_llvm
-// CHECK-SAME: %[[ARG:.*]]:
-func @mlir_cast_to_llvm(%0 : vector<2xf16>) -> !llvm.vec<2 x f16> {
-  %1 = llvm.mlir.cast %0 : vector<2xf16> to !llvm.vec<2 x f16>
-  // CHECK-NEXT: llvm.return %[[ARG]]
-  return %1 : !llvm.vec<2 x f16>
-}
-
-// CHECK-LABEL: func @mlir_cast_from_llvm
-// CHECK-SAME: %[[ARG:.*]]:
-func @mlir_cast_from_llvm(%0 : !llvm.vec<2 x f16>) -> vector<2xf16> {
-  %1 = llvm.mlir.cast %0 : !llvm.vec<2 x f16> to vector<2xf16>
-  // CHECK-NEXT: llvm.return %[[ARG]]
-  return %1 : vector<2xf16>
-}
-
-// -----
-
 // CHECK-LABEL: func @memref_index
 // CHECK-SAME: %arg0: !llvm.ptr<i64>, %arg1: !llvm.ptr<i64>,
 // CHECK-SAME: %arg2: i64, %arg3: i64, %arg4: i64)

diff --git a/mlir/test/Conversion/StandardToLLVM/standard-to-llvm.mlir b/mlir/test/Conversion/StandardToLLVM/standard-to-llvm.mlir
index 56126f603c27..72c42de3a47e 100644
--- a/mlir/test/Conversion/StandardToLLVM/standard-to-llvm.mlir
+++ b/mlir/test/Conversion/StandardToLLVM/standard-to-llvm.mlir
@@ -68,11 +68,11 @@ func @rsqrt_double(%arg0 : f64) {
 // -----
 
 // CHECK-LABEL: func @rsqrt_vector(
-// CHECK-SAME: !llvm.vec<4 x f32>
+// CHECK-SAME: vector<4xf32>
 func @rsqrt_vector(%arg0 : vector<4xf32>) {
-  // CHECK: %[[ONE:.*]] = llvm.mlir.constant(dense<1.000000e+00> : vector<4xf32>) : !llvm.vec<4 x f32>
-  // CHECK: %[[SQRT:.*]] = "llvm.intr.sqrt"(%arg0) : (!llvm.vec<4 x f32>) -> !llvm.vec<4 x f32>
-  // CHECK: %[[DIV:.*]] = llvm.fdiv %[[ONE]], %[[SQRT]] : !llvm.vec<4 x f32>
+  // CHECK: %[[ONE:.*]] = llvm.mlir.constant(dense<1.000000e+00> : vector<4xf32>) : vector<4xf32>
+  // CHECK: %[[SQRT:.*]] = "llvm.intr.sqrt"(%arg0) : (vector<4xf32>) -> vector<4xf32>
+  // CHECK: %[[DIV:.*]] = llvm.fdiv %[[ONE]], %[[SQRT]] : vector<4xf32>
   %0 = rsqrt %arg0 : vector<4xf32>
   std.return
 }
@@ -80,13 +80,13 @@ func @rsqrt_vector(%arg0 : vector<4xf32>) {
 // -----
 
 // CHECK-LABEL: func @rsqrt_multidim_vector(
-// CHECK-SAME: !llvm.array<4 x vec<3 x f32>>
+// CHECK-SAME: !llvm.array<4 x vector<3xf32>>
 func @rsqrt_multidim_vector(%arg0 : vector<4x3xf32>) {
-  // CHECK: %[[EXTRACT:.*]] = llvm.extractvalue %arg0[0] : !llvm.array<4 x vec<3 x f32>>
-  // CHECK: %[[ONE:.*]] = llvm.mlir.constant(dense<1.000000e+00> : vector<3xf32>) : !llvm.vec<3 x f32>
-  // CHECK: %[[SQRT:.*]] = "llvm.intr.sqrt"(%[[EXTRACT]]) : (!llvm.vec<3 x f32>) -> !llvm.vec<3 x f32>
-  // CHECK: %[[DIV:.*]] = llvm.fdiv %[[ONE]], %[[SQRT]] : !llvm.vec<3 x f32>
-  // CHECK: %[[INSERT:.*]] = llvm.insertvalue %[[DIV]], %0[0] : !llvm.array<4 x vec<3 x f32>>
+  // CHECK: %[[EXTRACT:.*]] = llvm.extractvalue %arg0[0] : !llvm.array<4 x vector<3xf32>>
+  // CHECK: %[[ONE:.*]] = llvm.mlir.constant(dense<1.000000e+00> : vector<3xf32>) : vector<3xf32>
+  // CHECK: %[[SQRT:.*]] = "llvm.intr.sqrt"(%[[EXTRACT]]) : (vector<3xf32>) -> vector<3xf32>
+  // CHECK: %[[DIV:.*]] = llvm.fdiv %[[ONE]], %[[SQRT]] : vector<3xf32>
+  // CHECK: %[[INSERT:.*]] = llvm.insertvalue %[[DIV]], %0[0] : !llvm.array<4 x vector<3xf32>>
   %0 = rsqrt %arg0 : vector<4x3xf32>
   std.return
 }

diff --git a/mlir/test/Conversion/VectorToLLVM/vector-mask-to-llvm.mlir b/mlir/test/Conversion/VectorToLLVM/vector-mask-to-llvm.mlir
index d15499ec3871..85e19da84013 100644
--- a/mlir/test/Conversion/VectorToLLVM/vector-mask-to-llvm.mlir
+++ b/mlir/test/Conversion/VectorToLLVM/vector-mask-to-llvm.mlir
@@ -3,24 +3,24 @@
 
 // CMP32-LABEL: llvm.func @genbool_var_1d(
 // CMP32-SAME: %[[A:.*]]: i64)
-// CMP32: %[[T0:.*]] = llvm.mlir.constant(dense<[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]> : vector<11xi32>) : !llvm.vec<11 x i32>
+// CMP32: %[[T0:.*]] = llvm.mlir.constant(dense<[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]> : vector<11xi32>) : vector<11xi32>
 // CMP32: %[[T1:.*]] = llvm.trunc %[[A]] : i64 to i32
-// CMP32: %[[T2:.*]] = llvm.mlir.undef : !llvm.vec<11 x i32>
+// CMP32: %[[T2:.*]] = llvm.mlir.undef : vector<11xi32>
 // CMP32: %[[T3:.*]] = llvm.mlir.constant(0 : i32) : i32
-// CMP32: %[[T4:.*]] = llvm.insertelement %[[T1]], %[[T2]][%[[T3]] : i32] : !llvm.vec<11 x i32>
-// CMP32: %[[T5:.*]] = llvm.shufflevector %[[T4]], %[[T2]] [0 : i32, 0 : i32, 0 : i32, 0 : i32, 0 : i32, 0 : i32, 0 : i32, 0 : i32, 0 : i32, 0 : i32, 0 : i32] : !llvm.vec<11 x i32>, !llvm.vec<11 x i32>
-// CMP32: %[[T6:.*]] = llvm.icmp "slt" %[[T0]], %[[T5]] : !llvm.vec<11 x i32>
-// CMP32: llvm.return %[[T6]] : !llvm.vec<11 x i1>
+// CMP32: %[[T4:.*]] = llvm.insertelement %[[T1]], %[[T2]][%[[T3]] : i32] : vector<11xi32>
+// CMP32: %[[T5:.*]] = llvm.shufflevector %[[T4]], %[[T2]] [0 : i32, 0 : i32, 0 : i32, 0 : i32, 0 : i32, 0 : i32, 0 : i32, 0 : i32, 0 : i32, 0 : i32, 0 : i32] : vector<11xi32>, vector<11xi32>
+// CMP32: %[[T6:.*]] = llvm.icmp "slt" %[[T0]], %[[T5]] : vector<11xi32>
+// CMP32: llvm.return %[[T6]] : vector<11xi1>
 
 // CMP64-LABEL: llvm.func @genbool_var_1d(
 // CMP64-SAME: %[[A:.*]]: i64)
-// CMP64: %[[T0:.*]] = llvm.mlir.constant(dense<[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]> : vector<11xi64>) : !llvm.vec<11 x i64>
-// CMP64: %[[T1:.*]] = llvm.mlir.undef : !llvm.vec<11 x i64>
+// CMP64: %[[T0:.*]] = llvm.mlir.constant(dense<[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]> : vector<11xi64>) : vector<11xi64>
+// CMP64: %[[T1:.*]] = llvm.mlir.undef : vector<11xi64>
 // CMP64: %[[T2:.*]] = llvm.mlir.constant(0 : i32) : i32
-// CMP64: %[[T3:.*]] = llvm.insertelement %[[A]], %[[T1]][%[[T2]] : i32] : !llvm.vec<11 x i64>
-// CMP64: %[[T4:.*]] = llvm.shufflevector %[[T3]], %[[T1]] [0 : i32, 0 : i32, 0 : i32, 0 : i32, 0 : i32, 0 : i32, 0 : i32, 0 : i32, 0 : i32, 0 : i32, 0 : i32] : !llvm.vec<11 x i64>, !llvm.vec<11 x i64>
-// CMP64: %[[T5:.*]] = llvm.icmp "slt" %[[T0]], %[[T4]] : !llvm.vec<11 x i64>
-// CMP64: llvm.return %[[T5]] : !llvm.vec<11 x i1>
+// CMP64: %[[T3:.*]] = llvm.insertelement %[[A]], %[[T1]][%[[T2]] : i32] : vector<11xi64>
+// CMP64: %[[T4:.*]] = llvm.shufflevector %[[T3]], %[[T1]] [0 : i32, 0 : i32, 0 : i32, 0 : i32, 0 : i32, 0 : i32, 0 : i32, 0 : i32, 0 : i32, 0 : i32, 0 : i32] : vector<11xi64>, vector<11xi64>
+// CMP64: %[[T5:.*]] = llvm.icmp "slt" %[[T0]], %[[T4]] : vector<11xi64>
+// CMP64: llvm.return %[[T5]] : vector<11xi1>
 
 func @genbool_var_1d(%arg0: index) -> vector<11xi1> {
   %0 = vector.create_mask %arg0 : vector<11xi1>
@@ -28,18 +28,18 @@ func @genbool_var_1d(%arg0: index) -> vector<11xi1> {
 }
 
 // CMP32-LABEL: llvm.func @transfer_read_1d
-// CMP32: %[[C:.*]] = llvm.mlir.constant(dense<[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]> : vector<16xi32>) : !llvm.vec<16 x i32>
-// CMP32: %[[A:.*]] = llvm.add %{{.*}}, %[[C]] : !llvm.vec<16 x i32>
-// CMP32: %[[M:.*]] = llvm.icmp "slt" %[[A]], %{{.*}} : !llvm.vec<16 x i32>
+// CMP32: %[[C:.*]] = llvm.mlir.constant(dense<[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]> : vector<16xi32>) : vector<16xi32>
+// CMP32: %[[A:.*]] = llvm.add %{{.*}}, %[[C]] : vector<16xi32>
+// CMP32: %[[M:.*]] = llvm.icmp "slt" %[[A]], %{{.*}} : vector<16xi32>
 // CMP32: %[[L:.*]] = llvm.intr.masked.load %{{.*}}, %[[M]], %{{.*}}
-// CMP32: llvm.return %[[L]] : !llvm.vec<16 x f32>
+// CMP32: llvm.return %[[L]] : vector<16xf32>
 
 // CMP64-LABEL: llvm.func @transfer_read_1d
-// CMP64: %[[C:.*]] = llvm.mlir.constant(dense<[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]> : vector<16xi64>) : !llvm.vec<16 x i64>
-// CMP64: %[[A:.*]] = llvm.add %{{.*}}, %[[C]] : !llvm.vec<16 x i64>
-// CMP64: %[[M:.*]] = llvm.icmp "slt" %[[A]], %{{.*}} : !llvm.vec<16 x i64>
+// CMP64: %[[C:.*]] = llvm.mlir.constant(dense<[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]> : vector<16xi64>) : vector<16xi64>
+// CMP64: %[[A:.*]] = llvm.add %{{.*}}, %[[C]] : vector<16xi64>
+// CMP64: %[[M:.*]] = llvm.icmp "slt" %[[A]], %{{.*}} : vector<16xi64>
 // CMP64: %[[L:.*]] = llvm.intr.masked.load %{{.*}}, %[[M]], %{{.*}}
-// CMP64: llvm.return %[[L]] : !llvm.vec<16 x f32>
+// CMP64: llvm.return %[[L]] : vector<16xf32>
 
 func @transfer_read_1d(%A : memref<?xf32>, %i: index) -> vector<16xf32> {
   %d = constant -1.0: f32

diff  --git a/mlir/test/Conversion/VectorToLLVM/vector-reduction-to-llvm.mlir b/mlir/test/Conversion/VectorToLLVM/vector-reduction-to-llvm.mlir
index e35f6f8f0a4b..71d091413b2e 100644
--- a/mlir/test/Conversion/VectorToLLVM/vector-reduction-to-llvm.mlir
+++ b/mlir/test/Conversion/VectorToLLVM/vector-reduction-to-llvm.mlir
@@ -3,17 +3,17 @@
 
 //
 // CHECK-LABEL: llvm.func @reduce_add_f32(
-// CHECK-SAME: %[[A:.*]]: !llvm.vec<16 x f32>)
+// CHECK-SAME: %[[A:.*]]: vector<16xf32>)
 //      CHECK: %[[C:.*]] = llvm.mlir.constant(0.000000e+00 : f32) : f32
 //      CHECK: %[[V:.*]] = "llvm.intr.vector.reduce.fadd"(%[[C]], %[[A]])
-// CHECK-SAME: {reassoc = false} : (f32, !llvm.vec<16 x f32>) -> f32
+// CHECK-SAME: {reassoc = false} : (f32, vector<16xf32>) -> f32
 //      CHECK: llvm.return %[[V]] : f32
 //
 // REASSOC-LABEL: llvm.func @reduce_add_f32(
-// REASSOC-SAME: %[[A:.*]]: !llvm.vec<16 x f32>)
+// REASSOC-SAME: %[[A:.*]]: vector<16xf32>)
 //      REASSOC: %[[C:.*]] = llvm.mlir.constant(0.000000e+00 : f32) : f32
 //      REASSOC: %[[V:.*]] = "llvm.intr.vector.reduce.fadd"(%[[C]], %[[A]])
-// REASSOC-SAME: {reassoc = true} : (f32, !llvm.vec<16 x f32>) -> f32
+// REASSOC-SAME: {reassoc = true} : (f32, vector<16xf32>) -> f32
 //      REASSOC: llvm.return %[[V]] : f32
 //
 func @reduce_add_f32(%arg0: vector<16xf32>) -> f32 {
@@ -23,17 +23,17 @@ func @reduce_add_f32(%arg0: vector<16xf32>) -> f32 {
 
 //
 // CHECK-LABEL: llvm.func @reduce_mul_f32(
-// CHECK-SAME: %[[A:.*]]: !llvm.vec<16 x f32>)
+// CHECK-SAME: %[[A:.*]]: vector<16xf32>)
 //      CHECK: %[[C:.*]] = llvm.mlir.constant(1.000000e+00 : f32) : f32
 //      CHECK: %[[V:.*]] = "llvm.intr.vector.reduce.fmul"(%[[C]], %[[A]])
-// CHECK-SAME: {reassoc = false} : (f32, !llvm.vec<16 x f32>) -> f32
+// CHECK-SAME: {reassoc = false} : (f32, vector<16xf32>) -> f32
 //      CHECK: llvm.return %[[V]] : f32
 //
 // REASSOC-LABEL: llvm.func @reduce_mul_f32(
-// REASSOC-SAME: %[[A:.*]]: !llvm.vec<16 x f32>)
+// REASSOC-SAME: %[[A:.*]]: vector<16xf32>)
 //      REASSOC: %[[C:.*]] = llvm.mlir.constant(1.000000e+00 : f32) : f32
 //      REASSOC: %[[V:.*]] = "llvm.intr.vector.reduce.fmul"(%[[C]], %[[A]])
-// REASSOC-SAME: {reassoc = true} : (f32, !llvm.vec<16 x f32>) -> f32
+// REASSOC-SAME: {reassoc = true} : (f32, vector<16xf32>) -> f32
 //      REASSOC: llvm.return %[[V]] : f32
 //
 func @reduce_mul_f32(%arg0: vector<16xf32>) -> f32 {

diff  --git a/mlir/test/Conversion/VectorToLLVM/vector-to-llvm.mlir b/mlir/test/Conversion/VectorToLLVM/vector-to-llvm.mlir
index ef4a85c1652c..cfaaed48e039 100644
--- a/mlir/test/Conversion/VectorToLLVM/vector-to-llvm.mlir
+++ b/mlir/test/Conversion/VectorToLLVM/vector-to-llvm.mlir
@@ -6,11 +6,11 @@ func @broadcast_vec1d_from_scalar(%arg0: f32) -> vector<2xf32> {
 }
 // CHECK-LABEL: llvm.func @broadcast_vec1d_from_scalar(
 // CHECK-SAME:  %[[A:.*]]: f32)
-// CHECK:       %[[T0:.*]] = llvm.mlir.undef : !llvm.vec<2 x f32>
+// CHECK:       %[[T0:.*]] = llvm.mlir.undef : vector<2xf32>
 // CHECK:       %[[T1:.*]] = llvm.mlir.constant(0 : i32) : i32
-// CHECK:       %[[T2:.*]] = llvm.insertelement %[[A]], %[[T0]][%[[T1]] : i32] : !llvm.vec<2 x f32>
-// CHECK:       %[[T3:.*]] = llvm.shufflevector %[[T2]], %[[T0]] [0 : i32, 0 : i32] : !llvm.vec<2 x f32>, !llvm.vec<2 x f32>
-// CHECK:       llvm.return %[[T3]] : !llvm.vec<2 x f32>
+// CHECK:       %[[T2:.*]] = llvm.insertelement %[[A]], %[[T0]][%[[T1]] : i32] : vector<2xf32>
+// CHECK:       %[[T3:.*]] = llvm.shufflevector %[[T2]], %[[T0]] [0 : i32, 0 : i32] : vector<2xf32>, vector<2xf32>
+// CHECK:       llvm.return %[[T3]] : vector<2xf32>
 
 func @broadcast_vec2d_from_scalar(%arg0: f32) -> vector<2x3xf32> {
   %0 = vector.broadcast %arg0 : f32 to vector<2x3xf32>
@@ -18,14 +18,14 @@ func @broadcast_vec2d_from_scalar(%arg0: f32) -> vector<2x3xf32> {
 }
 // CHECK-LABEL: llvm.func @broadcast_vec2d_from_scalar(
 // CHECK-SAME:  %[[A:.*]]: f32)
-// CHECK:       %[[T0:.*]] = llvm.mlir.undef : !llvm.array<2 x vec<3 x f32>>
-// CHECK:       %[[T1:.*]] = llvm.mlir.undef : !llvm.vec<3 x f32>
+// CHECK:       %[[T0:.*]] = llvm.mlir.undef : !llvm.array<2 x vector<3xf32>>
+// CHECK:       %[[T1:.*]] = llvm.mlir.undef : vector<3xf32>
 // CHECK:       %[[T2:.*]] = llvm.mlir.constant(0 : i32) : i32
-// CHECK:       %[[T3:.*]] = llvm.insertelement %[[A]], %[[T1]][%[[T2]] : i32] : !llvm.vec<3 x f32>
-// CHECK:       %[[T4:.*]] = llvm.shufflevector %[[T3]], %[[T3]] [0 : i32, 0 : i32, 0 : i32] : !llvm.vec<3 x f32>, !llvm.vec<3 x f32>
-// CHECK:       %[[T5:.*]] = llvm.insertvalue %[[T4]], %[[T0]][0] : !llvm.array<2 x vec<3 x f32>>
-// CHECK:       %[[T6:.*]] = llvm.insertvalue %[[T4]], %[[T5]][1] : !llvm.array<2 x vec<3 x f32>>
-// CHECK:       llvm.return %[[T6]] : !llvm.array<2 x vec<3 x f32>>
+// CHECK:       %[[T3:.*]] = llvm.insertelement %[[A]], %[[T1]][%[[T2]] : i32] : vector<3xf32>
+// CHECK:       %[[T4:.*]] = llvm.shufflevector %[[T3]], %[[T3]] [0 : i32, 0 : i32, 0 : i32] : vector<3xf32>, vector<3xf32>
+// CHECK:       %[[T5:.*]] = llvm.insertvalue %[[T4]], %[[T0]][0] : !llvm.array<2 x vector<3xf32>>
+// CHECK:       %[[T6:.*]] = llvm.insertvalue %[[T4]], %[[T5]][1] : !llvm.array<2 x vector<3xf32>>
+// CHECK:       llvm.return %[[T6]] : !llvm.array<2 x vector<3xf32>>
 
 func @broadcast_vec3d_from_scalar(%arg0: f32) -> vector<2x3x4xf32> {
   %0 = vector.broadcast %arg0 : f32 to vector<2x3x4xf32>
@@ -33,277 +33,277 @@ func @broadcast_vec3d_from_scalar(%arg0: f32) -> vector<2x3x4xf32> {
 }
 // CHECK-LABEL: llvm.func @broadcast_vec3d_from_scalar(
 // CHECK-SAME:  %[[A:.*]]: f32)
-// CHECK:       %[[T0:.*]] = llvm.mlir.undef : !llvm.array<2 x array<3 x vec<4 x f32>>>
-// CHECK:       %[[T1:.*]] = llvm.mlir.undef : !llvm.vec<4 x f32>
+// CHECK:       %[[T0:.*]] = llvm.mlir.undef : !llvm.array<2 x array<3 x vector<4xf32>>>
+// CHECK:       %[[T1:.*]] = llvm.mlir.undef : vector<4xf32>
 // CHECK:       %[[T2:.*]] = llvm.mlir.constant(0 : i32) : i32
-// CHECK:       %[[T3:.*]] = llvm.insertelement %[[A]], %[[T1]][%[[T2]] : i32] : !llvm.vec<4 x f32>
-// CHECK:       %[[T4:.*]] = llvm.shufflevector %[[T3]], %[[T3]] [0 : i32, 0 : i32, 0 : i32, 0 : i32] : !llvm.vec<4 x f32>, !llvm.vec<4 x f32>
-// CHECK:       %[[T5:.*]] = llvm.insertvalue %[[T4]], %[[T0]][0, 0] : !llvm.array<2 x array<3 x vec<4 x f32>>>
-// CHECK:       %[[T6:.*]] = llvm.insertvalue %[[T4]], %[[T5]][0, 1] : !llvm.array<2 x array<3 x vec<4 x f32>>>
-// CHECK:       %[[T7:.*]] = llvm.insertvalue %[[T4]], %[[T6]][0, 2] : !llvm.array<2 x array<3 x vec<4 x f32>>>
-// CHECK:       %[[T8:.*]] = llvm.insertvalue %[[T4]], %[[T7]][1, 0] : !llvm.array<2 x array<3 x vec<4 x f32>>>
-// CHECK:       %[[T9:.*]] = llvm.insertvalue %[[T4]], %[[T8]][1, 1] : !llvm.array<2 x array<3 x vec<4 x f32>>>
-// CHECK:       %[[T10:.*]] = llvm.insertvalue %[[T4]], %[[T9]][1, 2] : !llvm.array<2 x array<3 x vec<4 x f32>>>
-// CHECK:       llvm.return %[[T10]] : !llvm.array<2 x array<3 x vec<4 x f32>>>
+// CHECK:       %[[T3:.*]] = llvm.insertelement %[[A]], %[[T1]][%[[T2]] : i32] : vector<4xf32>
+// CHECK:       %[[T4:.*]] = llvm.shufflevector %[[T3]], %[[T3]] [0 : i32, 0 : i32, 0 : i32, 0 : i32] : vector<4xf32>, vector<4xf32>
+// CHECK:       %[[T5:.*]] = llvm.insertvalue %[[T4]], %[[T0]][0, 0] : !llvm.array<2 x array<3 x vector<4xf32>>>
+// CHECK:       %[[T6:.*]] = llvm.insertvalue %[[T4]], %[[T5]][0, 1] : !llvm.array<2 x array<3 x vector<4xf32>>>
+// CHECK:       %[[T7:.*]] = llvm.insertvalue %[[T4]], %[[T6]][0, 2] : !llvm.array<2 x array<3 x vector<4xf32>>>
+// CHECK:       %[[T8:.*]] = llvm.insertvalue %[[T4]], %[[T7]][1, 0] : !llvm.array<2 x array<3 x vector<4xf32>>>
+// CHECK:       %[[T9:.*]] = llvm.insertvalue %[[T4]], %[[T8]][1, 1] : !llvm.array<2 x array<3 x vector<4xf32>>>
+// CHECK:       %[[T10:.*]] = llvm.insertvalue %[[T4]], %[[T9]][1, 2] : !llvm.array<2 x array<3 x vector<4xf32>>>
+// CHECK:       llvm.return %[[T10]] : !llvm.array<2 x array<3 x vector<4xf32>>>
 
 func @broadcast_vec1d_from_vec1d(%arg0: vector<2xf32>) -> vector<2xf32> {
   %0 = vector.broadcast %arg0 : vector<2xf32> to vector<2xf32>
   return %0 : vector<2xf32>
 }
 // CHECK-LABEL: llvm.func @broadcast_vec1d_from_vec1d(
-// CHECK-SAME:  %[[A:.*]]: !llvm.vec<2 x f32>)
-// CHECK:       llvm.return %[[A]] : !llvm.vec<2 x f32>
+// CHECK-SAME:  %[[A:.*]]: vector<2xf32>)
+// CHECK:       llvm.return %[[A]] : vector<2xf32>
 
 func @broadcast_vec2d_from_vec1d(%arg0: vector<2xf32>) -> vector<3x2xf32> {
   %0 = vector.broadcast %arg0 : vector<2xf32> to vector<3x2xf32>
   return %0 : vector<3x2xf32>
 }
 // CHECK-LABEL: llvm.func @broadcast_vec2d_from_vec1d(
-// CHECK-SAME:  %[[A:.*]]: !llvm.vec<2 x f32>)
-// CHECK:       %[[T0:.*]] = llvm.mlir.constant(dense<0.000000e+00> : vector<3x2xf32>) : !llvm.array<3 x vec<2 x f32>>
-// CHECK:       %[[T1:.*]] = llvm.insertvalue %[[A]], %[[T0]][0] : !llvm.array<3 x vec<2 x f32>>
-// CHECK:       %[[T2:.*]] = llvm.insertvalue %[[A]], %[[T1]][1] : !llvm.array<3 x vec<2 x f32>>
-// CHECK:       %[[T3:.*]] = llvm.insertvalue %[[A]], %[[T2]][2] : !llvm.array<3 x vec<2 x f32>>
-// CHECK:       llvm.return %[[T3]] : !llvm.array<3 x vec<2 x f32>>
+// CHECK-SAME:  %[[A:.*]]: vector<2xf32>)
+// CHECK:       %[[T0:.*]] = llvm.mlir.constant(dense<0.000000e+00> : vector<3x2xf32>) : !llvm.array<3 x vector<2xf32>>
+// CHECK:       %[[T1:.*]] = llvm.insertvalue %[[A]], %[[T0]][0] : !llvm.array<3 x vector<2xf32>>
+// CHECK:       %[[T2:.*]] = llvm.insertvalue %[[A]], %[[T1]][1] : !llvm.array<3 x vector<2xf32>>
+// CHECK:       %[[T3:.*]] = llvm.insertvalue %[[A]], %[[T2]][2] : !llvm.array<3 x vector<2xf32>>
+// CHECK:       llvm.return %[[T3]] : !llvm.array<3 x vector<2xf32>>
 
 func @broadcast_vec3d_from_vec1d(%arg0: vector<2xf32>) -> vector<4x3x2xf32> {
   %0 = vector.broadcast %arg0 : vector<2xf32> to vector<4x3x2xf32>
   return %0 : vector<4x3x2xf32>
 }
 // CHECK-LABEL: llvm.func @broadcast_vec3d_from_vec1d(
-// CHECK-SAME:  %[[A:.*]]: !llvm.vec<2 x f32>)
-// CHECK:       %[[T0:.*]] = llvm.mlir.constant(dense<0.000000e+00> : vector<3x2xf32>) : !llvm.array<3 x vec<2 x f32>>
-// CHECK:       %[[T1:.*]] = llvm.mlir.constant(dense<0.000000e+00> : vector<4x3x2xf32>) : !llvm.array<4 x array<3 x vec<2 x f32>>>
-// CHECK:       %[[T2:.*]] = llvm.insertvalue %[[A]], %[[T0]][0] : !llvm.array<3 x vec<2 x f32>>
-// CHECK:       %[[T3:.*]] = llvm.insertvalue %[[A]], %[[T2]][1] : !llvm.array<3 x vec<2 x f32>>
-// CHECK:       %[[T4:.*]] = llvm.insertvalue %[[A]], %[[T3]][2] : !llvm.array<3 x vec<2 x f32>>
-// CHECK:       %[[T5:.*]] = llvm.insertvalue %[[T4]], %[[T1]][0] : !llvm.array<4 x array<3 x vec<2 x f32>>>
-// CHECK:       %[[T6:.*]] = llvm.insertvalue %[[T4]], %[[T5]][1] : !llvm.array<4 x array<3 x vec<2 x f32>>>
-// CHECK:       %[[T7:.*]] = llvm.insertvalue %[[T4]], %[[T6]][2] : !llvm.array<4 x array<3 x vec<2 x f32>>>
-// CHECK:       %[[T8:.*]] = llvm.insertvalue %[[T4]], %[[T7]][3] : !llvm.array<4 x array<3 x vec<2 x f32>>>
-// CHECK:       llvm.return %[[T8]] : !llvm.array<4 x array<3 x vec<2 x f32>>>
+// CHECK-SAME:  %[[A:.*]]: vector<2xf32>)
+// CHECK:       %[[T0:.*]] = llvm.mlir.constant(dense<0.000000e+00> : vector<3x2xf32>) : !llvm.array<3 x vector<2xf32>>
+// CHECK:       %[[T1:.*]] = llvm.mlir.constant(dense<0.000000e+00> : vector<4x3x2xf32>) : !llvm.array<4 x array<3 x vector<2xf32>>>
+// CHECK:       %[[T2:.*]] = llvm.insertvalue %[[A]], %[[T0]][0] : !llvm.array<3 x vector<2xf32>>
+// CHECK:       %[[T3:.*]] = llvm.insertvalue %[[A]], %[[T2]][1] : !llvm.array<3 x vector<2xf32>>
+// CHECK:       %[[T4:.*]] = llvm.insertvalue %[[A]], %[[T3]][2] : !llvm.array<3 x vector<2xf32>>
+// CHECK:       %[[T5:.*]] = llvm.insertvalue %[[T4]], %[[T1]][0] : !llvm.array<4 x array<3 x vector<2xf32>>>
+// CHECK:       %[[T6:.*]] = llvm.insertvalue %[[T4]], %[[T5]][1] : !llvm.array<4 x array<3 x vector<2xf32>>>
+// CHECK:       %[[T7:.*]] = llvm.insertvalue %[[T4]], %[[T6]][2] : !llvm.array<4 x array<3 x vector<2xf32>>>
+// CHECK:       %[[T8:.*]] = llvm.insertvalue %[[T4]], %[[T7]][3] : !llvm.array<4 x array<3 x vector<2xf32>>>
+// CHECK:       llvm.return %[[T8]] : !llvm.array<4 x array<3 x vector<2xf32>>>
 
 func @broadcast_vec3d_from_vec2d(%arg0: vector<3x2xf32>) -> vector<4x3x2xf32> {
   %0 = vector.broadcast %arg0 : vector<3x2xf32> to vector<4x3x2xf32>
   return %0 : vector<4x3x2xf32>
 }
 // CHECK-LABEL: llvm.func @broadcast_vec3d_from_vec2d(
-// CHECK-SAME:  %[[A:.*]]: !llvm.array<3 x vec<2 x f32>>)
-// CHECK:       %[[T0:.*]] = llvm.mlir.constant(dense<0.000000e+00> : vector<4x3x2xf32>) : !llvm.array<4 x array<3 x vec<2 x f32>>>
-// CHECK:       %[[T1:.*]] = llvm.insertvalue %[[A]], %[[T0]][0] : !llvm.array<4 x array<3 x vec<2 x f32>>>
-// CHECK:       %[[T2:.*]] = llvm.insertvalue %[[A]], %[[T1]][1] : !llvm.array<4 x array<3 x vec<2 x f32>>>
-// CHECK:       %[[T3:.*]] = llvm.insertvalue %[[A]], %[[T2]][2] : !llvm.array<4 x array<3 x vec<2 x f32>>>
-// CHECK:       %[[T4:.*]] = llvm.insertvalue %[[A]], %[[T3]][3] : !llvm.array<4 x array<3 x vec<2 x f32>>>
-// CHECK:       llvm.return %[[T4]] : !llvm.array<4 x array<3 x vec<2 x f32>>>
+// CHECK-SAME:  %[[A:.*]]: !llvm.array<3 x vector<2xf32>>)
+// CHECK:       %[[T0:.*]] = llvm.mlir.constant(dense<0.000000e+00> : vector<4x3x2xf32>) : !llvm.array<4 x array<3 x vector<2xf32>>>
+// CHECK:       %[[T1:.*]] = llvm.insertvalue %[[A]], %[[T0]][0] : !llvm.array<4 x array<3 x vector<2xf32>>>
+// CHECK:       %[[T2:.*]] = llvm.insertvalue %[[A]], %[[T1]][1] : !llvm.array<4 x array<3 x vector<2xf32>>>
+// CHECK:       %[[T3:.*]] = llvm.insertvalue %[[A]], %[[T2]][2] : !llvm.array<4 x array<3 x vector<2xf32>>>
+// CHECK:       %[[T4:.*]] = llvm.insertvalue %[[A]], %[[T3]][3] : !llvm.array<4 x array<3 x vector<2xf32>>>
+// CHECK:       llvm.return %[[T4]] : !llvm.array<4 x array<3 x vector<2xf32>>>
 
 func @broadcast_stretch(%arg0: vector<1xf32>) -> vector<4xf32> {
   %0 = vector.broadcast %arg0 : vector<1xf32> to vector<4xf32>
   return %0 : vector<4xf32>
 }
 // CHECK-LABEL: llvm.func @broadcast_stretch(
-// CHECK-SAME:  %[[A:.*]]: !llvm.vec<1 x f32>)
+// CHECK-SAME:  %[[A:.*]]: vector<1xf32>)
 // CHECK:       %[[T0:.*]] = llvm.mlir.constant(0 : i64) : i64
-// CHECK:       %[[T1:.*]] = llvm.extractelement %[[A]][%[[T0]] : i64] : !llvm.vec<1 x f32>
-// CHECK:       %[[T2:.*]] = llvm.mlir.undef : !llvm.vec<4 x f32>
+// CHECK:       %[[T1:.*]] = llvm.extractelement %[[A]][%[[T0]] : i64] : vector<1xf32>
+// CHECK:       %[[T2:.*]] = llvm.mlir.undef : vector<4xf32>
 // CHECK:       %[[T3:.*]] = llvm.mlir.constant(0 : i32) : i32
-// CHECK:       %[[T4:.*]] = llvm.insertelement %[[T1]], %[[T2]][%3 : i32] : !llvm.vec<4 x f32>
-// CHECK:       %[[T5:.*]] = llvm.shufflevector %[[T4]], %[[T2]] [0 : i32, 0 : i32, 0 : i32, 0 : i32] : !llvm.vec<4 x f32>, !llvm.vec<4 x f32>
-// CHECK:       llvm.return %[[T5]] : !llvm.vec<4 x f32>
+// CHECK:       %[[T4:.*]] = llvm.insertelement %[[T1]], %[[T2]][%3 : i32] : vector<4xf32>
+// CHECK:       %[[T5:.*]] = llvm.shufflevector %[[T4]], %[[T2]] [0 : i32, 0 : i32, 0 : i32, 0 : i32] : vector<4xf32>, vector<4xf32>
+// CHECK:       llvm.return %[[T5]] : vector<4xf32>
 
 func @broadcast_stretch_at_start(%arg0: vector<1x4xf32>) -> vector<3x4xf32> {
   %0 = vector.broadcast %arg0 : vector<1x4xf32> to vector<3x4xf32>
   return %0 : vector<3x4xf32>
 }
 // CHECK-LABEL: llvm.func @broadcast_stretch_at_start(
-// CHECK-SAME:  %[[A:.*]]: !llvm.array<1 x vec<4 x f32>>)
-// CHECK:       %[[T0:.*]] = llvm.mlir.constant(dense<0.000000e+00> : vector<3x4xf32>) : !llvm.array<3 x vec<4 x f32>>
-// CHECK:       %[[T1:.*]] = llvm.extractvalue %[[A]][0] : !llvm.array<1 x vec<4 x f32>>
-// CHECK:       %[[T2:.*]] = llvm.insertvalue %[[T1]], %[[T0]][0] : !llvm.array<3 x vec<4 x f32>>
-// CHECK:       %[[T3:.*]] = llvm.insertvalue %[[T1]], %[[T2]][1] : !llvm.array<3 x vec<4 x f32>>
-// CHECK:       %[[T4:.*]] = llvm.insertvalue %[[T1]], %[[T3]][2] : !llvm.array<3 x vec<4 x f32>>
-// CHECK:       llvm.return %[[T4]] : !llvm.array<3 x vec<4 x f32>>
+// CHECK-SAME:  %[[A:.*]]: !llvm.array<1 x vector<4xf32>>)
+// CHECK:       %[[T0:.*]] = llvm.mlir.constant(dense<0.000000e+00> : vector<3x4xf32>) : !llvm.array<3 x vector<4xf32>>
+// CHECK:       %[[T1:.*]] = llvm.extractvalue %[[A]][0] : !llvm.array<1 x vector<4xf32>>
+// CHECK:       %[[T2:.*]] = llvm.insertvalue %[[T1]], %[[T0]][0] : !llvm.array<3 x vector<4xf32>>
+// CHECK:       %[[T3:.*]] = llvm.insertvalue %[[T1]], %[[T2]][1] : !llvm.array<3 x vector<4xf32>>
+// CHECK:       %[[T4:.*]] = llvm.insertvalue %[[T1]], %[[T3]][2] : !llvm.array<3 x vector<4xf32>>
+// CHECK:       llvm.return %[[T4]] : !llvm.array<3 x vector<4xf32>>
 
 func @broadcast_stretch_at_end(%arg0: vector<4x1xf32>) -> vector<4x3xf32> {
   %0 = vector.broadcast %arg0 : vector<4x1xf32> to vector<4x3xf32>
   return %0 : vector<4x3xf32>
 }
 // CHECK-LABEL: llvm.func @broadcast_stretch_at_end(
-// CHECK-SAME:  %[[A:.*]]: !llvm.array<4 x vec<1 x f32>>)
-// CHECK:       %[[T0:.*]] = llvm.mlir.constant(dense<0.000000e+00> : vector<4x3xf32>) : !llvm.array<4 x vec<3 x f32>>
-// CHECK:       %[[T1:.*]] = llvm.extractvalue %[[A]][0] : !llvm.array<4 x vec<1 x f32>>
+// CHECK-SAME:  %[[A:.*]]: !llvm.array<4 x vector<1xf32>>)
+// CHECK:       %[[T0:.*]] = llvm.mlir.constant(dense<0.000000e+00> : vector<4x3xf32>) : !llvm.array<4 x vector<3xf32>>
+// CHECK:       %[[T1:.*]] = llvm.extractvalue %[[A]][0] : !llvm.array<4 x vector<1xf32>>
 // CHECK:       %[[T2:.*]] = llvm.mlir.constant(0 : i64) : i64
-// CHECK:       %[[T3:.*]] = llvm.extractelement %[[T1]][%[[T2]] : i64] : !llvm.vec<1 x f32>
-// CHECK:       %[[T4:.*]] = llvm.mlir.undef : !llvm.vec<3 x f32>
+// CHECK:       %[[T3:.*]] = llvm.extractelement %[[T1]][%[[T2]] : i64] : vector<1xf32>
+// CHECK:       %[[T4:.*]] = llvm.mlir.undef : vector<3xf32>
 // CHECK:       %[[T5:.*]] = llvm.mlir.constant(0 : i32) : i32
-// CHECK:       %[[T6:.*]] = llvm.insertelement %[[T3]], %[[T4]][%[[T5]] : i32] : !llvm.vec<3 x f32>
-// CHECK:       %[[T7:.*]] = llvm.shufflevector %[[T6]], %[[T4]] [0 : i32, 0 : i32, 0 : i32] : !llvm.vec<3 x f32>, !llvm.vec<3 x f32>
-// CHECK:       %[[T8:.*]] = llvm.insertvalue %[[T7]], %[[T0]][0] : !llvm.array<4 x vec<3 x f32>>
-// CHECK:       %[[T9:.*]] = llvm.extractvalue %[[A]][1] : !llvm.array<4 x vec<1 x f32>>
+// CHECK:       %[[T6:.*]] = llvm.insertelement %[[T3]], %[[T4]][%[[T5]] : i32] : vector<3xf32>
+// CHECK:       %[[T7:.*]] = llvm.shufflevector %[[T6]], %[[T4]] [0 : i32, 0 : i32, 0 : i32] : vector<3xf32>, vector<3xf32>
+// CHECK:       %[[T8:.*]] = llvm.insertvalue %[[T7]], %[[T0]][0] : !llvm.array<4 x vector<3xf32>>
+// CHECK:       %[[T9:.*]] = llvm.extractvalue %[[A]][1] : !llvm.array<4 x vector<1xf32>>
 // CHECK:       %[[T10:.*]] = llvm.mlir.constant(0 : i64) : i64
-// CHECK:       %[[T11:.*]] = llvm.extractelement %[[T9]][%[[T10]] : i64] : !llvm.vec<1 x f32>
-// CHECK:       %[[T12:.*]] = llvm.mlir.undef : !llvm.vec<3 x f32>
+// CHECK:       %[[T11:.*]] = llvm.extractelement %[[T9]][%[[T10]] : i64] : vector<1xf32>
+// CHECK:       %[[T12:.*]] = llvm.mlir.undef : vector<3xf32>
 // CHECK:       %[[T13:.*]] = llvm.mlir.constant(0 : i32) : i32
-// CHECK:       %[[T14:.*]] = llvm.insertelement %[[T11]], %[[T12]][%[[T13]] : i32] : !llvm.vec<3 x f32>
-// CHECK:       %[[T15:.*]] = llvm.shufflevector %[[T14]], %[[T12]] [0 : i32, 0 : i32, 0 : i32] : !llvm.vec<3 x f32>, !llvm.vec<3 x f32>
-// CHECK:       %[[T16:.*]] = llvm.insertvalue %[[T15]], %[[T8]][1] : !llvm.array<4 x vec<3 x f32>>
-// CHECK:       %[[T17:.*]] = llvm.extractvalue %[[A]][2] : !llvm.array<4 x vec<1 x f32>>
+// CHECK:       %[[T14:.*]] = llvm.insertelement %[[T11]], %[[T12]][%[[T13]] : i32] : vector<3xf32>
+// CHECK:       %[[T15:.*]] = llvm.shufflevector %[[T14]], %[[T12]] [0 : i32, 0 : i32, 0 : i32] : vector<3xf32>, vector<3xf32>
+// CHECK:       %[[T16:.*]] = llvm.insertvalue %[[T15]], %[[T8]][1] : !llvm.array<4 x vector<3xf32>>
+// CHECK:       %[[T17:.*]] = llvm.extractvalue %[[A]][2] : !llvm.array<4 x vector<1xf32>>
 // CHECK:       %[[T18:.*]] = llvm.mlir.constant(0 : i64) : i64
-// CHECK:       %[[T19:.*]] = llvm.extractelement %[[T17]][%[[T18]] : i64] : !llvm.vec<1 x f32>
-// CHECK:       %[[T20:.*]] = llvm.mlir.undef : !llvm.vec<3 x f32>
+// CHECK:       %[[T19:.*]] = llvm.extractelement %[[T17]][%[[T18]] : i64] : vector<1xf32>
+// CHECK:       %[[T20:.*]] = llvm.mlir.undef : vector<3xf32>
 // CHECK:       %[[T21:.*]] = llvm.mlir.constant(0 : i32) : i32
-// CHECK:       %[[T22:.*]] = llvm.insertelement %[[T19]], %[[T20]][%[[T21]] : i32] : !llvm.vec<3 x f32>
-// CHECK:       %[[T23:.*]] = llvm.shufflevector %[[T22]], %[[T20]] [0 : i32, 0 : i32, 0 : i32] : !llvm.vec<3 x f32>, !llvm.vec<3 x f32>
-// CHECK:       %[[T24:.*]] = llvm.insertvalue %[[T23]], %[[T16]][2] : !llvm.array<4 x vec<3 x f32>>
-// CHECK:       %[[T25:.*]] = llvm.extractvalue %[[A]][3] : !llvm.array<4 x vec<1 x f32>>
+// CHECK:       %[[T22:.*]] = llvm.insertelement %[[T19]], %[[T20]][%[[T21]] : i32] : vector<3xf32>
+// CHECK:       %[[T23:.*]] = llvm.shufflevector %[[T22]], %[[T20]] [0 : i32, 0 : i32, 0 : i32] : vector<3xf32>, vector<3xf32>
+// CHECK:       %[[T24:.*]] = llvm.insertvalue %[[T23]], %[[T16]][2] : !llvm.array<4 x vector<3xf32>>
+// CHECK:       %[[T25:.*]] = llvm.extractvalue %[[A]][3] : !llvm.array<4 x vector<1xf32>>
 // CHECK:       %[[T26:.*]] = llvm.mlir.constant(0 : i64) : i64
-// CHECK:       %[[T27:.*]] = llvm.extractelement %[[T25]][%[[T26]] : i64] : !llvm.vec<1 x f32>
-// CHECK:       %[[T28:.*]] = llvm.mlir.undef : !llvm.vec<3 x f32>
+// CHECK:       %[[T27:.*]] = llvm.extractelement %[[T25]][%[[T26]] : i64] : vector<1xf32>
+// CHECK:       %[[T28:.*]] = llvm.mlir.undef : vector<3xf32>
 // CHECK:       %[[T29:.*]] = llvm.mlir.constant(0 : i32) : i32
-// CHECK:       %[[T30:.*]] = llvm.insertelement %[[T27]], %[[T28]][%[[T29]] : i32] : !llvm.vec<3 x f32>
-// CHECK:       %[[T31:.*]] = llvm.shufflevector %[[T30]], %[[T28]] [0 : i32, 0 : i32, 0 : i32] : !llvm.vec<3 x f32>, !llvm.vec<3 x f32>
-// CHECK:       %[[T32:.*]] = llvm.insertvalue %[[T31]], %[[T24]][3] : !llvm.array<4 x vec<3 x f32>>
-// CHECK:       llvm.return %[[T32]] : !llvm.array<4 x vec<3 x f32>>
+// CHECK:       %[[T30:.*]] = llvm.insertelement %[[T27]], %[[T28]][%[[T29]] : i32] : vector<3xf32>
+// CHECK:       %[[T31:.*]] = llvm.shufflevector %[[T30]], %[[T28]] [0 : i32, 0 : i32, 0 : i32] : vector<3xf32>, vector<3xf32>
+// CHECK:       %[[T32:.*]] = llvm.insertvalue %[[T31]], %[[T24]][3] : !llvm.array<4 x vector<3xf32>>
+// CHECK:       llvm.return %[[T32]] : !llvm.array<4 x vector<3xf32>>
 
 func @broadcast_stretch_in_middle(%arg0: vector<4x1x2xf32>) -> vector<4x3x2xf32> {
   %0 = vector.broadcast %arg0 : vector<4x1x2xf32> to vector<4x3x2xf32>
   return %0 : vector<4x3x2xf32>
 }
 // CHECK-LABEL: llvm.func @broadcast_stretch_in_middle(
-// CHECK-SAME:  %[[A:.*]]: !llvm.array<4 x array<1 x vec<2 x f32>>>)
-// CHECK:       %[[T0:.*]] = llvm.mlir.constant(dense<0.000000e+00> : vector<4x3x2xf32>) : !llvm.array<4 x array<3 x vec<2 x f32>>>
-// CHECK:       %[[T1:.*]] = llvm.mlir.constant(dense<0.000000e+00> : vector<3x2xf32>) : !llvm.array<3 x vec<2 x f32>>
-// CHECK:       %[[T2:.*]] = llvm.extractvalue %[[A]][0, 0] : !llvm.array<4 x array<1 x vec<2 x f32>>>
-// CHECK:       %[[T4:.*]] = llvm.insertvalue %[[T2]], %[[T1]][0] : !llvm.array<3 x vec<2 x f32>>
-// CHECK:       %[[T5:.*]] = llvm.insertvalue %[[T2]], %[[T4]][1] : !llvm.array<3 x vec<2 x f32>>
-// CHECK:       %[[T6:.*]] = llvm.insertvalue %[[T2]], %[[T5]][2] : !llvm.array<3 x vec<2 x f32>>
-// CHECK:       %[[T7:.*]] = llvm.insertvalue %[[T6]], %[[T0]][0] : !llvm.array<4 x array<3 x vec<2 x f32>>>
-// CHECK:       %[[T8:.*]] = llvm.extractvalue %[[A]][1, 0] : !llvm.array<4 x array<1 x vec<2 x f32>>>
-// CHECK:       %[[T10:.*]] = llvm.insertvalue %[[T8]], %[[T1]][0] : !llvm.array<3 x vec<2 x f32>>
-// CHECK:       %[[T11:.*]] = llvm.insertvalue %[[T8]], %[[T10]][1] : !llvm.array<3 x vec<2 x f32>>
-// CHECK:       %[[T12:.*]] = llvm.insertvalue %[[T8]], %[[T11]][2] : !llvm.array<3 x vec<2 x f32>>
-// CHECK:       %[[T13:.*]] = llvm.insertvalue %[[T12]], %[[T7]][1] : !llvm.array<4 x array<3 x vec<2 x f32>>>
-// CHECK:       %[[T14:.*]] = llvm.extractvalue %[[A]][2, 0] : !llvm.array<4 x array<1 x vec<2 x f32>>>
-// CHECK:       %[[T16:.*]] = llvm.insertvalue %[[T14]], %[[T1]][0] : !llvm.array<3 x vec<2 x f32>>
-// CHECK:       %[[T17:.*]] = llvm.insertvalue %[[T14]], %[[T16]][1] : !llvm.array<3 x vec<2 x f32>>
-// CHECK:       %[[T18:.*]] = llvm.insertvalue %[[T14]], %[[T17]][2] : !llvm.array<3 x vec<2 x f32>>
-// CHECK:       %[[T19:.*]] = llvm.insertvalue %[[T18]], %[[T13]][2] : !llvm.array<4 x array<3 x vec<2 x f32>>>
-// CHECK:       %[[T20:.*]] = llvm.extractvalue %[[A]][3, 0] : !llvm.array<4 x array<1 x vec<2 x f32>>>
-// CHECK:       %[[T22:.*]] = llvm.insertvalue %[[T20]], %[[T1]][0] : !llvm.array<3 x vec<2 x f32>>
-// CHECK:       %[[T23:.*]] = llvm.insertvalue %[[T20]], %[[T22]][1] : !llvm.array<3 x vec<2 x f32>>
-// CHECK:       %[[T24:.*]] = llvm.insertvalue %[[T20]], %[[T23]][2] : !llvm.array<3 x vec<2 x f32>>
-// CHECK:       %[[T25:.*]] = llvm.insertvalue %[[T24]], %[[T19]][3] : !llvm.array<4 x array<3 x vec<2 x f32>>>
-// CHECK:       llvm.return %[[T25]] : !llvm.array<4 x array<3 x vec<2 x f32>>>
+// CHECK-SAME:  %[[A:.*]]: !llvm.array<4 x array<1 x vector<2xf32>>>)
+// CHECK:       %[[T0:.*]] = llvm.mlir.constant(dense<0.000000e+00> : vector<4x3x2xf32>) : !llvm.array<4 x array<3 x vector<2xf32>>>
+// CHECK:       %[[T1:.*]] = llvm.mlir.constant(dense<0.000000e+00> : vector<3x2xf32>) : !llvm.array<3 x vector<2xf32>>
+// CHECK:       %[[T2:.*]] = llvm.extractvalue %[[A]][0, 0] : !llvm.array<4 x array<1 x vector<2xf32>>>
+// CHECK:       %[[T4:.*]] = llvm.insertvalue %[[T2]], %[[T1]][0] : !llvm.array<3 x vector<2xf32>>
+// CHECK:       %[[T5:.*]] = llvm.insertvalue %[[T2]], %[[T4]][1] : !llvm.array<3 x vector<2xf32>>
+// CHECK:       %[[T6:.*]] = llvm.insertvalue %[[T2]], %[[T5]][2] : !llvm.array<3 x vector<2xf32>>
+// CHECK:       %[[T7:.*]] = llvm.insertvalue %[[T6]], %[[T0]][0] : !llvm.array<4 x array<3 x vector<2xf32>>>
+// CHECK:       %[[T8:.*]] = llvm.extractvalue %[[A]][1, 0] : !llvm.array<4 x array<1 x vector<2xf32>>>
+// CHECK:       %[[T10:.*]] = llvm.insertvalue %[[T8]], %[[T1]][0] : !llvm.array<3 x vector<2xf32>>
+// CHECK:       %[[T11:.*]] = llvm.insertvalue %[[T8]], %[[T10]][1] : !llvm.array<3 x vector<2xf32>>
+// CHECK:       %[[T12:.*]] = llvm.insertvalue %[[T8]], %[[T11]][2] : !llvm.array<3 x vector<2xf32>>
+// CHECK:       %[[T13:.*]] = llvm.insertvalue %[[T12]], %[[T7]][1] : !llvm.array<4 x array<3 x vector<2xf32>>>
+// CHECK:       %[[T14:.*]] = llvm.extractvalue %[[A]][2, 0] : !llvm.array<4 x array<1 x vector<2xf32>>>
+// CHECK:       %[[T16:.*]] = llvm.insertvalue %[[T14]], %[[T1]][0] : !llvm.array<3 x vector<2xf32>>
+// CHECK:       %[[T17:.*]] = llvm.insertvalue %[[T14]], %[[T16]][1] : !llvm.array<3 x vector<2xf32>>
+// CHECK:       %[[T18:.*]] = llvm.insertvalue %[[T14]], %[[T17]][2] : !llvm.array<3 x vector<2xf32>>
+// CHECK:       %[[T19:.*]] = llvm.insertvalue %[[T18]], %[[T13]][2] : !llvm.array<4 x array<3 x vector<2xf32>>>
+// CHECK:       %[[T20:.*]] = llvm.extractvalue %[[A]][3, 0] : !llvm.array<4 x array<1 x vector<2xf32>>>
+// CHECK:       %[[T22:.*]] = llvm.insertvalue %[[T20]], %[[T1]][0] : !llvm.array<3 x vector<2xf32>>
+// CHECK:       %[[T23:.*]] = llvm.insertvalue %[[T20]], %[[T22]][1] : !llvm.array<3 x vector<2xf32>>
+// CHECK:       %[[T24:.*]] = llvm.insertvalue %[[T20]], %[[T23]][2] : !llvm.array<3 x vector<2xf32>>
+// CHECK:       %[[T25:.*]] = llvm.insertvalue %[[T24]], %[[T19]][3] : !llvm.array<4 x array<3 x vector<2xf32>>>
+// CHECK:       llvm.return %[[T25]] : !llvm.array<4 x array<3 x vector<2xf32>>>
 
 func @outerproduct(%arg0: vector<2xf32>, %arg1: vector<3xf32>) -> vector<2x3xf32> {
   %2 = vector.outerproduct %arg0, %arg1 : vector<2xf32>, vector<3xf32>
   return %2 : vector<2x3xf32>
 }
 // CHECK-LABEL: llvm.func @outerproduct(
-// CHECK-SAME: %[[A:.*]]: !llvm.vec<2 x f32>,
-// CHECK-SAME: %[[B:.*]]: !llvm.vec<3 x f32>)
+// CHECK-SAME: %[[A:.*]]: vector<2xf32>,
+// CHECK-SAME: %[[B:.*]]: vector<3xf32>)
 //      CHECK: %[[T0:.*]] = llvm.mlir.constant(dense<0.000000e+00> : vector<2x3xf32>)
 //      CHECK: %[[T1:.*]] = llvm.mlir.constant(0 : i64) : i64
-//      CHECK: %[[T2:.*]] = llvm.extractelement %[[A]][%[[T1]] : i64] : !llvm.vec<2 x f32>
-//      CHECK: %[[T3:.*]] = llvm.mlir.undef : !llvm.vec<3 x f32>
+//      CHECK: %[[T2:.*]] = llvm.extractelement %[[A]][%[[T1]] : i64] : vector<2xf32>
+//      CHECK: %[[T3:.*]] = llvm.mlir.undef : vector<3xf32>
 //      CHECK: %[[T4:.*]] = llvm.mlir.constant(0 : i32) : i32
-//      CHECK: %[[T5:.*]] = llvm.insertelement %[[T2]], %[[T3]][%4 : i32] : !llvm.vec<3 x f32>
-//      CHECK: %[[T6:.*]] = llvm.shufflevector %[[T5]], %[[T3]] [0 : i32, 0 : i32, 0 : i32] : !llvm.vec<3 x f32>, !llvm.vec<3 x f32>
-//      CHECK: %[[T7:.*]] = llvm.fmul %[[T6]], %[[B]] : !llvm.vec<3 x f32>
-//      CHECK: %[[T8:.*]] = llvm.insertvalue %[[T7]], %[[T0]][0] : !llvm.array<2 x vec<3 x f32>>
+//      CHECK: %[[T5:.*]] = llvm.insertelement %[[T2]], %[[T3]][%4 : i32] : vector<3xf32>
+//      CHECK: %[[T6:.*]] = llvm.shufflevector %[[T5]], %[[T3]] [0 : i32, 0 : i32, 0 : i32] : vector<3xf32>, vector<3xf32>
+//      CHECK: %[[T7:.*]] = llvm.fmul %[[T6]], %[[B]] : vector<3xf32>
+//      CHECK: %[[T8:.*]] = llvm.insertvalue %[[T7]], %[[T0]][0] : !llvm.array<2 x vector<3xf32>>
 //      CHECK: %[[T9:.*]] = llvm.mlir.constant(1 : i64) : i64
-//      CHECK: %[[T10:.*]] = llvm.extractelement %[[A]][%9 : i64] : !llvm.vec<2 x f32>
-//      CHECK: %[[T11:.*]] = llvm.mlir.undef : !llvm.vec<3 x f32>
+//      CHECK: %[[T10:.*]] = llvm.extractelement %[[A]][%9 : i64] : vector<2xf32>
+//      CHECK: %[[T11:.*]] = llvm.mlir.undef : vector<3xf32>
 //      CHECK: %[[T12:.*]] = llvm.mlir.constant(0 : i32) : i32
-//      CHECK: %[[T13:.*]] = llvm.insertelement %[[T10]], %[[T11]][%12 : i32] : !llvm.vec<3 x f32>
-//      CHECK: %[[T14:.*]] = llvm.shufflevector %[[T13]], %[[T11]] [0 : i32, 0 : i32, 0 : i32] : !llvm.vec<3 x f32>, !llvm.vec<3 x f32>
-//      CHECK: %[[T15:.*]] = llvm.fmul %[[T14]], %[[B]] : !llvm.vec<3 x f32>
-//      CHECK: %[[T16:.*]] = llvm.insertvalue %[[T15]], %[[T8]][1] : !llvm.array<2 x vec<3 x f32>>
-//      CHECK: llvm.return %[[T16]] : !llvm.array<2 x vec<3 x f32>>
+//      CHECK: %[[T13:.*]] = llvm.insertelement %[[T10]], %[[T11]][%12 : i32] : vector<3xf32>
+//      CHECK: %[[T14:.*]] = llvm.shufflevector %[[T13]], %[[T11]] [0 : i32, 0 : i32, 0 : i32] : vector<3xf32>, vector<3xf32>
+//      CHECK: %[[T15:.*]] = llvm.fmul %[[T14]], %[[B]] : vector<3xf32>
+//      CHECK: %[[T16:.*]] = llvm.insertvalue %[[T15]], %[[T8]][1] : !llvm.array<2 x vector<3xf32>>
+//      CHECK: llvm.return %[[T16]] : !llvm.array<2 x vector<3xf32>>
 
 func @outerproduct_add(%arg0: vector<2xf32>, %arg1: vector<3xf32>, %arg2: vector<2x3xf32>) -> vector<2x3xf32> {
   %2 = vector.outerproduct %arg0, %arg1, %arg2 : vector<2xf32>, vector<3xf32>
   return %2 : vector<2x3xf32>
 }
 // CHECK-LABEL: llvm.func @outerproduct_add(
-// CHECK-SAME: %[[A:.*]]: !llvm.vec<2 x f32>,
-// CHECK-SAME: %[[B:.*]]: !llvm.vec<3 x f32>,
-// CHECK-SAME: %[[C:.*]]: !llvm.array<2 x vec<3 x f32>>)
+// CHECK-SAME: %[[A:.*]]: vector<2xf32>,
+// CHECK-SAME: %[[B:.*]]: vector<3xf32>,
+// CHECK-SAME: %[[C:.*]]: !llvm.array<2 x vector<3xf32>>)
 //      CHECK: %[[T0:.*]] = llvm.mlir.constant(dense<0.000000e+00> : vector<2x3xf32>)
 //      CHECK: %[[T1:.*]] = llvm.mlir.constant(0 : i64) : i64
-//      CHECK: %[[T2:.*]] = llvm.extractelement %[[A]][%[[T1]] : i64] : !llvm.vec<2 x f32>
-//      CHECK: %[[T3:.*]] = llvm.mlir.undef : !llvm.vec<3 x f32>
+//      CHECK: %[[T2:.*]] = llvm.extractelement %[[A]][%[[T1]] : i64] : vector<2xf32>
+//      CHECK: %[[T3:.*]] = llvm.mlir.undef : vector<3xf32>
 //      CHECK: %[[T4:.*]] = llvm.mlir.constant(0 : i32) : i32
-//      CHECK: %[[T5:.*]] = llvm.insertelement %[[T2]], %[[T3]][%[[T4]] : i32] : !llvm.vec<3 x f32>
-//      CHECK: %[[T6:.*]] = llvm.shufflevector %[[T5]], %[[T3]] [0 : i32, 0 : i32, 0 : i32] : !llvm.vec<3 x f32>, !llvm.vec<3 x f32>
-//      CHECK: %[[T7:.*]] = llvm.extractvalue %[[C]][0] : !llvm.array<2 x vec<3 x f32>>
-//      CHECK: %[[T8:.*]] = "llvm.intr.fmuladd"(%[[T6]], %[[B]], %[[T7]]) : (!llvm.vec<3 x f32>, !llvm.vec<3 x f32>, !llvm.vec<3 x f32>)
-//      CHECK: %[[T9:.*]] = llvm.insertvalue %[[T8]], %[[T0]][0] : !llvm.array<2 x vec<3 x f32>>
+//      CHECK: %[[T5:.*]] = llvm.insertelement %[[T2]], %[[T3]][%[[T4]] : i32] : vector<3xf32>
+//      CHECK: %[[T6:.*]] = llvm.shufflevector %[[T5]], %[[T3]] [0 : i32, 0 : i32, 0 : i32] : vector<3xf32>, vector<3xf32>
+//      CHECK: %[[T7:.*]] = llvm.extractvalue %[[C]][0] : !llvm.array<2 x vector<3xf32>>
+//      CHECK: %[[T8:.*]] = "llvm.intr.fmuladd"(%[[T6]], %[[B]], %[[T7]]) : (vector<3xf32>, vector<3xf32>, vector<3xf32>)
+//      CHECK: %[[T9:.*]] = llvm.insertvalue %[[T8]], %[[T0]][0] : !llvm.array<2 x vector<3xf32>>
 //      CHECK: %[[T10:.*]] = llvm.mlir.constant(1 : i64) : i64
-//      CHECK: %[[T11:.*]] = llvm.extractelement %[[A]][%[[T10]] : i64] : !llvm.vec<2 x f32>
-//      CHECK: %[[T12:.*]] = llvm.mlir.undef : !llvm.vec<3 x f32>
+//      CHECK: %[[T11:.*]] = llvm.extractelement %[[A]][%[[T10]] : i64] : vector<2xf32>
+//      CHECK: %[[T12:.*]] = llvm.mlir.undef : vector<3xf32>
 //      CHECK: %[[T13:.*]] = llvm.mlir.constant(0 : i32) : i32
-//      CHECK: %[[T14:.*]] = llvm.insertelement %[[T11]], %[[T12]][%[[T13]] : i32] : !llvm.vec<3 x f32>
-//      CHECK: %[[T15:.*]] = llvm.shufflevector %[[T14]], %[[T12]] [0 : i32, 0 : i32, 0 : i32] : !llvm.vec<3 x f32>, !llvm.vec<3 x f32>
-//      CHECK: %[[T16:.*]] = llvm.extractvalue %[[C]][1] : !llvm.array<2 x vec<3 x f32>>
-//      CHECK: %[[T17:.*]] = "llvm.intr.fmuladd"(%[[T15]], %[[B]], %[[T16]]) : (!llvm.vec<3 x f32>, !llvm.vec<3 x f32>, !llvm.vec<3 x f32>)
-//      CHECK: %[[T18:.*]] = llvm.insertvalue %[[T17]], %[[T9]][1] : !llvm.array<2 x vec<3 x f32>>
-//      CHECK: llvm.return %[[T18]] : !llvm.array<2 x vec<3 x f32>>
+//      CHECK: %[[T14:.*]] = llvm.insertelement %[[T11]], %[[T12]][%[[T13]] : i32] : vector<3xf32>
+//      CHECK: %[[T15:.*]] = llvm.shufflevector %[[T14]], %[[T12]] [0 : i32, 0 : i32, 0 : i32] : vector<3xf32>, vector<3xf32>
+//      CHECK: %[[T16:.*]] = llvm.extractvalue %[[C]][1] : !llvm.array<2 x vector<3xf32>>
+//      CHECK: %[[T17:.*]] = "llvm.intr.fmuladd"(%[[T15]], %[[B]], %[[T16]]) : (vector<3xf32>, vector<3xf32>, vector<3xf32>)
+//      CHECK: %[[T18:.*]] = llvm.insertvalue %[[T17]], %[[T9]][1] : !llvm.array<2 x vector<3xf32>>
+//      CHECK: llvm.return %[[T18]] : !llvm.array<2 x vector<3xf32>>
 
 func @shuffle_1D_direct(%arg0: vector<2xf32>, %arg1: vector<2xf32>) -> vector<2xf32> {
   %1 = vector.shuffle %arg0, %arg1 [0, 1] : vector<2xf32>, vector<2xf32>
   return %1 : vector<2xf32>
 }
 // CHECK-LABEL: llvm.func @shuffle_1D_direct(
-// CHECK-SAME: %[[A:.*]]: !llvm.vec<2 x f32>,
-// CHECK-SAME: %[[B:.*]]: !llvm.vec<2 x f32>)
-//       CHECK:   %[[s:.*]] = llvm.shufflevector %[[A]], %[[B]] [0, 1] : !llvm.vec<2 x f32>, !llvm.vec<2 x f32>
-//       CHECK:   llvm.return %[[s]] : !llvm.vec<2 x f32>
+// CHECK-SAME: %[[A:.*]]: vector<2xf32>,
+// CHECK-SAME: %[[B:.*]]: vector<2xf32>)
+//       CHECK:   %[[s:.*]] = llvm.shufflevector %[[A]], %[[B]] [0, 1] : vector<2xf32>, vector<2xf32>
+//       CHECK:   llvm.return %[[s]] : vector<2xf32>
 
 func @shuffle_1D(%arg0: vector<2xf32>, %arg1: vector<3xf32>) -> vector<5xf32> {
   %1 = vector.shuffle %arg0, %arg1 [4, 3, 2, 1, 0] : vector<2xf32>, vector<3xf32>
   return %1 : vector<5xf32>
 }
 // CHECK-LABEL: llvm.func @shuffle_1D(
-// CHECK-SAME: %[[A:.*]]: !llvm.vec<2 x f32>,
-// CHECK-SAME: %[[B:.*]]: !llvm.vec<3 x f32>)
-//       CHECK:   %[[u0:.*]] = llvm.mlir.undef : !llvm.vec<5 x f32>
+// CHECK-SAME: %[[A:.*]]: vector<2xf32>,
+// CHECK-SAME: %[[B:.*]]: vector<3xf32>)
+//       CHECK:   %[[u0:.*]] = llvm.mlir.undef : vector<5xf32>
 //       CHECK:   %[[c2:.*]] = llvm.mlir.constant(2 : index) : i64
-//       CHECK:   %[[e1:.*]] = llvm.extractelement %[[B]][%[[c2]] : i64] : !llvm.vec<3 x f32>
+//       CHECK:   %[[e1:.*]] = llvm.extractelement %[[B]][%[[c2]] : i64] : vector<3xf32>
 //       CHECK:   %[[c0:.*]] = llvm.mlir.constant(0 : index) : i64
-//       CHECK:   %[[i1:.*]] = llvm.insertelement %[[e1]], %[[u0]][%[[c0]] : i64] : !llvm.vec<5 x f32>
+//       CHECK:   %[[i1:.*]] = llvm.insertelement %[[e1]], %[[u0]][%[[c0]] : i64] : vector<5xf32>
 //       CHECK:   %[[c1:.*]] = llvm.mlir.constant(1 : index) : i64
-//       CHECK:   %[[e2:.*]] = llvm.extractelement %[[B]][%[[c1]] : i64] : !llvm.vec<3 x f32>
+//       CHECK:   %[[e2:.*]] = llvm.extractelement %[[B]][%[[c1]] : i64] : vector<3xf32>
 //       CHECK:   %[[c1:.*]] = llvm.mlir.constant(1 : index) : i64
-//       CHECK:   %[[i2:.*]] = llvm.insertelement %[[e2]], %[[i1]][%[[c1]] : i64] : !llvm.vec<5 x f32>
+//       CHECK:   %[[i2:.*]] = llvm.insertelement %[[e2]], %[[i1]][%[[c1]] : i64] : vector<5xf32>
 //       CHECK:   %[[c0:.*]] = llvm.mlir.constant(0 : index) : i64
-//       CHECK:   %[[e3:.*]] = llvm.extractelement %[[B]][%[[c0]] : i64] : !llvm.vec<3 x f32>
+//       CHECK:   %[[e3:.*]] = llvm.extractelement %[[B]][%[[c0]] : i64] : vector<3xf32>
 //       CHECK:   %[[c2:.*]] = llvm.mlir.constant(2 : index) : i64
-//       CHECK:   %[[i3:.*]] = llvm.insertelement %[[e3]], %[[i2]][%[[c2]] : i64] : !llvm.vec<5 x f32>
+//       CHECK:   %[[i3:.*]] = llvm.insertelement %[[e3]], %[[i2]][%[[c2]] : i64] : vector<5xf32>
 //       CHECK:   %[[c1:.*]] = llvm.mlir.constant(1 : index) : i64
-//       CHECK:   %[[e4:.*]] = llvm.extractelement %[[A]][%[[c1]] : i64] : !llvm.vec<2 x f32>
+//       CHECK:   %[[e4:.*]] = llvm.extractelement %[[A]][%[[c1]] : i64] : vector<2xf32>
 //       CHECK:   %[[c3:.*]] = llvm.mlir.constant(3 : index) : i64
-//       CHECK:   %[[i4:.*]] = llvm.insertelement %[[e4]], %[[i3]][%[[c3]] : i64] : !llvm.vec<5 x f32>
+//       CHECK:   %[[i4:.*]] = llvm.insertelement %[[e4]], %[[i3]][%[[c3]] : i64] : vector<5xf32>
 //       CHECK:   %[[c0:.*]] = llvm.mlir.constant(0 : index) : i64
-//       CHECK:   %[[e5:.*]] = llvm.extractelement %[[A]][%[[c0]] : i64] : !llvm.vec<2 x f32>
+//       CHECK:   %[[e5:.*]] = llvm.extractelement %[[A]][%[[c0]] : i64] : vector<2xf32>
 //       CHECK:   %[[c4:.*]] = llvm.mlir.constant(4 : index) : i64
-//       CHECK:   %[[i5:.*]] = llvm.insertelement %[[e5]], %[[i4]][%[[c4]] : i64] : !llvm.vec<5 x f32>
-//       CHECK:   llvm.return %[[i5]] : !llvm.vec<5 x f32>
+//       CHECK:   %[[i5:.*]] = llvm.insertelement %[[e5]], %[[i4]][%[[c4]] : i64] : vector<5xf32>
+//       CHECK:   llvm.return %[[i5]] : vector<5xf32>
 
 func @shuffle_2D(%a: vector<1x4xf32>, %b: vector<2x4xf32>) -> vector<3x4xf32> {
   %1 = vector.shuffle %a, %b[1, 0, 2] : vector<1x4xf32>, vector<2x4xf32>
   return %1 : vector<3x4xf32>
 }
 // CHECK-LABEL: llvm.func @shuffle_2D(
-// CHECK-SAME: %[[A:.*]]: !llvm.array<1 x vec<4 x f32>>,
-// CHECK-SAME: %[[B:.*]]: !llvm.array<2 x vec<4 x f32>>)
-//       CHECK:   %[[u0:.*]] = llvm.mlir.undef : !llvm.array<3 x vec<4 x f32>>
-//       CHECK:   %[[e1:.*]] = llvm.extractvalue %[[B]][0] : !llvm.array<2 x vec<4 x f32>>
-//       CHECK:   %[[i1:.*]] = llvm.insertvalue %[[e1]], %[[u0]][0] : !llvm.array<3 x vec<4 x f32>>
-//       CHECK:   %[[e2:.*]] = llvm.extractvalue %[[A]][0] : !llvm.array<1 x vec<4 x f32>>
-//       CHECK:   %[[i2:.*]] = llvm.insertvalue %[[e2]], %[[i1]][1] : !llvm.array<3 x vec<4 x f32>>
-//       CHECK:   %[[e3:.*]] = llvm.extractvalue %[[B]][1] : !llvm.array<2 x vec<4 x f32>>
-//       CHECK:   %[[i3:.*]] = llvm.insertvalue %[[e3]], %[[i2]][2] : !llvm.array<3 x vec<4 x f32>>
-//       CHECK:   llvm.return %[[i3]] : !llvm.array<3 x vec<4 x f32>>
+// CHECK-SAME: %[[A:.*]]: !llvm.array<1 x vector<4xf32>>,
+// CHECK-SAME: %[[B:.*]]: !llvm.array<2 x vector<4xf32>>)
+//       CHECK:   %[[u0:.*]] = llvm.mlir.undef : !llvm.array<3 x vector<4xf32>>
+//       CHECK:   %[[e1:.*]] = llvm.extractvalue %[[B]][0] : !llvm.array<2 x vector<4xf32>>
+//       CHECK:   %[[i1:.*]] = llvm.insertvalue %[[e1]], %[[u0]][0] : !llvm.array<3 x vector<4xf32>>
+//       CHECK:   %[[e2:.*]] = llvm.extractvalue %[[A]][0] : !llvm.array<1 x vector<4xf32>>
+//       CHECK:   %[[i2:.*]] = llvm.insertvalue %[[e2]], %[[i1]][1] : !llvm.array<3 x vector<4xf32>>
+//       CHECK:   %[[e3:.*]] = llvm.extractvalue %[[B]][1] : !llvm.array<2 x vector<4xf32>>
+//       CHECK:   %[[i3:.*]] = llvm.insertvalue %[[e3]], %[[i2]][2] : !llvm.array<3 x vector<4xf32>>
+//       CHECK:   llvm.return %[[i3]] : !llvm.array<3 x vector<4xf32>>
 
 func @extract_element(%arg0: vector<16xf32>) -> f32 {
   %0 = constant 15 : i32
@@ -311,9 +311,9 @@ func @extract_element(%arg0: vector<16xf32>) -> f32 {
   return %1 : f32
 }
 // CHECK-LABEL: llvm.func @extract_element(
-// CHECK-SAME: %[[A:.*]]: !llvm.vec<16 x f32>)
+// CHECK-SAME: %[[A:.*]]: vector<16xf32>)
 //       CHECK:   %[[c:.*]] = llvm.mlir.constant(15 : i32) : i32
-//       CHECK:   %[[x:.*]] = llvm.extractelement %[[A]][%[[c]] : i32] : !llvm.vec<16 x f32>
+//       CHECK:   %[[x:.*]] = llvm.extractelement %[[A]][%[[c]] : i32] : vector<16xf32>
 //       CHECK:   llvm.return %[[x]] : f32
 
 func @extract_element_from_vec_1d(%arg0: vector<16xf32>) -> f32 {
@@ -322,7 +322,7 @@ func @extract_element_from_vec_1d(%arg0: vector<16xf32>) -> f32 {
 }
 // CHECK-LABEL: llvm.func @extract_element_from_vec_1d
 //       CHECK:   llvm.mlir.constant(15 : i64) : i64
-//       CHECK:   llvm.extractelement {{.*}}[{{.*}} : i64] : !llvm.vec<16 x f32>
+//       CHECK:   llvm.extractelement {{.*}}[{{.*}} : i64] : vector<16xf32>
 //       CHECK:   llvm.return {{.*}} : f32
 
 func @extract_vec_2d_from_vec_3d(%arg0: vector<4x3x16xf32>) -> vector<3x16xf32> {
@@ -330,25 +330,25 @@ func @extract_vec_2d_from_vec_3d(%arg0: vector<4x3x16xf32>) -> vector<3x16xf32>
   return %0 : vector<3x16xf32>
 }
 // CHECK-LABEL: llvm.func @extract_vec_2d_from_vec_3d
-//       CHECK:   llvm.extractvalue {{.*}}[0] : !llvm.array<4 x array<3 x vec<16 x f32>>>
-//       CHECK:   llvm.return {{.*}} : !llvm.array<3 x vec<16 x f32>>
+//       CHECK:   llvm.extractvalue {{.*}}[0] : !llvm.array<4 x array<3 x vector<16xf32>>>
+//       CHECK:   llvm.return {{.*}} : !llvm.array<3 x vector<16xf32>>
 
 func @extract_vec_1d_from_vec_3d(%arg0: vector<4x3x16xf32>) -> vector<16xf32> {
   %0 = vector.extract %arg0[0, 0]: vector<4x3x16xf32>
   return %0 : vector<16xf32>
 }
 // CHECK-LABEL: llvm.func @extract_vec_1d_from_vec_3d
-//       CHECK:   llvm.extractvalue {{.*}}[0, 0] : !llvm.array<4 x array<3 x vec<16 x f32>>>
-//       CHECK:   llvm.return {{.*}} : !llvm.vec<16 x f32>
+//       CHECK:   llvm.extractvalue {{.*}}[0, 0] : !llvm.array<4 x array<3 x vector<16xf32>>>
+//       CHECK:   llvm.return {{.*}} : vector<16xf32>
 
 func @extract_element_from_vec_3d(%arg0: vector<4x3x16xf32>) -> f32 {
   %0 = vector.extract %arg0[0, 0, 0]: vector<4x3x16xf32>
   return %0 : f32
 }
 // CHECK-LABEL: llvm.func @extract_element_from_vec_3d
-//       CHECK:   llvm.extractvalue {{.*}}[0, 0] : !llvm.array<4 x array<3 x vec<16 x f32>>>
+//       CHECK:   llvm.extractvalue {{.*}}[0, 0] : !llvm.array<4 x array<3 x vector<16xf32>>>
 //       CHECK:   llvm.mlir.constant(0 : i64) : i64
-//       CHECK:   llvm.extractelement {{.*}}[{{.*}} : i64] : !llvm.vec<16 x f32>
+//       CHECK:   llvm.extractelement {{.*}}[{{.*}} : i64] : vector<16xf32>
 //       CHECK:   llvm.return {{.*}} : f32
 
 func @insert_element(%arg0: f32, %arg1: vector<4xf32>) -> vector<4xf32> {
@@ -358,10 +358,10 @@ func @insert_element(%arg0: f32, %arg1: vector<4xf32>) -> vector<4xf32> {
 }
 // CHECK-LABEL: llvm.func @insert_element(
 // CHECK-SAME: %[[A:.*]]: f32,
-// CHECK-SAME: %[[B:.*]]: !llvm.vec<4 x f32>)
+// CHECK-SAME: %[[B:.*]]: vector<4xf32>)
 //       CHECK:   %[[c:.*]] = llvm.mlir.constant(3 : i32) : i32
-//       CHECK:   %[[x:.*]] = llvm.insertelement %[[A]], %[[B]][%[[c]] : i32] : !llvm.vec<4 x f32>
-//       CHECK:   llvm.return %[[x]] : !llvm.vec<4 x f32>
+//       CHECK:   %[[x:.*]] = llvm.insertelement %[[A]], %[[B]][%[[c]] : i32] : vector<4xf32>
+//       CHECK:   llvm.return %[[x]] : vector<4xf32>
 
 func @insert_element_into_vec_1d(%arg0: f32, %arg1: vector<4xf32>) -> vector<4xf32> {
   %0 = vector.insert %arg0, %arg1[3] : f32 into vector<4xf32>
@@ -369,65 +369,65 @@ func @insert_element_into_vec_1d(%arg0: f32, %arg1: vector<4xf32>) -> vector<4xf
 }
 // CHECK-LABEL: llvm.func @insert_element_into_vec_1d
 //       CHECK:   llvm.mlir.constant(3 : i64) : i64
-//       CHECK:   llvm.insertelement {{.*}}, {{.*}}[{{.*}} : i64] : !llvm.vec<4 x f32>
-//       CHECK:   llvm.return {{.*}} : !llvm.vec<4 x f32>
+//       CHECK:   llvm.insertelement {{.*}}, {{.*}}[{{.*}} : i64] : vector<4xf32>
+//       CHECK:   llvm.return {{.*}} : vector<4xf32>
 
 func @insert_vec_2d_into_vec_3d(%arg0: vector<8x16xf32>, %arg1: vector<4x8x16xf32>) -> vector<4x8x16xf32> {
   %0 = vector.insert %arg0, %arg1[3] : vector<8x16xf32> into vector<4x8x16xf32>
   return %0 : vector<4x8x16xf32>
 }
 // CHECK-LABEL: llvm.func @insert_vec_2d_into_vec_3d
-//       CHECK:   llvm.insertvalue {{.*}}, {{.*}}[3] : !llvm.array<4 x array<8 x vec<16 x f32>>>
-//       CHECK:   llvm.return {{.*}} : !llvm.array<4 x array<8 x vec<16 x f32>>>
+//       CHECK:   llvm.insertvalue {{.*}}, {{.*}}[3] : !llvm.array<4 x array<8 x vector<16xf32>>>
+//       CHECK:   llvm.return {{.*}} : !llvm.array<4 x array<8 x vector<16xf32>>>
 
 func @insert_vec_1d_into_vec_3d(%arg0: vector<16xf32>, %arg1: vector<4x8x16xf32>) -> vector<4x8x16xf32> {
   %0 = vector.insert %arg0, %arg1[3, 7] : vector<16xf32> into vector<4x8x16xf32>
   return %0 : vector<4x8x16xf32>
 }
 // CHECK-LABEL: llvm.func @insert_vec_1d_into_vec_3d
-//       CHECK:   llvm.insertvalue {{.*}}, {{.*}}[3, 7] : !llvm.array<4 x array<8 x vec<16 x f32>>>
-//       CHECK:   llvm.return {{.*}} : !llvm.array<4 x array<8 x vec<16 x f32>>>
+//       CHECK:   llvm.insertvalue {{.*}}, {{.*}}[3, 7] : !llvm.array<4 x array<8 x vector<16xf32>>>
+//       CHECK:   llvm.return {{.*}} : !llvm.array<4 x array<8 x vector<16xf32>>>
 
 func @insert_element_into_vec_3d(%arg0: f32, %arg1: vector<4x8x16xf32>) -> vector<4x8x16xf32> {
   %0 = vector.insert %arg0, %arg1[3, 7, 15] : f32 into vector<4x8x16xf32>
   return %0 : vector<4x8x16xf32>
 }
 // CHECK-LABEL: llvm.func @insert_element_into_vec_3d
-//       CHECK:   llvm.extractvalue {{.*}}[3, 7] : !llvm.array<4 x array<8 x vec<16 x f32>>>
+//       CHECK:   llvm.extractvalue {{.*}}[3, 7] : !llvm.array<4 x array<8 x vector<16xf32>>>
 //       CHECK:   llvm.mlir.constant(15 : i64) : i64
-//       CHECK:   llvm.insertelement {{.*}}, {{.*}}[{{.*}} : i64] : !llvm.vec<16 x f32>
-//       CHECK:   llvm.insertvalue {{.*}}, {{.*}}[3, 7] : !llvm.array<4 x array<8 x vec<16 x f32>>>
-//       CHECK:   llvm.return {{.*}} : !llvm.array<4 x array<8 x vec<16 x f32>>>
+//       CHECK:   llvm.insertelement {{.*}}, {{.*}}[{{.*}} : i64] : vector<16xf32>
+//       CHECK:   llvm.insertvalue {{.*}}, {{.*}}[3, 7] : !llvm.array<4 x array<8 x vector<16xf32>>>
+//       CHECK:   llvm.return {{.*}} : !llvm.array<4 x array<8 x vector<16xf32>>>
 
 func @vector_type_cast(%arg0: memref<8x8x8xf32>) -> memref<vector<8x8x8xf32>> {
   %0 = vector.type_cast %arg0: memref<8x8x8xf32> to memref<vector<8x8x8xf32>>
   return %0 : memref<vector<8x8x8xf32>>
 }
 // CHECK-LABEL: llvm.func @vector_type_cast
-//       CHECK:   llvm.mlir.undef : !llvm.struct<(ptr<array<8 x array<8 x vec<8 x f32>>>>, ptr<array<8 x array<8 x vec<8 x f32>>>>, i64)>
+//       CHECK:   llvm.mlir.undef : !llvm.struct<(ptr<array<8 x array<8 x vector<8xf32>>>>, ptr<array<8 x array<8 x vector<8xf32>>>>, i64)>
 //       CHECK:   %[[allocated:.*]] = llvm.extractvalue {{.*}}[0] : !llvm.struct<(ptr<f32>, ptr<f32>, i64, array<3 x i64>, array<3 x i64>)>
-//       CHECK:   %[[allocatedBit:.*]] = llvm.bitcast %[[allocated]] : !llvm.ptr<f32> to !llvm.ptr<array<8 x array<8 x vec<8 x f32>>>>
-//       CHECK:   llvm.insertvalue %[[allocatedBit]], {{.*}}[0] : !llvm.struct<(ptr<array<8 x array<8 x vec<8 x f32>>>>, ptr<array<8 x array<8 x vec<8 x f32>>>>, i64)>
+//       CHECK:   %[[allocatedBit:.*]] = llvm.bitcast %[[allocated]] : !llvm.ptr<f32> to !llvm.ptr<array<8 x array<8 x vector<8xf32>>>>
+//       CHECK:   llvm.insertvalue %[[allocatedBit]], {{.*}}[0] : !llvm.struct<(ptr<array<8 x array<8 x vector<8xf32>>>>, ptr<array<8 x array<8 x vector<8xf32>>>>, i64)>
 //       CHECK:   %[[aligned:.*]] = llvm.extractvalue {{.*}}[1] : !llvm.struct<(ptr<f32>, ptr<f32>, i64, array<3 x i64>, array<3 x i64>)>
-//       CHECK:   %[[alignedBit:.*]] = llvm.bitcast %[[aligned]] : !llvm.ptr<f32> to !llvm.ptr<array<8 x array<8 x vec<8 x f32>>>>
-//       CHECK:   llvm.insertvalue %[[alignedBit]], {{.*}}[1] : !llvm.struct<(ptr<array<8 x array<8 x vec<8 x f32>>>>, ptr<array<8 x array<8 x vec<8 x f32>>>>, i64)>
+//       CHECK:   %[[alignedBit:.*]] = llvm.bitcast %[[aligned]] : !llvm.ptr<f32> to !llvm.ptr<array<8 x array<8 x vector<8xf32>>>>
+//       CHECK:   llvm.insertvalue %[[alignedBit]], {{.*}}[1] : !llvm.struct<(ptr<array<8 x array<8 x vector<8xf32>>>>, ptr<array<8 x array<8 x vector<8xf32>>>>, i64)>
 //       CHECK:   llvm.mlir.constant(0 : index
-//       CHECK:   llvm.insertvalue {{.*}}[2] : !llvm.struct<(ptr<array<8 x array<8 x vec<8 x f32>>>>, ptr<array<8 x array<8 x vec<8 x f32>>>>, i64)>
+//       CHECK:   llvm.insertvalue {{.*}}[2] : !llvm.struct<(ptr<array<8 x array<8 x vector<8xf32>>>>, ptr<array<8 x array<8 x vector<8xf32>>>>, i64)>
 
 func @vector_type_cast_non_zero_addrspace(%arg0: memref<8x8x8xf32, 3>) -> memref<vector<8x8x8xf32>, 3> {
   %0 = vector.type_cast %arg0: memref<8x8x8xf32, 3> to memref<vector<8x8x8xf32>, 3>
   return %0 : memref<vector<8x8x8xf32>, 3>
 }
 // CHECK-LABEL: llvm.func @vector_type_cast_non_zero_addrspace
-//       CHECK:   llvm.mlir.undef : !llvm.struct<(ptr<array<8 x array<8 x vec<8 x f32>>>, 3>, ptr<array<8 x array<8 x vec<8 x f32>>>, 3>, i64)>
+//       CHECK:   llvm.mlir.undef : !llvm.struct<(ptr<array<8 x array<8 x vector<8xf32>>>, 3>, ptr<array<8 x array<8 x vector<8xf32>>>, 3>, i64)>
 //       CHECK:   %[[allocated:.*]] = llvm.extractvalue {{.*}}[0] : !llvm.struct<(ptr<f32, 3>, ptr<f32, 3>, i64, array<3 x i64>, array<3 x i64>)>
-//       CHECK:   %[[allocatedBit:.*]] = llvm.bitcast %[[allocated]] : !llvm.ptr<f32, 3> to !llvm.ptr<array<8 x array<8 x vec<8 x f32>>>, 3>
-//       CHECK:   llvm.insertvalue %[[allocatedBit]], {{.*}}[0] : !llvm.struct<(ptr<array<8 x array<8 x vec<8 x f32>>>, 3>, ptr<array<8 x array<8 x vec<8 x f32>>>, 3>, i64)>
+//       CHECK:   %[[allocatedBit:.*]] = llvm.bitcast %[[allocated]] : !llvm.ptr<f32, 3> to !llvm.ptr<array<8 x array<8 x vector<8xf32>>>, 3>
+//       CHECK:   llvm.insertvalue %[[allocatedBit]], {{.*}}[0] : !llvm.struct<(ptr<array<8 x array<8 x vector<8xf32>>>, 3>, ptr<array<8 x array<8 x vector<8xf32>>>, 3>, i64)>
 //       CHECK:   %[[aligned:.*]] = llvm.extractvalue {{.*}}[1] : !llvm.struct<(ptr<f32, 3>, ptr<f32, 3>, i64, array<3 x i64>, array<3 x i64>)>
-//       CHECK:   %[[alignedBit:.*]] = llvm.bitcast %[[aligned]] : !llvm.ptr<f32, 3> to !llvm.ptr<array<8 x array<8 x vec<8 x f32>>>, 3>
-//       CHECK:   llvm.insertvalue %[[alignedBit]], {{.*}}[1] : !llvm.struct<(ptr<array<8 x array<8 x vec<8 x f32>>>, 3>, ptr<array<8 x array<8 x vec<8 x f32>>>, 3>, i64)>
+//       CHECK:   %[[alignedBit:.*]] = llvm.bitcast %[[aligned]] : !llvm.ptr<f32, 3> to !llvm.ptr<array<8 x array<8 x vector<8xf32>>>, 3>
+//       CHECK:   llvm.insertvalue %[[alignedBit]], {{.*}}[1] : !llvm.struct<(ptr<array<8 x array<8 x vector<8xf32>>>, 3>, ptr<array<8 x array<8 x vector<8xf32>>>, 3>, i64)>
 //       CHECK:   llvm.mlir.constant(0 : index
-//       CHECK:   llvm.insertvalue {{.*}}[2] : !llvm.struct<(ptr<array<8 x array<8 x vec<8 x f32>>>, 3>, ptr<array<8 x array<8 x vec<8 x f32>>>, 3>, i64)>
+//       CHECK:   llvm.insertvalue {{.*}}[2] : !llvm.struct<(ptr<array<8 x array<8 x vector<8xf32>>>, 3>, ptr<array<8 x array<8 x vector<8xf32>>>, 3>, i64)>
 
 func @vector_print_scalar_i1(%arg0: i1) {
   vector.print %arg0 : i1
@@ -571,27 +571,27 @@ func @vector_print_vector(%arg0: vector<2x2xf32>) {
   return
 }
 // CHECK-LABEL: llvm.func @vector_print_vector(
-// CHECK-SAME: %[[A:.*]]: !llvm.array<2 x vec<2 x f32>>)
+// CHECK-SAME: %[[A:.*]]: !llvm.array<2 x vector<2xf32>>)
 //       CHECK:    llvm.call @printOpen() : () -> ()
-//       CHECK:    %[[x0:.*]] = llvm.extractvalue %[[A]][0] : !llvm.array<2 x vec<2 x f32>>
+//       CHECK:    %[[x0:.*]] = llvm.extractvalue %[[A]][0] : !llvm.array<2 x vector<2xf32>>
 //       CHECK:    llvm.call @printOpen() : () -> ()
 //       CHECK:    %[[x1:.*]] = llvm.mlir.constant(0 : index) : i64
-//       CHECK:    %[[x2:.*]] = llvm.extractelement %[[x0]][%[[x1]] : i64] : !llvm.vec<2 x f32>
+//       CHECK:    %[[x2:.*]] = llvm.extractelement %[[x0]][%[[x1]] : i64] : vector<2xf32>
 //       CHECK:    llvm.call @printF32(%[[x2]]) : (f32) -> ()
 //       CHECK:    llvm.call @printComma() : () -> ()
 //       CHECK:    %[[x3:.*]] = llvm.mlir.constant(1 : index) : i64
-//       CHECK:    %[[x4:.*]] = llvm.extractelement %[[x0]][%[[x3]] : i64] : !llvm.vec<2 x f32>
+//       CHECK:    %[[x4:.*]] = llvm.extractelement %[[x0]][%[[x3]] : i64] : vector<2xf32>
 //       CHECK:    llvm.call @printF32(%[[x4]]) : (f32) -> ()
 //       CHECK:    llvm.call @printClose() : () -> ()
 //       CHECK:    llvm.call @printComma() : () -> ()
-//       CHECK:    %[[x5:.*]] = llvm.extractvalue %[[A]][1] : !llvm.array<2 x vec<2 x f32>>
+//       CHECK:    %[[x5:.*]] = llvm.extractvalue %[[A]][1] : !llvm.array<2 x vector<2xf32>>
 //       CHECK:    llvm.call @printOpen() : () -> ()
 //       CHECK:    %[[x6:.*]] = llvm.mlir.constant(0 : index) : i64
-//       CHECK:    %[[x7:.*]] = llvm.extractelement %[[x5]][%[[x6]] : i64] : !llvm.vec<2 x f32>
+//       CHECK:    %[[x7:.*]] = llvm.extractelement %[[x5]][%[[x6]] : i64] : vector<2xf32>
 //       CHECK:    llvm.call @printF32(%[[x7]]) : (f32) -> ()
 //       CHECK:    llvm.call @printComma() : () -> ()
 //       CHECK:    %[[x8:.*]] = llvm.mlir.constant(1 : index) : i64
-//       CHECK:    %[[x9:.*]] = llvm.extractelement %[[x5]][%[[x8]] : i64] : !llvm.vec<2 x f32>
+//       CHECK:    %[[x9:.*]] = llvm.extractelement %[[x5]][%[[x8]] : i64] : vector<2xf32>
 //       CHECK:    llvm.call @printF32(%[[x9]]) : (f32) -> ()
 //       CHECK:    llvm.call @printClose() : () -> ()
 //       CHECK:    llvm.call @printClose() : () -> ()
@@ -602,45 +602,45 @@ func @extract_strided_slice1(%arg0: vector<4xf32>) -> vector<2xf32> {
   return %0 : vector<2xf32>
 }
 // CHECK-LABEL: llvm.func @extract_strided_slice1(
-//  CHECK-SAME:    %[[A:.*]]: !llvm.vec<4 x f32>)
-//       CHECK:    %[[T0:.*]] = llvm.shufflevector %[[A]], %[[A]] [2, 3] : !llvm.vec<4 x f32>, !llvm.vec<4 x f32>
-//       CHECK:    llvm.return %[[T0]] : !llvm.vec<2 x f32>
+//  CHECK-SAME:    %[[A:.*]]: vector<4xf32>)
+//       CHECK:    %[[T0:.*]] = llvm.shufflevector %[[A]], %[[A]] [2, 3] : vector<4xf32>, vector<4xf32>
+//       CHECK:    llvm.return %[[T0]] : vector<2xf32>
 
 func @extract_strided_slice2(%arg0: vector<4x8xf32>) -> vector<2x8xf32> {
   %0 = vector.extract_strided_slice %arg0 {offsets = [2], sizes = [2], strides = [1]} : vector<4x8xf32> to vector<2x8xf32>
   return %0 : vector<2x8xf32>
 }
 // CHECK-LABEL: llvm.func @extract_strided_slice2(
-//  CHECK-SAME:    %[[A:.*]]: !llvm.array<4 x vec<8 x f32>>)
-//       CHECK:    %[[T0:.*]] = llvm.mlir.undef : !llvm.array<2 x vec<8 x f32>>
-//       CHECK:    %[[T1:.*]] = llvm.extractvalue %[[A]][2] : !llvm.array<4 x vec<8 x f32>>
-//       CHECK:    %[[T2:.*]] = llvm.insertvalue %[[T1]], %[[T0]][0] : !llvm.array<2 x vec<8 x f32>>
-//       CHECK:    %[[T3:.*]] = llvm.extractvalue %[[A]][3] : !llvm.array<4 x vec<8 x f32>>
-//       CHECK:    %[[T4:.*]] = llvm.insertvalue %[[T3]], %[[T2]][1] : !llvm.array<2 x vec<8 x f32>>
-//       CHECK:    llvm.return %[[T4]] : !llvm.array<2 x vec<8 x f32>>
+//  CHECK-SAME:    %[[A:.*]]: !llvm.array<4 x vector<8xf32>>)
+//       CHECK:    %[[T0:.*]] = llvm.mlir.undef : !llvm.array<2 x vector<8xf32>>
+//       CHECK:    %[[T1:.*]] = llvm.extractvalue %[[A]][2] : !llvm.array<4 x vector<8xf32>>
+//       CHECK:    %[[T2:.*]] = llvm.insertvalue %[[T1]], %[[T0]][0] : !llvm.array<2 x vector<8xf32>>
+//       CHECK:    %[[T3:.*]] = llvm.extractvalue %[[A]][3] : !llvm.array<4 x vector<8xf32>>
+//       CHECK:    %[[T4:.*]] = llvm.insertvalue %[[T3]], %[[T2]][1] : !llvm.array<2 x vector<8xf32>>
+//       CHECK:    llvm.return %[[T4]] : !llvm.array<2 x vector<8xf32>>
 
 func @extract_strided_slice3(%arg0: vector<4x8xf32>) -> vector<2x2xf32> {
   %0 = vector.extract_strided_slice %arg0 {offsets = [2, 2], sizes = [2, 2], strides = [1, 1]} : vector<4x8xf32> to vector<2x2xf32>
   return %0 : vector<2x2xf32>
 }
 // CHECK-LABEL: llvm.func @extract_strided_slice3(
-//  CHECK-SAME:    %[[A:.*]]: !llvm.array<4 x vec<8 x f32>>)
-//       CHECK:    %[[T1:.*]] = llvm.mlir.constant(dense<0.000000e+00> : vector<2x2xf32>) : !llvm.array<2 x vec<2 x f32>>
-//       CHECK:    %[[T2:.*]] = llvm.extractvalue %[[A]][2] : !llvm.array<4 x vec<8 x f32>>
-//       CHECK:    %[[T3:.*]] = llvm.shufflevector %[[T2]], %[[T2]] [2, 3] : !llvm.vec<8 x f32>, !llvm.vec<8 x f32>
-//       CHECK:    %[[T4:.*]] = llvm.insertvalue %[[T3]], %[[T1]][0] : !llvm.array<2 x vec<2 x f32>>
-//       CHECK:    %[[T5:.*]] = llvm.extractvalue %[[A]][3] : !llvm.array<4 x vec<8 x f32>>
-//       CHECK:    %[[T6:.*]] = llvm.shufflevector %[[T5]], %[[T5]] [2, 3] : !llvm.vec<8 x f32>, !llvm.vec<8 x f32>
-//       CHECK:    %[[T7:.*]] = llvm.insertvalue %[[T6]], %[[T4]][1] : !llvm.array<2 x vec<2 x f32>>
-//       CHECK:    llvm.return %[[T7]] : !llvm.array<2 x vec<2 x f32>>
+//  CHECK-SAME:    %[[A:.*]]: !llvm.array<4 x vector<8xf32>>)
+//       CHECK:    %[[T1:.*]] = llvm.mlir.constant(dense<0.000000e+00> : vector<2x2xf32>) : !llvm.array<2 x vector<2xf32>>
+//       CHECK:    %[[T2:.*]] = llvm.extractvalue %[[A]][2] : !llvm.array<4 x vector<8xf32>>
+//       CHECK:    %[[T3:.*]] = llvm.shufflevector %[[T2]], %[[T2]] [2, 3] : vector<8xf32>, vector<8xf32>
+//       CHECK:    %[[T4:.*]] = llvm.insertvalue %[[T3]], %[[T1]][0] : !llvm.array<2 x vector<2xf32>>
+//       CHECK:    %[[T5:.*]] = llvm.extractvalue %[[A]][3] : !llvm.array<4 x vector<8xf32>>
+//       CHECK:    %[[T6:.*]] = llvm.shufflevector %[[T5]], %[[T5]] [2, 3] : vector<8xf32>, vector<8xf32>
+//       CHECK:    %[[T7:.*]] = llvm.insertvalue %[[T6]], %[[T4]][1] : !llvm.array<2 x vector<2xf32>>
+//       CHECK:    llvm.return %[[T7]] : !llvm.array<2 x vector<2xf32>>
 
 func @insert_strided_slice1(%b: vector<4x4xf32>, %c: vector<4x4x4xf32>) -> vector<4x4x4xf32> {
   %0 = vector.insert_strided_slice %b, %c {offsets = [2, 0, 0], strides = [1, 1]} : vector<4x4xf32> into vector<4x4x4xf32>
   return %0 : vector<4x4x4xf32>
 }
 // CHECK-LABEL: llvm.func @insert_strided_slice1
-//       CHECK:    llvm.extractvalue {{.*}}[2] : !llvm.array<4 x array<4 x vec<4 x f32>>>
-//  CHECK-NEXT:    llvm.insertvalue {{.*}}, {{.*}}[2] : !llvm.array<4 x array<4 x vec<4 x f32>>>
+//       CHECK:    llvm.extractvalue {{.*}}[2] : !llvm.array<4 x array<4 x vector<4xf32>>>
+//  CHECK-NEXT:    llvm.insertvalue {{.*}}, {{.*}}[2] : !llvm.array<4 x array<4 x vector<4xf32>>>
 
 func @insert_strided_slice2(%a: vector<2x2xf32>, %b: vector<4x4xf32>) -> vector<4x4xf32> {
   %0 = vector.insert_strided_slice %a, %b {offsets = [2, 2], strides = [1, 1]} : vector<2x2xf32> into vector<4x4xf32>
@@ -649,34 +649,34 @@ func @insert_strided_slice2(%a: vector<2x2xf32>, %b: vector<4x4xf32>) -> vector<
 // CHECK-LABEL: llvm.func @insert_strided_slice2
 //
 // Subvector vector<2xf32> @0 into vector<4xf32> @2
-//       CHECK:    llvm.extractvalue {{.*}}[0] : !llvm.array<2 x vec<2 x f32>>
-//  CHECK-NEXT:    llvm.extractvalue {{.*}}[2] : !llvm.array<4 x vec<4 x f32>>
+//       CHECK:    llvm.extractvalue {{.*}}[0] : !llvm.array<2 x vector<2xf32>>
+//  CHECK-NEXT:    llvm.extractvalue {{.*}}[2] : !llvm.array<4 x vector<4xf32>>
 // Element @0 -> element @2
 //  CHECK-NEXT:    llvm.mlir.constant(0 : index) : i64
-//  CHECK-NEXT:    llvm.extractelement {{.*}}[{{.*}} : i64] : !llvm.vec<2 x f32>
+//  CHECK-NEXT:    llvm.extractelement {{.*}}[{{.*}} : i64] : vector<2xf32>
 //  CHECK-NEXT:    llvm.mlir.constant(2 : index) : i64
-//  CHECK-NEXT:    llvm.insertelement {{.*}}, {{.*}}[{{.*}} : i64] : !llvm.vec<4 x f32>
+//  CHECK-NEXT:    llvm.insertelement {{.*}}, {{.*}}[{{.*}} : i64] : vector<4xf32>
 // Element @1 -> element @3
 //  CHECK-NEXT:    llvm.mlir.constant(1 : index) : i64
-//  CHECK-NEXT:    llvm.extractelement {{.*}}[{{.*}} : i64] : !llvm.vec<2 x f32>
+//  CHECK-NEXT:    llvm.extractelement {{.*}}[{{.*}} : i64] : vector<2xf32>
 //  CHECK-NEXT:    llvm.mlir.constant(3 : index) : i64
-//  CHECK-NEXT:    llvm.insertelement {{.*}}, {{.*}}[{{.*}} : i64] : !llvm.vec<4 x f32>
-//  CHECK-NEXT:    llvm.insertvalue {{.*}}, {{.*}}[2] : !llvm.array<4 x vec<4 x f32>>
+//  CHECK-NEXT:    llvm.insertelement {{.*}}, {{.*}}[{{.*}} : i64] : vector<4xf32>
+//  CHECK-NEXT:    llvm.insertvalue {{.*}}, {{.*}}[2] : !llvm.array<4 x vector<4xf32>>
 //
 // Subvector vector<2xf32> @1 into vector<4xf32> @3
-//       CHECK:    llvm.extractvalue {{.*}}[1] : !llvm.array<2 x vec<2 x f32>>
-//  CHECK-NEXT:    llvm.extractvalue {{.*}}[3] : !llvm.array<4 x vec<4 x f32>>
+//       CHECK:    llvm.extractvalue {{.*}}[1] : !llvm.array<2 x vector<2xf32>>
+//  CHECK-NEXT:    llvm.extractvalue {{.*}}[3] : !llvm.array<4 x vector<4xf32>>
 // Element @0 -> element @2
 //  CHECK-NEXT:    llvm.mlir.constant(0 : index) : i64
-//  CHECK-NEXT:    llvm.extractelement {{.*}}[{{.*}} : i64] : !llvm.vec<2 x f32>
+//  CHECK-NEXT:    llvm.extractelement {{.*}}[{{.*}} : i64] : vector<2xf32>
 //  CHECK-NEXT:    llvm.mlir.constant(2 : index) : i64
-//  CHECK-NEXT:    llvm.insertelement {{.*}}, {{.*}}[{{.*}} : i64] : !llvm.vec<4 x f32>
+//  CHECK-NEXT:    llvm.insertelement {{.*}}, {{.*}}[{{.*}} : i64] : vector<4xf32>
 // Element @1 -> element @3
 //  CHECK-NEXT:    llvm.mlir.constant(1 : index) : i64
-//  CHECK-NEXT:    llvm.extractelement {{.*}}[{{.*}} : i64] : !llvm.vec<2 x f32>
+//  CHECK-NEXT:    llvm.extractelement {{.*}}[{{.*}} : i64] : vector<2xf32>
 //  CHECK-NEXT:    llvm.mlir.constant(3 : index) : i64
-//  CHECK-NEXT:    llvm.insertelement {{.*}}, {{.*}}[{{.*}} : i64] : !llvm.vec<4 x f32>
-//  CHECK-NEXT:    llvm.insertvalue {{.*}}, {{.*}}[3] : !llvm.array<4 x vec<4 x f32>>
+//  CHECK-NEXT:    llvm.insertelement {{.*}}, {{.*}}[{{.*}} : i64] : vector<4xf32>
+//  CHECK-NEXT:    llvm.insertvalue {{.*}}, {{.*}}[3] : !llvm.array<4 x vector<4xf32>>
 
 func @insert_strided_slice3(%arg0: vector<2x4xf32>, %arg1: vector<16x4x8xf32>) -> vector<16x4x8xf32> {
   %0 = vector.insert_strided_slice %arg0, %arg1 {offsets = [0, 0, 2], strides = [1, 1]}:
@@ -684,49 +684,49 @@ func @insert_strided_slice3(%arg0: vector<2x4xf32>, %arg1: vector<16x4x8xf32>) -
   return %0 : vector<16x4x8xf32>
 }
 // CHECK-LABEL: llvm.func @insert_strided_slice3(
-// CHECK-SAME: %[[A:.*]]: !llvm.array<2 x vec<4 x f32>>,
-// CHECK-SAME: %[[B:.*]]: !llvm.array<16 x array<4 x vec<8 x f32>>>)
-//      CHECK: %[[s0:.*]] = llvm.extractvalue %[[B]][0] : !llvm.array<16 x array<4 x vec<8 x f32>>>
-//      CHECK: %[[s1:.*]] = llvm.extractvalue %[[A]][0] : !llvm.array<2 x vec<4 x f32>>
-//      CHECK: %[[s2:.*]] = llvm.extractvalue %[[B]][0, 0] : !llvm.array<16 x array<4 x vec<8 x f32>>>
+// CHECK-SAME: %[[A:.*]]: !llvm.array<2 x vector<4xf32>>,
+// CHECK-SAME: %[[B:.*]]: !llvm.array<16 x array<4 x vector<8xf32>>>)
+//      CHECK: %[[s0:.*]] = llvm.extractvalue %[[B]][0] : !llvm.array<16 x array<4 x vector<8xf32>>>
+//      CHECK: %[[s1:.*]] = llvm.extractvalue %[[A]][0] : !llvm.array<2 x vector<4xf32>>
+//      CHECK: %[[s2:.*]] = llvm.extractvalue %[[B]][0, 0] : !llvm.array<16 x array<4 x vector<8xf32>>>
 //      CHECK: %[[s3:.*]] = llvm.mlir.constant(0 : index) : i64
-//      CHECK: %[[s4:.*]] = llvm.extractelement %[[s1]][%[[s3]] : i64] : !llvm.vec<4 x f32>
+//      CHECK: %[[s4:.*]] = llvm.extractelement %[[s1]][%[[s3]] : i64] : vector<4xf32>
 //      CHECK: %[[s5:.*]] = llvm.mlir.constant(2 : index) : i64
-//      CHECK: %[[s6:.*]] = llvm.insertelement %[[s4]], %[[s2]][%[[s5]] : i64] : !llvm.vec<8 x f32>
+//      CHECK: %[[s6:.*]] = llvm.insertelement %[[s4]], %[[s2]][%[[s5]] : i64] : vector<8xf32>
 //      CHECK: %[[s7:.*]] = llvm.mlir.constant(1 : index) : i64
-//      CHECK: %[[s8:.*]] = llvm.extractelement %[[s1]][%[[s7]] : i64] : !llvm.vec<4 x f32>
+//      CHECK: %[[s8:.*]] = llvm.extractelement %[[s1]][%[[s7]] : i64] : vector<4xf32>
 //      CHECK: %[[s9:.*]] = llvm.mlir.constant(3 : index) : i64
-//      CHECK: %[[s10:.*]] = llvm.insertelement %[[s8]], %[[s6]][%[[s9]] : i64] : !llvm.vec<8 x f32>
+//      CHECK: %[[s10:.*]] = llvm.insertelement %[[s8]], %[[s6]][%[[s9]] : i64] : vector<8xf32>
 //      CHECK: %[[s11:.*]] = llvm.mlir.constant(2 : index) : i64
-//      CHECK: %[[s12:.*]] = llvm.extractelement %[[s1]][%[[s11]] : i64] : !llvm.vec<4 x f32>
+//      CHECK: %[[s12:.*]] = llvm.extractelement %[[s1]][%[[s11]] : i64] : vector<4xf32>
 //      CHECK: %[[s13:.*]] = llvm.mlir.constant(4 : index) : i64
-//      CHECK: %[[s14:.*]] = llvm.insertelement %[[s12]], %[[s10]][%[[s13]] : i64] : !llvm.vec<8 x f32>
+//      CHECK: %[[s14:.*]] = llvm.insertelement %[[s12]], %[[s10]][%[[s13]] : i64] : vector<8xf32>
 //      CHECK: %[[s15:.*]] = llvm.mlir.constant(3 : index) : i64
-//      CHECK: %[[s16:.*]] = llvm.extractelement %[[s1]][%[[s15]] : i64] : !llvm.vec<4 x f32>
+//      CHECK: %[[s16:.*]] = llvm.extractelement %[[s1]][%[[s15]] : i64] : vector<4xf32>
 //      CHECK: %[[s17:.*]] = llvm.mlir.constant(5 : index) : i64
-//      CHECK: %[[s18:.*]] = llvm.insertelement %[[s16]], %[[s14]][%[[s17]] : i64] : !llvm.vec<8 x f32>
-//      CHECK: %[[s19:.*]] = llvm.insertvalue %[[s18]], %[[s0]][0] : !llvm.array<4 x vec<8 x f32>>
-//      CHECK: %[[s20:.*]] = llvm.extractvalue %[[A]][1] : !llvm.array<2 x vec<4 x f32>>
-//      CHECK: %[[s21:.*]] = llvm.extractvalue %[[B]][0, 1] : !llvm.array<16 x array<4 x vec<8 x f32>>>
+//      CHECK: %[[s18:.*]] = llvm.insertelement %[[s16]], %[[s14]][%[[s17]] : i64] : vector<8xf32>
+//      CHECK: %[[s19:.*]] = llvm.insertvalue %[[s18]], %[[s0]][0] : !llvm.array<4 x vector<8xf32>>
+//      CHECK: %[[s20:.*]] = llvm.extractvalue %[[A]][1] : !llvm.array<2 x vector<4xf32>>
+//      CHECK: %[[s21:.*]] = llvm.extractvalue %[[B]][0, 1] : !llvm.array<16 x array<4 x vector<8xf32>>>
 //      CHECK: %[[s22:.*]] = llvm.mlir.constant(0 : index) : i64
-//      CHECK: %[[s23:.*]] = llvm.extractelement %[[s20]][%[[s22]] : i64] : !llvm.vec<4 x f32>
+//      CHECK: %[[s23:.*]] = llvm.extractelement %[[s20]][%[[s22]] : i64] : vector<4xf32>
 //      CHECK: %[[s24:.*]] = llvm.mlir.constant(2 : index) : i64
-//      CHECK: %[[s25:.*]] = llvm.insertelement %[[s23]], %[[s21]][%[[s24]] : i64] : !llvm.vec<8 x f32>
+//      CHECK: %[[s25:.*]] = llvm.insertelement %[[s23]], %[[s21]][%[[s24]] : i64] : vector<8xf32>
 //      CHECK: %[[s26:.*]] = llvm.mlir.constant(1 : index) : i64
-//      CHECK: %[[s27:.*]] = llvm.extractelement %[[s20]][%[[s26]] : i64] : !llvm.vec<4 x f32>
+//      CHECK: %[[s27:.*]] = llvm.extractelement %[[s20]][%[[s26]] : i64] : vector<4xf32>
 //      CHECK: %[[s28:.*]] = llvm.mlir.constant(3 : index) : i64
-//      CHECK: %[[s29:.*]] = llvm.insertelement %[[s27]], %[[s25]][%[[s28]] : i64] : !llvm.vec<8 x f32>
+//      CHECK: %[[s29:.*]] = llvm.insertelement %[[s27]], %[[s25]][%[[s28]] : i64] : vector<8xf32>
 //      CHECK: %[[s30:.*]] = llvm.mlir.constant(2 : index) : i64
-//      CHECK: %[[s31:.*]] = llvm.extractelement %[[s20]][%[[s30]] : i64] : !llvm.vec<4 x f32>
+//      CHECK: %[[s31:.*]] = llvm.extractelement %[[s20]][%[[s30]] : i64] : vector<4xf32>
 //      CHECK: %[[s32:.*]] = llvm.mlir.constant(4 : index) : i64
-//      CHECK: %[[s33:.*]] = llvm.insertelement %[[s31]], %[[s29]][%[[s32]] : i64] : !llvm.vec<8 x f32>
+//      CHECK: %[[s33:.*]] = llvm.insertelement %[[s31]], %[[s29]][%[[s32]] : i64] : vector<8xf32>
 //      CHECK: %[[s34:.*]] = llvm.mlir.constant(3 : index) : i64
-//      CHECK: %[[s35:.*]] = llvm.extractelement %[[s20]][%[[s34]] : i64] : !llvm.vec<4 x f32>
+//      CHECK: %[[s35:.*]] = llvm.extractelement %[[s20]][%[[s34]] : i64] : vector<4xf32>
 //      CHECK: %[[s36:.*]] = llvm.mlir.constant(5 : index) : i64
-//      CHECK: %[[s37:.*]] = llvm.insertelement %[[s35]], %[[s33]][%[[s36]] : i64] : !llvm.vec<8 x f32>
-//      CHECK: %[[s38:.*]] = llvm.insertvalue %[[s37]], %[[s19]][1] : !llvm.array<4 x vec<8 x f32>>
-//      CHECK: %[[s39:.*]] = llvm.insertvalue %[[s38]], %[[B]][0] : !llvm.array<16 x array<4 x vec<8 x f32>>>
-//      CHECK: llvm.return %[[s39]] : !llvm.array<16 x array<4 x vec<8 x f32>>>
+//      CHECK: %[[s37:.*]] = llvm.insertelement %[[s35]], %[[s33]][%[[s36]] : i64] : vector<8xf32>
+//      CHECK: %[[s38:.*]] = llvm.insertvalue %[[s37]], %[[s19]][1] : !llvm.array<4 x vector<8xf32>>
+//      CHECK: %[[s39:.*]] = llvm.insertvalue %[[s38]], %[[B]][0] : !llvm.array<16 x array<4 x vector<8xf32>>>
+//      CHECK: llvm.return %[[s39]] : !llvm.array<16 x array<4 x vector<8xf32>>>
 
 func @extract_strides(%arg0: vector<3x3xf32>) -> vector<1x1xf32> {
   %0 = vector.extract_slices %arg0, [2, 2], [1, 1]
@@ -735,33 +735,33 @@ func @extract_strides(%arg0: vector<3x3xf32>) -> vector<1x1xf32> {
   return %1 : vector<1x1xf32>
 }
 // CHECK-LABEL: llvm.func @extract_strides(
-// CHECK-SAME: %[[A:.*]]: !llvm.array<3 x vec<3 x f32>>)
-//      CHECK: %[[T1:.*]] = llvm.mlir.constant(dense<0.000000e+00> : vector<1x1xf32>) : !llvm.array<1 x vec<1 x f32>>
-//      CHECK: %[[T2:.*]] = llvm.extractvalue %[[A]][2] : !llvm.array<3 x vec<3 x f32>>
-//      CHECK: %[[T3:.*]] = llvm.shufflevector %[[T2]], %[[T2]] [2] : !llvm.vec<3 x f32>, !llvm.vec<3 x f32>
-//      CHECK: %[[T4:.*]] = llvm.insertvalue %[[T3]], %[[T1]][0] : !llvm.array<1 x vec<1 x f32>>
-//      CHECK: llvm.return %[[T4]] : !llvm.array<1 x vec<1 x f32>>
+// CHECK-SAME: %[[A:.*]]: !llvm.array<3 x vector<3xf32>>)
+//      CHECK: %[[T1:.*]] = llvm.mlir.constant(dense<0.000000e+00> : vector<1x1xf32>) : !llvm.array<1 x vector<1xf32>>
+//      CHECK: %[[T2:.*]] = llvm.extractvalue %[[A]][2] : !llvm.array<3 x vector<3xf32>>
+//      CHECK: %[[T3:.*]] = llvm.shufflevector %[[T2]], %[[T2]] [2] : vector<3xf32>, vector<3xf32>
+//      CHECK: %[[T4:.*]] = llvm.insertvalue %[[T3]], %[[T1]][0] : !llvm.array<1 x vector<1xf32>>
+//      CHECK: llvm.return %[[T4]] : !llvm.array<1 x vector<1xf32>>
 
 // CHECK-LABEL: llvm.func @vector_fma(
-//  CHECK-SAME: %[[A:.*]]: !llvm.vec<8 x f32>, %[[B:.*]]: !llvm.array<2 x vec<4 x f32>>)
-//  CHECK-SAME: -> !llvm.struct<(vec<8 x f32>, array<2 x vec<4 x f32>>)> {
+//  CHECK-SAME: %[[A:.*]]: vector<8xf32>, %[[B:.*]]: !llvm.array<2 x vector<4xf32>>)
+//  CHECK-SAME: -> !llvm.struct<(vector<8xf32>, array<2 x vector<4xf32>>)> {
 func @vector_fma(%a: vector<8xf32>, %b: vector<2x4xf32>) -> (vector<8xf32>, vector<2x4xf32>) {
   //         CHECK: "llvm.intr.fmuladd"(%[[A]], %[[A]], %[[A]]) :
-  //    CHECK-SAME:   (!llvm.vec<8 x f32>, !llvm.vec<8 x f32>, !llvm.vec<8 x f32>) -> !llvm.vec<8 x f32>
+  //    CHECK-SAME:   (vector<8xf32>, vector<8xf32>, vector<8xf32>) -> vector<8xf32>
   %0 = vector.fma %a, %a, %a : vector<8xf32>
 
-  //       CHECK: %[[b00:.*]] = llvm.extractvalue %[[B]][0] : !llvm.array<2 x vec<4 x f32>>
-  //       CHECK: %[[b01:.*]] = llvm.extractvalue %[[B]][0] : !llvm.array<2 x vec<4 x f32>>
-  //       CHECK: %[[b02:.*]] = llvm.extractvalue %[[B]][0] : !llvm.array<2 x vec<4 x f32>>
+  //       CHECK: %[[b00:.*]] = llvm.extractvalue %[[B]][0] : !llvm.array<2 x vector<4xf32>>
+  //       CHECK: %[[b01:.*]] = llvm.extractvalue %[[B]][0] : !llvm.array<2 x vector<4xf32>>
+  //       CHECK: %[[b02:.*]] = llvm.extractvalue %[[B]][0] : !llvm.array<2 x vector<4xf32>>
   //       CHECK: %[[B0:.*]] = "llvm.intr.fmuladd"(%[[b00]], %[[b01]], %[[b02]]) :
-  //  CHECK-SAME: (!llvm.vec<4 x f32>, !llvm.vec<4 x f32>, !llvm.vec<4 x f32>) -> !llvm.vec<4 x f32>
-  //       CHECK: llvm.insertvalue %[[B0]], {{.*}}[0] : !llvm.array<2 x vec<4 x f32>>
-  //       CHECK: %[[b10:.*]] = llvm.extractvalue %[[B]][1] : !llvm.array<2 x vec<4 x f32>>
-  //       CHECK: %[[b11:.*]] = llvm.extractvalue %[[B]][1] : !llvm.array<2 x vec<4 x f32>>
-  //       CHECK: %[[b12:.*]] = llvm.extractvalue %[[B]][1] : !llvm.array<2 x vec<4 x f32>>
+  //  CHECK-SAME: (vector<4xf32>, vector<4xf32>, vector<4xf32>) -> vector<4xf32>
+  //       CHECK: llvm.insertvalue %[[B0]], {{.*}}[0] : !llvm.array<2 x vector<4xf32>>
+  //       CHECK: %[[b10:.*]] = llvm.extractvalue %[[B]][1] : !llvm.array<2 x vector<4xf32>>
+  //       CHECK: %[[b11:.*]] = llvm.extractvalue %[[B]][1] : !llvm.array<2 x vector<4xf32>>
+  //       CHECK: %[[b12:.*]] = llvm.extractvalue %[[B]][1] : !llvm.array<2 x vector<4xf32>>
   //       CHECK: %[[B1:.*]] = "llvm.intr.fmuladd"(%[[b10]], %[[b11]], %[[b12]]) :
-  //  CHECK-SAME: (!llvm.vec<4 x f32>, !llvm.vec<4 x f32>, !llvm.vec<4 x f32>) -> !llvm.vec<4 x f32>
-  //       CHECK: llvm.insertvalue %[[B1]], {{.*}}[1] : !llvm.array<2 x vec<4 x f32>>
+  //  CHECK-SAME: (vector<4xf32>, vector<4xf32>, vector<4xf32>) -> vector<4xf32>
+  //       CHECK: llvm.insertvalue %[[B1]], {{.*}}[1] : !llvm.array<2 x vector<4xf32>>
   %1 = vector.fma %b, %b, %b : vector<2x4xf32>
 
   return %0, %1: vector<8xf32>, vector<2x4xf32>
@@ -772,10 +772,10 @@ func @reduce_f16(%arg0: vector<16xf16>) -> f16 {
   return %0 : f16
 }
 // CHECK-LABEL: llvm.func @reduce_f16(
-// CHECK-SAME: %[[A:.*]]: !llvm.vec<16 x f16>)
+// CHECK-SAME: %[[A:.*]]: vector<16xf16>)
 //      CHECK: %[[C:.*]] = llvm.mlir.constant(0.000000e+00 : f16) : f16
 //      CHECK: %[[V:.*]] = "llvm.intr.vector.reduce.fadd"(%[[C]], %[[A]])
-// CHECK-SAME: {reassoc = false} : (f16, !llvm.vec<16 x f16>) -> f16
+// CHECK-SAME: {reassoc = false} : (f16, vector<16xf16>) -> f16
 //      CHECK: llvm.return %[[V]] : f16
 
 func @reduce_f32(%arg0: vector<16xf32>) -> f32 {
@@ -783,10 +783,10 @@ func @reduce_f32(%arg0: vector<16xf32>) -> f32 {
   return %0 : f32
 }
 // CHECK-LABEL: llvm.func @reduce_f32(
-// CHECK-SAME: %[[A:.*]]: !llvm.vec<16 x f32>)
+// CHECK-SAME: %[[A:.*]]: vector<16xf32>)
 //      CHECK: %[[C:.*]] = llvm.mlir.constant(0.000000e+00 : f32) : f32
 //      CHECK: %[[V:.*]] = "llvm.intr.vector.reduce.fadd"(%[[C]], %[[A]])
-// CHECK-SAME: {reassoc = false} : (f32, !llvm.vec<16 x f32>) -> f32
+// CHECK-SAME: {reassoc = false} : (f32, vector<16xf32>) -> f32
 //      CHECK: llvm.return %[[V]] : f32
 
 func @reduce_f64(%arg0: vector<16xf64>) -> f64 {
@@ -794,10 +794,10 @@ func @reduce_f64(%arg0: vector<16xf64>) -> f64 {
   return %0 : f64
 }
 // CHECK-LABEL: llvm.func @reduce_f64(
-// CHECK-SAME: %[[A:.*]]: !llvm.vec<16 x f64>)
+// CHECK-SAME: %[[A:.*]]: vector<16xf64>)
 //      CHECK: %[[C:.*]] = llvm.mlir.constant(0.000000e+00 : f64) : f64
 //      CHECK: %[[V:.*]] = "llvm.intr.vector.reduce.fadd"(%[[C]], %[[A]])
-// CHECK-SAME: {reassoc = false} : (f64, !llvm.vec<16 x f64>) -> f64
+// CHECK-SAME: {reassoc = false} : (f64, vector<16xf64>) -> f64
 //      CHECK: llvm.return %[[V]] : f64
 
 func @reduce_i8(%arg0: vector<16xi8>) -> i8 {
@@ -805,7 +805,7 @@ func @reduce_i8(%arg0: vector<16xi8>) -> i8 {
   return %0 : i8
 }
 // CHECK-LABEL: llvm.func @reduce_i8(
-// CHECK-SAME: %[[A:.*]]: !llvm.vec<16 x i8>)
+// CHECK-SAME: %[[A:.*]]: vector<16xi8>)
 //      CHECK: %[[V:.*]] = "llvm.intr.vector.reduce.add"(%[[A]])
 //      CHECK: llvm.return %[[V]] : i8
 
@@ -814,7 +814,7 @@ func @reduce_i32(%arg0: vector<16xi32>) -> i32 {
   return %0 : i32
 }
 // CHECK-LABEL: llvm.func @reduce_i32(
-// CHECK-SAME: %[[A:.*]]: !llvm.vec<16 x i32>)
+// CHECK-SAME: %[[A:.*]]: vector<16xi32>)
 //      CHECK: %[[V:.*]] = "llvm.intr.vector.reduce.add"(%[[A]])
 //      CHECK: llvm.return %[[V]] : i32
 
@@ -823,7 +823,7 @@ func @reduce_i64(%arg0: vector<16xi64>) -> i64 {
   return %0 : i64
 }
 // CHECK-LABEL: llvm.func @reduce_i64(
-// CHECK-SAME: %[[A:.*]]: !llvm.vec<16 x i64>)
+// CHECK-SAME: %[[A:.*]]: vector<16xi64>)
 //      CHECK: %[[V:.*]] = "llvm.intr.vector.reduce.add"(%[[A]])
 //      CHECK: llvm.return %[[V]] : i64
 
@@ -838,7 +838,7 @@ func @matrix_ops(%A: vector<64xf64>, %B: vector<48xf64>) -> vector<12xf64> {
 // CHECK-LABEL: llvm.func @matrix_ops
 //       CHECK:   llvm.intr.matrix.multiply %{{.*}}, %{{.*}} {
 //  CHECK-SAME: lhs_columns = 16 : i32, lhs_rows = 4 : i32, rhs_columns = 3 : i32
-//  CHECK-SAME: } : (!llvm.vec<64 x f64>, !llvm.vec<48 x f64>) -> !llvm.vec<12 x f64>
+//  CHECK-SAME: } : (vector<64xf64>, vector<48xf64>) -> vector<12xf64>
 
 func @transfer_read_1d(%A : memref<?xf32>, %base: index) -> vector<17xf32> {
   %f7 = constant 7.0: f32
@@ -851,74 +851,74 @@ func @transfer_read_1d(%A : memref<?xf32>, %base: index) -> vector<17xf32> {
   return %f: vector<17xf32>
 }
 // CHECK-LABEL: func @transfer_read_1d
-//  CHECK-SAME: %[[BASE:[a-zA-Z0-9]*]]: i64) -> !llvm.vec<17 x f32>
+//  CHECK-SAME: %[[BASE:[a-zA-Z0-9]*]]: i64) -> vector<17xf32>
 //
 // 1. Bitcast to vector form.
 //       CHECK: %[[gep:.*]] = llvm.getelementptr {{.*}} :
 //  CHECK-SAME: (!llvm.ptr<f32>, i64) -> !llvm.ptr<f32>
 //       CHECK: %[[vecPtr:.*]] = llvm.bitcast %[[gep]] :
-//  CHECK-SAME: !llvm.ptr<f32> to !llvm.ptr<vec<17 x f32>>
+//  CHECK-SAME: !llvm.ptr<f32> to !llvm.ptr<vector<17xf32>>
 //       CHECK: %[[DIM:.*]] = llvm.extractvalue %{{.*}}[3, 0] :
 //  CHECK-SAME: !llvm.struct<(ptr<f32>, ptr<f32>, i64, array<1 x i64>, array<1 x i64>)>
 //
 // 2. Create a vector with linear indices [ 0 .. vector_length - 1 ].
 //       CHECK: %[[linearIndex:.*]] = llvm.mlir.constant(dense
 //  CHECK-SAME: <[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16]> :
-//  CHECK-SAME: vector<17xi32>) : !llvm.vec<17 x i32>
+//  CHECK-SAME: vector<17xi32>) : vector<17xi32>
 //
 // 3. Create offsetVector = [ offset + 0 .. offset + vector_length - 1 ].
 //       CHECK: %[[otrunc:.*]] = llvm.trunc %[[BASE]] : i64 to i32
-//       CHECK: %[[offsetVec:.*]] = llvm.mlir.undef : !llvm.vec<17 x i32>
+//       CHECK: %[[offsetVec:.*]] = llvm.mlir.undef : vector<17xi32>
 //       CHECK: %[[c0:.*]] = llvm.mlir.constant(0 : i32) : i32
 //       CHECK: %[[offsetVec2:.*]] = llvm.insertelement %[[otrunc]], %[[offsetVec]][%[[c0]] :
-//  CHECK-SAME: i32] : !llvm.vec<17 x i32>
+//  CHECK-SAME: i32] : vector<17xi32>
 //       CHECK: %[[offsetVec3:.*]] = llvm.shufflevector %[[offsetVec2]], %{{.*}} [
 //  CHECK-SAME:  0 : i32, 0 : i32, 0 : i32, 0 : i32, 0 : i32, 0 : i32, 0 : i32,
 //  CHECK-SAME:  0 : i32, 0 : i32, 0 : i32, 0 : i32, 0 : i32, 0 : i32, 0 : i32,
 //  CHECK-SAME:  0 : i32, 0 : i32, 0 : i32] :
-//  CHECK-SAME: !llvm.vec<17 x i32>, !llvm.vec<17 x i32>
+//  CHECK-SAME: vector<17xi32>, vector<17xi32>
 //       CHECK: %[[offsetVec4:.*]] = llvm.add %[[offsetVec3]], %[[linearIndex]] :
-//  CHECK-SAME: !llvm.vec<17 x i32>
+//  CHECK-SAME: vector<17xi32>
 //
 // 4. Let dim the memref dimension, compute the vector comparison mask:
 //    [ offset + 0 .. offset + vector_length - 1 ] < [ dim .. dim ]
 //       CHECK: %[[dtrunc:.*]] = llvm.trunc %[[DIM]] : i64 to i32
-//       CHECK: %[[dimVec:.*]] = llvm.mlir.undef : !llvm.vec<17 x i32>
+//       CHECK: %[[dimVec:.*]] = llvm.mlir.undef : vector<17xi32>
 //       CHECK: %[[c01:.*]] = llvm.mlir.constant(0 : i32) : i32
 //       CHECK: %[[dimVec2:.*]] = llvm.insertelement %[[dtrunc]], %[[dimVec]][%[[c01]] :
-//  CHECK-SAME:  i32] : !llvm.vec<17 x i32>
+//  CHECK-SAME:  i32] : vector<17xi32>
 //       CHECK: %[[dimVec3:.*]] = llvm.shufflevector %[[dimVec2]], %{{.*}} [
 //  CHECK-SAME:  0 : i32, 0 : i32, 0 : i32, 0 : i32, 0 : i32, 0 : i32, 0 : i32,
 //  CHECK-SAME:  0 : i32, 0 : i32, 0 : i32, 0 : i32, 0 : i32, 0 : i32, 0 : i32,
 //  CHECK-SAME:  0 : i32, 0 : i32, 0 : i32] :
-//  CHECK-SAME: !llvm.vec<17 x i32>, !llvm.vec<17 x i32>
+//  CHECK-SAME: vector<17xi32>, vector<17xi32>
 //       CHECK: %[[mask:.*]] = llvm.icmp "slt" %[[offsetVec4]], %[[dimVec3]] :
-//  CHECK-SAME: !llvm.vec<17 x i32>
+//  CHECK-SAME: vector<17xi32>
 //
 // 5. Rewrite as a masked read.
 //       CHECK: %[[PASS_THROUGH:.*]] =  llvm.mlir.constant(dense<7.000000e+00> :
-//  CHECK-SAME:  vector<17xf32>) : !llvm.vec<17 x f32>
+//  CHECK-SAME:  vector<17xf32>) : vector<17xf32>
 //       CHECK: %[[loaded:.*]] = llvm.intr.masked.load %[[vecPtr]], %[[mask]],
 //  CHECK-SAME: %[[PASS_THROUGH]] {alignment = 4 : i32} :
-//  CHECK-SAME: (!llvm.ptr<vec<17 x f32>>, !llvm.vec<17 x i1>, !llvm.vec<17 x f32>) -> !llvm.vec<17 x f32>
+//  CHECK-SAME: (!llvm.ptr<vector<17xf32>>, vector<17xi1>, vector<17xf32>) -> vector<17xf32>
 
 //
 // 1. Bitcast to vector form.
 //       CHECK: %[[gep_b:.*]] = llvm.getelementptr {{.*}} :
 //  CHECK-SAME: (!llvm.ptr<f32>, i64) -> !llvm.ptr<f32>
 //       CHECK: %[[vecPtr_b:.*]] = llvm.bitcast %[[gep_b]] :
-//  CHECK-SAME: !llvm.ptr<f32> to !llvm.ptr<vec<17 x f32>>
+//  CHECK-SAME: !llvm.ptr<f32> to !llvm.ptr<vector<17xf32>>
 //
 // 2. Create a vector with linear indices [ 0 .. vector_length - 1 ].
 //       CHECK: %[[linearIndex_b:.*]] = llvm.mlir.constant(dense
 //  CHECK-SAME: <[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16]> :
-//  CHECK-SAME: vector<17xi32>) : !llvm.vec<17 x i32>
+//  CHECK-SAME: vector<17xi32>) : vector<17xi32>
 //
 // 3. Create offsetVector = [ offset + 0 .. offset + vector_length - 1 ].
 //       CHECK: llvm.shufflevector {{.*}} [0 : i32, 0 : i32, 0 : i32, 0 : i32,
 //  CHECK-SAME:  0 : i32, 0 : i32, 0 : i32, 0 : i32, 0 : i32, 0 : i32, 0 : i32,
 //  CHECK-SAME:  0 : i32, 0 : i32, 0 : i32, 0 : i32, 0 : i32, 0 : i32] :
-//  CHECK-SAME: !llvm.vec<17 x i32>, !llvm.vec<17 x i32>
+//  CHECK-SAME: vector<17xi32>, vector<17xi32>
 //       CHECK: llvm.add
 //
 // 4. Let dim the memref dimension, compute the vector comparison mask:
@@ -926,13 +926,13 @@ func @transfer_read_1d(%A : memref<?xf32>, %base: index) -> vector<17xf32> {
 //       CHECK: llvm.shufflevector {{.*}} [0 : i32, 0 : i32, 0 : i32, 0 : i32,
 //  CHECK-SAME:  0 : i32, 0 : i32, 0 : i32, 0 : i32, 0 : i32, 0 : i32, 0 : i32,
 //  CHECK-SAME:  0 : i32, 0 : i32, 0 : i32, 0 : i32, 0 : i32, 0 : i32] :
-//  CHECK-SAME: !llvm.vec<17 x i32>, !llvm.vec<17 x i32>
-//       CHECK: %[[mask_b:.*]] = llvm.icmp "slt" {{.*}} : !llvm.vec<17 x i32>
+//  CHECK-SAME: vector<17xi32>, vector<17xi32>
+//       CHECK: %[[mask_b:.*]] = llvm.icmp "slt" {{.*}} : vector<17xi32>
 //
 // 5. Rewrite as a masked write.
 //       CHECK: llvm.intr.masked.store %[[loaded]], %[[vecPtr_b]], %[[mask_b]]
 //  CHECK-SAME: {alignment = 4 : i32} :
-//  CHECK-SAME: !llvm.vec<17 x f32>, !llvm.vec<17 x i1> into !llvm.ptr<vec<17 x f32>>
+//  CHECK-SAME: vector<17xf32>, vector<17xi1> into !llvm.ptr<vector<17xf32>>
 
 func @transfer_read_2d_to_1d(%A : memref<?x?xf32>, %base0: index, %base1: index) -> vector<17xf32> {
   %f7 = constant 7.0: f32
@@ -942,34 +942,34 @@ func @transfer_read_2d_to_1d(%A : memref<?x?xf32>, %base0: index, %base1: index)
   return %f: vector<17xf32>
 }
 // CHECK-LABEL: func @transfer_read_2d_to_1d
-//  CHECK-SAME: %[[BASE_0:[a-zA-Z0-9]*]]: i64, %[[BASE_1:[a-zA-Z0-9]*]]: i64) -> !llvm.vec<17 x f32>
+//  CHECK-SAME: %[[BASE_0:[a-zA-Z0-9]*]]: i64, %[[BASE_1:[a-zA-Z0-9]*]]: i64) -> vector<17xf32>
 //       CHECK: %[[DIM:.*]] = llvm.extractvalue %{{.*}}[3, 1] :
 //  CHECK-SAME: !llvm.struct<(ptr<f32>, ptr<f32>, i64, array<2 x i64>, array<2 x i64>)>
 //
 // Create offsetVector = [ offset + 0 .. offset + vector_length - 1 ].
 //       CHECK: %[[trunc:.*]] = llvm.trunc %[[BASE_1]] : i64 to i32
-//       CHECK: %[[offsetVec:.*]] = llvm.mlir.undef : !llvm.vec<17 x i32>
+//       CHECK: %[[offsetVec:.*]] = llvm.mlir.undef : vector<17xi32>
 //       CHECK: %[[c0:.*]] = llvm.mlir.constant(0 : i32) : i32
 //       CHECK: %[[offsetVec2:.*]] = llvm.insertelement %[[trunc]], %[[offsetVec]][%[[c0]] :
-//  CHECK-SAME: i32] : !llvm.vec<17 x i32>
+//  CHECK-SAME: i32] : vector<17xi32>
 //       CHECK: %[[offsetVec3:.*]] = llvm.shufflevector %[[offsetVec2]], %{{.*}} [
 //  CHECK-SAME:  0 : i32, 0 : i32, 0 : i32, 0 : i32, 0 : i32, 0 : i32, 0 : i32,
 //  CHECK-SAME:  0 : i32, 0 : i32, 0 : i32, 0 : i32, 0 : i32, 0 : i32, 0 : i32,
 //  CHECK-SAME:  0 : i32, 0 : i32, 0 : i32] :
-//  CHECK-SAME: !llvm.vec<17 x i32>, !llvm.vec<17 x i32>
+//  CHECK-SAME: vector<17xi32>, vector<17xi32>
 //
 // Let dim the memref dimension, compute the vector comparison mask:
 //    [ offset + 0 .. offset + vector_length - 1 ] < [ dim .. dim ]
 //       CHECK: %[[dimtrunc:.*]] = llvm.trunc %[[DIM]] : i64 to i32
-//       CHECK: %[[dimVec:.*]] = llvm.mlir.undef : !llvm.vec<17 x i32>
+//       CHECK: %[[dimVec:.*]] = llvm.mlir.undef : vector<17xi32>
 //       CHECK: %[[c01:.*]] = llvm.mlir.constant(0 : i32) : i32
 //       CHECK: %[[dimVec2:.*]] = llvm.insertelement %[[dimtrunc]], %[[dimVec]][%[[c01]] :
-//  CHECK-SAME:  i32] : !llvm.vec<17 x i32>
+//  CHECK-SAME:  i32] : vector<17xi32>
 //       CHECK: %[[dimVec3:.*]] = llvm.shufflevector %[[dimVec2]], %{{.*}} [
 //  CHECK-SAME:  0 : i32, 0 : i32, 0 : i32, 0 : i32, 0 : i32, 0 : i32, 0 : i32,
 //  CHECK-SAME:  0 : i32, 0 : i32, 0 : i32, 0 : i32, 0 : i32, 0 : i32, 0 : i32,
 //  CHECK-SAME:  0 : i32, 0 : i32, 0 : i32] :
-//  CHECK-SAME: !llvm.vec<17 x i32>, !llvm.vec<17 x i32>
+//  CHECK-SAME: vector<17xi32>, vector<17xi32>
 
 func @transfer_read_1d_non_zero_addrspace(%A : memref<?xf32, 3>, %base: index) -> vector<17xf32> {
   %f7 = constant 7.0: f32
@@ -982,13 +982,13 @@ func @transfer_read_1d_non_zero_addrspace(%A : memref<?xf32, 3>, %base: index) -
   return %f: vector<17xf32>
 }
 // CHECK-LABEL: func @transfer_read_1d_non_zero_addrspace
-//  CHECK-SAME: %[[BASE:[a-zA-Z0-9]*]]: i64) -> !llvm.vec<17 x f32>
+//  CHECK-SAME: %[[BASE:[a-zA-Z0-9]*]]: i64) -> vector<17xf32>
 //
 // 1. Check address space for GEP is correct.
 //       CHECK: %[[gep:.*]] = llvm.getelementptr {{.*}} :
 //  CHECK-SAME: (!llvm.ptr<f32, 3>, i64) -> !llvm.ptr<f32, 3>
 //       CHECK: %[[vecPtr:.*]] = llvm.addrspacecast %[[gep]] :
-//  CHECK-SAME: !llvm.ptr<f32, 3> to !llvm.ptr<vec<17 x f32>>
+//  CHECK-SAME: !llvm.ptr<f32, 3> to !llvm.ptr<vector<17xf32>>
 //
 // 2. Check address space of the memref is correct.
 //       CHECK: %[[DIM:.*]] = llvm.extractvalue %{{.*}}[3, 0] :
@@ -998,7 +998,7 @@ func @transfer_read_1d_non_zero_addrspace(%A : memref<?xf32, 3>, %base: index) -
 //       CHECK: %[[gep_b:.*]] = llvm.getelementptr {{.*}} :
 //  CHECK-SAME: (!llvm.ptr<f32, 3>, i64) -> !llvm.ptr<f32, 3>
 //       CHECK: %[[vecPtr_b:.*]] = llvm.addrspacecast %[[gep_b]] :
-//  CHECK-SAME: !llvm.ptr<f32, 3> to !llvm.ptr<vec<17 x f32>>
+//  CHECK-SAME: !llvm.ptr<f32, 3> to !llvm.ptr<vector<17xf32>>
 
 func @transfer_read_1d_not_masked(%A : memref<?xf32>, %base: index) -> vector<17xf32> {
   %f7 = constant 7.0: f32
@@ -1007,16 +1007,16 @@ func @transfer_read_1d_not_masked(%A : memref<?xf32>, %base: index) -> vector<17
   return %f: vector<17xf32>
 }
 // CHECK-LABEL: func @transfer_read_1d_not_masked
-//  CHECK-SAME: %[[BASE:[a-zA-Z0-9]*]]: i64) -> !llvm.vec<17 x f32>
+//  CHECK-SAME: %[[BASE:[a-zA-Z0-9]*]]: i64) -> vector<17xf32>
 //
 // 1. Bitcast to vector form.
 //       CHECK: %[[gep:.*]] = llvm.getelementptr {{.*}} :
 //  CHECK-SAME: (!llvm.ptr<f32>, i64) -> !llvm.ptr<f32>
 //       CHECK: %[[vecPtr:.*]] = llvm.bitcast %[[gep]] :
-//  CHECK-SAME: !llvm.ptr<f32> to !llvm.ptr<vec<17 x f32>>
+//  CHECK-SAME: !llvm.ptr<f32> to !llvm.ptr<vector<17xf32>>
 //
 // 2. Rewrite as a load.
-//       CHECK: %[[loaded:.*]] = llvm.load %[[vecPtr]] {alignment = 4 : i64} : !llvm.ptr<vec<17 x f32>>
+//       CHECK: %[[loaded:.*]] = llvm.load %[[vecPtr]] {alignment = 4 : i64} : !llvm.ptr<vector<17xf32>>
 
 func @transfer_read_1d_cast(%A : memref<?xi32>, %base: index) -> vector<12xi8> {
   %c0 = constant 0: i32
@@ -1025,24 +1025,24 @@ func @transfer_read_1d_cast(%A : memref<?xi32>, %base: index) -> vector<12xi8> {
   return %v: vector<12xi8>
 }
 // CHECK-LABEL: func @transfer_read_1d_cast
-//  CHECK-SAME: %[[BASE:[a-zA-Z0-9]*]]: i64) -> !llvm.vec<12 x i8>
+//  CHECK-SAME: %[[BASE:[a-zA-Z0-9]*]]: i64) -> vector<12xi8>
 //
 // 1. Bitcast to vector form.
 //       CHECK: %[[gep:.*]] = llvm.getelementptr {{.*}} :
 //  CHECK-SAME: (!llvm.ptr<i32>, i64) -> !llvm.ptr<i32>
 //       CHECK: %[[vecPtr:.*]] = llvm.bitcast %[[gep]] :
-//  CHECK-SAME: !llvm.ptr<i32> to !llvm.ptr<vec<12 x i8>>
+//  CHECK-SAME: !llvm.ptr<i32> to !llvm.ptr<vector<12xi8>>
 //
 // 2. Rewrite as a load.
-//       CHECK: %[[loaded:.*]] = llvm.load %[[vecPtr]] {alignment = 4 : i64} : !llvm.ptr<vec<12 x i8>>
+//       CHECK: %[[loaded:.*]] = llvm.load %[[vecPtr]] {alignment = 4 : i64} : !llvm.ptr<vector<12xi8>>
 
 func @genbool_1d() -> vector<8xi1> {
   %0 = vector.constant_mask [4] : vector<8xi1>
   return %0 : vector<8xi1>
 }
 // CHECK-LABEL: func @genbool_1d
-// CHECK: %[[C1:.*]] = llvm.mlir.constant(dense<[true, true, true, true, false, false, false, false]> : vector<8xi1>) : !llvm.vec<8 x i1>
-// CHECK: llvm.return %[[C1]] : !llvm.vec<8 x i1>
+// CHECK: %[[C1:.*]] = llvm.mlir.constant(dense<[true, true, true, true, false, false, false, false]> : vector<8xi1>) : vector<8xi1>
+// CHECK: llvm.return %[[C1]] : vector<8xi1>
 
 func @genbool_2d() -> vector<4x4xi1> {
   %v = vector.constant_mask [2, 2] : vector<4x4xi1>
@@ -1050,11 +1050,11 @@ func @genbool_2d() -> vector<4x4xi1> {
 }
 
 // CHECK-LABEL: func @genbool_2d
-// CHECK: %[[C1:.*]] = llvm.mlir.constant(dense<[true, true, false, false]> : vector<4xi1>) : !llvm.vec<4 x i1>
-// CHECK: %[[C2:.*]] = llvm.mlir.constant(dense<false> : vector<4x4xi1>) : !llvm.array<4 x vec<4 x i1>>
-// CHECK: %[[T0:.*]] = llvm.insertvalue %[[C1]], %[[C2]][0] : !llvm.array<4 x vec<4 x i1>>
-// CHECK: %[[T1:.*]] = llvm.insertvalue %[[C1]], %[[T0]][1] : !llvm.array<4 x vec<4 x i1>>
-// CHECK: llvm.return %[[T1]] : !llvm.array<4 x vec<4 x i1>>
+// CHECK: %[[C1:.*]] = llvm.mlir.constant(dense<[true, true, false, false]> : vector<4xi1>) : vector<4xi1>
+// CHECK: %[[C2:.*]] = llvm.mlir.constant(dense<false> : vector<4x4xi1>) : !llvm.array<4 x vector<4xi1>>
+// CHECK: %[[T0:.*]] = llvm.insertvalue %[[C1]], %[[C2]][0] : !llvm.array<4 x vector<4xi1>>
+// CHECK: %[[T1:.*]] = llvm.insertvalue %[[C1]], %[[T0]][1] : !llvm.array<4 x vector<4xi1>>
+// CHECK: llvm.return %[[T1]] : !llvm.array<4 x vector<4xi1>>
 
 func @flat_transpose(%arg0: vector<16xf32>) -> vector<16xf32> {
   %0 = vector.flat_transpose %arg0 { rows = 4: i32, columns = 4: i32 }
@@ -1063,11 +1063,11 @@ func @flat_transpose(%arg0: vector<16xf32>) -> vector<16xf32> {
 }
 
 // CHECK-LABEL: func @flat_transpose
-// CHECK-SAME:  %[[A:.*]]: !llvm.vec<16 x f32>
+// CHECK-SAME:  %[[A:.*]]: vector<16xf32>
 // CHECK:       %[[T:.*]] = llvm.intr.matrix.transpose %[[A]]
 // CHECK-SAME:      {columns = 4 : i32, rows = 4 : i32} :
-// CHECK-SAME:      !llvm.vec<16 x f32> into !llvm.vec<16 x f32>
-// CHECK:       llvm.return %[[T]] : !llvm.vec<16 x f32>
+// CHECK-SAME:      vector<16xf32> into vector<16xf32>
+// CHECK:       llvm.return %[[T]] : vector<16xf32>
 
 func @masked_load_op(%arg0: memref<?xf32>, %arg1: vector<16xi1>, %arg2: vector<16xf32>) -> vector<16xf32> {
   %c0 = constant 0: index
@@ -1078,9 +1078,9 @@ func @masked_load_op(%arg0: memref<?xf32>, %arg1: vector<16xi1>, %arg2: vector<1
 // CHECK-LABEL: func @masked_load_op
 // CHECK: %[[C:.*]] = llvm.mlir.constant(0 : index) : i64
 // CHECK: %[[P:.*]] = llvm.getelementptr %{{.*}}[%[[C]]] : (!llvm.ptr<f32>, i64) -> !llvm.ptr<f32>
-// CHECK: %[[B:.*]] = llvm.bitcast %[[P]] : !llvm.ptr<f32> to !llvm.ptr<vec<16 x f32>>
-// CHECK: %[[L:.*]] = llvm.intr.masked.load %[[B]], %{{.*}}, %{{.*}} {alignment = 4 : i32} : (!llvm.ptr<vec<16 x f32>>, !llvm.vec<16 x i1>, !llvm.vec<16 x f32>) -> !llvm.vec<16 x f32>
-// CHECK: llvm.return %[[L]] : !llvm.vec<16 x f32>
+// CHECK: %[[B:.*]] = llvm.bitcast %[[P]] : !llvm.ptr<f32> to !llvm.ptr<vector<16xf32>>
+// CHECK: %[[L:.*]] = llvm.intr.masked.load %[[B]], %{{.*}}, %{{.*}} {alignment = 4 : i32} : (!llvm.ptr<vector<16xf32>>, vector<16xi1>, vector<16xf32>) -> vector<16xf32>
+// CHECK: llvm.return %[[L]] : vector<16xf32>
 
 func @masked_store_op(%arg0: memref<?xf32>, %arg1: vector<16xi1>, %arg2: vector<16xf32>) {
   %c0 = constant 0: index
@@ -1091,8 +1091,8 @@ func @masked_store_op(%arg0: memref<?xf32>, %arg1: vector<16xi1>, %arg2: vector<
 // CHECK-LABEL: func @masked_store_op
 // CHECK: %[[C:.*]] = llvm.mlir.constant(0 : index) : i64
 // CHECK: %[[P:.*]] = llvm.getelementptr %{{.*}}[%[[C]]] : (!llvm.ptr<f32>, i64) -> !llvm.ptr<f32>
-// CHECK: %[[B:.*]] = llvm.bitcast %[[P]] : !llvm.ptr<f32> to !llvm.ptr<vec<16 x f32>>
-// CHECK: llvm.intr.masked.store %{{.*}}, %[[B]], %{{.*}} {alignment = 4 : i32} : !llvm.vec<16 x f32>, !llvm.vec<16 x i1> into !llvm.ptr<vec<16 x f32>>
+// CHECK: %[[B:.*]] = llvm.bitcast %[[P]] : !llvm.ptr<f32> to !llvm.ptr<vector<16xf32>>
+// CHECK: llvm.intr.masked.store %{{.*}}, %[[B]], %{{.*}} {alignment = 4 : i32} : vector<16xf32>, vector<16xi1> into !llvm.ptr<vector<16xf32>>
 // CHECK: llvm.return
 
 func @gather_op(%arg0: memref<?xf32>, %arg1: vector<3xi32>, %arg2: vector<3xi1>, %arg3: vector<3xf32>) -> vector<3xf32> {
@@ -1101,9 +1101,9 @@ func @gather_op(%arg0: memref<?xf32>, %arg1: vector<3xi32>, %arg2: vector<3xi1>,
 }
 
 // CHECK-LABEL: func @gather_op
-// CHECK: %[[P:.*]] = llvm.getelementptr {{.*}}[%{{.*}}] : (!llvm.ptr<f32>, !llvm.vec<3 x i32>) -> !llvm.vec<3 x ptr<f32>>
-// CHECK: %[[G:.*]] = llvm.intr.masked.gather %[[P]], %{{.*}}, %{{.*}} {alignment = 4 : i32} : (!llvm.vec<3 x ptr<f32>>, !llvm.vec<3 x i1>, !llvm.vec<3 x f32>) -> !llvm.vec<3 x f32>
-// CHECK: llvm.return %[[G]] : !llvm.vec<3 x f32>
+// CHECK: %[[P:.*]] = llvm.getelementptr {{.*}}[%{{.*}}] : (!llvm.ptr<f32>, vector<3xi32>) -> !llvm.vec<3 x ptr<f32>>
+// CHECK: %[[G:.*]] = llvm.intr.masked.gather %[[P]], %{{.*}}, %{{.*}} {alignment = 4 : i32} : (!llvm.vec<3 x ptr<f32>>, vector<3xi1>, vector<3xf32>) -> vector<3xf32>
+// CHECK: llvm.return %[[G]] : vector<3xf32>
 
 func @scatter_op(%arg0: memref<?xf32>, %arg1: vector<3xi32>, %arg2: vector<3xi1>, %arg3: vector<3xf32>) {
   vector.scatter %arg0[%arg1], %arg2, %arg3 : memref<?xf32>, vector<3xi32>, vector<3xi1>, vector<3xf32>
@@ -1111,8 +1111,8 @@ func @scatter_op(%arg0: memref<?xf32>, %arg1: vector<3xi32>, %arg2: vector<3xi1>
 }
 
 // CHECK-LABEL: func @scatter_op
-// CHECK: %[[P:.*]] = llvm.getelementptr {{.*}}[%{{.*}}] : (!llvm.ptr<f32>, !llvm.vec<3 x i32>) -> !llvm.vec<3 x ptr<f32>>
-// CHECK: llvm.intr.masked.scatter %{{.*}}, %[[P]], %{{.*}} {alignment = 4 : i32} : !llvm.vec<3 x f32>, !llvm.vec<3 x i1> into !llvm.vec<3 x ptr<f32>>
+// CHECK: %[[P:.*]] = llvm.getelementptr {{.*}}[%{{.*}}] : (!llvm.ptr<f32>, vector<3xi32>) -> !llvm.vec<3 x ptr<f32>>
+// CHECK: llvm.intr.masked.scatter %{{.*}}, %[[P]], %{{.*}} {alignment = 4 : i32} : vector<3xf32>, vector<3xi1> into !llvm.vec<3 x ptr<f32>>
 // CHECK: llvm.return
 
 func @expand_load_op(%arg0: memref<?xf32>, %arg1: vector<11xi1>, %arg2: vector<11xf32>) -> vector<11xf32> {
@@ -1124,8 +1124,8 @@ func @expand_load_op(%arg0: memref<?xf32>, %arg1: vector<11xi1>, %arg2: vector<1
 // CHECK-LABEL: func @expand_load_op
 // CHECK: %[[C:.*]] = llvm.mlir.constant(0 : index) : i64
 // CHECK: %[[P:.*]] = llvm.getelementptr %{{.*}}[%[[C]]] : (!llvm.ptr<f32>, i64) -> !llvm.ptr<f32>
-// CHECK: %[[E:.*]] = "llvm.intr.masked.expandload"(%[[P]], %{{.*}}, %{{.*}}) : (!llvm.ptr<f32>, !llvm.vec<11 x i1>, !llvm.vec<11 x f32>) -> !llvm.vec<11 x f32>
-// CHECK: llvm.return %[[E]] : !llvm.vec<11 x f32>
+// CHECK: %[[E:.*]] = "llvm.intr.masked.expandload"(%[[P]], %{{.*}}, %{{.*}}) : (!llvm.ptr<f32>, vector<11xi1>, vector<11xf32>) -> vector<11xf32>
+// CHECK: llvm.return %[[E]] : vector<11xf32>
 
 func @compress_store_op(%arg0: memref<?xf32>, %arg1: vector<11xi1>, %arg2: vector<11xf32>) {
   %c0 = constant 0: index
@@ -1136,5 +1136,5 @@ func @compress_store_op(%arg0: memref<?xf32>, %arg1: vector<11xi1>, %arg2: vecto
 // CHECK-LABEL: func @compress_store_op
 // CHECK: %[[C:.*]] = llvm.mlir.constant(0 : index) : i64
 // CHECK: %[[P:.*]] = llvm.getelementptr %{{.*}}[%[[C]]] : (!llvm.ptr<f32>, i64) -> !llvm.ptr<f32>
-// CHECK: "llvm.intr.masked.compressstore"(%{{.*}}, %[[P]], %{{.*}}) : (!llvm.vec<11 x f32>, !llvm.ptr<f32>, !llvm.vec<11 x i1>) -> ()
+// CHECK: "llvm.intr.masked.compressstore"(%{{.*}}, %[[P]], %{{.*}}) : (vector<11xf32>, !llvm.ptr<f32>, vector<11xi1>) -> ()
 // CHECK: llvm.return

diff  --git a/mlir/test/Conversion/VectorToROCDL/vector-to-rocdl.mlir b/mlir/test/Conversion/VectorToROCDL/vector-to-rocdl.mlir
index f8483dcc7f80..a6d994ad7db6 100644
--- a/mlir/test/Conversion/VectorToROCDL/vector-to-rocdl.mlir
+++ b/mlir/test/Conversion/VectorToROCDL/vector-to-rocdl.mlir
@@ -9,7 +9,7 @@ func @transfer_readx2(%A : memref<?xf32>, %base: index) -> vector<2xf32> {
   return %f: vector<2xf32>
 }
 // CHECK-LABEL: @transfer_readx2
-// CHECK: rocdl.buffer.load {{.*}} !llvm.vec<2 x f32>
+// CHECK: rocdl.buffer.load {{.*}} vector<2xf32>
 
 func @transfer_readx4(%A : memref<?xf32>, %base: index) -> vector<4xf32> {
   %f0 = constant 0.0: f32
@@ -19,7 +19,7 @@ func @transfer_readx4(%A : memref<?xf32>, %base: index) -> vector<4xf32> {
   return %f: vector<4xf32>
 }
 // CHECK-LABEL: @transfer_readx4
-// CHECK: rocdl.buffer.load {{.*}} !llvm.vec<4 x f32>
+// CHECK: rocdl.buffer.load {{.*}} vector<4xf32>
 
 func @transfer_read_dwordConfig(%A : memref<?xf32>, %base: index) -> vector<4xf32> {
   %f0 = constant 0.0: f32
@@ -43,7 +43,7 @@ func @transfer_writex2(%A : memref<?xf32>, %B : vector<2xf32>, %base: index) {
   return
 }
 // CHECK-LABEL: @transfer_writex2
-// CHECK: rocdl.buffer.store {{.*}} !llvm.vec<2 x f32>
+// CHECK: rocdl.buffer.store {{.*}} vector<2xf32>
 
 func @transfer_writex4(%A : memref<?xf32>, %B : vector<4xf32>, %base: index) {
   vector.transfer_write %B, %A[%base]
@@ -52,7 +52,7 @@ func @transfer_writex4(%A : memref<?xf32>, %B : vector<4xf32>, %base: index) {
   return
 }
 // CHECK-LABEL: @transfer_writex4
-// CHECK: rocdl.buffer.store {{.*}} !llvm.vec<4 x f32>
+// CHECK: rocdl.buffer.store {{.*}} vector<4xf32>
 
 func @transfer_write_dwordConfig(%A : memref<?xf32>, %B : vector<2xf32>, %base: index) {
   vector.transfer_write %B, %A[%base]

diff  --git a/mlir/test/Dialect/LLVMIR/dialect-cast.mlir b/mlir/test/Dialect/LLVMIR/dialect-cast.mlir
index fb05a7060a2a..90eaaa24544f 100644
--- a/mlir/test/Dialect/LLVMIR/dialect-cast.mlir
+++ b/mlir/test/Dialect/LLVMIR/dialect-cast.mlir
@@ -9,7 +9,6 @@ func @mlir_dialect_cast(%0: index, %1: i32, %2: bf16, %3: f16, %4: f32, %5: f64,
                         %10: memref<*xf32>) {
   llvm.mlir.cast %0 : index to i64
   llvm.mlir.cast %0 : index to i32
-  llvm.mlir.cast %6 : vector<42xf32> to !llvm.vec<42xf32>
   llvm.mlir.cast %7 : memref<42xf32> to !llvm.ptr<f32>
   llvm.mlir.cast %7 : memref<42xf32> to !llvm.struct<(ptr<f32>, ptr<f32>, i64, array<1xi64>, array<1xi64>)>
   llvm.mlir.cast %8 : memref<?xf32> to !llvm.struct<(ptr<f32>, ptr<f32>, i64, array<1xi64>, array<1xi64>)>
@@ -72,27 +71,13 @@ func @mlir_dialect_cast_integer_non_integer(%0 : i16) {
 
 // -----
 
-func @mlir_dialect_cast_nd_vector(%0 : vector<2x2xf32>) {
-  // expected-error@+1 {{only 1-d vector is allowed}}
-  llvm.mlir.cast %0 : vector<2x2xf32> to !llvm.vec<4xf32>
-}
-
-// -----
-
 func @mlir_dialect_cast_scalable_vector(%0 : vector<2xf32>) {
-  // expected-error@+1 {{only fixed-sized vector is allowed}}
+  // expected-error@+1 {{vector types should not be casted}}
   llvm.mlir.cast %0 : vector<2xf32> to !llvm.vec<?x2xf32>
 }
 
 // -----
 
-func @mlir_dialect_cast_vector_size_mismatch(%0 : vector<2xf32>) {
-  // expected-error@+1 {{invalid cast between vectors with mismatching sizes}}
-  llvm.mlir.cast %0 : vector<2xf32> to !llvm.vec<4xf32>
-}
-
-// -----
-
 func @mlir_dialect_cast_dynamic_memref_bare_ptr(%0 : memref<?xf32>) {
   // expected-error@+1 {{unexpected bare pointer for dynamically shaped memref}}
   llvm.mlir.cast %0 : memref<?xf32> to !llvm.ptr<f32>

diff  --git a/mlir/test/Dialect/LLVMIR/invalid.mlir b/mlir/test/Dialect/LLVMIR/invalid.mlir
index b496237f140b..bb3a81fa6576 100644
--- a/mlir/test/Dialect/LLVMIR/invalid.mlir
+++ b/mlir/test/Dialect/LLVMIR/invalid.mlir
@@ -317,21 +317,21 @@ func @extractvalue_wrong_nesting() {
 
 // -----
 
-func @invalid_vector_type_1(%arg0: !llvm.vec<4 x f32>, %arg1: i32, %arg2: f32) {
-  // expected-error@+1 {{expected LLVM IR dialect vector type for operand #1}}
+func @invalid_vector_type_1(%arg0: vector<4xf32>, %arg1: i32, %arg2: f32) {
+  // expected-error@+1 {{expected LLVM dialect-compatible vector type for operand #1}}
   %0 = llvm.extractelement %arg2[%arg1 : i32] : f32
 }
 
 // -----
 
-func @invalid_vector_type_2(%arg0: !llvm.vec<4 x f32>, %arg1: i32, %arg2: f32) {
-  // expected-error@+1 {{expected LLVM IR dialect vector type for operand #1}}
+func @invalid_vector_type_2(%arg0: vector<4xf32>, %arg1: i32, %arg2: f32) {
+  // expected-error@+1 {{expected LLVM dialect-compatible vector type for operand #1}}
   %0 = llvm.insertelement %arg2, %arg2[%arg1 : i32] : f32
 }
 
 // -----
 
-func @invalid_vector_type_3(%arg0: !llvm.vec<4 x f32>, %arg1: i32, %arg2: f32) {
+func @invalid_vector_type_3(%arg0: vector<4xf32>, %arg1: i32, %arg2: f32) {
   // expected-error@+1 {{expected LLVM IR dialect vector type for operand #1}}
   %0 = llvm.shufflevector %arg2, %arg2 [0 : i32, 0 : i32, 0 : i32, 0 : i32, 7 : i32] : f32, f32
 }
@@ -366,74 +366,74 @@ func @nvvm_invalid_shfl_pred_3(%arg0 : i32, %arg1 : i32, %arg2 : i32, %arg3 : i3
 
 // -----
 
-func @nvvm_invalid_mma_0(%a0 : f16, %a1 : !llvm.vec<2 x f16>,
-                         %b0 : !llvm.vec<2 x f16>, %b1 : !llvm.vec<2 x f16>,
+func @nvvm_invalid_mma_0(%a0 : f16, %a1 : vector<2xf16>,
+                         %b0 : vector<2xf16>, %b1 : vector<2xf16>,
                          %c0 : f32, %c1 : f32, %c2 : f32, %c3 : f32,
                          %c4 : f32, %c5 : f32, %c6 : f32, %c7 : f32) {
   // expected-error@+1 {{expected operands to be 4 <halfx2>s followed by either 4 <halfx2>s or 8 floats}}
-  %0 = nvvm.mma.sync %a0, %a1, %b0, %b1, %c0, %c1, %c2, %c3, %c4, %c5, %c6, %c7 {alayout="row", blayout="col"} : (f16, !llvm.vec<2 x f16>, !llvm.vec<2 x f16>, !llvm.vec<2 x f16>, f32, f32, f32, f32, f32, f32, f32, f32) -> !llvm.struct<(f32, f32, f32, f32, f32, f32, f32, f32)>
+  %0 = nvvm.mma.sync %a0, %a1, %b0, %b1, %c0, %c1, %c2, %c3, %c4, %c5, %c6, %c7 {alayout="row", blayout="col"} : (f16, vector<2xf16>, vector<2xf16>, vector<2xf16>, f32, f32, f32, f32, f32, f32, f32, f32) -> !llvm.struct<(f32, f32, f32, f32, f32, f32, f32, f32)>
   llvm.return %0 : !llvm.struct<(f32, f32, f32, f32, f32, f32, f32, f32)>
 }
 
 // -----
 
-func @nvvm_invalid_mma_1(%a0 : !llvm.vec<2 x f16>, %a1 : !llvm.vec<2 x f16>,
-                         %b0 : !llvm.vec<2 x f16>, %b1 : !llvm.vec<2 x f16>,
+func @nvvm_invalid_mma_1(%a0 : vector<2xf16>, %a1 : vector<2xf16>,
+                         %b0 : vector<2xf16>, %b1 : vector<2xf16>,
                          %c0 : f32, %c1 : f32, %c2 : f32, %c3 : f32,
                          %c4 : f32, %c5 : f32, %c6 : f32, %c7 : f32) {
   // expected-error@+1 {{expected result type to be a struct of either 4 <halfx2>s or 8 floats}}
-  %0 = nvvm.mma.sync %a0, %a1, %b0, %b1, %c0, %c1, %c2, %c3, %c4, %c5, %c6, %c7 {alayout="row", blayout="col"} : (!llvm.vec<2 x f16>, !llvm.vec<2 x f16>, !llvm.vec<2 x f16>, !llvm.vec<2 x f16>, f32, f32, f32, f32, f32, f32, f32, f32) -> !llvm.struct<(f32, f32, f32, f32, f32, f32, f32, f16)>
+  %0 = nvvm.mma.sync %a0, %a1, %b0, %b1, %c0, %c1, %c2, %c3, %c4, %c5, %c6, %c7 {alayout="row", blayout="col"} : (vector<2xf16>, vector<2xf16>, vector<2xf16>, vector<2xf16>, f32, f32, f32, f32, f32, f32, f32, f32) -> !llvm.struct<(f32, f32, f32, f32, f32, f32, f32, f16)>
   llvm.return %0 : !llvm.struct<(f32, f32, f32, f32, f32, f32, f32, f16)>
 }
 
 // -----
 
-func @nvvm_invalid_mma_2(%a0 : !llvm.vec<2 x f16>, %a1 : !llvm.vec<2 x f16>,
-                         %b0 : !llvm.vec<2 x f16>, %b1 : !llvm.vec<2 x f16>,
+func @nvvm_invalid_mma_2(%a0 : vector<2xf16>, %a1 : vector<2xf16>,
+                         %b0 : vector<2xf16>, %b1 : vector<2xf16>,
                          %c0 : f32, %c1 : f32, %c2 : f32, %c3 : f32,
                          %c4 : f32, %c5 : f32, %c6 : f32, %c7 : f32) {
   // expected-error@+1 {{alayout and blayout attributes must be set to either "row" or "col"}}
-  %0 = nvvm.mma.sync %a0, %a1, %b0, %b1, %c0, %c1, %c2, %c3, %c4, %c5, %c6, %c7 : (!llvm.vec<2 x f16>, !llvm.vec<2 x f16>, !llvm.vec<2 x f16>, !llvm.vec<2 x f16>, f32, f32, f32, f32, f32, f32, f32, f32) -> !llvm.struct<(f32, f32, f32, f32, f32, f32, f32, f32)>
+  %0 = nvvm.mma.sync %a0, %a1, %b0, %b1, %c0, %c1, %c2, %c3, %c4, %c5, %c6, %c7 : (vector<2xf16>, vector<2xf16>, vector<2xf16>, vector<2xf16>, f32, f32, f32, f32, f32, f32, f32, f32) -> !llvm.struct<(f32, f32, f32, f32, f32, f32, f32, f32)>
   llvm.return %0 : !llvm.struct<(f32, f32, f32, f32, f32, f32, f32, f32)>
 }
 
 // -----
 
-func @nvvm_invalid_mma_3(%a0 : !llvm.vec<2 x f16>, %a1 : !llvm.vec<2 x f16>,
-                         %b0 : !llvm.vec<2 x f16>, %b1 : !llvm.vec<2 x f16>,
-                         %c0 : !llvm.vec<2 x f16>, %c1 : !llvm.vec<2 x f16>,
-                         %c2 : !llvm.vec<2 x f16>, %c3 : !llvm.vec<2 x f16>) {
+func @nvvm_invalid_mma_3(%a0 : vector<2xf16>, %a1 : vector<2xf16>,
+                         %b0 : vector<2xf16>, %b1 : vector<2xf16>,
+                         %c0 : vector<2xf16>, %c1 : vector<2xf16>,
+                         %c2 : vector<2xf16>, %c3 : vector<2xf16>) {
   // expected-error@+1 {{unimplemented mma.sync variant}}
-  %0 = nvvm.mma.sync %a0, %a1, %b0, %b1, %c0, %c1, %c2, %c3 {alayout="row", blayout="col"} : (!llvm.vec<2 x f16>, !llvm.vec<2 x f16>, !llvm.vec<2 x f16>, !llvm.vec<2 x f16>, !llvm.vec<2 x f16>, !llvm.vec<2 x f16>, !llvm.vec<2 x f16>, !llvm.vec<2 x f16>) -> !llvm.struct<(f32, f32, f32, f32, f32, f32, f32, f32)>
+  %0 = nvvm.mma.sync %a0, %a1, %b0, %b1, %c0, %c1, %c2, %c3 {alayout="row", blayout="col"} : (vector<2xf16>, vector<2xf16>, vector<2xf16>, vector<2xf16>, vector<2xf16>, vector<2xf16>, vector<2xf16>, vector<2xf16>) -> !llvm.struct<(f32, f32, f32, f32, f32, f32, f32, f32)>
   llvm.return %0 : !llvm.struct<(f32, f32, f32, f32, f32, f32, f32, f32)>
 }
 
 // -----
 
-func @nvvm_invalid_mma_4(%a0 : !llvm.vec<2 x f16>, %a1 : !llvm.vec<2 x f16>,
-                         %b0 : !llvm.vec<2 x f16>, %b1 : !llvm.vec<2 x f16>,
+func @nvvm_invalid_mma_4(%a0 : vector<2xf16>, %a1 : vector<2xf16>,
+                         %b0 : vector<2xf16>, %b1 : vector<2xf16>,
                          %c0 : f32, %c1 : f32, %c2 : f32, %c3 : f32,
                          %c4 : f32, %c5 : f32, %c6 : f32, %c7 : f32) {
   // expected-error@+1 {{unimplemented mma.sync variant}}
-  %0 = nvvm.mma.sync %a0, %a1, %b0, %b1, %c0, %c1, %c2, %c3, %c4, %c5, %c6, %c7 {alayout="row", blayout="col"} : (!llvm.vec<2 x f16>, !llvm.vec<2 x f16>, !llvm.vec<2 x f16>, !llvm.vec<2 x f16>, f32, f32, f32, f32, f32, f32, f32, f32) -> !llvm.struct<(vec<2 x f16>, vec<2 x f16>, vec<2 x f16>, vec<2 x f16>)>
-  llvm.return %0 : !llvm.struct<(vec<2 x f16>, vec<2 x f16>, vec<2 x f16>, vec<2 x f16>)>
+  %0 = nvvm.mma.sync %a0, %a1, %b0, %b1, %c0, %c1, %c2, %c3, %c4, %c5, %c6, %c7 {alayout="row", blayout="col"} : (vector<2xf16>, vector<2xf16>, vector<2xf16>, vector<2xf16>, f32, f32, f32, f32, f32, f32, f32, f32) -> !llvm.struct<(vector<2xf16>, vector<2xf16>, vector<2xf16>, vector<2xf16>)>
+  llvm.return %0 : !llvm.struct<(vector<2xf16>, vector<2xf16>, vector<2xf16>, vector<2xf16>)>
 }
 
 // -----
 
-func @nvvm_invalid_mma_5(%a0 : !llvm.vec<2 x f16>, %a1 : !llvm.vec<2 x f16>,
-                         %b0 : !llvm.vec<2 x f16>, %b1 : !llvm.vec<2 x f16>,
+func @nvvm_invalid_mma_5(%a0 : vector<2xf16>, %a1 : vector<2xf16>,
+                         %b0 : vector<2xf16>, %b1 : vector<2xf16>,
                          %c0 : f32, %c1 : f32, %c2 : f32, %c3 : f32,
                          %c4 : f32, %c5 : f32, %c6 : f32, %c7 : f32) {
   // expected-error@+1 {{unimplemented mma.sync variant}}
-  %0 = nvvm.mma.sync %a0, %a1, %b0, %b1, %c0, %c1, %c2, %c3, %c4, %c5, %c6, %c7 {alayout="col", blayout="row"} : (!llvm.vec<2 x f16>, !llvm.vec<2 x f16>, !llvm.vec<2 x f16>, !llvm.vec<2 x f16>, f32, f32, f32, f32, f32, f32, f32, f32) -> !llvm.struct<(f32, f32, f32, f32, f32, f32, f32, f32)>
+  %0 = nvvm.mma.sync %a0, %a1, %b0, %b1, %c0, %c1, %c2, %c3, %c4, %c5, %c6, %c7 {alayout="col", blayout="row"} : (vector<2xf16>, vector<2xf16>, vector<2xf16>, vector<2xf16>, f32, f32, f32, f32, f32, f32, f32, f32) -> !llvm.struct<(f32, f32, f32, f32, f32, f32, f32, f32)>
   llvm.return %0 : !llvm.struct<(f32, f32, f32, f32, f32, f32, f32, f32)>
 }
 
 // -----
 
-func @nvvm_invalid_mma_6(%a0 : !llvm.vec<2 x f16>, %a1 : !llvm.vec<2 x f16>,
-                         %b0 : !llvm.vec<2 x f16>, %b1 : !llvm.vec<2 x f16>,
+func @nvvm_invalid_mma_6(%a0 : vector<2xf16>, %a1 : vector<2xf16>,
+                         %b0 : vector<2xf16>, %b1 : vector<2xf16>,
                          %c0 : f32, %c1 : f32, %c2 : f32, %c3 : f32,
                          %c4 : f32, %c5 : f32, %c6 : f32, %c7 : f32) {
   // expected-error@+1 {{invalid kind of type specified}}
@@ -443,12 +443,12 @@ func @nvvm_invalid_mma_6(%a0 : !llvm.vec<2 x f16>, %a1 : !llvm.vec<2 x f16>,
 
 // -----
 
-func @nvvm_invalid_mma_7(%a0 : !llvm.vec<2 x f16>, %a1 : !llvm.vec<2 x f16>,
-                         %b0 : !llvm.vec<2 x f16>, %b1 : !llvm.vec<2 x f16>,
+func @nvvm_invalid_mma_7(%a0 : vector<2xf16>, %a1 : vector<2xf16>,
+                         %b0 : vector<2xf16>, %b1 : vector<2xf16>,
                          %c0 : f32, %c1 : f32, %c2 : f32, %c3 : f32,
                          %c4 : f32, %c5 : f32, %c6 : f32, %c7 : f32) {
   // expected-error@+1 {{op requires one result}}
-  %0:2 = nvvm.mma.sync %a0, %a1, %b0, %b1, %c0, %c1, %c2, %c3, %c4, %c5, %c6, %c7 {alayout="col", blayout="row"} : (!llvm.vec<2 x f16>, !llvm.vec<2 x f16>, !llvm.vec<2 x f16>, !llvm.vec<2 x f16>, f32, f32, f32, f32, f32, f32, f32, f32) -> (!llvm.struct<(f32, f32, f32, f32, f32, f32, f32, f32)>, i32)
+  %0:2 = nvvm.mma.sync %a0, %a1, %b0, %b1, %c0, %c1, %c2, %c3, %c4, %c5, %c6, %c7 {alayout="col", blayout="row"} : (vector<2xf16>, vector<2xf16>, vector<2xf16>, vector<2xf16>, f32, f32, f32, f32, f32, f32, f32, f32) -> (!llvm.struct<(f32, f32, f32, f32, f32, f32, f32, f32)>, i32)
   llvm.return %0#0 : !llvm.struct<(f32, f32, f32, f32, f32, f32, f32, f32)>
 }
 

diff  --git a/mlir/test/Dialect/LLVMIR/nvvm.mlir b/mlir/test/Dialect/LLVMIR/nvvm.mlir
index ba0543e18cbc..545364dc0732 100644
--- a/mlir/test/Dialect/LLVMIR/nvvm.mlir
+++ b/mlir/test/Dialect/LLVMIR/nvvm.mlir
@@ -60,11 +60,11 @@ func @nvvm_vote(%arg0 : i32, %arg1 : i1) -> i32 {
   llvm.return %0 : i32
 }
 
-func @nvvm_mma(%a0 : !llvm.vec<2 x f16>, %a1 : !llvm.vec<2 x f16>,
-               %b0 : !llvm.vec<2 x f16>, %b1 : !llvm.vec<2 x f16>,
+func @nvvm_mma(%a0 : vector<2xf16>, %a1 : vector<2xf16>,
+               %b0 : vector<2xf16>, %b1 : vector<2xf16>,
                %c0 : f32, %c1 : f32, %c2 : f32, %c3 : f32,
                %c4 : f32, %c5 : f32, %c6 : f32, %c7 : f32) {
-  // CHECK: nvvm.mma.sync {{.*}} {alayout = "row", blayout = "col"} : (!llvm.vec<2 x f16>, !llvm.vec<2 x f16>, !llvm.vec<2 x f16>, !llvm.vec<2 x f16>, f32, f32, f32, f32, f32, f32, f32, f32) -> !llvm.struct<(f32, f32, f32, f32, f32, f32, f32, f32)>
-  %0 = nvvm.mma.sync %a0, %a1, %b0, %b1, %c0, %c1, %c2, %c3, %c4, %c5, %c6, %c7 {alayout="row", blayout="col"} : (!llvm.vec<2 x f16>, !llvm.vec<2 x f16>, !llvm.vec<2 x f16>, !llvm.vec<2 x f16>, f32, f32, f32, f32, f32, f32, f32, f32) -> !llvm.struct<(f32, f32, f32, f32, f32, f32, f32, f32)>
+  // CHECK: nvvm.mma.sync {{.*}} {alayout = "row", blayout = "col"} : (vector<2xf16>, vector<2xf16>, vector<2xf16>, vector<2xf16>, f32, f32, f32, f32, f32, f32, f32, f32) -> !llvm.struct<(f32, f32, f32, f32, f32, f32, f32, f32)>
+  %0 = nvvm.mma.sync %a0, %a1, %b0, %b1, %c0, %c1, %c2, %c3, %c4, %c5, %c6, %c7 {alayout="row", blayout="col"} : (vector<2xf16>, vector<2xf16>, vector<2xf16>, vector<2xf16>, f32, f32, f32, f32, f32, f32, f32, f32) -> !llvm.struct<(f32, f32, f32, f32, f32, f32, f32, f32)>
   llvm.return %0 : !llvm.struct<(f32, f32, f32, f32, f32, f32, f32, f32)>
 }

diff  --git a/mlir/test/Dialect/LLVMIR/rocdl.mlir b/mlir/test/Dialect/LLVMIR/rocdl.mlir
index c5314ab4d128..31a56bede3bd 100644
--- a/mlir/test/Dialect/LLVMIR/rocdl.mlir
+++ b/mlir/test/Dialect/LLVMIR/rocdl.mlir
@@ -36,133 +36,133 @@ func @rocdl.barrier() {
 }
 
 func @rocdl.xdlops(%arg0 : f32, %arg1 : f32,
-                   %arg2 : !llvm.vec<32 x f32>, %arg3 : i32,
-                   %arg4 : !llvm.vec<16 x f32>, %arg5 : !llvm.vec<4 x f32>,
-                   %arg6 : !llvm.vec<4 x f16>, %arg7 : !llvm.vec<32 x i32>,
-                   %arg8 : !llvm.vec<16 x i32>, %arg9 : !llvm.vec<4 x i32>,
-                   %arg10 : !llvm.vec<2 x i16>) -> !llvm.vec<32 x f32> {
+                   %arg2 : vector<32xf32>, %arg3 : i32,
+                   %arg4 : vector<16xf32>, %arg5 : vector<4xf32>,
+                   %arg6 : vector<4xf16>, %arg7 : vector<32xi32>,
+                   %arg8 : vector<16xi32>, %arg9 : vector<4xi32>,
+                   %arg10 : vector<2xi16>) -> vector<32xf32> {
   // CHECK-LABEL: rocdl.xdlops
-  // CHECK: rocdl.mfma.f32.32x32x1f32 {{.*}} : (f32, f32, !llvm.vec<32 x f32>, i32, i32, i32) -> !llvm.vec<32 x f32>
+  // CHECK: rocdl.mfma.f32.32x32x1f32 {{.*}} : (f32, f32, vector<32xf32>, i32, i32, i32) -> vector<32xf32>
   %r0 = rocdl.mfma.f32.32x32x1f32 %arg0, %arg1, %arg2, %arg3, %arg3, %arg3 :
-                            (f32, f32, !llvm.vec<32 x f32>,
-                            i32, i32, i32) -> !llvm.vec<32 x f32>
+                            (f32, f32, vector<32xf32>,
+                            i32, i32, i32) -> vector<32xf32>
 
-  // CHECK: rocdl.mfma.f32.16x16x1f32 {{.*}} : (f32, f32, !llvm.vec<16 x f32>, i32, i32, i32) -> !llvm.vec<16 x f32>
+  // CHECK: rocdl.mfma.f32.16x16x1f32 {{.*}} : (f32, f32, vector<16xf32>, i32, i32, i32) -> vector<16xf32>
   %r1 = rocdl.mfma.f32.16x16x1f32 %arg0, %arg1, %arg4, %arg3, %arg3, %arg3 :
-                            (f32, f32, !llvm.vec<16 x f32>,
-                            i32, i32, i32) -> !llvm.vec<16 x f32>
+                            (f32, f32, vector<16xf32>,
+                            i32, i32, i32) -> vector<16xf32>
 
-  // CHECK: rocdl.mfma.f32.16x16x4f32 {{.*}} : (f32, f32, !llvm.vec<4 x f32>, i32, i32, i32) -> !llvm.vec<4 x f32>
+  // CHECK: rocdl.mfma.f32.16x16x4f32 {{.*}} : (f32, f32, vector<4xf32>, i32, i32, i32) -> vector<4xf32>
   %r2 = rocdl.mfma.f32.16x16x4f32 %arg0, %arg1, %arg5, %arg3, %arg3, %arg3 :
-                            (f32, f32, !llvm.vec<4 x f32>,
-                            i32, i32, i32) -> !llvm.vec<4 x f32>
+                            (f32, f32, vector<4xf32>,
+                            i32, i32, i32) -> vector<4xf32>
 
-  // CHECK: rocdl.mfma.f32.4x4x1f32 {{.*}} : (f32, f32, !llvm.vec<4 x f32>, i32, i32, i32) -> !llvm.vec<4 x f32>
+  // CHECK: rocdl.mfma.f32.4x4x1f32 {{.*}} : (f32, f32, vector<4xf32>, i32, i32, i32) -> vector<4xf32>
   %r3 = rocdl.mfma.f32.4x4x1f32 %arg0, %arg1, %arg5, %arg3, %arg3, %arg3 :
-                            (f32, f32, !llvm.vec<4 x f32>,
-                            i32, i32, i32) -> !llvm.vec<4 x f32>
+                            (f32, f32, vector<4xf32>,
+                            i32, i32, i32) -> vector<4xf32>
 
-  // CHECK: rocdl.mfma.f32.32x32x2f32 {{.*}} : (f32, f32, !llvm.vec<16 x f32>, i32, i32, i32) -> !llvm.vec<16 x f32>
+  // CHECK: rocdl.mfma.f32.32x32x2f32 {{.*}} : (f32, f32, vector<16xf32>, i32, i32, i32) -> vector<16xf32>
   %r4= rocdl.mfma.f32.32x32x2f32 %arg0, %arg1, %arg4, %arg3, %arg3, %arg3 :
-                            (f32, f32, !llvm.vec<16 x f32>,
-                            i32, i32, i32) -> !llvm.vec<16 x f32>
+                            (f32, f32, vector<16xf32>,
+                            i32, i32, i32) -> vector<16xf32>
 
-  // CHECK: rocdl.mfma.f32.32x32x4f16 {{.*}} : (!llvm.vec<4 x f16>, !llvm.vec<4 x f16>, !llvm.vec<32 x f32>, i32, i32, i32) -> !llvm.vec<32 x f32>
+  // CHECK: rocdl.mfma.f32.32x32x4f16 {{.*}} : (vector<4xf16>, vector<4xf16>, vector<32xf32>, i32, i32, i32) -> vector<32xf32>
   %r5 = rocdl.mfma.f32.32x32x4f16 %arg6, %arg6, %arg2, %arg3, %arg3, %arg3 :
-                            (!llvm.vec<4 x f16>, !llvm.vec<4 x f16>, !llvm.vec<32 x f32>,
-                            i32, i32, i32) -> !llvm.vec<32 x f32>
+                            (vector<4xf16>, vector<4xf16>, vector<32xf32>,
+                            i32, i32, i32) -> vector<32xf32>
 
-  // CHECK: rocdl.mfma.f32.16x16x4f16 {{.*}} : (!llvm.vec<4 x f16>, !llvm.vec<4 x f16>, !llvm.vec<16 x f32>, i32, i32, i32) -> !llvm.vec<16 x f32>
+  // CHECK: rocdl.mfma.f32.16x16x4f16 {{.*}} : (vector<4xf16>, vector<4xf16>, vector<16xf32>, i32, i32, i32) -> vector<16xf32>
   %r6 = rocdl.mfma.f32.16x16x4f16 %arg6, %arg6, %arg4, %arg3, %arg3, %arg3 :
-                            (!llvm.vec<4 x f16>, !llvm.vec<4 x f16>, !llvm.vec<16 x f32>,
-                            i32, i32, i32) -> !llvm.vec<16 x f32>
+                            (vector<4xf16>, vector<4xf16>, vector<16xf32>,
+                            i32, i32, i32) -> vector<16xf32>
 
-  // CHECK: rocdl.mfma.f32.4x4x4f16 {{.*}} : (!llvm.vec<4 x f16>, !llvm.vec<4 x f16>, !llvm.vec<4 x f32>, i32, i32, i32) -> !llvm.vec<4 x f32>
+  // CHECK: rocdl.mfma.f32.4x4x4f16 {{.*}} : (vector<4xf16>, vector<4xf16>, vector<4xf32>, i32, i32, i32) -> vector<4xf32>
   %r7 = rocdl.mfma.f32.4x4x4f16 %arg6, %arg6, %arg5, %arg3, %arg3, %arg3 :
-                            (!llvm.vec<4 x f16>, !llvm.vec<4 x f16>, !llvm.vec<4 x f32>,
-                            i32, i32, i32) -> !llvm.vec<4 x f32>
+                            (vector<4xf16>, vector<4xf16>, vector<4xf32>,
+                            i32, i32, i32) -> vector<4xf32>
 
-  // CHECK: rocdl.mfma.f32.32x32x8f16 {{.*}} : (!llvm.vec<4 x f16>, !llvm.vec<4 x f16>, !llvm.vec<16 x f32>, i32, i32, i32) -> !llvm.vec<16 x f32>
+  // CHECK: rocdl.mfma.f32.32x32x8f16 {{.*}} : (vector<4xf16>, vector<4xf16>, vector<16xf32>, i32, i32, i32) -> vector<16xf32>
   %r8 = rocdl.mfma.f32.32x32x8f16 %arg6, %arg6, %arg4, %arg3, %arg3, %arg3 :
-                            (!llvm.vec<4 x f16>, !llvm.vec<4 x f16>, !llvm.vec<16 x f32>,
-                            i32, i32, i32) -> !llvm.vec<16 x f32>
+                            (vector<4xf16>, vector<4xf16>, vector<16xf32>,
+                            i32, i32, i32) -> vector<16xf32>
 
-  // CHECK: rocdl.mfma.f32.16x16x16f16 {{.*}} : (!llvm.vec<4 x f16>, !llvm.vec<4 x f16>, !llvm.vec<4 x f32>, i32, i32, i32) -> !llvm.vec<4 x f32>
+  // CHECK: rocdl.mfma.f32.16x16x16f16 {{.*}} : (vector<4xf16>, vector<4xf16>, vector<4xf32>, i32, i32, i32) -> vector<4xf32>
   %r9 = rocdl.mfma.f32.16x16x16f16 %arg6, %arg6, %arg5, %arg3, %arg3, %arg3 :
-                            (!llvm.vec<4 x f16>, !llvm.vec<4 x f16>, !llvm.vec<4 x f32>,
-                            i32, i32, i32) -> !llvm.vec<4 x f32>
+                            (vector<4xf16>, vector<4xf16>, vector<4xf32>,
+                            i32, i32, i32) -> vector<4xf32>
 
-  // CHECK: rocdl.mfma.i32.32x32x4i8 {{.*}} : (i32, i32, !llvm.vec<32 x i32>, i32, i32, i32) -> !llvm.vec<32 x i32>
+  // CHECK: rocdl.mfma.i32.32x32x4i8 {{.*}} : (i32, i32, vector<32xi32>, i32, i32, i32) -> vector<32xi32>
   %r10 = rocdl.mfma.i32.32x32x4i8 %arg3, %arg3, %arg7, %arg3, %arg3, %arg3 :
-                            (i32, i32, !llvm.vec<32 x i32>,
-                            i32, i32, i32) -> !llvm.vec<32 x i32>
+                            (i32, i32, vector<32xi32>,
+                            i32, i32, i32) -> vector<32xi32>
 
-  // CHECK: rocdl.mfma.i32.16x16x4i8 {{.*}} : (i32, i32, !llvm.vec<16 x i32>, i32, i32, i32) -> !llvm.vec<16 x i32>
+  // CHECK: rocdl.mfma.i32.16x16x4i8 {{.*}} : (i32, i32, vector<16xi32>, i32, i32, i32) -> vector<16xi32>
   %r11 = rocdl.mfma.i32.16x16x4i8 %arg3, %arg3, %arg8, %arg3, %arg3, %arg3 :
-                            (i32, i32, !llvm.vec<16 x i32>,
-                            i32, i32, i32) -> !llvm.vec<16 x i32>
+                            (i32, i32, vector<16xi32>,
+                            i32, i32, i32) -> vector<16xi32>
 
-  // CHECK: rocdl.mfma.i32.4x4x4i8 {{.*}} : (i32, i32, !llvm.vec<4 x i32>, i32, i32, i32) -> !llvm.vec<4 x i32>
+  // CHECK: rocdl.mfma.i32.4x4x4i8 {{.*}} : (i32, i32, vector<4xi32>, i32, i32, i32) -> vector<4xi32>
   %r12 = rocdl.mfma.i32.4x4x4i8 %arg3, %arg3, %arg9, %arg3, %arg3, %arg3 :
-                            (i32, i32, !llvm.vec<4 x i32>,
-                            i32, i32, i32) -> !llvm.vec<4 x i32>
+                            (i32, i32, vector<4xi32>,
+                            i32, i32, i32) -> vector<4xi32>
 
-  // CHECK: rocdl.mfma.i32.32x32x8i8 {{.*}} : (i32, i32, !llvm.vec<16 x i32>, i32, i32, i32) -> !llvm.vec<16 x i32>
+  // CHECK: rocdl.mfma.i32.32x32x8i8 {{.*}} : (i32, i32, vector<16xi32>, i32, i32, i32) -> vector<16xi32>
   %r13 = rocdl.mfma.i32.32x32x8i8 %arg3, %arg3, %arg8, %arg3, %arg3, %arg3 :
-                            (i32, i32, !llvm.vec<16 x i32>,
-                            i32, i32, i32) -> !llvm.vec<16 x i32>
+                            (i32, i32, vector<16xi32>,
+                            i32, i32, i32) -> vector<16xi32>
 
-  // CHECK: rocdl.mfma.i32.16x16x16i8 {{.*}} : (i32, i32, !llvm.vec<4 x i32>, i32, i32, i32) -> !llvm.vec<4 x i32>
+  // CHECK: rocdl.mfma.i32.16x16x16i8 {{.*}} : (i32, i32, vector<4xi32>, i32, i32, i32) -> vector<4xi32>
   %r14 = rocdl.mfma.i32.16x16x16i8 %arg3, %arg3, %arg9, %arg3, %arg3, %arg3 :
-                            (i32, i32, !llvm.vec<4 x i32>,
-                            i32, i32, i32) -> !llvm.vec<4 x i32>
+                            (i32, i32, vector<4xi32>,
+                            i32, i32, i32) -> vector<4xi32>
 
-  // CHECK: rocdl.mfma.f32.32x32x2bf16 {{.*}} : (!llvm.vec<2 x i16>, !llvm.vec<2 x i16>, !llvm.vec<32 x f32>, i32, i32, i32) -> !llvm.vec<32 x f32>
+  // CHECK: rocdl.mfma.f32.32x32x2bf16 {{.*}} : (vector<2xi16>, vector<2xi16>, vector<32xf32>, i32, i32, i32) -> vector<32xf32>
   %r15 = rocdl.mfma.f32.32x32x2bf16 %arg10, %arg10, %arg2, %arg3, %arg3, %arg3 :
-                            (!llvm.vec<2 x i16>, !llvm.vec<2 x i16>, !llvm.vec<32 x f32>,
-                            i32, i32, i32) -> !llvm.vec<32 x f32>
+                            (vector<2xi16>, vector<2xi16>, vector<32xf32>,
+                            i32, i32, i32) -> vector<32xf32>
 
-  // CHECK: rocdl.mfma.f32.16x16x2bf16 {{.*}} : (!llvm.vec<2 x i16>, !llvm.vec<2 x i16>, !llvm.vec<16 x f32>, i32, i32, i32) -> !llvm.vec<16 x f32>
+  // CHECK: rocdl.mfma.f32.16x16x2bf16 {{.*}} : (vector<2xi16>, vector<2xi16>, vector<16xf32>, i32, i32, i32) -> vector<16xf32>
   %r16 = rocdl.mfma.f32.16x16x2bf16 %arg10, %arg10, %arg4, %arg3, %arg3, %arg3 :
-                            (!llvm.vec<2 x i16>, !llvm.vec<2 x i16>, !llvm.vec<16 x f32>,
-                            i32, i32, i32) -> !llvm.vec<16 x f32>
+                            (vector<2xi16>, vector<2xi16>, vector<16xf32>,
+                            i32, i32, i32) -> vector<16xf32>
 
-  // CHECK: rocdl.mfma.f32.4x4x2bf16 {{.*}} : (!llvm.vec<2 x i16>, !llvm.vec<2 x i16>, !llvm.vec<4 x f32>, i32, i32, i32) -> !llvm.vec<4 x f32>
+  // CHECK: rocdl.mfma.f32.4x4x2bf16 {{.*}} : (vector<2xi16>, vector<2xi16>, vector<4xf32>, i32, i32, i32) -> vector<4xf32>
   %r17 = rocdl.mfma.f32.4x4x2bf16 %arg10, %arg10, %arg5, %arg3, %arg3, %arg3 :
-                            (!llvm.vec<2 x i16>, !llvm.vec<2 x i16>, !llvm.vec<4 x f32>,
-                            i32, i32, i32) -> !llvm.vec<4 x f32>
+                            (vector<2xi16>, vector<2xi16>, vector<4xf32>,
+                            i32, i32, i32) -> vector<4xf32>
 
-  // CHECK: rocdl.mfma.f32.32x32x4bf16 {{.*}} : (!llvm.vec<2 x i16>, !llvm.vec<2 x i16>, !llvm.vec<16 x f32>, i32, i32, i32) -> !llvm.vec<16 x f32>
+  // CHECK: rocdl.mfma.f32.32x32x4bf16 {{.*}} : (vector<2xi16>, vector<2xi16>, vector<16xf32>, i32, i32, i32) -> vector<16xf32>
   %r18 = rocdl.mfma.f32.32x32x4bf16 %arg10, %arg10, %arg4, %arg3, %arg3, %arg3 :
-                            (!llvm.vec<2 x i16>, !llvm.vec<2 x i16>, !llvm.vec<16 x f32>,
-                            i32, i32, i32) -> !llvm.vec<16 x f32>
+                            (vector<2xi16>, vector<2xi16>, vector<16xf32>,
+                            i32, i32, i32) -> vector<16xf32>
 
-  // CHECK: rocdl.mfma.f32.16x16x8bf16 {{.*}} : (!llvm.vec<2 x i16>, !llvm.vec<2 x i16>, !llvm.vec<4 x f32>, i32, i32, i32) -> !llvm.vec<4 x f32>
+  // CHECK: rocdl.mfma.f32.16x16x8bf16 {{.*}} : (vector<2xi16>, vector<2xi16>, vector<4xf32>, i32, i32, i32) -> vector<4xf32>
   %r19 = rocdl.mfma.f32.16x16x8bf16 %arg10, %arg10, %arg5, %arg3, %arg3, %arg3 :
-                            (!llvm.vec<2 x i16>, !llvm.vec<2 x i16>, !llvm.vec<4 x f32>,
-                            i32, i32, i32) -> !llvm.vec<4 x f32>
+                            (vector<2xi16>, vector<2xi16>, vector<4xf32>,
+                            i32, i32, i32) -> vector<4xf32>
 
-  llvm.return %r0 : !llvm.vec<32 x f32>
+  llvm.return %r0 : vector<32xf32>
 }
 
-llvm.func @rocdl.mubuf(%rsrc : !llvm.vec<4 x i32>, %vindex : i32,
+llvm.func @rocdl.mubuf(%rsrc : vector<4xi32>, %vindex : i32,
                        %offset : i32, %glc : i1,
-                       %slc : i1, %vdata1 : !llvm.vec<1 x f32>,
-                       %vdata2 : !llvm.vec<2 x f32>, %vdata4 : !llvm.vec<4 x f32>) {
+                       %slc : i1, %vdata1 : vector<1xf32>,
+                       %vdata2 : vector<2xf32>, %vdata4 : vector<4xf32>) {
   // CHECK-LABEL: rocdl.mubuf
-  // CHECK: %{{.*}} = rocdl.buffer.load %{{.*}} %{{.*}} %{{.*}} %{{.*}} %{{.*}} : !llvm.vec<1 x f32>
-  %r1 = rocdl.buffer.load %rsrc, %vindex, %offset, %glc, %slc : !llvm.vec<1 x f32>
-  // CHECK: %{{.*}} = rocdl.buffer.load %{{.*}} %{{.*}} %{{.*}} %{{.*}} %{{.*}} : !llvm.vec<2 x f32>
-  %r2 = rocdl.buffer.load %rsrc, %vindex, %offset, %glc, %slc : !llvm.vec<2 x f32>
-  // CHECK: %{{.*}} = rocdl.buffer.load %{{.*}} %{{.*}} %{{.*}} %{{.*}} %{{.*}} : !llvm.vec<4 x f32>
-  %r4 = rocdl.buffer.load %rsrc, %vindex, %offset, %glc, %slc : !llvm.vec<4 x f32>
-
-  // CHECK: rocdl.buffer.store %{{.*}} %{{.*}} %{{.*}} %{{.*}} %{{.*}} %{{.*}} : !llvm.vec<1 x f32>
-  rocdl.buffer.store %vdata1, %rsrc, %vindex, %offset, %glc, %slc : !llvm.vec<1 x f32>
-  // CHECK: rocdl.buffer.store %{{.*}} %{{.*}} %{{.*}} %{{.*}} %{{.*}} %{{.*}} : !llvm.vec<2 x f32>
-  rocdl.buffer.store %vdata2, %rsrc, %vindex, %offset, %glc, %slc : !llvm.vec<2 x f32>
-  // CHECK: rocdl.buffer.store %{{.*}} %{{.*}} %{{.*}} %{{.*}} %{{.*}} %{{.*}} : !llvm.vec<4 x f32>
-  rocdl.buffer.store %vdata4, %rsrc, %vindex, %offset, %glc, %slc : !llvm.vec<4 x f32>
+  // CHECK: %{{.*}} = rocdl.buffer.load %{{.*}} %{{.*}} %{{.*}} %{{.*}} %{{.*}} : vector<1xf32>
+  %r1 = rocdl.buffer.load %rsrc, %vindex, %offset, %glc, %slc : vector<1xf32>
+  // CHECK: %{{.*}} = rocdl.buffer.load %{{.*}} %{{.*}} %{{.*}} %{{.*}} %{{.*}} : vector<2xf32>
+  %r2 = rocdl.buffer.load %rsrc, %vindex, %offset, %glc, %slc : vector<2xf32>
+  // CHECK: %{{.*}} = rocdl.buffer.load %{{.*}} %{{.*}} %{{.*}} %{{.*}} %{{.*}} : vector<4xf32>
+  %r4 = rocdl.buffer.load %rsrc, %vindex, %offset, %glc, %slc : vector<4xf32>
+
+  // CHECK: rocdl.buffer.store %{{.*}} %{{.*}} %{{.*}} %{{.*}} %{{.*}} %{{.*}} : vector<1xf32>
+  rocdl.buffer.store %vdata1, %rsrc, %vindex, %offset, %glc, %slc : vector<1xf32>
+  // CHECK: rocdl.buffer.store %{{.*}} %{{.*}} %{{.*}} %{{.*}} %{{.*}} %{{.*}} : vector<2xf32>
+  rocdl.buffer.store %vdata2, %rsrc, %vindex, %offset, %glc, %slc : vector<2xf32>
+  // CHECK: rocdl.buffer.store %{{.*}} %{{.*}} %{{.*}} %{{.*}} %{{.*}} %{{.*}} : vector<4xf32>
+  rocdl.buffer.store %vdata4, %rsrc, %vindex, %offset, %glc, %slc : vector<4xf32>
 
   llvm.return
 }

diff --git a/mlir/test/Dialect/LLVMIR/roundtrip.mlir b/mlir/test/Dialect/LLVMIR/roundtrip.mlir
index ff970178ac9f..3b10fc51a2eb 100644
--- a/mlir/test/Dialect/LLVMIR/roundtrip.mlir
+++ b/mlir/test/Dialect/LLVMIR/roundtrip.mlir
@@ -223,21 +223,21 @@ llvm.func @foo(%arg0: i32) -> !llvm.struct<(i32, f64, i32)> {
 }
 
 // CHECK-LABEL: @casts
-// CHECK-SAME: (%[[I32:.*]]: i32, %[[I64:.*]]: i64, %[[V4I32:.*]]: !llvm.vec<4 x i32>, %[[V4I64:.*]]: !llvm.vec<4 x i64>, %[[I32PTR:.*]]: !llvm.ptr<i32>)
-func @casts(%arg0: i32, %arg1: i64, %arg2: !llvm.vec<4 x i32>,
-            %arg3: !llvm.vec<4 x i64>, %arg4: !llvm.ptr<i32>) {
+// CHECK-SAME: (%[[I32:.*]]: i32, %[[I64:.*]]: i64, %[[V4I32:.*]]: vector<4xi32>, %[[V4I64:.*]]: vector<4xi64>, %[[I32PTR:.*]]: !llvm.ptr<i32>)
+func @casts(%arg0: i32, %arg1: i64, %arg2: vector<4xi32>,
+            %arg3: vector<4xi64>, %arg4: !llvm.ptr<i32>) {
 // CHECK:  = llvm.sext %[[I32]] : i32 to i56
   %0 = llvm.sext %arg0 : i32 to i56
 // CHECK:  = llvm.zext %[[I32]] : i32 to i64
   %1 = llvm.zext %arg0 : i32 to i64
 // CHECK:  = llvm.trunc %[[I64]] : i64 to i56
   %2 = llvm.trunc %arg1 : i64 to i56
-// CHECK:  = llvm.sext %[[V4I32]] : !llvm.vec<4 x i32> to !llvm.vec<4 x i56>
-  %3 = llvm.sext %arg2 : !llvm.vec<4 x i32> to !llvm.vec<4 x i56>
-// CHECK:  = llvm.zext %[[V4I32]] : !llvm.vec<4 x i32> to !llvm.vec<4 x i64>
-  %4 = llvm.zext %arg2 : !llvm.vec<4 x i32> to !llvm.vec<4 x i64>
-// CHECK:  = llvm.trunc %[[V4I64]] : !llvm.vec<4 x i64> to !llvm.vec<4 x i56>
-  %5 = llvm.trunc %arg3 : !llvm.vec<4 x i64> to !llvm.vec<4 x i56>
+// CHECK:  = llvm.sext %[[V4I32]] : vector<4xi32> to vector<4xi56>
+  %3 = llvm.sext %arg2 : vector<4xi32> to vector<4xi56>
+// CHECK:  = llvm.zext %[[V4I32]] : vector<4xi32> to vector<4xi64>
+  %4 = llvm.zext %arg2 : vector<4xi32> to vector<4xi64>
+// CHECK:  = llvm.trunc %[[V4I64]] : vector<4xi64> to vector<4xi56>
+  %5 = llvm.trunc %arg3 : vector<4xi64> to vector<4xi56>
 // CHECK:  = llvm.sitofp %[[I32]] : i32 to f32
   %6 = llvm.sitofp %arg0 : i32 to f32
 // CHECK: %[[FLOAT:.*]] = llvm.uitofp %[[I32]] : i32 to f32
@@ -252,15 +252,15 @@ func @casts(%arg0: i32, %arg1: i64, %arg2: !llvm.vec<4 x i32>,
 }
 
 // CHECK-LABEL: @vect
-func @vect(%arg0: !llvm.vec<4 x f32>, %arg1: i32, %arg2: f32) {
-// CHECK:  = llvm.extractelement {{.*}} : !llvm.vec<4 x f32>
-  %0 = llvm.extractelement %arg0[%arg1 : i32] : !llvm.vec<4 x f32>
-// CHECK:  = llvm.insertelement {{.*}} : !llvm.vec<4 x f32>
-  %1 = llvm.insertelement %arg2, %arg0[%arg1 : i32] : !llvm.vec<4 x f32>
-// CHECK:  = llvm.shufflevector {{.*}} [0 : i32, 0 : i32, 0 : i32, 0 : i32, 7 : i32] : !llvm.vec<4 x f32>, !llvm.vec<4 x f32>
-  %2 = llvm.shufflevector %arg0, %arg0 [0 : i32, 0 : i32, 0 : i32, 0 : i32, 7 : i32] : !llvm.vec<4 x f32>, !llvm.vec<4 x f32>
-// CHECK:  = llvm.mlir.constant(dense<1.000000e+00> : vector<4xf32>) : !llvm.vec<4 x f32>
-  %3 = llvm.mlir.constant(dense<1.0> : vector<4xf32>) : !llvm.vec<4 x f32>
+func @vect(%arg0: vector<4xf32>, %arg1: i32, %arg2: f32) {
+// CHECK:  = llvm.extractelement {{.*}} : vector<4xf32>
+  %0 = llvm.extractelement %arg0[%arg1 : i32] : vector<4xf32>
+// CHECK:  = llvm.insertelement {{.*}} : vector<4xf32>
+  %1 = llvm.insertelement %arg2, %arg0[%arg1 : i32] : vector<4xf32>
+// CHECK:  = llvm.shufflevector {{.*}} [0 : i32, 0 : i32, 0 : i32, 0 : i32, 7 : i32] : vector<4xf32>, vector<4xf32>
+  %2 = llvm.shufflevector %arg0, %arg0 [0 : i32, 0 : i32, 0 : i32, 0 : i32, 7 : i32] : vector<4xf32>, vector<4xf32>
+// CHECK:  = llvm.mlir.constant(dense<1.000000e+00> : vector<4xf32>) : vector<4xf32>
+  %3 = llvm.mlir.constant(dense<1.0> : vector<4xf32>) : vector<4xf32>
   return
 }
 

diff --git a/mlir/test/Dialect/LLVMIR/types-invalid.mlir b/mlir/test/Dialect/LLVMIR/types-invalid.mlir
index a2a6d6163dad..d1c661dab516 100644
--- a/mlir/test/Dialect/LLVMIR/types-invalid.mlir
+++ b/mlir/test/Dialect/LLVMIR/types-invalid.mlir
@@ -113,42 +113,42 @@ func @identified_struct_with_void() {
 
 func @dynamic_vector() {
   // expected-error @+1 {{expected '? x <integer> x <type>' or '<integer> x <type>'}}
-  "some.op"() : () -> !llvm.vec<? x f32>
+  "some.op"() : () -> !llvm.vec<? x ptr<f32>>
 }
 
 // -----
 
 func @dynamic_scalable_vector() {
   // expected-error @+1 {{expected '? x <integer> x <type>' or '<integer> x <type>'}}
-  "some.op"() : () -> !llvm.vec<? x ? x f32>
+  "some.op"() : () -> !llvm.vec<?x? x ptr<f32>>
 }
 
 // -----
 
 func @unscalable_vector() {
   // expected-error @+1 {{expected '? x <integer> x <type>' or '<integer> x <type>'}}
-  "some.op"() : () -> !llvm.vec<4 x 4 x i32>
+  "some.op"() : () -> !llvm.vec<4x4 x ptr<i32>>
 }
 
 // -----
 
 func @zero_vector() {
   // expected-error @+1 {{the number of vector elements must be positive}}
-  "some.op"() : () -> !llvm.vec<0 x i32>
+  "some.op"() : () -> !llvm.vec<0 x ptr<i32>>
 }
 
 // -----
 
 func @nested_vector() {
   // expected-error @+1 {{invalid vector element type}}
-  "some.op"() : () -> !llvm.vec<2 x vec<2 x i32>>
+  "some.op"() : () -> !llvm.vec<2 x vector<2xi32>>
 }
 
 // -----
 
 func @scalable_void_vector() {
   // expected-error @+1 {{invalid vector element type}}
-  "some.op"() : () -> !llvm.vec<? x 4 x void>
+  "some.op"() : () -> !llvm.vec<?x4 x void>
 }
 
 // -----

diff --git a/mlir/test/Dialect/LLVMIR/types.mlir b/mlir/test/Dialect/LLVMIR/types.mlir
index 74e0e7e93633..cc549d07b1b4 100644
--- a/mlir/test/Dialect/LLVMIR/types.mlir
+++ b/mlir/test/Dialect/LLVMIR/types.mlir
@@ -90,10 +90,10 @@ func @ptr() {
 
 // CHECK-LABEL: @vec
 func @vec() {
-  // CHECK: !llvm.vec<4 x i32>
-  "some.op"() : () -> !llvm.vec<4 x i32>
-  // CHECK: !llvm.vec<4 x f32>
-  "some.op"() : () -> !llvm.vec<4 x f32>
+  // CHECK: vector<4xi32>
+  "some.op"() : () -> vector<4xi32>
+  // CHECK: vector<4xf32>
+  "some.op"() : () -> vector<4xf32>
   // CHECK: !llvm.vec<? x 4 x i32>
   "some.op"() : () -> !llvm.vec<? x 4 x i32>
   // CHECK: !llvm.vec<? x 8 x f16>

diff --git a/mlir/test/Target/arm-neon.mlir b/mlir/test/Target/arm-neon.mlir
index 955b4aeb40a0..8e24b5bad295 100644
--- a/mlir/test/Target/arm-neon.mlir
+++ b/mlir/test/Target/arm-neon.mlir
@@ -1,25 +1,25 @@
 // RUN: mlir-opt -verify-diagnostics %s | mlir-opt | mlir-translate -arm-neon-mlir-to-llvmir | FileCheck %s
 
 // CHECK-LABEL: arm_neon_smull
-llvm.func @arm_neon_smull(%arg0: !llvm.vec<8 x i8>, %arg1: !llvm.vec<8 x i8>) -> !llvm.struct<(vec<8 x i16>, vec<4 x i32>, vec<2 x i64>)> {
+llvm.func @arm_neon_smull(%arg0: vector<8xi8>, %arg1: vector<8xi8>) -> !llvm.struct<(vector<8xi16>, vector<4xi32>, vector<2xi64>)> {
   //      CHECK: %[[V0:.*]] = call <8 x i16> @llvm.aarch64.neon.smull.v8i16(<8 x i8> %{{.*}}, <8 x i8> %{{.*}})
   // CHECK-NEXT: %[[V00:.*]] = shufflevector <8 x i16> %3, <8 x i16> %[[V0]], <4 x i32> <i32 3, i32 4, i32 5, i32 6>
-  %0 = "llvm_arm_neon.smull"(%arg0, %arg1) : (!llvm.vec<8 x i8>, !llvm.vec<8 x i8>) -> !llvm.vec<8 x i16>
-  %1 = llvm.shufflevector %0, %0 [3, 4, 5, 6] : !llvm.vec<8 x i16>, !llvm.vec<8 x i16>
+  %0 = "llvm_arm_neon.smull"(%arg0, %arg1) : (vector<8xi8>, vector<8xi8>) -> vector<8xi16>
+  %1 = llvm.shufflevector %0, %0 [3, 4, 5, 6] : vector<8xi16>, vector<8xi16>
 
   // CHECK-NEXT: %[[V1:.*]] = call <4 x i32> @llvm.aarch64.neon.smull.v4i32(<4 x i16> %[[V00]], <4 x i16> %[[V00]])
   // CHECK-NEXT: %[[V11:.*]] = shufflevector <4 x i32> %[[V1]], <4 x i32> %[[V1]], <2 x i32> <i32 1, i32 2>
-  %2 = "llvm_arm_neon.smull"(%1, %1) : (!llvm.vec<4 x i16>, !llvm.vec<4 x i16>) -> !llvm.vec<4 x i32>
-  %3 = llvm.shufflevector %2, %2 [1, 2] : !llvm.vec<4 x i32>, !llvm.vec<4 x i32>
+  %2 = "llvm_arm_neon.smull"(%1, %1) : (vector<4xi16>, vector<4xi16>) -> vector<4xi32>
+  %3 = llvm.shufflevector %2, %2 [1, 2] : vector<4xi32>, vector<4xi32>
 
   // CHECK-NEXT: %[[V1:.*]] = call <2 x i64> @llvm.aarch64.neon.smull.v2i64(<2 x i32> %[[V11]], <2 x i32> %[[V11]])
-  %4 = "llvm_arm_neon.smull"(%3, %3) : (!llvm.vec<2 x i32>, !llvm.vec<2 x i32>) -> !llvm.vec<2 x i64>
+  %4 = "llvm_arm_neon.smull"(%3, %3) : (vector<2xi32>, vector<2xi32>) -> vector<2xi64>
 
-  %5 = llvm.mlir.undef : !llvm.struct<(vec<8 x i16>, vec<4 x i32>, vec<2 x i64>)>
-  %6 = llvm.insertvalue %0, %5[0] : !llvm.struct<(vec<8 x i16>, vec<4 x i32>, vec<2 x i64>)>
-  %7 = llvm.insertvalue %2, %6[1] : !llvm.struct<(vec<8 x i16>, vec<4 x i32>, vec<2 x i64>)>
-  %8 = llvm.insertvalue %4, %7[2] : !llvm.struct<(vec<8 x i16>, vec<4 x i32>, vec<2 x i64>)>
+  %5 = llvm.mlir.undef : !llvm.struct<(vector<8xi16>, vector<4xi32>, vector<2xi64>)>
+  %6 = llvm.insertvalue %0, %5[0] : !llvm.struct<(vector<8xi16>, vector<4xi32>, vector<2xi64>)>
+  %7 = llvm.insertvalue %2, %6[1] : !llvm.struct<(vector<8xi16>, vector<4xi32>, vector<2xi64>)>
+  %8 = llvm.insertvalue %4, %7[2] : !llvm.struct<(vector<8xi16>, vector<4xi32>, vector<2xi64>)>
 
   //      CHECK: ret { <8 x i16>, <4 x i32>, <2 x i64> }
-  llvm.return %8 : !llvm.struct<(vec<8 x i16>, vec<4 x i32>, vec<2 x i64>)>
+  llvm.return %8 : !llvm.struct<(vector<8xi16>, vector<4xi32>, vector<2xi64>)>
 }

diff --git a/mlir/test/Target/arm-sve.mlir b/mlir/test/Target/arm-sve.mlir
index 430f60b4ecac..f00992e05bfd 100644
--- a/mlir/test/Target/arm-sve.mlir
+++ b/mlir/test/Target/arm-sve.mlir
@@ -1,51 +1,51 @@
 // RUN: mlir-opt -verify-diagnostics %s | mlir-opt | mlir-translate --arm-sve-mlir-to-llvmir | FileCheck %s
 
 // CHECK-LABEL: define <vscale x 4 x i32> @arm_sve_sdot
-llvm.func @arm_sve_sdot(%arg0: !llvm.vec<? x 16 x i8>,
-                        %arg1: !llvm.vec<? x 16 x i8>,
-                        %arg2: !llvm.vec<? x 4 x i32>)
-                        -> !llvm.vec<? x 4 x i32> {
+llvm.func @arm_sve_sdot(%arg0: !llvm.vec<?x16 x i8>,
+                        %arg1: !llvm.vec<?x16 x i8>,
+                        %arg2: !llvm.vec<?x4 x i32>)
+                        -> !llvm.vec<?x4 x i32> {
   // CHECK: call <vscale x 4 x i32> @llvm.aarch64.sve.sdot.nxv4i32(<vscale x 4
   %0 = "llvm_arm_sve.sdot"(%arg2, %arg0, %arg1) :
-    (!llvm.vec<? x 4 x i32>, !llvm.vec<? x 16 x i8>, !llvm.vec<? x 16 x i8>)
-        -> !llvm.vec<? x 4 x i32>
-  llvm.return %0 : !llvm.vec<? x 4 x i32>
+    (!llvm.vec<?x4 x i32>, !llvm.vec<?x16 x i8>, !llvm.vec<?x16 x i8>)
+        -> !llvm.vec<?x4 x i32>
+  llvm.return %0 : !llvm.vec<?x4 x i32>
 }
 
 // CHECK-LABEL: define <vscale x 4 x i32> @arm_sve_smmla
-llvm.func @arm_sve_smmla(%arg0: !llvm.vec<? x 16 x i8>,
-                         %arg1: !llvm.vec<? x 16 x i8>,
-                         %arg2: !llvm.vec<? x 4 x i32>)
-                         -> !llvm.vec<? x 4 x i32> {
+llvm.func @arm_sve_smmla(%arg0: !llvm.vec<?x16 x i8>,
+                         %arg1: !llvm.vec<?x16 x i8>,
+                         %arg2: !llvm.vec<?x4 x i32>)
+                         -> !llvm.vec<?x4 x i32> {
   // CHECK: call <vscale x 4 x i32> @llvm.aarch64.sve.smmla.nxv4i32(<vscale x 4
   %0 = "llvm_arm_sve.smmla"(%arg2, %arg0, %arg1) :
-    (!llvm.vec<? x 4 x i32>, !llvm.vec<? x 16 x i8>, !llvm.vec<? x 16 x i8>)
-        -> !llvm.vec<? x 4 x i32>
-  llvm.return %0 : !llvm.vec<? x 4 x i32>
+    (!llvm.vec<?x4 x i32>, !llvm.vec<?x16 x i8>, !llvm.vec<?x16 x i8>)
+        -> !llvm.vec<?x4 x i32>
+  llvm.return %0 : !llvm.vec<?x4 x i32>
 }
 
 // CHECK-LABEL: define <vscale x 4 x i32> @arm_sve_udot
-llvm.func @arm_sve_udot(%arg0: !llvm.vec<? x 16 x i8>,
-                        %arg1: !llvm.vec<? x 16 x i8>,
-                        %arg2: !llvm.vec<? x 4 x i32>)
-                        -> !llvm.vec<? x 4 x i32> {
+llvm.func @arm_sve_udot(%arg0: !llvm.vec<?x16 x i8>,
+                        %arg1: !llvm.vec<?x16 x i8>,
+                        %arg2: !llvm.vec<?x4 x i32>)
+                        -> !llvm.vec<?x4 x i32> {
   // CHECK: call <vscale x 4 x i32> @llvm.aarch64.sve.udot.nxv4i32(<vscale x 4
   %0 = "llvm_arm_sve.udot"(%arg2, %arg0, %arg1) :
-    (!llvm.vec<? x 4 x i32>, !llvm.vec<? x 16 x i8>, !llvm.vec<? x 16 x i8>)
-        -> !llvm.vec<? x 4 x i32>
-  llvm.return %0 : !llvm.vec<? x 4 x i32>
+    (!llvm.vec<?x4 x i32>, !llvm.vec<?x16 x i8>, !llvm.vec<?x16 x i8>)
+        -> !llvm.vec<?x4 x i32>
+  llvm.return %0 : !llvm.vec<?x4 x i32>
 }
 
 // CHECK-LABEL: define <vscale x 4 x i32> @arm_sve_ummla
-llvm.func @arm_sve_ummla(%arg0: !llvm.vec<? x 16 x i8>,
-                         %arg1: !llvm.vec<? x 16 x i8>,
-                         %arg2: !llvm.vec<? x 4 x i32>)
-                         -> !llvm.vec<? x 4 x i32> {
+llvm.func @arm_sve_ummla(%arg0: !llvm.vec<?x16 x i8>,
+                         %arg1: !llvm.vec<?x16 x i8>,
+                         %arg2: !llvm.vec<?x4 x i32>)
+                         -> !llvm.vec<?x4 x i32> {
   // CHECK: call <vscale x 4 x i32> @llvm.aarch64.sve.ummla.nxv4i32(<vscale x 4
   %0 = "llvm_arm_sve.ummla"(%arg2, %arg0, %arg1) :
-    (!llvm.vec<? x 4 x i32>, !llvm.vec<? x 16 x i8>, !llvm.vec<? x 16 x i8>)
-        -> !llvm.vec<? x 4 x i32>
-  llvm.return %0 : !llvm.vec<? x 4 x i32>
+    (!llvm.vec<?x4 x i32>, !llvm.vec<?x16 x i8>, !llvm.vec<?x16 x i8>)
+        -> !llvm.vec<?x4 x i32>
+  llvm.return %0 : !llvm.vec<?x4 x i32>
 }
 
 // CHECK-LABEL: define i64 @get_vector_scale()

diff --git a/mlir/test/Target/avx512.mlir b/mlir/test/Target/avx512.mlir
index c32593836015..80164ca837d4 100644
--- a/mlir/test/Target/avx512.mlir
+++ b/mlir/test/Target/avx512.mlir
@@ -1,31 +1,31 @@
 // RUN: mlir-opt -verify-diagnostics %s | mlir-opt | mlir-translate --avx512-mlir-to-llvmir | FileCheck %s
 
 // CHECK-LABEL: define <16 x float> @LLVM_x86_avx512_mask_ps_512
-llvm.func @LLVM_x86_avx512_mask_ps_512(%a: !llvm.vec<16 x f32>,
+llvm.func @LLVM_x86_avx512_mask_ps_512(%a: vector<16xf32>,
                                        %b: i32,
                                        %c: i16)
-  -> (!llvm.vec<16 x f32>)
+  -> (vector<16xf32>)
 {
   // CHECK: call <16 x float> @llvm.x86.avx512.mask.rndscale.ps.512(<16 x float>
   %0 = "llvm_avx512.mask.rndscale.ps.512"(%a, %b, %a, %c, %b) :
-    (!llvm.vec<16 x f32>, i32, !llvm.vec<16 x f32>, i16, i32) -> !llvm.vec<16 x f32>
+    (vector<16xf32>, i32, vector<16xf32>, i16, i32) -> vector<16xf32>
   // CHECK: call <16 x float> @llvm.x86.avx512.mask.scalef.ps.512(<16 x float>
   %1 = "llvm_avx512.mask.scalef.ps.512"(%a, %a, %a, %c, %b) :
-    (!llvm.vec<16 x f32>, !llvm.vec<16 x f32>, !llvm.vec<16 x f32>, i16, i32) -> !llvm.vec<16 x f32>
-  llvm.return %1: !llvm.vec<16 x f32>
+    (vector<16xf32>, vector<16xf32>, vector<16xf32>, i16, i32) -> vector<16xf32>
+  llvm.return %1: vector<16xf32>
 }
 
 // CHECK-LABEL: define <8 x double> @LLVM_x86_avx512_mask_pd_512
-llvm.func @LLVM_x86_avx512_mask_pd_512(%a: !llvm.vec<8 x f64>,
+llvm.func @LLVM_x86_avx512_mask_pd_512(%a: vector<8xf64>,
                                        %b: i32,
                                        %c: i8)
-  -> (!llvm.vec<8 x f64>)
+  -> (vector<8xf64>)
 {
   // CHECK: call <8 x double> @llvm.x86.avx512.mask.rndscale.pd.512(<8 x double>
   %0 = "llvm_avx512.mask.rndscale.pd.512"(%a, %b, %a, %c, %b) :
-    (!llvm.vec<8 x f64>, i32, !llvm.vec<8 x f64>, i8, i32) -> !llvm.vec<8 x f64>
+    (vector<8xf64>, i32, vector<8xf64>, i8, i32) -> vector<8xf64>
   // CHECK: call <8 x double> @llvm.x86.avx512.mask.scalef.pd.512(<8 x double>
   %1 = "llvm_avx512.mask.scalef.pd.512"(%a, %a, %a, %c, %b) :
-    (!llvm.vec<8 x f64>, !llvm.vec<8 x f64>, !llvm.vec<8 x f64>, i8, i32) -> !llvm.vec<8 x f64>
-  llvm.return %1: !llvm.vec<8 x f64>
+    (vector<8xf64>, vector<8xf64>, vector<8xf64>, i8, i32) -> vector<8xf64>
+  llvm.return %1: vector<8xf64>
 }

diff --git a/mlir/test/Target/import.ll b/mlir/test/Target/import.ll
index c7b9218fca10..97e0c656c14e 100644
--- a/mlir/test/Target/import.ll
+++ b/mlir/test/Target/import.ll
@@ -10,7 +10,7 @@
 ; CHECK: llvm.mlir.global internal @g3("string")
 @g3 = internal global [6 x i8] c"string"
 
-; CHECK: llvm.mlir.global external @g5() : !llvm.vec<8 x i32>
+; CHECK: llvm.mlir.global external @g5() : vector<8xi32>
 @g5 = external global <8 x i32>
 
 @g4 = external global i32, align 8
@@ -53,7 +53,7 @@
 ; Sequential constants.
 ;
 
-; CHECK: llvm.mlir.global internal constant @vector_constant(dense<[1, 2]> : vector<2xi32>) : !llvm.vec<2 x i32>
+; CHECK: llvm.mlir.global internal constant @vector_constant(dense<[1, 2]> : vector<2xi32>) : vector<2xi32>
 @vector_constant = internal constant <2 x i32> <i32 1, i32 2>
 ; CHECK: llvm.mlir.global internal constant @array_constant(dense<[1.000000e+00, 2.000000e+00]> : tensor<2xf32>) : !llvm.array<2 x f32>
 @array_constant = internal constant [2 x float] [float 1., float 2.]
@@ -61,7 +61,7 @@
 @nested_array_constant = internal constant [2 x [2 x i32]] [[2 x i32] [i32 1, i32 2], [2 x i32] [i32 3, i32 4]]
 ; CHECK: llvm.mlir.global internal constant @nested_array_constant3(dense<[{{\[}}[1, 2], [3, 4]]]> : tensor<1x2x2xi32>) : !llvm.array<1 x array<2 x array<2 x i32>>>
 @nested_array_constant3 = internal constant [1 x [2 x [2 x i32]]] [[2 x [2 x i32]] [[2 x i32] [i32 1, i32 2], [2 x i32] [i32 3, i32 4]]]
-; CHECK: llvm.mlir.global internal constant @nested_array_vector(dense<[{{\[}}[1, 2], [3, 4]]]> : vector<1x2x2xi32>) : !llvm.array<1 x array<2 x vec<2 x i32>>>
+; CHECK: llvm.mlir.global internal constant @nested_array_vector(dense<[{{\[}}[1, 2], [3, 4]]]> : vector<1x2x2xi32>) : !llvm.array<1 x array<2 x vector<2xi32>>>
 @nested_array_vector = internal constant [1 x [2 x <2 x i32>]] [[2 x <2 x i32>] [<2 x i32> <i32 1, i32 2>, <2 x i32> <i32 3, i32 4>]]
 
 ;

diff --git a/mlir/test/Target/llvmir-intrinsics.mlir b/mlir/test/Target/llvmir-intrinsics.mlir
index eedaa3e924f0..d218e35e774d 100644
--- a/mlir/test/Target/llvmir-intrinsics.mlir
+++ b/mlir/test/Target/llvmir-intrinsics.mlir
@@ -1,285 +1,285 @@
 // RUN: mlir-translate -mlir-to-llvmir %s | FileCheck %s
 
 // CHECK-LABEL: @intrinsics
-llvm.func @intrinsics(%arg0: f32, %arg1: f32, %arg2: !llvm.vec<8 x f32>, %arg3: !llvm.ptr<i8>) {
+llvm.func @intrinsics(%arg0: f32, %arg1: f32, %arg2: vector<8xf32>, %arg3: !llvm.ptr<i8>) {
   %c3 = llvm.mlir.constant(3 : i32) : i32
   %c1 = llvm.mlir.constant(1 : i32) : i32
   %c0 = llvm.mlir.constant(0 : i32) : i32
   // CHECK: call float @llvm.fmuladd.f32
   "llvm.intr.fmuladd"(%arg0, %arg1, %arg0) : (f32, f32, f32) -> f32
   // CHECK: call <8 x float> @llvm.fmuladd.v8f32
-  "llvm.intr.fmuladd"(%arg2, %arg2, %arg2) : (!llvm.vec<8 x f32>, !llvm.vec<8 x f32>, !llvm.vec<8 x f32>) -> !llvm.vec<8 x f32>
+  "llvm.intr.fmuladd"(%arg2, %arg2, %arg2) : (vector<8xf32>, vector<8xf32>, vector<8xf32>) -> vector<8xf32>
   // CHECK: call float @llvm.fma.f32
   "llvm.intr.fma"(%arg0, %arg1, %arg0) : (f32, f32, f32) -> f32
   // CHECK: call <8 x float> @llvm.fma.v8f32
-  "llvm.intr.fma"(%arg2, %arg2, %arg2) : (!llvm.vec<8 x f32>, !llvm.vec<8 x f32>, !llvm.vec<8 x f32>) -> !llvm.vec<8 x f32>
+  "llvm.intr.fma"(%arg2, %arg2, %arg2) : (vector<8xf32>, vector<8xf32>, vector<8xf32>) -> vector<8xf32>
   // CHECK: call void @llvm.prefetch.p0i8(i8* %3, i32 0, i32 3, i32 1)
   "llvm.intr.prefetch"(%arg3, %c0, %c3, %c1) : (!llvm.ptr<i8>, i32, i32, i32) -> ()
   llvm.return
 }
 
 // CHECK-LABEL: @exp_test
-llvm.func @exp_test(%arg0: f32, %arg1: !llvm.vec<8 x f32>) {
+llvm.func @exp_test(%arg0: f32, %arg1: vector<8xf32>) {
   // CHECK: call float @llvm.exp.f32
   "llvm.intr.exp"(%arg0) : (f32) -> f32
   // CHECK: call <8 x float> @llvm.exp.v8f32
-  "llvm.intr.exp"(%arg1) : (!llvm.vec<8 x f32>) -> !llvm.vec<8 x f32>
+  "llvm.intr.exp"(%arg1) : (vector<8xf32>) -> vector<8xf32>
   llvm.return
 }
 
 // CHECK-LABEL: @exp2_test
-llvm.func @exp2_test(%arg0: f32, %arg1: !llvm.vec<8 x f32>) {
+llvm.func @exp2_test(%arg0: f32, %arg1: vector<8xf32>) {
   // CHECK: call float @llvm.exp2.f32
   "llvm.intr.exp2"(%arg0) : (f32) -> f32
   // CHECK: call <8 x float> @llvm.exp2.v8f32
-  "llvm.intr.exp2"(%arg1) : (!llvm.vec<8 x f32>) -> !llvm.vec<8 x f32>
+  "llvm.intr.exp2"(%arg1) : (vector<8xf32>) -> vector<8xf32>
   llvm.return
 }
 
 // CHECK-LABEL: @log_test
-llvm.func @log_test(%arg0: f32, %arg1: !llvm.vec<8 x f32>) {
+llvm.func @log_test(%arg0: f32, %arg1: vector<8xf32>) {
   // CHECK: call float @llvm.log.f32
   "llvm.intr.log"(%arg0) : (f32) -> f32
   // CHECK: call <8 x float> @llvm.log.v8f32
-  "llvm.intr.log"(%arg1) : (!llvm.vec<8 x f32>) -> !llvm.vec<8 x f32>
+  "llvm.intr.log"(%arg1) : (vector<8xf32>) -> vector<8xf32>
   llvm.return
 }
 
 // CHECK-LABEL: @log10_test
-llvm.func @log10_test(%arg0: f32, %arg1: !llvm.vec<8 x f32>) {
+llvm.func @log10_test(%arg0: f32, %arg1: vector<8xf32>) {
   // CHECK: call float @llvm.log10.f32
   "llvm.intr.log10"(%arg0) : (f32) -> f32
   // CHECK: call <8 x float> @llvm.log10.v8f32
-  "llvm.intr.log10"(%arg1) : (!llvm.vec<8 x f32>) -> !llvm.vec<8 x f32>
+  "llvm.intr.log10"(%arg1) : (vector<8xf32>) -> vector<8xf32>
   llvm.return
 }
 
 // CHECK-LABEL: @log2_test
-llvm.func @log2_test(%arg0: f32, %arg1: !llvm.vec<8 x f32>) {
+llvm.func @log2_test(%arg0: f32, %arg1: vector<8xf32>) {
   // CHECK: call float @llvm.log2.f32
   "llvm.intr.log2"(%arg0) : (f32) -> f32
   // CHECK: call <8 x float> @llvm.log2.v8f32
-  "llvm.intr.log2"(%arg1) : (!llvm.vec<8 x f32>) -> !llvm.vec<8 x f32>
+  "llvm.intr.log2"(%arg1) : (vector<8xf32>) -> vector<8xf32>
   llvm.return
 }
 
 // CHECK-LABEL: @fabs_test
-llvm.func @fabs_test(%arg0: f32, %arg1: !llvm.vec<8 x f32>) {
+llvm.func @fabs_test(%arg0: f32, %arg1: vector<8xf32>) {
   // CHECK: call float @llvm.fabs.f32
   "llvm.intr.fabs"(%arg0) : (f32) -> f32
   // CHECK: call <8 x float> @llvm.fabs.v8f32
-  "llvm.intr.fabs"(%arg1) : (!llvm.vec<8 x f32>) -> !llvm.vec<8 x f32>
+  "llvm.intr.fabs"(%arg1) : (vector<8xf32>) -> vector<8xf32>
   llvm.return
 }
 
 // CHECK-LABEL: @sqrt_test
-llvm.func @sqrt_test(%arg0: f32, %arg1: !llvm.vec<8 x f32>) {
+llvm.func @sqrt_test(%arg0: f32, %arg1: vector<8xf32>) {
   // CHECK: call float @llvm.sqrt.f32
   "llvm.intr.sqrt"(%arg0) : (f32) -> f32
   // CHECK: call <8 x float> @llvm.sqrt.v8f32
-  "llvm.intr.sqrt"(%arg1) : (!llvm.vec<8 x f32>) -> !llvm.vec<8 x f32>
+  "llvm.intr.sqrt"(%arg1) : (vector<8xf32>) -> vector<8xf32>
   llvm.return
 }
 
 // CHECK-LABEL: @ceil_test
-llvm.func @ceil_test(%arg0: f32, %arg1: !llvm.vec<8 x f32>) {
+llvm.func @ceil_test(%arg0: f32, %arg1: vector<8xf32>) {
   // CHECK: call float @llvm.ceil.f32
   "llvm.intr.ceil"(%arg0) : (f32) -> f32
   // CHECK: call <8 x float> @llvm.ceil.v8f32
-  "llvm.intr.ceil"(%arg1) : (!llvm.vec<8 x f32>) -> !llvm.vec<8 x f32>
+  "llvm.intr.ceil"(%arg1) : (vector<8xf32>) -> vector<8xf32>
   llvm.return
 }
 
 // CHECK-LABEL: @floor_test
-llvm.func @floor_test(%arg0: f32, %arg1: !llvm.vec<8 x f32>) {
+llvm.func @floor_test(%arg0: f32, %arg1: vector<8xf32>) {
   // CHECK: call float @llvm.floor.f32
   "llvm.intr.floor"(%arg0) : (f32) -> f32
   // CHECK: call <8 x float> @llvm.floor.v8f32
-  "llvm.intr.floor"(%arg1) : (!llvm.vec<8 x f32>) -> !llvm.vec<8 x f32>
+  "llvm.intr.floor"(%arg1) : (vector<8xf32>) -> vector<8xf32>
   llvm.return
 }
 
 // CHECK-LABEL: @cos_test
-llvm.func @cos_test(%arg0: f32, %arg1: !llvm.vec<8 x f32>) {
+llvm.func @cos_test(%arg0: f32, %arg1: vector<8xf32>) {
   // CHECK: call float @llvm.cos.f32
   "llvm.intr.cos"(%arg0) : (f32) -> f32
   // CHECK: call <8 x float> @llvm.cos.v8f32
-  "llvm.intr.cos"(%arg1) : (!llvm.vec<8 x f32>) -> !llvm.vec<8 x f32>
+  "llvm.intr.cos"(%arg1) : (vector<8xf32>) -> vector<8xf32>
   llvm.return
 }
 
 // CHECK-LABEL: @copysign_test
-llvm.func @copysign_test(%arg0: f32, %arg1: f32, %arg2: !llvm.vec<8 x f32>, %arg3: !llvm.vec<8 x f32>) {
+llvm.func @copysign_test(%arg0: f32, %arg1: f32, %arg2: vector<8xf32>, %arg3: vector<8xf32>) {
   // CHECK: call float @llvm.copysign.f32
   "llvm.intr.copysign"(%arg0, %arg1) : (f32, f32) -> f32
   // CHECK: call <8 x float> @llvm.copysign.v8f32
-  "llvm.intr.copysign"(%arg2, %arg3) : (!llvm.vec<8 x f32>, !llvm.vec<8 x f32>) -> !llvm.vec<8 x f32>
+  "llvm.intr.copysign"(%arg2, %arg3) : (vector<8xf32>, vector<8xf32>) -> vector<8xf32>
   llvm.return
 }
 
 // CHECK-LABEL: @pow_test
-llvm.func @pow_test(%arg0: f32, %arg1: f32, %arg2: !llvm.vec<8 x f32>, %arg3: !llvm.vec<8 x f32>) {
+llvm.func @pow_test(%arg0: f32, %arg1: f32, %arg2: vector<8xf32>, %arg3: vector<8xf32>) {
   // CHECK: call float @llvm.pow.f32
   "llvm.intr.pow"(%arg0, %arg1) : (f32, f32) -> f32
   // CHECK: call <8 x float> @llvm.pow.v8f32
-  "llvm.intr.pow"(%arg2, %arg3) : (!llvm.vec<8 x f32>, !llvm.vec<8 x f32>) -> !llvm.vec<8 x f32>
+  "llvm.intr.pow"(%arg2, %arg3) : (vector<8xf32>, vector<8xf32>) -> vector<8xf32>
   llvm.return
 }
 
 // CHECK-LABEL: @bitreverse_test
-llvm.func @bitreverse_test(%arg0: i32, %arg1: !llvm.vec<8 x i32>) {
+llvm.func @bitreverse_test(%arg0: i32, %arg1: vector<8xi32>) {
   // CHECK: call i32 @llvm.bitreverse.i32
   "llvm.intr.bitreverse"(%arg0) : (i32) -> i32
   // CHECK: call <8 x i32> @llvm.bitreverse.v8i32
-  "llvm.intr.bitreverse"(%arg1) : (!llvm.vec<8 x i32>) -> !llvm.vec<8 x i32>
+  "llvm.intr.bitreverse"(%arg1) : (vector<8xi32>) -> vector<8xi32>
   llvm.return
 }
 
 // CHECK-LABEL: @ctpop_test
-llvm.func @ctpop_test(%arg0: i32, %arg1: !llvm.vec<8 x i32>) {
+llvm.func @ctpop_test(%arg0: i32, %arg1: vector<8xi32>) {
   // CHECK: call i32 @llvm.ctpop.i32
   "llvm.intr.ctpop"(%arg0) : (i32) -> i32
   // CHECK: call <8 x i32> @llvm.ctpop.v8i32
-  "llvm.intr.ctpop"(%arg1) : (!llvm.vec<8 x i32>) -> !llvm.vec<8 x i32>
+  "llvm.intr.ctpop"(%arg1) : (vector<8xi32>) -> vector<8xi32>
   llvm.return
 }
 
 // CHECK-LABEL: @maxnum_test
-llvm.func @maxnum_test(%arg0: f32, %arg1: f32, %arg2: !llvm.vec<8 x f32>, %arg3: !llvm.vec<8 x f32>) {
+llvm.func @maxnum_test(%arg0: f32, %arg1: f32, %arg2: vector<8xf32>, %arg3: vector<8xf32>) {
   // CHECK: call float @llvm.maxnum.f32
   "llvm.intr.maxnum"(%arg0, %arg1) : (f32, f32) -> f32
   // CHECK: call <8 x float> @llvm.maxnum.v8f32
-  "llvm.intr.maxnum"(%arg2, %arg3) : (!llvm.vec<8 x f32>, !llvm.vec<8 x f32>) -> !llvm.vec<8 x f32>
+  "llvm.intr.maxnum"(%arg2, %arg3) : (vector<8xf32>, vector<8xf32>) -> vector<8xf32>
   llvm.return
 }
 
 // CHECK-LABEL: @minnum_test
-llvm.func @minnum_test(%arg0: f32, %arg1: f32, %arg2: !llvm.vec<8 x f32>, %arg3: !llvm.vec<8 x f32>) {
+llvm.func @minnum_test(%arg0: f32, %arg1: f32, %arg2: vector<8xf32>, %arg3: vector<8xf32>) {
   // CHECK: call float @llvm.minnum.f32
   "llvm.intr.minnum"(%arg0, %arg1) : (f32, f32) -> f32
   // CHECK: call <8 x float> @llvm.minnum.v8f32
-  "llvm.intr.minnum"(%arg2, %arg3) : (!llvm.vec<8 x f32>, !llvm.vec<8 x f32>) -> !llvm.vec<8 x f32>
+  "llvm.intr.minnum"(%arg2, %arg3) : (vector<8xf32>, vector<8xf32>) -> vector<8xf32>
   llvm.return
 }
 
 // CHECK-LABEL: @smax_test
-llvm.func @smax_test(%arg0: i32, %arg1: i32, %arg2: !llvm.vec<8 x i32>, %arg3: !llvm.vec<8 x i32>) {
+llvm.func @smax_test(%arg0: i32, %arg1: i32, %arg2: vector<8xi32>, %arg3: vector<8xi32>) {
   // CHECK: call i32 @llvm.smax.i32
   "llvm.intr.smax"(%arg0, %arg1) : (i32, i32) -> i32
   // CHECK: call <8 x i32> @llvm.smax.v8i32
-  "llvm.intr.smax"(%arg2, %arg3) : (!llvm.vec<8 x i32>, !llvm.vec<8 x i32>) -> !llvm.vec<8 x i32>
+  "llvm.intr.smax"(%arg2, %arg3) : (vector<8xi32>, vector<8xi32>) -> vector<8xi32>
   llvm.return
 }
 
 // CHECK-LABEL: @smin_test
-llvm.func @smin_test(%arg0: i32, %arg1: i32, %arg2: !llvm.vec<8 x i32>, %arg3: !llvm.vec<8 x i32>) {
+llvm.func @smin_test(%arg0: i32, %arg1: i32, %arg2: vector<8xi32>, %arg3: vector<8xi32>) {
   // CHECK: call i32 @llvm.smin.i32
   "llvm.intr.smin"(%arg0, %arg1) : (i32, i32) -> i32
   // CHECK: call <8 x i32> @llvm.smin.v8i32
-  "llvm.intr.smin"(%arg2, %arg3) : (!llvm.vec<8 x i32>, !llvm.vec<8 x i32>) -> !llvm.vec<8 x i32>
+  "llvm.intr.smin"(%arg2, %arg3) : (vector<8xi32>, vector<8xi32>) -> vector<8xi32>
   llvm.return
 }
 
 // CHECK-LABEL: @vector_reductions
-llvm.func @vector_reductions(%arg0: f32, %arg1: !llvm.vec<8 x f32>, %arg2: !llvm.vec<8 x i32>) {
+llvm.func @vector_reductions(%arg0: f32, %arg1: vector<8xf32>, %arg2: vector<8xi32>) {
   // CHECK: call i32 @llvm.vector.reduce.add.v8i32
-  "llvm.intr.vector.reduce.add"(%arg2) : (!llvm.vec<8 x i32>) -> i32
+  "llvm.intr.vector.reduce.add"(%arg2) : (vector<8xi32>) -> i32
   // CHECK: call i32 @llvm.vector.reduce.and.v8i32
-  "llvm.intr.vector.reduce.and"(%arg2) : (!llvm.vec<8 x i32>) -> i32
+  "llvm.intr.vector.reduce.and"(%arg2) : (vector<8xi32>) -> i32
   // CHECK: call float @llvm.vector.reduce.fmax.v8f32
-  "llvm.intr.vector.reduce.fmax"(%arg1) : (!llvm.vec<8 x f32>) -> f32
+  "llvm.intr.vector.reduce.fmax"(%arg1) : (vector<8xf32>) -> f32
   // CHECK: call float @llvm.vector.reduce.fmin.v8f32
-  "llvm.intr.vector.reduce.fmin"(%arg1) : (!llvm.vec<8 x f32>) -> f32
+  "llvm.intr.vector.reduce.fmin"(%arg1) : (vector<8xf32>) -> f32
   // CHECK: call i32 @llvm.vector.reduce.mul.v8i32
-  "llvm.intr.vector.reduce.mul"(%arg2) : (!llvm.vec<8 x i32>) -> i32
+  "llvm.intr.vector.reduce.mul"(%arg2) : (vector<8xi32>) -> i32
   // CHECK: call i32 @llvm.vector.reduce.or.v8i32
-  "llvm.intr.vector.reduce.or"(%arg2) : (!llvm.vec<8 x i32>) -> i32
+  "llvm.intr.vector.reduce.or"(%arg2) : (vector<8xi32>) -> i32
   // CHECK: call i32 @llvm.vector.reduce.smax.v8i32
-  "llvm.intr.vector.reduce.smax"(%arg2) : (!llvm.vec<8 x i32>) -> i32
+  "llvm.intr.vector.reduce.smax"(%arg2) : (vector<8xi32>) -> i32
   // CHECK: call i32 @llvm.vector.reduce.smin.v8i32
-  "llvm.intr.vector.reduce.smin"(%arg2) : (!llvm.vec<8 x i32>) -> i32
+  "llvm.intr.vector.reduce.smin"(%arg2) : (vector<8xi32>) -> i32
   // CHECK: call i32 @llvm.vector.reduce.umax.v8i32
-  "llvm.intr.vector.reduce.umax"(%arg2) : (!llvm.vec<8 x i32>) -> i32
+  "llvm.intr.vector.reduce.umax"(%arg2) : (vector<8xi32>) -> i32
   // CHECK: call i32 @llvm.vector.reduce.umin.v8i32
-  "llvm.intr.vector.reduce.umin"(%arg2) : (!llvm.vec<8 x i32>) -> i32
+  "llvm.intr.vector.reduce.umin"(%arg2) : (vector<8xi32>) -> i32
   // CHECK: call float @llvm.vector.reduce.fadd.v8f32
-  "llvm.intr.vector.reduce.fadd"(%arg0, %arg1) : (f32, !llvm.vec<8 x f32>) -> f32
+  "llvm.intr.vector.reduce.fadd"(%arg0, %arg1) : (f32, vector<8xf32>) -> f32
   // CHECK: call float @llvm.vector.reduce.fmul.v8f32
-  "llvm.intr.vector.reduce.fmul"(%arg0, %arg1) : (f32, !llvm.vec<8 x f32>) -> f32
+  "llvm.intr.vector.reduce.fmul"(%arg0, %arg1) : (f32, vector<8xf32>) -> f32
   // CHECK: call reassoc float @llvm.vector.reduce.fadd.v8f32
-  "llvm.intr.vector.reduce.fadd"(%arg0, %arg1) {reassoc = true} : (f32, !llvm.vec<8 x f32>) -> f32
+  "llvm.intr.vector.reduce.fadd"(%arg0, %arg1) {reassoc = true} : (f32, vector<8xf32>) -> f32
   // CHECK: call reassoc float @llvm.vector.reduce.fmul.v8f32
-  "llvm.intr.vector.reduce.fmul"(%arg0, %arg1) {reassoc = true} : (f32, !llvm.vec<8 x f32>) -> f32
+  "llvm.intr.vector.reduce.fmul"(%arg0, %arg1) {reassoc = true} : (f32, vector<8xf32>) -> f32
   // CHECK: call i32 @llvm.vector.reduce.xor.v8i32
-  "llvm.intr.vector.reduce.xor"(%arg2) : (!llvm.vec<8 x i32>) -> i32
+  "llvm.intr.vector.reduce.xor"(%arg2) : (vector<8xi32>) -> i32
   llvm.return
 }
 
 // CHECK-LABEL: @matrix_intrinsics
 //                                       4x16                       16x3
-llvm.func @matrix_intrinsics(%A: !llvm.vec<64 x f32>, %B: !llvm.vec<48 x f32>,
+llvm.func @matrix_intrinsics(%A: vector<64 x f32>, %B: vector<48 x f32>,
                              %ptr: !llvm.ptr<f32>, %stride: i64) {
   // CHECK: call <12 x float> @llvm.matrix.multiply.v12f32.v64f32.v48f32(<64 x float> %0, <48 x float> %1, i32 4, i32 16, i32 3)
   %C = llvm.intr.matrix.multiply %A, %B
     { lhs_rows = 4: i32, lhs_columns = 16: i32 , rhs_columns = 3: i32} :
-    (!llvm.vec<64 x f32>, !llvm.vec<48 x f32>) -> !llvm.vec<12 x f32>
+    (vector<64 x f32>, vector<48 x f32>) -> vector<12 x f32>
   // CHECK: call <48 x float> @llvm.matrix.transpose.v48f32(<48 x float> %1, i32 3, i32 16)
   %D = llvm.intr.matrix.transpose %B { rows = 3: i32, columns = 16: i32} :
-    !llvm.vec<48 x f32> into !llvm.vec<48 x f32>
+    vector<48 x f32> into vector<48 x f32>
   // CHECK: call <48 x float> @llvm.matrix.column.major.load.v48f32(float* align 4 %2, i64 %3, i1 false, i32 3, i32 16)
   %E = llvm.intr.matrix.column.major.load %ptr, <stride=%stride>
     { isVolatile = 0: i1, rows = 3: i32, columns = 16: i32} :
-    !llvm.vec<48 x f32> from !llvm.ptr<f32> stride i64
+    vector<48 x f32> from !llvm.ptr<f32> stride i64
   // CHECK: call void @llvm.matrix.column.major.store.v48f32(<48 x float> %7, float* align 4 %2, i64 %3, i1 false, i32 3, i32 16)
   llvm.intr.matrix.column.major.store %E, %ptr, <stride=%stride>
     { isVolatile = 0: i1, rows = 3: i32, columns = 16: i32} :
-    !llvm.vec<48 x f32> to !llvm.ptr<f32> stride i64
+    vector<48 x f32> to !llvm.ptr<f32> stride i64
   llvm.return
 }
 
 // CHECK-LABEL: @get_active_lane_mask
-llvm.func @get_active_lane_mask(%base: i64, %n: i64) -> (!llvm.vec<7 x i1>) {
+llvm.func @get_active_lane_mask(%base: i64, %n: i64) -> (vector<7xi1>) {
   // CHECK: call <7 x i1> @llvm.get.active.lane.mask.v7i1.i64(i64 %0, i64 %1)
-  %0 = llvm.intr.get.active.lane.mask %base, %n : i64, i64 to !llvm.vec<7 x i1>
-  llvm.return %0 : !llvm.vec<7 x i1>
+  %0 = llvm.intr.get.active.lane.mask %base, %n : i64, i64 to vector<7xi1>
+  llvm.return %0 : vector<7xi1>
 }
 
 // CHECK-LABEL: @masked_load_store_intrinsics
-llvm.func @masked_load_store_intrinsics(%A: !llvm.ptr<vec<7 x f32>>, %mask: !llvm.vec<7 x i1>) {
+llvm.func @masked_load_store_intrinsics(%A: !llvm.ptr<vector<7xf32>>, %mask: vector<7xi1>) {
   // CHECK: call <7 x float> @llvm.masked.load.v7f32.p0v7f32(<7 x float>* %{{.*}}, i32 1, <7 x i1> %{{.*}}, <7 x float> undef)
   %a = llvm.intr.masked.load %A, %mask { alignment = 1: i32} :
-    (!llvm.ptr<vec<7 x f32>>, !llvm.vec<7 x i1>) -> !llvm.vec<7 x f32>
+    (!llvm.ptr<vector<7xf32>>, vector<7xi1>) -> vector<7xf32>
   // CHECK: call <7 x float> @llvm.masked.load.v7f32.p0v7f32(<7 x float>* %{{.*}}, i32 1, <7 x i1> %{{.*}}, <7 x float> %{{.*}})
   %b = llvm.intr.masked.load %A, %mask, %a { alignment = 1: i32} :
-    (!llvm.ptr<vec<7 x f32>>, !llvm.vec<7 x i1>, !llvm.vec<7 x f32>) -> !llvm.vec<7 x f32>
+    (!llvm.ptr<vector<7xf32>>, vector<7xi1>, vector<7xf32>) -> vector<7xf32>
   // CHECK: call void @llvm.masked.store.v7f32.p0v7f32(<7 x float> %{{.*}}, <7 x float>* %0, i32 {{.*}}, <7 x i1> %{{.*}})
   llvm.intr.masked.store %b, %A, %mask { alignment = 1: i32} :
-    !llvm.vec<7 x f32>, !llvm.vec<7 x i1> into !llvm.ptr<vec<7 x f32>>
+    vector<7xf32>, vector<7xi1> into !llvm.ptr<vector<7xf32>>
   llvm.return
 }
 
 // CHECK-LABEL: @masked_gather_scatter_intrinsics
-llvm.func @masked_gather_scatter_intrinsics(%M: !llvm.vec<7 x ptr<f32>>, %mask: !llvm.vec<7 x i1>) {
+llvm.func @masked_gather_scatter_intrinsics(%M: !llvm.vec<7 x ptr<f32>>, %mask: vector<7xi1>) {
   // CHECK: call <7 x float> @llvm.masked.gather.v7f32.v7p0f32(<7 x float*> %{{.*}}, i32 1, <7 x i1> %{{.*}}, <7 x float> undef)
   %a = llvm.intr.masked.gather %M, %mask { alignment = 1: i32} :
-      (!llvm.vec<7 x ptr<f32>>, !llvm.vec<7 x i1>) -> !llvm.vec<7 x f32>
+      (!llvm.vec<7 x ptr<f32>>, vector<7xi1>) -> vector<7xf32>
   // CHECK: call <7 x float> @llvm.masked.gather.v7f32.v7p0f32(<7 x float*> %{{.*}}, i32 1, <7 x i1> %{{.*}}, <7 x float> %{{.*}})
   %b = llvm.intr.masked.gather %M, %mask, %a { alignment = 1: i32} :
-      (!llvm.vec<7 x ptr<f32>>, !llvm.vec<7 x i1>, !llvm.vec<7 x f32>) -> !llvm.vec<7 x f32>
+      (!llvm.vec<7 x ptr<f32>>, vector<7xi1>, vector<7xf32>) -> vector<7xf32>
   // CHECK: call void @llvm.masked.scatter.v7f32.v7p0f32(<7 x float> %{{.*}}, <7 x float*> %{{.*}}, i32 1, <7 x i1> %{{.*}})
   llvm.intr.masked.scatter %b, %M, %mask { alignment = 1: i32} :
-      !llvm.vec<7 x f32>, !llvm.vec<7 x i1> into !llvm.vec<7 x ptr<f32>>
+      vector<7xf32>, vector<7xi1> into !llvm.vec<7 x ptr<f32>>
   llvm.return
 }
 
 // CHECK-LABEL: @masked_expand_compress_intrinsics
-llvm.func @masked_expand_compress_intrinsics(%ptr: !llvm.ptr<f32>, %mask: !llvm.vec<7 x i1>, %passthru: !llvm.vec<7 x f32>) {
+llvm.func @masked_expand_compress_intrinsics(%ptr: !llvm.ptr<f32>, %mask: vector<7xi1>, %passthru: vector<7xf32>) {
   // CHECK: call <7 x float> @llvm.masked.expandload.v7f32(float* %{{.*}}, <7 x i1> %{{.*}}, <7 x float> %{{.*}})
   %0 = "llvm.intr.masked.expandload"(%ptr, %mask, %passthru)
-    : (!llvm.ptr<f32>, !llvm.vec<7 x i1>, !llvm.vec<7 x f32>) -> (!llvm.vec<7 x f32>)
+    : (!llvm.ptr<f32>, vector<7xi1>, vector<7xf32>) -> (vector<7xf32>)
   // CHECK: call void @llvm.masked.compressstore.v7f32(<7 x float> %{{.*}}, float* %{{.*}}, <7 x i1> %{{.*}})
   "llvm.intr.masked.compressstore"(%0, %ptr, %mask)
-    : (!llvm.vec<7 x f32>, !llvm.ptr<f32>, !llvm.vec<7 x i1>) -> ()
+    : (vector<7xf32>, !llvm.ptr<f32>, vector<7xi1>) -> ()
   llvm.return
 }
 
@@ -294,56 +294,56 @@ llvm.func @memcpy_test(%arg0: i32, %arg1: i1, %arg2: !llvm.ptr<i8>, %arg3: !llvm
 }
 
 // CHECK-LABEL: @sadd_with_overflow_test
-llvm.func @sadd_with_overflow_test(%arg0: i32, %arg1: i32, %arg2: !llvm.vec<8 x i32>, %arg3: !llvm.vec<8 x i32>) {
+llvm.func @sadd_with_overflow_test(%arg0: i32, %arg1: i32, %arg2: vector<8xi32>, %arg3: vector<8xi32>) {
   // CHECK: call { i32, i1 } @llvm.sadd.with.overflow.i32
   "llvm.intr.sadd.with.overflow"(%arg0, %arg1) : (i32, i32) -> !llvm.struct<(i32, i1)>
   // CHECK: call { <8 x i32>, <8 x i1> } @llvm.sadd.with.overflow.v8i32
-  "llvm.intr.sadd.with.overflow"(%arg2, %arg3) : (!llvm.vec<8 x i32>, !llvm.vec<8 x i32>) -> !llvm.struct<(vec<8 x i32>, vec<8 x i1>)>
+  "llvm.intr.sadd.with.overflow"(%arg2, %arg3) : (vector<8xi32>, vector<8xi32>) -> !llvm.struct<(vector<8xi32>, vector<8xi1>)>
   llvm.return
 }
 
 // CHECK-LABEL: @uadd_with_overflow_test
-llvm.func @uadd_with_overflow_test(%arg0: i32, %arg1: i32, %arg2: !llvm.vec<8 x i32>, %arg3: !llvm.vec<8 x i32>) {
+llvm.func @uadd_with_overflow_test(%arg0: i32, %arg1: i32, %arg2: vector<8xi32>, %arg3: vector<8xi32>) {
   // CHECK: call { i32, i1 } @llvm.uadd.with.overflow.i32
   "llvm.intr.uadd.with.overflow"(%arg0, %arg1) : (i32, i32) -> !llvm.struct<(i32, i1)>
   // CHECK: call { <8 x i32>, <8 x i1> } @llvm.uadd.with.overflow.v8i32
-  "llvm.intr.uadd.with.overflow"(%arg2, %arg3) : (!llvm.vec<8 x i32>, !llvm.vec<8 x i32>) -> !llvm.struct<(vec<8 x i32>, vec<8 x i1>)>
+  "llvm.intr.uadd.with.overflow"(%arg2, %arg3) : (vector<8xi32>, vector<8xi32>) -> !llvm.struct<(vector<8xi32>, vector<8xi1>)>
   llvm.return
 }
 
 // CHECK-LABEL: @ssub_with_overflow_test
-llvm.func @ssub_with_overflow_test(%arg0: i32, %arg1: i32, %arg2: !llvm.vec<8 x i32>, %arg3: !llvm.vec<8 x i32>) {
+llvm.func @ssub_with_overflow_test(%arg0: i32, %arg1: i32, %arg2: vector<8xi32>, %arg3: vector<8xi32>) {
   // CHECK: call { i32, i1 } @llvm.ssub.with.overflow.i32
   "llvm.intr.ssub.with.overflow"(%arg0, %arg1) : (i32, i32) -> !llvm.struct<(i32, i1)>
   // CHECK: call { <8 x i32>, <8 x i1> } @llvm.ssub.with.overflow.v8i32
-  "llvm.intr.ssub.with.overflow"(%arg2, %arg3) : (!llvm.vec<8 x i32>, !llvm.vec<8 x i32>) -> !llvm.struct<(vec<8 x i32>, vec<8 x i1>)>
+  "llvm.intr.ssub.with.overflow"(%arg2, %arg3) : (vector<8xi32>, vector<8xi32>) -> !llvm.struct<(vector<8xi32>, vector<8xi1>)>
   llvm.return
 }
 
 // CHECK-LABEL: @usub_with_overflow_test
-llvm.func @usub_with_overflow_test(%arg0: i32, %arg1: i32, %arg2: !llvm.vec<8 x i32>, %arg3: !llvm.vec<8 x i32>) {
+llvm.func @usub_with_overflow_test(%arg0: i32, %arg1: i32, %arg2: vector<8xi32>, %arg3: vector<8xi32>) {
   // CHECK: call { i32, i1 } @llvm.usub.with.overflow.i32
   "llvm.intr.usub.with.overflow"(%arg0, %arg1) : (i32, i32) -> !llvm.struct<(i32, i1)>
   // CHECK: call { <8 x i32>, <8 x i1> } @llvm.usub.with.overflow.v8i32
-  "llvm.intr.usub.with.overflow"(%arg2, %arg3) : (!llvm.vec<8 x i32>, !llvm.vec<8 x i32>) -> !llvm.struct<(vec<8 x i32>, vec<8 x i1>)>
+  "llvm.intr.usub.with.overflow"(%arg2, %arg3) : (vector<8xi32>, vector<8xi32>) -> !llvm.struct<(vector<8xi32>, vector<8xi1>)>
   llvm.return
 }
 
 // CHECK-LABEL: @smul_with_overflow_test
-llvm.func @smul_with_overflow_test(%arg0: i32, %arg1: i32, %arg2: !llvm.vec<8 x i32>, %arg3: !llvm.vec<8 x i32>) {
+llvm.func @smul_with_overflow_test(%arg0: i32, %arg1: i32, %arg2: vector<8xi32>, %arg3: vector<8xi32>) {
   // CHECK: call { i32, i1 } @llvm.smul.with.overflow.i32
   "llvm.intr.smul.with.overflow"(%arg0, %arg1) : (i32, i32) -> !llvm.struct<(i32, i1)>
   // CHECK: call { <8 x i32>, <8 x i1> } @llvm.smul.with.overflow.v8i32
-  "llvm.intr.smul.with.overflow"(%arg2, %arg3) : (!llvm.vec<8 x i32>, !llvm.vec<8 x i32>) -> !llvm.struct<(vec<8 x i32>, vec<8 x i1>)>
+  "llvm.intr.smul.with.overflow"(%arg2, %arg3) : (vector<8xi32>, vector<8xi32>) -> !llvm.struct<(vector<8xi32>, vector<8xi1>)>
   llvm.return
 }
 
 // CHECK-LABEL: @umul_with_overflow_test
-llvm.func @umul_with_overflow_test(%arg0: i32, %arg1: i32, %arg2: !llvm.vec<8 x i32>, %arg3: !llvm.vec<8 x i32>) {
+llvm.func @umul_with_overflow_test(%arg0: i32, %arg1: i32, %arg2: vector<8xi32>, %arg3: vector<8xi32>) {
   // CHECK: call { i32, i1 } @llvm.umul.with.overflow.i32
   "llvm.intr.umul.with.overflow"(%arg0, %arg1) : (i32, i32) -> !llvm.struct<(i32, i1)>
   // CHECK: call { <8 x i32>, <8 x i1> } @llvm.umul.with.overflow.v8i32
-  "llvm.intr.umul.with.overflow"(%arg2, %arg3) : (!llvm.vec<8 x i32>, !llvm.vec<8 x i32>) -> !llvm.struct<(vec<8 x i32>, vec<8 x i1>)>
+  "llvm.intr.umul.with.overflow"(%arg2, %arg3) : (vector<8xi32>, vector<8xi32>) -> !llvm.struct<(vector<8xi32>, vector<8xi1>)>
   llvm.return
 }
 

diff --git a/mlir/test/Target/llvmir-types.mlir b/mlir/test/Target/llvmir-types.mlir
index 5ea83549f041..a4c4f201f487 100644
--- a/mlir/test/Target/llvmir-types.mlir
+++ b/mlir/test/Target/llvmir-types.mlir
@@ -87,15 +87,15 @@ llvm.func @return_ppi8_42_9() -> !llvm.ptr<ptr<i8, 42>, 9>
 //
 
 // CHECK: declare <4 x i32> @return_v4_i32()
-llvm.func @return_v4_i32() -> !llvm.vec<4 x i32>
+llvm.func @return_v4_i32() -> vector<4xi32>
 // CHECK: declare <4 x float> @return_v4_float()
-llvm.func @return_v4_float() -> !llvm.vec<4 x f32>
+llvm.func @return_v4_float() -> vector<4xf32>
 // CHECK: declare <vscale x 4 x i32> @return_vs_4_i32()
-llvm.func @return_vs_4_i32() -> !llvm.vec<? x 4 x i32>
+llvm.func @return_vs_4_i32() -> !llvm.vec<?x4 x i32>
 // CHECK: declare <vscale x 8 x half> @return_vs_8_half()
-llvm.func @return_vs_8_half() -> !llvm.vec<? x 8 x f16>
+llvm.func @return_vs_8_half() -> !llvm.vec<?x8 x f16>
 // CHECK: declare <4 x i8*> @return_v_4_pi8()
-llvm.func @return_v_4_pi8() -> !llvm.vec<4 x ptr<i8>>
+llvm.func @return_v_4_pi8() -> !llvm.vec<4xptr<i8>>
 
 //
 // Arrays.

diff --git a/mlir/test/Target/llvmir.mlir b/mlir/test/Target/llvmir.mlir
index 5a686bfdee6e..4645ef96a9d0 100644
--- a/mlir/test/Target/llvmir.mlir
+++ b/mlir/test/Target/llvmir.mlir
@@ -782,66 +782,66 @@ llvm.func @multireturn_caller() {
 }
 
 // CHECK-LABEL: define <4 x float> @vector_ops(<4 x float> {{%.*}}, <4 x i1> {{%.*}}, <4 x i64> {{%.*}})
-llvm.func @vector_ops(%arg0: !llvm.vec<4 x f32>, %arg1: !llvm.vec<4 x i1>, %arg2: !llvm.vec<4 x i64>) -> !llvm.vec<4 x f32> {
-  %0 = llvm.mlir.constant(dense<4.200000e+01> : vector<4xf32>) : !llvm.vec<4 x f32>
+llvm.func @vector_ops(%arg0: vector<4xf32>, %arg1: vector<4xi1>, %arg2: vector<4xi64>) -> vector<4xf32> {
+  %0 = llvm.mlir.constant(dense<4.200000e+01> : vector<4xf32>) : vector<4xf32>
 // CHECK-NEXT: %4 = fadd <4 x float> %0, <float 4.200000e+01, float 4.200000e+01, float 4.200000e+01, float 4.200000e+01>
-  %1 = llvm.fadd %arg0, %0 : !llvm.vec<4 x f32>
+  %1 = llvm.fadd %arg0, %0 : vector<4xf32>
 // CHECK-NEXT: %5 = select <4 x i1> %1, <4 x float> %4, <4 x float> %0
-  %2 = llvm.select %arg1, %1, %arg0 : !llvm.vec<4 x i1>, !llvm.vec<4 x f32>
+  %2 = llvm.select %arg1, %1, %arg0 : vector<4xi1>, vector<4xf32>
 // CHECK-NEXT: %6 = sdiv <4 x i64> %2, %2
-  %3 = llvm.sdiv %arg2, %arg2 : !llvm.vec<4 x i64>
+  %3 = llvm.sdiv %arg2, %arg2 : vector<4xi64>
 // CHECK-NEXT: %7 = udiv <4 x i64> %2, %2
-  %4 = llvm.udiv %arg2, %arg2 : !llvm.vec<4 x i64>
+  %4 = llvm.udiv %arg2, %arg2 : vector<4xi64>
 // CHECK-NEXT: %8 = srem <4 x i64> %2, %2
-  %5 = llvm.srem %arg2, %arg2 : !llvm.vec<4 x i64>
+  %5 = llvm.srem %arg2, %arg2 : vector<4xi64>
 // CHECK-NEXT: %9 = urem <4 x i64> %2, %2
-  %6 = llvm.urem %arg2, %arg2 : !llvm.vec<4 x i64>
+  %6 = llvm.urem %arg2, %arg2 : vector<4xi64>
 // CHECK-NEXT: %10 = fdiv <4 x float> %0, <float 4.200000e+01, float 4.200000e+01, float 4.200000e+01, float 4.200000e+01>
-  %7 = llvm.fdiv %arg0, %0 : !llvm.vec<4 x f32>
+  %7 = llvm.fdiv %arg0, %0 : vector<4xf32>
 // CHECK-NEXT: %11 = frem <4 x float> %0, <float 4.200000e+01, float 4.200000e+01, float 4.200000e+01, float 4.200000e+01>
-  %8 = llvm.frem %arg0, %0 : !llvm.vec<4 x f32>
+  %8 = llvm.frem %arg0, %0 : vector<4xf32>
 // CHECK-NEXT: %12 = and <4 x i64> %2, %2
-  %9 = llvm.and %arg2, %arg2 : !llvm.vec<4 x i64>
+  %9 = llvm.and %arg2, %arg2 : vector<4xi64>
 // CHECK-NEXT: %13 = or <4 x i64> %2, %2
-  %10 = llvm.or %arg2, %arg2 : !llvm.vec<4 x i64>
+  %10 = llvm.or %arg2, %arg2 : vector<4xi64>
 // CHECK-NEXT: %14 = xor <4 x i64> %2, %2
-  %11 = llvm.xor %arg2, %arg2 : !llvm.vec<4 x i64>
+  %11 = llvm.xor %arg2, %arg2 : vector<4xi64>
 // CHECK-NEXT: %15 = shl <4 x i64> %2, %2
-  %12 = llvm.shl %arg2, %arg2 : !llvm.vec<4 x i64>
+  %12 = llvm.shl %arg2, %arg2 : vector<4xi64>
 // CHECK-NEXT: %16 = lshr <4 x i64> %2, %2
-  %13 = llvm.lshr %arg2, %arg2 : !llvm.vec<4 x i64>
+  %13 = llvm.lshr %arg2, %arg2 : vector<4xi64>
 // CHECK-NEXT: %17 = ashr <4 x i64> %2, %2
-  %14 = llvm.ashr %arg2, %arg2 : !llvm.vec<4 x i64>
+  %14 = llvm.ashr %arg2, %arg2 : vector<4xi64>
 // CHECK-NEXT:    ret <4 x float> %4
-  llvm.return %1 : !llvm.vec<4 x f32>
+  llvm.return %1 : vector<4xf32>
 }
 
 // CHECK-LABEL: @vector_splat_1d
-llvm.func @vector_splat_1d() -> !llvm.vec<4 x f32> {
+llvm.func @vector_splat_1d() -> vector<4xf32> {
   // CHECK: ret <4 x float> zeroinitializer
-  %0 = llvm.mlir.constant(dense<0.000000e+00> : vector<4xf32>) : !llvm.vec<4 x f32>
-  llvm.return %0 : !llvm.vec<4 x f32>
+  %0 = llvm.mlir.constant(dense<0.000000e+00> : vector<4xf32>) : vector<4xf32>
+  llvm.return %0 : vector<4xf32>
 }
 
 // CHECK-LABEL: @vector_splat_2d
-llvm.func @vector_splat_2d() -> !llvm.array<4 x vec<16 x f32>> {
+llvm.func @vector_splat_2d() -> !llvm.array<4 x vector<16 x f32>> {
   // CHECK: ret [4 x <16 x float>] zeroinitializer
-  %0 = llvm.mlir.constant(dense<0.000000e+00> : vector<4x16xf32>) : !llvm.array<4 x vec<16 x f32>>
-  llvm.return %0 : !llvm.array<4 x vec<16 x f32>>
+  %0 = llvm.mlir.constant(dense<0.000000e+00> : vector<4x16xf32>) : !llvm.array<4 x vector<16 x f32>>
+  llvm.return %0 : !llvm.array<4 x vector<16 x f32>>
 }
 
 // CHECK-LABEL: @vector_splat_3d
-llvm.func @vector_splat_3d() -> !llvm.array<4 x array<16 x vec<4 x f32>>> {
+llvm.func @vector_splat_3d() -> !llvm.array<4 x array<16 x vector<4 x f32>>> {
   // CHECK: ret [4 x [16 x <4 x float>]] zeroinitializer
-  %0 = llvm.mlir.constant(dense<0.000000e+00> : vector<4x16x4xf32>) : !llvm.array<4 x array<16 x vec<4 x f32>>>
-  llvm.return %0 : !llvm.array<4 x array<16 x vec<4 x f32>>>
+  %0 = llvm.mlir.constant(dense<0.000000e+00> : vector<4x16x4xf32>) : !llvm.array<4 x array<16 x vector<4 x f32>>>
+  llvm.return %0 : !llvm.array<4 x array<16 x vector<4 x f32>>>
 }
 
 // CHECK-LABEL: @vector_splat_nonzero
-llvm.func @vector_splat_nonzero() -> !llvm.vec<4 x f32> {
+llvm.func @vector_splat_nonzero() -> vector<4xf32> {
   // CHECK: ret <4 x float> <float 1.000000e+00, float 1.000000e+00, float 1.000000e+00, float 1.000000e+00>
-  %0 = llvm.mlir.constant(dense<1.000000e+00> : vector<4xf32>) : !llvm.vec<4 x f32>
-  llvm.return %0 : !llvm.vec<4 x f32>
+  %0 = llvm.mlir.constant(dense<1.000000e+00> : vector<4xf32>) : vector<4xf32>
+  llvm.return %0 : vector<4xf32>
 }
 
 // CHECK-LABEL: @ops
@@ -1019,22 +1019,22 @@ llvm.func @fcmp(%arg0: f32, %arg1: f32) {
 }
 
 // CHECK-LABEL: @vect
-llvm.func @vect(%arg0: !llvm.vec<4 x f32>, %arg1: i32, %arg2: f32) {
+llvm.func @vect(%arg0: vector<4xf32>, %arg1: i32, %arg2: f32) {
   // CHECK-NEXT: extractelement <4 x float> {{.*}}, i32
   // CHECK-NEXT: insertelement <4 x float> {{.*}}, float %2, i32
   // CHECK-NEXT: shufflevector <4 x float> {{.*}}, <4 x float> {{.*}}, <5 x i32> <i32 0, i32 0, i32 0, i32 0, i32 7>
-  %0 = llvm.extractelement %arg0[%arg1 : i32] : !llvm.vec<4 x f32>
-  %1 = llvm.insertelement %arg2, %arg0[%arg1 : i32] : !llvm.vec<4 x f32>
-  %2 = llvm.shufflevector %arg0, %arg0 [0 : i32, 0 : i32, 0 : i32, 0 : i32, 7 : i32] : !llvm.vec<4 x f32>, !llvm.vec<4 x f32>
+  %0 = llvm.extractelement %arg0[%arg1 : i32] : vector<4xf32>
+  %1 = llvm.insertelement %arg2, %arg0[%arg1 : i32] : vector<4xf32>
+  %2 = llvm.shufflevector %arg0, %arg0 [0 : i32, 0 : i32, 0 : i32, 0 : i32, 7 : i32] : vector<4xf32>, vector<4xf32>
   llvm.return
 }
 
 // CHECK-LABEL: @vect_i64idx
-llvm.func @vect_i64idx(%arg0: !llvm.vec<4 x f32>, %arg1: i64, %arg2: f32) {
+llvm.func @vect_i64idx(%arg0: vector<4xf32>, %arg1: i64, %arg2: f32) {
   // CHECK-NEXT: extractelement <4 x float> {{.*}}, i64
   // CHECK-NEXT: insertelement <4 x float> {{.*}}, float %2, i64
-  %0 = llvm.extractelement %arg0[%arg1 : i64] : !llvm.vec<4 x f32>
-  %1 = llvm.insertelement %arg2, %arg0[%arg1 : i64] : !llvm.vec<4 x f32>
+  %0 = llvm.extractelement %arg0[%arg1 : i64] : vector<4xf32>
+  %1 = llvm.insertelement %arg2, %arg0[%arg1 : i64] : vector<4xf32>
   llvm.return
 }
 
@@ -1050,10 +1050,10 @@ llvm.func @alloca(%size : i64) {
 }
 
 // CHECK-LABEL: @constants
-llvm.func @constants() -> !llvm.vec<4 x f32> {
+llvm.func @constants() -> vector<4xf32> {
   // CHECK: ret <4 x float> <float 4.2{{0*}}e+01, float 0.{{0*}}e+00, float 0.{{0*}}e+00, float 0.{{0*}}e+00>
-  %0 = llvm.mlir.constant(sparse<[[0]], [4.2e+01]> : vector<4xf32>) : !llvm.vec<4 x f32>
-  llvm.return %0 : !llvm.vec<4 x f32>
+  %0 = llvm.mlir.constant(sparse<[[0]], [4.2e+01]> : vector<4xf32>) : vector<4xf32>
+  llvm.return %0 : vector<4xf32>
 }
 
 // CHECK-LABEL: @fp_casts
@@ -1088,12 +1088,12 @@ llvm.func @null() -> !llvm.ptr<i32> {
 
 // Check that dense elements attributes are exported properly in constants.
 // CHECK-LABEL: @elements_constant_3d_vector
-llvm.func @elements_constant_3d_vector() -> !llvm.array<2 x array<2 x vec<2 x i32>>> {
+llvm.func @elements_constant_3d_vector() -> !llvm.array<2 x array<2 x vector<2 x i32>>> {
   // CHECK: ret [2 x [2 x <2 x i32>]]
   // CHECK-SAME: {{\[}}[2 x <2 x i32>] [<2 x i32> <i32 1, i32 2>, <2 x i32> <i32 3, i32 4>],
   // CHECK-SAME:       [2 x <2 x i32>] [<2 x i32> <i32 42, i32 43>, <2 x i32> <i32 44, i32 45>]]
-  %0 = llvm.mlir.constant(dense<[[[1, 2], [3, 4]], [[42, 43], [44, 45]]]> : vector<2x2x2xi32>) : !llvm.array<2 x array<2 x vec<2 x i32>>>
-  llvm.return %0 : !llvm.array<2 x array<2 x vec<2 x i32>>>
+  %0 = llvm.mlir.constant(dense<[[[1, 2], [3, 4]], [[42, 43], [44, 45]]]> : vector<2x2x2xi32>) : !llvm.array<2 x array<2 x vector<2 x i32>>>
+  llvm.return %0 : !llvm.array<2 x array<2 x vector<2 x i32>>>
 }
 
 // CHECK-LABEL: @elements_constant_3d_array

diff --git a/mlir/test/Target/nvvmir.mlir b/mlir/test/Target/nvvmir.mlir
index 63dd200be9d2..08aaa07b12a2 100644
--- a/mlir/test/Target/nvvmir.mlir
+++ b/mlir/test/Target/nvvmir.mlir
@@ -64,12 +64,12 @@ llvm.func @nvvm_vote(%0 : i32, %1 : i1) -> i32 {
   llvm.return %3 : i32
 }
 
-llvm.func @nvvm_mma(%a0 : !llvm.vec<2 x f16>, %a1 : !llvm.vec<2 x f16>,
-                    %b0 : !llvm.vec<2 x f16>, %b1 : !llvm.vec<2 x f16>,
+llvm.func @nvvm_mma(%a0 : vector<2xf16>, %a1 : vector<2xf16>,
+                    %b0 : vector<2xf16>, %b1 : vector<2xf16>,
                     %c0 : f32, %c1 : f32, %c2 : f32, %c3 : f32,
                     %c4 : f32, %c5 : f32, %c6 : f32, %c7 : f32) {
   // CHECK: call { float, float, float, float, float, float, float, float } @llvm.nvvm.mma.m8n8k4.row.col.f32.f32
-  %0 = nvvm.mma.sync %a0, %a1, %b0, %b1, %c0, %c1, %c2, %c3, %c4, %c5, %c6, %c7 {alayout="row", blayout="col"} : (!llvm.vec<2 x f16>, !llvm.vec<2 x f16>, !llvm.vec<2 x f16>, !llvm.vec<2 x f16>, f32, f32, f32, f32, f32, f32, f32, f32) -> !llvm.struct<(f32, f32, f32, f32, f32, f32, f32, f32)>
+  %0 = nvvm.mma.sync %a0, %a1, %b0, %b1, %c0, %c1, %c2, %c3, %c4, %c5, %c6, %c7 {alayout="row", blayout="col"} : (vector<2xf16>, vector<2xf16>, vector<2xf16>, vector<2xf16>, f32, f32, f32, f32, f32, f32, f32, f32) -> !llvm.struct<(f32, f32, f32, f32, f32, f32, f32, f32)>
   llvm.return %0 : !llvm.struct<(f32, f32, f32, f32, f32, f32, f32, f32)>
 }
 

diff --git a/mlir/test/Target/rocdl.mlir b/mlir/test/Target/rocdl.mlir
index b93c5f53e6ab..1f4b8b03c81b 100644
--- a/mlir/test/Target/rocdl.mlir
+++ b/mlir/test/Target/rocdl.mlir
@@ -43,133 +43,133 @@ llvm.func @rocdl.barrier() {
 }
 
 llvm.func @rocdl.xdlops(%arg0 : f32, %arg1 : f32,
-                   %arg2 : !llvm.vec<32 x f32>, %arg3 : i32,
-                   %arg4 : !llvm.vec<16 x f32>, %arg5 : !llvm.vec<4 x f32>,
-                   %arg6 : !llvm.vec<4 x f16>, %arg7 : !llvm.vec<32 x i32>,
-                   %arg8 : !llvm.vec<16 x i32>, %arg9 : !llvm.vec<4 x i32>,
-                   %arg10 : !llvm.vec<2 x i16>) -> !llvm.vec<32 x f32> {
+                   %arg2 : vector<32 x f32>, %arg3 : i32,
+                   %arg4 : vector<16 x f32>, %arg5 : vector<4xf32>,
+                   %arg6 : vector<4xf16>, %arg7 : vector<32 x i32>,
+                   %arg8 : vector<16 x i32>, %arg9 : vector<4xi32>,
+                   %arg10 : vector<2xi16>) -> vector<32 x f32> {
   // CHECK-LABEL: rocdl.xdlops
   // CHECK: call <32 x float> @llvm.amdgcn.mfma.f32.32x32x1f32(float %{{.*}}, float %{{.*}}, <32 x float> %{{.*}}, i32 %{{.*}}, i32 %{{.*}}, i32 %{{.*}})
   %r0 = rocdl.mfma.f32.32x32x1f32 %arg0, %arg1, %arg2, %arg3, %arg3, %arg3 :
-                            (f32, f32, !llvm.vec<32 x f32>,
-                            i32, i32, i32) -> !llvm.vec<32 x f32>
+                            (f32, f32, vector<32 x f32>,
+                            i32, i32, i32) -> vector<32 x f32>
 
   // CHECK: call <16 x float> @llvm.amdgcn.mfma.f32.16x16x1f32(float %{{.*}}, float %{{.*}}, <16 x float> %{{.*}}, i32 %{{.*}}, i32 %{{.*}}, i32 %{{.*}})
   %r1 = rocdl.mfma.f32.16x16x1f32 %arg0, %arg1, %arg4, %arg3, %arg3, %arg3 :
-                            (f32, f32, !llvm.vec<16 x f32>,
-                            i32, i32, i32) -> !llvm.vec<16 x f32>
+                            (f32, f32, vector<16 x f32>,
+                            i32, i32, i32) -> vector<16 x f32>
 
   // CHECK: call <4 x float> @llvm.amdgcn.mfma.f32.16x16x4f32(float %{{.*}}, float %{{.*}}, <4 x float> %{{.*}}, i32 %{{.*}}, i32 %{{.*}}, i32 %{{.*}})
   %r2 = rocdl.mfma.f32.16x16x4f32 %arg0, %arg1, %arg5, %arg3, %arg3, %arg3 :
-                            (f32, f32, !llvm.vec<4 x f32>,
-                            i32, i32, i32) -> !llvm.vec<4 x f32>
+                            (f32, f32, vector<4xf32>,
+                            i32, i32, i32) -> vector<4xf32>
 
   // CHECK: call <4 x float> @llvm.amdgcn.mfma.f32.4x4x1f32(float %{{.*}}, float %{{.*}}, <4 x float> %{{.*}}, i32 %{{.*}}, i32 %{{.*}}, i32 %{{.*}})
   %r3 = rocdl.mfma.f32.4x4x1f32 %arg0, %arg1, %arg5, %arg3, %arg3, %arg3 :
-                            (f32, f32, !llvm.vec<4 x f32>,
-                            i32, i32, i32) -> !llvm.vec<4 x f32>
+                            (f32, f32, vector<4xf32>,
+                            i32, i32, i32) -> vector<4xf32>
 
   // CHECK: call <16 x float> @llvm.amdgcn.mfma.f32.32x32x2f32(float %{{.*}}, float %{{.*}}, <16 x float> %{{.*}}, i32 %{{.*}}, i32 %{{.*}}, i32 %{{.*}})
   %r4= rocdl.mfma.f32.32x32x2f32 %arg0, %arg1, %arg4, %arg3, %arg3, %arg3 :
-                            (f32, f32, !llvm.vec<16 x f32>,
-                            i32, i32, i32) -> !llvm.vec<16 x f32>
+                            (f32, f32, vector<16 x f32>,
+                            i32, i32, i32) -> vector<16 x f32>
 
   // CHECK: call <32 x float> @llvm.amdgcn.mfma.f32.32x32x4f16(<4 x half> %{{.*}}, <4 x half> %{{.*}}, <32 x float> %{{.*}}, i32 %{{.*}}, i32 %{{.*}}, i32 %{{.*}})
   %r5 = rocdl.mfma.f32.32x32x4f16 %arg6, %arg6, %arg2, %arg3, %arg3, %arg3 :
-                            (!llvm.vec<4 x f16>, !llvm.vec<4 x f16>, !llvm.vec<32 x f32>,
-                            i32, i32, i32) -> !llvm.vec<32 x f32>
+                            (vector<4xf16>, vector<4xf16>, vector<32 x f32>,
+                            i32, i32, i32) -> vector<32 x f32>
 
   // CHECK: call <16 x float> @llvm.amdgcn.mfma.f32.16x16x4f16(<4 x half> %{{.*}}, <4 x half> %{{.*}}, <16 x float> %{{.*}}, i32 %{{.*}}, i32 %{{.*}}, i32 %{{.*}})
   %r6 = rocdl.mfma.f32.16x16x4f16 %arg6, %arg6, %arg4, %arg3, %arg3, %arg3 :
-                            (!llvm.vec<4 x f16>, !llvm.vec<4 x f16>, !llvm.vec<16 x f32>,
-                            i32, i32, i32) -> !llvm.vec<16 x f32>
+                            (vector<4xf16>, vector<4xf16>, vector<16 x f32>,
+                            i32, i32, i32) -> vector<16 x f32>
 
   // CHECK: call <4 x float> @llvm.amdgcn.mfma.f32.4x4x4f16(<4 x half> %{{.*}}, <4 x half> %{{.*}}, <4 x float> %{{.*}}, i32 %{{.*}}, i32 %{{.*}}, i32 %{{.*}})
   %r7 = rocdl.mfma.f32.4x4x4f16 %arg6, %arg6, %arg5, %arg3, %arg3, %arg3 :
-                            (!llvm.vec<4 x f16>, !llvm.vec<4 x f16>, !llvm.vec<4 x f32>,
-                            i32, i32, i32) -> !llvm.vec<4 x f32>
+                            (vector<4xf16>, vector<4xf16>, vector<4xf32>,
+                            i32, i32, i32) -> vector<4xf32>
 
   // CHECK: call <16 x float> @llvm.amdgcn.mfma.f32.32x32x8f16(<4 x half> %{{.*}}, <4 x half> %{{.*}}, <16 x float> %{{.*}}, i32 %{{.*}}, i32 %{{.*}}, i32 %{{.*}})
   %r8 = rocdl.mfma.f32.32x32x8f16 %arg6, %arg6, %arg4, %arg3, %arg3, %arg3 :
-                            (!llvm.vec<4 x f16>, !llvm.vec<4 x f16>, !llvm.vec<16 x f32>,
-                            i32, i32, i32) -> !llvm.vec<16 x f32>
+                            (vector<4xf16>, vector<4xf16>, vector<16 x f32>,
+                            i32, i32, i32) -> vector<16 x f32>
 
   // CHECK: call <4 x float> @llvm.amdgcn.mfma.f32.16x16x16f16(<4 x half> %{{.*}}, <4 x half> %{{.*}}, <4 x float> %{{.*}}, i32 %{{.*}}, i32 %{{.*}}, i32 %{{.*}})
   %r9 = rocdl.mfma.f32.16x16x16f16 %arg6, %arg6, %arg5, %arg3, %arg3, %arg3 :
-                            (!llvm.vec<4 x f16>, !llvm.vec<4 x f16>, !llvm.vec<4 x f32>,
-                            i32, i32, i32) -> !llvm.vec<4 x f32>
+                            (vector<4xf16>, vector<4xf16>, vector<4xf32>,
+                            i32, i32, i32) -> vector<4xf32>
 
   // CHECK: call <32 x i32> @llvm.amdgcn.mfma.i32.32x32x4i8(i32 %{{.*}}, i32 %{{.*}}, <32 x i32> %{{.*}}, i32 %{{.*}}, i32 %{{.*}}, i32 %{{.*}})
   %r10 = rocdl.mfma.i32.32x32x4i8 %arg3, %arg3, %arg7, %arg3, %arg3, %arg3 :
-                            (i32, i32, !llvm.vec<32 x i32>,
-                            i32, i32, i32) -> !llvm.vec<32 x i32>
+                            (i32, i32, vector<32 x i32>,
+                            i32, i32, i32) -> vector<32 x i32>
 
   // CHECK: call <16 x i32> @llvm.amdgcn.mfma.i32.16x16x4i8(i32 %{{.*}}, i32 %{{.*}}, <16 x i32> %{{.*}}, i32 %{{.*}}, i32 %{{.*}}, i32 %{{.*}})
   %r11 = rocdl.mfma.i32.16x16x4i8 %arg3, %arg3, %arg8, %arg3, %arg3, %arg3 :
-                            (i32, i32, !llvm.vec<16 x i32>,
-                            i32, i32, i32) -> !llvm.vec<16 x i32>
+                            (i32, i32, vector<16xi32>,
+                            i32, i32, i32) -> vector<16xi32>
 
   // CHECK: call <4 x i32> @llvm.amdgcn.mfma.i32.4x4x4i8(i32 %{{.*}}, i32 %{{.*}}, <4 x i32> %{{.*}}, i32 %{{.*}}, i32 %{{.*}}, i32 %{{.*}})
   %r12 = rocdl.mfma.i32.4x4x4i8 %arg3, %arg3, %arg9, %arg3, %arg3, %arg3 :
-                            (i32, i32, !llvm.vec<4 x i32>,
-                            i32, i32, i32) -> !llvm.vec<4 x i32>
+                            (i32, i32, vector<4xi32>,
+                            i32, i32, i32) -> vector<4xi32>
 
   // CHECK: call <16 x i32> @llvm.amdgcn.mfma.i32.32x32x8i8(i32 %{{.*}}, i32 %{{.*}}, <16 x i32> %{{.*}}, i32 %{{.*}}, i32 %{{.*}}, i32 %{{.*}})
   %r13 = rocdl.mfma.i32.32x32x8i8 %arg3, %arg3, %arg8, %arg3, %arg3, %arg3 :
-                            (i32, i32, !llvm.vec<16 x i32>,
-                            i32, i32, i32) -> !llvm.vec<16 x i32>
+                            (i32, i32, vector<16xi32>,
+                            i32, i32, i32) -> vector<16xi32>
 
   // CHECK: call <4 x i32> @llvm.amdgcn.mfma.i32.16x16x16i8(i32 %{{.*}}, i32 %{{.*}}, <4 x i32> %{{.*}}, i32 %{{.*}}, i32 %{{.*}}, i32 %{{.*}})
   %r14 = rocdl.mfma.i32.16x16x16i8 %arg3, %arg3, %arg9, %arg3, %arg3, %arg3 :
-                            (i32, i32, !llvm.vec<4 x i32>,
-                            i32, i32, i32) -> !llvm.vec<4 x i32>
+                            (i32, i32, vector<4xi32>,
+                            i32, i32, i32) -> vector<4xi32>
 
   // CHECK: call <32 x float> @llvm.amdgcn.mfma.f32.32x32x2bf16(<2 x i16> %{{.*}}, <2 x i16> %{{.*}}, <32 x float> %{{.*}}, i32 %{{.*}}, i32 %{{.*}}, i32 %{{.*}})
   %r15 = rocdl.mfma.f32.32x32x2bf16 %arg10, %arg10, %arg2, %arg3, %arg3, %arg3 :
-                            (!llvm.vec<2 x i16>, !llvm.vec<2 x i16>, !llvm.vec<32 x f32>,
-                            i32, i32, i32) -> !llvm.vec<32 x f32>
+                            (vector<2xi16>, vector<2xi16>, vector<32xf32>,
+                            i32, i32, i32) -> vector<32xf32>
 
   // CHECK: call <16 x float> @llvm.amdgcn.mfma.f32.16x16x2bf16(<2 x i16> %{{.*}}, <2 x i16> %{{.*}}, <16 x float> %{{.*}}, i32 %{{.*}}, i32 %{{.*}}, i32 %{{.*}})
   %r16 = rocdl.mfma.f32.16x16x2bf16 %arg10, %arg10, %arg4, %arg3, %arg3, %arg3 :
-                            (!llvm.vec<2 x i16>, !llvm.vec<2 x i16>, !llvm.vec<16 x f32>,
-                            i32, i32, i32) -> !llvm.vec<16 x f32>
+                            (vector<2xi16>, vector<2xi16>, vector<16xf32>,
+                            i32, i32, i32) -> vector<16xf32>
 
   // CHECK: call <4 x float> @llvm.amdgcn.mfma.f32.4x4x2bf16(<2 x i16> %{{.*}}, <2 x i16> %{{.*}}, <4 x float> %{{.*}}, i32 %{{.*}}, i32 %{{.*}}, i32 %{{.*}})
   %r17 = rocdl.mfma.f32.4x4x2bf16 %arg10, %arg10, %arg5, %arg3, %arg3, %arg3 :
-                            (!llvm.vec<2 x i16>, !llvm.vec<2 x i16>, !llvm.vec<4 x f32>,
-                            i32, i32, i32) -> !llvm.vec<4 x f32>
+                            (vector<2xi16>, vector<2xi16>, vector<4xf32>,
+                            i32, i32, i32) -> vector<4xf32>
 
   // CHECK: call <16 x float> @llvm.amdgcn.mfma.f32.32x32x4bf16(<2 x i16> %{{.*}}, <2 x i16> %{{.*}}, <16 x float> %{{.*}}, i32 %{{.*}}, i32 %{{.*}}, i32 %{{.*}})
   %r18 = rocdl.mfma.f32.32x32x4bf16 %arg10, %arg10, %arg4, %arg3, %arg3, %arg3 :
-                            (!llvm.vec<2 x i16>, !llvm.vec<2 x i16>, !llvm.vec<16 x f32>,
-                            i32, i32, i32) -> !llvm.vec<16 x f32>
+                            (vector<2xi16>, vector<2xi16>, vector<16xf32>,
+                            i32, i32, i32) -> vector<16xf32>
 
   // CHECK: call <4 x float> @llvm.amdgcn.mfma.f32.16x16x8bf16(<2 x i16> %{{.*}}, <2 x i16> %{{.*}}, <4 x float> %{{.*}}, i32 %{{.*}}, i32 %{{.*}}, i32 %{{.*}})
   %r19 = rocdl.mfma.f32.16x16x8bf16 %arg10, %arg10, %arg5, %arg3, %arg3, %arg3 :
-                            (!llvm.vec<2 x i16>, !llvm.vec<2 x i16>, !llvm.vec<4 x f32>,
-                            i32, i32, i32) -> !llvm.vec<4 x f32>
+                            (vector<2xi16>, vector<2xi16>, vector<4xf32>,
+                            i32, i32, i32) -> vector<4xf32>
 
-  llvm.return %r0 : !llvm.vec<32 x f32>
+  llvm.return %r0 : vector<32xf32>
 }
 
-llvm.func @rocdl.mubuf(%rsrc : !llvm.vec<4 x i32>, %vindex : i32,
+llvm.func @rocdl.mubuf(%rsrc : vector<4xi32>, %vindex : i32,
                        %offset : i32, %glc : i1,
-                       %slc : i1, %vdata1 : !llvm.vec<1 x f32>,
-                       %vdata2 : !llvm.vec<2 x f32>, %vdata4 : !llvm.vec<4 x f32>) {
+                       %slc : i1, %vdata1 : vector<1xf32>,
+                       %vdata2 : vector<2xf32>, %vdata4 : vector<4xf32>) {
   // CHECK-LABEL: rocdl.mubuf
   // CHECK: call <1 x float> @llvm.amdgcn.buffer.load.v1f32(<4 x i32> %{{.*}}, i32 %{{.*}}, i32 %{{.*}}, i1 %{{.*}}, i1 %{{.*}})
-  %r1 = rocdl.buffer.load %rsrc, %vindex, %offset, %glc, %slc : !llvm.vec<1 x f32>
+  %r1 = rocdl.buffer.load %rsrc, %vindex, %offset, %glc, %slc : vector<1xf32>
   // CHECK: call <2 x float> @llvm.amdgcn.buffer.load.v2f32(<4 x i32> %{{.*}}, i32 %{{.*}}, i32 %{{.*}}, i1 %{{.*}}, i1 %{{.*}})
-  %r2 = rocdl.buffer.load %rsrc, %vindex, %offset, %glc, %slc : !llvm.vec<2 x f32>
+  %r2 = rocdl.buffer.load %rsrc, %vindex, %offset, %glc, %slc : vector<2xf32>
   // CHECK: call <4 x float> @llvm.amdgcn.buffer.load.v4f32(<4 x i32> %{{.*}}, i32 %{{.*}}, i32 %{{.*}}, i1 %{{.*}}, i1 %{{.*}})
-  %r4 = rocdl.buffer.load %rsrc, %vindex, %offset, %glc, %slc : !llvm.vec<4 x f32>
+  %r4 = rocdl.buffer.load %rsrc, %vindex, %offset, %glc, %slc : vector<4xf32>
 
   // CHECK: call void @llvm.amdgcn.buffer.store.v1f32(<1 x float> %{{.*}}, <4 x i32> %{{.*}}, i32 %{{.*}}, i32 %{{.*}}, i1 %{{.*}}, i1 %{{.*}})
-  rocdl.buffer.store %vdata1, %rsrc, %vindex, %offset, %glc, %slc : !llvm.vec<1 x f32>
+  rocdl.buffer.store %vdata1, %rsrc, %vindex, %offset, %glc, %slc : vector<1xf32>
   // CHECK: call void @llvm.amdgcn.buffer.store.v2f32(<2 x float> %{{.*}}, <4 x i32> %{{.*}}, i32 %{{.*}}, i32 %{{.*}}, i1 %{{.*}}, i1 %{{.*}})
-  rocdl.buffer.store %vdata2, %rsrc, %vindex, %offset, %glc, %slc : !llvm.vec<2 x f32>
+  rocdl.buffer.store %vdata2, %rsrc, %vindex, %offset, %glc, %slc : vector<2xf32>
   // CHECK: call void @llvm.amdgcn.buffer.store.v4f32(<4 x float> %{{.*}}, <4 x i32> %{{.*}}, i32 %{{.*}}, i32 %{{.*}}, i1 %{{.*}}, i1 %{{.*}})
-  rocdl.buffer.store %vdata4, %rsrc, %vindex, %offset, %glc, %slc : !llvm.vec<4 x f32>
+  rocdl.buffer.store %vdata4, %rsrc, %vindex, %offset, %glc, %slc : vector<4xf32>
 
   llvm.return
 }
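
// The hunks above migrate only fixed vectors of built-in scalars to the
// built-in vector type; a minimal sketch (function names are illustrative,
// not part of this commit) contrasting the two forms that coexist after
// this change:
//
//   // Built-in vector type: fixed vector of a built-in float; the default
//   // printed form has no spaces around the 'x' separator.
//   llvm.func @uses_builtin(%v : vector<4xf32>) -> vector<4xf32> {
//     llvm.return %v : vector<4xf32>
//   }
//
//   // LLVM dialect vector type: still required for element types with no
//   // built-in equivalent at the time of this commit, e.g. fp128.
//   llvm.func @uses_llvm_vec(%v : !llvm.vec<4 x fp128>) -> !llvm.vec<4 x fp128> {
//     llvm.return %v : !llvm.vec<4 x fp128>
//   }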