<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=us-ascii">
<style type="text/css" style="display:none;"> P {margin-top:0;margin-bottom:0;} </style>
</head>
<body dir="ltr">
<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
Yep, most definitely, thanks for letting me know.</div>
<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
I am looking into it now; I missed that there are matrix tests in the test-suite.</div>
<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
<br>
</div>
<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
Cheers,</div>
<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
Sjoerd.</div>
<div id="appendonsend"></div>
<hr style="display:inline-block;width:98%" tabindex="-1">
<div id="divRplyFwdMsg" dir="ltr"><font face="Calibri, sans-serif" style="font-size:11pt" color="#000000"><b>From:</b> Amara Emerson <amara@apple.com><br>
<b>Sent:</b> 15 July 2020 01:56<br>
<b>To:</b> Sjoerd Meijer <Sjoerd.Meijer@arm.com><br>
<b>Cc:</b> llvm-commits <llvm-commits@lists.llvm.org>; Sjoerd Meijer <llvmlistbot@llvm.org><br>
<b>Subject:</b> Re: [llvm] 2b3c505 - [Matrix] Intrinsic descriptions</font>
<div> </div>
</div>
<div class="" style="word-wrap:break-word; line-break:after-white-space">
<div class="">Did this commit break greendragon? <a href="http://green.lab.llvm.org/green/job/test-suite-verify-machineinstrs-x86_64-O0-g/7891/" class="">http://green.lab.llvm.org/green/job/test-suite-verify-machineinstrs-x86_64-O0-g/7891/</a></div>
<div class=""><br class="">
</div>
<div class="">Thanks,</div>
<div class="">Amara<br class="">
<div><br class="">
<blockquote type="cite" class="">
<div class="">On Jul 14, 2020, at 11:58 AM, Sjoerd Meijer via llvm-commits <<a href="mailto:llvm-commits@lists.llvm.org" class="">llvm-commits@lists.llvm.org</a>> wrote:</div>
<br class="x_Apple-interchange-newline">
<div class="">
<div class=""><br class="">
Author: Sjoerd Meijer<br class="">
Date: 2020-07-14T19:58:16+01:00<br class="">
New Revision: 2b3c505d0f6e776f1cfadd86d2c9bcda971fa45c<br class="">
<br class="">
URL: <a href="https://github.com/llvm/llvm-project/commit/2b3c505d0f6e776f1cfadd86d2c9bcda971fa45c" class="">
https://github.com/llvm/llvm-project/commit/2b3c505d0f6e776f1cfadd86d2c9bcda971fa45c</a><br class="">
DIFF: <a href="https://github.com/llvm/llvm-project/commit/2b3c505d0f6e776f1cfadd86d2c9bcda971fa45c.diff" class="">
https://github.com/llvm/llvm-project/commit/2b3c505d0f6e776f1cfadd86d2c9bcda971fa45c.diff</a><br class="">
<br class="">
LOG: [Matrix] Intrinsic descriptions<br class="">
<br class="">
This changes the matrix load/store intrinsic definitions to load/store from/to<br class="">
a pointer, and not from/to a pointer to a vector, as discussed in D83477.<br class="">
<br class="">
This also includes the recommit of "[Matrix] Tighten LangRef definitions and<br class="">
Verifier checks" which adds improved language reference descriptions of the<br class="">
matrix intrinsics and verifier checks.<br class="">
<br class="">
Differential Revision: <a href="https://reviews.llvm.org/D83785" class="">https://reviews.llvm.org/D83785</a><br class="">
<br class="">
Added: <br class="">
<br class="">
<br class="">
Modified: <br class="">
llvm/docs/LangRef.rst<br class="">
llvm/include/llvm/IR/Intrinsics.td<br class="">
llvm/lib/IR/Verifier.cpp<br class="">
llvm/test/Transforms/LowerMatrixIntrinsics/load-align-volatile.ll<br class="">
llvm/test/Transforms/LowerMatrixIntrinsics/remarks-inlining.ll<br class="">
llvm/test/Transforms/LowerMatrixIntrinsics/remarks.ll<br class="">
llvm/test/Transforms/LowerMatrixIntrinsics/strided-load-double.ll<br class="">
llvm/test/Transforms/LowerMatrixIntrinsics/strided-load-float.ll<br class="">
llvm/test/Transforms/LowerMatrixIntrinsics/strided-load-i32.ll<br class="">
llvm/test/Transforms/LowerMatrixIntrinsics/strided-store-double.ll<br class="">
llvm/test/Verifier/matrix-intrinsics.ll<br class="">
<br class="">
Removed: <br class="">
<br class="">
<br class="">
<br class="">
################################################################################<br class="">
diff --git a/llvm/docs/LangRef.rst b/llvm/docs/LangRef.rst<br class="">
index c82d8c4e5dca..8bb808a3256c 100644<br class="">
--- a/llvm/docs/LangRef.rst<br class="">
+++ b/llvm/docs/LangRef.rst<br class="">
@@ -15525,6 +15525,7 @@ The argument to this intrinsic must be a vector of floating-point values.<br class="">
<br class="">
Syntax:<br class="">
"""""""<br class="">
+This is an overloaded intrinsic.<br class="">
<br class="">
::<br class="">
<br class="">
@@ -15549,17 +15550,20 @@ Matrix Intrinsics<br class="">
-----------------<br class="">
<br class="">
Operations on matrixes requiring shape information (like number of rows/columns<br class="">
-or the memory layout) can be expressed using the matrix intrinsics. Matrixes are<br class="">
-embedded in a flat vector and the intrinsics take the dimensions as arguments.<br class="">
-Currently column-major layout is assumed. The intrinsics support both integer<br class="">
-and floating point matrixes.<br class="">
+or the memory layout) can be expressed using the matrix intrinsics. These<br class="">
+intrinsics require matrix dimensions to be passed as immediate arguments, and<br class="">
+matrixes are passed and returned as vectors. This means that for a ``R`` x<br class="">
+``C`` matrix, element ``i`` of column ``j`` is at index ``j * R + i`` in the<br class="">
+corresponding vector, with indices starting at 0. Currently column-major layout<br class="">
+is assumed. The intrinsics support both integer and floating point matrixes.<br class="">
<br class="">
<br class="">
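As a hypothetical illustration of this layout (not part of the patch): a 3 x 2 double matrix is passed as a <6 x double> vector, so element i of column j sits at vector index j * 3 + i, and row 1 of column 1 is vector element 4:<br class="">
<br class="">
  ; extract element (row 1, column 1) of a 3 x 2 matrix held in %mat<br class="">
  %elt = extractelement <6 x double> %mat, i64 4<br class="">
<br class="">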
'``llvm.matrix.transpose.*``' Intrinsic<br class="">
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^<br class="">
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^<br class="">
<br class="">
Syntax:<br class="">
"""""""<br class="">
+This is an overloaded intrinsic.<br class="">
<br class="">
::<br class="">
<br class="">
@@ -15568,21 +15572,24 @@ Syntax:<br class="">
Overview:<br class="">
"""""""""<br class="">
<br class="">
-The '``llvm.matrix.transpose.*``' intrinsic treats %In as containing a matrix<br class="">
-with <Rows> rows and <Cols> columns and returns the transposed matrix embedded in<br class="">
-the result vector.<br class="">
+The '``llvm.matrix.transpose.*``' intrinsics treat %In as a <Rows> x <Cols> matrix<br class="">
+and return the transposed matrix in the result vector.<br class="">
<br class="">
Arguments:<br class="">
""""""""""<br class="">
<br class="">
-The <Rows> and <Cols> arguments must be constant integers. The vector argument<br class="">
-%In and the returned vector must have <Rows> * <Cols> elements.<br class="">
+The first argument %In is a vector that corresponds to a <Rows> x <Cols> matrix.<br class="">
+Thus, arguments <Rows> and <Cols> correspond to the number of rows and columns,<br class="">
+respectively, and must be positive, constant integers. The returned vector must<br class="">
+have <Rows> * <Cols> elements, and have the same float or integer element type<br class="">
+as %In.<br class="">
<br class="">
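For illustration (a hypothetical sketch based on the description above, not taken from the patch), transposing a 2 x 3 double matrix held in a <6 x double> vector:<br class="">
<br class="">
  ; %t holds the 3 x 2 transpose, still as a <6 x double> vector<br class="">
  %t = call <6 x double> @llvm.matrix.transpose.v6f64.v6f64(<6 x double> %m, i32 2, i32 3)<br class="">
<br class="">
  declare <6 x double> @llvm.matrix.transpose.v6f64.v6f64(<6 x double>, i32, i32)<br class="">
<br class="">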
'``llvm.matrix.multiply.*``' Intrinsic<br class="">
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^<br class="">
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^<br class="">
<br class="">
Syntax:<br class="">
"""""""<br class="">
+This is an overloaded intrinsic.<br class="">
<br class="">
::<br class="">
<br class="">
@@ -15591,18 +15598,19 @@ Syntax:<br class="">
Overview:<br class="">
"""""""""<br class="">
<br class="">
-The '``llvm.matrix.multiply.*``' intrinsic treats %A as a matrix with <OuterRows><br class="">
-rows and <Inner> columns, %B as a matrix with <Inner> rows and <OuterColumns><br class="">
-columns and multiplies them. The result matrix is returned embedded in the<br class="">
-result vector.<br class="">
+The '``llvm.matrix.multiply.*``' intrinsics treat %A as a <OuterRows> x <Inner><br class="">
+matrix, %B as an <Inner> x <OuterColumns> matrix, and multiply them. The result<br class="">
+matrix is returned in the result vector.<br class="">
<br class="">
Arguments:<br class="">
""""""""""<br class="">
<br class="">
-The <OuterRows>, <Inner> and <OuterColumns> arguments must be constant<br class="">
-integers. The vector argument %A must have <OuterRows> * <Inner> elements, %B<br class="">
-must have <Inner> * <OuterColumns> elements and the returned vector must have<br class="">
-<OuterRows> * <OuterColumns> elements.<br class="">
+The first vector argument %A corresponds to a matrix with <OuterRows> * <Inner><br class="">
+elements, and the second argument %B to a matrix with <Inner> * <OuterColumns><br class="">
+elements. Arguments <OuterRows>, <Inner> and <OuterColumns> must be positive,<br class="">
+constant integers. The returned vector must have <OuterRows> * <OuterColumns><br class="">
+elements. Vectors %A, %B, and the returned vector all have the same float or<br class="">
+integer element type.<br class="">
<br class="">
<br class="">
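A hypothetical sketch (the unmangled name below matches how the tests in this patch spell it): multiplying a 2 x 6 matrix by a 6 x 2 matrix to produce a 2 x 2 result:<br class="">
<br class="">
  %c = call <4 x double> @llvm.matrix.multiply(<12 x double> %a, <12 x double> %b, i32 2, i32 6, i32 2)<br class="">
<br class="">
  declare <4 x double> @llvm.matrix.multiply(<12 x double>, <12 x double>, i32, i32, i32)<br class="">
<br class="">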
'``llvm.matrix.column.major.load.*``' Intrinsic<br class="">
@@ -15610,6 +15618,7 @@ must have <Inner> * <OuterColumns> elements and the returned vector must have<br class="">
<br class="">
Syntax:<br class="">
"""""""<br class="">
+This is an overloaded intrinsic.<br class="">
<br class="">
::<br class="">
<br class="">
@@ -15619,22 +15628,26 @@ Syntax:<br class="">
Overview:<br class="">
"""""""""<br class="">
<br class="">
-The '``llvm.matrix.column.major.load.*``' intrinsic loads a matrix with <Rows><br class="">
-rows and <Cols> columns, using a stride of %Stride between columns. For two<br class="">
-consecutive columns A and B, %Stride refers to the distance (the number of<br class="">
-elements) between the start of column A and the start of column B. The result<br class="">
-matrix is returned embedded in the result vector. This allows for convenient<br class="">
-loading of sub matrixes. If <IsVolatile> is true, the intrinsic is considered<br class="">
-a :ref:`volatile memory access <volatile>`.<br class="">
-<br class="">
-If the %Ptr argument is known to be aligned to some boundary, this can be<br class="">
-specified as an attribute on the argument.<br class="">
+The '``llvm.matrix.column.major.load.*``' intrinsics load a <Rows> x <Cols><br class="">
+matrix using a stride of %Stride to compute the start address of the different<br class="">
+columns. This allows for convenient loading of sub matrixes. If <IsVolatile><br class="">
+is true, the intrinsic is considered a :ref:`volatile memory access<br class="">
+<volatile>`. The result matrix is returned in the result vector. If the %Ptr<br class="">
+argument is known to be aligned to some boundary, this can be specified as an<br class="">
+attribute on the argument.<br class="">
<br class="">
Arguments:<br class="">
""""""""""<br class="">
<br class="">
-The <IsVolatile>, <Rows> and <Cols> arguments must be constant integers. The<br class="">
-returned vector must have <Rows> * <Cols> elements. %Stride must be >= <Rows>.<br class="">
+The first argument %Ptr is a pointer type to the returned vector type, and<br class="">
+corresponds to the start address to load from. The second argument %Stride is a<br class="">
+positive, constant integer with %Stride ``>=`` <Rows>. %Stride is used to compute<br class="">
+the column memory addresses. I.e., for a column ``C``, its start memory<br class="">
+address is calculated with %Ptr + ``C`` * %Stride. The third argument<br class="">
+<IsVolatile> is a boolean value. The fourth and fifth arguments, <Rows> and<br class="">
+<Cols>, correspond to the number of rows and columns, respectively, and must be<br class="">
+positive, constant integers. The returned vector must have <Rows> * <Cols><br class="">
+elements.<br class="">
<br class="">
The :ref:`align <attr_align>` parameter attribute can be provided<br class="">
for the %Ptr arguments.<br class="">
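For illustration (a hypothetical sketch modelled on the test updates below, not part of the patch), loading a 3 x 3 double matrix with a stride of 4 elements between column starts:<br class="">
<br class="">
  %m = call <9 x double> @llvm.matrix.column.major.load.v9f64(double* align 16 %ptr, i64 4, i1 false, i32 3, i32 3)<br class="">
<br class="">
  declare <9 x double> @llvm.matrix.column.major.load.v9f64(double*, i64, i1, i32, i32)<br class="">
<br class="">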
@@ -15654,12 +15667,10 @@ Syntax:<br class="">
Overview:<br class="">
"""""""""<br class="">
<br class="">
-The '``llvm.matrix.column.major.store.*``' intrinsic stores the matrix with<br class="">
-<Rows> rows and <Cols> columns embedded in %In, using a stride of %Stride<br class="">
-between columns. For two consecutive columns A and B, %Stride refers to the<br class="">
-distance (the number of elements) between the start of column A and the start<br class="">
-of column B. If <IsVolatile> is true, the intrinsic is considered a<br class="">
-:ref:`volatile memory access <volatile>`.<br class="">
+The '``llvm.matrix.column.major.store.*``' intrinsics store the <Rows> x <Cols><br class="">
+matrix in %In to memory using a stride of %Stride between columns. If<br class="">
+<IsVolatile> is true, the intrinsic is considered a :ref:`volatile memory<br class="">
+access <volatile>`.<br class="">
<br class="">
If the %Ptr argument is known to be aligned to some boundary, this can be<br class="">
specified as an attribute on the argument.<br class="">
@@ -15667,8 +15678,15 @@ specified as an attribute on the argument.<br class="">
Arguments:<br class="">
""""""""""<br class="">
<br class="">
-The <IsVolatile>, <Rows>, <Cols> arguments must be constant integers. The<br class="">
-vector argument %In must have <Rows> * <Cols> elements. %Stride must be >= <Rows>.<br class="">
+The first argument %In is a vector that corresponds to a <Rows> x <Cols> matrix<br class="">
+to be stored to memory. The second argument %Ptr is a pointer to the vector<br class="">
+type of %In, and is the start address of the matrix in memory. The third<br class="">
+argument %Stride is a positive, constant integer with %Stride ``>=`` <Rows>.<br class="">
+%Stride is used to compute the column memory addresses. I.e., for a column<br class="">
+``C``, its start memory address is calculated with %Ptr + ``C`` * %Stride.<br class="">
+The fourth argument <IsVolatile> is a boolean value. The arguments <Rows> and<br class="">
+<Cols> correspond to the number of rows and columns, respectively, and must be<br class="">
+positive, constant integers.<br class="">
<br class="">
The :ref:`align <attr_align>` parameter attribute can be provided<br class="">
for the %Ptr arguments.<br class="">
<br class="">
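Likewise, a hypothetical sketch of storing a 3 x 2 double matrix with a stride of 5 elements between column starts (not part of the patch):<br class="">
<br class="">
  call void @llvm.matrix.column.major.store.v6f64(<6 x double> %m, double* %ptr, i64 5, i1 false, i32 3, i32 2)<br class="">
<br class="">
  declare void @llvm.matrix.column.major.store.v6f64(<6 x double>, double*, i64, i1, i32, i32)<br class="">
<br class="">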
diff --git a/llvm/include/llvm/IR/Intrinsics.td b/llvm/include/llvm/IR/Intrinsics.td<br class="">
index 94741229a2a7..9b14a07eb7b9 100644<br class="">
--- a/llvm/include/llvm/IR/Intrinsics.td<br class="">
+++ b/llvm/include/llvm/IR/Intrinsics.td<br class="">
@@ -1458,7 +1458,7 @@ def int_matrix_multiply<br class="">
<br class="">
def int_matrix_column_major_load<br class="">
: Intrinsic<[llvm_anyvector_ty],<br class="">
- [LLVMAnyPointerType<LLVMMatchType<0>>, llvm_i64_ty, llvm_i1_ty,<br class="">
+ [LLVMPointerToElt<0>, llvm_i64_ty, llvm_i1_ty,<br class="">
llvm_i32_ty, llvm_i32_ty],<br class="">
[IntrNoSync, IntrWillReturn, IntrArgMemOnly, IntrReadMem,<br class="">
NoCapture<ArgIndex<0>>, ImmArg<ArgIndex<2>>, ImmArg<ArgIndex<3>>,<br class="">
@@ -1466,7 +1466,7 @@ def int_matrix_column_major_load<br class="">
<br class="">
def int_matrix_column_major_store<br class="">
: Intrinsic<[],<br class="">
- [llvm_anyvector_ty, LLVMAnyPointerType<LLVMMatchType<0>>,<br class="">
+ [llvm_anyvector_ty, LLVMPointerToElt<0>,<br class="">
llvm_i64_ty, llvm_i1_ty, llvm_i32_ty, llvm_i32_ty],<br class="">
[IntrNoSync, IntrWillReturn, IntrArgMemOnly, IntrWriteMem,<br class="">
WriteOnly<ArgIndex<1>>, NoCapture<ArgIndex<1>>,<br class="">
<br class="">
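The effect of switching from LLVMAnyPointerType<LLVMMatchType<0>> to LLVMPointerToElt<0> shows up in the test updates below: the pointer operand now points at the element type rather than the whole vector, e.g.<br class="">
<br class="">
  ; before: pointer to the full vector type<br class="">
  declare <9 x double> @llvm.matrix.column.major.load.v9f64.p0v9f64(<9 x double>*, i64, i1, i32, i32)<br class="">
  ; after: pointer to the element type<br class="">
  declare <9 x double> @llvm.matrix.column.major.load.v9f64(double*, i64, i1, i32, i32)<br class="">
<br class="">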
diff --git a/llvm/lib/IR/Verifier.cpp b/llvm/lib/IR/Verifier.cpp<br class="">
index 3c8e73a03cc5..6df1072925f9 100644<br class="">
--- a/llvm/lib/IR/Verifier.cpp<br class="">
+++ b/llvm/lib/IR/Verifier.cpp<br class="">
@@ -5017,36 +5017,73 @@ void Verifier::visitIntrinsicCall(Intrinsic::ID ID, CallBase &Call) {<br class="">
case Intrinsic::matrix_transpose:<br class="">
case Intrinsic::matrix_column_major_load:<br class="">
case Intrinsic::matrix_column_major_store: {<br class="">
+ Function *IF = Call.getCalledFunction();<br class="">
+ ConstantInt *Stride = nullptr;<br class="">
ConstantInt *NumRows;<br class="">
ConstantInt *NumColumns;<br class="">
- VectorType *TypeToCheck;<br class="">
+ VectorType *ResultTy;<br class="">
+ Type *Op0ElemTy = nullptr;<br class="">
+ Type *Op1ElemTy = nullptr;<br class="">
switch (ID) {<br class="">
case Intrinsic::matrix_multiply:<br class="">
NumRows = cast<ConstantInt>(Call.getArgOperand(2));<br class="">
NumColumns = cast<ConstantInt>(Call.getArgOperand(4));<br class="">
- TypeToCheck = cast<VectorType>(Call.getType());<br class="">
+ ResultTy = cast<VectorType>(Call.getType());<br class="">
+ Op0ElemTy =<br class="">
+ cast<VectorType>(Call.getArgOperand(0)->getType())->getElementType();<br class="">
+ Op1ElemTy =<br class="">
+ cast<VectorType>(Call.getArgOperand(1)->getType())->getElementType();<br class="">
break;<br class="">
case Intrinsic::matrix_transpose:<br class="">
NumRows = cast<ConstantInt>(Call.getArgOperand(1));<br class="">
NumColumns = cast<ConstantInt>(Call.getArgOperand(2));<br class="">
- TypeToCheck = cast<VectorType>(Call.getType());<br class="">
+ ResultTy = cast<VectorType>(Call.getType());<br class="">
+ Op0ElemTy =<br class="">
+ cast<VectorType>(Call.getArgOperand(0)->getType())->getElementType();<br class="">
break;<br class="">
case Intrinsic::matrix_column_major_load:<br class="">
+ Stride = dyn_cast<ConstantInt>(Call.getArgOperand(1));<br class="">
NumRows = cast<ConstantInt>(Call.getArgOperand(3));<br class="">
NumColumns = cast<ConstantInt>(Call.getArgOperand(4));<br class="">
- TypeToCheck = cast<VectorType>(Call.getType());<br class="">
+ ResultTy = cast<VectorType>(Call.getType());<br class="">
+ Op0ElemTy =<br class="">
+ cast<PointerType>(Call.getArgOperand(0)->getType())->getElementType();<br class="">
break;<br class="">
case Intrinsic::matrix_column_major_store:<br class="">
+ Stride = dyn_cast<ConstantInt>(Call.getArgOperand(2));<br class="">
NumRows = cast<ConstantInt>(Call.getArgOperand(4));<br class="">
NumColumns = cast<ConstantInt>(Call.getArgOperand(5));<br class="">
- TypeToCheck = cast<VectorType>(Call.getArgOperand(0)->getType());<br class="">
+ ResultTy = cast<VectorType>(Call.getArgOperand(0)->getType());<br class="">
+ Op0ElemTy =<br class="">
+ cast<VectorType>(Call.getArgOperand(0)->getType())->getElementType();<br class="">
+ Op1ElemTy =<br class="">
+ cast<PointerType>(Call.getArgOperand(1)->getType())->getElementType();<br class="">
break;<br class="">
default:<br class="">
llvm_unreachable("unexpected intrinsic");<br class="">
}<br class="">
- Assert(TypeToCheck->getNumElements() ==<br class="">
+<br class="">
+ Assert(ResultTy->getElementType()->isIntegerTy() ||<br class="">
+ ResultTy->getElementType()->isFloatingPointTy(),<br class="">
+ "Result type must be an integer or floating-point type!", IF);<br class="">
+<br class="">
+ Assert(ResultTy->getElementType() == Op0ElemTy,<br class="">
+ "Vector element type mismatch of the result and first operand "<br class="">
+ "vector!", IF);<br class="">
+<br class="">
+ if (Op1ElemTy)<br class="">
+ Assert(ResultTy->getElementType() == Op1ElemTy,<br class="">
+ "Vector element type mismatch of the result and second operand "<br class="">
+ "vector!", IF);<br class="">
+<br class="">
+ Assert(ResultTy->getNumElements() ==<br class="">
NumRows->getZExtValue() * NumColumns->getZExtValue(),<br class="">
- "result of a matrix operation does not fit in the returned vector");<br class="">
+ "Result of a matrix operation does not fit in the returned vector!");<br class="">
+<br class="">
+ if (Stride)<br class="">
+ Assert(Stride->getZExtValue() >= NumRows->getZExtValue(),<br class="">
+ "Stride must be greater or equal than the number of rows!", IF);<br class="">
+<br class="">
break;<br class="">
}<br class="">
};<br class="">
<br class="">
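As a hypothetical illustration of the new checks (not part of the patch), IR like the following would now be rejected because the constant stride (2) is smaller than the number of rows (3):<br class="">
<br class="">
  ; verifier error: "Stride must be greater or equal than the number of rows!"<br class="">
  %m = call <9 x double> @llvm.matrix.column.major.load.v9f64(double* %p, i64 2, i1 false, i32 3, i32 3)<br class="">
<br class="">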
diff --git a/llvm/test/Transforms/LowerMatrixIntrinsics/load-align-volatile.ll b/llvm/test/Transforms/LowerMatrixIntrinsics/load-align-volatile.ll<br class="">
index 14b81a1d8d9b..9fe38b4d336d 100644<br class="">
--- a/llvm/test/Transforms/LowerMatrixIntrinsics/load-align-volatile.ll<br class="">
+++ b/llvm/test/Transforms/LowerMatrixIntrinsics/load-align-volatile.ll<br class="">
@@ -1,30 +1,29 @@<br class="">
; RUN: opt -lower-matrix-intrinsics -S < %s | FileCheck %s<br class="">
; RUN: opt -passes='lower-matrix-intrinsics' -S < %s | FileCheck %s<br class="">
<br class="">
-define <9 x double> @strided_load_3x3_volatile(<9 x double>* %in, i64 %stride) {<br class="">
+define <9 x double> @strided_load_3x3_volatile(double* %in, i64 %stride) {<br class="">
; CHECK-LABEL: @strided_load_3x3_volatile(<br class="">
; CHECK-NEXT: entry:<br class="">
-; CHECK-NEXT: [[TMP0:%.*]] = bitcast <9 x double>* [[IN:%.*]] to double*<br class="">
; CHECK-NEXT: [[VEC_START:%.*]] = mul i64 0, [[STRIDE:%.*]]<br class="">
-; CHECK-NEXT: [[VEC_GEP:%.*]] = getelementptr double, double* [[TMP0]], i64 [[VEC_START]]<br class="">
+; CHECK-NEXT: [[VEC_GEP:%.*]] = getelementptr double, double* %in, i64 [[VEC_START]]<br class="">
; CHECK-NEXT: [[VEC_CAST:%.*]] = bitcast double* [[VEC_GEP]] to <3 x double>*<br class="">
; CHECK-NEXT: load volatile <3 x double>, <3 x double>* [[VEC_CAST]], align 8<br class="">
; CHECK-NEXT: [[VEC_START1:%.*]] = mul i64 1, [[STRIDE]]<br class="">
-; CHECK-NEXT: [[VEC_GEP2:%.*]] = getelementptr double, double* [[TMP0]], i64 [[VEC_START1]]<br class="">
+; CHECK-NEXT: [[VEC_GEP2:%.*]] = getelementptr double, double* %in, i64 [[VEC_START1]]<br class="">
; CHECK-NEXT: [[VEC_CAST3:%.*]] = bitcast double* [[VEC_GEP2]] to <3 x double>*<br class="">
; CHECK-NEXT: load volatile <3 x double>, <3 x double>* [[VEC_CAST3]], align 8<br class="">
; CHECK-NEXT: [[VEC_START5:%.*]] = mul i64 2, [[STRIDE]]<br class="">
-; CHECK-NEXT: [[VEC_GEP6:%.*]] = getelementptr double, double* [[TMP0]], i64 [[VEC_START5]]<br class="">
+; CHECK-NEXT: [[VEC_GEP6:%.*]] = getelementptr double, double* %in, i64 [[VEC_START5]]<br class="">
; CHECK-NEXT: [[VEC_CAST7:%.*]] = bitcast double* [[VEC_GEP6]] to <3 x double>*<br class="">
; CHECK-NEXT: load volatile <3 x double>, <3 x double>* [[VEC_CAST7]], align 8<br class="">
; CHECK-NOT: = load<br class="">
;<br class="">
entry:<br class="">
- %load = call <9 x double> @llvm.matrix.column.major.load.v9f64(<9 x double>* %in, i64 %stride, i1 true, i32 3, i32 3)<br class="">
+ %load = call <9 x double> @llvm.matrix.column.major.load.v9f64(double* %in, i64 %stride, i1 true, i32 3, i32 3)<br class="">
ret <9 x double> %load<br class="">
}<br class="">
<br class="">
-declare <9 x double> @llvm.matrix.column.major.load.v9f64(<9 x double>*, i64, i1, i32, i32)<br class="">
+declare <9 x double> @llvm.matrix.column.major.load.v9f64(double*, i64, i1, i32, i32)<br class="">
<br class="">
define <4 x double> @load_volatile_multiply(<4 x double>* %in) {<br class="">
; CHECK-LABEL: @load_volatile_multiply(<br class="">
@@ -44,49 +43,47 @@ define <4 x double> @load_volatile_multiply(<4 x double>* %in) {<br class="">
declare <4 x double> @llvm.matrix.multiply(<4 x double>, <4 x double>, i32, i32, i32)<br class="">
<br class="">
<br class="">
-define <9 x double> @strided_load_3x3_align32(<9 x double>* %in, i64 %stride) {<br class="">
+define <9 x double> @strided_load_3x3_align32(double* %in, i64 %stride) {<br class="">
; CHECK-LABEL: @strided_load_3x3_align32(<br class="">
; CHECK-NEXT: entry:<br class="">
-; CHECK-NEXT: [[TMP0:%.*]] = bitcast <9 x double>* [[IN:%.*]] to double*<br class="">
; CHECK-NEXT: [[VEC_START:%.*]] = mul i64 0, [[STRIDE:%.*]]<br class="">
-; CHECK-NEXT: [[VEC_GEP:%.*]] = getelementptr double, double* [[TMP0]], i64 [[VEC_START]]<br class="">
+; CHECK-NEXT: [[VEC_GEP:%.*]] = getelementptr double, double* %in, i64 [[VEC_START]]<br class="">
; CHECK-NEXT: [[VEC_CAST:%.*]] = bitcast double* [[VEC_GEP]] to <3 x double>*<br class="">
; CHECK-NEXT: load <3 x double>, <3 x double>* [[VEC_CAST]], align 32<br class="">
; CHECK-NEXT: [[VEC_START1:%.*]] = mul i64 1, [[STRIDE]]<br class="">
-; CHECK-NEXT: [[VEC_GEP2:%.*]] = getelementptr double, double* [[TMP0]], i64 [[VEC_START1]]<br class="">
+; CHECK-NEXT: [[VEC_GEP2:%.*]] = getelementptr double, double* %in, i64 [[VEC_START1]]<br class="">
; CHECK-NEXT: [[VEC_CAST3:%.*]] = bitcast double* [[VEC_GEP2]] to <3 x double>*<br class="">
; CHECK-NEXT: load <3 x double>, <3 x double>* [[VEC_CAST3]], align 8<br class="">
; CHECK-NEXT: [[VEC_START5:%.*]] = mul i64 2, [[STRIDE]]<br class="">
-; CHECK-NEXT: [[VEC_GEP6:%.*]] = getelementptr double, double* [[TMP0]], i64 [[VEC_START5]]<br class="">
+; CHECK-NEXT: [[VEC_GEP6:%.*]] = getelementptr double, double* %in, i64 [[VEC_START5]]<br class="">
; CHECK-NEXT: [[VEC_CAST7:%.*]] = bitcast double* [[VEC_GEP6]] to <3 x double>*<br class="">
; CHECK-NEXT: load <3 x double>, <3 x double>* [[VEC_CAST7]], align 8<br class="">
; CHECK-NOT: = load<br class="">
;<br class="">
entry:<br class="">
- %load = call <9 x double> @llvm.matrix.column.major.load.v9f64(<9 x double>* align 32 %in, i64 %stride, i1 false, i32 3, i32 3)<br class="">
+ %load = call <9 x double> @llvm.matrix.column.major.load.v9f64(double* align 32 %in, i64 %stride, i1 false, i32 3, i32 3)<br class="">
ret <9 x double> %load<br class="">
}<br class="">
<br class="">
-define <9 x double> @strided_load_3x3_align2(<9 x double>* %in, i64 %stride) {<br class="">
+define <9 x double> @strided_load_3x3_align2(double* %in, i64 %stride) {<br class="">
; CHECK-LABEL: @strided_load_3x3_align2(<br class="">
; CHECK-NEXT: entry:<br class="">
-; CHECK-NEXT: [[TMP0:%.*]] = bitcast <9 x double>* [[IN:%.*]] to double*<br class="">
; CHECK-NEXT: [[VEC_START:%.*]] = mul i64 0, [[STRIDE:%.*]]<br class="">
-; CHECK-NEXT: [[VEC_GEP:%.*]] = getelementptr double, double* [[TMP0]], i64 [[VEC_START]]<br class="">
+; CHECK-NEXT: [[VEC_GEP:%.*]] = getelementptr double, double* %in, i64 [[VEC_START]]<br class="">
; CHECK-NEXT: [[VEC_CAST:%.*]] = bitcast double* [[VEC_GEP]] to <3 x double>*<br class="">
; CHECK-NEXT: load <3 x double>, <3 x double>* [[VEC_CAST]], align 2<br class="">
; CHECK-NEXT: [[VEC_START1:%.*]] = mul i64 1, [[STRIDE]]<br class="">
-; CHECK-NEXT: [[VEC_GEP2:%.*]] = getelementptr double, double* [[TMP0]], i64 [[VEC_START1]]<br class="">
+; CHECK-NEXT: [[VEC_GEP2:%.*]] = getelementptr double, double* %in, i64 [[VEC_START1]]<br class="">
; CHECK-NEXT: [[VEC_CAST3:%.*]] = bitcast double* [[VEC_GEP2]] to <3 x double>*<br class="">
; CHECK-NEXT: load <3 x double>, <3 x double>* [[VEC_CAST3]], align 2<br class="">
; CHECK-NEXT: [[VEC_START5:%.*]] = mul i64 2, [[STRIDE]]<br class="">
-; CHECK-NEXT: [[VEC_GEP6:%.*]] = getelementptr double, double* [[TMP0]], i64 [[VEC_START5]]<br class="">
+; CHECK-NEXT: [[VEC_GEP6:%.*]] = getelementptr double, double* %in, i64 [[VEC_START5]]<br class="">
; CHECK-NEXT: [[VEC_CAST7:%.*]] = bitcast double* [[VEC_GEP6]] to <3 x double>*<br class="">
; CHECK-NEXT: load <3 x double>, <3 x double>* [[VEC_CAST7]], align 2<br class="">
; CHECK-NOT: = load<br class="">
;<br class="">
entry:<br class="">
- %load = call <9 x double> @llvm.matrix.column.major.load.v9f64(<9 x double>* align 2 %in, i64 %stride, i1 false, i32 3, i32 3)<br class="">
+ %load = call <9 x double> @llvm.matrix.column.major.load.v9f64(double* align 2 %in, i64 %stride, i1 false, i32 3, i32 3)<br class="">
ret <9 x double> %load<br class="">
}<br class="">
<br class="">
@@ -106,16 +103,15 @@ define <4 x double> @load_align2_multiply(<4 x double>* %in) {<br class="">
ret <4 x double> %res<br class="">
}<br class="">
<br class="">
-define <6 x float> @strided_load_2x3_align16_stride2(<6 x float>* %in) {<br class="">
+define <6 x float> @strided_load_2x3_align16_stride2(float* %in) {<br class="">
; CHECK-LABEL: @strided_load_2x3_align16_stride2(<br class="">
; CHECK-NEXT: entry:<br class="">
-; CHECK-NEXT: [[TMP0:%.*]] = bitcast <6 x float>* [[IN:%.*]] to float*<br class="">
-; CHECK-NEXT: [[VEC_CAST:%.*]] = bitcast float* [[TMP0]] to <2 x float>*<br class="">
+; CHECK-NEXT: [[VEC_CAST:%.*]] = bitcast float* %in to <2 x float>*<br class="">
; CHECK-NEXT: [[COL_LOAD:%.*]] = load <2 x float>, <2 x float>* [[VEC_CAST]], align 16<br class="">
-; CHECK-NEXT: [[VEC_GEP:%.*]] = getelementptr float, float* [[TMP0]], i64 2<br class="">
+; CHECK-NEXT: [[VEC_GEP:%.*]] = getelementptr float, float* %in, i64 2<br class="">
; CHECK-NEXT: [[VEC_CAST1:%.*]] = bitcast float* [[VEC_GEP]] to <2 x float>*<br class="">
; CHECK-NEXT: [[COL_LOAD2:%.*]] = load <2 x float>, <2 x float>* [[VEC_CAST1]], align 8<br class="">
-; CHECK-NEXT: [[VEC_GEP3:%.*]] = getelementptr float, float* [[TMP0]], i64 4<br class="">
+; CHECK-NEXT: [[VEC_GEP3:%.*]] = getelementptr float, float* %in, i64 4<br class="">
; CHECK-NEXT: [[VEC_CAST4:%.*]] = bitcast float* [[VEC_GEP3]] to <2 x float>*<br class="">
; CHECK-NEXT: [[COL_LOAD5:%.*]] = load <2 x float>, <2 x float>* [[VEC_CAST4]], align 16<br class="">
; CHECK-NEXT: [[TMP1:%.*]] = shufflevector <2 x float> [[COL_LOAD]], <2 x float> [[COL_LOAD2]], <4 x i32> <i32 0, i32 1, i32 2, i32 3><br class="">
@@ -124,8 +120,8 @@ define <6 x float> @strided_load_2x3_align16_stride2(<6 x float>* %in) {<br class="">
; CHECK-NEXT: ret <6 x float> [[TMP3]]<br class="">
;<br class="">
entry:<br class="">
- %load = call <6 x float> @llvm.matrix.column.major.load.v6f32(<6 x float>* align 16 %in, i64 2, i1 false, i32 2, i32 3)<br class="">
+ %load = call <6 x float> @llvm.matrix.column.major.load.v6f32(float* align 16 %in, i64 2, i1 false, i32 2, i32 3)<br class="">
ret <6 x float> %load<br class="">
}<br class="">
<br class="">
-declare <6 x float> @llvm.matrix.column.major.load.v6f32(<6 x float>*, i64, i1, i32, i32)<br class="">
+declare <6 x float> @llvm.matrix.column.major.load.v6f32(float*, i64, i1, i32, i32)<br class="">
<br class="">
diff --git a/llvm/test/Transforms/LowerMatrixIntrinsics/remarks-inlining.ll b/llvm/test/Transforms/LowerMatrixIntrinsics/remarks-inlining.ll<br class="">
index 5f3ed2a5e382..f8af07271af3 100644<br class="">
--- a/llvm/test/Transforms/LowerMatrixIntrinsics/remarks-inlining.ll<br class="">
+++ b/llvm/test/Transforms/LowerMatrixIntrinsics/remarks-inlining.ll<br class="">
@@ -92,10 +92,10 @@ target triple = "aarch64-apple-ios"<br class="">
; CHECK-LABEL: remark: transpose.h:13:11: Lowered with 0 stores, 0 loads, 8 compute ops<br class="">
; CHECK-NEXT: transpose.1x2.float(transpose.2x1.float(addr %D))<br class="">
<br class="">
-define void @toplevel(<15 x double>* %A, <15 x double>* %B, <15 x double>* %C, <2 x float>* %D) !dbg !16 {<br class="">
+define void @toplevel(<15 x double>* %A, double* %B, <15 x double>* %C, <2 x float>* %D) !dbg !16 {<br class="">
entry:<br class="">
%a = load <15 x double>, <15 x double> *%A, align 16, !dbg !3791<br class="">
- %b = call <15 x double> @llvm.matrix.column.major.load(<15 x double>* %B, i64 5, i1 false, i32 3, i32 5), !dbg !3793<br class="">
+ %b = call <15 x double> @llvm.matrix.column.major.load(double* %B, i64 5, i1 false, i32 3, i32 5), !dbg !3793<br class="">
%c = fadd <15 x double> %a, %b, !dbg !100<br class="">
store <15 x double> %c, <15 x double> *%C, align 16, !dbg !102<br class="">
<br class="">
@@ -106,7 +106,7 @@ entry:<br class="">
ret void<br class="">
}<br class="">
<br class="">
-declare <15 x double> @llvm.matrix.column.major.load(<15 x double>*, i64, i1, i32, i32)<br class="">
+declare <15 x double> @llvm.matrix.column.major.load(double*, i64, i1, i32, i32)<br class="">
declare <2 x float> @llvm.matrix.transpose(<2 x float>, i32, i32)<br class="">
<br class="">
!llvm.dbg.cu = !{!0}<br class="">
<br class="">
diff --git a/llvm/test/Transforms/LowerMatrixIntrinsics/remarks.ll b/llvm/test/Transforms/LowerMatrixIntrinsics/remarks.ll<br class="">
index 3bfb36e26655..ae5ab68c4efb 100644<br class="">
--- a/llvm/test/Transforms/LowerMatrixIntrinsics/remarks.ll<br class="">
+++ b/llvm/test/Transforms/LowerMatrixIntrinsics/remarks.ll<br class="">
@@ -15,9 +15,6 @@ define void @transpose(<12 x double>* %A, <12 x double>* %B) !dbg !23 {<br class="">
ret void<br class="">
}<br class="">
<br class="">
-declare <12 x double> @llvm.matrix.transpose.v12f64.v12f64(<12 x double>, i32, i32)<br class="">
-<br class="">
-<br class="">
; CHECK-LABEL: remark: test.h:50:20: Lowered with 2 stores, 12 loads, 22 compute ops<br class="">
; CHECK-NEXT: store(<br class="">
; CHECK-NEXT: multiply.2x6.6x2.double(<br class="">
@@ -32,33 +29,27 @@ define void @multiply(<12 x double>* %A, <12 x double>* %B, <4 x double>* %C) !d<br class="">
ret void<br class="">
}<br class="">
<br class="">
-declare <4 x double> @llvm.matrix.multiply(<12 x double>, <12 x double>, i32, i32, i32)<br class="">
-<br class="">
; CHECK-LABEL: remark: test.h:60:20: Lowered with 6 stores, 6 loads, 0 compute ops<br class="">
; CHECK-NEXT: store(<br class="">
; CHECK-NEXT: column.major.load.3x3.double(addr %A, 5),<br class="">
; CHECK-NEXT: addr %B)<br class="">
-define void @column.major.load(<9 x double>* %A, <9 x double>* %B) !dbg !27 {<br class="">
- %A.matrix = call <9 x double> @llvm.matrix.column.major.load(<9 x double>* %A, i64 5, i1 false, i32 3, i32 3), !dbg !28<br class="">
+define void @column.major.load(double* %A, <9 x double>* %B) !dbg !27 {<br class="">
+ %A.matrix = call <9 x double> @llvm.matrix.column.major.load(double* %A, i64 5, i1 false, i32 3, i32 3), !dbg !28<br class="">
store <9 x double> %A.matrix, <9 x double>* %B, !dbg !28<br class="">
ret void<br class="">
}<br class="">
<br class="">
-declare <9 x double> @llvm.matrix.column.major.load(<9 x double>*, i64, i1, i32, i32)<br class="">
-<br class="">
; CHECK-LABEL: remark: test.h:70:20: Lowered with 6 stores, 6 loads, 0 compute ops<br class="">
; CHECK-NEXT: column.major.store.3x3.double(<br class="">
; CHECK-NEXT: column.major.load.3x3.double(addr %A, 5),<br class="">
; CHECK-NEXT: addr %B,<br class="">
; CHECK-NEXT: 10)<br class="">
-define void @column.major.store(<9 x double>* %A, <9 x double>* %B) !dbg !29 {<br class="">
- %A.matrix = call <9 x double> @llvm.matrix.column.major.load(<9 x double>* %A, i64 5, i1 false, i32 3, i32 3), !dbg !30<br class="">
- call void @llvm.matrix.column.major.store(<9 x double> %A.matrix, <9 x double>* %B, i64 10, i1 false, i32 3, i32 3), !dbg !30<br class="">
+define void @column.major.store(double* %A, double* %B) !dbg !29 {<br class="">
+ %A.matrix = call <9 x double> @llvm.matrix.column.major.load(double* %A, i64 5, i1 false, i32 3, i32 3), !dbg !30<br class="">
+ call void @llvm.matrix.column.major.store(<9 x double> %A.matrix, double* %B, i64 10, i1 false, i32 3, i32 3), !dbg !30<br class="">
ret void<br class="">
}<br class="">
<br class="">
-declare void @llvm.matrix.column.major.store(<9 x double>, <9 x double>*, i64, i1, i32, i32)<br class="">
-<br class="">
; CHECK-LABEL: remark: test.h:80:20: Lowered with 6 stores, 6 loads, 12 compute ops<br class="">
; CHECK-NEXT: column.major.store.3x3.double(<br class="">
; CHECK-NEXT: fmul(<br class="">
@@ -69,11 +60,11 @@ declare void @llvm.matrix.column.major.store(<9 x double>, <9 x double>*, i64, i<br class="">
; CHECK-NEXT: addr %B,<br class="">
; CHECK-NEXT: 10)<br class="">
<br class="">
-define void @binaryops(<9 x double>* %A, <9 x double>* %B) !dbg !31 {<br class="">
- %A.matrix = call <9 x double> @llvm.matrix.column.major.load(<9 x double>* %A, i64 5, i1 false, i32 3, i32 3), !dbg !32<br class="">
+define void @binaryops(double* %A, double* %B) !dbg !31 {<br class="">
+ %A.matrix = call <9 x double> @llvm.matrix.column.major.load(double* %A, i64 5, i1 false, i32 3, i32 3), !dbg !32<br class="">
%R1.matrix = fadd <9 x double> %A.matrix, %A.matrix, !dbg !32<br class="">
%R2.matrix = fmul <9 x double> %R1.matrix, %A.matrix, !dbg !32<br class="">
- call void @llvm.matrix.column.major.store(<9 x double> %R2.matrix, <9 x double>* %B, i64 10, i1 false, i32 3, i32 3), !dbg !32<br class="">
+ call void @llvm.matrix.column.major.store(<9 x double> %R2.matrix, double* %B, i64 10, i1 false, i32 3, i32 3), !dbg !32<br class="">
ret void<br class="">
}<br class="">
<br class="">
@@ -93,11 +84,11 @@ define void @binaryops(<9 x double>* %A, <9 x double>* %B) !dbg !31 {<br class="">
; CHECK-NEXT: load(addr %D)),<br class="">
; CHECK-NEXT: addr %E)<br class="">
<br class="">
-define void @multiple_expressions(<9 x double>* %A, <9 x double>* %B, <12 x double>* %C, <12 x double>* %D, <4 x double>* %E) !dbg !33 {<br class="">
- %A.matrix = call <9 x double> @llvm.matrix.column.major.load(<9 x double>* %A, i64 5, i1 false, i32 3, i32 3), !dbg !34<br class="">
+define void @multiple_expressions(double* %A, double* %B, <12 x double>* %C, <12 x double>* %D, <4 x double>* %E) !dbg !33 {<br class="">
+ %A.matrix = call <9 x double> @llvm.matrix.column.major.load(double* %A, i64 5, i1 false, i32 3, i32 3), !dbg !34<br class="">
%R1.matrix = fadd <9 x double> %A.matrix, %A.matrix, !dbg !34<br class="">
%R2.matrix = fmul <9 x double> %R1.matrix, %A.matrix, !dbg !34<br class="">
- call void @llvm.matrix.column.major.store(<9 x double> %R2.matrix, <9 x double>* %B, i64 10, i1 false, i32 3, i32 3), !dbg !34<br class="">
+ call void @llvm.matrix.column.major.store(<9 x double> %R2.matrix, double* %B, i64 10, i1 false, i32 3, i32 3), !dbg !34<br class="">
<br class="">
%C.matrix = load <12 x double>, <12 x double>* %C, !dbg !34<br class="">
%D.matrix = load <12 x double>, <12 x double>* %D, !dbg !34<br class="">
@@ -114,14 +105,13 @@ define void @multiple_expressions(<9 x double>* %A, <9 x double>* %B, <12 x doub<br class="">
; CHECK-NEXT: column.major.load.3x3.double(addr %A, 5)<br class="">
; CHECK-NEXT: (reused) column.major.load.3x3.double(addr %A, 5)),<br class="">
; CHECK-NEXT: (reused) column.major.load.3x3.double(addr %A, 5)),<br class="">
-; CHECK-NEXT: stack addr %B,<br class="">
+; CHECK-NEXT: addr %B,<br class="">
; CHECK-NEXT: 10)<br class="">
-define void @stackaddresses(<9 x double>* %A) !dbg !35 {<br class="">
- %B = alloca <9 x double><br class="">
- %A.matrix = call <9 x double> @llvm.matrix.column.major.load(<9 x double>* %A, i64 5, i1 false, i32 3, i32 3), !dbg !36<br class="">
+define void @stackaddresses(double* %A, double* %B) !dbg !35 {<br class="">
+ %A.matrix = call <9 x double> @llvm.matrix.column.major.load(double* %A, i64 5, i1 false, i32 3, i32 3), !dbg !36<br class="">
%R1.matrix = fadd <9 x double> %A.matrix, %A.matrix, !dbg !36<br class="">
%R2.matrix = fmul <9 x double> %R1.matrix, %A.matrix, !dbg !36<br class="">
- call void @llvm.matrix.column.major.store(<9 x double> %R2.matrix, <9 x double>* %B, i64 10, i1 false, i32 3, i32 3), !dbg !36<br class="">
+ call void @llvm.matrix.column.major.store(<9 x double> %R2.matrix, double* %B, i64 10, i1 false, i32 3, i32 3), !dbg !36<br class="">
ret void<br class="">
}<br class="">
<br class="">
@@ -146,7 +136,12 @@ entry:<br class="">
ret void<br class="">
}<br class="">
<br class="">
+declare <12 x double> @llvm.matrix.transpose.v12f64.v12f64(<12 x double>, i32, i32)<br class="">
+declare <4 x double> @llvm.matrix.multiply(<12 x double>, <12 x double>, i32, i32, i32)<br class="">
+declare <9 x double> @llvm.matrix.column.major.load(double*, i64, i1, i32, i32)<br class="">
declare <15 x double> @llvm.matrix.transpose.v15f64.v15f64(<15 x double>, i32, i32)<br class="">
+declare void @llvm.matrix.column.major.store(<9 x double>, double*, i64, i1, i32, i32)<br class="">
+<br class="">
<br class="">
!llvm.dbg.cu = !{!0}<br class="">
!llvm.module.flags = !{!3, !4}<br class="">
<br class="">
diff --git a/llvm/test/Transforms/LowerMatrixIntrinsics/strided-load-double.ll b/llvm/test/Transforms/LowerMatrixIntrinsics/strided-load-double.ll<br class="">
index 1bc645932ecc..d211ee156ecf 100644<br class="">
--- a/llvm/test/Transforms/LowerMatrixIntrinsics/strided-load-double.ll<br class="">
+++ b/llvm/test/Transforms/LowerMatrixIntrinsics/strided-load-double.ll<br class="">
@@ -2,20 +2,19 @@<br class="">
; RUN: opt -lower-matrix-intrinsics -S < %s | FileCheck %s<br class="">
; RUN: opt -passes='lower-matrix-intrinsics' -S < %s | FileCheck %s<br class="">
<br class="">
-define <9 x double> @strided_load_3x3(<9 x double>* %in, i64 %stride) {<br class="">
+define <9 x double> @strided_load_3x3(double* %in, i64 %stride) {<br class="">
; CHECK-LABEL: @strided_load_3x3(<br class="">
; CHECK-NEXT: entry:<br class="">
-; CHECK-NEXT: [[TMP0:%.*]] = bitcast <9 x double>* [[IN:%.*]] to double*<br class="">
; CHECK-NEXT: [[VEC_START:%.*]] = mul i64 0, [[STRIDE:%.*]]<br class="">
-; CHECK-NEXT: [[VEC_GEP:%.*]] = getelementptr double, double* [[TMP0]], i64 [[VEC_START]]<br class="">
+; CHECK-NEXT: [[VEC_GEP:%.*]] = getelementptr double, double* %in, i64 [[VEC_START]]<br class="">
; CHECK-NEXT: [[VEC_CAST:%.*]] = bitcast double* [[VEC_GEP]] to <3 x double>*<br class="">
; CHECK-NEXT: [[COL_LOAD:%.*]] = load <3 x double>, <3 x double>* [[VEC_CAST]], align 8<br class="">
; CHECK-NEXT: [[VEC_START1:%.*]] = mul i64 1, [[STRIDE]]<br class="">
-; CHECK-NEXT: [[VEC_GEP2:%.*]] = getelementptr double, double* [[TMP0]], i64 [[VEC_START1]]<br class="">
+; CHECK-NEXT: [[VEC_GEP2:%.*]] = getelementptr double, double* %in, i64 [[VEC_START1]]<br class="">
; CHECK-NEXT: [[VEC_CAST3:%.*]] = bitcast double* [[VEC_GEP2]] to <3 x double>*<br class="">
; CHECK-NEXT: [[COL_LOAD4:%.*]] = load <3 x double>, <3 x double>* [[VEC_CAST3]], align 8<br class="">
; CHECK-NEXT: [[VEC_START5:%.*]] = mul i64 2, [[STRIDE]]<br class="">
-; CHECK-NEXT: [[VEC_GEP6:%.*]] = getelementptr double, double* [[TMP0]], i64 [[VEC_START5]]<br class="">
+; CHECK-NEXT: [[VEC_GEP6:%.*]] = getelementptr double, double* %in, i64 [[VEC_START5]]<br class="">
; CHECK-NEXT: [[VEC_CAST7:%.*]] = bitcast double* [[VEC_GEP6]] to <3 x double>*<br class="">
; CHECK-NEXT: [[COL_LOAD8:%.*]] = load <3 x double>, <3 x double>* [[VEC_CAST7]], align 8<br class="">
; CHECK-NEXT: [[TMP1:%.*]] = shufflevector <3 x double> [[COL_LOAD]], <3 x double> [[COL_LOAD4]], <6 x i32> <i32 0, i32 1, i32 2, i32 3, i32 4, i32 5><br class="">
@@ -24,51 +23,47 @@ define <9 x double> @strided_load_3x3(<9 x double>* %in, i64 %stride) {<br class="">
; CHECK-NEXT: ret <9 x double> [[TMP3]]<br class="">
;<br class="">
entry:<br class="">
- %load = call <9 x double> @llvm.matrix.column.major.load(<9 x double>* %in, i64 %stride, i1 false, i32 3, i32 3)<br class="">
+ %load = call <9 x double> @llvm.matrix.column.major.load(double* %in, i64 %stride, i1 false, i32 3, i32 3)<br class="">
ret <9 x double> %load<br class="">
}<br class="">
<br class="">
-declare <9 x double> @llvm.matrix.column.major.load(<9 x double>*, i64, i1, i32, i32)<br class="">
+declare <9 x double> @llvm.matrix.column.major.load(double*, i64, i1, i32, i32)<br class="">
<br class="">
-define <9 x double> @strided_load_9x1(<9 x double>* %in, i64 %stride) {<br class="">
+define <9 x double> @strided_load_9x1(double* %in, i64 %stride) {<br class="">
; CHECK-LABEL: @strided_load_9x1(<br class="">
; CHECK-NEXT: entry:<br class="">
-; CHECK-NEXT: [[TMP0:%.*]] = bitcast <9 x double>* [[IN:%.*]] to double*<br class="">
; CHECK-NEXT: [[VEC_START:%.*]] = mul i64 0, [[STRIDE:%.*]]<br class="">
-; CHECK-NEXT: [[VEC_GEP:%.*]] = getelementptr double, double* [[TMP0]], i64 [[VEC_START]]<br class="">
+; CHECK-NEXT: [[VEC_GEP:%.*]] = getelementptr double, double* %in, i64 [[VEC_START]]<br class="">
; CHECK-NEXT: [[VEC_CAST:%.*]] = bitcast double* [[VEC_GEP]] to <9 x double>*<br class="">
; CHECK-NEXT: [[COL_LOAD:%.*]] = load <9 x double>, <9 x double>* [[VEC_CAST]], align 8<br class="">
; CHECK-NEXT: ret <9 x double> [[COL_LOAD]]<br class="">
;<br class="">
entry:<br class="">
- %load = call <9 x double> @llvm.matrix.column.major.load(<9 x double>* %in, i64 %stride, i1 false, i32 9, i32 1)<br class="">
+ %load = call <9 x double> @llvm.matrix.column.major.load(double* %in, i64 %stride, i1 false, i32 9, i32 1)<br class="">
ret <9 x double> %load<br class="">
}<br class="">
<br class="">
-declare <8 x double> @llvm.matrix.column.major.load.v8f64(<8 x double>*, i64, i1, i32, i32)<br class="">
+declare <8 x double> @llvm.matrix.column.major.load.v8f64(double*, i64, i1, i32, i32)<br class="">
+; CHECK: declare <8 x double> @llvm.matrix.column.major.load.v8f64(double* nocapture, i64, i1 immarg, i32 immarg, i32 immarg) [[READONLY:#[0-9]]]<br class="">
<br class="">
-define <8 x double> @strided_load_4x2(<8 x double>* %in, i64 %stride) {<br class="">
+define <8 x double> @strided_load_4x2(double* %in, i64 %stride) {<br class="">
; CHECK-LABEL: @strided_load_4x2(<br class="">
; CHECK-NEXT: entry:<br class="">
-; CHECK-NEXT: [[TMP0:%.*]] = bitcast <8 x double>* [[IN:%.*]] to double*<br class="">
; CHECK-NEXT: [[VEC_START:%.*]] = mul i64 0, [[STRIDE:%.*]]<br class="">
-; CHECK-NEXT: [[VEC_GEP:%.*]] = getelementptr double, double* [[TMP0]], i64 [[VEC_START]]<br class="">
+; CHECK-NEXT: [[VEC_GEP:%.*]] = getelementptr double, double* %in, i64 [[VEC_START]]<br class="">
; CHECK-NEXT: [[VEC_CAST:%.*]] = bitcast double* [[VEC_GEP]] to <4 x double>*<br class="">
; CHECK-NEXT: [[COL_LOAD:%.*]] = load <4 x double>, <4 x double>* [[VEC_CAST]], align 8<br class="">
; CHECK-NEXT: [[VEC_START1:%.*]] = mul i64 1, [[STRIDE]]<br class="">
-; CHECK-NEXT: [[VEC_GEP2:%.*]] = getelementptr double, double* [[TMP0]], i64 [[VEC_START1]]<br class="">
+; CHECK-NEXT: [[VEC_GEP2:%.*]] = getelementptr double, double* %in, i64 [[VEC_START1]]<br class="">
; CHECK-NEXT: [[VEC_CAST3:%.*]] = bitcast double* [[VEC_GEP2]] to <4 x double>*<br class="">
; CHECK-NEXT: [[COL_LOAD4:%.*]] = load <4 x double>, <4 x double>* [[VEC_CAST3]], align 8<br class="">
; CHECK-NEXT: [[TMP1:%.*]] = shufflevector <4 x double> [[COL_LOAD]], <4 x double> [[COL_LOAD4]], <8 x i32> <i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7><br class="">
; CHECK-NEXT: ret <8 x double> [[TMP1]]<br class="">
;<br class="">
entry:<br class="">
- %load = call <8 x double> @llvm.matrix.column.major.load.v8f64(<8 x double>* %in, i64 %stride, i1 false, i32 4, i32 2)<br class="">
+ %load = call <8 x double> @llvm.matrix.column.major.load.v8f64(double* %in, i64 %stride, i1 false, i32 4, i32 2)<br class="">
ret <8 x double> %load<br class="">
}<br class="">
<br class="">
-; CHECK: declare <9 x double> @llvm.matrix.column.major.load.v9f64.p0v9f64(<9 x double>* nocapture, i64, i1 immarg, i32 immarg, i32 immarg) [[READONLY:#[0-9]]]<br class="">
-<br class="">
-; CHECK: declare <8 x double> @llvm.matrix.column.major.load.v8f64.p0v8f64(<8 x double>* nocapture, i64, i1 immarg, i32 immarg, i32 immarg) [[READONLY]]<br class="">
-<br class="">
+; CHECK: declare <9 x double> @llvm.matrix.column.major.load.v9f64(double* nocapture, i64, i1 immarg, i32 immarg, i32 immarg) [[READONLY]]<br class="">
; CHECK: attributes [[READONLY]] = { argmemonly nosync nounwind readonly willreturn }<br class="">
<br class="">
diff --git a/llvm/test/Transforms/LowerMatrixIntrinsics/strided-load-float.ll b/llvm/test/Transforms/LowerMatrixIntrinsics/strided-load-float.ll<br class="">
index 248f621345ba..6b48a1709bde 100644<br class="">
--- a/llvm/test/Transforms/LowerMatrixIntrinsics/strided-load-float.ll<br class="">
+++ b/llvm/test/Transforms/LowerMatrixIntrinsics/strided-load-float.ll<br class="">
@@ -2,20 +2,19 @@<br class="">
; RUN: opt -lower-matrix-intrinsics -S < %s | FileCheck %s<br class="">
; RUN: opt -passes='lower-matrix-intrinsics' -S < %s | FileCheck %s<br class="">
<br class="">
-define <9 x float> @strided_load_3x3(<9 x float>* %in, i64 %stride) {<br class="">
+define <9 x float> @strided_load_3x3(float* %in, i64 %stride) {<br class="">
; CHECK-LABEL: @strided_load_3x3(<br class="">
; CHECK-NEXT: entry:<br class="">
-; CHECK-NEXT: [[TMP0:%.*]] = bitcast <9 x float>* [[IN:%.*]] to float*<br class="">
; CHECK-NEXT: [[VEC_START:%.*]] = mul i64 0, [[STRIDE:%.*]]<br class="">
-; CHECK-NEXT: [[VEC_GEP:%.*]] = getelementptr float, float* [[TMP0]], i64 [[VEC_START]]<br class="">
+; CHECK-NEXT: [[VEC_GEP:%.*]] = getelementptr float, float* %in, i64 [[VEC_START]]<br class="">
; CHECK-NEXT: [[VEC_CAST:%.*]] = bitcast float* [[VEC_GEP]] to <3 x float>*<br class="">
; CHECK-NEXT: [[COL_LOAD:%.*]] = load <3 x float>, <3 x float>* [[VEC_CAST]], align 4<br class="">
; CHECK-NEXT: [[VEC_START1:%.*]] = mul i64 1, [[STRIDE]]<br class="">
-; CHECK-NEXT: [[VEC_GEP2:%.*]] = getelementptr float, float* [[TMP0]], i64 [[VEC_START1]]<br class="">
+; CHECK-NEXT: [[VEC_GEP2:%.*]] = getelementptr float, float* %in, i64 [[VEC_START1]]<br class="">
; CHECK-NEXT: [[VEC_CAST3:%.*]] = bitcast float* [[VEC_GEP2]] to <3 x float>*<br class="">
; CHECK-NEXT: [[COL_LOAD4:%.*]] = load <3 x float>, <3 x float>* [[VEC_CAST3]], align 4<br class="">
; CHECK-NEXT: [[VEC_START5:%.*]] = mul i64 2, [[STRIDE]]<br class="">
-; CHECK-NEXT: [[VEC_GEP6:%.*]] = getelementptr float, float* [[TMP0]], i64 [[VEC_START5]]<br class="">
+; CHECK-NEXT: [[VEC_GEP6:%.*]] = getelementptr float, float* %in, i64 [[VEC_START5]]<br class="">
; CHECK-NEXT: [[VEC_CAST7:%.*]] = bitcast float* [[VEC_GEP6]] to <3 x float>*<br class="">
; CHECK-NEXT: [[COL_LOAD8:%.*]] = load <3 x float>, <3 x float>* [[VEC_CAST7]], align 4<br class="">
; CHECK-NEXT: [[TMP1:%.*]] = shufflevector <3 x float> [[COL_LOAD]], <3 x float> [[COL_LOAD4]], <6 x i32> <i32 0, i32 1, i32 2, i32 3, i32 4, i32 5><br class="">
@@ -24,45 +23,43 @@ define <9 x float> @strided_load_3x3(<9 x float>* %in, i64 %stride) {<br class="">
; CHECK-NEXT: ret <9 x float> [[TMP3]]<br class="">
;<br class="">
entry:<br class="">
- %load = call <9 x float> @llvm.matrix.column.major.load(<9 x float>* %in, i64 %stride, i1 false, i32 3, i32 3)<br class="">
+ %load = call <9 x float> @llvm.matrix.column.major.load(float* %in, i64 %stride, i1 false, i32 3, i32 3)<br class="">
ret <9 x float> %load<br class="">
}<br class="">
<br class="">
-declare <9 x float> @llvm.matrix.column.major.load(<9 x float>*, i64, i1, i32, i32)<br class="">
+declare <9 x float> @llvm.matrix.column.major.load(float*, i64, i1, i32, i32)<br class="">
<br class="">
-define <9 x float> @strided_load_9x1(<9 x float>* %in, i64 %stride) {<br class="">
+define <9 x float> @strided_load_9x1(float* %in, i64 %stride) {<br class="">
; CHECK-LABEL: @strided_load_9x1(<br class="">
; CHECK-NEXT: entry:<br class="">
-; CHECK-NEXT: [[TMP0:%.*]] = bitcast <9 x float>* [[IN:%.*]] to float*<br class="">
; CHECK-NEXT: [[VEC_START:%.*]] = mul i64 0, [[STRIDE:%.*]]<br class="">
-; CHECK-NEXT: [[VEC_GEP:%.*]] = getelementptr float, float* [[TMP0]], i64 [[VEC_START]]<br class="">
+; CHECK-NEXT: [[VEC_GEP:%.*]] = getelementptr float, float* %in, i64 [[VEC_START]]<br class="">
; CHECK-NEXT: [[VEC_CAST:%.*]] = bitcast float* [[VEC_GEP]] to <9 x float>*<br class="">
; CHECK-NEXT: [[COL_LOAD:%.*]] = load <9 x float>, <9 x float>* [[VEC_CAST]], align 4<br class="">
; CHECK-NEXT: ret <9 x float> [[COL_LOAD]]<br class="">
;<br class="">
entry:<br class="">
- %load = call <9 x float> @llvm.matrix.column.major.load(<9 x float>* %in, i64 %stride, i1 false, i32 9, i32 1)<br class="">
+ %load = call <9 x float> @llvm.matrix.column.major.load(float* %in, i64 %stride, i1 false, i32 9, i32 1)<br class="">
ret <9 x float> %load<br class="">
}<br class="">
<br class="">
-declare <8 x float> @llvm.matrix.column.major.load.v8f32(<8 x float>*, i64, i1, i32, i32)<br class="">
+declare <8 x float> @llvm.matrix.column.major.load.v8f32(float*, i64, i1, i32, i32)<br class="">
<br class="">
-define <8 x float> @strided_load_4x2(<8 x float>* %in, i64 %stride) {<br class="">
+define <8 x float> @strided_load_4x2(float* %in, i64 %stride) {<br class="">
; CHECK-LABEL: @strided_load_4x2(<br class="">
; CHECK-NEXT: entry:<br class="">
-; CHECK-NEXT: [[TMP0:%.*]] = bitcast <8 x float>* [[IN:%.*]] to float*<br class="">
; CHECK-NEXT: [[VEC_START:%.*]] = mul i64 0, [[STRIDE:%.*]]<br class="">
-; CHECK-NEXT: [[VEC_GEP:%.*]] = getelementptr float, float* [[TMP0]], i64 [[VEC_START]]<br class="">
+; CHECK-NEXT: [[VEC_GEP:%.*]] = getelementptr float, float* %in, i64 [[VEC_START]]<br class="">
; CHECK-NEXT: [[VEC_CAST:%.*]] = bitcast float* [[VEC_GEP]] to <4 x float>*<br class="">
; CHECK-NEXT: [[COL_LOAD:%.*]] = load <4 x float>, <4 x float>* [[VEC_CAST]], align 4<br class="">
; CHECK-NEXT: [[VEC_START1:%.*]] = mul i64 1, [[STRIDE]]<br class="">
-; CHECK-NEXT: [[VEC_GEP2:%.*]] = getelementptr float, float* [[TMP0]], i64 [[VEC_START1]]<br class="">
+; CHECK-NEXT: [[VEC_GEP2:%.*]] = getelementptr float, float* %in, i64 [[VEC_START1]]<br class="">
; CHECK-NEXT: [[VEC_CAST3:%.*]] = bitcast float* [[VEC_GEP2]] to <4 x float>*<br class="">
; CHECK-NEXT: [[COL_LOAD4:%.*]] = load <4 x float>, <4 x float>* [[VEC_CAST3]], align 4<br class="">
; CHECK-NEXT: [[TMP1:%.*]] = shufflevector <4 x float> [[COL_LOAD]], <4 x float> [[COL_LOAD4]], <8 x i32> <i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7><br class="">
; CHECK-NEXT: ret <8 x float> [[TMP1]]<br class="">
;<br class="">
entry:<br class="">
- %load = call <8 x float> @llvm.matrix.column.major.load.v8f32(<8 x float>* %in, i64 %stride, i1 false, i32 4, i32 2)<br class="">
+ %load = call <8 x float> @llvm.matrix.column.major.load.v8f32(float* %in, i64 %stride, i1 false, i32 4, i32 2)<br class="">
ret <8 x float> %load<br class="">
}<br class="">
<br class="">
diff --git a/llvm/test/Transforms/LowerMatrixIntrinsics/strided-load-i32.ll b/llvm/test/Transforms/LowerMatrixIntrinsics/strided-load-i32.ll<br class="">
index a589b7c1d2b0..4f815af6d11c 100644<br class="">
--- a/llvm/test/Transforms/LowerMatrixIntrinsics/strided-load-i32.ll<br class="">
+++ b/llvm/test/Transforms/LowerMatrixIntrinsics/strided-load-i32.ll<br class="">
@@ -2,20 +2,19 @@<br class="">
; RUN: opt -lower-matrix-intrinsics -S < %s | FileCheck %s<br class="">
; RUN: opt -passes='lower-matrix-intrinsics' -S < %s | FileCheck %s<br class="">
<br class="">
-define <9 x i32> @strided_load_3x3(<9 x i32>* %in, i64 %stride) {<br class="">
+define <9 x i32> @strided_load_3x3(i32* %in, i64 %stride) {<br class="">
; CHECK-LABEL: @strided_load_3x3(<br class="">
; CHECK-NEXT: entry:<br class="">
-; CHECK-NEXT: [[TMP0:%.*]] = bitcast <9 x i32>* [[IN:%.*]] to i32*<br class="">
; CHECK-NEXT: [[VEC_START:%.*]] = mul i64 0, [[STRIDE:%.*]]<br class="">
-; CHECK-NEXT: [[VEC_GEP:%.*]] = getelementptr i32, i32* [[TMP0]], i64 [[VEC_START]]<br class="">
+; CHECK-NEXT: [[VEC_GEP:%.*]] = getelementptr i32, i32* %in, i64 [[VEC_START]]<br class="">
; CHECK-NEXT: [[VEC_CAST:%.*]] = bitcast i32* [[VEC_GEP]] to <3 x i32>*<br class="">
; CHECK-NEXT: [[COL_LOAD:%.*]] = load <3 x i32>, <3 x i32>* [[VEC_CAST]], align 4<br class="">
; CHECK-NEXT: [[VEC_START1:%.*]] = mul i64 1, [[STRIDE]]<br class="">
-; CHECK-NEXT: [[VEC_GEP2:%.*]] = getelementptr i32, i32* [[TMP0]], i64 [[VEC_START1]]<br class="">
+; CHECK-NEXT: [[VEC_GEP2:%.*]] = getelementptr i32, i32* %in, i64 [[VEC_START1]]<br class="">
; CHECK-NEXT: [[VEC_CAST3:%.*]] = bitcast i32* [[VEC_GEP2]] to <3 x i32>*<br class="">
; CHECK-NEXT: [[COL_LOAD4:%.*]] = load <3 x i32>, <3 x i32>* [[VEC_CAST3]], align 4<br class="">
; CHECK-NEXT: [[VEC_START5:%.*]] = mul i64 2, [[STRIDE]]<br class="">
-; CHECK-NEXT: [[VEC_GEP6:%.*]] = getelementptr i32, i32* [[TMP0]], i64 [[VEC_START5]]<br class="">
+; CHECK-NEXT: [[VEC_GEP6:%.*]] = getelementptr i32, i32* %in, i64 [[VEC_START5]]<br class="">
; CHECK-NEXT: [[VEC_CAST7:%.*]] = bitcast i32* [[VEC_GEP6]] to <3 x i32>*<br class="">
; CHECK-NEXT: [[COL_LOAD8:%.*]] = load <3 x i32>, <3 x i32>* [[VEC_CAST7]], align 4<br class="">
; CHECK-NEXT: [[TMP1:%.*]] = shufflevector <3 x i32> [[COL_LOAD]], <3 x i32> [[COL_LOAD4]], <6 x i32> <i32 0, i32 1, i32 2, i32 3, i32 4, i32 5><br class="">
@@ -24,45 +23,43 @@ define <9 x i32> @strided_load_3x3(<9 x i32>* %in, i64 %stride) {<br class="">
; CHECK-NEXT: ret <9 x i32> [[TMP3]]<br class="">
;<br class="">
entry:<br class="">
- %load = call <9 x i32> @llvm.matrix.column.major.load(<9 x i32>* %in, i64 %stride, i1 false, i32 3, i32 3)<br class="">
+ %load = call <9 x i32> @llvm.matrix.column.major.load(i32* %in, i64 %stride, i1 false, i32 3, i32 3)<br class="">
ret <9 x i32> %load<br class="">
}<br class="">
<br class="">
-declare <9 x i32> @llvm.matrix.column.major.load(<9 x i32>*, i64, i1, i32, i32)<br class="">
+declare <9 x i32> @llvm.matrix.column.major.load(i32*, i64, i1, i32, i32)<br class="">
<br class="">
-define <9 x i32> @strided_load_9x1(<9 x i32>* %in, i64 %stride) {<br class="">
+define <9 x i32> @strided_load_9x1(i32* %in, i64 %stride) {<br class="">
; CHECK-LABEL: @strided_load_9x1(<br class="">
; CHECK-NEXT: entry:<br class="">
-; CHECK-NEXT: [[TMP0:%.*]] = bitcast <9 x i32>* [[IN:%.*]] to i32*<br class="">
; CHECK-NEXT: [[VEC_START:%.*]] = mul i64 0, [[STRIDE:%.*]]<br class="">
-; CHECK-NEXT: [[VEC_GEP:%.*]] = getelementptr i32, i32* [[TMP0]], i64 [[VEC_START]]<br class="">
+; CHECK-NEXT: [[VEC_GEP:%.*]] = getelementptr i32, i32* %in, i64 [[VEC_START]]<br class="">
; CHECK-NEXT: [[VEC_CAST:%.*]] = bitcast i32* [[VEC_GEP]] to <9 x i32>*<br class="">
; CHECK-NEXT: [[COL_LOAD:%.*]] = load <9 x i32>, <9 x i32>* [[VEC_CAST]], align 4<br class="">
; CHECK-NEXT: ret <9 x i32> [[COL_LOAD]]<br class="">
;<br class="">
entry:<br class="">
- %load = call <9 x i32> @llvm.matrix.column.major.load(<9 x i32>* %in, i64 %stride, i1 false, i32 9, i32 1)<br class="">
+ %load = call <9 x i32> @llvm.matrix.column.major.load(i32* %in, i64 %stride, i1 false, i32 9, i32 1)<br class="">
ret <9 x i32> %load<br class="">
}<br class="">
<br class="">
-declare <8 x i32> @llvm.matrix.column.major.load.v8i32(<8 x i32>*, i64, i1, i32, i32)<br class="">
+declare <8 x i32> @llvm.matrix.column.major.load.v8i32(i32*, i64, i1, i32, i32)<br class="">
<br class="">
-define <8 x i32> @strided_load_4x2(<8 x i32>* %in, i64 %stride) {<br class="">
+define <8 x i32> @strided_load_4x2(i32* %in, i64 %stride) {<br class="">
; CHECK-LABEL: @strided_load_4x2(<br class="">
; CHECK-NEXT: entry:<br class="">
-; CHECK-NEXT: [[TMP0:%.*]] = bitcast <8 x i32>* [[IN:%.*]] to i32*<br class="">
; CHECK-NEXT: [[VEC_START:%.*]] = mul i64 0, [[STRIDE:%.*]]<br class="">
-; CHECK-NEXT: [[VEC_GEP:%.*]] = getelementptr i32, i32* [[TMP0]], i64 [[VEC_START]]<br class="">
+; CHECK-NEXT: [[VEC_GEP:%.*]] = getelementptr i32, i32* %in, i64 [[VEC_START]]<br class="">
; CHECK-NEXT: [[VEC_CAST:%.*]] = bitcast i32* [[VEC_GEP]] to <4 x i32>*<br class="">
; CHECK-NEXT: [[COL_LOAD:%.*]] = load <4 x i32>, <4 x i32>* [[VEC_CAST]], align 4<br class="">
; CHECK-NEXT: [[VEC_START1:%.*]] = mul i64 1, [[STRIDE]]<br class="">
-; CHECK-NEXT: [[VEC_GEP2:%.*]] = getelementptr i32, i32* [[TMP0]], i64 [[VEC_START1]]<br class="">
+; CHECK-NEXT: [[VEC_GEP2:%.*]] = getelementptr i32, i32* %in, i64 [[VEC_START1]]<br class="">
; CHECK-NEXT: [[VEC_CAST3:%.*]] = bitcast i32* [[VEC_GEP2]] to <4 x i32>*<br class="">
; CHECK-NEXT: [[COL_LOAD4:%.*]] = load <4 x i32>, <4 x i32>* [[VEC_CAST3]], align 4<br class="">
; CHECK-NEXT: [[TMP1:%.*]] = shufflevector <4 x i32> [[COL_LOAD]], <4 x i32> [[COL_LOAD4]], <8 x i32> <i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7><br class="">
; CHECK-NEXT: ret <8 x i32> [[TMP1]]<br class="">
;<br class="">
entry:<br class="">
- %load = call <8 x i32> @llvm.matrix.column.major.load.v8i32(<8 x i32>* %in, i64 %stride, i1 false, i32 4, i32 2)<br class="">
+ %load = call <8 x i32> @llvm.matrix.column.major.load.v8i32(i32* %in, i64 %stride, i1 false, i32 4, i32 2)<br class="">
ret <8 x i32> %load<br class="">
}<br class="">
<br class="">
diff --git a/llvm/test/Transforms/LowerMatrixIntrinsics/strided-store-double.ll b/llvm/test/Transforms/LowerMatrixIntrinsics/strided-store-double.ll<br class="">
index f9fbdd388cff..e90b79627dc6 100644<br class="">
--- a/llvm/test/Transforms/LowerMatrixIntrinsics/strided-store-double.ll<br class="">
+++ b/llvm/test/Transforms/LowerMatrixIntrinsics/strided-store-double.ll<br class="">
@@ -13,7 +13,7 @@ define void @strided_store_3x2(<6 x double> %in, double* %out) {<br class="">
; CHECK-NEXT: store <3 x double> [[SPLIT1]], <3 x double>* [[VEC_CAST2]], align 8<br class="">
; CHECK-NEXT: ret void<br class="">
;<br class="">
- call void @llvm.matrix.column.major.store(<6 x double> %in, double* %out, i64 5, i1 false, i32 3, i32 2)<br class="">
+ call void @llvm.matrix.column.major.store.v6f64(<6 x double> %in, double* %out, i64 5, i1 false, i32 3, i32 2)<br class="">
ret void<br class="">
}<br class="">
<br class="">
@@ -31,13 +31,10 @@ define void @strided_store_3x2_nonconst_stride(<6 x double> %in, i64 %stride, do<br class="">
; CHECK-NEXT: store <3 x double> [[SPLIT1]], <3 x double>* [[VEC_CAST4]], align 8<br class="">
; CHECK-NEXT: ret void<br class="">
;<br class="">
- call void @llvm.matrix.column.major.store(<6 x double> %in, double* %out, i64 %stride, i1 false, i32 3, i32 2)<br class="">
+ call void @llvm.matrix.column.major.store.v6f64(<6 x double> %in, double* %out, i64 %stride, i1 false, i32 3, i32 2)<br class="">
ret void<br class="">
}<br class="">
<br class="">
-<br class="">
-declare void @llvm.matrix.column.major.store(<6 x double>, double*, i64, i1, i32, i32)<br class="">
-<br class="">
define void @strided_store_2x3(<10 x double> %in, double* %out) {<br class="">
; CHECK-LABEL: @strided_store_2x3(<br class="">
; CHECK-NEXT: [[SPLIT:%.*]] = shufflevector <10 x double> [[IN:%.*]], <10 x double> undef, <2 x i32> <i32 0, i32 1><br class="">
@@ -65,10 +62,9 @@ define void @strided_store_2x3(<10 x double> %in, double* %out) {<br class="">
ret void<br class="">
}<br class="">
<br class="">
+declare void @llvm.matrix.column.major.store.v6f64(<6 x double>, double*, i64, i1, i32, i32)<br class="">
declare void @llvm.matrix.column.major.store.v10f64(<10 x double>, double*, i64, i1, i32, i32)<br class="">
<br class="">
-; CHECK: declare void @llvm.matrix.column.major.store.v6f64.p0f64(<6 x double>, double* nocapture writeonly, i64, i1 immarg, i32 immarg, i32 immarg) [[WRITEONLY:#[0-9]]]<br class="">
-<br class="">
-; CHECK: declare void @llvm.matrix.column.major.store.v10f64.p0f64(<10 x double>, double* nocapture writeonly, i64, i1 immarg, i32 immarg, i32 immarg) [[WRITEONLY]]<br class="">
-<br class="">
-; CHECK: attributes [[WRITEONLY]] = { argmemonly nosync nounwind willreturn writeonly }<br class="">
+; CHECK: declare void @llvm.matrix.column.major.store.v6f64(<6 x double>, double* nocapture writeonly, i64, i1 immarg, i32 immarg, i32 immarg) #0<br class="">
<br class="">
+; CHECK: declare void @llvm.matrix.column.major.store.v10f64(<10 x double>, double* nocapture writeonly, i64, i1 immarg, i32 immarg, i32 immarg) #0<br class="">
<br class="">
+; CHECK: attributes #0 = { argmemonly nosync nounwind willreturn writeonly }<br class="">
<br class="">
diff --git a/llvm/test/Verifier/matrix-intrinsics.ll b/llvm/test/Verifier/matrix-intrinsics.ll<br class="">
index 6b2a4c501c66..4194cfb434f2 100644<br class="">
--- a/llvm/test/Verifier/matrix-intrinsics.ll<br class="">
+++ b/llvm/test/Verifier/matrix-intrinsics.ll<br class="">
@@ -1,11 +1,10 @@<br class="">
; RUN: not llvm-as < %s -o /dev/null 2>&1 | FileCheck %s<br class="">
<br class="">
-declare <4 x float> @llvm.matrix.transpose.v4f32(<4 x float>, i32, i32)<br class="">
define <4 x float> @transpose(<4 x float> %m, i32 %arg) {<br class="">
; CHECK: assembly parsed, but does not verify as correct!<br class="">
-; CHECK-NEXT: result of a matrix operation does not fit in the returned vector<br class="">
-; CHECK-NEXT: result of a matrix operation does not fit in the returned vector<br class="">
-; CHECK-NEXT: result of a matrix operation does not fit in the returned vector<br class="">
+; CHECK-NEXT: Result of a matrix operation does not fit in the returned vector!<br class="">
+; CHECK-NEXT: Result of a matrix operation does not fit in the returned vector!<br class="">
+; CHECK-NEXT: Result of a matrix operation does not fit in the returned vector!<br class="">
; CHECK-NEXT: immarg operand has non-immediate parameter<br class="">
; CHECK-NEXT: i32 %arg<br class="">
; CHECK-NEXT: %result.3 = call <4 x float> @llvm.matrix.transpose.v4f32(<4 x float> %result.2, i32 %arg, i32 2)<br class="">
@@ -20,11 +19,10 @@ define <4 x float> @transpose(<4 x float> %m, i32 %arg) {<br class="">
ret <4 x float> %result.4<br class="">
}<br class="">
<br class="">
-declare <4 x float> @llvm.matrix.multiply.v4f32.v4f32.v4f32(<4 x float>, <4 x float>, i32, i32, i32)<br class="">
define <4 x float> @multiply(<4 x float> %m, i32 %arg) {<br class="">
-; CHECK-NEXT: result of a matrix operation does not fit in the returned vector<br class="">
-; CHECK-NEXT: result of a matrix operation does not fit in the returned vector<br class="">
-; CHECK-NEXT: result of a matrix operation does not fit in the returned vector<br class="">
+; CHECK-NEXT: Result of a matrix operation does not fit in the returned vector!<br class="">
+; CHECK-NEXT: Result of a matrix operation does not fit in the returned vector!<br class="">
+; CHECK-NEXT: Result of a matrix operation does not fit in the returned vector!<br class="">
; CHECK-NEXT: immarg operand has non-immediate parameter<br class="">
; CHECK-NEXT: i32 %arg<br class="">
; CHECK-NEXT: %result.3 = call <4 x float> @llvm.matrix.multiply.v4f32.v4f32.v4f32(<4 x float> %result.2, <4 x float> %m, i32 %arg, i32 2, i32 1)<br class="">
@@ -35,32 +33,130 @@ define <4 x float> @multiply(<4 x float> %m, i32 %arg) {<br class="">
ret <4 x float> %result.3<br class="">
}<br class="">
<br class="">
-declare <4 x float> @llvm.matrix.column.major.load.v4f32.p0v4f32(<4 x float>*, i64, i1, i32, i32)<br class="">
-declare <6 x float> @llvm.matrix.column.major.load.v6f32.p0v6f32(<6 x float>*, i64, i1, i32, i32)<br class="">
-define <4 x float> @column.major_load(<4 x float>* %m, <6 x float>* %n, i32 %arg) {<br class="">
-; CHECK-NEXT: result of a matrix operation does not fit in the returned vector<br class="">
-; CHECK-NEXT: result of a matrix operation does not fit in the returned vector<br class="">
-; CHECK-NEXT: result of a matrix operation does not fit in the returned vector<br class="">
+define <4 x float> @column.major_load(float* %m, float* %n, i32 %arg) {<br class="">
+; CHECK-NEXT: Result of a matrix operation does not fit in the returned vector!<br class="">
+; CHECK-NEXT: Result of a matrix operation does not fit in the returned vector!<br class="">
+; CHECK-NEXT: Result of a matrix operation does not fit in the returned vector!<br class="">
; CHECK-NEXT: immarg operand has non-immediate parameter<br class="">
; CHECK-NEXT: i32 %arg<br class="">
-; CHECK-NEXT: %result.3 = call <6 x float> @llvm.matrix.column.major.load.v6f32.p0v6f32(<6 x float>* %n, i64 2, i1 true, i32 3, i32 %arg)<br class="">
- %result.0 = call <4 x float> @llvm.matrix.column.major.load.v4f32.p0v4f32(<4 x float>* %m, i64 0, i1 false, i32 0, i32 0)<br class="">
- %result.1 = call <4 x float> @llvm.matrix.column.major.load.v4f32.p0v4f32(<4 x float>* %m, i64 2, i1 false, i32 1, i32 2)<br class="">
- %result.2 = call <6 x float> @llvm.matrix.column.major.load.v6f32.p0v6f32(<6 x float>* %n, i64 2, i1 true, i32 3, i32 3)<br class="">
- %result.3 = call <6 x float> @llvm.matrix.column.major.load.v6f32.p0v6f32(<6 x float>* %n, i64 2, i1 true, i32 3, i32 %arg)<br class="">
+; CHECK-NEXT: %result.3 = call <6 x float> @llvm.matrix.column.major.load.v6f32(float* %n, i64 2, i1 true, i32 3, i32 %arg)<br class="">
+ %result.0 = call <4 x float> @llvm.matrix.column.major.load.v4f32(float* %m, i64 0, i1 false, i32 0, i32 0)<br class="">
+ %result.1 = call <4 x float> @llvm.matrix.column.major.load.v4f32(float* %m, i64 2, i1 false, i32 1, i32 2)<br class="">
+ %result.2 = call <6 x float> @llvm.matrix.column.major.load.v6f32(float* %n, i64 2, i1 true, i32 3, i32 3)<br class="">
+ %result.3 = call <6 x float> @llvm.matrix.column.major.load.v6f32(float* %n, i64 2, i1 true, i32 3, i32 %arg)<br class="">
ret <4 x float> %result.1<br class="">
}<br class="">
<br class="">
-declare void @llvm.matrix.column.major.store.v4f32.p0v4f32(<4 x float>, <4 x float>*, i64, i1, i32, i32)<br class="">
-declare void @llvm.matrix.column.major.store.v6f32.p0v6f32(<6 x float>, <6 x float>*, i64, i1, i32, i32)<br class="">
-define void @column.major_store(<4 x float>* %m, <6 x float>* %n, i64 %arg) {<br class="">
-; CHECK-NEXT: result of a matrix operation does not fit in the returned vector<br class="">
-; CHECK-NEXT: result of a matrix operation does not fit in the returned vector<br class="">
-; CHECK-NEXT: result of a matrix operation does not fit in the returned vector<br class="">
-; CHECK-NEXT: result of a matrix operation does not fit in the returned vector<br class="">
- call void @llvm.matrix.column.major.store.v4f32.p0v4f32(<4 x float> zeroinitializer, <4 x float>* %m, i64 0, i1 false, i32 0, i32 0)<br class="">
- call void @llvm.matrix.column.major.store.v4f32.p0v4f32(<4 x float> zeroinitializer, <4 x float>* %m, i64 2, i1 false, i32 1, i32 2)<br class="">
- call void @llvm.matrix.column.major.store.v6f32.p0v6f32(<6 x float> zeroinitializer, <6 x float>* %n, i64 2, i1 false, i32 3, i32 3)<br class="">
- call void @llvm.matrix.column.major.store.v6f32.p0v6f32(<6 x float> zeroinitializer, <6 x float>* %n, i64 %arg, i1 false, i32 3, i32 3)<br class="">
+define void @column.major_store(float* %m, float* %n, i64 %arg) {<br class="">
+; CHECK-NEXT: Result of a matrix operation does not fit in the returned vector!<br class="">
+; CHECK-NEXT: Result of a matrix operation does not fit in the returned vector!<br class="">
+; CHECK-NEXT: Result of a matrix operation does not fit in the returned vector!<br class="">
+; CHECK-NEXT: Result of a matrix operation does not fit in the returned vector!<br class="">
+ call void @llvm.matrix.column.major.store.v4f32(<4 x float> zeroinitializer, float* %m, i64 0, i1 false, i32 0, i32 0)<br class="">
+ call void @llvm.matrix.column.major.store.v4f32(<4 x float> zeroinitializer, float* %m, i64 2, i1 false, i32 1, i32 2)<br class="">
+ call void @llvm.matrix.column.major.store.v6f32(<6 x float> zeroinitializer, float* %n, i64 2, i1 false, i32 3, i32 3)<br class="">
+ call void @llvm.matrix.column.major.store.v6f32(<6 x float> zeroinitializer, float* %n, i64 %arg, i1 false, i32 3, i32 3)<br class="">
+ ret void<br class="">
+}<br class="">
+<br class="">
+define <4 x float> @transpose_mixed_types(<4 x float> %fvec, <4 x i32> %ivec, i32 %arg) {<br class="">
+;<br class="">
+; CHECK-NEXT: Intrinsic has incorrect argument type!<br class="">
+; CHECK-NEXT: <4 x float> (<4 x i32>, i32, i32)* @llvm.matrix.transpose.v4f32.v4i32<br class="">
+; CHECK-NEXT: Intrinsic has incorrect argument type!<br class="">
+; CHECK-NEXT: <4 x i32> (<4 x float>, i32, i32)* @llvm.matrix.transpose.v4i32.v4f32<br class="">
+;<br class="">
+ %result.0 = call <4 x float> @llvm.matrix.transpose.v4f32.v4i32(<4 x i32> %ivec, i32 0, i32 0)<br class="">
+ %result.1 = call <4 x i32> @llvm.matrix.transpose.v4i32.v4f32(<4 x float> %result.0, i32 3, i32 2)<br class="">
+ ret <4 x float> %result.0<br class="">
+}<br class="">
+<br class="">
+define <4 x float> @multiply_mixed_types(<4 x i32> %ivec, <4 x float> %fvec, i32 %arg) {<br class="">
+;<br class="">
+; CHECK-NEXT: Vector element type mismatch of the result and first operand vector!<br class="">
+; CHECK-NEXT: <4 x i32> (<4 x float>, <4 x float>, i32, i32, i32)* @llvm.matrix.multiply.v4i32.v4f32.v4f32<br class="">
+; CHECK-NEXT: Vector element type mismatch of the result and first operand vector!<br class="">
+; CHECK-NEXT: <4 x float> (<4 x i32>, <4 x float>, i32, i32, i32)* @llvm.matrix.multiply.v4f32.v4i32.v4f32<br class="">
+; CHECK-NEXT: Vector element type mismatch of the result and second operand vector!<br class="">
+; CHECK-NEXT: <4 x float> (<4 x float>, <4 x i32>, i32, i32, i32)* @llvm.matrix.multiply.v4f32.v4f32.v4i32<br class="">
+; CHECK-NEXT: Vector element type mismatch of the result and first operand vector!<br class="">
+; CHECK-NEXT: <4 x float> (<4 x i32>, <4 x i32>, i32, i32, i32)* @llvm.matrix.multiply.v4f32.v4i32.v4i32<br class="">
+;<br class="">
+ %result.0 = call <4 x i32> @llvm.matrix.multiply.v4i32.v4f32.v4f32(<4 x float> %fvec, <4 x float> %fvec, i32 2, i32 2, i32 2)<br class="">
+ %result.1 = call <4 x float> @llvm.matrix.multiply.v4f32.v4i32.v4f32(<4 x i32> %result.0, <4 x float> %fvec, i32 2, i32 2, i32 2)<br class="">
+ %result.2 = call <4 x float> @llvm.matrix.multiply.v4f32.v4f32.v4i32(<4 x float> %fvec, <4 x i32> %ivec, i32 2, i32 2, i32 2)<br class="">
+ %result.3 = call <4 x float> @llvm.matrix.multiply.v4f32.v4i32.v4i32(<4 x i32> %ivec, <4 x i32> %ivec, i32 2, i32 2, i32 2)<br class="">
+ ret <4 x float> %result.3<br class="">
+}<br class="">
+<br class="">
+define <4 x float> @column.major_load_mixed_types(i32* %m, float* %n, i32 %arg) {<br class="">
+;<br class="">
+; CHECK-NEXT: Intrinsic has incorrect argument type!<br class="">
+; CHECK-NEXT: <4 x float> (i32*, i64, i1, i32, i32)* @llvm.matrix.column.major.load.v4f32.pi32<br class="">
+; CHECK-NEXT: Intrinsic has incorrect argument type!<br class="">
+; CHECK-NEXT: <4 x i32> (float*, i64, i1, i32, i32)* @llvm.matrix.column.major.load.v4i32<br class="">
+;<br class="">
+ %result.0 = call <4 x float> @llvm.matrix.column.major.load.v4f32.pi32(i32* %m, i64 2, i1 false, i32 2, i32 2)<br class="">
+ %result.1 = call <4 x i32> @llvm.matrix.column.major.load.v4i32(float* %n, i64 2, i1 false, i32 2, i32 2)<br class="">
+ ret <4 x float> %result.0<br class="">
+}<br class="">
+<br class="">
+define void @column.major_store_mixed_types(float* %m, i32* %n, i64 %arg) {<br class="">
+;<br class="">
+; CHECK-NEXT: Intrinsic has incorrect argument type! <br class="">
+; CHECK-NEXT: void (<4 x i32>, float*, i64, i1, i32, i32)* @llvm.matrix.column.major.store.v4i32.vi32<br class="">
+; CHECK-NEXT: Intrinsic has incorrect argument type! <br class="">
+; CHECK-NEXT: void (<4 x float>, i32*, i64, i1, i32, i32)* @llvm.matrix.column.major.store.v4f32.pi32<br class="">
+;<br class="">
+ call void @llvm.matrix.column.major.store.v4i32.vi32(<4 x i32> zeroinitializer, float* %m, i64 2, i1 false, i32 2, i32 2)<br class="">
+ call void @llvm.matrix.column.major.store.v4f32.pi32(<4 x float> zeroinitializer, i32* %n, i64 2, i1 false, i32 2, i32 2)<br class="">
ret void<br class="">
}<br class="">
+<br class="">
+define void @column.major_store_non_int_float_type(<4 x float>* %m, <4 x float>* %n, i64 %arg) {<br class="">
+;<br class="">
+; CHECK-NEXT: Intrinsic has incorrect argument type!<br class="">
+; CHECK-NEXT: void (<4 x float*>, <4 x float>*, i64, i1, i32, i32)* @llvm.matrix.column.major.store.v4f32p0.p0v4f32<br class="">
+;<br class="">
+ call void @llvm.matrix.column.major.store.v4f32p0.p0v4f32(<4 x float*> zeroinitializer, <4 x float>* %n, i64 2, i1 false, i32 2, i32 2)<br class="">
+ ret void<br class="">
+}<br class="">
+<br class="">
+define <4 x float> @column.major_load_stride_too_small(float* %m, i32 %arg) {<br class="">
+;<br class="">
+; CHECK-NEXT: Stride must be greater or equal than the number of rows!<br class="">
+; CHECK-NEXT: <4 x float> (float*, i64, i1, i32, i32)* @llvm.matrix.column.major.load.v4f32<br class="">
+;<br class="">
+ %result.1 = call <4 x float> @llvm.matrix.column.major.load.v4f32(float* %m, i64 1, i1 false, i32 2, i32 2)<br class="">
+ ret <4 x float> %result.1<br class="">
+}<br class="">
+<br class="">
+define void @column.major_store_stride_too_small(float* %m, i64 %arg) {<br class="">
+;<br class="">
+; CHECK-NEXT: Stride must be greater or equal than the number of rows!<br class="">
+; CHECK-NEXT: void (<4 x float>, float*, i64, i1, i32, i32)* @llvm.matrix.column.major.store.v4f32<br class="">
+;<br class="">
+ call void @llvm.matrix.column.major.store.v4f32(<4 x float> zeroinitializer, float* %m, i64 1, i1 false, i32 2, i32 2)<br class="">
+ ret void<br class="">
+}<br class="">
+<br class="">
+declare <4 x i32> @llvm.matrix.column.major.load.v4i32(float*, i64, i1, i32, i32)<br class="">
+declare <4 x float> @llvm.matrix.column.major.load.v4f32.pi32(i32*, i64, i1, i32, i32)<br class="">
+declare <4 x float> @llvm.matrix.column.major.load.v4f32(float*, i64, i1, i32, i32)<br class="">
+declare <6 x float> @llvm.matrix.column.major.load.v6f32(float*, i64, i1, i32, i32)<br class="">
+<br class="">
+declare void @llvm.matrix.column.major.store.v4f32(<4 x float>, float*, i64, i1, i32, i32)<br class="">
+declare void @llvm.matrix.column.major.store.v6f32(<6 x float>, float*, i64, i1, i32, i32)<br class="">
+declare void @llvm.matrix.column.major.store.v4i32.vi32(<4 x i32>, float*, i64, i1, i32, i32)<br class="">
+declare void @llvm.matrix.column.major.store.v4f32.pi32(<4 x float>, i32*, i64, i1, i32, i32)<br class="">
+declare void @llvm.matrix.column.major.store.v4f32p0.p0v4f32(<4 x float*>, <4 x float>*, i64, i1, i32, i32)<br class="">
+<br class="">
+declare <4 x i32> @llvm.matrix.transpose.v4i32.v4f32(<4 x float>, i32, i32)<br class="">
+declare <4 x float> @llvm.matrix.transpose.v4f32(<4 x float>, i32, i32)<br class="">
+declare <4 x float> @llvm.matrix.transpose.v4f32.v4i32(<4 x i32>, i32, i32)<br class="">
+<br class="">
+declare <4 x i32> @llvm.matrix.multiply.v4i32.v4f32.v4f32(<4 x float>, <4 x float>, i32, i32, i32)<br class="">
+declare <4 x float> @llvm.matrix.multiply.v4f32.v4i32.v4f32(<4 x i32>, <4 x float>, i32, i32, i32)<br class="">
+declare <4 x float> @llvm.matrix.multiply.v4f32.v4f32.v4i32(<4 x float>, <4 x i32>, i32, i32, i32)<br class="">
+declare <4 x float> @llvm.matrix.multiply.v4f32.v4i32.v4i32(<4 x i32>, <4 x i32>, i32, i32, i32)<br class="">
+declare <4 x float> @llvm.matrix.multiply.v4f32.v4f32.v4f32(<4 x float>, <4 x float>, i32, i32, i32)<br class="">
<br class="">
<br class="">
<br class="">
_______________________________________________<br class="">
llvm-commits mailing list<br class="">
<a href="mailto:llvm-commits@lists.llvm.org" class="">llvm-commits@lists.llvm.org</a><br class="">
https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-commits<br class="">
</div>
</div>
</blockquote>
</div>
<br class="">
</div>
</div>
</body>
</html>