<html xmlns:v="urn:schemas-microsoft-com:vml" xmlns:o="urn:schemas-microsoft-com:office:office" xmlns:w="urn:schemas-microsoft-com:office:word" xmlns:m="http://schemas.microsoft.com/office/2004/12/omml" xmlns="http://www.w3.org/TR/REC-html40">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=us-ascii">
<meta name="Generator" content="Microsoft Word 14 (filtered medium)">
<style><!--
/* Font Definitions */
@font-face
{font-family:Wingdings;
panose-1:5 0 0 0 0 0 0 0 0 0;}
@font-face
{font-family:Wingdings;
panose-1:5 0 0 0 0 0 0 0 0 0;}
@font-face
{font-family:Calibri;
panose-1:2 15 5 2 2 2 4 3 2 4;}
@font-face
{font-family:Tahoma;
panose-1:2 11 6 4 3 5 4 4 2 4;}
/* Style Definitions */
p.MsoNormal, li.MsoNormal, div.MsoNormal
{margin:0cm;
margin-bottom:.0001pt;
font-size:12.0pt;
font-family:"Times New Roman","serif";}
a:link, span.MsoHyperlink
{mso-style-priority:99;
color:blue;
text-decoration:underline;}
a:visited, span.MsoHyperlinkFollowed
{mso-style-priority:99;
color:purple;
text-decoration:underline;}
span.EmailStyle17
{mso-style-type:personal-reply;
font-family:"Calibri","sans-serif";
color:#1F497D;}
.MsoChpDefault
{mso-style-type:export-only;
font-family:"Calibri","sans-serif";}
@page WordSection1
{size:612.0pt 792.0pt;
margin:72.0pt 90.0pt 72.0pt 90.0pt;}
div.WordSection1
{page:WordSection1;}
/* List Definitions */
@list l0
{mso-list-id:799957587;
mso-list-type:hybrid;
mso-list-template-ids:-1214880770 44585902 67698691 67698693 67698689 67698691 67698693 67698689 67698691 67698693;}
@list l0:level1
{mso-level-start-at:0;
mso-level-number-format:bullet;
mso-level-text:-;
mso-level-tab-stop:none;
mso-level-number-position:left;
text-indent:-18.0pt;
font-family:"Calibri","sans-serif";
mso-fareast-font-family:"Times New Roman";
mso-bidi-font-family:"Times New Roman";}
@list l0:level2
{mso-level-number-format:bullet;
mso-level-text:o;
mso-level-tab-stop:none;
mso-level-number-position:left;
text-indent:-18.0pt;
font-family:"Courier New";
mso-bidi-font-family:"Times New Roman";}
@list l0:level3
{mso-level-number-format:bullet;
mso-level-text:\F0A7;
mso-level-tab-stop:none;
mso-level-number-position:left;
text-indent:-18.0pt;
font-family:Wingdings;}
@list l0:level4
{mso-level-number-format:bullet;
mso-level-text:\F0B7;
mso-level-tab-stop:none;
mso-level-number-position:left;
text-indent:-18.0pt;
font-family:Symbol;}
@list l0:level5
{mso-level-number-format:bullet;
mso-level-text:o;
mso-level-tab-stop:none;
mso-level-number-position:left;
text-indent:-18.0pt;
font-family:"Courier New";
mso-bidi-font-family:"Times New Roman";}
@list l0:level6
{mso-level-number-format:bullet;
mso-level-text:\F0A7;
mso-level-tab-stop:none;
mso-level-number-position:left;
text-indent:-18.0pt;
font-family:Wingdings;}
@list l0:level7
{mso-level-number-format:bullet;
mso-level-text:\F0B7;
mso-level-tab-stop:none;
mso-level-number-position:left;
text-indent:-18.0pt;
font-family:Symbol;}
@list l0:level8
{mso-level-number-format:bullet;
mso-level-text:o;
mso-level-tab-stop:none;
mso-level-number-position:left;
text-indent:-18.0pt;
font-family:"Courier New";
mso-bidi-font-family:"Times New Roman";}
@list l0:level9
{mso-level-number-format:bullet;
mso-level-text:\F0A7;
mso-level-tab-stop:none;
mso-level-number-position:left;
text-indent:-18.0pt;
font-family:Wingdings;}
ol
{margin-bottom:0cm;}
ul
{margin-bottom:0cm;}
--></style><!--[if gte mso 9]><xml>
<o:shapedefaults v:ext="edit" spidmax="1026" />
</xml><![endif]--><!--[if gte mso 9]><xml>
<o:shapelayout v:ext="edit">
<o:idmap v:ext="edit" data="1" />
</o:shapelayout></xml><![endif]-->
</head>
<body lang="EN-US" link="blue" vlink="purple">
<div class="WordSection1">
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D">Yes, of course. All others are coming soon.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D"><o:p> </o:p></span></p>
<p class="MsoNormal" style="margin-left:36.0pt;text-indent:-18.0pt;mso-list:l0 level1 lfo1">
<![if !supportLists]><span style="font-family:"Calibri","sans-serif";color:#31849B"><span style="mso-list:Ignore">-<span style="font:7.0pt "Times New Roman"">
</span></span></span><![endif]><span dir="LTR"></span><b><i><span style="color:#31849B"> Elena<o:p></o:p></span></i></b></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D"><o:p> </o:p></span></p>
<p class="MsoNormal"><b><span style="font-size:10.0pt;font-family:"Tahoma","sans-serif"">From:</span></b><span style="font-size:10.0pt;font-family:"Tahoma","sans-serif""> Craig Topper [mailto:craig.topper@gmail.com]
<br>
<b>Sent:</b> Wednesday, July 31, 2013 19:23<br>
<b>To:</b> Demikhovsky, Elena<br>
<b>Cc:</b> llvm-commits@cs.uiuc.edu<br>
<b>Subject:</b> Re: [llvm] r187491 - Added INSERT and EXTRACT intructions from AVX-512 ISA.<o:p></o:p></span></p>
<p class="MsoNormal"><o:p> </o:p></p>
Based on your changes in X86ISelLowering.cpp, this marks more than just insert/extract as being legal.
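For concreteness, the pattern in question, condensed from the X86ISelLowering.cpp hunk quoted below (only a few representative calls are kept here; the full hunk sets up many more operations):

if (!TM.Options.UseSoftFloat && Subtarget->hasAVX512()) {
  // The new 512-bit register classes come first ...
  addRegisterClass(MVT::v16f32, &X86::VR512RegClass);
  addRegisterClass(MVT::v8i64,  &X86::VR512RegClass);

  // ... but plain arithmetic and logic on the 512-bit types is marked Legal
  // as well, which is why the change is wider than insert/extract alone.
  setOperationAction(ISD::FADD, MVT::v16f32, Legal);
  setOperationAction(ISD::ADD,  MVT::v16i32, Legal);
  setOperationAction(ISD::AND,  MVT::v8i64,  Legal);
  // (many more setOperationAction calls follow in the full hunk)
}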
On Wed, Jul 31, 2013 at 4:35 AM, Elena Demikhovsky <elena.demikhovsky@intel.com> wrote:
<p class="MsoNormal">Author: delena<br>
Date: Wed Jul 31 06:35:14 2013<br>
New Revision: 187491<br>
<br>
URL: <a href="http://llvm.org/viewvc/llvm-project?rev=187491&view=rev" target="_blank">
http://llvm.org/viewvc/llvm-project?rev=187491&view=rev</a><br>
Log:<br>
Added INSERT and EXTRACT intructions from AVX-512 ISA.<br>
All insertf*/extractf* functions replaced with insert/extract since we have insertf and inserti forms.<br>
Added lowering for INSERT_VECTOR_ELT / EXTRACT_VECTOR_ELT for 512-bit vectors.<br>
Added lowering for EXTRACT/INSERT subvector for 512-bit vectors.<br>
Added a test.<br>
<br>
Added:<br>
llvm/trunk/lib/Target/X86/X86InstrAVX512.td<br>
llvm/trunk/test/CodeGen/X86/avx512-insert-extract.ll<br>
Modified:<br>
llvm/trunk/lib/Target/X86/X86ISelLowering.cpp<br>
llvm/trunk/lib/Target/X86/X86ISelLowering.h<br>
llvm/trunk/lib/Target/X86/X86InstrFragmentsSIMD.td<br>
llvm/trunk/lib/Target/X86/X86InstrInfo.td<br>
llvm/trunk/lib/Target/X86/X86InstrSSE.td<br>
<br>
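The heart of the new element lowering is easiest to see in isolation: for a 256- or 512-bit source, the element's 128-bit chunk is extracted first, and the index is then re-based into that chunk. A condensed sketch, taken from the LowerEXTRACT_VECTOR_ELT hunk further down:

if (VecVT.is256BitVector() || VecVT.is512BitVector()) {
  unsigned IdxVal = cast<ConstantSDNode>(Idx)->getZExtValue();

  // Grab the 128-bit subvector that contains element IdxVal.
  Vec = Extract128BitVector(Vec, IdxVal, DAG, dl);

  // Re-base the index so it is relative to that 128-bit chunk.
  EVT EltVT = VecVT.getVectorElementType();
  unsigned ElemsPerChunk = 128 / EltVT.getSizeInBits();
  IdxVal -= (IdxVal / ElemsPerChunk) * ElemsPerChunk;

  return DAG.getNode(ISD::EXTRACT_VECTOR_ELT, dl, Op.getValueType(), Vec,
                     DAG.getConstant(IdxVal, MVT::i32));
}

The INSERT_VECTOR_ELT path uses the same trick in reverse: extract the chunk, insert the element into it, then put the chunk back with Insert128BitVector.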
Modified: llvm/trunk/lib/Target/X86/X86ISelLowering.cpp
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/X86/X86ISelLowering.cpp?rev=187491&r1=187490&r2=187491&view=diff
==============================================================================
--- llvm/trunk/lib/Target/X86/X86ISelLowering.cpp (original)<br>
+++ llvm/trunk/lib/Target/X86/X86ISelLowering.cpp Wed Jul 31 06:35:14 2013<br>
@@ -58,17 +58,14 @@ STATISTIC(NumTailCalls, "Number of tail<br>
static SDValue getMOVL(SelectionDAG &DAG, SDLoc dl, EVT VT, SDValue V1,<br>
SDValue V2);<br>
<br>
-/// Generate a DAG to grab 128-bits from a vector > 128 bits. This<br>
-/// sets things up to match to an AVX VEXTRACTF128 instruction or a<br>
-/// simple subregister reference. Idx is an index in the 128 bits we<br>
-/// want. It need not be aligned to a 128-bit bounday. That makes<br>
-/// lowering EXTRACT_VECTOR_ELT operations easier.<br>
-static SDValue Extract128BitVector(SDValue Vec, unsigned IdxVal,<br>
- SelectionDAG &DAG, SDLoc dl) {<br>
+static SDValue ExtractSubVector(SDValue Vec, unsigned IdxVal,<br>
+ SelectionDAG &DAG, SDLoc dl,<br>
+ unsigned vectorWidth) {<br>
+ assert((vectorWidth == 128 || vectorWidth == 256) &&<br>
+ "Unsupported vector width");<br>
EVT VT = Vec.getValueType();<br>
- assert(VT.is256BitVector() && "Unexpected vector size!");<br>
EVT ElVT = VT.getVectorElementType();<br>
- unsigned Factor = VT.getSizeInBits()/128;<br>
+ unsigned Factor = VT.getSizeInBits()/vectorWidth;<br>
EVT ResultVT = EVT::getVectorVT(*DAG.getContext(), ElVT,<br>
VT.getVectorNumElements()/Factor);<br>
<br>
@@ -76,13 +73,12 @@ static SDValue Extract128BitVector(SDVal<br>
if (Vec.getOpcode() == ISD::UNDEF)<br>
return DAG.getUNDEF(ResultVT);<br>
<br>
- // Extract the relevant 128 bits. Generate an EXTRACT_SUBVECTOR<br>
- // we can match to VEXTRACTF128.<br>
- unsigned ElemsPerChunk = 128 / ElVT.getSizeInBits();<br>
+ // Extract the relevant vectorWidth bits. Generate an EXTRACT_SUBVECTOR<br>
+ unsigned ElemsPerChunk = vectorWidth / ElVT.getSizeInBits();<br>
<br>
- // This is the index of the first element of the 128-bit chunk<br>
+ // This is the index of the first element of the vectorWidth-bit chunk<br>
// we want.<br>
- unsigned NormalizedIdxVal = (((IdxVal * ElVT.getSizeInBits()) / 128)<br>
+ unsigned NormalizedIdxVal = (((IdxVal * ElVT.getSizeInBits()) / vectorWidth)<br>
* ElemsPerChunk);<br>
<br>
// If the input is a buildvector just emit a smaller one.<br>
@@ -95,38 +91,70 @@ static SDValue Extract128BitVector(SDVal<br>
VecIdx);<br>
<br>
return Result;<br>
+<br>
+}<br>
+/// Generate a DAG to grab 128-bits from a vector > 128 bits. This<br>
+/// sets things up to match to an AVX VEXTRACTF128 / VEXTRACTI128<br>
+/// or AVX-512 VEXTRACTF32x4 / VEXTRACTI32x4<br>
+/// instructions or a simple subregister reference. Idx is an index in the<br>
+/// 128 bits we want. It need not be aligned to a 128-bit boundary. That makes
+/// lowering EXTRACT_VECTOR_ELT operations easier.<br>
+static SDValue Extract128BitVector(SDValue Vec, unsigned IdxVal,<br>
+ SelectionDAG &DAG, SDLoc dl) {<br>
+ assert(Vec.getValueType().is256BitVector() && "Unexpected vector size!");<br>
+ return ExtractSubVector(Vec, IdxVal, DAG, dl, 128);<br>
}<br>
<br>
-/// Generate a DAG to put 128-bits into a vector > 128 bits. This<br>
-/// sets things up to match to an AVX VINSERTF128 instruction or a<br>
-/// simple superregister reference. Idx is an index in the 128 bits<br>
-/// we want. It need not be aligned to a 128-bit bounday. That makes<br>
-/// lowering INSERT_VECTOR_ELT operations easier.<br>
-static SDValue Insert128BitVector(SDValue Result, SDValue Vec,<br>
- unsigned IdxVal, SelectionDAG &DAG,<br>
- SDLoc dl) {<br>
+/// Generate a DAG to grab 256-bits from a 512-bit vector.<br>
+static SDValue Extract256BitVector(SDValue Vec, unsigned IdxVal,<br>
+ SelectionDAG &DAG, SDLoc dl) {<br>
+ assert(Vec.getValueType().is512BitVector() && "Unexpected vector size!");<br>
+ return ExtractSubVector(Vec, IdxVal, DAG, dl, 256);<br>
+}<br>
+<br>
+static SDValue InsertSubVector(SDValue Result, SDValue Vec,<br>
+ unsigned IdxVal, SelectionDAG &DAG,<br>
+ SDLoc dl, unsigned vectorWidth) {<br>
+ assert((vectorWidth == 128 || vectorWidth == 256) &&<br>
+ "Unsupported vector width");<br>
// Inserting UNDEF is Result<br>
if (Vec.getOpcode() == ISD::UNDEF)<br>
return Result;<br>
-<br>
EVT VT = Vec.getValueType();<br>
- assert(VT.is128BitVector() && "Unexpected vector size!");<br>
-<br>
EVT ElVT = VT.getVectorElementType();<br>
EVT ResultVT = Result.getValueType();<br>
<br>
- // Insert the relevant 128 bits.<br>
- unsigned ElemsPerChunk = 128/ElVT.getSizeInBits();<br>
+ // Insert the relevant vectorWidth bits.<br>
+ unsigned ElemsPerChunk = vectorWidth/ElVT.getSizeInBits();<br>
<br>
- // This is the index of the first element of the 128-bit chunk<br>
+ // This is the index of the first element of the vectorWidth-bit chunk<br>
// we want.<br>
- unsigned NormalizedIdxVal = (((IdxVal * ElVT.getSizeInBits())/128)<br>
+ unsigned NormalizedIdxVal = (((IdxVal * ElVT.getSizeInBits())/vectorWidth)<br>
* ElemsPerChunk);<br>
<br>
SDValue VecIdx = DAG.getIntPtrConstant(NormalizedIdxVal);<br>
return DAG.getNode(ISD::INSERT_SUBVECTOR, dl, ResultVT, Result, Vec,<br>
VecIdx);<br>
}<br>
+/// Generate a DAG to put 128-bits into a vector > 128 bits. This<br>
+/// sets things up to match to an AVX VINSERTF128/VINSERTI128 or<br>
+/// AVX-512 VINSERTF32x4/VINSERTI32x4 instructions or a<br>
+/// simple superregister reference. Idx is an index in the 128 bits<br>
+/// we want. It need not be aligned to a 128-bit boundary. That makes
+/// lowering INSERT_VECTOR_ELT operations easier.<br>
+static SDValue Insert128BitVector(SDValue Result, SDValue Vec,<br>
+ unsigned IdxVal, SelectionDAG &DAG,<br>
+ SDLoc dl) {<br>
+ assert(Vec.getValueType().is128BitVector() && "Unexpected vector size!");<br>
+ return InsertSubVector(Result, Vec, IdxVal, DAG, dl, 128);<br>
+}<br>
+<br>
+static SDValue Insert256BitVector(SDValue Result, SDValue Vec,<br>
+ unsigned IdxVal, SelectionDAG &DAG,<br>
+ SDLoc dl) {<br>
+ assert(Vec.getValueType().is256BitVector() && "Unexpected vector size!");<br>
+ return InsertSubVector(Result, Vec, IdxVal, DAG, dl, 256);<br>
+}<br>
<br>
/// Concat two 128-bit vectors into a 256 bit vector using VINSERTF128<br>
/// instructions. This is used because creating CONCAT_VECTOR nodes of<br>
@@ -139,6 +167,13 @@ static SDValue Concat128BitVectors(SDVal<br>
return Insert128BitVector(V, V2, NumElems/2, DAG, dl);<br>
}<br>
<br>
+static SDValue Concat256BitVectors(SDValue V1, SDValue V2, EVT VT,<br>
+ unsigned NumElems, SelectionDAG &DAG,<br>
+ SDLoc dl) {<br>
+ SDValue V = Insert256BitVector(DAG.getUNDEF(VT), V1, 0, DAG, dl);<br>
+ return Insert256BitVector(V, V2, NumElems/2, DAG, dl);<br>
+}<br>
+<br>
static TargetLoweringObjectFile *createTLOF(X86TargetMachine &TM) {<br>
const X86Subtarget *Subtarget = &TM.getSubtarget<X86Subtarget>();<br>
bool is64Bit = Subtarget->is64Bit();<br>
@@ -1261,6 +1296,147 @@ void X86TargetLowering::resetOperationAc<br>
}<br>
}<br>
<br>
+ if (!TM.Options.UseSoftFloat && Subtarget->hasAVX512()) {<br>
+ addRegisterClass(MVT::v16i32, &X86::VR512RegClass);<br>
+ addRegisterClass(MVT::v16f32, &X86::VR512RegClass);<br>
+ addRegisterClass(MVT::v8i64, &X86::VR512RegClass);<br>
+ addRegisterClass(MVT::v8f64, &X86::VR512RegClass);<br>
+<br>
+ addRegisterClass(MVT::v8i1, &X86::VK8RegClass);<br>
+ addRegisterClass(MVT::v16i1, &X86::VK16RegClass);<br>
+<br>
+ setLoadExtAction(ISD::EXTLOAD, MVT::v8f32, Legal);<br>
+ setOperationAction(ISD::LOAD, MVT::v16f32, Legal);<br>
+ setOperationAction(ISD::LOAD, MVT::v8f64, Legal);<br>
+ setOperationAction(ISD::LOAD, MVT::v8i64, Legal);<br>
+ setOperationAction(ISD::LOAD, MVT::v16i32, Legal);<br>
+ setOperationAction(ISD::LOAD, MVT::v16i1, Legal);<br>
+<br>
+ setOperationAction(ISD::FADD, MVT::v16f32, Legal);<br>
+ setOperationAction(ISD::FSUB, MVT::v16f32, Legal);<br>
+ setOperationAction(ISD::FMUL, MVT::v16f32, Legal);<br>
+ setOperationAction(ISD::FDIV, MVT::v16f32, Legal);<br>
+ setOperationAction(ISD::FSQRT, MVT::v16f32, Legal);<br>
+ setOperationAction(ISD::FNEG, MVT::v16f32, Custom);<br>
+<br>
+ setOperationAction(ISD::FADD, MVT::v8f64, Legal);<br>
+ setOperationAction(ISD::FSUB, MVT::v8f64, Legal);<br>
+ setOperationAction(ISD::FMUL, MVT::v8f64, Legal);<br>
+ setOperationAction(ISD::FDIV, MVT::v8f64, Legal);<br>
+ setOperationAction(ISD::FSQRT, MVT::v8f64, Legal);<br>
+ setOperationAction(ISD::FNEG, MVT::v8f64, Custom);<br>
+ setOperationAction(ISD::FMA, MVT::v8f64, Legal);<br>
+ setOperationAction(ISD::FMA, MVT::v16f32, Legal);<br>
+ setOperationAction(ISD::SDIV, MVT::v16i32, Custom);<br>
+<br>
+<br>
+ setOperationAction(ISD::FP_TO_SINT, MVT::v16i32, Legal);<br>
+ setOperationAction(ISD::FP_TO_UINT, MVT::v16i32, Legal);<br>
+ setOperationAction(ISD::FP_TO_UINT, MVT::v8i32, Legal);<br>
+ setOperationAction(ISD::SINT_TO_FP, MVT::v16i32, Legal);<br>
+ setOperationAction(ISD::UINT_TO_FP, MVT::v16i32, Legal);<br>
+ setOperationAction(ISD::UINT_TO_FP, MVT::v8i32, Legal);<br>
+ setOperationAction(ISD::FP_ROUND, MVT::v8f32, Legal);<br>
+ setOperationAction(ISD::FP_EXTEND, MVT::v8f32, Legal);<br>
+<br>
+ setOperationAction(ISD::TRUNCATE, MVT::i1, Legal);<br>
+ setOperationAction(ISD::TRUNCATE, MVT::v16i8, Custom);<br>
+ setOperationAction(ISD::TRUNCATE, MVT::v8i32, Custom);<br>
+ setOperationAction(ISD::TRUNCATE, MVT::v8i1, Custom);<br>
+ setOperationAction(ISD::TRUNCATE, MVT::v16i1, Custom);<br>
+ setOperationAction(ISD::ZERO_EXTEND, MVT::v16i32, Custom);<br>
+ setOperationAction(ISD::ZERO_EXTEND, MVT::v8i64, Custom);<br>
+ setOperationAction(ISD::SIGN_EXTEND, MVT::v16i32, Custom);<br>
+ setOperationAction(ISD::SIGN_EXTEND, MVT::v8i64, Custom);<br>
+ setOperationAction(ISD::SIGN_EXTEND, MVT::v16i8, Custom);<br>
+ setOperationAction(ISD::SIGN_EXTEND, MVT::v8i16, Custom);<br>
+ setOperationAction(ISD::SIGN_EXTEND, MVT::v16i16, Custom);<br>
+<br>
+ setOperationAction(ISD::CONCAT_VECTORS, MVT::v8f64, Custom);<br>
+ setOperationAction(ISD::CONCAT_VECTORS, MVT::v8i64, Custom);<br>
+ setOperationAction(ISD::CONCAT_VECTORS, MVT::v16f32, Custom);<br>
+ setOperationAction(ISD::CONCAT_VECTORS, MVT::v16i32, Custom);<br>
+ setOperationAction(ISD::CONCAT_VECTORS, MVT::v8i1, Custom);<br>
+<br>
+ setOperationAction(ISD::SETCC, MVT::v16i1, Custom);<br>
+ setOperationAction(ISD::SETCC, MVT::v8i1, Custom);<br>
+<br>
+ setOperationAction(ISD::MUL, MVT::v8i64, Custom);<br>
+<br>
+ setOperationAction(ISD::BUILD_VECTOR, MVT::v8i1, Custom);<br>
+ setOperationAction(ISD::BUILD_VECTOR, MVT::v16i1, Custom);<br>
+ setOperationAction(ISD::SELECT, MVT::v8f64, Custom);<br>
+ setOperationAction(ISD::SELECT, MVT::v8i64, Custom);<br>
+ setOperationAction(ISD::SELECT, MVT::v16f32, Custom);<br>
+<br>
+ setOperationAction(ISD::ADD, MVT::v8i64, Legal);<br>
+ setOperationAction(ISD::ADD, MVT::v16i32, Legal);<br>
+<br>
+ setOperationAction(ISD::SUB, MVT::v8i64, Legal);<br>
+ setOperationAction(ISD::SUB, MVT::v16i32, Legal);<br>
+<br>
+ setOperationAction(ISD::MUL, MVT::v16i32, Legal);<br>
+<br>
+ setOperationAction(ISD::SRL, MVT::v8i64, Custom);<br>
+ setOperationAction(ISD::SRL, MVT::v16i32, Custom);<br>
+<br>
+ setOperationAction(ISD::SHL, MVT::v8i64, Custom);<br>
+ setOperationAction(ISD::SHL, MVT::v16i32, Custom);<br>
+<br>
+ setOperationAction(ISD::SRA, MVT::v8i64, Custom);<br>
+ setOperationAction(ISD::SRA, MVT::v16i32, Custom);<br>
+<br>
+ setOperationAction(ISD::AND, MVT::v8i64, Legal);<br>
+ setOperationAction(ISD::OR, MVT::v8i64, Legal);<br>
+ setOperationAction(ISD::XOR, MVT::v8i64, Legal);<br>
+<br>
+ // Custom lower several nodes.<br>
+ for (int i = MVT::FIRST_VECTOR_VALUETYPE;<br>
+ i <= MVT::LAST_VECTOR_VALUETYPE; ++i) {<br>
+ MVT VT = (MVT::SimpleValueType)i;<br>
+<br>
+ // Extract subvector is special because the value type<br>
+ // (result) is 256/128-bit but the source is 512-bit wide.<br>
+ if (VT.is128BitVector() || VT.is256BitVector())<br>
+ setOperationAction(ISD::EXTRACT_SUBVECTOR, VT, Custom);<br>
+<br>
+ if (VT.getVectorElementType() == MVT::i1)<br>
+ setOperationAction(ISD::EXTRACT_SUBVECTOR, VT, Legal);<br>
+<br>
+ // Do not attempt to custom lower other non-512-bit vectors<br>
+ if (!VT.is512BitVector())<br>
+ continue;<br>
+<br>
+ if (VT != MVT::v8i64) {<br>
+ setOperationAction(ISD::XOR, VT, Promote);<br>
+ AddPromotedToType (ISD::XOR, VT, MVT::v8i64);<br>
+ setOperationAction(ISD::OR, VT, Promote);<br>
+ AddPromotedToType (ISD::OR, VT, MVT::v8i64);<br>
+ setOperationAction(ISD::AND, VT, Promote);<br>
+ AddPromotedToType (ISD::AND, VT, MVT::v8i64);<br>
+ }<br>
+ setOperationAction(ISD::VECTOR_SHUFFLE, VT, Custom);<br>
+ setOperationAction(ISD::INSERT_VECTOR_ELT, VT, Custom);<br>
+ setOperationAction(ISD::BUILD_VECTOR, VT, Custom);<br>
+ setOperationAction(ISD::VSELECT, VT, Legal);<br>
+ setOperationAction(ISD::EXTRACT_VECTOR_ELT, VT, Custom);<br>
+ setOperationAction(ISD::SCALAR_TO_VECTOR, VT, Custom);<br>
+ setOperationAction(ISD::INSERT_SUBVECTOR, VT, Custom);<br>
+ }<br>
+ for (int i = MVT::v32i8; i != MVT::v8i64; ++i) {<br>
+ MVT VT = (MVT::SimpleValueType)i;<br>
+<br>
+ // Do not attempt to promote non-512-bit vectors
+ if (!VT.is512BitVector())<br>
+ continue;<br>
+<br>
+ setOperationAction(ISD::LOAD, VT, Promote);<br>
+ AddPromotedToType (ISD::LOAD, VT, MVT::v8i64);<br>
+ setOperationAction(ISD::SELECT, VT, Promote);<br>
+ AddPromotedToType (ISD::SELECT, VT, MVT::v8i64);<br>
+ }<br>
+ }// has AVX-512<br>
+<br>
// SIGN_EXTEND_INREGs are evaluated by the extend type. Handle the expansion<br>
// of this type with custom code.<br>
for (int VT = MVT::FIRST_VECTOR_VALUETYPE;<br>
@@ -2007,12 +2183,18 @@ X86TargetLowering::LowerFormalArguments(<br>
RC = &X86::FR32RegClass;<br>
else if (RegVT == MVT::f64)<br>
RC = &X86::FR64RegClass;<br>
+ else if (RegVT.is512BitVector())<br>
+ RC = &X86::VR512RegClass;<br>
else if (RegVT.is256BitVector())<br>
RC = &X86::VR256RegClass;<br>
else if (RegVT.is128BitVector())<br>
RC = &X86::VR128RegClass;<br>
else if (RegVT == MVT::x86mmx)<br>
RC = &X86::VR64RegClass;<br>
+ else if (RegVT == MVT::v8i1)<br>
+ RC = &X86::VK8RegClass;<br>
+ else if (RegVT == MVT::v16i1)<br>
+ RC = &X86::VK16RegClass;<br>
else<br>
llvm_unreachable("Unknown argument type!");<br>
<br>
@@ -4053,42 +4235,59 @@ static bool isMOVDDUPMask(ArrayRef<int><br>
return true;<br>
}<br>
<br>
-/// isVEXTRACTF128Index - Return true if the specified<br>
+/// isVEXTRACTIndex - Return true if the specified<br>
/// EXTRACT_SUBVECTOR operand specifies a vector extract that is<br>
-/// suitable for input to VEXTRACTF128.<br>
-bool X86::isVEXTRACTF128Index(SDNode *N) {<br>
+/// suitable for instructions that extract 128- or 256-bit vectors
+static bool isVEXTRACTIndex(SDNode *N, unsigned vecWidth) {<br>
+ assert((vecWidth == 128 || vecWidth == 256) && "Unexpected vector width");<br>
if (!isa<ConstantSDNode>(N->getOperand(1).getNode()))<br>
return false;<br>
<br>
- // The index should be aligned on a 128-bit boundary.<br>
+ // The index should be aligned on a vecWidth-bit boundary.<br>
uint64_t Index =<br>
cast<ConstantSDNode>(N->getOperand(1).getNode())->getZExtValue();<br>
<br>
MVT VT = N->getValueType(0).getSimpleVT();<br>
unsigned ElSize = VT.getVectorElementType().getSizeInBits();<br>
- bool Result = (Index * ElSize) % 128 == 0;<br>
+ bool Result = (Index * ElSize) % vecWidth == 0;<br>
<br>
return Result;<br>
}<br>
<br>
-/// isVINSERTF128Index - Return true if the specified INSERT_SUBVECTOR<br>
+/// isVINSERTIndex - Return true if the specified INSERT_SUBVECTOR<br>
/// operand specifies a subvector insert that is suitable for input to<br>
-/// VINSERTF128.<br>
-bool X86::isVINSERTF128Index(SDNode *N) {<br>
+/// insertion of 128 or 256-bit subvectors<br>
+static bool isVINSERTIndex(SDNode *N, unsigned vecWidth) {<br>
+ assert((vecWidth == 128 || vecWidth == 256) && "Unexpected vector width");<br>
if (!isa<ConstantSDNode>(N->getOperand(2).getNode()))<br>
return false;<br>
-<br>
- // The index should be aligned on a 128-bit boundary.<br>
+ // The index should be aligned on a vecWidth-bit boundary.<br>
uint64_t Index =<br>
cast<ConstantSDNode>(N->getOperand(2).getNode())->getZExtValue();<br>
<br>
MVT VT = N->getValueType(0).getSimpleVT();<br>
unsigned ElSize = VT.getVectorElementType().getSizeInBits();<br>
- bool Result = (Index * ElSize) % 128 == 0;<br>
+ bool Result = (Index * ElSize) % vecWidth == 0;<br>
<br>
return Result;<br>
}<br>
<br>
+bool X86::isVINSERT128Index(SDNode *N) {<br>
+ return isVINSERTIndex(N, 128);<br>
+}<br>
+<br>
+bool X86::isVINSERT256Index(SDNode *N) {<br>
+ return isVINSERTIndex(N, 256);<br>
+}<br>
+<br>
+bool X86::isVEXTRACT128Index(SDNode *N) {<br>
+ return isVEXTRACTIndex(N, 128);<br>
+}<br>
+<br>
+bool X86::isVEXTRACT256Index(SDNode *N) {<br>
+ return isVEXTRACTIndex(N, 256);<br>
+}<br>
+<br>
/// getShuffleSHUFImmediate - Return the appropriate immediate to shuffle<br>
/// the specified VECTOR_SHUFFLE mask with PSHUF* and SHUFP* instructions.<br>
/// Handles 128-bit and 256-bit.<br>
@@ -4192,12 +4391,10 @@ static unsigned getShufflePALIGNRImmedia<br>
return (Val - i) * EltSize;<br>
}<br>
<br>
-/// getExtractVEXTRACTF128Immediate - Return the appropriate immediate<br>
-/// to extract the specified EXTRACT_SUBVECTOR index with VEXTRACTF128<br>
-/// instructions.<br>
-unsigned X86::getExtractVEXTRACTF128Immediate(SDNode *N) {<br>
+static unsigned getExtractVEXTRACTImmediate(SDNode *N, unsigned vecWidth) {<br>
+ assert((vecWidth == 128 || vecWidth == 256) && "Unsupported vector width");<br>
if (!isa<ConstantSDNode>(N->getOperand(1).getNode()))<br>
- llvm_unreachable("Illegal extract subvector for VEXTRACTF128");<br>
+ llvm_unreachable("Illegal extract subvector for VEXTRACT");<br>
<br>
uint64_t Index =<br>
cast<ConstantSDNode>(N->getOperand(1).getNode())->getZExtValue();<br>
@@ -4205,16 +4402,14 @@ unsigned X86::getExtractVEXTRACTF128Imme<br>
MVT VecVT = N->getOperand(0).getValueType().getSimpleVT();<br>
MVT ElVT = VecVT.getVectorElementType();<br>
<br>
- unsigned NumElemsPerChunk = 128 / ElVT.getSizeInBits();<br>
+ unsigned NumElemsPerChunk = vecWidth / ElVT.getSizeInBits();<br>
return Index / NumElemsPerChunk;<br>
}<br>
<br>
-/// getInsertVINSERTF128Immediate - Return the appropriate immediate<br>
-/// to insert at the specified INSERT_SUBVECTOR index with VINSERTF128<br>
-/// instructions.<br>
-unsigned X86::getInsertVINSERTF128Immediate(SDNode *N) {<br>
+static unsigned getInsertVINSERTImmediate(SDNode *N, unsigned vecWidth) {<br>
+ assert((vecWidth == 128 || vecWidth == 256) && "Unsupported vector width");<br>
if (!isa<ConstantSDNode>(N->getOperand(2).getNode()))<br>
- llvm_unreachable("Illegal insert subvector for VINSERTF128");<br>
+ llvm_unreachable("Illegal insert subvector for VINSERT");<br>
<br>
uint64_t Index =<br>
cast<ConstantSDNode>(N->getOperand(2).getNode())->getZExtValue();<br>
@@ -4222,10 +4417,38 @@ unsigned X86::getInsertVINSERTF128Immedi<br>
MVT VecVT = N->getValueType(0).getSimpleVT();<br>
MVT ElVT = VecVT.getVectorElementType();<br>
<br>
- unsigned NumElemsPerChunk = 128 / ElVT.getSizeInBits();<br>
+ unsigned NumElemsPerChunk = vecWidth / ElVT.getSizeInBits();<br>
return Index / NumElemsPerChunk;<br>
}<br>
<br>
+/// getExtractVEXTRACT128Immediate - Return the appropriate immediate<br>
+/// to extract the specified EXTRACT_SUBVECTOR index with VEXTRACTF128<br>
+/// and VEXTRACTI128 instructions.
+unsigned X86::getExtractVEXTRACT128Immediate(SDNode *N) {<br>
+ return getExtractVEXTRACTImmediate(N, 128);<br>
+}<br>
+<br>
+/// getExtractVEXTRACT256Immediate - Return the appropriate immediate<br>
+/// to extract the specified EXTRACT_SUBVECTOR index with VEXTRACTF64x4<br>
+/// and VEXTRACTI64x4 instructions.
+unsigned X86::getExtractVEXTRACT256Immediate(SDNode *N) {<br>
+ return getExtractVEXTRACTImmediate(N, 256);<br>
+}<br>
+<br>
+/// getInsertVINSERT128Immediate - Return the appropriate immediate<br>
+/// to insert at the specified INSERT_SUBVECTOR index with VINSERTF128<br>
+/// and VINSERTI128 instructions.<br>
+unsigned X86::getInsertVINSERT128Immediate(SDNode *N) {<br>
+ return getInsertVINSERTImmediate(N, 128);<br>
+}<br>
+<br>
+/// getInsertVINSERT256Immediate - Return the appropriate immediate<br>
+/// to insert at the specified INSERT_SUBVECTOR index with VINSERTF64x4
+/// and VINSERTI64x4 instructions.<br>
+unsigned X86::getInsertVINSERT256Immediate(SDNode *N) {<br>
+ return getInsertVINSERTImmediate(N, 256);<br>
+}<br>
+<br>
/// getShuffleCLImmediate - Return the appropriate immediate to shuffle<br>
/// the specified VECTOR_SHUFFLE mask with VPERMQ and VPERMPD instructions.<br>
/// Handles 256-bit.<br>
@@ -5715,19 +5938,22 @@ static SDValue LowerAVXCONCAT_VECTORS(SD<br>
SDLoc dl(Op);<br>
MVT ResVT = Op.getValueType().getSimpleVT();<br>
<br>
- assert(ResVT.is256BitVector() && "Value type must be 256-bit wide");<br>
+ assert((ResVT.is256BitVector() ||<br>
+ ResVT.is512BitVector()) && "Value type must be 256-/512-bit wide");<br>
<br>
SDValue V1 = Op.getOperand(0);<br>
SDValue V2 = Op.getOperand(1);<br>
unsigned NumElems = ResVT.getVectorNumElements();<br>
+ if(ResVT.is256BitVector())<br>
+ return Concat128BitVectors(V1, V2, ResVT, NumElems, DAG, dl);<br>
<br>
- return Concat128BitVectors(V1, V2, ResVT, NumElems, DAG, dl);<br>
+ return Concat256BitVectors(V1, V2, ResVT, NumElems, DAG, dl);<br>
}<br>
<br>
static SDValue LowerCONCAT_VECTORS(SDValue Op, SelectionDAG &DAG) {<br>
assert(Op.getNumOperands() == 2);<br>
<br>
- // 256-bit AVX can use the vinsertf128 instruction to create 256-bit vectors<br>
+ // AVX/AVX-512 can use the vinsertf128 instruction to create 256-bit vectors<br>
// from two other 128-bit ones.<br>
return LowerAVXCONCAT_VECTORS(Op, DAG);<br>
}<br>
@@ -7197,6 +7423,7 @@ static SDValue LowerEXTRACT_VECTOR_ELT_S<br>
SDValue<br>
X86TargetLowering::LowerEXTRACT_VECTOR_ELT(SDValue Op,<br>
SelectionDAG &DAG) const {<br>
+ SDLoc dl(Op);<br>
if (!isa<ConstantSDNode>(Op.getOperand(1)))<br>
return SDValue();<br>
<br>
@@ -7205,17 +7432,19 @@ X86TargetLowering::LowerEXTRACT_VECTOR_E<br>
<br>
// If this is a 256-bit vector result, first extract the 128-bit vector and<br>
// then extract the element from the 128-bit vector.<br>
- if (VecVT.is256BitVector()) {<br>
- SDLoc dl(Op.getNode());<br>
- unsigned NumElems = VecVT.getVectorNumElements();<br>
+ if (VecVT.is256BitVector() || VecVT.is512BitVector()) {<br>
SDValue Idx = Op.getOperand(1);<br>
unsigned IdxVal = cast<ConstantSDNode>(Idx)->getZExtValue();<br>
<br>
// Get the 128-bit vector.<br>
Vec = Extract128BitVector(Vec, IdxVal, DAG, dl);<br>
+ EVT EltVT = VecVT.getVectorElementType();<br>
+<br>
+ unsigned ElemsPerChunk = 128 / EltVT.getSizeInBits();<br>
<br>
- if (IdxVal >= NumElems/2)<br>
- IdxVal -= NumElems/2;<br>
+ //if (IdxVal >= NumElems/2)<br>
+ // IdxVal -= NumElems/2;<br>
+ IdxVal -= (IdxVal/ElemsPerChunk)*ElemsPerChunk;<br>
return DAG.getNode(ISD::EXTRACT_VECTOR_ELT, dl, Op.getValueType(), Vec,<br>
DAG.getConstant(IdxVal, MVT::i32));<br>
}<br>
@@ -7229,7 +7458,6 @@ X86TargetLowering::LowerEXTRACT_VECTOR_E<br>
}<br>
<br>
MVT VT = Op.getValueType().getSimpleVT();<br>
- SDLoc dl(Op);<br>
// TODO: handle v16i8.<br>
if (VT.getSizeInBits() == 16) {<br>
SDValue Vec = Op.getOperand(0);<br>
@@ -7350,19 +7578,20 @@ X86TargetLowering::LowerINSERT_VECTOR_EL<br>
<br>
// If this is a 256-bit vector result, first extract the 128-bit vector,<br>
// insert the element into the extracted half and then place it back.<br>
- if (VT.is256BitVector()) {<br>
+ if (VT.is256BitVector() || VT.is512BitVector()) {<br>
if (!isa<ConstantSDNode>(N2))<br>
return SDValue();<br>
<br>
// Get the desired 128-bit vector half.<br>
- unsigned NumElems = VT.getVectorNumElements();<br>
unsigned IdxVal = cast<ConstantSDNode>(N2)->getZExtValue();<br>
SDValue V = Extract128BitVector(N0, IdxVal, DAG, dl);<br>
<br>
// Insert the element into the desired half.<br>
- bool Upper = IdxVal >= NumElems/2;<br>
+ unsigned NumEltsIn128 = 128/EltVT.getSizeInBits();<br>
+ unsigned IdxIn128 = IdxVal - (IdxVal/NumEltsIn128) * NumEltsIn128;<br>
+<br>
V = DAG.getNode(ISD::INSERT_VECTOR_ELT, dl, V.getValueType(), V, N1,<br>
- DAG.getConstant(Upper ? IdxVal-NumElems/2 : IdxVal, MVT::i32));<br>
+ DAG.getConstant(IdxIn128, MVT::i32));<br>
<br>
// Insert the changed part back to the 256-bit vector<br>
return Insert128BitVector(N0, V, IdxVal, DAG, dl);<br>
@@ -7395,9 +7624,10 @@ static SDValue LowerSCALAR_TO_VECTOR(SDV<br>
// vector and then insert into the 256-bit vector.<br>
if (!OpVT.is128BitVector()) {<br>
// Insert into a 128-bit vector.<br>
+ unsigned SizeFactor = OpVT.getSizeInBits()/128;<br>
EVT VT128 = EVT::getVectorVT(*Context,<br>
OpVT.getVectorElementType(),<br>
- OpVT.getVectorNumElements() / 2);<br>
+ OpVT.getVectorNumElements() / SizeFactor);<br>
<br>
Op = DAG.getNode(ISD::SCALAR_TO_VECTOR, dl, VT128, Op.getOperand(0));<br>
<br>
@@ -7420,16 +7650,22 @@ static SDValue LowerSCALAR_TO_VECTOR(SDV<br>
// upper bits of a vector.<br>
static SDValue LowerEXTRACT_SUBVECTOR(SDValue Op, const X86Subtarget *Subtarget,<br>
SelectionDAG &DAG) {<br>
- if (Subtarget->hasFp256()) {<br>
- SDLoc dl(Op.getNode());<br>
- SDValue Vec = Op.getNode()->getOperand(0);<br>
- SDValue Idx = Op.getNode()->getOperand(1);<br>
+ SDLoc dl(Op);<br>
+ SDValue In = Op.getOperand(0);<br>
+ SDValue Idx = Op.getOperand(1);<br>
+ unsigned IdxVal = cast<ConstantSDNode>(Idx)->getZExtValue();<br>
+ EVT ResVT = Op.getValueType();<br>
+ EVT InVT = In.getValueType();<br>
<br>
- if (Op.getNode()->getValueType(0).is128BitVector() &&<br>
- Vec.getNode()->getValueType(0).is256BitVector() &&<br>
+ if (Subtarget->hasFp256()) {<br>
+ if (ResVT.is128BitVector() &&<br>
+ (InVT.is256BitVector() || InVT.is512BitVector()) &&<br>
isa<ConstantSDNode>(Idx)) {<br>
- unsigned IdxVal = cast<ConstantSDNode>(Idx)->getZExtValue();<br>
- return Extract128BitVector(Vec, IdxVal, DAG, dl);<br>
+ return Extract128BitVector(In, IdxVal, DAG, dl);<br>
+ }<br>
+ if (ResVT.is256BitVector() && InVT.is512BitVector() &&<br>
+ isa<ConstantSDNode>(Idx)) {<br>
+ return Extract256BitVector(In, IdxVal, DAG, dl);<br>
}<br>
}<br>
return SDValue();<br>
@@ -7446,12 +7682,20 @@ static SDValue LowerINSERT_SUBVECTOR(SDV<br>
SDValue SubVec = Op.getNode()->getOperand(1);<br>
SDValue Idx = Op.getNode()->getOperand(2);<br>
<br>
- if (Op.getNode()->getValueType(0).is256BitVector() &&<br>
+ if ((Op.getNode()->getValueType(0).is256BitVector() ||<br>
+ Op.getNode()->getValueType(0).is512BitVector()) &&<br>
SubVec.getNode()->getValueType(0).is128BitVector() &&<br>
isa<ConstantSDNode>(Idx)) {<br>
unsigned IdxVal = cast<ConstantSDNode>(Idx)->getZExtValue();<br>
return Insert128BitVector(Vec, SubVec, IdxVal, DAG, dl);<br>
}<br>
+<br>
+ if (Op.getNode()->getValueType(0).is512BitVector() &&<br>
+ SubVec.getNode()->getValueType(0).is256BitVector() &&<br>
+ isa<ConstantSDNode>(Idx)) {<br>
+ unsigned IdxVal = cast<ConstantSDNode>(Idx)->getZExtValue();<br>
+ return Insert256BitVector(Vec, SubVec, IdxVal, DAG, dl);<br>
+ }<br>
}<br>
return SDValue();<br>
}<br>
<br>
Modified: llvm/trunk/lib/Target/X86/X86ISelLowering.h
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/X86/X86ISelLowering.h?rev=187491&r1=187490&r2=187491&view=diff
==============================================================================
--- llvm/trunk/lib/Target/X86/X86ISelLowering.h (original)<br>
+++ llvm/trunk/lib/Target/X86/X86ISelLowering.h Wed Jul 31 06:35:14 2013<br>
@@ -434,25 +434,45 @@ namespace llvm {<br>
<br>
/// Define some predicates that are used for node matching.<br>
namespace X86 {<br>
- /// isVEXTRACTF128Index - Return true if the specified<br>
+ /// isVEXTRACT128Index - Return true if the specified<br>
/// EXTRACT_SUBVECTOR operand specifies a vector extract that is<br>
- /// suitable for input to VEXTRACTF128.<br>
- bool isVEXTRACTF128Index(SDNode *N);<br>
+ /// suitable for input to VEXTRACTF128, VEXTRACTI128 instructions.<br>
+ bool isVEXTRACT128Index(SDNode *N);<br>
<br>
- /// isVINSERTF128Index - Return true if the specified<br>
+ /// isVINSERT128Index - Return true if the specified<br>
/// INSERT_SUBVECTOR operand specifies a subvector insert that is<br>
- /// suitable for input to VINSERTF128.<br>
- bool isVINSERTF128Index(SDNode *N);<br>
+ /// suitable for input to VINSERTF128, VINSERTI128 instructions.<br>
+ bool isVINSERT128Index(SDNode *N);<br>
<br>
- /// getExtractVEXTRACTF128Immediate - Return the appropriate<br>
+ /// isVEXTRACT256Index - Return true if the specified<br>
+ /// EXTRACT_SUBVECTOR operand specifies a vector extract that is<br>
+ /// suitable for input to VEXTRACTF64X4, VEXTRACTI64X4 instructions.<br>
+ bool isVEXTRACT256Index(SDNode *N);<br>
+<br>
+ /// isVINSERT256Index - Return true if the specified<br>
+ /// INSERT_SUBVECTOR operand specifies a subvector insert that is<br>
+ /// suitable for input to VINSERTF64X4, VINSERTI64X4 instructions.<br>
+ bool isVINSERT256Index(SDNode *N);<br>
+<br>
+ /// getExtractVEXTRACT128Immediate - Return the appropriate<br>
+ /// immediate to extract the specified EXTRACT_SUBVECTOR index<br>
+ /// with VEXTRACTF128, VEXTRACTI128 instructions.<br>
+ unsigned getExtractVEXTRACT128Immediate(SDNode *N);<br>
+<br>
+ /// getInsertVINSERT128Immediate - Return the appropriate<br>
+ /// immediate to insert at the specified INSERT_SUBVECTOR index<br>
+ /// with VINSERTF128, VINSERTI128 instructions.
+ unsigned getInsertVINSERT128Immediate(SDNode *N);<br>
+<br>
+ /// getExtractVEXTRACT256Immediate - Return the appropriate<br>
/// immediate to extract the specified EXTRACT_SUBVECTOR index<br>
- /// with VEXTRACTF128 instructions.<br>
- unsigned getExtractVEXTRACTF128Immediate(SDNode *N);<br>
+ /// with VEXTRACTF64X4, VEXTRACTI64x4 instructions.<br>
+ unsigned getExtractVEXTRACT256Immediate(SDNode *N);<br>
<br>
- /// getInsertVINSERTF128Immediate - Return the appropriate<br>
+ /// getInsertVINSERT256Immediate - Return the appropriate<br>
/// immediate to insert at the specified INSERT_SUBVECTOR index<br>
- /// with VINSERTF128 instructions.<br>
- unsigned getInsertVINSERTF128Immediate(SDNode *N);<br>
+ /// with VINSERTF64x4, VINSERTI64x4 instructions.<br>
+ unsigned getInsertVINSERT256Immediate(SDNode *N);<br>
<br>
/// isZeroNode - Returns true if Elt is a constant zero or a floating point<br>
/// constant +0.0.<br>
<br>
Added: llvm/trunk/lib/Target/X86/X86InstrAVX512.td
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/X86/X86InstrAVX512.td?rev=187491&view=auto
==============================================================================
--- llvm/trunk/lib/Target/X86/X86InstrAVX512.td (added)<br>
+++ llvm/trunk/lib/Target/X86/X86InstrAVX512.td Wed Jul 31 06:35:14 2013<br>
@@ -0,0 +1,339 @@<br>
+// Bitcasts between 512-bit vector types. Return the original type since<br>
+// no instruction is needed for the conversion<br>
+let Predicates = [HasAVX512] in {<br>
+ def : Pat<(v8f64 (bitconvert (v16f32 VR512:$src))), (v8f64 VR512:$src)>;<br>
+ def : Pat<(v8f64 (bitconvert (v16i32 VR512:$src))), (v8f64 VR512:$src)>;<br>
+ def : Pat<(v8f64 (bitconvert (v8i64 VR512:$src))), (v8f64 VR512:$src)>;<br>
+ def : Pat<(v16f32 (bitconvert (v16i32 VR512:$src))), (v16f32 VR512:$src)>;<br>
+ def : Pat<(v16f32 (bitconvert (v8i64 VR512:$src))), (v16f32 VR512:$src)>;<br>
+ def : Pat<(v16f32 (bitconvert (v8f64 VR512:$src))), (v16f32 VR512:$src)>;<br>
+ def : Pat<(v8i64 (bitconvert (v16f32 VR512:$src))), (v8i64 VR512:$src)>;<br>
+ def : Pat<(v8i64 (bitconvert (v16i32 VR512:$src))), (v8i64 VR512:$src)>;<br>
+ def : Pat<(v8i64 (bitconvert (v8f64 VR512:$src))), (v8i64 VR512:$src)>;<br>
+ def : Pat<(v16i32 (bitconvert (v16f32 VR512:$src))), (v16i32 VR512:$src)>;<br>
+ def : Pat<(v16i32 (bitconvert (v8i64 VR512:$src))), (v16i32 VR512:$src)>;<br>
+ def : Pat<(v16i32 (bitconvert (v8f64 VR512:$src))), (v16i32 VR512:$src)>;<br>
+ def : Pat<(v8f64 (bitconvert (v8i64 VR512:$src))), (v8f64 VR512:$src)>;<br>
+<br>
+ def : Pat<(v2i64 (bitconvert (v4i32 VR128X:$src))), (v2i64 VR128X:$src)>;<br>
+ def : Pat<(v2i64 (bitconvert (v8i16 VR128X:$src))), (v2i64 VR128X:$src)>;<br>
+ def : Pat<(v2i64 (bitconvert (v16i8 VR128X:$src))), (v2i64 VR128X:$src)>;<br>
+ def : Pat<(v2i64 (bitconvert (v2f64 VR128X:$src))), (v2i64 VR128X:$src)>;<br>
+ def : Pat<(v2i64 (bitconvert (v4f32 VR128X:$src))), (v2i64 VR128X:$src)>;<br>
+ def : Pat<(v4i32 (bitconvert (v2i64 VR128X:$src))), (v4i32 VR128X:$src)>;<br>
+ def : Pat<(v4i32 (bitconvert (v8i16 VR128X:$src))), (v4i32 VR128X:$src)>;<br>
+ def : Pat<(v4i32 (bitconvert (v16i8 VR128X:$src))), (v4i32 VR128X:$src)>;<br>
+ def : Pat<(v4i32 (bitconvert (v2f64 VR128X:$src))), (v4i32 VR128X:$src)>;<br>
+ def : Pat<(v4i32 (bitconvert (v4f32 VR128X:$src))), (v4i32 VR128X:$src)>;<br>
+ def : Pat<(v8i16 (bitconvert (v2i64 VR128X:$src))), (v8i16 VR128X:$src)>;<br>
+ def : Pat<(v8i16 (bitconvert (v4i32 VR128X:$src))), (v8i16 VR128X:$src)>;<br>
+ def : Pat<(v8i16 (bitconvert (v16i8 VR128X:$src))), (v8i16 VR128X:$src)>;<br>
+ def : Pat<(v8i16 (bitconvert (v2f64 VR128X:$src))), (v8i16 VR128X:$src)>;<br>
+ def : Pat<(v8i16 (bitconvert (v4f32 VR128X:$src))), (v8i16 VR128X:$src)>;<br>
+ def : Pat<(v16i8 (bitconvert (v2i64 VR128X:$src))), (v16i8 VR128X:$src)>;<br>
+ def : Pat<(v16i8 (bitconvert (v4i32 VR128X:$src))), (v16i8 VR128X:$src)>;<br>
+ def : Pat<(v16i8 (bitconvert (v8i16 VR128X:$src))), (v16i8 VR128X:$src)>;<br>
+ def : Pat<(v16i8 (bitconvert (v2f64 VR128X:$src))), (v16i8 VR128X:$src)>;<br>
+ def : Pat<(v16i8 (bitconvert (v4f32 VR128X:$src))), (v16i8 VR128X:$src)>;<br>
+ def : Pat<(v4f32 (bitconvert (v2i64 VR128X:$src))), (v4f32 VR128X:$src)>;<br>
+ def : Pat<(v4f32 (bitconvert (v4i32 VR128X:$src))), (v4f32 VR128X:$src)>;<br>
+ def : Pat<(v4f32 (bitconvert (v8i16 VR128X:$src))), (v4f32 VR128X:$src)>;<br>
+ def : Pat<(v4f32 (bitconvert (v16i8 VR128X:$src))), (v4f32 VR128X:$src)>;<br>
+ def : Pat<(v4f32 (bitconvert (v2f64 VR128X:$src))), (v4f32 VR128X:$src)>;<br>
+ def : Pat<(v2f64 (bitconvert (v2i64 VR128X:$src))), (v2f64 VR128X:$src)>;<br>
+ def : Pat<(v2f64 (bitconvert (v4i32 VR128X:$src))), (v2f64 VR128X:$src)>;<br>
+ def : Pat<(v2f64 (bitconvert (v8i16 VR128X:$src))), (v2f64 VR128X:$src)>;<br>
+ def : Pat<(v2f64 (bitconvert (v16i8 VR128X:$src))), (v2f64 VR128X:$src)>;<br>
+ def : Pat<(v2f64 (bitconvert (v4f32 VR128X:$src))), (v2f64 VR128X:$src)>;<br>
+<br>
+// Bitcasts between 256-bit vector types. Return the original type since<br>
+// no instruction is needed for the conversion<br>
+ def : Pat<(v4f64 (bitconvert (v8f32 VR256X:$src))), (v4f64 VR256X:$src)>;<br>
+ def : Pat<(v4f64 (bitconvert (v8i32 VR256X:$src))), (v4f64 VR256X:$src)>;<br>
+ def : Pat<(v4f64 (bitconvert (v4i64 VR256X:$src))), (v4f64 VR256X:$src)>;<br>
+ def : Pat<(v4f64 (bitconvert (v16i16 VR256X:$src))), (v4f64 VR256X:$src)>;<br>
+ def : Pat<(v4f64 (bitconvert (v32i8 VR256X:$src))), (v4f64 VR256X:$src)>;<br>
+ def : Pat<(v8f32 (bitconvert (v8i32 VR256X:$src))), (v8f32 VR256X:$src)>;<br>
+ def : Pat<(v8f32 (bitconvert (v4i64 VR256X:$src))), (v8f32 VR256X:$src)>;<br>
+ def : Pat<(v8f32 (bitconvert (v4f64 VR256X:$src))), (v8f32 VR256X:$src)>;<br>
+ def : Pat<(v8f32 (bitconvert (v32i8 VR256X:$src))), (v8f32 VR256X:$src)>;<br>
+ def : Pat<(v8f32 (bitconvert (v16i16 VR256X:$src))), (v8f32 VR256X:$src)>;<br>
+ def : Pat<(v4i64 (bitconvert (v8f32 VR256X:$src))), (v4i64 VR256X:$src)>;<br>
+ def : Pat<(v4i64 (bitconvert (v8i32 VR256X:$src))), (v4i64 VR256X:$src)>;<br>
+ def : Pat<(v4i64 (bitconvert (v4f64 VR256X:$src))), (v4i64 VR256X:$src)>;<br>
+ def : Pat<(v4i64 (bitconvert (v32i8 VR256X:$src))), (v4i64 VR256X:$src)>;<br>
+ def : Pat<(v4i64 (bitconvert (v16i16 VR256X:$src))), (v4i64 VR256X:$src)>;<br>
+ def : Pat<(v32i8 (bitconvert (v4f64 VR256X:$src))), (v32i8 VR256X:$src)>;<br>
+ def : Pat<(v32i8 (bitconvert (v4i64 VR256X:$src))), (v32i8 VR256X:$src)>;<br>
+ def : Pat<(v32i8 (bitconvert (v8f32 VR256X:$src))), (v32i8 VR256X:$src)>;<br>
+ def : Pat<(v32i8 (bitconvert (v8i32 VR256X:$src))), (v32i8 VR256X:$src)>;<br>
+ def : Pat<(v32i8 (bitconvert (v16i16 VR256X:$src))), (v32i8 VR256X:$src)>;<br>
+ def : Pat<(v8i32 (bitconvert (v32i8 VR256X:$src))), (v8i32 VR256X:$src)>;<br>
+ def : Pat<(v8i32 (bitconvert (v16i16 VR256X:$src))), (v8i32 VR256X:$src)>;<br>
+ def : Pat<(v8i32 (bitconvert (v8f32 VR256X:$src))), (v8i32 VR256X:$src)>;<br>
+ def : Pat<(v8i32 (bitconvert (v4i64 VR256X:$src))), (v8i32 VR256X:$src)>;<br>
+ def : Pat<(v8i32 (bitconvert (v4f64 VR256X:$src))), (v8i32 VR256X:$src)>;<br>
+ def : Pat<(v16i16 (bitconvert (v8f32 VR256X:$src))), (v16i16 VR256X:$src)>;<br>
+ def : Pat<(v16i16 (bitconvert (v8i32 VR256X:$src))), (v16i16 VR256X:$src)>;<br>
+ def : Pat<(v16i16 (bitconvert (v4i64 VR256X:$src))), (v16i16 VR256X:$src)>;<br>
+ def : Pat<(v16i16 (bitconvert (v4f64 VR256X:$src))), (v16i16 VR256X:$src)>;<br>
+ def : Pat<(v16i16 (bitconvert (v32i8 VR256X:$src))), (v16i16 VR256X:$src)>;<br>
+}<br>
+<br>
+//===----------------------------------------------------------------------===//<br>
+// AVX-512 - VECTOR INSERT<br>
+//<br>
+// -- 32x8 form --<br>
+let neverHasSideEffects = 1, ExeDomain = SSEPackedSingle in {<br>
+def VINSERTF32x4rr : AVX512AIi8<0x18, MRMSrcReg, (outs VR512:$dst),<br>
+ (ins VR512:$src1, VR128X:$src2, i8imm:$src3),<br>
+ "vinsertf32x4\t{$src3, $src2, $src1, $dst|$dst, $src1, $src2, $src3}",<br>
+ []>, EVEX_4V, EVEX_V512;<br>
+let mayLoad = 1 in<br>
+def VINSERTF32x4rm : AVX512AIi8<0x18, MRMSrcMem, (outs VR512:$dst),<br>
+ (ins VR512:$src1, f128mem:$src2, i8imm:$src3),<br>
+ "vinsertf32x4\t{$src3, $src2, $src1, $dst|$dst, $src1, $src2, $src3}",<br>
+ []>, EVEX_4V, EVEX_V512, EVEX_CD8<32, CD8VT4>;<br>
+}<br>
+<br>
+// -- 64x4 fp form --<br>
+let neverHasSideEffects = 1, ExeDomain = SSEPackedDouble in {<br>
+def VINSERTF64x4rr : AVX512AIi8<0x1a, MRMSrcReg, (outs VR512:$dst),<br>
+ (ins VR512:$src1, VR256X:$src2, i8imm:$src3),<br>
+ "vinsertf64x4\t{$src3, $src2, $src1, $dst|$dst, $src1, $src2, $src3}",<br>
+ []>, EVEX_4V, EVEX_V512, VEX_W;<br>
+let mayLoad = 1 in<br>
+def VINSERTF64x4rm : AVX512AIi8<0x1a, MRMSrcMem, (outs VR512:$dst),<br>
+ (ins VR512:$src1, i256mem:$src2, i8imm:$src3),<br>
+ "vinsertf64x4\t{$src3, $src2, $src1, $dst|$dst, $src1, $src2, $src3}",<br>
+ []>, EVEX_4V, EVEX_V512, VEX_W, EVEX_CD8<64, CD8VT4>;<br>
+}<br>
+// -- 32x4 integer form --<br>
+let neverHasSideEffects = 1 in {<br>
+def VINSERTI32x4rr : AVX512AIi8<0x38, MRMSrcReg, (outs VR512:$dst),<br>
+ (ins VR512:$src1, VR128X:$src2, i8imm:$src3),<br>
+ "vinserti32x4\t{$src3, $src2, $src1, $dst|$dst, $src1, $src2, $src3}",<br>
+ []>, EVEX_4V, EVEX_V512;<br>
+let mayLoad = 1 in<br>
+def VINSERTI32x4rm : AVX512AIi8<0x38, MRMSrcMem, (outs VR512:$dst),<br>
+ (ins VR512:$src1, i128mem:$src2, i8imm:$src3),<br>
+ "vinserti32x4\t{$src3, $src2, $src1, $dst|$dst, $src1, $src2, $src3}",<br>
+ []>, EVEX_4V, EVEX_V512, EVEX_CD8<32, CD8VT4>;<br>
+<br>
+}<br>
+<br>
+let neverHasSideEffects = 1 in {<br>
+// -- 64x4 form --<br>
+def VINSERTI64x4rr : AVX512AIi8<0x3a, MRMSrcReg, (outs VR512:$dst),<br>
+ (ins VR512:$src1, VR256X:$src2, i8imm:$src3),<br>
+ "vinserti64x4\t{$src3, $src2, $src1, $dst|$dst, $src1, $src2, $src3}",<br>
+ []>, EVEX_4V, EVEX_V512, VEX_W;<br>
+let mayLoad = 1 in<br>
+def VINSERTI64x4rm : AVX512AIi8<0x3a, MRMSrcMem, (outs VR512:$dst),<br>
+ (ins VR512:$src1, i256mem:$src2, i8imm:$src3),<br>
+ "vinserti64x4\t{$src3, $src2, $src1, $dst|$dst, $src1, $src2, $src3}",<br>
+ []>, EVEX_4V, EVEX_V512, VEX_W, EVEX_CD8<64, CD8VT4>;<br>
+}<br>
+<br>
+def : Pat<(vinsert128_insert:$ins (v16f32 VR512:$src1), (v4f32 VR128X:$src2),<br>
+ (iPTR imm)), (VINSERTF32x4rr VR512:$src1, VR128X:$src2,<br>
+ (INSERT_get_vinsert128_imm VR512:$ins))>;<br>
+def : Pat<(vinsert128_insert:$ins (v8f64 VR512:$src1), (v2f64 VR128X:$src2),<br>
+ (iPTR imm)), (VINSERTF32x4rr VR512:$src1, VR128X:$src2,<br>
+ (INSERT_get_vinsert128_imm VR512:$ins))>;<br>
+def : Pat<(vinsert128_insert:$ins (v8i64 VR512:$src1), (v2i64 VR128X:$src2),<br>
+ (iPTR imm)), (VINSERTI32x4rr VR512:$src1, VR128X:$src2,<br>
+ (INSERT_get_vinsert128_imm VR512:$ins))>;<br>
+def : Pat<(vinsert128_insert:$ins (v16i32 VR512:$src1), (v4i32 VR128X:$src2),<br>
+ (iPTR imm)), (VINSERTI32x4rr VR512:$src1, VR128X:$src2,<br>
+ (INSERT_get_vinsert128_imm VR512:$ins))>;<br>
+<br>
+def : Pat<(vinsert128_insert:$ins (v16f32 VR512:$src1), (loadv4f32 addr:$src2),<br>
+ (iPTR imm)), (VINSERTF32x4rm VR512:$src1, addr:$src2,<br>
+ (INSERT_get_vinsert128_imm VR512:$ins))>;<br>
+def : Pat<(vinsert128_insert:$ins (v16i32 VR512:$src1),<br>
+ (bc_v4i32 (loadv2i64 addr:$src2)),<br>
+ (iPTR imm)), (VINSERTI32x4rm VR512:$src1, addr:$src2,<br>
+ (INSERT_get_vinsert128_imm VR512:$ins))>;<br>
+def : Pat<(vinsert128_insert:$ins (v8f64 VR512:$src1), (loadv2f64 addr:$src2),<br>
+ (iPTR imm)), (VINSERTF32x4rm VR512:$src1, addr:$src2,<br>
+ (INSERT_get_vinsert128_imm VR512:$ins))>;<br>
+def : Pat<(vinsert128_insert:$ins (v8i64 VR512:$src1), (loadv2i64 addr:$src2),<br>
+ (iPTR imm)), (VINSERTI32x4rm VR512:$src1, addr:$src2,<br>
+ (INSERT_get_vinsert128_imm VR512:$ins))>;<br>
+<br>
+def : Pat<(vinsert256_insert:$ins (v16f32 VR512:$src1), (v8f32 VR256X:$src2),<br>
+ (iPTR imm)), (VINSERTF64x4rr VR512:$src1, VR256X:$src2,<br>
+ (INSERT_get_vinsert256_imm VR512:$ins))>;<br>
+def : Pat<(vinsert256_insert:$ins (v8f64 VR512:$src1), (v4f64 VR256X:$src2),<br>
+ (iPTR imm)), (VINSERTF64x4rr VR512:$src1, VR256X:$src2,<br>
+ (INSERT_get_vinsert256_imm VR512:$ins))>;<br>
+def : Pat<(vinsert256_insert:$ins (v8i64 VR512:$src1), (v4i64 VR256X:$src2),
+ (iPTR imm)), (VINSERTI64x4rr VR512:$src1, VR256X:$src2,<br>
+ (INSERT_get_vinsert256_imm VR512:$ins))>;<br>
+def : Pat<(vinsert256_insert:$ins (v16i32 VR512:$src1), (v8i32 VR256X:$src2),
+ (iPTR imm)), (VINSERTI64x4rr VR512:$src1, VR256X:$src2,<br>
+ (INSERT_get_vinsert256_imm VR512:$ins))>;<br>
+<br>
+def : Pat<(vinsert256_insert:$ins (v16f32 VR512:$src1), (loadv8f32 addr:$src2),<br>
+ (iPTR imm)), (VINSERTF64x4rm VR512:$src1, addr:$src2,<br>
+ (INSERT_get_vinsert256_imm VR512:$ins))>;<br>
+def : Pat<(vinsert256_insert:$ins (v8f64 VR512:$src1), (loadv4f64 addr:$src2),<br>
+ (iPTR imm)), (VINSERTF64x4rm VR512:$src1, addr:$src2,<br>
+ (INSERT_get_vinsert256_imm VR512:$ins))>;<br>
+def : Pat<(vinsert256_insert:$ins (v8i64 VR512:$src1), (loadv4i64 addr:$src2),<br>
+ (iPTR imm)), (VINSERTI64x4rm VR512:$src1, addr:$src2,<br>
+ (INSERT_get_vinsert256_imm VR512:$ins))>;<br>
+def : Pat<(vinsert256_insert:$ins (v16i32 VR512:$src1),<br>
+ (bc_v8i32 (loadv4i64 addr:$src2)),<br>
+ (iPTR imm)), (VINSERTI64x4rm VR512:$src1, addr:$src2,<br>
+ (INSERT_get_vinsert256_imm VR512:$ins))>;<br>
+<br>
+// vinsertps - insert f32 to XMM<br>
+def VINSERTPSzrr : AVX512AIi8<0x21, MRMSrcReg, (outs VR128X:$dst),<br>
+ (ins VR128X:$src1, VR128X:$src2, u32u8imm:$src3),<br>
+ !strconcat("vinsertps{z}",<br>
+ "\t{$src3, $src2, $src1, $dst|$dst, $src1, $src2, $src3}"),<br>
+ [(set VR128X:$dst, (X86insrtps VR128X:$src1, VR128X:$src2, imm:$src3))]>,<br>
+ EVEX_4V;<br>
+def VINSERTPSzrm: AVX512AIi8<0x21, MRMSrcMem, (outs VR128X:$dst),<br>
+ (ins VR128X:$src1, f32mem:$src2, u32u8imm:$src3),<br>
+ !strconcat("vinsertps{z}",<br>
+ "\t{$src3, $src2, $src1, $dst|$dst, $src1, $src2, $src3}"),<br>
+ [(set VR128X:$dst, (X86insrtps VR128X:$src1,<br>
+ (v4f32 (scalar_to_vector (loadf32 addr:$src2))),<br>
+ imm:$src3))]>, EVEX_4V, EVEX_CD8<32, CD8VT1>;<br>
+<br>
+<br>
+//===----------------------------------------------------------------------===//<br>
+// AVX-512 VECTOR EXTRACT<br>
+//---<br>
+let neverHasSideEffects = 1, ExeDomain = SSEPackedSingle in {<br>
+// -- 32x4 form --<br>
+def VEXTRACTF32x4rr : AVX512AIi8<0x19, MRMDestReg, (outs VR128X:$dst),<br>
+ (ins VR512:$src1, i8imm:$src2),<br>
+ "vextractf32x4\t{$src2, $src1, $dst|$dst, $src1, $src2}",<br>
+ []>, EVEX, EVEX_V512;<br>
+def VEXTRACTF32x4mr : AVX512AIi8<0x19, MRMDestMem, (outs),<br>
+ (ins f128mem:$dst, VR512:$src1, i8imm:$src2),<br>
+ "vextractf32x4\t{$src2, $src1, $dst|$dst, $src1, $src2}",<br>
+ []>, EVEX, EVEX_V512, EVEX_CD8<32, CD8VT4>;<br>
+<br>
+// -- 64x4 form --<br>
+def VEXTRACTF64x4rr : AVX512AIi8<0x1b, MRMDestReg, (outs VR256X:$dst),<br>
+ (ins VR512:$src1, i8imm:$src2),<br>
+ "vextractf64x4\t{$src2, $src1, $dst|$dst, $src1, $src2}",<br>
+ []>, EVEX, EVEX_V512, VEX_W;<br>
+let mayStore = 1 in<br>
+def VEXTRACTF64x4mr : AVX512AIi8<0x1b, MRMDestMem, (outs),<br>
+ (ins f256mem:$dst, VR512:$src1, i8imm:$src2),<br>
+ "vextractf64x4\t{$src2, $src1, $dst|$dst, $src1, $src2}",<br>
+ []>, EVEX, EVEX_V512, VEX_W, EVEX_CD8<64, CD8VT4>;<br>
+}<br>
+<br>
+let neverHasSideEffects = 1 in {<br>
+// -- 32x4 form --<br>
+def VEXTRACTI32x4rr : AVX512AIi8<0x39, MRMDestReg, (outs VR128X:$dst),<br>
+ (ins VR512:$src1, i8imm:$src2),<br>
+ "vextracti32x4\t{$src2, $src1, $dst|$dst, $src1, $src2}",<br>
+ []>, EVEX, EVEX_V512;<br>
+def VEXTRACTI32x4mr : AVX512AIi8<0x39, MRMDestMem, (outs),<br>
+ (ins i128mem:$dst, VR512:$src1, i8imm:$src2),<br>
+ "vextracti32x4\t{$src2, $src1, $dst|$dst, $src1, $src2}",<br>
+ []>, EVEX, EVEX_V512, EVEX_CD8<32, CD8VT4>;<br>
+<br>
+// -- 64x4 form --<br>
+def VEXTRACTI64x4rr : AVX512AIi8<0x3b, MRMDestReg, (outs VR256X:$dst),<br>
+ (ins VR512:$src1, i8imm:$src2),<br>
+ "vextracti64x4\t{$src2, $src1, $dst|$dst, $src1, $src2}",<br>
+ []>, EVEX, EVEX_V512, VEX_W;<br>
+let mayStore = 1 in<br>
+def VEXTRACTI64x4mr : AVX512AIi8<0x3b, MRMDestMem, (outs),<br>
+ (ins i256mem:$dst, VR512:$src1, i8imm:$src2),<br>
+ "vextracti64x4\t{$src2, $src1, $dst|$dst, $src1, $src2}",<br>
+ []>, EVEX, EVEX_V512, VEX_W, EVEX_CD8<64, CD8VT4>;<br>
+}<br>
+<br>
+def : Pat<(vextract128_extract:$ext (v16f32 VR512:$src1), (iPTR imm)),<br>
+ (v4f32 (VEXTRACTF32x4rr VR512:$src1,<br>
+ (EXTRACT_get_vextract128_imm VR128X:$ext)))>;<br>
+<br>
+def : Pat<(vextract128_extract:$ext VR512:$src1, (iPTR imm)),<br>
+ (v4i32 (VEXTRACTF32x4rr VR512:$src1,<br>
+ (EXTRACT_get_vextract128_imm VR128X:$ext)))>;<br>
+<br>
+def : Pat<(vextract128_extract:$ext (v8f64 VR512:$src1), (iPTR imm)),<br>
+ (v2f64 (VEXTRACTF32x4rr VR512:$src1,<br>
+ (EXTRACT_get_vextract128_imm VR128X:$ext)))>;<br>
+<br>
+def : Pat<(vextract128_extract:$ext (v8i64 VR512:$src1), (iPTR imm)),<br>
+ (v2i64 (VEXTRACTI32x4rr VR512:$src1,<br>
+ (EXTRACT_get_vextract128_imm VR128X:$ext)))>;<br>
+<br>
+<br>
+def : Pat<(vextract256_extract:$ext (v16f32 VR512:$src1), (iPTR imm)),<br>
+ (v8f32 (VEXTRACTF64x4rr VR512:$src1,<br>
+ (EXTRACT_get_vextract256_imm VR256X:$ext)))>;<br>
+<br>
+def : Pat<(vextract256_extract:$ext (v16i32 VR512:$src1), (iPTR imm)),<br>
+ (v8i32 (VEXTRACTI64x4rr VR512:$src1,<br>
+ (EXTRACT_get_vextract256_imm VR256X:$ext)))>;<br>
+<br>
+def : Pat<(vextract256_extract:$ext (v8f64 VR512:$src1), (iPTR imm)),<br>
+ (v4f64 (VEXTRACTF64x4rr VR512:$src1,<br>
+ (EXTRACT_get_vextract256_imm VR256X:$ext)))>;<br>
+<br>
+def : Pat<(vextract256_extract:$ext (v8i64 VR512:$src1), (iPTR imm)),<br>
+ (v4i64 (VEXTRACTI64x4rr VR512:$src1,<br>
+ (EXTRACT_get_vextract256_imm VR256X:$ext)))>;<br>
+<br>
+// A 256-bit subvector extract from the first 512-bit vector position<br>
+// is a subregister copy that needs no instruction.<br>
+def : Pat<(v8i32 (extract_subvector (v16i32 VR512:$src), (iPTR 0))),<br>
+ (v8i32 (EXTRACT_SUBREG (v16i32 VR512:$src), sub_ymm))>;<br>
+def : Pat<(v8f32 (extract_subvector (v16f32 VR512:$src), (iPTR 0))),<br>
+ (v8f32 (EXTRACT_SUBREG (v16f32 VR512:$src), sub_ymm))>;<br>
+def : Pat<(v4i64 (extract_subvector (v8i64 VR512:$src), (iPTR 0))),<br>
+ (v4i64 (EXTRACT_SUBREG (v8i64 VR512:$src), sub_ymm))>;<br>
+def : Pat<(v4f64 (extract_subvector (v8f64 VR512:$src), (iPTR 0))),<br>
+ (v4f64 (EXTRACT_SUBREG (v8f64 VR512:$src), sub_ymm))>;<br>
+<br>
+// zmm -> xmm<br>
+def : Pat<(v4i32 (extract_subvector (v16i32 VR512:$src), (iPTR 0))),<br>
+ (v4i32 (EXTRACT_SUBREG (v16i32 VR512:$src), sub_xmm))>;<br>
+def : Pat<(v2i64 (extract_subvector (v8i64 VR512:$src), (iPTR 0))),<br>
+ (v2i64 (EXTRACT_SUBREG (v8i64 VR512:$src), sub_xmm))>;<br>
+def : Pat<(v2f64 (extract_subvector (v8f64 VR512:$src), (iPTR 0))),<br>
+ (v2f64 (EXTRACT_SUBREG (v8f64 VR512:$src), sub_xmm))>;<br>
+def : Pat<(v4f32 (extract_subvector (v16f32 VR512:$src), (iPTR 0))),<br>
+ (v4f32 (EXTRACT_SUBREG (v16f32 VR512:$src), sub_xmm))>;<br>
+<br>
+<br>
+// A 128-bit subvector insert to the first 512-bit vector position<br>
+// is a subregister copy that needs no instruction.<br>
+def : Pat<(insert_subvector undef, (v2i64 VR128X:$src), (iPTR 0)),<br>
+ (INSERT_SUBREG (v8i64 (IMPLICIT_DEF)),<br>
+ (INSERT_SUBREG (v4i64 (IMPLICIT_DEF)), VR128X:$src, sub_xmm),<br>
+ sub_ymm)>;<br>
+def : Pat<(insert_subvector undef, (v2f64 VR128X:$src), (iPTR 0)),<br>
+ (INSERT_SUBREG (v8f64 (IMPLICIT_DEF)),<br>
+ (INSERT_SUBREG (v4f64 (IMPLICIT_DEF)), VR128X:$src, sub_xmm),<br>
+ sub_ymm)>;<br>
+def : Pat<(insert_subvector undef, (v4i32 VR128X:$src), (iPTR 0)),<br>
+ (INSERT_SUBREG (v16i32 (IMPLICIT_DEF)),<br>
+ (INSERT_SUBREG (v8i32 (IMPLICIT_DEF)), VR128X:$src, sub_xmm),<br>
+ sub_ymm)>;<br>
+def : Pat<(insert_subvector undef, (v4f32 VR128X:$src), (iPTR 0)),<br>
+ (INSERT_SUBREG (v16f32 (IMPLICIT_DEF)),<br>
+ (INSERT_SUBREG (v8f32 (IMPLICIT_DEF)), VR128X:$src, sub_xmm),<br>
+ sub_ymm)>;<br>
+<br>
+def : Pat<(insert_subvector undef, (v4i64 VR256X:$src), (iPTR 0)),<br>
+ (INSERT_SUBREG (v8i64 (IMPLICIT_DEF)), VR256X:$src, sub_ymm)>;<br>
+def : Pat<(insert_subvector undef, (v4f64 VR256X:$src), (iPTR 0)),<br>
+ (INSERT_SUBREG (v8f64 (IMPLICIT_DEF)), VR256X:$src, sub_ymm)>;<br>
+def : Pat<(insert_subvector undef, (v8i32 VR256X:$src), (iPTR 0)),<br>
+ (INSERT_SUBREG (v16i32 (IMPLICIT_DEF)), VR256X:$src, sub_ymm)>;<br>
+def : Pat<(insert_subvector undef, (v8f32 VR256X:$src), (iPTR 0)),<br>
+ (INSERT_SUBREG (v16f32 (IMPLICIT_DEF)), VR256X:$src, sub_ymm)>;<br>
+<br>
<br>
Modified: llvm/trunk/lib/Target/X86/X86InstrFragmentsSIMD.td
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/X86/X86InstrFragmentsSIMD.td?rev=187491&r1=187490&r2=187491&view=diff
==============================================================================
--- llvm/trunk/lib/Target/X86/X86InstrFragmentsSIMD.td (original)<br>
+++ llvm/trunk/lib/Target/X86/X86InstrFragmentsSIMD.td Wed Jul 31 06:35:14 2013<br>
@@ -405,28 +405,54 @@ def BYTE_imm : SDNodeXForm<imm, [{<br>
return getI32Imm(N->getZExtValue() >> 3);<br>
}]>;<br>
<br>
-// EXTRACT_get_vextractf128_imm xform function: convert extract_subvector index<br>
-// to VEXTRACTF128 imm.<br>
-def EXTRACT_get_vextractf128_imm : SDNodeXForm<extract_subvector, [{<br>
- return getI8Imm(X86::getExtractVEXTRACTF128Immediate(N));<br>
+// EXTRACT_get_vextract128_imm xform function: convert extract_subvector index<br>
+// to VEXTRACTF128/VEXTRACTI128 imm.<br>
+def EXTRACT_get_vextract128_imm : SDNodeXForm<extract_subvector, [{<br>
+ return getI8Imm(X86::getExtractVEXTRACT128Immediate(N));<br>
}]>;<br>
<br>
-// INSERT_get_vinsertf128_imm xform function: convert insert_subvector index to<br>
-// VINSERTF128 imm.<br>
-def INSERT_get_vinsertf128_imm : SDNodeXForm<insert_subvector, [{<br>
- return getI8Imm(X86::getInsertVINSERTF128Immediate(N));<br>
+// INSERT_get_vinsert128_imm xform function: convert insert_subvector index to<br>
+// VINSERTF128/VINSERTI128 imm.<br>
+def INSERT_get_vinsert128_imm : SDNodeXForm<insert_subvector, [{<br>
+ return getI8Imm(X86::getInsertVINSERT128Immediate(N));<br>
}]>;<br>
<br>
-def vextractf128_extract : PatFrag<(ops node:$bigvec, node:$index),<br>
+// EXTRACT_get_vextract256_imm xform function: convert extract_subvector index<br>
+// to VEXTRACTF64x4 imm.<br>
+def EXTRACT_get_vextract256_imm : SDNodeXForm<extract_subvector, [{<br>
+ return getI8Imm(X86::getExtractVEXTRACT256Immediate(N));<br>
+}]>;<br>
+<br>
+// INSERT_get_vinsert256_imm xform function: convert insert_subvector index to<br>
+// VINSERTF64x4 imm.<br>
+def INSERT_get_vinsert256_imm : SDNodeXForm<insert_subvector, [{<br>
+ return getI8Imm(X86::getInsertVINSERT256Immediate(N));<br>
+}]>;<br>
+<br>
+def vextract128_extract : PatFrag<(ops node:$bigvec, node:$index),<br>
+ (extract_subvector node:$bigvec,<br>
+ node:$index), [{<br>
+ return X86::isVEXTRACT128Index(N);<br>
+}], EXTRACT_get_vextract128_imm>;<br>
+<br>
+def vinsert128_insert : PatFrag<(ops node:$bigvec, node:$smallvec,<br>
+ node:$index),<br>
+ (insert_subvector node:$bigvec, node:$smallvec,<br>
+ node:$index), [{<br>
+ return X86::isVINSERT128Index(N);<br>
+}], INSERT_get_vinsert128_imm>;<br>
+<br>
+<br>
+def vextract256_extract : PatFrag<(ops node:$bigvec, node:$index),<br>
(extract_subvector node:$bigvec,<br>
node:$index), [{<br>
- return X86::isVEXTRACTF128Index(N);<br>
-}], EXTRACT_get_vextractf128_imm>;<br>
+ return X86::isVEXTRACT256Index(N);<br>
+}], EXTRACT_get_vextract256_imm>;<br>
<br>
-def vinsertf128_insert : PatFrag<(ops node:$bigvec, node:$smallvec,<br>
+def vinsert256_insert : PatFrag<(ops node:$bigvec, node:$smallvec,<br>
node:$index),<br>
(insert_subvector node:$bigvec, node:$smallvec,<br>
node:$index), [{<br>
- return X86::isVINSERTF128Index(N);<br>
-}], INSERT_get_vinsertf128_imm>;<br>
+ return X86::isVINSERT256Index(N);<br>
+}], INSERT_get_vinsert256_imm>;<br>
<br>
<br>
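As a worked example of what the renamed xforms compute (an illustration, not part of the patch): the immediate is, in effect, the extract/insert start index scaled by the element width and divided by the subvector width, so pulling out elements 4-7 of a v8i32 starts at bit 128 and should give immediate 1. A small IR sketch that should exercise the renamed 128-bit fragments:<br>
<br>
define &lt;4 x i32&gt; @extract_upper_lane(&lt;8 x i32&gt; %y) nounwind {<br>
  ; elements 4-7 begin at bit 128, so the expected immediate is 1<br>
  ; (VEXTRACTF128 with plain AVX, VEXTRACTI128 with AVX2, per the patterns below)<br>
  %r = shufflevector &lt;8 x i32&gt; %y, &lt;8 x i32&gt; undef, &lt;4 x i32&gt; &lt;i32 4, i32 5, i32 6, i32 7&gt;<br>
  ret &lt;4 x i32&gt; %r<br>
}<br>
<br>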
Modified: llvm/trunk/lib/Target/X86/X86InstrInfo.td<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/X86/X86InstrInfo.td?rev=187491&r1=187490&r2=187491&view=diff" target="_blank">
http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/X86/X86InstrInfo.td?rev=187491&r1=187490&r2=187491&view=diff</a><br>
==============================================================================<br>
--- llvm/trunk/lib/Target/X86/X86InstrInfo.td (original)<br>
+++ llvm/trunk/lib/Target/X86/X86InstrInfo.td Wed Jul 31 06:35:14 2013<br>
@@ -1861,6 +1861,7 @@ include "X86InstrXOP.td"<br>
<br>
// SSE, MMX and 3DNow! vector support.<br>
include "X86InstrSSE.td"<br>
+include "X86InstrAVX512.td"<br>
include "X86InstrMMX.td"<br>
include "X86Instr3DNow.td"<br>
<br>
<br>
Modified: llvm/trunk/lib/Target/X86/X86InstrSSE.td<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/X86/X86InstrSSE.td?rev=187491&r1=187490&r2=187491&view=diff" target="_blank">
http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/X86/X86InstrSSE.td?rev=187491&r1=187490&r2=187491&view=diff</a><br>
==============================================================================<br>
--- llvm/trunk/lib/Target/X86/X86InstrSSE.td (original)<br>
+++ llvm/trunk/lib/Target/X86/X86InstrSSE.td Wed Jul 31 06:35:14 2013<br>
@@ -7586,62 +7586,62 @@ def VINSERTF128rm : AVXAIi8<0x18, MRMSrc<br>
}<br>
<br>
let Predicates = [HasAVX] in {<br>
-def : Pat<(vinsertf128_insert:$ins (v8f32 VR256:$src1), (v4f32 VR128:$src2),<br>
+def : Pat<(vinsert128_insert:$ins (v8f32 VR256:$src1), (v4f32 VR128:$src2),<br>
(iPTR imm)),<br>
(VINSERTF128rr VR256:$src1, VR128:$src2,<br>
- (INSERT_get_vinsertf128_imm VR256:$ins))>;<br>
-def : Pat<(vinsertf128_insert:$ins (v4f64 VR256:$src1), (v2f64 VR128:$src2),<br>
+ (INSERT_get_vinsert128_imm VR256:$ins))>;<br>
+def : Pat<(vinsert128_insert:$ins (v4f64 VR256:$src1), (v2f64 VR128:$src2),<br>
(iPTR imm)),<br>
(VINSERTF128rr VR256:$src1, VR128:$src2,<br>
- (INSERT_get_vinsertf128_imm VR256:$ins))>;<br>
+ (INSERT_get_vinsert128_imm VR256:$ins))>;<br>
<br>
-def : Pat<(vinsertf128_insert:$ins (v8f32 VR256:$src1), (memopv4f32 addr:$src2),<br>
+def : Pat<(vinsert128_insert:$ins (v8f32 VR256:$src1), (memopv4f32 addr:$src2),<br>
(iPTR imm)),<br>
(VINSERTF128rm VR256:$src1, addr:$src2,<br>
- (INSERT_get_vinsertf128_imm VR256:$ins))>;<br>
-def : Pat<(vinsertf128_insert:$ins (v4f64 VR256:$src1), (memopv2f64 addr:$src2),<br>
+ (INSERT_get_vinsert128_imm VR256:$ins))>;<br>
+def : Pat<(vinsert128_insert:$ins (v4f64 VR256:$src1), (memopv2f64 addr:$src2),<br>
(iPTR imm)),<br>
(VINSERTF128rm VR256:$src1, addr:$src2,<br>
- (INSERT_get_vinsertf128_imm VR256:$ins))>;<br>
+ (INSERT_get_vinsert128_imm VR256:$ins))>;<br>
}<br>
<br>
let Predicates = [HasAVX1Only] in {<br>
-def : Pat<(vinsertf128_insert:$ins (v4i64 VR256:$src1), (v2i64 VR128:$src2),<br>
+def : Pat<(vinsert128_insert:$ins (v4i64 VR256:$src1), (v2i64 VR128:$src2),<br>
(iPTR imm)),<br>
(VINSERTF128rr VR256:$src1, VR128:$src2,<br>
- (INSERT_get_vinsertf128_imm VR256:$ins))>;<br>
-def : Pat<(vinsertf128_insert:$ins (v8i32 VR256:$src1), (v4i32 VR128:$src2),<br>
+ (INSERT_get_vinsert128_imm VR256:$ins))>;<br>
+def : Pat<(vinsert128_insert:$ins (v8i32 VR256:$src1), (v4i32 VR128:$src2),<br>
(iPTR imm)),<br>
(VINSERTF128rr VR256:$src1, VR128:$src2,<br>
- (INSERT_get_vinsertf128_imm VR256:$ins))>;<br>
-def : Pat<(vinsertf128_insert:$ins (v32i8 VR256:$src1), (v16i8 VR128:$src2),<br>
+ (INSERT_get_vinsert128_imm VR256:$ins))>;<br>
+def : Pat<(vinsert128_insert:$ins (v32i8 VR256:$src1), (v16i8 VR128:$src2),<br>
(iPTR imm)),<br>
(VINSERTF128rr VR256:$src1, VR128:$src2,<br>
- (INSERT_get_vinsertf128_imm VR256:$ins))>;<br>
-def : Pat<(vinsertf128_insert:$ins (v16i16 VR256:$src1), (v8i16 VR128:$src2),<br>
+ (INSERT_get_vinsert128_imm VR256:$ins))>;<br>
+def : Pat<(vinsert128_insert:$ins (v16i16 VR256:$src1), (v8i16 VR128:$src2),<br>
(iPTR imm)),<br>
(VINSERTF128rr VR256:$src1, VR128:$src2,<br>
- (INSERT_get_vinsertf128_imm VR256:$ins))>;<br>
+ (INSERT_get_vinsert128_imm VR256:$ins))>;<br>
<br>
-def : Pat<(vinsertf128_insert:$ins (v4i64 VR256:$src1), (memopv2i64 addr:$src2),<br>
+def : Pat<(vinsert128_insert:$ins (v4i64 VR256:$src1), (memopv2i64 addr:$src2),<br>
(iPTR imm)),<br>
(VINSERTF128rm VR256:$src1, addr:$src2,<br>
- (INSERT_get_vinsertf128_imm VR256:$ins))>;<br>
-def : Pat<(vinsertf128_insert:$ins (v8i32 VR256:$src1),<br>
+ (INSERT_get_vinsert128_imm VR256:$ins))>;<br>
+def : Pat<(vinsert128_insert:$ins (v8i32 VR256:$src1),<br>
(bc_v4i32 (memopv2i64 addr:$src2)),<br>
(iPTR imm)),<br>
(VINSERTF128rm VR256:$src1, addr:$src2,<br>
- (INSERT_get_vinsertf128_imm VR256:$ins))>;<br>
-def : Pat<(vinsertf128_insert:$ins (v32i8 VR256:$src1),<br>
+ (INSERT_get_vinsert128_imm VR256:$ins))>;<br>
+def : Pat<(vinsert128_insert:$ins (v32i8 VR256:$src1),<br>
(bc_v16i8 (memopv2i64 addr:$src2)),<br>
(iPTR imm)),<br>
(VINSERTF128rm VR256:$src1, addr:$src2,<br>
- (INSERT_get_vinsertf128_imm VR256:$ins))>;<br>
-def : Pat<(vinsertf128_insert:$ins (v16i16 VR256:$src1),<br>
+ (INSERT_get_vinsert128_imm VR256:$ins))>;<br>
+def : Pat<(vinsert128_insert:$ins (v16i16 VR256:$src1),<br>
(bc_v8i16 (memopv2i64 addr:$src2)),<br>
(iPTR imm)),<br>
(VINSERTF128rm VR256:$src1, addr:$src2,<br>
- (INSERT_get_vinsertf128_imm VR256:$ins))>;<br>
+ (INSERT_get_vinsert128_imm VR256:$ins))>;<br>
}<br>
<br>
//===----------------------------------------------------------------------===//<br>
@@ -7661,59 +7661,59 @@ def VEXTRACTF128mr : AVXAIi8<0x19, MRMDe<br>
<br>
// AVX1 patterns<br>
let Predicates = [HasAVX] in {<br>
-def : Pat<(vextractf128_extract:$ext VR256:$src1, (iPTR imm)),<br>
+def : Pat<(vextract128_extract:$ext VR256:$src1, (iPTR imm)),<br>
(v4f32 (VEXTRACTF128rr<br>
(v8f32 VR256:$src1),<br>
- (EXTRACT_get_vextractf128_imm VR128:$ext)))>;<br>
-def : Pat<(vextractf128_extract:$ext VR256:$src1, (iPTR imm)),<br>
+ (EXTRACT_get_vextract128_imm VR128:$ext)))>;<br>
+def : Pat<(vextract128_extract:$ext VR256:$src1, (iPTR imm)),<br>
(v2f64 (VEXTRACTF128rr<br>
(v4f64 VR256:$src1),<br>
- (EXTRACT_get_vextractf128_imm VR128:$ext)))>;<br>
+ (EXTRACT_get_vextract128_imm VR128:$ext)))>;<br>
<br>
-def : Pat<(alignedstore (v4f32 (vextractf128_extract:$ext (v8f32 VR256:$src1),<br>
+def : Pat<(alignedstore (v4f32 (vextract128_extract:$ext (v8f32 VR256:$src1),<br>
(iPTR imm))), addr:$dst),<br>
(VEXTRACTF128mr addr:$dst, VR256:$src1,<br>
- (EXTRACT_get_vextractf128_imm VR128:$ext))>;<br>
-def : Pat<(alignedstore (v2f64 (vextractf128_extract:$ext (v4f64 VR256:$src1),<br>
+ (EXTRACT_get_vextract128_imm VR128:$ext))>;<br>
+def : Pat<(alignedstore (v2f64 (vextract128_extract:$ext (v4f64 VR256:$src1),<br>
(iPTR imm))), addr:$dst),<br>
(VEXTRACTF128mr addr:$dst, VR256:$src1,<br>
- (EXTRACT_get_vextractf128_imm VR128:$ext))>;<br>
+ (EXTRACT_get_vextract128_imm VR128:$ext))>;<br>
}<br>
<br>
let Predicates = [HasAVX1Only] in {<br>
-def : Pat<(vextractf128_extract:$ext VR256:$src1, (iPTR imm)),<br>
+def : Pat<(vextract128_extract:$ext VR256:$src1, (iPTR imm)),<br>
(v2i64 (VEXTRACTF128rr<br>
(v4i64 VR256:$src1),<br>
- (EXTRACT_get_vextractf128_imm VR128:$ext)))>;<br>
-def : Pat<(vextractf128_extract:$ext VR256:$src1, (iPTR imm)),<br>
+ (EXTRACT_get_vextract128_imm VR128:$ext)))>;<br>
+def : Pat<(vextract128_extract:$ext VR256:$src1, (iPTR imm)),<br>
(v4i32 (VEXTRACTF128rr<br>
(v8i32 VR256:$src1),<br>
- (EXTRACT_get_vextractf128_imm VR128:$ext)))>;<br>
-def : Pat<(vextractf128_extract:$ext VR256:$src1, (iPTR imm)),<br>
+ (EXTRACT_get_vextract128_imm VR128:$ext)))>;<br>
+def : Pat<(vextract128_extract:$ext VR256:$src1, (iPTR imm)),<br>
(v8i16 (VEXTRACTF128rr<br>
(v16i16 VR256:$src1),<br>
- (EXTRACT_get_vextractf128_imm VR128:$ext)))>;<br>
-def : Pat<(vextractf128_extract:$ext VR256:$src1, (iPTR imm)),<br>
+ (EXTRACT_get_vextract128_imm VR128:$ext)))>;<br>
+def : Pat<(vextract128_extract:$ext VR256:$src1, (iPTR imm)),<br>
(v16i8 (VEXTRACTF128rr<br>
(v32i8 VR256:$src1),<br>
- (EXTRACT_get_vextractf128_imm VR128:$ext)))>;<br>
+ (EXTRACT_get_vextract128_imm VR128:$ext)))>;<br>
<br>
-def : Pat<(alignedstore (v2i64 (vextractf128_extract:$ext (v4i64 VR256:$src1),<br>
+def : Pat<(alignedstore (v2i64 (vextract128_extract:$ext (v4i64 VR256:$src1),<br>
(iPTR imm))), addr:$dst),<br>
(VEXTRACTF128mr addr:$dst, VR256:$src1,<br>
- (EXTRACT_get_vextractf128_imm VR128:$ext))>;<br>
-def : Pat<(alignedstore (v4i32 (vextractf128_extract:$ext (v8i32 VR256:$src1),<br>
+ (EXTRACT_get_vextract128_imm VR128:$ext))>;<br>
+def : Pat<(alignedstore (v4i32 (vextract128_extract:$ext (v8i32 VR256:$src1),<br>
(iPTR imm))), addr:$dst),<br>
(VEXTRACTF128mr addr:$dst, VR256:$src1,<br>
- (EXTRACT_get_vextractf128_imm VR128:$ext))>;<br>
-def : Pat<(alignedstore (v8i16 (vextractf128_extract:$ext (v16i16 VR256:$src1),<br>
+ (EXTRACT_get_vextract128_imm VR128:$ext))>;<br>
+def : Pat<(alignedstore (v8i16 (vextract128_extract:$ext (v16i16 VR256:$src1),<br>
(iPTR imm))), addr:$dst),<br>
(VEXTRACTF128mr addr:$dst, VR256:$src1,<br>
- (EXTRACT_get_vextractf128_imm VR128:$ext))>;<br>
-def : Pat<(alignedstore (v16i8 (vextractf128_extract:$ext (v32i8 VR256:$src1),<br>
+ (EXTRACT_get_vextract128_imm VR128:$ext))>;<br>
+def : Pat<(alignedstore (v16i8 (vextract128_extract:$ext (v32i8 VR256:$src1),<br>
(iPTR imm))), addr:$dst),<br>
(VEXTRACTF128mr addr:$dst, VR256:$src1,<br>
- (EXTRACT_get_vextractf128_imm VR128:$ext))>;<br>
+ (EXTRACT_get_vextract128_imm VR128:$ext))>;<br>
}<br>
<br>
//===----------------------------------------------------------------------===//<br>
@@ -8191,42 +8191,42 @@ def VINSERTI128rm : AVX2AIi8<0x38, MRMSr<br>
}<br>
<br>
let Predicates = [HasAVX2] in {<br>
-def : Pat<(vinsertf128_insert:$ins (v4i64 VR256:$src1), (v2i64 VR128:$src2),<br>
+def : Pat<(vinsert128_insert:$ins (v4i64 VR256:$src1), (v2i64 VR128:$src2),<br>
(iPTR imm)),<br>
(VINSERTI128rr VR256:$src1, VR128:$src2,<br>
- (INSERT_get_vinsertf128_imm VR256:$ins))>;<br>
-def : Pat<(vinsertf128_insert:$ins (v8i32 VR256:$src1), (v4i32 VR128:$src2),<br>
+ (INSERT_get_vinsert128_imm VR256:$ins))>;<br>
+def : Pat<(vinsert128_insert:$ins (v8i32 VR256:$src1), (v4i32 VR128:$src2),<br>
(iPTR imm)),<br>
(VINSERTI128rr VR256:$src1, VR128:$src2,<br>
- (INSERT_get_vinsertf128_imm VR256:$ins))>;<br>
-def : Pat<(vinsertf128_insert:$ins (v32i8 VR256:$src1), (v16i8 VR128:$src2),<br>
+ (INSERT_get_vinsert128_imm VR256:$ins))>;<br>
+def : Pat<(vinsert128_insert:$ins (v32i8 VR256:$src1), (v16i8 VR128:$src2),<br>
(iPTR imm)),<br>
(VINSERTI128rr VR256:$src1, VR128:$src2,<br>
- (INSERT_get_vinsertf128_imm VR256:$ins))>;<br>
-def : Pat<(vinsertf128_insert:$ins (v16i16 VR256:$src1), (v8i16 VR128:$src2),<br>
+ (INSERT_get_vinsert128_imm VR256:$ins))>;<br>
+def : Pat<(vinsert128_insert:$ins (v16i16 VR256:$src1), (v8i16 VR128:$src2),<br>
(iPTR imm)),<br>
(VINSERTI128rr VR256:$src1, VR128:$src2,<br>
- (INSERT_get_vinsertf128_imm VR256:$ins))>;<br>
+ (INSERT_get_vinsert128_imm VR256:$ins))>;<br>
<br>
-def : Pat<(vinsertf128_insert:$ins (v4i64 VR256:$src1), (memopv2i64 addr:$src2),<br>
+def : Pat<(vinsert128_insert:$ins (v4i64 VR256:$src1), (memopv2i64 addr:$src2),<br>
(iPTR imm)),<br>
(VINSERTI128rm VR256:$src1, addr:$src2,<br>
- (INSERT_get_vinsertf128_imm VR256:$ins))>;<br>
-def : Pat<(vinsertf128_insert:$ins (v8i32 VR256:$src1),<br>
+ (INSERT_get_vinsert128_imm VR256:$ins))>;<br>
+def : Pat<(vinsert128_insert:$ins (v8i32 VR256:$src1),<br>
(bc_v4i32 (memopv2i64 addr:$src2)),<br>
(iPTR imm)),<br>
(VINSERTI128rm VR256:$src1, addr:$src2,<br>
- (INSERT_get_vinsertf128_imm VR256:$ins))>;<br>
-def : Pat<(vinsertf128_insert:$ins (v32i8 VR256:$src1),<br>
+ (INSERT_get_vinsert128_imm VR256:$ins))>;<br>
+def : Pat<(vinsert128_insert:$ins (v32i8 VR256:$src1),<br>
(bc_v16i8 (memopv2i64 addr:$src2)),<br>
(iPTR imm)),<br>
(VINSERTI128rm VR256:$src1, addr:$src2,<br>
- (INSERT_get_vinsertf128_imm VR256:$ins))>;<br>
-def : Pat<(vinsertf128_insert:$ins (v16i16 VR256:$src1),<br>
+ (INSERT_get_vinsert128_imm VR256:$ins))>;<br>
+def : Pat<(vinsert128_insert:$ins (v16i16 VR256:$src1),<br>
(bc_v8i16 (memopv2i64 addr:$src2)),<br>
(iPTR imm)),<br>
(VINSERTI128rm VR256:$src1, addr:$src2,<br>
- (INSERT_get_vinsertf128_imm VR256:$ins))>;<br>
+ (INSERT_get_vinsert128_imm VR256:$ins))>;<br>
}<br>
<br>
//===----------------------------------------------------------------------===//<br>
@@ -8245,39 +8245,39 @@ def VEXTRACTI128mr : AVX2AIi8<0x39, MRMD<br>
VEX, VEX_L;<br>
<br>
let Predicates = [HasAVX2] in {<br>
-def : Pat<(vextractf128_extract:$ext VR256:$src1, (iPTR imm)),<br>
+def : Pat<(vextract128_extract:$ext VR256:$src1, (iPTR imm)),<br>
(v2i64 (VEXTRACTI128rr<br>
(v4i64 VR256:$src1),<br>
- (EXTRACT_get_vextractf128_imm VR128:$ext)))>;<br>
-def : Pat<(vextractf128_extract:$ext VR256:$src1, (iPTR imm)),<br>
+ (EXTRACT_get_vextract128_imm VR128:$ext)))>;<br>
+def : Pat<(vextract128_extract:$ext VR256:$src1, (iPTR imm)),<br>
(v4i32 (VEXTRACTI128rr<br>
(v8i32 VR256:$src1),<br>
- (EXTRACT_get_vextractf128_imm VR128:$ext)))>;<br>
-def : Pat<(vextractf128_extract:$ext VR256:$src1, (iPTR imm)),<br>
+ (EXTRACT_get_vextract128_imm VR128:$ext)))>;<br>
+def : Pat<(vextract128_extract:$ext VR256:$src1, (iPTR imm)),<br>
(v8i16 (VEXTRACTI128rr<br>
(v16i16 VR256:$src1),<br>
- (EXTRACT_get_vextractf128_imm VR128:$ext)))>;<br>
-def : Pat<(vextractf128_extract:$ext VR256:$src1, (iPTR imm)),<br>
+ (EXTRACT_get_vextract128_imm VR128:$ext)))>;<br>
+def : Pat<(vextract128_extract:$ext VR256:$src1, (iPTR imm)),<br>
(v16i8 (VEXTRACTI128rr<br>
(v32i8 VR256:$src1),<br>
- (EXTRACT_get_vextractf128_imm VR128:$ext)))>;<br>
+ (EXTRACT_get_vextract128_imm VR128:$ext)))>;<br>
<br>
-def : Pat<(alignedstore (v2i64 (vextractf128_extract:$ext (v4i64 VR256:$src1),<br>
+def : Pat<(alignedstore (v2i64 (vextract128_extract:$ext (v4i64 VR256:$src1),<br>
(iPTR imm))), addr:$dst),<br>
(VEXTRACTI128mr addr:$dst, VR256:$src1,<br>
- (EXTRACT_get_vextractf128_imm VR128:$ext))>;<br>
-def : Pat<(alignedstore (v4i32 (vextractf128_extract:$ext (v8i32 VR256:$src1),<br>
+ (EXTRACT_get_vextract128_imm VR128:$ext))>;<br>
+def : Pat<(alignedstore (v4i32 (vextract128_extract:$ext (v8i32 VR256:$src1),<br>
(iPTR imm))), addr:$dst),<br>
(VEXTRACTI128mr addr:$dst, VR256:$src1,<br>
- (EXTRACT_get_vextractf128_imm VR128:$ext))>;<br>
-def : Pat<(alignedstore (v8i16 (vextractf128_extract:$ext (v16i16 VR256:$src1),<br>
+ (EXTRACT_get_vextract128_imm VR128:$ext))>;<br>
+def : Pat<(alignedstore (v8i16 (vextract128_extract:$ext (v16i16 VR256:$src1),<br>
(iPTR imm))), addr:$dst),<br>
(VEXTRACTI128mr addr:$dst, VR256:$src1,<br>
- (EXTRACT_get_vextractf128_imm VR128:$ext))>;<br>
-def : Pat<(alignedstore (v16i8 (vextractf128_extract:$ext (v32i8 VR256:$src1),<br>
+ (EXTRACT_get_vextract128_imm VR128:$ext))>;<br>
+def : Pat<(alignedstore (v16i8 (vextract128_extract:$ext (v32i8 VR256:$src1),<br>
(iPTR imm))), addr:$dst),<br>
(VEXTRACTI128mr addr:$dst, VR256:$src1,<br>
- (EXTRACT_get_vextractf128_imm VR128:$ext))>;<br>
+ (EXTRACT_get_vextract128_imm VR128:$ext))>;<br>
}<br>
<br>
//===----------------------------------------------------------------------===//<br>
<br>
Added: llvm/trunk/test/CodeGen/X86/avx512-insert-extract.ll<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/avx512-insert-extract.ll?rev=187491&view=auto" target="_blank">
http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/avx512-insert-extract.ll?rev=187491&view=auto</a><br>
==============================================================================<br>
--- llvm/trunk/test/CodeGen/X86/avx512-insert-extract.ll (added)<br>
+++ llvm/trunk/test/CodeGen/X86/avx512-insert-extract.ll Wed Jul 31 06:35:14 2013<br>
@@ -0,0 +1,44 @@<br>
+; RUN: llc < %s -march=x86-64 -mtriple=x86_64-apple-darwin -mcpu=knl | FileCheck %s<br>
+<br>
+;CHECK: test1<br>
+;CHECK: vinsertps<br>
+;CHECK: vinsertf32x4<br>
+;CHECK: ret<br>
+define <16 x float> @test1(<16 x float> %x, float* %br, float %y) nounwind {<br>
+ %rrr = load float* %br<br>
+ %rrr2 = insertelement <16 x float> %x, float %rrr, i32 1<br>
+ %rrr3 = insertelement <16 x float> %rrr2, float %y, i32 14<br>
+ ret <16 x float> %rrr3<br>
+}<br>
+<br>
+;CHECK: test2<br>
+;CHECK: vinsertf32x4<br>
+;CHECK: vextractf32x4<br>
+;CHECK: vinsertf32x4<br>
+;CHECK: ret<br>
+define <8 x double> @test2(<8 x double> %x, double* %br, double %y) nounwind {<br>
+ %rrr = load double* %br<br>
+ %rrr2 = insertelement <8 x double> %x, double %rrr, i32 1<br>
+ %rrr3 = insertelement <8 x double> %rrr2, double %y, i32 6<br>
+ ret <8 x double> %rrr3<br>
+}<br>
+<br>
+;CHECK: test3<br>
+;CHECK: vextractf32x4<br>
+;CHECK: vinsertf32x4<br>
+;CHECK: ret<br>
+define <16 x float> @test3(<16 x float> %x) nounwind {<br>
+ %eee = extractelement <16 x float> %x, i32 4<br>
+ %rrr2 = insertelement <16 x float> %x, float %eee, i32 1<br>
+ ret <16 x float> %rrr2<br>
+}<br>
+<br>
+;CHECK: test4<br>
+;CHECK: vextracti32x4<br>
+;CHECK: vinserti32x4<br>
+;CHECK: ret<br>
+define <8 x i64> @test4(<8 x i64> %x) nounwind {<br>
+ %eee = extractelement <8 x i64> %x, i32 4<br>
+ %rrr2 = insertelement <8 x i64> %x, i64 %eee, i32 1<br>
+ ret <8 x i64> %rrr2<br>
+}<br>
\ No newline at end of file<br>
<br>
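A companion sketch to the test above, illustrative only and not part of the commit: one way the insert_subvector-into-undef patterns added in X86InstrAVX512.td can arise is widening a 128-bit value to 512 bits with a shuffle; if the lowering forms that node, the low-lane insert should again be just a sub-register copy rather than an instruction.<br>
<br>
define &lt;16 x float&gt; @widen_xmm_to_zmm(&lt;4 x float&gt; %x) nounwind {<br>
  ; only lanes 0-3 are defined; the rest of the 512-bit result is undef<br>
  %r = shufflevector &lt;4 x float&gt; %x, &lt;4 x float&gt; undef,<br>
       &lt;16 x i32&gt; &lt;i32 0, i32 1, i32 2, i32 3, i32 undef, i32 undef, i32 undef, i32 undef,<br>
                   i32 undef, i32 undef, i32 undef, i32 undef, i32 undef, i32 undef, i32 undef, i32 undef&gt;<br>
  ret &lt;16 x float&gt; %r<br>
}<br>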
<br>
<o:p></o:p></p>
</div>
<p class="MsoNormal"><br>
<br clear="all">
<o:p></o:p></p>
<div>
<p class="MsoNormal"><o:p> </o:p></p>
</div>
<p class="MsoNormal">-- <br>
~Craig <o:p></o:p></p>
</div>
</div>
</body>
</html>