[llvm] r342575 - [x86] change names of vector splitting helper functions; NFC

Craig Topper via llvm-commits llvm-commits at lists.llvm.org
Wed Sep 19 13:39:02 PDT 2018


Given that it’s already restricted like you said, I think this is fine. It
almost seems like ANDN should be formed during isel rather than DAG combine,
like we do for scalar. But I think I tried that as an experiment and failed
to make the change.

On Wed, Sep 19, 2018 at 1:28 PM Sanjay Patel <spatel at rotateright.com> wrote:

> No, I think these are only within lowering calls today.
>
> I was looking at using this for a combine of ANDNP or possibly
> EXTRACT_SUBVECTOR that's limited to just one particular case: with AVX1, I
> want to split a 256-bit op into 128-bit halves to avoid this insert/extract
> sequence:
>     vinsertf128    $1, %xmm3, %ymm0, %ymm0
>     vandnps    LCPI0_0(%rip), %ymm0, %ymm0
>     vextractf128    $1, %ymm0, %xmm1
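>
> To illustrate, here's a rough sketch of the combine I have in mind (the
> helper name is hypothetical; the split/concat calls are the usual
> SelectionDAG plumbing):
>
>     // Sketch: on AVX1-only targets, split a 256-bit X86ISD::ANDNP into
>     // two 128-bit ANDNPs and concatenate the halves back together.
>     static SDValue combineANDNPSplit(SDNode *N, SelectionDAG &DAG,
>                                      const X86Subtarget &Subtarget) {
>       MVT VT = N->getSimpleValueType(0);
>       if (!VT.is256BitVector() || Subtarget.hasInt256())
>         return SDValue();
>       SDLoc DL(N);
>       // Split each operand into its low and high 128-bit halves.
>       SDValue Lo0, Hi0, Lo1, Hi1;
>       std::tie(Lo0, Hi0) = DAG.SplitVector(N->getOperand(0), DL);
>       std::tie(Lo1, Hi1) = DAG.SplitVector(N->getOperand(1), DL);
>       MVT HalfVT = MVT::getVectorVT(VT.getVectorElementType(),
>                                     VT.getVectorNumElements() / 2);
>       SDValue Lo = DAG.getNode(X86ISD::ANDNP, DL, HalfVT, Lo0, Lo1);
>       SDValue Hi = DAG.getNode(X86ISD::ANDNP, DL, HalfVT, Hi0, Hi1);
>       return DAG.getNode(ISD::CONCAT_VECTORS, DL, VT, Lo, Hi);
>     }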
>
> Should I revert the name change and try to use splitOpsAndApply instead?
>
> On Wed, Sep 19, 2018 at 1:15 PM Craig Topper <craig.topper at gmail.com>
> wrote:
>
>> Are these used outside of lowering today? DAG combine should be using
>> splitOpsAndApply to deal with arbitrarily wide types.
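>>
>> Something along these lines, roughly matching the existing
>> SplitOpsAndApply uses in that file (the lambda and the Op0/Op1 operand
>> names here are just a sketch):
>>
>>     auto ANDNPBuilder = [](SelectionDAG &DAG, const SDLoc &DL,
>>                            ArrayRef<SDValue> Ops) {
>>       return DAG.getNode(X86ISD::ANDNP, DL, Ops[0].getValueType(), Ops);
>>     };
>>     return SplitOpsAndApply(DAG, Subtarget, DL, VT, {Op0, Op1},
>>                             ANDNPBuilder);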
>>
>> On Wed, Sep 19, 2018 at 11:53 AM Sanjay Patel via llvm-commits <
>> llvm-commits at lists.llvm.org> wrote:
>>
>>> Author: spatel
>>> Date: Wed Sep 19 11:52:00 2018
>>> New Revision: 342575
>>>
>>> URL: http://llvm.org/viewvc/llvm-project?rev=342575&view=rev
>>> Log:
>>> [x86] change names of vector splitting helper functions; NFC
>>>
>>> As the code comments suggest, these are about splitting, and they
>>> are not necessarily limited to lowering, so that misled me.
>>>
>>> There's nothing that's actually x86-specific in these either, so
>>> they might be better placed in a common header so any target can
>>> use them.
>>>
>>> Modified:
>>>     llvm/trunk/lib/Target/X86/X86ISelLowering.cpp
>>>
>>> Modified: llvm/trunk/lib/Target/X86/X86ISelLowering.cpp
>>> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/X86/X86ISelLowering.cpp?rev=342575&r1=342574&r2=342575&view=diff
>>>
>>> ==============================================================================
>>> --- llvm/trunk/lib/Target/X86/X86ISelLowering.cpp (original)
>>> +++ llvm/trunk/lib/Target/X86/X86ISelLowering.cpp Wed Sep 19 11:52:00 2018
>>> @@ -22832,7 +22832,7 @@ static SDValue LowerCTTZ(SDValue Op, Sel
>>>
>>>  /// Break a 256-bit integer operation into two new 128-bit ones and then
>>>  /// concatenate the result back.
>>> -static SDValue Lower256IntArith(SDValue Op, SelectionDAG &DAG) {
>>> +static SDValue split256IntArith(SDValue Op, SelectionDAG &DAG) {
>>>    MVT VT = Op.getSimpleValueType();
>>>
>>>    assert(VT.is256BitVector() && VT.isInteger() &&
>>> @@ -22861,7 +22861,7 @@ static SDValue Lower256IntArith(SDValue
>>>
>>>  /// Break a 512-bit integer operation into two new 256-bit ones and then
>>>  /// concatenate the result back.
>>> -static SDValue Lower512IntArith(SDValue Op, SelectionDAG &DAG) {
>>> +static SDValue split512IntArith(SDValue Op, SelectionDAG &DAG) {
>>>    MVT VT = Op.getSimpleValueType();
>>>
>>>    assert(VT.is512BitVector() && VT.isInteger() &&
>>> @@ -22896,7 +22896,7 @@ static SDValue LowerADD_SUB(SDValue Op,
>>>    assert(Op.getSimpleValueType().is256BitVector() &&
>>>           Op.getSimpleValueType().isInteger() &&
>>>           "Only handle AVX 256-bit vector integer operation");
>>> -  return Lower256IntArith(Op, DAG);
>>> +  return split256IntArith(Op, DAG);
>>>  }
>>>
>>>  static SDValue LowerABS(SDValue Op, SelectionDAG &DAG) {
>>> @@ -22924,7 +22924,7 @@ static SDValue LowerMINMAX(SDValue Op, S
>>>
>>>    // For AVX1 cases, split to use legal ops (everything but v4i64).
>>>    if (VT.getScalarType() != MVT::i64 && VT.is256BitVector())
>>> -    return Lower256IntArith(Op, DAG);
>>> +    return split256IntArith(Op, DAG);
>>>
>>>    SDLoc DL(Op);
>>>    unsigned Opcode = Op.getOpcode();
>>> @@ -22966,9 +22966,9 @@ static SDValue LowerMUL(SDValue Op, cons
>>>    if (VT.getScalarType() == MVT::i1)
>>>      return DAG.getNode(ISD::AND, dl, VT, Op.getOperand(0),
>>>                         Op.getOperand(1));
>>>
>>> -  // Decompose 256-bit ops into smaller 128-bit ops.
>>> +  // Decompose 256-bit ops into 128-bit ops.
>>>    if (VT.is256BitVector() && !Subtarget.hasInt256())
>>> -    return Lower256IntArith(Op, DAG);
>>> +    return split256IntArith(Op, DAG);
>>>
>>>    SDValue A = Op.getOperand(0);
>>>    SDValue B = Op.getOperand(1);
>>> @@ -22980,13 +22980,13 @@ static SDValue LowerMUL(SDValue Op, cons
>>>        // For 512-bit vectors, split into 256-bit vectors to allow the
>>>        // sign-extension to occur.
>>>        if (VT == MVT::v64i8)
>>> -        return Lower512IntArith(Op, DAG);
>>> +        return split512IntArith(Op, DAG);
>>>
>>>        // For 256-bit vectors, split into 128-bit vectors to allow the
>>>        // sign-extension to occur. We don't need this on AVX512BW as we can
>>>        // safely sign-extend to v32i16.
>>>        if (VT == MVT::v32i8 && !Subtarget.hasBWI())
>>> -        return Lower256IntArith(Op, DAG);
>>> +        return split256IntArith(Op, DAG);
>>>
>>>        MVT ExVT = MVT::getVectorVT(MVT::i16, VT.getVectorNumElements());
>>>        return DAG.getNode(
>>> @@ -23117,9 +23117,9 @@ static SDValue LowerMULH(SDValue Op, con
>>>    SDValue A = Op.getOperand(0);
>>>    SDValue B = Op.getOperand(1);
>>>
>>> -  // Decompose 256-bit ops into smaller 128-bit ops.
>>> +  // Decompose 256-bit ops into 128-bit ops.
>>>    if (VT.is256BitVector() && !Subtarget.hasInt256())
>>> -    return Lower256IntArith(Op, DAG);
>>> +    return split256IntArith(Op, DAG);
>>>
>>>    if (VT == MVT::v4i32 || VT == MVT::v8i32 || VT == MVT::v16i32) {
>>>      assert((VT == MVT::v4i32 && Subtarget.hasSSE2()) ||
>>> @@ -23202,7 +23202,7 @@ static SDValue LowerMULH(SDValue Op, con
>>>    // For 512-bit vectors, split into 256-bit vectors to allow the
>>>    // sign-extension to occur.
>>>    if (VT == MVT::v64i8)
>>> -    return Lower512IntArith(Op, DAG);
>>> +    return split512IntArith(Op, DAG);
>>>
>>>    // AVX2 implementations - extend xmm subvectors to ymm.
>>>    if (Subtarget.hasInt256()) {
>>> @@ -24257,9 +24257,9 @@ static SDValue LowerShift(SDValue Op, co
>>>      return R;
>>>    }
>>>
>>> -  // Decompose 256-bit shifts into smaller 128-bit shifts.
>>> +  // Decompose 256-bit shifts into 128-bit shifts.
>>>    if (VT.is256BitVector())
>>> -    return Lower256IntArith(Op, DAG);
>>> +    return split256IntArith(Op, DAG);
>>>
>>>    return SDValue();
>>>  }
>>> @@ -24299,9 +24299,8 @@ static SDValue LowerRotate(SDValue Op, c
>>>    // XOP has 128-bit vector variable + immediate rotates.
>>>    // +ve/-ve Amt = rotate left/right - just need to handle ISD::ROTL.
>>>    if (Subtarget.hasXOP()) {
>>> -    // Split 256-bit integers.
>>>      if (VT.is256BitVector())
>>> -      return Lower256IntArith(Op, DAG);
>>> +      return split256IntArith(Op, DAG);
>>>      assert(VT.is128BitVector() && "Only rotate 128-bit vectors!");
>>>
>>>      // Attempt to rotate by immediate.
>>> @@ -24320,7 +24319,7 @@ static SDValue LowerRotate(SDValue Op, c
>>>
>>>    // Split 256-bit integers on pre-AVX2 targets.
>>>    if (VT.is256BitVector() && !Subtarget.hasAVX2())
>>> -    return Lower256IntArith(Op, DAG);
>>> +    return split256IntArith(Op, DAG);
>>>
>>>    assert((VT == MVT::v4i32 || VT == MVT::v8i16 || VT == MVT::v16i8 ||
>>>            ((VT == MVT::v8i32 || VT == MVT::v16i16 || VT == MVT::v32i8) &&
>>>
>>>
>> --
>> ~Craig
>>
> --
~Craig

