[llvm-commits] [llvm] r72507 - in /llvm/trunk: include/llvm/Target/TargetLowering.h lib/CodeGen/SelectionDAG/DAGCombiner.cpp lib/Target/X86/X86ISelLowering.cpp lib/Target/X86/X86ISelLowering.h test/CodeGen/X86/narrow_op-1.ll test/CodeGen/X86/narrow_op-2.ll
Evan Cheng
evan.cheng at apple.com
Thu May 28 10:56:20 PDT 2009
On May 27, 2009, at 11:38 PM, Duncan Sands wrote:
> Hi Evan, nice transform! But I think you shouldn't do it if the
> original store was volatile. Also, mightn't this turn an aligned
Why not? This would just reduce the store width.
> store into an unaligned one? That could be costly.
I don't think that can happen but I'll add a check.
>
>> + SDValue N0 = Value.getOperand(0);
>> + if (ISD::isNormalLoad(N0.getNode()) && N0.hasOneUse()) {
>
> why not return SDValue() here if the conditions aren't
> satisfied, rather than having a big indented if.
*Shrug*. I'm generally a proponent of the early-return style, but I don't
think it's always clearer.
>
>> + while (NewBW < BitWidth &&
>> +        !(TLI.isTypeLegal(NewVT) &&
>> +          TLI.isOperationLegalOrCustom(Opc, NewVT) &&
>
> isOperationLegalOrCustom checks for isTypeLegal.
Ok.
>
>> + if (NewBW == BitWidth)
>> +   return SDValue(0, 0);
>
> If BitWidth wasn't a power of two, then you might jump over
> it. So I think the test should be NewBW >= BitWidth.
Ok.
>
>> + APInt Mask = APInt::getBitsSet(BitWidth, ShAmt, ShAmt + NewBW);
>> + if ((Imm & Mask) == Imm) {
>
> Here too you could bail out if the condition doesn't hold,
> reducing indentation.
>
>> + if (TLI.isBigEndian())
>> + PtrOff = (BitWidth - NewBW) / 8 - PtrOff;
>
> If the original type was a funky integer type like i3 then
> this will give the wrong answer. The following should do
> the trick:
> PtrOff = (BitWidth + 7 - NewBW) / 8 - PtrOff;
> Or use getStoreSizeInBits(N1.getValueType()). Actually
> isn't N1.getValueType() the same as VT? If so, you could
> have defined BitWidth as VT.getSizeInBits().
Ok.
>
>> +bool X86TargetLowering::isNarrowingProfitable(MVT VT1, MVT VT2) const {
>> +  // i16 instructions are longer (0x66 prefix) and potentially slower.
>> +  return !(VT1 == MVT::i32 && VT2 == MVT::i16);
>> +}
>
> So i64 -> i16 is profitable on x86-64?
It's probably even.
Evan
>
> Ciao,
>
> Duncan.
> _______________________________________________
> llvm-commits mailing list
> llvm-commits at cs.uiuc.edu
> http://lists.cs.uiuc.edu/mailman/listinfo/llvm-commits