[llvm] r366470 - [LAA] Re-check bit-width of pointers after stripping.

Philip Reames via llvm-commits llvm-commits at lists.llvm.org
Thu Jul 18 12:57:58 PDT 2019


I'm really not sure this is the right approach, particularly since 
there seems to be at least one other case where we're having to patch 
the caller of stripAndAccumulate to handle differing bit widths.

Before the unification, some of the versions didn't peek through 
addrspacecast for exactly this reason.  It looks like that was lost in 
the merge?  Or was it intentional?  If it was just an accident, wouldn't 
the best fix be to reintroduce that restriction?
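To make the bit-width hazard concrete, here is a minimal Python sketch. The `APInt` class and the index-width table are illustrative stand-ins for `llvm::APInt` and `DataLayout::getIndexSizeInBits`, not LLVM's implementation; the widths are AMDGPU-like (64-bit generic pointers, 32-bit addrspace(3)):

```python
INDEX_WIDTH = {0: 64, 3: 32}  # AMDGPU-like: generic vs. local (LDS)

class APInt:
    """Minimal stand-in for llvm::APInt: arithmetic requires equal widths."""
    def __init__(self, bits, value):
        self.bits = bits
        self.value = value & ((1 << bits) - 1)

    def __sub__(self, other):
        # llvm::APInt asserts on mismatched widths; model that here.
        assert self.bits == other.bits, "APInt bit-width mismatch"
        return APInt(self.bits, self.value - other.value)

    def sext_or_trunc(self, bits):
        v = self.value
        if v >= 1 << (self.bits - 1):
            v -= 1 << self.bits  # reinterpret as signed before refitting
        return APInt(bits, v)

# An offset accumulated at the pre-strip index width (64-bit generic
# pointer) meets a value built at the post-strip width (32-bit, because
# stripping walked through an addrspacecast into addrspace(3)).
off = APInt(64, 8)
size = APInt(INDEX_WIDTH[3], 8)
# `size - off` would trip the width assert; renormalizing first is safe.
delta = size - off.sext_or_trunc(INDEX_WIDTH[3])
```

Mixing the two widths directly (`size - off`) trips the assertion, which is the caller-side patching referred to above.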

Philip

On 7/18/19 10:30 AM, Michael Liao via llvm-commits wrote:
> Author: hliao
> Date: Thu Jul 18 10:30:27 2019
> New Revision: 366470
>
> URL: http://llvm.org/viewvc/llvm-project?rev=366470&view=rev
> Log:
> [LAA] Re-check bit-width of pointers after stripping.
>
> Summary:
> - As the pointer stripping now tracks through `addrspacecast`, prepare
>    to handle the bit-width difference from the result pointer.
>
> Reviewers: jdoerfert
>
> Subscribers: jvesely, nhaehnle, hiraditya, arphaman, llvm-commits
>
> Tags: #llvm
>
> Differential Revision: https://reviews.llvm.org/D64928
>
> Modified:
>      llvm/trunk/lib/Analysis/LoopAccessAnalysis.cpp
>      llvm/trunk/test/Transforms/SLPVectorizer/AMDGPU/address-space-ptr-sze-gep-index-assert.ll
>
> Modified: llvm/trunk/lib/Analysis/LoopAccessAnalysis.cpp
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Analysis/LoopAccessAnalysis.cpp?rev=366470&r1=366469&r2=366470&view=diff
> ==============================================================================
> --- llvm/trunk/lib/Analysis/LoopAccessAnalysis.cpp (original)
> +++ llvm/trunk/lib/Analysis/LoopAccessAnalysis.cpp Thu Jul 18 10:30:27 2019
> @@ -1189,12 +1189,25 @@ bool llvm::isConsecutiveAccess(Value *A,
>   
>     unsigned IdxWidth = DL.getIndexSizeInBits(ASA);
>     Type *Ty = cast<PointerType>(PtrA->getType())->getElementType();
> -  APInt Size(IdxWidth, DL.getTypeStoreSize(Ty));
>   
>     APInt OffsetA(IdxWidth, 0), OffsetB(IdxWidth, 0);
>     PtrA = PtrA->stripAndAccumulateInBoundsConstantOffsets(DL, OffsetA);
>     PtrB = PtrB->stripAndAccumulateInBoundsConstantOffsets(DL, OffsetB);
>   
> +  // Retrieve the address space again as pointer stripping now tracks through
> +  // `addrspacecast`.
> +  ASA = cast<PointerType>(PtrA->getType())->getAddressSpace();
> +  ASB = cast<PointerType>(PtrB->getType())->getAddressSpace();
> +  // Check that the address spaces match and that the pointers are valid.
> +  if (ASA != ASB)
> +    return false;
> +
> +  IdxWidth = DL.getIndexSizeInBits(ASA);
> +  OffsetA = OffsetA.sextOrTrunc(IdxWidth);
> +  OffsetB = OffsetB.sextOrTrunc(IdxWidth);
> +
> +  APInt Size(IdxWidth, DL.getTypeStoreSize(Ty));
> +
>     //  OffsetDelta = OffsetB - OffsetA;
>     const SCEV *OffsetSCEVA = SE.getConstant(OffsetA);
>     const SCEV *OffsetSCEVB = SE.getConstant(OffsetB);
>
> Modified: llvm/trunk/test/Transforms/SLPVectorizer/AMDGPU/address-space-ptr-sze-gep-index-assert.ll
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Transforms/SLPVectorizer/AMDGPU/address-space-ptr-sze-gep-index-assert.ll?rev=366470&r1=366469&r2=366470&view=diff
> ==============================================================================
> --- llvm/trunk/test/Transforms/SLPVectorizer/AMDGPU/address-space-ptr-sze-gep-index-assert.ll (original)
> +++ llvm/trunk/test/Transforms/SLPVectorizer/AMDGPU/address-space-ptr-sze-gep-index-assert.ll Thu Jul 18 10:30:27 2019
> @@ -147,3 +147,16 @@ bb:
>     store i32 %sub1, i32* undef
>     ret void
>   }
> +
> +; CHECK-LABEL: slp_crash_on_addrspacecast
> +; CHECK: ret void
> +define void @slp_crash_on_addrspacecast() {
> +entry:
> +  %0 = getelementptr inbounds i64, i64 addrspace(3)* undef, i32 undef
> +  %p0 = addrspacecast i64 addrspace(3)* %0 to i64*
> +  store i64 undef, i64* %p0, align 8
> +  %1 = getelementptr inbounds i64, i64 addrspace(3)* undef, i32 undef
> +  %p1 = addrspacecast i64 addrspace(3)* %1 to i64*
> +  store i64 undef, i64* %p1, align 8
> +  ret void
> +}
>
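For reference, the control flow the patch adds can be modeled with a toy Python sketch. The helper names and the index-width table are hypothetical (the real code operates on `llvm::Value` and `llvm::APInt` inside `isConsecutiveAccess`); widths are AMDGPU-like:

```python
INDEX_WIDTH = {0: 64, 3: 32}  # AMDGPU-like: generic (0) vs. local/LDS (3)

def sext_or_trunc(v, bits):
    """Two's-complement fit of a Python int into `bits` bits
    (modeling APInt::sextOrTrunc)."""
    return v & ((1 << bits) - 1)

def is_consecutive(ptr_a, ptr_b, elem_size):
    # Each "pointer" is (addrspace_after_stripping, accumulated_offset).
    # Stripping may have walked through an addrspacecast, so the address
    # space must be re-fetched and re-compared, as the patch does.
    as_a, off_a = ptr_a
    as_b, off_b = ptr_b
    if as_a != as_b:              # the new bail-out from r366470
        return False
    width = INDEX_WIDTH[as_a]     # recompute IdxWidth after stripping
    off_a = sext_or_trunc(off_a, width)
    off_b = sext_or_trunc(off_b, width)
    return sext_or_trunc(off_b - off_a, width) == elem_size
```

For example, two addrspace(3) pointers 8 bytes apart with 8-byte elements are consecutive, while pointers that end up in different address spaces after stripping are rejected outright.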
>
> _______________________________________________
> llvm-commits mailing list
> llvm-commits at lists.llvm.org
> https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-commits

