[PATCH] D89577: [VectorCombine] Avoid crossing address space boundaries.

Artem Belevich via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Fri Oct 16 12:20:15 PDT 2020


tra added inline comments.


================
Comment at: llvm/test/Transforms/VectorCombine/AMDGPU/as-transition.ll:17
+; CHECK-NEXT:    [[TMP0:%.*]] = bitcast float* [[C]] to <1 x float>*
+; CHECK-NEXT:    [[TMP1:%.*]] = load <1 x float>, <1 x float>* [[TMP0]], align 4
+; CHECK-NEXT:    [[E:%.*]] = shufflevector <1 x float> [[TMP1]], <1 x float> undef, <4 x i32> <i32 0, i32 undef, i32 undef, i32 undef>
----------------
spatel wrote:
> We still want to create a vector load rather than just bail out on the address-space mismatch?
I believe so. A vectorized load is still beneficial, assuming VectorCombine can do something useful without seeing through to the original pointer.
I don't know if it does, though. I can un-minimize this test case to do a full 4-element load and check.

I guess we may not be able to do it as often, but it should still be useful for AMDGPU and NVPTX, where kernel arguments live in a special address space and the loads would be just one addrspacecast away from the original source.

Eventually we may want a special variant of `stripPointerCasts()` that strips casts only up to an address-space change boundary.
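To illustrate the idea, here is a minimal self-contained sketch of such a stripping loop. The types and the helper name `stripPointerCastsSameAS` are hypothetical stand-ins (real code would walk `llvm::Value` operands via `Operator`/`CastInst`); the point is only that the walk stops at the first `addrspacecast` instead of looking through it:

```cpp
#include <cstddef>

// Hypothetical stand-ins for llvm::Value and its cast kinds.
enum class CastKind { None, BitCast, AddrSpaceCast };

struct Value {
  CastKind Kind = CastKind::None; // what kind of cast this value is
  Value *Operand = nullptr;       // the cast's source operand, if any
  unsigned AddrSpace = 0;         // address space of the pointer
};

// Like stripPointerCasts(), but stop before crossing an
// address-space boundary: look through bitcasts only, and return
// as soon as we hit an addrspacecast or a non-cast value.
Value *stripPointerCastsSameAS(Value *V) {
  while (V->Kind == CastKind::BitCast)
    V = V->Operand;
  return V; // an AddrSpaceCast (or plain value) is left in place
}
```

The existing `stripPointerCasts()` would keep walking through the `addrspacecast` as well; stopping there keeps the resulting pointer in the same address space as the load being vectorized.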


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D89577/new/

https://reviews.llvm.org/D89577
