[PATCH] D96386: [AMDGPU] Fix promote alloca with double use in a same insn

Matt Arsenault via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Tue Feb 9 18:38:22 PST 2021


arsenm added inline comments.


================
Comment at: llvm/lib/Target/AMDGPU/AMDGPUPromoteAlloca.cpp:603-608
+    auto UI = llvm::find(WorkList, User);
+    if (UI != WorkList.end()) {
+      WorkList.erase(UI);
+      WorkList.push_back(User);
       continue;
+    }
----------------
rampitec wrote:
> arsenm wrote:
> > I don't see how this sorts it; the ordering is still determined by the arbitrary ordering in users. Can't the replacement just check all candidate operands?
> The function is recursive and does a DFS. This code pushes any later use to the end of the worklist.
> 
> Checking all uses at replacement is practically impossible because it may be a long use-def chain. You would need to redo the DFS at every replacement.
This is a DFS on the user list, not on the function. I do not think this will give you the guarantee that the first user dominates the second. But you also just need to ensure the transitive users are enqueued regardless. Would it be easier to just use a SetVector instead of is_contained and a regular worklist?
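
For illustration, a minimal sketch of the SetVector alternative being suggested, assuming the worklist elements are Value* and the recursive traversal otherwise stays as it is today; the function name is made up and this is not the actual AMDGPUPromoteAlloca code:

#include "llvm/ADT/SetVector.h"
#include "llvm/IR/User.h"
#include "llvm/IR/Value.h"

using namespace llvm;

// Collect every transitive user of Root exactly once, in insertion order.
// SetVector::insert returns false when the element is already present, so
// the is_contained()/erase()/push_back() handling above becomes unnecessary.
static void collectTransitiveUsers(Value *Root, SetVector<Value *> &WorkList) {
  for (User *U : Root->users()) {
    if (!WorkList.insert(U))
      continue; // Already enqueued through another operand.
    collectTransitiveUsers(U, WorkList);
  }
}

The point is only that the replacement loop needs each transitive user enqueued once, not any particular order, and a SetVector gives that uniqueness for free.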


================
Comment at: llvm/test/CodeGen/AMDGPU/promote-alloca-mem-intrinsics.ll:68
+; CHECK: %i1 = bitcast double addrspace(3)* %arrayidx2 to i8 addrspace(3)*
+; CHECK: call void @llvm.memcpy.p3i8.p3i8.i64(i8 addrspace(3)* align 8 %i, i8 addrspace(3)* align 8 %i1, i64 16, i1 false)
+define amdgpu_kernel void @promote_alloca_used_twice_in_memcpy(i32 %c) {
----------------
I noticed this dropped the dereferenceable attribute; we should probably fix that.
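
One possible way to carry the attribute over when the memcpy's pointer operands are rewritten, purely as a hedged sketch (MI, ArgNo, and Bytes are illustrative names, not code from this patch):

#include <cstdint>
#include "llvm/IR/Attributes.h"
#include "llvm/IR/IntrinsicInst.h"

using namespace llvm;

// Re-attach dereferenceable(Bytes) to the given pointer argument of the
// rewritten memcpy: Attribute::getWithDereferenceableBytes builds the
// attribute and CallBase::addParamAttr applies it to that argument index.
static void reattachDereferenceable(MemCpyInst *MI, unsigned ArgNo,
                                    uint64_t Bytes) {
  MI->addParamAttr(
      ArgNo, Attribute::getWithDereferenceableBytes(MI->getContext(), Bytes));
}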


================
Comment at: llvm/test/CodeGen/AMDGPU/promote-alloca-mem-intrinsics.ll:79
+}
+
 attributes #0 = { nounwind "amdgpu-flat-work-group-size"="64,64" "amdgpu-waves-per-eu"="1,3" }
----------------
Can you also add tests with a select and a phi both derived from the same alloca?


CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D96386/new/

https://reviews.llvm.org/D96386


