[llvm-dev] [AMDGPU] Strange results with different address spaces
Haidl, Michael via llvm-dev
llvm-dev at lists.llvm.org
Tue Dec 5 23:28:15 PST 2017
Hi,
the IR comes from clang in -O0 form, so no optimizations are performed by the front end. The IR then goes through a backend-agnostic preparation phase that brings it into SSA form and changes the address space from 0 to 1. After this phase the IR goes through another pass manager that runs the O3 passes together with the AMDGPU target passes for object file generation. I looked into the AMDGPU backend, and the only place where this metadata is added is AMDGPUAnnotateUniformValues.cpp. That pass queries divergence analysis for the load and checks whether its address is reported as uniform; if it is, the metadata is added to the GEP.
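For reference, the relevant logic of that pass boils down to roughly the following (my own condensed sketch against the LLVM 5.0-era C++ API, not the actual AMDGPUAnnotateUniformValues.cpp source; the real pass also handles branch conditions and the !amdgpu.noclobber memory-dependence check):

// Condensed sketch: mark the address computation of loads whose pointer
// divergence analysis reports as uniform with !amdgpu.uniform metadata.
#include "llvm/Analysis/DivergenceAnalysis.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/Instructions.h"
#include "llvm/IR/Metadata.h"
#include "llvm/Pass.h"

using namespace llvm;

namespace {
struct AnnotateUniformSketch : FunctionPass {
  static char ID;
  AnnotateUniformSketch() : FunctionPass(ID) {}

  void getAnalysisUsage(AnalysisUsage &AU) const override {
    AU.addRequired<DivergenceAnalysis>();
    AU.setPreservesAll();
  }

  bool runOnFunction(Function &F) override {
    auto &DA = getAnalysis<DivergenceAnalysis>();
    bool Changed = false;
    for (BasicBlock &BB : F)
      for (Instruction &I : BB) {
        auto *LD = dyn_cast<LoadInst>(&I);
        if (!LD || !DA.isUniform(LD->getPointerOperand()))
          continue;                          // divergent address: leave untouched
        if (auto *PtrI = dyn_cast<Instruction>(LD->getPointerOperand())) {
          // the real pass additionally adds !amdgpu.noclobber for global
          // loads after a memory-dependence check
          PtrI->setMetadata("amdgpu.uniform", MDNode::get(F.getContext(), {}));
          Changed = true;
        }
      }
    return Changed;
  }
};
}

char AnnotateUniformSketch::ID = 0;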
Removing the O3 passes before code generation solves the problem, as does splitting the O3 passes and the backend passes into two separate pass managers. I assume divergence analysis does not run in the second pass manager in that configuration, because no metadata is generated at all.
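The split that makes the problem disappear corresponds roughly to the following setup (a sketch of the legacy pass manager API as we use it; optimizeAndEmit, TM, M and Dest are placeholder names, not our actual driver code):

// Sketch: run the -O3 pipeline and the AMDGPU code generation passes in
// two separate legacy pass managers instead of one combined manager.
#include "llvm/IR/LegacyPassManager.h"
#include "llvm/IR/Module.h"
#include "llvm/Support/ErrorHandling.h"
#include "llvm/Support/raw_ostream.h"
#include "llvm/Target/TargetMachine.h"
#include "llvm/Transforms/IPO/PassManagerBuilder.h"

using namespace llvm;

static void optimizeAndEmit(Module &M, TargetMachine &TM, raw_pwrite_stream &Dest) {
  // 1) Middle-end -O3 pipeline in its own pass manager.
  legacy::PassManager OptPM;
  PassManagerBuilder Builder;
  Builder.OptLevel = 3;
  Builder.populateModulePassManager(OptPM);
  OptPM.run(M);

  // 2) Backend passes for object emission (AMDGPUAnnotateUniformValues,
  //    instruction selection, etc. are added internally by addPassesToEmitFile).
  legacy::PassManager CodeGenPM;
  if (TM.addPassesToEmitFile(CodeGenPM, Dest, TargetMachine::CGFT_ObjectFile))
    report_fatal_error("target does not support object file emission");
  CodeGenPM.run(M);
}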
Could this be a bug in DA falsely reporting the load as uniform because it does not take the intrinsics into account?
Cheers,
Michael
From: Matt Arsenault [mailto:whatmannerofburgeristhis at gmail.com] On behalf of Matt Arsenault
Sent: Tuesday, December 5, 2017 20:01
To: Haidl, Michael <michael.haidl at uni-muenster.de>
Cc: llvm-dev at lists.llvm.org
Subject: Re: [llvm-dev] [AMDGPU] Strange results with different address spaces
On Dec 5, 2017, at 13:53, Matt Arsenault <arsenm2 at gmail.com> wrote:
On Dec 5, 2017, at 02:51, Haidl, Michael via llvm-dev <llvm-dev at lists.llvm.org> wrote:
Hi dev list,
I am currently exploring the integration of AMDGPU/ROCm into the PACXX project and observing some strange behavior of the AMDGPU backend. The following IR is generated for a simple address space test that copies from global to shared memory and back to global after a barrier synchronization.
The IR is attached as as1.ll.
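In HIP/CUDA-style syntax the kernel essentially does the following (a hypothetical reconstruction for illustration only, not the attached IR or the actual PACXX source; the block size of 256 and all names are my assumption):

// Hypothetical sketch of the address space test: each thread copies one
// element from global memory into shared memory (LDS), waits at a barrier,
// and copies it back out to global memory.
__global__ void copyThroughShared(const int *in, int *out) {
  __shared__ int tile[256];                      // assumes a block size of 256
  int gid = blockIdx.x * blockDim.x + threadIdx.x;
  tile[threadIdx.x] = in[gid];                   // global -> shared
  __syncthreads();                               // barrier synchronization
  out[gid] = tile[threadIdx.x];                  // shared -> global
}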
The output is as follows:
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 32 32 32 32 32 32 32 32 32 32 32 32 32 32 32 32 48 48 48 48 48 48 48 48 48 48 48 48 48 48 48 48 64 64 64 64 64 64 64 64 64 64 64 64 64 64 64 64 80 80 80 80 80 80 80 80 80 80 80 80 80 80 80 80 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 112 112 112 112 112 112 112 112 112 112 112 112 112 112 112 112 128 128 128 128 128 128 128 128 128 128 128 128 128 128 128 128 144 144 144 144 144 144 144 144 144 144 144 144 144 144 144 144 160 160 160 160 160 160 160 160 160 160 160 160 160 160 160 160 176 176 176 176 176 176 176 176 176 176 176 176 176 176 176 176 192 192 192 192 192 192 192 192 192 192 192 192 192 192 192 192 208 208 208 208 208 208 208 208 208 208 208 208 208 208 208 208 224 224 224 224 224 224 224 224 224 224 224 224 224 224 224 224 240 240 240 240 240 240 240 240 240 240 240 240 240 240 240 240
It looks like the addressing in as1.ll is incorrectly concluded to be uniform:
%6 = tail call i32 @llvm.amdgcn.workitem.id.x() #0, !range !11
%7 = tail call i32 @llvm.amdgcn.workgroup.id.x() #0
%mul.i.i.i.i.i = mul nsw i32 %7, %3
%add.i.i.i.i.i = add nsw i32 %mul.i.i.i.i.i, %6
%idxprom.i.i.i = sext i32 %add.i.i.i.i.i to i64
%8 = getelementptr i32, i32 addrspace(1)* %callable.coerce0, i64 %idxprom.i.i.i, !amdgpu.uniform !12, !amdgpu.noclobber !12
However, since this depends on workitem.id.x, it certainly is not uniform.
-Matt
Actually you already have the amdgpu.uniform annotation here, and it isn’t added by the backend optimization pass, so there’s a bug in however you produced this IR. It just happens that the uniform load optimization doesn’t trigger on flat loads.
-Matt