<table border="1" cellspacing="0" cellpadding="8">
<tr>
<th>Issue</th>
<td>
<a href="https://github.com/llvm/llvm-project/issues/57163">57163</a>
</td>
</tr>
<tr>
<th>Summary</th>
<td>
[AMDGPU] Some confusion about LLVM IR alloca instruction
</td>
</tr>
<tr>
<th>Labels</th>
<td>
</td>
</tr>
<tr>
<th>Assignees</th>
<td>
</td>
</tr>
<tr>
<th>Reporter</th>
<td>
GaleSeLee
</td>
</tr>
</table>
<pre>
Hello, developers!
I use `clang++` and `hipcc` to generate LLVM IR for an NVIDIA GPU and an AMD GPU. The two compilers generate different `alloca` instructions.
Here is an example:
```
__device__ void int_a_kernel() {
int a = 1;
}
```
LLVM IR for the NVIDIA GPU:
```
define dso_local void @_Z12int_a_kernelv() #0 {
  %1 = alloca i32, align 4
  store i32 1, i32* %1, align 4
  ret void
}
```
But the AMDGPU IR carries extra address-space information:
```
define dso_local void @_Z12int_a_kernelv() #0 {
  %1 = alloca i32, align 4, addrspace(5)
  %2 = addrspacecast i32 addrspace(5)* %1 to i32*
  store i32 1, i32* %2, align 4
ret void
}
```
I wonder whether there is a way to convert the LLVM IR generated for the NVIDIA GPU into IR for AMDGPU. My rough understanding of what that would involve is sketched below, but I am not sure it is correct.
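From what I can tell, the extra lines mean that on AMDGPU each `alloca` is placed in address space 5 (private) and then `addrspacecast` back to the generic address space. A minimal hand-ported version of the NVIDIA IR might look like the sketch below (the `amdgcn-amd-amdhsa` triple is my assumption, and the matching `target datalayout` line is omitted), but I am not sure this is the right way to do the conversion:
```
; Sketch only: each generic alloca is rewritten into an addrspace(5) alloca
; followed by an addrspacecast back to the generic address space.
target triple = "amdgcn-amd-amdhsa"

define dso_local void @_Z12int_a_kernelv() #0 {
  %1 = alloca i32, align 4, addrspace(5)
  %2 = addrspacecast i32 addrspace(5)* %1 to i32*
  store i32 1, i32* %2, align 4
  ret void
}
```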
Thanks!
</pre>