[llvm-dev] Does a middle-end pass need to consider some special types when doing optimization? Or should the back-end revert the optimization accordingly?

Florian Hahn via llvm-dev llvm-dev at lists.llvm.org
Mon Mar 22 06:40:19 PDT 2021



> On Mar 19, 2021, at 02:04, Zhang, Xiang1 via llvm-dev <llvm-dev at lists.llvm.org> wrote:
> 
> Yes, that is equivalent, but at the front end we don’t have an existing type to express the AMX type.
> The “AMX type” in C/C++ is implied by the following structure:
>  
> typedef int tile1024i __attribute__((__vector_size__(1024), __aligned__(64)));
> typedef struct __tile1024i_str {
>   const unsigned short row;
>   const unsigned short col;
>   tile1024i tile;
> } __tile1024i;
>  
> So we handle
> 
>     %src = load <256 x i32>, <256 x i32>* %addr, align 64
>     %2 = bitcast <256 x i32> %src to x86_amx
> 
> not
> 
>     %2 = load x86_amx, x86_amx* %addr, align 64
> 

Are the bitcasts introduced by the frontend? If you need different semantics for loading from an `x86_amx` pointer, could the frontend generate a call to an intrinsic instead?
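
For illustration, a rough sketch of the two IR shapes under discussion: the load-plus-bitcast pattern quoted above, and an intrinsic-based alternative. The intrinsic @llvm.x86.tileloadd64.internal and its arguments are assumed here purely for the sake of the sketch; the exact intrinsic and signature the frontend would use could differ.

    ; Current pattern: an ordinary <256 x i32> vector load followed by a
    ; bitcast into the opaque AMX tile type. Middle-end passes see a plain
    ; vector load plus a bitcast.
    define void @load_via_bitcast(<256 x i32>* %addr) {
      %src = load <256 x i32>, <256 x i32>* %addr, align 64
      %t   = bitcast <256 x i32> %src to x86_amx
      ; ... %t consumed by AMX intrinsics ...
      ret void
    }

    ; Sketch of an intrinsic-based alternative: the tile value is only ever
    ; produced by an intrinsic call, so no pass sees a load or bitcast of
    ; x86_amx at all. Intrinsic name, signature, and operands are assumptions
    ; for illustration.
    define void @load_via_intrinsic(i8* %addr, i16 %row, i16 %col, i64 %stride) {
      %t = call x86_amx @llvm.x86.tileloadd64.internal(i16 %row, i16 %col,
                                                       i8* %addr, i64 %stride)
      ; ... %t consumed by AMX intrinsics ...
      ret void
    }

    declare x86_amx @llvm.x86.tileloadd64.internal(i16, i16, i8*, i64)
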

Cheers,
Florian

