[llvm-dev] Does middle-end pass need to consider some special type when doing optimization? Or letting back-end to revert the optimization accordingly?

Florian Hahn via llvm-dev llvm-dev at lists.llvm.org
Mon Mar 22 08:04:13 PDT 2021



> On Mar 22, 2021, at 14:02, Luo, Yuanke <yuanke.luo at intel.com> wrote:
> 
> Yes, the bitcasts are introduced by the frontend when calling amx intrinsics. We use vectors to represent 2D amx tiles in C; on the other hand, we don’t want to mix amx tiles with other vector operations, so x86_amx was introduced to isolate amx intrinsics from normal vector operations. The bitcast marks the point where a normal vector is passed to an amx intrinsic. In the example below, we need to transform the bitcast into a vector store and an amx load intrinsic. The x86_amx* pointer was unexpected at the beginning, but the InstCombine pass in the middle-end generates the x86_amx pointer.
>  
> define dso_local void @test_src_add(<256 x i32> %x, <256 x i32> %y, i16 %r, i16 %c, i8* %buf, i64 %s) {
> ; CHECK-LABEL: @test_src_add(
> ; CHECK-NEXT:  entry:
> ; CHECK-NEXT:    [[TMP0:%.*]] = alloca <256 x i32>, align 64
> ; CHECK-NEXT:    [[ADD:%.*]] = add <256 x i32> [[Y:%.*]], [[X:%.*]]
> ; CHECK-NEXT:    [[TMP1:%.*]] = bitcast <256 x i32>* [[TMP0]] to i8*
> ; CHECK-NEXT:    store <256 x i32> [[ADD]], <256 x i32>* [[TMP0]], align 1024
> ; CHECK-NEXT:    [[TMP2:%.*]] = call x86_amx @llvm.x86.tileloadd64.internal(i16 [[R:%.*]], i16 [[C:%.*]], i8* [[TMP1]], i64 64)
> ; CHECK-NEXT:    call void @llvm.x86.tilestored64.internal(i16 [[R]], i16 [[C]], i8* [[BUF:%.*]], i64 [[S:%.*]], x86_amx [[TMP2]])
> ; CHECK-NEXT:    ret void
> ;
> entry:
>   %add = add <256 x i32> %y, %x
>   %t = bitcast <256 x i32> %add to x86_amx
>   call void @llvm.x86.tilestored64.internal(i16 %r, i16 %c, i8* %buf, i64 %s, x86_amx %t)
>   ret void
> }
>  

Ok, I think I understand the issue better now. IIUC you use `bitcast` in the frontend to convert between regular vector values and AMX values?

This doesn’t really match the way `bitcast` is defined (as discussed earlier), and this mismatch seems to be the source of the issues. I don’t think you should use `bitcast`s that way; instead, the frontend should emit different code for the conversion between vector and amx values (e.g. use an intrinsic to convert between vector and amx values; the intrinsic can be directly lowered to the conversion code).
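For illustration, the frontend output for your example could look something like the IR below, using a dedicated conversion intrinsic instead of a bitcast. (The intrinsic name here is just a placeholder to show the shape of the approach, not an existing intrinsic; it would have to be added and given lowering code.)

define dso_local void @test_src_add(<256 x i32> %x, <256 x i32> %y, i16 %r, i16 %c, i8* %buf, i64 %s) {
entry:
  %add = add <256 x i32> %y, %x
  ; Explicit conversion intrinsic (placeholder name). Unlike a bitcast,
  ; instcombine treats the call as opaque and will not fold it with
  ; surrounding loads/stores.
  %t = call x86_amx @llvm.x86.cast.vector.to.tile.v256i32(<256 x i32> %add)
  call void @llvm.x86.tilestored64.internal(i16 %r, i16 %c, i8* %buf, i64 %s, x86_amx %t)
  ret void
}

The backend (or a pre-isel pass) can then lower the conversion call to the store/tileload sequence you showed, without any special casing in the generic middle-end passes.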

I think there are at least two ways forward:

1. Avoid using bitcasts for the conversion in the frontend.
2. Try to define the semantics of bitcast/load for AMX types such that the transformations you want to exclude in instcombine are illegal.

If you decide to go with 2., you will probably have to make a convincing argument for why this is the right thing to do and why the alternatives do not work, because it means that certain general transformations that are currently legal become illegal for certain types (as illustrated by the instcombine patches you mentioned).

Cheers.
Florian
