[llvm-dev] Does middle-end pass need to consider some special type when doing optimization? Or letting back-end to revert the optimization accordingly?

Nuno Lopes via llvm-dev llvm-dev at lists.llvm.org
Thu Mar 18 02:54:29 PDT 2021


I don't know anything about AMX, but let me give you some pointers (no pun
intended).


Regarding pointers, the direction LLVM is taking is to have just two pointer
types: a data pointer type and a function pointer type. That's it. That
allows us to remove a lot of bitcasts between pointers. You can already see
that load instructions carry the loaded type as an explicit argument; for
now it duplicates the pointee type, but it won't once pointer types
disappear.
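
For example, the loaded type is currently spelled twice:

  %v = load i32, i32* %p    ; the i32 result type repeats the pointee of i32*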

So if you need a special pointer type that can't be cast to other pointer
types, the way to do it in LLVM is with a different address space. Then you
can configure how many bits it takes, etc. And more importantly, pointers in
that space can't be cast to another space without using a special
instruction (which LLVM optimizers won't introduce).
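
For illustration, a minimal sketch (address space 7 here is an arbitrary
example number; targets define their own):

  ; Pointers in different address spaces are distinct types.
  define i32 @load_as7(i32 addrspace(7)* %p) {
    %v = load i32, i32 addrspace(7)* %p, align 4
    ret i32 %v
  }

  ; Crossing address spaces requires an explicit addrspacecast,
  ; which the optimizers will not introduce on their own.
  define i32 addrspace(7)* @enter_as7(i32* %p) {
    %q = addrspacecast i32* %p to i32 addrspace(7)*
    ret i32 addrspace(7)* %q
  }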


FYI, by using a different address space you may lose a few optimizations,
because optimizers assume nothing about non-default address spaces. We have
discussed an API to let targets express assumptions optimizers could make
(e.g., is null == (void*)0?), but nothing has been implemented so far.


Nuno


From: Wang, Pengfei
Sent: 17 March 2021 10:11
To: llvm-dev <llvm-dev at lists.llvm.org>
Cc: Luo, Yuanke <yuanke.luo at intel.com>
Subject: [llvm-dev] Does middle-end pass need to consider some special type
when doing optimization? Or letting back-end to revert the optimization
accordingly?


Hi,


We are developing a prototype of the Intel Advanced Matrix Extensions (AMX)
[1] programming model in Clang and LLVM [2].

We have met several cases where the type we added gets optimized
unexpectedly in the middle-end, e.g., optimizing phi + bitcast + load:


From:

  %a = load <256 x i32>, <256 x i32>* %mem, align 64
  ...
  %b = phi <256 x i32> [ %a, %label1 ], [ %someother, %label2 ]
  %c = bitcast <256 x i32> %b to x86_amx

To:

  %a = bitcast <256 x i32>* %mem to x86_amx*
  %b = load x86_amx, x86_amx* %a, align 64
  ...
  %c = phi x86_amx [ %b, %label1 ], [ %someother, %label2 ]


To prevent such unexpected transforms, we added type checks at each of these
optimization points.

Roman pointed out that these changes are not the right direction [3] and
considered it a bug in the backend. While we agree the backend might be able
to handle it for correctness, we think it is better to handle it in the
middle-end, since these are negative optimizations for AMX.


First, let me put some background here:

1.	x86_amx* is different from trivial pointers.

The AMX load instruction is quite different from other load instructions: it
needs not only the memory address but also the shape / stride of the tile
register. We did some extra work in the backend to deduce the shape
information from the context. We don't want passes to add new x86_amx
related uses, because that makes the deduction harder. That is to say,
bitcasting other pointer types to x86_amx* is not as trivial as assumed here
(see the intrinsic sketch after this list).

2.	The physical tile registers have more limitations.

a.	There is no copy instruction between tile registers.
b.	Spilling / reloading a tile register is expensive, given its size is
1024 bytes.
c.	The shapes of tile registers need to be configured before use, and
all data in tile registers becomes invalid once they are re-configured. That
means one configure instruction should dominate as many tile registers as
possible, so that their shapes are set up together; otherwise we need to
spill and reload the live registers each time we re-configure.
d.	The number of tile registers is rather small (only 8), and a register
configured with one shape cannot be reused for a different shape.

Based on these limitations, we need to reduce the uses / live ranges of tile
registers, but optimizations may increase the number of uses. So even if we
could handle some combined operations on the AMX type, we would still prefer
to prevent them from the beginning, unless we can completely roll back the
optimization, which is also not a good solution in my opinion.

3.	For more information, please refer to the discussion in [3].
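
For concreteness, here is a minimal sketch of an AMX load / store at the IR
level using the internal intrinsics from our AMX patches (the shape values
are illustrative). Note that the shape (rows, columns in bytes) and the
stride are explicit operands of the operation, which a plain load of
x86_amx simply does not have:

  declare x86_amx @llvm.x86.tileloadd64.internal(i16, i16, i8*, i64)
  declare void @llvm.x86.tilestored64.internal(i16, i16, i8*, i64, x86_amx)

  define void @copy_tile(i8* %src, i8* %dst, i64 %stride) {
    ; Load a 16 x 64-byte tile (1024 bytes): the shape is part of the operation.
    %t = call x86_amx @llvm.x86.tileloadd64.internal(i16 16, i16 64, i8* %src, i64 %stride)
    call void @llvm.x86.tilestored64.internal(i16 16, i16 64, i8* %dst, i64 %stride, x86_amx %t)
    ret void
  }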

For other optimization points, please refer to [4][5].


I think the main controversy from Roman is whether a middle-end pass should
consider special types when doing optimizations. I tend to let the
middle-end do the type check, on account of the peculiarity of the AMX type,
but I'm not sure whether there is precedent for handling a similar issue on
other targets. I'm open and glad to do it either way, as long as we have an
elegant solution.

Any suggestions are welcome.


[1] https://software.intel.com/content/www/us/en/develop/articles/intel-sdm.html#architecture

[2] https://lists.llvm.org/pipermail/llvm-dev/2020-November/146770.html

[3] https://reviews.llvm.org/D98247

[4] https://reviews.llvm.org/D98595

[5] https://reviews.llvm.org/D98757


Thanks

Pengfei

