[LLVMdev] Handling Masked Vector Operations
dag at cray.com
Thu May 9 08:19:12 PDT 2013
James Courtier-Dutton <james.dutton at gmail.com> writes:
> On 2 May 2013 16:57, <dag at cray.com> wrote:
>> We're looking at how to handle masked vector operations in architectures
>> like Knight's Corner. In our case, we have to translate from a fully
>> vectorized IR that has mask support to llvm IR which does not have mask
>> support.
>>
>
> Has anyone done a comparison between the "fully vectorized IR" and "LLVM IR"?
> If someone has already invented a "fully vectorized IR", it might be
> beneficial to not re-invent it for LLVM.
> For example, if you are optimizing a loop and splitting it into 3
> loops, one of which can then be fully vectorized, it would be useful
> to represent that optimization/translation at the IR level. Adding
> mask support to LLVM IR would therefore seem a sensible course to me.
> It might be a short-term pain, but would possibly benefit the long-term
> optimization goals of LLVM.
The vectorized IR we are translating from has explicit masking at the
leaf nodes and implied masking at the inner nodes. For example:
             ___MERGE___
            /           \
           +             -
          / \           / \
         /   \         /   \
     [a#m1] [b#m1] [a#m2] [b#m2]
So the add is assumed to operate under #m1 and the subtract is assumed
to operate under #m2. Then there is an explicit merge operation to form
the final vector. In this case #m2 = ~#m1.
I believe we can represent this in LLVM IR with selects as long as we
have predication at the leaves. The trick is to have isel match these
selects and generate efficient predicated operations. I'm working on
that experiment to see whether it will suffice.
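To make that concrete, here is a minimal sketch of how the MERGE tree
above might be written in plain LLVM IR, with the mask carried only by
a vector select. The function name, vector width, and element type are
made up for illustration and are not from the original discussion:

  ; Hypothetical example: <8 x i32> and the names %a, %b, %m1 are
  ; placeholders chosen for illustration.
  define <8 x i32> @merge_example(<8 x i32> %a, <8 x i32> %b,
                                  <8 x i1> %m1) {
    %add = add <8 x i32> %a, %b   ; implicitly "under" #m1
    %sub = sub <8 x i32> %a, %b   ; implicitly "under" #m2 = ~#m1
    %res = select <8 x i1> %m1, <8 x i32> %add, <8 x i32> %sub
    ret <8 x i32> %res
  }

The question for the experiment is whether isel can reliably recognize
the (select %mask, %op, %other) shape and fold it into a single
predicated instruction on targets that provide one.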
So I don't know that a fully predicated IR would be any better than
selects + predication at the leaves. That's why I'm doing the
experiment. :)
-David