[llvm-dev] [RFC] Matrix support (take 2)

Gerolf Hoflehner via llvm-dev llvm-dev at lists.llvm.org
Fri Dec 21 02:12:22 PST 2018



> On Dec 21, 2018, at 12:07 AM, Adam Nemet via llvm-dev <llvm-dev at lists.llvm.org> wrote:
> 
> 
> 
> On Dec 19, 2018, at 3:08 PM, Simon Moll <moll at cs.uni-saarland.de> wrote:
> 
>> Hi,
>> 
>> On 12/19/18 11:21 PM, David Greene via llvm-dev wrote:
>>> Adam Nemet via llvm-dev <llvm-dev at lists.llvm.org> writes:
>>> 
>>>>     I spent some time chatting with Adam about this and have a better
>>>>     understanding of his concerns here. It seems to me that if having
>>>>     masking intrinsics is the long-term solution we want, we should do
>>>>     that now (for add and sub) rather than building arbitrary matrix
>>>>     layout info into intrinsics, since a mask has all the information
>>>>     that we actually need.
>>>> 
>>>> I think that sounds like a reasonable compromise. We already have
>>>> masked load/store intrinsics, so adding add and sub just follows that
>>>> precedent. If the decision is made to move masking to the core
>>>> operations, the new intrinsics would just move as well.
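
(To make the precedent concrete: below is the masked load intrinsic as it exists in the IR today, next to one possible shape for a masked add. The masked.fadd declaration is purely illustrative; it is not an existing intrinsic, and its exact signature, for example whether it carries a passthru operand, is precisely what would need to be agreed on.)

  ; already in the IR today
  declare <8 x float> @llvm.masked.load.v8f32.p0v8f32(<8 x float>*, i32, <8 x i1>, <8 x float>)
  %v = call <8 x float> @llvm.masked.load.v8f32.p0v8f32(<8 x float>* %p, i32 4,
                                                        <8 x i1> %mask, <8 x float> %passthru)

  ; hypothetical masked add following the same pattern
  declare <8 x float> @llvm.masked.fadd.v8f32(<8 x float>, <8 x float>, <8 x i1>, <8 x float>)
  %sum = call <8 x float> @llvm.masked.fadd.v8f32(<8 x float> %a, <8 x float> %b,
                                                  <8 x i1> %mask, <8 x float> %passthru)
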
>>> How will existing passes be taught about the new intrinsics?  For
>>> example, what would have to be done to instcombine to teach it about
>>> these intrinsics?  Let's suppose every existing operation had an
>>> equivalent masked intrinsic.  Would it be easier to teach all of the
>>> passes about them or would it be easier to teach the passes about a mask
>>> operand on the existing Instructions?  Would it be easier to teach isel
>>> about all the intrinsics or would it be easier to teach isel about a
>>> mask operand?
>> 
>> Consider that overnight we introduce optional mask parameters to vector instructions. Then, since you cannot safely ignore the mask, every transformation and analysis that is in any way concerned with vector instructions is potentially broken and needs to be fixed.
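
(A purely hypothetical sketch of what such an optional mask parameter could look like, and of where a mask-unaware pass would go wrong; fadd accepts no such operand today.)

  ; hypothetical syntax, not valid IR
  %r = fadd <8 x float> %a, %b, mask %m

  ; any pass that still pattern-matches a plain "fadd %a, %b" would rewrite
  ; this without taking %m into account, silently changing the result in the
  ; lanes the mask disables
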
>> 
>> If you go with masking intrinsics, and set the attributes right, it is clear that transformations won't break your code, but you will need to teach InstCombine, DAGCombiner, etc. that a `masked.fadd` is just an `fadd` with a mask. However, this gives you the opportunity to “re-enable” one optimization at a time, each time making sure that the mask is handled correctly. In the case of InstCombine, the vector instruction patterns transfer to masking intrinsics: if all masking intrinsics in the pattern have the same mask parameter you can apply the transformation, and the resulting masking intrinsics will again take the same mask parameter.
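
(A sketch of such a pattern transfer, using a hypothetical integer llvm.masked.add in the same spirit as the illustrative masked.fadd above; the only point is that both calls carry the same mask, so the familiar constant-folding pattern still applies and the folded result keeps that mask.)

  ; before: two constant adds, both guarded by the same mask %m
  %t = call <4 x i32> @llvm.masked.add.v4i32(<4 x i32> %x, <4 x i32> <i32 1, i32 1, i32 1, i32 1>,
                                             <4 x i1> %m, <4 x i32> undef)
  %r = call <4 x i32> @llvm.masked.add.v4i32(<4 x i32> %t, <4 x i32> <i32 2, i32 2, i32 2, i32 2>,
                                             <4 x i1> %m, <4 x i32> undef)

  ; after: the familiar (x + 1) + 2 ==> x + 3 fold carries over, and the
  ; folded intrinsic takes the very same mask %m
  %r.folded = call <4 x i32> @llvm.masked.add.v4i32(<4 x i32> %x, <4 x i32> <i32 3, i32 3, i32 3, i32 3>,
                                                    <4 x i1> %m, <4 x i32> undef)
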
>> 
>> Also, this need not be a hard transition from vector instructions to masking intrinsics: you can add new types of masking intrinsics in batches along with the required transformations. Masking intrinsics and vector instructions can live side by side (as they do today, anyway).
> 
> +1
> 
> Also, this thread is getting off-topic. It would probably be best to continue the discussion about the masked-intrinsic transition strategy under https://reviews.llvm.org/D53613.

I agree. The thread started to lose focus. It seems all that is needed to get this going in open source is to get into the pragmatic mindset, as Chris said, and “just” agree on a decision on the sticking question, as you had suggested earlier.

> My sense is that this info is important for your lowering, and your approach of using dataflow analysis to recover this will fail in some cases.
> 
> Since layout and padding information is important, it seems most logical to put this into the type. Doing so would make it available in all these places.
> 
> That said, I still don’t really understand why you *need* it.
> 
> This seems like the main sticking point, so let’s close on this first and see if my answers above are satisfying.

The rest will follow once the learning and iteration starts. 
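
(To make the sticking point concrete: one possible, purely illustrative lowering of a 3 x 3 column-major single-precision matrix pads each column out to 4 lanes and embeds the matrix in a flat <12 x float>. An element-wise add then only needs a mask that switches the padding lanes off, which is the sense in which a mask carries all the information the operation itself needs, while the shape and padding still have to be known from somewhere, whether that is the type, intrinsic parameters, or a recovering analysis. The masked.fadd below is the same illustrative intrinsic as above.)

  ;   lanes 0-2  : column 0     lane 3  : padding
  ;   lanes 4-6  : column 1     lane 7  : padding
  ;   lanes 8-10 : column 2     lane 11 : padding
  %sum = call <12 x float> @llvm.masked.fadd.v12f32(
             <12 x float> %a, <12 x float> %b,
             <12 x i1> <i1 true, i1 true, i1 true, i1 false,
                        i1 true, i1 true, i1 true, i1 false,
                        i1 true, i1 true, i1 true, i1 false>,
             <12 x float> undef)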

> 
>> 
>>> I honestly don't know the answers to these questions.  But I think they
>>> are important to consider, especially if intrinsics are seen as a bridge
>>> to first-class IR support for masking.
>> 
>> I think it's sensible to use masking intrinsics (or EVL, https://reviews.llvm.org/D53613) at the IR level and masked SD nodes in the backend. However, I agree that intrinsics should just be a bridge to native support mid-term.
>> 
>> - Simon
>> 
>>>                                  -David
>> 
>> -- 
>> 
>> Simon Moll
>> Researcher / PhD Student
>> 
>> Compiler Design Lab (Prof. Hack)
>> Saarland University, Computer Science
>> Building E1.3, Room 4.31
>> 
>> Tel. +49 (0)681 302-57521 : moll at cs.uni-saarland.de
>> Fax. +49 (0)681 302-3065  : http://compilers.cs.uni-saarland.de/people/moll
>> 
