[llvm-dev] PSLP: Padded SLP Automatic Vectorization

Matt P. Dziubinski via llvm-dev llvm-dev at lists.llvm.org
Fri Oct 2 07:33:46 PDT 2020


On 9/29/2020 14:37, David Chisnall via llvm-dev wrote:
> On 28/09/2020 15:45, Matt P. Dziubinski via llvm-dev wrote:
>> Hey, I noticed this talk from EuroLLVM 2015 
>> (https://llvm.org/devmtg/2015-04/slides/pslp_slides_EUROLLVM2015.pdf) 
>> on the PSLP vectorization algorithm (CGO 2015 paper: 
>> http://vporpo.me/papers/pslp_cgo2015.pdf).
>>
>> Is anyone working on implementing it?
>>
>> If so, are there Phab reviews I can subscribe to?
> 
> The CGO paper was based on a very old LLVM and the last I heard, moving 
> the transform to a newer LLVM and rerunning the benchmarks made the 
> speedups go away.  It's not clear what the cause of this was and the 
> team responsible didn't have the time to do any root cause analysis.

Thank you for the reply; interesting!

Incidentally, would you happen to know whether VW-SLP fares any better?

I'm referring to "VW-SLP: Auto-Vectorization with Adaptive Vector Width" 
from PACT 2018 (http://vporpo.me/papers/vwslp_pact2018.pdf; also 
presented as "Extending the SLP vectorizer to support variable vector 
widths" at the 2018 LLVM Developers’ Meeting, 
http://llvm.org/devmtg/2018-10/).

I'm wondering whether it subsumes PSLP or whether there are areas where 
PSLP still works (or worked) better. The (brief) comparison in the paper 
suggests that VW-SLP addresses the problem in a more general manner:

"The widely used bottom-up SLP algorithm has been improved in several 
ways. Porpodas et al. [32] propose  PSLP, a technique that pads the 
scalar code with redundant instructions, to convert non-isomorphic 
instruction sequences into isomorphic ones, thus extending the 
applicability of SLP. Just like VW-SLP-S, PSLP can vectorize code when 
some of the lanes differ, but it is most effective when the 
non-isomorphic parts are only a short section of the instruction chain. 
VW-SLP, on the other hand, works even if the chain never converges."
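
To check my understanding of the padding idea, here is my own toy 
sketch (mine, not an example from either paper) of the kind of 
transformation I have in mind, in plain C:

void foo(int *a, const int *b, const int *c, const int *d) {
  /* Non-isomorphic lanes: lane 0 has an extra multiply. */
  a[0] = (b[0] + c[0]) * d[0];
  a[1] =  b[1] + c[1];
}

void foo_padded(int *a, const int *b, const int *c, const int *d) {
  /* PSLP-style padding: a redundant "* 1" on lane 1 makes both lanes
     share the same add-then-mul chain, so they can be packed into a
     2-wide vector add followed by a vector multiply. */
  a[0] = (b[0] + c[0]) * d[0];
  a[1] = (b[1] + c[1]) * 1;
}

(Per the quoted comparison, my understanding is that this kind of 
padding pays off mainly when the divergent part is only a short section 
of the instruction chain.)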

Best,
Matt

