[llvm-dev] [RFC] Make LoopVectorize Aware of SLP Operations
Florian Hahn via llvm-dev
llvm-dev at lists.llvm.org
Thu Feb 8 07:30:08 PST 2018
Hi,
On 08/02/2018 04:22, Caballero, Diego wrote:
> Hi Florian!
>
> This proposal sounds pretty exciting! Integrating SLP-aware loop vectorization (or, conversely, loop-aware SLP) and SLP into the VPlan framework is definitely aligned with the long-term vision, and we would prefer this approach to the LoopReroll and InstCombine alternatives that you mentioned. We prefer a generic implementation that can handle complicated cases to something ad hoc for a few simple ones. With that in mind, we have some comments on the design you propose:
>
>> 1. Detect loops containing SLP opportunities (operations on compound
>> values)
>> 2. Extend the cost model to choose between interleaving or using
>> compound values
>> 3. Add support for vectorizing compound operations to VPlan
>
> Currently, VPlan is not fully integrated into all the stages of the inner loop vectorizer pipeline. For that reason, part of your implementation (#1 and #2) would happen outside of VPlan and another part (#3) would be VPlan-based. As you know, we are currently working on a new vectorization path where 1) VPlan is built upfront in the pipeline (http://lists.llvm.org/pipermail/llvm-dev/2017-December/119523.html) and 2) all the vectorization stages will be implemented on top of the VPlan representation. We think that your proposal is a really good candidate to be implemented in this new “VPlan-native” vectorization path. That way we would avoid having to port #1 and #2 to the final VPlan-based infrastructure, and it would give you the opportunity to get involved in the design of VPlan. It would also avoid introducing the complexity of SLP into the existing cost model and code generation, which is another concern to consider. We should definitely talk in depth about the requirements to implement this in the new vectorization path, but we truly believe it's the best approach.
>
Thank you very much for your detailed response!
I also think that this proposal would fit very well into the
"VPlan-native" vectorization path. In particular, building and
evaluating multiple plans for different strategies (e.g. interleaved
and SLP-style) should make cost modelling easier in more complex
scenarios. I think this proposal could be a good candidate for an
initial user of the new VPlan model, and I would also be happy to help
out with the related VPlan infrastructure work!
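To make the kind of loop concrete, here is a minimal, made-up example
of an SLP opportunity inside a loop (struct and function names are
purely illustrative):

  // Both fields are loaded, combined with the same opcode and stored
  // back, so the x/y pair could be kept as one compound value instead
  // of being de-interleaved into separate vectors.
  struct Pair { float x, y; };
  void addPairs(Pair *A, const Pair *B, int N) {
    for (int i = 0; i < N; ++i) {
      A[i].x += B[i].x;
      A[i].y += B[i].y;
    }
  }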
>
>> * loops where some vectors need to be transformed, for example where
>> different operations are performed on different (groups of) lanes,
>> like A[i].x + B[i].x, A[i].y - B[i].y, which could be transformed to
>> A[i].x + B[i].x, A[i].y + (-B[i].y), or where one compound group
>> needs to be reordered, like A[i].x + B[i].y, A[i].y + B[i].x
>
> This kind of transformation/reordering is an example of a preparatory VPlan-to-VPlan transformation that could precede and simplify the core SLP-aware analysis.
>
Yes, in those cases I think it would be very beneficial not to commit
to a single strategy too early.
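As a sketch of the first case from the quoted example (again with
made-up names), consider:

  // Different opcodes on the two lanes. A preparatory VPlan-to-VPlan
  // transformation could rewrite the subtract on the y lane as
  // A[i].y + (-B[i].y), so both lanes share the same opcode and can
  // form a single compound operation.
  struct Pair { float x, y; };
  void addSub(Pair *C, const Pair *A, const Pair *B, int N) {
    for (int i = 0; i < N; ++i) {
      C[i].x = A[i].x + B[i].x;
      C[i].y = A[i].y - B[i].y;
    }
  }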
>
>> The SLP vectorizer in LLVM already implements a similar analysis. In the long term, it would probably make sense to share the analysis between LoopVectorize and the SLP vectorizer. But we think it would be safer to start with a separate and initially more limited analysis in LoopVectorize.
>
> As a first step, this sounds reasonable to me. If it's implemented on top of VPlan, it could be reused in a future VPlan-based SLP. We should keep this in mind for the design from the very beginning so that we maximize the reuse of the ground common to "standalone" SLP and SLP-aware loop vectorization.
Agreed. The infrastructure we add here should at least make it easier
to reuse parts of it for standalone SLP.
>
>
>> One limitation here is that we commit to either interleaving or compound vectorization when calculating the cost for the loads. Depending on the other instructions in the loop, interleaving could be beneficial overall, even though we could use compound vectorization for some operations. Initially, we could only consider SLP-style vectorization if it can be used for all instructions.
>
> VPlan will allow the independent evaluation of multiple vectorization scenarios in the future. This seems to fit into that category.
Yes, I think this would fit very well into the VPlan-native model and
would also allow for a modular implementation of the cost modelling.
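A made-up example of the situation described in the quoted paragraph,
where only part of the loop can use compound vectorization:

  // The x/y loads and the two adds could use compound (SLP-style)
  // vectorization, but the last statement mixes both lanes, so
  // interleaving might still be the better choice for the loop as a
  // whole. Committing while costing the loads would hide that
  // trade-off.
  struct Pair { float x, y; };
  void mixed(Pair *A, const Pair *B, float *Out, int N) {
    for (int i = 0; i < N; ++i) {
      A[i].x += B[i].x;
      A[i].y += B[i].y;
      Out[i] = A[i].x * A[i].y;
    }
  }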
>
>
>> Add support for SLP-style vectorization to VPlan
>> ------------------------------------------------------------------------
>> Introduce two new recipes, VPSLPMemoryRecipe and VPSLPInstructionRecipe.
>
> We introduced VPInstructions to model masking in patch D38676. They are necessary to properly model def-use/use-def chains in the VPlan representation, and we believe you will need to represent such chains for newly inserted operations, shuffles, and inserts/extracts. As such, VPInstructions are a more appropriate representation than recipes in the long-term vision. In fact, we plan to replace some of the existing recipes with VPInstructions in the near future. For this reason, we think the output of the SLP vectorization should be modelled using VPInstructions rather than new recipes.
Interesting, thanks for the pointer. I will have a closer look, but I
think VPInstructions should be a good fit here.
Thanks,
Florian