[all-commits] [llvm/llvm-project] d4d4ce: [SVE][LoopVectorize] Add masked load/store and gat...
david-arm via All-commits
all-commits at lists.llvm.org
Tue Feb 2 01:53:13 PST 2021
Branch: refs/heads/main
Home: https://github.com/llvm/llvm-project
Commit: d4d4ceeb8f3be67be94781ed718ceb103213df74
https://github.com/llvm/llvm-project/commit/d4d4ceeb8f3be67be94781ed718ceb103213df74
Author: David Sherwood <david.sherwood at arm.com>
Date: 2021-02-02 (Tue, 02 Feb 2021)
Changed paths:
M llvm/lib/IR/IRBuilder.cpp
M llvm/lib/Target/AArch64/AArch64TargetTransformInfo.h
M llvm/lib/Transforms/Vectorize/LoopVectorize.cpp
A llvm/test/Transforms/LoopVectorize/AArch64/sve-gather-scatter.ll
A llvm/test/Transforms/LoopVectorize/AArch64/sve-masked-loadstore.ll
Log Message:
-----------
[SVE][LoopVectorize] Add masked load/store and gather/scatter support for SVE
This patch updates IRBuilder::CreateMaskedGather/Scatter to work
with ScalableVectorType and adds isLegalMaskedGather/Scatter functions
to AArch64TargetTransformInfo. In addition, I've fixed up
isLegalMaskedLoad/Store to return true for supported scalar types,
since the vectorizer queries these hooks with scalar element types.
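The legality hooks mentioned above follow a simple pattern: the vectorizer asks the target whether a masked gather/scatter can be emitted for a given scalar element type, and the target answers based on its features. Below is a minimal standalone sketch of that pattern; the ElementKind enum and TargetInfo struct are illustrative stand-ins, not the actual LLVM API, which takes llvm::Type* and consults the subtarget.

```cpp
#include <cassert>

// Illustrative stand-in for the scalar element types the vectorizer
// would query with; the real hooks receive llvm::Type*.
enum class ElementKind { I8, I16, I32, I64, F16, F32, F64, Other };

struct TargetInfo {
  bool HasSVE = false;

  // Sketch of an isLegalMaskedGather-style hook: legal only when the
  // target has SVE and the element type is one it can handle natively.
  bool isLegalMaskedGather(ElementKind EK) const {
    if (!HasSVE)
      return false;
    switch (EK) {
    case ElementKind::I8:
    case ElementKind::I16:
    case ElementKind::I32:
    case ElementKind::I64:
    case ElementKind::F16:
    case ElementKind::F32:
    case ElementKind::F64:
      return true;
    default:
      return false;
    }
  }
};
```

With a hook like this, the vectorizer only forms gather/scatter recipes when the target confirms support, falling back to other strategies otherwise.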
In LoopVectorize.cpp I've changed
LoopVectorizationCostModel::getInterleaveGroupCost to return an invalid
cost for scalable vectors, since it currently relies on a
shufflevector to reverse vectors. In addition, in
LoopVectorizationCostModel::setCostBasedWideningDecision I have treated
the cost of scalarising memory ops on scalable vectors as infinite.
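The cost-model change can be pictured as an early bail-out: any decision that would need an unsupported operation on a scalable vector reports an invalid cost so the vectorizer never picks that path. A self-contained sketch of the idea follows; the Cost alias imitates LLVM's InstructionCost invalid-state behaviour but is not the real class.

```cpp
#include <cassert>
#include <optional>

// Imitation of LLVM's InstructionCost: an empty optional means
// "invalid", i.e. the operation cannot be costed and must not be chosen.
using Cost = std::optional<unsigned>;

// Sketch of the getInterleaveGroupCost bail-out: interleave groups need
// a shufflevector-based reverse, which scalable vectors cannot yet do,
// so a scalable VF yields an invalid cost instead of a number.
Cost getInterleaveGroupCost(bool IsScalableVF, unsigned FixedVFCost) {
  if (IsScalableVF)
    return std::nullopt; // invalid: no scalable lowering available
  return FixedVFCost;
}
```

Treating scalarisation as infinitely expensive works the same way: the widening decision compares candidate costs, and an invalid/infinite entry guarantees the gather/scatter or masked load/store path wins whenever it is legal.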
I have added some simple masked load/store and gather/scatter tests,
including cases where we use gathers and scatters for conditional invariant
loads and stores.
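To make the "conditional invariant load" case concrete, here is an illustrative C++ source loop of the kind the new .ll tests cover (this function is a hypothetical example, not one of the added tests): the loaded address is loop-invariant, but the access only happens under a condition, so the vectorizer can implement it as a gather of a splatted pointer under the loop mask.

```cpp
// A load from a loop-invariant address (*src), executed only when
// cond[i] is nonzero. Because the access is conditional, it cannot be
// hoisted out of the loop; a masked gather of the splatted pointer
// lets it vectorize anyway.
void conditional_invariant_load(int *dst, const int *cond,
                                const int *src, int n) {
  for (int i = 0; i < n; ++i)
    if (cond[i])
      dst[i] = *src;
}
```

The conditional invariant store case is symmetric: a scatter to a splatted address, masked by the condition.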
Differential Revision: https://reviews.llvm.org/D95350