[all-commits] [llvm/llvm-project] 638378: [SVE][CodeGenPrepare] Sink address calculations th...
Paul Walker via All-commits
all-commits at lists.llvm.org
Wed Oct 11 05:20:22 PDT 2023
Branch: refs/heads/main
Home: https://github.com/llvm/llvm-project
Commit: 6383785bad524ce60208a4194fc724bc775605b7
https://github.com/llvm/llvm-project/commit/6383785bad524ce60208a4194fc724bc775605b7
Author: Paul Walker <paul.walker at arm.com>
Date: 2023-10-11 (Wed, 11 Oct 2023)
Changed paths:
M llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
A llvm/test/Transforms/CodeGenPrepare/AArch64/sink-gather-scatter-addressing.ll
Log Message:
-----------
[SVE][CodeGenPrepare] Sink address calculations that match SVE gather/scatter addressing modes. (#66996)
SVE supports scalar+vector and scalar+extw(vector) addressing modes.
However, the masked gather/scatter intrinsics take a vector of
addresses, which means address computations can be hoisted out of
loops. This is especially true for things like offsets, where the
true size of the offsets is lost by the time you get to code
generation. This is problematic because it forces the code generator
to legalise towards `<vscale x 2 x ty>` vectors that will not
maximise bandwidth if the main block datatype is in fact i32 or smaller.
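As a hedged illustration (register choices arbitrary, not taken from the patch itself), the two addressing modes look roughly like this in AArch64 SVE assembly:

```asm
// scalar + vector: 64-bit base in x0, 64-bit vector offsets in z1.d
ld1d { z0.d }, p0/z, [x0, z1.d, lsl #3]
// scalar + extw(vector): 32-bit offsets in z1.s sign-extended to 64 bits
ld1w { z0.s }, p0/z, [x0, z1.s, sxtw #2]
```

The second form is the interesting one: it lets four 32-bit elements per 128-bit granule stay packed, rather than legalising everything to two 64-bit lanes.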
This patch sinks GEPs and extends for cases where one of the above
addressing modes can be used.
NOTE: There are cases where it would be better to split the extend
in two with one half hoisted out of a loop and the other within the
loop. Whilst true, I think this change of default is still better
than before, because the extra extends are an improvement over being
forced to split a gather/scatter.
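A minimal IR sketch of the sinking (hypothetical names, assuming a 32-bit offset vector feeding a gather) might look like:

```llvm
; Before: the extend and GEP are hoisted out of the loop, so the
; gather only sees a <vscale x 4 x ptr> operand and the i32 width
; of the offsets is no longer visible at instruction selection.
;
; After: the extend and GEP are sunk next to the gather, so ISel
; can match the scalar+extw(vector) addressing mode, e.g.
; [x0, z0.s, sxtw #2].
loop:
  %ext   = sext <vscale x 4 x i32> %offs to <vscale x 4 x i64>
  %addrs = getelementptr float, ptr %base, <vscale x 4 x i64> %ext
  %v     = call <vscale x 4 x float> @llvm.masked.gather.nxv4f32.nxv4p0(
             <vscale x 4 x ptr> %addrs, i32 4,
             <vscale x 4 x i1> %mask, <vscale x 4 x float> poison)
```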