[llvm-dev] [RFC] intrinsics for load/store-with-length semantics
Eli Friedman via llvm-dev
llvm-dev at lists.llvm.org
Thu Aug 27 03:43:37 PDT 2020
“The vectorizer needs this” seems like a fair reason to add it to the IR.
Pattern-matching an llvm.masked.load with an llvm.get.active.lane.mask operand might not be that terrible? If that works, I’d prefer to go with that because we already have that codepath. Otherwise, adding a new intrinsic seems okay.
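For reference, the pattern being discussed would look roughly like this in IR (a sketch only; the <4 x i32> type, alignment, and value names are just illustrative):

  ; mask is active for lanes 0 .. %len-1, and feeds a masked load
  %mask = call <4 x i1> @llvm.get.active.lane.mask.v4i1.i32(i32 0, i32 %len)
  %val  = call <4 x i32> @llvm.masked.load.v4i32.p0v4i32(<4 x i32>* %addr, i32 4,
                                                         <4 x i1> %mask,
                                                         <4 x i32> undef)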
There’s a possibility that we’ll want a version of llvm.masked.load that takes both a length and a mask, eventually. See https://reviews.llvm.org/D57504 . Not completely sure how that should interact with this proposal.
-Eli
From: llvm-dev <llvm-dev-bounces at lists.llvm.org> On Behalf Of Hussain Kadhem via llvm-dev
Sent: Thursday, August 27, 2020 2:52 AM
To: llvm-dev at lists.llvm.org
Subject: [EXT] [llvm-dev] [RFC] intrinsics for load/store-with-length semantics
We propose introducing two new intrinsics: llvm.variable.length.load and llvm.variable.length.store.
We have implemented the infrastructure for defining and lowering these in this Phabricator patch: https://reviews.llvm.org/D86693
These represent the semantics of loading or storing a variable number of bytes to or from a fixed-width register;
in effect, a masked load or store whose active lanes form a single contiguous block.
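As a concrete illustration of that equivalence (a sketch only; the <4 x i32> type, alignment, and names are made up for this example), loading the first three elements of a four-element register corresponds to a masked load with the contiguous mask <1,1,1,0>:

  %v = call <4 x i32> @llvm.masked.load.v4i32.p0v4i32(<4 x i32>* %p, i32 4,
             <4 x i1> <i1 true, i1 true, i1 true, i1 false>,
             <4 x i32> undef)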
There are a few reasons for separately representing this kind of operation, even though, as noted, it can be expressed as a subset of masked loads and stores:
- For targets that have dedicated hardware support for this kind of operation, a separate representation makes it easier to generate an optimal lowering. We are currently working on enabling this for some PowerPC subtargets. In particular, some targets, including Power9, support this kind of operation but do not support masked loads and stores in general.
- Scalarization of this pattern can be done with a number of branches logarithmic in the width of the register, rather than the linear number required for general masked operations (see the sketch after this list).
- Scalarized residuals of vectorized loops tend to have exactly these semantics (tail-folding in particular), so this infrastructure can be used to make more targeted optimization decisions when lowering loop residuals. It also pulls the logic for representing and lowering such semantics out of the loop vectorizer, allowing for better separation of concerns. Our group is currently working on implementing some of these optimizations in the loop vectorizer.
- Representing these semantics with the current masked intrinsics would require introducing intermediate steps to generate the appropriate bitmasks and then detecting that pattern during lowering. This introduces nontrivial complexity that we want to avoid. If not all cases can be detected during lowering by inspecting the IR, expensive runtime checks would then have to be introduced.
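To make the scalarization point concrete, here is a rough sketch (not taken from our patch; the function name, types, and block names are made up for illustration) of how a store of the first %len elements of a <4 x i32> value could be expanded with a branch tree whose depth is logarithmic in the vector width, rather than one branch per lane:

define void @store_first_len(<4 x i32> %v, i32* %p, i32 %len) {
entry:
  %full = icmp uge i32 %len, 4
  br i1 %full, label %store4, label %partial

store4:                                  ; len >= 4: one full-width store
  %pv = bitcast i32* %p to <4 x i32>*
  store <4 x i32> %v, <4 x i32>* %pv, align 4
  br label %done

partial:                                 ; len is 0..3
  %ge2 = icmp uge i32 %len, 2
  br i1 %ge2, label %store01, label %lt2

store01:                                 ; len is 2 or 3: store lanes 0-1
  %lo = shufflevector <4 x i32> %v, <4 x i32> undef, <2 x i32> <i32 0, i32 1>
  %p01 = bitcast i32* %p to <2 x i32>*
  store <2 x i32> %lo, <2 x i32>* %p01, align 4
  %eq3 = icmp eq i32 %len, 3
  br i1 %eq3, label %store_lane2, label %done

store_lane2:                             ; len == 3: store lane 2
  %e2 = extractelement <4 x i32> %v, i32 2
  %p2 = getelementptr i32, i32* %p, i32 2
  store i32 %e2, i32* %p2, align 4
  br label %done

lt2:                                     ; len is 0 or 1
  %eq1 = icmp eq i32 %len, 1
  br i1 %eq1, label %store_lane0, label %done

store_lane0:                             ; len == 1: store lane 0
  %e0 = extractelement <4 x i32> %v, i32 0
  store i32 %e0, i32* %p, align 4
  br label %done

done:
  ret void
}

Along any path through this expansion at most three conditional branches execute (roughly log2(4) + 1), whereas the general masked-store expansion needs one branch per lane.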
Please refer to the Phabricator patch for our implementation, which includes intrinsic definitions, new SDAG nodes, and support for type widening and scalarization.