[llvm] 79f7630 - [LangRef] Clarify semantics of masked vector load/store (#82469)
via llvm-commits
llvm-commits at lists.llvm.org
Sat Aug 3 06:00:38 PDT 2024
Author: Ralf Jung
Date: 2024-08-03T15:00:35+02:00
New Revision: 79f7630e28589364ccf989a4a838f5dd74ce260a
URL: https://github.com/llvm/llvm-project/commit/79f7630e28589364ccf989a4a838f5dd74ce260a
DIFF: https://github.com/llvm/llvm-project/commit/79f7630e28589364ccf989a4a838f5dd74ce260a.diff
LOG: [LangRef] Clarify semantics of masked vector load/store (#82469)
Basically, these operations are equivalent to a loop that iterates over all
elements and, for the masked-on elements only, performs a `getelementptr`
(without `inbounds`!) followed by a `load`/`store`.
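For concreteness, here is a minimal IR sketch of that equivalence (not part of the patch; the function name `@masked_load_example` and the `<4 x float>` types are chosen only for illustration):

```llvm
; Illustrative only: a masked load of four floats. For each lane i with
; %mask[i] == 1, this behaves like a getelementptr (without 'inbounds') of
; i elements from %ptr followed by a scalar load; lanes with %mask[i] == 0
; are not accessed and yield the corresponding %passthru element instead.
define <4 x float> @masked_load_example(ptr %ptr, <4 x i1> %mask, <4 x float> %passthru) {
  %res = call <4 x float> @llvm.masked.load.v4f32.p0(ptr %ptr, i32 4, <4 x i1> %mask, <4 x float> %passthru)
  ret <4 x float> %res
}

declare <4 x float> @llvm.masked.load.v4f32.p0(ptr, i32 immarg, <4 x i1>, <4 x float>)
```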
Added:
Modified:
llvm/docs/LangRef.rst
Removed:
################################################################################
diff --git a/llvm/docs/LangRef.rst b/llvm/docs/LangRef.rst
index 326505043cc2d..b17e3c828ed3d 100644
--- a/llvm/docs/LangRef.rst
+++ b/llvm/docs/LangRef.rst
@@ -25182,7 +25182,10 @@ Semantics:
""""""""""
The '``llvm.masked.load``' intrinsic is designed for conditional reading of selected vector elements in a single IR operation. It is useful for targets that support vector masked loads and allows vectorizing predicated basic blocks on these targets. Other targets may support this intrinsic
differently, for example by lowering it into a sequence of branches that guard scalar load operations.
-The result of this operation is equivalent to a regular vector load instruction followed by a 'select' between the loaded and the passthru values, predicated on the same mask. However, using this intrinsic prevents exceptions on memory access to masked-off lanes.
+The result of this operation is equivalent to a regular vector load instruction followed by a 'select' between the loaded and the passthru values, predicated on the same mask, except that the masked-off lanes are not accessed.
+Only the masked-on lanes of the vector need to be inbounds of an allocation (but all these lanes need to be inbounds of the same allocation).
+In particular, using this intrinsic prevents exceptions on memory accesses to masked-off lanes.
+Masked-off lanes are also not considered accessed for the purpose of data races or ``noalias`` constraints.
::
@@ -25224,7 +25227,10 @@ Semantics:
""""""""""
The '``llvm.masked.store``' intrinsic is designed for conditional writing of selected vector elements in a single IR operation. It is useful for targets that support vector masked store and allows vectorizing predicated basic blocks on these targets. Other targets may support this intrinsic
differently, for example by lowering it into a sequence of branches that guard scalar store operations.
-The result of this operation is equivalent to a load-modify-store sequence. However, using this intrinsic prevents exceptions and data races on memory access to masked-off lanes.
+The result of this operation is equivalent to a load-modify-store sequence, except that the masked-off lanes are not accessed.
+Only the masked-on lanes of the vector need to be inbounds of an allocation (but all these lanes need to be inbounds of the same allocation).
+In particular, using this intrinsic prevents exceptions on memory accesses to masked-off lanes.
+Masked-off lanes are also not considered accessed for the purpose of data races or ``noalias`` constraints.
::
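For the store side, an analogous sketch (again illustrative, not part of the patch; names and types are placeholders). Per the wording added above, only the masked-on lanes are written, so only those lanes need to be inbounds of the (same) allocation:

```llvm
; Illustrative only: a masked store of four floats. Each lane i with
; %mask[i] == 1 is written as if by a getelementptr (without 'inbounds') of
; i elements from %ptr followed by a scalar store; lanes with %mask[i] == 0
; are not accessed at all, so they cannot fault, race, or conflict with
; 'noalias' constraints.
define void @masked_store_example(<4 x float> %value, ptr %ptr, <4 x i1> %mask) {
  call void @llvm.masked.store.v4f32.p0(<4 x float> %value, ptr %ptr, i32 4, <4 x i1> %mask)
  ret void
}

declare void @llvm.masked.store.v4f32.p0(<4 x float>, ptr, i32 immarg, <4 x i1>)
```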