[llvm] [hwasan] Optimize outlined memaccess for fixed shadow on Aarch64 (PR #88544)
Florian Mayer via llvm-commits
llvm-commits at lists.llvm.org
Fri Apr 12 11:27:36 PDT 2024
================
@@ -929,11 +929,33 @@ void HWAddressSanitizer::instrumentMemAccessOutline(Value *Ptr, bool IsWrite,
IRBuilder<> IRB(InsertBefore);
Module *M = IRB.GetInsertBlock()->getParent()->getParent();
- IRB.CreateCall(Intrinsic::getDeclaration(
- M, UseShortGranules
- ? Intrinsic::hwasan_check_memaccess_shortgranules
- : Intrinsic::hwasan_check_memaccess),
- {ShadowBase, Ptr, ConstantInt::get(Int32Ty, AccessInfo)});
+
+  // AArch64 makes it difficult to embed large constants (such as the shadow
+  // offset) in the code. Our intrinsic supports a 16-bit constant (to be
+  // left-shifted by 32 bits); this is not an onerous constraint, since
+  // 1) kShadowBaseAlignment == 32, and 2) an offset of 4 TB (1024 << 32) is
+  // representable, and ought to be good enough for anybody.
+  bool useFixedShadowIntrinsic = false;
+  if (TargetTriple.isAArch64() && ClMappingOffset.getNumOccurrences() > 0 &&
+      UseShortGranules) {
+    uint16_t offset_shifted = Mapping.Offset >> 32;
+    useFixedShadowIntrinsic =
+        ((uint64_t)offset_shifted << 32) == Mapping.Offset;
+  }
+
+ if (useFixedShadowIntrinsic)
----------------
fmayer wrote:
I was thinking the same, but if you look at my other comment, I think once we make the logic shorter, it's not necessary.
https://github.com/llvm/llvm-project/pull/88544