[llvm] [hwasan] Optimize outlined memaccess for fixed shadow on Aarch64 (PR #88544)
Thurston Dang via llvm-commits
llvm-commits at lists.llvm.org
Tue Apr 23 13:37:23 PDT 2024
================
@@ -930,11 +930,32 @@ void HWAddressSanitizer::instrumentMemAccessOutline(Value *Ptr, bool IsWrite,
   IRBuilder<> IRB(InsertBefore);
   Module *M = IRB.GetInsertBlock()->getParent()->getParent();
-  IRB.CreateCall(Intrinsic::getDeclaration(
-                     M, UseShortGranules
-                            ? Intrinsic::hwasan_check_memaccess_shortgranules
-                            : Intrinsic::hwasan_check_memaccess),
-                 {ShadowBase, Ptr, ConstantInt::get(Int32Ty, AccessInfo)});
+  bool useFixedShadowIntrinsic = false;
+  // The memaccess fixed shadow intrinsic is only supported on AArch64,
+  // which allows a 16-bit immediate to be left-shifted by 32.
+  // Since kShadowBaseAlignment == 32, and Linux by default will not
+  // mmap above 48-bits, practically any valid shadow offset is
+  // representable.
+  // In particular, an offset of 4TB (1024 << 32) is representable, and
+  // ought to be good enough for anybody.
+  if (TargetTriple.isAArch64() && ClMappingOffset.getNumOccurrences() > 0) {
----------------
thurstond wrote:
Updated in https://github.com/llvm/llvm-project/pull/88544/commits/cdbecc6603e835cd9ff2b10fcdd1ba5bd467a1b6
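
To make the reasoning in the quoted comment concrete, here is a minimal standalone sketch (the helper name is made up for illustration and is not the PR's actual code) of the representability check it describes: a fixed shadow offset can use the AArch64 fixed-shadow form only if it is expressible as a 16-bit immediate left-shifted by 32 bits.

// Hypothetical helper, for illustration only: checks whether a shadow
// offset fits the "16-bit immediate shifted left by 32" encoding that
// the quoted comment relies on.
#include <cassert>
#include <cstdint>

static bool isRepresentableFixedShadow(uint64_t Offset) {
  // kShadowBaseAlignment == 32 means the low 32 bits are zero, and an
  // offset below the 48-bit mmap limit keeps (Offset >> 32) within a
  // 16-bit immediate.
  return (Offset & 0xFFFFFFFFULL) == 0 && (Offset >> 32) <= 0xFFFFULL;
}

int main() {
  assert(isRepresentableFixedShadow(1024ULL << 32)); // 4 TiB: representable
  assert(!isRepresentableFixedShadow(1ULL << 20));   // not 2^32-aligned
  assert(!isRepresentableFixedShadow(1ULL << 48));   // exceeds 16-bit immediate
  return 0;
}

In the pass itself the offset comes from the -hwasan-mapping-offset option (ClMappingOffset in the diff); when it is representable, the constant can be folded into the outlined check instead of passing ShadowBase in a register.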
https://github.com/llvm/llvm-project/pull/88544