[llvm] [Asan][RISCV] Teach AddressSanitizer to support indexed load/store. (PR #100930)
Yeting Kuo via llvm-commits
llvm-commits at lists.llvm.org
Thu Aug 8 01:20:03 PDT 2024
================
@@ -110,6 +113,42 @@ bool RISCVTTIImpl::getMemoryRefInfo(SmallVectorImpl<MemoryRefInfo> &Interesting,
EVL, Stride);
return true;
}
+ case Intrinsic::riscv_vloxei_mask:
+ case Intrinsic::riscv_vluxei_mask:
+ case Intrinsic::riscv_vsoxei_mask:
+ case Intrinsic::riscv_vsuxei_mask:
+ HasMask = true;
+ [[fallthrough]];
+ case Intrinsic::riscv_vloxei:
+ case Intrinsic::riscv_vluxei:
+ case Intrinsic::riscv_vsoxei:
+ case Intrinsic::riscv_vsuxei: {
+ // Intrinsic interface (only listed ordered version):
+ // riscv_vloxei(merge, ptr, index, vl)
+ // riscv_vloxei_mask(merge, ptr, index, mask, vl, policy)
+ // riscv_vsoxei(val, ptr, index, vl)
+ // riscv_vsoxei_mask(val, ptr, index, mask, vl, policy)
+ bool IsWrite = II->getType()->isVoidTy();
+ Type *Ty = IsWrite ? II->getArgOperand(0)->getType() : II->getType();
+ const auto *RVVIInfo = RISCVVIntrinsicsTable::getRISCVVIntrinsicInfo(IntNo);
+ unsigned VLIndex = RVVIInfo->VLOperand;
+ unsigned PtrOperandNo = VLIndex - 2 - HasMask;
+ // RVV indexed loads/stores zero-extend offset operands which are narrower
+ // than XLEN to XLEN.
+ Value *OffsetOp = II->getArgOperand(PtrOperandNo + 1);
+ Type *OffsetTy = OffsetOp->getType();
+ if (OffsetTy->getScalarType()->getIntegerBitWidth() < ST->getXLen()) {
+ VectorType *OrigType = cast<VectorType>(OffsetTy);
+ Type *ExtendTy = VectorType::get(XLenIntTy, OrigType);
+ OffsetOp = IB.CreateZExt(OffsetOp, ExtendTy);
----------------
yetingk wrote:
Adding an offset-extension query would require MemoryRefInfo to carry 3 extra members for indexed loads/stores, since we would also need the extension kind (signed/unsigned) and the extension size. Do you think a target-specific extension in generic code, like b0edafd, is acceptable?
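For concreteness, a rough sketch of what those extra members might look like (a sketch only; the field names are hypothetical, not from this patch):

  // Hypothetical additions to MemoryRefInfo for indexed accesses;
  // names are illustrative, not part of the actual patch.
  struct MemoryRefInfo {
    // ... existing members ...

    // Vector of offsets added to the base pointer, or null for
    // non-indexed accesses.
    Value *OffsetOperand = nullptr;
    // How offsets narrower than the target width are extended;
    // RVV indexed accesses zero-extend them.
    enum class OffsetExtKind { None, ZExt, SExt };
    OffsetExtKind OffsetExt = OffsetExtKind::None;
    // Bit width the offsets are extended to (XLEN for RVV).
    unsigned OffsetExtBits = 0;
  };

Something along these lines would let the generic code describe RVV's zero-extension to XLEN through ordinary fields instead of hard-coding target behavior, though it does grow the struct for every target.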
https://github.com/llvm/llvm-project/pull/100930