[PATCH] D111631: [AArch64][SVE] Fix handling of stack protection with SVE
Sander de Smalen via Phabricator via llvm-commits
llvm-commits at lists.llvm.org
Thu Nov 11 03:59:50 PST 2021
sdesmalen added a comment.
Hi @john.brawn, thanks for updating your patch; this approach seems like a good improvement! I've just left a few more comments.
================
Comment at: llvm/lib/CodeGen/PrologEpilogInserter.cpp:969
+ "already set.");
+ } else if (!MFI.isObjectPreAllocated(MFI.getStackProtectorIndex())) {
llvm_unreachable(
----------------
nit: unnecessary curly braces
================
Comment at: llvm/lib/CodeGen/PrologEpilogInserter.cpp:970
+ } else if (!MFI.isObjectPreAllocated(MFI.getStackProtectorIndex())) {
llvm_unreachable(
"Stack protector not pre-allocated by LocalStackSlotPass.");
----------------
Do we need some way to ensure that the LocalStackSlotAllocation pass doesn't pre-allocate the stack protector value if we know it will be allocated to a different StackID?
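For example, something along these lines (a minimal sketch only; whether the pre-allocation code in that pass looks like this, and whether this is the right place for the check, are assumptions on my part):

  // Sketch: only pre-allocate the stack protector slot if it still lives in
  // the default stack area; if it has been given another StackID (e.g.
  // TargetStackID::ScalableVector), leave it for the target to lay out.
  int SPFI = MFI.getStackProtectorIndex();
  if (SPFI >= 0 && MFI.getStackID(SPFI) == TargetStackID::Default)
    AdjustStackOffset(MFI, SPFI, Offset, StackGrowsDown, MaxAlign);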
================
Comment at: llvm/lib/CodeGen/StackProtector.cpp:175
if (MemLoc.hasValue() && MemLoc->Size.hasValue() &&
- MemLoc->Size.getValue() > AllocSize)
+ MemLoc->Size.getValue() > AllocSize.getKnownMinValue())
return true;
----------------
This should use `TypeSize::isKnownGT(TypeSize::getFixed(MemLoc->Size.getValue()), AllocSize)`.
I doubt it would ever return `true` when `AllocSize` is scalable though, because from what I can see in the code, `MemLoc.hasValue()` will return `false` if `I` accesses a scalable size, since `MemoryLocation` will not be precise.
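Applied to the snippet above, that would look something like this (only the comparison changes, everything else stays as-is):

  // Compare the fixed MemoryLocation size against the (possibly scalable)
  // alloca size without collapsing the scalable size to its known minimum.
  if (MemLoc.hasValue() && MemLoc->Size.hasValue() &&
      TypeSize::isKnownGT(TypeSize::getFixed(MemLoc->Size.getValue()),
                          AllocSize))
    return true;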
================
Comment at: llvm/lib/CodeGen/StackProtector.cpp:214
// Adjust AllocSize to be the space remaining after this offset.
- if (HasAddressTaken(I, AllocSize - Offset.getLimitedValue()))
+ TypeSize OffsetSize = TypeSize::Fixed(Offset.getLimitedValue());
+ if (HasAddressTaken(I, AllocSize - OffsetSize))
----------------
If the GEP is indexing into a scalable vector type, then OffsetSize cannot be a fixed size, can it?
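If that can happen, one conservative (purely hypothetical) option would be to give up on folding the offset and just treat the address as taken:

  // Sketch: a GEP into a scalable type has a vscale-dependent byte offset,
  // which cannot be represented as TypeSize::Fixed, so bail out conservatively.
  if (isa<ScalableVectorType>(
          cast<GetElementPtrInst>(I)->getSourceElementType()))
    return true;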
================
Comment at: llvm/lib/CodeGen/StackProtector.cpp:340
bool IsLarge = false;
- if (ContainsProtectableArray(AI->getAllocatedType(), IsLarge, Strong)) {
+ Type *AllocType = AI->getAllocatedType();
+ if (ContainsProtectableArray(AllocType, IsLarge, Strong)) {
----------------
unnecessary change?
================
Comment at: llvm/lib/CodeGen/StackProtector.cpp:356
- if (Strong && HasAddressTaken(AI, M->getDataLayout().getTypeAllocSize(
- AI->getAllocatedType()))) {
+ TypeSize AllocSize = M->getDataLayout().getTypeAllocSize(AllocType);
+ if (Strong && HasAddressTaken(AI, AllocSize)) {
----------------
Same here; perhaps just move this into an NFC patch and submit it separately.
================
Comment at: llvm/lib/Target/AArch64/AArch64FrameLowering.cpp:2960
+ MFI.setObjectAlignment(MFI.getStackProtectorIndex(), Align(16));
+ }
+
----------------
I think this patch also needs some code in `determineSVEStackObjectOffsets` to ensure that the stack protector object is ordered before all other objects when allocating their offsets. (This is probably best tested with an MIR test.)
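Roughly something like the following (sketch only; `AssignObject` is a placeholder for however that function actually assigns SVE offsets):

  // Sketch: hand out the first scalable offset to the stack protector so it
  // is laid out before every other SVE object.
  int SPFI = MFI.getStackProtectorIndex();
  if (SPFI >= 0 && MFI.getStackID(SPFI) == TargetStackID::ScalableVector)
    Offset = AssignObject(MFI, SPFI, Offset); // hypothetical helper
  for (int FI = 0, E = MFI.getObjectIndexEnd(); FI != E; ++FI) {
    if (FI == SPFI || MFI.getStackID(FI) != TargetStackID::ScalableVector)
      continue;
    Offset = AssignObject(MFI, FI, Offset);   // hypothetical helper
  }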
================
Comment at: llvm/test/CodeGen/AArch64/stack-guard-sve.ll:16
+ %x = alloca <vscale x 4 x float>, align 16
+ store <vscale x 4 x float> shufflevector (<vscale x 4 x float> insertelement (<vscale x 4 x float> poison, float 0.000000e+00, i32 0), <vscale x 4 x float> poison, <vscale x 4 x i32> zeroinitializer), <vscale x 4 x float>* %x, align 16
+ %0 = load <vscale x 4 x float>, <vscale x 4 x float>* %x, align 16
----------------
nit: To simplify the test, you could also write:
store <vscale x 4 x float> zeroinitializer, <vscale x 4 x float>* %x, align 16
CHANGES SINCE LAST ACTION
https://reviews.llvm.org/D111631/new/
https://reviews.llvm.org/D111631