[PATCH] D122431: Basic support for posix_memalign / __builtin_object_size interaction
Nikita Popov via Phabricator via llvm-commits
llvm-commits at lists.llvm.org
Tue Apr 5 06:59:04 PDT 2022
nikic added inline comments.
================
Comment at: llvm/include/llvm/Analysis/MemoryBuiltins.h:179
+ const TargetLibraryInfo *TLI, AAResults *AA,
+ bool MustSucceed);
----------------
Make it an optional parameter instead?
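Roughly what I mean, as a self-contained sketch with stand-in types (not the patch's actual declarations; the placeholder body is purely illustrative):

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical stand-ins for the real LLVM types, just to show the shape.
struct TargetLibraryInfo {};
struct AAResults {};

// Defaulting AA to nullptr keeps existing callers unchanged, while new
// callers can opt in to the alias-analysis-assisted lookup.
uint64_t getObjectSize(const void *Ptr, const TargetLibraryInfo *TLI,
                       AAResults *AA = nullptr, bool MustSucceed = false) {
  (void)Ptr;
  (void)TLI;
  (void)MustSucceed;
  // Placeholder body: just report whether AA was supplied.
  return AA ? 1 : 0;
}
```

That way call sites that don't care about the new behavior don't need to be touched.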
================
Comment at: llvm/lib/Analysis/MemoryBuiltins.cpp:828
+ if (!Visited.insert(&BB).second)
+ return unknown();
+
----------------
With the getUniquePredecessor() variant noted before this wouldn't matter, but if you do explore branching cases, then I'm not sure returning unknown here makes sense. Consider a simple `store; if (...) { ... } else { ... } load;` case. The upwards walk would hit the store along one path first, then hit it again along the other, and return unknown at that point.
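In source form, the problematic shape is something like this (an illustrative sketch, not code from the patch):

```cpp
#include <cassert>
#include <cstdlib>

// Illustrative diamond CFG: the store to *p dominates the load, but a
// backward walk from the load reaches the store along *two* predecessor
// paths (the then- and else-blocks). Bailing out with unknown() on a
// revisited block would wrongly give up on this easy case.
int diamond(bool cond) {
  int *p = (int *)malloc(sizeof(int));
  *p = 42; // store; dominates the load below
  if (cond) {
    // ... then-block, no writes through p ...
  } else {
    // ... else-block, no writes through p ...
  }
  int v = *p; // load; the walk goes up through both predecessors
  free(p);
  return v;
}
```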
================
Comment at: llvm/lib/Analysis/MemoryBuiltins.cpp:843
+ if (!Options.AA) {
+ if (SI->getOperand(1) == Load.getOperand(0))
+ return compute(SI->getOperand(0));
----------------
Use `getPointerOperand()` here.
Also probably need to check `SI->getValueOperand()->getType()->isPointerTy()`, unless compute() handles non-pointer values gracefully.
================
Comment at: llvm/lib/Analysis/MemoryBuiltins.cpp:848
+ }
+ AliasResult AR = Options.AA->alias(SI->getOperand(1), &Load);
+ switch ((AliasResult::Kind)AR) {
----------------
This should be checking aliasing with `Load.getPointerOperand()`, *not* `&Load` itself.
================
Comment at: llvm/lib/Analysis/MemoryBuiltins.cpp:895
+ // Is the error status of posix_memalign correctly checked? If not it
+ // would be incorrect to assume it succeeds and bos doesn't see the
+ // previous value.
----------------
bos -> the load
================
Comment at: llvm/lib/Analysis/MemoryBuiltins.cpp:916
+ PredecessorSizeOffsets.push_back(findLoadSizeOffset(
+ Load, *PredBB, BasicBlock::iterator(PredBB->getTerminator()), Visited));
+
----------------
Should queue on a worklist rather than recursing. For the posix_memalign use-case, it might make sense to just restrict to `getUniquePredecessor()`, as we probably don't care about branching cases there?
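For reference, an explicit-worklist walk might be shaped like this (a simplified model where blocks are just indices with a predecessor list, not the patch's types):

```cpp
#include <cassert>
#include <set>
#include <vector>

// Simplified model: each "block" is an index, Preds[BB] lists its
// predecessors. Walks predecessors from Start with an explicit worklist
// instead of recursion, so deep CFGs cannot overflow the call stack.
std::set<int> walkPreds(const std::vector<std::vector<int>> &Preds,
                        int Start) {
  std::set<int> Visited;
  std::vector<int> Worklist{Start};
  while (!Worklist.empty()) {
    int BB = Worklist.back();
    Worklist.pop_back();
    if (!Visited.insert(BB).second)
      continue; // already explored along another path
    for (int Pred : Preds[BB])
      Worklist.push_back(Pred);
  }
  return Visited;
}
```

The per-block state that the recursive version keeps in the call frame would move into the worklist entries.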
================
Comment at: llvm/lib/Analysis/MemoryBuiltins.cpp:593
+ if (AA)
+ EvalOptions.AA = AA;
+
----------------
Can assign unconditionally.
================
Comment at: llvm/lib/Analysis/MemoryBuiltins.cpp:831
+ auto Unknown = [this]() {
+ ++ObjectVisitorLoad;
+ return unknown();
----------------
This should probably only be counted once, at the top level?
================
Comment at: llvm/lib/Analysis/MemoryBuiltins.cpp:836
+ do {
+ Instruction &I = *From;
+ if (!I.mayWriteToMemory())
----------------
We should be checking against a budget here, to avoid scanning 150k instructions in search of a matching store...
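The budget pattern I have in mind, as a standalone sketch (the constant name and the instruction model are hypothetical; the real patch would pick a tuned limit):

```cpp
#include <cassert>
#include <vector>

// Hypothetical cap on how many instructions the backward scan may visit
// before giving up.
constexpr unsigned ScanBudget = 100;

// Simplified model: scan backwards through "instructions" (here just ints,
// where a negative value stands for the matching store). Returns the index
// of the match, or -1 if the budget runs out first.
int findStoreWithBudget(const std::vector<int> &Insts, unsigned From) {
  unsigned Budget = ScanBudget;
  for (int I = From; I >= 0; --I) {
    if (Budget-- == 0)
      return -1; // budget exhausted: treat the size as unknown
    if (Insts[I] < 0)
      return I; // found the matching store
  }
  return -1;
}
```

Giving up just means falling back to unknown(), which is what we'd return today anyway.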
CHANGES SINCE LAST ACTION
https://reviews.llvm.org/D122431/new/
https://reviews.llvm.org/D122431