[PATCH] D122431: Basic support for posix_memalign / __builtin_object_size interaction
Nikita Popov via Phabricator via llvm-commits
llvm-commits at lists.llvm.org
Wed Apr 6 04:16:37 PDT 2022
nikic added inline comments.
================
Comment at: llvm/lib/Analysis/MemoryBuiltins.cpp:827
+ unsigned &ScannedInstCount) {
+ constexpr unsigned MaxInstsToScan = 1024;
+
----------------
I think this limit is still overly large -- our usual starting value for limits like this is 32. If you think that's too low to be practically useful, I'd go with 128 here.
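To illustrate the budget idea (not the patch's actual code -- the names `scanWithBudget` and the `std::vector<int>` stand-in for an instruction list are hypothetical), a walk with a scan counter shared across calls looks roughly like this, using the lower starting limit suggested above:

```cpp
#include <cassert>
#include <vector>

// Hypothetical sketch: a bounded scan that conservatively gives up once
// the shared budget is exhausted, mirroring the ScannedInstCount pattern.
constexpr unsigned MaxInstsToScan = 32; // the lower starting limit suggested

// Returns true if Target was found before the budget ran out.
bool scanWithBudget(const std::vector<int> &Insts, int Target,
                    unsigned &ScannedInstCount) {
  for (int I : Insts) {
    if (++ScannedInstCount > MaxInstsToScan)
      return false; // budget exhausted: conservatively give up
    if (I == Target)
      return true;
  }
  return false;
}
```

The point of starting the limit low is that a value beyond the budget is simply reported as not found, which is always a safe (if pessimistic) answer for object-size queries.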
================
Comment at: llvm/lib/Analysis/MemoryBuiltins.cpp:844
+ do {
+ Instruction &I = *From;
+
----------------
Should have an `if (I.isDebugOrPseudoInst()) continue` check here, to make sure debug insts can't affect optimization. (Before the limit check.)
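A small self-contained sketch of why the skip must come before the limit check (the `Inst`/`IsDebug` types here are hypothetical stand-ins, not LLVM classes): if debug instructions are charged against the budget, `-g` and non-`-g` builds scan different numbers of real instructions and can optimize differently.

```cpp
#include <cassert>
#include <vector>

// Hypothetical stand-in for an instruction that may be a debug marker.
struct Inst { bool IsDebug; };

// Counts how many *real* instructions get scanned: debug markers are
// skipped before the budget check, so they never consume budget.
unsigned countRealInstsScanned(const std::vector<Inst> &Insts,
                               unsigned MaxInstsToScan) {
  unsigned Scanned = 0;
  for (const Inst &I : Insts) {
    if (I.IsDebug)
      continue; // skipped before the limit check: debug info is free
    if (++Scanned > MaxInstsToScan)
      break;
  }
  return Scanned;
}
```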
================
Comment at: llvm/lib/Analysis/MemoryBuiltins.cpp:853
+ if (auto *SI = dyn_cast<StoreInst>(&I)) {
+ // Minimal work if we don't have alias analysis.
+ if (!Options.AA) {
----------------
Just wondering whether it might be better to bail out of the load-handling code entirely when we don't have AA, rather than carrying extra code to support both paths. I don't particularly mind, but given that we essentially only care about the one lowerObjectSizeCall() usage that does have AA, the non-AA support may not be particularly useful, and we'd have to add tests for a different caller that exercises these non-AA codepaths.
================
Comment at: llvm/lib/Analysis/MemoryBuiltins.cpp:866
+ case AliasResult::MustAlias:
+ return Known(compute(SI->getOperand(0)));
+ default:
----------------
nit: getValueOperand()
================
Comment at: llvm/lib/Analysis/MemoryBuiltins.cpp:878
+
+ // posix_memalign case. It assumes that posix_memalign doesn't fail :-/
+ LibFunc TLIFn;
----------------
Second part of this comment is now outdated :)
================
Comment at: llvm/lib/Analysis/MemoryBuiltins.cpp:896
+ AliasResult AR =
+ Options.AA->alias(CB->getOperand(0), Load.getOperand(0));
+ switch ((AliasResult::Kind)AR) {
----------------
nit: Load.getPointerOperand()
================
Comment at: llvm/lib/Analysis/MemoryBuiltins.cpp:930
+ Load, *PredBB, BasicBlock::iterator(PredBB->getTerminator()),
+ VisitedBlocks, ScannedInstCount));
+
----------------
As a small optimization, we could short-circuit on unknown here.
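A sketch of the suggested short-circuit (types and names are hypothetical, with `std::optional` standing in for the known/unknown size result): once any predecessor's result is unknown, the merged result can never become known again, so the remaining predecessors need not be visited.

```cpp
#include <cassert>
#include <optional>
#include <vector>

// Hypothetical stand-in for a size result; nullopt means "unknown".
using SizeResult = std::optional<unsigned>;

// Merge the per-predecessor results, bailing out as soon as one is
// unknown. The merge itself is simplified: differing sizes give up too.
SizeResult combinePredecessors(const std::vector<SizeResult> &PredResults) {
  SizeResult Merged;
  bool First = true;
  for (const SizeResult &R : PredResults) {
    if (!R)
      return std::nullopt; // short-circuit on unknown
    if (First) {
      Merged = R;
      First = false;
    } else if (Merged != R) {
      return std::nullopt; // conflicting sizes: give up (simplified)
    }
  }
  return Merged;
}
```

Besides saving work, short-circuiting also avoids charging the scan budget for predecessors whose result cannot change the outcome.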
================
Comment at: llvm/lib/Analysis/MemoryBuiltins.cpp:843
+ if (!Options.AA) {
+ if (SI->getOperand(1) == Load.getOperand(0))
+ return compute(SI->getOperand(0));
----------------
serge-sans-paille wrote:
> nikic wrote:
> > Use `getPointerOperand()` here.
> >
> > Also probably need to check `SI->getValueOperand()->getType()->isPointerTy()`, unless compute() handles non-pointer values gracefully.
> I *think* this is already guaranteed by the fact that we match against the load pointer operand, which is of type `T**`
This isn't guaranteed with opaque pointers -- but also for the more general MustAlias case below, which will strip pointer casts.
================
Comment at: llvm/lib/Analysis/MemoryBuiltins.cpp:831
+ auto Unknown = [this]() {
+ ++ObjectVisitorLoad;
+ return unknown();
----------------
serge-sans-paille wrote:
> nikic wrote:
> > This should probably only be counted once on the top level?
> This is used to count the "Number of load instructions with unsolved size and offset", thus the detailed counting.
Right, but currently you're incrementing this once per unknown predecessor, rather than once per load, no?
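To make the counting concern concrete (a hypothetical sketch, not the patch's code -- `walkPredecessors`/`computeLoadSize` are illustrative names): bumping the statistic inside the recursion counts one unsized load once per unknown predecessor, whereas bumping it at the top level counts it exactly once.

```cpp
#include <cassert>

// Hypothetical statistic: number of loads we failed to size.
static unsigned ObjectVisitorLoad = 0;

// Recursive helper simulating a walk where every leaf predecessor is
// unknown; it does NOT touch the statistic.
bool walkPredecessors(unsigned Depth) {
  if (Depth == 0)
    return false; // unknown leaf
  bool A = walkPredecessors(Depth - 1);
  bool B = walkPredecessors(Depth - 1);
  return A && B;
}

// Top-level entry: increment the statistic once per load, not once per
// unknown predecessor encountered inside the recursion.
bool computeLoadSize(unsigned Depth) {
  bool Known = walkPredecessors(Depth);
  if (!Known)
    ++ObjectVisitorLoad; // counted exactly once for this load
  return Known;
}
```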
CHANGES SINCE LAST ACTION
https://reviews.llvm.org/D122431/new/
https://reviews.llvm.org/D122431