[PATCH] D94106: [Local] Treat calls that may not return as being alive (WIP).

Florian Hahn via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Fri Jan 15 06:16:52 PST 2021


fhahn updated this revision to Diff 316928.
fhahn added a comment.
Herald added subscribers: lxfind, okura, bbn, sstefan1, kuter, kerbowa, aheejin, jgravelle-google, sbc100, nhaehnle, jvesely, dschuff.
Herald added a reviewer: sstefan1.
Herald added a reviewer: baziotis.



In D94106#2480386 <https://reviews.llvm.org/D94106#2480386>, @nikic wrote:

> In D94106#2480348 <https://reviews.llvm.org/D94106#2480348>, @fhahn wrote:
>
>> In D94106#2480229 <https://reviews.llvm.org/D94106#2480229>, @jdoerfert wrote:
>>
>>> In D94106#2480220 <https://reviews.llvm.org/D94106#2480220>, @nikic wrote:
>>>
>>>> Happy to see this issue finally fixed :)
>>>>
>>>> I think the general approach here should be more along the lines of inferring WillReturn inside FuncAttrs, and then just checking the WillReturn attribute here. Especially for languages that have no forward-progress guarantee, it is important that WillReturn actually gets inferred somewhere; otherwise intrinsics will be the only thing that sets the attribute.
>>>
>>> Maybe if we turn on the attributor for a few selected attributes only?
>>
>> I think it would be good to approach this incrementally: start by gradually making the behavior 'more' correct than it is at the moment, while not regressing performance. Treating functions without side effects and without `mustprogress` as 'alive' seems like a relatively safe thing to do as a start.
>>
>> We should also improve the inference, but to make progress it might be desirable not to predicate one on the other to start with. I realize that having good-enough inference to start with is the more principled approach, and I am not sure whether the desire to make progress outweighs that in the case at hand.
>
> I like incremental improvements, but I don't think this really qualifies. What you are doing here is not making the behavior more correct while not regressing performance -- you are making it more correct while not regressing performance //for languages with a forward-progress guarantee//. What's likely going to happen is that this lands because it works well enough for C++, and then the remaining and technically harder part gets forgotten...

There should be almost no impact for C++ (because all readnone functions should also be mustprogress -> willreturn), but there will be some impact on other languages (e.g. C). I don't think C++ is the only language for which performance regressions will block a patch: we have several large C benchmarks in our internal tracking, for example, and in general we report regressions upstream and consider them a reason for reverting. As for other languages, ideally they would closely track `main` to be able to spot & flag issues ASAP.
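
For reference, here is a rough sketch of the inference rule this relies on (an illustration, not the actual FunctionAttrs code and not part of this patch): a function that is readnone and mustprogress cannot loop forever without undefined behavior, so it is guaranteed to return and can be marked willreturn.

  #include "llvm/IR/Attributes.h"
  #include "llvm/IR/Function.h"

  // Sketch only: infer willreturn from readnone + mustprogress.
  static bool inferWillReturn(llvm::Function &F) {
    using namespace llvm;
    if (F.hasFnAttribute(Attribute::WillReturn))
      return false; // Already known to return.
    // readnone: no side effects; mustprogress: an infinite loop without
    // observable progress would be UB. Together they imply the function
    // must eventually return.
    if (F.hasFnAttribute(Attribute::MustProgress) && F.doesNotAccessMemory()) {
      F.addFnAttr(Attribute::WillReturn);
      return true;
    }
    return false;
  }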

In any case, I went ahead and updated this to use `willreturn` directly, as suggested. There are a few pending patches that add `willreturn` in a few key places; with those, I think this should be very similar to the original patch, but split up properly. This is probably the best we can do to start with. In terms of performance impact, I expect it to be low for optimization levels with aggressive inlining.
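
To sketch the intent of the updated check (this is an illustration of the condition, not the verbatim Local.cpp change): a call is only treated as trivially dead if, besides being unused and free of side effects, it is known to return.

  #include "llvm/IR/Attributes.h"
  #include "llvm/IR/InstrTypes.h"

  // Sketch only: without willreturn, removing the call could delete a
  // callee that never returns (e.g. an infinite loop), which is observable.
  static bool isTriviallyDeadCall(const llvm::CallBase &CB) {
    using namespace llvm;
    return CB.use_empty() && !CB.mayHaveSideEffects() && !CB.mayThrow() &&
           CB.hasFnAttr(Attribute::WillReturn);
  }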

There are still a few intrinsics that require adding `willreturn` for the tests to pass. Some updates related to that are included in the patch, but I can split them off. Once we are happy with the patch, I'll send a heads-up to llvm-dev to let people know they should make sure `willreturn` is added, if possible, to intrinsics for their targets and frontend languages.
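
Targets would typically do that in their Intrinsics*.td definitions (as this patch does for ARM and WebAssembly); for runtime functions a frontend declares itself, the equivalent through the C++ API is roughly the following (illustration only, the helper name here is made up):

  #include "llvm/ADT/StringRef.h"
  #include "llvm/IR/Attributes.h"
  #include "llvm/IR/Function.h"
  #include "llvm/IR/Module.h"

  // Illustration only: mark a callee the frontend knows will always return.
  void markCalleeWillReturn(llvm::Module &M, llvm::StringRef Name) {
    if (llvm::Function *F = M.getFunction(Name))
      F->addFnAttr(llvm::Attribute::WillReturn);
  }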


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D94106/new/

https://reviews.llvm.org/D94106

Files:
  llvm/include/llvm/IR/InstrTypes.h
  llvm/include/llvm/IR/Intrinsics.td
  llvm/include/llvm/IR/IntrinsicsARM.td
  llvm/include/llvm/IR/IntrinsicsWebAssembly.td
  llvm/lib/Transforms/Utils/Local.cpp
  llvm/test/Analysis/TypeBasedAliasAnalysis/functionattrs.ll
  llvm/test/CodeGen/AMDGPU/inline-attr.ll
  llvm/test/Transforms/Attributor/ArgumentPromotion/fp80.ll
  llvm/test/Transforms/Attributor/align.ll
  llvm/test/Transforms/Attributor/nocapture-1.ll
  llvm/test/Transforms/Attributor/nocapture-2.ll
  llvm/test/Transforms/Attributor/norecurse.ll
  llvm/test/Transforms/Attributor/range.ll
  llvm/test/Transforms/BDCE/basic.ll
  llvm/test/Transforms/CodeGenPrepare/X86/delete-assume-dead-code.ll
  llvm/test/Transforms/Coroutines/coro-split-00.ll
  llvm/test/Transforms/Coroutines/coro-split-hidden.ll
  llvm/test/Transforms/Coroutines/no-suspend.ll
  llvm/test/Transforms/DCE/calls-errno.ll
  llvm/test/Transforms/DeadStoreElimination/MSSA/simple.ll
  llvm/test/Transforms/DeadStoreElimination/MemDepAnalysis/DeleteThrowableInst.ll
  llvm/test/Transforms/DeadStoreElimination/MemDepAnalysis/simple.ll
  llvm/test/Transforms/InferFunctionAttrs/norecurse_debug.ll
  llvm/test/Transforms/Inline/dead-calls-willreturn.ll
  llvm/test/Transforms/InstCombine/constant-fold-libfunc.ll
  llvm/test/Transforms/InstCombine/nothrow.ll
  llvm/test/Transforms/InstSimplify/ConstProp/calls-math-finite.ll
  llvm/test/Transforms/InstSimplify/ConstProp/calls.ll
  llvm/test/Transforms/InstSimplify/ConstProp/math-1.ll
  llvm/test/Transforms/InstSimplify/ConstProp/math-2.ll
  llvm/test/Transforms/InstSimplify/ConstProp/rint.ll
  llvm/test/Transforms/InstSimplify/ConstProp/round.ll
  llvm/test/Transforms/InstSimplify/ConstProp/trunc.ll
  llvm/test/Transforms/InstSimplify/remove-dead-call.ll
  llvm/test/Transforms/InstSimplify/returned.ll
  llvm/test/Transforms/MemCpyOpt/memcpy.ll
  llvm/test/Transforms/NewGVN/eliminate-callsite-inline.ll
  llvm/test/Transforms/OpenMP/parallel_deletion.ll
  llvm/test/Transforms/Reassociate/erase_inst_made_change.ll

-------------- next part --------------
A non-text attachment was scrubbed...
Name: D94106.316928.patch
Type: text/x-patch
Size: 87253 bytes
Desc: not available
URL: <http://lists.llvm.org/pipermail/llvm-commits/attachments/20210115/46676780/attachment.bin>

