[PATCH] D78614: [AssumeBuilder] Add assume-simplify pass

Johannes Doerfert via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Mon May 4 19:23:03 PDT 2020


jdoerfert added a comment.

In D78614#2018709 <https://reviews.llvm.org/D78614#2018709>, @Tyker wrote:

> On a separate note:
>  I started testing on real-world code with enable-knowledge-retention.
>  After a few fixes the compiler didn't crash on any application I tried, and there doesn't seem to be any mis-compile either.


I wanted to get a feeling for it but ran into this:

  opt: /data/src/llvm-project/llvm/lib/Analysis/AssumeBundleQueries.cpp:154: llvm::RetainedKnowledge llvm::getKnowledgeForValue(const llvm::Value*, llvm::ArrayRef<llvm::Attribute::AttrKind>, llvm::AssumptionCache*, llvm::function_ref<bool(llvm::RetainedKnowledge, llvm::Instruction*)>): Assertion `!!RKCheck && "invalid Assumption cache"' failed
  
  2. Running pass 'Combine redundant instructions' on function '@clause_AllIndexedClauses'

I was running O3 on the LLVM test-suite SPASS clause.c file.

> However, the compile-time cost is very high: originally around +15%; after some tweaking I lowered it to 9%.
> 
> From what I saw, the slowdown is caused by salvageKnowledge generating lots of extra instructions, and many parts of the optimizer scale with the number of instructions.
>  Here is the time taken by the parts I changed/added in the tweaked version:
>  0.43% in salvageKnowledge
>  0.01% in assume queries
>  0.70% in the assume-simplify pass (I added it in 4 places in the pipeline as part of the tweaks)
>  Although they could probably be optimized, they are not a big part of the slowdown.
> 
> I still have a few ideas to try out, and there are also lots of arbitrary decisions I made during development that could be guided by measurement.
>  So the current overhead can surely be lowered, but I don't think I can get it to a reasonable level without significantly limiting the preserved information.
> 
> I haven't started measuring the runtime impact, but I don't expect it to be good yet, since droppable uses are not widely in place.
> 
> Any thoughts?

First, we should profile where assumes are created, how many, and with what information (=attributes).
I'm usually not too concerned with the initial numbers as long as we haven't addressed the low-hanging fruit.
We should also verify that it is really the number of instructions that is the problem. It could be other things, e.g., the additional information or just the handling of the operand bundles.
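
For the first point, a minimal sketch of what I mean, assuming we simply hang Statistic counters off the assume creation in salvageKnowledge (the counter names below are illustrative, not part of the patch):

  #include "llvm/ADT/Statistic.h"
  #include "llvm/IR/Instructions.h"
  using namespace llvm;

  #define DEBUG_TYPE "assume-builder"

  STATISTIC(NumAssumesBuilt, "Number of llvm.assume calls generated");
  STATISTIC(NumBundlesInAssumes, "Operand bundles attached to generated assumes");

  // Hypothetical hook: call this on every assume that salvageKnowledge
  // creates, then inspect the numbers via -stats to see how many assumes we
  // emit and how much information they carry.
  static void recordBuiltAssume(const CallInst &Assume) {
    ++NumAssumesBuilt;
    NumBundlesInAssumes += Assume.getNumOperandBundles();
  }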

We should save some instructions by moving alignment assumptions to operand bundles, but that is something else.
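
To illustrate the idea (a hedged sketch, not something this patch does): the alignment fact could be attached as an operand bundle on llvm.assume instead of the classic ptrtoint/and/icmp sequence, assuming the bundle queries are taught an "align" tag:

  #include "llvm/IR/IRBuilder.h"
  #include "llvm/IR/Intrinsics.h"
  #include "llvm/IR/Module.h"
  using namespace llvm;

  // Hypothetical helper: record "Ptr is aligned to Alignment" as a single
  // llvm.assume with an "align" operand bundle instead of four instructions.
  static CallInst *emitAlignBundleAssume(IRBuilder<> &Builder, Module &M,
                                         Value *Ptr, uint64_t Alignment) {
    OperandBundleDef AlignBundle(
        "align", std::vector<Value *>{Ptr, Builder.getInt64(Alignment)});
    return Builder.CreateCall(Intrinsic::getDeclaration(&M, Intrinsic::assume),
                              {Builder.getTrue()}, {AlignBundle});
  }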



================
Comment at: llvm/include/llvm/Analysis/AssumeBundleQueries.h:126
+constexpr StringRef IgnoreBundleTag = "ignore";
+
 /// Return true iff the operand bundles of the provided llvm.assume doesn't
----------------
Documentation, please
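
Something along these lines would do (the wording is my guess at the intended semantics, not taken from the patch):

  /// Tag of an llvm.assume operand bundle that has been marked as redundant
  /// or otherwise uninteresting; queries and the simplification pass should
  /// skip bundles carrying this tag.
  constexpr StringRef IgnoreBundleTag = "ignore";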


================
Comment at: llvm/include/llvm/Analysis/AssumeBundleQueries.h:162
+RetainedKnowledge getKnowledgeFromBundle(CallInst &Assume,
+                                         const CallBase::BundleOpInfo &BOI);
+
----------------
Nit: `This extracts the` or `Extract the` 
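
For example (just a sketch of the kind of comment I mean):

  /// Extract the knowledge represented by the operand bundle described by
  /// \p BOI from the llvm.assume call \p Assume.
  RetainedKnowledge getKnowledgeFromBundle(CallInst &Assume,
                                           const CallBase::BundleOpInfo &BOI);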


================
Comment at: llvm/lib/Transforms/Utils/AssumeBundleBuilder.cpp:228
+      : F(F), AC(AC), DT(DT), C(C),
+        IgnoreTag(C.getOrInsertBundleTag("ignore")) {}
+
----------------
Tyker wrote:
> jdoerfert wrote:
> > "ignore" should be a constexpr value in some header and everyone should use it.
> I added IgnoreBundleTag, but sadly some uses of "ignore" in IR can't be replaced by it, since it resides in Analysis.
Can't we include Analysis headers from here? Transforms should have a dependence on Analysis anyway, right? If not, we can move it into the LLVMContext definition or some place like that.
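
A minimal sketch of what I have in mind, assuming Transforms/Utils may include Analysis headers (the surrounding class is not shown in the diff, so the struct below is only a stand-in):

  #include "llvm/ADT/StringMap.h"
  #include "llvm/Analysis/AssumeBundleQueries.h" // provides IgnoreBundleTag
  #include "llvm/Analysis/AssumptionCache.h"
  #include "llvm/IR/Dominators.h"
  #include "llvm/IR/Function.h"
  #include "llvm/IR/LLVMContext.h"
  using namespace llvm;

  // Stand-in for the state object above; only the initializer list changes:
  // the "ignore" string literal is replaced by the shared constant.
  struct AssumeSimplifyStateSketch {
    Function &F;
    AssumptionCache &AC;
    DominatorTree *DT;
    LLVMContext &C;
    StringMapEntry<uint32_t> *IgnoreTag;

    AssumeSimplifyStateSketch(Function &F, AssumptionCache &AC,
                              DominatorTree *DT, LLVMContext &C)
        : F(F), AC(AC), DT(DT), C(C),
          IgnoreTag(C.getOrInsertBundleTag(IgnoreBundleTag)) {}
  };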


================
Comment at: llvm/lib/Transforms/Utils/AssumeBundleBuilder.cpp:289
+          }
+        }
+        auto &Lookup = Knowledge[{RK.WasOn, RK.AttrKind}];
----------------
Tyker wrote:
> jdoerfert wrote:
> > I'm always confused by the "context instruction" thing. Does it mean before or after that instruction? I mean, if `&*F.getEntryBlock().getFirstInsertionPt()` may not return, is this doing the right thing? Do we have a test?
> I don't know either, but this is covered by test4A, and it seems to be before the context instruction or the test would fail.
> 
> The test verifies that we do not replace the assume with function attributes when the first instruction may throw.
OK, before makes sense. We have a test, great. Thanks for checking!


================
Comment at: llvm/utils/lit/lit/main.py:276
+        if test.result.code in tests_by_code:
+            tests_by_code[test.result.code].append(test)
 
----------------
If this is because of `NOEXE` in the test suite, that is being fixed elsewhere right now. Either way, this does not belong here ;)


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D78614/new/

https://reviews.llvm.org/D78614




