<div dir="ltr"><div>Interesting. I was wondering whether it would be a good idea to separate passes into further separate groups, depending on what they do. For example:</div><div>- Code reduction/elimination (ex. DCE, quite a few of the loop passes)<br></div><div>- Instruction substitution (ex. vectorization passes/instcombine)</div><div><br></div><div>The problem I see with the current approach is that I think the code reduction passes get the short end of the stick; they can only run as many times as they're added to the PassManager, meaning for larger projects something could be missed.</div><div><br></div><div>What I'd propose (take this with a grain of salt obviously) is some sort of implementation where the code reduction passes are all continually run until they are done. After that, do the same thing but with the substitution passes. I don't know if there are any specific passes that just make code prettier for substitution, but they would (hypothetically) be run once in between. This is probably not the best way to go about this, but I think it could help.</div><div><br></div><div>Thanks,</div><div>Karl<br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Jan 28, 2020 at 5:35 PM Doerfert, Johannes <<a href="mailto:jdoerfert@anl.gov" target="_blank">jdoerfert@anl.gov</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Hi Karl,<br>

Thanks,
Karl

On Tue, Jan 28, 2020 at 5:35 PM Doerfert, Johannes <jdoerfert@anl.gov> wrote:

Hi Karl,

Here is my, slightly oversimplified, take on this; I hope it helps.

We have a fixed, manually curated pipeline which seems to perform reasonably well (see, for example, llvm/lib/Transforms/IPO/PassManagerBuilder.cpp). There are (call graph SCC) passes that run as part of this pipeline potentially multiple times, but still in the fixed order (as far as I know).
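
For reference, a frontend or tool mostly just asks PassManagerBuilder to fill its pass managers with that curated order. A minimal sketch with the legacy pass manager (the flag values are illustrative, and clang additionally sets the inliner, vectorizer options, and so on):

  #include "llvm/IR/LegacyPassManager.h"
  #include "llvm/IR/Module.h"
  #include "llvm/Transforms/IPO/PassManagerBuilder.h"

  using namespace llvm;

  void optimizeRoughlyLikeO2(Module &M) {
    PassManagerBuilder PMB;
    PMB.OptLevel = 2;   // pick the -O2 variant of the curated pipeline
    PMB.SizeLevel = 0;

    legacy::FunctionPassManager FPM(&M);
    legacy::PassManager MPM;
    PMB.populateFunctionPassManager(FPM); // early per-function cleanup
    PMB.populateModulePassManager(MPM);   // the main curated pipeline

    FPM.doInitialization();
    for (Function &F : M)
      FPM.run(F);
    FPM.doFinalization();
    MPM.run(M);
  }

With the new pass manager you can also hand opt an explicit pipeline string (e.g. opt -passes='default<O2>', or a hand-written list of passes), which is a convenient way to experiment with alternative orders without rebuilding anything.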

EJ (cc'ed) and I are going to propose a GSoC project to "learn" the interplay between sets of passes, e.g., what has to go together and in which order, and, potentially, alternative pipelines we could offer to people. There are various details that are not totally clear yet, but based on existing research there seem to be nice improvements to be had if we can manage the infrastructure challenges that come with such an effort.

Cheers,
Johannes

________________________________________
From: llvm-dev <llvm-dev-bounces@lists.llvm.org> on behalf of Karl Rehm via llvm-dev <llvm-dev@lists.llvm.org>
Sent: Tuesday, January 28, 2020 16:19
To: llvm-dev@lists.llvm.org
Subject: [llvm-dev] Confused about optimization pass order

Hello,
I'm wondering how exactly LLVM deals with passes that open up opportunities for other passes. For example, InstCombine is described as opening up many opportunities for dead code and dead store elimination. However, the reverse may also be true. How does LLVM currently handle this?
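
To make the first direction concrete, a made-up C-level illustration (the folds are the kind InstCombine performs; which pass does the cleanup afterwards is part of what I'm asking):

  // Hypothetical example: once InstCombine-style folds kick in, the
  // temporaries become dead and a cleanup pass can delete them.
  int example(int x) {
    int a = x ^ x;   // folds to 0
    int b = a * 173; // multiply by the constant 0 folds to 0
    int c = x + b;   // x + 0 folds to x, leaving a and b dead
    return c;        // after dead code elimination: just 'return x;'
  }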