<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Apr 14, 2015 at 9:06 AM, Duncan P. N. Exon Smith <span dir="ltr"><<a href="mailto:dexonsmith@apple.com" target="_blank">dexonsmith@apple.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="HOEnZb"><div class="h5"><br>
> On 2015-Apr-10, at 09:12, David Blaikie <<a href="mailto:dblaikie@gmail.com">dblaikie@gmail.com</a>> wrote:<br>
><br>
><br>
><br>
> On Thu, Apr 9, 2015 at 12:37 PM, Duncan P. N. Exon Smith <<a href="mailto:dexonsmith@apple.com">dexonsmith@apple.com</a>> wrote:<br>
><br>
> > On 2015-Apr-09, at 11:06, David Blaikie <<a href="mailto:dblaikie@gmail.com">dblaikie@gmail.com</a>> wrote:<br>
> ><br>
> > Late to the party because I figured other people would chime in, but I'll have a go...<br>
> ><br>
> > On Tue, Mar 31, 2015 at 7:10 PM, Duncan P. N. Exon Smith <<a href="mailto:dexonsmith@apple.com">dexonsmith@apple.com</a>> wrote:<br>
> > A while back I finished up some work [1] that Chad started to preserve<br>
> > use-list-order in bitcode [2], hidden behind an "experimental" option<br>
> > called `-preserve-bc-use-list-order`. I then added a similar<br>
> > `-preserve-ll-use-list-order` option for LLVM assembly [3].<br>
> ><br>
> > [1]: <a href="http://lists.cs.uiuc.edu/pipermail/llvmdev/2014-July/074604.html" target="_blank">http://lists.cs.uiuc.edu/pipermail/llvmdev/2014-July/074604.html</a><br>
> > [2]: <a href="https://llvm.org/bugs/show_bug.cgi?id=5680" target="_blank">https://llvm.org/bugs/show_bug.cgi?id=5680</a><br>
> > [3]: <a href="https://llvm.org/bugs/show_bug.cgi?id=20515" target="_blank">https://llvm.org/bugs/show_bug.cgi?id=20515</a><br>
> ><br>
> > I'd like to move both of these options out of "experimental" mode, and<br>
> > turn `-preserve-bc-use-list-order` on by default. I've attached a patch<br>
> > that does this.<br>
> ><br>
> > Why?<br>
> > ====<br>
> ><br>
> > Use-list order affects the output of some LLVM passes. A typical<br>
> > example is a pass that walks through basic block predecessors. The<br>
> > use-list order is deterministic, but not preserved when serializing the<br>
> > LLVM IR to bitcode or assembly.<br>
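> > >
> > > (For illustration - a minimal sketch, not code from the patch: predecessor
> > > iteration is implemented by walking the basic block's use-list, so a pass
> > > that keys a decision on, say, the first predecessor it visits can change
> > > its answer whenever the block's use-list order changes.)
> > >
> > >   // Toy helper whose result depends on use-list order: pred_begin()
> > >   // walks BB's use-list, so the "first" predecessor is whichever
> > >   // terminator happens to come first in that list.
> > >   #include "llvm/IR/BasicBlock.h"
> > >   #include "llvm/IR/CFG.h"
> > >   using namespace llvm;
> > >
> > >   static BasicBlock *pickFirstPredecessor(BasicBlock &BB) {
> > >     pred_iterator PI = pred_begin(&BB), PE = pred_end(&BB);
> > >     return PI == PE ? nullptr : *PI; // Sensitive to BB's use-list order.
> > >   }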
> > >
> > > In rare (but frustrating) cases, this makes it difficult to reproduce a
> > > crash or miscompile from an intermediate result.
> > >
> > > For example, consider running an LTO build and serializing between the
> > > LTO optimization pipeline and the LTO codegen pipeline. On SPEC,
> > > serializing to/from bitcode will change the output executable in 33
> > > benchmarks. If you use `-preserve-bc-use-list-order`, all executables
> > > match.
> > >
> > > Why do you need those to match? What's the penalty/problem when these don't match?
> >
> > You need these to match so that running a pass using `opt` or `llc` on
> > a dumped bitcode file gets you the same result as running the pass
> > directly. This sort of thing (dumping out temporary results and
> > continuing from there) is useful when triaging bugs. The problem is
> > that the bug may not reproduce at all when starting from the dumped
> > bitcode.
> >
> > > Reproducibility for tests/bugs/etc seems important, but if it's any worse/better with a particular use list, that's a bug we should fix, right? Forcing a specific use list isn't going to fix those bugs, just make sure we make the same decision (good or bad) every time.
> >
> > I'm not sure there's consensus that this is a bug. It's not optimal,
> > but it's not clear to me that it's invalid to depend on use-list order.
> > In some cases, it may be a reasonable heuristic that improves compile
> > time (I'm not arguing for that, I'm just not convinced otherwise).
> >
> > > What does it cost?
> > > ==================
> > >
> > > Manman collected a bunch of numbers, with `-preserve-bc-use-list-order`
> > > and `-preserve-ll-use-list-order` both turned on:
> > >
> > > - Time increase on LTO build time: negligible.
> > > - Filesize increase in bitcode files output by `clang -flto`:
> > >   - clang: 6.8556% (sum went from 310412640 to 331693296 bytes).
> > > - Filesize increase in merged bitcode files (output of
> > >   `ld -save-temps` when using `clang -flto` on Darwin).
> > >   — SPEC: 5.24% to 23.4% increase in file size.
> > >   — clang and related tools: 0.83% to 9.19% (details in
> > >     filesize_clang.txt, attached).
> > > - Time increase of running `llvm-dis` on merged bitcode files:
> > >   — 6.4% to 15% on SPEC benchmarks with running time >= 1 second.
> > >   — 7.9% to 14.5% on clang with running time >= 1 second (details in
> > >     dis_clang.txt, attached).
> > > - Time increase of running `llvm-as` on merged bitcode files:
> > >   — 3.5% to 39% on SPEC benchmarks with running time >= 1 second.
> > >   — 14.7% to 24.5% on clang with running time >= 1 second (details in
> > >     as_clang.txt, attached).
> > >
> > > These seem like pretty big costs to pay (bitcode size is going to be particularly important to Google - big projects under LTO, limits on the total size of the inputs to the link step, etc.). To the point above, it's not clear why we'd want to pay that cost. I'm partly playing devil's advocate here - I realize reproducibility is really important for a bunch of reasons, though this particular kind of reproducibility is marginal compared to "run the compiler twice on the same input and get two different answers". But it seems like we've generally treated these issues as bugs and fixed the optimizations to be use-list-order independent in the past, no?
> > >
> >
> > (FWIW, there's some ancient discussion in PR5680 about this.)
> >
> > I don't have a strong opinion on whether depending on use-list order
> > should be considered a bug. However, it *is* a bug not to be able to
> > roundtrip to bitcode and get the same end result.
> >
> > While it may be possible to remove the compiler's dependency on use-list
> > order, no one has signed up to do the work, it's not clear what the
> > compile time cost would be, and there isn't consensus that it's the
> > right end goal.
> >
> > I'd say the current solution, while you've signed up/done the work, has a pretty clear and significant cost, and I'm not sure about consensus (I figured other people might chime in, which is why I didn't bother until now - Chandler mentioned he'd tried & hadn't found many supporters of his dissenting opinion, so I figured I'd have a go).
> >
> > In the meantime, this fixes the bug. If/when that hypothetical work is
> > done and validated we can turn this off by default if it makes sense to.
> >
> > In terms of LTO bitcode size: serializing use-list order isn't actually
> > necessary/useful for the "normal" outputs of `clang -flto`. It's
> > important for `clang -emit-llvm`, `clang [-flto] -save-temps`, and
> > `<gold/libLTO> -save-temps`, but serialization between "compile" and
> > "link" is a deterministic and reproducible step. A possible
> > optimization would be to disable the option when writing `clang`'s
> > (final) output file in `clang -flto`. Thoughts?
> >
> > Sounds plausible - I'd be inclined to go a step further and make this opt-in for tools/actions rather than opt-out for clang -flto.
> >
> > I'm assuming the -emit-llvm and -save-temps behavior is important for their use as debugging tools -
>
> Yup, that's the main reason.
>
> > or is there some other reason you're relying on those features not to introduce variation? If it's a matter of "LLVM debugging tools should enable use list order preservation to make the lives of LLVM developers easier", then I think it makes sense for us to opt in all our debugging tools to do this, but leave the default behavior to not incur this extra cost - so that people using LLVM as a library don't pay it in their production pipelines (but, like us, can enable it with a flag when needed).
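> >
> > To make that concrete - this is just a hypothetical sketch of what per-tool opt-in might look like, not an API that exists today, and the names are illustrative - the writers could take a per-call flag instead of consulting a global cl::opt, and only the debugging tools would pass true:
> >
> >   // Hypothetical sketch (illustrative names): thread the choice through
> >   // the writer entry point instead of -preserve-bc-use-list-order.
> >   #include "llvm/Bitcode/ReaderWriter.h"
> >   #include "llvm/IR/Module.h"
> >   #include "llvm/Support/raw_ostream.h"
> >   using namespace llvm;
> >
> >   // Proposed shape:
> >   //   void WriteBitcodeToFile(const Module *M, raw_ostream &Out,
> >   //                           bool ShouldPreserveUseListOrder = false);
> >
> >   // Debugging tools (llvm-as, opt, bugpoint, -save-temps paths) opt in:
> >   static void emitForDebugging(const Module *M, raw_ostream &OS) {
> >     WriteBitcodeToFile(M, OS, /*ShouldPreserveUseListOrder=*/true);
> >   }
> >
> >   // Library users and production pipelines keep the cheaper default:
> >   static void emitForProduction(const Module *M, raw_ostream &OS) {
> >     WriteBitcodeToFile(M, OS);
> >   }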
>
> Okay, I'm convinced. I'll make it that way over the next couple of days.

Awesome - thanks a bunch! Sorry I didn't bring it up earlier to save you some of the rework. Let me know if there's anything I can lend a hand with.

- David

<div class="HOEnZb"><div class="h5"><br>
> - David<br>
><br>
><br>
> Alternatively, some portions of the aforementioned hypothetical work<br>
> might be low-hanging fruit. E.g., it might not be hard to validate that<br>
> the use-list order of `ConstantInt`s doesn't affect output (and if it<br>
> does, it would probably be easy to fix); at the same time, I imagine<br>
> they account for a disproportionate percentage of the bitcode bloat<br>
> (they have large use-lists that change frequently).<br>
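> >
> > A sketch of what that validation could look like (illustrative only; it assumes the public Value::reverseUseList() helper from the use-list-order work): reverse the use-list of every ConstantInt used directly by an instruction, rerun the passes, and check that the output doesn't change.
> >
> >   // Illustrative sketch: perturb the use-lists of ConstantInts referenced
> >   // by instructions so that any order-dependence shows up as a diff in
> >   // the compiler's output.
> >   #include "llvm/ADT/SmallPtrSet.h"
> >   #include "llvm/IR/Constants.h"
> >   #include "llvm/IR/Module.h"
> >   using namespace llvm;
> >
> >   static void reverseConstantIntUseLists(Module &M) {
> >     SmallPtrSet<ConstantInt *, 32> Seen;
> >     for (Function &F : M)
> >       for (BasicBlock &BB : F)
> >         for (Instruction &I : BB)
> >           for (Value *Op : I.operands())
> >             if (auto *CI = dyn_cast<ConstantInt>(Op))
> >               if (Seen.insert(CI).second) // Touch each constant only once.
> >                 CI->reverseUseList();     // Output shouldn't depend on this.
> >   }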