<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On Thu, Apr 10, 2014 at 6:32 PM, Jim Grosbach <span dir="ltr"><<a href="mailto:grosbach@apple.com" target="_blank">grosbach@apple.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">It’s very important that a run of “llc” on one machine produce the same output on two heterogeneous machines given the same input and command lines*. That’s not true right now, leading to lots of bot failures that patch originators can’t reproduce because they’re getting different code locally due to the auto-detection. See the recent “Test failures with 3.4.1” thread for examples.<br>
</blockquote><div><br></div><div>I think we should do this, but only to make llc's behavior more deterministic and predictable. I don't think we should start checking in tests that rely on the default subtarget features without explicitly requesting the relevant features.</div>
<div><br></div><div>Consider somebody who works on an ARM or x86 variant like Atom. What they'll probably want to do is set up a bot that runs the LLVM test suite in such a way that their subtarget features are on by default. Our test suite currently "supports" that only if the host CPU features happen to be the ones you want. Instead, we should probably move to a world where bots can set different defaults by configuring the tools appropriately. For example, they could rewrite 'llc' to 'llc -mcpu=blah' in lit whenever a RUN line doesn't already pass an -mcpu flag.</div>
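<div><br></div><div>That rewrite could live in the bot's lit configuration. Here's a minimal sketch of just the rewriting logic, assuming we only touch 'llc' invocations that don't already pass -mcpu; the function name and the default CPU are illustrative, and a real lit.cfg would hook something like this in through its substitution machinery rather than as a free function:</div>

```python
# Illustrative sketch: pin a default CPU on 'llc' RUN lines that don't
# already request one. 'add_default_mcpu' and the CPU name "atom" are
# hypothetical, not part of lit's actual API.
import shlex

def add_default_mcpu(run_line, default_cpu="atom"):
    """Rewrite 'llc' to 'llc -mcpu=<default_cpu>' unless the test
    already requests a CPU explicitly via -mcpu."""
    tokens = shlex.split(run_line)
    # If any token already sets -mcpu, leave the line untouched.
    has_mcpu = any(t.startswith("-mcpu") for t in tokens)
    out = []
    for tok in tokens:
        out.append(tok)
        if tok == "llc" and not has_mcpu:
            out.append("-mcpu=" + default_cpu)
    return " ".join(out)
```

<div>A bot configured this way exercises its subtarget by default, while tests that spell out -mcpu themselves keep exactly the behavior they asked for.</div>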
<div><br></div><div>This would be similar to what we do in Clang for the default C++ ABI. Tests that actually need the MSVC or Itanium C++ ABI ask for it explicitly; otherwise they test one or the other depending on the default target triple. I don't think we should double the number of RUN lines to keep that test coverage, and I don't think the cost of Windows-bot-only test failures is too high.</div>
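<div><br></div><div>Concretely, a Clang test that depends on a particular C++ ABI pins it with an explicit triple in its RUN lines, along these lines (the specific triples and check prefixes here are just illustrative):</div>

```
// RUN: %clang_cc1 -triple x86_64-pc-windows-msvc -emit-llvm %s -o - | FileCheck %s --check-prefix=MSVC
// RUN: %clang_cc1 -triple x86_64-unknown-linux-gnu -emit-llvm %s -o - | FileCheck %s --check-prefix=ITANIUM
```

<div>A test without an explicit -triple picks up whatever the default target is, which is exactly the coverage trade-off described above.</div>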
<div><br></div><div>2c</div></div></div></div>