[PATCH] D22071: Correct ordering of loads/stores.

Justin Lebar via llvm-commits llvm-commits at lists.llvm.org
Thu Jul 7 08:09:59 PDT 2016


jlebar added inline comments.

================
Comment at: test/Transforms/LoadStoreVectorizer/AMDGPU/insertion-point.ll:12
@@ -12,2 +11,3 @@
+; CHECK: %w = add i32 %y, 9
 ; CHECK: %foo = add i32 %z, %w
 define void @insert_load_point(float addrspace(1)* nocapture %a, float addrspace(1)* nocapture %b, float addrspace(1)* nocapture readonly %c, i64 %idx, i32 %x, i32 %y) #0 {
----------------
arsenm wrote:
> jlebar wrote:
> > arsenm wrote:
> > > jlebar wrote:
> > > > > It looks to me like it is intentionally testing an implementation detail ("insert_load_point").
> > > > 
> > > > Looks like it to me, too.  When we committed the original patches, we all agreed that we wouldn't act with a bias towards the existing code, since we committed with existing unresolved issues.  I think this should count under that rubric.
> > > > 
> > > > That is, can we fix the test so it no longer tests an implementation detail?  I suppose you don't need to do that in this patch if you don't want.
> > > The pass should still have an expectation for where the instructions will be inserted relative to the originals; I think a test ensuring this is useful.
> > > The pass should still have an expectation for where the instructions will be inserted relative to the originals
> > 
> > I guess I am OK with this if we can articulate, in the test file or the cpp file, exactly what rule we expect to apply to our output.  If we cannot articulate a rule, and instead we're just checking that the pass does what it currently does, I do not think that is a good test.  The reason is that, without an articulation of the rule, if the test fails, we have no way to tell whether there's a bug or the test just needs to be changed.
> > 
> > (And if we can articulate a rule, it should go without saying that, inasmuch as reasonable, the test should check only for adherence to the rule, ignoring other ancillary properties of the output.)
> Tests should generally have a comment explaining what they are testing anyway.  This change was mostly why I added this test in the first place: if something changes any behavior of the pass, a test should capture that.  I don't understand the concern about wondering whether it's a bug or the test needs an update; the point of having the test is that you have to look at the test output changes to verify that the result is still correct.
> the point of having the test is that you have to look at the test output changes to verify that the result is still correct

My thesis is that this process is bug-prone.  My evidence for this is that Alina found multiple tests in this test suite that had bugs -- tests that checked that some sequence of operations was vectorized when in fact it was not safe to vectorize it.  This motivates my suggestion, which is that, inasmuch as we can, we should avoid engaging in this process (by writing tests that are not fragile to uninteresting details), and, where we can't avoid the process entirely (maybe the details are interesting, and maybe that's the case here), we should write down explicitly what behavior we expect from the pass.
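For illustration only -- the value names below are hypothetical and not taken from this test suite -- one way to make a check less fragile to uninteresting details is to capture whatever name the interesting instruction gets with a FileCheck variable and pin down only the def-use relationship we care about, rather than every incidental detail of the output:

  ; Fragile: depends on the exact names and exact surrounding output.
  ; CHECK: %w = add i32 %y, 9
  ; CHECK: %foo = add i32 %z, %w

  ; Less fragile: match the first add under whatever name it has and
  ; require only that the dependent add uses its result.
  ; CHECK: [[W:%[a-z0-9._]+]] = add i32 %y, 9
  ; CHECK: add i32 %z, [[W]]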

I don't mean to suggest that the bugs in the tests were the result of you being careless -- I didn't catch them either when I reviewed the patches.  My point is just that every time humans have to look at new output and decide whether it's correct, there is a chance that we'll overlook a bug, and based on this history, that chance is not negligible.  Even if you disagree with my reading of the evidence and think it's unlikely that the three of us would make such a mistake, other maintainers may not be as scrupulous.

Thus my suggestion: If we can write down the behavior we expect from the pass -- "Vectorized loads should be inserted at the position of the first load, and instructions which were between the first and last load should be reordered preserving their relative order inasmuch as possible." (or whatever the actual rule is) -- then when the test fails, we can judge against *that* whether the test or the pass is broken.  And if we have to update the test, we have some chance of creating the correct output ourselves, rather than just accepting the output created by the pass (which is more likely, in my judgement, to lead to us accepting buggy output, per above).
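To make that concrete, here is a rough sketch of how the rule could live in the test file itself.  The IR and function name are hypothetical, not the actual test from this patch, and the CHECK lines assume the pass turns the two adjacent float loads into a single <2 x float> load:

  ; Rule under test: the vectorized load is inserted at the position of the
  ; first scalar load, and instructions that sat between the first and last
  ; load are moved after it, with their relative order preserved.
  ;
  ; CHECK-LABEL: @insertion_rule_sketch
  ; CHECK: load <2 x float>
  ; CHECK: %u = add i32 %x, 1
  ; CHECK: %v = mul i32 %u, 3
  define void @insertion_rule_sketch(float addrspace(1)* %p, i32 %x) {
    %gep0 = getelementptr float, float addrspace(1)* %p, i64 0
    %a = load float, float addrspace(1)* %gep0, align 8
    %u = add i32 %x, 1                 ; sits between the two loads
    %v = mul i32 %u, 3                 ; sits between the two loads
    %gep1 = getelementptr float, float addrspace(1)* %p, i64 1
    %b = load float, float addrspace(1)* %gep1, align 4
    %s = fadd float %a, %b
    store float %s, float addrspace(1)* %gep0, align 4
    ret void
  }

If the pass's behavior changes, the comment gives whoever updates the test something to judge the new output against, instead of just accepting whatever the pass now produces.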

I think we're pretty close in what we want, honestly.  The main difference, I think, is that I am saying we should try not to test behavior for which we cannot articulate a rule.  If the behavior is so incidental that we can't even say what it is supposed to do, I don't see why we'd want to enshrine it in a test.


http://reviews.llvm.org/D22071