[PATCH] D6995: Reassociate: reprocess RedoInsts after each instruction

Mehdi Amini via llvm-commits llvm-commits at lists.llvm.org
Mon Nov 16 14:47:34 PST 2015


> On Nov 16, 2015, at 2:36 PM, Quentin Colombet <qcolombet at apple.com> wrote:
> 
> 
>> On Nov 16, 2015, at 2:24 PM, Mehdi Amini <mehdi.amini at apple.com <mailto:mehdi.amini at apple.com>> wrote:
>> 
>>> 
>>> On Nov 16, 2015, at 2:09 PM, Duncan P. N. Exon Smith <dexonsmith at apple.com <mailto:dexonsmith at apple.com>> wrote:
>>> 
>>>> 
>>>> On 2015-Nov-16, at 13:54, Mehdi Amini <mehdi.amini at apple.com <mailto:mehdi.amini at apple.com>> wrote:
>>>> 
>>>> 
>>>>> On Nov 16, 2015, at 1:50 PM, Duncan P. N. Exon Smith <dexonsmith at apple.com <mailto:dexonsmith at apple.com>> wrote:
>>>>> 
>>>>> 
>>>>>> On 2015-Nov-16, at 13:47, Mehdi AMINI <mehdi.amini at apple.com <mailto:mehdi.amini at apple.com>> wrote:
>>>>>> 
>>>>>> joker.eph added a comment.
>>>>>> 
>>>>>> Only 3 tests are still crashing now, but at the time I wrote the patch, I iterated on a fuzzer and these six tests (not eight) were stressing different patterns and different parts of reassociate, so they're absolutely not redundant.
>>>>> 
>>>>> You could use six different functions in a single .ll file, right?
>>>> 
>>>> I “could”
>>> 
>>> I mean, that gives the same coverage, right?  Since it seems to reduce
>>> the ongoing cost of running
>> 
>> Cost of running? You’re launching fewer processes but you’re losing parallelism. I guess we have enough of the latter, but are we really limited by the former right now?
>> 
>>> and maintaining the test(s), then merging
>>> them seems better.
>> 
>> How is the cost of maintenance reduced? If only one case is crashing, I find it more convenient to investigate and maintain when they’re in separate files.
> 
> If they are different crashes, I expect a different commit/fix for each of them.

Fair enough, the revision history shows that I improved the patch with 4 different diffs (and the tests were added as I found them), so I could split the revision into 4 again and come up with separate patches to fix each of the individual tests incrementally.
But I don’t think it would be the best use of my time (or the reviewers' time) at this point.

> 
>> 
>>> 
>>>>>> I'm also not sure what CHECK line to put for a compiler-crash non-regression test; there are multiple existing cases in the test suite like this where FileCheck is not involved.
>>>>> 
>>>>> It's best practice to check for an expected output even if it's just
>>>>> a crasher.  Most of the tests where FileCheck isn't involved are
>>>>> legacy (and should be fixed if possible).
>>>> 
>>>> Should I just add a CHECK-LABEL with the function name? 
>>>> What is the added value for the test?
>>> 
>>> Usually you can find *something* that adds value.  For this, I imagine
>>> you could check that -reassociate transforms the IR the way you
>>> expect…
>> 
>> I consider this bad practice: this increases the cost of maintenance without adding any value.
> 
> Well, it makes sure the IR output is somehow correct.
> There is no point in “not crashing” if that means generating invalid IR, right?

There are other tests for this purpose.
Or, at the extreme, we could just dump the input IR of the benchmark suite and add checks on every line of the output of "opt -O3”.

> 
>> I don’t actually care what reassociate produces on these examples, as long as it does not crash. I could just test that the output does not change compared to what is produced right now. But that increases the maintenance cost, because whenever reassociate becomes smarter or finds a new canonicalization, this test may need to be audited and updated. I can’t imagine this scaling.
> 
> I disagree, it helps to see how reassociate becomes smarter when there is a commit enhancing it.
> Yes, this is more work for someone creating the patch, but I think it is necessary work.

No, my experience tells me this is really bad. 
Having worked on such a codebase (not that long ago), I can say it very quickly becomes a maintenance nightmare: any patch of yours that changes a canonicalization will “break” tens of tests, and you spend more time trying to understand whether you broke these tests or whether these tests were just not testing the right thing the right way.
Any check I would add here would not mean the IR is in “canonical” or desired form, and I don’t see how it could be considered better to have such a check rather than none.
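For context, the two testing styles being debated can be sketched roughly as below. This is a hypothetical test file following the usual LLVM lit/FileCheck conventions; the function `@crasher` and its body are placeholders, not one of the actual cases from this patch:

```llvm
; Style 1 -- bare crash test: passes as long as opt exits cleanly.
; RUN: opt < %s -reassociate -disable-output

; Style 2 -- also pin down the transformed IR with FileCheck,
; anchoring checks per function with CHECK-LABEL.
; RUN: opt < %s -reassociate -S | FileCheck %s

; CHECK-LABEL: @crasher(
; CHECK: ret i32
define i32 @crasher(i32 %x, i32 %y) {
  %a = add i32 %x, %y
  %b = add i32 %y, %x
  %c = add i32 %a, %b
  ret i32 %c
}
```

The trade-off discussed in the thread is exactly which CHECK lines to write in style 2: checking only the label and return is nearly as weak as style 1, while checking the full canonicalized output couples the test to every future change in reassociate.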


— 
Mehdi


