[PATCH] D58098: [IR] Add Use::moveToFrontOfUseList() method

Duncan P. N. Exon Smith via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Wed Feb 20 10:19:27 PST 2019


dexonsmith added inline comments.


================
Comment at: unittests/IR/UseTest.cpp:131
+  Instruction *Ret = F->getEntryBlock().getTerminator();
+  constexpr unsigned Size = 20 * 1000;
+  for (unsigned I = 1; I <= Size; ++I) {
----------------
rtereshin wrote:
> dexonsmith wrote:
> > rtereshin wrote:
> > > dexonsmith wrote:
> > > > rtereshin wrote:
> > > > > dexonsmith wrote:
> > > > > > This seems really big.  What extra coverage are you getting from this?
> > > > > > 
> > > > > > Generally, there seems to be a lot of permutation here that is unrelated to the new API.  It'll take time to run.  What value is it adding?
> > > > > For some time I've had the idea that it could be useful to have "compile time regression tests" that are also reliable and fast. I think it's possible when the speed-up achieved is asymptotically significant, like going from an O(n^2) algorithm to an O(n) one. If n is big enough and the constant factor is small enough, it may be possible to design a test that is very fast even when run on very slow machines (bots or whatnot) as long as the compile-time problem is kept in check, but really, really slow even on the fastest machines if there is a regression.
> > > > > 
> > > > > This test takes about 50 ms to run if `moveToFrontOfUseList` does what's expected and the other operations (like adding a new use) behave as they currently do (inserting new uses at the beginning of the list, for instance). If any of that breaks, the test takes almost 2 minutes to run before it fails with one of the google-test assertions that check the values of the "performance counters", which happen to be available in this specific case.
> > > > > 
> > > > > Please let me know what you think!
> > > > > For some time I've had the idea that it could be useful to have "compile time regression tests" that are also reliable and fast.
> > > > 
> > > > Seems like a great idea to have, but it probably deserves a broader discussion on llvm-dev about how they should be integrated into the test suite, whether they're part of `check-llvm` or a separate `check-llvm-perf`, etc.; personally I'm a bit skeptical of having them intermingled with non-performance tests without having them stand out *somehow*.
> > > > 
> > > > Maybe you've had a discussion already and I missed it?  If so, or if there's significant precedent of this already in tree, please point me at it so I can catch up :).  But even in that case, I think you should have an independent test that checks the functionality, separately from a test that checks the performance of this high-level task.
> > > > Maybe you've had a discussion already and I missed it? If so, or if there's significant precedent of this already in tree, please point me at it so I can catch up :).
> > > 
> > > If there is a precedent, I'm not aware of it. I'll change the test so that it only performs as many actions as needed to catch the trend, no more, and add a purely functional one.
> > To be clear, I'm not pushing back on having this kind of test; it seems really valuable!  I just think you should start a conversation on llvm-dev about where to put it, whether it needs to be named specially, whether it should be in a different `check-` target, etc.
> > 
> > >  I'll change the test so that it only performs as many actions as needed to catch the trend
> > 
> > I'm not sure I understand how that would be different from what you have now.
> > I'm not sure I understand how that would be different from what you have now.
> 
> It won't run noticeably longer even if the order changes, it will just fail, and it won't take many steps to walk through in a debugger. Aren't the large size and potential running time the atypical things about this test as it is now?
> 
> If you think it also shouldn't closely "simulate" what a real use case would do, but only move a use to the front and check that the use is indeed at the front, then yes, we should effectively just remove this test.
> 
> As for discussing performance tests on llvm-dev, I'd rather separate this patch from that discussion (if possible) so the patch isn't blocked on a general discussion about regression testing in LLVM.
> 
> What do you think?
If the precise way that adding a new user to a value affects the value's use-list is important, then there should be a targeted unit test for that.  It doesn't make sense to me to check that here.
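
To illustrate what I mean, here's a minimal sketch of such a targeted test (the test name and the IR setup are made up for this example; it only relies on the prepend-on-new-use behavior you described):

#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/Module.h"
#include "gtest/gtest.h"
using namespace llvm;

TEST(UseTest, NewUsersArePrependedToUseList) {
  LLVMContext C;
  Module M("M", C);
  Type *Int32Ty = Type::getInt32Ty(C);
  auto *GV = new GlobalVariable(M, Int32Ty, /*isConstant=*/false,
                                GlobalValue::ExternalLinkage,
                                ConstantInt::get(Int32Ty, 0), "g");
  Function *F = Function::Create(FunctionType::get(Type::getVoidTy(C), false),
                                 GlobalValue::ExternalLinkage, "f", &M);
  IRBuilder<> B(BasicBlock::Create(C, "entry", F));

  // New users of GV are expected to land at the front of GV's use-list,
  // i.e. the use-list visits users in reverse creation order.
  LoadInst *First = B.CreateLoad(Int32Ty, GV, "first");
  LoadInst *Second = B.CreateLoad(Int32Ty, GV, "second");

  auto UI = GV->use_begin();
  EXPECT_EQ(Second, UI->getUser());
  EXPECT_EQ(First, (++UI)->getUser());
  EXPECT_EQ(GV->use_end(), std::next(UI));
}

Something along those lines would pin down the use-list ordering contract on its own, so a failure points directly at what actually changed.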

> Aren't the large size and potential running time the atypical things about this test as it is now?

I guess my concern is that the test would manually write out a high-level algorithm (which hasn't been encapsulated) and check low-level details of use-list management.  It seems like it would be testing something at the wrong level.

Does anyone else have thoughts?

Note: I'd still love to see the performance test committed somehow; I do think we should be exploring that kind of algorithmic test.
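
To make that concrete, here's roughly the shape I'd imagine for it, assuming this patch's Use::moveToFrontOfUseList() API; the setup and size below are illustrative rather than the actual test from the patch:

#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/Module.h"
#include "gtest/gtest.h"
#include <vector>
using namespace llvm;

TEST(UseTest, MoveToFrontOfUseListScalesLinearly) {
  LLVMContext C;
  Module M("M", C);
  Type *Int32Ty = Type::getInt32Ty(C);
  auto *GV = new GlobalVariable(M, Int32Ty, /*isConstant=*/false,
                                GlobalValue::ExternalLinkage,
                                ConstantInt::get(Int32Ty, 0), "g");
  Function *F = Function::Create(FunctionType::get(Type::getVoidTy(C), false),
                                 GlobalValue::ExternalLinkage, "f", &M);
  IRBuilder<> B(BasicBlock::Create(C, "entry", F));

  // Create enough users that an accidental O(n) cost per operation turns the
  // loop below quadratic, making the test take minutes instead of
  // milliseconds.
  constexpr unsigned Size = 20 * 1000;
  std::vector<LoadInst *> Loads;
  Loads.reserve(Size);
  for (unsigned I = 0; I != Size; ++I)
    Loads.push_back(B.CreateLoad(Int32Ty, GV));

  // moveToFrontOfUseList() is the API proposed in this patch; each move and
  // each check of the list head should be O(1), so the loop stays linear.
  for (LoadInst *L : Loads) {
    L->getOperandUse(0).moveToFrontOfUseList();
    ASSERT_EQ(L, GV->use_begin()->getUser());
  }
}

Whatever we end up calling it and wherever it ends up living, that kind of test only fails (or drags on) when the asymptotic behavior regresses, which is exactly the property worth capturing.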


Repository:
  rL LLVM

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D58098/new/

https://reviews.llvm.org/D58098




