[llvm] 962339a - [lit] Reliable progress indicator and ETA

Roman Lebedev via llvm-commits llvm-commits at lists.llvm.org
Tue Mar 23 02:16:42 PDT 2021


Author: Roman Lebedev
Date: 2021-03-23T12:16:19+03:00
New Revision: 962339a5eca2c838cb0a3dae6814d7942ccd8ce1

URL: https://github.com/llvm/llvm-project/commit/962339a5eca2c838cb0a3dae6814d7942ccd8ce1
DIFF: https://github.com/llvm/llvm-project/commit/962339a5eca2c838cb0a3dae6814d7942ccd8ce1.diff

LOG: [lit] Reliable progress indicator and ETA

The quality of the progress bar and ETA in lit has always bothered me.

For example, given `./bin/llvm-lit /repositories/llvm-project/clang/test/CodeGen* -sv`
at 1%, it says it will take 10 more minutes,
at 25%, it says it will take 1.25 more minutes,
at 50%, it says it will take 30 more seconds,
and in the end finishes with `Testing Time: 39.49s`. That's rather wildly imprecise.

Currently, it assumes that every single test will take the same amount of time to run on average.
This is a somewhat reasonable approximation overall, but it is quite clearly imprecise,
especially in the beginning.

But, we can do better now, after D98179! We now know how long the tests took to run last time.
So we can build a better ETA predictor, by accumulating the time spent already,
the time that will be spent on the tests for which we know the previous time,
and for the test for which we don't have previous time, again use the average time
over the tests for which we know current or previous run time.
It would be better to use the median, but I'm wary of the cost that may incur.
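The scheme above can be sketched roughly as follows (a minimal standalone illustration of the estimator, not the actual lit code; the function and parameter names here are made up):

```python
def estimate_percent(time_elapsed, completed,
                     known_remaining_times, unknown_remaining_count):
    """Estimate the fraction of total testing time already spent.

    time_elapsed: seconds spent on the tests completed so far.
    completed: number of tests completed so far.
    known_remaining_times: previous-run times of remaining tests
                           that have a recorded time.
    unknown_remaining_count: remaining tests with no recorded time.
    """
    predictable_time_remaining = sum(known_remaining_times)
    # Average over the tests whose current or previous run time is known.
    average_test_time = (time_elapsed + predictable_time_remaining) / \
        (completed + len(known_remaining_times))
    # Assume each unknown test takes the average time.
    unpredictable_time_remaining = average_test_time * unknown_remaining_count
    total_time = (time_elapsed + predictable_time_remaining
                  + unpredictable_time_remaining)
    return time_elapsed / total_time
```

For example, with 10.0s spent on 5 completed tests, 10 remaining tests known to take 2.0s each, and 5 remaining tests of unknown duration, the average works out to 2.0s per test, so the estimate is 10.0 / 40.0 = 25%.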

Now, on **first** run of `./bin/llvm-lit /repositories/llvm-project/clang/test/CodeGen* -sv`
at 10%, it says it will take 30 seconds,
at 25%, it says it will take 50 more seconds,
at 50%, it says it will take 27 more seconds,
and in the end finishes with `Testing Time: 41.64s`. That's pretty reasonable.

And on second run of `./bin/llvm-lit /repositories/llvm-project/clang/test/CodeGen* -sv`
at 1%, it says it will take 1 more minute,
at 25%, it says it will take 30 more seconds,
at 50%, it says it will take 19 more seconds,
and in the end finishes with `Testing Time: 39.49s`. That's amazing, I think!

I think people will love this :)

Reviewed By: yln

Differential Revision: https://reviews.llvm.org/D99073

Added: 
    

Modified: 
    llvm/utils/lit/lit/ProgressBar.py
    llvm/utils/lit/lit/display.py
    llvm/utils/lit/lit/main.py

Removed: 
    


################################################################################
diff --git a/llvm/utils/lit/lit/ProgressBar.py b/llvm/utils/lit/lit/ProgressBar.py
index 4f8bd3cc75e3a..fd721db780b5c 100644
--- a/llvm/utils/lit/lit/ProgressBar.py
+++ b/llvm/utils/lit/lit/ProgressBar.py
@@ -253,7 +253,7 @@ def update(self, percent, message):
             elapsed = time.time() - self.startTime
             if percent > .0001 and elapsed > 1:
                 total = elapsed / percent
-                eta = int(total - elapsed)
+                eta = total - elapsed
                 h = eta//3600.
                 m = (eta//60) % 60
                 s = eta % 60

diff --git a/llvm/utils/lit/lit/display.py b/llvm/utils/lit/lit/display.py
index 3543b287f25ea..ce346eeebef2c 100644
--- a/llvm/utils/lit/lit/display.py
+++ b/llvm/utils/lit/lit/display.py
@@ -5,8 +5,10 @@ def create_display(opts, tests, total_tests, workers):
     if opts.quiet:
         return NopDisplay()
 
-    of_total = (' of %d' % total_tests) if (tests != total_tests) else ''
-    header = '-- Testing: %d%s tests, %d workers --' % (tests, of_total, workers)
+    num_tests = len(tests)
+    of_total = (' of %d' % total_tests) if (num_tests != total_tests) else ''
+    header = '-- Testing: %d%s tests, %d workers --' % (
+        num_tests, of_total, workers)
 
     progress_bar = None
     if opts.succinct and opts.useProgressBar:
@@ -21,6 +23,42 @@ def create_display(opts, tests, total_tests, workers):
     return Display(opts, tests, header, progress_bar)
 
 
+class ProgressPredictor(object):
+    def __init__(self, tests):
+        self.completed = 0
+        self.time_elapsed = 0.0
+        self.predictable_tests_remaining = 0
+        self.predictable_time_remaining = 0.0
+        self.unpredictable_tests_remaining = 0
+
+        for test in tests:
+            if test.previous_elapsed:
+                self.predictable_tests_remaining += 1
+                self.predictable_time_remaining += test.previous_elapsed
+            else:
+                self.unpredictable_tests_remaining += 1
+
+    def update(self, test):
+        self.completed += 1
+        self.time_elapsed += test.result.elapsed
+
+        if test.previous_elapsed:
+            self.predictable_tests_remaining -= 1
+            self.predictable_time_remaining -= test.previous_elapsed
+        else:
+            self.unpredictable_tests_remaining -= 1
+
+        # NOTE: median would be more precise, but might be too slow.
+        average_test_time = (self.time_elapsed + self.predictable_time_remaining) / \
+            (self.completed + self.predictable_tests_remaining)
+        unpredictable_time_remaining = average_test_time * \
+            self.unpredictable_tests_remaining
+        total_time_remaining = self.predictable_time_remaining + unpredictable_time_remaining
+        total_time = self.time_elapsed + total_time_remaining
+
+        return self.time_elapsed / total_time
+
+
 class NopDisplay(object):
     def print_header(self): pass
     def update(self, test): pass
@@ -30,8 +68,10 @@ def clear(self, interrupted): pass
 class Display(object):
     def __init__(self, opts, tests, header, progress_bar):
         self.opts = opts
-        self.tests = tests
+        self.num_tests = len(tests)
         self.header = header
+        self.progress_predictor = ProgressPredictor(
+            tests) if progress_bar else None
         self.progress_bar = progress_bar
         self.completed = 0
 
@@ -55,7 +95,7 @@ def update(self, test):
         if self.progress_bar:
             if test.isFailure():
                 self.progress_bar.barColor = 'RED'
-            percent = float(self.completed) / self.tests
+            percent = self.progress_predictor.update(test)
             self.progress_bar.update(percent, test.getFullName())
 
     def clear(self, interrupted):
@@ -66,7 +106,7 @@ def print_result(self, test):
         # Show the test result line.
         test_name = test.getFullName()
         print('%s: %s (%d of %d)' % (test.result.code.name, test_name,
-                                     self.completed, self.tests))
+                                     self.completed, self.num_tests))
 
         # Show the test failure output, if requested.
         if (test.isFailure() and self.opts.showOutput) or \

diff --git a/llvm/utils/lit/lit/main.py b/llvm/utils/lit/lit/main.py
index e4c3a34b2d227..47fe73388eaa7 100755
--- a/llvm/utils/lit/lit/main.py
+++ b/llvm/utils/lit/lit/main.py
@@ -205,8 +205,7 @@ def mark_excluded(discovered_tests, selected_tests):
 
 def run_tests(tests, lit_config, opts, discovered_tests):
     workers = min(len(tests), opts.workers)
-    display = lit.display.create_display(opts, len(tests), discovered_tests,
-                                         workers)
+    display = lit.display.create_display(opts, tests, discovered_tests, workers)
 
     run = lit.run.Run(tests, lit_config, workers, display.update,
                       opts.max_failures, opts.timeout)


        

