[PATCH] D46735: [Test-Suite] Added Box Blur And Sobel Edge Detection

Matthias Braun via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Fri May 11 18:13:01 PDT 2018


MatzeB added a comment.



> Some optimizations (e.g. cache locality, parallelization) can cut the execution time by orders of magnitude. With gemm, I have seen single-thread speed-ups of 34x; with parallelization it will be even higher. If the execution time without optimization is one second, it will be too short with optimization, especially with parallelization and accelerator offloading, which add invocation overheads.
> 
> It's great to have a discussion about what such benchmarks should look like.
> 
> Instead of one size fits all, should we have multiple problem sizes? There is already `SMALL_DATASET`, which is smaller than the default, but what about larger ones? SPEC has "test" (should execute everything at least once, great for checking correctness), "train" (for PGO training), and "ref" (the scored benchmark input; in CPU 2017 it runs for up to 2 hours). Polybench has `MINI_DATASET` through `EXTRALARGE_DATASET`, which are defined by working-set size rather than by purpose or runtime.
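For context, the Polybench/`SMALL_DATASET`-style mechanism referred to above is roughly a compile-time switch over preprocessor macros; a sketch along those lines (the macro `IMG_N` and the concrete sizes are made up for illustration, not taken from Polybench's headers):

```
/* Problem size is fixed at compile time by selecting one dataset macro.
 * IMG_N and the sizes below are illustrative placeholders. */
#if defined(MINI_DATASET)
#define IMG_N 64
#elif defined(SMALL_DATASET)
#define IMG_N 256
#elif defined(EXTRALARGE_DATASET)
#define IMG_N 8192
#else /* default dataset */
#define IMG_N 2048
#endif
```

Switching sizes this way requires a rebuild, which is part of why a runtime scale factor (sketched further below) is more flexible.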

- First: We have to choose a default problem size, and I just wanted to emphasize that it should not be too big, so that running the llvm test-suite finishes in a reasonable timeframe.

- As I mentioned in another part of my review: I'd recommend writing the benchmarks so that they take a single number as a command-line argument and scale the problem size based on that number (see the sketch below). We don't really have infrastructure for that today (I consider `SMALL_DATASET` a super crude tool and would rather not extend it with more variants...), but this style seems easy enough to implement and should allow scaling the input size up or down once someone comes along and writes better infrastructure.
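
A minimal sketch of what that could look like; `run_kernel`, the base size of 1024, and the default scale are placeholders, not part of this patch:

```
#include <stdio.h>
#include <stdlib.h>

/* Placeholder for the actual box blur / Sobel kernel. */
static void run_kernel(int width, int height) {
  /* The real benchmark would allocate a width x height image, run the
   * filter, and print a checksum for verification. */
  printf("processed %d x %d image\n", width, height);
}

int main(int argc, char **argv) {
  int scale = 1; /* default chosen so an unoptimized run stays short */
  if (argc > 1)
    scale = atoi(argv[1]);
  if (scale < 1)
    scale = 1;

  /* Problem size grows with the scale factor. */
  run_kernel(1024 * scale, 1024 * scale);
  return 0;
}
```

With something like this, a quick correctness run and a longer measurement run only differ in the argument passed on the command line, rather than requiring a rebuild.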


Repository:
  rT test-suite

https://reviews.llvm.org/D46735




