[llvm-dev] [RFC] A new intrinsic, `llvm.blackbox`, to explicitly prevent constprop, die, etc optimizations
Sean Silva via llvm-dev
llvm-dev at lists.llvm.org
Tue Nov 17 19:33:57 PST 2015
On Mon, Nov 16, 2015 at 10:11 PM, Dmitri Gribenko <gribozavr at gmail.com>
wrote:
> On Mon, Nov 16, 2015 at 9:34 PM, Sean Silva <chisophugis at gmail.com> wrote:
> >
> >
> > On Mon, Nov 16, 2015 at 9:07 PM, Dmitri Gribenko <gribozavr at gmail.com>
> > wrote:
> >>
> >> On Mon, Nov 16, 2015 at 8:55 PM, Sean Silva <chisophugis at gmail.com>
> >> wrote:
> >> >
> >> >
> >> > On Mon, Nov 16, 2015 at 6:59 PM, Dmitri Gribenko via llvm-dev
> >> > <llvm-dev at lists.llvm.org> wrote:
> >> >>
> >> >> On Mon, Nov 16, 2015 at 10:03 AM, James Molloy via llvm-dev
> >> >> <llvm-dev at lists.llvm.org> wrote:
> >> >> > You don't appear to have addressed my suggestion not to require a
> >> >> > perfect external world, but instead to measure the overhead of an
> >> >> > imperfect world (by using an empty benchmark) and subtract that
> >> >> > from the measured benchmark score.
> >> >>
> >> >> In microbenchmarks, performance is not additive. You can't compose
> >> >> two pieces of code and predict that the benchmark results will be the
> >> >> sum of the individual measurements.
> >> >
> >> >
> >> > This sounds like an argument against microbenchmarks in general,
> >> > rather than against James's point. James's point assumes that you are
> >> > doing a meaningful benchmark (since, presumably, you are trying to
> >> > understand the relative performance in your application).
> >>
> >> Any benchmarking, and especially microbenchmarking, should not be
> >> primarily about measuring the relative performance change. It is a
> >> small scientific experiment, where you don't just get numbers -- you
> >> need to have an explanation of why you are getting those numbers. And,
> >> especially for microbenchmarks, having an explanation, and a way to
> >> validate both it and one's assumptions, is critical.
> >>
> >> In large system benchmarks, performance is not additive either -- once
> >> you have multiple subsystems, cores, and queues. But this does not
> >> mean that system-level benchmarks are not useful. Like any benchmark,
> >> they need interpretation.
> >
> >
> > I agree with all this, but I don't understand how adding an external
> > function call would interfere at all. I guess I could rephrase what I
> > was saying as "if having an external function call prevents the
> > benchmark from serving its purpose, it is hard to believe that any
> > conclusions coming from such a benchmark would be applicable to real
> > code". The only exceptions I can think of are extremely low-level asm
> > measurements (which are written in asm anyway, so this whole discussion
> > of llvm.blackbox is irrelevant).
>
> The thing is, the black box function one can implement in the language
> would not be a perfect substitute for the real producer or consumer.
>
> I don't know about Rust, but in other high-level languages,
> implementing a black box as a generic function might cause an overhead
> due to the way generic functions are implemented,

How does an IR-level intrinsic avoid this? If the slowdown from
generic-ness is happening at the language/frontend semantic level, then
an IR intrinsic doesn't seem like it would help.
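
To make the "external function call" approach concrete, here is a minimal
sketch in C. The names `source` and `sink` are made up; all that matters
is that they are defined in a separately compiled translation unit, so the
optimizer cannot see through the calls:

  unsigned long source(void);      /* defined in another translation unit */
  void sink(unsigned long x);      /* defined in another translation unit */

  unsigned long bench_kernel(void) {
      unsigned long n = source();  /* opaque input: defeats constprop */
      unsigned long acc = 0;
      for (unsigned long i = 0; i < n; ++i)
          acc += i * i;
      sink(acc);                   /* observed output: the loop is not dead */
      return acc;
  }

The opaque input keeps the loop bound from being folded at compile time,
and the opaque output keeps the result live, with no intrinsic involved.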
-- Sean Silva

> higher than the
> overhead of a regular function call. For example, the value might
> need to be moved to the heap to be passed as an unconstrained generic
> parameter. This wouldn't be the case in real code, where the function
> would be non-generic, and possibly even inlined into the caller.
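
To render the overhead Dmitri describes loosely in C terms: the helper
below is made up and stands in for a black box taking an unconstrained
generic parameter. In a language that passes such parameters indirectly
or boxed, the value must be given an address, or even a heap allocation,
just to cross the call boundary.

  #include <stddef.h>

  /* Hypothetical "fully generic" black box: it can only accept its
     argument by address (or, in a boxing language, via a heap
     allocation). */
  void blackbox_generic(const void *p, size_t size);

  void measure_step(unsigned long acc) {
      /* Merely taking the address forces `acc` out of a register and
         onto the stack, a cost the real, non-generic (and possibly
         inlined) consumer would not impose. */
      blackbox_generic(&acc, sizeof acc);
  }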
>
> Dmitri
>
> --
> main(i,j){for(i=2;;i++){for(j=2;j<i;j++){if(!(i%j)){j=0;break;}}if
> (j){printf("%d\n",i);}}} /*Dmitri Gribenko <gribozavr at gmail.com>*/
>