[llvm-dev] [RFC] A new intrinsic, `llvm.blackbox`, to explicitly prevent constprop, die, etc optimizations

Alex Elsayed via llvm-dev llvm-dev at lists.llvm.org
Mon Nov 9 17:56:35 PST 2015


On Fri, 06 Nov 2015 09:27:32 -0800, Daniel Berlin via llvm-dev wrote:

<snip>
> 
> Great!
> 
> You should then test that this happens, and additionally write a test
> that can't be optimized away, since the above is apparently not a useful
> microbenchmark for anything but the compiler ;-)
> 
> Seriously though, there are basically three cases (with a bit of
> handwaving)
> 
> 1. You want to test that the compiler optimizes something in a certain
> way.  In the above example, without anything else, you actually want to
> test that the compiler optimizes this away completely.
> This doesn't require anything except using something like FileCheck and
> producing IR at the end of Rust's optimization pipeline.
> 
> 2. You want to make the above code into a benchmark, and ensure the
> compiler is required to keep the number and relative order of certain
> operations.
> Use volatile for this.
> 
> Volatile is not what you seem to think it is, or what people commonly
> use it for in C/C++.
> volatile in llvm has a well defined meaning:
> http://llvm.org/docs/LangRef.html#volatile-memory-accesses
> 
> 3. You want to get the compiler to only do certain optimizations to your
> code.
> 
> Yes, you have to either write a test harness (even if that test harness
> is "your normal compiler, with certain flags passed"), or use ours, for
> that ;-)
> 
> 
> It seems like you want #2, so you should use volatile.
> 
> But don't conflate #2 and #3.
> 
> As said:
> If you want the compiler to only do certain things to your code, you
> should tell it to only do those things by giving it a pass pipeline that
> only does those things.  Nothing else is going to solve this problem
> well.
> 
> If you want the compiler to do every optimization it knows to your code,
> but want it to maintain the number and relative order of certain
> operations, that's volatile.
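
For reference, the volatile approach described above would look something
like this in Rust (a minimal sketch of my own, just to make option #2
concrete):

```rust
use std::ptr;

fn main() {
    // Compute something the optimizer could trivially constant-fold: 10!.
    let v: u64 = (1..=10u64).product();
    let mut sink: u64 = 0;
    // A volatile store may not be elided or reordered relative to other
    // volatile operations, so the computation feeding it is preserved.
    unsafe { ptr::write_volatile(&mut sink, v) };
    assert_eq!(sink, 3628800); // 10! = 3628800
}
```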

I think the fundamental thing you're missing is that benchmarks are an 
exercise in if/then:

*If* a user exercises this API, *then* how well would it perform?

Of course, in the case of a user, the data could come from anywhere, and 
go anywhere - the terminal, a network socket, whatever.

However, in a benchmark, all the data comes from (and goes to) places the 
compiler can see.

Thus, it's necessary to make the compiler _pretend_ the data comes from 
and goes to a "black box", in order for the benchmarks to even *remotely* 
resemble what they're meant to test.

This is actually distinct from #1, #2, _and_ #3 above - quite simply, 
what is needed is a way to simulate a "real usage" scenario without 
actually contacting the external world.
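
Concretely, what I have in mind is something like the following sketch.
The `black_box` helper here is built on a volatile read purely for
illustration; the exact mechanism (and whether it should instead be a
dedicated intrinsic) is precisely what this RFC is about:

```rust
use std::{mem, ptr};

// A sketch of the "black box": the volatile read tells the optimizer the
// value may be observed externally, so the work that produced it cannot
// be deleted -- while every other optimization remains fair game.
fn black_box<T>(dummy: T) -> T {
    unsafe {
        let ret = ptr::read_volatile(&dummy);
        mem::forget(dummy);
        ret
    }
}

fn main() {
    // Pretend the input came from the outside world...
    let n = black_box(1000u64);
    let sum: u64 = (0..n).sum();
    // ...and pretend the result goes back out to it, so the summation
    // loop cannot be folded away as dead or constant.
    assert_eq!(black_box(sum), 499500);
}
```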


