[llvm-dev] [RFC] A new intrinsic, `llvm.blackbox`, to explicitly prevent constprop, die, etc optimizations

Daniel Berlin via llvm-dev llvm-dev at lists.llvm.org
Fri Nov 6 09:27:32 PST 2015


On Fri, Nov 6, 2015 at 8:35 AM, Richard Diamond <wichard at vitalitystudios.com> wrote:

>
>
> On Tue, Nov 3, 2015 at 3:15 PM, Daniel Berlin <dberlin at dberlin.org> wrote:
>
>>
>>
>> On Tue, Nov 3, 2015 at 12:29 PM, Richard Diamond <wichard at vitalitystudios.com> wrote:
>>
>>>
>>>
>>> On Mon, Nov 2, 2015 at 9:16 PM, Daniel Berlin <dberlin at dberlin.org>
>>> wrote:
>>>
>>>> I'm very unclear on why you think a generic black box intrinsic will
>>>> have any different performance impact ;-)
>>>>
>>>>
>>>> I'm also unclear on what the goal with this intrinsic is.
>>>> I understand the symptoms you are trying to solve - what exactly is the
>>>> disease.
>>>>
>>>> IE you say "
>>>>
>>>> I'd like to propose a new intrinsic for use in preventing optimizations
>>>> from deleting IR due to constant propagation, dead code elimination, etc."
>>>>
>>>> But why are you trying to achieve this goal?
>>>>
>>>
>>> It's a cleaner design than current solutions (as far as I'm aware).
>>>
>>
>> For what exact, well-defined goal?
>>
>
>> Trying to make certain specific optimizations not work does not seem like
>> a goal unto itself.
>> It's a thing you are doing to achieve something else, right?
>> (Because if not, it has a very well defined and well supported solution
>> - set up a pass manager that runs the passes you want)
>>
>> What is the something else?
>>
>> IE what is the problem that led you to consider this solution.
>>
>
> I apologize if I'm not being clear enough. This contrived example
> ```rust
> #[bench]
> fn bench_xor_1000_ints(b: &mut Bencher) {
>     b.iter(|| {
>         (0..1000).fold(0, |old, new| old ^ new);
>     });
> }
> ```
> is completely optimized away.
>


Great!

You should then test that this happens, and additionally write a test that
can't be optimized away, since the above is apparently not a useful
microbenchmark for anything but the compiler ;-)

Seriously though, there are basically three cases (with a bit of handwaving)

1. You want to test that the compiler optimizes something in a certain
way.  For the above example, without anything else, you actually want to
test that the compiler optimizes it away completely.
This doesn't require anything except using something like FileCheck and
producing IR at the end of Rust's optimization pipeline.

2. You want to make the above code into a benchmark, and ensure the
compiler is required to keep the number and relative order of certain
operations.
Use volatile for this.

Volatile is not what you seem to think it is, or what you may assume based
on how people use it in C/C++.
volatile in llvm has a well defined meaning:
http://llvm.org/docs/LangRef.html#volatile-memory-accesses
(See the sketch after this list for one way to apply it.)

3. You want to get the compiler to only do certain optimizations to your
code.

Yes, for that you have to either write a test harness (even if that test
harness is "your normal compiler, with certain flags passed") or use ours
;-)
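
For #2, here is a minimal sketch of the volatile approach in Rust (the
helper name `opaque` is made up for illustration, and this is not how
Rust's `test::black_box` is actually implemented): a volatile read of the
input hides its constant value from the optimizer, and a volatile read of
the result keeps it from being dead code.

```rust
use std::ptr;

// Minimal sketch of an "opaque value" helper built on volatile reads.
// The volatile read must actually happen, so the optimizer cannot assume
// the value is a known constant or that its uses are dead.
fn opaque<T: Copy>(x: T) -> T {
    unsafe { ptr::read_volatile(&x) }
}

fn main() {
    // Hiding the upper bound keeps the fold from being constant-folded away,
    // and the volatile read of the result keeps it from being eliminated
    // as dead code.
    let n = opaque(1000u32);
    let result = (0..n).fold(0u32, |old, new| old ^ new);
    opaque(result);
}
```

Note that only the volatile operations themselves are pinned; everything in
between is still fair game for every optimization the compiler knows.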


It seems like you want #2, so you should use volatile.

But don't conflate #2 and #3.

As said:
If you want the compiler to only do certain things to your code, you should
tell it to only do those things by giving it a pass pipeline that only does
those things.  Nothing else is going to solve this problem well.

If you want the compiler to do every optimization it knows to your code,
but want it to maintain the number and relative order of certain
operations, that's volatile.