[LLVMdev] [RFC] Stripping unusable intrinsics

Owen Anderson resistor at mac.com
Tue Dec 23 15:07:46 PST 2014


> On Dec 23, 2014, at 1:40 PM, Chandler Carruth <chandlerc at google.com> wrote:
> 
> If we're going to talk about what the right long-term design is, let me put out a different opinion. I used to be somewhat torn on this issue, but between this discussion and looking at the particular intrinsics in question, I'm rapidly being persuaded.
> 
> We shouldn't have any target specific intrinsics. At the very least, we shouldn't use them anywhere in the front- or middle-end, even if we have them.
> 
> Today, frontends need to emit target-specific intrinsics *and* have the optimizer be aware of them. I can see a few reasons why:
> 
> 1) Missing semantics -- the IR may not have *quite* the semantics desired and provided by the target's ISA.
> 2) Historical expectations -- the GCC-compatible builtins are named after the instructions, and the target-independent builtins lower to intrinsics, so the target-specific ones should too.
> 3) Poor instruction selection -- we could emit the logic as boring IR, but we fail to instruction select that well, so as a hack we emit the instruction directly and teach the optimizer to still optimize through it.
> 
> If we want to pursue the *right* design, I think we should be fixing these three issues and then we won't need the optimizer to be aware of any of this.

I strongly disagree with your conclusions here.  Everything you’re suggesting is rooted in three base assumptions that are not true for many clients:
	- that all source languages are basically C
	- that all programming models are more or less like C on a *nix system
	- that all hardware is basically like the intersection of X86 and ARM (“typical RISC machine”)

Consider the use case of an OpenGL shader compiler.  Its source language is not C (despite syntactic appearances), and the frontend may need to express semantics that are difficult or impossible to capture in target-independent IR.  Its programming model is not like a C compiler’s, including constructs like cross-thread derivatives, uniform vs. varying calculations, etc.  Its target instruction set is likely nothing at all like X86 or ARM, typically including an arithmetic set very different from your typical CPU’s, as well as lots of ISA-level constructs for interacting with various fixed-function hardware units.
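
To make that concrete, here is a minimal sketch in IR (the @gpu_ddx name is purely illustrative, a stand-in for whatever intrinsic a real shader backend would actually expose).  The shape of the problem is real: the result depends on values computed by *neighboring* invocations, which no sequence of target-independent instructions over %u alone can express.

    ; hypothetical stand-in for a target screen-space-derivative intrinsic
    declare float @gpu_ddx(float)

    define float @shade(float %u) {
      ; rate of change of %u across the 2x2 pixel quad; no ordinary
      ; arithmetic on %u by itself can reproduce this value
      %du = call float @gpu_ddx(float %u)
      ret float %du
    }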

Consider the less exotic use case of a DSP compiler.  DSPs typically have lots of instructions for “unusual” arithmetic operations that are intended to map to very specific use cases: many variants of rounding and/or wrapping control, lots of extending/widening/doubling operations, memory accesses with unusual stride patterns.  The entire reason a DSP exists is to deliver high computation bandwidth under tight latency constraints.  If your DSP compiler fails to make use of the exotic arithmetic operations that the user requested, the whole system has *failed* at being a DSP.
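
For a sense of scale, here is roughly what the “boring IR” route looks like for just one tiny DSP primitive, a signed saturating 16-bit add (a sketch; real frontends may emit slightly different patterns):

    define i16 @sat_add_i16(i16 %a, i16 %b) {
      ; widen, add, clamp to [-32768, 32767], narrow
      %aw  = sext i16 %a to i32
      %bw  = sext i16 %b to i32
      %sum = add i32 %aw, %bw
      %lo  = icmp slt i32 %sum, -32768
      %clo = select i1 %lo, i32 -32768, i32 %sum
      %hi  = icmp sgt i32 %clo, 32767
      %chi = select i1 %hi, i32 32767, i32 %clo
      %res = trunc i32 %chi to i16
      ret i16 %res
    }

If the backend fails to fold all of that back into the single saturating-add instruction the hardware provides, the user notices immediately, and this is one of the *simplest* operations on the menu.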

Consider the even-closer-to-home use case of vector programming.  There are three major families of vector extensions in widespread use (SSE, NEON, and Altivec), as well as many variants and lesser-known instruction sets.  And while all three agree on a small core of functionality (fadd <4 x float>!), each of them includes a large body of just plain arithmetic that is not covered by the others and is not practically expressible in target-independent IR.  Even if we added the union of their functionality to target-independent IR, we would have the reverse problem: the frontend and optimizers may produce IR that most backends have little to no hope of generating good code for.  And let’s not forget that, while the requirements here are somewhat less strict than on a DSP, our users will still be very unhappy if they write a top-half-extending-saturating-absolute-difference builtin and we give them 100 instructions of emulated gunk back.
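
To illustrate the “emulated gunk”, here is just the absolute-difference piece of such a builtin written out as plain IR on <8 x i16> (a sketch; the extending, saturating, and top-half parts would each pile on further layers that the backend then has to pattern-match back into one instruction):

    define <8 x i16> @vabd_u16(<8 x i16> %a, <8 x i16> %b) {
      ; element-wise unsigned |a - b|
      %cmp = icmp ugt <8 x i16> %a, %b
      %ab  = sub <8 x i16> %a, %b
      %ba  = sub <8 x i16> %b, %a
      %abd = select <8 x i1> %cmp, <8 x i16> %ab, <8 x i16> %ba
      ret <8 x i16> %abd
    }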

While I agree with the underlying sentiment that we should strive to minimize the intrusion of target-specific intrinsics and to compartmentalize them into their respective backends as much as possible, expecting to reach a world with no intrinsic considerations in any part of the frontend or optimizer just seems hopelessly idealistic.

—Owen