[LLVMdev] Custom Lowering and fneg

Villmow, Micah Micah.Villmow at amd.com
Wed Sep 10 16:20:08 PDT 2008



-----Original Message-----
From: llvmdev-bounces at cs.uiuc.edu [mailto:llvmdev-bounces at cs.uiuc.edu]
On Behalf Of Eli Friedman
Sent: Wednesday, September 10, 2008 3:30 PM
To: LLVM Developers Mailing List
Subject: Re: [LLVMdev] Custom Lowering and fneg

On Wed, Sep 10, 2008 at 2:35 PM, Villmow, Micah <Micah.Villmow at amd.com>
wrote:
> Generating the following LLVM IR:
>
> define void @test_unary_op_anegate(float %x, float addrspace(11)* %result) nounwind {
> entry:
>         %neg = sub float -0.000000e+000, %x             ; <float> [#uses=1]
>         store float %neg, float addrspace(11)* %result
>         ret void
> }
>
> However, when I attempt to run it through my backend, which can handle
> binary math ops correctly, I keep asserting on the following item:
>
> Cannot yet select: 017B8010: i32 = fneg 017B7E78

That seems strange... you definitely shouldn't be seeing an fneg with
an i32 result.  What sorts of changes have you made to the SPARC
backend?  Have you tried looking at the output of "llc
-view-dag-combine1-dags" and "llc -view-legalize-dags" to see where
exactly this node is getting introduced?

Thanks, I'll give this a try. I've made quite a few changes: I've introduced
many new instructions and formats for my backend and am working on getting
the various data types to work with all the new instructions.

> What I cannot figure out is why it is attempting to pattern match on
> an i32 when there are no i32's in my test program.

With the regular SPARC backend, what ends up happening is the following:
1. The float is passed in an integer register (here's where the i32
first shows up)
2. The DAG combiner notices this, and combines the
fneg(bit_convert(arg)) to bit_convert(xor(arg, sign_bit)).
3. The store of the bit_convert gets turned into an i32 store, and
there are now no more floats in the code.
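
For illustration, the combine in step 2 is the classic sign-bit trick: once
the value lives in an integer register, negating the float is just flipping
its top bit. A minimal C++ sketch of the equivalent computation, written here
purely for illustration (this is not code from the SPARC backend):

    #include <cstdint>
    #include <cstring>

    // fneg(bit_convert(x)) -> bit_convert(xor(x, sign_bit)) for an
    // IEEE-754 single-precision value carried in a 32-bit integer.
    float negateViaIntegerXor(float f) {
        std::uint32_t bits;
        std::memcpy(&bits, &f, sizeof bits);  // bit_convert f32 -> i32
        bits ^= 0x80000000u;                  // xor with the sign bit
        std::memcpy(&f, &bits, sizeof f);     // bit_convert i32 -> f32
        return f;
    }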

I removed the SPARC pattern-matching rules for the fneg instruction because
I don't want this behavior. The architecture I'm targeting has different
performance constraints: floating-point performance is better than integer
performance, so I need to turn this kind of conversion off.
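
The usual place a target declares which operations it wants kept in floating
point is its TargetLowering constructor. A hedged sketch follows:
MyTargetLowering is a made-up class name, setOperationAction itself is real
LLVM API, and marking FNEG legal may not by itself suppress the DAG combine
described above.

    // Hypothetical target setup; everything except setOperationAction,
    // ISD::FNEG and MVT::f32/f64 is a placeholder, and the exact
    // constructor signature varies with the LLVM version.
    MyTargetLowering::MyTargetLowering(TargetMachine &TM)
        : TargetLowering(TM) {
      setOperationAction(ISD::FNEG, MVT::f32, Legal);  // keep fneg as an FP node
      setOperationAction(ISD::FNEG, MVT::f64, Legal);
    }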

> I've tried a custom lowering function that lowers it to dst = sub 0, src0
> and forces it to f32, but it complains that the result and the op value
> types are incorrect.

If you have an fneg with an i32 result, something is already messed up.
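
Once the node really does have an f32 result, a custom lowering along the
lines Micah describes (fneg as a subtract from -0.0) would typically look
something like the sketch below; the class and hook names are placeholders,
and the exact SelectionDAG signatures vary between LLVM revisions.

    // Hypothetical custom lowering hook; assumes
    // setOperationAction(ISD::FNEG, MVT::f32, Custom) was set in the
    // target's TargetLowering constructor.
    SDValue MyTargetLowering::LowerFNEG(SDValue Op, SelectionDAG &DAG) {
      SDValue Src = Op.getOperand(0);                       // must be f32 here
      SDValue NegZero = DAG.getConstantFP(-0.0, MVT::f32);  // dst = -0.0 - src
      return DAG.getNode(ISD::FSUB, MVT::f32, NegZero, Src);
    }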

> On another note, are there any known examples of using TableGen with a
> typeless register class?

What do you mean?

The registers that are generated don't themselves hold any type information;
how the bits are treated depends on the instruction being generated. My
registers are 128 bits wide and can hold either 32-bit floats and ints, or
64-bit floats, in scalar or vector form. All the other target backends seem
to have a register class for each specific use case, not one register class
that can handle every case; for example, if a 256-bit register is needed, I
just use two sequential 128-bit registers. Also, my instruction set has
basically unlimited registers, and I can't really see a way to model this
either.

> Or with instruction formats where the modifiers are
> on the registers and the instructions (i.e. mul_x2 GPR0, GPR1_neg, GPR2_abs,
> which is equivalent to GPR0 = (-GPR1 * abs(GPR2)*2))?

No examples I know of, but I don't think there should be any issues
using multiclass, as long as there aren't too many possible modifiers;
see http://llvm.org/docs/TableGenFundamentals.html and various uses in
the code for how that works.

Thanks, I'll go back to that and see if I can get it to work how I want.


-Eli