[LLVMdev] [Mesa3d-dev] Folding vector instructions

Alex alex.lavoro.propio at gmail.com
Wed Dec 31 06:15:09 PST 2008


Chris Lattner wrote:
> The direction we're going is to expose more and more vector operations in
> LLVM IR.  For example, compares and select are currently being worked on,
> so you can do a comparison of two vectors which returns a vector of bools,
> and use that as the compare value of a select instruction (selecting between
> two vectors).  This would allow implementing min and a variety of other
> operations and is easier for the codegen to reassemble into a first-class
> min operation etc.
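For what it's worth, here is a sketch of what I imagine such IR would look like once vector compare and select land (assuming the vector forms behave like their scalar counterparts, element-wise):

```llvm
; Hypothetical sketch: element-wise min of two <4 x float> vectors,
; using the vector fcmp + select that Chris describes.
define <4 x float> @vec_min(<4 x float> %a, <4 x float> %b) {
  ; compare a < b element-wise, yielding a vector of bools (<4 x i1>)
  %cmp = fcmp olt <4 x float> %a, %b
  ; select element-wise between a and b using the bool vector
  %min = select <4 x i1> %cmp, <4 x float> %a, <4 x float> %b
  ret <4 x float> %min
}
```

A codegen that recognizes this pattern could then reassemble it into a single native MIN instruction.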

With the motivation of making it easier for the codegen to reassemble first-class
operations, do you also mean that there will be vector versions of add, sub, and
mul, which are natively supported by many vector GPUs?
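As I read the LangRef, the arithmetic instructions already accept vector operands, so something like a multiply-add over <4 x float> should be expressible today:

```llvm
; fmul/fadd already take vector types in LLVM IR (per my reading of
; the LangRef): a simple vertex-wide multiply-add.
define <4 x float> @mad(<4 x float> %a, <4 x float> %b, <4 x float> %c) {
  %mul = fmul <4 x float> %a, %b
  %sum = fadd <4 x float> %mul, %c
  ret <4 x float> %sum
}
```

Please correct me if the vector GPU case needs more than this.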

Stephane Marchesin wrote:
> So what remains are chips that are natively vector GPUs. The question
> is more whether we'll be able to have llvm build up vector
> instructions from scalar ones

The reason I started this thread was to look for example code that does this.
Is there already any backend in LLVM that does it? It does not look easy
to me.

Zack Rusin wrote:
> I think Alex was referring here to an AOS layout which is completely not
> ready.
> Actually currently the plan is to have essentially a "two pass" LLVM IR. I
> wanted the first one to never lower any of the GPU instructions so we'd have
> intrinsics or maybe even just function calls like gallium.lit, gallium.dot,
> gallium.noise and such. Then gallium should query the driver to figure out
> which instructions the GPU supports and runs our custom llvm lowering pass
> that decomposes those into things the GPU supports.

If I understand correctly, that means gallium will dynamically build a lowering
pass by querying the driver for its capabilities (the instructions the GPU
supports)? Wouldn't it be a better approach to have a dedicated lowering pass
for each GPU, which gallium simply uses?

> Essentially I'd like to
> make as many complicated things in gallium as possible to make the GPU llvm
> backends in drivers as simple as possible and this would help us make the
> pattern matching in the generator /a lot/ easier (matching gallium.lit vs 9+
> instructions it would be decomposed to) and give us a more generic GPU
> independent layer above. But that hasn't been done yet, I hope to be able to
> write that code while working on the OpenCL implementation for Gallium.
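To make sure I follow: the first-pass IR would then carry calls like the one below (the names are hypothetical, just following your gallium.* naming), which the driver-specific pass either keeps, if the GPU has a native LIT instruction, or decomposes otherwise:

```llvm
; Hypothetical first-pass IR: lit() kept as a single call so the
; driver's lowering pass can pattern-match it as one unit instead of
; matching the 9+ instructions it would otherwise decompose to.
declare <4 x float> @gallium.lit(<4 x float>)

define <4 x float> @shade(<4 x float> %v) {
  %r = call <4 x float> @gallium.lit(<4 x float> %v)
  ret <4 x float> %r
}
```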

This two-pass approach is what I am currently taking to write a compiler for a
GPU (sorry, but I am not allowed to reveal its name).

I don't work on gallium directly. I am writing a frontend which converts
vs_3_0 to LLVM IR; that's why I referenced both the SOA and AOS code. I think
the NDA will allow me (to be confirmed) to contribute only this frontend, but
neither the LLVM backend nor the lowering pass for this GPU.

What do you plan to do with the SOA and AOS paths in gallium?

(1) Will they eventually be developed independently, so that for a scalar/SIMD
GPU the SOA path is used to generate LLVM IR, and for a vector GPU the AOS
path is used?

(2) At present the difference between the SOA and AOS paths is not only the
layout of the input data. The AOS path seems more complete to me, though Rusin
has said that it is completely not ready and not used in gallium. Is there a
plan to add support for functions/branches and the LLVM IR
extract/insert/shuffle operations to the SOA code?
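By extract/insert/shuffle I mean operations like the splat-and-multiply below, which the AOS path naturally wants (a rough sketch; the layouts in the comments are just how I picture them):

```llvm
; AOS: one value per vertex -- components interleaved in the vector:
;   %v  = <x, y, z, w>
; SOA: one value per component -- four vertices processed at once:
;   %xs = <x0, x1, x2, x3>, %ys = <y0, y1, y2, y3>, etc.
define <4 x float> @aos_scale(<4 x float> %v, float %s) {
  ; AOS path: splat the scalar across a vector via insert + shuffle,
  ; then multiply the whole vertex at once
  %s0 = insertelement <4 x float> undef, float %s, i32 0
  %ss = shufflevector <4 x float> %s0, <4 x float> undef,
                      <4 x i32> zeroinitializer
  %r  = fmul <4 x float> %v, %ss
  ret <4 x float> %r
}
```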

By the way, is there any open source frontend which converts GLSL to LLVM IR?

Alex.
