[LLVMdev] Upstream PTX backend that uses target independent code generator if possible

David A. Greene greened at obbligato.org
Tue Aug 10 12:02:23 PDT 2010

Che-Liang Chiou <clchiou at gmail.com> writes:

> I surfed their code, and it seems that they didn't use code generator.
> That means their design should be similar to the CBackend or CPPBackend.
> So I guess it can't generate some machine instructions like MAD,
> and there are some PTX instruction-set features that are hard to exploit
> without using the code generator.
> But I didn't study their code thoroughly, so I might be wrong about this.
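To illustrate the MAD point (a hand-written sketch based on the PTX ISA, not taken from their code): a straightforward textual translator would emit the multiply and the add of `a * b + c` as two instructions, while the target-independent code generator can pattern-match the pair into a single fused instruction:

```
// Hypothetical PTX sketch (assumed register names %ra, %rb, %rc):
// naive, instruction-per-IR-operation translation
mul.lo.s32  %r1, %ra, %rb;     // %r1 = a * b
add.s32     %r2, %r1, %rc;     // %r2 = %r1 + c

// what instruction selection could fold this into
mad.lo.s32  %r2, %ra, %rb, %rc;  // %r2 = a * b + c in one instruction
```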

I haven't had a chance to look at it yet either.

>>> I have tested this backend to translate a work-efficient parallel scan
>>> kernel ( http://http.developer.nvidia.com/GPUGems3/gpugems3_ch39.html
>>> ) into PTX code.  The generated PTX code was then executed on real
>>> hardware, and the result is correct.
>> How much of the LLVM IR does this support?  What's missing?
> Have to add some intrinsics, calling conventions, and address spaces.
> I would say these are relatively small changes.

Are you generating masks at all?  If so, how are you doing that?
Similarly to how the ARM backend does predicates (handling all the
representation, etc. in the target-specific codegen)?

I have been wanting to see predicates (vector and scalar) in the
LLVM IR for a long time.  Perhaps the PTX backend is an opportunity
to explore that.
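As a concrete example of what such a backend would need to produce (again a hand-written sketch, not generated output): PTX guard predicates allow branch-free execution of a simple conditional such as `if (a > b) c = a - b;`:

```
// Hypothetical PTX sketch (assumed registers %ra, %rb, %rc):
setp.gt.s32   %p1, %ra, %rb;   // %p1 = (a > b)
@%p1 sub.s32  %rc, %ra, %rb;   // sub executes only in lanes where %p1 is true
```

Today the closest LLVM IR can come is a `select` or an explicit branch, so the predicate itself lives only in the target-specific representation, much as in the ARM backend.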

