[LLVMdev] Xilinx zynq-7000 (7030) as a Gallium3D LLVM FPGA target

Luke Kenneth Casson Leighton luke.leighton at gmail.com
Sun Aug 21 08:59:06 PDT 2011


On Sun, Aug 21, 2011 at 5:27 AM, Nick Lewycky <nicholas at mxc.ca> wrote:

>>> The way in which Gallium3D targets LLVM is that it waits until it
>>> receives the shader program from the application, then compiles that
>>> down to LLVM IR.

>>  nick.... the Zynq-7000 series of Dual-Core Cortex-A9 800MHz 28nm CPUs
>> have an on-board Series 7 Artix-7 or Kintex-7 FPGA (depending on the

>>  so - does that change things at all? :)
>
> No, because that doesn't have:
>  - nearly enough gates. Recall that a modern GPU has more gates than a
> modern CPU, so you're orders of magnitude away.
>  - quite enough I/O bandwidth. Assuming off-chip TMDS/LVDS (sensible, given
> that neither the ARM core nor the FPGA have a high enough clock rate),

 well the Series 7 parts have serial transceivers adjustable up to
6.6Gb/s, and there are cases where people have actually implemented DVI /
HDMI over them.  but yes, an external TFP410 transmitter would be a good
idea :)
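 (a quick sanity-check on that claim: TMDS, as used by DVI / HDMI,
serialises each 8-bit colour channel to 10 bits, so each data lane runs at
10x the pixel clock.  the pixel clocks below are the standard CEA-861
figures, assumed here rather than taken from any Zynq datasheet - and they
suggest a 6.6Gb/s transceiver has headroom even for 1080p60:)

```python
# rough sanity check: can a 6.6 Gb/s serial transceiver drive one TMDS
# (DVI/HDMI) data lane directly?  TMDS serialises 8 bits to 10, so each
# lane runs at 10x the pixel clock.
TRANSCEIVER_LIMIT_GBPS = 6.6

def tmds_lane_rate_gbps(pixel_clock_mhz):
    """Bit rate of one TMDS data lane, in Gb/s, for a given pixel clock."""
    return pixel_clock_mhz * 10 / 1000.0

# standard CEA-861 pixel clocks (assumed figures for illustration)
rate_720p60  = tmds_lane_rate_gbps(74.25)   # 720p60  -> 0.7425 Gb/s/lane
rate_1080p60 = tmds_lane_rate_gbps(148.5)   # 1080p60 -> 1.485  Gb/s/lane

print(rate_720p60, rate_1080p60)  # both comfortably under 6.6 Gb/s
```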

> the
> limiting I/O bandwidth is between the GPU and its video memory. That product
> claims it can do DDR3, which is not quite the same as GDDR5.

 ahh - given that OGP is creating a graphics card with a PCI (33MHz)
bus, 256MB of DDR2 RAM and a Spartan-3 (4000), i think that trying to set
sights on "the absolute latest and greatest in GPU technology" is a
little ambitious :)
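 (to put rough numbers on the DDR3-vs-GDDR5 point: peak bandwidth is just
transfer rate times bus width.  the configurations below - a 32-bit
DDR3-1066 interface versus a 256-bit GDDR5 interface at 4GT/s per pin -
are illustrative guesses for the era, not figures from any datasheet:)

```python
# back-of-envelope peak memory bandwidth: transfer rate x bus width.
def bandwidth_gb_s(transfers_mt_s, bus_width_bits):
    """Peak bandwidth in GB/s for a given transfer rate (MT/s) and bus width."""
    return transfers_mt_s * 1e6 * (bus_width_bits / 8) / 1e9

ddr3  = bandwidth_gb_s(1066, 32)    # e.g. DDR3-1066 on a 32-bit bus
gddr5 = bandwidth_gb_s(4000, 256)   # e.g. 4 GT/s GDDR5 on a 256-bit bus

print(ddr3, gddr5)   # roughly 4.3 GB/s vs 128 GB/s - a ~30x gap
```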

 but, you know what's just incredible?  that this debate can be had
*at all*.  if this CPU didn't exist, there wouldn't _be_ the
possibility of an affordable platform on which to even *try* to create
a GPU engine.

> You could try to trade-off between FPGA and the ARM core, but the ARM is
> only running at 800MHz. Maybe it's possible to get competitive performance,
> but it doesn't sound like a promising start.

 the goal isn't to get competitive performance - the goal is to get
to the starting line from way behind it.  i'd settle for "good enough"
performance: gimme 1280x720, 25fps, 15-bit colour, and i'd be happy.
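 (and that target is deliberately modest - the raw scan-out bandwidth it
needs is a quick multiplication, and comes out well under what even a
32-bit DDR3 interface can supply:)

```python
# raw framebuffer scan-out bandwidth for the "good enough" target mode:
# width x height x frame rate x bits per pixel.
def scanout_mbit_s(width, height, fps, bits_per_pixel):
    """Raw scan-out bandwidth in Mbit/s (no blanking intervals counted)."""
    return width * height * fps * bits_per_pixel / 1e6

target = scanout_mbit_s(1280, 720, 25, 15)
print(target)   # 345.6 Mbit/s, i.e. about 43 MB/s
```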

>> shader program.  i've seen papers, for example: a Seoul University
>> student won 3rd prize in a competition run by Xilinx for managing
>> to implement parts of OpenGL ES 1.1 in an FPGA, by porting

> Wow, that must've been a lot of work. This is what I was alluding to in my
> second paragraph of the previous email, except that I didn't realize someone
> had actually done it.
>
> Of course, OpenGL ES 1.1 is still fixed-function hardware. That's a much
> easier problem, and not useful beyond current-gen cell-phones.

 right.  ok.  understood.

>>  anyway, yes: what's possible, and where can people find out more
>> about how gallium3d uses LLVM?
>
> Ask the Mesa/Gallium folks, we really just get the occasional bug report.
> Personally, I follow zrusin.blogspot.com for my Gallium3d news.

 ok.

>>  and (for those people not familiar
>> with 3D), why is the shader program not "static" i.e. why is a
>> compiler needed at runtime at _all_? (if there's an answer already
>> somewhere on a wiki, already, that'd be great).
>>
>>  and: would moving this compiler onto the FPGA (so that it interprets
>> the shader program), making it an interpreter instead, be a viable
>> option?  just throwing ideas out, here.
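 (partly answering my own question: the shader isn't "static" because it
only exists at runtime - the application hands the driver shader *source*
through calls like glShaderSource() / glCompileShader(), and the driver
has to translate it for whatever hardware it happens to be running on.
a loose analogy in python - emphatically not gallium3d's actual pipeline,
just illustrating that the "program" arrives as data at runtime:)

```python
# analogy only: the "shader" arrives as a source string at runtime, so
# the driver has no choice but to compile (or interpret) it on the spot.
shader_source = "0.5 * r + 0.5 * g"     # supplied by the application

# "compile" the runtime-supplied program once...
code = compile(shader_source, "<shader>", "eval")

# ...then execute it per-pixel, like a driver running a compiled shader
def run_shader(r, g):
    return eval(code, {"r": r, "g": g})

print(run_shader(1.0, 0.0))   # -> 0.5
```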
>
> I'm sure there's many ways to produce an open graphics chip and card, and
> I'm sure there's even ways to use LLVM to simplify the hardware design, and
> maybe even the drivers. The point I'm trying to make is that This Stuff Is
> Really Hard and there's no prepackaged answer that us folks on a mailing
> list are going to be able to give you based on a list of products. It's
> certainly not a situation of "push a button" and compile a new GPU.

 shame :)

> There's
> a lot of avenues to try, and I don't know enough to tell you which one to
> pursue.

 and... you know what? at least some affordable re-programmable
hardware gives a wider audience of potential free software developers
the opportunity to even try.

 even if the Zynq-7000 in its current form isn't good enough to be
"good enough" for 3D, there will be new versions down the line.  22nm
and so on.

> If you have questions about LLVM itself, we'll happily do our best to answer
> them!

 thanks nick

 l.
