[LLVMdev] [GSoC] "Microsoft Direct3D shader bytecode backend" proposal

Charles Davis cdavis at mymail.mines.edu
Tue Mar 29 09:02:18 PDT 2011


On 3/29/11 5:14 AM, Justin Holewinski wrote:
> On Mon, Mar 28, 2011 at 9:22 PM, Charles Davis <cdavis at mymail.mines.edu> wrote:
> 
>     Here's the other of my proposals for this year's Google Summer of Code.
>     (The first is on cfe-dev.) It was suggested to me by Dan Kegel (from the
>     Wine project, they really want this).
> 
>     Title: Microsoft Direct3D shader bytecode backend
> 
>     Abstract:
> 
>     There is a distinct lack of open-source frameworks for compiling HLSL,
>     the shader language used by Direct3D, into bytecode that D3D can
>     understand. Currently, the only such framework is Ryan Gordon's
>     MojoShader, whose HLSL compiler component is still under heavy
>     development. By utilizing LLVM, it may be possible to generate
>     high-performance shader code from HLSL, just as Apple is known to do for
>     GLSL. The first step is a backend to generate D3D bytecode from LLVM IR.
> 
>     Content:
> 
>     1. Proposal
> 
>     Currently, there are very few open-source frameworks for compiling
>     High-Level Shader Language (HLSL) into shader bytecode that can be
>     understood by Direct3D, a popular interface to 3D hardware in games.
>     This is because Microsoft provides its own interface in its D3DX (and
>     underlying D3DCompiler) DLLs. Most games therefore end up using D3DX to
>     compile their Direct3D shaders.
> 
>     Microsoft seems to have paid no attention to optimization of the
>     resulting shader code, though. With LLVM, we can do better.
> 
> 
> Do you have any sources for this?  In my experience, fxc is able to do
> some clever tricks such as replacing if-then-else conditions with
> predicated instructions and swizzles.
I've heard rumors that drivers tend not to like highly optimized bytecode
as input--but that argues against optimizing at all, so I could take that
part out.
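If I understand the trick you mean, fxc flattens small branches into
per-component selects; a sketch of the idea (mine, not actual fxc output):

; HLSL: color = (x >= 0) ? a : b;
; ps_2_0 has no real dynamic branching, so emit a per-component select:
cmp r0, r1, r2, r3    ; r0 = (r1 >= 0) ? r2 : r3, componentwise

An LLVM backend would have to do the same kind of if-conversion when
lowering control flow, since SM1-3 mostly lacks real branch instructions.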
>  
> 
> 
>     By using LLVM's optimizers, programs can potentially generate
>     high-performance shader code that works well on the platform on which
>     they are running. In addition, an open-source implementation would allow
>     the compiler to be embedded in many different applications--not the
>     least of which is Wine, which is in desperate need of a working shader
>     compiler.
> 
> 
> I'm a bit confused how Wine would take advantage of a Direct3D bytecode
> compiler.  Would they not want to re-compile Direct3D bytecode (most
> often shipped with games in binary form instead of HLSL source) to
> something an OpenGL implementation on *nix could handle?
They already do that. What they want right now (among other things) is a
D3DCompiler_*.dll implementation (see http://wiki.winehq.org/HLSLCompiler ).
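Concretely, such a DLL has to export Microsoft's compiler entry points,
the central one being D3DCompile() (prototype as in d3dcompiler.h):

HRESULT WINAPI D3DCompile(const void *pSrcData,        /* HLSL source text */
                          SIZE_T SrcDataSize,
                          const char *pSourceName,
                          const D3D_SHADER_MACRO *pDefines,
                          ID3DInclude *pInclude,
                          const char *pEntrypoint,     /* e.g. "main" */
                          const char *pTarget,         /* profile: "vs_3_0", "ps_2_0", ... */
                          UINT Flags1,
                          UINT Flags2,
                          ID3DBlob **ppCode,           /* receives the compiled bytecode */
                          ID3DBlob **ppErrorMsgs);

The idea is that an HLSL frontend plus LLVM's optimizers plus this backend
would sit behind that call, filling *ppCode with SM1-3 bytecode for the
requested target profile.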
>  
> 
> 
>     The first step to this HLSL compiler is an LLVM backend for generating
>     D3D shader bytecode from LLVM IR. Therefore, I intend to implement such
>     a backend. Because implementing a full backend is a daunting task, I
>     intend to implement just enough to get simple examples working with
>     Wine's Direct3D implementation.
> 
> 
> Could you be a bit more specific on your goals?  A few quick questions
> that come to mind are:
> 
> 1. What shader models will you aim to support?
SM1-3. SM4 was a huge break from SM3 (so I've gathered from reading Wine
source; all the opcodes seem to be different), so I'll do that later.
> 2. What types of shaders will you aim to support?  e.g. vertex, pixel,
> geometry, hull
Since I'm only doing up to SM3, vertex and pixel shaders only.
> 3. How do you propose to handle vertex buffer semantics?  e.g.
> POSITION0, TEXCOORD0, NORMAL, etc.
I can think of several ways:

- Make the frontend declare special symbols, and handle these symbols
specially in the backend
- Decorate the struct members with metadata

I'm leaning towards the latter one.
> 
> Perhaps a simple example would be nice, showing a very simple LLVM IR
> input and the (proposed) bytecode output.
How about this example (adapted from
http://www.neatware.com/lbstudio/web/hlsl.html)?

struct a2v {
  float4 position : POSITION;
};

struct v2p {
  float4 position : POSITION;
};

void main(in a2v IN, out v2p OUT, uniform float4x4 ModelViewMatrix) {
  OUT.position = mul(IN.position, ModelViewMatrix);
}

This would generate something like this (assuming I took the metadata
route):

%struct.a2v = type { <4 x float> !0 }  ; metadata on a member is proposed syntax
%struct.v2p = type { <4 x float> !0 }

!0 = metadata !{ <something that indicates this is a position attribute> }

declare <4 x float> @llvm.d3d.mul.float4(<4 x float>, [4 x <4 x float>])

define void @main(%struct.a2v* %IN, %struct.v2p* %OUT,
                  [4 x <4 x float>] %ModelViewMatrix) {
  %OUT.Position = getelementptr %struct.v2p* %OUT, i32 0, i32 0
  %IN.Position = getelementptr %struct.a2v* %IN, i32 0, i32 0
  %0 = load <4 x float>* %IN.Position
  ; Note the intrinsic for matrix multiplies
  %1 = call <4 x float> @llvm.d3d.mul.float4(<4 x float> %0,
                                             [4 x <4 x float>] %ModelViewMatrix)
  store <4 x float> %1, <4 x float>* %OUT.Position
  ret void
}

and should generate assembly to the effect of:

vs_1_1
dcl_position v0     ; bind input register v0 to the POSITION semantic
m4x4 oPos, v0, c0   ; transform v0 by the matrix in c0-c3 (expands to four dp4s)

Thanks for your input.

Chip


