[LLVMdev] FW: LLVM IR is a compiler IR
mclagett at hotmail.com
Thu Oct 6 13:24:03 PDT 2011
Sorry for the noise, but this is the message I meant to send to the list rather than replying to David directly. Unfortunately, I sent my earlier reply only to him instead of to the list.
From: mclagett at hotmail.com
To: greened at obbligato.org
Subject: RE: [LLVMdev] LLVM IR is a compiler IR
Date: Thu, 6 Oct 2011 19:44:11 +0000
Thanks for your prompt reply. My answers are below at the end of your message.
> From: greened at obbligato.org
> To: mclagett at hotmail.com
> CC: llvmdev at cs.uiuc.edu
> Subject: Re: [LLVMdev] LLVM IR is a compiler IR
> Date: Thu, 6 Oct 2011 14:02:48 -0500
> Michael Clagett <mclagett at hotmail.com> writes:
> > There's about 32 core op codes that constitute the basic instruction
> > set and I can envision mapping each of these to some sequence of LLVM
> > IR. There's also a whole lot more "extended opcodes" that are
> > executed by the same core instruction execution loop but which are
> > coded using the built-in Intel assembler and added dynamically by the
> > system. I could envision also going to the trouble of mapping each of
> > these to a sequence of LLVM IR instructions and then being able to
> > emit a series of LLVM IR sequences purely based on the sequence of vm
> > opcodes encountered in a scan of code compiled for the vm.
> > I'm hoping that such a product could then be submitted to all the LLVM
> > optimizations and result in better Intel assembly code generation than
> > what I have hand-coded myself (in my implementations of either the
> > core or the extended opcodes -- and especially in the intel code
> > sequences resulting from the use of these opcodes in sequences
> > together). So first question is simply to ask for a validation of
> > this thinking and whether such a strategy seems feasible.
> Let me make sure I'm understanding you correctly. You want to map each
> of your opcodes into an LLVM sequence and then use the LLVM optimizations
> and JIT to generate efficient native code implementations? Then you
> would invoke those implementations during interpretation?
> Or is it that you want to take a bytecode program, map it to LLVM IR,
> run it through optimizations and codegen to produce a native executable?
> Either one of these will work and LLVM seems like a good match as long
> as you don't expect the optimizations to understand the higher-level
> semantics of your opcodes (without some work by you, at least).
> I don't quite grasp any benefit to the first use, as I would just go
> ahead and generate the optimal native code sequence for each opcode once
> and be done with it. No LLVM needed at all. So I suspect this is not
> what you want to do.
It is actually the first of your alternatives above that I was hoping to achieve, and the reason I thought this would be valuable is twofold.

First, I don't have much faith in the quality or optimality of my own bytecode implementations. The 32 core opcodes tend to be high-level implementations of low-level operations on the elements of the virtual machine -- things like operations on its two built-in stacks, movements to and from the VM's address register, or accesses through the address held there. Other core opcodes include '2*' (multiply the top of stack by 2), 'COM' (one's complement of the top of stack), ';' (jump the VM instruction pointer to the address on top of the return stack), and that sort of thing.

The tops of the data and return stacks are mapped to registers EAX and EDI, respectively, and the address register is mapped to ESI. But any stack operations that involve 'push'-ing, 'pop'-ing, 'dup'-ing, 'drop'-ing, etc. end up going to the memory storage where the bulk of each stack lives. Each of these core primitives is on average around 10 Intel instructions long, and many of the extended opcodes exist precisely to bypass more costly sequences of the core primitives they replace. My general feeling was that a good SSA-based compilation mechanism like LLVM's could do a better job of maximizing the use of the Intel architecture's limited registers than I could by hand.
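To make that concrete, here is a rough sketch of what I mean (illustrative only -- the opcode names '2*', 'COM', 'DUP', etc. are from my description above, but the translation scheme and all other names are invented for the example, not my actual system). Modeling the data stack at translation time turns pure stack shuffling into SSA renaming, leaving only the real arithmetic for LLVM to optimize:

```python
# Hypothetical sketch: translating a stack-VM opcode sequence into
# SSA-style pseudo-IR. Stack traffic becomes named values that an
# optimizer could keep in registers instead of memory.

def translate(opcodes):
    stack = []    # compile-time model of the VM data stack
    code = []     # emitted SSA-style instructions (LLVM-IR-like text)
    counter = 0

    def fresh():
        nonlocal counter
        counter += 1
        return f"%t{counter}"

    for op in opcodes:
        if op.startswith("PUSH"):
            stack.append(op.split()[1])   # literal becomes an SSA constant
        elif op == "DUP":
            stack.append(stack[-1])       # no code emitted: pure renaming
        elif op == "DROP":
            stack.pop()                   # likewise free at compile time
        elif op == "2*":
            a = stack.pop()
            t = fresh()
            code.append(f"{t} = shl i32 {a}, 1")
            stack.append(t)
        elif op == "COM":
            a = stack.pop()
            t = fresh()
            code.append(f"{t} = xor i32 {a}, -1")   # one's complement
            stack.append(t)
        elif op == "+":
            b, a = stack.pop(), stack.pop()
            t = fresh()
            code.append(f"{t} = add i32 {a}, {b}")
            stack.append(t)
    return code, stack

code, stack = translate(["PUSH 7", "DUP", "2*", "+"])
for line in code:
    print(line)
```

Four VM opcodes -- each around 10 Intel instructions in the interpreted implementation -- reduce here to two IR instructions, with the DUP costing nothing at all.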
Moreover, as long as execution remains at the VM instruction level, these resources are even more constrained than usual: EDX must be preserved to hold the VM instruction pointer, and EAX, ESI and EDI must be preserved for the purposes outlined above. So there is an advantage in compiling VM opcode sequences down to assembler that can violate these invariants over a longer stretch and use the entire register set for an extended period of time. Similar considerations apply to reducing from 10 instructions to 1 or 2 those operations that at the VM level require the stack, but that at the Intel assembler level would more naturally be handled in registers.
Finally, I simply have the general feeling that more significant compiler optimizations can be effected across sequences of my VM opcode implementations than within any single one. This is only an intuition, but I hope a fairly well-founded one.
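As a toy illustration of the kind of cross-sequence optimization I mean (everything here is hypothetical, not my actual system): once an opcode sequence is visible as IR, an ordinary constant folder can collapse several primitives -- each costing ~10 interpreted instructions -- into a single constant, something no opcode-at-a-time implementation can do:

```python
# Hypothetical sketch: constant-folding across a VM opcode sequence.
# The interpreter executes each opcode separately; an optimizer seeing
# the whole sequence can evaluate it away entirely.

MASK = 0xFFFFFFFF  # model 32-bit wraparound arithmetic

def fold(opcodes):
    stack = []
    for op in opcodes:
        if op.startswith("PUSH"):
            stack.append(int(op.split()[1]))
        elif op == "+":
            b, a = stack.pop(), stack.pop()
            stack.append((a + b) & MASK)
        elif op == "2*":
            stack.append((stack.pop() * 2) & MASK)
        elif op == "COM":
            stack.append(stack.pop() ^ MASK)  # one's complement
    return stack

# Four opcodes reduce to one constant left on the stack.
print(fold(["PUSH 3", "PUSH 4", "+", "2*"]))
```

This is essentially what LLVM's instruction combining and constant propagation would do for free once the opcodes are expressed as IR, and it composes with everything else in the pipeline.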
Hope that explains my thinking better. Does that change at all your view of the benefits that I might achieve from LLVM?