[LLVMdev] LLVM and newlib progress

John Criswell criswell at cs.uiuc.edu
Thu Nov 9 11:16:34 PST 2006


Pertti Kellomäki wrote:
> This is in response to Reid's and John's comments about
> intrinsics.
>
> The setting of the work is a project on reconfigurable
> processors using the Transport Triggered Architecture (TTA)
> <http://en.wikipedia.org/wiki/Transport_triggered_architecture>.
> For the compiler this means that the target architecture
> is not fixed, but rather an instance of a processor template.
> Different instances of the template can vary in the mix of
> function units and their connectivity. In addition to the
> source files, the compiler takes a processor description
> as input.
>
> In practical terms this means that there is not much point
> in keeping native libraries around, as the processor instances
> are not compatible with each other. There is also no operating
> system to make calls to. I/O is done by fiddling bits in
> function units.
>   
> The plan is to use LLVM as a front end, and write a back
> end that maps LLVM byte code to the target architecture.
> One of the main issues is instruction scheduling, in order to
> utilize the instruction level parallelism that TTAs potentially
> provide.
>
> Much of libc consists of convenience functions expressible in plain C,
> so my plan is to compile newlib to byte code libraries, which
> would be linked with the application at the byte code level.
> The linked byte code would then be passed to the back end and
> mapped to the final target.
>
> The only issue is how to deal with system calls. The idea
> of using intrinsic functions for them comes from the way
> memcpy etc. are currently handled in LLVM. At the LLVM byte code level,
> the libraries would contain calls to the intrinsic functions in
> appropriate places, and upon encountering them the back end
> would generate the corresponding code for the target.
>   
Right.  This is a very straightforward way to do it.  It's just that,
on a conventional system, using an external native code library is
quicker and easier than adding code generation for a new intrinsic.
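
For example, a newlib-style _write stub could boil down to a call to
whatever primitive you add.  (Just a sketch; __tta_write_byte below is
a made-up placeholder for the intrinsic or specially handled external
function you end up defining.)

    /* Hypothetical primitive; the back end would recognize this call
     * and lower it to the target's I/O instructions. */
    extern void __tta_write_byte(unsigned char c);

    int _write(int fd, const char *buf, int nbytes)
    {
        int i;
        (void)fd;               /* no file descriptors on bare metal */
        for (i = 0; i < nbytes; i++)
            __tta_write_byte((unsigned char)buf[i]);
        return nbytes;
    }

Everything except the call to __tta_write_byte is plain C and can sit
in the byte code library; only the primitive itself needs back end
support.
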
> If there are better options, I'm all ears. I have not committed
> a single line of code yet, so design changes are very easy to do ;-)
> We do have a linker for the target architecture, so I suppose
> it would be possible to leave calls to the library functions
> involving I/O unresolved at the byte code level, and link those
> functions in at the target level. At first glance intrinsics
> seem to be less hassle, but I could well be wrong.
>
> In practice I/O will probably boil down to reading a byte and writing
> a byte, mainly for debugging purposes. My understanding is that
> the real I/O will take place via dual-port memories, DMA, or
> some other mechanism outside of libc.
>   
So, let me see if I understand this right:

First, it sounds like you're programming on the bare processor, so your
I/O instructions are either special processor instructions or volatile
loads/stores to special memory locations.  In that case, you would not
use a "system call" intrinsic; you would use an ioread/iowrite intrinsic
(these are similar to load/store and are briefly documented in the
LLVA-OS paper).  If you're doing memory-mapped I/O, you could probably
use LLVM volatile load/store instructions and not have to add any
intrinsics.
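
For instance, if the I/O registers show up at fixed addresses, plain C
along the following lines (the address is invented for illustration)
already compiles down to volatile load/store instructions in the byte
code, with no new intrinsics:

    /* Hypothetical memory-mapped I/O port; 0xFFFF0000 is a made-up
     * address.  The volatile qualifier makes the front end emit
     * volatile load/store instructions for these accesses. */
    #define TTA_IO_PORT ((volatile unsigned char *)0xFFFF0000)

    static void io_write_byte(unsigned char c)
    {
        *TTA_IO_PORT = c;       /* volatile store */
    }

    static unsigned char io_read_byte(void)
    {
        return *TTA_IO_PORT;    /* volatile load */
    }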

Second, you could implement these "intrinsics" either as external
functions or as LLVM intrinsic functions specially handled by the code
generator.  I believe you're saying that the encoding of the I/O
instructions changes from one processor instance to another.  If that
is the case, then you are right: adding an intrinsic and having the
LLVM code generator generate the right instructions would probably be
easiest.
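
Note that the library byte code looks the same either way; the
difference is only in how the call gets resolved.  Roughly (again with
the made-up __tta_write_byte name):

    /* Identical call site in both cases. */
    extern void __tta_write_byte(unsigned char c);

    void debug_putc(char c)
    {
        __tta_write_byte((unsigned char)c);
    }

    /* External function route: an implementation of __tta_write_byte
     * is linked in later by your target-level linker.  Intrinsic
     * route: the code generator recognizes the call and emits the
     * right I/O instruction encoding for the particular processor
     * instance. */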

Do I understand things correctly?

-- John T.