[LLVMdev] PIC16 removal details

Jim Grosbach grosbach at apple.com
Mon Sep 26 12:38:32 PDT 2011

On Sep 26, 2011, at 11:29 AM, Dan Gohman wrote:

> On Sep 21, 2011, at 4:51 PM, Matthew Hilt wrote:
>> The target in this case is 8-bit and accumulator-based, but it is Von Neumann; so - good to know it's not *quite* the "Trifecta of Doom".  LLVM offers some very attractive features for our usage, and it would be disappointing to abandon it as an option altogether.  Given that the alternative is writing the compiler from scratch, is the consensus that doing so would be a better path forward for us than a new LLVM backend?
> It's hard to say.  To most of us, "8-bit accumulator based" suggests a heavy load of constraints.
> The more constraints you have, the harder it's going to be to shoehorn a large, complex,
> and naive (to your constraints) piece of software into the middle of everything, interposed
> between the programmer and the hardware.  It will require more than just extending LLVM to
> your needs.  It'll require fighting against LLVM actively working against your needs.

Of those constraints, Harvard vs. Von Neumann is the biggest, so being Von Neumann is good news.

The PIC16 port chose not to use the mem2reg and reg2mem passes in the backend, which left many of LLVM's optimization passes severely hamstrung, not least because LLVM IR is SSA for register values only, not memory values.

I would suggest instead investing the effort in target-specific code, both in instruction selection and in dedicated machine-level passes, to optimize for the odder characteristics of this target. That would be immensely helped if you can fashion a good abstraction for what LLVM considers to be registers, perhaps mapping them onto a fast-access memory bank or something similar. The more you can do that, the more you'll be leveraging the existing codebase. A late machine function pass should then be able to go through the code and clean up redundant copies, loads, and stores.

For example, the PIC16 could handle banking by having every global access include setting the bank bits, then providing the appropriate target hooks to let the machine dead-code-elimination pass remove any extras. Alternatively, it could insert no banking instructions at all and require the linker to insert them where needed after data memory has been laid out (more work, but a superior end result).

Personally, I would treat the question of whether LLVM's concept of registers can be abstracted cleanly onto this target as the go/no-go test for a port. If I had a solution for that which wasn't a force-fit, I'd give a true LLVM port a try; otherwise, I'd lean towards alternatives. That said, I've written compilers for these sorts of architectures, and there are no easy answers no matter which way you go.

One middle-ground alternative you may want to consider is using clang, but not the LLVM back end. One could, in principle, hook up a lowering from clang's ASTs to something other than LLVM IR and feed that into a custom target-specific back end. That way you'd at least have all the awesome parsing and semantic analysis tools clang offers.

