[LLVMdev] Assuring ARM code quality in LLVM

Renato Golin renato.golin at arm.com
Fri Apr 8 05:46:50 PDT 2011

On 8 April 2011 11:21, Xerxes Rånby <xerxes at zafena.se> wrote:
> Hope this will help fix the regressions

Hi Xerxes,

I see you're the owner of that board, thanks for the detailed
description of the tests.

From what you say, I think that the board itself is serving its
purpose, and 2.9 only got that regression because it wasn't fixed in
time.

My intention was to find out what we (ARM) can do to help LLVM repeat
your success on all tests (clang, llvm, llvm-gcc, dragonegg), at least
before each release. How can we make ARM a first-class target, where a
regression such as the one you found is a release blocker?


I'm not sure me connecting to those boards will help much, since I
know very little about the build-bot infrastructure. However, I'd be
happy to help set them up to run the tests we need (with some help in
the beginning).

From what you've listed, we only have v5T and one Cortex-A8. We could
provide more A8/A9s, M3 (mBed) boards and also models of all possible
variations.

Does it make any difference running them here at ARM or locally on
OSUOSL? We could easily help them get more boards, too...

On a different topic,

From what I understand, these tests run completely on the target
(compilation of LLVM, unit tests, etc.), so that wouldn't work for
bare-metal targets. We'd need a staging server that would compile for
all platforms and run the tests (maybe just a subset) on the hardware,
via USB (mBeds) or JTAG on most other boards, or even better, on
models directly.
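To make the idea concrete, a staging builder might look something like
the buildbot factory fragment below. This is only a sketch under
assumptions: the paths, the cortex-m3 make variables, the board name
and the run-on-target.sh helper are all hypothetical, just to
illustrate the cross-compile-then-deploy split.

```python
# Hypothetical buildbot master.cfg fragment (buildbot 0.8.x style):
# the builder cross-compiles on the staging server, then a helper
# script pushes the binaries to the board (USB for mBed, JTAG for
# most others) and runs a test subset there. Names and paths are
# illustrative assumptions, not an existing configuration.
from buildbot.process.factory import BuildFactory
from buildbot.steps.shell import ShellCommand

factory = BuildFactory()

# Step 1: cross-compile the test subset on the staging server.
factory.addStep(ShellCommand(
    name='cross-compile',
    command=['make', '-C', 'test-suite',
             'CC=arm-none-eabi-gcc', 'TARGET=cortex-m3'],
    description='cross-compiling tests'))

# Step 2: deploy to the board and run the tests there. For mBed
# this could copy over USB mass storage; other boards would need a
# JTAG probe driven by the (hypothetical) run-on-target.sh script.
factory.addStep(ShellCommand(
    name='run-on-target',
    command=['./run-on-target.sh', '--board', 'mbed-board',
             '--tests', 'subset.lst'],
    description='running tests on target'))
```

The point is that the slave itself never has to run on the target;
only the second step touches the hardware.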

Do you think this is doable with the buildbot infrastructure?

