[LLVMdev] Assuring ARM code quality in LLVM
Galina Kistanova
gkistanova at gmail.com
Mon Apr 11 13:24:42 PDT 2011
Hi Renato,
>I was recently investigating the build bot infrastructure and noticed
>that the arm-linux target has been failing for quite a long time. I believe
>this means that ARM code is not executed very often in LLVM tests,
>is that correct?
>We were wondering what kind of support we could give to make sure ARM
>code is correct and doesn't regress, especially before releases (I know
>it's a bit late for 2.9).
>We have thought about some routes:
> - Install some build slaves on our side, so you can test them (linux,
>bare-metal), if it's possible to cross-compile on build slaves.
It is possible.
We host several build slaves which cross-compile. The issue with cross
compilation is that the tests do not support the cross scenario well. We are
working on this, but it is moving more slowly than we would like.
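To give an idea, a cross-compiling ARM builder can be described roughly
like this in a buildbot master.cfg. The repository URL, triple and step
details below are only an illustration, not our actual configuration:

from buildbot.process.factory import BuildFactory
from buildbot.steps.source import SVN
from buildbot.steps.shell import Configure, Compile, ShellCommand

f = BuildFactory()
# Check out LLVM trunk on the slave.
f.addStep(SVN(svnurl='http://llvm.org/svn/llvm-project/llvm/trunk',
              workdir='llvm'))
# Configure a cross build: build on x86_64, generate code for ARM.
f.addStep(Configure(command=['../llvm/configure',
                             '--build=x86_64-unknown-linux-gnu',
                             '--host=arm-linux-gnueabi',
                             '--target=arm-linux-gnueabi'],
                    workdir='build'))
f.addStep(Compile(command=['make', '-j4'], workdir='build'))
# This is the step that does not yet cope well with the cross scenario:
# the test suite wants to execute the ARM binaries it has just built.
f.addStep(ShellCommand(name='check', command=['make', 'check'],
                       workdir='build', flunkOnFailure=False))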
>From what you've listed, we only have v5T and one Cortex-A8. We could
>provide more A8/A9s, M3s (mBed) boards and also models of all
>variations possible.
>Does it make any difference running them here at ARM or locally on
>OSUOSL? We could easily help them get more boards, too...
The LLVM lab seems to be the right place to put them, so people could get
access to the boards for debugging.
As a general note, my observation is that slow builds get less
attention from the community than fast ones.
This could be an issue with small boards.
I have tried native builds on a BeagleBoard; it took about 4
hours to build without running the tests, which is a bit too slow.
We will need to find a way to make ARM builds faster.
Thanks
Galina
On Fri, Apr 8, 2011 at 5:46 AM, Renato Golin <renato.golin at arm.com> wrote:
> On 8 April 2011 11:21, Xerxes Rånby <xerxes at zafena.se> wrote:
>> Hope this will help fix the regressions
>
> Hi Xerxes,
>
> I see you're the owner of that board, thanks for the detailed
> description of the tests.
>
> From what you say, I think that the board itself is serving its purpose,
> and 2.9 only got that regression because it wasn't fixed in time.
>
> My intention was to find out what we (ARM) can do to help LLVM repeat
> your success on all tests (clang, llvm, llvm-gcc, dragonegg), at least
> before each release. How can we make ARM a first-class target, for
> which a regression such as the one you found is a release blocker?
>
>
> Duncan,
>
> I'm not sure my connecting to those boards will help much, since I
> know very little about the build-bot infrastructure. However, I'd be
> happy to help set them up to run the tests we need (with some help
> in the beginning).
>
> From what you've listed, we only have v5T and one Cortex-A8. We could
> provide more A8/A9s, M3s (mBed) boards and also models of all
> variations possible.
>
> Does it make any difference running them here at ARM or locally on
> OSUOSL? We could easily help them get more boards, too...
>
>
> On a different topic,
>
> From what I understood, these tests run entirely on the target
> (compilation of LLVM, unit tests, etc.). So, for bare-metal tests that
> wouldn't work. We'd need a staging server that would compile
> for all platforms and run the tests (maybe just a subset) on the
> hardware via USB (mBeds) or JTAG on most other boards, or, even better,
> on models directly.
>
> Do you think this is doable with the build bot infrastructure?
>
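It should be doable. Roughly, the staging builder would cross-compile on
the slave and then push the binaries to the board for execution. A sketch
of what I mean; the hostname, paths and the deploy step below are purely
hypothetical:

from buildbot.process.factory import BuildFactory
from buildbot.steps.shell import ShellCommand

f = BuildFactory()
# Cross-compile the tests on the staging slave.
f.addStep(ShellCommand(name='cross-compile',
                       command=['make', '-j4',
                                'CC=arm-none-linux-gnueabi-gcc']))
# Copy a freshly built test binary to the board over the network;
# for mbed or JTAG targets this step would drive the flashing tool instead.
f.addStep(ShellCommand(name='deploy',
                       command=['scp', 'out/test-suite.bin',
                                'beagleboard:/tmp/']))
# Run on the target and let the exit status decide pass/fail.
f.addStep(ShellCommand(name='run-on-target',
                       command=['ssh', 'beagleboard',
                                '/tmp/test-suite.bin']))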
> cheers,
> --renato
>
> _______________________________________________
> LLVM Developers mailing list
> LLVMdev at cs.uiuc.edu http://llvm.cs.uiuc.edu
> http://lists.cs.uiuc.edu/mailman/listinfo/llvmdev
>