[llvm-dev] Submitting an Experimental Target

Renato Golin via llvm-dev llvm-dev at lists.llvm.org
Mon Dec 27 15:15:43 PST 2021


On Sat, 25 Dec 2021 at 20:51, Nikita Ermoshkin via llvm-dev <
llvm-dev at lists.llvm.org> wrote:

> Hi LLVM Folks,
>
> This is a rather long message with many questions, so sorry for that.
>
> Over the last year-ish I created a new experimental target for the
> Propeller 2 microcontroller. It lives in my own fork of llvm-project here:
> https://github.com/ne75/llvm-project
>
> (Someone else had also reached out about creating their own target for P2
> after they saw my work and the discussion was here, but I think their
> effort died as I haven’t heard any updates from them. That discussion was
> here https://lists.llvm.org/pipermail/llvm-dev/2021-February/148739.html)
>

Hi Nikita,

New targets are always exciting, but the burden on existing targets and
developers increases with each new one, which is why we have a high bar
for adding them.


> It’s not 100% complete (not all instructions are implemented yet, as
> there are around 400 of them, mostly for controlling various hardware
> peripherals), but it’s relatively functional and can do most things that
> one commonly would want to do, and I’ve got a supporting C and runtime
> library. I think it’s about time it makes its way into the mainline LLVM
> repo, as there are often changes to backend structure, prototypes, etc.;
> whenever I pull main into my fork, everything breaks and I spend two
> days refactoring my code to fix it, so it would be nice to have that
> handled by whoever is actually making the changes.
>

This sounds promising, and all of us who work on downstream projects carry
the same burden, but depending on the state of the target itself, its
tests, and the progress its sub-community intends to make, passing that
burden on to unrelated (upstream) developers doesn't help much.

The gamble is that, if you move in too early and you can't keep up with
the breakages and changes, the pressure to remove the target will be high,
and you'll have a harder time re-introducing it later, even if it gets
into better shape.

> 1. Is submitting an “in development” backend that’s in this state and
> leaving it experimental acceptable? If not, what are the minimum
> requirements?
>

It depends. First and foremost, it *must* follow the new-target guidelines
(in the link you posted): an active community, existing hardware, public
manuals, no contentious code or features, following the coding standard,
etc.

Many of these things are on a spectrum, so if most of them are OK, it
should be fine to ignore small divergences, if and only if the community
promises (and delivers in due time) to fix them before moving to official
status. But if many of them are missing, it's harder to ignore, and we ask
that they be fixed downstream before the first commits.

> 2. My code doesn’t currently conform to LLVM’s style guide. Is this
> acceptable for an experimental/in-development target?
>
> I struggle with LLVM’s style (it’s just different from how I write
> code), so I want to avoid going through and fixing all the little things
> until actually necessary.

This is a big deal. It's not acceptable to add brand-new code that doesn't
conform to LLVM's coding standards.

We assume that people using LLVM's code for their projects, especially
those who intend to upstream back, continue using the same standards we
use.

Each project has its own standards, and as a demonstration of being a good
citizen, you should adopt them when working on that project. This is true
for virtually all open-source projects; we're not the only ones doing
this.

If you haven't used clang-format yet, I strongly suggest you try it. It
makes formatting code so much easier, especially when it's not your
natural style.
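For example, dropping a `.clang-format` file into the backend's directory
lets the tool apply LLVM's style automatically (`BasedOnStyle: LLVM` is
the stock preset; the file below is a minimal sketch, not something from
your fork):

```yaml
# .clang-format at the root of the target directory.
# "BasedOnStyle: LLVM" picks up the upstream defaults, so for code
# destined for the monorepo nothing else is usually needed.
BasedOnStyle: LLVM
```

Then `clang-format -i` on the affected files, or `git clang-format` to
reformat only the lines touched by your pending changes, does the
mechanical work (the `lib/Target/P2` path here is an assumption about
your tree layout):

```shell
clang-format -i llvm/lib/Target/P2/*.cpp llvm/lib/Target/P2/*.h
```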

> 3. Currently, I am the only maintainer of this fork, so I would be the
> code owner. Does this mean I’d be on the hook for reviewing all the
> changes that might come down the pipeline that aren’t explicitly
> affecting functionality (say, some change in the Target class
> definition)?
>
> I hope to get more folks from the Propeller community working on this in
> the future, but it’s a rather small project for now.
>

You'd absolutely be on the hook for making sure your target is stable,
isn't breaking other people's code, and that changes in the generic code
aren't breaking your tests because the tests are bad or confusing.

If the target doesn't keep up with overall quality over a prolonged period
of time, it will be a target for removal, as I mentioned earlier. As code
owner and sole developer of the back-end, it would basically be up to you
to make sure that doesn't happen.

That's the main reason why we ask for "an active community behind the
target". A single person doing that is a stressful job that we do not
recommend.

Perhaps you could gather interested parties on other lists / channels,
and once there are enough people willing to put in the work, you can more
confidently support the target upstream.

> On the testing side, I have a series of simple tests that run on the
> actual hardware (no simulator is available). They are not exhaustive of
> compiler features, microcontroller features, or code coverage of the
> target code in LLVM. They are just the bare minimum I run to make sure I
> didn’t break something massive and can still compile and run simple
> applications.
>

This isn't really enough. You must have code generation tests (IR to IR,
IR to MIR, MIR to ASM, MIR to OBJ) that prove the target does what it
needs to do, i.e. generates correct executables that do what the code
dictates.

You need LIT tests for all supported instructions, function
prologue/epilogue, ABI code-gen, object file generation, etc.
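As a rough illustration, a codegen LIT test is just an IR file with a RUN
line and FileCheck patterns; the triple name `p2` and the `add` mnemonic
below are placeholders for whatever your backend actually registers, not
taken from your fork:

```llvm
; RUN: llc -mtriple=p2 -verify-machineinstrs < %s | FileCheck %s
; Hypothetical minimal codegen test: check that a plain i32 add
; selects to the target's add instruction.

define i32 @add_i32(i32 %a, i32 %b) {
; CHECK-LABEL: add_i32:
; CHECK: add
  %r = add i32 %a, %b
  ret i32 %r
}
```

Real backends accumulate hundreds of these, one file per feature area
(calls, returns, spills, relocations, etc.), all run by `ninja
check-llvm`.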

If you have the hardware and want to run tests on it, that is *in
addition* to the LIT tests, and it also means you'll have to run a
buildbot that must be green at all times, 24/365.

At the very least during the experimental stage, you'll also need a
buildbot with your target turned on running its LIT tests, because no
other bot will do that. It's also highly advisable to keep that bot
running after moving to official status, to cover ongoing development or
configurations that are not the default.

You must provide the hardware and admin costs for your target's buildbots.
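The build your bot runs is just an ordinary upstream configure with the
experimental target enabled via `LLVM_EXPERIMENTAL_TARGETS_TO_BUILD`;
something along these lines, where "P2" is a placeholder for whatever
name the backend registers:

```shell
# Configure an upstream checkout with the experimental target enabled.
# Experimental targets are off by default and must be named explicitly.
cmake -G Ninja ../llvm \
  -DLLVM_TARGETS_TO_BUILD="X86" \
  -DLLVM_EXPERIMENTAL_TARGETS_TO_BUILD="P2" \
  -DCMAKE_BUILD_TYPE=Release
ninja check-llvm
```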

> 4. What is the minimum set of tests for the experimental target that
> must exist within the LLVM test suite? Is it acceptable to have none
> (since I’m the only developer and can run tests on my own)?
>

It's not acceptable to have no tests. Experimental targets are part of the
code we ship every six months, and their quality needs to match the rest
of the code.


> 5. Is submitting this fork to be reviewed and merged as simple as creating
> a patch and submitting it through Phabricator? (As described here:
> https://llvm.org/docs/DeveloperPolicy.html#making-and-submitting-a-patch and
> here: https://llvm.org/docs/Phabricator.html) Or is there a different
> process since this is such a large change?
>

You first send an RFC to the mailing list (this thread) to see if people
would be interested in your target, i.e. whether the community's interest
is big enough that enough people would be willing to pay some of the costs
of having the target in LLVM upstream.

Then you need to make sure you've adapted your code and process to meet
all the conditions in the documentation, many of which are still lacking,
as I explained above.

Then you submit a series of patches (1/10, 2/10, .. 10/10) that gradually
introduce your target up to working condition (for example, generating asm
instructions, with tests, etc.).

Once *all* patches are approved by multiple people, we can merge them all
in one go.

Check the initial process for some recent targets, e.g. RISCV, M68k, CSKY,
LoongArch. There have been enough discussions around those targets and
processes that you can learn everything you need from them.

cheers,
--renato