[cfe-dev] Cleaning up ARM parts of the Clang driver

James Molloy james.molloy at arm.com
Tue Oct 4 04:09:22 PDT 2011


Hi,

 

I'm doing some cleaning up of the target-specific hacks in the Driver, as
prep for some cross-compilation changes discussed previously on the list.

 

My previous set of patches tablegen'd up a lot of the horrible string
matching, but I've realised that actually it's almost all superfluous and
can be deleted completely.

 

That said, there are some defaults and corner cases that I'd like the list's
(and especially Douglas, Chandler and Eric's) opinions on before I go any
further. This all relates to the ARM specific stuff - I can't really touch
the Intel, MIPS or PowerPC stuff so it'd be better if someone else cleans
that up if they feel like it. A lot of these changes revolve around not
trying to be too clever in the frontend driver and leaving defaults to more
specialised parts of the back ends.

 

Does anyone have any opinions or answers to the following? Thanks!

 

1:

 

  // Select the ABI to use.
  //
  // FIXME: Support -meabi.
  const char *ABIName = 0;
  if (Arg *A = Args.getLastArg(options::OPT_mabi_EQ)) {
    ABIName = A->getValue(Args);
  } else {
    // Select the default based on the platform.
    switch (Triple.getEnvironment()) {
    case llvm::Triple::GNUEABI:
      ABIName = "aapcs-linux";
      break;
    case llvm::Triple::EABI:
      ABIName = "aapcs";
      break;
    default:
      ABIName = "apcs-gnu";
    }
  }
  CmdArgs.push_back("-target-abi");
  CmdArgs.push_back(ABIName);

 

I don't really think the target ABI needs to be set explicitly when it is
already implied by the triple (the else clause) - every other tool should be
able to infer it from the triple.

 

2: The driver currently tries to pick a default CPU for the architecture if
none is specified. I think this is wrong - if no CPU is given, the compiler
should *not* tune for a specific CPU. A user who specifies -arch armv7
expects output with good blended performance across all armv7 cores, not
output tuned for, say, cortex-a8.

 

3: Selecting the floating point ABI is a rather large set of switch-cases
which I think can be contracted into a simple:

 

  // If unspecified, choose the default based on the platform.
  if (FloatABI.empty()) {
    const llvm::Triple &Triple = getToolChain().getTriple();
    if (Triple.getOS() == llvm::Triple::Darwin &&
        (ArchName.startswith("v6") || ArchName.startswith("v7")))
      FloatABI = "soft";
    else
      FloatABI = "softfp";
  }

 

I also don't see why the current default is "soft" when no ABI is specified.
Why not "softfp"? If the target has hardware FP support it will be used; if
not, the ABI is effectively "soft" anyway. I also don't know how critical
that Darwin special case is. If it could be gotten rid of, that'd be lovely.

 

4: These are hacks:

 

  // Use software floating point operations?
  if (FloatABI == "soft") {
    CmdArgs.push_back("-target-feature");
    CmdArgs.push_back("+soft-float");
  }

  // Use software floating point argument passing?
  if (FloatABI != "hard") {
    CmdArgs.push_back("-target-feature");
    CmdArgs.push_back("+soft-float-abi");
  }

 

They're referenced from lib/Basic/Targets.cpp - would it not be better for
Targets.cpp to infer these from the other passed parameters? That's actually
a self-answering question, so my question really is: "Is there any reason
why I can't make the trivial change to make this so?"

 

5:

    // Set the target features based on the FPU.
    if (FPU == "fpa" || FPU == "fpe2" || FPU == "fpe3" || FPU == "maverick") {

 

Where did these names come from? Do people actually use them? I see Google
results for GCC command lines specific to a Cirrus core - is that why
they're there? Can we get rid of them as nasty GCC relics?

 

6:

  // Setting -msoft-float effectively disables NEON because of the GCC
  // implementation, although the same isn't true of VFP or VFP3.
  if (FloatABI == "soft") {
    CmdArgs.push_back("-target-feature");
    CmdArgs.push_back("-neon");
  }

 

It's not evident why this code is here. Why should -msoft-float disable
NEON? NEON can still be used for integer vectorisation even with a
soft-float ABI. Another GCC relic?

 

7:

  // Setting -mno-global-merge disables the codegen global merge pass.
  // Setting -mglobal-merge has no effect as the pass is enabled by default.
  if (Arg *A = Args.getLastArg(options::OPT_mglobal_merge,
                               options::OPT_mno_global_merge)) {
    if (A->getOption().matches(options::OPT_mno_global_merge))
      CmdArgs.push_back("-mno-global-merge");
  }

 

It seems like this would be better handled by the llvm::cl command-line
option code in llc.

 

Cheers,

 

James