[cfe-dev] [llvm-dev] RFC: Implementing the Swift calling convention in LLVM and Clang

Renato Golin via cfe-dev cfe-dev at lists.llvm.org
Thu Mar 3 02:00:13 PST 2016


On 2 March 2016 at 20:03, John McCall <rjmccall at apple.com> wrote:
> We don’t need to.  We don't use the intermediary convention’s rules for aggregates.
> The Swift rule for aggregate arguments is literally “if it’s too complex according to
> <foo>, pass it indirectly; otherwise, expand it into a sequence of scalar values and
> pass them separately”.  If that means it’s partially passed in registers and partially
> on the stack, that’s okay; we might need to re-assemble it in the callee, but the
> first part of the rule limits how expensive that can ever get.

Right. My worry is, then, how this plays out with ARM's AAPCS.

As you said below, you *have* to interoperate with C code, so you will
*have* to interoperate with AAPCS on ARM.

AAPCS's rules on aggregates are not simple, but they also allow part
of an aggregate in registers and part on the stack. I'm guessing you
won't have exactly the same rules, but similar ones, and that may
prove harder to implement than simply reusing AAPCS's rules.
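
To make the contrast concrete, a rough sketch (hypothetical names and
types, not actual Swift or Clang output; the exact AAPCS split also
depends on how many core registers earlier arguments have used):

  %struct.Big = type { [16 x i32] }

  ; Swift-style expansion: a small struct { i32, i32 } is exploded
  ; into two ordinary scalar arguments.
  declare void @swift_small(i32 %s.0, i32 %s.1)

  ; The "too complex" case: the aggregate is passed indirectly.
  declare void @swift_big(%struct.Big* %p)

  ; AAPCS-style coercion of a 20-byte struct: with no earlier
  ; arguments, the backend places four words in r0-r3 and the
  ; remaining word on the stack.
  declare void @aapcs_callee([5 x i32] %s.coerce)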


> That’s pretty sub-optimal compared to just returning in registers.  Also, most
> backends do have the ability to return small structs in multiple registers already.

Yes, but not all of them can return values in more than two
registers, which may constrain you if you have both error and context
values in a function call, in addition to the return value.
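
Purely illustrative (hypothetical IR; whether the wider case lowers
into registers at all is target-dependent):

  ; Small aggregate return: on ARM this typically comes back in r0/r1.
  define { i32, i32 } @two_words() {
  entry:
    ret { i32, i32 } zeroinitializer
  }

  ; Result + context + error pushes past two registers; not every
  ; backend is prepared to lower a return this wide into registers.
  define { i32, i32, i32, i32 } @result_context_error() {
  entry:
    ret { i32, i32, i32, i32 } zeroinitializer
  }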


> I don’t understand what you mean here.  The out-parameter is still explicit in
> LLVM IR.  Nothing about this is novel, except that C frontends generally won’t
> combine indirect results with direct results.

Sorry, I had understood this, but your reply (for some reason) made me
think it was a hidden contract, not an explicit argument. Ignore me,
then. :)


> Right.  The backend isn’t great about removing memory operations that survive to it.

Precisely!


> Swift does not run in an independent environment; it has to interact with
> existing C code.  That existing code does not reserve any registers globally
> for this use.  Even if that were feasible, we don’t actually want to steal a
> register globally from all the C code on the system that probably never
> interacts with Swift.

So, as Reid said, using built-ins might help you here.

Relying on LLVM's ability not to mess up your fiddling with variable
arguments seems unstable. Adding specific attributes to functions or
arguments seems too invasive. So a solution would be to add a built-in
at the beginning of the function to mark those arguments as special.

Instead of alloca %a + load -> store + return, you could have
llvm.swift.error.load(%a) -> llvm.swift.error.return(%a), which would
survive most middle-end passes intact; a late pass would then change
the function to return a composite type (either a structure or a
larger integer type) that gets lowered into more than one register.

This makes sure error propagation won't be optimised away, and that
you can receive the error in any register (or even on the stack), but
will always return it in the same registers (e.g. on ARM, R1 for an
i32 error, R2+R3 for an i64, and so on).
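
Roughly like this, with entirely hypothetical intrinsic names and
signatures, and an i32 error just to match the ARM example:

  ; Hypothetical intrinsics -- names and signatures are made up here.
  declare i32 @llvm.swift.error.load(i32*)
  declare void @llvm.swift.error.return(i32)

  ; As the front-end would emit it: the error slot stays an alloca,
  ; but the intrinsic calls keep the middle end from touching it.
  define i32 @callee() {
  entry:
    %a = alloca i32
    ; ... body stores the error value into %a ...
    %err = call i32 @llvm.swift.error.load(i32* %a)
    call void @llvm.swift.error.return(i32 %err)
    ret i32 0
  }

  ; After the late pass: the function returns a composite, with the
  ; ordinary result in the first element (r0 on ARM) and the error
  ; in the second (r1, per the fixed assignment above).
  define { i32, i32 } @callee.lowered() {
  entry:
    %a = alloca i32
    ; ... body stores the error value into %a ...
    %err = load i32, i32* %a
    %r = insertvalue { i32, i32 } undef, i32 0, 0
    %r.err = insertvalue { i32, i32 } %r, i32 %err, 1
    ret { i32, i32 } %r.err
  }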

I understand this might be far from what you guys did, and I'm not
trying to re-write history, just brainstorming a bit.

IMO, both David and Richard are right. This is likely not a huge deal
for the CC code, but we'd be silly not to take this opportunity to
make it less fragile overall.

cheers,
--renato


