[cfe-dev] [llvm-dev] RFC: Implementing the Swift calling convention in LLVM and Clang
John McCall via cfe-dev
cfe-dev at lists.llvm.org
Thu Mar 3 09:36:39 PST 2016
> On Mar 3, 2016, at 2:00 AM, Renato Golin <renato.golin at linaro.org> wrote:
>
> On 2 March 2016 at 20:03, John McCall <rjmccall at apple.com> wrote:
>> We don’t need to. We don't use the intermediary convention’s rules for aggregates.
>> The Swift rule for aggregate arguments is literally “if it’s too complex according to
>> <foo>, pass it indirectly; otherwise, expand it into a sequence of scalar values and
>> pass them separately”. If that means it’s partially passed in registers and partially
>> on the stack, that’s okay; we might need to re-assemble it in the callee, but the
>> first part of the rule limits how expensive that can ever get.
>
> Right. My worry is, then, how this plays out with ARM's AAPCS.
>
> As you said below, you *have* to interoperate with C code, so you will
> *have* to interoperate with AAPCS on ARM.
I’m not sure of your point here. We don’t use the Swift CC to call C functions.
It does not matter, at all, whether the frontend lowering of an aggregate under
the Swift CC resembles the frontend lowering of the same aggregate under AAPCS.
I brought up interoperation with C code as a counterpoint to the idea of globally
reserving a register.
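To make the aggregate rule concrete, here is a rough IR-level sketch; the types, the complexity threshold, and the exact indirect-passing attributes are illustrative only (and spelled in current IR syntax), not necessarily what the patch emits:

    ; A simple aggregate such as { i64, i64, float } is exploded into scalar
    ; arguments and passed separately, possibly split between registers and stack.
    declare swiftcc void @takesSimpleAggregate(i64, i64, float)

    ; An aggregate judged "too complex" is passed indirectly through a pointer.
    declare swiftcc void @takesComplexAggregate(ptr)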
> AAPCS's rules on aggregates are not simple, but they do allow part of an
> aggregate in registers and part on the stack. I'm guessing you won't have
> exactly the same rules, but similar ones, which may prove harder to
> implement than the former.
>> That’s pretty sub-optimal compared to just returning in registers. Also, most
>> backends do have the ability to return small structs in multiple registers already.
>
> Yes, but not all of them can return more than two, which may constrain
> you if you have both error and context values in a function call, in
> addition to the return value.
We do actually use a distinct calling convention, swiftcc, in IR. I don’t see any
serious interop problems here. The “intermediary” convention is just the original
basis of swiftcc on the target.
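As a hedged illustration of the multiple-register return (mine, not taken from the patch), swiftcc can hand the backend a first-class struct of scalars and let it pick return registers where the target has enough of them:

    ; The { i64, i64, float } result can be split across return registers by
    ; targets that support multi-register returns.
    define swiftcc { i64, i64, float } @makePoint(i64 %x, i64 %y, float %w) {
      %t0 = insertvalue { i64, i64, float } undef, i64 %x, 0
      %t1 = insertvalue { i64, i64, float } %t0, i64 %y, 1
      %t2 = insertvalue { i64, i64, float } %t1, float %w, 2
      ret { i64, i64, float } %t2
    }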
>> I don’t understand what you mean here. The out-parameter is still explicit in
>> LLVM IR. Nothing about this is novel, except that C frontends generally won’t
>> combine indirect results with direct results.
>
> Sorry, I had understood this, but your reply (for some reason) made me
> think it was a hidden contract, not an explicit argument. Ignore me,
> then. :)
>
>
>> Right. The backend isn’t great about removing memory operations that survive to it.
>
> Precisely!
>
>
>> Swift does not run in an independent environment; it has to interact with
>> existing C code. That existing code does not reserve any registers globally
>> for this use. Even if that were feasible, we don’t actually want to steal a
>> register globally from all the C code on the system that probably never
>> interacts with Swift.
>
> So, as Reid said, usage of built-ins might help you here.
>
> Relying on LLVM's ability not to mess up your fiddling with variable
> arguments seems unstable. Adding specific attributes to functions or
> arguments seems too invasive.
I’m not sure why you say that. We already do have parameter ABI override
attributes with target-specific behavior in LLVM IR: sret and inreg.
I can understand being uneasy with adding new swiftcc-specific attributes, though.
It would be reasonable to make this more general. Attributes can be parameterized;
maybe we could just say something like abi(“context”), and leave it to the CC to
interpret that?
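A small sketch of what that could look like; the sret and inreg lines use attributes that exist today (in current IR spelling), while the abi(...) form is purely hypothetical, just the shape of the proposal above:

    %struct.Big = type { [8 x i64] }

    ; Existing parameter ABI overrides with target-specific behavior:
    declare void @returnsIndirectly(ptr sret(%struct.Big) %out)
    declare void @takesInReg(i32 inreg %x)

    ; Hypothetical parameterized attribute, interpreted by the calling convention:
    ;   declare swiftcc void @f(ptr abi("context") %ctx)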
Having that sort of ability might make some special cases easier for C lowering,
too, come to think of it. Imagine an x86 ABI that, based on type information
otherwise erased by the conversion to LLVM IR, sometimes returns a float in
an SSE register and sometimes on the x87 stack. It would be very awkward to
express that today, but some sort of abi(“x87”) attribute would make it easy.
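In the same hypothetical notation (nothing below exists today), the distinction could be carried on the return:

    ; Both functions return float in IR, but the frontend knows one value must
    ; come back in an SSE register and the other on the x87 stack; a return
    ; attribute could carry that through to the backend.
    ;   declare float @returnsInSSE()
    ;   declare abi("x87") float @returnsOnX87()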
> So a solution would be to add a built-in
> at the beginning of the function to mark those arguments as special.
>
> Instead of alloca %a + load -> store + return, you could have
> llvm.swift.error.load(%a) -> llvm.swift.error.return(%a), which
> survives most middle-end passes intact, and a late pass then changes
> the function to return a composite type, either a structure or a
> larger type, that will be lowered into more than one register.
>
> This makes sure error propagation won't be optimised away, and that
> you can receive the error in any register (or even on the stack), but will
> always return it in the same registers (e.g. on ARM, R1 for i32, R2+R3
> for i64, etc).
>
> I understand this might be far from what you guys did, and I'm not
> trying to rewrite history, just brainstorming a bit.
>
> IMO, both David and Richard are right. This is likely not a huge deal
> for the CC code, but we'd be silly not to take this opportunity to
> make it less fragile overall.
The lowering required for this would be very similar to the lowering that Manman’s
patch does for swifterror: the backend basically does special value
propagation. The main difference is that it’s completely opaque to the middle-end
by default instead of looking like a load or store that ordinary memory optimizations
can handle. That seems like a loss, since those optimizations would actually do
the right thing.
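For comparison, this is roughly the shape of the load/store-based approach in IR. The swifterror spelling below follows the upstream design rather than any particular revision of the patch, so treat it as a sketch:

    define swiftcc float @callee(ptr swifterror %err) {
      store ptr null, ptr %err        ; clearing the error is an ordinary-looking store
      ret float 0.0
    }

    define swiftcc float @caller() {
      %slot = alloca swifterror ptr   ; the error slot looks like normal memory
      store ptr null, ptr %slot
      %r = call swiftcc float @callee(ptr swifterror %slot)
      %e = load ptr, ptr %slot        ; an ordinary load that memory optimizations understand
      ret float %r
    }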
John.