[cfe-dev] RFC: Support x86 interrupt and exception handlers

Hal Finkel via cfe-dev cfe-dev at lists.llvm.org
Tue Sep 22 00:56:16 PDT 2015

----- Original Message -----
> From: "Joachim Durchholz via cfe-dev" <cfe-dev at lists.llvm.org>
> To: cfe-dev at lists.llvm.org
> Sent: Tuesday, September 22, 2015 2:18:13 AM
> Subject: Re: [cfe-dev] RFC: Support x86 interrupt and exception handlers
> >>> The main purpose of x86 interrupt attribute is to allow
> >>> programmers to write x86 interrupt/exception handlers in C
> >>> WITHOUT assembly stubs to avoid extra branch from assembly stubs
> >>> to C functions.
> >>
> >> Let's take just this statement. Are you sure that the call/ret
> >> pair for such a stub has *any* significant impact on the interrupt
> >> latency? I'm quite doubtful of that. That's completely ignoring
> >> the
> >> fact that raw interrupt or exception latency is generally not an
> >> issue.
> >
> > I agree that this is a highly-relevant question. Demonstrating the
> > benefit here is important. I can certainly imagine that it might be
> > the case that on embedded systems the latency here is more
> > important.
> > The other thing is that it might matter a lot more for syscall/trap
> > overhead than for actual externally-sourced interrupts.
> Embedded systems are mostly ARM these days, so embedded is not going to
> give much traction to your work.
> Maybe Atoms are used there; Intel designed them to be useful for
> embedded, I just don't know whether that worked out.
> I'm a bit sceptical that the stub matters much. Most of the overhead in
> interrupts and syscalls/traps is the context switch.

Full context switches are expensive; however, not all system calls require a full context switch. gettid on Linux, for example, is generally considered quite fast because, although you do switch into kernel mode, there's no context switch before returning to user mode. The overall cost of the (hardware-level) mode switch can be in the low tens of cycles on modern hardware, and I can certainly imagine the cost of saving and restoring all registers (including vector registers) being significant by comparison.

It *used* to be much more expensive (by an order of magnitude or more) to execute a sysenter/sysexit pair, but this is generally no longer true. There's no mandatory TLB flush, etc., and hardware has improved a lot in this area over the past decade or more.
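For reference, the kind of code the proposed attribute enables looks roughly like the following sketch. The attribute spelling and the frame-pointer parameter follow the RFC's description; the struct layout shown here is an assumption for illustration (the x86-64 hardware-pushed frame), and installing the handlers into the IDT is omitted:

```c
/* Freestanding sketch, not a hosted program. A handler like this would be
 * compiled with something like -mgeneral-regs-only so the compiler never
 * touches vector registers inside it. */

struct interrupt_frame {
    /* Assumed layout: what the CPU pushes on x86-64. */
    unsigned long ip;
    unsigned long cs;
    unsigned long flags;
    unsigned long sp;
    unsigned long ss;
};

/* With the attribute, the compiler itself emits the register save/restore
 * and the final iret -- no separate assembly stub, and hence no extra
 * branch from a stub into the C function. */
__attribute__((interrupt))
void timer_handler(struct interrupt_frame *frame)
{
    (void)frame;
    /* acknowledge the device / send EOI here */
}

/* Exceptions that push an error code take it as a second parameter. */
__attribute__((interrupt))
void page_fault_handler(struct interrupt_frame *frame,
                        unsigned long error_code)
{
    (void)frame;
    (void)error_code;
}
```

Without the attribute, each vector needs a hand-written assembly stub that saves registers, calls the C function, restores registers, and executes iret; the RFC's point is that the attribute folds that stub into the compiled function itself.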


> On the '386, that was dozens (hundreds?) of bytes pushed to the stack.
> I don't know what later models do (my assembly days are long gone), but
> given that one of the major influences on Linux' kernel API design is
> avoiding context switches, I guess things didn't get better.
> Talking about Linux, you might want to head over to these folks and see
> what they do about stubs and whether they use gcc options to eliminate
> them where possible. Linux folks are very performance-conscious; if even
> they don't care about the stubs, then stubs are likely irrelevant, or at
> least much less important than the maintenance burden for stub
> elimination.
> Just my 2 cents from the sideline, back to lurking on backend topics :-)
> Regards,
> Jo
> _______________________________________________
> cfe-dev mailing list
> cfe-dev at lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-dev

Hal Finkel
Assistant Computational Scientist
Leadership Computing Facility
Argonne National Laboratory
