[LLVMdev] LLVM Exception Handling

Renato Golin rengolin at systemcall.org
Sun Sep 26 04:19:49 PDT 2010


On 25 September 2010 23:46, Nathan Jeffords <blunted2night at gmail.com> wrote:
> catch:
>   %v = ptrtoint i8 * %x to i32
>   %r = icmp eq i32 %v, 255
>   br i1 %r, label %bad, label %worse
> bad:
>   ret i32 -1
> worse:
>   ret i32 -2
> }

If I understood correctly, you're trying to pass the clean-up flag
through %x directly on the invoke call, and then to skip the later
@llvm.eh.exception call on the assumption that its result is already
in %x.

The problem is that you're mixing two concepts. The exception
structure carries information about the object that was thrown, not a
"good" number. That's the role of the clean-up flag (in case the
catch blocks can't deal with the exception) or of the landing pads
(which should reflect the return values the user asked for in their
program).
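
For reference, here's a minimal sketch of what a landing pad
conventionally looks like with the current intrinsics (assuming the
Itanium C++ personality; @may_throw and the int typeinfo @_ZTIi are
just placeholders):

  declare void @may_throw()
  declare i8* @llvm.eh.exception() nounwind
  declare i32 @llvm.eh.selector(i8*, i8*, ...) nounwind
  declare i32 @__gxx_personality_v0(...)

  @_ZTIi = external constant i8*        ; typeinfo for int

  define i32 @f() {
  entry:
    invoke void @may_throw()
            to label %cont unwind label %lpad

  cont:
    ret i32 0

  lpad:
    ; the exception *structure*: a pointer into the unwinder's object,
    ; not a user-chosen "good"/"bad" number
    %exn = call i8* @llvm.eh.exception()
    ; the selector says which of the user's catch clauses (if any)
    ; matched; the personality routine interprets the tables at run time
    %sel = call i32 (i8*, i8*, ...)* @llvm.eh.selector(i8* %exn,
               i8* bitcast (i32 (...)* @__gxx_personality_v0 to i8*),
               i8* bitcast (i8** @_ZTIi to i8*))
    ; ... branch on %sel, call @__cxa_begin_catch(%exn), and so on ...
    ret i32 -1
  }

The point being: the value that reaches the landing pad is always the
exception pointer; the "which branch do I take" decision comes from
the selector, not from the invoke's operands.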

It's the user's role to say what's good and what's not (return values
included). The only thing you (the compiler) can do is explode
prematurely when you can't properly catch the error (e.g. throw
inside throw, throw inside delete, etc.).

If that's the case, your implementation will not work for DWARF
exceptions, and I wouldn't recommend having a separate *invoke*
syntax for each type of exception-handling mechanism.

Another question: why are you passing %x untyped? I haven't seen any
untyped variable in LLVM so far, and I think it's good to be
redundant in this case; that alone would have caught the mistake. If
you need an i32 (for your bad/worse comparison), the fact that what
gets thrown is an i8* would have hinted that you were crossing the
two concepts.


On a side note...

Exception handling was designed by the devil himself. Part of the
flow control is specified by the user (try/catch blocks, throw
specifications), part is generated by the compiler, in exception
tables (specific unwinding instructions and types), and part is
provided by the library writers (the unwinding and personality
routines). All of that, decided in three different time frames, by
three different kinds of developers, has to communicate perfectly at
run time.
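
As a rough map of who owns which piece (only a sketch; the names
assume the Itanium C++ ABI with the GNU runtimes, and __cxa_throw /
__gxx_personality_v0 are just examples):

  ; user: try/catch structure and throws become invokes, landing pads
  ; and calls like this one (exception object, typeinfo, destructor):
  declare void @__cxa_throw(i8*, i8*, i8*)

  ; compiler: the catch types the user wrote end up in the exception
  ; table (LSDA), referenced through typeinfo symbols such as:
  @_ZTIi = external constant i8*

  ; library: the unwinder and the personality routine walk that table
  ; at run time to decide where control lands:
  declare i32 @__gxx_personality_v0(...)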

It'd be very difficult for the compiler to optimize this
automatically without breaking run-time assumptions. All of that is
controlled by different ABIs, which make sure all three universes are
speaking the same language. You can't change one without changing all
the others...

To be honest, I'm still surprised that it actually works at all! ;)

-- 
cheers,
--renato

http://systemcall.org/

Reclaim your digital rights, eliminate DRM, learn more at
http://www.defectivebydesign.org/what_is_drm



