[LLVMdev] Making LLVM safer in out-of-memory situations

Gasiunas, Vaidas vaidas.gasiunas at sap.com
Fri Dec 20 05:21:39 PST 2013


Hi Philip,

> If I'm reading you correctly, you are relying on exception propagation 
> and handler (destructors for local objects) execution. You have chosen 
> not to add extra exception logic to LLVM itself, but are relying on the 
> correctness of exception propagation within the code.  (The last two 
> sentences are intended to be a restatement of what your message said.  
> If I misunderstood, please correct me.)

It was probably not completely correct to say that we did not extend 
the exception propagation in LLVM. In most cases where malloc or 
other C allocation functions are called, we had to add a check for NULL
and throw std::bad_alloc. But these are fairly straightforward fixes 
that do not require much effort. 
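
Just to illustrate the shape of these fixes, here is a simplified
sketch with a made-up helper name (not the actual patched code):

  #include <cstdlib>
  #include <new>

  // Report allocation failure as std::bad_alloc instead of returning
  // NULL to callers that do not check for it.
  static void *checkedMalloc(std::size_t Size) {
    void *Ptr = std::malloc(Size);
    if (!Ptr)
      throw std::bad_alloc();
    return Ptr;
  }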

> Does this mean that you're compiling your build of LLVM with exceptions 
> enabled?  By default, I believe LLVM is built without RTTI or exception 
> support.

OK, I see. This explains why the destructors in LLVM are not always 
prepared to be executed in exception situations. Yes, we build LLVM 
with exception support; in principle, we build it with the same options 
as the rest of our project. In fact, I can hardly imagine how we could 
handle OOM situations without exception handling. 

> For the particular cases you mentioned with auto pointers and allocation 
> in destructors, are these issues also present along existing error 
> paths?  Or for that matter simply examples of bad coding practice?  If 
> so, pushing back selected changes would be welcomed.  I'd be happy to 
> help review.

Yes, there are some examples of bad coding practice. The root problem,
however, is that the destructors in LLVM do a lot of non-trivial work. 
Instead of just deleting objects along a strictly hierarchical ownership 
structure, objects are unregistered from various relationships, which 
can trigger unwinding in quite different locations. Such non-trivial 
code sometimes requires dynamic allocation of new collections, which is
problematic in OOM situations. In fact, we did not manage to completely
fix the unwinding of the compiler state; that was one of the reasons to 
move all compiler passes to a separate process.
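
To give a made-up example of the kind of destructor that causes
trouble (only the pattern, not actual LLVM code):

  #include <vector>

  struct Value {
    std::vector<Value *> Users;

    ~Value() {
      // Unregistering goes through a temporary collection, which has
      // to allocate. If this destructor runs while a std::bad_alloc is
      // already propagating and the copy throws as well, the program
      // is terminated instead of unwinding cleanly.
      std::vector<Value *> Snapshot(Users.begin(), Users.end());
      for (Value *U : Snapshot) {
        (void)U; // ... drop the use, notify the user, etc. ...
      }
    }
  };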

Here is a rough overview of our current set of patches to LLVM 3.3. In total, 
we have 31 patches to LLVM related to OOM handling. They fix only the 
components that we could not outsource to the separate process: 
the core IR classes that we use for IR generation, IR bitcode serialization,
and the dynamic code loader. 

* In 8 of the patches we fix the malloc calls to throw bad_alloc.
* 10 patches fix destructors. Some of the fixes disable or
rewrite code that triggers dynamic allocations. Other fixes 
disable asserts that check invariants which do not hold in exception 
situations.
* 5 patches deal with exceptions in constructors. One kind of problem 
results from the fact that if an exception is thrown in the constructor of 
an object, its destructor is not called, which causes a leak.
There is also a specific problem that IR objects register themselves with 
their parents already in their constructor, so if the constructor fails 
afterwards, the parent is left with a dangling pointer to its child 
(a sketch of this pattern follows the list).
* 5 patches fix temporary ownership of objects, mostly situations 
where objects are created but not yet added to their owning collections 
(a second sketch is given further below).
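
Here is a made-up sketch of the constructor problem (not the actual
IR classes, just the pattern):

  #include <vector>

  struct Parent;

  struct Child {
    explicit Child(Parent &P);
  };

  struct Parent {
    std::vector<Child *> Children;
  };

  Child::Child(Parent &P) {
    // The child registers itself before its construction is complete...
    P.Children.push_back(this);
    // ...so if anything after this point throws (for example an
    // allocation failing under OOM), ~Child is never run and
    // P.Children keeps a dangling pointer to a half-constructed object.
    // initBuffers();  // hypothetical later step that may throw
  }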

Note that 31 patches do not mean 31 fixes, because some of them 
bundle all the fixes for a particular file or class. 
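
Coming back to the last category, the temporary-ownership fixes
typically follow this pattern (again a simplified sketch with made-up
names, not the actual code):

  #include <memory>
  #include <vector>

  struct Instr { /* ... */ };

  struct Block {
    std::vector<Instr *> Instrs; // the block owns its instructions

    void append() {
      // Keep the new object in a temporary owner until it is safely
      // stored in the owning collection; if push_back throws under
      // OOM, the temporary owner deletes the instruction instead of
      // leaking it.
      std::unique_ptr<Instr> I(new Instr());
      Instrs.push_back(I.get());
      I.release(); // ownership transferred to the block
    }
  };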

Regards,
Vaidas
