[cfe-commits] patch: libcxxabi cxa_*_virtual and cxa_guard_* methods

John McCall rjmccall at apple.com
Sun May 22 11:33:08 PDT 2011

On May 22, 2011, at 12:20 AM, Nick Lewycky wrote:
> On 21 May 2011 17:05, John McCall <rjmccall at apple.com> wrote:
> > On May 19, 2011, at 11:11 PM, Nick Lewycky wrote:
> > > This isn't heavily tested, I'm mostly just trying to make sure I
> > > have the style down. Please let me know if we should be using
> > > different file names (hard to follow a pattern where there isn't one),
> > > or if the private methods in cxa_guard actually belong in an
> > > anonymous namespace, etc.
> > Guard variables need to support recursive initialization:  the
> > initialization of one variable must be able to kick off arbitrary code
> > that might perform its own initialization.
> Okay, I can handle that. I wrote a test for it:
> namespace test3 {
>     int helper1(int i) {
>         if (i == 0)
>             return 0;
> >         static int j = helper1(i - 1);
>         return j + 1;
>     }
>     void test() {
>         static int x = helper1(10);
>     }
> }
> and discovered that libstdc++ doesn't support it. They detect the
> recursive initialization and throw an exception.

This isn't what I meant;  actual recursive dependencies like this
are undefined behavior even in C++03.  I meant that the initialization
of one variable must be able to kick off arbitrary code that might
perform a *different* initialization.  Your patch doesn't support this
because your home-grown mutex is non-recursive, and therefore
this code will deadlock:

  int foo() { return 5; }
  int bar() { static int x = foo(); return x; }
  int baz() { static int x = bar(); return x; }

As an enhancement request for later, it would be a substantial
debugging aid to continue to detect this recursive initialization
in the single-threaded case.
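To sketch what that diagnostic could look like: in the single-threaded
case the guard itself can record "initialization in progress," and seeing
that state on re-entry means the initializer recursed into itself, which
is undefined behavior anyway, so we are free to report it.  The state
names and entry points below are illustrative only, not the actual ABI
symbols:

```cpp
#include <cassert>
#include <cstdint>
#include <cstdio>
#include <cstdlib>

// Hypothetical guard states; the first byte of the 64-bit guard word
// tracks where the initialization stands.
enum : uint8_t { UNINITIALIZED = 0, INITIALIZING = 1, INITIALIZED = 2 };

// Returns 1 if the caller should run the initializer, 0 if it already ran.
extern "C" int sketch_guard_acquire(uint64_t *guard) {
    uint8_t *state = reinterpret_cast<uint8_t *>(guard);
    if (*state == INITIALIZED)
        return 0;                // initialization already complete
    if (*state == INITIALIZING) {
        // The initializer re-entered its own declaration: UB, so diagnose
        // it loudly instead of looping or deadlocking.
        fprintf(stderr, "recursive static initialization detected\n");
        abort();
    }
    *state = INITIALIZING;
    return 1;                    // caller runs the initializer
}

extern "C" void sketch_guard_release(uint64_t *guard) {
    *reinterpret_cast<uint8_t *>(guard) = INITIALIZED;
}
```

The same byte that the ABI already reserves for "done" can carry the
third state, so the check costs nothing on the fast path.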

> > In C++11, there's a guarantee that this won't deadlock
> > even if the first initialization blocks on a different thread.
> Hold up, what does this actually mean? Given:
>   int test_a(int x) {
>     if (x == 0)
>       return 0;
>     static int A = test_b(x - 1);
>     return A + 1;
>   }
>   int test_b(int x) {
>     if (x == 0)
>       return 0;
>     static int B = test_a(x - 1);
>     return B + 1;
>   }
> suppose that thread 1 enters test_a(10) and grabs the lock on A at
> the same time that thread 2 enters test_b(10) and grabs the lock
> on B. The deadlock is avoided how?

Sorry, I was unclear.  This deadlock is clearly not completely avoidable
except by serializing all initializations, which I think isn't the intent of
the standard.  What I meant is that initializations should be able to
block on a separate thread and make progress as long as there's no
cycle in the initializers.

The new wording in C++11 is somewhat helpful:

  If control enters the declaration concurrently while the variable
  is being initialized, the concurrent execution shall wait for completion
  of the initialization.[footnote: The implementation must not introduce
  any deadlock around execution of the initializer.] If control re-enters
  the declaration recursively while the variable is being initialized, the
  behavior is undefined.

The footnote could be clearer, but I think it's saying that no deadlocks
should be introduced *other than those possible due to the mandated
waiting between threads*.  So holding a global recursive lock during
initialization, as libstdc++ does, introduces deadlocks that aren't
inherent in the code.
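The mandated waiting is easy to observe with ordinary compiler-generated
guards: if two threads race into the same declaration, one runs the
initializer and the other blocks until it completes, and the initializer
runs exactly once.  The counter and sleep below are just instrumentation
for the demonstration:

```cpp
#include <atomic>
#include <chrono>
#include <thread>

// Counts how many times the initializer actually executes.
std::atomic<int> initializer_runs{0};

int slow_init() {
    ++initializer_runs;
    // Widen the window so a second thread is likely to arrive while the
    // initialization is still in flight and has to wait on the guard.
    std::this_thread::sleep_for(std::chrono::milliseconds(20));
    return 42;
}

int get_value() {
    static int x = slow_init();  // guarded initialization under the hood
    return x;
}
```

Both threads observe the fully initialized value; neither can see a
half-constructed variable or re-run the initializer.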

It might also be useful to consider the original paper;  kudos to Howard
for the research here:
(Note that the example implementation in this paper is unnecessarily
complicated by the use of epochs;  three sentinel values (uninitialized,
currently initializing, completely initialized) would be sufficient, unless
I am missing some unexplained subtlety.)
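A minimal sketch of that three-sentinel scheme, assuming one
process-wide mutex/condition-variable pair that is held only while
inspecting or flipping the guard state, never while an initializer runs
(all names here are illustrative, not the ABI's):

```cpp
#include <cassert>
#include <condition_variable>
#include <cstdint>
#include <mutex>

namespace sketch {

enum : uint8_t { UNINITIALIZED = 0, INITIALIZING = 1, INITIALIZED = 2 };

// One global pair coordinates *waiting*; the state itself lives in each
// guard variable, so independent initializations never serialize on it.
std::mutex g_mutex;
std::condition_variable g_cond;

// Returns 1 if the caller should run the initializer, 0 if it is done.
int guard_acquire(uint64_t *guard) {
    uint8_t *state = reinterpret_cast<uint8_t *>(guard);
    std::unique_lock<std::mutex> lock(g_mutex);
    for (;;) {
        if (*state == INITIALIZED)
            return 0;
        if (*state == UNINITIALIZED) {
            *state = INITIALIZING;
            return 1;            // lock is released on return
        }
        // Another thread is initializing this variable: wait for it to
        // finish (guard_release) or fail (guard_abort), then re-check.
        g_cond.wait(lock);
    }
}

void guard_release(uint64_t *guard) {  // initializer succeeded
    {
        std::lock_guard<std::mutex> lock(g_mutex);
        *reinterpret_cast<uint8_t *>(guard) = INITIALIZED;
    }
    g_cond.notify_all();
}

void guard_abort(uint64_t *guard) {    // initializer threw
    {
        std::lock_guard<std::mutex> lock(g_mutex);
        *reinterpret_cast<uint8_t *>(guard) = UNINITIALIZED;
    }
    g_cond.notify_all();
}

} // namespace sketch
```

Because the global lock is dropped before the initializer runs, a
foo/bar/baz chain like the one above completes instead of deadlocking;
only threads waiting on the *same* variable block.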

> It's not clear to me what you're proposing, but I strongly want to avoid
> anything that blocks the other threads from running their own
> initializers in parallel for as long as they're independent.

Yes, I agree.

