[cfe-commits] PthreadLock Checker enhancements

Jordy Rose jediknil at belkadan.com
Mon Jun 20 00:01:42 PDT 2011

Hi, Rui. Cool code here...both of these checks seem pretty useful. A couple comments:

- Are there checks for the XNU locking as well? Even though most of the code is shared, it's easy for regressions to slip in if there's no test for them.

- We usually try to avoid including system headers in tests, because they might not exist on some platforms. (And you don't need stdio.h anyway!) Instead, just include the relevant declarations at the top of the test file. It's even okay to simplify and make pthread_mutex_t an opaque pointer or something.

- Technically, POSIX mutexes can be recursive locks (see PTHREAD_MUTEX_RECURSIVE), which would produce false positives in the double-locking check. I'm not sure what we'd want to do about this. :-) (At some point, I think we need a way to set arbitrary flags for checkers, or to see under what name a checker is enabled.)

- It'd be nice to clean up the diagnostic messages a little bit; it's better to err on the verbose side rather than the terse side.

Further comments inline with excerpts from your patch:

> +  enum {
> +    DoubleLocking = 0,
> +    LockOrderReversal = 1,
> +    NumChecks
> +  };
> +  mutable BugType *BT[NumChecks];

I think we're moving towards llvm::OwningPtr<BugType> instead of raw BugType pointers here. That way you can ditch the constructor and destructor boilerplate. (You can see an example in some of the other checkers, such as VLASizeChecker.)

I don't think the enum-indexed-array vs. individual-variables issue is important, but once you're using llvm::OwningPtr there's no particular reason to use an array anymore.

>    const GRState *lockSucc = state;

I know this isn't in /your/ code, but your code makes this initialization a dead store.

> +  bool isLocked = false;
> +
> +  for (llvm::ImmutableList<const MemRegion*>::iterator I = LS.begin(),
> +    E = LS.end(); I != E; ++I) {
> +    if (*I == lockR) {
> +      isLocked = true;
> +      break;
> +    }
> +  }

Normally I'd say this should be wrapped in a helper function, per the LLVM Coding Standards at http://llvm.org/docs/CodingStandards.html#hl_predicateloops . But really someone should just add a contains() method to ImmutableList.

> +    default:
> +      assert(0);
> +      break;

Thank you for putting in the default case. :-) It's better to use llvm_unreachable() than assert(0): with assertions enabled it still aborts with a message, and in Release builds -- where assert(0) compiles away to nothing and control silently falls through -- it tells the optimizer the path is dead.

>      C.addTransition(C.generateNode(CE, lockFail));

You can just say C.addTransition(lockFail) and it'll do the right thing here. (This comes up in a couple other places, including in the original code, but it'd be nice cleanup.)

> +  const MemRegion *firstLockR = LS.getHead();
> +  if (firstLockR != lockR) {
> +    if (BT[LockOrderReversal] == NULL)
> +      BT[LockOrderReversal] = new BugType("Lock order reversal", "Lock checker");
> +    ExplodedNode *N = C.generateSink();
> +    if (!N)
> +      return;
> +    llvm::SmallString<128> S;
> +    llvm::raw_svector_ostream os(S);
> +    os << "Lock order reversal between ";
> +    SummarizeRegion(os, lockR);
> +    os << " and ";
> +    SummarizeRegion(os, firstLockR);
> +    EnhancedBugReport *report = new EnhancedBugReport(*BT[LockOrderReversal],
> +                                                      os.str(), N);
> +    C.EmitReport(report);
>      return;
> +  }
> +  // Record that the lock was released. 
> +  state = state->set<LockSet>(LS.getTail());

It'd be nice to recover from the unlock reversal; right now there are no successful transitions out of this node, meaning the rest of the function won't be analyzed. (If I'm remembering the design of the analyzer correctly.) The best thing to do would be to pull it out of the ImmutableList, I guess, but /that's/ annoying. A less nice thing to do would be to continue anyway without popping any locks off the list, and then you get a chain of irrelevant warnings for every other lock after this. But how often do you nest more than two locks anyway?

> +bool PthreadLockChecker::SummarizeRegion(llvm::raw_ostream& os,
> +                                         const MemRegion *MR) const {
> +  const TypedRegion *TR = dyn_cast<TypedRegion>(MR);
> +  if (!TR)
> +    return false;
> -  C.addTransition(C.generateNode(CE, unlockState));  
> +  switch (TR->getKind()) {
> +    case MemRegion::FunctionTextRegionKind: {
> +      const FunctionDecl *FD = cast<FunctionTextRegion>(TR)->getDecl();
> +      if (FD)
> +        os << "'" << FD << "'";
> +      return true;
> +    }
> +    case MemRegion::CXXThisRegionKind:
> +    case MemRegion::CXXTempObjectRegionKind:
> +    case MemRegion::FieldRegionKind:
> +    case MemRegion::VarRegionKind:
> +    case MemRegion::ObjCIvarRegionKind:
> +      os << "'" << TR->getString() << "'";
> +      return true;
> +    default:
> +      return false;
> +  }
>  }

Your SummarizeRegion (which diff didn't do a good job on) doesn't match your usage: the callers never check the bool it returns. That return value is supposed to warn you when you have some bizarre region that can't be summarized. As it stands, even the result of a function call (a symbolic region) won't print anything here, and your diagnostic messages will look very strange.

I'd suggest rewriting your diagnostics to leave out the names of the locks. After all, what's the name of this lock?

pthread_mutex_t *locks = get_locks();
// do stuff
pthread_mutex_lock(locks);

That's an ElementRegion in a SymbolicRegion whose symbol is the value of a VarRegion, and there's no great way to print that.

Instead, you might just want to say:

This mutex has already been locked.
[with a range on the argument expression]

This was not the most recently locked mutex.
[with a range on the argument expression]

It'd also be nice to have diagnostic notes showing the /first/ time the mutex was locked and the most recently locked mutex, respectively, but that's a job for another day. :-)

In general, though, this looks good -- it's a useful case where we won't ever be able to hijack the MallocChecker machinery to help us with locks.

