[libcxx-commits] [PATCH] D155873: [libc++][hardening] Categorize more assertions.

Konstantin Varlamov via Phabricator via libcxx-commits libcxx-commits at lists.llvm.org
Fri Jul 21 15:27:06 PDT 2023


var-const added inline comments.


================
Comment at: libcxx/include/__tree:171
 {
-    _LIBCPP_ASSERT_UNCATEGORIZED(__x != nullptr, "Root node shouldn't be null");
+    _LIBCPP_ASSERT_INTERNAL(__x != nullptr, "Root node shouldn't be null");
     while (__x->__left_ != nullptr)
----------------
Mordante wrote:
> As mentioned in D155906, I really would like to have these kinds of cheap internal sanity checks enabled in `_LIBCPP_ENABLE_HARDENED_MODE`. They are not user errors, but they still cause potential issues for users, including users in safety-critical places. If there's not enough time to get that done before LLVM 17, I'm happy to defer it to LLVM 18.
Thanks for starting this discussion!

My current thinking has been to keep the hardened mode as lightweight as possible. My biggest concern is that the performance penalty would block adoption; also, while I'm not 100% sure this can happen, my ideal would be for the hardened mode to become the default at some point (of course, users would still be able to opt out and use the unchecked mode). Because of that, I have so far generally erred on the side of performance, using the following criteria for enabling an assertion:
1. Is the failure directly triggered by user input?
2. Is the failure //directly// exploitable?
3. Is the failure condition relatively cheap to check?

Personally, I feel #2 is the most important distinction. Almost any error, including logic errors that don't trigger UB, can lead to very incorrect results and also contribute to a security vulnerability. I envision the hardened mode as only preventing errors that create a security vulnerability //directly// (e.g. an out-of-bounds read) -- the kind of errors where we can confidently say "if your program has one of those, it is vulnerable, regardless of any other context" (and thus we can justify the extra penalty for checking those).

From that perspective, it follows that checks for null pointers should not be part of the hardened mode (FWIW, this was somewhat surprising to me, or at least I found it counterintuitive). From what I know (and I'd be happy to be corrected on this if I'm missing something), dereferencing a null pointer on any relatively modern OS leads to a crash and is not directly exploitable (it might contribute to a more complicated exploit arising from context, but so can many other types of failures).

The upside I'm hoping we get from limiting the number of checks we perform is wider adoption. The intention behind hardened mode is not that it provides complete safety or entirely eliminates UB -- rather, it makes your program measurably //safer// at a "negligible" cost; it relatively cheaply protects against the most catastrophic stuff. I'm hoping something like the 80/20 principle applies here (meaning that just a fraction of the cost that would be necessary to protect against all UB is enough to prevent 80% of exploitable vulnerabilities). The "perfect" hardened mode (aspirational, not realistic) is the mode where if you were to enable it, you would be guaranteed that an attacker could not use your program to achieve arbitrary code execution, //and// it's cheap enough to use in production.

We currently have the hardened mode and the debug mode. While I'd prefer to keep this as simple as possible and stick with the 2 modes, I'd be open to exploring a 3rd mode in between those two that would contain more checks than the hardened mode while still aiming at some reasonable level of performance (which for the debug mode is almost a non-goal).

Tagging @ldionne as well in case he has a different perspective.


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D155873/new/

https://reviews.llvm.org/D155873



More information about the libcxx-commits mailing list