[llvm-dev] Test Error Paths for Expected & ErrorOr

Stefan Gränitz via llvm-dev llvm-dev at lists.llvm.org
Tue Sep 5 06:18:24 PDT 2017


I brainstormed some challenges for a potential "Error Sanitizer".

So far all of them look solvable, except for the side-effect examples at
the top:
https://github.com/weliveindetail/llvm-expected/blob/bb2bd80d8b30a02ce972603cd34963fa1fe174b0/tests/TestForceAllErrorsChallenges.h#L25

Ideas / Opinions?
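As a rough illustration of the -force-error=<int> counting mechanism discussed in this thread: a simplified, self-contained stand-in, not the actual ForceAllErrors code. MockExpected, maybeForceError, and parseAndDouble are made-up names; the real tool operates on llvm::Expected/ErrorOr.

```cpp
#include <cassert>
#include <optional>
#include <string>
#include <utility>

// Simplified stand-in for llvm::Expected<T>: holds either a value or an
// error message. Only here to illustrate the counting mechanism.
template <typename T> struct MockExpected {
  std::optional<T> Value;
  std::string Err;
  static MockExpected success(T V) { return {std::move(V), ""}; }
  static MockExpected failure(std::string E) {
    return {std::nullopt, std::move(E)};
  }
  explicit operator bool() const { return Value.has_value(); }
};

// Global state set from a hypothetical -force-error=<int> option: every
// call site that can fail is a "mutation point"; the N-th one is forced
// to take its error path.
static int ForceErrorAt = -1; // -1: injection disabled
static int MutationPoint = 0;

template <typename T> MockExpected<T> maybeForceError(MockExpected<T> Real) {
  if (MutationPoint++ == ForceErrorAt)
    return MockExpected<T>::failure("forced error for testing");
  return Real;
}

// Example: a function with two fallible steps, i.e. two mutation points.
MockExpected<int> parseAndDouble(const std::string &S) {
  MockExpected<int> Parsed =
      maybeForceError(MockExpected<int>::success((int)S.size()));
  if (!Parsed)
    return Parsed;
  return maybeForceError(MockExpected<int>::success(*Parsed.Value * 2));
}
```

Looping ForceErrorAt from 0 up to the number of mutation points and re-running the tool under ASan/UBSan approximates what the shell-script automation described in the thread does.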

On 11.08.17 at 01:14, Stefan Gränitz wrote:

> I am back with some news:
>
> Tried Clang cc1, but it makes little use of Expected/ErrorOr.
> Went on to instrument the llvm-ar tool and ran a few tests: "llvm-ar t
> /usr/local/lib/libc++.a" has 267 mutation points, many of them duplicates.
> Quite a number of error handlers call exit(), so I gave up on gtest.
> Added a command line option -force-error=<int> and automated it with a
> shell script.
> Built and ran with ASan & UBSan - no issues found. Disappointing, I
> expected at least some minor leak! ;)
> Added -debug -debug-only=ForceAllErrors for an extra debug dump, like a
> short stack trace at the point where the instance is broken.
>
> Fork is on GitHub:
> https://github.com/weliveindetail/llvm-ForceAllErrors
> https://github.com/weliveindetail/llvm-ForceAllErrors/commits/ForceAllErrors
>
> On 31.07.17 at 17:53, David Blaikie wrote:
>> On Mon, Jul 31, 2017 at 8:19 AM Stefan Gränitz
>> <stefan.graenitz at gmail.com> wrote:
>>
>>     Hi Lang, hi David, thanks for looking into this.
>>
>>>     Did you identify many cases where "real work" (in your example,
>>>     the nullptr dereference) was being done in an error branch?
>>
>>     In my own code yes, not in LLVM ;) I'd like to run it on a large
>>     example, some llvm tool or clang cc1 maybe. I hope to find the
>>     time end of this week.
>>
>>>     My suspicion is that that should be rare, but that your tool
>>>     would be great for exposing logic errors and resource leaks if
>>>     run with the sanitizers turned on.
>>
>>     Very good idea!
>>
>>>     In an ideal world we'd go even further and build a clang/LLDB
>>>     based tool that can identify what kinds of errors a function can
>>>     produce, then inject instances of those: That would allow us to
>>>     test actual error handling logic too, not just the generic
>>>     surrounding logic. 
>>
>>     Right, currently I only inject a generic error mock. Handlers may
>>     not be prepared for it, so testing them is not possible so far. I
>>     am not sure what detection of the actual error types could look
>>     like, but I am curious to hear ideas.
>>
>>
>> Yeah, I imagine that would be tricky - 'true' mutation testing (a
>> compiler that deliberately and systematically miscompiles branches,
>> in a similar way to your approach of systematically producing
>> errors, and thereby helps discover untested parts of the code: any
>> mutation that doesn't result in a test failure indicates missing
>> test coverage) would probably find these, or maybe static analysis.
>>
>> Alternatively, if this technique were really embedded deep into
>> llvm::Error, then it could differentiate between the various handlers
>> in a handleErrors call - except I suppose it'd have no way of creating
>> the arbitrary errors required to pass to them - maybe with some API
>> entry point (e.g., for any T that is an ErrorInfo, have
>> ErrorInfo::createTestStub or the like that could be used). It'd be
>> tricky, I'd imagine.
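A rough sketch of what the createTestStub idea could look like. All names here (ErrorInfoBase, FileNotFound, OutOfMemory, allTestStubs) are hypothetical stand-ins for illustration, not LLVM's actual classes:

```cpp
#include <memory>
#include <string>
#include <vector>

// Stand-in for an ErrorInfo-style base class: each concrete error type
// exposes a factory that produces a synthetic instance, so an injector
// can exercise type-specific handlers one at a time.
struct ErrorInfoBase {
  virtual ~ErrorInfoBase() = default;
  virtual std::string message() const = 0;
};

struct FileNotFound : ErrorInfoBase {
  std::string Path;
  explicit FileNotFound(std::string P) : Path(std::move(P)) {}
  std::string message() const override { return "file not found: " + Path; }
  // The hypothetical hook: a stub instance usable for error injection.
  static std::unique_ptr<ErrorInfoBase> createTestStub() {
    return std::make_unique<FileNotFound>("<injected>");
  }
};

struct OutOfMemory : ErrorInfoBase {
  std::string message() const override { return "out of memory"; }
  static std::unique_ptr<ErrorInfoBase> createTestStub() {
    return std::make_unique<OutOfMemory>();
  }
};

// A registry the injector could iterate to feed every known error kind
// into handleErrors-style dispatch, one per test run.
std::vector<std::unique_ptr<ErrorInfoBase>> allTestStubs() {
  std::vector<std::unique_ptr<ErrorInfoBase>> Stubs;
  Stubs.push_back(FileNotFound::createTestStub());
  Stubs.push_back(OutOfMemory::createTestStub());
  return Stubs;
}
```

The open question from the thread remains: how the tool would discover which concrete error types a given call site can actually produce.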
>>
>> - Dave
>>  
>>
>>
>>     I will get back to you with results of a bigger test run asap.
>>
>>     On 28.07.17 at 23:36, Lang Hames wrote:
>>
>>>     Hi Stefan, David,
>>>
>>>     This is very interesting stuff - it adds a dimension of error
>>>     security that Error/Expected can't provide on their own. I think
>>>     it would be interesting to try to build a tool around this.
>>>
>>>     Did you identify many cases where "real work" (in your example,
>>>     the nullptr dereference) was being done in an error branch? My
>>>     suspicion is that that should be rare, but that your tool would
>>>     be great for exposing logic errors and resource leaks if run
>>>     with the sanitizers turned on.
>>>
>>>     In an ideal world we'd go even further and build a clang/LLDB
>>>     based tool that can identify what kinds of errors a function can
>>>     produce, then inject instances of those: That would allow us to
>>>     test actual error handling logic too, not just the generic
>>>     surrounding logic. 
>>>
>>>     Cheers,
>>>     Lang.
>>>
>>>
>>>     On Thu, Jul 27, 2017 at 8:56 AM, David Blaikie
>>>     <dblaikie at gmail.com> wrote:
>>>
>>>
>>>
>>>         On Thu, Jul 27, 2017 at 8:54 AM Stefan Gränitz
>>>         <stefan.graenitz at gmail.com> wrote:
>>>
>>>             Yes definitely, testing a small piece of code like the
>>>             GlobPattern::create() example, it would mostly indicate
>>>             missing unit tests or insufficient test data.
>>>
>>>             In contrast to unit tests, however, it can also verify
>>>             correct handling of errors passed across function call
>>>             hierarchies in more complex scenarios.
>>>             For this I should point to the other example in the
>>>             code, where it's applied to llvm::object::createBinary():
>>>             https://github.com/weliveindetail/ForceAllErrors-in-LLVM/blob/master/test/TestLLVMObject.h#L13
>>>
>>>             Here it detects and runs 44 different control paths
>>>             that could hardly all be covered by unit tests, because
>>>             they don't depend on the input to createBinary() but
>>>             rather on the environment the test runs in.
>>>
>>>          Yep, testing OS level environmental failures would be great
>>>         for this - I wonder if there's a good way to distinguish
>>>         between them (so that this only hits those cases, but
>>>         doesn't unduly 'cover' other cases that should be targeted
>>>         by tests, etc). Essentially something more opt-in or some
>>>         other handshake. (perhaps a certain kind of Error that
>>>         represents a "this failure is due to the environment, not
>>>         the caller's arguments"? Not sure)
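The "handshake" David describes could be as small as a category bit on the error that distinguishes environmental failures from input-caused ones, so the injector only targets the former. A hypothetical design sketch, not an existing LLVM API (TaggedError, ErrorOrigin, and shouldInject are made-up names):

```cpp
#include <string>

// Origin of a failure: caused by the caller's arguments, or by the
// environment the process runs in (disk, OS, resources).
enum class ErrorOrigin { CallerInput, Environment };

struct TaggedError {
  ErrorOrigin Origin;
  std::string Message;
};

// The injector would only force errors whose origin is Environment,
// leaving input-validation paths to ordinary unit tests.
bool shouldInject(const TaggedError &E) {
  return E.Origin == ErrorOrigin::Environment;
}
```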
>>>
>>>         Hopefully Lang (author of Error/Expected) chimes in - be
>>>         curious to hear his thoughts on this stuff too.
>>>
>>>         Thanks again for developing it/bringing it up here! :)
>>>
>>>             On 27.07.17 at 16:46, David Blaikie wrote:
>>>>             I /kind/ of like the idea - but it almost feels like
>>>>             this would be a tool for finding out that test coverage
>>>>             is insufficient, then adding tests that actually
>>>>             exercise the bad input, etc (this should be equally
>>>>             discoverable by code coverage, probably? Maybe not if
>>>>             multiple error paths all collapse together, maybe... )
>>>>
>>>>             For instance, with your example, especially once
>>>>             there's an identified bug that helps motivate, would it
>>>>             not be better to add a test that does pass a fileName
>>>>             input that fails GlobPattern::create?
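David's suggestion, adding a direct regression test for the failing input, can be illustrated with a self-contained sketch. parseGlob is a hypothetical stand-in for a fallible factory like GlobPattern::create; its bracket-matching logic is invented for the example:

```cpp
#include <optional>
#include <string>

// Stand-in for a fallible factory: rejects patterns with unbalanced
// brackets, accepts everything else. Returns nullopt on failure, the
// (trivially "parsed") pattern on success.
std::optional<std::string> parseGlob(const std::string &Pattern) {
  int Depth = 0;
  for (char C : Pattern) {
    if (C == '[')
      ++Depth;
    else if (C == ']')
      --Depth;
    if (Depth < 0)
      return std::nullopt; // unmatched ']'
  }
  if (Depth != 0)
    return std::nullopt; // unmatched '['
  return Pattern;
}
```

A unit test then exercises the error path directly by passing an input that is known to fail, which is exactly the kind of test the injection tool helps you realize is missing.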
>>>>
>>>>             On Thu, Jul 27, 2017 at 5:10 AM Stefan Gränitz via
>>>>             llvm-dev <llvm-dev at lists.llvm.org> wrote:
>>>>
>>>>                 Hello, this is a call for feedback: opinions,
>>>>                 improvements, testers..
>>>>
>>>>                 I have been using the support classes Expected<T>
>>>>                 and ErrorOr<T> quite often recently, and I like the
>>>>                 concept a lot! Thanks Lang btw!
>>>>                 However, from time to time I found issues in the
>>>>                 execution paths of my error cases and got annoyed
>>>>                 by their naturally low test coverage.
>>>>
>>>>                 So I started sketching a test that runs all error
>>>>                 paths for a given
>>>>                 piece of code to detect these issues. I just pushed
>>>>                 it to GitHub and
>>>>                 added a little readme:
>>>>                 https://github.com/weliveindetail/ForceAllErrors-in-LLVM
>>>>
>>>>                 Are there people on the list facing the same issue?
>>>>                 How do you test your error paths?
>>>>                 Could this be of use for you if it was in a
>>>>                 reusable state?
>>>>                 Is there something similar already around?
>>>>                 Anyone seeing bugs or improvements?
>>>>                 Could it maybe even increase coverage in the LLVM
>>>>                 test suite some day?
>>>>
>>>>                 Thanks for all kinds of feedback!
>>>>                 Cheers, Stefan
>>>>
>>>>                 --
>>>>                 https://weliveindetail.github.io/blog/
>>>>                 https://cryptup.org/pub/stefan.graenitz@gmail.com
>>>>
>>>>
>>>>
>>>
>>>
>>>
>>
>>
>

-- 
https://weliveindetail.github.io/blog/
https://cryptup.org/pub/stefan.graenitz@gmail.com
