[llvm] r312582 - Revert "[Decompression] Fail gracefully when out of memory"

David Blaikie via llvm-commits llvm-commits at lists.llvm.org
Fri Sep 8 13:16:16 PDT 2017


On Fri, Sep 8, 2017 at 1:11 PM Kostya Serebryany <kcc at google.com> wrote:

> On Fri, Sep 8, 2017 at 10:45 AM, David Blaikie <dblaikie at gmail.com> wrote:
>
>>
>>
>> On Fri, Sep 8, 2017 at 10:40 AM Kostya Serebryany <kcc at google.com> wrote:
>>
>>> On Fri, Sep 8, 2017 at 10:11 AM, David Blaikie <dblaikie at gmail.com>
>>> wrote:
>>>
>>>>
>>>>
>>>> On Fri, Sep 8, 2017 at 10:06 AM Kostya Serebryany <kcc at google.com>
>>>> wrote:
>>>>
>>>>> On Fri, Sep 8, 2017 at 9:36 AM, David Blaikie <dblaikie at gmail.com>
>>>>> wrote:
>>>>>
>>>>>> On Thu, Sep 7, 2017 at 6:38 PM Kostya Serebryany <kcc at google.com>
>>>>>> wrote:
>>>>>>
>>>>>>> On Thu, Sep 7, 2017 at 9:14 AM, David Blaikie <dblaikie at gmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> maybe this is unreasonable, but could libFuzzer/asan/something mock
>>>>>>>> out malloc & have it fail for large allocations
>>>>>>>>
>>>>>>>
>>>>>>> Depending on your definition of 'fail'.
>>>>>>> libFuzzer has a flag -rss_limit_mb (default: 2048) that will cause
>>>>>>> the process to crash (i.e. report a bug) on
>>>>>>>   a) any single allocation exceeding the limit (yes, libFuzzer
>>>>>>> intercepts malloc) or
>>>>>>>   b) the total rss exceeding the limit.
>>>>>>>
>>>>>>> asan has another flag (allocator_may_return_null, off by default)
>>>>>>> that will cause malloc to return 0 on large allocations instead of
>>>>>>> crashing.
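[For reference, the knobs discussed above are set like this; the flag names are real ASan/libFuzzer options, but the binary name is a placeholder:]

```shell
# ASan default: crash (i.e. report a bug) on an unsatisfiable allocation.
ASAN_OPTIONS=allocator_may_return_null=0 ./fuzz_target

# Alternative: have malloc return NULL instead of crashing, so that
# unchecked-malloc bugs in the target become visible.
ASAN_OPTIONS=allocator_may_return_null=1 ./fuzz_target

# libFuzzer's separate RSS/single-allocation limit (default 2048 Mb).
./fuzz_target -rss_limit_mb=2048
```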
>>>>>>>
>>>>>>
>>>>>> That ^ seems like a good thing/approach.
>>>>>>
>>>>>
>>>>>
>>>>> Not sure I understand you.
>>>>> Is allocator_may_return_null=0 a good approach or
>>>>> allocator_may_return_null=1?
>>>>>
>>>>
>>>> allocator_may_return_null=1, I think. But I could be wrong/no doubt
>>>> other folks have other ideas.
>>>>
>>>
>>> The default for asan is =0 (i.e. crash on large allocs), but on
>>> ClusterFuzz for Chrome we use =1 because it allows us to find some
>>> extra bugs (when malloc returns 0, it goes unchecked and the code
>>> dereferences the null pointer using a large index => oops).
>>> However, when combined with libFuzzer, =1 does not work because
>>> -rss_limit_mb=2048 does not allow mallocs >= 2048Mb.
>>> The problem is not when we try to allocate 1Tb; the problem is when we
>>> try to allocate 3Gb, or 512Mb six times.
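[The "unchecked malloc + large index" bug class mentioned above can be sketched like this; the helper is hypothetical and only illustrates the pattern - without the NULL check, a huge request would return NULL and the later access would dereference (char*)0 + index:]

```cpp
#include <cstdlib>
#include <cstddef>

// Illustrative helper: allocate `size` bytes and access byte `index`.
// The NULL check is the whole point; dropping it reproduces the bug
// class described above (null pointer dereferenced at a large offset).
bool read_byte_checked(size_t size, size_t index, char &out) {
  char *buf = static_cast<char *>(std::malloc(size));
  if (!buf)         // without this check, buf[index] would crash
    return false;   // graceful failure instead
  buf[index] = 42;  // touch the memory
  out = buf[index];
  std::free(buf);
  return true;
}
```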
>>>
>>
>> Not sure I'm following - it sounds like (originally) the issue is that
>> the fuzzer might produce inputs that cause the code under test to allocate
>> a lot of memory (either in one allocation, or several). So it seems to me
>> like it'd be helpful/good if the fuzzer/asan/something (if it's easy)
>> placed an artificial bound on total amount of memory that can be allocated
>> (having the malloc return 0 beyond that). Maybe that's not practical, I'm
>> not sure.
>>
>
> It may or may not be practical depending on the project specifics.
> I would like to be in the world where this is never practical.
>

I'm not sure I understand what you mean.

It seems unfortunate to me to build fixed memory limits into programs -
it's nice if they scale up with bigger inputs and more memory to do more
of whatever it is they do - limited only by the memory available on the
machine, not some fixed limit in the program.

If this means making programs robust to allocation failure (failing
gracefully rather than crashing) in order to be fuzzable - that seems, to
me at least, an OK tradeoff.
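[A minimal sketch of what "failing gracefully" could look like for a buffer resize - catching the exception rather than letting the process die. The function name is illustrative, not LLVM's API; std::bad_alloc covers allocation failure and std::length_error covers requests beyond the container's max_size():]

```cpp
#include <cstddef>
#include <new>
#include <stdexcept>
#include <vector>

// Try to grow the buffer; report failure to the caller instead of
// crashing. On failure the vector is left unchanged (strong guarantee).
bool resize_or_fail(std::vector<char> &buf, size_t n) {
  try {
    buf.resize(n);
    return true;
  } catch (const std::bad_alloc &) {
    return false;
  } catch (const std::length_error &) {
    return false;
  }
}
```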


>>>>>> (though we're sort of trained that checking for allocation failure is
>>>>>> almost pointless because modern systems don't actually allocate at the time
>>>>>> & wait to page fault the memory in (I'm sure I'm using the terminology
>>>>>> incorrectly, but hopefully makes enough sense) - but seems like a possible
>>>>>> necessary/useful tradeoff for fuzzing rather than every program needing to
>>>>>> implement/enforce a fixed memory bound)
>>>>>>
>>>>>>
>>>>>>> By default asan crashes on HUGE allocations (e.g. 1Tb) but doesn't
>>>>>>> fail on anything it can actually allocate.
>>>>>>>
>>>>>>> --kcc
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>> rather than having applications need to support such a bound
>>>>>>>> themselves?
>>>>>>>>
>>>>>>>> On Thu, Sep 7, 2017 at 9:10 AM Kostya Serebryany <kcc at google.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> On Thu, Sep 7, 2017 at 3:17 AM, George Rimar <
>>>>>>>>> grimar at accesssoftek.com> wrote:
>>>>>>>>>
>>>>>>>>>> Compressing/decompressing can be used for any sections in theory,
>>>>>>>>>> but in practice it is used only for the .debug_* ones, I think
>>>>>>>>>> (not sure though).
>>>>>>>>>>
>>>>>>>>>> Afaik LLVM does not support DWARF64 now.
>>>>>>>>>>
>>>>>>>>>> Given the above, we can probably limit the decompressed size to
>>>>>>>>>> 4Gb; would that be a reasonable limit?
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>> It might be reasonable as the default limit, but for fuzzing 4Gb
>>>>>>>>> is a bit too expensive.
>>>>>>>>> The default total rss limit in libFuzzer is 2Gb, and the same
>>>>>>>>> limit is enforced on oss-fuzz.
>>>>>>>>> So, there should be a way to enforce another limit (1Gb sounds
>>>>>>>>> best to me).
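[The run-time-configurable limit proposed here might look roughly like the sketch below. The variable name and 1Gb default are made up for illustration; a real implementation would wire the knob to LLVM's cl::opt machinery or an environment variable, which isn't shown:]

```cpp
#include <cstdint>
#include <string>

// Hypothetical knob; defaulting to the 1Gb suggested in the thread.
static uint64_t MaxDecompressedSize = 1ULL << 30;

// Reject obviously hostile sizes before allocating anything, so the
// process never gets near the fuzzer's RSS limit.
bool checkDecompressedSize(uint64_t DeclaredSize, std::string &Err) {
  if (DeclaredSize > MaxDecompressedSize) {
    Err = "declared decompressed size " + std::to_string(DeclaredSize) +
          " exceeds limit " + std::to_string(MaxDecompressedSize);
    return false;
  }
  return true;
}
```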
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>> Another point (already mentioned in a different thread) is that
>>>>>>>>>> the install_bad_alloc_error_handler API looks completely dead
>>>>>>>>>> then? It does not seem possible to use it currently without
>>>>>>>>>> breaking ASan.
>>>>>>>>>>
>>>>>>>>>> Best regards,
>>>>>>>>>> George | Developer | Access Softek, Inc
>>>>>>>>>> ------------------------------
>>>>>>>>>> *From:* Kostya Serebryany <kcc at google.com>
>>>>>>>>>> *Sent:* September 7, 2017 3:26
>>>>>>>>>> *To:* Jonas Devlieghere
>>>>>>>>>> *Cc:* Vedant Kumar; LLVM Commits; George Rimar
>>>>>>>>>> *Subject:* Re: [llvm] r312582 - Revert "[Decompression] Fail
>>>>>>>>>> gracefully when out of memory"
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Wed, Sep 6, 2017 at 2:38 PM, Jonas Devlieghere <
>>>>>>>>>> jdevlieghere at apple.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> Hi Kostya,
>>>>>>>>>>>
>>>>>>>>>>> I completely agree with you that this is far from an optimal
>>>>>>>>>>> solution! However, I don’t really see an alternative here without
>>>>>>>>>>> a complete rewrite of how we currently do decompression. For this
>>>>>>>>>>> particular case, how would you decide whether the allocated chunk
>>>>>>>>>>> is too big?
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> It could be an arbitrary limit which can be redefined at
>>>>>>>>>> compile-time or, better, with a run-time flag (and the default set to e.g.
>>>>>>>>>> 1Gb?)
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>> Any limit would be arbitrary before doing the actual extraction.
>>>>>>>>>>> This is a pretty common problem with extraction
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Oh yeah, "zip bombs" they call them.
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>> and I *think* libz has some kind of workaround for it, though
>>>>>>>>>>> I’d have to check to refresh my memory. We could probably do
>>>>>>>>>>> something similar in the decompressor, but that’d be a pretty big
>>>>>>>>>>> change, and this seemed like a reasonable first step as opposed
>>>>>>>>>>> to just crashing.
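[The streaming approach hinted at here can be sketched independently of zlib: decompress in fixed-size chunks and abort as soon as the running total passes a cap, so a malicious declared size can never force a huge up-front allocation. The producer callback below is a stand-in for zlib's inflate loop; the real zlib plumbing is elided:]

```cpp
#include <cstddef>
#include <functional>
#include <vector>

// Pull decompressed chunks from `produce` (which fills `chunk` and
// returns the number of bytes written, 0 at end of stream) and stop
// early if the output would exceed `limit`. Returns false when the
// stream would blow the limit.
bool decompressCapped(const std::function<size_t(std::vector<char> &)> &produce,
                      size_t limit, std::vector<char> &out) {
  std::vector<char> chunk(4096);
  for (;;) {
    size_t n = produce(chunk);
    if (n == 0)
      return true;  // end of stream, within the cap
    if (out.size() + n > limit)
      return false; // would exceed the limit: bail before growing further
    out.insert(out.end(), chunk.begin(), chunk.begin() + n);
  }
}
```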
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> This change doesn't really fix the problem though.
>>>>>>>>>> Yes, it might help if the requested allocation is e.g. 42Tb.
>>>>>>>>>> But what if the requested allocation is 3Gb? LibFuzzer will still
>>>>>>>>>> die with OOM (default rss_limit_mb=2048).
>>>>>>>>>> What if the requested allocation is 1.99Gb? libFuzzer will die
>>>>>>>>>> due to OOM again, just because the process will consume 2Gb.
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> Here's the original differential https://reviews.llvm.org/D37447
>>>>>>>>>>>
>>>>>>>>>>> Jonas
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On Sep 6, 2017, at 9:37 PM, Kostya Serebryany <kcc at google.com>
>>>>>>>>>>> wrote:
>>>>>>>>>>>
>>>>>>>>>>> I missed the original commit but I'm glad it's reverted.
>>>>>>>>>>> I think it's a really bad practice to allocate huge chunks of
>>>>>>>>>>> RAM (petabytes?) and rely on the new handler to bark.
>>>>>>>>>>> Instead, we should simply not try to allocate large chunks. In
>>>>>>>>>>> most cases, if a compiler-related tool wants to allocate many Gb
>>>>>>>>>>> of RAM at once, it's an indication of a logical bug.
>>>>>>>>>>>
>>>>>>>>>>> On Tue, Sep 5, 2017 at 3:04 PM, Vedant Kumar via llvm-commits <
>>>>>>>>>>> llvm-commits at lists.llvm.org> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Author: vedantk
>>>>>>>>>>>> Date: Tue Sep  5 15:04:00 2017
>>>>>>>>>>>> New Revision: 312582
>>>>>>>>>>>>
>>>>>>>>>>>> URL: http://llvm.org/viewvc/llvm-project?rev=312582&view=rev
>>>>>>>>>>>> Log:
>>>>>>>>>>>> Revert "[Decompression] Fail gracefully when out of memory"
>>>>>>>>>>>>
>>>>>>>>>>>> This reverts commit r312526.
>>>>>>>>>>>>
>>>>>>>>>>>> Revert "Fix
>>>>>>>>>>>> test/DebugInfo/dwarfdump-decompression-invalid-size.test"
>>>>>>>>>>>>
>>>>>>>>>>>> This reverts commit r312527.
>>>>>>>>>>>>
>>>>>>>>>>>> It causes an ASan failure:
>>>>>>>>>>>>
>>>>>>>>>>>> http://lab.llvm.org:8080/green/job/clang-stage2-cmake-RgSan_check/4150
>>>>>>>>>>>>
>>>>>>>>>>>> Removed:
>>>>>>>>>>>>
>>>>>>>>>>>> llvm/trunk/test/DebugInfo/Inputs/dwarfdump-decompression-invalid-size.elf-x86-64
>>>>>>>>>>>>
>>>>>>>>>>>> llvm/trunk/test/DebugInfo/dwarfdump-decompression-invalid-size.test
>>>>>>>>>>>> Modified:
>>>>>>>>>>>>     llvm/trunk/include/llvm/Object/Decompressor.h
>>>>>>>>>>>>     llvm/trunk/lib/Object/Decompressor.cpp
>>>>>>>>>>>>
>>>>>>>>>>>> Modified: llvm/trunk/include/llvm/Object/Decompressor.h
>>>>>>>>>>>> URL:
>>>>>>>>>>>> http://llvm.org/viewvc/llvm-project/llvm/trunk/include/llvm/Object/Decompressor.h?rev=312582&r1=312581&r2=312582&view=diff
>>>>>>>>>>>>
>>>>>>>>>>>> ==============================================================================
>>>>>>>>>>>> --- llvm/trunk/include/llvm/Object/Decompressor.h (original)
>>>>>>>>>>>> +++ llvm/trunk/include/llvm/Object/Decompressor.h Tue Sep  5
>>>>>>>>>>>> 15:04:00 2017
>>>>>>>>>>>> @@ -13,7 +13,6 @@
>>>>>>>>>>>>  #include "llvm/ADT/SmallString.h"
>>>>>>>>>>>>  #include "llvm/ADT/StringRef.h"
>>>>>>>>>>>>  #include "llvm/Object/ObjectFile.h"
>>>>>>>>>>>> -#include "llvm/Support/ErrorHandling.h"
>>>>>>>>>>>>
>>>>>>>>>>>>  namespace llvm {
>>>>>>>>>>>>  namespace object {
>>>>>>>>>>>> @@ -32,9 +31,7 @@ public:
>>>>>>>>>>>>    /// @brief Resize the buffer and uncompress section data
>>>>>>>>>>>> into it.
>>>>>>>>>>>>    /// @param Out         Destination buffer.
>>>>>>>>>>>>    template <class T> Error resizeAndDecompress(T &Out) {
>>>>>>>>>>>> -    install_bad_alloc_error_handler(outOfMemoryHandler, this);
>>>>>>>>>>>>      Out.resize(DecompressedSize);
>>>>>>>>>>>> -    remove_bad_alloc_error_handler();
>>>>>>>>>>>>      return decompress({Out.data(), (size_t)DecompressedSize});
>>>>>>>>>>>>    }
>>>>>>>>>>>>
>>>>>>>>>>>> @@ -55,14 +52,11 @@ public:
>>>>>>>>>>>>    static bool isGnuStyle(StringRef Name);
>>>>>>>>>>>>
>>>>>>>>>>>>  private:
>>>>>>>>>>>> -  static void outOfMemoryHandler(void *Data, const std::string
>>>>>>>>>>>> &Message, bool);
>>>>>>>>>>>> -
>>>>>>>>>>>> -  Decompressor(StringRef Name, StringRef Data);
>>>>>>>>>>>> +  Decompressor(StringRef Data);
>>>>>>>>>>>>
>>>>>>>>>>>>    Error consumeCompressedGnuHeader();
>>>>>>>>>>>>    Error consumeCompressedZLibHeader(bool Is64Bit, bool
>>>>>>>>>>>> IsLittleEndian);
>>>>>>>>>>>>
>>>>>>>>>>>> -  StringRef SectionName;
>>>>>>>>>>>>    StringRef SectionData;
>>>>>>>>>>>>    uint64_t DecompressedSize;
>>>>>>>>>>>>  };
>>>>>>>>>>>>
>>>>>>>>>>>> Modified: llvm/trunk/lib/Object/Decompressor.cpp
>>>>>>>>>>>> URL:
>>>>>>>>>>>> http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Object/Decompressor.cpp?rev=312582&r1=312581&r2=312582&view=diff
>>>>>>>>>>>>
>>>>>>>>>>>> ==============================================================================
>>>>>>>>>>>> --- llvm/trunk/lib/Object/Decompressor.cpp (original)
>>>>>>>>>>>> +++ llvm/trunk/lib/Object/Decompressor.cpp Tue Sep  5 15:04:00
>>>>>>>>>>>> 2017
>>>>>>>>>>>> @@ -23,7 +23,7 @@ Expected<Decompressor> Decompressor::cre
>>>>>>>>>>>>    if (!zlib::isAvailable())
>>>>>>>>>>>>      return createError("zlib is not available");
>>>>>>>>>>>>
>>>>>>>>>>>> -  Decompressor D(Name, Data);
>>>>>>>>>>>> +  Decompressor D(Data);
>>>>>>>>>>>>    Error Err = isGnuStyle(Name) ? D.consumeCompressedGnuHeader()
>>>>>>>>>>>>                                 :
>>>>>>>>>>>> D.consumeCompressedZLibHeader(Is64Bit, IsLE);
>>>>>>>>>>>>    if (Err)
>>>>>>>>>>>> @@ -31,8 +31,8 @@ Expected<Decompressor> Decompressor::cre
>>>>>>>>>>>>    return D;
>>>>>>>>>>>>  }
>>>>>>>>>>>>
>>>>>>>>>>>> -Decompressor::Decompressor(StringRef Name, StringRef Data)
>>>>>>>>>>>> -    : SectionName(Name), SectionData(Data),
>>>>>>>>>>>> DecompressedSize(0) {}
>>>>>>>>>>>> +Decompressor::Decompressor(StringRef Data)
>>>>>>>>>>>> +    : SectionData(Data), DecompressedSize(0) {}
>>>>>>>>>>>>
>>>>>>>>>>>>  Error Decompressor::consumeCompressedGnuHeader() {
>>>>>>>>>>>>    if (!SectionData.startswith("ZLIB"))
>>>>>>>>>>>> @@ -92,11 +92,3 @@ Error Decompressor::decompress(MutableAr
>>>>>>>>>>>>    size_t Size = Buffer.size();
>>>>>>>>>>>>    return zlib::uncompress(SectionData, Buffer.data(), Size);
>>>>>>>>>>>>  }
>>>>>>>>>>>> -
>>>>>>>>>>>> -void Decompressor::outOfMemoryHandler(void *Data, const
>>>>>>>>>>>> std::string &Message,
>>>>>>>>>>>> -                                      bool) {
>>>>>>>>>>>> -  const auto *D = static_cast<const Decompressor *>(Data);
>>>>>>>>>>>> -  report_fatal_error("decompression of '" +
>>>>>>>>>>>> Twine(D->SectionName) +
>>>>>>>>>>>> -                     "' failed: unable to allocate " +
>>>>>>>>>>>> -                     Twine(D->DecompressedSize) + " bytes.");
>>>>>>>>>>>> -}
>>>>>>>>>>>>
>>>>>>>>>>>> Removed:
>>>>>>>>>>>> llvm/trunk/test/DebugInfo/Inputs/dwarfdump-decompression-invalid-size.elf-x86-64
>>>>>>>>>>>> URL:
>>>>>>>>>>>> http://llvm.org/viewvc/llvm-project/llvm/trunk/test/DebugInfo/Inputs/dwarfdump-decompression-invalid-size.elf-x86-64?rev=312581&view=auto
>>>>>>>>>>>>
>>>>>>>>>>>> ==============================================================================
>>>>>>>>>>>> Binary files
>>>>>>>>>>>> llvm/trunk/test/DebugInfo/Inputs/dwarfdump-decompression-invalid-size.elf-x86-64
>>>>>>>>>>>> (original) and
>>>>>>>>>>>> llvm/trunk/test/DebugInfo/Inputs/dwarfdump-decompression-invalid-size.elf-x86-64
>>>>>>>>>>>> (removed) differ
>>>>>>>>>>>>
>>>>>>>>>>>> Removed:
>>>>>>>>>>>> llvm/trunk/test/DebugInfo/dwarfdump-decompression-invalid-size.test
>>>>>>>>>>>> URL:
>>>>>>>>>>>> http://llvm.org/viewvc/llvm-project/llvm/trunk/test/DebugInfo/dwarfdump-decompression-invalid-size.test?rev=312581&view=auto
>>>>>>>>>>>>
>>>>>>>>>>>> ==============================================================================
>>>>>>>>>>>> ---
>>>>>>>>>>>> llvm/trunk/test/DebugInfo/dwarfdump-decompression-invalid-size.test
>>>>>>>>>>>> (original)
>>>>>>>>>>>> +++
>>>>>>>>>>>> llvm/trunk/test/DebugInfo/dwarfdump-decompression-invalid-size.test
>>>>>>>>>>>> (removed)
>>>>>>>>>>>> @@ -1,15 +0,0 @@
>>>>>>>>>>>> -REQUIRES: zlib
>>>>>>>>>>>> -
>>>>>>>>>>>> -// dwarfdump-decompression-invalid-size.elf-x86-64 is prepared
>>>>>>>>>>>> using following
>>>>>>>>>>>> -// source code and invocation:
>>>>>>>>>>>> -// test.cpp:
>>>>>>>>>>>> -// int main() { return 0; }
>>>>>>>>>>>> -//
>>>>>>>>>>>> -// gcc test.cpp -o out -g -Wl,--compress-debug-sections,zlib
>>>>>>>>>>>> -//
>>>>>>>>>>>> -// After that result object was modified manually.
>>>>>>>>>>>> Decompressed size of
>>>>>>>>>>>> -// .debug_frame section was changed to 0xffffffffffffffff in
>>>>>>>>>>>> compression
>>>>>>>>>>>> -// header.
>>>>>>>>>>>> -RUN: not llvm-dwarfdump
>>>>>>>>>>>> %p/Inputs/dwarfdump-decompression-invalid-size.elf-x86-64 2>&1 | FileCheck
>>>>>>>>>>>> %s
>>>>>>>>>>>> -
>>>>>>>>>>>> -CHECK: decompression of '.debug_frame' failed: unable to
>>>>>>>>>>>> allocate 18446744073709551615 bytes.
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> _______________________________________________
>>>>>>>>>>>> llvm-commits mailing list
>>>>>>>>>>>> llvm-commits at lists.llvm.org
>>>>>>>>>>>> http://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-commits
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>

