[llvm] r336156 - [ADT] Add llvm::unique_function which is like std::function but

Chandler Carruth via llvm-commits llvm-commits at lists.llvm.org
Tue Jul 3 13:26:16 PDT 2018


Sorry, yeah. There is a bit of thread w/ Eli about this. We need an
explicit alignment attribute. I'll add one ASAP...

On Tue, Jul 3, 2018 at 12:24 PM Aditya Nandakumar <aditya_nandakumar at apple.com> wrote:

> Hi Chandler,
>
> This seems to be failing on macOS (the unit test segfaults consistently on
> our bots).
> I see this:
>
> Assertion failed: ((reinterpret_cast<uintptr_t>(P) & ~((uintptr_t)-1 <<
> NumLowBitsAvailable)) == 0 && "Alignment not satisfied for an actual
> function pointer!"), function getAsVoidPointer, file
> llvm/include/llvm/Support/PointerLikeTypeTraits.h, line 129
>
> Can you please take a look?
>
> > On Jul 2, 2018, at 4:57 PM, Chandler Carruth via llvm-commits <llvm-commits at lists.llvm.org> wrote:
> >
> > Author: chandlerc
> > Date: Mon Jul  2 16:57:29 2018
> > New Revision: 336156
> >
> > URL: http://llvm.org/viewvc/llvm-project?rev=336156&view=rev
> > Log:
> > [ADT] Add llvm::unique_function which is like std::function but
> > supporting move-only closures.
> >
> > Most of the core optimizations for std::function are here plus
> > a potentially novel one that detects trivially movable and destroyable
> > functors and implements those with fewer indirections.
> >
> > This is especially useful as we start trying to add concurrency
> > primitives as those often end up with move-only types (futures,
> > promises, etc) and wanting them to work through lambdas.
> >
> > As further work, we could add better support for things like const-qualified
> > operator()s to support more algorithms, and r-value ref qualified operator()s
> > to model call-once. None of that is here though.
> >
> > We can also provide our own llvm::function that has some of the optimizations
> > used in this class, but with copy semantics instead of move semantics.
> >
> > This is motivated by increasing usage of things like executors and the task
> > queue where it is useful to embed move-only types like a std::promise within
> > a type erased function. That isn't possible without this version of a type
> > erased function.
> >
> > Differential Revision: https://reviews.llvm.org/D48349
> >
> > Added:
> >    llvm/trunk/include/llvm/ADT/FunctionExtras.h
> >    llvm/trunk/unittests/ADT/FunctionExtrasTest.cpp
> > Modified:
> >    llvm/trunk/include/llvm/Support/Compiler.h
> >    llvm/trunk/include/llvm/Support/PointerLikeTypeTraits.h
> >    llvm/trunk/unittests/ADT/CMakeLists.txt
> >
> > Added: llvm/trunk/include/llvm/ADT/FunctionExtras.h
> > URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/include/llvm/ADT/FunctionExtras.h?rev=336156&view=auto
> > ==============================================================================
> > --- llvm/trunk/include/llvm/ADT/FunctionExtras.h (added)
> > +++ llvm/trunk/include/llvm/ADT/FunctionExtras.h Mon Jul  2 16:57:29 2018
> > @@ -0,0 +1,274 @@
> > +//===- FunctionExtras.h - Function type erasure utilities -------*- C++ -*-===//
> > +//
> > +//                     The LLVM Compiler Infrastructure
> > +//
> > +// This file is distributed under the University of Illinois Open Source
> > +// License. See LICENSE.TXT for details.
> > +//
> > +//===----------------------------------------------------------------------===//
> > +/// \file
> > +/// This file provides a collection of function (or more generally, callable)
> > +/// type erasure utilities supplementing those provided by the standard
> > +/// library in `<functional>`.
> > +///
> > +/// It provides `unique_function`, which works like `std::function` but
> > +/// supports move-only callable objects.
> > +///
> > +/// Future plans:
> > +/// - Add a `function` that provides const, volatile, and ref-qualified support,
> > +///   which doesn't work with `std::function`.
> > +/// - Provide support for specifying multiple signatures to type erase callable
> > +///   objects with an overload set, such as those produced by generic lambdas.
> > +/// - Expand to include a copyable utility that directly replaces std::function
> > +///   but brings the above improvements.
> > +///
> > +/// Note that LLVM's utilities are greatly simplified by not supporting
> > +/// allocators.
> > +///
> > +/// If the standard library ever begins to provide comparable facilities we can
> > +/// consider switching to those.
> > +///
> > +//===----------------------------------------------------------------------===//
> > +
> > +#ifndef LLVM_ADT_FUNCTION_EXTRAS_H
> > +#define LLVM_ADT_FUNCTION_EXTRAS_H
> > +
> > +#include "llvm/ADT/PointerIntPair.h"
> > +#include "llvm/ADT/PointerUnion.h"
> > +#include <memory>
> > +#include <type_traits>
> > +
> > +namespace llvm {
> > +
> > +template <typename FunctionT> class unique_function;
> > +
> > +template <typename ReturnT, typename... ParamTs>
> > +class unique_function<ReturnT(ParamTs...)> {
> > +  static constexpr int InlineStorageSize = sizeof(void *) * 3;
> > +
> > +  // Provide a type function to map parameters that won't observe extra copies
> > +  // or moves and which are small enough to likely pass in register to values
> > +  // and all other types to l-value reference types. We use this to compute the
> > +  // types used in our erased call utility to minimize copies and moves unless
> > +  // doing so would force things unnecessarily into memory.
> > +  //
> > +  // The heuristic used is related to common ABI register passing conventions.
> > +  // It doesn't have to be exact though, and in one way it is more strict
> > +  // because we want to still be able to observe either moves *or* copies.
> > +  template <typename T>
> > +  using AdjustedParamT = typename std::conditional<
> > +      !std::is_reference<T>::value &&
> > +          std::is_trivially_copy_constructible<T>::value &&
> > +          std::is_trivially_move_constructible<T>::value &&
> > +          sizeof(T) <= (2 * sizeof(void *)),
> > +      T, T &>::type;
> > +
> > +  // The type of the erased function pointer we use as a callback to dispatch to
> > +  // the stored callable when it is trivial to move and destroy.
> > +  using CallPtrT = ReturnT (*)(void *CallableAddr,
> > +                               AdjustedParamT<ParamTs>... Params);
> > +  using MovePtrT = void (*)(void *LHSCallableAddr, void *RHSCallableAddr);
> > +  using DestroyPtrT = void (*)(void *CallableAddr);
> > +
> > +  /// A struct we use to aggregate three callbacks when we need the full set of
> > +  /// operations.
> > +  struct NonTrivialCallbacks {
> > +    CallPtrT CallPtr;
> > +    MovePtrT MovePtr;
> > +    DestroyPtrT DestroyPtr;
> > +  };
> > +
> > +  // Now we can create a pointer union between either a direct, trivial call
> > +  // pointer and a pointer to a static struct of the call, move, and destroy
> > +  // pointers. We do this to keep the footprint in this object a single pointer
> > +  // while supporting all the necessary type-erased operations.
> > +  using CallbackPointerUnionT = PointerUnion<CallPtrT, NonTrivialCallbacks *>;
> > +
> > +  // The main storage buffer. This will either have a pointer to out-of-line
> > +  // storage or an inline buffer storing the callable.
> > +  union StorageUnionT {
> > +    // For out-of-line storage we keep a pointer to the underlying storage and
> > +    // the size. This is enough to deallocate the memory.
> > +    struct OutOfLineStorageT {
> > +      void *StoragePtr;
> > +      size_t Size;
> > +      size_t Alignment;
> > +    } OutOfLineStorage;
> > +    static_assert(
> > +        sizeof(OutOfLineStorageT) <= InlineStorageSize,
> > +        "Should always use all of the out-of-line storage for inline storage!");
> > +
> > +    // For in-line storage, we just provide an aligned character buffer. We
> > +    // provide three pointers worth of storage here.
> > +    typename std::aligned_storage<InlineStorageSize, alignof(void *)>::type
> > +        InlineStorage;
> > +  } StorageUnion;
> > +
> > +  // A compressed pointer to either our dispatching callback or our table of
> > +  // dispatching callbacks and the flag for whether the callable itself is
> > +  // stored inline or not.
> > +  PointerIntPair<CallbackPointerUnionT, 1, bool> CallbackAndInlineFlag;
> > +
> > +  bool isInlineStorage() const { return CallbackAndInlineFlag.getInt(); }
> > +
> > +  bool isTrivialCallback() const {
> > +    return CallbackAndInlineFlag.getPointer().template is<CallPtrT>();
> > +  }
> > +
> > +  CallPtrT getTrivialCallback() const {
> > +    return CallbackAndInlineFlag.getPointer().template get<CallPtrT>();
> > +  }
> > +
> > +  NonTrivialCallbacks *getNonTrivialCallbacks() const {
> > +    return CallbackAndInlineFlag.getPointer()
> > +        .template get<NonTrivialCallbacks *>();
> > +  }
> > +
> > +  void *getInlineStorage() { return &StorageUnion.InlineStorage; }
> > +
> > +  void *getOutOfLineStorage() {
> > +    return StorageUnion.OutOfLineStorage.StoragePtr;
> > +  }
> > +  size_t getOutOfLineStorageSize() const {
> > +    return StorageUnion.OutOfLineStorage.Size;
> > +  }
> > +  size_t getOutOfLineStorageAlignment() const {
> > +    return StorageUnion.OutOfLineStorage.Alignment;
> > +  }
> > +
> > +  void setOutOfLineStorage(void *Ptr, size_t Size, size_t Alignment) {
> > +    StorageUnion.OutOfLineStorage = {Ptr, Size, Alignment};
> > +  }
> > +
> > +  template <typename CallableT>
> > +  static ReturnT CallImpl(void *CallableAddr,
> > +                          AdjustedParamT<ParamTs>... Params) {
> > +    return (*reinterpret_cast<CallableT *>(CallableAddr))(
> > +        std::forward<ParamTs>(Params)...);
> > +  }
> > +
> > +  template <typename CallableT>
> > +  static void MoveImpl(void *LHSCallableAddr, void *RHSCallableAddr) noexcept {
> > +    new (LHSCallableAddr)
> > +        CallableT(std::move(*reinterpret_cast<CallableT *>(RHSCallableAddr)));
> > +  }
> > +
> > +  template <typename CallableT>
> > +  static void DestroyImpl(void *CallableAddr) noexcept {
> > +    reinterpret_cast<CallableT *>(CallableAddr)->~CallableT();
> > +  }
> > +
> > +public:
> > +  unique_function() = default;
> > +  unique_function(std::nullptr_t /*null_callable*/) {}
> > +
> > +  ~unique_function() {
> > +    if (!CallbackAndInlineFlag.getPointer())
> > +      return;
> > +
> > +    // Cache this value so we don't re-check it after type-erased operations.
> > +    bool IsInlineStorage = isInlineStorage();
> > +
> > +    if (!isTrivialCallback())
> > +      getNonTrivialCallbacks()->DestroyPtr(
> > +          IsInlineStorage ? getInlineStorage() : getOutOfLineStorage());
> > +
> > +    if (!IsInlineStorage)
> > +      deallocate_buffer(getOutOfLineStorage(), getOutOfLineStorageSize(),
> > +                        getOutOfLineStorageAlignment());
> > +  }
> > +
> > +  unique_function(unique_function &&RHS) noexcept {
> > +    // Copy the callback and inline flag.
> > +    CallbackAndInlineFlag = RHS.CallbackAndInlineFlag;
> > +
> > +    // If the RHS is empty, just copying the above is sufficient.
> > +    if (!RHS)
> > +      return;
> > +
> > +    if (!isInlineStorage()) {
> > +      // The out-of-line case is easiest to move.
> > +      StorageUnion.OutOfLineStorage = RHS.StorageUnion.OutOfLineStorage;
> > +    } else if (isTrivialCallback()) {
> > +      // Move is trivial, just memcpy the bytes across.
> > +      memcpy(getInlineStorage(), RHS.getInlineStorage(), InlineStorageSize);
> > +    } else {
> > +      // Non-trivial move, so dispatch to a type-erased implementation.
> > +      getNonTrivialCallbacks()->MovePtr(getInlineStorage(),
> > +                                        RHS.getInlineStorage());
> > +    }
> > +
> > +    // Clear the old callback and inline flag to get back to as-if-null.
> > +    RHS.CallbackAndInlineFlag = {};
> > +
> > +#ifndef NDEBUG
> > +    // In debug builds, we also scribble across the rest of the storage.
> > +    memset(RHS.getInlineStorage(), 0xAD, InlineStorageSize);
> > +#endif
> > +  }
> > +
> > +  unique_function &operator=(unique_function &&RHS) noexcept {
> > +    if (this == &RHS)
> > +      return *this;
> > +
> > +    // Because we don't try to provide any exception safety guarantees we can
> > +    // implement move assignment very simply by first destroying the current
> > +    // object and then move-constructing over top of it.
> > +    this->~unique_function();
> > +    new (this) unique_function(std::move(RHS));
> > +    return *this;
> > +  }
> > +
> > +  template <typename CallableT> unique_function(CallableT Callable) {
> > +    bool IsInlineStorage = true;
> > +    void *CallableAddr = getInlineStorage();
> > +    if (sizeof(CallableT) > InlineStorageSize ||
> > +        alignof(CallableT) > alignof(decltype(StorageUnion.InlineStorage))) {
> > +      IsInlineStorage = false;
> > +      // Allocate out-of-line storage. FIXME: Use an explicit alignment
> > +      // parameter in C++17 mode.
> > +      auto Size = sizeof(CallableT);
> > +      auto Alignment = alignof(CallableT);
> > +      CallableAddr = allocate_buffer(Size, Alignment);
> > +      setOutOfLineStorage(CallableAddr, Size, Alignment);
> > +    }
> > +
> > +    // Now move into the storage.
> > +    new (CallableAddr) CallableT(std::move(Callable));
> > +
> > +    // See if we can create a trivial callback.
> > +    // FIXME: we should use constexpr if here and below to avoid instantiating
> > +    // the non-trivial static objects when unnecessary. While the linker should
> > +    // remove them, it is still wasteful.
> > +    if (std::is_trivially_move_constructible<CallableT>::value &&
> > +        std::is_trivially_destructible<CallableT>::value) {
> > +      CallbackAndInlineFlag = {&CallImpl<CallableT>, IsInlineStorage};
> > +      return;
> > +    }
> > +
> > +    // Otherwise, we need to point at an object with a vtable that contains all
> > +    // the different type erased behaviors needed. Create a static instance of
> > +    // the derived type here and then use a pointer to that.
> > +    static NonTrivialCallbacks Callbacks = {
> > +        &CallImpl<CallableT>, &MoveImpl<CallableT>, &DestroyImpl<CallableT>};
> > +
> > +    CallbackAndInlineFlag = {&Callbacks, IsInlineStorage};
> > +  }
> > +
> > +  ReturnT operator()(ParamTs... Params) {
> > +    void *CallableAddr =
> > +        isInlineStorage() ? getInlineStorage() : getOutOfLineStorage();
> > +
> > +    return (isTrivialCallback()
> > +                ? getTrivialCallback()
> > +                : getNonTrivialCallbacks()->CallPtr)(CallableAddr, Params...);
> > +  }
> > +
> > +  explicit operator bool() const {
> > +    return (bool)CallbackAndInlineFlag.getPointer();
> > +  }
> > +};
> > +
> > +} // end namespace llvm
> > +
> > +#endif // LLVM_ADT_FUNCTION_EXTRAS_H
> >
> > Modified: llvm/trunk/include/llvm/Support/Compiler.h
> > URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/include/llvm/Support/Compiler.h?rev=336156&r1=336155&r2=336156&view=diff
> > ==============================================================================
> > --- llvm/trunk/include/llvm/Support/Compiler.h (original)
> > +++ llvm/trunk/include/llvm/Support/Compiler.h Mon Jul  2 16:57:29 2018
> > @@ -17,6 +17,9 @@
> >
> > #include "llvm/Config/llvm-config.h"
> >
> > +#include <new>
> > +#include <stddef.h>
> > +
> > #if defined(_MSC_VER)
> > #include <sal.h>
> > #endif
> > @@ -503,4 +506,46 @@ void AnnotateIgnoreWritesEnd(const char
> > #define LLVM_ENABLE_EXCEPTIONS 1
> > #endif
> >
> > +namespace llvm {
> > +
> > +/// Allocate a buffer of memory with the given size and alignment.
> > +///
> > +/// When the compiler supports aligned operator new, this will use it to
> > +/// handle even over-aligned allocations.
> > +///
> > +/// However, this doesn't make any attempt to leverage the fancier techniques
> > +/// like posix_memalign due to portability. It is mostly intended to allow
> > +/// compatibility with platforms that, after aligned allocation was added, use
> > +/// reduced default alignment.
> > +inline void *allocate_buffer(size_t Size, size_t Alignment) {
> > +  return ::operator new(Size
> > +#if __cpp_aligned_new
> > +                        ,
> > +                        std::align_val_t(Alignment)
> > +#endif
> > +  );
> > +}
> > +
> > +/// Deallocate a buffer of memory with the given size and alignment.
> > +///
> > +/// If supported, this will use the sized delete operator. Also if supported,
> > +/// this will pass the alignment to the delete operator.
> > +///
> > +/// The pointer must have been allocated with the corresponding new operator,
> > +/// most likely using the above helper.
> > +inline void deallocate_buffer(void *Ptr, size_t Size, size_t Alignment) {
> > +  ::operator delete(Ptr
> > +#if __cpp_sized_deallocation
> > +                    ,
> > +                    Size
> > +#endif
> > +#if __cpp_aligned_new
> > +                    ,
> > +                    std::align_val_t(Alignment)
> > +#endif
> > +  );
> > +}
> > +
> > +} // End namespace llvm
> > +
> > #endif
> >
> > Modified: llvm/trunk/include/llvm/Support/PointerLikeTypeTraits.h
> > URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/include/llvm/Support/PointerLikeTypeTraits.h?rev=336156&r1=336155&r2=336156&view=diff
> > ==============================================================================
> > --- llvm/trunk/include/llvm/Support/PointerLikeTypeTraits.h (original)
> > +++ llvm/trunk/include/llvm/Support/PointerLikeTypeTraits.h Mon Jul  2 16:57:29 2018
> > @@ -16,6 +16,7 @@
> > #define LLVM_SUPPORT_POINTERLIKETYPETRAITS_H
> >
> > #include "llvm/Support/DataTypes.h"
> > +#include <assert.h>
> > #include <type_traits>
> >
> > namespace llvm {
> > @@ -111,6 +112,39 @@ template <> struct PointerLikeTypeTraits
> >   enum { NumLowBitsAvailable = 0 };
> > };
> >
> > +/// Provide suitable custom traits struct for function pointers.
> > +///
> > +/// Function pointers can't be directly given these traits as functions can't
> > +/// have their alignment computed with `alignof` and we need different casting.
> > +///
> > +/// To rely on higher alignment for a specialized use, you can provide a
> > +/// customized form of this template explicitly with higher alignment, and
> > +/// potentially use alignment attributes on functions to satisfy that.
> > +template <int Alignment, typename FunctionPointerT>
> > +struct FunctionPointerLikeTypeTraits {
> > +  enum { NumLowBitsAvailable = detail::ConstantLog2<Alignment>::value };
> > +  static inline void *getAsVoidPointer(FunctionPointerT P) {
> > +    assert((reinterpret_cast<uintptr_t>(P) &
> > +            ~((uintptr_t)-1 << NumLowBitsAvailable)) == 0 &&
> > +           "Alignment not satisfied for an actual function pointer!");
> > +    return reinterpret_cast<void *>(P);
> > +  }
> > +  static inline FunctionPointerT getFromVoidPointer(void *P) {
> > +    return reinterpret_cast<FunctionPointerT>(P);
> > +  }
> > +};
> > +
> > +/// Provide a default specialization for function pointers that assumes 4-byte
> > +/// alignment.
> > +///
> > +/// We assume here that functions used with this are always at least 4-byte
> > +/// aligned. This means that, for example, thumb functions won't work or systems
> > +/// with weird unaligned function pointers won't work. But all practical systems
> > +/// we support satisfy this requirement.
> > +template <typename ReturnT, typename... ParamTs>
> > +struct PointerLikeTypeTraits<ReturnT (*)(ParamTs...)>
> > +    : FunctionPointerLikeTypeTraits<4, ReturnT (*)(ParamTs...)> {};
> > +
> > } // end namespace llvm
> >
> > #endif
> >
> > Modified: llvm/trunk/unittests/ADT/CMakeLists.txt
> > URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/unittests/ADT/CMakeLists.txt?rev=336156&r1=336155&r2=336156&view=diff
> > ==============================================================================
> > --- llvm/trunk/unittests/ADT/CMakeLists.txt (original)
> > +++ llvm/trunk/unittests/ADT/CMakeLists.txt Mon Jul  2 16:57:29 2018
> > @@ -18,6 +18,7 @@ add_llvm_unittest(ADTTests
> >   DepthFirstIteratorTest.cpp
> >   EquivalenceClassesTest.cpp
> >   FoldingSet.cpp
> > +  FunctionExtrasTest.cpp
> >   FunctionRefTest.cpp
> >   HashingTest.cpp
> >   IListBaseTest.cpp
> >
> > Added: llvm/trunk/unittests/ADT/FunctionExtrasTest.cpp
> > URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/unittests/ADT/FunctionExtrasTest.cpp?rev=336156&view=auto
> > ==============================================================================
> > --- llvm/trunk/unittests/ADT/FunctionExtrasTest.cpp (added)
> > +++ llvm/trunk/unittests/ADT/FunctionExtrasTest.cpp Mon Jul  2 16:57:29 2018
> > @@ -0,0 +1,228 @@
> > +//===- FunctionExtrasTest.cpp - Unit tests for function type erasure ------===//
> > +//
> > +//                     The LLVM Compiler Infrastructure
> > +//
> > +// This file is distributed under the University of Illinois Open Source
> > +// License. See LICENSE.TXT for details.
> > +//
> > +//===----------------------------------------------------------------------===//
> > +
> > +#include "llvm/ADT/FunctionExtras.h"
> > +#include "gtest/gtest.h"
> > +
> > +#include <memory>
> > +
> > +using namespace llvm;
> > +
> > +namespace {
> > +
> > +TEST(UniqueFunctionTest, Basic) {
> > +  unique_function<int(int, int)> Sum = [](int A, int B) { return A + B; };
> > +  EXPECT_EQ(Sum(1, 2), 3);
> > +
> > +  unique_function<int(int, int)> Sum2 = std::move(Sum);
> > +  EXPECT_EQ(Sum2(1, 2), 3);
> > +
> > +  unique_function<int(int, int)> Sum3 = [](int A, int B) { return A + B; };
> > +  Sum2 = std::move(Sum3);
> > +  EXPECT_EQ(Sum2(1, 2), 3);
> > +
> > +  Sum2 = unique_function<int(int, int)>([](int A, int B) { return A + B; });
> > +  EXPECT_EQ(Sum2(1, 2), 3);
> > +
> > +  // Explicit self-move test.
> > +  *&Sum2 = std::move(Sum2);
> > +  EXPECT_EQ(Sum2(1, 2), 3);
> > +
> > +  Sum2 = unique_function<int(int, int)>();
> > +  EXPECT_FALSE(Sum2);
> > +
> > +  // Make sure we can forward through l-value reference parameters.
> > +  unique_function<void(int &)> Inc = [](int &X) { ++X; };
> > +  int X = 42;
> > +  Inc(X);
> > +  EXPECT_EQ(X, 43);
> > +
> > +  // Make sure we can forward through r-value reference parameters with
> > +  // move-only types.
> > +  unique_function<int(std::unique_ptr<int> &&)> ReadAndDeallocByRef =
> > +      [](std::unique_ptr<int> &&Ptr) {
> > +        int V = *Ptr;
> > +        Ptr.reset();
> > +        return V;
> > +      };
> > +  std::unique_ptr<int> Ptr{new int(13)};
> > +  EXPECT_EQ(ReadAndDeallocByRef(std::move(Ptr)), 13);
> > +  EXPECT_FALSE((bool)Ptr);
> > +
> > +  // Make sure we can pass a move-only temporary as opposed to a local
> > +  // variable.
> > +  EXPECT_EQ(ReadAndDeallocByRef(std::unique_ptr<int>(new int(42))), 42);
> > +
> > +  // Make sure we can pass a move-only type by-value.
> > +  unique_function<int(std::unique_ptr<int>)> ReadAndDeallocByVal =
> > +      [](std::unique_ptr<int> Ptr) {
> > +        int V = *Ptr;
> > +        Ptr.reset();
> > +        return V;
> > +      };
> > +  Ptr.reset(new int(13));
> > +  EXPECT_EQ(ReadAndDeallocByVal(std::move(Ptr)), 13);
> > +  EXPECT_FALSE((bool)Ptr);
> > +
> > +  EXPECT_EQ(ReadAndDeallocByVal(std::unique_ptr<int>(new int(42))), 42);
> > +}
> > +
> > +TEST(UniqueFunctionTest, Captures) {
> > +  long A = 1, B = 2, C = 3, D = 4, E = 5;
> > +
> > +  unique_function<long()> Tmp;
> > +
> > +  unique_function<long()> C1 = [A]() { return A; };
> > +  EXPECT_EQ(C1(), 1);
> > +  Tmp = std::move(C1);
> > +  EXPECT_EQ(Tmp(), 1);
> > +
> > +  unique_function<long()> C2 = [A, B]() { return A + B; };
> > +  EXPECT_EQ(C2(), 3);
> > +  Tmp = std::move(C2);
> > +  EXPECT_EQ(Tmp(), 3);
> > +
> > +  unique_function<long()> C3 = [A, B, C]() { return A + B + C; };
> > +  EXPECT_EQ(C3(), 6);
> > +  Tmp = std::move(C3);
> > +  EXPECT_EQ(Tmp(), 6);
> > +
> > +  unique_function<long()> C4 = [A, B, C, D]() { return A + B + C + D; };
> > +  EXPECT_EQ(C4(), 10);
> > +  Tmp = std::move(C4);
> > +  EXPECT_EQ(Tmp(), 10);
> > +
> > +  unique_function<long()> C5 = [A, B, C, D, E]() { return A + B + C + D + E; };
> > +  EXPECT_EQ(C5(), 15);
> > +  Tmp = std::move(C5);
> > +  EXPECT_EQ(Tmp(), 15);
> > +}
> > +
> > +TEST(UniqueFunctionTest, MoveOnly) {
> > +  struct SmallCallable {
> > +    std::unique_ptr<int> A{new int(1)};
> > +
> > +    int operator()(int B) { return *A + B; }
> > +  };
> > +  unique_function<int(int)> Small = SmallCallable();
> > +  EXPECT_EQ(Small(2), 3);
> > +  unique_function<int(int)> Small2 = std::move(Small);
> > +  EXPECT_EQ(Small2(2), 3);
> > +
> > +  struct LargeCallable {
> > +    std::unique_ptr<int> A{new int(1)};
> > +    std::unique_ptr<int> B{new int(2)};
> > +    std::unique_ptr<int> C{new int(3)};
> > +    std::unique_ptr<int> D{new int(4)};
> > +    std::unique_ptr<int> E{new int(5)};
> > +
> > +    int operator()() { return *A + *B + *C + *D + *E; }
> > +  };
> > +  unique_function<int()> Large = LargeCallable();
> > +  EXPECT_EQ(Large(), 15);
> > +  unique_function<int()> Large2 = std::move(Large);
> > +  EXPECT_EQ(Large2(), 15);
> > +}
> > +
> > +TEST(UniqueFunctionTest, CountForwardingCopies) {
> > +  struct CopyCounter {
> > +    int &CopyCount;
> > +
> > +    CopyCounter(int &CopyCount) : CopyCount(CopyCount) {}
> > +    CopyCounter(const CopyCounter &Arg) : CopyCount(Arg.CopyCount) {
> > +      ++CopyCount;
> > +    }
> > +  };
> > +
> > +  unique_function<void(CopyCounter)> ByValF = [](CopyCounter) {};
> > +  int CopyCount = 0;
> > +  ByValF(CopyCounter(CopyCount));
> > +  EXPECT_EQ(1, CopyCount);
> > +
> > +  CopyCount = 0;
> > +  {
> > +    CopyCounter Counter{CopyCount};
> > +    ByValF(Counter);
> > +  }
> > +  EXPECT_EQ(2, CopyCount);
> > +
> > +  // Check that we don't generate a copy at all when we can bind a reference all
> > +  // the way down, even if that reference could *in theory* allow copies.
> > +  unique_function<void(const CopyCounter &)> ByRefF = [](const CopyCounter &) {
> > +  };
> > +  CopyCount = 0;
> > +  ByRefF(CopyCounter(CopyCount));
> > +  EXPECT_EQ(0, CopyCount);
> > +
> > +  CopyCount = 0;
> > +  {
> > +    CopyCounter Counter{CopyCount};
> > +    ByRefF(Counter);
> > +  }
> > +  EXPECT_EQ(0, CopyCount);
> > +
> > +  // If we use a reference, we can make a stronger guarantee that *no* copy
> > +  // occurs.
> > +  struct Uncopyable {
> > +    Uncopyable() = default;
> > +    Uncopyable(const Uncopyable &) = delete;
> > +  };
> > +  unique_function<void(const Uncopyable &)> UncopyableF =
> > +      [](const Uncopyable &) {};
> > +  UncopyableF(Uncopyable());
> > +  Uncopyable X;
> > +  UncopyableF(X);
> > +}
> > +
> > +TEST(UniqueFunctionTest, CountForwardingMoves) {
> > +  struct MoveCounter {
> > +    int &MoveCount;
> > +
> > +    MoveCounter(int &MoveCount) : MoveCount(MoveCount) {}
> > +    MoveCounter(MoveCounter &&Arg) : MoveCount(Arg.MoveCount) { ++MoveCount; }
> > +  };
> > +
> > +  unique_function<void(MoveCounter)> ByValF = [](MoveCounter) {};
> > +  int MoveCount = 0;
> > +  ByValF(MoveCounter(MoveCount));
> > +  EXPECT_EQ(1, MoveCount);
> > +
> > +  MoveCount = 0;
> > +  {
> > +    MoveCounter Counter{MoveCount};
> > +    ByValF(std::move(Counter));
> > +  }
> > +  EXPECT_EQ(2, MoveCount);
> > +
> > +  // Check that when we use an r-value reference we get no spurious copies.
> > +  unique_function<void(MoveCounter &&)> ByRefF = [](MoveCounter &&) {};
> > +  MoveCount = 0;
> > +  ByRefF(MoveCounter(MoveCount));
> > +  EXPECT_EQ(0, MoveCount);
> > +
> > +  MoveCount = 0;
> > +  {
> > +    MoveCounter Counter{MoveCount};
> > +    ByRefF(std::move(Counter));
> > +  }
> > +  EXPECT_EQ(0, MoveCount);
> > +
> > +  // If we use an r-value reference we can in fact make a stronger guarantee
> > +  // with an unmovable type.
> > +  struct Unmovable {
> > +    Unmovable() = default;
> > +    Unmovable(Unmovable &&) = delete;
> > +  };
> > +  unique_function<void(const Unmovable &)> UnmovableF = [](const Unmovable &) {
> > +  };
> > +  UnmovableF(Unmovable());
> > +  Unmovable X;
> > +  UnmovableF(X);
> > +}
> > +
> > +} // anonymous namespace
> >
> >
> > _______________________________________________
> > llvm-commits mailing list
> > llvm-commits at lists.llvm.org
> > http://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-commits
>
>