[llvm-commits] [PATCH] [Review Request] Memory Support changes to enable setting page protection flags

Kaylor, Andrew andrew.kaylor at intel.com
Tue Sep 11 12:21:35 PDT 2012


Hi Jim,

The reason I preferred having explicit enum members for the composed values is that it lets me use the enum as a parameter type and avoid possible invocation errors (for example, passing the POSIX flags rather than these).  If you feel strongly that it's better to have a clean enum, I'm OK with that, but for now I've prepared a new patch implementing your other comments as well as the revised error handling that Michael Spencer suggested (though only for the new functions being added).
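
To make the failure mode concrete, here's a rough sketch of the kind of mistake the enum parameter type catches at compile time (hypothetical call site, assuming the usual llvm::sys usings):

    std::string Err;
    // With ProtectionFlags as the parameter type, passing raw POSIX flags
    // is a compile error rather than a silent mis-protection:
    MemoryBlock Bad = Memory::allocateMappedMemory(Size, 0,
        PROT_READ | PROT_WRITE, &Err);   // error: int is not ProtectionFlags
    MemoryBlock Good = Memory::allocateMappedMemory(Size, 0,
        Memory::MF_READWRITE, &Err);     // OK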

Let me know what you think.  We've got another patch in the works which uses these functions.

Thanks,
Andy

From: Jim Grosbach [mailto:grosbach at apple.com]
Sent: Tuesday, September 04, 2012 6:15 PM
To: Kaylor, Andrew
Cc: Commit Messages and Patches for LLVM
Subject: Re: [llvm-commits] [PATCH] [Review Request] Memory Support changes to enable setting page protection flags

This looks like a nice cleanup to the interfaces. I like it!

A few, entirely minor, comments inline.


Index: include/llvm/Support/Memory.h
===================================================================
--- include/llvm/Support/Memory.h    (revision 161944)
+++ include/llvm/Support/Memory.h (working copy)
@@ -43,6 +43,68 @@
   /// @brief An abstraction for memory operations.
   class Memory {
   public:
+    enum ProtectionFlags {
+      MF_READ = 1,
+      MF_WRITE = 2,
+      MF_READWRITE = (MF_READ | MF_WRITE),
+      MF_EXEC = 4,
+      MF_READEXEC = (MF_READ | MF_EXEC),
+      MF_READWRITEEXEC = (MF_READ | MF_WRITE | MF_EXEC)

I'd prefer not to have explicit members of the enum for composed values and just mask them together manually at invocation points. I.e., have only MF_READ, MF_WRITE, and MF_EXEC.
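
E.g., a sketch of what a call site would look like with only the base flags (composing them yields an int, so the call needs a cast back to the enum):

    Memory::allocateMappedMemory(NumBytes, 0,
        static_cast<Memory::ProtectionFlags>(Memory::MF_READ | Memory::MF_WRITE),
        &ErrMsg);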


+    };
+
+    /// This method allocates a block of memory that is suitable for loading
+    /// dynamically generated code (e.g. JIT). An attempt to allocate
+    /// \p NumBytes bytes of virtual memory is made.
+    /// \p NearBlock may point to an existing allocation in which case
+    /// an attempt is made to allocate more memory near the existing block.
+    /// The actual allocated address is not guaranteed to be near the requested
+    /// address.
+    /// \p Flags is used to set the initial protection flags for the block
+    /// of memory.

Need a \p[out] doc entry for ErrMsg. Ditto on other interfaces.
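
Something like this, say (a sketch of the requested entry, not patch text):

    /// \param[out] ErrMsg If non-null and the allocation fails, filled in
    /// with a description of the error.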


+    ///
+    /// This method may allocate more than the number of bytes requested.  The
+    /// actual number of bytes allocated is indicated in the returned
+    /// MemoryBlock.
+    ///
+    /// The start of the allocated block must be aligned with the
+    /// system allocation granularity (64K on Windows, page size on Linux).
+    /// If the address following \p NearBlock is not so aligned, it will be
+    /// rounded up to the next allocation granularity boundary.
+    ///
+    /// On success, this returns a non-null memory block, otherwise it returns
+    /// a null memory block and fills in *ErrMsg.

Can use a Doxygen @returns here (or the equivalent '\returns' form). Ditto on other interfaces.
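
E.g., roughly (again a sketch, not patch text):

    /// \returns The allocated memory block on success; a null MemoryBlock on
    /// failure, with *ErrMsg filled in if ErrMsg is non-null.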


+    ///
+    /// @brief Allocate mapped memory.
+    static MemoryBlock allocateMappedMemory(size_t NumBytes,
+                                            const MemoryBlock *const NearBlock,
+                                            ProtectionFlags Flags = MF_READWRITE,
+                                            std::string *ErrMsg = 0);
+
+    /// This method releases a block of memory that was allocated with the
+    /// allocateMappedMemory method. It should not be used to release any
+    /// memory block allocated any other way.
+    ///
+    /// On success, this returns false, otherwise it returns true and fills
+    /// in *ErrMsg.
+    /// @brief Release mapped memory.
+    static bool releaseMappedMemory(MemoryBlock &block, std::string *ErrMsg = 0);
+
+    /// This method sets the protection flags for a block of memory to the
+    /// state specified by \p Flags.  The behavior is not specified if the
+    /// memory was not allocated using the allocateMappedMemory method.
+    ///
+    /// If \p Flags is MF_WRITE, the actual behavior varies with the
+    /// operating system (e.g. MF_WRITE is treated as MF_READWRITE on
+    /// Windows) and the target architecture (e.g. MF_WRITE implies
+    /// MF_READWRITE on i386).
+    ///
+    /// On success, this returns false, otherwise it returns true and fills
+    /// in *ErrMsg.
+    ///
+    /// @brief Set memory protection state.
+    static bool protectMappedMemory(const MemoryBlock &M,
+                                    ProtectionFlags Flags,
+                                    std::string *ErrMsg = 0);
+
     /// This method allocates a block of Read/Write/Execute memory that is
     /// suitable for executing dynamically generated code (e.g. JIT). An
     /// attempt to allocate \p NumBytes bytes of virtual memory is made.
Index: lib/Support/Memory.cpp
===================================================================
--- lib/Support/Memory.cpp    (revision 161944)
+++ lib/Support/Memory.cpp (working copy)
@@ -34,6 +34,8 @@

 extern "C" void sys_icache_invalidate(const void *Addr, size_t len);

+#ifndef LLVM_ON_WIN32
+

Not sure I follow here. If this is non-Windows only, it should be moved out of the general Memory.cpp and into the appropriate target-specific .inc file.
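
That is, the generic file should keep only shared code and pull in the platform implementation the way the rest of Support does, roughly (a sketch of the existing convention):

    // At the bottom of lib/Support/Memory.cpp:
    #if defined(LLVM_ON_UNIX)
    #include "Unix/Memory.inc"
    #endif
    #if defined(LLVM_ON_WIN32)
    #include "Windows/Memory.inc"
    #endif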


 /// InvalidateInstructionCache - Before the JIT can run a block of code
 /// that has been emitted it must invalidate the instruction cache on some
 /// platforms.
@@ -79,3 +81,6 @@

   ValgrindDiscardTranslations(Addr, Len);
 }
+
+#endif // end !LLVM_ON_WIN32
+
Index: lib/Support/Unix/Memory.inc
===================================================================
--- lib/Support/Unix/Memory.inc        (revision 161944)
+++ lib/Support/Unix/Memory.inc     (working copy)
@@ -23,6 +23,122 @@
 #include <mach/mach.h>
 #endif

+namespace {
+
+int getPosixProtectionFlags(llvm::sys::Memory::ProtectionFlags Flags) {
+  switch (Flags) {
+  case llvm::sys::Memory::MF_READ:
+    return PROT_READ;
+  case llvm::sys::Memory::MF_WRITE:
+    return PROT_WRITE;
+  case llvm::sys::Memory::MF_READWRITE:
+    return PROT_READ | PROT_WRITE;
+  case llvm::sys::Memory::MF_READEXEC:
+    return PROT_READ | PROT_EXEC;
+  case llvm::sys::Memory::MF_READWRITEEXEC:
+    return PROT_READ | PROT_WRITE | PROT_EXEC;
+  case llvm::sys::Memory::MF_EXEC:
+    return PROT_EXEC;
+  }
+  // Default in case values are added to the enum, as required by some compilers

Comment should be a full sentence and end in a period.

+  return PROT_NONE;
+}
+
+} // namespace
+
+llvm::sys::MemoryBlock
+llvm::sys::Memory::allocateMappedMemory(size_t NumBytes,
+                                         const MemoryBlock *const NearBlock,
+                                         ProtectionFlags PFlags,
+                                         std::string *ErrMsg) {
+  if (NumBytes == 0)
+    return sys::MemoryBlock();
+
+  static const size_t PageSize = Process::GetPageSize();
+  const size_t NumPages = (NumBytes+PageSize-1)/PageSize;
+
+  int fd = -1;
+#ifdef NEED_DEV_ZERO_FOR_MMAP
+  static int zero_fd = open("/dev/zero", O_RDWR);
+  if (zero_fd == -1) {
+    MakeErrMsg(ErrMsg, "Can't open /dev/zero device");
+    return sys::MemoryBlock();
+  }
+  fd = zero_fd;
+#endif
+
+  int MMFlags = MAP_PRIVATE |
+#ifdef HAVE_MMAP_ANONYMOUS
+  MAP_ANONYMOUS
+#else
+  MAP_ANON
+#endif
+  ; // Ends statement above
+
+  int Protect = getPosixProtectionFlags(PFlags);
+
+  // Use any near hint and the page size to set a page-aligned starting address
+  uintptr_t Start = NearBlock ? reinterpret_cast<uintptr_t>(NearBlock->base()) +
+                                      NearBlock->size() : 0;
+  if (Start && Start % PageSize)
+    Start += PageSize - Start % PageSize;
+
+  void *Addr = ::mmap(reinterpret_cast<void*>(Start), PageSize*NumPages,
+                      Protect, MMFlags, fd, 0);
+  if (Addr == MAP_FAILED) {
+    if (NearBlock) // Try again without a near hint
+      return allocateMappedMemory(NumBytes, 0, PFlags, ErrMsg);
+
+    MakeErrMsg(ErrMsg, "Can't allocate mapped memory");
+    return sys::MemoryBlock();
+  }
+
+  sys::MemoryBlock Result;
+  Result.Address = Addr;
+  Result.Size = NumPages*PageSize;
+
+  if (PFlags & MF_EXEC)
+    sys::Memory::InvalidateInstructionCache(Result.Address, Result.Size);
+
+  return Result;
+}
+
+bool
+llvm::sys::Memory::releaseMappedMemory(MemoryBlock &M, std::string *ErrMsg) {
+  if (M.Address == 0 || M.Size == 0)
+    return false;
+
+  if (0 != ::munmap(M.Address, M.Size))
+    return MakeErrMsg(ErrMsg, "Can't release mapped memory");
+
+  M.Address = 0;
+  M.Size = 0;
+
+  return false;
+}
+
+bool
+llvm::sys::Memory::protectMappedMemory(const MemoryBlock &M,
+                                       ProtectionFlags Flags,
+                                       std::string *ErrMsg) {
+  if (M.Address == 0 || M.Size == 0)
+    return false;
+
+  if (!Flags)
+    return MakeErrMsg(ErrMsg, "Unsupported protection state requested.");
+
+  int Protect = getPosixProtectionFlags(Flags);
+
+  int Result = ::mprotect(M.Address, M.Size, Protect);
+  if (Result != 0)
+    return MakeErrMsg(ErrMsg, "Unable to set memory protection.");
+
+  if (Flags & MF_EXEC)
+    sys::Memory::InvalidateInstructionCache(M.Address, M.Size);
+
+  return false;
+}
+
 /// AllocateRWX - Allocate a slab of memory with read/write/execute
 /// permissions.  This is typically used for JIT applications where we want
 /// to emit code to the memory then jump to it.  Getting this type of memory
Index: lib/Support/Windows/Memory.inc
===================================================================
--- lib/Support/Windows/Memory.inc (revision 161944)
+++ lib/Support/Windows/Memory.inc          (working copy)
@@ -1,4 +1,4 @@
-//===- Win32/Memory.cpp - Win32 Memory Implementation -----------*- C++ -*-===//
+//=- Win32/Memory.cpp - Win32 Memory Implementation -----------*- C++ -*-===//

Inadvertent change, maybe? This makes the header comment formatting different from neighboring files and no longer 80 columns wide.


 //
 //                     The LLVM Compiler Infrastructure
 //
@@ -16,49 +16,143 @@
 #include "llvm/Support/DataTypes.h"
 #include "llvm/Support/Process.h"

-namespace llvm {
-using namespace sys;

Nice. Glad to see this.


+namespace {

+DWORD getWindowsProtectionFlags(llvm::sys::Memory::ProtectionFlags Flags) {
+  switch (Flags) {
+  // Contrary to what you might expect, the Windows page protection flags
+  // are not a bitwise combination of RWX values
+  case llvm::sys::Memory::MF_READ:
+    return PAGE_READONLY;
+  case llvm::sys::Memory::MF_WRITE:
+    // Note: Windows has no write-only page protection (no PAGE_WRITE)
+    return PAGE_READWRITE;
+  case llvm::sys::Memory::MF_READWRITE:
+    return PAGE_READWRITE;
+  case llvm::sys::Memory::MF_READEXEC:
+    return PAGE_EXECUTE_READ;
+  case llvm::sys::Memory::MF_READWRITEEXEC:
+    return PAGE_EXECUTE_READWRITE;
+  case llvm::sys::Memory::MF_EXEC:
+    return PAGE_EXECUTE;
+  }
+  // Default in case values are added to the enum, as required by some compilers
+  return PAGE_NOACCESS;
+}
+
+size_t getAllocationGranularity() {
+  SYSTEM_INFO  Info;
+  ::GetSystemInfo(&Info);
+  if (Info.dwPageSize > Info.dwAllocationGranularity)
+    return Info.dwPageSize;
+  else
+    return Info.dwAllocationGranularity;
+}
+
+} // namespace
+
 //===----------------------------------------------------------------------===//
 //=== WARNING: Implementation here must contain only Win32 specific code
 //===          and must not be UNIX code
 //===----------------------------------------------------------------------===//

-MemoryBlock Memory::AllocateRWX(size_t NumBytes,
-                                const MemoryBlock *NearBlock,
-                                std::string *ErrMsg) {
-  if (NumBytes == 0) return MemoryBlock();
+llvm::sys::MemoryBlock llvm::sys::Memory::allocateMappedMemory(size_t NumBytes,
+                                         const MemoryBlock *const NearBlock,
+                                         ProtectionFlags Flags,
+                                         std::string *ErrMsg) {
+  if (NumBytes == 0)
+    return sys::MemoryBlock();

-  static const size_t pageSize = Process::GetPageSize();
-  size_t NumPages = (NumBytes+pageSize-1)/pageSize;
+  // While we'd be happy to allocate single pages, the Windows allocation
+  // granularity may be larger than a single page (in practice, it is 64K)
+  // so mapping less than that will create an unreachable fragment of memory.
+  static const size_t Granularity = getAllocationGranularity();
+  const size_t NumBlocks = (NumBytes+Granularity-1)/Granularity;

-  PVOID start = NearBlock ? static_cast<unsigned char *>(NearBlock->base()) +
-                                NearBlock->size() : NULL;
+  uintptr_t Start = NearBlock ? reinterpret_cast<uintptr_t>(NearBlock->base()) +
+                                NearBlock->size()
+                              : 0;

-  void *pa = VirtualAlloc(start, NumPages*pageSize, MEM_RESERVE | MEM_COMMIT,
-                  PAGE_EXECUTE_READWRITE);
-  if (pa == NULL) {
+  // If the requested address is not aligned to the allocation granularity,
+  // round up to get beyond NearBlock. VirtualAlloc would have rounded down.
+  if (Start && Start % Granularity != 0)
+    Start += Granularity - Start % Granularity;
+
+  DWORD Protect = getWindowsProtectionFlags(Flags);
+
+  void *PA = ::VirtualAlloc(reinterpret_cast<void*>(Start),
+                            NumBlocks*Granularity,
+                            MEM_RESERVE | MEM_COMMIT, Protect);
+  if (PA == NULL) {
     if (NearBlock) {
       // Try again without the NearBlock hint
-      return AllocateRWX(NumBytes, NULL, ErrMsg);
+      return allocateMappedMemory(NumBytes, NULL, Flags, ErrMsg);
     }
-    MakeErrMsg(ErrMsg, "Can't allocate RWX Memory: ");
-    return MemoryBlock();
+    MakeErrMsg(ErrMsg, "Can't allocate mapped memory: ");
+    return sys::MemoryBlock();
   }

-  MemoryBlock result;
-  result.Address = pa;
-  result.Size = NumPages*pageSize;
-  return result;
+  sys::MemoryBlock Result;
+  Result.Address = PA;
+  Result.Size = NumBlocks*Granularity;
+  if (Flags & MF_EXEC)
+    sys::Memory::InvalidateInstructionCache(Result.Address, Result.Size);
+
+  return Result;
 }

-bool Memory::ReleaseRWX(MemoryBlock &M, std::string *ErrMsg) {
-  if (M.Address == 0 || M.Size == 0) return false;
+bool llvm::sys::Memory::releaseMappedMemory(MemoryBlock &M,
+                                 std::string *ErrMsg) {
+  if (M.Address == 0 || M.Size == 0)
+    return false;
+
   if (!VirtualFree(M.Address, 0, MEM_RELEASE))
-    return MakeErrMsg(ErrMsg, "Can't release RWX Memory: ");
+    return MakeErrMsg(ErrMsg, "Unable to release mapped memory: ");
+
+  M.Address = 0;
+  M.Size = 0;
+
   return false;
 }

+bool llvm::sys::Memory::protectMappedMemory(const MemoryBlock &M,
+                                 ProtectionFlags Flags,
+                                 std::string *ErrMsg) {
+  if (M.Address == 0 || M.Size == 0)
+    return false;
+
+  DWORD Protect = getWindowsProtectionFlags(Flags);
+
+  DWORD OldFlags;
+  if (!VirtualProtect(M.Address, M.Size, Protect, &OldFlags))
+    return MakeErrMsg(ErrMsg, "Unable to set memory protection: ");
+
+  if (Flags & MF_EXEC)
+    sys::Memory::InvalidateInstructionCache(M.Address, M.Size);
+
+  return false;
+}
+
+/// InvalidateInstructionCache - Before the JIT can run a block of code
+/// that has been emitted it must invalidate the instruction cache on some
+/// platforms.
+void llvm::sys::Memory::InvalidateInstructionCache(
+    const void *Addr, size_t Len) {
+  FlushInstructionCache(GetCurrentProcess(), Addr, Len);
+}
+
+llvm::sys::MemoryBlock llvm::sys::Memory::AllocateRWX(size_t NumBytes,
+                                const MemoryBlock *NearBlock,
+                                std::string *ErrMsg) {
+  return allocateMappedMemory(NumBytes, NearBlock,
+                              MF_READWRITEEXEC, ErrMsg);
+}
+
+bool llvm::sys::Memory::ReleaseRWX(MemoryBlock &M, std::string *ErrMsg) {
+  return releaseMappedMemory(M, ErrMsg);
+}
+
 static DWORD getProtection(const void *addr) {
   MEMORY_BASIC_INFORMATION info;
   if (sizeof(info) == ::VirtualQuery(addr, &info, sizeof(info))) {
@@ -67,21 +161,21 @@
   return 0;
 }

-bool Memory::setWritable(MemoryBlock &M, std::string *ErrMsg) {
+bool llvm::sys::Memory::setWritable(MemoryBlock &M, std::string *ErrMsg) {
   if (!setRangeWritable(M.Address, M.Size)) {
     return MakeErrMsg(ErrMsg, "Cannot set memory to writeable: ");
   }
   return true;
 }

-bool Memory::setExecutable(MemoryBlock &M, std::string *ErrMsg) {
+bool llvm::sys::Memory::setExecutable(MemoryBlock &M, std::string *ErrMsg) {
   if (!setRangeExecutable(M.Address, M.Size)) {
     return MakeErrMsg(ErrMsg, "Cannot set memory to executable: ");
   }
   return true;
 }

-bool Memory::setRangeWritable(const void *Addr, size_t Size) {
+bool llvm::sys::Memory::setRangeWritable(const void *Addr, size_t Size) {
   DWORD prot = getProtection(Addr);
   if (!prot)
     return false;
@@ -98,7 +192,7 @@
             == TRUE;
 }

-bool Memory::setRangeExecutable(const void *Addr, size_t Size) {
+bool llvm::sys::Memory::setRangeExecutable(const void *Addr, size_t Size) {
   DWORD prot = getProtection(Addr);
   if (!prot)
     return false;
@@ -116,5 +210,3 @@
   return ::VirtualProtect(const_cast<LPVOID>(Addr), Size, prot, &oldProt)
             == TRUE;
 }
-
-}
Index: unittests/Support/CMakeLists.txt
===================================================================
--- unittests/Support/CMakeLists.txt  (revision 161944)
+++ unittests/Support/CMakeLists.txt           (working copy)
@@ -17,6 +17,7 @@
   LeakDetectorTest.cpp
   ManagedStatic.cpp
   MathExtrasTest.cpp
+  MemoryTest.cpp
   Path.cpp
   raw_ostream_test.cpp
   RegexTest.cpp
Index: unittests/Support/MemoryTest.cpp
===================================================================
--- unittests/Support/MemoryTest.cpp            (revision 0)
+++ unittests/Support/MemoryTest.cpp         (revision 0)
@@ -0,0 +1,319 @@
+//===- llvm/unittest/Support/MemoryTest.cpp - Mapped memory tests ---------===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+
+#include "llvm/Support/Memory.h"
+#include "llvm/Support/Process.h"
+
+#include "gtest/gtest.h"
+#include <cstdlib>
+
+using namespace llvm;
+using namespace sys;
+
+namespace {
+
+class MappedMemoryTest : public ::testing::TestWithParam<Memory::ProtectionFlags> {
+public:
+  MappedMemoryTest() {
+    Flags = GetParam();
+    PageSize = sys::Process::GetPageSize();
+  }
+
+protected:
+  // Adds RW flags to permit testing of the resulting memory
+  Memory::ProtectionFlags getTestableEquivalent(Memory::ProtectionFlags RequestedFlags) {
+    switch (RequestedFlags) {
+    case Memory::MF_READ:
+    case Memory::MF_WRITE:
+    case Memory::MF_READWRITE:
+      return Memory::MF_READWRITE;
+    case Memory::MF_READEXEC:
+    case Memory::MF_READWRITEEXEC:
+    case Memory::MF_EXEC:
+      return Memory::MF_READWRITEEXEC;
+    }
+    // Default in case values are added to the enum, as required by some compilers
+    return Memory::MF_READWRITE;
+  }
+
+  // Returns true if the memory blocks overlap
+  bool doesOverlap(MemoryBlock M1, MemoryBlock M2) {
+    if (M1.base() == M2.base())
+      return true;
+
+    if (M1.base() > M2.base())
+      return (unsigned char *)M2.base() + M2.size() > M1.base();
+
+    return (unsigned char *)M1.base() + M1.size() > M2.base();
+  }
+
+  Memory::ProtectionFlags Flags;
+  size_t                  PageSize;
+};
+
+TEST_P(MappedMemoryTest, AllocAndRelease) {
+  MemoryBlock M1 = Memory::allocateMappedMemory(sizeof(int), 0, Flags);
+
+  EXPECT_NE((void*)0, M1.base());
+  EXPECT_LE(sizeof(int), M1.size());
+
+  EXPECT_FALSE(Memory::releaseMappedMemory(M1));
+}
+
+TEST_P(MappedMemoryTest, MultipleAllocAndRelease) {
+  MemoryBlock M1 = Memory::allocateMappedMemory(16, 0, Flags);
+  MemoryBlock M2 = Memory::allocateMappedMemory(64, 0, Flags);
+  MemoryBlock M3 = Memory::allocateMappedMemory(32, 0, Flags);
+
+  EXPECT_NE((void*)0, M1.base());
+  EXPECT_LE(16U, M1.size());
+  EXPECT_NE((void*)0, M2.base());
+  EXPECT_LE(64U, M2.size());
+  EXPECT_NE((void*)0, M3.base());
+  EXPECT_LE(32U, M3.size());
+
+  EXPECT_FALSE(doesOverlap(M1, M2));
+  EXPECT_FALSE(doesOverlap(M2, M3));
+  EXPECT_FALSE(doesOverlap(M1, M3));
+
+  EXPECT_FALSE(Memory::releaseMappedMemory(M1));
+  EXPECT_FALSE(Memory::releaseMappedMemory(M3));
+  MemoryBlock M4 = Memory::allocateMappedMemory(16, 0, Flags);
+  EXPECT_NE((void*)0, M4.base());
+  EXPECT_LE(16U, M4.size());
+  EXPECT_FALSE(Memory::releaseMappedMemory(M4));
+  EXPECT_FALSE(Memory::releaseMappedMemory(M2));
+}
+
+TEST_P(MappedMemoryTest, BasicWrite) {
+  // This test applies only to writeable combinations
+  if (Flags && !(Flags & Memory::MF_WRITE))
+    return;
+
+  MemoryBlock M1 = Memory::allocateMappedMemory(sizeof(int), 0, Flags);
+
+  EXPECT_NE((void*)0, M1.base());
+  EXPECT_LE(sizeof(int), M1.size());
+
+  int *a = (int*)M1.base();
+  *a = 1;
+  EXPECT_EQ(1, *a);
+
+  EXPECT_FALSE(Memory::releaseMappedMemory(M1));
+}
+
+TEST_P(MappedMemoryTest, MultipleWrite) {
+  // This test applies only to writeable combinations
+  if (Flags && !(Flags & Memory::MF_WRITE))
+    return;
+  MemoryBlock M1 = Memory::allocateMappedMemory(sizeof(int), 0, Flags);
+  MemoryBlock M2 = Memory::allocateMappedMemory(8 * sizeof(int), 0, Flags);
+  MemoryBlock M3 = Memory::allocateMappedMemory(4 * sizeof(int), 0, Flags);
+
+  EXPECT_FALSE(doesOverlap(M1, M2));
+  EXPECT_FALSE(doesOverlap(M2, M3));
+  EXPECT_FALSE(doesOverlap(M1, M3));
+
+  EXPECT_NE((void*)0, M1.base());
+  EXPECT_LE(1U * sizeof(int), M1.size());
+  EXPECT_NE((void*)0, M2.base());
+  EXPECT_LE(8U * sizeof(int), M2.size());
+  EXPECT_NE((void*)0, M3.base());
+  EXPECT_LE(4U * sizeof(int), M3.size());
+
+  int *x = (int*)M1.base();
+  *x = 1;
+
+  int *y = (int*)M2.base();
+  for (int i = 0; i < 8; i++) {
+    y[i] = i;
+  }
+
+  int *z = (int*)M3.base();
+  *z = 42;
+
+  EXPECT_EQ(1, *x);
+  EXPECT_EQ(7, y[7]);
+  EXPECT_EQ(42, *z);
+
+  EXPECT_FALSE(Memory::releaseMappedMemory(M1));
+  EXPECT_FALSE(Memory::releaseMappedMemory(M3));
+
+  MemoryBlock M4 = Memory::allocateMappedMemory(64 * sizeof(int), 0, Flags);
+  EXPECT_NE((void*)0, M4.base());
+  EXPECT_LE(64U * sizeof(int), M4.size());
+  x = (int*)M4.base();
+  *x = 4;
+  EXPECT_EQ(4, *x);
+  EXPECT_FALSE(Memory::releaseMappedMemory(M4));
+
+  // Verify that M2 remains unaffected by other activity
+  for (int i = 0; i < 8; i++) {
+    EXPECT_EQ(i, y[i]);
+  }
+  EXPECT_FALSE(Memory::releaseMappedMemory(M2));
+}
+
+TEST_P(MappedMemoryTest, EnabledWrite) {
+  MemoryBlock M1 = Memory::allocateMappedMemory(2 * sizeof(int), 0, Flags);
+  MemoryBlock M2 = Memory::allocateMappedMemory(8 * sizeof(int), 0, Flags);
+  MemoryBlock M3 = Memory::allocateMappedMemory(4 * sizeof(int), 0, Flags);
+
+  EXPECT_NE((void*)0, M1.base());
+  EXPECT_LE(2U * sizeof(int), M1.size());
+  EXPECT_NE((void*)0, M2.base());
+  EXPECT_LE(8U * sizeof(int), M2.size());
+  EXPECT_NE((void*)0, M3.base());
+  EXPECT_LE(4U * sizeof(int), M3.size());
+
+  EXPECT_FALSE(Memory::protectMappedMemory(M1, getTestableEquivalent(Flags)));
+  EXPECT_FALSE(Memory::protectMappedMemory(M2, getTestableEquivalent(Flags)));
+  EXPECT_FALSE(Memory::protectMappedMemory(M3, getTestableEquivalent(Flags)));
+
+  EXPECT_FALSE(doesOverlap(M1, M2));
+  EXPECT_FALSE(doesOverlap(M2, M3));
+  EXPECT_FALSE(doesOverlap(M1, M3));
+
+  int *x = (int*)M1.base();
+  *x = 1;
+  int *y = (int*)M2.base();
+  for (unsigned int i = 0; i < 8; i++) {
+    y[i] = i;
+  }
+  int *z = (int*)M3.base();
+  *z = 42;
+
+  EXPECT_EQ(1, *x);
+  EXPECT_EQ(7, y[7]);
+  EXPECT_EQ(42, *z);
+
+  EXPECT_FALSE(Memory::releaseMappedMemory(M1));
+  EXPECT_FALSE(Memory::releaseMappedMemory(M3));
+  EXPECT_EQ(6, y[6]);
+
+  MemoryBlock M4 = Memory::allocateMappedMemory(16, 0, Flags);
+  EXPECT_NE((void*)0, M4.base());
+  EXPECT_LE(16U, M4.size());
+  EXPECT_FALSE(Memory::protectMappedMemory(M4, getTestableEquivalent(Flags)));
+  x = (int*)M4.base();
+  *x = 4;
+  EXPECT_EQ(4, *x);
+  EXPECT_FALSE(Memory::releaseMappedMemory(M4));
+  EXPECT_FALSE(Memory::releaseMappedMemory(M2));
+}
+
+TEST_P(MappedMemoryTest, SuccessiveNear) {
+  MemoryBlock M1 = Memory::allocateMappedMemory(16, 0, Flags);
+  MemoryBlock M2 = Memory::allocateMappedMemory(64, &M1, Flags);
+  MemoryBlock M3 = Memory::allocateMappedMemory(32, &M2, Flags);
+
+  EXPECT_NE((void*)0, M1.base());
+  EXPECT_LE(16U, M1.size());
+  EXPECT_NE((void*)0, M2.base());
+  EXPECT_LE(64U, M2.size());
+  EXPECT_NE((void*)0, M3.base());
+  EXPECT_LE(32U, M3.size());
+
+  EXPECT_FALSE(doesOverlap(M1, M2));
+  EXPECT_FALSE(doesOverlap(M2, M3));
+  EXPECT_FALSE(doesOverlap(M1, M3));
+
+  EXPECT_FALSE(Memory::releaseMappedMemory(M1));
+  EXPECT_FALSE(Memory::releaseMappedMemory(M3));
+  EXPECT_FALSE(Memory::releaseMappedMemory(M2));
+}
+
+TEST_P(MappedMemoryTest, DuplicateNear) {
+  MemoryBlock Near((void*)(3*PageSize), 16);
+  MemoryBlock M1 = Memory::allocateMappedMemory(16, &Near, Flags);
+  MemoryBlock M2 = Memory::allocateMappedMemory(64, &Near, Flags);
+  MemoryBlock M3 = Memory::allocateMappedMemory(32, &Near, Flags);
+
+  EXPECT_NE((void*)0, M1.base());
+  EXPECT_LE(16U, M1.size());
+  EXPECT_NE((void*)0, M2.base());
+  EXPECT_LE(64U, M2.size());
+  EXPECT_NE((void*)0, M3.base());
+  EXPECT_LE(32U, M3.size());
+
+  EXPECT_FALSE(Memory::releaseMappedMemory(M1));
+  EXPECT_FALSE(Memory::releaseMappedMemory(M3));
+  EXPECT_FALSE(Memory::releaseMappedMemory(M2));
+}
+
+TEST_P(MappedMemoryTest, ZeroNear) {
+  MemoryBlock Near(0, 0);
+  MemoryBlock M1 = Memory::allocateMappedMemory(16, &Near, Flags);
+  MemoryBlock M2 = Memory::allocateMappedMemory(64, &Near, Flags);
+  MemoryBlock M3 = Memory::allocateMappedMemory(32, &Near, Flags);
+
+  EXPECT_NE((void*)0, M1.base());
+  EXPECT_LE(16U, M1.size());
+  EXPECT_NE((void*)0, M2.base());
+  EXPECT_LE(64U, M2.size());
+  EXPECT_NE((void*)0, M3.base());
+  EXPECT_LE(32U, M3.size());
+
+  EXPECT_FALSE(doesOverlap(M1, M2));
+  EXPECT_FALSE(doesOverlap(M2, M3));
+  EXPECT_FALSE(doesOverlap(M1, M3));
+
+  EXPECT_FALSE(Memory::releaseMappedMemory(M1));
+  EXPECT_FALSE(Memory::releaseMappedMemory(M3));
+  EXPECT_FALSE(Memory::releaseMappedMemory(M2));
+}
+
+TEST_P(MappedMemoryTest, ZeroSizeNear) {
+  MemoryBlock Near((void*)(4*PageSize), 0);
+  MemoryBlock M1 = Memory::allocateMappedMemory(16, &Near, Flags);
+  MemoryBlock M2 = Memory::allocateMappedMemory(64, &Near, Flags);
+  MemoryBlock M3 = Memory::allocateMappedMemory(32, &Near, Flags);
+
+  EXPECT_NE((void*)0, M1.base());
+  EXPECT_LE(16U, M1.size());
+  EXPECT_NE((void*)0, M2.base());
+  EXPECT_LE(64U, M2.size());
+  EXPECT_NE((void*)0, M3.base());
+  EXPECT_LE(32U, M3.size());
+
+  EXPECT_FALSE(doesOverlap(M1, M2));
+  EXPECT_FALSE(doesOverlap(M2, M3));
+  EXPECT_FALSE(doesOverlap(M1, M3));
+
+  EXPECT_FALSE(Memory::releaseMappedMemory(M1));
+  EXPECT_FALSE(Memory::releaseMappedMemory(M3));
+  EXPECT_FALSE(Memory::releaseMappedMemory(M2));
+}
+
+TEST_P(MappedMemoryTest, UnalignedNear) {
+  MemoryBlock Near((void*)(2*PageSize+5), 0);
+  MemoryBlock M1 = Memory::allocateMappedMemory(15, &Near, Flags);
+
+  EXPECT_NE((void*)0, M1.base());
+  EXPECT_LE(sizeof(int), M1.size());
+
+  EXPECT_FALSE(Memory::releaseMappedMemory(M1));
+}
+
+// Note that Memory::MF_WRITE is not supported on its own across all
+// operating systems and architectures, and can imply MF_READWRITE.
+Memory::ProtectionFlags MemoryFlags[] = {
+                                  Memory::MF_READ,
+                                  Memory::MF_WRITE,
+                                  Memory::MF_READWRITE,
+                                  Memory::MF_EXEC,
+                                  Memory::MF_READEXEC,
+                                  Memory::MF_READWRITEEXEC
+                                };
+
+INSTANTIATE_TEST_CASE_P(AllocationTests,
+                        MappedMemoryTest,
+                        ::testing::ValuesIn(MemoryFlags));
+
+}  // anonymous namespace




On Sep 4, 2012, at 5:21 PM, "Kaylor, Andrew" <andrew.kaylor at intel.com> wrote:


ping

From: llvm-commits-bounces at cs.uiuc.edu [mailto:llvm-commits-bounces at cs.uiuc.edu] On Behalf Of Kaylor, Andrew
Sent: Tuesday, August 28, 2012 11:11 AM
To: Commit Messages and Patches for LLVM
Subject: Re: [llvm-commits] [PATCH] [Review Request] Memory Support changes to enable setting page protection flags

ping

From: llvm-commits-bounces at cs.uiuc.edu [mailto:llvm-commits-bounces at cs.uiuc.edu] On Behalf Of Kaylor, Andrew
Sent: Friday, August 17, 2012 2:36 PM
To: Commit Messages and Patches for LLVM
Subject: [llvm-commits] [PATCH] [Review Request] Memory Support changes to enable setting page protection flags

Hi everyone,

The attached patch enhances the Support/Memory API to enable explicit setting of memory page protection flags, both at the time of allocation and afterward.  These changes are in preparation for a future patch to allow MCJIT to protect pages of runtime-loaded sections in JITed code.  Currently, MCJIT (like the legacy JIT) puts all code and data into pages that are both writable and executable, which is obviously a bit of a security concern.
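
For example, the intended usage pattern for JITed code would look roughly like this (a sketch against the proposed API; emitCodeInto is a hypothetical stand-in for the code emitter):

    std::string Err;
    // Allocate writable pages, emit into them, then drop the write permission.
    MemoryBlock MB = Memory::allocateMappedMemory(CodeSize, 0,
                                                  Memory::MF_READWRITE, &Err);
    if (MB.base() == 0)
      report_fatal_error(Err);
    emitCodeInto(MB);  // hypothetical: writes the generated code into MB
    if (Memory::protectMappedMemory(MB, Memory::MF_READEXEC, &Err))
      report_fatal_error(Err);
    // protectMappedMemory also invalidates the instruction cache when
    // MF_EXEC is set, so the block is now ready to execute.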

This patch was prepared by Ashok Thirumurthi.

One sticky area that I'd like to call out for specific consideration is what the new functions do if the caller attempts to allocate memory or (more likely) set protection flags on existing memory with the MF_WRITE flag set, but not the MF_READ flag.  The specification of mprotect indicates that on some architectures enabling write also enables read, but leaves open the possibility that this is not necessarily so (and I believe some architectures do use this for cache optimization).  Windows, on the other hand, makes no provision for setting write-only permissions.

In an attempt to resolve this conflict while still providing relatively consistent behavior, we have made it so that MF_WRITE translates to PAGE_READWRITE on Windows, while the Unix implementation translates it to just PROT_WRITE, leaving the resulting behavior up to the OS.  Other possibilities we considered were removing MF_WRITE by itself from the flags presented in the API, and always coercing MF_WRITE to MF_READWRITE.  We rejected both of these options in order to make it possible for users to take advantage of the write-only optimization where they have reason to believe it is available.
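
Concretely, with this patch a write-only request resolves as follows (sketch; MB is an existing mapped block):

    std::string Err;
    // Unix:    mprotect(..., PROT_WRITE)           - OS decides whether read is implied
    // Windows: VirtualProtect(..., PAGE_READWRITE) - read is always included
    Memory::protectMappedMemory(MB, Memory::MF_WRITE, &Err);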

Thanks,
Andy

_______________________________________________
llvm-commits mailing list
llvm-commits at cs.uiuc.edu
http://lists.cs.uiuc.edu/mailman/listinfo/llvm-commits

-------------- next part --------------
A non-text attachment was scrubbed...
Name: mapped-memory-2.patch
Type: application/octet-stream
Size: 32264 bytes
Desc: mapped-memory-2.patch
URL: <http://lists.llvm.org/pipermail/llvm-commits/attachments/20120911/30efa3b4/attachment.obj>

