[PATCH] D13818: [compiler-rt] [msan] Unify aarch64 mapping

Adhemerval Zanella via llvm-commits llvm-commits at lists.llvm.org
Thu Oct 22 14:48:11 PDT 2015


zatrazz added inline comments.

================
Comment at: lib/msan/msan.h:89
@@ -70,3 +88,3 @@
 # define LINEARIZE_MEM(mem) \
   (((uptr)(mem) & ~0x7C00000000ULL) ^ 0x100000000ULL)
 # define MEM_TO_SHADOW(mem) (LINEARIZE_MEM((mem)) + 0x4000000000ULL)
----------------
eugenis wrote:
> zatrazz wrote:
> > eugenis wrote:
> > > zatrazz wrote:
> > > > eugenis wrote:
> > > > > This is probably out of scope of this review, but could you elaborate, and maybe add a comment, about the constraints that led to this complex mapping function? For example, a list of all address ranges that must be in "app" regions would help.
> > > > > 
> > > > > This mapping limits the applications to roughly 1/7th of the address space on a 39-bit VMA and only 1/30th on a 42-bit VMA. Could we do any better?
> > > > > 
> > > > This is exactly what I am struggling with: the current aarch64 39- and 42-bit VMA constraints regarding PIE positioning. The memory segments are:
> > > > 
> > > > 0000000000-0010000000: own program segments (both 39- and 42-bit)
> > > > 5500000000-5600000000: 39-bits PIE program segments
> > > > 7000000000-7f80000000: 39-bits libraries segments
> > > > 
> > > > 2aa00000000-2ab00000000: 42-bits PIE program segments
> > > > 3ff00000000-3ffffffffff: 42-bits libraries segments
> > > > 
> > > > I am trying to increase the segment sizes, but it is hard to come up with a single transformation that works on both 39- and 42-bit VMAs, one that maps 39-bit addresses to 39-bit addresses and also works for 42-bit. I am open to suggestions.
> > > Can we do the same as on x86_64: flip either one or both of the most significant bits (38 & 37)?
> > > 39-bit addresses will stay 39-bit.
> > > The following regions seem to have a long enough constant left prefix for this transformation to be linear:
> > > 2aa00000000-2ab00000000: 42-bits PIE program segments
> > > 3ff00000000-3ffffffffff: 42-bits libraries segments
> > > 
> > > It will fragment the 42-bit VMA into something like 16 application segments, plus the same number of shadow and origin segments; some of them will be marked invalid to avoid shadow/app/origin overlap with other segments. That is not a problem, as long as the function is linear on any contiguous kernel-mapped range.
> > > 
> > This seems to be a slightly better strategy:
> > 
> >   {0x00000000000ULL, 0x01000000000ULL, MappingDesc::INVALID, "invalid"},
> >   {0x01000000000ULL, 0x02000000000ULL, MappingDesc::SHADOW,  "shadow-1"},
> >   {0x02000000000ULL, 0x03000000000ULL, MappingDesc::ORIGIN,  "origin"},
> >   {0x03000000000ULL, 0x03500000000ULL, MappingDesc::INVALID, "invalid"},
> >   {0x03500000000ULL, 0x03600000000ULL, MappingDesc::SHADOW,  "shadow-2"},
> >   {0x03600000000ULL, 0x04500000000ULL, MappingDesc::INVALID, "invalid"},
> >   {0x04500000000ULL, 0x04600000000ULL, MappingDesc::ORIGIN,  "origin"},
> >   {0x04600000000ULL, 0x05500000000ULL, MappingDesc::INVALID, "invalid"},
> >   {0x05500000000ULL, 0x05600000000ULL, MappingDesc::APP,     "app-1"},
> >   {0x05600000000ULL, 0x07000000000ULL, MappingDesc::INVALID, "invalid"},
> >   {0x07000000000ULL, 0x08000000000ULL, MappingDesc::APP,     "app-2"},
> >   {0x08000000000ULL, 0x2A000000000ULL, MappingDesc::INVALID, "invalid"},
> >   {0x2A000000000ULL, 0x2AC00000000ULL, MappingDesc::APP,     "app-3"},
> >   {0x2AC00000000ULL, 0x2C000000000ULL, MappingDesc::INVALID, "invalid"},
> >   {0x2C000000000ULL, 0x2CC00000000ULL, MappingDesc::SHADOW,  "shadow-3"},
> >   {0x2CC00000000ULL, 0x2D000000000ULL, MappingDesc::INVALID, "invalid"},
> >   {0x2D000000000ULL, 0x2DC00000000ULL, MappingDesc::ORIGIN,  "origin-3"},
> >   {0x2DC00000000ULL, 0x39000000000ULL, MappingDesc::INVALID, "invalid"},
> >   {0x39000000000ULL, 0x3A000000000ULL, MappingDesc::SHADOW,  "shadow"},
> >   {0x3A000000000ULL, 0x3B000000000ULL, MappingDesc::ORIGIN,  "origin"},
> >   {0x3B000000000ULL, 0x3F000000000ULL, MappingDesc::INVALID, "invalid"},
> >   {0x3F000000000ULL, 0x40000000000ULL, MappingDesc::APP,     "app-4"},
> > 
> > # define MEM_TO_SHADOW(mem) (((uptr)(mem)) ^ 0x6000000000ULL)
> > # define SHADOW_TO_ORIGIN(shadow) (((uptr)(shadow)) + 0x1000000000ULL)
> > 
> > Although it does not increase the VMA available for 42-bit (4.39%, compared to 13% for 39-bit), I will try to check whether it is possible to squeeze out more for 42-bit, but the PIE constraint is really making this hard :/
> Your "invalid" regions are suspiciously large. It should be possible to add more app space w/o changing the mapping function.
> 
> For example, [3b, 3c) is mapped to [3d, 3e) with origin at [3e, 3f) - all three are marked invalid in your list.
> 
> Also, when naming regions, please make sure that the shadow for "app-N" is called "shadow-N"; it makes the list easier to read and verify. The same for origin.
> 
> 
I realized it just after I hit the send button. Adding the following segments as well:

{0x11000000000ULL, 0x12000000000ULL, MappingDesc::APP,     "app"}
{0x20000000000ULL, 0x22000000000ULL, MappingDesc::APP,     "app"}
{0x2E000000000ULL, 0x2F000000000ULL, MappingDesc::APP,     "app"}
{0x3B000000000ULL, 0x3C000000000ULL, MappingDesc::APP,     "app"}

I could reach a total of 12.21% of the VMA available for the application, a percentage similar to x86_64 and MIPS (12.50%).
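
For what it's worth, here is a quick standalone arithmetic check of that figure (just a sketch, not part of the patch): the first four ranges are the app entries from the proposed table, the last four are the segments listed above.

  #include <cstdint>
  #include <cstdio>

  int main() {
    // App ranges: the four from the proposed table plus the four added above.
    const uint64_t ranges[][2] = {
        {0x05500000000ULL, 0x05600000000ULL},  // app-1
        {0x07000000000ULL, 0x08000000000ULL},  // app-2
        {0x2A000000000ULL, 0x2AC00000000ULL},  // app-3
        {0x3F000000000ULL, 0x40000000000ULL},  // app-4
        {0x11000000000ULL, 0x12000000000ULL},  // additional app segments
        {0x20000000000ULL, 0x22000000000ULL},
        {0x2E000000000ULL, 0x2F000000000ULL},
        {0x3B000000000ULL, 0x3C000000000ULL},
    };
    uint64_t app = 0;
    for (const auto &r : ranges)
      app += r[1] - r[0];
    // A 42-bit VMA has 2^42 addressable bytes.
    printf("app: %llu bytes, %.2f%% of the 42-bit VMA\n",
           (unsigned long long)app, 100.0 * app / (1ULL << 42));
    return 0;
  }

It prints 12.21%, i.e. 125/1024 of the 42-bit address space.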

About the names, I will change them and add proper comments.
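
As an aside, a small standalone helper along these lines (again only a sketch, not part of the patch, reusing the MEM_TO_SHADOW / SHADOW_TO_ORIGIN constants proposed above) makes it easy to see which shadow and origin ranges each candidate app segment implies while laying out the table and its comments:

  #include <cstdint>
  #include <cstdio>

  // Transforms from the proposal above:
  //   shadow = mem ^ 0x6000000000, origin = shadow + 0x1000000000.
  static uint64_t MemToShadow(uint64_t mem) { return mem ^ 0x6000000000ULL; }
  static uint64_t ShadowToOrigin(uint64_t s) { return s + 0x1000000000ULL; }

  int main() {
    // Candidate app segments (here, the four extra ones from this message).
    const uint64_t app[][2] = {
        {0x11000000000ULL, 0x12000000000ULL},
        {0x20000000000ULL, 0x22000000000ULL},
        {0x2E000000000ULL, 0x2F000000000ULL},
        {0x3B000000000ULL, 0x3C000000000ULL},
    };
    // The XOR only touches bits 37-38, so it is linear within any aligned
    // 0x1000000000-byte block; walk each range block by block.
    const uint64_t kBlock = 0x1000000000ULL;
    for (const auto &r : app) {
      for (uint64_t beg = r[0]; beg < r[1]; beg += kBlock) {
        uint64_t sbeg = MemToShadow(beg), send = sbeg + kBlock;
        printf("app [%011llx, %011llx) -> shadow [%011llx, %011llx)"
               " -> origin [%011llx, %011llx)\n",
               (unsigned long long)beg, (unsigned long long)(beg + kBlock),
               (unsigned long long)sbeg, (unsigned long long)send,
               (unsigned long long)ShadowToOrigin(sbeg),
               (unsigned long long)ShadowToOrigin(send));
      }
    }
    return 0;
  }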


http://reviews.llvm.org/D13818