[PATCH] D130392: [Orc][JITLink] Slab based memory allocator to reduce RPC calls

Lang Hames via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Wed Aug 3 12:25:59 PDT 2022


lhames accepted this revision.
lhames added a comment.
This revision is now accepted and ready to land.

LGTM!



================
Comment at: llvm/include/llvm/ExecutionEngine/Orc/MapperJITLinkMemoryManager.h:55-58
+  // Ranges that have been reserved in executor and usable
+  std::vector<ExecutorAddrRange> AvailableMemory;
+  // Ranges that have been reserved on both side
+  DenseMap<ExecutorAddr, ExecutorAddrDiff> UsedMemory;
----------------
>  // Ranges that have been reserved on both side
>  DenseMap<ExecutorAddr, ExecutorAddrDiff> UsedMemory;

I didn't understand this comment. This is just the used memory list, right? Whether or not there's any memory associated with it on the controller side should be up to the mapper, rather than the allocator.
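
To illustrate what I mean, here's a rough standalone sketch of that bookkeeping (plain uint64_t stands in for ExecutorAddr, and the class and method names are purely illustrative, not the patch's API): `UsedMemory` only needs to record which pieces of the reserved ranges are live so they can be handed back later; anything about controller-side working memory belongs in the mapper.

```cpp
// Standalone sketch, not the patch's code: allocation bookkeeping with no
// notion of controller-side working memory. Alignment is ignored for brevity.
#include <cstdint>
#include <optional>
#include <unordered_map>
#include <vector>

class SlabAllocatorSketch {
  struct Range { uint64_t Start, End; };
  std::vector<Range> AvailableMemory;                 // reserved, currently unused
  std::unordered_map<uint64_t, uint64_t> UsedMemory;  // base -> size of live allocations

public:
  void addReservation(uint64_t Start, uint64_t End) {
    AvailableMemory.push_back({Start, End});
  }

  // First-fit: carve Size bytes out of the first free range that can hold it
  // and remember the allocation so it can be returned on deallocation.
  std::optional<uint64_t> allocate(uint64_t Size) {
    for (auto &R : AvailableMemory) {
      if (R.End - R.Start >= Size) {
        uint64_t Base = R.Start;
        R.Start += Size;
        UsedMemory[Base] = Size;
        return Base;
      }
    }
    return std::nullopt; // caller would reserve a fresh slab here
  }
};
```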


================
Comment at: llvm/lib/ExecutionEngine/Orc/MapperJITLinkMemoryManager.cpp:73-80
   // Check if total size fits in address space
-  if (SegsSizes->total() > std::numeric_limits<size_t>::max()) {
+  auto TotalSize = SegsSizes->total();
+  if (TotalSize > std::numeric_limits<size_t>::max()) {
     OnAllocated(make_error<JITLinkError>(
         formatv("Total requested size {:x} for graph {} exceeds address space",
                 SegsSizes->total(), G.getName())));
     return;
----------------
I think this check should probably be dropped: We support JITing on a 32-bit controller for a 64-bit target, so you could request, e.g., an 8GB zero-fill data section. That would overflow a size_t on the controller, but is still perfectly legal.

On the other hand, if you overflow the address space for the executor then you'll definitely overflow the available space too, so you'll error out below anyway.
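
To spell out the arithmetic, here's a standalone sketch (not the patch's code; the 32-bit controller's size_t is simulated with UINT32_MAX and the slab size is made up): keeping sizes in uint64_t and only checking against the executor-side space that's actually available handles the legal 8GB case while still catching genuine failures.

```cpp
// Standalone sketch only: why a size_t-based check misfires on a 32-bit
// controller targeting a 64-bit executor.
#include <cassert>
#include <cstdint>

int main() {
  const uint64_t TotalSize = 8ull << 30;      // 8 GiB zero-fill request
  const uint64_t SlabRemaining = 64ull << 20; // hypothetical free slab space

  // On a 32-bit controller, sizeof(size_t) == 4, so this kind of check fires
  // for a perfectly legal 64-bit-target request (UINT32_MAX stands in for
  // the 32-bit size_t maximum):
  const bool RejectedBySizeTCheck = TotalSize > UINT32_MAX;

  // Checking against the executor-side space that is actually available
  // catches real failures without penalizing 32-bit controllers:
  const bool RejectedByAvailabilityCheck = TotalSize > SlabRemaining;

  assert(RejectedBySizeTCheck && RejectedByAvailabilityCheck);
  return 0;
}
```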


================
Comment at: llvm/lib/ExecutionEngine/Orc/MapperJITLinkMemoryManager.cpp:164
+
+    AvailableMemory.push_back({Addr, Addr + Size});
     FA.release();
----------------
There's no coalescing of the `AvailableMemory` list yet, so you'll get fragmentation in the allocator. I think that should be addressed in a follow-up patch though.
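
For that follow-up, something along these lines would do (a standalone sketch only; it uses a std::map keyed by range start and plain uint64_t addresses rather than the patch's std::vector<ExecutorAddrRange> and ExecutorAddr): on deallocation, merge the returned range with any abutting free neighbours so repeated allocate/deallocate cycles don't fragment the reserved slab.

```cpp
// Standalone sketch of free-range coalescing, not the patch's code.
#include <cstdint>
#include <iterator>
#include <map>

class FreeList {
  std::map<uint64_t, uint64_t> Free; // start -> end (half-open ranges)

public:
  // Return [Start, End) to the free list, merging with abutting neighbours.
  void release(uint64_t Start, uint64_t End) {
    auto Next = Free.lower_bound(Start);
    // Merge with the predecessor if it ends exactly where we begin.
    if (Next != Free.begin()) {
      auto Prev = std::prev(Next);
      if (Prev->second == Start) {
        Start = Prev->first;
        Free.erase(Prev);
      }
    }
    // Merge with the successor if we end exactly where it begins.
    if (Next != Free.end() && Next->first == End) {
      End = Next->second;
      Free.erase(Next);
    }
    Free[Start] = End;
  }
};
```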


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D130392/new/

https://reviews.llvm.org/D130392


