[Openmp-commits] [PATCH] D154312: [Libomptarget] Begin implementing support for RPC services

Joseph Huber via Phabricator via Openmp-commits openmp-commits at lists.llvm.org
Mon Jul 3 10:27:13 PDT 2023


jhuber6 added inline comments.


================
Comment at: openmp/libomptarget/plugins-nextgen/amdgpu/src/rtl.cpp:533
+      return Plugin::success();
+    }
+
----------------
jdoerfert wrote:
> This I don't get. We wait for 8k units, then we run the server once, then we wait again if the kernel is not done. Why isn't this part of the loop below with a timeout that adjusts based on the RPC needs?
I wasn't sure how to mesh it with the existing logic, since we will presumably have streams that did not come from kernel launches, and there is already an active-timeout exception here where we seem to query at a higher rate than the blocking path. I worked around it for now, but I could try to integrate it. AMDGPU was much more difficult here than CUDA because of the indirection.


================
Comment at: openmp/libomptarget/plugins-nextgen/amdgpu/src/rtl.cpp:1071
+  /// Attach an RPC device to this stream.
+  void setRPCDevice(RPCDeviceTy *Device) { RPCDevice = Device; }
+
----------------
jdoerfert wrote:
> We can grab that information in the constructor.
I tried that, but it turns out we initialize these streams before we load the binary image, which is required to set up the RPC server since we first need to know where the symbol is to initialize it.


================
Comment at: openmp/libomptarget/plugins-nextgen/common/PluginInterface/CMakeLists.txt:18
+
+set_source_files_properties(PluginInterface.cpp GlobalHandler.cpp JIT.cpp RPC.cpp
+  PROPERTIES COMPILE_FLAGS "-O0 -g")
----------------
Forgot to remove this.


================
Comment at: openmp/libomptarget/plugins-nextgen/common/PluginInterface/PluginInterface.cpp:558
+  DP("Running an RPC server on device %d\n", getDeviceId());
+  return Plugin::success();
+}
----------------
jdoerfert wrote:
> Can you hoist `Plugin.getRPCServer()` to make it more readable.
> 
> I'm not a fan of RPCDevice as a name, we have too many "devices"
Sure, I'll try to think of a better name.


================
Comment at: openmp/libomptarget/plugins-nextgen/common/PluginInterface/PluginInterface.h:854
+  /// The RPC device attached to this device, if any; null if no server.
+  RPCDeviceTy *RPCDevice;
 };
----------------
jdoerfert wrote:
> docs
You mean explain this in more detail?


================
Comment at: openmp/libomptarget/plugins-nextgen/common/PluginInterface/RPC.cpp:69-104
+  // Register a custom opcode handler to perform plugin specific allocation.
+  // FIXME: We need to make sure this uses asynchronous allocations on CUDA.
+  auto MallocHandler = [](rpc_port_t Port, void *Data) {
+    rpc_recv_and_send(
+        Port,
+        [](rpc_buffer_t *Buffer, void *Data) {
+          plugin::GenericDeviceTy &Device =
----------------
jdoerfert wrote:
> Can the GenericDevice register these handlers?
> A vector of them somewhere, or just let it call a register callback.
> I would not make these special and define them here.
> Move all into the GenericDevice and it will call an Impl method for the subclasses to register more.
So this will probably only be necessary for things like malloc / memcpy. I figured those are already provided by the generic device, so we should be able to hide that complexity here, right next to the implementation. It didn't feel like it warranted an abstraction.


================
Comment at: openmp/libomptarget/plugins-nextgen/common/PluginInterface/RPC.cpp:134
+
+  return &Devices.emplace_back(*this, Device);
+#else
----------------
jdoerfert wrote:
> This seems like a bad idea. We should use the device ID to index the Devices array, and we really need to rename this.
Yeah, you can make multiple. The only reason I didn't was that you'd need to heap-allocate them because of the references, but we can just use `std::make_unique`.


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D154312/new/

https://reviews.llvm.org/D154312



More information about the Openmp-commits mailing list