[PATCH] D75192: [mlir][spirv] Update mlir-vulkan-runner execution driver.

Lei Zhang via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Fri Feb 28 11:30:51 PST 2020


antiagainst requested changes to this revision.
antiagainst added a comment.
This revision now requires changes to proceed.

Nice, thanks Denis!



================
Comment at: mlir/lib/Conversion/GPUToVulkan/ConvertGPULaunchFuncToVulkanLaunchFunc.cpp:30
+
+static constexpr const char *kSPIRVBinary = "SPIRV_BIN";
+static constexpr const char *kSPIRVEntryPoint = "SPIRVEntryPoint";
----------------
These two are now used as attributes. The convention is to use snake_case for attribute names. So what about naming them as `spirv_blob` and `spirv_entry_point`?


================
Comment at: mlir/lib/Conversion/GPUToVulkan/ConvertGPULaunchFuncToVulkanLaunchFunc.cpp:32
+static constexpr const char *kSPIRVEntryPoint = "SPIRVEntryPoint";
+static constexpr const char *kVulkanLaunch = "__vulkanLaunch";
+
----------------
I think we can just call it `vulkanLaunch`. The `__` prefix can be confusing as to what purpose it serves. (I guess you mean this should be an intermediate step? That's not really necessary. If this is influenced by the `_` name prefix in the SPIR-V dialect, just wanted to let you know that it should also be cleaned up, as noted in the SPIR-V dialect's doc; I should create a bug for it.)


================
Comment at: mlir/lib/Conversion/GPUToVulkan/ConvertGPULaunchFuncToVulkanLaunchFunc.cpp:36
+
+// A pass to convert gpu launch op to vulkan launch call op, create a SPIR-V
+// binary shader from `spirv::ModuleOp` using `spirv::serialize` function,
----------------
... by creating ... and attaching ....


================
Comment at: mlir/lib/Conversion/GPUToVulkan/ConvertGPULaunchFuncToVulkanLaunchFunc.cpp:49
+  /// Converts the given `launchOp` to vulkan launch call.
+  void convertGpuLaunchFunc(mlir::gpu::LaunchFuncOp launchOp);
+
----------------
We can drop the `mlir::` prefix given we already have `using namespace mlir` at the beginning. Also for other places.


================
Comment at: mlir/lib/Conversion/GPUToVulkan/ConvertGPULaunchFuncToVulkanLaunchFunc.cpp:55
+    if (auto memRefType = type.dyn_cast_or_null<MemRefType>())
+      return memRefType.getShape().size() == 1;
+    return false;
----------------
memRefType.hasRank() && memRefType.getRank() == 1


================
Comment at: mlir/lib/Conversion/GPUToVulkan/ConvertGPULaunchFuncToVulkanLaunchFunc.cpp:64
+
+public:
+  void runOnModule() override;
----------------
Put public section at the beginning.


================
Comment at: mlir/lib/Conversion/GPUToVulkan/ConvertGPULaunchFuncToVulkanLaunchFunc.cpp:71
+void ConvertGpuLaunchFuncToVulkanLaunchFunc::runOnModule() {
+  getModule().walk(
+      [this](mlir::gpu::LaunchFuncOp op) { convertGpuLaunchFunc(op); });
----------------
Hmm, I see we want to support launching multiple kernels eventually, but I think right now it's broken: we walk through all launch_func ops here, but when serializing the SPIR-V blob we only expect one SPIR-V module. I think we should report an error on seeing multiple launch_func ops for now.




================
Comment at: mlir/lib/Conversion/GPUToVulkan/ConvertGPULaunchFuncToVulkanLaunchFunc.cpp:87
+  OpBuilder builder(getModule().getBody()->getTerminator());
+  SmallVector<Type, 8> vulkanLaunchTypes{launchOp.getOperandTypes()};
+
----------------
Hmm, for a vulkan launch we can only control the number of workgroups. The workgroup size is written into the kernel. So to properly model a vulkan launch, we cannot have the local workgroup size configuration here...

Could you put a TODO here to address this later?


================
Comment at: mlir/lib/Conversion/GPUToVulkan/ConvertGPULaunchFuncToVulkanLaunchFunc.cpp:89
+
+  // Skip all configuration operands.
+  for (auto type : llvm::drop_begin(vulkanLaunchTypes, 6)) {
----------------
// Check that all operands have supported types except those for the launch configuration.


================
Comment at: mlir/lib/Conversion/GPUToVulkan/ConvertGPULaunchFuncToVulkanLaunchFunc.cpp:130
+  std::vector<char> binary;
+  if (failed(ConvertGpuLaunchFuncToVulkanLaunchFunc::createBinaryShader(
+          module, binary)))
----------------
No need for `ConvertGpuLaunchFuncToVulkanLaunchFunc::` given we are in the same class.


================
Comment at: mlir/lib/Conversion/GPUToVulkan/ConvertGPULaunchFuncToVulkanLaunchFunc.cpp:162
+    pass("convert-gpu-launch-to-vulkan-launch",
+         "Convert gpu_launch func to vulkan launch func");
----------------
Convert gpu.launch_func to vulkanLaunch external call


================
Comment at: mlir/lib/Conversion/GPUToVulkan/ConvertLaunchFuncToVulkanCalls.cpp:91
+  /// Translates the given `vulkanLaunchCallOp` to the sequence of Vulkan
+  /// runtime calls
+  void translateVulkanLaunchCall(LLVM::CallOp vulkanLaunchCallOp);
----------------
Period to end the sentence.


================
Comment at: mlir/lib/Conversion/GPUToVulkan/ConvertLaunchFuncToVulkanCalls.cpp:112
+/// access to the lowered memref.
+struct VulkanLaunchOpOperandAdaptor {
+  VulkanLaunchOpOperandAdaptor(ArrayRef<Value> values) { operands = values; }
----------------
Hmm, this is quite complicated and fragile. I think we should consider using the C interface wrappers instead of handling all the details by ourselves. See https://mlir.llvm.org/docs/ConversionToLLVMDialect/#c-compatible-wrapper-emission.

Could you look into that? A TODO for now is fine given this change is quite large already.


================
Comment at: mlir/lib/Conversion/GPUToVulkan/ConvertLaunchFuncToVulkanCalls.cpp:119
+  /// Returns a tuple with a pointer to the memory and the size of this
+  /// memory by the given `index`.
+  std::tuple<Value, Value> getResourceDescriptor1D(uint32_t index) {
----------------
for the `index`-th resource.


================
Comment at: mlir/lib/Conversion/GPUToVulkan/ConvertLaunchFuncToVulkanCalls.cpp:132
 
-void GpuLaunchFuncToVulkanCalssPass::runOnModule() {
-  initializeCachedTypes();
+  /// Returns an amount of resources assuming all operands lowered from
+  /// 1D memref.
----------------
Returns the number of resources assuming ...


================
Comment at: mlir/lib/Conversion/GPUToVulkan/ConvertLaunchFuncToVulkanCalls.cpp:134
+  /// 1D memref.
+  uint32_t resourceCount1D() {
+    return (operands.size() - numConfigOps) / loweredMemRefNumOps1D;
----------------
getResourceCount1D?


================
Comment at: mlir/lib/Conversion/GPUToVulkan/ConvertLaunchFuncToVulkanCalls.cpp:139
+private:
+  static constexpr const uint32_t loweredMemRefNumOps1D = 5;
+  static constexpr const uint32_t numConfigOps = 6;
----------------
Document these fields.


================
Comment at: mlir/lib/Conversion/GPUToVulkan/ConvertLaunchFuncToVulkanCalls.cpp:167
+
+  SmallVector<Value, 32> operands{vulkanLaunchCallOp.getOperands().begin(),
+                                  vulkanLaunchCallOp.getOperands().end()};
----------------
We should be able to do `auto operands = SmallVector<Value, 32>{vulkanLaunchCallOp.getOperands()}`?


================
Comment at: mlir/lib/Conversion/GPUToVulkan/ConvertLaunchFuncToVulkanCalls.cpp:356
     pass("launch-func-to-vulkan",
-         "Convert gpu.launch_func op to Vulkan runtime calls");
+         "Convert vulkan launch func to Vulkan runtime calls");
----------------
Convert vulkanLaunch external call to Vulkan runtime external calls


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D75192/new/

https://reviews.llvm.org/D75192




