[PATCH] D75287: [mlir][GPU] Expose the functionality to create a gpu.GPUFuncOp from a gpu.GPULaunchOp

Alex Zinenko via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Wed Mar 4 04:33:57 PST 2020


ftynse accepted this revision.
ftynse added a comment.

I have no objections beyond a bunch of nits. This patch intends to expose _functions_ and can land as is. Refactoring those functions into separate (test) passes is fine as a follow-up IMO.



================
Comment at: mlir/include/mlir/Dialect/GPU/Passes.h:21
 
+class LogicalResult;
 class MLIRContext;
----------------
`struct`, otherwise you'll break Windows builds: MSVC warns (C4099) when a forward declaration's class/struct tag doesn't match the definition, and it mangles `class` and `struct` differently.
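
For reference, a minimal sketch of the fixed forward declarations (assuming `LogicalResult` is defined as a `struct` in `mlir/Support/LogicalResult.h`):

```
// mlir/include/mlir/Dialect/GPU/Passes.h
// The tag must match the definition, since MSVC mangles class and struct
// differently.
struct LogicalResult;
class MLIRContext;
```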


================
Comment at: mlir/include/mlir/Dialect/GPU/Passes.h:53
+/// region.
+LogicalResult sinkInstructionsIntoLaunchOp(gpu::LaunchOp launchOp);
+
----------------
Bikeshed nit: `sinkOperationsIntoLaunchOp`. "Instructions" aren't a thing in MLIR.
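
For illustration, the renamed declaration could look like this (the doc comment wording is mine, not from the patch):

```
/// Sinks operations (e.g. constants) from above the launch region into it,
/// when doing so is likely to be beneficial.
LogicalResult sinkOperationsIntoLaunchOp(gpu::LaunchOp launchOp);
```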


================
Comment at: mlir/lib/Dialect/GPU/Transforms/KernelOutlining.cpp:64
+  // operation.
+  llvm::SetVector<Value> operands;
+  getUsedValuesDefinedAbove(launchOpBody, operands);
----------------
Bikeshed nit: `operands` confused me into thinking it referred to the _existing_ launchOp operands, not "values that might become operands if sinking is beneficial".
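
One way to avoid that reading (a sketch; `sinkCandidates` is a hypothetical name, not from the patch):

```
// Values used inside the launch region but defined above it; these are
// candidates for sinking, not (yet) operands of the launch op.
llvm::SetVector<Value> sinkCandidates;
getUsedValuesDefinedAbove(launchOpBody, sinkCandidates);
```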


================
Comment at: mlir/lib/Dialect/GPU/Transforms/KernelOutlining.cpp:105
+        auto replacement = clonedOp->getResult(result.index());
+        for (Operation *user : result.value().getUsers()) {
+          if (user->getParentOfType<gpu::LaunchOp>() == launchOp)
----------------
How about iterating over uses rather than users?

```
for (OpOperand &use : llvm::make_early_inc_range(result.value().getUses())) {
  // use.set() unlinks the use from this use list, so iterate early-inc.
  if (use.getOwner()->getParentOfType<gpu::LaunchOp>() == launchOp)
    use.set(replacement);
}
```


================
Comment at: mlir/lib/Dialect/GPU/Transforms/KernelOutlining.cpp:112
+    }
+    if (startSize == processed.size()) {
+      return launchOp.emitError(
----------------
Can this happen in valid IR? If not, I would rather assert. Otherwise, please drop the trivial braces.
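
If the assert route is taken, something like this (a sketch; the message text is invented here and assumes the stall is impossible in valid IR):

```
// Each iteration must make progress in valid IR; a stall indicates a bug.
assert(startSize != processed.size() && "failed to sink any operation");
```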


================
Comment at: mlir/lib/Dialect/GPU/Transforms/KernelOutlining.cpp:117
   }
-  kernelFunc.setType(LaunchBuilder.getFunctionType(newArgumentTypes, {}));
-  auto newLaunch = LaunchBuilder.create<gpu::LaunchFuncOp>(
-      launch.getLoc(), kernelFunc, launch.getGridSizeOperandValues(),
-      launch.getBlockSizeOperandValues(), newLaunchArgs);
-  launch.erase();
-  return newLaunch;
+  // Replace all uses of the cloned ops within the region.
+  return success();
----------------
This comment looks outdated.


================
Comment at: mlir/lib/Dialect/GPU/Transforms/KernelOutlining.cpp:234
+        if (failed(sinkInstructionsIntoLaunchOp(op))) {
+          op.emitError("failed during sinking instructions into launch op");
+          return WalkResult::interrupt();
----------------
Sinking already reports an error, no need to add another one IMO.
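
The call site could then just propagate the failure, e.g. (a sketch, using the rename suggested above):

```
if (failed(sinkOperationsIntoLaunchOp(op)))
  return WalkResult::interrupt();
```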


================
Comment at: mlir/lib/Dialect/GPU/Transforms/KernelOutlining.cpp:251
       });
+      if (funcWalkResult.wasInterrupted()) {
+        return signalPassFailure();
----------------
Nit: Please drop the trivial braces.
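
I.e., something like:

```
if (funcWalkResult.wasInterrupted())
  return signalPassFailure();
```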


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D75287/new/

https://reviews.llvm.org/D75287




