[Mlir-commits] [mlir] [mlir] Remove the mlir-spirv-cpu-runner (move to mlir-cpu-runner) (PR #114563)
Fabian Mora
llvmlistbot at llvm.org
Fri Nov 8 04:40:23 PST 2024
================
@@ -17,10 +17,65 @@
#include "mlir/ExecutionEngine/OptUtils.h"
#include "mlir/IR/Dialect.h"
#include "mlir/Target/LLVMIR/Dialect/All.h"
+#include "mlir/Target/LLVMIR/Dialect/LLVMIR/LLVMToLLVMIRTranslation.h"
+#include "mlir/Target/LLVMIR/Export.h"
+#include "llvm/IR/LLVMContext.h"
+#include "llvm/IR/Module.h"
+#include "llvm/Linker/Linker.h"
+#include "llvm/Support/CommandLine.h"
#include "llvm/Support/InitLLVM.h"
#include "llvm/Support/TargetSelect.h"
+using namespace mlir;
+
+llvm::cl::opt<bool> LinkNestedModules(
+ "link-nested-modules",
+ llvm::cl::desc("Link two nested MLIR modules into a single LLVM IR module. "
+ "Useful if both the host and device code can be run on the "
+ "same CPU, as in mlir-spirv-cpu-runner tests."));
----------------
fabianmcg wrote:
Yes, but that's still something that could be handled with the current infra + extraction, it would look something like:
- `module-to-offload`: turn nested ModuleOps into BinaryOps where the object being held is LLVM bitcode.
- `translateToLLVM`: take the binary, load the bitcode into an LLVM module, and link it to the main LLVM module. One can do that with the [OffloadingLLVMTranslationAttrInterface](https://github.com/llvm/llvm-project/blob/main/mlir/include/mlir/Dialect/GPU/IR/CompilationAttrInterfaces.td#L100-L142) interface.
It's just a matter of whether we want to generalize it (which doesn't require that many changes) and extract it from GPU. In this case it's also a question of whether people accept the overhead of having to serialize to LLVM bitcode and load it back.
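The `translateToLLVM` step could look roughly like the sketch below. This is a minimal, hedged illustration (not the actual PR code): it assumes the binary op exposes its serialized bitcode as a `StringRef`, parses it back into an `llvm::Module`, and merges it into the host module with `llvm::Linker` — which is where the serialize/deserialize overhead mentioned above comes from.

```cpp
#include "llvm/Bitcode/BitcodeReader.h"
#include "llvm/IR/Module.h"
#include "llvm/Linker/Linker.h"
#include "llvm/Support/MemoryBuffer.h"

// Sketch: link a device module, held as LLVM bitcode, into the host
// LLVM module during translation. Returns true on failure, matching
// the llvm::Linker convention.
static bool linkDeviceBitcode(llvm::Module &hostModule,
                              llvm::StringRef bitcode) {
  // Wrap the serialized bytes without copying; bitcode needs no
  // null terminator.
  auto buffer = llvm::MemoryBuffer::getMemBuffer(
      bitcode, "device-module", /*RequiresNullTerminator=*/false);

  // Deserialize the bitcode back into an llvm::Module in the host
  // module's context, so the two modules can be linked.
  llvm::Expected<std::unique_ptr<llvm::Module>> deviceModule =
      llvm::parseBitcodeFile(buffer->getMemBufferRef(),
                             hostModule.getContext());
  if (!deviceModule) {
    llvm::consumeError(deviceModule.takeError());
    return true;
  }

  // linkModules consumes the source module and merges its symbols
  // into hostModule; it returns true on error.
  return llvm::Linker::linkModules(hostModule, std::move(*deviceModule));
}
```

Hooking this up through `OffloadingLLVMTranslationAttrInterface` would mean the offloading attribute's embedding hook does this linking instead of embedding an opaque blob.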
https://github.com/llvm/llvm-project/pull/114563