[PATCH] D78207: [MLIR] Allow for multiple gpu modules during translation.

Mehdi AMINI via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Wed Apr 15 22:45:05 PDT 2020


mehdi_amini added inline comments.


================
Comment at: mlir/lib/Conversion/GPUToCUDA/ConvertKernelFuncToCubin.cpp:115
+    auto clone = llvm::cantFail(
+        llvm::parseBitcodeFile(clonedModuleBufferRef, llvmContext));
+
----------------
Why don't you use the high-level API the same way it is done in the ExecutionEngine?

```
  // TODO(zinenko): Reevaluate model of ownership of LLVMContext in LLVMDialect.
  SmallVector<char, 1> buffer;
  {
    llvm::raw_svector_ostream os(buffer);
    WriteBitcodeToFile(*llvmModule, os);
  }
  llvm::MemoryBufferRef bufferRef(StringRef(buffer.data(), buffer.size()),
                                  "cloned module buffer");
  auto expectedModule = parseBitcodeFile(bufferRef, *ctx);
  if (!expectedModule)
    return expectedModule.takeError();
  std::unique_ptr<Module> deserModule = std::move(*expectedModule);
  auto dataLayout = deserModule->getDataLayout();
```


I'd also like a TODO here for Alex to actually fix this: having an LLVMContext tied to the LLVM dialect is really something we need to fix.
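
Concretely, the `cantFail` call in the patch could propagate the parse error instead of asserting. A rough sketch, assuming the surrounding LLVM/MLIR headers and variables from the patched function (`clonedModuleBufferRef`, `llvmContext`); the `emitError(loc)` diagnostic is illustrative, not the exact call site:

```
  // Sketch only: parse the serialized clone and surface failures as
  // diagnostics rather than aborting via llvm::cantFail.
  llvm::Expected<std::unique_ptr<llvm::Module>> expectedClone =
      llvm::parseBitcodeFile(clonedModuleBufferRef, llvmContext);
  if (!expectedClone) {
    emitError(loc) << "failed to parse cloned module bitcode: "
                   << llvm::toString(expectedClone.takeError());
    return {};
  }
  std::unique_ptr<llvm::Module> clone = std::move(*expectedClone);
```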


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D78207/new/

https://reviews.llvm.org/D78207




