[PATCH] D78207: [MLIR] Allow for multiple gpu modules during translation.

Alex Zinenko via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Thu Apr 16 05:03:23 PDT 2020


ftynse accepted this revision.
ftynse added inline comments.


================
Comment at: mlir/lib/Conversion/GPUToCUDA/ConvertKernelFuncToCubin.cpp:115
+    auto clone = llvm::cantFail(
+        llvm::parseBitcodeFile(clonedModuleBufferRef, llvmContext));
+
----------------
mehdi_amini wrote:
> mehdi_amini wrote:
> > Why don't you use the high-level API the same way this is done in the ExecutionEngine?
> > 
> > ```
> >   // TODO(zinenko): Reevaluate model of ownership of LLVMContext in LLVMDialect.
> >   SmallVector<char, 1> buffer;
> >   {
> >     llvm::raw_svector_ostream os(buffer);
> >     WriteBitcodeToFile(*llvmModule, os);
> >   }
> >   llvm::MemoryBufferRef bufferRef(StringRef(buffer.data(), buffer.size()),
> >                                   "cloned module buffer");
> >   auto expectedModule = parseBitcodeFile(bufferRef, *ctx);
> >   if (!expectedModule)
> >     return expectedModule.takeError();
> >   std::unique_ptr<Module> deserModule = std::move(*expectedModule);
> >   auto dataLayout = deserModule->getDataLayout();
> > ```
> > 
> > 
> > I'd also like a TODO here for Alex to actually fix: the fact that we have an LLVMContext tied to the LLVM dialect is really something we need to fix.
> (It may be even worth extracting this logic in a helper by the way)
> I'd also like a TODO here for Alex to actually fix: the fact that we have an LLVMContext tied to the LLVM dialect is really something we need to fix.

Yeah, this is the next big thing on my todo list. Let's see how many discussions we can have in parallel :)
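(The bitcode round-trip quoted above could be factored into a helper along these lines. This is only a sketch against LLVM's C++ API; the function name `cloneModuleIntoNewContext` is hypothetical and not part of the patch under review.)

```cpp
// Hypothetical helper extracting the clone-via-bitcode logic discussed
// above. Serializing to bitcode and parsing it back is the usual way to
// move a module into a different LLVMContext, since llvm::CloneModule
// requires source and destination to share one context.
#include "llvm/Bitcode/BitcodeReader.h"
#include "llvm/Bitcode/BitcodeWriter.h"
#include "llvm/IR/Module.h"
#include "llvm/Support/MemoryBufferRef.h"
#include "llvm/Support/raw_ostream.h"

static llvm::Expected<std::unique_ptr<llvm::Module>>
cloneModuleIntoNewContext(const llvm::Module &module,
                          llvm::LLVMContext &targetContext) {
  // Serialize the module to an in-memory bitcode buffer.
  llvm::SmallVector<char, 1> buffer;
  {
    llvm::raw_svector_ostream os(buffer);
    llvm::WriteBitcodeToFile(module, os);
  }
  // Deserialize the buffer into the target context; errors propagate
  // through the returned llvm::Expected.
  llvm::MemoryBufferRef bufferRef(
      llvm::StringRef(buffer.data(), buffer.size()), "cloned module buffer");
  return llvm::parseBitcodeFile(bufferRef, targetContext);
}
```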


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D78207/new/

https://reviews.llvm.org/D78207

More information about the llvm-commits mailing list