[PATCH] D104516: [mlir] Add a ThreadPool to MLIRContext and refactor MLIR threading usage

Chris Lattner via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Sat Jun 19 08:57:57 PDT 2021


lattner added a comment.

Nice, thanks for tackling this River!



================
Comment at: llvm/lib/Support/ThreadPool.cpp:77
+  std::thread::id CurrentThreadId = std::this_thread::get_id();
+  for (const std::thread &Thread : Threads)
+    if (CurrentThreadId == Thread.get_id())
----------------
Yuck but ok.  This is obviously your way of trying to force me to rewrite this stuff ;-)


================
Comment at: mlir/include/mlir/IR/ThreadingUtilities.h:1
+//===- ThreadingUtilities.h - MLIR Threading Utilities ----------*- C++ -*-===//
+//
----------------
I'd recommend calling this just "Threading.h".

This also points out a layering issue: MLIRContext is the right place for us to scope this (so I'm ok with the patch), but this really has nothing to do with the IR!

One solution would be to split MLIRContext, giving it a base class like "MLIRNonIRContext" (better names welcome) that contains the thread pool, the diagnostics utilities, and other generic state, and put that in mlir/Support. MLIRContext would derive from it and would remain the currency type, but generic utilities like this would take "MLIRNonIRContext".

Anyway, that's a discussion for another day; just saying that general parallelism utilities would fit better in Support.
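
Roughly what I have in mind (a sketch only; the class name and exact members here are placeholders, not a concrete proposal):

  // mlir/Support/... (hypothetical location)
  class MLIRNonIRContext {
  public:
    // IR-independent state: thread pool, multithreading flag, diagnostics.
    llvm::ThreadPool &getThreadPool();
    bool isMultithreadingEnabled();
  };

  // mlir/IR/MLIRContext.h
  class MLIRContext : public MLIRNonIRContext {
    // Dialects, attribute/type uniquing, and other IR-specific state.
  };

  // Generic utilities would then only need the Support-level type:
  template <typename IteratorT, typename FuncT>
  void parallelForEach(MLIRNonIRContext *context, IteratorT begin,
                       IteratorT end, FuncT &&func);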


================
Comment at: mlir/include/mlir/IR/ThreadingUtilities.h:48
+  // scheduling work within a worker thread.
+  if (threadPool.isWorkerThread())
+    return std::for_each(begin, end, func);
----------------
Please merge this into the previous "if", so the compiler doesn't have to deduplicate the two std::for_each calls.
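
Something like this (a sketch; the other half of the guard is paraphrased from the patch and the exact condition may differ):

  // Run serially if multithreading is disabled, or if we are already inside
  // a worker thread (scheduling nested work there risks deadlock).
  if (!context->isMultithreadingEnabled() || threadPool.isWorkerThread())
    return std::for_each(begin, end, func);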


================
Comment at: mlir/include/mlir/IR/ThreadingUtilities.h:85
+template <typename RangeT, typename FuncT>
+void parallelForEach(MLIRContext *context, RangeT &&range, FuncT &&func) {
+  parallelForEach(context, std::begin(range), std::end(range),
----------------
Could you also add a parallelForEachN while you're here?  CIRCT uses it in a couple of places.
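
For reference, a sketch of what that could look like, assuming llvm/ADT/Sequence.h is available and building on the range overload above (the exact signature is up to you):

  template <typename FuncT>
  void parallelForEachN(MLIRContext *context, size_t begin, size_t end,
                        FuncT &&func) {
    // Forward the integer range [begin, end) to the range-based overload.
    parallelForEach(context, llvm::seq<size_t>(begin, end),
                    std::forward<FuncT>(func));
  }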


================
Comment at: mlir/lib/Pass/Pass.cpp:591-593
+  std::vector<std::atomic<bool>> activePMs(asyncExecutors.size());
+  for (std::atomic<bool> &isActive : activePMs)
+    isActive = false;
----------------
bondhugula wrote:
> std::vector<std::atomic<bool>> activePMs(asyncExecutors.size(), 
>                                          std::atomic<bool>{false});
> 
Right, std::atomic needs to be explicitly initialized to work correctly.
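
i.e. something like this (a sketch of the pattern in the patch; note that std::atomic is not copyable, so the vector fill constructor can't be used here, and before C++20 a default-constructed std::atomic may hold an indeterminate value):

  // Construct the atomics first, then explicitly store an initial value.
  std::vector<std::atomic<bool>> activePMs(asyncExecutors.size());
  for (std::atomic<bool> &isActive : activePMs)
    isActive = false;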


================
Comment at: mlir/test/Dialect/Affine/SuperVectorize/compose_maps.mlir:1
-// RUN: mlir-opt -allow-unregistered-dialect %s -affine-super-vectorizer-test -compose-maps 2>&1 |  FileCheck %s
+// RUN: mlir-opt -allow-unregistered-dialect %s -affine-super-vectorizer-test -compose-maps -mlir-disable-threading 2>&1 |  FileCheck %s
 
----------------
Why do these need to disable threading?


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D104516/new/

https://reviews.llvm.org/D104516


