[all-commits] [llvm/llvm-project] 973cb2: [MLIR][OMP] Ensure nested scf.parallel execute all iterations

William Moses via All-commits all-commits at lists.llvm.org
Fri Aug 20 16:06:40 PDT 2021


  Branch: refs/heads/main
  Home:   https://github.com/llvm/llvm-project
  Commit: 973cb2c326be9f256da0897c4d2ef117dc22761d
      https://github.com/llvm/llvm-project/commit/973cb2c326be9f256da0897c4d2ef117dc22761d
  Author: William S. Moses <gh at wsmoses.com>
  Date:   2021-08-20 (Fri, 20 Aug 2021)

  Changed paths:
    M mlir/lib/Conversion/SCFToOpenMP/SCFToOpenMP.cpp
    M mlir/test/Conversion/SCFToOpenMP/scf-to-openmp.mlir

  Log Message:
  -----------
  [MLIR][OMP] Ensure nested scf.parallel execute all iterations

Presently, the lowering of nested scf.parallel loops to OpenMP creates a single omp.parallel region with two nested OpenMP worksharing loops inside it. When lowered to LLVM and executed, this produces incorrect results. The reason is as follows:

An OpenMP parallel region runs its code with whatever number of threads is available to OpenMP. Within a parallel region, a worksharing loop divides the total number of requested iterations by the number of available threads and distributes the chunks accordingly. For a single worksharing loop in a parallel region, this works as intended.
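For instance, in pseudocode (not exact omp.wsloop operand syntax), a single worksharing loop inside a parallel region looks like this, with each thread of the region receiving its own chunk of the iteration space:

omp.parallel {
  // With two threads: thread 0 runs iterations 0-4,
  // thread 1 runs iterations 5-9.
  omp.wsloop %i = 0...10 {
    code(%i)
  }
}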

Now consider nested worksharing loops as follows:

omp.parallel {
  A: omp.wsloop %i = 0...10 {
    B: omp.wsloop %j = 0...10 {
      code(%i, %j)
    }
  }
}

Suppose we run this on two threads. The first workshare loop (A) would assign iterations 0, 1, 2, 3, 4 to thread 0 and iterations 5, 6, 7, 8, 9 to thread 1. The second workshare loop (B) would divide its iterations the same way. Thread 0 would therefore execute i in [0, 5) with j in [0, 5), and thread 1 would execute i in [5, 10) with j in [5, 10); only 25 + 25 = 50 of the 100 iteration pairs run. The pairs with i in [5, 10), j in [0, 5) and with i in [0, 5), j in [5, 10) are never executed, which is clearly wrong.

This suggests two options for a remedy:
1) Change the semantics of omp.wsloop to be distinct from those of the OpenMP runtime call (or, equivalently, #pragma omp for). Some later lowering transformation could then remedy the issue above. I don't think this is desirable from an abstraction standpoint.
2) When lowering an scf.parallel, always surround the wsloop with a new parallel region, thereby causing the innermost wsloop to divide its iterations among only the threads available to it (see the sketch below).

This PR implements the latter change.
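As a sketch (in the same pseudocode notation as above, not exact omp.wsloop syntax), the example lowers as follows after this change:

omp.parallel {
  A: omp.wsloop %i = 0...10 {
    // A fresh parallel region: loop B now distributes all ten
    // %j iterations among the threads of this inner region, so
    // every (%i, %j) pair executes exactly once.
    omp.parallel {
      B: omp.wsloop %j = 0...10 {
        code(%i, %j)
      }
    }
  }
}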

Reviewed By: jdoerfert

Differential Revision: https://reviews.llvm.org/D108426



