[Openmp-dev] Debug assert triggered in OpenMP + MPI

Raúl Peñacoba Veigas via Openmp-dev openmp-dev at lists.llvm.org
Thu May 21 09:12:10 PDT 2020


Hello again,

I've managed to remove MPI from the equation, so it seems to be a race 
condition in the runtime itself. Here is the reduced reproducer:

int main(int argc, char **argv)
{
        int TIMESTEPS = 10;
        int BLOCKS = 100;

        int nranks = 4;

        /* Only used as a dependence object, never read or written. */
        int DATA;

        #pragma omp parallel
        #pragma omp single
        {
                for (int t = 0; t < TIMESTEPS; ++t) {
                        for (int r = 0; r < nranks; ++r) {
                                for (int b = 0; b < BLOCKS; ++b) {
                                        /* Many empty tasks reading DATA. */
                                        #pragma omp task depend(in: DATA)
                                        { }
                                }
                        }

                        /* One empty task writing DATA; it depends on all
                           the reader tasks created above. */
                        #pragma omp task depend(inout: DATA)
                        {}
                }
                #pragma omp taskwait
        }
}

To reproduce it, compile and run it like this:

clang -fopenmp t1.c -o t1

for i in {1..5000}; do echo $i; OMP_NUM_THREADS=3 ./t1; done
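
If it is useful, the loop can also be made to stop at the first failing 
run (assuming the failed assert aborts the process, so a bad run exits 
with a non-zero status):

for i in {1..5000}; do
        # stop as soon as ./t1 exits with a non-zero status
        OMP_NUM_THREADS=3 ./t1 || { echo "assert triggered at iteration $i"; break; }
done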

Regards,

Raúl

On 21/5/20 at 9:57, Raúl Peñacoba Veigas wrote:
> Hello everyone,
>
> While writing an OpenMP + MPI code, I triggered a debug assert in 
> __kmp_task_start:
>
> KMP_DEBUG_ASSERT(taskdata->td_flags.tasktype == TASK_EXPLICIT);
>
> I attach a simple reproducer that does not do anything special, along 
> with some additional info.
>
> #include <mpi.h>
>
> #include <assert.h>
> #include <stdio.h>
> #include <stdlib.h>
> #include <string.h>
> #include <unistd.h>
>
> int main(int argc, char **argv)
> {
>         int TIMESTEPS = 10;
>         int BLOCKS = 100;
>
>         MPI_Init(&argc, &argv);
>
>         int rank, nranks;
>         MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>         MPI_Comm_size(MPI_COMM_WORLD, &nranks);
>
>         int DATA;
>
>         #pragma omp parallel
>         #pragma omp single
>         {
>                 for (int t = 0; t < TIMESTEPS; ++t) {
>                         for (int r = 0; r < nranks; ++r) {
>                                 for (int b = 0; b < BLOCKS; ++b) {
>                                         #pragma omp task depend(in: DATA)
>                                         { }
>                                 }
>                         }
>
>                         #pragma omp task depend(inout: DATA)
>                         {}
>                 }
>                 #pragma omp taskwait
>         }
>
>         MPI_Finalize();
>
> }
>
> llvm-project debug build, commit aafdeeade8d
> MPICH Version: 3.3a2
> MPICH Release date: Sun Nov 13 09:12:11 MST 2016
>
> $ MPICH_CC=clang mpicc -fopenmp t1.c -o t1
> $ for i in {1..100}; do mpiexec.hydra -n 4 ./t1; done
>
>


