[Openmp-dev] OpenMP code and thread priority on OSX

Stéphane Letz via Openmp-dev openmp-dev at lists.llvm.org
Thu Nov 26 10:16:33 PST 2015


> 
> % cd runtime/src
> % grep pthread_create *.*
> kmp_i18n.h:        KMP_SYSFAIL( "pthread_create", status );
> kmp_i18n.h:        KMP_CHECK_SYSFAIL( "pthread_create", status );
> z_Linux_util.c:            status = pthread_create( & handle, & thread_attr, __kmp_launch_worker, (void *) th );
> z_Linux_util.c:                KMP_SYSFAIL( "pthread_create", status );
> z_Linux_util.c:    status = pthread_create( &handle, & thread_attr, __kmp_launch_monitor, (void *) th );
> z_Linux_util.c:        KMP_SYSFAIL( "pthread_create", status );
> 
> (Yup, z_Linux_util.c is counterintuitive…)

Thanks, I'll have a look and try to patch this code.
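For reference, the usual way audio code promotes a thread to real-time priority on OSX is the Mach time-constraint policy. Below is a minimal sketch of such a helper, to be called from the thread itself; the function name and the timing constants are placeholders, and the real values would have to be derived from the audio buffer duration. Where exactly this would fit in the runtime (for instance at the start of the worker launch routine) is part of what needs to be checked.

#include <stdint.h>
#include <pthread.h>
#include <mach/mach.h>
#include <mach/thread_policy.h>
#include <mach/mach_time.h>

/* Hypothetical helper: promote the calling thread to the Mach
   time-constraint (real-time) class.  The period/computation/constraint
   values are placeholders and should be derived from the actual audio
   buffer duration. */
static int set_realtime_priority(void)
{
    mach_timebase_info_data_t tb;
    mach_timebase_info(&tb);

    /* Time-constraint values are expressed in Mach absolute-time units,
       so convert from nanoseconds. */
    double ns_to_abs = (double)tb.denom / (double)tb.numer;

    thread_time_constraint_policy_data_t policy;
    policy.period      = (uint32_t)(2900000.0 * ns_to_abs); /* ~2.9 ms, placeholder */
    policy.computation = (uint32_t)( 700000.0 * ns_to_abs); /* placeholder */
    policy.constraint  = (uint32_t)(2900000.0 * ns_to_abs); /* placeholder */
    policy.preemptible = 1;

    kern_return_t kr = thread_policy_set(pthread_mach_thread_np(pthread_self()),
                                         THREAD_TIME_CONSTRAINT_POLICY,
                                         (thread_policy_t)&policy,
                                         THREAD_TIME_CONSTRAINT_POLICY_COUNT);
    return (kr == KERN_SUCCESS) ? 0 : -1;
}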

> 
>> What do you mean by "not designed for concurrency, but for parallelism" in this specific use context?
> Concurrency is handling multiple asynchronous events, and can be a useful way to structure code even when one has only a single hardware thread, for instance to execute callbacks when the user presses a button in a GUI or a network packet arrives.
> 
> Parallelism is using many hardware threads to reduce the time to solution of a single problem. It is futile if you only have one hardware thread.
> 
> OpenMP is designed for parallelism. It works best when it controls all the hardware and there is nothing else going on. In your case there clearly are other things going on (otherwise changing the priority wouldn't be necessary). In such an environment OpenMP may not work well. 
> Amongst the reasons are
> 1) Many OpenMP codes use static work distribution. That assumes both that the work is evenly distributed between threads, and that the threads execute at the same speed. If a thread is stolen away by the OS, that second assumption is false, at which point all the other threads have to wait at the next join or barrier for the laggard to arrive.
> 2) Even if you use dynamic scheduling, the definition of OpenMP barriers (and join) is that all the threads must arrive at the barrier, not that all the work has to be complete. So if the laggard is still not executing OpenMP code, then even though all the work is complete, all the other threads still have to wait.
> 
> -- Jim
> 

Well, our OpenMP code is basically a DAG of linked audio tasks (audio data flowing between the tasks) that we express as a sequence of parallel sections (#pragma omp sections with #pragma omp section inside). Within a given parallel section the tasks are usually quite similar and should run at roughly the same speed, and the parallel sections are then synchronized with barriers. Since we are in an RT context, we can assume that no other non-RT thread will steal the CPU during the pure audio computation, or if there is one, it would be another audio RT thread (or any other RT thread…), and this is acceptable, since all audio RT threads simply try to meet their timing deadlines.
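To make the structure concrete, here is a minimal sketch of what two layers of such a graph look like with sections; the task names task_a/task_b/task_c are of course hypothetical:

#include <omp.h>

/* Hypothetical audio tasks; in the real code these are the nodes of the DAG. */
void task_a(float* buffer, int frames);
void task_b(float* buffer, int frames);
void task_c(float* buffer, int frames);

/* Each layer of the DAG becomes one sections construct, and the implicit
   barrier at the end of the construct synchronizes the layer before the
   next one starts. */
void process_graph(float* buffer, int frames)
{
    #pragma omp parallel
    {
        /* First layer: independent tasks of similar cost. */
        #pragma omp sections
        {
            #pragma omp section
            task_a(buffer, frames);
            #pragma omp section
            task_b(buffer, frames);
        } /* implicit barrier: the whole layer is done here */

        /* Second layer: depends on the results of the first one. */
        #pragma omp sections
        {
            #pragma omp section
            task_c(buffer, frames);
        }
    }
}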

I guess our code still uses static scheduling; I will also test dynamic scheduling.
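Since the schedule clause only applies to loop worksharing, testing dynamic scheduling would probably mean expressing each layer as a loop over its tasks rather than as sections, something like the following sketch (type and function names are hypothetical):

/* Hypothetical loop form of one DAG layer: if the tasks of a layer are
   stored in an array, the layer becomes a worksharing loop and the
   schedule clause chooses between static and dynamic distribution. */
typedef void (*audio_task_fn)(float* buffer, int frames);

void run_layer(audio_task_fn* tasks, int n, float* buffer, int frames)
{
    /* schedule(dynamic, 1): a thread grabs the next task only when it has
       finished the previous one, so a thread delayed by the OS does not
       sit on pre-assigned work. */
    #pragma omp parallel for schedule(dynamic, 1)
    for (int i = 0; i < n; ++i) {
        tasks[i](buffer, frames);
    }
}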

Stéphane