[llvm-bugs] [Bug 50681] New: OMP_PLACES/OMP_PROC_BIND not respecting Sub-NUMA Clustering

via llvm-bugs llvm-bugs at lists.llvm.org
Fri Jun 11 06:56:43 PDT 2021


https://bugs.llvm.org/show_bug.cgi?id=50681

            Bug ID: 50681
           Summary: OMP_PLACES/OMP_PROC_BIND not respecting Sub-NUMA
                    Clustering
           Product: OpenMP
           Version: unspecified
          Hardware: PC
                OS: Linux
            Status: NEW
          Severity: normal
          Priority: P
         Component: Runtime Library
          Assignee: unassignedbugs at nondot.org
          Reporter: jannis.klinkenberg at rwth-aachen.de
                CC: llvm-bugs at lists.llvm.org

Hi,

I am wondering about the semantics of spread binding for OpenMP threads on
machines with multiple NUMA domains.

We have the following Intel machines with Sub-NUMA Clustering enabled and
Hyper-Threading disabled:

```
$ numactl -H | grep -e cpus -e nodes
available: 4 nodes (0-3)
node 0 cpus: 0 1 2 6 7 8 12 13 14 18 19 20
node 1 cpus: 3 4 5 9 10 11 15 16 17 21 22 23
node 2 cpus: 24 25 26 30 31 32 36 37 38 42 43 44
node 3 cpus: 27 28 29 33 34 35 39 40 41 45 46 47
```

Initially, I assumed that an application executed with `OMP_NUM_THREADS=4
OMP_PLACES=cores OMP_PROC_BIND=spread` would distribute its OpenMP threads
evenly across the NUMA nodes.

This is not the case: OMP_PLACES=cores seems to generate a contiguous place list
over cores 0-47, and the threads are then placed on CPUs 0, 12, 24, 36. Per the
topology above, that puts two threads on node 0 and two on node 2, leaving
nodes 1 and 3 empty.
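
For reference, the resulting binding can be inspected directly, without numactl:
recent libomp versions implement the OpenMP 5.0 OMP_DISPLAY_AFFINITY environment
variable, which prints every thread's affinity at startup (a sketch; `./app`
stands in for any OpenMP binary):
```
# Sketch: OMP_DISPLAY_AFFINITY (OpenMP 5.0) makes the runtime print each
# thread's binding at startup; ./app is a placeholder for any OpenMP binary.
OMP_NUM_THREADS=4 OMP_PLACES=cores OMP_PROC_BIND=spread \
OMP_DISPLAY_AFFINITY=true ./app
```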

The only way to work around this at the moment is to run:
```
export OMP_PLACES=`numactl -H | grep cpus | \
  awk '(NF>3) {for (i = 4; i <= NF; i++) printf "%d,", $i}' | sed 's/.$//'`

OMP_NUM_THREADS=4 OMP_PROC_BIND=spread ./app
```

Then the threads are assigned to CPUs 0, 3, 24, 27, one per NUMA node, as
intended. I am not sure whether this is a bug or just a weakness of the way
cores and spread currently interact.
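
An alternative sketch that avoids the shell pipeline is to define one explicit
place per NUMA node (CPU lists copied from the numactl output above), so that
spread puts exactly one thread on each node:
```
# Sketch: one place per NUMA node, CPU lists copied from `numactl -H` above.
# With 4 threads spread over 4 places, each thread lands on its own node.
export OMP_PLACES='{0,1,2,6,7,8,12,13,14,18,19,20},{3,4,5,9,10,11,15,16,17,21,22,23},{24,25,26,30,31,32,36,37,38,42,43,44},{27,28,29,33,34,35,39,40,41,45,46,47}'
OMP_NUM_THREADS=4 OMP_PROC_BIND=spread ./app
```
The trade-off is that each thread may then float across all cores of its node,
since the whole node is a single place.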
