<div dir="auto">Great, thanks! <div dir="auto"><br></div><div dir="auto">I appreciate it!</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, Oct 5, 2020, 04:22 Dvorskiy, Mikhail <<a href="mailto:mikhail.dvorskiy@intel.com">mikhail.dvorskiy@intel.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">





<div lang="EN-US" link="blue" vlink="purple">
<div class="m_8010053005455498717WordSection1">
<p class="MsoNormal">Hi Christopher,<u></u><u></u></p>
<p class="MsoNormal">I’ve double check the code of __pattern_is_partitioned (which is based on the reduction parallel pattern). Yes, a binary operation is not commutative. So, my hypo was right.<u></u><u></u></p>
<p class="MsoNormal">Generally speaking the writing “manually”  reduction pattern w/o OpenMP s reducer is not good approach due to it may be not effective. Indeed, if we consider your example – the second loop (for) combines the results in serial mode, and
 std::vector brings additional overheads…<u></u><u></u></p>
<p class="MsoNormal"><u></u> <u></u></p>
<p class="MsoNormal">Once more, OpenMP reduction requires commutative binary operation and it is right. With PSTL design perspective an algorithm pattern should not rely on a fact that a parallel reduction pattern (which is provided by a parallel backend) support
 a non-commutative binary operation. So, it is an issue of __pattern_is_partitioned and we will fix it. So while I would suggest don’t worry about that.<u></u><u></u></p>
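For concreteness, here is a reduced model of the kind of combiner is_partitioned needs; it is a hypothetical sketch, not the actual libc++ source. Each subrange reduces to a state, and the combine depends on which operand summarizes the left (earlier) subrange, so the operation is associative but not commutative:

enum class _State { _AllTrue, _AllFalse, _TrueThenFalse, _Broken };

// Hypothetical combiner: __left summarizes the earlier subrange, __right the
// later one. Swapping the arguments changes the result, so an order-agnostic
// OpenMP reduction may compute the wrong answer.
_State __combine_states(_State __left, _State __right)
{
    if (__left == _State::_Broken || __right == _State::_Broken)
        return _State::_Broken;
    switch (__left)
    {
    case _State::_AllTrue:
        // true... followed by true... stays all-true; followed by anything
        // else it becomes a (still partitioned) true-then-false sequence.
        return __right == _State::_AllTrue ? _State::_AllTrue : _State::_TrueThenFalse;
    case _State::_AllFalse:
        // false... may only be followed by more false... to stay partitioned.
        return __right == _State::_AllFalse ? _State::_AllFalse : _State::_Broken;
    default: // _TrueThenFalse
        return __right == _State::_AllFalse ? _State::_TrueThenFalse : _State::_Broken;
    }
}

// Example of the asymmetry:
//   __combine_states(_State::_AllTrue,  _State::_AllFalse) == _TrueThenFalse
//   __combine_states(_State::_AllFalse, _State::_AllTrue)  == _Broken
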
<p class="MsoNormal"><u></u> <u></u></p>
<p class="MsoNormal">Best regards,<u></u><u></u></p>
<p class="MsoNormal">Mikhail Dvorskiy<u></u><u></u></p>
<p class="MsoNormal"><u></u> <u></u></p>
<p class="MsoNormal"><b>From:</b> Christopher Nelson <<a href="mailto:nadiasvertex@gmail.com" target="_blank" rel="noreferrer">nadiasvertex@gmail.com</a>> <br>
<b>Sent:</b> Saturday, October 3, 2020 6:19 PM<br>
<b>To:</b> Dvorskiy, Mikhail <<a href="mailto:mikhail.dvorskiy@intel.com" target="_blank" rel="noreferrer">mikhail.dvorskiy@intel.com</a>><br>
<b>Cc:</b> Kukanov, Alexey <<a href="mailto:Alexey.Kukanov@intel.com" target="_blank" rel="noreferrer">Alexey.Kukanov@intel.com</a>>; Pavlov, Evgeniy <<a href="mailto:evgeniy.pavlov@intel.com" target="_blank" rel="noreferrer">evgeniy.pavlov@intel.com</a>>; Louis Dionne <<a href="mailto:ldionne@apple.com" target="_blank" rel="noreferrer">ldionne@apple.com</a>>; Thomas Rodgers <<a href="mailto:trodgers@redhat.com" target="_blank" rel="noreferrer">trodgers@redhat.com</a>>; Libc++ Dev <<a href="mailto:libcxx-dev@lists.llvm.org" target="_blank" rel="noreferrer">libcxx-dev@lists.llvm.org</a>><br>
<b>Subject:</b> Re: [libcxx-dev] OpenMP parallel reduce bugs<u></u><u></u></p>
<p class="MsoNormal"><u></u> <u></u></p>
Hello again,

I was able to rewrite the parallel_reduce function in a way that works without using OpenMP's reducer. I have a couple of questions:

1. I use a vector to gather the intermediate results for later reduction. Is there any problem with depending on vector here?

2. I can see that it might make sense to build a taskloop for the actual reduction step if the number of chunks is quite large. Is that something I should look into more? (A sketch of what that might look like follows the code.)

The code is below. Please let me know if you have any questions or concerns.

//------------------------------------------------------------------------
// parallel_reduce
//------------------------------------------------------------------------

template <class _RandomAccessIterator, class _Value, typename _RealBody, typename _Reduction>
_Value
__parallel_reduce_body(_RandomAccessIterator __first, _RandomAccessIterator __last, _Value __identity,
                       _RealBody __real_body, _Reduction __reduction)
{
    std::size_t __n_chunks{0}, __chunk_size{0}, __first_chunk_size{0};
    __chunk_partitioner(__first, __last, __n_chunks, __chunk_size, __first_chunk_size);

    std::vector<_Value> __values(__n_chunks);

    // To avoid over-subscription we use taskloop for the nested parallelism
    _PSTL_PRAGMA(omp taskloop shared(__values))
    for (std::size_t __chunk = 0; __chunk < __n_chunks; ++__chunk)
    {
        auto __this_chunk_size = __chunk == 0 ? __first_chunk_size : __chunk_size;
        auto __index = __chunk == 0 ? 0 : (__chunk * __chunk_size) + (__first_chunk_size - __chunk_size);
        auto __begin = __first + __index;
        auto __end = __begin + __this_chunk_size;
        __values[__chunk] = __real_body(__begin, __end, __identity);
    }

    auto __result = __values.front();
    for (auto p = __values.begin() + 1; p != __values.end(); ++p)
    {
        __result = __reduction(__result, *p);
    }

    return __result;
}

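To make question 2 concrete, here is a rough, untested sketch of what a parallel combine step might look like (__tree_reduce is a hypothetical helper, not part of the patch). A pairwise tree reduction merges only adjacent partial results, so the left-to-right operand order is preserved and the binary operation still need not be commutative:

template <class _Value, class _Reduction>
_Value
__tree_reduce(std::vector<_Value>& __values, _Reduction __reduction)
{
    // Each pass halves the number of live partial results; every merge pairs
    // a value with its immediate right-hand neighbor, preserving order.
    for (std::size_t __stride = 1; __stride < __values.size(); __stride *= 2)
    {
        _PSTL_PRAGMA(omp taskloop shared(__values))
        for (std::size_t __i = 0; __i + __stride < __values.size(); __i += 2 * __stride)
        {
            __values[__i] = __reduction(__values[__i], __values[__i + __stride]);
        }
    }
    return __values.front();
}
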
On Fri, Oct 2, 2020 at 1:33 PM Christopher Nelson <nadiasvertex@gmail.com> wrote:

<p class="MsoNormal">Thank you. I wondered if you had an update on this. I've done some further looking, and I think that is correct. I've tried to find example implementations of performing reductions with openmp that don't require a commutative operator.
 It seems like rewriting the is_partioned algorithm to provide a commutative operator might be a larger / undesirable change.<u></u><u></u></p>
<div>
<p class="MsoNormal"><u></u> <u></u></p>
</div>
<div>
<p class="MsoNormal">Do you have any guidance on manually writing a task loop in openmp that performs the reduction without requiring commutativity?<u></u><u></u></p>
</div>
<div>
<p class="MsoNormal"><u></u> <u></u></p>
</div>
<div>
<p class="MsoNormal">Thanks!<u></u><u></u></p>
</div>
<div>
<p class="MsoNormal"><u></u> <u></u></p>
</div>
<div>
<p class="MsoNormal">-={C}=-<u></u><u></u></p>
</div>
</div>
<p class="MsoNormal"><u></u> <u></u></p>
<div>
<div>
<p class="MsoNormal">On Thu, Oct 1, 2020 at 9:11 AM Dvorskiy, Mikhail <<a href="mailto:mikhail.dvorskiy@intel.com" target="_blank" rel="noreferrer">mikhail.dvorskiy@intel.com</a>> wrote:<u></u><u></u></p>
</div>
<blockquote style="border:none;border-left:solid #cccccc 1.0pt;padding:0in 0in 0in 6.0pt;margin-left:4.8pt;margin-right:0in">
<div>
<div>
<p class="MsoNormal">Hi Christopher,<u></u><u></u></p>
<p class="MsoNormal"> <u></u><u></u></p>
<p class="MsoNormal">Yes,  <span style="font-size:10.0pt;font-family:"Segoe UI",sans-serif;color:black">“is_partitioned” algo implementation is based on a reduction parallel pattern.  </span><u></u><u></u></p>
<p class="MsoNormal"><span style="font-size:10.0pt;font-family:"Segoe UI",sans-serif;color:black">And it looks that a binary operation (combiner)  is not commutative.</span><u></u><u></u></p>
<p class="MsoNormal"><span style="font-size:10.0pt;font-family:"Segoe UI",sans-serif;color:black"> </span><u></u><u></u></p>
<p class="MsoNormal"><span style="font-size:10.0pt;font-family:"Segoe UI",sans-serif;color:black">In general, “reduction” algorithm requires a commutative binary operation. And OpenMP reduction requires
 that.</span><u></u><u></u></p>
<p class="MsoNormal"><span style="font-size:10.0pt;font-family:"Segoe UI",sans-serif;color:black">For TBB backend it works because TBB parallel reduction algorithm doesn’t require a commutative binary
 operation. </span><u></u><u></u></p>
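As a minimal illustration of that difference (my own example, not taken from the PSTL sources): concatenation is associative but not commutative, yet tbb::parallel_reduce handles it, because TBB only ever joins a subrange's result with the result of the adjacent subrange to its right:

#include <tbb/blocked_range.h>
#include <tbb/parallel_reduce.h>
#include <string>
#include <vector>

// Illustrative only: TBB joins adjacent subranges in left-to-right order,
// so this deterministically returns the characters of __v in order, even
// though the join (string +) is not commutative.
std::string __concat_all(const std::vector<char>& __v)
{
    return tbb::parallel_reduce(
        tbb::blocked_range<std::size_t>(0, __v.size()), std::string{},
        [&](const tbb::blocked_range<std::size_t>& __r, std::string __acc) {
            for (std::size_t __i = __r.begin(); __i != __r.end(); ++__i)
                __acc.push_back(__v[__i]);
            return __acc;
        },
        [](const std::string& __a, const std::string& __b) { return __a + __b; });
}
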
<p class="MsoNormal"> <u></u><u></u></p>
<p class="MsoNormal">We (me or Evgeniy) will check that hypo and inform you.<u></u><u></u></p>
<p class="MsoNormal"> <u></u><u></u></p>
<p class="MsoNormal">Best regards,<u></u><u></u></p>
<p class="MsoNormal">Mikhail Dvorskiy<u></u><u></u></p>
<p class="MsoNormal"> <u></u><u></u></p>
<p class="MsoNormal"><b>From:</b> Christopher Nelson <<a href="mailto:nadiasvertex@gmail.com" target="_blank" rel="noreferrer">nadiasvertex@gmail.com</a>>
<br>
<b>Sent:</b> Thursday, October 1, 2020 2:46 AM<br>
<b>To:</b> Kukanov, Alexey <<a href="mailto:Alexey.Kukanov@intel.com" target="_blank" rel="noreferrer">Alexey.Kukanov@intel.com</a>><br>
<b>Cc:</b> Dvorskiy, Mikhail <<a href="mailto:mikhail.dvorskiy@intel.com" target="_blank" rel="noreferrer">mikhail.dvorskiy@intel.com</a>>; Pavlov, Evgeniy <<a href="mailto:evgeniy.pavlov@intel.com" target="_blank" rel="noreferrer">evgeniy.pavlov@intel.com</a>>; Louis Dionne <<a href="mailto:ldionne@apple.com" target="_blank" rel="noreferrer">ldionne@apple.com</a>>;
 Thomas Rodgers <<a href="mailto:trodgers@redhat.com" target="_blank" rel="noreferrer">trodgers@redhat.com</a>>; Libc++ Dev <<a href="mailto:libcxx-dev@lists.llvm.org" target="_blank" rel="noreferrer">libcxx-dev@lists.llvm.org</a>><br>
<b>Subject:</b> [libcxx-dev] OpenMP parallel reduce bugs<u></u><u></u></p>
<p class="MsoNormal"> <u></u><u></u></p>
Hello friends,

I have been working on the OpenMP backend for the parallel STL, and most of the tests are passing. However, among the failures is the "is_partitioned" test. I have rewritten the __parallel_reduce backend function to be simpler to understand, in an attempt to see what is failing (code is below).

I also rewrote it as a serial function that splits the iteration range in two and then calls __reduction() on each half of the range being passed in. The result I get from the serial execution differs from the result I get from the parallel execution. (A sketch of that serial check follows.)

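The serial check looked roughly like this (a hypothetical reconstruction from the description above, using the same parameter conventions as __parallel_reduce_body below):

template <class _RandomAccessIterator, class _Value, typename _RealBody, typename _Reduction>
_Value
__serial_reduce_body(_RandomAccessIterator __first, _RandomAccessIterator __last, _Value __identity,
                     _RealBody __real_body, _Reduction __reduction)
{
    // Reduce each half of the range with __real_body, then combine the two
    // partial results in left-to-right order with __reduction.
    auto __mid = __first + (__last - __first) / 2;
    return __reduction(__real_body(__first, __mid, __identity),
                       __real_body(__mid, __last, __identity));
}
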
I have verified that the parallel execution tasks are run, and that their results match what each serial execution would produce if I ran them that way.

I am wondering if there is something wrong with the way OpenMP is running the reducer here. Perhaps it is injecting a value into the computation that is unexpected by this algorithm? Does anything jump out at anyone as suspicious?

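One way ordering alone can change the answer, as a standalone hypothetical repro (OpenMP 4.0+ user-defined reduction, not code from the backend): with a UDR, each private copy starts from the initializer, and OpenMP may merge the private partial results in an unspecified order, which only commutative operations can tolerate:

#include <string>

// Hypothetical sketch: concatenation is associative but NOT commutative.
// Each thread's private __s starts from the initializer; the order in which
// OpenMP merges the private copies is unspecified, so the chunks of the
// result may come out permuted. An order-sensitive combiner such as the one
// behind is_partitioned can break in exactly the same way.
#pragma omp declare reduction(__concat : std::string : omp_out = omp_out + omp_in) \
    initializer(omp_priv = std::string())

std::string __concat_digits()
{
    std::string __s;
    #pragma omp parallel for reduction(__concat : __s)
    for (int __i = 0; __i < 8; ++__i)
        __s += static_cast<char>('0' + __i);
    return __s; // "01234567" is NOT guaranteed; e.g. "45670123" is possible
}
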
Thank you again for your time and assistance!

template <class _RandomAccessIterator, class _Value, typename _RealBody, typename _Reduction>
_Value
__parallel_reduce_body(_RandomAccessIterator __first, _RandomAccessIterator __last, _Value __identity,
                       _RealBody __real_body, _Reduction __reduction)
{
    std::size_t __item_count = __last - __first;
    std::size_t __head_items = (__item_count / __default_chunk_size) * __default_chunk_size;

    // We should encapsulate a result value and a reduction operator since we
    // cannot use a lambda in OpenMP UDR.
    using _CombinerType = __pstl::__internal::_Combiner<_Value, _Reduction>;
    _CombinerType __result{__identity, &__reduction};
    _PSTL_PRAGMA_DECLARE_REDUCTION(__combiner, _CombinerType)

    // To avoid over-subscription we use taskloop for the nested parallelism
    //_PSTL_PRAGMA(omp taskloop reduction(__combiner : __result))
    for (std::size_t __i = 0; __i < __item_count; __i += __default_chunk_size)
    {
        auto __begin = __first + __i;
        auto __end = __i < __head_items ? __begin + __default_chunk_size : __last;
        __result.__value = __real_body(__begin, __end, __identity);
    }

    return __result.__value;
}