[Openmp-dev] counts in the work sharing

Abid Malik via Openmp-dev openmp-dev at lists.llvm.org
Mon Mar 16 11:53:26 PDT 2020


My apologies. I think I didn't receive Jim's email.

On Mon, Mar 16, 2020 at 2:11 PM Churbanov, Andrey <
Andrey.Churbanov at intel.com> wrote:

> A couple of weeks ago Jim Cownie <jcownie at gmail.com> effectively answered
> your question.
>
>
>
> Let me try to re-iterate:
>
> > For each static_init_4 call, there is a static_fini call. Is this
> specific to its static_init_4 call?
>
> Yes.
>
>
>
> > Is there a pairing of calls that can not be broken?
>
> Yes, though I am not completely sure what you are asking about here.  You
> can break the code any way you want, but why?
>
>
>
> > If we look at the static_fini call, the function arguments are the same.
>
> Same as what?  They probably carry the same location info and global thread
> id as the corresponding static_init call, if I got your question right.
>
>
>
> > It seems that if we have two static_init_4 function calls, we can
> interchange their static_fini calls. Does that make sense?
>
> Not sure what your goal is here… All such broken code generation can
> achieve is to confuse an OMPT tool with unspecified results, possibly
> break statistics gathering, and break consistency checking of the OpenMP
> constructs, - which is all the code of static_fini does.
>
>
>
> > If I move the static_fini call beyond Loop-B in IR; Should Loop-B be
> part of the Work Sharing environment and its iteration should be
> distributed similarly to Loop-A over the available threads?
>
> All you can achieve here is, again, broken code generation, and as a
> result a confused OMPT tool with unspecified results, etc.  The
> static_fini call does not affect parallelization in any way, so your second
> Loop-B will remain a serial loop and will be redundantly executed by
> all threads of the team, regardless of the location of the static_fini
> call.  To make a loop a worksharing loop, it must be parallelized (e.g. by
> adding “#pragma omp for” in the source code, given there is an enclosing
> parallel region).
>
>
>
> As Jim suggested earlier, look at the code!
>
>
>
> And please first describe the problem you want to solve, as opposed to
> trying to shuffle statements by guesswork.
>
>
>

If two #pragma omp for loops are compatible (i.e. they have the same
parameter values for the runtime calls), I am trying to run both with a
single pair of static_init and static_fini calls:

#pragma omp for
Loop-A

#pragma omp for
Loop-B

Loop-A and Loop-B are compatible (same iterations and chunk size).

---> will produce

call __kmpc_static_init4A()
Loop-A
call __kmpc_static_finiA()

call __kmpc_static_init4B()
Loop-B
call __kmpc_static_finiB()

---> want to achieve
call __kmpc_static_init4A()
Loop-A
//call __kmpc_static_finiA()
// Remove these two calls and adjust their uses as necessary
//call __kmpc_static_init4B()
Loop-B
call __kmpc_static_finiB()
This will avoid recalculating the chunk size, bounds, etc. for Loop-B.

Hope this helps!



> Regards,
>
> Andrey
>
>
>
> *From:* Abid Malik <abidmuslim at gmail.com>
> *Sent:* Monday, March 16, 2020 5:58 PM
> *To:* Churbanov, Andrey <Andrey.Churbanov at intel.com>
> *Cc:* openmp-dev at lists.llvm.org
> *Subject:* Re: [Openmp-dev] counts in the work sharing
>
>
>
> Thanks for the information.
>
>
>
> For each static_init_4 call, there is a static_fini call. Is this specific
> to its static_init_4 call? Is there a pairing of calls that can not be
> broken? If we look at the static_fini call, the function arguments are the
> same. It seems that if we have two static_init_4 function calls, we can
> interchange their static_fini calls. Does that make sense?
>
>
>
> Suppose we have two loops, Loop-A and Loop-B (Loop-A and Loop-B are
> identical in terms of iterations, chunk, increment, etc.), one with an
> omp for directive and the other without it:
>
>
>
> #pragma omp for
>
> {
>
>   Loop-A
>
> }
>
>
>
> Loop-B
>
>
>
>
>
> If I move the static_fini call beyond Loop-B in the IR, should Loop-B be
> part of the worksharing environment, with its iterations distributed over
> the available threads similarly to Loop-A?
>
>
>
> Thanks,
>
>
>
>
>
>
>
> On Mon, Mar 16, 2020 at 8:04 AM Churbanov, Andrey <
> Andrey.Churbanov at intel.com> wrote:
>
> For a static schedule the runtime does not track iterations.  The
> __kmpc_for_static_init_4 call (or similar, depending on the iteration
> variable type) is the only runtime call for a statically scheduled loop
> (not counting the leading barrier, if any). After it, the compiler just
> loops over the obtained range of iterations without involving the runtime
> any more.
>
>
>
> Regards,
>
> Andrey
>
>
>
> *From:* Openmp-dev <openmp-dev-bounces at lists.llvm.org> *On Behalf Of *Abid
> Malik via Openmp-dev
> *Sent:* Friday, March 13, 2020 9:35 PM
> *To:* openmp-dev at lists.llvm.org
> *Subject:* [Openmp-dev] counts in the work sharing
>
>
>
> Hello,
>
>
>
> I am looking into the __kmpc_for_static_init_4 function, which marks the
> start of work sharing. The function computes the upper and lower bounds and
> stride to be used for the set of iterations to be executed by the current
> thread from the statically scheduled loop that is described by the initial
> values of the bounds, stride, increment, and chunk size.  How is the record
> of iterations kept? Which function/file should I look into to keep track of
> the iterations?
>
>
>
>
>
> Thanks,
>
>
>
> --
>
> Abid M. Malik
> ******************************************************
> "I have learned silence from the talkative, toleration from the
> intolerant, and kindness from the unkind"---Gibran
> "Success is not for the chosen few, but for the few who choose" --- John
> Maxwell
> "Being a good person does not depend on your religion or status in life,
> your race or skin color, political views or culture. IT DEPENDS ON HOW GOOD
> YOU TREAT OTHERS"--- Abid
> "The Universe is talking to us, and the language of the Universe is
> mathematics."----Abid
>
>
>
> --------------------------------------------------------------------
> Joint Stock Company Intel A/O
> Registered legal address: Krylatsky Hills Business Park,
> 17 Krylatskaya Str., Bldg 4, Moscow 121614,
> Russian Federation
>
> This e-mail and any attachments may contain confidential material for
> the sole use of the intended recipient(s). Any review or distribution
> by others is strictly prohibited. If you are not the intended
> recipient, please contact the sender and delete all copies.
>
>
>
>
>
>
>

