[LLVMdev] AESOP autoparallelizing compiler
Rahul
rahul3527 at gmail.com
Mon Mar 11 20:55:10 PDT 2013
Hi Timothy,
Thanks for the quick reply :)
I disabled inlining in ~/aesop/blank/Makefile. Now function "main" calls
function "func1" with two arguments that alias, but AESOP still goes ahead and
parallelizes the loop in function "func".
==============
func1(A, A+1);
==============
So, irrespective of whether AESOP inlines func1/func, given the call above it
shouldn't have parallelized the loop in function "func" (called from func1).
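
To make it concrete, the modified test looks roughly like this (a sketch; the
array initialization and the printf are just for illustration, func and func1
are the same as in my example quoted below):
==============
#include <stdio.h>
#define N 1024

/* func and func1 are unchanged from the example quoted below. */
void func(double *A, double *B)
{
    int i;
    for (i = 1; i < N-2; i++) {
        B[i] = A[i] + i*3;
    }
}

void func1(double *A, double *B)
{
    func(A, B);
}

int main(void)
{
    double A[N];
    int i;

    for (i = 0; i < N; i++)
        A[i] = i;

    /* B is A+1, so inside func B[i] is A[i+1]: iteration i writes the value
       that iteration i+1 reads, which is a real loop-carried dependence. */
    func1(A, A + 1);

    printf("A[10] = %lf\n", A[10]);
    return 0;
}
==============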
Thanks,
Rahul
On Mon, Mar 11, 2013 at 7:15 PM, Timothy Mattausch Creech
<tcreech at umd.edu> wrote:
> Hi Rahul,
> Thanks for your interest!
>
> Our work does not attempt to make any significant contributions to alias
> analysis, and acts as a client to existing LLVM AA. Furthermore, the
> options passed to the AESOP frontend scripts are obeyed at compile time,
> but at link time certain transformations occur unconditionally.
>
> Here, AESOP has actually thwarted your experiment by performing inlining
> just before link-time. You can disable this in ~/aesop/blank/Makefile. It
> does this unconditionally as a standard set of pre-passes to help the
> analysis as much as possible. As a result, AESOP knows that A and B do not
> alias and is correctly parallelizing here.
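>
> For illustration, after that inlining the loop effectively operates on the
> two distinct locals from main, roughly like this at the C level (a sketch of
> the inlined form, not AESOP's actual IR):
> ==============
> #include <stdio.h>
> #define N 1024
>
> /* Rough C-level picture of main once func1 and func have been inlined
>    (an illustration only, not what AESOP actually emits). */
> int main(void)
> {
>     double data[N];
>     double data1[N];
>     double result = 0;
>     int i;
>
>     for (i = 0; i < N; i++) {
>         result += i*3;
>         data[i] = result;
>     }
>
>     /* The inlined loop reads data[] and writes data1[], two distinct local
>        arrays, so the alias analysis can prove the accesses never overlap. */
>     for (i = 1; i < N-2; i++) {
>         data1[i] = data[i] + i*3;
>     }
>
>     printf(" Data[10] = %lf\n", data[10]);
>     printf("Data1[10] = %lf\n", data1[10]);
>
>     return 0;
> }
> ==============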
>
> Best,
> Tim
>
> On Mar 11, 2013, at 8:59 AM, Rahul wrote:
>
> > Hi Timothy,
> >
> > Today I happened to download the code and do some experiments.
> > I actually wanted to see how you handle interprocedural alias analysis.
> > So, I set the inline threshold to zero and tried out the following example:
> > ===============================================
> > #include <stdio.h>
> > #define N 1024
> >
> > void func(double *A, double *B)
> > {
> >     int i;
> >     for (i = 1; i < N-2; i++) {
> >         B[i] = A[i] + i*3;
> >     }
> > }
> >
> > void func1(double *A, double *B)
> > {
> >     func(A, B);
> > }
> >
> > int main(void)
> > {
> >     double data[N];
> >     double data1[N];
> >     double result = 0;
> >     int i;
> >
> >     for (i = 0; i < N; i++) {
> >         result += i*3;
> >         data[i] = result;
> >     }
> >     func1(data, data1);
> >
> >     printf(" Data[10] = %lf\n", data[10]);
> >     printf("Data1[10] = %lf\n", data1[10]);
> >
> >     return 0;
> > }
> > ===============================================
> > I got the following parallelization info after compiling:
> >
> > Loop main:for.body has a loop carried scalar dependence, hence will not be parallelized
> > Loop func:for.body carries no dependence, hence is being parallelized
> >
> > Since the arguments A and B of function "func" may alias (as far as "func"
> > alone can tell), it shouldn't have been parallelized.
> >
> > Can you please let me know how you compute dependences?
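> >
> > As a side question: when interprocedural information is not available, would
> > C99 restrict qualifiers on the parameters be the intended way to tell the
> > compiler that the arguments don't alias? Something like the following (just a
> > guess on my side, I haven't actually tried this with AESOP):
> > ==============
> > #define N 1024
> >
> > /* C99 restrict: a promise to the compiler that, within func, A and B
> >    never point to overlapping memory. */
> > void func(double * restrict A, double * restrict B)
> > {
> >     int i;
> >     for (i = 1; i < N-2; i++) {
> >         B[i] = A[i] + i*3;
> >     }
> > }
> > ==============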
> >
> >
> > Thanks,
> > Rahul
> >
> >
> >
> > On Sun, Mar 10, 2013 at 7:28 AM, Timothy Mattausch Creech <tcreech at umd.edu> wrote:
> > On Mon, Mar 04, 2013 at 03:01:15PM +0800, 陳韋任 (Wei-Ren Chen) wrote:
> > > Hi Timothy,
> > >
> > > > We would like to inform the community that we're releasing a version of
> > > > our research compiler, "AESOP", developed at UMD using LLVM. AESOP is a
> > > > distance-vector-based autoparallelizing compiler for shared-memory
> > > > machines. The source code and some further information are available at
> > > >
> > > > http://aesop.ece.umd.edu
> > > >
> > > > The main components of the released implementation are loop memory
> > > > dependence analysis and parallel code generation using calls to POSIX
> > > > threads. Since we currently have only a 2-person development team, we are
> > > > still on LLVM 3.0, and some of the code could use some cleanup. Still, we
> > > > hope that the work will be of interest to some.
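> > > >
> > > > To give a flavor of the code generation side, here is a hand-written
> > > > sketch (not actual AESOP output) of how a loop like func's can be
> > > > parallelized with POSIX threads by splitting its iteration space across
> > > > worker threads:
> > > > ==============
> > > > /* Hand-written sketch of pthread-based loop parallelization (an
> > > >    illustration only, not actual AESOP output): the iterations of the
> > > >    func loop are divided statically among worker threads. */
> > > > #include <pthread.h>
> > > > #define N 1024
> > > > #define NTHREADS 4
> > > >
> > > > struct chunk { double *A, *B; int lo, hi; };
> > > >
> > > > static void *worker(void *arg)
> > > > {
> > > >     struct chunk *c = arg;
> > > >     int i;
> > > >     for (i = c->lo; i < c->hi; i++)
> > > >         c->B[i] = c->A[i] + i*3;
> > > >     return NULL;
> > > > }
> > > >
> > > > void func_parallel(double *A, double *B)
> > > > {
> > > >     pthread_t tid[NTHREADS];
> > > >     struct chunk c[NTHREADS];
> > > >     int t, lo = 1, step = (N - 3) / NTHREADS;   /* iterations 1 .. N-3 */
> > > >
> > > >     for (t = 0; t < NTHREADS; t++) {
> > > >         c[t].A = A;
> > > >         c[t].B = B;
> > > >         c[t].lo = lo;
> > > >         c[t].hi = (t == NTHREADS - 1) ? N - 2 : lo + step;
> > > >         lo = c[t].hi;
> > > >         pthread_create(&tid[t], NULL, worker, &c[t]);
> > > >     }
> > > >     for (t = 0; t < NTHREADS; t++)
> > > >         pthread_join(tid[t], NULL);
> > > > }
> > > > ==============
> > > > (Compile with -pthread; func_parallel here plays the role of a
> > > > transformed func.)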
> > >
> > > Do you have any data showing how much parallelism AESOP can extract
> > > from those benchmarks? :)
> > >
> > > Regards,
> > > chenwj
> > >
> > > --
> > > Wei-Ren Chen (陳韋任)
> > > Computer Systems Lab, Institute of Information Science,
> > > Academia Sinica, Taiwan (R.O.C.)
> > > Tel:886-2-2788-3799 #1667
> > > Homepage: http://people.cs.nctu.edu.tw/~chenwj
> >
> > Hi Wei-Ren,
> > Sorry for the slow response. We're working on a short tech report which
> > will be up on the website in April. This will contain a "results" section,
> > including results from the SPEC benchmarks which we can't include in the
> > source distribution.
> >
> > Briefly, I can say that we get good speedups on some of the NAS and SPEC
> > benchmarks, such as a 3.6x+ speedup on 4 cores on the serial versions of
> > NAS "CG" (Fortran) and "lbm" (C) from CPU2006. (These are of course among
> > our best results.)
> >
> > -Tim
> > _______________________________________________
> > LLVM Developers mailing list
> > LLVMdev at cs.uiuc.edu http://llvm.cs.uiuc.edu
> > http://lists.cs.uiuc.edu/mailman/listinfo/llvmdev
> >
> >
> >
> > --
> > Regards,
> > Rahul Patil.
>
>
--
Regards,
Rahul Patil.