Dear LLVMers,<br><br>When I signed up for this mailing list, I asked for a once-daily digest.<br><br>This is the fourth digest I have received today, and there are about that many each day.<br><br>The only reason I subscribe to the mailing list is so I can post to it. I don't need to receive the emails, because I can perfectly well read them in the online archive, and I certainly don't want to be sent the digest multiple times a day. Today, I received issue 44 at 3:01am, issue 45 at 12:37pm, issue 46 at 2:02pm and issue 47 at 3:21pm.<br>
<br>Is there a way to turn off all email delivery from the llvmdev list without unsubscribing? If that's not possible, I'll unsubscribe and then resubscribe whenever I have something to say, but it would be simpler if I could just turn off email delivery.<br>
<br>Sincerely,<br><br>Sébastien Loisel<br><br><div class="gmail_quote">On Fri, Feb 15, 2008 at 3:21 PM, <<a href="mailto:llvmdev-request@cs.uiuc.edu">llvmdev-request@cs.uiuc.edu</a>> wrote:<br><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
Send LLVMdev mailing list submissions to<br>
<a href="mailto:llvmdev@cs.uiuc.edu">llvmdev@cs.uiuc.edu</a><br>
<br>
To subscribe or unsubscribe via the World Wide Web, visit<br>
<a href="http://lists.cs.uiuc.edu/mailman/listinfo/llvmdev" target="_blank">http://lists.cs.uiuc.edu/mailman/listinfo/llvmdev</a><br>
or, via email, send a message with subject or body 'help' to<br>
<a href="mailto:llvmdev-request@cs.uiuc.edu">llvmdev-request@cs.uiuc.edu</a><br>
<br>
You can reach the person managing the list at<br>
<a href="mailto:llvmdev-owner@cs.uiuc.edu">llvmdev-owner@cs.uiuc.edu</a><br>
<br>
When replying, please edit your Subject line so it is more specific<br>
than "Re: Contents of LLVMdev digest..."<br>
<br>
<br>
Today's Topics:<br>
<br>
1. Re: an llvm-gcc bug (Devang Patel)<br>
2. Re: an llvm-gcc bug (Chris Lattner)<br>
3. Re: an llvm-gcc bug (Dale Johannesen)<br>
4. Re: an llvm-gcc bug (Dale Johannesen)<br>
5. Re: an llvm-gcc bug (Chris Lattner)<br>
6. Re: Question on link error (Ted Neward)<br>
7. LiveInterval spilling (was LiveInterval Splitting &<br>
SubRegisters) (Roman Levenstein)<br>
8. Re: LiveInterval spilling (was LiveInterval Splitting &<br>
SubRegisters) (Fernando Magno Quintao Pereira)<br>
<br>
<br>
----------------------------------------------------------------------<br>
<br>
Message: 1<br>
Date: Fri, 15 Feb 2008 11:17:33 -0800<br>
From: Devang Patel <<a href="mailto:dpatel@apple.com">dpatel@apple.com</a>><br>
Subject: Re: [LLVMdev] an llvm-gcc bug<br>
To: LLVM Developers Mailing List <<a href="mailto:llvmdev@cs.uiuc.edu">llvmdev@cs.uiuc.edu</a>><br>
Message-ID: <<a href="mailto:0C222846-383D-41DD-8229-4858BD1FD3A2@apple.com">0C222846-383D-41DD-8229-4858BD1FD3A2@apple.com</a>><br>
Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes<br>
<br>
<br>
On Feb 15, 2008, at 10:27 AM, Chris Lattner wrote:<br>
<br>
><br>
> On Feb 15, 2008, at 10:23 AM, Devang Patel wrote:<br>
><br>
>><br>
>> On Feb 15, 2008, at 10:08 AM, Chris Lattner wrote:<br>
>><br>
>>>>> Alternatively I can take the Padding bit into account in the<br>
>>>>> StructType::get code somehow. Anyone have a strong opinion?<br>
>>>><br>
>>>> Shouldn't it be a map from the gcc type to the padding info?<br>
>>>> That said, you can get rid of the padding info as far as I'm<br>
>>>> concerned. However Chris might have a different opinion - I<br>
>>>> think he introduced it.<br>
>>><br>
>>> I don't think I introduced it (was it Devang?).<br>
>><br>
>> Yup. PR 1278<br>
><br>
> Ok! Can you please fix it to index by GCC type? There is a many to<br>
> one mapping between gcc types and llvm types.<br>
<br>
This is tricky and probably won't work. The padding info is in the llvm<br>
struct type, and CopyAggregate() operates on the llvm type. There is no<br>
way to map an llvm type back to a gcc type. Am I missing something?<br>
<br>
-<br>
Devang<br>
<br>
<br>
<br>
<br>
<br>
------------------------------<br>
<br>
Message: 2<br>
Date: Fri, 15 Feb 2008 11:21:41 -0800<br>
From: Chris Lattner <<a href="mailto:sabre@nondot.org">sabre@nondot.org</a>><br>
Subject: Re: [LLVMdev] an llvm-gcc bug<br>
To: LLVM Developers Mailing List <<a href="mailto:llvmdev@cs.uiuc.edu">llvmdev@cs.uiuc.edu</a>><br>
Message-ID: <<a href="mailto:8A9142EB-F978-4CCE-8550-5F805BDFAA07@nondot.org">8A9142EB-F978-4CCE-8550-5F805BDFAA07@nondot.org</a>><br>
Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes<br>
<br>
On Feb 15, 2008, at 11:17 AM, Devang Patel wrote:<br>
>> Ok! Can you please fix it to index by GCC type? There is a many to<br>
>> one mapping between gcc types and llvm types.<br>
><br>
> This is tricky and probably won't work. The padding info is in the llvm<br>
> struct type, and CopyAggregate() operates on the llvm type. There is no<br>
> way to map an llvm type back to a gcc type. Am I missing something?<br>
<br>
EmitAggregateCopy has the gcc type. It would be reasonable to have<br>
CopyAggregate walk the GCC type in parallel with the llvm type, at<br>
least in simple cases. In more complex cases, it could give up.<br>
<br>
-Chris<br>
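<br>
To make the suggestion above concrete, here is a minimal sketch of walking a<br>
source-language struct description in parallel with its lowered counterpart and<br>
giving up as soon as the two stop lining up (e.g. on nested structs). The types<br>
and helper names below are illustrative stand-ins, not the real GCC tree or<br>
llvm-gcc APIs:<br>
<br>
#include <cstddef><br>
#include <cstdio><br>
#include <vector><br>
<br>
enum class Kind { Scalar, Struct, Padding };<br>
<br>
struct FieldDesc { Kind kind; };<br>
typedef std::vector<FieldDesc> StructDesc;  // flat list of top-level fields<br>
<br>
// Returns true if a field-by-field copy is safe; false means "give up"<br>
// and let the caller fall back to copying the whole aggregate.<br>
bool copyFieldByField(const StructDesc &srcLang, const StructDesc &lowered) {<br>
  std::size_t j = 0;  // index into the source-language fields<br>
  for (std::size_t i = 0; i < lowered.size(); ++i) {<br>
    if (lowered[i].kind == Kind::Padding) continue;      // padding has no source field<br>
    if (j >= srcLang.size()) return false;               // shapes diverge: bail out<br>
    if (srcLang[j].kind == Kind::Struct ||<br>
        lowered[i].kind == Kind::Struct) return false;   // nested struct: simple cases only<br>
    // ... a real implementation would emit a scalar load/store for this field here ...<br>
    ++j;<br>
  }<br>
  return j == srcLang.size();                            // leftover source fields? give up<br>
}<br>
<br>
int main() {<br>
  StructDesc src = {{Kind::Scalar}, {Kind::Scalar}};<br>
  StructDesc low = {{Kind::Scalar}, {Kind::Padding}, {Kind::Scalar}};<br>
  std::printf("field-by-field copy %s\n", copyFieldByField(src, low) ? "ok" : "gave up");<br>
  return 0;<br>
}<br>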
<br>
<br>
------------------------------<br>
<br>
Message: 3<br>
Date: Fri, 15 Feb 2008 11:22:47 -0800<br>
From: Dale Johannesen <<a href="mailto:dalej@apple.com">dalej@apple.com</a>><br>
Subject: Re: [LLVMdev] an llvm-gcc bug<br>
To: LLVM Developers Mailing List <<a href="mailto:llvmdev@cs.uiuc.edu">llvmdev@cs.uiuc.edu</a>><br>
Message-ID: <<a href="mailto:04308F30-2980-4F02-8FF7-E09DD4D0D9C1@apple.com">04308F30-2980-4F02-8FF7-E09DD4D0D9C1@apple.com</a>><br>
Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes<br>
<br>
<br>
On Feb 15, 2008, at 11:17 AM, Devang Patel wrote:<br>
<br>
><br>
> On Feb 15, 2008, at 10:27 AM, Chris Lattner wrote:<br>
><br>
>><br>
>> On Feb 15, 2008, at 10:23 AM, Devang Patel wrote:<br>
>><br>
>>><br>
>>> On Feb 15, 2008, at 10:08 AM, Chris Lattner wrote:<br>
>>><br>
>>>>>> Alternatively I can take the Padding bit into account in the<br>
>>>>>> StructType::get code somehow. Anyone have a strong opinion?<br>
>>>>><br>
>>>>> Shouldn't it be a map from the gcc type to the padding info?<br>
>>>>> That said, you can get rid of the padding info as far as I'm<br>
>>>>> concerned. However Chris might have a different opinion - I<br>
>>>>> think he introduced it.<br>
>>>><br>
>>>> I don't think I introduced it (was it Devang?).<br>
>>><br>
>>> Yup. PR 1278<br>
>><br>
>> Ok! Can you please fix it to index by GCC type? There is a many to<br>
>> one mapping between gcc types and llvm types.<br>
><br>
> This is tricky and probably won't work. The padding info is in the llvm<br>
> struct type, and CopyAggregate() operates on the llvm type. There is no<br>
> way to map an llvm type back to a gcc type. Am I missing something?<br>
<br>
<br>
I don't think so; I have reached the same conclusion. You can pass<br>
the gcc type into CopyAggregate, but it's recursive, and there's no<br>
way to get the gcc type for the fields. You would have to walk the<br>
gcc type in parallel with the llvm type, which at best involves<br>
duplicating a lot of code and is quite error-prone.<br>
<br>
<br>
<br>
------------------------------<br>
<br>
Message: 4<br>
Date: Fri, 15 Feb 2008 11:27:42 -0800<br>
From: Dale Johannesen <<a href="mailto:dalej@apple.com">dalej@apple.com</a>><br>
Subject: Re: [LLVMdev] an llvm-gcc bug<br>
To: Dale Johannesen <<a href="mailto:dalej@apple.com">dalej@apple.com</a>><br>
Cc: LLVM Developers Mailing List <<a href="mailto:llvmdev@cs.uiuc.edu">llvmdev@cs.uiuc.edu</a>><br>
Message-ID: <<a href="mailto:86CE380D-44AB-4035-9E7C-2DCD85308BDA@apple.com">86CE380D-44AB-4035-9E7C-2DCD85308BDA@apple.com</a>><br>
Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes<br>
<br>
<br>
On Feb 15, 2008, at 11:22 AM, Dale Johannesen wrote:<br>
<br>
><br>
> On Feb 15, 2008, at 11:17 AM, Devang Patel wrote:<br>
><br>
>><br>
>> On Feb 15, 2008, at 10:27 AM, Chris Lattner wrote:<br>
>><br>
>>><br>
>>> On Feb 15, 2008, at 10:23 AM, Devang Patel wrote:<br>
>>><br>
>>>><br>
>>>> On Feb 15, 2008, at 10:08 AM, Chris Lattner wrote:<br>
>>>><br>
>>>>>>> Alternatively I can take the Padding bit into account in the<br>
>>>>>>> StructType::get code somehow. Anyone have a strong opinion?<br>
>>>>>><br>
>>>>>> Shouldn't it be a map from the gcc type to the padding info?<br>
>>>>>> That said, you can get rid of the padding info as far as I'm<br>
>>>>>> concerned. However Chris might have a different opinion - I<br>
>>>>>> think he introduced it.<br>
>>>>><br>
>>>>> I don't think I introduced it (was it Devang?).<br>
>>>><br>
>>>> Yup. PR 1278<br>
>>><br>
>>> Ok! Can you please fix it to index by GCC type? There is a many to<br>
>>> one mapping between gcc types and llvm types.<br>
>><br>
>> This is tricky and probably won't work. The padding info is in the llvm<br>
>> struct type, and CopyAggregate() operates on the llvm type. There is no<br>
>> way to map an llvm type back to a gcc type. Am I missing something?<br>
><br>
><br>
> I don't think so; I have reached the same conclusion. You can pass<br>
> the gcc type into CopyAggregate, but it's recursive, and there's no<br>
> way to get the gcc type for the fields. You would have to walk the<br>
> gcc type in parallel with the llvm type, which at best involves<br>
> duplicating a lot of code and is quite error-prone.<br>
<br>
...but giving up in this case is easy enough, ok, I can do that.<br>
<br>
<br>
<br>
------------------------------<br>
<br>
Message: 5<br>
Date: Fri, 15 Feb 2008 11:39:06 -0800<br>
From: Chris Lattner <<a href="mailto:sabre@nondot.org">sabre@nondot.org</a>><br>
Subject: Re: [LLVMdev] an llvm-gcc bug<br>
To: LLVM Developers Mailing List <<a href="mailto:llvmdev@cs.uiuc.edu">llvmdev@cs.uiuc.edu</a>><br>
Message-ID: <<a href="mailto:9345E8E7-503D-483F-90DD-1DA6FD64B6F3@nondot.org">9345E8E7-503D-483F-90DD-1DA6FD64B6F3@nondot.org</a>><br>
Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes<br>
<br>
<br>
On Feb 15, 2008, at 11:27 AM, Dale Johannesen wrote:<br>
<br>
>> I don't think so; I have reached the same conclusion. You can pass<br>
>> the gcc type into CopyAggregate, but it's recursive, and there's no<br>
>> way to get the gcc type for the fields. You would have to walk the<br>
>> gcc type in parallel with the llvm type, which at best involves<br>
>> duplicating a lot of code and is quite error-prone.<br>
><br>
> ...but giving up in this case is easy enough, ok, I can do that.<br>
<br>
Cool, thanks Dale! I think it would be reasonable to give up in<br>
nested struct cases, etc. We can always improve it later, and that<br>
will get us the obvious case in PR1278.<br>
<br>
-Chris<br>
<br>
<br>
------------------------------<br>
<br>
Message: 6<br>
Date: Fri, 15 Feb 2008 11:47:47 -0800<br>
From: "Ted Neward" <<a href="mailto:ted@tedneward.com">ted@tedneward.com</a>><br>
Subject: Re: [LLVMdev] Question on link error<br>
To: "'Anton Korobeynikov'" <<a href="mailto:asl@math.spbu.ru">asl@math.spbu.ru</a>>, "'LLVM Developers<br>
Mailing List'" <<a href="mailto:llvmdev@cs.uiuc.edu">llvmdev@cs.uiuc.edu</a>><br>
Message-ID: <017001c8700b$a8172760$f8457620$@com><br>
Content-Type: text/plain; charset="windows-1250"<br>
<br>
I am *more* than willing to believe it's an environment/configuration error,<br>
so if you can't repro it, let's assume the problem is somewhere on my end.<br>
<br>
FWIW, I took the 2.2 release from the website, the GCC 4.2 release from the<br>
website, and tried to follow the README.LLVM directions (build LLVM 2.2,<br>
configure gcc in the obj directory, make and make install), then ran through<br>
the hello steps. Only the lli step fails.<br>
<br>
Ted Neward<br>
Java, .NET, XML Services<br>
Consulting, Teaching, Speaking, Writing<br>
<a href="http://www.tedneward.com" target="_blank">http://www.tedneward.com</a><br>
<br>
<br>
> -----Original Message-----<br>
> From: <a href="mailto:llvmdev-bounces@cs.uiuc.edu">llvmdev-bounces@cs.uiuc.edu</a> [mailto:<a href="mailto:llvmdev-bounces@cs.uiuc.edu">llvmdev-bounces@cs.uiuc.edu</a>]<br>
> On Behalf Of Anton Korobeynikov<br>
> Sent: Friday, February 15, 2008 3:52 AM<br>
> To: LLVM Developers Mailing List<br>
> Subject: Re: [LLVMdev] Question on link error<br>
><br>
> Hello, Ted<br>
><br>
> > __main is supposed to be inside hello.bc, so why can't lli find it?<br>
> No, it shouldn't be there. On targets which lack init sections (for<br>
> example, all Windows-based ones, like mingw & cygwin), __main is used<br>
> to call static constructors and related setup.<br>
><br>
> The call to __main is emitted early in the main routine, before the<br>
> actual code is executed. I'll try to look into this problem today.<br>
><br>
> --<br>
> WBR, Anton Korobeynikov<br>
><br>
<br>
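As a conceptual illustration of what Anton describes above (an assumption-laden<br>
sketch, not actual llvm-gcc output): on a target without init sections, the<br>
compiler arranges for main() to call a runtime-provided __main(), which runs the<br>
static constructors before any user code executes.<br>
<br>
#include <cstdio><br>
<br>
// Stand-in for the runtime's __main (the real one comes from the compiler<br>
// support library, not from user code): it runs the static constructors.<br>
extern "C" void run_static_ctors() {<br>
  std::puts("running static constructors...");<br>
}<br>
<br>
int main() {<br>
  // On targets without init sections the compiler inserts a call to __main<br>
  // here, at the top of main(); the user never writes it. We model it with<br>
  // the stand-in above.<br>
  run_static_ctors();<br>
  std::puts("user code runs after the constructors");<br>
  return 0;<br>
}<br>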
<br>
<br>
<br>
<br>
------------------------------<br>
<br>
Message: 7<br>
Date: Fri, 15 Feb 2008 21:01:33 +0100 (CET)<br>
From: Roman Levenstein <<a href="mailto:romixlev@yahoo.com">romixlev@yahoo.com</a>><br>
Subject: [LLVMdev] LiveInterval spilling (was LiveInterval Splitting &<br>
SubRegisters)<br>
To: LLVM Developers Mailing List <<a href="mailto:llvmdev@cs.uiuc.edu">llvmdev@cs.uiuc.edu</a>><br>
Message-ID: <<a href="mailto:572290.31048.qm@web56312.mail.re3.yahoo.com">572290.31048.qm@web56312.mail.re3.yahoo.com</a>><br>
Content-Type: text/plain; charset=iso-8859-1<br>
<br>
Hi Evan,<br>
<br>
I have a few questions about the current implementation of live interval<br>
spilling, which is required for the implementation of the Extended Linear<br>
Scan algorithm.<br>
<br>
--- Evan Cheng <<a href="mailto:evan.cheng@apple.com">evan.cheng@apple.com</a>> wrote:<br>
> > On Wednesday 23 January 2008 02:01, Evan Cheng wrote:<br>
> >> On Jan 22, 2008, at 12:23 PM, David Greene wrote:<br>
> >>> Evan,<br>
> >>><br>
> >>> Can you explain the basic mechanics of the live interval<br>
> >>> splitting code? Is it all in LiveIntervalAnalysis.cpp under<br>
> >>> addIntervalsForSpills and child routines? What is it trying to do?<br>
> >><br>
> >> It's splitting live intervals that span multiple basic blocks.<br>
> >> That is, when an interval is spilled, it introduces a single reload<br>
> >> per basic block and retargets all the uses to use the result of the<br>
> >> single reload. It does not (yet) split intra-bb intervals.<br>
<br>
When I look at the code, it seems that when the linear scan regalloc<br>
decides to spill a given live interval, it calls addIntervalsForSpills.<br>
This function splits the original live interval into several intervals<br>
according to the principle you described above. Each of these intervals<br>
(split children) then gets a stack slot allocated (and all of these split<br>
intervals get the same stack slot?), and the new split intervals are<br>
added to the unhandled set. Thus they get a chance to have physical<br>
registers assigned to them independently. So, actually, they are not<br>
quite "spilled" intervals (since they are not really spilled to memory)<br>
and may still get a physical register. Is my understanding of the<br>
algorithm correct so far?<br>
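<br>
To make the picture concrete, here is a tiny sketch of that mechanism<br>
(simplified data structures, not LLVM's actual LiveInterval or<br>
addIntervalsForSpills code): the spilled interval is broken into small<br>
per-use children that all share one stack slot, and each child is pushed<br>
back onto the unhandled queue so it can still be given a physical register<br>
for its reload or store.<br>
<br>
#include <cstdio><br>
#include <queue><br>
#include <vector><br>
<br>
struct Interval {<br>
  int start, end;   // program points covered by this interval<br>
  int stackSlot;    // -1 = no stack slot assigned<br>
};<br>
<br>
int main() {<br>
  std::queue<Interval> unhandled;             // intervals still waiting for a phys reg<br>
  std::vector<int> usePoints = {12, 40, 77};  // uses of the interval being spilled<br>
<br>
  int slot = 3;                               // one stack slot shared by all children<br>
  for (int u : usePoints) {<br>
    Interval child = {u, u + 1, slot};        // tiny interval around each use,<br>
    unhandled.push(child);                    // re-queued: it still competes for a<br>
  }                                           // physical register for its reload/store<br>
<br>
  std::printf("created %zu split children, all sharing stack slot %d\n",<br>
              unhandled.size(), slot);<br>
  return 0;<br>
}<br>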
<br>
What I don't quite understand is the following:<br>
Why should live intervals with an allocated stack slot also always<br>
have a physical register assigned to them? How should a register<br>
allocator decide which physical register should be used for that?<br>
<br>
For example, in my version of Sarkar's Extended Linear Scan I sometimes<br>
spill the whole live interval. So, I assign a stack slot to it. But<br>
LLVM also requires a physical register to be assigned to each such live<br>
interval. How do I decide which physical register should be taken?<br>
Why can't the local spiller or the former rewriteFunction() part of<br>
RegAllocLinearScan figure out on their own which of the physical<br>
registers currently available for allocation should be taken at a given<br>
point for a reload or a spill of a given spilled live interval?<br>
Wouldn't that be more convenient? You would just say that the interval<br>
is spilled, and the rest would be done "by magic". Or maybe I'm missing<br>
something about how spilling currently works in LLVM?<br>
<br>
Thanks in advance for any clarification of this issue.<br>
<br>
-Roman<br>
<br>
<br>
<br>
<br>
------------------------------<br>
<br>
Message: 8<br>
Date: Fri, 15 Feb 2008 12:21:08 -0800 (PST)<br>
From: Fernando Magno Quintao Pereira <<a href="mailto:fernando@CS.UCLA.EDU">fernando@CS.UCLA.EDU</a>><br>
Subject: Re: [LLVMdev] LiveInterval spilling (was LiveInterval<br>
Splitting & SubRegisters)<br>
To: LLVM Developers Mailing List <<a href="mailto:llvmdev@cs.uiuc.edu">llvmdev@cs.uiuc.edu</a>><br>
Message-ID: <<a href="mailto:Pine.SOC.4.64.0802151220110.12284@cheetah.cs.ucla.edu">Pine.SOC.4.64.0802151220110.12284@cheetah.cs.ucla.edu</a>><br>
Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed<br>
<br>
<br>
Hi, Roman,<br>
<br>
maybe I can try to answer this. I think it all boils down to needing a<br>
register to reload spilled values. Once a value is spilled, its live<br>
range is split into smaller pieces. These pieces must be kept in<br>
registers, and it is the task of the allocator to find those registers.<br>
Imagine that you have something like this:<br>
<br>
Before          After<br>
allocation:     allocation:<br>
a := 1          a(R1) := 1        // a is assigned to R1<br>
  |                               // store R1 into M1<br>
  |<br>
  |                               // 'a' is spilled into stack slot M1<br>
  |<br>
  |                               // assign 'a' to R2, and load M1 into R2<br>
b := a          b(Rx) := a(R2)<br>
  |<br>
  |<br>
  |<br>
  |<br>
  |                               // assign 'a' to R3, and load M1 into R3<br>
c := a          c(Ry) := a(R3)<br>
<br>
So, a register is necessary for doing the reload. Sometimes it is possible<br>
to avoid the reload with instruction folding (folding the memory operand<br>
into the using instruction), but this is not always the case. Also, in the<br>
new allocator used in LLVM, I believe some live ranges may be split into<br>
bigger pieces, which would save some reloads.<br>
<br>
best,<br>
<br>
Fernando<br>
<br>
> When I look at the code, it seems that when the linear scan regalloc<br>
> decides to spill a given live interval, it calls addIntervalsForSpills.<br>
> This function splits the original live interval into several intervals<br>
> according to the principle you described above. Each of these intervals<br>
> (split children) then gets a stack slot allocated (and all of these split<br>
> intervals get the same stack slot?), and the new split intervals are<br>
> added to the unhandled set. Thus they get a chance to have physical<br>
> registers assigned to them independently. So, actually, they are not<br>
> quite "spilled" intervals (since they are not really spilled to memory)<br>
> and may still get a physical register. Is my understanding of the<br>
> algorithm correct so far?<br>
><br>
> What I don't quite understand is the following:<br>
> Why should live intervals with an allocated stack slot also always<br>
> have a physical register assigned to them? How should a register<br>
> allocator decide which physical register should be used for that?<br>
><br>
> For example, in my version of Sarkar's Extended Linear Scan I sometimes<br>
> spill the whole live interval. So, I assign a stack slot to it. But<br>
> LLVM also requires a physical register to be assigned to each such live<br>
> interval. How do I decide which physical register should be taken?<br>
> Why can't the local spiller or the former rewriteFunction() part of<br>
> RegAllocLinearScan figure out on their own which of the physical<br>
> registers currently available for allocation should be taken at a given<br>
> point for a reload or a spill of a given spilled live interval?<br>
> Wouldn't that be more convenient? You would just say that the interval<br>
> is spilled, and the rest would be done "by magic". Or maybe I'm missing<br>
> something about how spilling currently works in LLVM?<br>
><br>
> Thanks in advance for any clarification of this issue.<br>
><br>
> -Roman<br>
><br>
><br>
> _______________________________________________<br>
> LLVM Developers mailing list<br>
> <a href="mailto:LLVMdev@cs.uiuc.edu">LLVMdev@cs.uiuc.edu</a> <a href="http://llvm.cs.uiuc.edu" target="_blank">http://llvm.cs.uiuc.edu</a><br>
> <a href="http://lists.cs.uiuc.edu/mailman/listinfo/llvmdev" target="_blank">http://lists.cs.uiuc.edu/mailman/listinfo/llvmdev</a><br>
><br>
<br>
<br>
------------------------------<br>
<br>
_______________________________________________<br>
LLVMdev mailing list<br>
<a href="mailto:LLVMdev@cs.uiuc.edu">LLVMdev@cs.uiuc.edu</a><br>
<a href="http://lists.cs.uiuc.edu/mailman/listinfo/llvmdev" target="_blank">http://lists.cs.uiuc.edu/mailman/listinfo/llvmdev</a><br>
<br>
<br>
End of LLVMdev Digest, Vol 44, Issue 47<br>
***************************************<br>
</blockquote></div><br><br clear="all"><br>-- <br>Sébastien Loisel