[llvm-dev] RFC: speedups with instruction side-data (ADCE, perhaps others?)

Mehdi Amini via llvm-dev llvm-dev at lists.llvm.org
Tue Sep 15 16:05:23 PDT 2015


> On Sep 15, 2015, at 2:16 PM, Owen Anderson <resistor at mac.com <mailto:resistor at mac.com>> wrote:
> 
>> 
>> On Sep 14, 2015, at 5:02 PM, Mehdi Amini via llvm-dev <llvm-dev at lists.llvm.org <mailto:llvm-dev at lists.llvm.org>> wrote:
>> 
>>> 
>>> On Sep 14, 2015, at 2:58 PM, Pete Cooper via llvm-dev <llvm-dev at lists.llvm.org <mailto:llvm-dev at lists.llvm.org>> wrote:
>>> 
>>> 
>>>> On Sep 14, 2015, at 2:49 PM, Matt Arsenault via llvm-dev <llvm-dev at lists.llvm.org <mailto:llvm-dev at lists.llvm.org>> wrote:
>>>> 
>>>> On 09/14/2015 02:47 PM, escha via llvm-dev wrote:
>>>>> I would assume that it’s just considered to be garbage. I feel like any sort of per-pass side data like this should come with absolute minimal contracts, to avoid introducing any more inter-pass complexity.
>>>> I would think this would need to be a verifier error if it were ever non-0
>>> +1
>>> 
>>> Otherwise every pass that ever needs this bit would first have to zero it out just to be safe, adding an extra walk over the whole function.
>>> 
>>> Of course otherwise the pass modifying it will have to zero it, which could also be a walk over the whole function.  So either way you have a lot to iterate over, which is why I’m wary of this approach, tbh.
>> 
>> Every pass that ever uses this internally would have to set it to zero when it is done, adding an extra walk over the whole function, as you noticed. This goes against “you don’t pay for what you don’t use”, so definitely -1 for this. Better to clean up before use.
>> I agree that the approach does not scale/generalize well, and we should try to find an alternative if possible. Now *if* it is the only way to improve performance significantly, we might have to weigh the tradeoff.
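To make the contract under discussion concrete, here is a minimal self-contained sketch of the invariant being debated: between passes the side-data bit must be all-zero (so the verifier can check it), and a pass that uses the bit pays for clearing it itself. The `Instruction`/`Function` types and the `SideDataBit` field below are a toy model, not the actual LLVM classes or the API proposed in the RFC:

```cpp
#include <vector>

// Toy stand-ins for the real IR classes; the one-bit field models the
// proposed per-instruction side data.
struct Instruction {
  unsigned SideDataBit : 1;
  Instruction() : SideDataBit(0) {}
};

struct Function {
  std::vector<Instruction> Insts;
};

// Hypothetical verifier rule from the thread: the bit must be zero
// between passes, so no pass needs a pre-clearing walk.
bool verifySideDataIsZero(const Function &F) {
  for (const Instruction &I : F.Insts)
    if (I.SideDataBit != 0)
      return false;
  return true;
}

// A pass that uses the bit (e.g. as a liveness mark in ADCE) must
// restore the all-zero invariant before returning; only passes that
// actually use the bit pay for this extra walk.
void runPassUsingSideData(Function &F) {
  for (Instruction &I : F.Insts)
    I.SideDataBit = 1;   // mark (stand-in for real pass logic)
  // ... pass body would consume the marks here ...
  for (Instruction &I : F.Insts)
    I.SideDataBit = 0;   // clear before handing the IR back
}
```

Under this scheme the clearing walk is the price of admission for using the bit, which is the cost/benefit tradeoff being weighed in the thread.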
> 
> Does anyone have any concrete alternative suggestions to achieve the speedup demonstrated here?

It might also be that no one has really started to think about the alternatives, but the RFC is pretty narrow at this point (the scaling/generalization issue I mentioned: what if we need 2 bits in the future? 3? etc.). The absence of considered alternatives is not a reason by itself to get intrusive changes in. Now, as I said, it might be the best/only way to go; it just hasn’t been demonstrated.

By the way, talking about speedups: what is the measured impact on compile time / memory usage for a clang run on the public and on our internal test-suites? Can we have an LNT run result posted here?

Thanks,

— 
Mehdi

