[PATCH] [PATCH][SROA]Also slice the STORE when slicing a LOAD in AllocaSliceRewriter

Hao Liu Hao.Liu at arm.com
Wed Aug 27 20:46:27 PDT 2014

Hi Chandler,


I've given more details about my ideas on this patch in http://reviews.llvm.org/D4954.

I think your concern is that, if the narrow LOADs are later folded back together, we can't fuse the narrow STOREs back. Have I understood your point correctly?

But I think the narrow LOADs won't be folded back, as SROA checks that such a LOAD can be split and removed. So, given that the narrow LOADs won't exist, we have two choices:

(1) A wide STORE plus the additional ZEXT/SHL/OR bit math.

(2) Two narrow STOREs.

I still prefer (2) over (1). I think the better optimization for (1) is to split the STORE anyway. Even if other optimizations later transform the wide STORE, the ZEXT/SHL/OR will still be left behind, and such bit-math code doesn't seem like the best choice.

If we believe (2) is the best choice, why not do it from the very beginning?
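As a rough C analogy of the two choices (hypothetical function names; this is just a sketch of the shape of the code, not the IR SROA actually emits):

```c
#include <stdint.h>
#include <string.h>

/* Choice (1): keep the wide STORE and rebuild the value with bit math,
   mirroring the ZEXT/SHL/OR sequence discussed above. */
void store_wide(uint32_t *dst, uint16_t lo, uint16_t hi) {
    uint32_t wide = (uint32_t)lo | ((uint32_t)hi << 16); /* ZEXT + SHL + OR */
    *dst = wide;                                         /* one wide STORE  */
}

/* Choice (2): split into two narrow STOREs, one per slice; no bit math.
   memcpy avoids strict-aliasing issues; the byte layout matches (1) only
   on little-endian targets. */
void store_narrow(uint32_t *dst, uint16_t lo, uint16_t hi) {
    unsigned char *p = (unsigned char *)dst;
    memcpy(p, &lo, sizeof lo);             /* first narrow STORE  */
    memcpy(p + sizeof lo, &hi, sizeof hi); /* second narrow STORE */
}
```

On a little-endian target both functions write the same four bytes; the question in this thread is which form SROA should leave behind for later passes.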





From: chandlerc at google.com [mailto:chandlerc at google.com] On Behalf Of Chandler Carruth
Sent: Tuesday, August 26, 2014 9:35 AM
To: Chandler Carruth
Cc: reviews+D4954+public+39a520765005fb8e at reviews.llvm.org; Hao Liu; Commit Messages and Patches for LLVM
Subject: Re: [PATCH] [PATCH][SROA]Also slice the STORE when slicing a LOAD in AllocaSliceRewriter



On Mon, Aug 25, 2014 at 6:33 PM, Chandler Carruth <chandlerc at gmail.com> wrote:

In particular, why did the load need to be sliced up but the store didn't? That doesn't really make sense.

Oh, I see, you have a store into a non-alloca region from a value loaded from the alloca.


I think preserving the wide store is correct here, as it represents strictly more freedom for the optimizer. We should do store narrowing to remove bit math in the backend (or late in the optimizer).


To see why we would want to use the wider store: consider the case where we can eventually fold the loads together. Once we split up a store, in many cases we can never fuse the pieces back together. =/
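A hypothetical C sketch of that point (illustrative names; assumes a little-endian target): with the wide store kept, a later wide load of the same slot simply forwards the stored value, but once the store has been split, any wide use must re-merge the halves, reintroducing the very bit math the split was meant to remove.

```c
#include <stdint.h>
#include <string.h>

/* Wide store kept: a following wide load forwards the value directly. */
uint32_t roundtrip_wide(uint16_t lo, uint16_t hi) {
    uint32_t slot = (uint32_t)lo | ((uint32_t)hi << 16); /* wide store */
    return slot;                                         /* wide load  */
}

/* Store already split: a later wide use must re-merge the two halves,
   which is the ZEXT/SHL/OR pattern all over again (done here via memcpy;
   the layout matches roundtrip_wide only on little-endian targets). */
uint32_t roundtrip_narrow(uint16_t lo, uint16_t hi) {
    uint16_t halves[2] = { lo, hi };    /* two narrow stores */
    uint32_t wide;
    memcpy(&wide, halves, sizeof wide); /* wide load: re-merge the slices */
    return wide;
}
```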
