[PATCH] [PATCH][SROA]Also slice the STORE when slicing a LOAD in AllocaSliceRewriter

Hao Liu Hao.Liu at arm.com
Tue Sep 2 00:11:15 PDT 2014


Hi James & Chandler,

 

I have two small test cases to show James' first concern. The test results show that the loop vectorizer generates quite poor code for the wide store. You can see the results with the following command lines:

opt -S -loop-vectorize < wide-store.ll

opt -S -loop-vectorize < narrow-stores.ll

 

The wide-store.ll and narrow-stores.ll were generated from the attached struct.cpp with and without my patch. This C++ case is simplified from a hot function in SPEC CPU 2006 473.astar, where the poor code currently hurts performance.
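To make the discussion concrete, here is a minimal sketch of the kind of source pattern at issue. This is a hypothetical reconstruction, not the attached struct.cpp (whose contents are not shown here): a small two-field struct copied through a local temporary in a loop. SROA slices the aggregate load of the temporary into its fields; the question in this thread is whether the matching store is then rebuilt as one wide store via zext/shl/or, or emitted as two narrow stores.

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical example (the real struct.cpp is attached to the original
// mail and not reproduced here).
struct Point {
    int32_t x;
    int32_t y;
};

void copy_points(Point *dst, const Point *src, int n) {
    for (int i = 0; i < n; ++i) {
        // Aggregate load: SROA slices this into loads of x and y.
        Point tmp = src[i];
        // Aggregate store: either one wide 64-bit store (zext/shl/or of the
        // sliced values) or two narrow 32-bit stores, depending on the patch.
        dst[i] = tmp;
    }
}
```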

 

 

Hi Chandler,

 

I also agree with your concern. On the other hand, if the input already contains zext/shl/or and a wide store, the patch in SROA cannot handle that case. For example, if the input is wide-store.ll, only a separate pass or a function written specifically to handle such a case can generate simpler code.

But there is a conflict: even if we add code in the backend, we still cannot solve the problem of the wide store hurting loop vectorization. For that reason, I think we may prefer narrow stores over a wide store.
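The two lowerings being compared can be sketched in C++ as follows. This is an illustrative sketch of the IR-level choice, assuming a little-endian host (the zext/shl/or combination below matches little-endian byte order); the function names are mine, not from the patch.

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Lowering (1): combine two 32-bit values with zext/shl/or and issue one
// wide 64-bit store.
void store_wide(uint8_t *p, uint32_t lo, uint32_t hi) {
    uint64_t v = (uint64_t)lo | ((uint64_t)hi << 32);  // zext + shl + or
    std::memcpy(p, &v, 8);                             // one wide store
}

// Lowering (2): two narrow 32-bit stores at indexed offsets, no bit math.
void store_narrow(uint8_t *p, uint32_t lo, uint32_t hi) {
    std::memcpy(p, &lo, 4);      // store lo at offset 0
    std::memcpy(p + 4, &hi, 4);  // store hi at offset 4
}
```

On a little-endian target both write the same eight bytes; (2) simply skips the integer math, which is exactly what the "multiple stores at indexed offsets" backend trick discussed below would have to recover from (1).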

 

Thanks,

-Hao

From: mankeyrabbit at gmail.com [mailto:mankeyrabbit at gmail.com] On Behalf Of James Molloy
Sent: Friday, August 29, 2014 5:40 PM
To: Chandler Carruth
Cc: Hao Liu; Commit Messages and Patches for LLVM; reviews+D4954+public+39a520765005fb8e at reviews.llvm.org
Subject: Re: [PATCH] [PATCH][SROA]Also slice the STORE when slicing a LOAD in AllocaSliceRewriter

 

Hi Chandler,

 

Nebbing in with my two-pennyworth:

 

> Also, with the IR produced by SROA, the information needed is still present. I think the problem is that both backends need to be taught the trick of using multiple stores at indexed offsets to save math combining two values. That's my suggestion for how to improve the quality of code for these patterns.

 

I think while it's easy to ask the backend to do this, SROA runs fairly early and there are many things in between it and the backend.

 

My first concern is that the additional IR instructions diverge greatly from the expected generated code, which breaks the SLP and loop vectorizers' heuristics.

 

Secondly, there is no guarantee that the inserted zexts/sexts/ors will still be in a coherent group for the backend to identify. If other mid-end passes manage to remove or combine one or more of them, the idiom may no longer be matched. Or worse, if some of them are hoisted or sunk into a different basic block, our instruction selection will never see them!

 

So I'm not sure your proposed solution is the best available.

 

Cheers,

 

James

 

On 29 August 2014 10:23, Chandler Carruth <chandlerc at gmail.com> wrote:

 

On Wed, Aug 27, 2014 at 8:46 PM, Hao Liu <Hao.Liu at arm.com> wrote:

> I think your concern is that once the narrow LOADs are folded back, we can't fuse the narrow STOREs back. Did I get your point?

 

No, my concern is about when we completely remove the loads through GVN or even the mem2reg process that runs in SROA itself. The sliced loads are expected to go away and become SSA registers. We might well be able to then fold away the zext / shl / or / etc into the math that feeds those SSA values. But we will in most cases fail to fuse the stores back together once they are split.

 

> But I think the narrow LOADs won't be folded back, as SROA checks that such a LOAD can be split and removed. So if the narrow LOADs no longer exist, there are two choices for us:
>
> (1) The additional ZEXT/SHL/OR and a wide STORE.
>
> (2) Two narrow STOREs.
>
> I still prefer (2) over (1). I think a better optimization for (1) is to split the STORE. Even if other optimizations change the wide STORE, we'll still have the ZEXT/SHL/OR left. Such bit-math code does not seem like the best choice.

 

I'm not really sure what you mean about having instructions left over.

 

The fundamental thing is this: the width of memory stored to is actually a very important property of the source program. It establishes the maximum width of memory that is correct to store to in a single instruction. Splitting or narrowing a store is often irreversible, because fusing or widening stores can introduce data races.
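A short illustration of why widening is unsound, sketched in C++ (my example, not from the thread):

```cpp
#include <cstdint>

struct Pair {
    int32_t a;
    int32_t b;
};

void set_a(Pair *p) {
    p->a = 1;  // the program stores exactly 4 bytes
}

// A transformation that widened this to one 8-byte store would have to
// read p->b and write it back:
//
//   load the full 8 bytes        // reads b
//   merge in the new value of a
//   store the full 8 bytes       // rewrites b
//
// If another thread is concurrently writing p->b, the write-back can
// clobber its update -- a data race the source program never had. This is
// why the optimizer preserves store widths rather than widening them back.
```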

 

As a consequence, it is a conscious choice throughout the optimizer to preserve the maximal width of stores (and, to a lesser extent, loads). This preserves the information in the middle-end about what freedoms the source program has w.r.t. memory accesses and data races.

 

 

Also, with the IR produced by SROA, the information needed is still present. I think the problem is that both backends need to be taught the trick of using multiple stores at indexed offsets to save math combining two values. That's my suggestion for how to improve the quality of code for these patterns.


_______________________________________________
llvm-commits mailing list
llvm-commits at cs.uiuc.edu
http://lists.cs.uiuc.edu/mailman/listinfo/llvm-commits

 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: wide-store.ll
Type: application/octet-stream
Size: 1229 bytes
Desc: not available
URL: <http://lists.llvm.org/pipermail/llvm-commits/attachments/20140902/98392a84/attachment.obj>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: narrow-stores.ll
Type: application/octet-stream
Size: 1140 bytes
Desc: not available
URL: <http://lists.llvm.org/pipermail/llvm-commits/attachments/20140902/98392a84/attachment-0001.obj>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: struct.cpp
Type: application/octet-stream
Size: 259 bytes
Desc: not available
URL: <http://lists.llvm.org/pipermail/llvm-commits/attachments/20140902/98392a84/attachment-0002.obj>