[PATCH] D48278: [SelectionDAG] Fold redundant masking operations of shifted value

Sanjay Patel via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Mon Jul 9 07:15:14 PDT 2018


spatel added a comment.

In https://reviews.llvm.org/D48278#1155779, @spatel wrote:

> In https://reviews.llvm.org/D48278#1155560, @dnsampaio wrote:
>
> > It reduces the number of computation operations from 3 to 2, and the number of constants kept for performing the masking from 2 to 1.
> >  I don't see how it increases the latency, if you are going to perform the masking and the shift anyway.
>
>
> Ah, I see that now. But I'm not convinced this is the right approach. Why are we waiting to optimize this in the backend? This is a universally good optimization, so it should be in IR:
>  https://rise4fun.com/Alive/O04
>
> I'm not sure exactly where that optimization belongs. I.e., is it EarlyCSE, GVN, somewhere else, or is it its own pass? But I don't see any benefit in waiting to do this in the DAG.


This also raises a question that has come up in another review recently - https://reviews.llvm.org/D41233. If we reverse the canonicalization of shl+and, we would solve the most basic case that I showed above:

  define i32 @shl_first(i32 %a) {
    %t2 = shl i32 %a, 8
    %t3 = and i32 %t2, 44032
    ret i32 %t3
  }
  
  define i32 @mask_first(i32 %a) {
    %a2 = and i32 %a, 172
    %a3 = shl i32 %a2, 8
    ret i32 %a3
  }
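As a quick sanity check (not part of the original review), the two IR functions above are equivalent for every 32-bit input because the mask constant satisfies 44032 == 172 << 8, so masking before or after the shift selects the same bits. A minimal Python sketch of the equivalence, modeling i32 wraparound explicitly:

```python
MASK32 = 0xFFFFFFFF  # emulate i32 truncation after the shift

def shl_first(a: int) -> int:
    # %t2 = shl i32 %a, 8 ; %t3 = and i32 %t2, 44032
    return ((a << 8) & MASK32) & 44032

def mask_first(a: int) -> int:
    # %a2 = and i32 %a, 172 ; %a3 = shl i32 %a2, 8
    return ((a & 172) << 8) & MASK32

# 44032 is exactly 172 shifted left by 8, so both orderings agree.
assert 44032 == 172 << 8
assert all(shl_first(a) == mask_first(a) for a in range(1 << 16))
```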


https://reviews.llvm.org/D48278




