[llvm] 4bf004e - [DAG] Fold (bitcast (logicop (bitcast x), (c))) -> (logicop x, (bitcast c)) iff the current logicop type is illegal

Simon Pilgrim via llvm-commits llvm-commits at lists.llvm.org
Tue Mar 14 07:42:12 PDT 2023


Author: Simon Pilgrim
Date: 2023-03-14T14:41:11Z
New Revision: 4bf004e07e2b9d6e04e3f33e1b02628c679de664

URL: https://github.com/llvm/llvm-project/commit/4bf004e07e2b9d6e04e3f33e1b02628c679de664
DIFF: https://github.com/llvm/llvm-project/commit/4bf004e07e2b9d6e04e3f33e1b02628c679de664.diff

LOG: [DAG] Fold (bitcast (logicop (bitcast x), (c))) -> (logicop x, (bitcast c)) iff the current logicop type is illegal

Try to remove extra bitcasts around bitwise logic ops when the logic op's current type is illegal

Fixes the regressions in D145939

Differential Revision: https://reviews.llvm.org/D146032

Added: 
    

Modified: 
    llvm/lib/CodeGen/SelectionDAG/DAGCombiner.cpp
    llvm/test/CodeGen/AMDGPU/fneg.ll

Removed: 
    


################################################################################
diff --git a/llvm/lib/CodeGen/SelectionDAG/DAGCombiner.cpp b/llvm/lib/CodeGen/SelectionDAG/DAGCombiner.cpp
index e362fcb6dc95..4463873769bb 100644
--- a/llvm/lib/CodeGen/SelectionDAG/DAGCombiner.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/DAGCombiner.cpp
@@ -14668,6 +14668,22 @@ SDValue DAGCombiner::visitBITCAST(SDNode *N) {
   if (N0.getOpcode() == ISD::BITCAST)
     return DAG.getBitcast(VT, N0.getOperand(0));
 
+  // fold (conv (logicop (conv x), (c))) -> (logicop x, (conv c))
+  // iff the current bitwise logicop type isn't legal
+  if (ISD::isBitwiseLogicOp(N0.getOpcode()) && VT.isInteger() &&
+      !TLI.isTypeLegal(N0.getOperand(0).getValueType())) {
+    auto IsFreeBitcast = [VT](SDValue V) {
+      return (V.getOpcode() == ISD::BITCAST &&
+              V.getOperand(0).getValueType() == VT) ||
+             (ISD::isBuildVectorOfConstantSDNodes(V.getNode()) &&
+              V->hasOneUse());
+    };
+    if (IsFreeBitcast(N0.getOperand(0)) && IsFreeBitcast(N0.getOperand(1)))
+      return DAG.getNode(N0.getOpcode(), SDLoc(N), VT,
+                         DAG.getBitcast(VT, N0.getOperand(0)),
+                         DAG.getBitcast(VT, N0.getOperand(1)));
+  }
+
   // fold (conv (load x)) -> (load (conv*)x)
   // If the resultant load doesn't need a higher alignment than the original!
   if (ISD::isNormalLoad(N0.getNode()) && N0.hasOneUse() &&

diff --git a/llvm/test/CodeGen/AMDGPU/fneg.ll b/llvm/test/CodeGen/AMDGPU/fneg.ll
index 7d7162165f9e..760d9e96a609 100644
--- a/llvm/test/CodeGen/AMDGPU/fneg.ll
+++ b/llvm/test/CodeGen/AMDGPU/fneg.ll
@@ -218,13 +218,8 @@ define half @v_fneg_i16_fp_use(i16 %in) {
   ret half %fadd
 }
 
-; FIXME: This is terrible
 ; FUNC-LABEL: {{^}}s_fneg_v2i16:
-; SI: s_and_b32 s5, s4, 0xffff0000
-; SI: s_xor_b32 s4, s4, 0x8000
-; SI: s_and_b32 s4, s4, 0xffff
-; SI: s_or_b32 s4, s4, s5
-; SI: s_add_i32 s4, s4, 0x80000000
+; SI: s_xor_b32 s4, s4, 0x80008000
 
 ; VI: s_lshr_b32 s5, s4, 16
 ; VI: s_xor_b32 s4, s4, 0x8000


        

