[all-commits] [llvm/llvm-project] e1644a: GlobalISel: Reduce G_SHL width if source is extension

Matt Arsenault via All-commits all-commits at lists.llvm.org
Mon Aug 24 06:42:59 PDT 2020


  Branch: refs/heads/master
  Home:   https://github.com/llvm/llvm-project
  Commit: e1644a377996565e119aa178f40c567b986a6203
      https://github.com/llvm/llvm-project/commit/e1644a377996565e119aa178f40c567b986a6203
  Author: Matt Arsenault <Matthew.Arsenault at amd.com>
  Date:   2020-08-24 (Mon, 24 Aug 2020)

  Changed paths:
    M llvm/include/llvm/CodeGen/GlobalISel/CombinerHelper.h
    M llvm/include/llvm/CodeGen/TargetLowering.h
    M llvm/include/llvm/Target/GlobalISel/Combine.td
    M llvm/lib/CodeGen/GlobalISel/CombinerHelper.cpp
    M llvm/lib/Target/AMDGPU/SIISelLowering.cpp
    M llvm/lib/Target/AMDGPU/SIISelLowering.h
    A llvm/test/CodeGen/AMDGPU/GlobalISel/combine-shl-from-extend-narrow.postlegal.mir
    A llvm/test/CodeGen/AMDGPU/GlobalISel/combine-shl-from-extend-narrow.prelegal.mir
    A llvm/test/CodeGen/AMDGPU/GlobalISel/shl-ext-reduce.ll

  Log Message:
  -----------
  GlobalISel: Reduce G_SHL width if source is extension

shl ([sza]ext x), y => zext (shl x, y).

Turns expensive 64-bit shifts into 32-bit shifts when the shifted value
does not overflow the source type.
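
For illustration, here is a small standalone C++ check (not part of the
patch; the values and the "high bits known zero" setup are made up) of
why the wide and narrowed forms agree exactly when the shifted value
still fits in the 32-bit source type:

  #include <cassert>
  #include <cstdint>
  #include <cstdio>

  // shl (zext x), c == zext (shl x, c) as long as no set bit of x is
  // shifted out of the 32-bit source type, i.e. the narrow shift does
  // not overflow. This is the kind of no-overflow condition the combine
  // has to establish before narrowing.
  int main() {
    uint32_t x = 0x0000ABCDu; // high 16 bits are zero
    unsigned c = 12;          // 16 significant bits + 12 <= 32: safe

    uint64_t wide = (uint64_t)x << c;       // 64-bit shl of zext
    uint64_t narrowed = (uint64_t)(x << c); // zext of 32-bit shl
    assert(wide == narrowed);

    // A larger shift pushes set bits past bit 31; the two forms then
    // differ, so the combine must not fire.
    c = 24;
    printf("%d\n", (uint64_t)x << c == (uint64_t)(x << c));
    return 0;
  }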

This is a port of an AMDGPU DAG combine added in
5fa289f0d8ff85b9e14d2f814a90761378ab54ae. InstCombine does this
already, but we need to do it again here to apply it to shifts
introduced for lowered getelementptrs. This will help matching
addressing modes that use 32-bit offsets in a future patch.

TableGen annoyingly assumes only a single match data operand, so
introduce a reusable struct that bundles the values the combine needs to
carry from its match step to its apply step. However, this still
requires defining a separate GIMatchData for every combine, which
remains annoying.
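
As a rough sketch of the shape this takes (the struct and helper names
here are illustrative, not necessarily the ones in the patch): the match
step records the extension's source register together with the constant
shift amount in one small reusable struct, which is then handed to the
apply step as the single piece of match data TableGen allows.

  #include "llvm/CodeGen/Register.h"
  #include <cstdint>

  namespace llvm {

  // Reusable (register, immediate) pair so that combines needing two
  // pieces of match data still fit TableGen's single match data operand.
  struct RegisterImmPair {
    Register Reg; // here: source of the extension feeding the G_SHL
    int64_t Imm;  // here: constant shift amount
  };

  // Sketch of the match/apply split such a combine would use; the real
  // helpers would live on CombinerHelper and operate on the G_SHL:
  //   bool matchShlOfExtend(MachineInstr &Shl, RegisterImmPair &MatchData);
  //   void applyShlOfExtend(MachineInstr &Shl,
  //                         const RegisterImmPair &MatchData);

  } // namespace llvm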

Adds a morally equivalent function to the existing
getShiftAmountTy. Without this, we would have to repeatedly query the
legalizer info and guess at what type to use for the shift amount.
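
A hedged sketch of what such a hook could look like (the name, default,
and AMDGPU behavior here are assumptions, not lifted from the patch):
the target reports its preferred shift-amount type for a given
shifted-value type, so the combine can build the narrowed G_SHL directly
instead of guessing:

  #include "llvm/Support/LowLevelTypeImpl.h"

  namespace llvm {

  // GlobalISel (LLT-based) counterpart of getShiftAmountTy.
  class ShiftAmountTySketch {
  public:
    virtual ~ShiftAmountTySketch() = default;

    // Default: the shift amount uses the same type as the shifted value.
    virtual LLT getPreferredShiftAmountTy(LLT ShiftValueTy) const {
      return ShiftValueTy;
    }
  };

  // A target whose wide shifts only consume a 32-bit amount (as AMDGPU's
  // 64-bit shift instructions do) can always hand back s32.
  class AMDGPUStyleSketch : public ShiftAmountTySketch {
  public:
    LLT getPreferredShiftAmountTy(LLT ShiftValueTy) const override {
      return LLT::scalar(32);
    }
  };

  } // namespace llvm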



