[all-commits] [llvm/llvm-project] 2939fc: [AArch64] Add IR intrinsics for sq(r)dmulh_lane(q)

Sanne Wouda via All-commits all-commits at lists.llvm.org
Wed Jan 29 05:34:07 PST 2020


  Branch: refs/heads/master
  Home:   https://github.com/llvm/llvm-project
  Commit: 2939fc13c8f6a5dbd1be77c1d19dc2720253b8c5
      https://github.com/llvm/llvm-project/commit/2939fc13c8f6a5dbd1be77c1d19dc2720253b8c5
  Author: Sanne Wouda <Sanne.Wouda at arm.com>
  Date:   2020-01-29 (Wed, 29 Jan 2020)

  Changed paths:
    M clang/include/clang/Basic/arm_neon.td
    M clang/lib/CodeGen/CGBuiltin.cpp
    M clang/test/CodeGen/aarch64-neon-2velem.c
    M llvm/include/llvm/IR/IntrinsicsAArch64.td
    M llvm/lib/Target/AArch64/AArch64InstrFormats.td
    M llvm/lib/Target/AArch64/AArch64InstrInfo.td
    M llvm/lib/Target/AArch64/AArch64RegisterBankInfo.cpp
    M llvm/lib/Target/AArch64/AArch64RegisterInfo.cpp
    M llvm/lib/Target/AArch64/AArch64RegisterInfo.td
    M llvm/lib/Target/AArch64/AsmParser/AArch64AsmParser.cpp
    M llvm/test/CodeGen/AArch64/arm64-neon-2velem.ll

  Log Message:
  -----------
  [AArch64] Add IR intrinsics for sq(r)dmulh_lane(q)

Summary:
Currently, sqdmulh_lane and friends from the ACLE (implemented in arm_neon.h)
are represented in LLVM IR as a (by-vector) sqdmulh whose second operand is a
shufflevector with (repeated) lane indices, like so:

   %shuffle = shufflevector <4 x i16> %v, <4 x i16> undef, <4 x i32> <i32 3, i32 3, i32 3, i32 3>
   %vqdmulh2.i = tail call <4 x i16> @llvm.aarch64.neon.sqdmulh.v4i16(<4 x i16> %a, <4 x i16> %shuffle)

When %v's values are known, the shufflevector is optimized away and we are no
longer able to select the lane variant of sqdmulh in the backend.
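
For instance (an illustrative sketch, assuming %v happens to be the known
constant <i16 0, i16 1, i16 2, i16 3>), the shufflevector above folds to a
constant splat and the lane information is lost:

   %vqdmulh2.i = tail call <4 x i16> @llvm.aarch64.neon.sqdmulh.v4i16(<4 x i16> %a, <4 x i16> <i16 3, i16 3, i16 3, i16 3>)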

This defeats a (hand-coded) optimization that packs several constants into a
single vector and uses the lane intrinsics to reduce register pressure,
trading the materialisation of several constants for a single vector load
from the constant pool, like so:

   int16x8_t v = {2,3,4,5,6,7,8,9};
   a = vqdmulh_laneq_s16(a, v, 0);
   b = vqdmulh_laneq_s16(b, v, 1);
   c = vqdmulh_laneq_s16(c, v, 2);
   d = vqdmulh_laneq_s16(d, v, 3);
   [...]

In one microbenchmark from libjpeg-turbo this accounts for a 2.5% to 4%
performance difference.

We could teach the compiler to recover the lane variants, but this would likely
require its own pass.  (Alternatively, "volatile" could be used on the constants
vector, but this is a bit ugly.)

This patch instead implements the following LLVM IR intrinsics for AArch64 to
maintain the original structure through IR optimization and into instruction
selection (an IR sketch follows the list):
- sqdmulh_lane
- sqdmulh_laneq
- sqrdmulh_lane
- sqrdmulh_laneq
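
As a rough sketch (the exact overloaded-name mangling shown here is an
approximation, not copied from the patch), the example from above now keeps
the lane as an explicit immediate operand instead of a shufflevector:

   %vqdmulh2.i = tail call <4 x i16> @llvm.aarch64.neon.sqdmulh.lane.v4i16.v4i16(<4 x i16> %a, <4 x i16> %v, i32 3)

Because the lane index stays a separate operand, constant folding of %v can
no longer destroy the structure that instruction selection needs.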

These 'lane' variants need an additional register class.  The second argument
must be in the lower half of the 64-bit NEON register file, but only when
operating on i16 elements.

Note that the existing patterns that select a shufflevector plus sqdmulh into
sqdmulh_lane (etc.) remain, so code that does not rely on the NEON intrinsics
to generate these instructions is not affected.

This patch also changes clang to emit these IR intrinsics for the corresponding
NEON intrinsics (AArch64 only).

Reviewers: SjoerdMeijer, dmgreen, t.p.northover, rovka, rengolin, efriedma

Reviewed By: efriedma

Subscribers: kristof.beyls, hiraditya, jdoerfert, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D71469



