[llvm-bugs] [Bug 40163] New: NEON: Minor optimization: Use vsli/vsri instead of vshr/vshl and vorr

via llvm-bugs llvm-bugs at lists.llvm.org
Wed Dec 26 18:04:28 PST 2018


https://bugs.llvm.org/show_bug.cgi?id=40163

            Bug ID: 40163
           Summary: NEON: Minor optimization: Use vsli/vsri instead of
                    vshr/vshl and vorr
           Product: libraries
           Version: trunk
          Hardware: All
                OS: All
            Status: NEW
          Severity: enhancement
          Priority: P
         Component: Backend: ARM
          Assignee: unassignedbugs at nondot.org
          Reporter: husseydevin at gmail.com
                CC: llvm-bugs at lists.llvm.org, peter.smith at linaro.org,
                    Ties.Stuij at arm.com

Note: Also applies to aarch64.

typedef unsigned U32x4 __attribute__((vector_size(16)));

U32x4 vrol_accq_n_u32_17(U32x4 x)
{
    return x + ((x << 17) | (x >> (32 - 17)));
}

Generates the following for ARMv7a:

vrol_accq_n_u32_17:
        vshr.u32 q8, q0, #17
        vshl.i32 q9, q0, #15
        vorr     q8, q9, q8
        vadd.i32 q0, q8, q0
        bx       lr

In terms of performance there isn't a faster option, but an instruction can be
saved by replacing the vshl.i32/vorr pair with vsli.32. It usually has the same
performance, but saves 4 bytes of code.

vrol_accq_n_u32_17:
        vshr.u32 q8, q0, #17
        vsli.32  q8, q0, #15
        vadd.i32 q0, q8, q0
        bx       lr

However, in the case where vsli/vsri would be the last instruction in the
function (e.g. a plain rotate left function) and it uses q0 (the argument and
return register) in the shift, it is faster to use vshl/vshr + vorr: vsli/vsri
insert into their destination register, so the result would land in a scratch
register and need an extra move back into q0, whereas vorr can write q0
directly, preventing the register swap.
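For illustration, here is a minimal sketch of that case, assuming a plain
rotate left (the function name and the instruction sequences below are
illustrative, not actual compiler output):

typedef unsigned U32x4 __attribute__((vector_size(16)));

/* Plain rotate left by 17; the result must be returned in q0. */
U32x4 vrolq_n_u32_17(U32x4 x)
{
    return (x << 17) | (x >> (32 - 17));
}

With vsli the rotate lands in a scratch register and needs a move back into
q0:

        vshr.u32 q8, q0, #15
        vsli.32  q8, q0, #17
        vmov     q0, q8
        bx       lr

whereas with vshl/vshr + vorr the final vorr can write q0 directly:

        vshr.u32 q8, q0, #15
        vshl.i32 q9, q0, #17
        vorr     q0, q9, q8
        bx       lr

Both are three instructions plus the return, but the vmov adds a dependent
register copy, which is the register swap mentioned above.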
