<html>
    <head>
      <base href="https://bugs.llvm.org/">
    </head>
    <body><table border="1" cellspacing="0" cellpadding="8">
        <tr>
          <th>Bug ID</th>
          <td><a class="bz_bug_link 
          bz_status_NEW "
   title="NEW - NEON: Minor optimization: Use vsli/vsri instead of vshr/vshl and vorr"
   href="https://bugs.llvm.org/show_bug.cgi?id=40163">40163</a>
          </td>
        </tr>

        <tr>
          <th>Summary</th>
          <td>NEON: Minor optimization: Use vsli/vsri instead of vshr/vshl and vorr
          </td>
        </tr>

        <tr>
          <th>Product</th>
          <td>libraries
          </td>
        </tr>

        <tr>
          <th>Version</th>
          <td>trunk
          </td>
        </tr>

        <tr>
          <th>Hardware</th>
          <td>All
          </td>
        </tr>

        <tr>
          <th>OS</th>
          <td>All
          </td>
        </tr>

        <tr>
          <th>Status</th>
          <td>NEW
          </td>
        </tr>

        <tr>
          <th>Severity</th>
          <td>enhancement
          </td>
        </tr>

        <tr>
          <th>Priority</th>
          <td>P
          </td>
        </tr>

        <tr>
          <th>Component</th>
          <td>Backend: ARM
          </td>
        </tr>

        <tr>
          <th>Assignee</th>
          <td>unassignedbugs@nondot.org
          </td>
        </tr>

        <tr>
          <th>Reporter</th>
          <td>husseydevin@gmail.com
          </td>
        </tr>

        <tr>
          <th>CC</th>
          <td>llvm-bugs@lists.llvm.org, peter.smith@linaro.org, Ties.Stuij@arm.com
          </td>
        </tr></table>
      <p>
        <div>
        <pre>Note: Also applies to aarch64.

typedef unsigned U32x4 __attribute__((vector_size(16)));

U32x4 vrol_accq_n_u32_17(U32x4 x)
{
    return x + ((x << 17) | (x >> (32 - 17)));
}

Generates the following for ARMv7-A:

vrol_accq_n_u32_17:
        vshr.u32 q8, q0, #17
        vshl.i32 q9, q0, #15
        vorr     q8, q9, q8
        vadd.i32 q0, q8, q0
        bx       lr

In terms of performance, there isn't any faster option, but an instruction can
be saved by replacing the vshl.i32/vorr pair with a single vsli.32 (shift left
and insert). It usually has the same performance, but saves 4 bytes of code.

vrol_accq_n_u32_17:
        vshr.u32 q8, q0, #17
        vsli.32  q8, q0, #15
        vadd.i32 q0, q8, q0
        bx       lr

However, in the case where vsli/vsri would be the last instruction in the
function (e.g. a plain rotate-left function) and the result must end up in q0,
it is faster to use vshl/vshr + vorr, as that avoids an extra register move to
get the result back into q0.</pre>
        </div>
      </p>


      <hr>
      <span>You are receiving this mail because:</span>

      <ul>
          <li>You are on the CC list for the bug.</li>
      </ul>
    </body>
</html>