<table border="1" cellspacing="0" cellpadding="8">
    <tr>
        <th>Issue</th>
        <td>
            <a href="https://github.com/llvm/llvm-project/issues/55106">55106</a>
        </td>
    </tr>

    <tr>
        <th>Summary</th>
        <td>
            HLSL masking bitshifts
        </td>
    </tr>

    <tr>
      <th>Labels</th>
      <td>
            backend:DirectX,
            HLSL
      </td>
    </tr>

    <tr>
      <th>Assignees</th>
      <td>
      </td>
    </tr>

    <tr>
      <th>Reporter</th>
      <td>
          llvm-beanz
      </td>
    </tr>
</table>

<pre>
    In HLSL, bit shifts are defined to shift by `shift amount % type bit width`. In DXC the IR semantics were changed so that LLVM IR bit shifts behave that way. Upstream this will be two changes:

1) In HLSL codegen, bit shifts should be emitted as `x << (y & (sizeof(x) * 8 - 1))`
2) In the DirectX backend, we should pattern match the mask and remove it in any case where it wasn't optimized away
</pre>