[llvm] [AMDGPU] Ensure all WMMA instructions are marked as convergent (PR #178314)

Frederik Harwath via llvm-commits llvm-commits at lists.llvm.org
Wed Jan 28 07:03:46 PST 2026


frederik-h wrote:

> I have also been dealing with a bug where a WMMA instruction is being sunk because it is not marked as convergent. In my case, this is `V_MFMA_SCALE_F32_32X32X64_F8F6F4_f4_f4_mac_vgprcd_e64`. This is not covered by your PR. I have opened a PR to this PR's branch on your fork [...]

See [here](https://github.com/LU-JOHN/llvm-project/pull/1).

https://github.com/llvm/llvm-project/pull/178314
