[llvm] [LLVM][TableGen] Parameterize NumToSkip in DecoderEmitter (PR #136187)
Rahul Joshi via llvm-commits
llvm-commits at lists.llvm.org
Thu Apr 17 13:25:45 PDT 2025
jurahul wrote:
@topperc can you PTAL at the fix I'm proposing for the expensive-checks failure I saw yesterday with this change? For context, here's my assessment of why this broke it: https://github.com/llvm/llvm-project/pull/136019#issuecomment-2813657821. Not asking for a full review, just some high-level eyeballing.
The other option I considered is encoding `NumToSkipSizeInBytes` in the first byte of each DecodeTable (as a table header), as `0x80 | NumToSkipSizeInBytes`, and keeping a single `decodeInstruction` function that first reads this byte and then steers to a templated impl function based on whether the size is 2 or 3. But that means we pay the cost of reading the header byte and branching on every decode call. The current approach avoids that by steering each call directly to the right implementation.
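For illustration, here is a minimal sketch of that rejected header-byte dispatch. The names (`decodeInstructionImpl`, `DecodeStatus`) and the simplified signatures are hypothetical stand-ins, not the actual TableGen-generated code:

```cpp
#include <cassert>
#include <cstdint>

enum class DecodeStatus { Fail, Success };

// Hypothetical templated worker: NumToSkipSize (2 or 3) controls how many
// bytes each NumToSkip field occupies while walking the decoder table.
template <unsigned NumToSkipSize>
static DecodeStatus decodeInstructionImpl(const uint8_t *Table, uint64_t Insn) {
  // ... walk the table, reading NumToSkipSize-byte skip offsets ...
  (void)Table;
  (void)Insn;
  return DecodeStatus::Success;
}

// Header-byte dispatch: every table would start with
// 0x80 | NumToSkipSizeInBytes, and every decode call would pay this branch.
static DecodeStatus decodeInstruction(const uint8_t *Table, uint64_t Insn) {
  uint8_t Header = Table[0];
  assert((Header & 0x80) && "expected a table header byte");
  unsigned NumToSkipSize = Header & 0x7F;
  if (NumToSkipSize == 2)
    return decodeInstructionImpl<2>(Table + 1, Insn);
  assert(NumToSkipSize == 3 && "NumToSkip size must be 2 or 3");
  return decodeInstructionImpl<3>(Table + 1, Insn);
}
```

With the approach in this PR, as I understand it, the distinct generated table types instead select the matching `decodeInstruction` at the call site, so no per-decode branch is needed.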
If this looks ok overall, I'll likely split out the AMDGPU refactor that moves calls to `decodeInstruction` out of the header file, commit that first, and then commit this. That refactor is needed because the new `DecoderTable2Bytes` type cannot be referenced in the header, as the header does not (and should not) include the generated .inc file.
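To make that constraint concrete, here is a hypothetical single-file sketch; `MyDisassembler`, `DecoderTable32`, and the `generated` namespace are illustrative stand-ins (the generated .inc is modeled by a plain namespace rather than a real include), not the actual AMDGPU code:

```cpp
#include <cstdint>

// What the header may expose: no generated table types in any signature.
struct MyDisassembler {
  bool tryDecodeInst(uint64_t Insn) const; // declaration only
};

// Stand-in for what the generated .inc would provide; in reality it is only
// included from the .cpp, never from the header.
namespace generated {
using DecoderTable2Bytes = const uint8_t *; // placeholder for the real type
inline const uint8_t DecoderTable32[] = {0};
inline bool decodeInstruction(DecoderTable2Bytes /*Table*/, uint64_t /*Insn*/) {
  return true;
}
} // namespace generated

// This definition belongs in the .cpp, where the generated types are visible,
// so the header never has to name DecoderTable2Bytes.
bool MyDisassembler::tryDecodeInst(uint64_t Insn) const {
  return generated::decodeInstruction(generated::DecoderTable32, Insn);
}
```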
https://github.com/llvm/llvm-project/pull/136187