[llvm] [TableGen][DecoderEmitter] Add option to emit type-specialized code (PR #146593)
Craig Topper via llvm-commits
llvm-commits at lists.llvm.org
Tue Jul 22 12:54:44 PDT 2025
topperc wrote:
> I am almost ready to commit this, but decided to do one final measurement and here are the numbers I have for AMDGPU and RISCV:
>
> ```
>                    Old     New
> AMDGPU .rodata  376968  358840
> AMDGPU .text    436700  277452
> RISCV  .rodata   38026   37601
> RISCV  .text     55596   68572
> ```
>
> For AMDGPU, this is a clear win, with code size shrinking by 36% and data size by 4.8%. But for RISCV, which currently uses a single uint64_t type, adopting this brings a significant (23%) increase in code size. So this is not always a clear win, and it seems for RISCV we want to continue using the templating mechanism. @s-barannikov and @topperc WDYT? It seems in that case we would need to keep support for generating templated code as well as type-specialized code, and not deprecate the templated path in the future. And if that's the case, maybe change the default for `GenerateTemplatedDecoder` to true?
What if we templated decodeInstruction and dispatched to the correct decodeToMCInst* based on the bit width, either by making it an argument to decodeInstruction or by storing it in the first byte of the table? RISC-V could continue using uint64_t for the decodeInstruction template.
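Roughly, a sketch of that dispatch idea (decodeToMCInst16/decodeToMCInst32, the DecodeStatus/MCInst stubs, and the two-byte table layout here are placeholders for illustration, not the emitter's actual output):

```cpp
#include <cstdint>
#include <cstdio>

enum class DecodeStatus { Fail, Success };
struct MCInst {};

// Stand-ins for the width-specialized decoders the emitter would generate.
static DecodeStatus decodeToMCInst16(unsigned Idx, uint16_t Insn, MCInst &MI) {
  (void)Idx; (void)Insn; (void)MI;
  return DecodeStatus::Success;
}
static DecodeStatus decodeToMCInst32(unsigned Idx, uint32_t Insn, MCInst &MI) {
  (void)Idx; (void)Insn; (void)MI;
  return DecodeStatus::Success;
}

// decodeInstruction keeps a single InsnType template parameter (RISC-V could
// instantiate it with uint64_t) and routes to the matching decodeToMCInst*
// based on a bit width read from the first byte of the decoder table.
template <typename InsnType>
DecodeStatus decodeInstruction(const uint8_t *Table, MCInst &MI,
                               InsnType Insn) {
  unsigned BitWidth = Table[0];  // width tag stored in the table header
  unsigned DecodeIdx = Table[1]; // placeholder for the real table walk
  switch (BitWidth) {
  case 16:
    return decodeToMCInst16(DecodeIdx, static_cast<uint16_t>(Insn), MI);
  case 32:
    return decodeToMCInst32(DecodeIdx, static_cast<uint32_t>(Insn), MI);
  default:
    return DecodeStatus::Fail;
  }
}

int main() {
  const uint8_t Table[] = {32, 0}; // {bit width, decoder index}
  MCInst MI;
  DecodeStatus S = decodeInstruction<uint64_t>(Table, MI, 0x00000013u);
  std::printf("decode %s\n",
              S == DecodeStatus::Success ? "succeeded" : "failed");
}
```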
https://github.com/llvm/llvm-project/pull/146593