[llvm] [TableGen][DecoderEmitter] Add option to emit type-specialized code (PR #146593)

Rahul Joshi via llvm-commits llvm-commits at lists.llvm.org
Wed Aug 20 13:21:44 PDT 2025


jurahul wrote:

> > one where there is a single `decodeInstructionImpl` function that operates on the highest bitwidth
> 
> Does that mean a std::bitset is used on AMDGPU even for things that fit in uint64_t or uint32_t?

Right, if we generate a single "impl" function for `decodeInstruction`, it always operates on the highest bitwidth, so `std::bitset<128>` for AMDGPU. To refresh our memory: this was proposed as a way to control the code-size increase we saw for RISCV. Today, for RISCV, we instantiate the code once, using `uint64_t` only, whereas these changes (without the impl function) could have created 3 instances of `decodeInstruction` and grown the code size. A single impl function operating on the highest bitwidth was a potential way to contain that growth, but the data above suggests it comes at a runtime cost.
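For illustration, here is a minimal C++ sketch of the two schemes. This is not the actual generated code; the function names and signatures below are hypothetical, standing in for the table-driven decoder that TableGen emits:

```cpp
#include <bitset>
#include <cstdint>

// Shared decoder body, templated on the encoding type. The real generated
// code walks the decoder table here; this stub only stands in for that loop.
template <typename InsnType>
static bool decodeInstructionImpl(const InsnType &Insn) {
  (void)Insn;
  return true; // placeholder for the table-driven decode loop
}

// Scheme A (single impl): one instantiation at the widest encoding type,
// e.g. std::bitset<128> for AMDGPU. Narrower encodings are widened first,
// so even a 32-bit instruction pays for 128-bit bitset operations.
static bool decodeInstruction32_SingleImpl(uint32_t Bits) {
  return decodeInstructionImpl(std::bitset<128>(Bits));
}

// Scheme B (per-type): a separate instantiation per bitwidth, e.g.
// uint32_t, uint64_t, and std::bitset<128>. No widening cost at runtime,
// but up to three copies of the decoder loop in the binary.
static bool decodeInstruction32_PerType(uint32_t Bits) {
  return decodeInstructionImpl(Bits); // instantiated for uint32_t
}
```

This is the tradeoff in a nutshell: Scheme A bounds code size at the price of doing all decoding in the widest type, which is the runtime cost the measurements above point to.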

https://github.com/llvm/llvm-project/pull/146593

