[llvm] [TableGen][DecoderEmitter] Add option to emit type-specialized `decodeToMCInst` (PR #146593)
Rahul Joshi via llvm-commits
llvm-commits at lists.llvm.org
Fri Jul 4 00:43:27 PDT 2025
jurahul wrote:
Here's one possibility (along the lines you suggested). If we do not want to pass in the C++ types, we have to keep generating templated code. But we can do the following:
1. Generate one variant of `decodeToMCInst` and `decodeInstruction` for each size, i.e. `decodeToMCInst<Sz>` and `decodeInstruction<Sz>` for each size `Sz`.
2. Encode the instruction's byte size (and hence bitwidth) as the first byte of the decoder table (a sketch of such a table follows the code below).
3. Generate a top-level `decodeInstruction` which looks like:
```cpp
// Size-specific entry point: fails fast if the table was not emitted for a
// 16-bit encoding, otherwise walks the table as before.
template <typename InsnType>
static DecodeStatus decodeInstruction16(const uint8_t *DecoderTable, ...) {
  if (*DecoderTable++ * 8 != 16)
    return MCDisassembler::Fail;
  // ... decoder table traversal ...
}

// Top-level dispatcher: reads the bitwidth from the table's first byte and
// forwards to the matching size-specific variant.
template <typename InsnType>
static DecodeStatus decodeInstruction(const uint8_t *DecoderTable, ...) {
  const unsigned Bitwidth = DecoderTable[0] * 8;
  switch (Bitwidth) {
  case 16: return decodeInstruction16(...);
  case 32: return decodeInstruction32(...);
  default: return MCDisassembler::Fail;
  }
}
```
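For step 2, the emitted table would simply get the instruction's byte size prepended. Roughly (the exact layout of the prefix is an assumption of this sketch; the rest is whatever the DecoderEmitter already emits):
```cpp
// Hypothetical emitted table for a 16-bit encoding: the first byte holds the
// instruction size in bytes, followed by the usual decoder opcodes.
static const uint8_t DecoderTable16[] = {
    2, // instruction size in bytes; *8 gives the bitwidth checked above
    // MCD::OPC_ExtractField, ... existing table contents unchanged ...
};
```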
So the existing code continues to work, but may get a little fatter due to the switch. Backends can switch to calling the `decodeInstruction16` etc. functions directly, and then they will see the code-size reduction. However, that means they have to actively "opt in" by changing the API they call, and I am not sure that's any better than what I have.
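For concreteness, an opted-in backend might look roughly like this in its `getInstruction()`. The `MyDisassembler` name is made up, and assuming `decodeInstruction16` mirrors the signature of the existing generated `decodeInstruction`:
```cpp
// Hypothetical opt-in: call the 16-bit specialization directly so only that
// variant (and not the dispatcher or the other sizes) is referenced and kept.
DecodeStatus MyDisassembler::getInstruction(MCInst &MI, uint64_t &Size,
                                            ArrayRef<uint8_t> Bytes,
                                            uint64_t Address,
                                            raw_ostream &CS) const {
  if (Bytes.size() < 2)
    return MCDisassembler::Fail;
  uint16_t Insn = support::endian::read16le(Bytes.data());
  Size = 2;
  return decodeInstruction16(DecoderTable16, MI, Insn, Address, this, STI);
}
```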
https://github.com/llvm/llvm-project/pull/146593