[llvm] [TableGen] Optimize intrinsic info type signature encoding (PR #106809)

Rahul Joshi via llvm-commits llvm-commits at lists.llvm.org
Tue Sep 3 06:30:28 PDT 2024


================
@@ -282,11 +283,34 @@ static TypeSigTy ComputeTypeSignature(const CodeGenIntrinsic &Int) {
   return TypeSig;
 }
 
+// Pack the type signature into a 32-bit fixed encoding word.
+std::optional<unsigned> encodePacked(const TypeSigTy &TypeSig) {
+  if (TypeSig.size() > 8)
+    return std::nullopt;
+
+  unsigned Result = 0;
+  for (unsigned char C : reverse(TypeSig)) {
+    if (C > 15)
+      return std::nullopt;
+    Result = (Result << 4) | C;
+  }
+  return Result;
+}
+
 void IntrinsicEmitter::EmitGenerator(const CodeGenIntrinsicTable &Ints,
                                      raw_ostream &OS) {
-  // If we can compute a 32-bit fixed encoding for this intrinsic, do so and
+  // Note: the code below can be switched to use 32-bit fixed encoding by
+  // flipping the flag below.
+  constexpr bool Use16BitFixedEncoding = true;
----------------
jurahul wrote:

Not sure if we want to keep this flexibility around. I guess it's easy enough to go back to 32-bit in the future if needed, so this flexibility isn't needed and I can simplify this. WDYT?

https://github.com/llvm/llvm-project/pull/106809
