[PATCH] D24223: [SelectionDAG] Add SDNode::RawSDNodeBits.

Chandler Carruth via llvm-commits llvm-commits at lists.llvm.org
Sun Sep 4 17:30:43 PDT 2016


chandlerc added inline comments.

================
Comment at: llvm/include/llvm/CodeGen/SelectionDAGNodes.h:466
@@ -465,2 +465,3 @@
   union {
+    uint16_t RawSDNodeBits;
     SDNodeBitfields SDNodeBits;
----------------
I think you should make this:

  char RawSDNodeBits[sizeof(uint16_t)];

Because then it will actually be underlying storage, and it will be well defined to access it as such. It also makes it clear that you don't get some *particular* uint16_t here; you just get that many bytes of memory.
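
To make that concrete, a rough sketch of how the union could then look (member names other than RawSDNodeBits and SDNodeBits are illustrative, not the actual header):

  union {
    char RawSDNodeBits[sizeof(uint16_t)]; // untyped underlying storage
    SDNodeBitfields SDNodeBits;
    ConstantSDNodeBitfields ConstantSDNodeBits; // illustrative sibling member
    // ... other subclass bitfield structs ...
  };
  static_assert(sizeof(SDNodeBitfields) <= sizeof(RawSDNodeBits),
                "bitfield structs must fit in the raw storage");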

================
Comment at: llvm/include/llvm/CodeGen/SelectionDAGNodes.h:1114
@@ -1097,5 +1113,3 @@
     uint16_t Data;
-    memcpy(&Data, &SDNodeBits, sizeof(SDNodeBits));
-    static_assert(sizeof(SDNodeBits) <= sizeof(uint16_t),
-                  "SDNodeBits field too large?");
+    memcpy(&Data, &RawSDNodeBits, sizeof(RawSDNodeBits));
     return Data;
----------------
So, as I mentioned in the other thread, I'm not sure what exact problem you're trying to solve. The change description is a bit confusing.

This makes sure that we copy a full uint16_t chunk of data out, whereas the old version might have copied only 1 byte instead of 2 when sizeof(SDNodeBits) was smaller than uint16_t.

I think you could have the same effect by taking the max size of the various subclass bits... But maybe using the named entity is a bit cleaner than that, and you can just static assert that it *is* the max.
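
Something roughly like this (a sketch; ConstantSDNodeBitfields and MemSDNodeBitfields are just stand-ins for whatever the union actually contains):

  // Assert that the raw member is at least as large as every subclass
  // bitfield struct, so copying sizeof(RawSDNodeBits) bytes always
  // covers all of them.
  static_assert(sizeof(RawSDNodeBits) >= sizeof(SDNodeBitfields) &&
                sizeof(RawSDNodeBits) >= sizeof(ConstantSDNodeBitfields) &&
                sizeof(RawSDNodeBits) >= sizeof(MemSDNodeBitfields),
                "RawSDNodeBits must be the largest union member");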


But this doesn't solve anything to do with endianness. I think that's OK, but your change description talks a lot about endianness for a change that doesn't really affect it.


Perhaps worse, if the problem is uninitialized memory, I don't think this really helps at the theoretical level, even if it happens to help in practice. While you initialize all of the memory to zero, you then store through a bitfield. That gets to notionally store "uninitialized" into the high bits of the union. I don't think you can rely on them remaining zero.
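
In other words, the problematic pattern is roughly this (a sketch; HasDebugValue just stands in for any member of SDNodeBitfields):

  // Zero the whole union, then write through a bitfield struct that may
  // be narrower than the union.
  memset(&RawSDNodeBits, 0, sizeof(RawSDNodeBits)); // or RawSDNodeBits = 0
  SDNodeBits.HasDebugValue = false;
  // The bytes of the union not covered by SDNodeBitfields are now only
  // zero in practice; the abstract machine may treat them as unspecified
  // again after the store through SDNodeBits.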

If you need all of the bitfields to occupy a full 16 bits in all cases so that you can use the raw 16 bits as an input to a hash code, I think you'll need to add a padding bitfield member to each and every one so that they carry a full 16 bits of data, and you'll need to actually store a zero into that member when writing that bitfield.

Then you can change your `static_assert`s to check that each leaf size is *equal* to the size of the raw union member. This will also force you to have a hard separation between classes with a prefix bitfield composed with others and leaf classes that are not composed any further. I'm not sure whether that'll be awkward or not. You will also be able to completely remove the "zeroing" step you do in the base class constructor with the raw member.
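
For one leaf class, that might look roughly like this (the field names, widths, and the NumSDNodeBits constant are illustrative, not what the header actually has):

  // A leaf bitfield struct padded out to exactly 16 bits.
  class ConstantSDNodeBitfields {
    friend class ConstantSDNode;

    uint16_t : NumSDNodeBits;                  // prefix shared with SDNodeBitfields
    uint16_t IsOpaque : 1;
    uint16_t Padding : 16 - NumSDNodeBits - 1; // must be explicitly zeroed
  };
  static_assert(sizeof(ConstantSDNodeBitfields) == sizeof(RawSDNodeBits),
                "leaf bitfield structs must occupy all 16 bits");

  // ... and wherever ConstantSDNode writes its bits:
  ConstantSDNodeBits.IsOpaque = isOpaque;
  ConstantSDNodeBits.Padding = 0; // keep the unused bits defined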


The end result you want is that the typed accesses into the union always use exactly one leaf type that is exactly 16 bits in size and (optionally) some set of *prefix* types that consist solely of a prefix of the leaf type's bitfields. That matches the spirit of the "common prefix" rules for unions.

Then you can have a raw member that you use here to read the un-typed underlying storage. But you don't want to do mutation through this because subsequent mutations through the leaf types can clobber things with "uninitialized".
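
So the raw member stays read-only, e.g. (a sketch mirroring the getRawSubclassData hunk above, just with the char-array form of the raw member):

  // Read-only access to the raw 16 bits, e.g. as an input to a hash code.
  uint16_t getRawSubclassData() const {
    uint16_t Data;
    // RawSDNodeBits is the char[sizeof(uint16_t)] raw-storage union
    // member; all writes go through the typed leaf bitfield structs.
    memcpy(&Data, &RawSDNodeBits, sizeof(RawSDNodeBits));
    return Data;
  }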


https://reviews.llvm.org/D24223




