[clang] 2089878 - [ARM] Follow AAPCS standard for volatile bit-fields access width

Ties Stuij via cfe-commits cfe-commits at lists.llvm.org
Tue Oct 13 02:32:01 PDT 2020


Author: Ties Stuij
Date: 2020-10-13T10:31:48+01:00
New Revision: 208987844ffa5fef636fd6bd36b4f7a7597fe520

URL: https://github.com/llvm/llvm-project/commit/208987844ffa5fef636fd6bd36b4f7a7597fe520
DIFF: https://github.com/llvm/llvm-project/commit/208987844ffa5fef636fd6bd36b4f7a7597fe520.diff

LOG: [ARM] Follow AAPCS standard for volatile bit-fields access width

This patch resumes the work of D16586.
According to the AAPCS, volatile bit-fields should
be accessed using containers of the width of their
declared type. For example:
```
struct S1 {
  short a : 1;
}
```
should be accessed using loads and stores of that width
(sizeof(short)), whereas the compiler currently loads only
the minimum required width (char in this case).
However, as discussed in D16586,
widening the access could overwrite adjacent members,
which conflicts with the C and C++ object models by
creating data races on memory locations that are not
part of the bit-field, e.g.
```
struct S2 {
  short a;
  int  b : 16;
}
```
Accessing `S2.b` would also access `S2.a`.

The AAPCS Release 2020Q2
(https://documentation-service.arm.com/static/5efb7fbedbdee951c1ccf186?token=)
section 8.1 "Data Types", page 36, "Volatile bit-fields -
preserving number and width of container accesses", has been
updated to avoid conflicting with the C++ memory model.
The note now reads:
```
This ABI does not place any restrictions on the access widths of bit-fields where the container
overlaps with a non-bit-field member or where the container overlaps with any zero length bit-field
placed between two other bit-fields. This is because the C/C++ memory model defines these as being
separate memory locations, which can be accessed by two threads simultaneously. For this reason,
compilers must be permitted to use a narrower memory access width (including splitting the access into
multiple instructions) to avoid writing to a different memory location. For example, in
struct S { int a:24; char b; }; a write to a must not also write to the location occupied by b, this requires at least two
memory accesses in all current Arm architectures. In the same way, in struct S { int a:24; int:0; int b:8; };,
writes to a or b must not overwrite each other.
```

I've updated patch D16586 to follow this behavior by verifying that we
only change the volatile bit-field access width when:
 - the access would not overlap any non-bit-field member
 - the access stays within the bounds of the record
 - the access would not overlap any zero-length bit-field

The requirement that the number of memory accesses be preserved
will be implemented separately by D67399.

Reviewed By: ostannard

Differential Revision: https://reviews.llvm.org/D72932

Added: 
    

Modified: 
    clang/include/clang/Basic/CodeGenOptions.def
    clang/include/clang/Driver/Options.td
    clang/lib/CodeGen/CGExpr.cpp
    clang/lib/CodeGen/CGRecordLayout.h
    clang/lib/CodeGen/CGRecordLayoutBuilder.cpp
    clang/lib/Frontend/CompilerInvocation.cpp
    clang/test/CodeGen/aapcs-bitfield.c
    clang/test/CodeGen/bitfield-2.c
    clang/test/CodeGen/volatile.c

Removed: 
    


################################################################################
diff --git a/clang/include/clang/Basic/CodeGenOptions.def b/clang/include/clang/Basic/CodeGenOptions.def
index 4054f93cf4a4..bce2120a4d6d 100644
--- a/clang/include/clang/Basic/CodeGenOptions.def
+++ b/clang/include/clang/Basic/CodeGenOptions.def
@@ -401,12 +401,15 @@ CODEGENOPT(Addrsig, 1, 0)
 /// Whether to emit unused static constants.
 CODEGENOPT(KeepStaticConsts, 1, 0)
 
-/// Whether to not follow the AAPCS that enforce at least one read before storing to a volatile bitfield
+/// Whether to follow the AAPCS enforcing at least one read before storing to a volatile bitfield
 CODEGENOPT(ForceAAPCSBitfieldLoad, 1, 0)
 
 /// Assume that by-value parameters do not alias any other values.
 CODEGENOPT(PassByValueIsNoAlias, 1, 0)
 
+/// Whether to follow the AAPCS rule that volatile bit-field accesses must use
+/// the width of the field's declared type.
+CODEGENOPT(AAPCSBitfieldWidth, 1, 1)
 
 #undef CODEGENOPT
 #undef ENUM_CODEGENOPT

diff --git a/clang/include/clang/Driver/Options.td b/clang/include/clang/Driver/Options.td
index 20acd2072068..9980dda23bb0 100644
--- a/clang/include/clang/Driver/Options.td
+++ b/clang/include/clang/Driver/Options.td
@@ -2391,9 +2391,15 @@ def mno_neg_immediates: Flag<["-"], "mno-neg-immediates">, Group<m_arm_Features_
 def mcmse : Flag<["-"], "mcmse">, Group<m_arm_Features_Group>,
   Flags<[DriverOption,CC1Option]>,
   HelpText<"Allow use of CMSE (Armv8-M Security Extensions)">;
-def ForceAAPCSBitfieldLoad : Flag<["-"], "fAAPCSBitfieldLoad">, Group<m_arm_Features_Group>,
+def ForceAAPCSBitfieldLoad : Flag<["-"], "faapcs-bitfield-load">, Group<m_arm_Features_Group>,
   Flags<[DriverOption,CC1Option]>,
   HelpText<"Follows the AAPCS standard that all volatile bit-field write generates at least one load. (ARM only).">;
+def ForceNoAAPCSBitfieldWidth : Flag<["-"], "fno-aapcs-bitfield-width">, Group<m_arm_Features_Group>,
+  Flags<[DriverOption,CC1Option]>,
+  HelpText<"Do not follow the AAPCS standard requirement that volatile bit-field width is dictated by the field container type. (ARM only).">;
+def AAPCSBitfieldWidth : Flag<["-"], "faapcs-bitfield-width">, Group<m_arm_Features_Group>,
+  Flags<[DriverOption,CC1Option]>,
+  HelpText<"Follow the AAPCS standard requirement stating that volatile bit-field width is dictated by the field container type. (ARM only).">;
 
 def mgeneral_regs_only : Flag<["-"], "mgeneral-regs-only">, Group<m_aarch64_Features_Group>,
   HelpText<"Generate code which only uses the general purpose registers (AArch64 only)">;

diff --git a/clang/lib/CodeGen/CGExpr.cpp b/clang/lib/CodeGen/CGExpr.cpp
index 27cf066466ca..869bace18ffc 100644
--- a/clang/lib/CodeGen/CGExpr.cpp
+++ b/clang/lib/CodeGen/CGExpr.cpp
@@ -1934,22 +1934,27 @@ RValue CodeGenFunction::EmitLoadOfBitfieldLValue(LValue LV,
   llvm::Type *ResLTy = ConvertType(LV.getType());
 
   Address Ptr = LV.getBitFieldAddress();
-  llvm::Value *Val = Builder.CreateLoad(Ptr, LV.isVolatileQualified(), "bf.load");
-
+  llvm::Value *Val =
+      Builder.CreateLoad(Ptr, LV.isVolatileQualified(), "bf.load");
+
+  bool UseVolatile = LV.isVolatileQualified() &&
+                     Info.VolatileStorageSize != 0 && isAAPCS(CGM.getTarget());
+  const unsigned Offset = UseVolatile ? Info.VolatileOffset : Info.Offset;
+  const unsigned StorageSize =
+      UseVolatile ? Info.VolatileStorageSize : Info.StorageSize;
   if (Info.IsSigned) {
-    assert(static_cast<unsigned>(Info.Offset + Info.Size) <= Info.StorageSize);
-    unsigned HighBits = Info.StorageSize - Info.Offset - Info.Size;
+    assert(static_cast<unsigned>(Offset + Info.Size) <= StorageSize);
+    unsigned HighBits = StorageSize - Offset - Info.Size;
     if (HighBits)
       Val = Builder.CreateShl(Val, HighBits, "bf.shl");
-    if (Info.Offset + HighBits)
-      Val = Builder.CreateAShr(Val, Info.Offset + HighBits, "bf.ashr");
+    if (Offset + HighBits)
+      Val = Builder.CreateAShr(Val, Offset + HighBits, "bf.ashr");
   } else {
-    if (Info.Offset)
-      Val = Builder.CreateLShr(Val, Info.Offset, "bf.lshr");
-    if (static_cast<unsigned>(Info.Offset) + Info.Size < Info.StorageSize)
-      Val = Builder.CreateAnd(Val, llvm::APInt::getLowBitsSet(Info.StorageSize,
-                                                              Info.Size),
-                              "bf.clear");
+    if (Offset)
+      Val = Builder.CreateLShr(Val, Offset, "bf.lshr");
+    if (static_cast<unsigned>(Offset) + Info.Size < StorageSize)
+      Val = Builder.CreateAnd(
+          Val, llvm::APInt::getLowBitsSet(StorageSize, Info.Size), "bf.clear");
   }
   Val = Builder.CreateIntCast(Val, ResLTy, Info.IsSigned, "bf.cast");
   EmitScalarRangeCheck(Val, LV.getType(), Loc);
@@ -2151,39 +2156,42 @@ void CodeGenFunction::EmitStoreThroughBitfieldLValue(RValue Src, LValue Dst,
                                  /*isSigned=*/false);
   llvm::Value *MaskedVal = SrcVal;
 
+  const bool UseVolatile =
+      CGM.getCodeGenOpts().AAPCSBitfieldWidth && Dst.isVolatileQualified() &&
+      Info.VolatileStorageSize != 0 && isAAPCS(CGM.getTarget());
+  const unsigned StorageSize =
+      UseVolatile ? Info.VolatileStorageSize : Info.StorageSize;
+  const unsigned Offset = UseVolatile ? Info.VolatileOffset : Info.Offset;
   // See if there are other bits in the bitfield's storage we'll need to load
   // and mask together with source before storing.
-  if (Info.StorageSize != Info.Size) {
-    assert(Info.StorageSize > Info.Size && "Invalid bitfield size.");
+  if (StorageSize != Info.Size) {
+    assert(StorageSize > Info.Size && "Invalid bitfield size.");
     llvm::Value *Val =
-      Builder.CreateLoad(Ptr, Dst.isVolatileQualified(), "bf.load");
+        Builder.CreateLoad(Ptr, Dst.isVolatileQualified(), "bf.load");
 
     // Mask the source value as needed.
     if (!hasBooleanRepresentation(Dst.getType()))
-      SrcVal = Builder.CreateAnd(SrcVal,
-                                 llvm::APInt::getLowBitsSet(Info.StorageSize,
-                                                            Info.Size),
-                                 "bf.value");
+      SrcVal = Builder.CreateAnd(
+          SrcVal, llvm::APInt::getLowBitsSet(StorageSize, Info.Size),
+          "bf.value");
     MaskedVal = SrcVal;
-    if (Info.Offset)
-      SrcVal = Builder.CreateShl(SrcVal, Info.Offset, "bf.shl");
+    if (Offset)
+      SrcVal = Builder.CreateShl(SrcVal, Offset, "bf.shl");
 
     // Mask out the original value.
-    Val = Builder.CreateAnd(Val,
-                            ~llvm::APInt::getBitsSet(Info.StorageSize,
-                                                     Info.Offset,
-                                                     Info.Offset + Info.Size),
-                            "bf.clear");
+    Val = Builder.CreateAnd(
+        Val, ~llvm::APInt::getBitsSet(StorageSize, Offset, Offset + Info.Size),
+        "bf.clear");
 
     // Or together the unchanged values and the source value.
     SrcVal = Builder.CreateOr(Val, SrcVal, "bf.set");
   } else {
-    assert(Info.Offset == 0);
+    assert(Offset == 0);
     // According to the AACPS:
     // When a volatile bit-field is written, and its container does not overlap
-    // with any non-bit-field member, its container must be read exactly once and
-    // written exactly once using the access width appropriate to the type of the
-    // container. The two accesses are not atomic.
+    // with any non-bit-field member, its container must be read exactly once
+    // and written exactly once using the access width appropriate to the type
+    // of the container. The two accesses are not atomic.
     if (Dst.isVolatileQualified() && isAAPCS(CGM.getTarget()) &&
         CGM.getCodeGenOpts().ForceAAPCSBitfieldLoad)
       Builder.CreateLoad(Ptr, true, "bf.load");
@@ -2198,8 +2206,8 @@ void CodeGenFunction::EmitStoreThroughBitfieldLValue(RValue Src, LValue Dst,
 
     // Sign extend the value if needed.
     if (Info.IsSigned) {
-      assert(Info.Size <= Info.StorageSize);
-      unsigned HighBits = Info.StorageSize - Info.Size;
+      assert(Info.Size <= StorageSize);
+      unsigned HighBits = StorageSize - Info.Size;
       if (HighBits) {
         ResultVal = Builder.CreateShl(ResultVal, HighBits, "bf.result.shl");
         ResultVal = Builder.CreateAShr(ResultVal, HighBits, "bf.result.ashr");
@@ -4211,32 +4219,45 @@ LValue CodeGenFunction::EmitLValueForField(LValue base,
 
   if (field->isBitField()) {
     const CGRecordLayout &RL =
-      CGM.getTypes().getCGRecordLayout(field->getParent());
+        CGM.getTypes().getCGRecordLayout(field->getParent());
     const CGBitFieldInfo &Info = RL.getBitFieldInfo(field);
+    const bool UseVolatile = isAAPCS(CGM.getTarget()) &&
+                             CGM.getCodeGenOpts().AAPCSBitfieldWidth &&
+                             Info.VolatileStorageSize != 0 &&
+                             field->getType()
+                                 .withCVRQualifiers(base.getVRQualifiers())
+                                 .isVolatileQualified();
     Address Addr = base.getAddress(*this);
     unsigned Idx = RL.getLLVMFieldNo(field);
     const RecordDecl *rec = field->getParent();
-    if (!IsInPreservedAIRegion &&
-        (!getDebugInfo() || !rec->hasAttr<BPFPreserveAccessIndexAttr>())) {
-      if (Idx != 0)
-        // For structs, we GEP to the field that the record layout suggests.
-        Addr = Builder.CreateStructGEP(Addr, Idx, field->getName());
-    } else {
-      llvm::DIType *DbgInfo = getDebugInfo()->getOrCreateRecordType(
-          getContext().getRecordType(rec), rec->getLocation());
-      Addr = Builder.CreatePreserveStructAccessIndex(Addr, Idx,
-          getDebugInfoFIndex(rec, field->getFieldIndex()),
-          DbgInfo);
+    if (!UseVolatile) {
+      if (!IsInPreservedAIRegion &&
+          (!getDebugInfo() || !rec->hasAttr<BPFPreserveAccessIndexAttr>())) {
+        if (Idx != 0)
+          // For structs, we GEP to the field that the record layout suggests.
+          Addr = Builder.CreateStructGEP(Addr, Idx, field->getName());
+      } else {
+        llvm::DIType *DbgInfo = getDebugInfo()->getOrCreateRecordType(
+            getContext().getRecordType(rec), rec->getLocation());
+        Addr = Builder.CreatePreserveStructAccessIndex(
+            Addr, Idx, getDebugInfoFIndex(rec, field->getFieldIndex()),
+            DbgInfo);
+      }
     }
-
+    const unsigned SS =
+        UseVolatile ? Info.VolatileStorageSize : Info.StorageSize;
     // Get the access type.
-    llvm::Type *FieldIntTy =
-      llvm::Type::getIntNTy(getLLVMContext(), Info.StorageSize);
+    llvm::Type *FieldIntTy = llvm::Type::getIntNTy(getLLVMContext(), SS);
     if (Addr.getElementType() != FieldIntTy)
       Addr = Builder.CreateElementBitCast(Addr, FieldIntTy);
+    if (UseVolatile) {
+      const unsigned VolatileOffset = Info.VolatileStorageOffset.getQuantity();
+      if (VolatileOffset)
+        Addr = Builder.CreateConstInBoundsGEP(Addr, VolatileOffset);
+    }
 
     QualType fieldType =
-      field->getType().withCVRQualifiers(base.getVRQualifiers());
+        field->getType().withCVRQualifiers(base.getVRQualifiers());
     // TODO: Support TBAA for bit fields.
     LValueBaseInfo FieldBaseInfo(BaseInfo.getAlignmentSource());
     return LValue::MakeBitfield(Addr, Info, fieldType, FieldBaseInfo,

diff --git a/clang/lib/CodeGen/CGRecordLayout.h b/clang/lib/CodeGen/CGRecordLayout.h
index 730ee4c438e7..e6665b72bcba 100644
--- a/clang/lib/CodeGen/CGRecordLayout.h
+++ b/clang/lib/CodeGen/CGRecordLayout.h
@@ -46,7 +46,7 @@ namespace CodeGen {
 ///   };
 ///
 /// This will end up as the following LLVM type. The first array is the
-/// bitfield, and the second is the padding out to a 4-byte alignmnet.
+/// bitfield, and the second is the padding out to a 4-byte alignment.
 ///
 ///   %t = type { i8, i8, i8, i8, i8, [3 x i8] }
 ///
@@ -80,8 +80,21 @@ struct CGBitFieldInfo {
   /// The offset of the bitfield storage from the start of the struct.
   CharUnits StorageOffset;
 
+  /// The offset within a contiguous run of bitfields that are represented as a
+  /// single "field" within the LLVM struct type, taking into account the AAPCS
+  /// rules for volatile bitfields. This offset is in bits.
+  unsigned VolatileOffset : 16;
+
+  /// The storage size in bits which should be used when accessing this
+  /// bitfield.
+  unsigned VolatileStorageSize;
+
+  /// The offset of the bitfield storage from the start of the struct.
+  CharUnits VolatileStorageOffset;
+
   CGBitFieldInfo()
-      : Offset(), Size(), IsSigned(), StorageSize(), StorageOffset() {}
+      : Offset(), Size(), IsSigned(), StorageSize(), StorageOffset(),
+        VolatileOffset(), VolatileStorageSize(), VolatileStorageOffset() {}
 
   CGBitFieldInfo(unsigned Offset, unsigned Size, bool IsSigned,
                  unsigned StorageSize, CharUnits StorageOffset)

diff --git a/clang/lib/CodeGen/CGRecordLayoutBuilder.cpp b/clang/lib/CodeGen/CGRecordLayoutBuilder.cpp
index 4e5d1d3f16f6..ce35880106c2 100644
--- a/clang/lib/CodeGen/CGRecordLayoutBuilder.cpp
+++ b/clang/lib/CodeGen/CGRecordLayoutBuilder.cpp
@@ -109,6 +109,14 @@ struct CGRecordLowering {
            D->isMsStruct(Context);
   }
 
+  /// Helper function to check if we are targeting AAPCS.
+  bool isAAPCS() const {
+    return Context.getTargetInfo().getABI().startswith("aapcs");
+  }
+
+  /// Helper function to check if the target machine is BigEndian.
+  bool isBE() const { return Context.getTargetInfo().isBigEndian(); }
+
   /// The Itanium base layout rule allows virtual bases to overlap
   /// other bases, which complicates layout in specific ways.
   ///
@@ -172,7 +180,8 @@ struct CGRecordLowering {
   void lowerUnion();
   void accumulateFields();
   void accumulateBitFields(RecordDecl::field_iterator Field,
-                        RecordDecl::field_iterator FieldEnd);
+                           RecordDecl::field_iterator FieldEnd);
+  void computeVolatileBitfields();
   void accumulateBases();
   void accumulateVPtrs();
   void accumulateVBases();
@@ -237,6 +246,10 @@ void CGRecordLowering::setBitFieldInfo(
   // least-significant-bit.
   if (DataLayout.isBigEndian())
     Info.Offset = Info.StorageSize - (Info.Offset + Info.Size);
+
+  Info.VolatileStorageSize = 0;
+  Info.VolatileOffset = 0;
+  Info.VolatileStorageOffset = CharUnits::Zero();
 }
 
 void CGRecordLowering::lower(bool NVBaseType) {
@@ -261,15 +274,21 @@ void CGRecordLowering::lower(bool NVBaseType) {
   // 8) Format the complete list of members in a way that can be consumed by
   //    CodeGenTypes::ComputeRecordLayout.
   CharUnits Size = NVBaseType ? Layout.getNonVirtualSize() : Layout.getSize();
-  if (D->isUnion())
-    return lowerUnion();
+  if (D->isUnion()) {
+    lowerUnion();
+    computeVolatileBitfields();
+    return;
+  }
   accumulateFields();
   // RD implies C++.
   if (RD) {
     accumulateVPtrs();
     accumulateBases();
-    if (Members.empty())
-      return appendPaddingBytes(Size);
+    if (Members.empty()) {
+      appendPaddingBytes(Size);
+      computeVolatileBitfields();
+      return;
+    }
     if (!NVBaseType)
       accumulateVBases();
   }
@@ -281,6 +300,7 @@ void CGRecordLowering::lower(bool NVBaseType) {
   Members.pop_back();
   calculateZeroInit();
   fillOutputFields();
+  computeVolatileBitfields();
 }
 
 void CGRecordLowering::lowerUnion() {
@@ -418,9 +438,9 @@ CGRecordLowering::accumulateBitFields(RecordDecl::field_iterator Field,
     if (OffsetInRecord < 8 || !llvm::isPowerOf2_64(OffsetInRecord) ||
         !DataLayout.fitsInLegalInteger(OffsetInRecord))
       return false;
-    // Make sure StartBitOffset is natually aligned if it is treated as an
+    // Make sure StartBitOffset is naturally aligned if it is treated as an
     // IType integer.
-     if (StartBitOffset %
+    if (StartBitOffset %
             Context.toBits(getAlignment(getIntNType(OffsetInRecord))) !=
         0)
       return false;
@@ -503,6 +523,123 @@ void CGRecordLowering::accumulateBases() {
   }
 }
 
+/// The AAPCS defines that, when possible, volatile bit-fields should
+/// be accessed using containers of the declared type width:
+/// When a volatile bit-field is read, and its container does not overlap with
+/// any non-bit-field member or any zero length bit-field member, its container
+/// must be read exactly once using the access width appropriate to the type of
+/// the container. When a volatile bit-field is written, and its container does
+/// not overlap with any non-bit-field member or any zero-length bit-field
+/// member, its container must be read exactly once and written exactly once
+/// using the access width appropriate to the type of the container. The two
+/// accesses are not atomic.
+///
+/// Enforcing the width restriction can be disabled using
+/// -fno-aapcs-bitfield-width.
+void CGRecordLowering::computeVolatileBitfields() {
+  if (!isAAPCS() || !Types.getCodeGenOpts().AAPCSBitfieldWidth)
+    return;
+
+  for (auto &I : BitFields) {
+    const FieldDecl *Field = I.first;
+    CGBitFieldInfo &Info = I.second;
+    llvm::Type *ResLTy = Types.ConvertTypeForMem(Field->getType());
+    // If the record alignment is less than the type width, we can't enforce
+    // an aligned load; bail out.
+    if ((uint64_t)(Context.toBits(Layout.getAlignment())) <
+        ResLTy->getPrimitiveSizeInBits())
+      continue;
+    // CGRecordLowering::setBitFieldInfo() pre-adjusts the bit-field offsets
+    // for big-endian targets, but it assumes a container of width
+    // Info.StorageSize. Since AAPCS uses a different container size (width
+    // of the type), we first undo that calculation here and redo it once
+    // the bit-field offset within the new container is calculated.
+    const unsigned OldOffset =
+        isBE() ? Info.StorageSize - (Info.Offset + Info.Size) : Info.Offset;
+    // Offset to the bit-field from the beginning of the struct.
+    const unsigned AbsoluteOffset =
+        Context.toBits(Info.StorageOffset) + OldOffset;
+
+    // Container size is the width of the bit-field type.
+    const unsigned StorageSize = ResLTy->getPrimitiveSizeInBits();
+    // Nothing to do if the access uses the desired
+    // container width and is naturally aligned.
+    if (Info.StorageSize == StorageSize && (OldOffset % StorageSize == 0))
+      continue;
+
+    // Offset within the container.
+    unsigned Offset = AbsoluteOffset & (StorageSize - 1);
+    // Bail out if an aligned load of the container cannot cover the entire
+    // bit-field. This can happen, for example, if the bit-field is part of a
+    // packed struct. The AAPCS does not define access rules for such cases,
+    // so we let clang follow its own rules.
+    if (Offset + Info.Size > StorageSize)
+      continue;
+
+    // Re-adjust offsets for big-endian targets.
+    if (isBE())
+      Offset = StorageSize - (Offset + Info.Size);
+
+    const CharUnits StorageOffset =
+        Context.toCharUnitsFromBits(AbsoluteOffset & ~(StorageSize - 1));
+    const CharUnits End = StorageOffset +
+                          Context.toCharUnitsFromBits(StorageSize) -
+                          CharUnits::One();
+
+    const ASTRecordLayout &Layout =
+        Context.getASTRecordLayout(Field->getParent());
+    // If the access would be outside the memory of the record, then bail out.
+    const CharUnits RecordSize = Layout.getSize();
+    if (End >= RecordSize)
+      continue;
+
+    // Bail out if performing this load would access non-bit-fields members.
+    bool Conflict = false;
+    for (const auto *F : D->fields()) {
+      // Allow overlap with sized (non-zero-length) bit-fields.
+      if (F->isBitField() && !F->isZeroLengthBitField(Context))
+        continue;
+
+      const CharUnits FOffset = Context.toCharUnitsFromBits(
+          Layout.getFieldOffset(F->getFieldIndex()));
+
+      // As C11 defines it, a zero-sized bit-field defines a barrier, so
+      // fields before and after it should be free of race conditions.
+      // The AAPCS acknowledges this and imposes no restrictions when the
+      // natural container overlaps a zero-length bit-field.
+      if (F->isZeroLengthBitField(Context)) {
+        if (End > FOffset && StorageOffset < FOffset) {
+          Conflict = true;
+          break;
+        }
+      }
+
+      const CharUnits FEnd =
+          FOffset +
+          Context.toCharUnitsFromBits(
+              Types.ConvertTypeForMem(F->getType())->getPrimitiveSizeInBits()) -
+          CharUnits::One();
+      // If no overlap, continue.
+      if (End < FOffset || FEnd < StorageOffset)
+        continue;
+
+      // The desired load overlaps a non-bit-field member, bail out.
+      Conflict = true;
+      break;
+    }
+
+    if (Conflict)
+      continue;
+    // Write the new bit-field access parameters.
+    // As the storage offset is now defined as the number of elements from the
+    // start of the structure, we divide the Offset by the element size.
+    Info.VolatileStorageOffset =
+        StorageOffset / Context.toCharUnitsFromBits(StorageSize).getQuantity();
+    Info.VolatileStorageSize = StorageSize;
+    Info.VolatileOffset = Offset;
+  }
+}
+
 void CGRecordLowering::accumulateVPtrs() {
   if (Layout.hasOwnVFPtr())
     Members.push_back(MemberInfo(CharUnits::Zero(), MemberInfo::VFPtr,
@@ -848,8 +985,10 @@ CodeGenTypes::ComputeRecordLayout(const RecordDecl *D, llvm::StructType *Ty) {
       assert(Info.StorageSize <= SL->getSizeInBits() &&
              "Union not large enough for bitfield storage");
     } else {
-      assert(Info.StorageSize ==
-             getDataLayout().getTypeAllocSizeInBits(ElementTy) &&
+      assert((Info.StorageSize ==
+                  getDataLayout().getTypeAllocSizeInBits(ElementTy) ||
+              Info.VolatileStorageSize ==
+                  getDataLayout().getTypeAllocSizeInBits(ElementTy)) &&
              "Storage size does not match the element type size");
     }
     assert(Info.Size > 0 && "Empty bitfield!");
@@ -897,11 +1036,12 @@ LLVM_DUMP_METHOD void CGRecordLayout::dump() const {
 
 void CGBitFieldInfo::print(raw_ostream &OS) const {
   OS << "<CGBitFieldInfo"
-     << " Offset:" << Offset
-     << " Size:" << Size
-     << " IsSigned:" << IsSigned
+     << " Offset:" << Offset << " Size:" << Size << " IsSigned:" << IsSigned
      << " StorageSize:" << StorageSize
-     << " StorageOffset:" << StorageOffset.getQuantity() << ">";
+     << " StorageOffset:" << StorageOffset.getQuantity()
+     << " VolatileOffset:" << VolatileOffset
+     << " VolatileStorageSize:" << VolatileStorageSize
+     << " VolatileStorageOffset:" << VolatileStorageOffset.getQuantity() << ">";
 }
 
 LLVM_DUMP_METHOD void CGBitFieldInfo::dump() const {

diff --git a/clang/lib/Frontend/CompilerInvocation.cpp b/clang/lib/Frontend/CompilerInvocation.cpp
index 77ecbbd093e5..a4c56cc4e51e 100644
--- a/clang/lib/Frontend/CompilerInvocation.cpp
+++ b/clang/lib/Frontend/CompilerInvocation.cpp
@@ -1464,8 +1464,11 @@ static bool ParseCodeGenArgs(CodeGenOptions &Opts, ArgList &Args, InputKind IK,
       std::string(Args.getLastArgValue(OPT_fsymbol_partition_EQ));
 
   Opts.ForceAAPCSBitfieldLoad = Args.hasArg(OPT_ForceAAPCSBitfieldLoad);
+  Opts.AAPCSBitfieldWidth =
+      Args.hasFlag(OPT_AAPCSBitfieldWidth, OPT_ForceNoAAPCSBitfieldWidth, true);
 
   Opts.PassByValueIsNoAlias = Args.hasArg(OPT_fpass_by_value_is_noalias);
+
   return Success;
 }
 

diff --git a/clang/test/CodeGen/aapcs-bitfield.c b/clang/test/CodeGen/aapcs-bitfield.c
index 4fc889bcf379..13db68d6ae81 100644
--- a/clang/test/CodeGen/aapcs-bitfield.c
+++ b/clang/test/CodeGen/aapcs-bitfield.c
@@ -1,8 +1,12 @@
 // NOTE: Assertions have been autogenerated by utils/update_cc_test_checks.py
-// RUN: %clang_cc1 -triple armv8-none-linux-eabi %s -emit-llvm -o - -O3 | FileCheck %s -check-prefix=LE
-// RUN: %clang_cc1 -triple armebv8-none-linux-eabi %s -emit-llvm -o - -O3 | FileCheck %s -check-prefix=BE
-// RUN: %clang_cc1 -triple armv8-none-linux-eabi %s -emit-llvm -o - -O3 -fAAPCSBitfieldLoad | FileCheck %s -check-prefixes=LE,LENUMLOADS
-// RUN: %clang_cc1 -triple armebv8-none-linux-eabi %s -emit-llvm -o - -O3 -fAAPCSBitfieldLoad | FileCheck %s -check-prefixes=BE,BENUMLOADS
+// RUN: %clang_cc1 -triple armv8-none-linux-eabi %s -emit-llvm -o - -O3 -fno-aapcs-bitfield-width | FileCheck %s -check-prefix=LE
+// RUN: %clang_cc1 -triple armebv8-none-linux-eabi %s -emit-llvm -o - -O3 -fno-aapcs-bitfield-width | FileCheck %s -check-prefix=BE
+// RUN: %clang_cc1 -triple armv8-none-linux-eabi %s -emit-llvm -o - -O3 -faapcs-bitfield-load -fno-aapcs-bitfield-width | FileCheck %s -check-prefixes=LENUMLOADS
+// RUN: %clang_cc1 -triple armebv8-none-linux-eabi %s -emit-llvm -o - -O3 -faapcs-bitfield-load -fno-aapcs-bitfield-width | FileCheck %s -check-prefixes=BENUMLOADS
+// RUN: %clang_cc1 -triple armv8-none-linux-eabi %s -emit-llvm -o - -O3 | FileCheck %s -check-prefix=LEWIDTH
+// RUN: %clang_cc1 -triple armebv8-none-linux-eabi %s -emit-llvm -o - -O3 | FileCheck %s -check-prefix=BEWIDTH
+// RUN: %clang_cc1 -triple armv8-none-linux-eabi %s -emit-llvm -o - -O3 -faapcs-bitfield-load | FileCheck %s -check-prefixes=LEWIDTHNUM
+// RUN: %clang_cc1 -triple armebv8-none-linux-eabi %s -emit-llvm -o - -O3 -faapcs-bitfield-load | FileCheck %s -check-prefixes=BEWIDTHNUM
 
 struct st0 {
   short c : 7;
@@ -25,6 +29,57 @@ struct st0 {
 // BE-NEXT:    [[CONV:%.*]] = sext i8 [[BF_ASHR]] to i32
 // BE-NEXT:    ret i32 [[CONV]]
 //
+// LENUMLOADS-LABEL: @st0_check_load(
+// LENUMLOADS-NEXT:  entry:
+// LENUMLOADS-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST0:%.*]], %struct.st0* [[M:%.*]], i32 0, i32 0
+// LENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load i8, i8* [[TMP0]], align 2
+// LENUMLOADS-NEXT:    [[BF_SHL:%.*]] = shl i8 [[BF_LOAD]], 1
+// LENUMLOADS-NEXT:    [[BF_ASHR:%.*]] = ashr exact i8 [[BF_SHL]], 1
+// LENUMLOADS-NEXT:    [[CONV:%.*]] = sext i8 [[BF_ASHR]] to i32
+// LENUMLOADS-NEXT:    ret i32 [[CONV]]
+//
+// BENUMLOADS-LABEL: @st0_check_load(
+// BENUMLOADS-NEXT:  entry:
+// BENUMLOADS-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST0:%.*]], %struct.st0* [[M:%.*]], i32 0, i32 0
+// BENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load i8, i8* [[TMP0]], align 2
+// BENUMLOADS-NEXT:    [[BF_ASHR:%.*]] = ashr i8 [[BF_LOAD]], 1
+// BENUMLOADS-NEXT:    [[CONV:%.*]] = sext i8 [[BF_ASHR]] to i32
+// BENUMLOADS-NEXT:    ret i32 [[CONV]]
+//
+// LEWIDTH-LABEL: @st0_check_load(
+// LEWIDTH-NEXT:  entry:
+// LEWIDTH-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST0:%.*]], %struct.st0* [[M:%.*]], i32 0, i32 0
+// LEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load i8, i8* [[TMP0]], align 2
+// LEWIDTH-NEXT:    [[BF_SHL:%.*]] = shl i8 [[BF_LOAD]], 1
+// LEWIDTH-NEXT:    [[BF_ASHR:%.*]] = ashr exact i8 [[BF_SHL]], 1
+// LEWIDTH-NEXT:    [[CONV:%.*]] = sext i8 [[BF_ASHR]] to i32
+// LEWIDTH-NEXT:    ret i32 [[CONV]]
+//
+// BEWIDTH-LABEL: @st0_check_load(
+// BEWIDTH-NEXT:  entry:
+// BEWIDTH-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST0:%.*]], %struct.st0* [[M:%.*]], i32 0, i32 0
+// BEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load i8, i8* [[TMP0]], align 2
+// BEWIDTH-NEXT:    [[BF_ASHR:%.*]] = ashr i8 [[BF_LOAD]], 1
+// BEWIDTH-NEXT:    [[CONV:%.*]] = sext i8 [[BF_ASHR]] to i32
+// BEWIDTH-NEXT:    ret i32 [[CONV]]
+//
+// LEWIDTHNUM-LABEL: @st0_check_load(
+// LEWIDTHNUM-NEXT:  entry:
+// LEWIDTHNUM-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST0:%.*]], %struct.st0* [[M:%.*]], i32 0, i32 0
+// LEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load i8, i8* [[TMP0]], align 2
+// LEWIDTHNUM-NEXT:    [[BF_SHL:%.*]] = shl i8 [[BF_LOAD]], 1
+// LEWIDTHNUM-NEXT:    [[BF_ASHR:%.*]] = ashr exact i8 [[BF_SHL]], 1
+// LEWIDTHNUM-NEXT:    [[CONV:%.*]] = sext i8 [[BF_ASHR]] to i32
+// LEWIDTHNUM-NEXT:    ret i32 [[CONV]]
+//
+// BEWIDTHNUM-LABEL: @st0_check_load(
+// BEWIDTHNUM-NEXT:  entry:
+// BEWIDTHNUM-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST0:%.*]], %struct.st0* [[M:%.*]], i32 0, i32 0
+// BEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load i8, i8* [[TMP0]], align 2
+// BEWIDTHNUM-NEXT:    [[BF_ASHR:%.*]] = ashr i8 [[BF_LOAD]], 1
+// BEWIDTHNUM-NEXT:    [[CONV:%.*]] = sext i8 [[BF_ASHR]] to i32
+// BEWIDTHNUM-NEXT:    ret i32 [[CONV]]
+//
 int st0_check_load(struct st0 *m) {
   return m->c;
 }
@@ -47,6 +102,60 @@ int st0_check_load(struct st0 *m) {
 // BE-NEXT:    store i8 [[BF_SET]], i8* [[TMP0]], align 2
 // BE-NEXT:    ret void
 //
+// LENUMLOADS-LABEL: @st0_check_store(
+// LENUMLOADS-NEXT:  entry:
+// LENUMLOADS-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST0:%.*]], %struct.st0* [[M:%.*]], i32 0, i32 0
+// LENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load i8, i8* [[TMP0]], align 2
+// LENUMLOADS-NEXT:    [[BF_CLEAR:%.*]] = and i8 [[BF_LOAD]], -128
+// LENUMLOADS-NEXT:    [[BF_SET:%.*]] = or i8 [[BF_CLEAR]], 1
+// LENUMLOADS-NEXT:    store i8 [[BF_SET]], i8* [[TMP0]], align 2
+// LENUMLOADS-NEXT:    ret void
+//
+// BENUMLOADS-LABEL: @st0_check_store(
+// BENUMLOADS-NEXT:  entry:
+// BENUMLOADS-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST0:%.*]], %struct.st0* [[M:%.*]], i32 0, i32 0
+// BENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load i8, i8* [[TMP0]], align 2
+// BENUMLOADS-NEXT:    [[BF_CLEAR:%.*]] = and i8 [[BF_LOAD]], 1
+// BENUMLOADS-NEXT:    [[BF_SET:%.*]] = or i8 [[BF_CLEAR]], 2
+// BENUMLOADS-NEXT:    store i8 [[BF_SET]], i8* [[TMP0]], align 2
+// BENUMLOADS-NEXT:    ret void
+//
+// LEWIDTH-LABEL: @st0_check_store(
+// LEWIDTH-NEXT:  entry:
+// LEWIDTH-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST0:%.*]], %struct.st0* [[M:%.*]], i32 0, i32 0
+// LEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load i8, i8* [[TMP0]], align 2
+// LEWIDTH-NEXT:    [[BF_CLEAR:%.*]] = and i8 [[BF_LOAD]], -128
+// LEWIDTH-NEXT:    [[BF_SET:%.*]] = or i8 [[BF_CLEAR]], 1
+// LEWIDTH-NEXT:    store i8 [[BF_SET]], i8* [[TMP0]], align 2
+// LEWIDTH-NEXT:    ret void
+//
+// BEWIDTH-LABEL: @st0_check_store(
+// BEWIDTH-NEXT:  entry:
+// BEWIDTH-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST0:%.*]], %struct.st0* [[M:%.*]], i32 0, i32 0
+// BEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load i8, i8* [[TMP0]], align 2
+// BEWIDTH-NEXT:    [[BF_CLEAR:%.*]] = and i8 [[BF_LOAD]], 1
+// BEWIDTH-NEXT:    [[BF_SET:%.*]] = or i8 [[BF_CLEAR]], 2
+// BEWIDTH-NEXT:    store i8 [[BF_SET]], i8* [[TMP0]], align 2
+// BEWIDTH-NEXT:    ret void
+//
+// LEWIDTHNUM-LABEL: @st0_check_store(
+// LEWIDTHNUM-NEXT:  entry:
+// LEWIDTHNUM-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST0:%.*]], %struct.st0* [[M:%.*]], i32 0, i32 0
+// LEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load i8, i8* [[TMP0]], align 2
+// LEWIDTHNUM-NEXT:    [[BF_CLEAR:%.*]] = and i8 [[BF_LOAD]], -128
+// LEWIDTHNUM-NEXT:    [[BF_SET:%.*]] = or i8 [[BF_CLEAR]], 1
+// LEWIDTHNUM-NEXT:    store i8 [[BF_SET]], i8* [[TMP0]], align 2
+// LEWIDTHNUM-NEXT:    ret void
+//
+// BEWIDTHNUM-LABEL: @st0_check_store(
+// BEWIDTHNUM-NEXT:  entry:
+// BEWIDTHNUM-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST0:%.*]], %struct.st0* [[M:%.*]], i32 0, i32 0
+// BEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load i8, i8* [[TMP0]], align 2
+// BEWIDTHNUM-NEXT:    [[BF_CLEAR:%.*]] = and i8 [[BF_LOAD]], 1
+// BEWIDTHNUM-NEXT:    [[BF_SET:%.*]] = or i8 [[BF_CLEAR]], 2
+// BEWIDTHNUM-NEXT:    store i8 [[BF_SET]], i8* [[TMP0]], align 2
+// BEWIDTHNUM-NEXT:    ret void
+//
 void st0_check_store(struct st0 *m) {
   m->c = 1;
 }
@@ -73,6 +182,57 @@ struct st1 {
 // BE-NEXT:    [[CONV:%.*]] = sext i16 [[BF_ASHR]] to i32
 // BE-NEXT:    ret i32 [[CONV]]
 //
+// LENUMLOADS-LABEL: @st1_check_load(
+// LENUMLOADS-NEXT:  entry:
+// LENUMLOADS-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST1:%.*]], %struct.st1* [[M:%.*]], i32 0, i32 0
+// LENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load i16, i16* [[TMP0]], align 4
+// LENUMLOADS-NEXT:    [[BF_ASHR:%.*]] = ashr i16 [[BF_LOAD]], 10
+// LENUMLOADS-NEXT:    [[CONV:%.*]] = sext i16 [[BF_ASHR]] to i32
+// LENUMLOADS-NEXT:    ret i32 [[CONV]]
+//
+// BENUMLOADS-LABEL: @st1_check_load(
+// BENUMLOADS-NEXT:  entry:
+// BENUMLOADS-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST1:%.*]], %struct.st1* [[M:%.*]], i32 0, i32 0
+// BENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load i16, i16* [[TMP0]], align 4
+// BENUMLOADS-NEXT:    [[BF_SHL:%.*]] = shl i16 [[BF_LOAD]], 10
+// BENUMLOADS-NEXT:    [[BF_ASHR:%.*]] = ashr exact i16 [[BF_SHL]], 10
+// BENUMLOADS-NEXT:    [[CONV:%.*]] = sext i16 [[BF_ASHR]] to i32
+// BENUMLOADS-NEXT:    ret i32 [[CONV]]
+//
+// LEWIDTH-LABEL: @st1_check_load(
+// LEWIDTH-NEXT:  entry:
+// LEWIDTH-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST1:%.*]], %struct.st1* [[M:%.*]], i32 0, i32 0
+// LEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load i16, i16* [[TMP0]], align 4
+// LEWIDTH-NEXT:    [[BF_ASHR:%.*]] = ashr i16 [[BF_LOAD]], 10
+// LEWIDTH-NEXT:    [[CONV:%.*]] = sext i16 [[BF_ASHR]] to i32
+// LEWIDTH-NEXT:    ret i32 [[CONV]]
+//
+// BEWIDTH-LABEL: @st1_check_load(
+// BEWIDTH-NEXT:  entry:
+// BEWIDTH-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST1:%.*]], %struct.st1* [[M:%.*]], i32 0, i32 0
+// BEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load i16, i16* [[TMP0]], align 4
+// BEWIDTH-NEXT:    [[BF_SHL:%.*]] = shl i16 [[BF_LOAD]], 10
+// BEWIDTH-NEXT:    [[BF_ASHR:%.*]] = ashr exact i16 [[BF_SHL]], 10
+// BEWIDTH-NEXT:    [[CONV:%.*]] = sext i16 [[BF_ASHR]] to i32
+// BEWIDTH-NEXT:    ret i32 [[CONV]]
+//
+// LEWIDTHNUM-LABEL: @st1_check_load(
+// LEWIDTHNUM-NEXT:  entry:
+// LEWIDTHNUM-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST1:%.*]], %struct.st1* [[M:%.*]], i32 0, i32 0
+// LEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load i16, i16* [[TMP0]], align 4
+// LEWIDTHNUM-NEXT:    [[BF_ASHR:%.*]] = ashr i16 [[BF_LOAD]], 10
+// LEWIDTHNUM-NEXT:    [[CONV:%.*]] = sext i16 [[BF_ASHR]] to i32
+// LEWIDTHNUM-NEXT:    ret i32 [[CONV]]
+//
+// BEWIDTHNUM-LABEL: @st1_check_load(
+// BEWIDTHNUM-NEXT:  entry:
+// BEWIDTHNUM-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST1:%.*]], %struct.st1* [[M:%.*]], i32 0, i32 0
+// BEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load i16, i16* [[TMP0]], align 4
+// BEWIDTHNUM-NEXT:    [[BF_SHL:%.*]] = shl i16 [[BF_LOAD]], 10
+// BEWIDTHNUM-NEXT:    [[BF_ASHR:%.*]] = ashr exact i16 [[BF_SHL]], 10
+// BEWIDTHNUM-NEXT:    [[CONV:%.*]] = sext i16 [[BF_ASHR]] to i32
+// BEWIDTHNUM-NEXT:    ret i32 [[CONV]]
+//
 int st1_check_load(struct st1 *m) {
   return m->c;
 }
@@ -95,6 +255,60 @@ int st1_check_load(struct st1 *m) {
 // BE-NEXT:    store i16 [[BF_SET]], i16* [[TMP0]], align 4
 // BE-NEXT:    ret void
 //
+// LENUMLOADS-LABEL: @st1_check_store(
+// LENUMLOADS-NEXT:  entry:
+// LENUMLOADS-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST1:%.*]], %struct.st1* [[M:%.*]], i32 0, i32 0
+// LENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load i16, i16* [[TMP0]], align 4
+// LENUMLOADS-NEXT:    [[BF_CLEAR:%.*]] = and i16 [[BF_LOAD]], 1023
+// LENUMLOADS-NEXT:    [[BF_SET:%.*]] = or i16 [[BF_CLEAR]], 1024
+// LENUMLOADS-NEXT:    store i16 [[BF_SET]], i16* [[TMP0]], align 4
+// LENUMLOADS-NEXT:    ret void
+//
+// BENUMLOADS-LABEL: @st1_check_store(
+// BENUMLOADS-NEXT:  entry:
+// BENUMLOADS-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST1:%.*]], %struct.st1* [[M:%.*]], i32 0, i32 0
+// BENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load i16, i16* [[TMP0]], align 4
+// BENUMLOADS-NEXT:    [[BF_CLEAR:%.*]] = and i16 [[BF_LOAD]], -64
+// BENUMLOADS-NEXT:    [[BF_SET:%.*]] = or i16 [[BF_CLEAR]], 1
+// BENUMLOADS-NEXT:    store i16 [[BF_SET]], i16* [[TMP0]], align 4
+// BENUMLOADS-NEXT:    ret void
+//
+// LEWIDTH-LABEL: @st1_check_store(
+// LEWIDTH-NEXT:  entry:
+// LEWIDTH-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST1:%.*]], %struct.st1* [[M:%.*]], i32 0, i32 0
+// LEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load i16, i16* [[TMP0]], align 4
+// LEWIDTH-NEXT:    [[BF_CLEAR:%.*]] = and i16 [[BF_LOAD]], 1023
+// LEWIDTH-NEXT:    [[BF_SET:%.*]] = or i16 [[BF_CLEAR]], 1024
+// LEWIDTH-NEXT:    store i16 [[BF_SET]], i16* [[TMP0]], align 4
+// LEWIDTH-NEXT:    ret void
+//
+// BEWIDTH-LABEL: @st1_check_store(
+// BEWIDTH-NEXT:  entry:
+// BEWIDTH-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST1:%.*]], %struct.st1* [[M:%.*]], i32 0, i32 0
+// BEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load i16, i16* [[TMP0]], align 4
+// BEWIDTH-NEXT:    [[BF_CLEAR:%.*]] = and i16 [[BF_LOAD]], -64
+// BEWIDTH-NEXT:    [[BF_SET:%.*]] = or i16 [[BF_CLEAR]], 1
+// BEWIDTH-NEXT:    store i16 [[BF_SET]], i16* [[TMP0]], align 4
+// BEWIDTH-NEXT:    ret void
+//
+// LEWIDTHNUM-LABEL: @st1_check_store(
+// LEWIDTHNUM-NEXT:  entry:
+// LEWIDTHNUM-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST1:%.*]], %struct.st1* [[M:%.*]], i32 0, i32 0
+// LEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load i16, i16* [[TMP0]], align 4
+// LEWIDTHNUM-NEXT:    [[BF_CLEAR:%.*]] = and i16 [[BF_LOAD]], 1023
+// LEWIDTHNUM-NEXT:    [[BF_SET:%.*]] = or i16 [[BF_CLEAR]], 1024
+// LEWIDTHNUM-NEXT:    store i16 [[BF_SET]], i16* [[TMP0]], align 4
+// LEWIDTHNUM-NEXT:    ret void
+//
+// BEWIDTHNUM-LABEL: @st1_check_store(
+// BEWIDTHNUM-NEXT:  entry:
+// BEWIDTHNUM-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST1:%.*]], %struct.st1* [[M:%.*]], i32 0, i32 0
+// BEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load i16, i16* [[TMP0]], align 4
+// BEWIDTHNUM-NEXT:    [[BF_CLEAR:%.*]] = and i16 [[BF_LOAD]], -64
+// BEWIDTHNUM-NEXT:    [[BF_SET:%.*]] = or i16 [[BF_CLEAR]], 1
+// BEWIDTHNUM-NEXT:    store i16 [[BF_SET]], i16* [[TMP0]], align 4
+// BEWIDTHNUM-NEXT:    ret void
+//
 void st1_check_store(struct st1 *m) {
   m->c = 1;
 }
@@ -121,6 +335,57 @@ struct st2 {
 // BE-NEXT:    [[CONV:%.*]] = sext i8 [[BF_ASHR]] to i32
 // BE-NEXT:    ret i32 [[CONV]]
 //
+// LENUMLOADS-LABEL: @st2_check_load(
+// LENUMLOADS-NEXT:  entry:
+// LENUMLOADS-NEXT:    [[C:%.*]] = getelementptr inbounds [[STRUCT_ST2:%.*]], %struct.st2* [[M:%.*]], i32 0, i32 1
+// LENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load i8, i8* [[C]], align 2
+// LENUMLOADS-NEXT:    [[BF_SHL:%.*]] = shl i8 [[BF_LOAD]], 1
+// LENUMLOADS-NEXT:    [[BF_ASHR:%.*]] = ashr exact i8 [[BF_SHL]], 1
+// LENUMLOADS-NEXT:    [[CONV:%.*]] = sext i8 [[BF_ASHR]] to i32
+// LENUMLOADS-NEXT:    ret i32 [[CONV]]
+//
+// BENUMLOADS-LABEL: @st2_check_load(
+// BENUMLOADS-NEXT:  entry:
+// BENUMLOADS-NEXT:    [[C:%.*]] = getelementptr inbounds [[STRUCT_ST2:%.*]], %struct.st2* [[M:%.*]], i32 0, i32 1
+// BENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load i8, i8* [[C]], align 2
+// BENUMLOADS-NEXT:    [[BF_ASHR:%.*]] = ashr i8 [[BF_LOAD]], 1
+// BENUMLOADS-NEXT:    [[CONV:%.*]] = sext i8 [[BF_ASHR]] to i32
+// BENUMLOADS-NEXT:    ret i32 [[CONV]]
+//
+// LEWIDTH-LABEL: @st2_check_load(
+// LEWIDTH-NEXT:  entry:
+// LEWIDTH-NEXT:    [[C:%.*]] = getelementptr inbounds [[STRUCT_ST2:%.*]], %struct.st2* [[M:%.*]], i32 0, i32 1
+// LEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load i8, i8* [[C]], align 2
+// LEWIDTH-NEXT:    [[BF_SHL:%.*]] = shl i8 [[BF_LOAD]], 1
+// LEWIDTH-NEXT:    [[BF_ASHR:%.*]] = ashr exact i8 [[BF_SHL]], 1
+// LEWIDTH-NEXT:    [[CONV:%.*]] = sext i8 [[BF_ASHR]] to i32
+// LEWIDTH-NEXT:    ret i32 [[CONV]]
+//
+// BEWIDTH-LABEL: @st2_check_load(
+// BEWIDTH-NEXT:  entry:
+// BEWIDTH-NEXT:    [[C:%.*]] = getelementptr inbounds [[STRUCT_ST2:%.*]], %struct.st2* [[M:%.*]], i32 0, i32 1
+// BEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load i8, i8* [[C]], align 2
+// BEWIDTH-NEXT:    [[BF_ASHR:%.*]] = ashr i8 [[BF_LOAD]], 1
+// BEWIDTH-NEXT:    [[CONV:%.*]] = sext i8 [[BF_ASHR]] to i32
+// BEWIDTH-NEXT:    ret i32 [[CONV]]
+//
+// LEWIDTHNUM-LABEL: @st2_check_load(
+// LEWIDTHNUM-NEXT:  entry:
+// LEWIDTHNUM-NEXT:    [[C:%.*]] = getelementptr inbounds [[STRUCT_ST2:%.*]], %struct.st2* [[M:%.*]], i32 0, i32 1
+// LEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load i8, i8* [[C]], align 2
+// LEWIDTHNUM-NEXT:    [[BF_SHL:%.*]] = shl i8 [[BF_LOAD]], 1
+// LEWIDTHNUM-NEXT:    [[BF_ASHR:%.*]] = ashr exact i8 [[BF_SHL]], 1
+// LEWIDTHNUM-NEXT:    [[CONV:%.*]] = sext i8 [[BF_ASHR]] to i32
+// LEWIDTHNUM-NEXT:    ret i32 [[CONV]]
+//
+// BEWIDTHNUM-LABEL: @st2_check_load(
+// BEWIDTHNUM-NEXT:  entry:
+// BEWIDTHNUM-NEXT:    [[C:%.*]] = getelementptr inbounds [[STRUCT_ST2:%.*]], %struct.st2* [[M:%.*]], i32 0, i32 1
+// BEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load i8, i8* [[C]], align 2
+// BEWIDTHNUM-NEXT:    [[BF_ASHR:%.*]] = ashr i8 [[BF_LOAD]], 1
+// BEWIDTHNUM-NEXT:    [[CONV:%.*]] = sext i8 [[BF_ASHR]] to i32
+// BEWIDTHNUM-NEXT:    ret i32 [[CONV]]
+//
 int st2_check_load(struct st2 *m) {
   return m->c;
 }
@@ -143,6 +408,60 @@ int st2_check_load(struct st2 *m) {
 // BE-NEXT:    store i8 [[BF_SET]], i8* [[C]], align 2
 // BE-NEXT:    ret void
 //
+// LENUMLOADS-LABEL: @st2_check_store(
+// LENUMLOADS-NEXT:  entry:
+// LENUMLOADS-NEXT:    [[C:%.*]] = getelementptr inbounds [[STRUCT_ST2:%.*]], %struct.st2* [[M:%.*]], i32 0, i32 1
+// LENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load i8, i8* [[C]], align 2
+// LENUMLOADS-NEXT:    [[BF_CLEAR:%.*]] = and i8 [[BF_LOAD]], -128
+// LENUMLOADS-NEXT:    [[BF_SET:%.*]] = or i8 [[BF_CLEAR]], 1
+// LENUMLOADS-NEXT:    store i8 [[BF_SET]], i8* [[C]], align 2
+// LENUMLOADS-NEXT:    ret void
+//
+// BENUMLOADS-LABEL: @st2_check_store(
+// BENUMLOADS-NEXT:  entry:
+// BENUMLOADS-NEXT:    [[C:%.*]] = getelementptr inbounds [[STRUCT_ST2:%.*]], %struct.st2* [[M:%.*]], i32 0, i32 1
+// BENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load i8, i8* [[C]], align 2
+// BENUMLOADS-NEXT:    [[BF_CLEAR:%.*]] = and i8 [[BF_LOAD]], 1
+// BENUMLOADS-NEXT:    [[BF_SET:%.*]] = or i8 [[BF_CLEAR]], 2
+// BENUMLOADS-NEXT:    store i8 [[BF_SET]], i8* [[C]], align 2
+// BENUMLOADS-NEXT:    ret void
+//
+// LEWIDTH-LABEL: @st2_check_store(
+// LEWIDTH-NEXT:  entry:
+// LEWIDTH-NEXT:    [[C:%.*]] = getelementptr inbounds [[STRUCT_ST2:%.*]], %struct.st2* [[M:%.*]], i32 0, i32 1
+// LEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load i8, i8* [[C]], align 2
+// LEWIDTH-NEXT:    [[BF_CLEAR:%.*]] = and i8 [[BF_LOAD]], -128
+// LEWIDTH-NEXT:    [[BF_SET:%.*]] = or i8 [[BF_CLEAR]], 1
+// LEWIDTH-NEXT:    store i8 [[BF_SET]], i8* [[C]], align 2
+// LEWIDTH-NEXT:    ret void
+//
+// BEWIDTH-LABEL: @st2_check_store(
+// BEWIDTH-NEXT:  entry:
+// BEWIDTH-NEXT:    [[C:%.*]] = getelementptr inbounds [[STRUCT_ST2:%.*]], %struct.st2* [[M:%.*]], i32 0, i32 1
+// BEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load i8, i8* [[C]], align 2
+// BEWIDTH-NEXT:    [[BF_CLEAR:%.*]] = and i8 [[BF_LOAD]], 1
+// BEWIDTH-NEXT:    [[BF_SET:%.*]] = or i8 [[BF_CLEAR]], 2
+// BEWIDTH-NEXT:    store i8 [[BF_SET]], i8* [[C]], align 2
+// BEWIDTH-NEXT:    ret void
+//
+// LEWIDTHNUM-LABEL: @st2_check_store(
+// LEWIDTHNUM-NEXT:  entry:
+// LEWIDTHNUM-NEXT:    [[C:%.*]] = getelementptr inbounds [[STRUCT_ST2:%.*]], %struct.st2* [[M:%.*]], i32 0, i32 1
+// LEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load i8, i8* [[C]], align 2
+// LEWIDTHNUM-NEXT:    [[BF_CLEAR:%.*]] = and i8 [[BF_LOAD]], -128
+// LEWIDTHNUM-NEXT:    [[BF_SET:%.*]] = or i8 [[BF_CLEAR]], 1
+// LEWIDTHNUM-NEXT:    store i8 [[BF_SET]], i8* [[C]], align 2
+// LEWIDTHNUM-NEXT:    ret void
+//
+// BEWIDTHNUM-LABEL: @st2_check_store(
+// BEWIDTHNUM-NEXT:  entry:
+// BEWIDTHNUM-NEXT:    [[C:%.*]] = getelementptr inbounds [[STRUCT_ST2:%.*]], %struct.st2* [[M:%.*]], i32 0, i32 1
+// BEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load i8, i8* [[C]], align 2
+// BEWIDTHNUM-NEXT:    [[BF_CLEAR:%.*]] = and i8 [[BF_LOAD]], 1
+// BEWIDTHNUM-NEXT:    [[BF_SET:%.*]] = or i8 [[BF_CLEAR]], 2
+// BEWIDTHNUM-NEXT:    store i8 [[BF_SET]], i8* [[C]], align 2
+// BEWIDTHNUM-NEXT:    ret void
+//
 void st2_check_store(struct st2 *m) {
   m->c = 1;
 }
@@ -168,6 +487,57 @@ struct st3 {
 // BE-NEXT:    [[CONV:%.*]] = sext i8 [[BF_ASHR]] to i32
 // BE-NEXT:    ret i32 [[CONV]]
 //
+// LENUMLOADS-LABEL: @st3_check_load(
+// LENUMLOADS-NEXT:  entry:
+// LENUMLOADS-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST3:%.*]], %struct.st3* [[M:%.*]], i32 0, i32 0
+// LENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[TMP0]], align 2
+// LENUMLOADS-NEXT:    [[BF_SHL:%.*]] = shl i8 [[BF_LOAD]], 1
+// LENUMLOADS-NEXT:    [[BF_ASHR:%.*]] = ashr exact i8 [[BF_SHL]], 1
+// LENUMLOADS-NEXT:    [[CONV:%.*]] = sext i8 [[BF_ASHR]] to i32
+// LENUMLOADS-NEXT:    ret i32 [[CONV]]
+//
+// BENUMLOADS-LABEL: @st3_check_load(
+// BENUMLOADS-NEXT:  entry:
+// BENUMLOADS-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST3:%.*]], %struct.st3* [[M:%.*]], i32 0, i32 0
+// BENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[TMP0]], align 2
+// BENUMLOADS-NEXT:    [[BF_ASHR:%.*]] = ashr i8 [[BF_LOAD]], 1
+// BENUMLOADS-NEXT:    [[CONV:%.*]] = sext i8 [[BF_ASHR]] to i32
+// BENUMLOADS-NEXT:    ret i32 [[CONV]]
+//
+// LEWIDTH-LABEL: @st3_check_load(
+// LEWIDTH-NEXT:  entry:
+// LEWIDTH-NEXT:    [[TMP0:%.*]] = bitcast %struct.st3* [[M:%.*]] to i16*
+// LEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load volatile i16, i16* [[TMP0]], align 2
+// LEWIDTH-NEXT:    [[BF_SHL:%.*]] = shl i16 [[BF_LOAD]], 9
+// LEWIDTH-NEXT:    [[BF_ASHR:%.*]] = ashr exact i16 [[BF_SHL]], 9
+// LEWIDTH-NEXT:    [[CONV:%.*]] = sext i16 [[BF_ASHR]] to i32
+// LEWIDTH-NEXT:    ret i32 [[CONV]]
+//
+// BEWIDTH-LABEL: @st3_check_load(
+// BEWIDTH-NEXT:  entry:
+// BEWIDTH-NEXT:    [[TMP0:%.*]] = bitcast %struct.st3* [[M:%.*]] to i16*
+// BEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load volatile i16, i16* [[TMP0]], align 2
+// BEWIDTH-NEXT:    [[BF_ASHR:%.*]] = ashr i16 [[BF_LOAD]], 9
+// BEWIDTH-NEXT:    [[CONV:%.*]] = sext i16 [[BF_ASHR]] to i32
+// BEWIDTH-NEXT:    ret i32 [[CONV]]
+//
+// LEWIDTHNUM-LABEL: @st3_check_load(
+// LEWIDTHNUM-NEXT:  entry:
+// LEWIDTHNUM-NEXT:    [[TMP0:%.*]] = bitcast %struct.st3* [[M:%.*]] to i16*
+// LEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load volatile i16, i16* [[TMP0]], align 2
+// LEWIDTHNUM-NEXT:    [[BF_SHL:%.*]] = shl i16 [[BF_LOAD]], 9
+// LEWIDTHNUM-NEXT:    [[BF_ASHR:%.*]] = ashr exact i16 [[BF_SHL]], 9
+// LEWIDTHNUM-NEXT:    [[CONV:%.*]] = sext i16 [[BF_ASHR]] to i32
+// LEWIDTHNUM-NEXT:    ret i32 [[CONV]]
+//
+// BEWIDTHNUM-LABEL: @st3_check_load(
+// BEWIDTHNUM-NEXT:  entry:
+// BEWIDTHNUM-NEXT:    [[TMP0:%.*]] = bitcast %struct.st3* [[M:%.*]] to i16*
+// BEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load volatile i16, i16* [[TMP0]], align 2
+// BEWIDTHNUM-NEXT:    [[BF_ASHR:%.*]] = ashr i16 [[BF_LOAD]], 9
+// BEWIDTHNUM-NEXT:    [[CONV:%.*]] = sext i16 [[BF_ASHR]] to i32
+// BEWIDTHNUM-NEXT:    ret i32 [[CONV]]
+//
 int st3_check_load(struct st3 *m) {
   return m->c;
 }
@@ -190,6 +560,60 @@ int st3_check_load(struct st3 *m) {
 // BE-NEXT:    store volatile i8 [[BF_SET]], i8* [[TMP0]], align 2
 // BE-NEXT:    ret void
 //
+// LENUMLOADS-LABEL: @st3_check_store(
+// LENUMLOADS-NEXT:  entry:
+// LENUMLOADS-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST3:%.*]], %struct.st3* [[M:%.*]], i32 0, i32 0
+// LENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[TMP0]], align 2
+// LENUMLOADS-NEXT:    [[BF_CLEAR:%.*]] = and i8 [[BF_LOAD]], -128
+// LENUMLOADS-NEXT:    [[BF_SET:%.*]] = or i8 [[BF_CLEAR]], 1
+// LENUMLOADS-NEXT:    store volatile i8 [[BF_SET]], i8* [[TMP0]], align 2
+// LENUMLOADS-NEXT:    ret void
+//
+// BENUMLOADS-LABEL: @st3_check_store(
+// BENUMLOADS-NEXT:  entry:
+// BENUMLOADS-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST3:%.*]], %struct.st3* [[M:%.*]], i32 0, i32 0
+// BENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[TMP0]], align 2
+// BENUMLOADS-NEXT:    [[BF_CLEAR:%.*]] = and i8 [[BF_LOAD]], 1
+// BENUMLOADS-NEXT:    [[BF_SET:%.*]] = or i8 [[BF_CLEAR]], 2
+// BENUMLOADS-NEXT:    store volatile i8 [[BF_SET]], i8* [[TMP0]], align 2
+// BENUMLOADS-NEXT:    ret void
+//
+// LEWIDTH-LABEL: @st3_check_store(
+// LEWIDTH-NEXT:  entry:
+// LEWIDTH-NEXT:    [[TMP0:%.*]] = bitcast %struct.st3* [[M:%.*]] to i16*
+// LEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load volatile i16, i16* [[TMP0]], align 2
+// LEWIDTH-NEXT:    [[BF_CLEAR:%.*]] = and i16 [[BF_LOAD]], -128
+// LEWIDTH-NEXT:    [[BF_SET:%.*]] = or i16 [[BF_CLEAR]], 1
+// LEWIDTH-NEXT:    store volatile i16 [[BF_SET]], i16* [[TMP0]], align 2
+// LEWIDTH-NEXT:    ret void
+//
+// BEWIDTH-LABEL: @st3_check_store(
+// BEWIDTH-NEXT:  entry:
+// BEWIDTH-NEXT:    [[TMP0:%.*]] = bitcast %struct.st3* [[M:%.*]] to i16*
+// BEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load volatile i16, i16* [[TMP0]], align 2
+// BEWIDTH-NEXT:    [[BF_CLEAR:%.*]] = and i16 [[BF_LOAD]], 511
+// BEWIDTH-NEXT:    [[BF_SET:%.*]] = or i16 [[BF_CLEAR]], 512
+// BEWIDTH-NEXT:    store volatile i16 [[BF_SET]], i16* [[TMP0]], align 2
+// BEWIDTH-NEXT:    ret void
+//
+// LEWIDTHNUM-LABEL: @st3_check_store(
+// LEWIDTHNUM-NEXT:  entry:
+// LEWIDTHNUM-NEXT:    [[TMP0:%.*]] = bitcast %struct.st3* [[M:%.*]] to i16*
+// LEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load volatile i16, i16* [[TMP0]], align 2
+// LEWIDTHNUM-NEXT:    [[BF_CLEAR:%.*]] = and i16 [[BF_LOAD]], -128
+// LEWIDTHNUM-NEXT:    [[BF_SET:%.*]] = or i16 [[BF_CLEAR]], 1
+// LEWIDTHNUM-NEXT:    store volatile i16 [[BF_SET]], i16* [[TMP0]], align 2
+// LEWIDTHNUM-NEXT:    ret void
+//
+// BEWIDTHNUM-LABEL: @st3_check_store(
+// BEWIDTHNUM-NEXT:  entry:
+// BEWIDTHNUM-NEXT:    [[TMP0:%.*]] = bitcast %struct.st3* [[M:%.*]] to i16*
+// BEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load volatile i16, i16* [[TMP0]], align 2
+// BEWIDTHNUM-NEXT:    [[BF_CLEAR:%.*]] = and i16 [[BF_LOAD]], 511
+// BEWIDTHNUM-NEXT:    [[BF_SET:%.*]] = or i16 [[BF_CLEAR]], 512
+// BEWIDTHNUM-NEXT:    store volatile i16 [[BF_SET]], i16* [[TMP0]], align 2
+// BEWIDTHNUM-NEXT:    ret void
+//
 void st3_check_store(struct st3 *m) {
   m->c = 1;
 }
@@ -221,6 +645,68 @@ struct st4 {
 // BE-NEXT:    [[CONV:%.*]] = ashr exact i32 [[SEXT]], 24
 // BE-NEXT:    ret i32 [[CONV]]
 //
+// LENUMLOADS-LABEL: @st4_check_load(
+// LENUMLOADS-NEXT:  entry:
+// LENUMLOADS-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST4:%.*]], %struct.st4* [[M:%.*]], i32 0, i32 0
+// LENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i16, i16* [[TMP0]], align 4
+// LENUMLOADS-NEXT:    [[BF_SHL:%.*]] = shl i16 [[BF_LOAD]], 2
+// LENUMLOADS-NEXT:    [[BF_ASHR:%.*]] = ashr i16 [[BF_SHL]], 11
+// LENUMLOADS-NEXT:    [[BF_CAST:%.*]] = zext i16 [[BF_ASHR]] to i32
+// LENUMLOADS-NEXT:    [[SEXT:%.*]] = shl i32 [[BF_CAST]], 24
+// LENUMLOADS-NEXT:    [[CONV:%.*]] = ashr exact i32 [[SEXT]], 24
+// LENUMLOADS-NEXT:    ret i32 [[CONV]]
+//
+// BENUMLOADS-LABEL: @st4_check_load(
+// BENUMLOADS-NEXT:  entry:
+// BENUMLOADS-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST4:%.*]], %struct.st4* [[M:%.*]], i32 0, i32 0
+// BENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i16, i16* [[TMP0]], align 4
+// BENUMLOADS-NEXT:    [[BF_SHL:%.*]] = shl i16 [[BF_LOAD]], 9
+// BENUMLOADS-NEXT:    [[BF_ASHR:%.*]] = ashr i16 [[BF_SHL]], 11
+// BENUMLOADS-NEXT:    [[BF_CAST:%.*]] = zext i16 [[BF_ASHR]] to i32
+// BENUMLOADS-NEXT:    [[SEXT:%.*]] = shl i32 [[BF_CAST]], 24
+// BENUMLOADS-NEXT:    [[CONV:%.*]] = ashr exact i32 [[SEXT]], 24
+// BENUMLOADS-NEXT:    ret i32 [[CONV]]
+//
+// LEWIDTH-LABEL: @st4_check_load(
+// LEWIDTH-NEXT:  entry:
+// LEWIDTH-NEXT:    [[TMP0:%.*]] = bitcast %struct.st4* [[M:%.*]] to i8*
+// LEWIDTH-NEXT:    [[TMP1:%.*]] = getelementptr inbounds i8, i8* [[TMP0]], i32 1
+// LEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[TMP1]], align 1
+// LEWIDTH-NEXT:    [[BF_SHL:%.*]] = shl i8 [[BF_LOAD]], 2
+// LEWIDTH-NEXT:    [[BF_ASHR:%.*]] = ashr i8 [[BF_SHL]], 3
+// LEWIDTH-NEXT:    [[CONV:%.*]] = sext i8 [[BF_ASHR]] to i32
+// LEWIDTH-NEXT:    ret i32 [[CONV]]
+//
+// BEWIDTH-LABEL: @st4_check_load(
+// BEWIDTH-NEXT:  entry:
+// BEWIDTH-NEXT:    [[TMP0:%.*]] = bitcast %struct.st4* [[M:%.*]] to i8*
+// BEWIDTH-NEXT:    [[TMP1:%.*]] = getelementptr inbounds i8, i8* [[TMP0]], i32 1
+// BEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[TMP1]], align 1
+// BEWIDTH-NEXT:    [[BF_SHL:%.*]] = shl i8 [[BF_LOAD]], 1
+// BEWIDTH-NEXT:    [[BF_ASHR:%.*]] = ashr i8 [[BF_SHL]], 3
+// BEWIDTH-NEXT:    [[CONV:%.*]] = sext i8 [[BF_ASHR]] to i32
+// BEWIDTH-NEXT:    ret i32 [[CONV]]
+//
+// LEWIDTHNUM-LABEL: @st4_check_load(
+// LEWIDTHNUM-NEXT:  entry:
+// LEWIDTHNUM-NEXT:    [[TMP0:%.*]] = bitcast %struct.st4* [[M:%.*]] to i8*
+// LEWIDTHNUM-NEXT:    [[TMP1:%.*]] = getelementptr inbounds i8, i8* [[TMP0]], i32 1
+// LEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[TMP1]], align 1
+// LEWIDTHNUM-NEXT:    [[BF_SHL:%.*]] = shl i8 [[BF_LOAD]], 2
+// LEWIDTHNUM-NEXT:    [[BF_ASHR:%.*]] = ashr i8 [[BF_SHL]], 3
+// LEWIDTHNUM-NEXT:    [[CONV:%.*]] = sext i8 [[BF_ASHR]] to i32
+// LEWIDTHNUM-NEXT:    ret i32 [[CONV]]
+//
+// BEWIDTHNUM-LABEL: @st4_check_load(
+// BEWIDTHNUM-NEXT:  entry:
+// BEWIDTHNUM-NEXT:    [[TMP0:%.*]] = bitcast %struct.st4* [[M:%.*]] to i8*
+// BEWIDTHNUM-NEXT:    [[TMP1:%.*]] = getelementptr inbounds i8, i8* [[TMP0]], i32 1
+// BEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[TMP1]], align 1
+// BEWIDTHNUM-NEXT:    [[BF_SHL:%.*]] = shl i8 [[BF_LOAD]], 1
+// BEWIDTHNUM-NEXT:    [[BF_ASHR:%.*]] = ashr i8 [[BF_SHL]], 3
+// BEWIDTHNUM-NEXT:    [[CONV:%.*]] = sext i8 [[BF_ASHR]] to i32
+// BEWIDTHNUM-NEXT:    ret i32 [[CONV]]
+//
 int st4_check_load(struct st4 *m) {
   return m->c;
 }
@@ -243,6 +729,64 @@ int st4_check_load(struct st4 *m) {
 // BE-NEXT:    store volatile i16 [[BF_SET]], i16* [[TMP0]], align 4
 // BE-NEXT:    ret void
 //
+// LENUMLOADS-LABEL: @st4_check_store(
+// LENUMLOADS-NEXT:  entry:
+// LENUMLOADS-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST4:%.*]], %struct.st4* [[M:%.*]], i32 0, i32 0
+// LENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i16, i16* [[TMP0]], align 4
+// LENUMLOADS-NEXT:    [[BF_CLEAR:%.*]] = and i16 [[BF_LOAD]], -15873
+// LENUMLOADS-NEXT:    [[BF_SET:%.*]] = or i16 [[BF_CLEAR]], 512
+// LENUMLOADS-NEXT:    store volatile i16 [[BF_SET]], i16* [[TMP0]], align 4
+// LENUMLOADS-NEXT:    ret void
+//
+// BENUMLOADS-LABEL: @st4_check_store(
+// BENUMLOADS-NEXT:  entry:
+// BENUMLOADS-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST4:%.*]], %struct.st4* [[M:%.*]], i32 0, i32 0
+// BENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i16, i16* [[TMP0]], align 4
+// BENUMLOADS-NEXT:    [[BF_CLEAR:%.*]] = and i16 [[BF_LOAD]], -125
+// BENUMLOADS-NEXT:    [[BF_SET:%.*]] = or i16 [[BF_CLEAR]], 4
+// BENUMLOADS-NEXT:    store volatile i16 [[BF_SET]], i16* [[TMP0]], align 4
+// BENUMLOADS-NEXT:    ret void
+//
+// LEWIDTH-LABEL: @st4_check_store(
+// LEWIDTH-NEXT:  entry:
+// LEWIDTH-NEXT:    [[TMP0:%.*]] = bitcast %struct.st4* [[M:%.*]] to i8*
+// LEWIDTH-NEXT:    [[TMP1:%.*]] = getelementptr inbounds i8, i8* [[TMP0]], i32 1
+// LEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[TMP1]], align 1
+// LEWIDTH-NEXT:    [[BF_CLEAR:%.*]] = and i8 [[BF_LOAD]], -63
+// LEWIDTH-NEXT:    [[BF_SET:%.*]] = or i8 [[BF_CLEAR]], 2
+// LEWIDTH-NEXT:    store volatile i8 [[BF_SET]], i8* [[TMP1]], align 1
+// LEWIDTH-NEXT:    ret void
+//
+// BEWIDTH-LABEL: @st4_check_store(
+// BEWIDTH-NEXT:  entry:
+// BEWIDTH-NEXT:    [[TMP0:%.*]] = bitcast %struct.st4* [[M:%.*]] to i8*
+// BEWIDTH-NEXT:    [[TMP1:%.*]] = getelementptr inbounds i8, i8* [[TMP0]], i32 1
+// BEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[TMP1]], align 1
+// BEWIDTH-NEXT:    [[BF_CLEAR:%.*]] = and i8 [[BF_LOAD]], -125
+// BEWIDTH-NEXT:    [[BF_SET:%.*]] = or i8 [[BF_CLEAR]], 4
+// BEWIDTH-NEXT:    store volatile i8 [[BF_SET]], i8* [[TMP1]], align 1
+// BEWIDTH-NEXT:    ret void
+//
+// LEWIDTHNUM-LABEL: @st4_check_store(
+// LEWIDTHNUM-NEXT:  entry:
+// LEWIDTHNUM-NEXT:    [[TMP0:%.*]] = bitcast %struct.st4* [[M:%.*]] to i8*
+// LEWIDTHNUM-NEXT:    [[TMP1:%.*]] = getelementptr inbounds i8, i8* [[TMP0]], i32 1
+// LEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[TMP1]], align 1
+// LEWIDTHNUM-NEXT:    [[BF_CLEAR:%.*]] = and i8 [[BF_LOAD]], -63
+// LEWIDTHNUM-NEXT:    [[BF_SET:%.*]] = or i8 [[BF_CLEAR]], 2
+// LEWIDTHNUM-NEXT:    store volatile i8 [[BF_SET]], i8* [[TMP1]], align 1
+// LEWIDTHNUM-NEXT:    ret void
+//
+// BEWIDTHNUM-LABEL: @st4_check_store(
+// BEWIDTHNUM-NEXT:  entry:
+// BEWIDTHNUM-NEXT:    [[TMP0:%.*]] = bitcast %struct.st4* [[M:%.*]] to i8*
+// BEWIDTHNUM-NEXT:    [[TMP1:%.*]] = getelementptr inbounds i8, i8* [[TMP0]], i32 1
+// BEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[TMP1]], align 1
+// BEWIDTHNUM-NEXT:    [[BF_CLEAR:%.*]] = and i8 [[BF_LOAD]], -125
+// BEWIDTHNUM-NEXT:    [[BF_SET:%.*]] = or i8 [[BF_CLEAR]], 4
+// BEWIDTHNUM-NEXT:    store volatile i8 [[BF_SET]], i8* [[TMP1]], align 1
+// BEWIDTHNUM-NEXT:    ret void
+//
 void st4_check_store(struct st4 *m) {
   m->c = 1;
 }
@@ -265,6 +809,60 @@ void st4_check_store(struct st4 *m) {
 // BE-NEXT:    store i16 [[BF_SET]], i16* [[TMP0]], align 4
 // BE-NEXT:    ret void
 //
+// LENUMLOADS-LABEL: @st4_check_nonv_store(
+// LENUMLOADS-NEXT:  entry:
+// LENUMLOADS-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST4:%.*]], %struct.st4* [[M:%.*]], i32 0, i32 0
+// LENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load i16, i16* [[TMP0]], align 4
+// LENUMLOADS-NEXT:    [[BF_CLEAR:%.*]] = and i16 [[BF_LOAD]], -512
+// LENUMLOADS-NEXT:    [[BF_SET:%.*]] = or i16 [[BF_CLEAR]], 1
+// LENUMLOADS-NEXT:    store i16 [[BF_SET]], i16* [[TMP0]], align 4
+// LENUMLOADS-NEXT:    ret void
+//
+// BENUMLOADS-LABEL: @st4_check_nonv_store(
+// BENUMLOADS-NEXT:  entry:
+// BENUMLOADS-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST4:%.*]], %struct.st4* [[M:%.*]], i32 0, i32 0
+// BENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load i16, i16* [[TMP0]], align 4
+// BENUMLOADS-NEXT:    [[BF_CLEAR:%.*]] = and i16 [[BF_LOAD]], 127
+// BENUMLOADS-NEXT:    [[BF_SET:%.*]] = or i16 [[BF_CLEAR]], 128
+// BENUMLOADS-NEXT:    store i16 [[BF_SET]], i16* [[TMP0]], align 4
+// BENUMLOADS-NEXT:    ret void
+//
+// LEWIDTH-LABEL: @st4_check_nonv_store(
+// LEWIDTH-NEXT:  entry:
+// LEWIDTH-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST4:%.*]], %struct.st4* [[M:%.*]], i32 0, i32 0
+// LEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load i16, i16* [[TMP0]], align 4
+// LEWIDTH-NEXT:    [[BF_CLEAR:%.*]] = and i16 [[BF_LOAD]], -512
+// LEWIDTH-NEXT:    [[BF_SET:%.*]] = or i16 [[BF_CLEAR]], 1
+// LEWIDTH-NEXT:    store i16 [[BF_SET]], i16* [[TMP0]], align 4
+// LEWIDTH-NEXT:    ret void
+//
+// BEWIDTH-LABEL: @st4_check_nonv_store(
+// BEWIDTH-NEXT:  entry:
+// BEWIDTH-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST4:%.*]], %struct.st4* [[M:%.*]], i32 0, i32 0
+// BEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load i16, i16* [[TMP0]], align 4
+// BEWIDTH-NEXT:    [[BF_CLEAR:%.*]] = and i16 [[BF_LOAD]], 127
+// BEWIDTH-NEXT:    [[BF_SET:%.*]] = or i16 [[BF_CLEAR]], 128
+// BEWIDTH-NEXT:    store i16 [[BF_SET]], i16* [[TMP0]], align 4
+// BEWIDTH-NEXT:    ret void
+//
+// LEWIDTHNUM-LABEL: @st4_check_nonv_store(
+// LEWIDTHNUM-NEXT:  entry:
+// LEWIDTHNUM-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST4:%.*]], %struct.st4* [[M:%.*]], i32 0, i32 0
+// LEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load i16, i16* [[TMP0]], align 4
+// LEWIDTHNUM-NEXT:    [[BF_CLEAR:%.*]] = and i16 [[BF_LOAD]], -512
+// LEWIDTHNUM-NEXT:    [[BF_SET:%.*]] = or i16 [[BF_CLEAR]], 1
+// LEWIDTHNUM-NEXT:    store i16 [[BF_SET]], i16* [[TMP0]], align 4
+// LEWIDTHNUM-NEXT:    ret void
+//
+// BEWIDTHNUM-LABEL: @st4_check_nonv_store(
+// BEWIDTHNUM-NEXT:  entry:
+// BEWIDTHNUM-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST4:%.*]], %struct.st4* [[M:%.*]], i32 0, i32 0
+// BEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load i16, i16* [[TMP0]], align 4
+// BEWIDTHNUM-NEXT:    [[BF_CLEAR:%.*]] = and i16 [[BF_LOAD]], 127
+// BEWIDTHNUM-NEXT:    [[BF_SET:%.*]] = or i16 [[BF_CLEAR]], 128
+// BEWIDTHNUM-NEXT:    store i16 [[BF_SET]], i16* [[TMP0]], align 4
+// BEWIDTHNUM-NEXT:    ret void
+//
 void st4_check_nonv_store(struct st4 *m) {
   m->b = 1;
 }
@@ -291,6 +889,57 @@ struct st5 {
 // BE-NEXT:    [[CONV:%.*]] = sext i8 [[BF_ASHR]] to i32
 // BE-NEXT:    ret i32 [[CONV]]
 //
+// LENUMLOADS-LABEL: @st5_check_load(
+// LENUMLOADS-NEXT:  entry:
+// LENUMLOADS-NEXT:    [[C:%.*]] = getelementptr inbounds [[STRUCT_ST5:%.*]], %struct.st5* [[M:%.*]], i32 0, i32 1
+// LENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[C]], align 2
+// LENUMLOADS-NEXT:    [[BF_SHL:%.*]] = shl i8 [[BF_LOAD]], 3
+// LENUMLOADS-NEXT:    [[BF_ASHR:%.*]] = ashr exact i8 [[BF_SHL]], 3
+// LENUMLOADS-NEXT:    [[CONV:%.*]] = sext i8 [[BF_ASHR]] to i32
+// LENUMLOADS-NEXT:    ret i32 [[CONV]]
+//
+// BENUMLOADS-LABEL: @st5_check_load(
+// BENUMLOADS-NEXT:  entry:
+// BENUMLOADS-NEXT:    [[C:%.*]] = getelementptr inbounds [[STRUCT_ST5:%.*]], %struct.st5* [[M:%.*]], i32 0, i32 1
+// BENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[C]], align 2
+// BENUMLOADS-NEXT:    [[BF_ASHR:%.*]] = ashr i8 [[BF_LOAD]], 3
+// BENUMLOADS-NEXT:    [[CONV:%.*]] = sext i8 [[BF_ASHR]] to i32
+// BENUMLOADS-NEXT:    ret i32 [[CONV]]
+//
+// LEWIDTH-LABEL: @st5_check_load(
+// LEWIDTH-NEXT:  entry:
+// LEWIDTH-NEXT:    [[C:%.*]] = getelementptr inbounds [[STRUCT_ST5:%.*]], %struct.st5* [[M:%.*]], i32 0, i32 1
+// LEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[C]], align 2
+// LEWIDTH-NEXT:    [[BF_SHL:%.*]] = shl i8 [[BF_LOAD]], 3
+// LEWIDTH-NEXT:    [[BF_ASHR:%.*]] = ashr exact i8 [[BF_SHL]], 3
+// LEWIDTH-NEXT:    [[CONV:%.*]] = sext i8 [[BF_ASHR]] to i32
+// LEWIDTH-NEXT:    ret i32 [[CONV]]
+//
+// BEWIDTH-LABEL: @st5_check_load(
+// BEWIDTH-NEXT:  entry:
+// BEWIDTH-NEXT:    [[C:%.*]] = getelementptr inbounds [[STRUCT_ST5:%.*]], %struct.st5* [[M:%.*]], i32 0, i32 1
+// BEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[C]], align 2
+// BEWIDTH-NEXT:    [[BF_ASHR:%.*]] = ashr i8 [[BF_LOAD]], 3
+// BEWIDTH-NEXT:    [[CONV:%.*]] = sext i8 [[BF_ASHR]] to i32
+// BEWIDTH-NEXT:    ret i32 [[CONV]]
+//
+// LEWIDTHNUM-LABEL: @st5_check_load(
+// LEWIDTHNUM-NEXT:  entry:
+// LEWIDTHNUM-NEXT:    [[C:%.*]] = getelementptr inbounds [[STRUCT_ST5:%.*]], %struct.st5* [[M:%.*]], i32 0, i32 1
+// LEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[C]], align 2
+// LEWIDTHNUM-NEXT:    [[BF_SHL:%.*]] = shl i8 [[BF_LOAD]], 3
+// LEWIDTHNUM-NEXT:    [[BF_ASHR:%.*]] = ashr exact i8 [[BF_SHL]], 3
+// LEWIDTHNUM-NEXT:    [[CONV:%.*]] = sext i8 [[BF_ASHR]] to i32
+// LEWIDTHNUM-NEXT:    ret i32 [[CONV]]
+//
+// BEWIDTHNUM-LABEL: @st5_check_load(
+// BEWIDTHNUM-NEXT:  entry:
+// BEWIDTHNUM-NEXT:    [[C:%.*]] = getelementptr inbounds [[STRUCT_ST5:%.*]], %struct.st5* [[M:%.*]], i32 0, i32 1
+// BEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[C]], align 2
+// BEWIDTHNUM-NEXT:    [[BF_ASHR:%.*]] = ashr i8 [[BF_LOAD]], 3
+// BEWIDTHNUM-NEXT:    [[CONV:%.*]] = sext i8 [[BF_ASHR]] to i32
+// BEWIDTHNUM-NEXT:    ret i32 [[CONV]]
+//
 int st5_check_load(struct st5 *m) {
   return m->c;
 }
@@ -313,6 +962,60 @@ int st5_check_load(struct st5 *m) {
 // BE-NEXT:    store volatile i8 [[BF_SET]], i8* [[C]], align 2
 // BE-NEXT:    ret void
 //
+// LENUMLOADS-LABEL: @st5_check_store(
+// LENUMLOADS-NEXT:  entry:
+// LENUMLOADS-NEXT:    [[C:%.*]] = getelementptr inbounds [[STRUCT_ST5:%.*]], %struct.st5* [[M:%.*]], i32 0, i32 1
+// LENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[C]], align 2
+// LENUMLOADS-NEXT:    [[BF_CLEAR:%.*]] = and i8 [[BF_LOAD]], -32
+// LENUMLOADS-NEXT:    [[BF_SET:%.*]] = or i8 [[BF_CLEAR]], 1
+// LENUMLOADS-NEXT:    store volatile i8 [[BF_SET]], i8* [[C]], align 2
+// LENUMLOADS-NEXT:    ret void
+//
+// BENUMLOADS-LABEL: @st5_check_store(
+// BENUMLOADS-NEXT:  entry:
+// BENUMLOADS-NEXT:    [[C:%.*]] = getelementptr inbounds [[STRUCT_ST5:%.*]], %struct.st5* [[M:%.*]], i32 0, i32 1
+// BENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[C]], align 2
+// BENUMLOADS-NEXT:    [[BF_CLEAR:%.*]] = and i8 [[BF_LOAD]], 7
+// BENUMLOADS-NEXT:    [[BF_SET:%.*]] = or i8 [[BF_CLEAR]], 8
+// BENUMLOADS-NEXT:    store volatile i8 [[BF_SET]], i8* [[C]], align 2
+// BENUMLOADS-NEXT:    ret void
+//
+// LEWIDTH-LABEL: @st5_check_store(
+// LEWIDTH-NEXT:  entry:
+// LEWIDTH-NEXT:    [[C:%.*]] = getelementptr inbounds [[STRUCT_ST5:%.*]], %struct.st5* [[M:%.*]], i32 0, i32 1
+// LEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[C]], align 2
+// LEWIDTH-NEXT:    [[BF_CLEAR:%.*]] = and i8 [[BF_LOAD]], -32
+// LEWIDTH-NEXT:    [[BF_SET:%.*]] = or i8 [[BF_CLEAR]], 1
+// LEWIDTH-NEXT:    store volatile i8 [[BF_SET]], i8* [[C]], align 2
+// LEWIDTH-NEXT:    ret void
+//
+// BEWIDTH-LABEL: @st5_check_store(
+// BEWIDTH-NEXT:  entry:
+// BEWIDTH-NEXT:    [[C:%.*]] = getelementptr inbounds [[STRUCT_ST5:%.*]], %struct.st5* [[M:%.*]], i32 0, i32 1
+// BEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[C]], align 2
+// BEWIDTH-NEXT:    [[BF_CLEAR:%.*]] = and i8 [[BF_LOAD]], 7
+// BEWIDTH-NEXT:    [[BF_SET:%.*]] = or i8 [[BF_CLEAR]], 8
+// BEWIDTH-NEXT:    store volatile i8 [[BF_SET]], i8* [[C]], align 2
+// BEWIDTH-NEXT:    ret void
+//
+// LEWIDTHNUM-LABEL: @st5_check_store(
+// LEWIDTHNUM-NEXT:  entry:
+// LEWIDTHNUM-NEXT:    [[C:%.*]] = getelementptr inbounds [[STRUCT_ST5:%.*]], %struct.st5* [[M:%.*]], i32 0, i32 1
+// LEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[C]], align 2
+// LEWIDTHNUM-NEXT:    [[BF_CLEAR:%.*]] = and i8 [[BF_LOAD]], -32
+// LEWIDTHNUM-NEXT:    [[BF_SET:%.*]] = or i8 [[BF_CLEAR]], 1
+// LEWIDTHNUM-NEXT:    store volatile i8 [[BF_SET]], i8* [[C]], align 2
+// LEWIDTHNUM-NEXT:    ret void
+//
+// BEWIDTHNUM-LABEL: @st5_check_store(
+// BEWIDTHNUM-NEXT:  entry:
+// BEWIDTHNUM-NEXT:    [[C:%.*]] = getelementptr inbounds [[STRUCT_ST5:%.*]], %struct.st5* [[M:%.*]], i32 0, i32 1
+// BEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[C]], align 2
+// BEWIDTHNUM-NEXT:    [[BF_CLEAR:%.*]] = and i8 [[BF_LOAD]], 7
+// BEWIDTHNUM-NEXT:    [[BF_SET:%.*]] = or i8 [[BF_CLEAR]], 8
+// BEWIDTHNUM-NEXT:    store volatile i8 [[BF_SET]], i8* [[C]], align 2
+// BEWIDTHNUM-NEXT:    ret void
+//
 void st5_check_store(struct st5 *m) {
   m->c = 1;
 }
@@ -331,7 +1034,7 @@ struct st6 {
 // LE-NEXT:    [[BF_ASHR:%.*]] = ashr exact i16 [[BF_SHL]], 4
 // LE-NEXT:    [[BF_CAST:%.*]] = sext i16 [[BF_ASHR]] to i32
 // LE-NEXT:    [[B:%.*]] = getelementptr inbounds [[STRUCT_ST6]], %struct.st6* [[M]], i32 0, i32 1
-// LE-NEXT:    [[TMP1:%.*]] = load volatile i8, i8* [[B]], align 2
+// LE-NEXT:    [[TMP1:%.*]] = load volatile i8, i8* [[B]], align 2, !tbaa !3
 // LE-NEXT:    [[CONV:%.*]] = sext i8 [[TMP1]] to i32
 // LE-NEXT:    [[ADD:%.*]] = add nsw i32 [[BF_CAST]], [[CONV]]
 // LE-NEXT:    [[C:%.*]] = getelementptr inbounds [[STRUCT_ST6]], %struct.st6* [[M]], i32 0, i32 2
@@ -349,7 +1052,7 @@ struct st6 {
 // BE-NEXT:    [[BF_ASHR:%.*]] = ashr i16 [[BF_LOAD]], 4
 // BE-NEXT:    [[BF_CAST:%.*]] = sext i16 [[BF_ASHR]] to i32
 // BE-NEXT:    [[B:%.*]] = getelementptr inbounds [[STRUCT_ST6]], %struct.st6* [[M]], i32 0, i32 1
-// BE-NEXT:    [[TMP1:%.*]] = load volatile i8, i8* [[B]], align 2
+// BE-NEXT:    [[TMP1:%.*]] = load volatile i8, i8* [[B]], align 2, !tbaa !3
 // BE-NEXT:    [[CONV:%.*]] = sext i8 [[TMP1]] to i32
 // BE-NEXT:    [[ADD:%.*]] = add nsw i32 [[BF_CAST]], [[CONV]]
 // BE-NEXT:    [[C:%.*]] = getelementptr inbounds [[STRUCT_ST6]], %struct.st6* [[M]], i32 0, i32 2
@@ -359,6 +1062,114 @@ struct st6 {
 // BE-NEXT:    [[ADD4:%.*]] = add nsw i32 [[ADD]], [[BF_CAST3]]
 // BE-NEXT:    ret i32 [[ADD4]]
 //
+// LENUMLOADS-LABEL: @st6_check_load(
+// LENUMLOADS-NEXT:  entry:
+// LENUMLOADS-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST6:%.*]], %struct.st6* [[M:%.*]], i32 0, i32 0
+// LENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i16, i16* [[TMP0]], align 4
+// LENUMLOADS-NEXT:    [[BF_SHL:%.*]] = shl i16 [[BF_LOAD]], 4
+// LENUMLOADS-NEXT:    [[BF_ASHR:%.*]] = ashr exact i16 [[BF_SHL]], 4
+// LENUMLOADS-NEXT:    [[BF_CAST:%.*]] = sext i16 [[BF_ASHR]] to i32
+// LENUMLOADS-NEXT:    [[B:%.*]] = getelementptr inbounds [[STRUCT_ST6]], %struct.st6* [[M]], i32 0, i32 1
+// LENUMLOADS-NEXT:    [[TMP1:%.*]] = load volatile i8, i8* [[B]], align 2, !tbaa !3
+// LENUMLOADS-NEXT:    [[CONV:%.*]] = sext i8 [[TMP1]] to i32
+// LENUMLOADS-NEXT:    [[ADD:%.*]] = add nsw i32 [[BF_CAST]], [[CONV]]
+// LENUMLOADS-NEXT:    [[C:%.*]] = getelementptr inbounds [[STRUCT_ST6]], %struct.st6* [[M]], i32 0, i32 2
+// LENUMLOADS-NEXT:    [[BF_LOAD1:%.*]] = load volatile i8, i8* [[C]], align 1
+// LENUMLOADS-NEXT:    [[BF_SHL2:%.*]] = shl i8 [[BF_LOAD1]], 3
+// LENUMLOADS-NEXT:    [[BF_ASHR3:%.*]] = ashr exact i8 [[BF_SHL2]], 3
+// LENUMLOADS-NEXT:    [[BF_CAST4:%.*]] = sext i8 [[BF_ASHR3]] to i32
+// LENUMLOADS-NEXT:    [[ADD5:%.*]] = add nsw i32 [[ADD]], [[BF_CAST4]]
+// LENUMLOADS-NEXT:    ret i32 [[ADD5]]
+//
+// BENUMLOADS-LABEL: @st6_check_load(
+// BENUMLOADS-NEXT:  entry:
+// BENUMLOADS-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST6:%.*]], %struct.st6* [[M:%.*]], i32 0, i32 0
+// BENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i16, i16* [[TMP0]], align 4
+// BENUMLOADS-NEXT:    [[BF_ASHR:%.*]] = ashr i16 [[BF_LOAD]], 4
+// BENUMLOADS-NEXT:    [[BF_CAST:%.*]] = sext i16 [[BF_ASHR]] to i32
+// BENUMLOADS-NEXT:    [[B:%.*]] = getelementptr inbounds [[STRUCT_ST6]], %struct.st6* [[M]], i32 0, i32 1
+// BENUMLOADS-NEXT:    [[TMP1:%.*]] = load volatile i8, i8* [[B]], align 2, !tbaa !3
+// BENUMLOADS-NEXT:    [[CONV:%.*]] = sext i8 [[TMP1]] to i32
+// BENUMLOADS-NEXT:    [[ADD:%.*]] = add nsw i32 [[BF_CAST]], [[CONV]]
+// BENUMLOADS-NEXT:    [[C:%.*]] = getelementptr inbounds [[STRUCT_ST6]], %struct.st6* [[M]], i32 0, i32 2
+// BENUMLOADS-NEXT:    [[BF_LOAD1:%.*]] = load volatile i8, i8* [[C]], align 1
+// BENUMLOADS-NEXT:    [[BF_ASHR2:%.*]] = ashr i8 [[BF_LOAD1]], 3
+// BENUMLOADS-NEXT:    [[BF_CAST3:%.*]] = sext i8 [[BF_ASHR2]] to i32
+// BENUMLOADS-NEXT:    [[ADD4:%.*]] = add nsw i32 [[ADD]], [[BF_CAST3]]
+// BENUMLOADS-NEXT:    ret i32 [[ADD4]]
+//
+// LEWIDTH-LABEL: @st6_check_load(
+// LEWIDTH-NEXT:  entry:
+// LEWIDTH-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST6:%.*]], %struct.st6* [[M:%.*]], i32 0, i32 0
+// LEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load volatile i16, i16* [[TMP0]], align 4
+// LEWIDTH-NEXT:    [[BF_SHL:%.*]] = shl i16 [[BF_LOAD]], 4
+// LEWIDTH-NEXT:    [[BF_ASHR:%.*]] = ashr exact i16 [[BF_SHL]], 4
+// LEWIDTH-NEXT:    [[BF_CAST:%.*]] = sext i16 [[BF_ASHR]] to i32
+// LEWIDTH-NEXT:    [[B:%.*]] = getelementptr inbounds [[STRUCT_ST6]], %struct.st6* [[M]], i32 0, i32 1
+// LEWIDTH-NEXT:    [[TMP1:%.*]] = load volatile i8, i8* [[B]], align 2, !tbaa !3
+// LEWIDTH-NEXT:    [[CONV:%.*]] = sext i8 [[TMP1]] to i32
+// LEWIDTH-NEXT:    [[ADD:%.*]] = add nsw i32 [[BF_CAST]], [[CONV]]
+// LEWIDTH-NEXT:    [[C:%.*]] = getelementptr inbounds [[STRUCT_ST6]], %struct.st6* [[M]], i32 0, i32 2
+// LEWIDTH-NEXT:    [[BF_LOAD1:%.*]] = load volatile i8, i8* [[C]], align 1
+// LEWIDTH-NEXT:    [[BF_SHL2:%.*]] = shl i8 [[BF_LOAD1]], 3
+// LEWIDTH-NEXT:    [[BF_ASHR3:%.*]] = ashr exact i8 [[BF_SHL2]], 3
+// LEWIDTH-NEXT:    [[BF_CAST4:%.*]] = sext i8 [[BF_ASHR3]] to i32
+// LEWIDTH-NEXT:    [[ADD5:%.*]] = add nsw i32 [[ADD]], [[BF_CAST4]]
+// LEWIDTH-NEXT:    ret i32 [[ADD5]]
+//
+// BEWIDTH-LABEL: @st6_check_load(
+// BEWIDTH-NEXT:  entry:
+// BEWIDTH-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST6:%.*]], %struct.st6* [[M:%.*]], i32 0, i32 0
+// BEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load volatile i16, i16* [[TMP0]], align 4
+// BEWIDTH-NEXT:    [[BF_ASHR:%.*]] = ashr i16 [[BF_LOAD]], 4
+// BEWIDTH-NEXT:    [[BF_CAST:%.*]] = sext i16 [[BF_ASHR]] to i32
+// BEWIDTH-NEXT:    [[B:%.*]] = getelementptr inbounds [[STRUCT_ST6]], %struct.st6* [[M]], i32 0, i32 1
+// BEWIDTH-NEXT:    [[TMP1:%.*]] = load volatile i8, i8* [[B]], align 2, !tbaa !3
+// BEWIDTH-NEXT:    [[CONV:%.*]] = sext i8 [[TMP1]] to i32
+// BEWIDTH-NEXT:    [[ADD:%.*]] = add nsw i32 [[BF_CAST]], [[CONV]]
+// BEWIDTH-NEXT:    [[C:%.*]] = getelementptr inbounds [[STRUCT_ST6]], %struct.st6* [[M]], i32 0, i32 2
+// BEWIDTH-NEXT:    [[BF_LOAD1:%.*]] = load volatile i8, i8* [[C]], align 1
+// BEWIDTH-NEXT:    [[BF_ASHR2:%.*]] = ashr i8 [[BF_LOAD1]], 3
+// BEWIDTH-NEXT:    [[BF_CAST3:%.*]] = sext i8 [[BF_ASHR2]] to i32
+// BEWIDTH-NEXT:    [[ADD4:%.*]] = add nsw i32 [[ADD]], [[BF_CAST3]]
+// BEWIDTH-NEXT:    ret i32 [[ADD4]]
+//
+// LEWIDTHNUM-LABEL: @st6_check_load(
+// LEWIDTHNUM-NEXT:  entry:
+// LEWIDTHNUM-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST6:%.*]], %struct.st6* [[M:%.*]], i32 0, i32 0
+// LEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load volatile i16, i16* [[TMP0]], align 4
+// LEWIDTHNUM-NEXT:    [[BF_SHL:%.*]] = shl i16 [[BF_LOAD]], 4
+// LEWIDTHNUM-NEXT:    [[BF_ASHR:%.*]] = ashr exact i16 [[BF_SHL]], 4
+// LEWIDTHNUM-NEXT:    [[BF_CAST:%.*]] = sext i16 [[BF_ASHR]] to i32
+// LEWIDTHNUM-NEXT:    [[B:%.*]] = getelementptr inbounds [[STRUCT_ST6]], %struct.st6* [[M]], i32 0, i32 1
+// LEWIDTHNUM-NEXT:    [[TMP1:%.*]] = load volatile i8, i8* [[B]], align 2, !tbaa !3
+// LEWIDTHNUM-NEXT:    [[CONV:%.*]] = sext i8 [[TMP1]] to i32
+// LEWIDTHNUM-NEXT:    [[ADD:%.*]] = add nsw i32 [[BF_CAST]], [[CONV]]
+// LEWIDTHNUM-NEXT:    [[C:%.*]] = getelementptr inbounds [[STRUCT_ST6]], %struct.st6* [[M]], i32 0, i32 2
+// LEWIDTHNUM-NEXT:    [[BF_LOAD1:%.*]] = load volatile i8, i8* [[C]], align 1
+// LEWIDTHNUM-NEXT:    [[BF_SHL2:%.*]] = shl i8 [[BF_LOAD1]], 3
+// LEWIDTHNUM-NEXT:    [[BF_ASHR3:%.*]] = ashr exact i8 [[BF_SHL2]], 3
+// LEWIDTHNUM-NEXT:    [[BF_CAST4:%.*]] = sext i8 [[BF_ASHR3]] to i32
+// LEWIDTHNUM-NEXT:    [[ADD5:%.*]] = add nsw i32 [[ADD]], [[BF_CAST4]]
+// LEWIDTHNUM-NEXT:    ret i32 [[ADD5]]
+//
+// BEWIDTHNUM-LABEL: @st6_check_load(
+// BEWIDTHNUM-NEXT:  entry:
+// BEWIDTHNUM-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST6:%.*]], %struct.st6* [[M:%.*]], i32 0, i32 0
+// BEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load volatile i16, i16* [[TMP0]], align 4
+// BEWIDTHNUM-NEXT:    [[BF_ASHR:%.*]] = ashr i16 [[BF_LOAD]], 4
+// BEWIDTHNUM-NEXT:    [[BF_CAST:%.*]] = sext i16 [[BF_ASHR]] to i32
+// BEWIDTHNUM-NEXT:    [[B:%.*]] = getelementptr inbounds [[STRUCT_ST6]], %struct.st6* [[M]], i32 0, i32 1
+// BEWIDTHNUM-NEXT:    [[TMP1:%.*]] = load volatile i8, i8* [[B]], align 2, !tbaa !3
+// BEWIDTHNUM-NEXT:    [[CONV:%.*]] = sext i8 [[TMP1]] to i32
+// BEWIDTHNUM-NEXT:    [[ADD:%.*]] = add nsw i32 [[BF_CAST]], [[CONV]]
+// BEWIDTHNUM-NEXT:    [[C:%.*]] = getelementptr inbounds [[STRUCT_ST6]], %struct.st6* [[M]], i32 0, i32 2
+// BEWIDTHNUM-NEXT:    [[BF_LOAD1:%.*]] = load volatile i8, i8* [[C]], align 1
+// BEWIDTHNUM-NEXT:    [[BF_ASHR2:%.*]] = ashr i8 [[BF_LOAD1]], 3
+// BEWIDTHNUM-NEXT:    [[BF_CAST3:%.*]] = sext i8 [[BF_ASHR2]] to i32
+// BEWIDTHNUM-NEXT:    [[ADD4:%.*]] = add nsw i32 [[ADD]], [[BF_CAST3]]
+// BEWIDTHNUM-NEXT:    ret i32 [[ADD4]]
+//
 int st6_check_load(volatile struct st6 *m) {
   int x = m->a;
   x += m->b;
@@ -374,7 +1185,7 @@ int st6_check_load(volatile struct st6 *m) {
 // LE-NEXT:    [[BF_SET:%.*]] = or i16 [[BF_CLEAR]], 1
 // LE-NEXT:    store i16 [[BF_SET]], i16* [[TMP0]], align 4
 // LE-NEXT:    [[B:%.*]] = getelementptr inbounds [[STRUCT_ST6]], %struct.st6* [[M]], i32 0, i32 1
-// LE-NEXT:    store i8 2, i8* [[B]], align 2
+// LE-NEXT:    store i8 2, i8* [[B]], align 2, !tbaa !3
 // LE-NEXT:    [[C:%.*]] = getelementptr inbounds [[STRUCT_ST6]], %struct.st6* [[M]], i32 0, i32 2
 // LE-NEXT:    [[BF_LOAD1:%.*]] = load i8, i8* [[C]], align 1
 // LE-NEXT:    [[BF_CLEAR2:%.*]] = and i8 [[BF_LOAD1]], -32
@@ -390,7 +1201,7 @@ int st6_check_load(volatile struct st6 *m) {
 // BE-NEXT:    [[BF_SET:%.*]] = or i16 [[BF_CLEAR]], 16
 // BE-NEXT:    store i16 [[BF_SET]], i16* [[TMP0]], align 4
 // BE-NEXT:    [[B:%.*]] = getelementptr inbounds [[STRUCT_ST6]], %struct.st6* [[M]], i32 0, i32 1
-// BE-NEXT:    store i8 2, i8* [[B]], align 2
+// BE-NEXT:    store i8 2, i8* [[B]], align 2, !tbaa !3
 // BE-NEXT:    [[C:%.*]] = getelementptr inbounds [[STRUCT_ST6]], %struct.st6* [[M]], i32 0, i32 2
 // BE-NEXT:    [[BF_LOAD1:%.*]] = load i8, i8* [[C]], align 1
 // BE-NEXT:    [[BF_CLEAR2:%.*]] = and i8 [[BF_LOAD1]], 7
@@ -398,6 +1209,102 @@ int st6_check_load(volatile struct st6 *m) {
 // BE-NEXT:    store i8 [[BF_SET3]], i8* [[C]], align 1
 // BE-NEXT:    ret void
 //
+// LENUMLOADS-LABEL: @st6_check_store(
+// LENUMLOADS-NEXT:  entry:
+// LENUMLOADS-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST6:%.*]], %struct.st6* [[M:%.*]], i32 0, i32 0
+// LENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load i16, i16* [[TMP0]], align 4
+// LENUMLOADS-NEXT:    [[BF_CLEAR:%.*]] = and i16 [[BF_LOAD]], -4096
+// LENUMLOADS-NEXT:    [[BF_SET:%.*]] = or i16 [[BF_CLEAR]], 1
+// LENUMLOADS-NEXT:    store i16 [[BF_SET]], i16* [[TMP0]], align 4
+// LENUMLOADS-NEXT:    [[B:%.*]] = getelementptr inbounds [[STRUCT_ST6]], %struct.st6* [[M]], i32 0, i32 1
+// LENUMLOADS-NEXT:    store i8 2, i8* [[B]], align 2, !tbaa !3
+// LENUMLOADS-NEXT:    [[C:%.*]] = getelementptr inbounds [[STRUCT_ST6]], %struct.st6* [[M]], i32 0, i32 2
+// LENUMLOADS-NEXT:    [[BF_LOAD1:%.*]] = load i8, i8* [[C]], align 1
+// LENUMLOADS-NEXT:    [[BF_CLEAR2:%.*]] = and i8 [[BF_LOAD1]], -32
+// LENUMLOADS-NEXT:    [[BF_SET3:%.*]] = or i8 [[BF_CLEAR2]], 3
+// LENUMLOADS-NEXT:    store i8 [[BF_SET3]], i8* [[C]], align 1
+// LENUMLOADS-NEXT:    ret void
+//
+// BENUMLOADS-LABEL: @st6_check_store(
+// BENUMLOADS-NEXT:  entry:
+// BENUMLOADS-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST6:%.*]], %struct.st6* [[M:%.*]], i32 0, i32 0
+// BENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load i16, i16* [[TMP0]], align 4
+// BENUMLOADS-NEXT:    [[BF_CLEAR:%.*]] = and i16 [[BF_LOAD]], 15
+// BENUMLOADS-NEXT:    [[BF_SET:%.*]] = or i16 [[BF_CLEAR]], 16
+// BENUMLOADS-NEXT:    store i16 [[BF_SET]], i16* [[TMP0]], align 4
+// BENUMLOADS-NEXT:    [[B:%.*]] = getelementptr inbounds [[STRUCT_ST6]], %struct.st6* [[M]], i32 0, i32 1
+// BENUMLOADS-NEXT:    store i8 2, i8* [[B]], align 2, !tbaa !3
+// BENUMLOADS-NEXT:    [[C:%.*]] = getelementptr inbounds [[STRUCT_ST6]], %struct.st6* [[M]], i32 0, i32 2
+// BENUMLOADS-NEXT:    [[BF_LOAD1:%.*]] = load i8, i8* [[C]], align 1
+// BENUMLOADS-NEXT:    [[BF_CLEAR2:%.*]] = and i8 [[BF_LOAD1]], 7
+// BENUMLOADS-NEXT:    [[BF_SET3:%.*]] = or i8 [[BF_CLEAR2]], 24
+// BENUMLOADS-NEXT:    store i8 [[BF_SET3]], i8* [[C]], align 1
+// BENUMLOADS-NEXT:    ret void
+//
+// LEWIDTH-LABEL: @st6_check_store(
+// LEWIDTH-NEXT:  entry:
+// LEWIDTH-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST6:%.*]], %struct.st6* [[M:%.*]], i32 0, i32 0
+// LEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load i16, i16* [[TMP0]], align 4
+// LEWIDTH-NEXT:    [[BF_CLEAR:%.*]] = and i16 [[BF_LOAD]], -4096
+// LEWIDTH-NEXT:    [[BF_SET:%.*]] = or i16 [[BF_CLEAR]], 1
+// LEWIDTH-NEXT:    store i16 [[BF_SET]], i16* [[TMP0]], align 4
+// LEWIDTH-NEXT:    [[B:%.*]] = getelementptr inbounds [[STRUCT_ST6]], %struct.st6* [[M]], i32 0, i32 1
+// LEWIDTH-NEXT:    store i8 2, i8* [[B]], align 2, !tbaa !3
+// LEWIDTH-NEXT:    [[C:%.*]] = getelementptr inbounds [[STRUCT_ST6]], %struct.st6* [[M]], i32 0, i32 2
+// LEWIDTH-NEXT:    [[BF_LOAD1:%.*]] = load i8, i8* [[C]], align 1
+// LEWIDTH-NEXT:    [[BF_CLEAR2:%.*]] = and i8 [[BF_LOAD1]], -32
+// LEWIDTH-NEXT:    [[BF_SET3:%.*]] = or i8 [[BF_CLEAR2]], 3
+// LEWIDTH-NEXT:    store i8 [[BF_SET3]], i8* [[C]], align 1
+// LEWIDTH-NEXT:    ret void
+//
+// BEWIDTH-LABEL: @st6_check_store(
+// BEWIDTH-NEXT:  entry:
+// BEWIDTH-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST6:%.*]], %struct.st6* [[M:%.*]], i32 0, i32 0
+// BEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load i16, i16* [[TMP0]], align 4
+// BEWIDTH-NEXT:    [[BF_CLEAR:%.*]] = and i16 [[BF_LOAD]], 15
+// BEWIDTH-NEXT:    [[BF_SET:%.*]] = or i16 [[BF_CLEAR]], 16
+// BEWIDTH-NEXT:    store i16 [[BF_SET]], i16* [[TMP0]], align 4
+// BEWIDTH-NEXT:    [[B:%.*]] = getelementptr inbounds [[STRUCT_ST6]], %struct.st6* [[M]], i32 0, i32 1
+// BEWIDTH-NEXT:    store i8 2, i8* [[B]], align 2, !tbaa !3
+// BEWIDTH-NEXT:    [[C:%.*]] = getelementptr inbounds [[STRUCT_ST6]], %struct.st6* [[M]], i32 0, i32 2
+// BEWIDTH-NEXT:    [[BF_LOAD1:%.*]] = load i8, i8* [[C]], align 1
+// BEWIDTH-NEXT:    [[BF_CLEAR2:%.*]] = and i8 [[BF_LOAD1]], 7
+// BEWIDTH-NEXT:    [[BF_SET3:%.*]] = or i8 [[BF_CLEAR2]], 24
+// BEWIDTH-NEXT:    store i8 [[BF_SET3]], i8* [[C]], align 1
+// BEWIDTH-NEXT:    ret void
+//
+// LEWIDTHNUM-LABEL: @st6_check_store(
+// LEWIDTHNUM-NEXT:  entry:
+// LEWIDTHNUM-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST6:%.*]], %struct.st6* [[M:%.*]], i32 0, i32 0
+// LEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load i16, i16* [[TMP0]], align 4
+// LEWIDTHNUM-NEXT:    [[BF_CLEAR:%.*]] = and i16 [[BF_LOAD]], -4096
+// LEWIDTHNUM-NEXT:    [[BF_SET:%.*]] = or i16 [[BF_CLEAR]], 1
+// LEWIDTHNUM-NEXT:    store i16 [[BF_SET]], i16* [[TMP0]], align 4
+// LEWIDTHNUM-NEXT:    [[B:%.*]] = getelementptr inbounds [[STRUCT_ST6]], %struct.st6* [[M]], i32 0, i32 1
+// LEWIDTHNUM-NEXT:    store i8 2, i8* [[B]], align 2, !tbaa !3
+// LEWIDTHNUM-NEXT:    [[C:%.*]] = getelementptr inbounds [[STRUCT_ST6]], %struct.st6* [[M]], i32 0, i32 2
+// LEWIDTHNUM-NEXT:    [[BF_LOAD1:%.*]] = load i8, i8* [[C]], align 1
+// LEWIDTHNUM-NEXT:    [[BF_CLEAR2:%.*]] = and i8 [[BF_LOAD1]], -32
+// LEWIDTHNUM-NEXT:    [[BF_SET3:%.*]] = or i8 [[BF_CLEAR2]], 3
+// LEWIDTHNUM-NEXT:    store i8 [[BF_SET3]], i8* [[C]], align 1
+// LEWIDTHNUM-NEXT:    ret void
+//
+// BEWIDTHNUM-LABEL: @st6_check_store(
+// BEWIDTHNUM-NEXT:  entry:
+// BEWIDTHNUM-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST6:%.*]], %struct.st6* [[M:%.*]], i32 0, i32 0
+// BEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load i16, i16* [[TMP0]], align 4
+// BEWIDTHNUM-NEXT:    [[BF_CLEAR:%.*]] = and i16 [[BF_LOAD]], 15
+// BEWIDTHNUM-NEXT:    [[BF_SET:%.*]] = or i16 [[BF_CLEAR]], 16
+// BEWIDTHNUM-NEXT:    store i16 [[BF_SET]], i16* [[TMP0]], align 4
+// BEWIDTHNUM-NEXT:    [[B:%.*]] = getelementptr inbounds [[STRUCT_ST6]], %struct.st6* [[M]], i32 0, i32 1
+// BEWIDTHNUM-NEXT:    store i8 2, i8* [[B]], align 2, !tbaa !3
+// BEWIDTHNUM-NEXT:    [[C:%.*]] = getelementptr inbounds [[STRUCT_ST6]], %struct.st6* [[M]], i32 0, i32 2
+// BEWIDTHNUM-NEXT:    [[BF_LOAD1:%.*]] = load i8, i8* [[C]], align 1
+// BEWIDTHNUM-NEXT:    [[BF_CLEAR2:%.*]] = and i8 [[BF_LOAD1]], 7
+// BEWIDTHNUM-NEXT:    [[BF_SET3:%.*]] = or i8 [[BF_CLEAR2]], 24
+// BEWIDTHNUM-NEXT:    store i8 [[BF_SET3]], i8* [[C]], align 1
+// BEWIDTHNUM-NEXT:    ret void
+//
 void st6_check_store(struct st6 *m) {
   m->a = 1;
   m->b = 2;
@@ -418,10 +1325,10 @@ struct st7b {
 // LE-LABEL: @st7_check_load(
 // LE-NEXT:  entry:
 // LE-NEXT:    [[X:%.*]] = getelementptr inbounds [[STRUCT_ST7B:%.*]], %struct.st7b* [[M:%.*]], i32 0, i32 0
-// LE-NEXT:    [[TMP0:%.*]] = load i8, i8* [[X]], align 4
+// LE-NEXT:    [[TMP0:%.*]] = load i8, i8* [[X]], align 4, !tbaa !8
 // LE-NEXT:    [[CONV:%.*]] = sext i8 [[TMP0]] to i32
 // LE-NEXT:    [[A:%.*]] = getelementptr inbounds [[STRUCT_ST7B]], %struct.st7b* [[M]], i32 0, i32 2, i32 0
-// LE-NEXT:    [[TMP1:%.*]] = load volatile i8, i8* [[A]], align 4
+// LE-NEXT:    [[TMP1:%.*]] = load volatile i8, i8* [[A]], align 4, !tbaa !11
 // LE-NEXT:    [[CONV1:%.*]] = sext i8 [[TMP1]] to i32
 // LE-NEXT:    [[ADD:%.*]] = add nsw i32 [[CONV1]], [[CONV]]
 // LE-NEXT:    [[B:%.*]] = getelementptr inbounds [[STRUCT_ST7B]], %struct.st7b* [[M]], i32 0, i32 2, i32 1
@@ -435,10 +1342,10 @@ struct st7b {
 // BE-LABEL: @st7_check_load(
 // BE-NEXT:  entry:
 // BE-NEXT:    [[X:%.*]] = getelementptr inbounds [[STRUCT_ST7B:%.*]], %struct.st7b* [[M:%.*]], i32 0, i32 0
-// BE-NEXT:    [[TMP0:%.*]] = load i8, i8* [[X]], align 4
+// BE-NEXT:    [[TMP0:%.*]] = load i8, i8* [[X]], align 4, !tbaa !8
 // BE-NEXT:    [[CONV:%.*]] = sext i8 [[TMP0]] to i32
 // BE-NEXT:    [[A:%.*]] = getelementptr inbounds [[STRUCT_ST7B]], %struct.st7b* [[M]], i32 0, i32 2, i32 0
-// BE-NEXT:    [[TMP1:%.*]] = load volatile i8, i8* [[A]], align 4
+// BE-NEXT:    [[TMP1:%.*]] = load volatile i8, i8* [[A]], align 4, !tbaa !11
 // BE-NEXT:    [[CONV1:%.*]] = sext i8 [[TMP1]] to i32
 // BE-NEXT:    [[ADD:%.*]] = add nsw i32 [[CONV1]], [[CONV]]
 // BE-NEXT:    [[B:%.*]] = getelementptr inbounds [[STRUCT_ST7B]], %struct.st7b* [[M]], i32 0, i32 2, i32 1
@@ -448,6 +1355,105 @@ struct st7b {
 // BE-NEXT:    [[ADD3:%.*]] = add nsw i32 [[ADD]], [[BF_CAST]]
 // BE-NEXT:    ret i32 [[ADD3]]
 //
+// LENUMLOADS-LABEL: @st7_check_load(
+// LENUMLOADS-NEXT:  entry:
+// LENUMLOADS-NEXT:    [[X:%.*]] = getelementptr inbounds [[STRUCT_ST7B:%.*]], %struct.st7b* [[M:%.*]], i32 0, i32 0
+// LENUMLOADS-NEXT:    [[TMP0:%.*]] = load i8, i8* [[X]], align 4, !tbaa !8
+// LENUMLOADS-NEXT:    [[CONV:%.*]] = sext i8 [[TMP0]] to i32
+// LENUMLOADS-NEXT:    [[A:%.*]] = getelementptr inbounds [[STRUCT_ST7B]], %struct.st7b* [[M]], i32 0, i32 2, i32 0
+// LENUMLOADS-NEXT:    [[TMP1:%.*]] = load volatile i8, i8* [[A]], align 4, !tbaa !11
+// LENUMLOADS-NEXT:    [[CONV1:%.*]] = sext i8 [[TMP1]] to i32
+// LENUMLOADS-NEXT:    [[ADD:%.*]] = add nsw i32 [[CONV1]], [[CONV]]
+// LENUMLOADS-NEXT:    [[B:%.*]] = getelementptr inbounds [[STRUCT_ST7B]], %struct.st7b* [[M]], i32 0, i32 2, i32 1
+// LENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[B]], align 1
+// LENUMLOADS-NEXT:    [[BF_SHL:%.*]] = shl i8 [[BF_LOAD]], 3
+// LENUMLOADS-NEXT:    [[BF_ASHR:%.*]] = ashr exact i8 [[BF_SHL]], 3
+// LENUMLOADS-NEXT:    [[BF_CAST:%.*]] = sext i8 [[BF_ASHR]] to i32
+// LENUMLOADS-NEXT:    [[ADD3:%.*]] = add nsw i32 [[ADD]], [[BF_CAST]]
+// LENUMLOADS-NEXT:    ret i32 [[ADD3]]
+//
+// BENUMLOADS-LABEL: @st7_check_load(
+// BENUMLOADS-NEXT:  entry:
+// BENUMLOADS-NEXT:    [[X:%.*]] = getelementptr inbounds [[STRUCT_ST7B:%.*]], %struct.st7b* [[M:%.*]], i32 0, i32 0
+// BENUMLOADS-NEXT:    [[TMP0:%.*]] = load i8, i8* [[X]], align 4, !tbaa !8
+// BENUMLOADS-NEXT:    [[CONV:%.*]] = sext i8 [[TMP0]] to i32
+// BENUMLOADS-NEXT:    [[A:%.*]] = getelementptr inbounds [[STRUCT_ST7B]], %struct.st7b* [[M]], i32 0, i32 2, i32 0
+// BENUMLOADS-NEXT:    [[TMP1:%.*]] = load volatile i8, i8* [[A]], align 4, !tbaa !11
+// BENUMLOADS-NEXT:    [[CONV1:%.*]] = sext i8 [[TMP1]] to i32
+// BENUMLOADS-NEXT:    [[ADD:%.*]] = add nsw i32 [[CONV1]], [[CONV]]
+// BENUMLOADS-NEXT:    [[B:%.*]] = getelementptr inbounds [[STRUCT_ST7B]], %struct.st7b* [[M]], i32 0, i32 2, i32 1
+// BENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[B]], align 1
+// BENUMLOADS-NEXT:    [[BF_ASHR:%.*]] = ashr i8 [[BF_LOAD]], 3
+// BENUMLOADS-NEXT:    [[BF_CAST:%.*]] = sext i8 [[BF_ASHR]] to i32
+// BENUMLOADS-NEXT:    [[ADD3:%.*]] = add nsw i32 [[ADD]], [[BF_CAST]]
+// BENUMLOADS-NEXT:    ret i32 [[ADD3]]
+//
+// LEWIDTH-LABEL: @st7_check_load(
+// LEWIDTH-NEXT:  entry:
+// LEWIDTH-NEXT:    [[X:%.*]] = getelementptr inbounds [[STRUCT_ST7B:%.*]], %struct.st7b* [[M:%.*]], i32 0, i32 0
+// LEWIDTH-NEXT:    [[TMP0:%.*]] = load i8, i8* [[X]], align 4, !tbaa !8
+// LEWIDTH-NEXT:    [[CONV:%.*]] = sext i8 [[TMP0]] to i32
+// LEWIDTH-NEXT:    [[A:%.*]] = getelementptr inbounds [[STRUCT_ST7B]], %struct.st7b* [[M]], i32 0, i32 2, i32 0
+// LEWIDTH-NEXT:    [[TMP1:%.*]] = load volatile i8, i8* [[A]], align 4, !tbaa !11
+// LEWIDTH-NEXT:    [[CONV1:%.*]] = sext i8 [[TMP1]] to i32
+// LEWIDTH-NEXT:    [[ADD:%.*]] = add nsw i32 [[CONV1]], [[CONV]]
+// LEWIDTH-NEXT:    [[B:%.*]] = getelementptr inbounds [[STRUCT_ST7B]], %struct.st7b* [[M]], i32 0, i32 2, i32 1
+// LEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[B]], align 1
+// LEWIDTH-NEXT:    [[BF_SHL:%.*]] = shl i8 [[BF_LOAD]], 3
+// LEWIDTH-NEXT:    [[BF_ASHR:%.*]] = ashr exact i8 [[BF_SHL]], 3
+// LEWIDTH-NEXT:    [[BF_CAST:%.*]] = sext i8 [[BF_ASHR]] to i32
+// LEWIDTH-NEXT:    [[ADD3:%.*]] = add nsw i32 [[ADD]], [[BF_CAST]]
+// LEWIDTH-NEXT:    ret i32 [[ADD3]]
+//
+// BEWIDTH-LABEL: @st7_check_load(
+// BEWIDTH-NEXT:  entry:
+// BEWIDTH-NEXT:    [[X:%.*]] = getelementptr inbounds [[STRUCT_ST7B:%.*]], %struct.st7b* [[M:%.*]], i32 0, i32 0
+// BEWIDTH-NEXT:    [[TMP0:%.*]] = load i8, i8* [[X]], align 4, !tbaa !8
+// BEWIDTH-NEXT:    [[CONV:%.*]] = sext i8 [[TMP0]] to i32
+// BEWIDTH-NEXT:    [[A:%.*]] = getelementptr inbounds [[STRUCT_ST7B]], %struct.st7b* [[M]], i32 0, i32 2, i32 0
+// BEWIDTH-NEXT:    [[TMP1:%.*]] = load volatile i8, i8* [[A]], align 4, !tbaa !11
+// BEWIDTH-NEXT:    [[CONV1:%.*]] = sext i8 [[TMP1]] to i32
+// BEWIDTH-NEXT:    [[ADD:%.*]] = add nsw i32 [[CONV1]], [[CONV]]
+// BEWIDTH-NEXT:    [[B:%.*]] = getelementptr inbounds [[STRUCT_ST7B]], %struct.st7b* [[M]], i32 0, i32 2, i32 1
+// BEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[B]], align 1
+// BEWIDTH-NEXT:    [[BF_ASHR:%.*]] = ashr i8 [[BF_LOAD]], 3
+// BEWIDTH-NEXT:    [[BF_CAST:%.*]] = sext i8 [[BF_ASHR]] to i32
+// BEWIDTH-NEXT:    [[ADD3:%.*]] = add nsw i32 [[ADD]], [[BF_CAST]]
+// BEWIDTH-NEXT:    ret i32 [[ADD3]]
+//
+// LEWIDTHNUM-LABEL: @st7_check_load(
+// LEWIDTHNUM-NEXT:  entry:
+// LEWIDTHNUM-NEXT:    [[X:%.*]] = getelementptr inbounds [[STRUCT_ST7B:%.*]], %struct.st7b* [[M:%.*]], i32 0, i32 0
+// LEWIDTHNUM-NEXT:    [[TMP0:%.*]] = load i8, i8* [[X]], align 4, !tbaa !8
+// LEWIDTHNUM-NEXT:    [[CONV:%.*]] = sext i8 [[TMP0]] to i32
+// LEWIDTHNUM-NEXT:    [[A:%.*]] = getelementptr inbounds [[STRUCT_ST7B]], %struct.st7b* [[M]], i32 0, i32 2, i32 0
+// LEWIDTHNUM-NEXT:    [[TMP1:%.*]] = load volatile i8, i8* [[A]], align 4, !tbaa !11
+// LEWIDTHNUM-NEXT:    [[CONV1:%.*]] = sext i8 [[TMP1]] to i32
+// LEWIDTHNUM-NEXT:    [[ADD:%.*]] = add nsw i32 [[CONV1]], [[CONV]]
+// LEWIDTHNUM-NEXT:    [[B:%.*]] = getelementptr inbounds [[STRUCT_ST7B]], %struct.st7b* [[M]], i32 0, i32 2, i32 1
+// LEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[B]], align 1
+// LEWIDTHNUM-NEXT:    [[BF_SHL:%.*]] = shl i8 [[BF_LOAD]], 3
+// LEWIDTHNUM-NEXT:    [[BF_ASHR:%.*]] = ashr exact i8 [[BF_SHL]], 3
+// LEWIDTHNUM-NEXT:    [[BF_CAST:%.*]] = sext i8 [[BF_ASHR]] to i32
+// LEWIDTHNUM-NEXT:    [[ADD3:%.*]] = add nsw i32 [[ADD]], [[BF_CAST]]
+// LEWIDTHNUM-NEXT:    ret i32 [[ADD3]]
+//
+// BEWIDTHNUM-LABEL: @st7_check_load(
+// BEWIDTHNUM-NEXT:  entry:
+// BEWIDTHNUM-NEXT:    [[X:%.*]] = getelementptr inbounds [[STRUCT_ST7B:%.*]], %struct.st7b* [[M:%.*]], i32 0, i32 0
+// BEWIDTHNUM-NEXT:    [[TMP0:%.*]] = load i8, i8* [[X]], align 4, !tbaa !8
+// BEWIDTHNUM-NEXT:    [[CONV:%.*]] = sext i8 [[TMP0]] to i32
+// BEWIDTHNUM-NEXT:    [[A:%.*]] = getelementptr inbounds [[STRUCT_ST7B]], %struct.st7b* [[M]], i32 0, i32 2, i32 0
+// BEWIDTHNUM-NEXT:    [[TMP1:%.*]] = load volatile i8, i8* [[A]], align 4, !tbaa !11
+// BEWIDTHNUM-NEXT:    [[CONV1:%.*]] = sext i8 [[TMP1]] to i32
+// BEWIDTHNUM-NEXT:    [[ADD:%.*]] = add nsw i32 [[CONV1]], [[CONV]]
+// BEWIDTHNUM-NEXT:    [[B:%.*]] = getelementptr inbounds [[STRUCT_ST7B]], %struct.st7b* [[M]], i32 0, i32 2, i32 1
+// BEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[B]], align 1
+// BEWIDTHNUM-NEXT:    [[BF_ASHR:%.*]] = ashr i8 [[BF_LOAD]], 3
+// BEWIDTHNUM-NEXT:    [[BF_CAST:%.*]] = sext i8 [[BF_ASHR]] to i32
+// BEWIDTHNUM-NEXT:    [[ADD3:%.*]] = add nsw i32 [[ADD]], [[BF_CAST]]
+// BEWIDTHNUM-NEXT:    ret i32 [[ADD3]]
+//
 int st7_check_load(struct st7b *m) {
   int r = m->x;
   r += m->y.a;
@@ -458,9 +1464,9 @@ int st7_check_load(struct st7b *m) {
 // LE-LABEL: @st7_check_store(
 // LE-NEXT:  entry:
 // LE-NEXT:    [[X:%.*]] = getelementptr inbounds [[STRUCT_ST7B:%.*]], %struct.st7b* [[M:%.*]], i32 0, i32 0
-// LE-NEXT:    store i8 1, i8* [[X]], align 4
+// LE-NEXT:    store i8 1, i8* [[X]], align 4, !tbaa !8
 // LE-NEXT:    [[A:%.*]] = getelementptr inbounds [[STRUCT_ST7B]], %struct.st7b* [[M]], i32 0, i32 2, i32 0
-// LE-NEXT:    store volatile i8 2, i8* [[A]], align 4
+// LE-NEXT:    store volatile i8 2, i8* [[A]], align 4, !tbaa !11
 // LE-NEXT:    [[B:%.*]] = getelementptr inbounds [[STRUCT_ST7B]], %struct.st7b* [[M]], i32 0, i32 2, i32 1
 // LE-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[B]], align 1
 // LE-NEXT:    [[BF_CLEAR:%.*]] = and i8 [[BF_LOAD]], -32
@@ -471,9 +1477,9 @@ int st7_check_load(struct st7b *m) {
 // BE-LABEL: @st7_check_store(
 // BE-NEXT:  entry:
 // BE-NEXT:    [[X:%.*]] = getelementptr inbounds [[STRUCT_ST7B:%.*]], %struct.st7b* [[M:%.*]], i32 0, i32 0
-// BE-NEXT:    store i8 1, i8* [[X]], align 4
+// BE-NEXT:    store i8 1, i8* [[X]], align 4, !tbaa !8
 // BE-NEXT:    [[A:%.*]] = getelementptr inbounds [[STRUCT_ST7B]], %struct.st7b* [[M]], i32 0, i32 2, i32 0
-// BE-NEXT:    store volatile i8 2, i8* [[A]], align 4
+// BE-NEXT:    store volatile i8 2, i8* [[A]], align 4, !tbaa !11
 // BE-NEXT:    [[B:%.*]] = getelementptr inbounds [[STRUCT_ST7B]], %struct.st7b* [[M]], i32 0, i32 2, i32 1
 // BE-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[B]], align 1
 // BE-NEXT:    [[BF_CLEAR:%.*]] = and i8 [[BF_LOAD]], 7
@@ -481,6 +1487,84 @@ int st7_check_load(struct st7b *m) {
 // BE-NEXT:    store volatile i8 [[BF_SET]], i8* [[B]], align 1
 // BE-NEXT:    ret void
 //
+// LENUMLOADS-LABEL: @st7_check_store(
+// LENUMLOADS-NEXT:  entry:
+// LENUMLOADS-NEXT:    [[X:%.*]] = getelementptr inbounds [[STRUCT_ST7B:%.*]], %struct.st7b* [[M:%.*]], i32 0, i32 0
+// LENUMLOADS-NEXT:    store i8 1, i8* [[X]], align 4, !tbaa !8
+// LENUMLOADS-NEXT:    [[A:%.*]] = getelementptr inbounds [[STRUCT_ST7B]], %struct.st7b* [[M]], i32 0, i32 2, i32 0
+// LENUMLOADS-NEXT:    store volatile i8 2, i8* [[A]], align 4, !tbaa !11
+// LENUMLOADS-NEXT:    [[B:%.*]] = getelementptr inbounds [[STRUCT_ST7B]], %struct.st7b* [[M]], i32 0, i32 2, i32 1
+// LENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[B]], align 1
+// LENUMLOADS-NEXT:    [[BF_CLEAR:%.*]] = and i8 [[BF_LOAD]], -32
+// LENUMLOADS-NEXT:    [[BF_SET:%.*]] = or i8 [[BF_CLEAR]], 3
+// LENUMLOADS-NEXT:    store volatile i8 [[BF_SET]], i8* [[B]], align 1
+// LENUMLOADS-NEXT:    ret void
+//
+// BENUMLOADS-LABEL: @st7_check_store(
+// BENUMLOADS-NEXT:  entry:
+// BENUMLOADS-NEXT:    [[X:%.*]] = getelementptr inbounds [[STRUCT_ST7B:%.*]], %struct.st7b* [[M:%.*]], i32 0, i32 0
+// BENUMLOADS-NEXT:    store i8 1, i8* [[X]], align 4, !tbaa !8
+// BENUMLOADS-NEXT:    [[A:%.*]] = getelementptr inbounds [[STRUCT_ST7B]], %struct.st7b* [[M]], i32 0, i32 2, i32 0
+// BENUMLOADS-NEXT:    store volatile i8 2, i8* [[A]], align 4, !tbaa !11
+// BENUMLOADS-NEXT:    [[B:%.*]] = getelementptr inbounds [[STRUCT_ST7B]], %struct.st7b* [[M]], i32 0, i32 2, i32 1
+// BENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[B]], align 1
+// BENUMLOADS-NEXT:    [[BF_CLEAR:%.*]] = and i8 [[BF_LOAD]], 7
+// BENUMLOADS-NEXT:    [[BF_SET:%.*]] = or i8 [[BF_CLEAR]], 24
+// BENUMLOADS-NEXT:    store volatile i8 [[BF_SET]], i8* [[B]], align 1
+// BENUMLOADS-NEXT:    ret void
+//
+// LEWIDTH-LABEL: @st7_check_store(
+// LEWIDTH-NEXT:  entry:
+// LEWIDTH-NEXT:    [[X:%.*]] = getelementptr inbounds [[STRUCT_ST7B:%.*]], %struct.st7b* [[M:%.*]], i32 0, i32 0
+// LEWIDTH-NEXT:    store i8 1, i8* [[X]], align 4, !tbaa !8
+// LEWIDTH-NEXT:    [[A:%.*]] = getelementptr inbounds [[STRUCT_ST7B]], %struct.st7b* [[M]], i32 0, i32 2, i32 0
+// LEWIDTH-NEXT:    store volatile i8 2, i8* [[A]], align 4, !tbaa !11
+// LEWIDTH-NEXT:    [[B:%.*]] = getelementptr inbounds [[STRUCT_ST7B]], %struct.st7b* [[M]], i32 0, i32 2, i32 1
+// LEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[B]], align 1
+// LEWIDTH-NEXT:    [[BF_CLEAR:%.*]] = and i8 [[BF_LOAD]], -32
+// LEWIDTH-NEXT:    [[BF_SET:%.*]] = or i8 [[BF_CLEAR]], 3
+// LEWIDTH-NEXT:    store volatile i8 [[BF_SET]], i8* [[B]], align 1
+// LEWIDTH-NEXT:    ret void
+//
+// BEWIDTH-LABEL: @st7_check_store(
+// BEWIDTH-NEXT:  entry:
+// BEWIDTH-NEXT:    [[X:%.*]] = getelementptr inbounds [[STRUCT_ST7B:%.*]], %struct.st7b* [[M:%.*]], i32 0, i32 0
+// BEWIDTH-NEXT:    store i8 1, i8* [[X]], align 4, !tbaa !8
+// BEWIDTH-NEXT:    [[A:%.*]] = getelementptr inbounds [[STRUCT_ST7B]], %struct.st7b* [[M]], i32 0, i32 2, i32 0
+// BEWIDTH-NEXT:    store volatile i8 2, i8* [[A]], align 4, !tbaa !11
+// BEWIDTH-NEXT:    [[B:%.*]] = getelementptr inbounds [[STRUCT_ST7B]], %struct.st7b* [[M]], i32 0, i32 2, i32 1
+// BEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[B]], align 1
+// BEWIDTH-NEXT:    [[BF_CLEAR:%.*]] = and i8 [[BF_LOAD]], 7
+// BEWIDTH-NEXT:    [[BF_SET:%.*]] = or i8 [[BF_CLEAR]], 24
+// BEWIDTH-NEXT:    store volatile i8 [[BF_SET]], i8* [[B]], align 1
+// BEWIDTH-NEXT:    ret void
+//
+// LEWIDTHNUM-LABEL: @st7_check_store(
+// LEWIDTHNUM-NEXT:  entry:
+// LEWIDTHNUM-NEXT:    [[X:%.*]] = getelementptr inbounds [[STRUCT_ST7B:%.*]], %struct.st7b* [[M:%.*]], i32 0, i32 0
+// LEWIDTHNUM-NEXT:    store i8 1, i8* [[X]], align 4, !tbaa !8
+// LEWIDTHNUM-NEXT:    [[A:%.*]] = getelementptr inbounds [[STRUCT_ST7B]], %struct.st7b* [[M]], i32 0, i32 2, i32 0
+// LEWIDTHNUM-NEXT:    store volatile i8 2, i8* [[A]], align 4, !tbaa !11
+// LEWIDTHNUM-NEXT:    [[B:%.*]] = getelementptr inbounds [[STRUCT_ST7B]], %struct.st7b* [[M]], i32 0, i32 2, i32 1
+// LEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[B]], align 1
+// LEWIDTHNUM-NEXT:    [[BF_CLEAR:%.*]] = and i8 [[BF_LOAD]], -32
+// LEWIDTHNUM-NEXT:    [[BF_SET:%.*]] = or i8 [[BF_CLEAR]], 3
+// LEWIDTHNUM-NEXT:    store volatile i8 [[BF_SET]], i8* [[B]], align 1
+// LEWIDTHNUM-NEXT:    ret void
+//
+// BEWIDTHNUM-LABEL: @st7_check_store(
+// BEWIDTHNUM-NEXT:  entry:
+// BEWIDTHNUM-NEXT:    [[X:%.*]] = getelementptr inbounds [[STRUCT_ST7B:%.*]], %struct.st7b* [[M:%.*]], i32 0, i32 0
+// BEWIDTHNUM-NEXT:    store i8 1, i8* [[X]], align 4, !tbaa !8
+// BEWIDTHNUM-NEXT:    [[A:%.*]] = getelementptr inbounds [[STRUCT_ST7B]], %struct.st7b* [[M]], i32 0, i32 2, i32 0
+// BEWIDTHNUM-NEXT:    store volatile i8 2, i8* [[A]], align 4, !tbaa !11
+// BEWIDTHNUM-NEXT:    [[B:%.*]] = getelementptr inbounds [[STRUCT_ST7B]], %struct.st7b* [[M]], i32 0, i32 2, i32 1
+// BEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[B]], align 1
+// BEWIDTHNUM-NEXT:    [[BF_CLEAR:%.*]] = and i8 [[BF_LOAD]], 7
+// BEWIDTHNUM-NEXT:    [[BF_SET:%.*]] = or i8 [[BF_CLEAR]], 24
+// BEWIDTHNUM-NEXT:    store volatile i8 [[BF_SET]], i8* [[B]], align 1
+// BEWIDTHNUM-NEXT:    ret void
+//
 void st7_check_store(struct st7b *m) {
   m->x = 1;
   m->y.a = 2;
@@ -504,6 +1588,42 @@ struct st8 {
 // BE-NEXT:    store i16 -1, i16* [[TMP0]], align 4
 // BE-NEXT:    ret i32 65535
 //
+// LENUMLOADS-LABEL: @st8_check_assignment(
+// LENUMLOADS-NEXT:  entry:
+// LENUMLOADS-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST8:%.*]], %struct.st8* [[M:%.*]], i32 0, i32 0
+// LENUMLOADS-NEXT:    store i16 -1, i16* [[TMP0]], align 4
+// LENUMLOADS-NEXT:    ret i32 65535
+//
+// BENUMLOADS-LABEL: @st8_check_assignment(
+// BENUMLOADS-NEXT:  entry:
+// BENUMLOADS-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST8:%.*]], %struct.st8* [[M:%.*]], i32 0, i32 0
+// BENUMLOADS-NEXT:    store i16 -1, i16* [[TMP0]], align 4
+// BENUMLOADS-NEXT:    ret i32 65535
+//
+// LEWIDTH-LABEL: @st8_check_assignment(
+// LEWIDTH-NEXT:  entry:
+// LEWIDTH-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST8:%.*]], %struct.st8* [[M:%.*]], i32 0, i32 0
+// LEWIDTH-NEXT:    store i16 -1, i16* [[TMP0]], align 4
+// LEWIDTH-NEXT:    ret i32 65535
+//
+// BEWIDTH-LABEL: @st8_check_assignment(
+// BEWIDTH-NEXT:  entry:
+// BEWIDTH-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST8:%.*]], %struct.st8* [[M:%.*]], i32 0, i32 0
+// BEWIDTH-NEXT:    store i16 -1, i16* [[TMP0]], align 4
+// BEWIDTH-NEXT:    ret i32 65535
+//
+// LEWIDTHNUM-LABEL: @st8_check_assignment(
+// LEWIDTHNUM-NEXT:  entry:
+// LEWIDTHNUM-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST8:%.*]], %struct.st8* [[M:%.*]], i32 0, i32 0
+// LEWIDTHNUM-NEXT:    store i16 -1, i16* [[TMP0]], align 4
+// LEWIDTHNUM-NEXT:    ret i32 65535
+//
+// BEWIDTHNUM-LABEL: @st8_check_assignment(
+// BEWIDTHNUM-NEXT:  entry:
+// BEWIDTHNUM-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST8:%.*]], %struct.st8* [[M:%.*]], i32 0, i32 0
+// BEWIDTHNUM-NEXT:    store i16 -1, i16* [[TMP0]], align 4
+// BEWIDTHNUM-NEXT:    ret i32 65535
+//
 int st8_check_assignment(struct st8 *m) {
   return m->f = 0xffff;
 }
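For all eight check prefixes the assignment compiles to a single plain `store i16 -1` with the constant result 65535 folded: a field that exactly fills its declared container needs no read-modify-write, so the volatile access rules make no difference here. A minimal sketch of the arithmetic, assuming `st8` is roughly a lone 16-bit field (its actual definition is elided from this hunk):

```c
#include <assert.h>

/* Hypothetical layout: the real definition of struct st8 is not shown in
   the hunk. A 16-bit field that fills its 16-bit container can be stored
   with one plain i16 store; no merging with neighbouring bits is needed. */
struct st8 { volatile unsigned short f : 16; };

int st8_assign_demo(struct st8 *m) {
    /* The value of the assignment expression is the value after
       truncation to the field width: 0xffff == 65535. */
    return m->f = 0xffff;
}
```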
@@ -526,6 +1646,50 @@ struct st9{
 // BE-NEXT:    [[BF_CAST:%.*]] = sext i8 [[BF_LOAD]] to i32
 // BE-NEXT:    ret i32 [[BF_CAST]]
 //
+// LENUMLOADS-LABEL: @read_st9(
+// LENUMLOADS-NEXT:  entry:
+// LENUMLOADS-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST9:%.*]], %struct.st9* [[M:%.*]], i32 0, i32 0
+// LENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[TMP0]], align 4
+// LENUMLOADS-NEXT:    [[BF_CAST:%.*]] = sext i8 [[BF_LOAD]] to i32
+// LENUMLOADS-NEXT:    ret i32 [[BF_CAST]]
+//
+// BENUMLOADS-LABEL: @read_st9(
+// BENUMLOADS-NEXT:  entry:
+// BENUMLOADS-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST9:%.*]], %struct.st9* [[M:%.*]], i32 0, i32 0
+// BENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[TMP0]], align 4
+// BENUMLOADS-NEXT:    [[BF_CAST:%.*]] = sext i8 [[BF_LOAD]] to i32
+// BENUMLOADS-NEXT:    ret i32 [[BF_CAST]]
+//
+// LEWIDTH-LABEL: @read_st9(
+// LEWIDTH-NEXT:  entry:
+// LEWIDTH-NEXT:    [[TMP0:%.*]] = bitcast %struct.st9* [[M:%.*]] to i32*
+// LEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// LEWIDTH-NEXT:    [[BF_SHL:%.*]] = shl i32 [[BF_LOAD]], 24
+// LEWIDTH-NEXT:    [[BF_ASHR:%.*]] = ashr exact i32 [[BF_SHL]], 24
+// LEWIDTH-NEXT:    ret i32 [[BF_ASHR]]
+//
+// BEWIDTH-LABEL: @read_st9(
+// BEWIDTH-NEXT:  entry:
+// BEWIDTH-NEXT:    [[TMP0:%.*]] = bitcast %struct.st9* [[M:%.*]] to i32*
+// BEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// BEWIDTH-NEXT:    [[BF_ASHR:%.*]] = ashr i32 [[BF_LOAD]], 24
+// BEWIDTH-NEXT:    ret i32 [[BF_ASHR]]
+//
+// LEWIDTHNUM-LABEL: @read_st9(
+// LEWIDTHNUM-NEXT:  entry:
+// LEWIDTHNUM-NEXT:    [[TMP0:%.*]] = bitcast %struct.st9* [[M:%.*]] to i32*
+// LEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// LEWIDTHNUM-NEXT:    [[BF_SHL:%.*]] = shl i32 [[BF_LOAD]], 24
+// LEWIDTHNUM-NEXT:    [[BF_ASHR:%.*]] = ashr exact i32 [[BF_SHL]], 24
+// LEWIDTHNUM-NEXT:    ret i32 [[BF_ASHR]]
+//
+// BEWIDTHNUM-LABEL: @read_st9(
+// BEWIDTHNUM-NEXT:  entry:
+// BEWIDTHNUM-NEXT:    [[TMP0:%.*]] = bitcast %struct.st9* [[M:%.*]] to i32*
+// BEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// BEWIDTHNUM-NEXT:    [[BF_ASHR:%.*]] = ashr i32 [[BF_LOAD]], 24
+// BEWIDTHNUM-NEXT:    ret i32 [[BF_ASHR]]
+//
 int read_st9(volatile struct st9 *m) {
   return m->f;
 }
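The WIDTH prefixes above load the full 32-bit container and isolate the signed 8-bit field with a shift pair, instead of the narrow i8 load the LE/BE baseline uses. The same extraction can be sketched in plain C (field position inferred from the shift amounts; right-shifting a negative value is arithmetic on clang and gcc, matching the IR's `ashr`):

```c
#include <stdint.h>

/* LEWIDTH: the field occupies the low byte, so shl 24 / ashr 24
   sign-extends bits [7:0] of the 32-bit container. */
static int32_t extract_low_s8(uint32_t container) {
    return (int32_t)(container << 24) >> 24;
}

/* BEWIDTH: the field occupies the top byte, so one ashr 24 suffices. */
static int32_t extract_high_s8(uint32_t container) {
    return (int32_t)container >> 24;
}
```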
@@ -533,17 +1697,65 @@ int read_st9(volatile struct st9 *m) {
 // LE-LABEL: @store_st9(
 // LE-NEXT:  entry:
 // LE-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST9:%.*]], %struct.st9* [[M:%.*]], i32 0, i32 0
-// LENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[TMP0]], align 4
 // LE-NEXT:    store volatile i8 1, i8* [[TMP0]], align 4
 // LE-NEXT:    ret void
 //
 // BE-LABEL: @store_st9(
 // BE-NEXT:  entry:
 // BE-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST9:%.*]], %struct.st9* [[M:%.*]], i32 0, i32 0
-// BENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[TMP0]], align 4
 // BE-NEXT:    store volatile i8 1, i8* [[TMP0]], align 4
 // BE-NEXT:    ret void
 //
+// LENUMLOADS-LABEL: @store_st9(
+// LENUMLOADS-NEXT:  entry:
+// LENUMLOADS-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST9:%.*]], %struct.st9* [[M:%.*]], i32 0, i32 0
+// LENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[TMP0]], align 4
+// LENUMLOADS-NEXT:    store volatile i8 1, i8* [[TMP0]], align 4
+// LENUMLOADS-NEXT:    ret void
+//
+// BENUMLOADS-LABEL: @store_st9(
+// BENUMLOADS-NEXT:  entry:
+// BENUMLOADS-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST9:%.*]], %struct.st9* [[M:%.*]], i32 0, i32 0
+// BENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[TMP0]], align 4
+// BENUMLOADS-NEXT:    store volatile i8 1, i8* [[TMP0]], align 4
+// BENUMLOADS-NEXT:    ret void
+//
+// LEWIDTH-LABEL: @store_st9(
+// LEWIDTH-NEXT:  entry:
+// LEWIDTH-NEXT:    [[TMP0:%.*]] = bitcast %struct.st9* [[M:%.*]] to i32*
+// LEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// LEWIDTH-NEXT:    [[BF_CLEAR:%.*]] = and i32 [[BF_LOAD]], -256
+// LEWIDTH-NEXT:    [[BF_SET:%.*]] = or i32 [[BF_CLEAR]], 1
+// LEWIDTH-NEXT:    store volatile i32 [[BF_SET]], i32* [[TMP0]], align 4
+// LEWIDTH-NEXT:    ret void
+//
+// BEWIDTH-LABEL: @store_st9(
+// BEWIDTH-NEXT:  entry:
+// BEWIDTH-NEXT:    [[TMP0:%.*]] = bitcast %struct.st9* [[M:%.*]] to i32*
+// BEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// BEWIDTH-NEXT:    [[BF_CLEAR:%.*]] = and i32 [[BF_LOAD]], 16777215
+// BEWIDTH-NEXT:    [[BF_SET:%.*]] = or i32 [[BF_CLEAR]], 16777216
+// BEWIDTH-NEXT:    store volatile i32 [[BF_SET]], i32* [[TMP0]], align 4
+// BEWIDTH-NEXT:    ret void
+//
+// LEWIDTHNUM-LABEL: @store_st9(
+// LEWIDTHNUM-NEXT:  entry:
+// LEWIDTHNUM-NEXT:    [[TMP0:%.*]] = bitcast %struct.st9* [[M:%.*]] to i32*
+// LEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// LEWIDTHNUM-NEXT:    [[BF_CLEAR:%.*]] = and i32 [[BF_LOAD]], -256
+// LEWIDTHNUM-NEXT:    [[BF_SET:%.*]] = or i32 [[BF_CLEAR]], 1
+// LEWIDTHNUM-NEXT:    store volatile i32 [[BF_SET]], i32* [[TMP0]], align 4
+// LEWIDTHNUM-NEXT:    ret void
+//
+// BEWIDTHNUM-LABEL: @store_st9(
+// BEWIDTHNUM-NEXT:  entry:
+// BEWIDTHNUM-NEXT:    [[TMP0:%.*]] = bitcast %struct.st9* [[M:%.*]] to i32*
+// BEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// BEWIDTHNUM-NEXT:    [[BF_CLEAR:%.*]] = and i32 [[BF_LOAD]], 16777215
+// BEWIDTHNUM-NEXT:    [[BF_SET:%.*]] = or i32 [[BF_CLEAR]], 16777216
+// BEWIDTHNUM-NEXT:    store volatile i32 [[BF_SET]], i32* [[TMP0]], align 4
+// BEWIDTHNUM-NEXT:    ret void
+//
 void store_st9(volatile struct st9 *m) {
   m->f = 1;
 }
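In the WIDTH variants the store becomes a container-wide read-modify-write: load the 32-bit word, clear the field's bits, OR in the new value, store the word back. The masks in the IR above decode as follows (a sketch of the arithmetic only, not of the volatile access ordering):

```c
#include <stdint.h>

/* LEWIDTH: and -256 == ~0xFF clears the low byte, then or 1 sets it. */
static uint32_t store_low_u8(uint32_t container, uint32_t v) {
    return (container & ~0xFFu) | (v & 0xFFu);
}

/* BEWIDTH: and 16777215 == 0x00FFFFFF keeps the low three bytes,
   or 16777216 == 1 << 24 writes the value into the top byte. */
static uint32_t store_high_u8(uint32_t container, uint32_t v) {
    return (container & 0x00FFFFFFu) | ((v & 0xFFu) << 24);
}
```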
@@ -553,7 +1765,6 @@ void store_st9(volatile struct st9 *m) {
 // LE-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST9:%.*]], %struct.st9* [[M:%.*]], i32 0, i32 0
 // LE-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[TMP0]], align 4
 // LE-NEXT:    [[INC:%.*]] = add i8 [[BF_LOAD]], 1
-// LENUMLOADS-NEXT:    [[BF_LOAD1:%.*]] = load volatile i8, i8* [[TMP0]], align 4
 // LE-NEXT:    store volatile i8 [[INC]], i8* [[TMP0]], align 4
 // LE-NEXT:    ret void
 //
@@ -562,10 +1773,75 @@ void store_st9(volatile struct st9 *m) {
 // BE-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST9:%.*]], %struct.st9* [[M:%.*]], i32 0, i32 0
 // BE-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[TMP0]], align 4
 // BE-NEXT:    [[INC:%.*]] = add i8 [[BF_LOAD]], 1
-// BENUMLOADS-NEXT:    [[BF_LOAD1:%.*]] = load volatile i8, i8* [[TMP0]], align 4
 // BE-NEXT:    store volatile i8 [[INC]], i8* [[TMP0]], align 4
 // BE-NEXT:    ret void
 //
+// LENUMLOADS-LABEL: @increment_st9(
+// LENUMLOADS-NEXT:  entry:
+// LENUMLOADS-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST9:%.*]], %struct.st9* [[M:%.*]], i32 0, i32 0
+// LENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[TMP0]], align 4
+// LENUMLOADS-NEXT:    [[INC:%.*]] = add i8 [[BF_LOAD]], 1
+// LENUMLOADS-NEXT:    [[BF_LOAD1:%.*]] = load volatile i8, i8* [[TMP0]], align 4
+// LENUMLOADS-NEXT:    store volatile i8 [[INC]], i8* [[TMP0]], align 4
+// LENUMLOADS-NEXT:    ret void
+//
+// BENUMLOADS-LABEL: @increment_st9(
+// BENUMLOADS-NEXT:  entry:
+// BENUMLOADS-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST9:%.*]], %struct.st9* [[M:%.*]], i32 0, i32 0
+// BENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[TMP0]], align 4
+// BENUMLOADS-NEXT:    [[INC:%.*]] = add i8 [[BF_LOAD]], 1
+// BENUMLOADS-NEXT:    [[BF_LOAD1:%.*]] = load volatile i8, i8* [[TMP0]], align 4
+// BENUMLOADS-NEXT:    store volatile i8 [[INC]], i8* [[TMP0]], align 4
+// BENUMLOADS-NEXT:    ret void
+//
+// LEWIDTH-LABEL: @increment_st9(
+// LEWIDTH-NEXT:  entry:
+// LEWIDTH-NEXT:    [[TMP0:%.*]] = bitcast %struct.st9* [[M:%.*]] to i32*
+// LEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// LEWIDTH-NEXT:    [[INC:%.*]] = add i32 [[BF_LOAD]], 1
+// LEWIDTH-NEXT:    [[BF_LOAD1:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// LEWIDTH-NEXT:    [[BF_VALUE:%.*]] = and i32 [[INC]], 255
+// LEWIDTH-NEXT:    [[BF_CLEAR:%.*]] = and i32 [[BF_LOAD1]], -256
+// LEWIDTH-NEXT:    [[BF_SET:%.*]] = or i32 [[BF_CLEAR]], [[BF_VALUE]]
+// LEWIDTH-NEXT:    store volatile i32 [[BF_SET]], i32* [[TMP0]], align 4
+// LEWIDTH-NEXT:    ret void
+//
+// BEWIDTH-LABEL: @increment_st9(
+// BEWIDTH-NEXT:  entry:
+// BEWIDTH-NEXT:    [[TMP0:%.*]] = bitcast %struct.st9* [[M:%.*]] to i32*
+// BEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// BEWIDTH-NEXT:    [[BF_LOAD1:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// BEWIDTH-NEXT:    [[TMP1:%.*]] = add i32 [[BF_LOAD]], 16777216
+// BEWIDTH-NEXT:    [[BF_SHL:%.*]] = and i32 [[TMP1]], -16777216
+// BEWIDTH-NEXT:    [[BF_CLEAR:%.*]] = and i32 [[BF_LOAD1]], 16777215
+// BEWIDTH-NEXT:    [[BF_SET:%.*]] = or i32 [[BF_CLEAR]], [[BF_SHL]]
+// BEWIDTH-NEXT:    store volatile i32 [[BF_SET]], i32* [[TMP0]], align 4
+// BEWIDTH-NEXT:    ret void
+//
+// LEWIDTHNUM-LABEL: @increment_st9(
+// LEWIDTHNUM-NEXT:  entry:
+// LEWIDTHNUM-NEXT:    [[TMP0:%.*]] = bitcast %struct.st9* [[M:%.*]] to i32*
+// LEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// LEWIDTHNUM-NEXT:    [[INC:%.*]] = add i32 [[BF_LOAD]], 1
+// LEWIDTHNUM-NEXT:    [[BF_LOAD1:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// LEWIDTHNUM-NEXT:    [[BF_VALUE:%.*]] = and i32 [[INC]], 255
+// LEWIDTHNUM-NEXT:    [[BF_CLEAR:%.*]] = and i32 [[BF_LOAD1]], -256
+// LEWIDTHNUM-NEXT:    [[BF_SET:%.*]] = or i32 [[BF_CLEAR]], [[BF_VALUE]]
+// LEWIDTHNUM-NEXT:    store volatile i32 [[BF_SET]], i32* [[TMP0]], align 4
+// LEWIDTHNUM-NEXT:    ret void
+//
+// BEWIDTHNUM-LABEL: @increment_st9(
+// BEWIDTHNUM-NEXT:  entry:
+// BEWIDTHNUM-NEXT:    [[TMP0:%.*]] = bitcast %struct.st9* [[M:%.*]] to i32*
+// BEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// BEWIDTHNUM-NEXT:    [[BF_LOAD1:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// BEWIDTHNUM-NEXT:    [[TMP1:%.*]] = add i32 [[BF_LOAD]], 16777216
+// BEWIDTHNUM-NEXT:    [[BF_SHL:%.*]] = and i32 [[TMP1]], -16777216
+// BEWIDTHNUM-NEXT:    [[BF_CLEAR:%.*]] = and i32 [[BF_LOAD1]], 16777215
+// BEWIDTHNUM-NEXT:    [[BF_SET:%.*]] = or i32 [[BF_CLEAR]], [[BF_SHL]]
+// BEWIDTHNUM-NEXT:    store volatile i32 [[BF_SET]], i32* [[TMP0]], align 4
+// BEWIDTHNUM-NEXT:    ret void
+//
 void increment_st9(volatile struct st9 *m) {
   ++m->f;
 }
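The increment in the WIDTH variants performs two volatile container loads: the first feeds the add, the second supplies the surrounding bits that are merged back before the store (the NUM prefixes additionally require that second load even for container-exact fields). The LEWIDTH merge arithmetic can be sketched as:

```c
#include <stdint.h>

/* LEWIDTH increment of an 8-bit field in the low byte: the add happens on
   the whole container value, then only the field's bits are kept and
   merged into the result of the second load. */
static uint32_t inc_low_u8(uint32_t first_load, uint32_t second_load) {
    uint32_t inc = first_load + 1;
    return (second_load & ~0xFFu) | (inc & 0xFFu);
}
```

Note how the field wraps at its own width rather than the container's, because the merge masks the incremented value back to 8 bits.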
@@ -593,6 +1869,56 @@ struct st10{
 // BE-NEXT:    [[BF_CAST:%.*]] = sext i16 [[BF_ASHR]] to i32
 // BE-NEXT:    ret i32 [[BF_CAST]]
 //
+// LENUMLOADS-LABEL: @read_st10(
+// LENUMLOADS-NEXT:  entry:
+// LENUMLOADS-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST10:%.*]], %struct.st10* [[M:%.*]], i32 0, i32 0
+// LENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i16, i16* [[TMP0]], align 4
+// LENUMLOADS-NEXT:    [[BF_SHL:%.*]] = shl i16 [[BF_LOAD]], 7
+// LENUMLOADS-NEXT:    [[BF_ASHR:%.*]] = ashr i16 [[BF_SHL]], 8
+// LENUMLOADS-NEXT:    [[BF_CAST:%.*]] = sext i16 [[BF_ASHR]] to i32
+// LENUMLOADS-NEXT:    ret i32 [[BF_CAST]]
+//
+// BENUMLOADS-LABEL: @read_st10(
+// BENUMLOADS-NEXT:  entry:
+// BENUMLOADS-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST10:%.*]], %struct.st10* [[M:%.*]], i32 0, i32 0
+// BENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i16, i16* [[TMP0]], align 4
+// BENUMLOADS-NEXT:    [[BF_SHL:%.*]] = shl i16 [[BF_LOAD]], 1
+// BENUMLOADS-NEXT:    [[BF_ASHR:%.*]] = ashr i16 [[BF_SHL]], 8
+// BENUMLOADS-NEXT:    [[BF_CAST:%.*]] = sext i16 [[BF_ASHR]] to i32
+// BENUMLOADS-NEXT:    ret i32 [[BF_CAST]]
+//
+// LEWIDTH-LABEL: @read_st10(
+// LEWIDTH-NEXT:  entry:
+// LEWIDTH-NEXT:    [[TMP0:%.*]] = bitcast %struct.st10* [[M:%.*]] to i32*
+// LEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// LEWIDTH-NEXT:    [[BF_SHL:%.*]] = shl i32 [[BF_LOAD]], 23
+// LEWIDTH-NEXT:    [[BF_ASHR:%.*]] = ashr i32 [[BF_SHL]], 24
+// LEWIDTH-NEXT:    ret i32 [[BF_ASHR]]
+//
+// BEWIDTH-LABEL: @read_st10(
+// BEWIDTH-NEXT:  entry:
+// BEWIDTH-NEXT:    [[TMP0:%.*]] = bitcast %struct.st10* [[M:%.*]] to i32*
+// BEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// BEWIDTH-NEXT:    [[BF_SHL:%.*]] = shl i32 [[BF_LOAD]], 1
+// BEWIDTH-NEXT:    [[BF_ASHR:%.*]] = ashr i32 [[BF_SHL]], 24
+// BEWIDTH-NEXT:    ret i32 [[BF_ASHR]]
+//
+// LEWIDTHNUM-LABEL: @read_st10(
+// LEWIDTHNUM-NEXT:  entry:
+// LEWIDTHNUM-NEXT:    [[TMP0:%.*]] = bitcast %struct.st10* [[M:%.*]] to i32*
+// LEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// LEWIDTHNUM-NEXT:    [[BF_SHL:%.*]] = shl i32 [[BF_LOAD]], 23
+// LEWIDTHNUM-NEXT:    [[BF_ASHR:%.*]] = ashr i32 [[BF_SHL]], 24
+// LEWIDTHNUM-NEXT:    ret i32 [[BF_ASHR]]
+//
+// BEWIDTHNUM-LABEL: @read_st10(
+// BEWIDTHNUM-NEXT:  entry:
+// BEWIDTHNUM-NEXT:    [[TMP0:%.*]] = bitcast %struct.st10* [[M:%.*]] to i32*
+// BEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// BEWIDTHNUM-NEXT:    [[BF_SHL:%.*]] = shl i32 [[BF_LOAD]], 1
+// BEWIDTHNUM-NEXT:    [[BF_ASHR:%.*]] = ashr i32 [[BF_SHL]], 24
+// BEWIDTHNUM-NEXT:    ret i32 [[BF_ASHR]]
+//
 int read_st10(volatile struct st10 *m) {
   return m->f;
 }
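The shift amounts above imply that `st10`'s signed field is 8 bits wide at bit offset 1 of its container (offset 1 within an i16 for the baseline, within an i32 for the WIDTH variants; the struct definition itself is elided from the hunk). The LEWIDTH extraction decodes as:

```c
#include <stdint.h>

/* LEWIDTH read: shl 23 moves bits [8:1] of the 32-bit container into the
   top byte, then ashr 24 sign-extends them. Bit 0 belongs to a
   neighbouring field and is shifted out. */
static int32_t extract_s8_at_bit1(uint32_t container) {
    return (int32_t)(container << 23) >> 24;
}
```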
@@ -615,6 +1941,60 @@ int read_st10(volatile struct st10 *m) {
 // BE-NEXT:    store volatile i16 [[BF_SET]], i16* [[TMP0]], align 4
 // BE-NEXT:    ret void
 //
+// LENUMLOADS-LABEL: @store_st10(
+// LENUMLOADS-NEXT:  entry:
+// LENUMLOADS-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST10:%.*]], %struct.st10* [[M:%.*]], i32 0, i32 0
+// LENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i16, i16* [[TMP0]], align 4
+// LENUMLOADS-NEXT:    [[BF_CLEAR:%.*]] = and i16 [[BF_LOAD]], -511
+// LENUMLOADS-NEXT:    [[BF_SET:%.*]] = or i16 [[BF_CLEAR]], 2
+// LENUMLOADS-NEXT:    store volatile i16 [[BF_SET]], i16* [[TMP0]], align 4
+// LENUMLOADS-NEXT:    ret void
+//
+// BENUMLOADS-LABEL: @store_st10(
+// BENUMLOADS-NEXT:  entry:
+// BENUMLOADS-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST10:%.*]], %struct.st10* [[M:%.*]], i32 0, i32 0
+// BENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i16, i16* [[TMP0]], align 4
+// BENUMLOADS-NEXT:    [[BF_CLEAR:%.*]] = and i16 [[BF_LOAD]], -32641
+// BENUMLOADS-NEXT:    [[BF_SET:%.*]] = or i16 [[BF_CLEAR]], 128
+// BENUMLOADS-NEXT:    store volatile i16 [[BF_SET]], i16* [[TMP0]], align 4
+// BENUMLOADS-NEXT:    ret void
+//
+// LEWIDTH-LABEL: @store_st10(
+// LEWIDTH-NEXT:  entry:
+// LEWIDTH-NEXT:    [[TMP0:%.*]] = bitcast %struct.st10* [[M:%.*]] to i32*
+// LEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// LEWIDTH-NEXT:    [[BF_CLEAR:%.*]] = and i32 [[BF_LOAD]], -511
+// LEWIDTH-NEXT:    [[BF_SET:%.*]] = or i32 [[BF_CLEAR]], 2
+// LEWIDTH-NEXT:    store volatile i32 [[BF_SET]], i32* [[TMP0]], align 4
+// LEWIDTH-NEXT:    ret void
+//
+// BEWIDTH-LABEL: @store_st10(
+// BEWIDTH-NEXT:  entry:
+// BEWIDTH-NEXT:    [[TMP0:%.*]] = bitcast %struct.st10* [[M:%.*]] to i32*
+// BEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// BEWIDTH-NEXT:    [[BF_CLEAR:%.*]] = and i32 [[BF_LOAD]], -2139095041
+// BEWIDTH-NEXT:    [[BF_SET:%.*]] = or i32 [[BF_CLEAR]], 8388608
+// BEWIDTH-NEXT:    store volatile i32 [[BF_SET]], i32* [[TMP0]], align 4
+// BEWIDTH-NEXT:    ret void
+//
+// LEWIDTHNUM-LABEL: @store_st10(
+// LEWIDTHNUM-NEXT:  entry:
+// LEWIDTHNUM-NEXT:    [[TMP0:%.*]] = bitcast %struct.st10* [[M:%.*]] to i32*
+// LEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// LEWIDTHNUM-NEXT:    [[BF_CLEAR:%.*]] = and i32 [[BF_LOAD]], -511
+// LEWIDTHNUM-NEXT:    [[BF_SET:%.*]] = or i32 [[BF_CLEAR]], 2
+// LEWIDTHNUM-NEXT:    store volatile i32 [[BF_SET]], i32* [[TMP0]], align 4
+// LEWIDTHNUM-NEXT:    ret void
+//
+// BEWIDTHNUM-LABEL: @store_st10(
+// BEWIDTHNUM-NEXT:  entry:
+// BEWIDTHNUM-NEXT:    [[TMP0:%.*]] = bitcast %struct.st10* [[M:%.*]] to i32*
+// BEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// BEWIDTHNUM-NEXT:    [[BF_CLEAR:%.*]] = and i32 [[BF_LOAD]], -2139095041
+// BEWIDTHNUM-NEXT:    [[BF_SET:%.*]] = or i32 [[BF_CLEAR]], 8388608
+// BEWIDTHNUM-NEXT:    store volatile i32 [[BF_SET]], i32* [[TMP0]], align 4
+// BEWIDTHNUM-NEXT:    ret void
+//
 void store_st10(volatile struct st10 *m) {
   m->f = 1;
 }
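The LEWIDTH store masks decode the same field position: -511 is ~0x1FE, i.e. clear bits [8:1], and the stored constant 2 is the value 1 shifted to the field's bit offset. As a sketch:

```c
#include <stdint.h>

/* LEWIDTH store into an 8-bit field at bit offset 1:
   and -511 == ~0x1FE clears bits [8:1], then the new value is shifted
   into place. Bit 0 (a neighbouring field) is preserved. */
static uint32_t store_u8_at_bit1(uint32_t container, uint32_t v) {
    return (container & ~0x1FEu) | ((v & 0xFFu) << 1);
}
```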
@@ -643,6 +2023,78 @@ void store_st10(volatile struct st10 *m) {
 // BE-NEXT:    store volatile i16 [[BF_SET]], i16* [[TMP0]], align 4
 // BE-NEXT:    ret void
 //
+// LENUMLOADS-LABEL: @increment_st10(
+// LENUMLOADS-NEXT:  entry:
+// LENUMLOADS-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST10:%.*]], %struct.st10* [[M:%.*]], i32 0, i32 0
+// LENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i16, i16* [[TMP0]], align 4
+// LENUMLOADS-NEXT:    [[BF_LOAD1:%.*]] = load volatile i16, i16* [[TMP0]], align 4
+// LENUMLOADS-NEXT:    [[TMP1:%.*]] = add i16 [[BF_LOAD]], 2
+// LENUMLOADS-NEXT:    [[BF_SHL2:%.*]] = and i16 [[TMP1]], 510
+// LENUMLOADS-NEXT:    [[BF_CLEAR:%.*]] = and i16 [[BF_LOAD1]], -511
+// LENUMLOADS-NEXT:    [[BF_SET:%.*]] = or i16 [[BF_CLEAR]], [[BF_SHL2]]
+// LENUMLOADS-NEXT:    store volatile i16 [[BF_SET]], i16* [[TMP0]], align 4
+// LENUMLOADS-NEXT:    ret void
+//
+// BENUMLOADS-LABEL: @increment_st10(
+// BENUMLOADS-NEXT:  entry:
+// BENUMLOADS-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST10:%.*]], %struct.st10* [[M:%.*]], i32 0, i32 0
+// BENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i16, i16* [[TMP0]], align 4
+// BENUMLOADS-NEXT:    [[BF_LOAD1:%.*]] = load volatile i16, i16* [[TMP0]], align 4
+// BENUMLOADS-NEXT:    [[TMP1:%.*]] = add i16 [[BF_LOAD]], 128
+// BENUMLOADS-NEXT:    [[BF_SHL2:%.*]] = and i16 [[TMP1]], 32640
+// BENUMLOADS-NEXT:    [[BF_CLEAR:%.*]] = and i16 [[BF_LOAD1]], -32641
+// BENUMLOADS-NEXT:    [[BF_SET:%.*]] = or i16 [[BF_CLEAR]], [[BF_SHL2]]
+// BENUMLOADS-NEXT:    store volatile i16 [[BF_SET]], i16* [[TMP0]], align 4
+// BENUMLOADS-NEXT:    ret void
+//
+// LEWIDTH-LABEL: @increment_st10(
+// LEWIDTH-NEXT:  entry:
+// LEWIDTH-NEXT:    [[TMP0:%.*]] = bitcast %struct.st10* [[M:%.*]] to i32*
+// LEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// LEWIDTH-NEXT:    [[BF_LOAD1:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// LEWIDTH-NEXT:    [[INC3:%.*]] = add i32 [[BF_LOAD]], 2
+// LEWIDTH-NEXT:    [[BF_SHL2:%.*]] = and i32 [[INC3]], 510
+// LEWIDTH-NEXT:    [[BF_CLEAR:%.*]] = and i32 [[BF_LOAD1]], -511
+// LEWIDTH-NEXT:    [[BF_SET:%.*]] = or i32 [[BF_CLEAR]], [[BF_SHL2]]
+// LEWIDTH-NEXT:    store volatile i32 [[BF_SET]], i32* [[TMP0]], align 4
+// LEWIDTH-NEXT:    ret void
+//
+// BEWIDTH-LABEL: @increment_st10(
+// BEWIDTH-NEXT:  entry:
+// BEWIDTH-NEXT:    [[TMP0:%.*]] = bitcast %struct.st10* [[M:%.*]] to i32*
+// BEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// BEWIDTH-NEXT:    [[BF_LOAD1:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// BEWIDTH-NEXT:    [[INC3:%.*]] = add i32 [[BF_LOAD]], 8388608
+// BEWIDTH-NEXT:    [[BF_SHL2:%.*]] = and i32 [[INC3]], 2139095040
+// BEWIDTH-NEXT:    [[BF_CLEAR:%.*]] = and i32 [[BF_LOAD1]], -2139095041
+// BEWIDTH-NEXT:    [[BF_SET:%.*]] = or i32 [[BF_CLEAR]], [[BF_SHL2]]
+// BEWIDTH-NEXT:    store volatile i32 [[BF_SET]], i32* [[TMP0]], align 4
+// BEWIDTH-NEXT:    ret void
+//
+// LEWIDTHNUM-LABEL: @increment_st10(
+// LEWIDTHNUM-NEXT:  entry:
+// LEWIDTHNUM-NEXT:    [[TMP0:%.*]] = bitcast %struct.st10* [[M:%.*]] to i32*
+// LEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// LEWIDTHNUM-NEXT:    [[BF_LOAD1:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// LEWIDTHNUM-NEXT:    [[INC3:%.*]] = add i32 [[BF_LOAD]], 2
+// LEWIDTHNUM-NEXT:    [[BF_SHL2:%.*]] = and i32 [[INC3]], 510
+// LEWIDTHNUM-NEXT:    [[BF_CLEAR:%.*]] = and i32 [[BF_LOAD1]], -511
+// LEWIDTHNUM-NEXT:    [[BF_SET:%.*]] = or i32 [[BF_CLEAR]], [[BF_SHL2]]
+// LEWIDTHNUM-NEXT:    store volatile i32 [[BF_SET]], i32* [[TMP0]], align 4
+// LEWIDTHNUM-NEXT:    ret void
+//
+// BEWIDTHNUM-LABEL: @increment_st10(
+// BEWIDTHNUM-NEXT:  entry:
+// BEWIDTHNUM-NEXT:    [[TMP0:%.*]] = bitcast %struct.st10* [[M:%.*]] to i32*
+// BEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// BEWIDTHNUM-NEXT:    [[BF_LOAD1:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// BEWIDTHNUM-NEXT:    [[INC3:%.*]] = add i32 [[BF_LOAD]], 8388608
+// BEWIDTHNUM-NEXT:    [[BF_SHL2:%.*]] = and i32 [[INC3]], 2139095040
+// BEWIDTHNUM-NEXT:    [[BF_CLEAR:%.*]] = and i32 [[BF_LOAD1]], -2139095041
+// BEWIDTHNUM-NEXT:    [[BF_SET:%.*]] = or i32 [[BF_CLEAR]], [[BF_SHL2]]
+// BEWIDTHNUM-NEXT:    store volatile i32 [[BF_SET]], i32* [[TMP0]], align 4
+// BEWIDTHNUM-NEXT:    ret void
+//
 void increment_st10(volatile struct st10 *m) {
   ++m->f;
 }
@@ -666,6 +2118,48 @@ struct st11{
 // BE-NEXT:    [[BF_CAST:%.*]] = sext i16 [[BF_LOAD]] to i32
 // BE-NEXT:    ret i32 [[BF_CAST]]
 //
+// LENUMLOADS-LABEL: @read_st11(
+// LENUMLOADS-NEXT:  entry:
+// LENUMLOADS-NEXT:    [[F:%.*]] = getelementptr inbounds [[STRUCT_ST11:%.*]], %struct.st11* [[M:%.*]], i32 0, i32 1
+// LENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i16, i16* [[F]], align 1
+// LENUMLOADS-NEXT:    [[BF_CAST:%.*]] = sext i16 [[BF_LOAD]] to i32
+// LENUMLOADS-NEXT:    ret i32 [[BF_CAST]]
+//
+// BENUMLOADS-LABEL: @read_st11(
+// BENUMLOADS-NEXT:  entry:
+// BENUMLOADS-NEXT:    [[F:%.*]] = getelementptr inbounds [[STRUCT_ST11:%.*]], %struct.st11* [[M:%.*]], i32 0, i32 1
+// BENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i16, i16* [[F]], align 1
+// BENUMLOADS-NEXT:    [[BF_CAST:%.*]] = sext i16 [[BF_LOAD]] to i32
+// BENUMLOADS-NEXT:    ret i32 [[BF_CAST]]
+//
+// LEWIDTH-LABEL: @read_st11(
+// LEWIDTH-NEXT:  entry:
+// LEWIDTH-NEXT:    [[F:%.*]] = getelementptr inbounds [[STRUCT_ST11:%.*]], %struct.st11* [[M:%.*]], i32 0, i32 1
+// LEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load volatile i16, i16* [[F]], align 1
+// LEWIDTH-NEXT:    [[BF_CAST:%.*]] = sext i16 [[BF_LOAD]] to i32
+// LEWIDTH-NEXT:    ret i32 [[BF_CAST]]
+//
+// BEWIDTH-LABEL: @read_st11(
+// BEWIDTH-NEXT:  entry:
+// BEWIDTH-NEXT:    [[F:%.*]] = getelementptr inbounds [[STRUCT_ST11:%.*]], %struct.st11* [[M:%.*]], i32 0, i32 1
+// BEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load volatile i16, i16* [[F]], align 1
+// BEWIDTH-NEXT:    [[BF_CAST:%.*]] = sext i16 [[BF_LOAD]] to i32
+// BEWIDTH-NEXT:    ret i32 [[BF_CAST]]
+//
+// LEWIDTHNUM-LABEL: @read_st11(
+// LEWIDTHNUM-NEXT:  entry:
+// LEWIDTHNUM-NEXT:    [[F:%.*]] = getelementptr inbounds [[STRUCT_ST11:%.*]], %struct.st11* [[M:%.*]], i32 0, i32 1
+// LEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load volatile i16, i16* [[F]], align 1
+// LEWIDTHNUM-NEXT:    [[BF_CAST:%.*]] = sext i16 [[BF_LOAD]] to i32
+// LEWIDTHNUM-NEXT:    ret i32 [[BF_CAST]]
+//
+// BEWIDTHNUM-LABEL: @read_st11(
+// BEWIDTHNUM-NEXT:  entry:
+// BEWIDTHNUM-NEXT:    [[F:%.*]] = getelementptr inbounds [[STRUCT_ST11:%.*]], %struct.st11* [[M:%.*]], i32 0, i32 1
+// BEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load volatile i16, i16* [[F]], align 1
+// BEWIDTHNUM-NEXT:    [[BF_CAST:%.*]] = sext i16 [[BF_LOAD]] to i32
+// BEWIDTHNUM-NEXT:    ret i32 [[BF_CAST]]
+//
 int read_st11(volatile struct st11 *m) {
   return m->f;
 }
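Unlike st9 and st10, every prefix reads st11's field with the same narrow i16 load at `align 1`: the field's container overlaps an adjacent member (the alignment suggests a packed struct), so per the AAPCS note quoted in the log the access must not be widened, since the C/C++ memory model treats the neighbouring member as a separate memory location. A layout consistent with the accesses, offered only as a guess since the real definition is elided from the hunk:

```c
#include <assert.h>

/* Hypothetical packed layout matching the align-1 i16 accesses above.
   Widening f's access to 32 bits would touch the separate member e,
   so even the WIDTH configurations keep the 16-bit access.
   __attribute__((packed)) is a GNU extension supported by clang/gcc. */
struct __attribute__((packed)) st11_like {
    char e;
    volatile short f : 16;
};
```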
@@ -673,17 +2167,55 @@ int read_st11(volatile struct st11 *m) {
 // LE-LABEL: @store_st11(
 // LE-NEXT:  entry:
 // LE-NEXT:    [[F:%.*]] = getelementptr inbounds [[STRUCT_ST11:%.*]], %struct.st11* [[M:%.*]], i32 0, i32 1
-// LENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i16, i16* [[F]], align 1
 // LE-NEXT:    store volatile i16 1, i16* [[F]], align 1
 // LE-NEXT:    ret void
 //
 // BE-LABEL: @store_st11(
 // BE-NEXT:  entry:
 // BE-NEXT:    [[F:%.*]] = getelementptr inbounds [[STRUCT_ST11:%.*]], %struct.st11* [[M:%.*]], i32 0, i32 1
-// BENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i16, i16* [[F]], align 1
 // BE-NEXT:    store volatile i16 1, i16* [[F]], align 1
 // BE-NEXT:    ret void
 //
+// LENUMLOADS-LABEL: @store_st11(
+// LENUMLOADS-NEXT:  entry:
+// LENUMLOADS-NEXT:    [[F:%.*]] = getelementptr inbounds [[STRUCT_ST11:%.*]], %struct.st11* [[M:%.*]], i32 0, i32 1
+// LENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i16, i16* [[F]], align 1
+// LENUMLOADS-NEXT:    store volatile i16 1, i16* [[F]], align 1
+// LENUMLOADS-NEXT:    ret void
+//
+// BENUMLOADS-LABEL: @store_st11(
+// BENUMLOADS-NEXT:  entry:
+// BENUMLOADS-NEXT:    [[F:%.*]] = getelementptr inbounds [[STRUCT_ST11:%.*]], %struct.st11* [[M:%.*]], i32 0, i32 1
+// BENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i16, i16* [[F]], align 1
+// BENUMLOADS-NEXT:    store volatile i16 1, i16* [[F]], align 1
+// BENUMLOADS-NEXT:    ret void
+//
+// LEWIDTH-LABEL: @store_st11(
+// LEWIDTH-NEXT:  entry:
+// LEWIDTH-NEXT:    [[F:%.*]] = getelementptr inbounds [[STRUCT_ST11:%.*]], %struct.st11* [[M:%.*]], i32 0, i32 1
+// LEWIDTH-NEXT:    store volatile i16 1, i16* [[F]], align 1
+// LEWIDTH-NEXT:    ret void
+//
+// BEWIDTH-LABEL: @store_st11(
+// BEWIDTH-NEXT:  entry:
+// BEWIDTH-NEXT:    [[F:%.*]] = getelementptr inbounds [[STRUCT_ST11:%.*]], %struct.st11* [[M:%.*]], i32 0, i32 1
+// BEWIDTH-NEXT:    store volatile i16 1, i16* [[F]], align 1
+// BEWIDTH-NEXT:    ret void
+//
+// LEWIDTHNUM-LABEL: @store_st11(
+// LEWIDTHNUM-NEXT:  entry:
+// LEWIDTHNUM-NEXT:    [[F:%.*]] = getelementptr inbounds [[STRUCT_ST11:%.*]], %struct.st11* [[M:%.*]], i32 0, i32 1
+// LEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load volatile i16, i16* [[F]], align 1
+// LEWIDTHNUM-NEXT:    store volatile i16 1, i16* [[F]], align 1
+// LEWIDTHNUM-NEXT:    ret void
+//
+// BEWIDTHNUM-LABEL: @store_st11(
+// BEWIDTHNUM-NEXT:  entry:
+// BEWIDTHNUM-NEXT:    [[F:%.*]] = getelementptr inbounds [[STRUCT_ST11:%.*]], %struct.st11* [[M:%.*]], i32 0, i32 1
+// BEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load volatile i16, i16* [[F]], align 1
+// BEWIDTHNUM-NEXT:    store volatile i16 1, i16* [[F]], align 1
+// BEWIDTHNUM-NEXT:    ret void
+//
 void store_st11(volatile struct st11 *m) {
   m->f = 1;
 }
@@ -693,7 +2225,6 @@ void store_st11(volatile struct st11 *m) {
 // LE-NEXT:    [[F:%.*]] = getelementptr inbounds [[STRUCT_ST11:%.*]], %struct.st11* [[M:%.*]], i32 0, i32 1
 // LE-NEXT:    [[BF_LOAD:%.*]] = load volatile i16, i16* [[F]], align 1
 // LE-NEXT:    [[INC:%.*]] = add i16 [[BF_LOAD]], 1
-// LENUMLOADS-NEXT:    [[BF_LOAD1:%.*]] = load volatile i16, i16* [[F]], align 1
 // LE-NEXT:    store volatile i16 [[INC]], i16* [[F]], align 1
 // LE-NEXT:    ret void
 //
@@ -702,10 +2233,61 @@ void store_st11(volatile struct st11 *m) {
 // BE-NEXT:    [[F:%.*]] = getelementptr inbounds [[STRUCT_ST11:%.*]], %struct.st11* [[M:%.*]], i32 0, i32 1
 // BE-NEXT:    [[BF_LOAD:%.*]] = load volatile i16, i16* [[F]], align 1
 // BE-NEXT:    [[INC:%.*]] = add i16 [[BF_LOAD]], 1
-// BENUMLOADS-NEXT:    [[BF_LOAD1:%.*]] = load volatile i16, i16* [[F]], align 1
 // BE-NEXT:    store volatile i16 [[INC]], i16* [[F]], align 1
 // BE-NEXT:    ret void
 //
+// LENUMLOADS-LABEL: @increment_st11(
+// LENUMLOADS-NEXT:  entry:
+// LENUMLOADS-NEXT:    [[F:%.*]] = getelementptr inbounds [[STRUCT_ST11:%.*]], %struct.st11* [[M:%.*]], i32 0, i32 1
+// LENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i16, i16* [[F]], align 1
+// LENUMLOADS-NEXT:    [[INC:%.*]] = add i16 [[BF_LOAD]], 1
+// LENUMLOADS-NEXT:    [[BF_LOAD1:%.*]] = load volatile i16, i16* [[F]], align 1
+// LENUMLOADS-NEXT:    store volatile i16 [[INC]], i16* [[F]], align 1
+// LENUMLOADS-NEXT:    ret void
+//
+// BENUMLOADS-LABEL: @increment_st11(
+// BENUMLOADS-NEXT:  entry:
+// BENUMLOADS-NEXT:    [[F:%.*]] = getelementptr inbounds [[STRUCT_ST11:%.*]], %struct.st11* [[M:%.*]], i32 0, i32 1
+// BENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i16, i16* [[F]], align 1
+// BENUMLOADS-NEXT:    [[INC:%.*]] = add i16 [[BF_LOAD]], 1
+// BENUMLOADS-NEXT:    [[BF_LOAD1:%.*]] = load volatile i16, i16* [[F]], align 1
+// BENUMLOADS-NEXT:    store volatile i16 [[INC]], i16* [[F]], align 1
+// BENUMLOADS-NEXT:    ret void
+//
+// LEWIDTH-LABEL: @increment_st11(
+// LEWIDTH-NEXT:  entry:
+// LEWIDTH-NEXT:    [[F:%.*]] = getelementptr inbounds [[STRUCT_ST11:%.*]], %struct.st11* [[M:%.*]], i32 0, i32 1
+// LEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load volatile i16, i16* [[F]], align 1
+// LEWIDTH-NEXT:    [[INC:%.*]] = add i16 [[BF_LOAD]], 1
+// LEWIDTH-NEXT:    store volatile i16 [[INC]], i16* [[F]], align 1
+// LEWIDTH-NEXT:    ret void
+//
+// BEWIDTH-LABEL: @increment_st11(
+// BEWIDTH-NEXT:  entry:
+// BEWIDTH-NEXT:    [[F:%.*]] = getelementptr inbounds [[STRUCT_ST11:%.*]], %struct.st11* [[M:%.*]], i32 0, i32 1
+// BEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load volatile i16, i16* [[F]], align 1
+// BEWIDTH-NEXT:    [[INC:%.*]] = add i16 [[BF_LOAD]], 1
+// BEWIDTH-NEXT:    store volatile i16 [[INC]], i16* [[F]], align 1
+// BEWIDTH-NEXT:    ret void
+//
+// LEWIDTHNUM-LABEL: @increment_st11(
+// LEWIDTHNUM-NEXT:  entry:
+// LEWIDTHNUM-NEXT:    [[F:%.*]] = getelementptr inbounds [[STRUCT_ST11:%.*]], %struct.st11* [[M:%.*]], i32 0, i32 1
+// LEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load volatile i16, i16* [[F]], align 1
+// LEWIDTHNUM-NEXT:    [[INC:%.*]] = add i16 [[BF_LOAD]], 1
+// LEWIDTHNUM-NEXT:    [[BF_LOAD1:%.*]] = load volatile i16, i16* [[F]], align 1
+// LEWIDTHNUM-NEXT:    store volatile i16 [[INC]], i16* [[F]], align 1
+// LEWIDTHNUM-NEXT:    ret void
+//
+// BEWIDTHNUM-LABEL: @increment_st11(
+// BEWIDTHNUM-NEXT:  entry:
+// BEWIDTHNUM-NEXT:    [[F:%.*]] = getelementptr inbounds [[STRUCT_ST11:%.*]], %struct.st11* [[M:%.*]], i32 0, i32 1
+// BEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load volatile i16, i16* [[F]], align 1
+// BEWIDTHNUM-NEXT:    [[INC:%.*]] = add i16 [[BF_LOAD]], 1
+// BEWIDTHNUM-NEXT:    [[BF_LOAD1:%.*]] = load volatile i16, i16* [[F]], align 1
+// BEWIDTHNUM-NEXT:    store volatile i16 [[INC]], i16* [[F]], align 1
+// BEWIDTHNUM-NEXT:    ret void
+//
 void increment_st11(volatile struct st11 *m) {
   ++m->f;
 }
@@ -713,19 +2295,67 @@ void increment_st11(volatile struct st11 *m) {
 // LE-LABEL: @increment_e_st11(
 // LE-NEXT:  entry:
 // LE-NEXT:    [[E:%.*]] = getelementptr inbounds [[STRUCT_ST11:%.*]], %struct.st11* [[M:%.*]], i32 0, i32 0
-// LE-NEXT:    [[TMP0:%.*]] = load volatile i8, i8* [[E]], align 4
+// LE-NEXT:    [[TMP0:%.*]] = load volatile i8, i8* [[E]], align 4, !tbaa !12
 // LE-NEXT:    [[INC:%.*]] = add i8 [[TMP0]], 1
-// LE-NEXT:    store volatile i8 [[INC]], i8* [[E]], align 4
+// LE-NEXT:    store volatile i8 [[INC]], i8* [[E]], align 4, !tbaa !12
 // LE-NEXT:    ret void
 //
 // BE-LABEL: @increment_e_st11(
 // BE-NEXT:  entry:
 // BE-NEXT:    [[E:%.*]] = getelementptr inbounds [[STRUCT_ST11:%.*]], %struct.st11* [[M:%.*]], i32 0, i32 0
-// BE-NEXT:    [[TMP0:%.*]] = load volatile i8, i8* [[E]], align 4
+// BE-NEXT:    [[TMP0:%.*]] = load volatile i8, i8* [[E]], align 4, !tbaa !12
 // BE-NEXT:    [[INC:%.*]] = add i8 [[TMP0]], 1
-// BE-NEXT:    store volatile i8 [[INC]], i8* [[E]], align 4
+// BE-NEXT:    store volatile i8 [[INC]], i8* [[E]], align 4, !tbaa !12
 // BE-NEXT:    ret void
 //
+// LENUMLOADS-LABEL: @increment_e_st11(
+// LENUMLOADS-NEXT:  entry:
+// LENUMLOADS-NEXT:    [[E:%.*]] = getelementptr inbounds [[STRUCT_ST11:%.*]], %struct.st11* [[M:%.*]], i32 0, i32 0
+// LENUMLOADS-NEXT:    [[TMP0:%.*]] = load volatile i8, i8* [[E]], align 4, !tbaa !12
+// LENUMLOADS-NEXT:    [[INC:%.*]] = add i8 [[TMP0]], 1
+// LENUMLOADS-NEXT:    store volatile i8 [[INC]], i8* [[E]], align 4, !tbaa !12
+// LENUMLOADS-NEXT:    ret void
+//
+// BENUMLOADS-LABEL: @increment_e_st11(
+// BENUMLOADS-NEXT:  entry:
+// BENUMLOADS-NEXT:    [[E:%.*]] = getelementptr inbounds [[STRUCT_ST11:%.*]], %struct.st11* [[M:%.*]], i32 0, i32 0
+// BENUMLOADS-NEXT:    [[TMP0:%.*]] = load volatile i8, i8* [[E]], align 4, !tbaa !12
+// BENUMLOADS-NEXT:    [[INC:%.*]] = add i8 [[TMP0]], 1
+// BENUMLOADS-NEXT:    store volatile i8 [[INC]], i8* [[E]], align 4, !tbaa !12
+// BENUMLOADS-NEXT:    ret void
+//
+// LEWIDTH-LABEL: @increment_e_st11(
+// LEWIDTH-NEXT:  entry:
+// LEWIDTH-NEXT:    [[E:%.*]] = getelementptr inbounds [[STRUCT_ST11:%.*]], %struct.st11* [[M:%.*]], i32 0, i32 0
+// LEWIDTH-NEXT:    [[TMP0:%.*]] = load volatile i8, i8* [[E]], align 4, !tbaa !12
+// LEWIDTH-NEXT:    [[INC:%.*]] = add i8 [[TMP0]], 1
+// LEWIDTH-NEXT:    store volatile i8 [[INC]], i8* [[E]], align 4, !tbaa !12
+// LEWIDTH-NEXT:    ret void
+//
+// BEWIDTH-LABEL: @increment_e_st11(
+// BEWIDTH-NEXT:  entry:
+// BEWIDTH-NEXT:    [[E:%.*]] = getelementptr inbounds [[STRUCT_ST11:%.*]], %struct.st11* [[M:%.*]], i32 0, i32 0
+// BEWIDTH-NEXT:    [[TMP0:%.*]] = load volatile i8, i8* [[E]], align 4, !tbaa !12
+// BEWIDTH-NEXT:    [[INC:%.*]] = add i8 [[TMP0]], 1
+// BEWIDTH-NEXT:    store volatile i8 [[INC]], i8* [[E]], align 4, !tbaa !12
+// BEWIDTH-NEXT:    ret void
+//
+// LEWIDTHNUM-LABEL: @increment_e_st11(
+// LEWIDTHNUM-NEXT:  entry:
+// LEWIDTHNUM-NEXT:    [[E:%.*]] = getelementptr inbounds [[STRUCT_ST11:%.*]], %struct.st11* [[M:%.*]], i32 0, i32 0
+// LEWIDTHNUM-NEXT:    [[TMP0:%.*]] = load volatile i8, i8* [[E]], align 4, !tbaa !12
+// LEWIDTHNUM-NEXT:    [[INC:%.*]] = add i8 [[TMP0]], 1
+// LEWIDTHNUM-NEXT:    store volatile i8 [[INC]], i8* [[E]], align 4, !tbaa !12
+// LEWIDTHNUM-NEXT:    ret void
+//
+// BEWIDTHNUM-LABEL: @increment_e_st11(
+// BEWIDTHNUM-NEXT:  entry:
+// BEWIDTHNUM-NEXT:    [[E:%.*]] = getelementptr inbounds [[STRUCT_ST11:%.*]], %struct.st11* [[M:%.*]], i32 0, i32 0
+// BEWIDTHNUM-NEXT:    [[TMP0:%.*]] = load volatile i8, i8* [[E]], align 4, !tbaa !12
+// BEWIDTHNUM-NEXT:    [[INC:%.*]] = add i8 [[TMP0]], 1
+// BEWIDTHNUM-NEXT:    store volatile i8 [[INC]], i8* [[E]], align 4, !tbaa !12
+// BEWIDTHNUM-NEXT:    ret void
+//
 void increment_e_st11(volatile struct st11 *m) {
   ++m->e;
 }
@@ -751,6 +2381,54 @@ struct st12{
 // BE-NEXT:    [[BF_ASHR:%.*]] = ashr i32 [[BF_SHL]], 16
 // BE-NEXT:    ret i32 [[BF_ASHR]]
 //
+// LENUMLOADS-LABEL: @read_st12(
+// LENUMLOADS-NEXT:  entry:
+// LENUMLOADS-NEXT:    [[TMP0:%.*]] = bitcast %struct.st12* [[M:%.*]] to i32*
+// LENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// LENUMLOADS-NEXT:    [[BF_SHL:%.*]] = shl i32 [[BF_LOAD]], 8
+// LENUMLOADS-NEXT:    [[BF_ASHR:%.*]] = ashr i32 [[BF_SHL]], 16
+// LENUMLOADS-NEXT:    ret i32 [[BF_ASHR]]
+//
+// BENUMLOADS-LABEL: @read_st12(
+// BENUMLOADS-NEXT:  entry:
+// BENUMLOADS-NEXT:    [[TMP0:%.*]] = bitcast %struct.st12* [[M:%.*]] to i32*
+// BENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// BENUMLOADS-NEXT:    [[BF_SHL:%.*]] = shl i32 [[BF_LOAD]], 8
+// BENUMLOADS-NEXT:    [[BF_ASHR:%.*]] = ashr i32 [[BF_SHL]], 16
+// BENUMLOADS-NEXT:    ret i32 [[BF_ASHR]]
+//
+// LEWIDTH-LABEL: @read_st12(
+// LEWIDTH-NEXT:  entry:
+// LEWIDTH-NEXT:    [[TMP0:%.*]] = bitcast %struct.st12* [[M:%.*]] to i32*
+// LEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// LEWIDTH-NEXT:    [[BF_SHL:%.*]] = shl i32 [[BF_LOAD]], 8
+// LEWIDTH-NEXT:    [[BF_ASHR:%.*]] = ashr i32 [[BF_SHL]], 16
+// LEWIDTH-NEXT:    ret i32 [[BF_ASHR]]
+//
+// BEWIDTH-LABEL: @read_st12(
+// BEWIDTH-NEXT:  entry:
+// BEWIDTH-NEXT:    [[TMP0:%.*]] = bitcast %struct.st12* [[M:%.*]] to i32*
+// BEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// BEWIDTH-NEXT:    [[BF_SHL:%.*]] = shl i32 [[BF_LOAD]], 8
+// BEWIDTH-NEXT:    [[BF_ASHR:%.*]] = ashr i32 [[BF_SHL]], 16
+// BEWIDTH-NEXT:    ret i32 [[BF_ASHR]]
+//
+// LEWIDTHNUM-LABEL: @read_st12(
+// LEWIDTHNUM-NEXT:  entry:
+// LEWIDTHNUM-NEXT:    [[TMP0:%.*]] = bitcast %struct.st12* [[M:%.*]] to i32*
+// LEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// LEWIDTHNUM-NEXT:    [[BF_SHL:%.*]] = shl i32 [[BF_LOAD]], 8
+// LEWIDTHNUM-NEXT:    [[BF_ASHR:%.*]] = ashr i32 [[BF_SHL]], 16
+// LEWIDTHNUM-NEXT:    ret i32 [[BF_ASHR]]
+//
+// BEWIDTHNUM-LABEL: @read_st12(
+// BEWIDTHNUM-NEXT:  entry:
+// BEWIDTHNUM-NEXT:    [[TMP0:%.*]] = bitcast %struct.st12* [[M:%.*]] to i32*
+// BEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// BEWIDTHNUM-NEXT:    [[BF_SHL:%.*]] = shl i32 [[BF_LOAD]], 8
+// BEWIDTHNUM-NEXT:    [[BF_ASHR:%.*]] = ashr i32 [[BF_SHL]], 16
+// BEWIDTHNUM-NEXT:    ret i32 [[BF_ASHR]]
+//
 int read_st12(volatile struct st12 *m) {
   return m->f;
 }
@@ -773,6 +2451,60 @@ int read_st12(volatile struct st12 *m) {
 // BE-NEXT:    store volatile i32 [[BF_SET]], i32* [[TMP0]], align 4
 // BE-NEXT:    ret void
 //
+// LENUMLOADS-LABEL: @store_st12(
+// LENUMLOADS-NEXT:  entry:
+// LENUMLOADS-NEXT:    [[TMP0:%.*]] = bitcast %struct.st12* [[M:%.*]] to i32*
+// LENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// LENUMLOADS-NEXT:    [[BF_CLEAR:%.*]] = and i32 [[BF_LOAD]], -16776961
+// LENUMLOADS-NEXT:    [[BF_SET:%.*]] = or i32 [[BF_CLEAR]], 256
+// LENUMLOADS-NEXT:    store volatile i32 [[BF_SET]], i32* [[TMP0]], align 4
+// LENUMLOADS-NEXT:    ret void
+//
+// BENUMLOADS-LABEL: @store_st12(
+// BENUMLOADS-NEXT:  entry:
+// BENUMLOADS-NEXT:    [[TMP0:%.*]] = bitcast %struct.st12* [[M:%.*]] to i32*
+// BENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// BENUMLOADS-NEXT:    [[BF_CLEAR:%.*]] = and i32 [[BF_LOAD]], -16776961
+// BENUMLOADS-NEXT:    [[BF_SET:%.*]] = or i32 [[BF_CLEAR]], 256
+// BENUMLOADS-NEXT:    store volatile i32 [[BF_SET]], i32* [[TMP0]], align 4
+// BENUMLOADS-NEXT:    ret void
+//
+// LEWIDTH-LABEL: @store_st12(
+// LEWIDTH-NEXT:  entry:
+// LEWIDTH-NEXT:    [[TMP0:%.*]] = bitcast %struct.st12* [[M:%.*]] to i32*
+// LEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// LEWIDTH-NEXT:    [[BF_CLEAR:%.*]] = and i32 [[BF_LOAD]], -16776961
+// LEWIDTH-NEXT:    [[BF_SET:%.*]] = or i32 [[BF_CLEAR]], 256
+// LEWIDTH-NEXT:    store volatile i32 [[BF_SET]], i32* [[TMP0]], align 4
+// LEWIDTH-NEXT:    ret void
+//
+// BEWIDTH-LABEL: @store_st12(
+// BEWIDTH-NEXT:  entry:
+// BEWIDTH-NEXT:    [[TMP0:%.*]] = bitcast %struct.st12* [[M:%.*]] to i32*
+// BEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// BEWIDTH-NEXT:    [[BF_CLEAR:%.*]] = and i32 [[BF_LOAD]], -16776961
+// BEWIDTH-NEXT:    [[BF_SET:%.*]] = or i32 [[BF_CLEAR]], 256
+// BEWIDTH-NEXT:    store volatile i32 [[BF_SET]], i32* [[TMP0]], align 4
+// BEWIDTH-NEXT:    ret void
+//
+// LEWIDTHNUM-LABEL: @store_st12(
+// LEWIDTHNUM-NEXT:  entry:
+// LEWIDTHNUM-NEXT:    [[TMP0:%.*]] = bitcast %struct.st12* [[M:%.*]] to i32*
+// LEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// LEWIDTHNUM-NEXT:    [[BF_CLEAR:%.*]] = and i32 [[BF_LOAD]], -16776961
+// LEWIDTHNUM-NEXT:    [[BF_SET:%.*]] = or i32 [[BF_CLEAR]], 256
+// LEWIDTHNUM-NEXT:    store volatile i32 [[BF_SET]], i32* [[TMP0]], align 4
+// LEWIDTHNUM-NEXT:    ret void
+//
+// BEWIDTHNUM-LABEL: @store_st12(
+// BEWIDTHNUM-NEXT:  entry:
+// BEWIDTHNUM-NEXT:    [[TMP0:%.*]] = bitcast %struct.st12* [[M:%.*]] to i32*
+// BEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// BEWIDTHNUM-NEXT:    [[BF_CLEAR:%.*]] = and i32 [[BF_LOAD]], -16776961
+// BEWIDTHNUM-NEXT:    [[BF_SET:%.*]] = or i32 [[BF_CLEAR]], 256
+// BEWIDTHNUM-NEXT:    store volatile i32 [[BF_SET]], i32* [[TMP0]], align 4
+// BEWIDTHNUM-NEXT:    ret void
+//
 void store_st12(volatile struct st12 *m) {
   m->f = 1;
 }
@@ -801,6 +2533,78 @@ void store_st12(volatile struct st12 *m) {
 // BE-NEXT:    store volatile i32 [[BF_SET]], i32* [[TMP0]], align 4
 // BE-NEXT:    ret void
 //
+// LENUMLOADS-LABEL: @increment_st12(
+// LENUMLOADS-NEXT:  entry:
+// LENUMLOADS-NEXT:    [[TMP0:%.*]] = bitcast %struct.st12* [[M:%.*]] to i32*
+// LENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// LENUMLOADS-NEXT:    [[BF_LOAD1:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// LENUMLOADS-NEXT:    [[INC3:%.*]] = add i32 [[BF_LOAD]], 256
+// LENUMLOADS-NEXT:    [[BF_SHL2:%.*]] = and i32 [[INC3]], 16776960
+// LENUMLOADS-NEXT:    [[BF_CLEAR:%.*]] = and i32 [[BF_LOAD1]], -16776961
+// LENUMLOADS-NEXT:    [[BF_SET:%.*]] = or i32 [[BF_CLEAR]], [[BF_SHL2]]
+// LENUMLOADS-NEXT:    store volatile i32 [[BF_SET]], i32* [[TMP0]], align 4
+// LENUMLOADS-NEXT:    ret void
+//
+// BENUMLOADS-LABEL: @increment_st12(
+// BENUMLOADS-NEXT:  entry:
+// BENUMLOADS-NEXT:    [[TMP0:%.*]] = bitcast %struct.st12* [[M:%.*]] to i32*
+// BENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// BENUMLOADS-NEXT:    [[BF_LOAD1:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// BENUMLOADS-NEXT:    [[INC3:%.*]] = add i32 [[BF_LOAD]], 256
+// BENUMLOADS-NEXT:    [[BF_SHL2:%.*]] = and i32 [[INC3]], 16776960
+// BENUMLOADS-NEXT:    [[BF_CLEAR:%.*]] = and i32 [[BF_LOAD1]], -16776961
+// BENUMLOADS-NEXT:    [[BF_SET:%.*]] = or i32 [[BF_CLEAR]], [[BF_SHL2]]
+// BENUMLOADS-NEXT:    store volatile i32 [[BF_SET]], i32* [[TMP0]], align 4
+// BENUMLOADS-NEXT:    ret void
+//
+// LEWIDTH-LABEL: @increment_st12(
+// LEWIDTH-NEXT:  entry:
+// LEWIDTH-NEXT:    [[TMP0:%.*]] = bitcast %struct.st12* [[M:%.*]] to i32*
+// LEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// LEWIDTH-NEXT:    [[BF_LOAD1:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// LEWIDTH-NEXT:    [[INC3:%.*]] = add i32 [[BF_LOAD]], 256
+// LEWIDTH-NEXT:    [[BF_SHL2:%.*]] = and i32 [[INC3]], 16776960
+// LEWIDTH-NEXT:    [[BF_CLEAR:%.*]] = and i32 [[BF_LOAD1]], -16776961
+// LEWIDTH-NEXT:    [[BF_SET:%.*]] = or i32 [[BF_CLEAR]], [[BF_SHL2]]
+// LEWIDTH-NEXT:    store volatile i32 [[BF_SET]], i32* [[TMP0]], align 4
+// LEWIDTH-NEXT:    ret void
+//
+// BEWIDTH-LABEL: @increment_st12(
+// BEWIDTH-NEXT:  entry:
+// BEWIDTH-NEXT:    [[TMP0:%.*]] = bitcast %struct.st12* [[M:%.*]] to i32*
+// BEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// BEWIDTH-NEXT:    [[BF_LOAD1:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// BEWIDTH-NEXT:    [[INC3:%.*]] = add i32 [[BF_LOAD]], 256
+// BEWIDTH-NEXT:    [[BF_SHL2:%.*]] = and i32 [[INC3]], 16776960
+// BEWIDTH-NEXT:    [[BF_CLEAR:%.*]] = and i32 [[BF_LOAD1]], -16776961
+// BEWIDTH-NEXT:    [[BF_SET:%.*]] = or i32 [[BF_CLEAR]], [[BF_SHL2]]
+// BEWIDTH-NEXT:    store volatile i32 [[BF_SET]], i32* [[TMP0]], align 4
+// BEWIDTH-NEXT:    ret void
+//
+// LEWIDTHNUM-LABEL: @increment_st12(
+// LEWIDTHNUM-NEXT:  entry:
+// LEWIDTHNUM-NEXT:    [[TMP0:%.*]] = bitcast %struct.st12* [[M:%.*]] to i32*
+// LEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// LEWIDTHNUM-NEXT:    [[BF_LOAD1:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// LEWIDTHNUM-NEXT:    [[INC3:%.*]] = add i32 [[BF_LOAD]], 256
+// LEWIDTHNUM-NEXT:    [[BF_SHL2:%.*]] = and i32 [[INC3]], 16776960
+// LEWIDTHNUM-NEXT:    [[BF_CLEAR:%.*]] = and i32 [[BF_LOAD1]], -16776961
+// LEWIDTHNUM-NEXT:    [[BF_SET:%.*]] = or i32 [[BF_CLEAR]], [[BF_SHL2]]
+// LEWIDTHNUM-NEXT:    store volatile i32 [[BF_SET]], i32* [[TMP0]], align 4
+// LEWIDTHNUM-NEXT:    ret void
+//
+// BEWIDTHNUM-LABEL: @increment_st12(
+// BEWIDTHNUM-NEXT:  entry:
+// BEWIDTHNUM-NEXT:    [[TMP0:%.*]] = bitcast %struct.st12* [[M:%.*]] to i32*
+// BEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// BEWIDTHNUM-NEXT:    [[BF_LOAD1:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// BEWIDTHNUM-NEXT:    [[INC3:%.*]] = add i32 [[BF_LOAD]], 256
+// BEWIDTHNUM-NEXT:    [[BF_SHL2:%.*]] = and i32 [[INC3]], 16776960
+// BEWIDTHNUM-NEXT:    [[BF_CLEAR:%.*]] = and i32 [[BF_LOAD1]], -16776961
+// BEWIDTHNUM-NEXT:    [[BF_SET:%.*]] = or i32 [[BF_CLEAR]], [[BF_SHL2]]
+// BEWIDTHNUM-NEXT:    store volatile i32 [[BF_SET]], i32* [[TMP0]], align 4
+// BEWIDTHNUM-NEXT:    ret void
+//
 void increment_st12(volatile struct st12 *m) {
   ++m->f;
 }
@@ -829,6 +2633,78 @@ void increment_st12(volatile struct st12 *m) {
 // BE-NEXT:    store volatile i32 [[BF_SET]], i32* [[TMP0]], align 4
 // BE-NEXT:    ret void
 //
+// LENUMLOADS-LABEL: @increment_e_st12(
+// LENUMLOADS-NEXT:  entry:
+// LENUMLOADS-NEXT:    [[TMP0:%.*]] = bitcast %struct.st12* [[M:%.*]] to i32*
+// LENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// LENUMLOADS-NEXT:    [[INC:%.*]] = add i32 [[BF_LOAD]], 1
+// LENUMLOADS-NEXT:    [[BF_LOAD1:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// LENUMLOADS-NEXT:    [[BF_VALUE:%.*]] = and i32 [[INC]], 255
+// LENUMLOADS-NEXT:    [[BF_CLEAR:%.*]] = and i32 [[BF_LOAD1]], -256
+// LENUMLOADS-NEXT:    [[BF_SET:%.*]] = or i32 [[BF_CLEAR]], [[BF_VALUE]]
+// LENUMLOADS-NEXT:    store volatile i32 [[BF_SET]], i32* [[TMP0]], align 4
+// LENUMLOADS-NEXT:    ret void
+//
+// BENUMLOADS-LABEL: @increment_e_st12(
+// BENUMLOADS-NEXT:  entry:
+// BENUMLOADS-NEXT:    [[TMP0:%.*]] = bitcast %struct.st12* [[M:%.*]] to i32*
+// BENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// BENUMLOADS-NEXT:    [[BF_LOAD1:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// BENUMLOADS-NEXT:    [[TMP1:%.*]] = add i32 [[BF_LOAD]], 16777216
+// BENUMLOADS-NEXT:    [[BF_SHL:%.*]] = and i32 [[TMP1]], -16777216
+// BENUMLOADS-NEXT:    [[BF_CLEAR:%.*]] = and i32 [[BF_LOAD1]], 16777215
+// BENUMLOADS-NEXT:    [[BF_SET:%.*]] = or i32 [[BF_CLEAR]], [[BF_SHL]]
+// BENUMLOADS-NEXT:    store volatile i32 [[BF_SET]], i32* [[TMP0]], align 4
+// BENUMLOADS-NEXT:    ret void
+//
+// LEWIDTH-LABEL: @increment_e_st12(
+// LEWIDTH-NEXT:  entry:
+// LEWIDTH-NEXT:    [[TMP0:%.*]] = bitcast %struct.st12* [[M:%.*]] to i32*
+// LEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// LEWIDTH-NEXT:    [[INC:%.*]] = add i32 [[BF_LOAD]], 1
+// LEWIDTH-NEXT:    [[BF_LOAD1:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// LEWIDTH-NEXT:    [[BF_VALUE:%.*]] = and i32 [[INC]], 255
+// LEWIDTH-NEXT:    [[BF_CLEAR:%.*]] = and i32 [[BF_LOAD1]], -256
+// LEWIDTH-NEXT:    [[BF_SET:%.*]] = or i32 [[BF_CLEAR]], [[BF_VALUE]]
+// LEWIDTH-NEXT:    store volatile i32 [[BF_SET]], i32* [[TMP0]], align 4
+// LEWIDTH-NEXT:    ret void
+//
+// BEWIDTH-LABEL: @increment_e_st12(
+// BEWIDTH-NEXT:  entry:
+// BEWIDTH-NEXT:    [[TMP0:%.*]] = bitcast %struct.st12* [[M:%.*]] to i32*
+// BEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// BEWIDTH-NEXT:    [[BF_LOAD1:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// BEWIDTH-NEXT:    [[TMP1:%.*]] = add i32 [[BF_LOAD]], 16777216
+// BEWIDTH-NEXT:    [[BF_SHL:%.*]] = and i32 [[TMP1]], -16777216
+// BEWIDTH-NEXT:    [[BF_CLEAR:%.*]] = and i32 [[BF_LOAD1]], 16777215
+// BEWIDTH-NEXT:    [[BF_SET:%.*]] = or i32 [[BF_CLEAR]], [[BF_SHL]]
+// BEWIDTH-NEXT:    store volatile i32 [[BF_SET]], i32* [[TMP0]], align 4
+// BEWIDTH-NEXT:    ret void
+//
+// LEWIDTHNUM-LABEL: @increment_e_st12(
+// LEWIDTHNUM-NEXT:  entry:
+// LEWIDTHNUM-NEXT:    [[TMP0:%.*]] = bitcast %struct.st12* [[M:%.*]] to i32*
+// LEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// LEWIDTHNUM-NEXT:    [[INC:%.*]] = add i32 [[BF_LOAD]], 1
+// LEWIDTHNUM-NEXT:    [[BF_LOAD1:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// LEWIDTHNUM-NEXT:    [[BF_VALUE:%.*]] = and i32 [[INC]], 255
+// LEWIDTHNUM-NEXT:    [[BF_CLEAR:%.*]] = and i32 [[BF_LOAD1]], -256
+// LEWIDTHNUM-NEXT:    [[BF_SET:%.*]] = or i32 [[BF_CLEAR]], [[BF_VALUE]]
+// LEWIDTHNUM-NEXT:    store volatile i32 [[BF_SET]], i32* [[TMP0]], align 4
+// LEWIDTHNUM-NEXT:    ret void
+//
+// BEWIDTHNUM-LABEL: @increment_e_st12(
+// BEWIDTHNUM-NEXT:  entry:
+// BEWIDTHNUM-NEXT:    [[TMP0:%.*]] = bitcast %struct.st12* [[M:%.*]] to i32*
+// BEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// BEWIDTHNUM-NEXT:    [[BF_LOAD1:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// BEWIDTHNUM-NEXT:    [[TMP1:%.*]] = add i32 [[BF_LOAD]], 16777216
+// BEWIDTHNUM-NEXT:    [[BF_SHL:%.*]] = and i32 [[TMP1]], -16777216
+// BEWIDTHNUM-NEXT:    [[BF_CLEAR:%.*]] = and i32 [[BF_LOAD1]], 16777215
+// BEWIDTHNUM-NEXT:    [[BF_SET:%.*]] = or i32 [[BF_CLEAR]], [[BF_SHL]]
+// BEWIDTHNUM-NEXT:    store volatile i32 [[BF_SET]], i32* [[TMP0]], align 4
+// BEWIDTHNUM-NEXT:    ret void
+//
 void increment_e_st12(volatile struct st12 *m) {
   ++m->e;
 }
@@ -866,6 +2742,90 @@ struct st13 {
 // BE-NEXT:    store volatile i40 [[BF_SET]], i40* [[TMP0]], align 1
 // BE-NEXT:    ret void
 //
+// LENUMLOADS-LABEL: @increment_b_st13(
+// LENUMLOADS-NEXT:  entry:
+// LENUMLOADS-NEXT:    [[TMP0:%.*]] = bitcast %struct.st13* [[S:%.*]] to i40*
+// LENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i40, i40* [[TMP0]], align 1
+// LENUMLOADS-NEXT:    [[TMP1:%.*]] = lshr i40 [[BF_LOAD]], 8
+// LENUMLOADS-NEXT:    [[BF_CAST:%.*]] = trunc i40 [[TMP1]] to i32
+// LENUMLOADS-NEXT:    [[INC:%.*]] = add nsw i32 [[BF_CAST]], 1
+// LENUMLOADS-NEXT:    [[TMP2:%.*]] = zext i32 [[INC]] to i40
+// LENUMLOADS-NEXT:    [[BF_LOAD1:%.*]] = load volatile i40, i40* [[TMP0]], align 1
+// LENUMLOADS-NEXT:    [[BF_SHL:%.*]] = shl nuw i40 [[TMP2]], 8
+// LENUMLOADS-NEXT:    [[BF_CLEAR:%.*]] = and i40 [[BF_LOAD1]], 255
+// LENUMLOADS-NEXT:    [[BF_SET:%.*]] = or i40 [[BF_SHL]], [[BF_CLEAR]]
+// LENUMLOADS-NEXT:    store volatile i40 [[BF_SET]], i40* [[TMP0]], align 1
+// LENUMLOADS-NEXT:    ret void
+//
+// BENUMLOADS-LABEL: @increment_b_st13(
+// BENUMLOADS-NEXT:  entry:
+// BENUMLOADS-NEXT:    [[TMP0:%.*]] = bitcast %struct.st13* [[S:%.*]] to i40*
+// BENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i40, i40* [[TMP0]], align 1
+// BENUMLOADS-NEXT:    [[BF_CAST:%.*]] = trunc i40 [[BF_LOAD]] to i32
+// BENUMLOADS-NEXT:    [[INC:%.*]] = add nsw i32 [[BF_CAST]], 1
+// BENUMLOADS-NEXT:    [[TMP1:%.*]] = zext i32 [[INC]] to i40
+// BENUMLOADS-NEXT:    [[BF_LOAD1:%.*]] = load volatile i40, i40* [[TMP0]], align 1
+// BENUMLOADS-NEXT:    [[BF_CLEAR:%.*]] = and i40 [[BF_LOAD1]], -4294967296
+// BENUMLOADS-NEXT:    [[BF_SET:%.*]] = or i40 [[BF_CLEAR]], [[TMP1]]
+// BENUMLOADS-NEXT:    store volatile i40 [[BF_SET]], i40* [[TMP0]], align 1
+// BENUMLOADS-NEXT:    ret void
+//
+// LEWIDTH-LABEL: @increment_b_st13(
+// LEWIDTH-NEXT:  entry:
+// LEWIDTH-NEXT:    [[TMP0:%.*]] = bitcast %struct.st13* [[S:%.*]] to i40*
+// LEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load volatile i40, i40* [[TMP0]], align 1
+// LEWIDTH-NEXT:    [[TMP1:%.*]] = lshr i40 [[BF_LOAD]], 8
+// LEWIDTH-NEXT:    [[BF_CAST:%.*]] = trunc i40 [[TMP1]] to i32
+// LEWIDTH-NEXT:    [[INC:%.*]] = add nsw i32 [[BF_CAST]], 1
+// LEWIDTH-NEXT:    [[TMP2:%.*]] = zext i32 [[INC]] to i40
+// LEWIDTH-NEXT:    [[BF_LOAD1:%.*]] = load volatile i40, i40* [[TMP0]], align 1
+// LEWIDTH-NEXT:    [[BF_SHL:%.*]] = shl nuw i40 [[TMP2]], 8
+// LEWIDTH-NEXT:    [[BF_CLEAR:%.*]] = and i40 [[BF_LOAD1]], 255
+// LEWIDTH-NEXT:    [[BF_SET:%.*]] = or i40 [[BF_SHL]], [[BF_CLEAR]]
+// LEWIDTH-NEXT:    store volatile i40 [[BF_SET]], i40* [[TMP0]], align 1
+// LEWIDTH-NEXT:    ret void
+//
+// BEWIDTH-LABEL: @increment_b_st13(
+// BEWIDTH-NEXT:  entry:
+// BEWIDTH-NEXT:    [[TMP0:%.*]] = bitcast %struct.st13* [[S:%.*]] to i40*
+// BEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load volatile i40, i40* [[TMP0]], align 1
+// BEWIDTH-NEXT:    [[BF_CAST:%.*]] = trunc i40 [[BF_LOAD]] to i32
+// BEWIDTH-NEXT:    [[INC:%.*]] = add nsw i32 [[BF_CAST]], 1
+// BEWIDTH-NEXT:    [[TMP1:%.*]] = zext i32 [[INC]] to i40
+// BEWIDTH-NEXT:    [[BF_LOAD1:%.*]] = load volatile i40, i40* [[TMP0]], align 1
+// BEWIDTH-NEXT:    [[BF_CLEAR:%.*]] = and i40 [[BF_LOAD1]], -4294967296
+// BEWIDTH-NEXT:    [[BF_SET:%.*]] = or i40 [[BF_CLEAR]], [[TMP1]]
+// BEWIDTH-NEXT:    store volatile i40 [[BF_SET]], i40* [[TMP0]], align 1
+// BEWIDTH-NEXT:    ret void
+//
+// LEWIDTHNUM-LABEL: @increment_b_st13(
+// LEWIDTHNUM-NEXT:  entry:
+// LEWIDTHNUM-NEXT:    [[TMP0:%.*]] = bitcast %struct.st13* [[S:%.*]] to i40*
+// LEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load volatile i40, i40* [[TMP0]], align 1
+// LEWIDTHNUM-NEXT:    [[TMP1:%.*]] = lshr i40 [[BF_LOAD]], 8
+// LEWIDTHNUM-NEXT:    [[BF_CAST:%.*]] = trunc i40 [[TMP1]] to i32
+// LEWIDTHNUM-NEXT:    [[INC:%.*]] = add nsw i32 [[BF_CAST]], 1
+// LEWIDTHNUM-NEXT:    [[TMP2:%.*]] = zext i32 [[INC]] to i40
+// LEWIDTHNUM-NEXT:    [[BF_LOAD1:%.*]] = load volatile i40, i40* [[TMP0]], align 1
+// LEWIDTHNUM-NEXT:    [[BF_SHL:%.*]] = shl nuw i40 [[TMP2]], 8
+// LEWIDTHNUM-NEXT:    [[BF_CLEAR:%.*]] = and i40 [[BF_LOAD1]], 255
+// LEWIDTHNUM-NEXT:    [[BF_SET:%.*]] = or i40 [[BF_SHL]], [[BF_CLEAR]]
+// LEWIDTHNUM-NEXT:    store volatile i40 [[BF_SET]], i40* [[TMP0]], align 1
+// LEWIDTHNUM-NEXT:    ret void
+//
+// BEWIDTHNUM-LABEL: @increment_b_st13(
+// BEWIDTHNUM-NEXT:  entry:
+// BEWIDTHNUM-NEXT:    [[TMP0:%.*]] = bitcast %struct.st13* [[S:%.*]] to i40*
+// BEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load volatile i40, i40* [[TMP0]], align 1
+// BEWIDTHNUM-NEXT:    [[BF_CAST:%.*]] = trunc i40 [[BF_LOAD]] to i32
+// BEWIDTHNUM-NEXT:    [[INC:%.*]] = add nsw i32 [[BF_CAST]], 1
+// BEWIDTHNUM-NEXT:    [[TMP1:%.*]] = zext i32 [[INC]] to i40
+// BEWIDTHNUM-NEXT:    [[BF_LOAD1:%.*]] = load volatile i40, i40* [[TMP0]], align 1
+// BEWIDTHNUM-NEXT:    [[BF_CLEAR:%.*]] = and i40 [[BF_LOAD1]], -4294967296
+// BEWIDTHNUM-NEXT:    [[BF_SET:%.*]] = or i40 [[BF_CLEAR]], [[TMP1]]
+// BEWIDTHNUM-NEXT:    store volatile i40 [[BF_SET]], i40* [[TMP0]], align 1
+// BEWIDTHNUM-NEXT:    ret void
+//
 void increment_b_st13(volatile struct st13 *s) {
   s->b++;
 }
@@ -879,7 +2839,6 @@ struct st14 {
 // LE-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST14:%.*]], %struct.st14* [[S:%.*]], i32 0, i32 0
 // LE-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[TMP0]], align 1
 // LE-NEXT:    [[INC:%.*]] = add i8 [[BF_LOAD]], 1
-// LENUMLOADS-NEXT:    [[BF_LOAD1:%.*]] = load volatile i8, i8* [[TMP0]], align 1
 // LE-NEXT:    store volatile i8 [[INC]], i8* [[TMP0]], align 1
 // LE-NEXT:    ret void
 //
@@ -888,10 +2847,61 @@ struct st14 {
 // BE-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST14:%.*]], %struct.st14* [[S:%.*]], i32 0, i32 0
 // BE-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[TMP0]], align 1
 // BE-NEXT:    [[INC:%.*]] = add i8 [[BF_LOAD]], 1
-// BENUMLOADS-NEXT:    [[BF_LOAD1:%.*]] = load volatile i8, i8* [[TMP0]], align 1
 // BE-NEXT:    store volatile i8 [[INC]], i8* [[TMP0]], align 1
 // BE-NEXT:    ret void
 //
+// LENUMLOADS-LABEL: @increment_a_st14(
+// LENUMLOADS-NEXT:  entry:
+// LENUMLOADS-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST14:%.*]], %struct.st14* [[S:%.*]], i32 0, i32 0
+// LENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[TMP0]], align 1
+// LENUMLOADS-NEXT:    [[INC:%.*]] = add i8 [[BF_LOAD]], 1
+// LENUMLOADS-NEXT:    [[BF_LOAD1:%.*]] = load volatile i8, i8* [[TMP0]], align 1
+// LENUMLOADS-NEXT:    store volatile i8 [[INC]], i8* [[TMP0]], align 1
+// LENUMLOADS-NEXT:    ret void
+//
+// BENUMLOADS-LABEL: @increment_a_st14(
+// BENUMLOADS-NEXT:  entry:
+// BENUMLOADS-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST14:%.*]], %struct.st14* [[S:%.*]], i32 0, i32 0
+// BENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[TMP0]], align 1
+// BENUMLOADS-NEXT:    [[INC:%.*]] = add i8 [[BF_LOAD]], 1
+// BENUMLOADS-NEXT:    [[BF_LOAD1:%.*]] = load volatile i8, i8* [[TMP0]], align 1
+// BENUMLOADS-NEXT:    store volatile i8 [[INC]], i8* [[TMP0]], align 1
+// BENUMLOADS-NEXT:    ret void
+//
+// LEWIDTH-LABEL: @increment_a_st14(
+// LEWIDTH-NEXT:  entry:
+// LEWIDTH-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST14:%.*]], %struct.st14* [[S:%.*]], i32 0, i32 0
+// LEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[TMP0]], align 1
+// LEWIDTH-NEXT:    [[INC:%.*]] = add i8 [[BF_LOAD]], 1
+// LEWIDTH-NEXT:    store volatile i8 [[INC]], i8* [[TMP0]], align 1
+// LEWIDTH-NEXT:    ret void
+//
+// BEWIDTH-LABEL: @increment_a_st14(
+// BEWIDTH-NEXT:  entry:
+// BEWIDTH-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST14:%.*]], %struct.st14* [[S:%.*]], i32 0, i32 0
+// BEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[TMP0]], align 1
+// BEWIDTH-NEXT:    [[INC:%.*]] = add i8 [[BF_LOAD]], 1
+// BEWIDTH-NEXT:    store volatile i8 [[INC]], i8* [[TMP0]], align 1
+// BEWIDTH-NEXT:    ret void
+//
+// LEWIDTHNUM-LABEL: @increment_a_st14(
+// LEWIDTHNUM-NEXT:  entry:
+// LEWIDTHNUM-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST14:%.*]], %struct.st14* [[S:%.*]], i32 0, i32 0
+// LEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[TMP0]], align 1
+// LEWIDTHNUM-NEXT:    [[INC:%.*]] = add i8 [[BF_LOAD]], 1
+// LEWIDTHNUM-NEXT:    [[BF_LOAD1:%.*]] = load volatile i8, i8* [[TMP0]], align 1
+// LEWIDTHNUM-NEXT:    store volatile i8 [[INC]], i8* [[TMP0]], align 1
+// LEWIDTHNUM-NEXT:    ret void
+//
+// BEWIDTHNUM-LABEL: @increment_a_st14(
+// BEWIDTHNUM-NEXT:  entry:
+// BEWIDTHNUM-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST14:%.*]], %struct.st14* [[S:%.*]], i32 0, i32 0
+// BEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[TMP0]], align 1
+// BEWIDTHNUM-NEXT:    [[INC:%.*]] = add i8 [[BF_LOAD]], 1
+// BEWIDTHNUM-NEXT:    [[BF_LOAD1:%.*]] = load volatile i8, i8* [[TMP0]], align 1
+// BEWIDTHNUM-NEXT:    store volatile i8 [[INC]], i8* [[TMP0]], align 1
+// BEWIDTHNUM-NEXT:    ret void
+//
 void increment_a_st14(volatile struct st14 *s) {
   s->a++;
 }
@@ -905,7 +2915,6 @@ struct st15 {
 // LE-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST15:%.*]], %struct.st15* [[S:%.*]], i32 0, i32 0
 // LE-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[TMP0]], align 1
 // LE-NEXT:    [[INC:%.*]] = add i8 [[BF_LOAD]], 1
-// LENUMLOADS-NEXT:    [[BF_LOAD1:%.*]] = load volatile i8, i8* [[TMP0]], align 1
 // LE-NEXT:    store volatile i8 [[INC]], i8* [[TMP0]], align 1
 // LE-NEXT:    ret void
 //
@@ -914,10 +2923,61 @@ struct st15 {
 // BE-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST15:%.*]], %struct.st15* [[S:%.*]], i32 0, i32 0
 // BE-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[TMP0]], align 1
 // BE-NEXT:    [[INC:%.*]] = add i8 [[BF_LOAD]], 1
-// BENUMLOADS-NEXT:    [[BF_LOAD1:%.*]] = load volatile i8, i8* [[TMP0]], align 1
 // BE-NEXT:    store volatile i8 [[INC]], i8* [[TMP0]], align 1
 // BE-NEXT:    ret void
 //
+// LENUMLOADS-LABEL: @increment_a_st15(
+// LENUMLOADS-NEXT:  entry:
+// LENUMLOADS-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST15:%.*]], %struct.st15* [[S:%.*]], i32 0, i32 0
+// LENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[TMP0]], align 1
+// LENUMLOADS-NEXT:    [[INC:%.*]] = add i8 [[BF_LOAD]], 1
+// LENUMLOADS-NEXT:    [[BF_LOAD1:%.*]] = load volatile i8, i8* [[TMP0]], align 1
+// LENUMLOADS-NEXT:    store volatile i8 [[INC]], i8* [[TMP0]], align 1
+// LENUMLOADS-NEXT:    ret void
+//
+// BENUMLOADS-LABEL: @increment_a_st15(
+// BENUMLOADS-NEXT:  entry:
+// BENUMLOADS-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST15:%.*]], %struct.st15* [[S:%.*]], i32 0, i32 0
+// BENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[TMP0]], align 1
+// BENUMLOADS-NEXT:    [[INC:%.*]] = add i8 [[BF_LOAD]], 1
+// BENUMLOADS-NEXT:    [[BF_LOAD1:%.*]] = load volatile i8, i8* [[TMP0]], align 1
+// BENUMLOADS-NEXT:    store volatile i8 [[INC]], i8* [[TMP0]], align 1
+// BENUMLOADS-NEXT:    ret void
+//
+// LEWIDTH-LABEL: @increment_a_st15(
+// LEWIDTH-NEXT:  entry:
+// LEWIDTH-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST15:%.*]], %struct.st15* [[S:%.*]], i32 0, i32 0
+// LEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[TMP0]], align 1
+// LEWIDTH-NEXT:    [[INC:%.*]] = add i8 [[BF_LOAD]], 1
+// LEWIDTH-NEXT:    store volatile i8 [[INC]], i8* [[TMP0]], align 1
+// LEWIDTH-NEXT:    ret void
+//
+// BEWIDTH-LABEL: @increment_a_st15(
+// BEWIDTH-NEXT:  entry:
+// BEWIDTH-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST15:%.*]], %struct.st15* [[S:%.*]], i32 0, i32 0
+// BEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[TMP0]], align 1
+// BEWIDTH-NEXT:    [[INC:%.*]] = add i8 [[BF_LOAD]], 1
+// BEWIDTH-NEXT:    store volatile i8 [[INC]], i8* [[TMP0]], align 1
+// BEWIDTH-NEXT:    ret void
+//
+// LEWIDTHNUM-LABEL: @increment_a_st15(
+// LEWIDTHNUM-NEXT:  entry:
+// LEWIDTHNUM-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST15:%.*]], %struct.st15* [[S:%.*]], i32 0, i32 0
+// LEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[TMP0]], align 1
+// LEWIDTHNUM-NEXT:    [[INC:%.*]] = add i8 [[BF_LOAD]], 1
+// LEWIDTHNUM-NEXT:    [[BF_LOAD1:%.*]] = load volatile i8, i8* [[TMP0]], align 1
+// LEWIDTHNUM-NEXT:    store volatile i8 [[INC]], i8* [[TMP0]], align 1
+// LEWIDTHNUM-NEXT:    ret void
+//
+// BEWIDTHNUM-LABEL: @increment_a_st15(
+// BEWIDTHNUM-NEXT:  entry:
+// BEWIDTHNUM-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ST15:%.*]], %struct.st15* [[S:%.*]], i32 0, i32 0
+// BEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[TMP0]], align 1
+// BEWIDTHNUM-NEXT:    [[INC:%.*]] = add i8 [[BF_LOAD]], 1
+// BEWIDTHNUM-NEXT:    [[BF_LOAD1:%.*]] = load volatile i8, i8* [[TMP0]], align 1
+// BEWIDTHNUM-NEXT:    store volatile i8 [[INC]], i8* [[TMP0]], align 1
+// BEWIDTHNUM-NEXT:    ret void
+//
 void increment_a_st15(volatile struct st15 *s) {
   s->a++;
 }
@@ -955,6 +3015,84 @@ struct st16 {
 // BE-NEXT:    store i64 [[BF_SET]], i64* [[TMP0]], align 4
 // BE-NEXT:    ret void
 //
+// LENUMLOADS-LABEL: @increment_a_st16(
+// LENUMLOADS-NEXT:  entry:
+// LENUMLOADS-NEXT:    [[TMP0:%.*]] = bitcast %struct.st16* [[S:%.*]] to i64*
+// LENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load i64, i64* [[TMP0]], align 4
+// LENUMLOADS-NEXT:    [[BF_CAST:%.*]] = trunc i64 [[BF_LOAD]] to i32
+// LENUMLOADS-NEXT:    [[INC:%.*]] = add nsw i32 [[BF_CAST]], 1
+// LENUMLOADS-NEXT:    [[TMP1:%.*]] = zext i32 [[INC]] to i64
+// LENUMLOADS-NEXT:    [[BF_CLEAR:%.*]] = and i64 [[BF_LOAD]], -4294967296
+// LENUMLOADS-NEXT:    [[BF_SET:%.*]] = or i64 [[BF_CLEAR]], [[TMP1]]
+// LENUMLOADS-NEXT:    store i64 [[BF_SET]], i64* [[TMP0]], align 4
+// LENUMLOADS-NEXT:    ret void
+//
+// BENUMLOADS-LABEL: @increment_a_st16(
+// BENUMLOADS-NEXT:  entry:
+// BENUMLOADS-NEXT:    [[TMP0:%.*]] = bitcast %struct.st16* [[S:%.*]] to i64*
+// BENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load i64, i64* [[TMP0]], align 4
+// BENUMLOADS-NEXT:    [[TMP1:%.*]] = lshr i64 [[BF_LOAD]], 32
+// BENUMLOADS-NEXT:    [[BF_CAST:%.*]] = trunc i64 [[TMP1]] to i32
+// BENUMLOADS-NEXT:    [[INC:%.*]] = add nsw i32 [[BF_CAST]], 1
+// BENUMLOADS-NEXT:    [[TMP2:%.*]] = zext i32 [[INC]] to i64
+// BENUMLOADS-NEXT:    [[BF_SHL:%.*]] = shl nuw i64 [[TMP2]], 32
+// BENUMLOADS-NEXT:    [[BF_CLEAR:%.*]] = and i64 [[BF_LOAD]], 4294967295
+// BENUMLOADS-NEXT:    [[BF_SET:%.*]] = or i64 [[BF_SHL]], [[BF_CLEAR]]
+// BENUMLOADS-NEXT:    store i64 [[BF_SET]], i64* [[TMP0]], align 4
+// BENUMLOADS-NEXT:    ret void
+//
+// LEWIDTH-LABEL: @increment_a_st16(
+// LEWIDTH-NEXT:  entry:
+// LEWIDTH-NEXT:    [[TMP0:%.*]] = bitcast %struct.st16* [[S:%.*]] to i64*
+// LEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load i64, i64* [[TMP0]], align 4
+// LEWIDTH-NEXT:    [[BF_CAST:%.*]] = trunc i64 [[BF_LOAD]] to i32
+// LEWIDTH-NEXT:    [[INC:%.*]] = add nsw i32 [[BF_CAST]], 1
+// LEWIDTH-NEXT:    [[TMP1:%.*]] = zext i32 [[INC]] to i64
+// LEWIDTH-NEXT:    [[BF_CLEAR:%.*]] = and i64 [[BF_LOAD]], -4294967296
+// LEWIDTH-NEXT:    [[BF_SET:%.*]] = or i64 [[BF_CLEAR]], [[TMP1]]
+// LEWIDTH-NEXT:    store i64 [[BF_SET]], i64* [[TMP0]], align 4
+// LEWIDTH-NEXT:    ret void
+//
+// BEWIDTH-LABEL: @increment_a_st16(
+// BEWIDTH-NEXT:  entry:
+// BEWIDTH-NEXT:    [[TMP0:%.*]] = bitcast %struct.st16* [[S:%.*]] to i64*
+// BEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load i64, i64* [[TMP0]], align 4
+// BEWIDTH-NEXT:    [[TMP1:%.*]] = lshr i64 [[BF_LOAD]], 32
+// BEWIDTH-NEXT:    [[BF_CAST:%.*]] = trunc i64 [[TMP1]] to i32
+// BEWIDTH-NEXT:    [[INC:%.*]] = add nsw i32 [[BF_CAST]], 1
+// BEWIDTH-NEXT:    [[TMP2:%.*]] = zext i32 [[INC]] to i64
+// BEWIDTH-NEXT:    [[BF_SHL:%.*]] = shl nuw i64 [[TMP2]], 32
+// BEWIDTH-NEXT:    [[BF_CLEAR:%.*]] = and i64 [[BF_LOAD]], 4294967295
+// BEWIDTH-NEXT:    [[BF_SET:%.*]] = or i64 [[BF_SHL]], [[BF_CLEAR]]
+// BEWIDTH-NEXT:    store i64 [[BF_SET]], i64* [[TMP0]], align 4
+// BEWIDTH-NEXT:    ret void
+//
+// LEWIDTHNUM-LABEL: @increment_a_st16(
+// LEWIDTHNUM-NEXT:  entry:
+// LEWIDTHNUM-NEXT:    [[TMP0:%.*]] = bitcast %struct.st16* [[S:%.*]] to i64*
+// LEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load i64, i64* [[TMP0]], align 4
+// LEWIDTHNUM-NEXT:    [[BF_CAST:%.*]] = trunc i64 [[BF_LOAD]] to i32
+// LEWIDTHNUM-NEXT:    [[INC:%.*]] = add nsw i32 [[BF_CAST]], 1
+// LEWIDTHNUM-NEXT:    [[TMP1:%.*]] = zext i32 [[INC]] to i64
+// LEWIDTHNUM-NEXT:    [[BF_CLEAR:%.*]] = and i64 [[BF_LOAD]], -4294967296
+// LEWIDTHNUM-NEXT:    [[BF_SET:%.*]] = or i64 [[BF_CLEAR]], [[TMP1]]
+// LEWIDTHNUM-NEXT:    store i64 [[BF_SET]], i64* [[TMP0]], align 4
+// LEWIDTHNUM-NEXT:    ret void
+//
+// BEWIDTHNUM-LABEL: @increment_a_st16(
+// BEWIDTHNUM-NEXT:  entry:
+// BEWIDTHNUM-NEXT:    [[TMP0:%.*]] = bitcast %struct.st16* [[S:%.*]] to i64*
+// BEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load i64, i64* [[TMP0]], align 4
+// BEWIDTHNUM-NEXT:    [[TMP1:%.*]] = lshr i64 [[BF_LOAD]], 32
+// BEWIDTHNUM-NEXT:    [[BF_CAST:%.*]] = trunc i64 [[TMP1]] to i32
+// BEWIDTHNUM-NEXT:    [[INC:%.*]] = add nsw i32 [[BF_CAST]], 1
+// BEWIDTHNUM-NEXT:    [[TMP2:%.*]] = zext i32 [[INC]] to i64
+// BEWIDTHNUM-NEXT:    [[BF_SHL:%.*]] = shl nuw i64 [[TMP2]], 32
+// BEWIDTHNUM-NEXT:    [[BF_CLEAR:%.*]] = and i64 [[BF_LOAD]], 4294967295
+// BEWIDTHNUM-NEXT:    [[BF_SET:%.*]] = or i64 [[BF_SHL]], [[BF_CLEAR]]
+// BEWIDTHNUM-NEXT:    store i64 [[BF_SET]], i64* [[TMP0]], align 4
+// BEWIDTHNUM-NEXT:    ret void
+//
 void increment_a_st16(struct st16 *s) {
   s->a++;
 }
@@ -987,6 +3125,90 @@ void increment_a_st16(struct st16 *s) {
 // BE-NEXT:    store i64 [[BF_SET]], i64* [[TMP0]], align 4
 // BE-NEXT:    ret void
 //
+// LENUMLOADS-LABEL: @increment_b_st16(
+// LENUMLOADS-NEXT:  entry:
+// LENUMLOADS-NEXT:    [[TMP0:%.*]] = bitcast %struct.st16* [[S:%.*]] to i64*
+// LENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load i64, i64* [[TMP0]], align 4
+// LENUMLOADS-NEXT:    [[TMP1:%.*]] = lshr i64 [[BF_LOAD]], 32
+// LENUMLOADS-NEXT:    [[TMP2:%.*]] = trunc i64 [[TMP1]] to i32
+// LENUMLOADS-NEXT:    [[INC:%.*]] = add i32 [[TMP2]], 1
+// LENUMLOADS-NEXT:    [[TMP3:%.*]] = and i32 [[INC]], 65535
+// LENUMLOADS-NEXT:    [[BF_VALUE:%.*]] = zext i32 [[TMP3]] to i64
+// LENUMLOADS-NEXT:    [[BF_SHL2:%.*]] = shl nuw nsw i64 [[BF_VALUE]], 32
+// LENUMLOADS-NEXT:    [[BF_CLEAR:%.*]] = and i64 [[BF_LOAD]], -281470681743361
+// LENUMLOADS-NEXT:    [[BF_SET:%.*]] = or i64 [[BF_SHL2]], [[BF_CLEAR]]
+// LENUMLOADS-NEXT:    store i64 [[BF_SET]], i64* [[TMP0]], align 4
+// LENUMLOADS-NEXT:    ret void
+//
+// BENUMLOADS-LABEL: @increment_b_st16(
+// BENUMLOADS-NEXT:  entry:
+// BENUMLOADS-NEXT:    [[TMP0:%.*]] = bitcast %struct.st16* [[S:%.*]] to i64*
+// BENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load i64, i64* [[TMP0]], align 4
+// BENUMLOADS-NEXT:    [[TMP1:%.*]] = trunc i64 [[BF_LOAD]] to i32
+// BENUMLOADS-NEXT:    [[INC4:%.*]] = add i32 [[TMP1]], 65536
+// BENUMLOADS-NEXT:    [[TMP2:%.*]] = and i32 [[INC4]], -65536
+// BENUMLOADS-NEXT:    [[BF_SHL2:%.*]] = zext i32 [[TMP2]] to i64
+// BENUMLOADS-NEXT:    [[BF_CLEAR:%.*]] = and i64 [[BF_LOAD]], -4294901761
+// BENUMLOADS-NEXT:    [[BF_SET:%.*]] = or i64 [[BF_CLEAR]], [[BF_SHL2]]
+// BENUMLOADS-NEXT:    store i64 [[BF_SET]], i64* [[TMP0]], align 4
+// BENUMLOADS-NEXT:    ret void
+//
+// LEWIDTH-LABEL: @increment_b_st16(
+// LEWIDTH-NEXT:  entry:
+// LEWIDTH-NEXT:    [[TMP0:%.*]] = bitcast %struct.st16* [[S:%.*]] to i64*
+// LEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load i64, i64* [[TMP0]], align 4
+// LEWIDTH-NEXT:    [[TMP1:%.*]] = lshr i64 [[BF_LOAD]], 32
+// LEWIDTH-NEXT:    [[TMP2:%.*]] = trunc i64 [[TMP1]] to i32
+// LEWIDTH-NEXT:    [[INC:%.*]] = add i32 [[TMP2]], 1
+// LEWIDTH-NEXT:    [[TMP3:%.*]] = and i32 [[INC]], 65535
+// LEWIDTH-NEXT:    [[BF_VALUE:%.*]] = zext i32 [[TMP3]] to i64
+// LEWIDTH-NEXT:    [[BF_SHL2:%.*]] = shl nuw nsw i64 [[BF_VALUE]], 32
+// LEWIDTH-NEXT:    [[BF_CLEAR:%.*]] = and i64 [[BF_LOAD]], -281470681743361
+// LEWIDTH-NEXT:    [[BF_SET:%.*]] = or i64 [[BF_SHL2]], [[BF_CLEAR]]
+// LEWIDTH-NEXT:    store i64 [[BF_SET]], i64* [[TMP0]], align 4
+// LEWIDTH-NEXT:    ret void
+//
+// BEWIDTH-LABEL: @increment_b_st16(
+// BEWIDTH-NEXT:  entry:
+// BEWIDTH-NEXT:    [[TMP0:%.*]] = bitcast %struct.st16* [[S:%.*]] to i64*
+// BEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load i64, i64* [[TMP0]], align 4
+// BEWIDTH-NEXT:    [[TMP1:%.*]] = trunc i64 [[BF_LOAD]] to i32
+// BEWIDTH-NEXT:    [[INC4:%.*]] = add i32 [[TMP1]], 65536
+// BEWIDTH-NEXT:    [[TMP2:%.*]] = and i32 [[INC4]], -65536
+// BEWIDTH-NEXT:    [[BF_SHL2:%.*]] = zext i32 [[TMP2]] to i64
+// BEWIDTH-NEXT:    [[BF_CLEAR:%.*]] = and i64 [[BF_LOAD]], -4294901761
+// BEWIDTH-NEXT:    [[BF_SET:%.*]] = or i64 [[BF_CLEAR]], [[BF_SHL2]]
+// BEWIDTH-NEXT:    store i64 [[BF_SET]], i64* [[TMP0]], align 4
+// BEWIDTH-NEXT:    ret void
+//
+// LEWIDTHNUM-LABEL: @increment_b_st16(
+// LEWIDTHNUM-NEXT:  entry:
+// LEWIDTHNUM-NEXT:    [[TMP0:%.*]] = bitcast %struct.st16* [[S:%.*]] to i64*
+// LEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load i64, i64* [[TMP0]], align 4
+// LEWIDTHNUM-NEXT:    [[TMP1:%.*]] = lshr i64 [[BF_LOAD]], 32
+// LEWIDTHNUM-NEXT:    [[TMP2:%.*]] = trunc i64 [[TMP1]] to i32
+// LEWIDTHNUM-NEXT:    [[INC:%.*]] = add i32 [[TMP2]], 1
+// LEWIDTHNUM-NEXT:    [[TMP3:%.*]] = and i32 [[INC]], 65535
+// LEWIDTHNUM-NEXT:    [[BF_VALUE:%.*]] = zext i32 [[TMP3]] to i64
+// LEWIDTHNUM-NEXT:    [[BF_SHL2:%.*]] = shl nuw nsw i64 [[BF_VALUE]], 32
+// LEWIDTHNUM-NEXT:    [[BF_CLEAR:%.*]] = and i64 [[BF_LOAD]], -281470681743361
+// LEWIDTHNUM-NEXT:    [[BF_SET:%.*]] = or i64 [[BF_SHL2]], [[BF_CLEAR]]
+// LEWIDTHNUM-NEXT:    store i64 [[BF_SET]], i64* [[TMP0]], align 4
+// LEWIDTHNUM-NEXT:    ret void
+//
+// BEWIDTHNUM-LABEL: @increment_b_st16(
+// BEWIDTHNUM-NEXT:  entry:
+// BEWIDTHNUM-NEXT:    [[TMP0:%.*]] = bitcast %struct.st16* [[S:%.*]] to i64*
+// BEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load i64, i64* [[TMP0]], align 4
+// BEWIDTHNUM-NEXT:    [[TMP1:%.*]] = trunc i64 [[BF_LOAD]] to i32
+// BEWIDTHNUM-NEXT:    [[INC4:%.*]] = add i32 [[TMP1]], 65536
+// BEWIDTHNUM-NEXT:    [[TMP2:%.*]] = and i32 [[INC4]], -65536
+// BEWIDTHNUM-NEXT:    [[BF_SHL2:%.*]] = zext i32 [[TMP2]] to i64
+// BEWIDTHNUM-NEXT:    [[BF_CLEAR:%.*]] = and i64 [[BF_LOAD]], -4294901761
+// BEWIDTHNUM-NEXT:    [[BF_SET:%.*]] = or i64 [[BF_CLEAR]], [[BF_SHL2]]
+// BEWIDTHNUM-NEXT:    store i64 [[BF_SET]], i64* [[TMP0]], align 4
+// BEWIDTHNUM-NEXT:    ret void
+//
 void increment_b_st16(struct st16 *s) {
   s->b++;
 }
@@ -1019,6 +3241,90 @@ void increment_b_st16(struct st16 *s) {
 // BE-NEXT:    store i64 [[BF_SET]], i64* [[TMP0]], align 4
 // BE-NEXT:    ret void
 //
+// LENUMLOADS-LABEL: @increment_c_st16(
+// LENUMLOADS-NEXT:  entry:
+// LENUMLOADS-NEXT:    [[C:%.*]] = getelementptr inbounds [[STRUCT_ST16:%.*]], %struct.st16* [[S:%.*]], i32 0, i32 1
+// LENUMLOADS-NEXT:    [[TMP0:%.*]] = bitcast i48* [[C]] to i64*
+// LENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load i64, i64* [[TMP0]], align 4
+// LENUMLOADS-NEXT:    [[BF_CAST:%.*]] = trunc i64 [[BF_LOAD]] to i32
+// LENUMLOADS-NEXT:    [[INC:%.*]] = add nsw i32 [[BF_CAST]], 1
+// LENUMLOADS-NEXT:    [[TMP1:%.*]] = zext i32 [[INC]] to i64
+// LENUMLOADS-NEXT:    [[BF_CLEAR:%.*]] = and i64 [[BF_LOAD]], -4294967296
+// LENUMLOADS-NEXT:    [[BF_SET:%.*]] = or i64 [[BF_CLEAR]], [[TMP1]]
+// LENUMLOADS-NEXT:    store i64 [[BF_SET]], i64* [[TMP0]], align 4
+// LENUMLOADS-NEXT:    ret void
+//
+// BENUMLOADS-LABEL: @increment_c_st16(
+// BENUMLOADS-NEXT:  entry:
+// BENUMLOADS-NEXT:    [[C:%.*]] = getelementptr inbounds [[STRUCT_ST16:%.*]], %struct.st16* [[S:%.*]], i32 0, i32 1
+// BENUMLOADS-NEXT:    [[TMP0:%.*]] = bitcast i48* [[C]] to i64*
+// BENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load i64, i64* [[TMP0]], align 4
+// BENUMLOADS-NEXT:    [[TMP1:%.*]] = lshr i64 [[BF_LOAD]], 32
+// BENUMLOADS-NEXT:    [[BF_CAST:%.*]] = trunc i64 [[TMP1]] to i32
+// BENUMLOADS-NEXT:    [[INC:%.*]] = add nsw i32 [[BF_CAST]], 1
+// BENUMLOADS-NEXT:    [[TMP2:%.*]] = zext i32 [[INC]] to i64
+// BENUMLOADS-NEXT:    [[BF_SHL:%.*]] = shl nuw i64 [[TMP2]], 32
+// BENUMLOADS-NEXT:    [[BF_CLEAR:%.*]] = and i64 [[BF_LOAD]], 4294967295
+// BENUMLOADS-NEXT:    [[BF_SET:%.*]] = or i64 [[BF_SHL]], [[BF_CLEAR]]
+// BENUMLOADS-NEXT:    store i64 [[BF_SET]], i64* [[TMP0]], align 4
+// BENUMLOADS-NEXT:    ret void
+//
+// LEWIDTH-LABEL: @increment_c_st16(
+// LEWIDTH-NEXT:  entry:
+// LEWIDTH-NEXT:    [[C:%.*]] = getelementptr inbounds [[STRUCT_ST16:%.*]], %struct.st16* [[S:%.*]], i32 0, i32 1
+// LEWIDTH-NEXT:    [[TMP0:%.*]] = bitcast i48* [[C]] to i64*
+// LEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load i64, i64* [[TMP0]], align 4
+// LEWIDTH-NEXT:    [[BF_CAST:%.*]] = trunc i64 [[BF_LOAD]] to i32
+// LEWIDTH-NEXT:    [[INC:%.*]] = add nsw i32 [[BF_CAST]], 1
+// LEWIDTH-NEXT:    [[TMP1:%.*]] = zext i32 [[INC]] to i64
+// LEWIDTH-NEXT:    [[BF_CLEAR:%.*]] = and i64 [[BF_LOAD]], -4294967296
+// LEWIDTH-NEXT:    [[BF_SET:%.*]] = or i64 [[BF_CLEAR]], [[TMP1]]
+// LEWIDTH-NEXT:    store i64 [[BF_SET]], i64* [[TMP0]], align 4
+// LEWIDTH-NEXT:    ret void
+//
+// BEWIDTH-LABEL: @increment_c_st16(
+// BEWIDTH-NEXT:  entry:
+// BEWIDTH-NEXT:    [[C:%.*]] = getelementptr inbounds [[STRUCT_ST16:%.*]], %struct.st16* [[S:%.*]], i32 0, i32 1
+// BEWIDTH-NEXT:    [[TMP0:%.*]] = bitcast i48* [[C]] to i64*
+// BEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load i64, i64* [[TMP0]], align 4
+// BEWIDTH-NEXT:    [[TMP1:%.*]] = lshr i64 [[BF_LOAD]], 32
+// BEWIDTH-NEXT:    [[BF_CAST:%.*]] = trunc i64 [[TMP1]] to i32
+// BEWIDTH-NEXT:    [[INC:%.*]] = add nsw i32 [[BF_CAST]], 1
+// BEWIDTH-NEXT:    [[TMP2:%.*]] = zext i32 [[INC]] to i64
+// BEWIDTH-NEXT:    [[BF_SHL:%.*]] = shl nuw i64 [[TMP2]], 32
+// BEWIDTH-NEXT:    [[BF_CLEAR:%.*]] = and i64 [[BF_LOAD]], 4294967295
+// BEWIDTH-NEXT:    [[BF_SET:%.*]] = or i64 [[BF_SHL]], [[BF_CLEAR]]
+// BEWIDTH-NEXT:    store i64 [[BF_SET]], i64* [[TMP0]], align 4
+// BEWIDTH-NEXT:    ret void
+//
+// LEWIDTHNUM-LABEL: @increment_c_st16(
+// LEWIDTHNUM-NEXT:  entry:
+// LEWIDTHNUM-NEXT:    [[C:%.*]] = getelementptr inbounds [[STRUCT_ST16:%.*]], %struct.st16* [[S:%.*]], i32 0, i32 1
+// LEWIDTHNUM-NEXT:    [[TMP0:%.*]] = bitcast i48* [[C]] to i64*
+// LEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load i64, i64* [[TMP0]], align 4
+// LEWIDTHNUM-NEXT:    [[BF_CAST:%.*]] = trunc i64 [[BF_LOAD]] to i32
+// LEWIDTHNUM-NEXT:    [[INC:%.*]] = add nsw i32 [[BF_CAST]], 1
+// LEWIDTHNUM-NEXT:    [[TMP1:%.*]] = zext i32 [[INC]] to i64
+// LEWIDTHNUM-NEXT:    [[BF_CLEAR:%.*]] = and i64 [[BF_LOAD]], -4294967296
+// LEWIDTHNUM-NEXT:    [[BF_SET:%.*]] = or i64 [[BF_CLEAR]], [[TMP1]]
+// LEWIDTHNUM-NEXT:    store i64 [[BF_SET]], i64* [[TMP0]], align 4
+// LEWIDTHNUM-NEXT:    ret void
+//
+// BEWIDTHNUM-LABEL: @increment_c_st16(
+// BEWIDTHNUM-NEXT:  entry:
+// BEWIDTHNUM-NEXT:    [[C:%.*]] = getelementptr inbounds [[STRUCT_ST16:%.*]], %struct.st16* [[S:%.*]], i32 0, i32 1
+// BEWIDTHNUM-NEXT:    [[TMP0:%.*]] = bitcast i48* [[C]] to i64*
+// BEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load i64, i64* [[TMP0]], align 4
+// BEWIDTHNUM-NEXT:    [[TMP1:%.*]] = lshr i64 [[BF_LOAD]], 32
+// BEWIDTHNUM-NEXT:    [[BF_CAST:%.*]] = trunc i64 [[TMP1]] to i32
+// BEWIDTHNUM-NEXT:    [[INC:%.*]] = add nsw i32 [[BF_CAST]], 1
+// BEWIDTHNUM-NEXT:    [[TMP2:%.*]] = zext i32 [[INC]] to i64
+// BEWIDTHNUM-NEXT:    [[BF_SHL:%.*]] = shl nuw i64 [[TMP2]], 32
+// BEWIDTHNUM-NEXT:    [[BF_CLEAR:%.*]] = and i64 [[BF_LOAD]], 4294967295
+// BEWIDTHNUM-NEXT:    [[BF_SET:%.*]] = or i64 [[BF_SHL]], [[BF_CLEAR]]
+// BEWIDTHNUM-NEXT:    store i64 [[BF_SET]], i64* [[TMP0]], align 4
+// BEWIDTHNUM-NEXT:    ret void
+//
 void increment_c_st16(struct st16 *s) {
   s->c++;
 }
@@ -1053,6 +3359,96 @@ void increment_c_st16(struct st16 *s) {
 // BE-NEXT:    store i64 [[BF_SET]], i64* [[TMP0]], align 4
 // BE-NEXT:    ret void
 //
+// LENUMLOADS-LABEL: @increment_d_st16(
+// LENUMLOADS-NEXT:  entry:
+// LENUMLOADS-NEXT:    [[D:%.*]] = getelementptr inbounds [[STRUCT_ST16:%.*]], %struct.st16* [[S:%.*]], i32 0, i32 1
+// LENUMLOADS-NEXT:    [[TMP0:%.*]] = bitcast i48* [[D]] to i64*
+// LENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load i64, i64* [[TMP0]], align 4
+// LENUMLOADS-NEXT:    [[TMP1:%.*]] = lshr i64 [[BF_LOAD]], 32
+// LENUMLOADS-NEXT:    [[TMP2:%.*]] = trunc i64 [[TMP1]] to i32
+// LENUMLOADS-NEXT:    [[INC:%.*]] = add i32 [[TMP2]], 1
+// LENUMLOADS-NEXT:    [[TMP3:%.*]] = and i32 [[INC]], 65535
+// LENUMLOADS-NEXT:    [[BF_VALUE:%.*]] = zext i32 [[TMP3]] to i64
+// LENUMLOADS-NEXT:    [[BF_SHL2:%.*]] = shl nuw nsw i64 [[BF_VALUE]], 32
+// LENUMLOADS-NEXT:    [[BF_CLEAR:%.*]] = and i64 [[BF_LOAD]], -281470681743361
+// LENUMLOADS-NEXT:    [[BF_SET:%.*]] = or i64 [[BF_SHL2]], [[BF_CLEAR]]
+// LENUMLOADS-NEXT:    store i64 [[BF_SET]], i64* [[TMP0]], align 4
+// LENUMLOADS-NEXT:    ret void
+//
+// BENUMLOADS-LABEL: @increment_d_st16(
+// BENUMLOADS-NEXT:  entry:
+// BENUMLOADS-NEXT:    [[D:%.*]] = getelementptr inbounds [[STRUCT_ST16:%.*]], %struct.st16* [[S:%.*]], i32 0, i32 1
+// BENUMLOADS-NEXT:    [[TMP0:%.*]] = bitcast i48* [[D]] to i64*
+// BENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load i64, i64* [[TMP0]], align 4
+// BENUMLOADS-NEXT:    [[TMP1:%.*]] = trunc i64 [[BF_LOAD]] to i32
+// BENUMLOADS-NEXT:    [[INC4:%.*]] = add i32 [[TMP1]], 65536
+// BENUMLOADS-NEXT:    [[TMP2:%.*]] = and i32 [[INC4]], -65536
+// BENUMLOADS-NEXT:    [[BF_SHL2:%.*]] = zext i32 [[TMP2]] to i64
+// BENUMLOADS-NEXT:    [[BF_CLEAR:%.*]] = and i64 [[BF_LOAD]], -4294901761
+// BENUMLOADS-NEXT:    [[BF_SET:%.*]] = or i64 [[BF_CLEAR]], [[BF_SHL2]]
+// BENUMLOADS-NEXT:    store i64 [[BF_SET]], i64* [[TMP0]], align 4
+// BENUMLOADS-NEXT:    ret void
+//
+// LEWIDTH-LABEL: @increment_d_st16(
+// LEWIDTH-NEXT:  entry:
+// LEWIDTH-NEXT:    [[D:%.*]] = getelementptr inbounds [[STRUCT_ST16:%.*]], %struct.st16* [[S:%.*]], i32 0, i32 1
+// LEWIDTH-NEXT:    [[TMP0:%.*]] = bitcast i48* [[D]] to i64*
+// LEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load i64, i64* [[TMP0]], align 4
+// LEWIDTH-NEXT:    [[TMP1:%.*]] = lshr i64 [[BF_LOAD]], 32
+// LEWIDTH-NEXT:    [[TMP2:%.*]] = trunc i64 [[TMP1]] to i32
+// LEWIDTH-NEXT:    [[INC:%.*]] = add i32 [[TMP2]], 1
+// LEWIDTH-NEXT:    [[TMP3:%.*]] = and i32 [[INC]], 65535
+// LEWIDTH-NEXT:    [[BF_VALUE:%.*]] = zext i32 [[TMP3]] to i64
+// LEWIDTH-NEXT:    [[BF_SHL2:%.*]] = shl nuw nsw i64 [[BF_VALUE]], 32
+// LEWIDTH-NEXT:    [[BF_CLEAR:%.*]] = and i64 [[BF_LOAD]], -281470681743361
+// LEWIDTH-NEXT:    [[BF_SET:%.*]] = or i64 [[BF_SHL2]], [[BF_CLEAR]]
+// LEWIDTH-NEXT:    store i64 [[BF_SET]], i64* [[TMP0]], align 4
+// LEWIDTH-NEXT:    ret void
+//
+// BEWIDTH-LABEL: @increment_d_st16(
+// BEWIDTH-NEXT:  entry:
+// BEWIDTH-NEXT:    [[D:%.*]] = getelementptr inbounds [[STRUCT_ST16:%.*]], %struct.st16* [[S:%.*]], i32 0, i32 1
+// BEWIDTH-NEXT:    [[TMP0:%.*]] = bitcast i48* [[D]] to i64*
+// BEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load i64, i64* [[TMP0]], align 4
+// BEWIDTH-NEXT:    [[TMP1:%.*]] = trunc i64 [[BF_LOAD]] to i32
+// BEWIDTH-NEXT:    [[INC4:%.*]] = add i32 [[TMP1]], 65536
+// BEWIDTH-NEXT:    [[TMP2:%.*]] = and i32 [[INC4]], -65536
+// BEWIDTH-NEXT:    [[BF_SHL2:%.*]] = zext i32 [[TMP2]] to i64
+// BEWIDTH-NEXT:    [[BF_CLEAR:%.*]] = and i64 [[BF_LOAD]], -4294901761
+// BEWIDTH-NEXT:    [[BF_SET:%.*]] = or i64 [[BF_CLEAR]], [[BF_SHL2]]
+// BEWIDTH-NEXT:    store i64 [[BF_SET]], i64* [[TMP0]], align 4
+// BEWIDTH-NEXT:    ret void
+//
+// LEWIDTHNUM-LABEL: @increment_d_st16(
+// LEWIDTHNUM-NEXT:  entry:
+// LEWIDTHNUM-NEXT:    [[D:%.*]] = getelementptr inbounds [[STRUCT_ST16:%.*]], %struct.st16* [[S:%.*]], i32 0, i32 1
+// LEWIDTHNUM-NEXT:    [[TMP0:%.*]] = bitcast i48* [[D]] to i64*
+// LEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load i64, i64* [[TMP0]], align 4
+// LEWIDTHNUM-NEXT:    [[TMP1:%.*]] = lshr i64 [[BF_LOAD]], 32
+// LEWIDTHNUM-NEXT:    [[TMP2:%.*]] = trunc i64 [[TMP1]] to i32
+// LEWIDTHNUM-NEXT:    [[INC:%.*]] = add i32 [[TMP2]], 1
+// LEWIDTHNUM-NEXT:    [[TMP3:%.*]] = and i32 [[INC]], 65535
+// LEWIDTHNUM-NEXT:    [[BF_VALUE:%.*]] = zext i32 [[TMP3]] to i64
+// LEWIDTHNUM-NEXT:    [[BF_SHL2:%.*]] = shl nuw nsw i64 [[BF_VALUE]], 32
+// LEWIDTHNUM-NEXT:    [[BF_CLEAR:%.*]] = and i64 [[BF_LOAD]], -281470681743361
+// LEWIDTHNUM-NEXT:    [[BF_SET:%.*]] = or i64 [[BF_SHL2]], [[BF_CLEAR]]
+// LEWIDTHNUM-NEXT:    store i64 [[BF_SET]], i64* [[TMP0]], align 4
+// LEWIDTHNUM-NEXT:    ret void
+//
+// BEWIDTHNUM-LABEL: @increment_d_st16(
+// BEWIDTHNUM-NEXT:  entry:
+// BEWIDTHNUM-NEXT:    [[D:%.*]] = getelementptr inbounds [[STRUCT_ST16:%.*]], %struct.st16* [[S:%.*]], i32 0, i32 1
+// BEWIDTHNUM-NEXT:    [[TMP0:%.*]] = bitcast i48* [[D]] to i64*
+// BEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load i64, i64* [[TMP0]], align 4
+// BEWIDTHNUM-NEXT:    [[TMP1:%.*]] = trunc i64 [[BF_LOAD]] to i32
+// BEWIDTHNUM-NEXT:    [[INC4:%.*]] = add i32 [[TMP1]], 65536
+// BEWIDTHNUM-NEXT:    [[TMP2:%.*]] = and i32 [[INC4]], -65536
+// BEWIDTHNUM-NEXT:    [[BF_SHL2:%.*]] = zext i32 [[TMP2]] to i64
+// BEWIDTHNUM-NEXT:    [[BF_CLEAR:%.*]] = and i64 [[BF_LOAD]], -4294901761
+// BEWIDTHNUM-NEXT:    [[BF_SET:%.*]] = or i64 [[BF_CLEAR]], [[BF_SHL2]]
+// BEWIDTHNUM-NEXT:    store i64 [[BF_SET]], i64* [[TMP0]], align 4
+// BEWIDTHNUM-NEXT:    ret void
+//
 void increment_d_st16(struct st16 *s) {
   s->d++;
 }
@@ -1085,6 +3481,68 @@ void increment_d_st16(struct st16 *s) {
 // BE-NEXT:    store volatile i64 [[BF_SET]], i64* [[TMP0]], align 4
 // BE-NEXT:    ret void
 //
+// LENUMLOADS-LABEL: @increment_v_a_st16(
+// LENUMLOADS-NEXT:  entry:
+// LENUMLOADS-NEXT:    [[TMP0:%.*]] = bitcast %struct.st16* [[S:%.*]] to i64*
+// LENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i64, i64* [[TMP0]], align 4
+// LENUMLOADS-NEXT:    [[BF_CAST:%.*]] = trunc i64 [[BF_LOAD]] to i32
+// LENUMLOADS-NEXT:    [[INC:%.*]] = add nsw i32 [[BF_CAST]], 1
+// LENUMLOADS-NEXT:    [[TMP1:%.*]] = zext i32 [[INC]] to i64
+// LENUMLOADS-NEXT:    [[BF_LOAD1:%.*]] = load volatile i64, i64* [[TMP0]], align 4
+// LENUMLOADS-NEXT:    [[BF_CLEAR:%.*]] = and i64 [[BF_LOAD1]], -4294967296
+// LENUMLOADS-NEXT:    [[BF_SET:%.*]] = or i64 [[BF_CLEAR]], [[TMP1]]
+// LENUMLOADS-NEXT:    store volatile i64 [[BF_SET]], i64* [[TMP0]], align 4
+// LENUMLOADS-NEXT:    ret void
+//
+// BENUMLOADS-LABEL: @increment_v_a_st16(
+// BENUMLOADS-NEXT:  entry:
+// BENUMLOADS-NEXT:    [[TMP0:%.*]] = bitcast %struct.st16* [[S:%.*]] to i64*
+// BENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i64, i64* [[TMP0]], align 4
+// BENUMLOADS-NEXT:    [[TMP1:%.*]] = lshr i64 [[BF_LOAD]], 32
+// BENUMLOADS-NEXT:    [[BF_CAST:%.*]] = trunc i64 [[TMP1]] to i32
+// BENUMLOADS-NEXT:    [[INC:%.*]] = add nsw i32 [[BF_CAST]], 1
+// BENUMLOADS-NEXT:    [[TMP2:%.*]] = zext i32 [[INC]] to i64
+// BENUMLOADS-NEXT:    [[BF_LOAD1:%.*]] = load volatile i64, i64* [[TMP0]], align 4
+// BENUMLOADS-NEXT:    [[BF_SHL:%.*]] = shl nuw i64 [[TMP2]], 32
+// BENUMLOADS-NEXT:    [[BF_CLEAR:%.*]] = and i64 [[BF_LOAD1]], 4294967295
+// BENUMLOADS-NEXT:    [[BF_SET:%.*]] = or i64 [[BF_SHL]], [[BF_CLEAR]]
+// BENUMLOADS-NEXT:    store volatile i64 [[BF_SET]], i64* [[TMP0]], align 4
+// BENUMLOADS-NEXT:    ret void
+//
+// LEWIDTH-LABEL: @increment_v_a_st16(
+// LEWIDTH-NEXT:  entry:
+// LEWIDTH-NEXT:    [[TMP0:%.*]] = bitcast %struct.st16* [[S:%.*]] to i32*
+// LEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// LEWIDTH-NEXT:    [[INC:%.*]] = add nsw i32 [[BF_LOAD]], 1
+// LEWIDTH-NEXT:    store volatile i32 [[INC]], i32* [[TMP0]], align 4
+// LEWIDTH-NEXT:    ret void
+//
+// BEWIDTH-LABEL: @increment_v_a_st16(
+// BEWIDTH-NEXT:  entry:
+// BEWIDTH-NEXT:    [[TMP0:%.*]] = bitcast %struct.st16* [[S:%.*]] to i32*
+// BEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// BEWIDTH-NEXT:    [[INC:%.*]] = add nsw i32 [[BF_LOAD]], 1
+// BEWIDTH-NEXT:    store volatile i32 [[INC]], i32* [[TMP0]], align 4
+// BEWIDTH-NEXT:    ret void
+//
+// LEWIDTHNUM-LABEL: @increment_v_a_st16(
+// LEWIDTHNUM-NEXT:  entry:
+// LEWIDTHNUM-NEXT:    [[TMP0:%.*]] = bitcast %struct.st16* [[S:%.*]] to i32*
+// LEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// LEWIDTHNUM-NEXT:    [[INC:%.*]] = add nsw i32 [[BF_LOAD]], 1
+// LEWIDTHNUM-NEXT:    [[BF_LOAD1:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// LEWIDTHNUM-NEXT:    store volatile i32 [[INC]], i32* [[TMP0]], align 4
+// LEWIDTHNUM-NEXT:    ret void
+//
+// BEWIDTHNUM-LABEL: @increment_v_a_st16(
+// BEWIDTHNUM-NEXT:  entry:
+// BEWIDTHNUM-NEXT:    [[TMP0:%.*]] = bitcast %struct.st16* [[S:%.*]] to i32*
+// BEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// BEWIDTHNUM-NEXT:    [[INC:%.*]] = add nsw i32 [[BF_LOAD]], 1
+// BEWIDTHNUM-NEXT:    [[BF_LOAD1:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// BEWIDTHNUM-NEXT:    store volatile i32 [[INC]], i32* [[TMP0]], align 4
+// BEWIDTHNUM-NEXT:    ret void
+//
 void increment_v_a_st16(volatile struct st16 *s) {
   s->a++;
 }
@@ -1119,6 +3577,88 @@ void increment_v_a_st16(volatile struct st16 *s) {
 // BE-NEXT:    store volatile i64 [[BF_SET]], i64* [[TMP0]], align 4
 // BE-NEXT:    ret void
 //
+// LENUMLOADS-LABEL: @increment_v_b_st16(
+// LENUMLOADS-NEXT:  entry:
+// LENUMLOADS-NEXT:    [[TMP0:%.*]] = bitcast %struct.st16* [[S:%.*]] to i64*
+// LENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i64, i64* [[TMP0]], align 4
+// LENUMLOADS-NEXT:    [[TMP1:%.*]] = lshr i64 [[BF_LOAD]], 32
+// LENUMLOADS-NEXT:    [[TMP2:%.*]] = trunc i64 [[TMP1]] to i32
+// LENUMLOADS-NEXT:    [[INC:%.*]] = add i32 [[TMP2]], 1
+// LENUMLOADS-NEXT:    [[BF_LOAD1:%.*]] = load volatile i64, i64* [[TMP0]], align 4
+// LENUMLOADS-NEXT:    [[TMP3:%.*]] = and i32 [[INC]], 65535
+// LENUMLOADS-NEXT:    [[BF_VALUE:%.*]] = zext i32 [[TMP3]] to i64
+// LENUMLOADS-NEXT:    [[BF_SHL2:%.*]] = shl nuw nsw i64 [[BF_VALUE]], 32
+// LENUMLOADS-NEXT:    [[BF_CLEAR:%.*]] = and i64 [[BF_LOAD1]], -281470681743361
+// LENUMLOADS-NEXT:    [[BF_SET:%.*]] = or i64 [[BF_SHL2]], [[BF_CLEAR]]
+// LENUMLOADS-NEXT:    store volatile i64 [[BF_SET]], i64* [[TMP0]], align 4
+// LENUMLOADS-NEXT:    ret void
+//
+// BENUMLOADS-LABEL: @increment_v_b_st16(
+// BENUMLOADS-NEXT:  entry:
+// BENUMLOADS-NEXT:    [[TMP0:%.*]] = bitcast %struct.st16* [[S:%.*]] to i64*
+// BENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i64, i64* [[TMP0]], align 4
+// BENUMLOADS-NEXT:    [[BF_LOAD1:%.*]] = load volatile i64, i64* [[TMP0]], align 4
+// BENUMLOADS-NEXT:    [[TMP1:%.*]] = trunc i64 [[BF_LOAD]] to i32
+// BENUMLOADS-NEXT:    [[INC4:%.*]] = add i32 [[TMP1]], 65536
+// BENUMLOADS-NEXT:    [[TMP2:%.*]] = and i32 [[INC4]], -65536
+// BENUMLOADS-NEXT:    [[BF_SHL2:%.*]] = zext i32 [[TMP2]] to i64
+// BENUMLOADS-NEXT:    [[BF_CLEAR:%.*]] = and i64 [[BF_LOAD1]], -4294901761
+// BENUMLOADS-NEXT:    [[BF_SET:%.*]] = or i64 [[BF_CLEAR]], [[BF_SHL2]]
+// BENUMLOADS-NEXT:    store volatile i64 [[BF_SET]], i64* [[TMP0]], align 4
+// BENUMLOADS-NEXT:    ret void
+//
+// LEWIDTH-LABEL: @increment_v_b_st16(
+// LEWIDTH-NEXT:  entry:
+// LEWIDTH-NEXT:    [[TMP0:%.*]] = bitcast %struct.st16* [[S:%.*]] to i32*
+// LEWIDTH-NEXT:    [[TMP1:%.*]] = getelementptr inbounds i32, i32* [[TMP0]], i32 1
+// LEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load volatile i32, i32* [[TMP1]], align 4
+// LEWIDTH-NEXT:    [[INC:%.*]] = add i32 [[BF_LOAD]], 1
+// LEWIDTH-NEXT:    [[BF_LOAD1:%.*]] = load volatile i32, i32* [[TMP1]], align 4
+// LEWIDTH-NEXT:    [[BF_VALUE:%.*]] = and i32 [[INC]], 65535
+// LEWIDTH-NEXT:    [[BF_CLEAR:%.*]] = and i32 [[BF_LOAD1]], -65536
+// LEWIDTH-NEXT:    [[BF_SET:%.*]] = or i32 [[BF_CLEAR]], [[BF_VALUE]]
+// LEWIDTH-NEXT:    store volatile i32 [[BF_SET]], i32* [[TMP1]], align 4
+// LEWIDTH-NEXT:    ret void
+//
+// BEWIDTH-LABEL: @increment_v_b_st16(
+// BEWIDTH-NEXT:  entry:
+// BEWIDTH-NEXT:    [[TMP0:%.*]] = bitcast %struct.st16* [[S:%.*]] to i32*
+// BEWIDTH-NEXT:    [[TMP1:%.*]] = getelementptr inbounds i32, i32* [[TMP0]], i32 1
+// BEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load volatile i32, i32* [[TMP1]], align 4
+// BEWIDTH-NEXT:    [[BF_LOAD1:%.*]] = load volatile i32, i32* [[TMP1]], align 4
+// BEWIDTH-NEXT:    [[TMP2:%.*]] = add i32 [[BF_LOAD]], 65536
+// BEWIDTH-NEXT:    [[BF_SHL:%.*]] = and i32 [[TMP2]], -65536
+// BEWIDTH-NEXT:    [[BF_CLEAR:%.*]] = and i32 [[BF_LOAD1]], 65535
+// BEWIDTH-NEXT:    [[BF_SET:%.*]] = or i32 [[BF_CLEAR]], [[BF_SHL]]
+// BEWIDTH-NEXT:    store volatile i32 [[BF_SET]], i32* [[TMP1]], align 4
+// BEWIDTH-NEXT:    ret void
+//
+// LEWIDTHNUM-LABEL: @increment_v_b_st16(
+// LEWIDTHNUM-NEXT:  entry:
+// LEWIDTHNUM-NEXT:    [[TMP0:%.*]] = bitcast %struct.st16* [[S:%.*]] to i32*
+// LEWIDTHNUM-NEXT:    [[TMP1:%.*]] = getelementptr inbounds i32, i32* [[TMP0]], i32 1
+// LEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load volatile i32, i32* [[TMP1]], align 4
+// LEWIDTHNUM-NEXT:    [[INC:%.*]] = add i32 [[BF_LOAD]], 1
+// LEWIDTHNUM-NEXT:    [[BF_LOAD1:%.*]] = load volatile i32, i32* [[TMP1]], align 4
+// LEWIDTHNUM-NEXT:    [[BF_VALUE:%.*]] = and i32 [[INC]], 65535
+// LEWIDTHNUM-NEXT:    [[BF_CLEAR:%.*]] = and i32 [[BF_LOAD1]], -65536
+// LEWIDTHNUM-NEXT:    [[BF_SET:%.*]] = or i32 [[BF_CLEAR]], [[BF_VALUE]]
+// LEWIDTHNUM-NEXT:    store volatile i32 [[BF_SET]], i32* [[TMP1]], align 4
+// LEWIDTHNUM-NEXT:    ret void
+//
+// BEWIDTHNUM-LABEL: @increment_v_b_st16(
+// BEWIDTHNUM-NEXT:  entry:
+// BEWIDTHNUM-NEXT:    [[TMP0:%.*]] = bitcast %struct.st16* [[S:%.*]] to i32*
+// BEWIDTHNUM-NEXT:    [[TMP1:%.*]] = getelementptr inbounds i32, i32* [[TMP0]], i32 1
+// BEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load volatile i32, i32* [[TMP1]], align 4
+// BEWIDTHNUM-NEXT:    [[BF_LOAD1:%.*]] = load volatile i32, i32* [[TMP1]], align 4
+// BEWIDTHNUM-NEXT:    [[TMP2:%.*]] = add i32 [[BF_LOAD]], 65536
+// BEWIDTHNUM-NEXT:    [[BF_SHL:%.*]] = and i32 [[TMP2]], -65536
+// BEWIDTHNUM-NEXT:    [[BF_CLEAR:%.*]] = and i32 [[BF_LOAD1]], 65535
+// BEWIDTHNUM-NEXT:    [[BF_SET:%.*]] = or i32 [[BF_CLEAR]], [[BF_SHL]]
+// BEWIDTHNUM-NEXT:    store volatile i32 [[BF_SET]], i32* [[TMP1]], align 4
+// BEWIDTHNUM-NEXT:    ret void
+//
 void increment_v_b_st16(volatile struct st16 *s) {
   s->b++;
 }
@@ -1153,6 +3693,74 @@ void increment_v_b_st16(volatile struct st16 *s) {
 // BE-NEXT:    store volatile i64 [[BF_SET]], i64* [[TMP0]], align 4
 // BE-NEXT:    ret void
 //
+// LENUMLOADS-LABEL: @increment_v_c_st16(
+// LENUMLOADS-NEXT:  entry:
+// LENUMLOADS-NEXT:    [[C:%.*]] = getelementptr inbounds [[STRUCT_ST16:%.*]], %struct.st16* [[S:%.*]], i32 0, i32 1
+// LENUMLOADS-NEXT:    [[TMP0:%.*]] = bitcast i48* [[C]] to i64*
+// LENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i64, i64* [[TMP0]], align 4
+// LENUMLOADS-NEXT:    [[BF_CAST:%.*]] = trunc i64 [[BF_LOAD]] to i32
+// LENUMLOADS-NEXT:    [[INC:%.*]] = add nsw i32 [[BF_CAST]], 1
+// LENUMLOADS-NEXT:    [[TMP1:%.*]] = zext i32 [[INC]] to i64
+// LENUMLOADS-NEXT:    [[BF_LOAD1:%.*]] = load volatile i64, i64* [[TMP0]], align 4
+// LENUMLOADS-NEXT:    [[BF_CLEAR:%.*]] = and i64 [[BF_LOAD1]], -4294967296
+// LENUMLOADS-NEXT:    [[BF_SET:%.*]] = or i64 [[BF_CLEAR]], [[TMP1]]
+// LENUMLOADS-NEXT:    store volatile i64 [[BF_SET]], i64* [[TMP0]], align 4
+// LENUMLOADS-NEXT:    ret void
+//
+// BENUMLOADS-LABEL: @increment_v_c_st16(
+// BENUMLOADS-NEXT:  entry:
+// BENUMLOADS-NEXT:    [[C:%.*]] = getelementptr inbounds [[STRUCT_ST16:%.*]], %struct.st16* [[S:%.*]], i32 0, i32 1
+// BENUMLOADS-NEXT:    [[TMP0:%.*]] = bitcast i48* [[C]] to i64*
+// BENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i64, i64* [[TMP0]], align 4
+// BENUMLOADS-NEXT:    [[TMP1:%.*]] = lshr i64 [[BF_LOAD]], 32
+// BENUMLOADS-NEXT:    [[BF_CAST:%.*]] = trunc i64 [[TMP1]] to i32
+// BENUMLOADS-NEXT:    [[INC:%.*]] = add nsw i32 [[BF_CAST]], 1
+// BENUMLOADS-NEXT:    [[TMP2:%.*]] = zext i32 [[INC]] to i64
+// BENUMLOADS-NEXT:    [[BF_LOAD1:%.*]] = load volatile i64, i64* [[TMP0]], align 4
+// BENUMLOADS-NEXT:    [[BF_SHL:%.*]] = shl nuw i64 [[TMP2]], 32
+// BENUMLOADS-NEXT:    [[BF_CLEAR:%.*]] = and i64 [[BF_LOAD1]], 4294967295
+// BENUMLOADS-NEXT:    [[BF_SET:%.*]] = or i64 [[BF_SHL]], [[BF_CLEAR]]
+// BENUMLOADS-NEXT:    store volatile i64 [[BF_SET]], i64* [[TMP0]], align 4
+// BENUMLOADS-NEXT:    ret void
+//
+// LEWIDTH-LABEL: @increment_v_c_st16(
+// LEWIDTH-NEXT:  entry:
+// LEWIDTH-NEXT:    [[TMP0:%.*]] = getelementptr inbounds [[STRUCT_ST16:%.*]], %struct.st16* [[S:%.*]], i32 0, i32 1
+// LEWIDTH-NEXT:    [[TMP1:%.*]] = bitcast i48* [[TMP0]] to i32*
+// LEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load volatile i32, i32* [[TMP1]], align 4
+// LEWIDTH-NEXT:    [[INC:%.*]] = add nsw i32 [[BF_LOAD]], 1
+// LEWIDTH-NEXT:    store volatile i32 [[INC]], i32* [[TMP1]], align 4
+// LEWIDTH-NEXT:    ret void
+//
+// BEWIDTH-LABEL: @increment_v_c_st16(
+// BEWIDTH-NEXT:  entry:
+// BEWIDTH-NEXT:    [[TMP0:%.*]] = getelementptr inbounds [[STRUCT_ST16:%.*]], %struct.st16* [[S:%.*]], i32 0, i32 1
+// BEWIDTH-NEXT:    [[TMP1:%.*]] = bitcast i48* [[TMP0]] to i32*
+// BEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load volatile i32, i32* [[TMP1]], align 4
+// BEWIDTH-NEXT:    [[INC:%.*]] = add nsw i32 [[BF_LOAD]], 1
+// BEWIDTH-NEXT:    store volatile i32 [[INC]], i32* [[TMP1]], align 4
+// BEWIDTH-NEXT:    ret void
+//
+// LEWIDTHNUM-LABEL: @increment_v_c_st16(
+// LEWIDTHNUM-NEXT:  entry:
+// LEWIDTHNUM-NEXT:    [[TMP0:%.*]] = getelementptr inbounds [[STRUCT_ST16:%.*]], %struct.st16* [[S:%.*]], i32 0, i32 1
+// LEWIDTHNUM-NEXT:    [[TMP1:%.*]] = bitcast i48* [[TMP0]] to i32*
+// LEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load volatile i32, i32* [[TMP1]], align 4
+// LEWIDTHNUM-NEXT:    [[INC:%.*]] = add nsw i32 [[BF_LOAD]], 1
+// LEWIDTHNUM-NEXT:    [[BF_LOAD1:%.*]] = load volatile i32, i32* [[TMP1]], align 4
+// LEWIDTHNUM-NEXT:    store volatile i32 [[INC]], i32* [[TMP1]], align 4
+// LEWIDTHNUM-NEXT:    ret void
+//
+// BEWIDTHNUM-LABEL: @increment_v_c_st16(
+// BEWIDTHNUM-NEXT:  entry:
+// BEWIDTHNUM-NEXT:    [[TMP0:%.*]] = getelementptr inbounds [[STRUCT_ST16:%.*]], %struct.st16* [[S:%.*]], i32 0, i32 1
+// BEWIDTHNUM-NEXT:    [[TMP1:%.*]] = bitcast i48* [[TMP0]] to i32*
+// BEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load volatile i32, i32* [[TMP1]], align 4
+// BEWIDTHNUM-NEXT:    [[INC:%.*]] = add nsw i32 [[BF_LOAD]], 1
+// BEWIDTHNUM-NEXT:    [[BF_LOAD1:%.*]] = load volatile i32, i32* [[TMP1]], align 4
+// BEWIDTHNUM-NEXT:    store volatile i32 [[INC]], i32* [[TMP1]], align 4
+// BEWIDTHNUM-NEXT:    ret void
+//
 void increment_v_c_st16(volatile struct st16 *s) {
   s->c++;
 }
@@ -1189,6 +3797,90 @@ void increment_v_c_st16(volatile struct st16 *s) {
 // BE-NEXT:    store volatile i64 [[BF_SET]], i64* [[TMP0]], align 4
 // BE-NEXT:    ret void
 //
+// LENUMLOADS-LABEL: @increment_v_d_st16(
+// LENUMLOADS-NEXT:  entry:
+// LENUMLOADS-NEXT:    [[D:%.*]] = getelementptr inbounds [[STRUCT_ST16:%.*]], %struct.st16* [[S:%.*]], i32 0, i32 1
+// LENUMLOADS-NEXT:    [[TMP0:%.*]] = bitcast i48* [[D]] to i64*
+// LENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i64, i64* [[TMP0]], align 4
+// LENUMLOADS-NEXT:    [[TMP1:%.*]] = lshr i64 [[BF_LOAD]], 32
+// LENUMLOADS-NEXT:    [[TMP2:%.*]] = trunc i64 [[TMP1]] to i32
+// LENUMLOADS-NEXT:    [[INC:%.*]] = add i32 [[TMP2]], 1
+// LENUMLOADS-NEXT:    [[BF_LOAD1:%.*]] = load volatile i64, i64* [[TMP0]], align 4
+// LENUMLOADS-NEXT:    [[TMP3:%.*]] = and i32 [[INC]], 65535
+// LENUMLOADS-NEXT:    [[BF_VALUE:%.*]] = zext i32 [[TMP3]] to i64
+// LENUMLOADS-NEXT:    [[BF_SHL2:%.*]] = shl nuw nsw i64 [[BF_VALUE]], 32
+// LENUMLOADS-NEXT:    [[BF_CLEAR:%.*]] = and i64 [[BF_LOAD1]], -281470681743361
+// LENUMLOADS-NEXT:    [[BF_SET:%.*]] = or i64 [[BF_SHL2]], [[BF_CLEAR]]
+// LENUMLOADS-NEXT:    store volatile i64 [[BF_SET]], i64* [[TMP0]], align 4
+// LENUMLOADS-NEXT:    ret void
+//
+// BENUMLOADS-LABEL: @increment_v_d_st16(
+// BENUMLOADS-NEXT:  entry:
+// BENUMLOADS-NEXT:    [[D:%.*]] = getelementptr inbounds [[STRUCT_ST16:%.*]], %struct.st16* [[S:%.*]], i32 0, i32 1
+// BENUMLOADS-NEXT:    [[TMP0:%.*]] = bitcast i48* [[D]] to i64*
+// BENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i64, i64* [[TMP0]], align 4
+// BENUMLOADS-NEXT:    [[BF_LOAD1:%.*]] = load volatile i64, i64* [[TMP0]], align 4
+// BENUMLOADS-NEXT:    [[TMP1:%.*]] = trunc i64 [[BF_LOAD]] to i32
+// BENUMLOADS-NEXT:    [[INC4:%.*]] = add i32 [[TMP1]], 65536
+// BENUMLOADS-NEXT:    [[TMP2:%.*]] = and i32 [[INC4]], -65536
+// BENUMLOADS-NEXT:    [[BF_SHL2:%.*]] = zext i32 [[TMP2]] to i64
+// BENUMLOADS-NEXT:    [[BF_CLEAR:%.*]] = and i64 [[BF_LOAD1]], -4294901761
+// BENUMLOADS-NEXT:    [[BF_SET:%.*]] = or i64 [[BF_CLEAR]], [[BF_SHL2]]
+// BENUMLOADS-NEXT:    store volatile i64 [[BF_SET]], i64* [[TMP0]], align 4
+// BENUMLOADS-NEXT:    ret void
+//
+// LEWIDTH-LABEL: @increment_v_d_st16(
+// LEWIDTH-NEXT:  entry:
+// LEWIDTH-NEXT:    [[TMP0:%.*]] = bitcast %struct.st16* [[S:%.*]] to i32*
+// LEWIDTH-NEXT:    [[TMP1:%.*]] = getelementptr inbounds i32, i32* [[TMP0]], i32 3
+// LEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load volatile i32, i32* [[TMP1]], align 4
+// LEWIDTH-NEXT:    [[INC:%.*]] = add i32 [[BF_LOAD]], 1
+// LEWIDTH-NEXT:    [[BF_LOAD1:%.*]] = load volatile i32, i32* [[TMP1]], align 4
+// LEWIDTH-NEXT:    [[BF_VALUE:%.*]] = and i32 [[INC]], 65535
+// LEWIDTH-NEXT:    [[BF_CLEAR:%.*]] = and i32 [[BF_LOAD1]], -65536
+// LEWIDTH-NEXT:    [[BF_SET:%.*]] = or i32 [[BF_CLEAR]], [[BF_VALUE]]
+// LEWIDTH-NEXT:    store volatile i32 [[BF_SET]], i32* [[TMP1]], align 4
+// LEWIDTH-NEXT:    ret void
+//
+// BEWIDTH-LABEL: @increment_v_d_st16(
+// BEWIDTH-NEXT:  entry:
+// BEWIDTH-NEXT:    [[TMP0:%.*]] = bitcast %struct.st16* [[S:%.*]] to i32*
+// BEWIDTH-NEXT:    [[TMP1:%.*]] = getelementptr inbounds i32, i32* [[TMP0]], i32 3
+// BEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load volatile i32, i32* [[TMP1]], align 4
+// BEWIDTH-NEXT:    [[BF_LOAD1:%.*]] = load volatile i32, i32* [[TMP1]], align 4
+// BEWIDTH-NEXT:    [[TMP2:%.*]] = add i32 [[BF_LOAD]], 65536
+// BEWIDTH-NEXT:    [[BF_SHL:%.*]] = and i32 [[TMP2]], -65536
+// BEWIDTH-NEXT:    [[BF_CLEAR:%.*]] = and i32 [[BF_LOAD1]], 65535
+// BEWIDTH-NEXT:    [[BF_SET:%.*]] = or i32 [[BF_CLEAR]], [[BF_SHL]]
+// BEWIDTH-NEXT:    store volatile i32 [[BF_SET]], i32* [[TMP1]], align 4
+// BEWIDTH-NEXT:    ret void
+//
+// LEWIDTHNUM-LABEL: @increment_v_d_st16(
+// LEWIDTHNUM-NEXT:  entry:
+// LEWIDTHNUM-NEXT:    [[TMP0:%.*]] = bitcast %struct.st16* [[S:%.*]] to i32*
+// LEWIDTHNUM-NEXT:    [[TMP1:%.*]] = getelementptr inbounds i32, i32* [[TMP0]], i32 3
+// LEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load volatile i32, i32* [[TMP1]], align 4
+// LEWIDTHNUM-NEXT:    [[INC:%.*]] = add i32 [[BF_LOAD]], 1
+// LEWIDTHNUM-NEXT:    [[BF_LOAD1:%.*]] = load volatile i32, i32* [[TMP1]], align 4
+// LEWIDTHNUM-NEXT:    [[BF_VALUE:%.*]] = and i32 [[INC]], 65535
+// LEWIDTHNUM-NEXT:    [[BF_CLEAR:%.*]] = and i32 [[BF_LOAD1]], -65536
+// LEWIDTHNUM-NEXT:    [[BF_SET:%.*]] = or i32 [[BF_CLEAR]], [[BF_VALUE]]
+// LEWIDTHNUM-NEXT:    store volatile i32 [[BF_SET]], i32* [[TMP1]], align 4
+// LEWIDTHNUM-NEXT:    ret void
+//
+// BEWIDTHNUM-LABEL: @increment_v_d_st16(
+// BEWIDTHNUM-NEXT:  entry:
+// BEWIDTHNUM-NEXT:    [[TMP0:%.*]] = bitcast %struct.st16* [[S:%.*]] to i32*
+// BEWIDTHNUM-NEXT:    [[TMP1:%.*]] = getelementptr inbounds i32, i32* [[TMP0]], i32 3
+// BEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load volatile i32, i32* [[TMP1]], align 4
+// BEWIDTHNUM-NEXT:    [[BF_LOAD1:%.*]] = load volatile i32, i32* [[TMP1]], align 4
+// BEWIDTHNUM-NEXT:    [[TMP2:%.*]] = add i32 [[BF_LOAD]], 65536
+// BEWIDTHNUM-NEXT:    [[BF_SHL:%.*]] = and i32 [[TMP2]], -65536
+// BEWIDTHNUM-NEXT:    [[BF_CLEAR:%.*]] = and i32 [[BF_LOAD1]], 65535
+// BEWIDTHNUM-NEXT:    [[BF_SET:%.*]] = or i32 [[BF_CLEAR]], [[BF_SHL]]
+// BEWIDTHNUM-NEXT:    store volatile i32 [[BF_SET]], i32* [[TMP1]], align 4
+// BEWIDTHNUM-NEXT:    ret void
+//
 void increment_v_d_st16(volatile struct st16 *s) {
   s->d++;
 }
@@ -1227,6 +3919,90 @@ char c : 8;
 // BE-NEXT:    store volatile i40 [[BF_SET]], i40* [[TMP0]], align 1
 // BE-NEXT:    ret void
 //
+// LENUMLOADS-LABEL: @increment_v_b_st17(
+// LENUMLOADS-NEXT:  entry:
+// LENUMLOADS-NEXT:    [[TMP0:%.*]] = bitcast %struct.st17* [[S:%.*]] to i40*
+// LENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i40, i40* [[TMP0]], align 1
+// LENUMLOADS-NEXT:    [[BF_CAST:%.*]] = trunc i40 [[BF_LOAD]] to i32
+// LENUMLOADS-NEXT:    [[INC:%.*]] = add nsw i32 [[BF_CAST]], 1
+// LENUMLOADS-NEXT:    [[TMP1:%.*]] = zext i32 [[INC]] to i40
+// LENUMLOADS-NEXT:    [[BF_LOAD1:%.*]] = load volatile i40, i40* [[TMP0]], align 1
+// LENUMLOADS-NEXT:    [[BF_CLEAR:%.*]] = and i40 [[BF_LOAD1]], -4294967296
+// LENUMLOADS-NEXT:    [[BF_SET:%.*]] = or i40 [[BF_CLEAR]], [[TMP1]]
+// LENUMLOADS-NEXT:    store volatile i40 [[BF_SET]], i40* [[TMP0]], align 1
+// LENUMLOADS-NEXT:    ret void
+//
+// BENUMLOADS-LABEL: @increment_v_b_st17(
+// BENUMLOADS-NEXT:  entry:
+// BENUMLOADS-NEXT:    [[TMP0:%.*]] = bitcast %struct.st17* [[S:%.*]] to i40*
+// BENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i40, i40* [[TMP0]], align 1
+// BENUMLOADS-NEXT:    [[TMP1:%.*]] = lshr i40 [[BF_LOAD]], 8
+// BENUMLOADS-NEXT:    [[BF_CAST:%.*]] = trunc i40 [[TMP1]] to i32
+// BENUMLOADS-NEXT:    [[INC:%.*]] = add nsw i32 [[BF_CAST]], 1
+// BENUMLOADS-NEXT:    [[TMP2:%.*]] = zext i32 [[INC]] to i40
+// BENUMLOADS-NEXT:    [[BF_LOAD1:%.*]] = load volatile i40, i40* [[TMP0]], align 1
+// BENUMLOADS-NEXT:    [[BF_SHL:%.*]] = shl nuw i40 [[TMP2]], 8
+// BENUMLOADS-NEXT:    [[BF_CLEAR:%.*]] = and i40 [[BF_LOAD1]], 255
+// BENUMLOADS-NEXT:    [[BF_SET:%.*]] = or i40 [[BF_SHL]], [[BF_CLEAR]]
+// BENUMLOADS-NEXT:    store volatile i40 [[BF_SET]], i40* [[TMP0]], align 1
+// BENUMLOADS-NEXT:    ret void
+//
+// LEWIDTH-LABEL: @increment_v_b_st17(
+// LEWIDTH-NEXT:  entry:
+// LEWIDTH-NEXT:    [[TMP0:%.*]] = bitcast %struct.st17* [[S:%.*]] to i40*
+// LEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load volatile i40, i40* [[TMP0]], align 1
+// LEWIDTH-NEXT:    [[BF_CAST:%.*]] = trunc i40 [[BF_LOAD]] to i32
+// LEWIDTH-NEXT:    [[INC:%.*]] = add nsw i32 [[BF_CAST]], 1
+// LEWIDTH-NEXT:    [[TMP1:%.*]] = zext i32 [[INC]] to i40
+// LEWIDTH-NEXT:    [[BF_LOAD1:%.*]] = load volatile i40, i40* [[TMP0]], align 1
+// LEWIDTH-NEXT:    [[BF_CLEAR:%.*]] = and i40 [[BF_LOAD1]], -4294967296
+// LEWIDTH-NEXT:    [[BF_SET:%.*]] = or i40 [[BF_CLEAR]], [[TMP1]]
+// LEWIDTH-NEXT:    store volatile i40 [[BF_SET]], i40* [[TMP0]], align 1
+// LEWIDTH-NEXT:    ret void
+//
+// BEWIDTH-LABEL: @increment_v_b_st17(
+// BEWIDTH-NEXT:  entry:
+// BEWIDTH-NEXT:    [[TMP0:%.*]] = bitcast %struct.st17* [[S:%.*]] to i40*
+// BEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load volatile i40, i40* [[TMP0]], align 1
+// BEWIDTH-NEXT:    [[TMP1:%.*]] = lshr i40 [[BF_LOAD]], 8
+// BEWIDTH-NEXT:    [[BF_CAST:%.*]] = trunc i40 [[TMP1]] to i32
+// BEWIDTH-NEXT:    [[INC:%.*]] = add nsw i32 [[BF_CAST]], 1
+// BEWIDTH-NEXT:    [[TMP2:%.*]] = zext i32 [[INC]] to i40
+// BEWIDTH-NEXT:    [[BF_LOAD1:%.*]] = load volatile i40, i40* [[TMP0]], align 1
+// BEWIDTH-NEXT:    [[BF_SHL:%.*]] = shl nuw i40 [[TMP2]], 8
+// BEWIDTH-NEXT:    [[BF_CLEAR:%.*]] = and i40 [[BF_LOAD1]], 255
+// BEWIDTH-NEXT:    [[BF_SET:%.*]] = or i40 [[BF_SHL]], [[BF_CLEAR]]
+// BEWIDTH-NEXT:    store volatile i40 [[BF_SET]], i40* [[TMP0]], align 1
+// BEWIDTH-NEXT:    ret void
+//
+// LEWIDTHNUM-LABEL: @increment_v_b_st17(
+// LEWIDTHNUM-NEXT:  entry:
+// LEWIDTHNUM-NEXT:    [[TMP0:%.*]] = bitcast %struct.st17* [[S:%.*]] to i40*
+// LEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load volatile i40, i40* [[TMP0]], align 1
+// LEWIDTHNUM-NEXT:    [[BF_CAST:%.*]] = trunc i40 [[BF_LOAD]] to i32
+// LEWIDTHNUM-NEXT:    [[INC:%.*]] = add nsw i32 [[BF_CAST]], 1
+// LEWIDTHNUM-NEXT:    [[TMP1:%.*]] = zext i32 [[INC]] to i40
+// LEWIDTHNUM-NEXT:    [[BF_LOAD1:%.*]] = load volatile i40, i40* [[TMP0]], align 1
+// LEWIDTHNUM-NEXT:    [[BF_CLEAR:%.*]] = and i40 [[BF_LOAD1]], -4294967296
+// LEWIDTHNUM-NEXT:    [[BF_SET:%.*]] = or i40 [[BF_CLEAR]], [[TMP1]]
+// LEWIDTHNUM-NEXT:    store volatile i40 [[BF_SET]], i40* [[TMP0]], align 1
+// LEWIDTHNUM-NEXT:    ret void
+//
+// BEWIDTHNUM-LABEL: @increment_v_b_st17(
+// BEWIDTHNUM-NEXT:  entry:
+// BEWIDTHNUM-NEXT:    [[TMP0:%.*]] = bitcast %struct.st17* [[S:%.*]] to i40*
+// BEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load volatile i40, i40* [[TMP0]], align 1
+// BEWIDTHNUM-NEXT:    [[TMP1:%.*]] = lshr i40 [[BF_LOAD]], 8
+// BEWIDTHNUM-NEXT:    [[BF_CAST:%.*]] = trunc i40 [[TMP1]] to i32
+// BEWIDTHNUM-NEXT:    [[INC:%.*]] = add nsw i32 [[BF_CAST]], 1
+// BEWIDTHNUM-NEXT:    [[TMP2:%.*]] = zext i32 [[INC]] to i40
+// BEWIDTHNUM-NEXT:    [[BF_LOAD1:%.*]] = load volatile i40, i40* [[TMP0]], align 1
+// BEWIDTHNUM-NEXT:    [[BF_SHL:%.*]] = shl nuw i40 [[TMP2]], 8
+// BEWIDTHNUM-NEXT:    [[BF_CLEAR:%.*]] = and i40 [[BF_LOAD1]], 255
+// BEWIDTHNUM-NEXT:    [[BF_SET:%.*]] = or i40 [[BF_SHL]], [[BF_CLEAR]]
+// BEWIDTHNUM-NEXT:    store volatile i40 [[BF_SET]], i40* [[TMP0]], align 1
+// BEWIDTHNUM-NEXT:    ret void
+//
 void increment_v_b_st17(volatile struct st17 *s) {
   s->b++;
 }
@@ -1259,6 +4035,458 @@ void increment_v_b_st17(volatile struct st17 *s) {
 // BE-NEXT:    store volatile i40 [[BF_SET]], i40* [[TMP0]], align 1
 // BE-NEXT:    ret void
 //
+// LENUMLOADS-LABEL: @increment_v_c_st17(
+// LENUMLOADS-NEXT:  entry:
+// LENUMLOADS-NEXT:    [[TMP0:%.*]] = bitcast %struct.st17* [[S:%.*]] to i40*
+// LENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i40, i40* [[TMP0]], align 1
+// LENUMLOADS-NEXT:    [[TMP1:%.*]] = lshr i40 [[BF_LOAD]], 32
+// LENUMLOADS-NEXT:    [[BF_CAST:%.*]] = trunc i40 [[TMP1]] to i8
+// LENUMLOADS-NEXT:    [[INC:%.*]] = add i8 [[BF_CAST]], 1
+// LENUMLOADS-NEXT:    [[TMP2:%.*]] = zext i8 [[INC]] to i40
+// LENUMLOADS-NEXT:    [[BF_LOAD1:%.*]] = load volatile i40, i40* [[TMP0]], align 1
+// LENUMLOADS-NEXT:    [[BF_SHL:%.*]] = shl nuw i40 [[TMP2]], 32
+// LENUMLOADS-NEXT:    [[BF_CLEAR:%.*]] = and i40 [[BF_LOAD1]], 4294967295
+// LENUMLOADS-NEXT:    [[BF_SET:%.*]] = or i40 [[BF_SHL]], [[BF_CLEAR]]
+// LENUMLOADS-NEXT:    store volatile i40 [[BF_SET]], i40* [[TMP0]], align 1
+// LENUMLOADS-NEXT:    ret void
+//
+// BENUMLOADS-LABEL: @increment_v_c_st17(
+// BENUMLOADS-NEXT:  entry:
+// BENUMLOADS-NEXT:    [[TMP0:%.*]] = bitcast %struct.st17* [[S:%.*]] to i40*
+// BENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i40, i40* [[TMP0]], align 1
+// BENUMLOADS-NEXT:    [[BF_CAST:%.*]] = trunc i40 [[BF_LOAD]] to i8
+// BENUMLOADS-NEXT:    [[INC:%.*]] = add i8 [[BF_CAST]], 1
+// BENUMLOADS-NEXT:    [[TMP1:%.*]] = zext i8 [[INC]] to i40
+// BENUMLOADS-NEXT:    [[BF_LOAD1:%.*]] = load volatile i40, i40* [[TMP0]], align 1
+// BENUMLOADS-NEXT:    [[BF_CLEAR:%.*]] = and i40 [[BF_LOAD1]], -256
+// BENUMLOADS-NEXT:    [[BF_SET:%.*]] = or i40 [[BF_CLEAR]], [[TMP1]]
+// BENUMLOADS-NEXT:    store volatile i40 [[BF_SET]], i40* [[TMP0]], align 1
+// BENUMLOADS-NEXT:    ret void
+//
+// LEWIDTH-LABEL: @increment_v_c_st17(
+// LEWIDTH-NEXT:  entry:
+// LEWIDTH-NEXT:    [[TMP0:%.*]] = getelementptr inbounds [[STRUCT_ST17:%.*]], %struct.st17* [[S:%.*]], i32 0, i32 0, i32 4
+// LEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[TMP0]], align 1
+// LEWIDTH-NEXT:    [[INC:%.*]] = add i8 [[BF_LOAD]], 1
+// LEWIDTH-NEXT:    store volatile i8 [[INC]], i8* [[TMP0]], align 1
+// LEWIDTH-NEXT:    ret void
+//
+// BEWIDTH-LABEL: @increment_v_c_st17(
+// BEWIDTH-NEXT:  entry:
+// BEWIDTH-NEXT:    [[TMP0:%.*]] = getelementptr inbounds [[STRUCT_ST17:%.*]], %struct.st17* [[S:%.*]], i32 0, i32 0, i32 4
+// BEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[TMP0]], align 1
+// BEWIDTH-NEXT:    [[INC:%.*]] = add i8 [[BF_LOAD]], 1
+// BEWIDTH-NEXT:    store volatile i8 [[INC]], i8* [[TMP0]], align 1
+// BEWIDTH-NEXT:    ret void
+//
+// LEWIDTHNUM-LABEL: @increment_v_c_st17(
+// LEWIDTHNUM-NEXT:  entry:
+// LEWIDTHNUM-NEXT:    [[TMP0:%.*]] = getelementptr inbounds [[STRUCT_ST17:%.*]], %struct.st17* [[S:%.*]], i32 0, i32 0, i32 4
+// LEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[TMP0]], align 1
+// LEWIDTHNUM-NEXT:    [[INC:%.*]] = add i8 [[BF_LOAD]], 1
+// LEWIDTHNUM-NEXT:    [[BF_LOAD1:%.*]] = load volatile i8, i8* [[TMP0]], align 1
+// LEWIDTHNUM-NEXT:    store volatile i8 [[INC]], i8* [[TMP0]], align 1
+// LEWIDTHNUM-NEXT:    ret void
+//
+// BEWIDTHNUM-LABEL: @increment_v_c_st17(
+// BEWIDTHNUM-NEXT:  entry:
+// BEWIDTHNUM-NEXT:    [[TMP0:%.*]] = getelementptr inbounds [[STRUCT_ST17:%.*]], %struct.st17* [[S:%.*]], i32 0, i32 0, i32 4
+// BEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[TMP0]], align 1
+// BEWIDTHNUM-NEXT:    [[INC:%.*]] = add i8 [[BF_LOAD]], 1
+// BEWIDTHNUM-NEXT:    [[BF_LOAD1:%.*]] = load volatile i8, i8* [[TMP0]], align 1
+// BEWIDTHNUM-NEXT:    store volatile i8 [[INC]], i8* [[TMP0]], align 1
+// BEWIDTHNUM-NEXT:    ret void
+//
 void increment_v_c_st17(volatile struct st17 *s) {
   s->c++;
 }
+
+// A zero bitfield should block, as the C11 specification
+// requires a and b to be different memory positions
+struct zero_bitfield {
+  int a : 8;
+  char : 0;
+  int b : 8;
+};
+
+// LE-LABEL: @increment_a_zero_bitfield(
+// LE-NEXT:  entry:
+// LE-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ZERO_BITFIELD:%.*]], %struct.zero_bitfield* [[S:%.*]], i32 0, i32 0
+// LE-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[TMP0]], align 4
+// LE-NEXT:    [[INC:%.*]] = add i8 [[BF_LOAD]], 1
+// LE-NEXT:    store volatile i8 [[INC]], i8* [[TMP0]], align 4
+// LE-NEXT:    ret void
+//
+// BE-LABEL: @increment_a_zero_bitfield(
+// BE-NEXT:  entry:
+// BE-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ZERO_BITFIELD:%.*]], %struct.zero_bitfield* [[S:%.*]], i32 0, i32 0
+// BE-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[TMP0]], align 4
+// BE-NEXT:    [[INC:%.*]] = add i8 [[BF_LOAD]], 1
+// BE-NEXT:    store volatile i8 [[INC]], i8* [[TMP0]], align 4
+// BE-NEXT:    ret void
+//
+// LENUMLOADS-LABEL: @increment_a_zero_bitfield(
+// LENUMLOADS-NEXT:  entry:
+// LENUMLOADS-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ZERO_BITFIELD:%.*]], %struct.zero_bitfield* [[S:%.*]], i32 0, i32 0
+// LENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[TMP0]], align 4
+// LENUMLOADS-NEXT:    [[INC:%.*]] = add i8 [[BF_LOAD]], 1
+// LENUMLOADS-NEXT:    [[BF_LOAD1:%.*]] = load volatile i8, i8* [[TMP0]], align 4
+// LENUMLOADS-NEXT:    store volatile i8 [[INC]], i8* [[TMP0]], align 4
+// LENUMLOADS-NEXT:    ret void
+//
+// BENUMLOADS-LABEL: @increment_a_zero_bitfield(
+// BENUMLOADS-NEXT:  entry:
+// BENUMLOADS-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ZERO_BITFIELD:%.*]], %struct.zero_bitfield* [[S:%.*]], i32 0, i32 0
+// BENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[TMP0]], align 4
+// BENUMLOADS-NEXT:    [[INC:%.*]] = add i8 [[BF_LOAD]], 1
+// BENUMLOADS-NEXT:    [[BF_LOAD1:%.*]] = load volatile i8, i8* [[TMP0]], align 4
+// BENUMLOADS-NEXT:    store volatile i8 [[INC]], i8* [[TMP0]], align 4
+// BENUMLOADS-NEXT:    ret void
+//
+// LEWIDTH-LABEL: @increment_a_zero_bitfield(
+// LEWIDTH-NEXT:  entry:
+// LEWIDTH-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ZERO_BITFIELD:%.*]], %struct.zero_bitfield* [[S:%.*]], i32 0, i32 0
+// LEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[TMP0]], align 4
+// LEWIDTH-NEXT:    [[INC:%.*]] = add i8 [[BF_LOAD]], 1
+// LEWIDTH-NEXT:    store volatile i8 [[INC]], i8* [[TMP0]], align 4
+// LEWIDTH-NEXT:    ret void
+//
+// BEWIDTH-LABEL: @increment_a_zero_bitfield(
+// BEWIDTH-NEXT:  entry:
+// BEWIDTH-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ZERO_BITFIELD:%.*]], %struct.zero_bitfield* [[S:%.*]], i32 0, i32 0
+// BEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[TMP0]], align 4
+// BEWIDTH-NEXT:    [[INC:%.*]] = add i8 [[BF_LOAD]], 1
+// BEWIDTH-NEXT:    store volatile i8 [[INC]], i8* [[TMP0]], align 4
+// BEWIDTH-NEXT:    ret void
+//
+// LEWIDTHNUM-LABEL: @increment_a_zero_bitfield(
+// LEWIDTHNUM-NEXT:  entry:
+// LEWIDTHNUM-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ZERO_BITFIELD:%.*]], %struct.zero_bitfield* [[S:%.*]], i32 0, i32 0
+// LEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[TMP0]], align 4
+// LEWIDTHNUM-NEXT:    [[INC:%.*]] = add i8 [[BF_LOAD]], 1
+// LEWIDTHNUM-NEXT:    [[BF_LOAD1:%.*]] = load volatile i8, i8* [[TMP0]], align 4
+// LEWIDTHNUM-NEXT:    store volatile i8 [[INC]], i8* [[TMP0]], align 4
+// LEWIDTHNUM-NEXT:    ret void
+//
+// BEWIDTHNUM-LABEL: @increment_a_zero_bitfield(
+// BEWIDTHNUM-NEXT:  entry:
+// BEWIDTHNUM-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ZERO_BITFIELD:%.*]], %struct.zero_bitfield* [[S:%.*]], i32 0, i32 0
+// BEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[TMP0]], align 4
+// BEWIDTHNUM-NEXT:    [[INC:%.*]] = add i8 [[BF_LOAD]], 1
+// BEWIDTHNUM-NEXT:    [[BF_LOAD1:%.*]] = load volatile i8, i8* [[TMP0]], align 4
+// BEWIDTHNUM-NEXT:    store volatile i8 [[INC]], i8* [[TMP0]], align 4
+// BEWIDTHNUM-NEXT:    ret void
+//
+void increment_a_zero_bitfield(volatile struct zero_bitfield *s) {
+  s->a++;
+}
+
+// LE-LABEL: @increment_b_zero_bitfield(
+// LE-NEXT:  entry:
+// LE-NEXT:    [[B:%.*]] = getelementptr inbounds [[STRUCT_ZERO_BITFIELD:%.*]], %struct.zero_bitfield* [[S:%.*]], i32 0, i32 1
+// LE-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[B]], align 1
+// LE-NEXT:    [[INC:%.*]] = add i8 [[BF_LOAD]], 1
+// LE-NEXT:    store volatile i8 [[INC]], i8* [[B]], align 1
+// LE-NEXT:    ret void
+//
+// BE-LABEL: @increment_b_zero_bitfield(
+// BE-NEXT:  entry:
+// BE-NEXT:    [[B:%.*]] = getelementptr inbounds [[STRUCT_ZERO_BITFIELD:%.*]], %struct.zero_bitfield* [[S:%.*]], i32 0, i32 1
+// BE-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[B]], align 1
+// BE-NEXT:    [[INC:%.*]] = add i8 [[BF_LOAD]], 1
+// BE-NEXT:    store volatile i8 [[INC]], i8* [[B]], align 1
+// BE-NEXT:    ret void
+//
+// LENUMLOADS-LABEL: @increment_b_zero_bitfield(
+// LENUMLOADS-NEXT:  entry:
+// LENUMLOADS-NEXT:    [[B:%.*]] = getelementptr inbounds [[STRUCT_ZERO_BITFIELD:%.*]], %struct.zero_bitfield* [[S:%.*]], i32 0, i32 1
+// LENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[B]], align 1
+// LENUMLOADS-NEXT:    [[INC:%.*]] = add i8 [[BF_LOAD]], 1
+// LENUMLOADS-NEXT:    [[BF_LOAD1:%.*]] = load volatile i8, i8* [[B]], align 1
+// LENUMLOADS-NEXT:    store volatile i8 [[INC]], i8* [[B]], align 1
+// LENUMLOADS-NEXT:    ret void
+//
+// BENUMLOADS-LABEL: @increment_b_zero_bitfield(
+// BENUMLOADS-NEXT:  entry:
+// BENUMLOADS-NEXT:    [[B:%.*]] = getelementptr inbounds [[STRUCT_ZERO_BITFIELD:%.*]], %struct.zero_bitfield* [[S:%.*]], i32 0, i32 1
+// BENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[B]], align 1
+// BENUMLOADS-NEXT:    [[INC:%.*]] = add i8 [[BF_LOAD]], 1
+// BENUMLOADS-NEXT:    [[BF_LOAD1:%.*]] = load volatile i8, i8* [[B]], align 1
+// BENUMLOADS-NEXT:    store volatile i8 [[INC]], i8* [[B]], align 1
+// BENUMLOADS-NEXT:    ret void
+//
+// LEWIDTH-LABEL: @increment_b_zero_bitfield(
+// LEWIDTH-NEXT:  entry:
+// LEWIDTH-NEXT:    [[B:%.*]] = getelementptr inbounds [[STRUCT_ZERO_BITFIELD:%.*]], %struct.zero_bitfield* [[S:%.*]], i32 0, i32 1
+// LEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[B]], align 1
+// LEWIDTH-NEXT:    [[INC:%.*]] = add i8 [[BF_LOAD]], 1
+// LEWIDTH-NEXT:    store volatile i8 [[INC]], i8* [[B]], align 1
+// LEWIDTH-NEXT:    ret void
+//
+// BEWIDTH-LABEL: @increment_b_zero_bitfield(
+// BEWIDTH-NEXT:  entry:
+// BEWIDTH-NEXT:    [[B:%.*]] = getelementptr inbounds [[STRUCT_ZERO_BITFIELD:%.*]], %struct.zero_bitfield* [[S:%.*]], i32 0, i32 1
+// BEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[B]], align 1
+// BEWIDTH-NEXT:    [[INC:%.*]] = add i8 [[BF_LOAD]], 1
+// BEWIDTH-NEXT:    store volatile i8 [[INC]], i8* [[B]], align 1
+// BEWIDTH-NEXT:    ret void
+//
+// LEWIDTHNUM-LABEL: @increment_b_zero_bitfield(
+// LEWIDTHNUM-NEXT:  entry:
+// LEWIDTHNUM-NEXT:    [[B:%.*]] = getelementptr inbounds [[STRUCT_ZERO_BITFIELD:%.*]], %struct.zero_bitfield* [[S:%.*]], i32 0, i32 1
+// LEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[B]], align 1
+// LEWIDTHNUM-NEXT:    [[INC:%.*]] = add i8 [[BF_LOAD]], 1
+// LEWIDTHNUM-NEXT:    [[BF_LOAD1:%.*]] = load volatile i8, i8* [[B]], align 1
+// LEWIDTHNUM-NEXT:    store volatile i8 [[INC]], i8* [[B]], align 1
+// LEWIDTHNUM-NEXT:    ret void
+//
+// BEWIDTHNUM-LABEL: @increment_b_zero_bitfield(
+// BEWIDTHNUM-NEXT:  entry:
+// BEWIDTHNUM-NEXT:    [[B:%.*]] = getelementptr inbounds [[STRUCT_ZERO_BITFIELD:%.*]], %struct.zero_bitfield* [[S:%.*]], i32 0, i32 1
+// BEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load volatile i8, i8* [[B]], align 1
+// BEWIDTHNUM-NEXT:    [[INC:%.*]] = add i8 [[BF_LOAD]], 1
+// BEWIDTHNUM-NEXT:    [[BF_LOAD1:%.*]] = load volatile i8, i8* [[B]], align 1
+// BEWIDTHNUM-NEXT:    store volatile i8 [[INC]], i8* [[B]], align 1
+// BEWIDTHNUM-NEXT:    ret void
+//
+void increment_b_zero_bitfield(volatile struct zero_bitfield *s) {
+  s->b++;
+}
+
+// The zero bitfield here does not affect accesses to a and a1
+struct zero_bitfield_ok {
+  short a : 8;
+  char a1 : 8;
+  long : 0;
+  int b : 24;
+};
+
+// LE-LABEL: @increment_a_zero_bitfield_ok(
+// LE-NEXT:  entry:
+// LE-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ZERO_BITFIELD_OK:%.*]], %struct.zero_bitfield_ok* [[S:%.*]], i32 0, i32 0
+// LE-NEXT:    [[BF_LOAD:%.*]] = load volatile i16, i16* [[TMP0]], align 4
+// LE-NEXT:    [[CONV:%.*]] = trunc i16 [[BF_LOAD]] to i8
+// LE-NEXT:    [[BF_LOAD1:%.*]] = load volatile i16, i16* [[TMP0]], align 4
+// LE-NEXT:    [[TMP1:%.*]] = lshr i16 [[BF_LOAD1]], 8
+// LE-NEXT:    [[BF_CAST:%.*]] = trunc i16 [[TMP1]] to i8
+// LE-NEXT:    [[ADD:%.*]] = add i8 [[BF_CAST]], [[CONV]]
+// LE-NEXT:    [[TMP2:%.*]] = zext i8 [[ADD]] to i16
+// LE-NEXT:    [[BF_LOAD5:%.*]] = load volatile i16, i16* [[TMP0]], align 4
+// LE-NEXT:    [[BF_SHL6:%.*]] = shl nuw i16 [[TMP2]], 8
+// LE-NEXT:    [[BF_CLEAR:%.*]] = and i16 [[BF_LOAD5]], 255
+// LE-NEXT:    [[BF_SET:%.*]] = or i16 [[BF_SHL6]], [[BF_CLEAR]]
+// LE-NEXT:    store volatile i16 [[BF_SET]], i16* [[TMP0]], align 4
+// LE-NEXT:    ret void
+//
+// BE-LABEL: @increment_a_zero_bitfield_ok(
+// BE-NEXT:  entry:
+// BE-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ZERO_BITFIELD_OK:%.*]], %struct.zero_bitfield_ok* [[S:%.*]], i32 0, i32 0
+// BE-NEXT:    [[BF_LOAD:%.*]] = load volatile i16, i16* [[TMP0]], align 4
+// BE-NEXT:    [[TMP1:%.*]] = lshr i16 [[BF_LOAD]], 8
+// BE-NEXT:    [[CONV:%.*]] = trunc i16 [[TMP1]] to i8
+// BE-NEXT:    [[BF_LOAD1:%.*]] = load volatile i16, i16* [[TMP0]], align 4
+// BE-NEXT:    [[SEXT:%.*]] = trunc i16 [[BF_LOAD1]] to i8
+// BE-NEXT:    [[ADD:%.*]] = add i8 [[SEXT]], [[CONV]]
+// BE-NEXT:    [[TMP2:%.*]] = zext i8 [[ADD]] to i16
+// BE-NEXT:    [[BF_LOAD5:%.*]] = load volatile i16, i16* [[TMP0]], align 4
+// BE-NEXT:    [[BF_CLEAR:%.*]] = and i16 [[BF_LOAD5]], -256
+// BE-NEXT:    [[BF_SET:%.*]] = or i16 [[BF_CLEAR]], [[TMP2]]
+// BE-NEXT:    store volatile i16 [[BF_SET]], i16* [[TMP0]], align 4
+// BE-NEXT:    ret void
+//
+// LENUMLOADS-LABEL: @increment_a_zero_bitfield_ok(
+// LENUMLOADS-NEXT:  entry:
+// LENUMLOADS-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ZERO_BITFIELD_OK:%.*]], %struct.zero_bitfield_ok* [[S:%.*]], i32 0, i32 0
+// LENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i16, i16* [[TMP0]], align 4
+// LENUMLOADS-NEXT:    [[CONV:%.*]] = trunc i16 [[BF_LOAD]] to i8
+// LENUMLOADS-NEXT:    [[BF_LOAD1:%.*]] = load volatile i16, i16* [[TMP0]], align 4
+// LENUMLOADS-NEXT:    [[TMP1:%.*]] = lshr i16 [[BF_LOAD1]], 8
+// LENUMLOADS-NEXT:    [[BF_CAST:%.*]] = trunc i16 [[TMP1]] to i8
+// LENUMLOADS-NEXT:    [[ADD:%.*]] = add i8 [[BF_CAST]], [[CONV]]
+// LENUMLOADS-NEXT:    [[TMP2:%.*]] = zext i8 [[ADD]] to i16
+// LENUMLOADS-NEXT:    [[BF_LOAD5:%.*]] = load volatile i16, i16* [[TMP0]], align 4
+// LENUMLOADS-NEXT:    [[BF_SHL6:%.*]] = shl nuw i16 [[TMP2]], 8
+// LENUMLOADS-NEXT:    [[BF_CLEAR:%.*]] = and i16 [[BF_LOAD5]], 255
+// LENUMLOADS-NEXT:    [[BF_SET:%.*]] = or i16 [[BF_SHL6]], [[BF_CLEAR]]
+// LENUMLOADS-NEXT:    store volatile i16 [[BF_SET]], i16* [[TMP0]], align 4
+// LENUMLOADS-NEXT:    ret void
+//
+// BENUMLOADS-LABEL: @increment_a_zero_bitfield_ok(
+// BENUMLOADS-NEXT:  entry:
+// BENUMLOADS-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ZERO_BITFIELD_OK:%.*]], %struct.zero_bitfield_ok* [[S:%.*]], i32 0, i32 0
+// BENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i16, i16* [[TMP0]], align 4
+// BENUMLOADS-NEXT:    [[TMP1:%.*]] = lshr i16 [[BF_LOAD]], 8
+// BENUMLOADS-NEXT:    [[CONV:%.*]] = trunc i16 [[TMP1]] to i8
+// BENUMLOADS-NEXT:    [[BF_LOAD1:%.*]] = load volatile i16, i16* [[TMP0]], align 4
+// BENUMLOADS-NEXT:    [[SEXT:%.*]] = trunc i16 [[BF_LOAD1]] to i8
+// BENUMLOADS-NEXT:    [[ADD:%.*]] = add i8 [[SEXT]], [[CONV]]
+// BENUMLOADS-NEXT:    [[TMP2:%.*]] = zext i8 [[ADD]] to i16
+// BENUMLOADS-NEXT:    [[BF_LOAD5:%.*]] = load volatile i16, i16* [[TMP0]], align 4
+// BENUMLOADS-NEXT:    [[BF_CLEAR:%.*]] = and i16 [[BF_LOAD5]], -256
+// BENUMLOADS-NEXT:    [[BF_SET:%.*]] = or i16 [[BF_CLEAR]], [[TMP2]]
+// BENUMLOADS-NEXT:    store volatile i16 [[BF_SET]], i16* [[TMP0]], align 4
+// BENUMLOADS-NEXT:    ret void
+//
+// LEWIDTH-LABEL: @increment_a_zero_bitfield_ok(
+// LEWIDTH-NEXT:  entry:
+// LEWIDTH-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ZERO_BITFIELD_OK:%.*]], %struct.zero_bitfield_ok* [[S:%.*]], i32 0, i32 0
+// LEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load volatile i16, i16* [[TMP0]], align 4
+// LEWIDTH-NEXT:    [[CONV:%.*]] = trunc i16 [[BF_LOAD]] to i8
+// LEWIDTH-NEXT:    [[TMP1:%.*]] = bitcast %struct.zero_bitfield_ok* [[S]] to i8*
+// LEWIDTH-NEXT:    [[TMP2:%.*]] = getelementptr inbounds i8, i8* [[TMP1]], i32 1
+// LEWIDTH-NEXT:    [[BF_LOAD1:%.*]] = load volatile i8, i8* [[TMP2]], align 1
+// LEWIDTH-NEXT:    [[ADD:%.*]] = add i8 [[BF_LOAD1]], [[CONV]]
+// LEWIDTH-NEXT:    store volatile i8 [[ADD]], i8* [[TMP2]], align 1
+// LEWIDTH-NEXT:    ret void
+//
+// BEWIDTH-LABEL: @increment_a_zero_bitfield_ok(
+// BEWIDTH-NEXT:  entry:
+// BEWIDTH-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ZERO_BITFIELD_OK:%.*]], %struct.zero_bitfield_ok* [[S:%.*]], i32 0, i32 0
+// BEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load volatile i16, i16* [[TMP0]], align 4
+// BEWIDTH-NEXT:    [[TMP1:%.*]] = lshr i16 [[BF_LOAD]], 8
+// BEWIDTH-NEXT:    [[CONV:%.*]] = trunc i16 [[TMP1]] to i8
+// BEWIDTH-NEXT:    [[TMP2:%.*]] = bitcast %struct.zero_bitfield_ok* [[S]] to i8*
+// BEWIDTH-NEXT:    [[TMP3:%.*]] = getelementptr inbounds i8, i8* [[TMP2]], i32 1
+// BEWIDTH-NEXT:    [[BF_LOAD1:%.*]] = load volatile i8, i8* [[TMP3]], align 1
+// BEWIDTH-NEXT:    [[ADD:%.*]] = add i8 [[BF_LOAD1]], [[CONV]]
+// BEWIDTH-NEXT:    store volatile i8 [[ADD]], i8* [[TMP3]], align 1
+// BEWIDTH-NEXT:    ret void
+//
+// LEWIDTHNUM-LABEL: @increment_a_zero_bitfield_ok(
+// LEWIDTHNUM-NEXT:  entry:
+// LEWIDTHNUM-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ZERO_BITFIELD_OK:%.*]], %struct.zero_bitfield_ok* [[S:%.*]], i32 0, i32 0
+// LEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load volatile i16, i16* [[TMP0]], align 4
+// LEWIDTHNUM-NEXT:    [[CONV:%.*]] = trunc i16 [[BF_LOAD]] to i8
+// LEWIDTHNUM-NEXT:    [[TMP1:%.*]] = bitcast %struct.zero_bitfield_ok* [[S]] to i8*
+// LEWIDTHNUM-NEXT:    [[TMP2:%.*]] = getelementptr inbounds i8, i8* [[TMP1]], i32 1
+// LEWIDTHNUM-NEXT:    [[BF_LOAD1:%.*]] = load volatile i8, i8* [[TMP2]], align 1
+// LEWIDTHNUM-NEXT:    [[ADD:%.*]] = add i8 [[BF_LOAD1]], [[CONV]]
+// LEWIDTHNUM-NEXT:    [[BF_LOAD4:%.*]] = load volatile i8, i8* [[TMP2]], align 1
+// LEWIDTHNUM-NEXT:    store volatile i8 [[ADD]], i8* [[TMP2]], align 1
+// LEWIDTHNUM-NEXT:    ret void
+//
+// BEWIDTHNUM-LABEL: @increment_a_zero_bitfield_ok(
+// BEWIDTHNUM-NEXT:  entry:
+// BEWIDTHNUM-NEXT:    [[TMP0:%.*]] = getelementptr [[STRUCT_ZERO_BITFIELD_OK:%.*]], %struct.zero_bitfield_ok* [[S:%.*]], i32 0, i32 0
+// BEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load volatile i16, i16* [[TMP0]], align 4
+// BEWIDTHNUM-NEXT:    [[TMP1:%.*]] = lshr i16 [[BF_LOAD]], 8
+// BEWIDTHNUM-NEXT:    [[CONV:%.*]] = trunc i16 [[TMP1]] to i8
+// BEWIDTHNUM-NEXT:    [[TMP2:%.*]] = bitcast %struct.zero_bitfield_ok* [[S]] to i8*
+// BEWIDTHNUM-NEXT:    [[TMP3:%.*]] = getelementptr inbounds i8, i8* [[TMP2]], i32 1
+// BEWIDTHNUM-NEXT:    [[BF_LOAD1:%.*]] = load volatile i8, i8* [[TMP3]], align 1
+// BEWIDTHNUM-NEXT:    [[ADD:%.*]] = add i8 [[BF_LOAD1]], [[CONV]]
+// BEWIDTHNUM-NEXT:    [[BF_LOAD4:%.*]] = load volatile i8, i8* [[TMP3]], align 1
+// BEWIDTHNUM-NEXT:    store volatile i8 [[ADD]], i8* [[TMP3]], align 1
+// BEWIDTHNUM-NEXT:    ret void
+//
+void increment_a_zero_bitfield_ok(volatile struct zero_bitfield_ok *s) {
+  s->a1 += s->a;
+}
+
+// LE-LABEL: @increment_b_zero_bitfield_ok(
+// LE-NEXT:  entry:
+// LE-NEXT:    [[B:%.*]] = getelementptr inbounds [[STRUCT_ZERO_BITFIELD_OK:%.*]], %struct.zero_bitfield_ok* [[S:%.*]], i32 0, i32 1
+// LE-NEXT:    [[TMP0:%.*]] = bitcast i24* [[B]] to i32*
+// LE-NEXT:    [[BF_LOAD:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// LE-NEXT:    [[INC:%.*]] = add i32 [[BF_LOAD]], 1
+// LE-NEXT:    [[BF_LOAD1:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// LE-NEXT:    [[BF_VALUE:%.*]] = and i32 [[INC]], 16777215
+// LE-NEXT:    [[BF_CLEAR:%.*]] = and i32 [[BF_LOAD1]], -16777216
+// LE-NEXT:    [[BF_SET:%.*]] = or i32 [[BF_CLEAR]], [[BF_VALUE]]
+// LE-NEXT:    store volatile i32 [[BF_SET]], i32* [[TMP0]], align 4
+// LE-NEXT:    ret void
+//
+// BE-LABEL: @increment_b_zero_bitfield_ok(
+// BE-NEXT:  entry:
+// BE-NEXT:    [[B:%.*]] = getelementptr inbounds [[STRUCT_ZERO_BITFIELD_OK:%.*]], %struct.zero_bitfield_ok* [[S:%.*]], i32 0, i32 1
+// BE-NEXT:    [[TMP0:%.*]] = bitcast i24* [[B]] to i32*
+// BE-NEXT:    [[BF_LOAD:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// BE-NEXT:    [[BF_LOAD1:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// BE-NEXT:    [[TMP1:%.*]] = add i32 [[BF_LOAD]], 256
+// BE-NEXT:    [[BF_SHL:%.*]] = and i32 [[TMP1]], -256
+// BE-NEXT:    [[BF_CLEAR:%.*]] = and i32 [[BF_LOAD1]], 255
+// BE-NEXT:    [[BF_SET:%.*]] = or i32 [[BF_CLEAR]], [[BF_SHL]]
+// BE-NEXT:    store volatile i32 [[BF_SET]], i32* [[TMP0]], align 4
+// BE-NEXT:    ret void
+//
+// LENUMLOADS-LABEL: @increment_b_zero_bitfield_ok(
+// LENUMLOADS-NEXT:  entry:
+// LENUMLOADS-NEXT:    [[B:%.*]] = getelementptr inbounds [[STRUCT_ZERO_BITFIELD_OK:%.*]], %struct.zero_bitfield_ok* [[S:%.*]], i32 0, i32 1
+// LENUMLOADS-NEXT:    [[TMP0:%.*]] = bitcast i24* [[B]] to i32*
+// LENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// LENUMLOADS-NEXT:    [[INC:%.*]] = add i32 [[BF_LOAD]], 1
+// LENUMLOADS-NEXT:    [[BF_LOAD1:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// LENUMLOADS-NEXT:    [[BF_VALUE:%.*]] = and i32 [[INC]], 16777215
+// LENUMLOADS-NEXT:    [[BF_CLEAR:%.*]] = and i32 [[BF_LOAD1]], -16777216
+// LENUMLOADS-NEXT:    [[BF_SET:%.*]] = or i32 [[BF_CLEAR]], [[BF_VALUE]]
+// LENUMLOADS-NEXT:    store volatile i32 [[BF_SET]], i32* [[TMP0]], align 4
+// LENUMLOADS-NEXT:    ret void
+//
+// BENUMLOADS-LABEL: @increment_b_zero_bitfield_ok(
+// BENUMLOADS-NEXT:  entry:
+// BENUMLOADS-NEXT:    [[B:%.*]] = getelementptr inbounds [[STRUCT_ZERO_BITFIELD_OK:%.*]], %struct.zero_bitfield_ok* [[S:%.*]], i32 0, i32 1
+// BENUMLOADS-NEXT:    [[TMP0:%.*]] = bitcast i24* [[B]] to i32*
+// BENUMLOADS-NEXT:    [[BF_LOAD:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// BENUMLOADS-NEXT:    [[BF_LOAD1:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// BENUMLOADS-NEXT:    [[TMP1:%.*]] = add i32 [[BF_LOAD]], 256
+// BENUMLOADS-NEXT:    [[BF_SHL:%.*]] = and i32 [[TMP1]], -256
+// BENUMLOADS-NEXT:    [[BF_CLEAR:%.*]] = and i32 [[BF_LOAD1]], 255
+// BENUMLOADS-NEXT:    [[BF_SET:%.*]] = or i32 [[BF_CLEAR]], [[BF_SHL]]
+// BENUMLOADS-NEXT:    store volatile i32 [[BF_SET]], i32* [[TMP0]], align 4
+// BENUMLOADS-NEXT:    ret void
+//
+// LEWIDTH-LABEL: @increment_b_zero_bitfield_ok(
+// LEWIDTH-NEXT:  entry:
+// LEWIDTH-NEXT:    [[B:%.*]] = getelementptr inbounds [[STRUCT_ZERO_BITFIELD_OK:%.*]], %struct.zero_bitfield_ok* [[S:%.*]], i32 0, i32 1
+// LEWIDTH-NEXT:    [[TMP0:%.*]] = bitcast i24* [[B]] to i32*
+// LEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// LEWIDTH-NEXT:    [[INC:%.*]] = add i32 [[BF_LOAD]], 1
+// LEWIDTH-NEXT:    [[BF_LOAD1:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// LEWIDTH-NEXT:    [[BF_VALUE:%.*]] = and i32 [[INC]], 16777215
+// LEWIDTH-NEXT:    [[BF_CLEAR:%.*]] = and i32 [[BF_LOAD1]], -16777216
+// LEWIDTH-NEXT:    [[BF_SET:%.*]] = or i32 [[BF_CLEAR]], [[BF_VALUE]]
+// LEWIDTH-NEXT:    store volatile i32 [[BF_SET]], i32* [[TMP0]], align 4
+// LEWIDTH-NEXT:    ret void
+//
+// BEWIDTH-LABEL: @increment_b_zero_bitfield_ok(
+// BEWIDTH-NEXT:  entry:
+// BEWIDTH-NEXT:    [[B:%.*]] = getelementptr inbounds [[STRUCT_ZERO_BITFIELD_OK:%.*]], %struct.zero_bitfield_ok* [[S:%.*]], i32 0, i32 1
+// BEWIDTH-NEXT:    [[TMP0:%.*]] = bitcast i24* [[B]] to i32*
+// BEWIDTH-NEXT:    [[BF_LOAD:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// BEWIDTH-NEXT:    [[BF_LOAD1:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// BEWIDTH-NEXT:    [[TMP1:%.*]] = add i32 [[BF_LOAD]], 256
+// BEWIDTH-NEXT:    [[BF_SHL:%.*]] = and i32 [[TMP1]], -256
+// BEWIDTH-NEXT:    [[BF_CLEAR:%.*]] = and i32 [[BF_LOAD1]], 255
+// BEWIDTH-NEXT:    [[BF_SET:%.*]] = or i32 [[BF_CLEAR]], [[BF_SHL]]
+// BEWIDTH-NEXT:    store volatile i32 [[BF_SET]], i32* [[TMP0]], align 4
+// BEWIDTH-NEXT:    ret void
+//
+// LEWIDTHNUM-LABEL: @increment_b_zero_bitfield_ok(
+// LEWIDTHNUM-NEXT:  entry:
+// LEWIDTHNUM-NEXT:    [[B:%.*]] = getelementptr inbounds [[STRUCT_ZERO_BITFIELD_OK:%.*]], %struct.zero_bitfield_ok* [[S:%.*]], i32 0, i32 1
+// LEWIDTHNUM-NEXT:    [[TMP0:%.*]] = bitcast i24* [[B]] to i32*
+// LEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// LEWIDTHNUM-NEXT:    [[INC:%.*]] = add i32 [[BF_LOAD]], 1
+// LEWIDTHNUM-NEXT:    [[BF_LOAD1:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// LEWIDTHNUM-NEXT:    [[BF_VALUE:%.*]] = and i32 [[INC]], 16777215
+// LEWIDTHNUM-NEXT:    [[BF_CLEAR:%.*]] = and i32 [[BF_LOAD1]], -16777216
+// LEWIDTHNUM-NEXT:    [[BF_SET:%.*]] = or i32 [[BF_CLEAR]], [[BF_VALUE]]
+// LEWIDTHNUM-NEXT:    store volatile i32 [[BF_SET]], i32* [[TMP0]], align 4
+// LEWIDTHNUM-NEXT:    ret void
+//
+// BEWIDTHNUM-LABEL: @increment_b_zero_bitfield_ok(
+// BEWIDTHNUM-NEXT:  entry:
+// BEWIDTHNUM-NEXT:    [[B:%.*]] = getelementptr inbounds [[STRUCT_ZERO_BITFIELD_OK:%.*]], %struct.zero_bitfield_ok* [[S:%.*]], i32 0, i32 1
+// BEWIDTHNUM-NEXT:    [[TMP0:%.*]] = bitcast i24* [[B]] to i32*
+// BEWIDTHNUM-NEXT:    [[BF_LOAD:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// BEWIDTHNUM-NEXT:    [[BF_LOAD1:%.*]] = load volatile i32, i32* [[TMP0]], align 4
+// BEWIDTHNUM-NEXT:    [[TMP1:%.*]] = add i32 [[BF_LOAD]], 256
+// BEWIDTHNUM-NEXT:    [[BF_SHL:%.*]] = and i32 [[TMP1]], -256
+// BEWIDTHNUM-NEXT:    [[BF_CLEAR:%.*]] = and i32 [[BF_LOAD1]], 255
+// BEWIDTHNUM-NEXT:    [[BF_SET:%.*]] = or i32 [[BF_CLEAR]], [[BF_SHL]]
+// BEWIDTHNUM-NEXT:    store volatile i32 [[BF_SET]], i32* [[TMP0]], align 4
+// BEWIDTHNUM-NEXT:    ret void
+//
+void increment_b_zero_bitfield_ok(volatile struct zero_bitfield_ok *s) {
+  s->b++;
+}

diff  --git a/clang/test/CodeGen/bitfield-2.c b/clang/test/CodeGen/bitfield-2.c
index 9d669575ecd1..661d42683bc2 100644
--- a/clang/test/CodeGen/bitfield-2.c
+++ b/clang/test/CodeGen/bitfield-2.c
@@ -14,7 +14,7 @@
 // CHECK-RECORD:   LLVMType:%struct.s0 = type { [3 x i8] }
 // CHECK-RECORD:   IsZeroInitializable:1
 // CHECK-RECORD:   BitFields:[
-// CHECK-RECORD:     <CGBitFieldInfo Offset:0 Size:24 IsSigned:1 StorageSize:24 StorageOffset:0>
+// CHECK-RECORD:     <CGBitFieldInfo Offset:0 Size:24 IsSigned:1 StorageSize:24 StorageOffset:0
 struct __attribute((packed)) s0 {
   int f0 : 24;
 };
@@ -54,8 +54,8 @@ unsigned long long test_0() {
 // CHECK-RECORD:   LLVMType:%struct.s1 = type { [3 x i8] }
 // CHECK-RECORD:   IsZeroInitializable:1
 // CHECK-RECORD:   BitFields:[
-// CHECK-RECORD:     <CGBitFieldInfo Offset:0 Size:10 IsSigned:1 StorageSize:24 StorageOffset:0>
-// CHECK-RECORD:     <CGBitFieldInfo Offset:10 Size:10 IsSigned:1 StorageSize:24 StorageOffset:0>
+// CHECK-RECORD:     <CGBitFieldInfo Offset:0 Size:10 IsSigned:1 StorageSize:24 StorageOffset:0
+// CHECK-RECORD:     <CGBitFieldInfo Offset:10 Size:10 IsSigned:1 StorageSize:24 StorageOffset:0
 
 #pragma pack(push)
 #pragma pack(1)
@@ -102,7 +102,7 @@ unsigned long long test_1() {
 // CHECK-RECORD:   LLVMType:%union.u2 = type { i8 }
 // CHECK-RECORD:   IsZeroInitializable:1
 // CHECK-RECORD:   BitFields:[
-// CHECK-RECORD:     <CGBitFieldInfo Offset:0 Size:3 IsSigned:0 StorageSize:8 StorageOffset:0>
+// CHECK-RECORD:     <CGBitFieldInfo Offset:0 Size:3 IsSigned:0 StorageSize:8 StorageOffset:0
 
 union __attribute__((packed)) u2 {
   unsigned long long f0 : 3;
@@ -274,8 +274,8 @@ _Bool test_6() {
 // CHECK-RECORD:   LLVMType:%struct.s7 = type { i32, i32, i32, i8, i32, [12 x i8] }
 // CHECK-RECORD:   IsZeroInitializable:1
 // CHECK-RECORD:   BitFields:[
-// CHECK-RECORD:     <CGBitFieldInfo Offset:0 Size:5 IsSigned:1 StorageSize:8 StorageOffset:12>
-// CHECK-RECORD:     <CGBitFieldInfo Offset:0 Size:29 IsSigned:1 StorageSize:32 StorageOffset:16>
+// CHECK-RECORD:     <CGBitFieldInfo Offset:0 Size:5 IsSigned:1 StorageSize:8 StorageOffset:12
+// CHECK-RECORD:     <CGBitFieldInfo Offset:0 Size:29 IsSigned:1 StorageSize:32 StorageOffset:16
 
 struct __attribute__((aligned(16))) s7 {
   int a, b, c;

diff  --git a/clang/test/CodeGen/volatile.c b/clang/test/CodeGen/volatile.c
index 0f58bb62a248..93772b8f8e0e 100644
--- a/clang/test/CodeGen/volatile.c
+++ b/clang/test/CodeGen/volatile.c
@@ -1,4 +1,5 @@
-// RUN: %clang_cc1 -triple=%itanium_abi_triple -emit-llvm < %s | FileCheck %s -check-prefix CHECK -check-prefix CHECK-IT
+// RUN: %clang_cc1 -triple=aarch64-unknown-linux-gnu -emit-llvm < %s | FileCheck %s -check-prefix CHECK -check-prefixes CHECK-IT,CHECK-IT-ARM
+// RUN: %clang_cc1 -triple=x86_64-unknown-linux-gnu -emit-llvm < %s | FileCheck %s -check-prefix CHECK -check-prefixes CHECK-IT,CHECK-IT-OTHER
 // RUN: %clang_cc1 -triple=%ms_abi_triple -emit-llvm < %s | FileCheck %s -check-prefix CHECK -check-prefix CHECK-MS
 
 int S;
@@ -88,7 +89,8 @@ int main() {
 // CHECK-MS: load i32, i32* getelementptr {{.*}} @BF
 // CHECK: store i32 {{.*}}, i32* [[I]]
   i=vBF.x;
-// CHECK-IT: load volatile i8, i8* getelementptr {{.*}} @vBF
+// CHECK-IT-OTHER: load volatile i8, i8* getelementptr {{.*}} @vBF
+// CHECK-IT-ARM: load volatile i32, i32* bitcast {{.*}} @vBF
 // CHECK-MS: load volatile i32, i32* getelementptr {{.*}} @vBF
 // CHECK: store i32 {{.*}}, i32* [[I]]
   i=V[3];
@@ -163,9 +165,11 @@ int main() {
 // CHECK-MS: store i32 {{.*}}, i32* getelementptr {{.*}} @BF
   vBF.x=i;
 // CHECK: load i32, i32* [[I]]
-// CHECK-IT: load volatile i8, i8* getelementptr {{.*}} @vBF
+// CHECK-IT-OTHER: load volatile i8, i8* getelementptr {{.*}} @vBF
+// CHECK-IT-ARM: load volatile i32, i32* bitcast {{.*}} @vBF
 // CHECK-MS: load volatile i32, i32* getelementptr {{.*}} @vBF
-// CHECK-IT: store volatile i8 {{.*}}, i8* getelementptr {{.*}} @vBF
+// CHECK-IT-OTHER: store volatile i8 {{.*}}, i8* getelementptr {{.*}} @vBF
+// CHECK-IT-ARM: store volatile i32 {{.*}}, i32* bitcast {{.*}} @vBF
 // CHECK-MS: store volatile i32 {{.*}}, i32* getelementptr {{.*}} @vBF
   V[3]=i;
 // CHECK: load i32, i32* [[I]]
