[llvm] r219299 - Simplify switch statement in ARM subtarget align access
Renato Golin
renato.golin at linaro.org
Wed Oct 8 05:26:13 PDT 2014
Author: rengolin
Date: Wed Oct 8 07:26:13 2014
New Revision: 219299
URL: http://llvm.org/viewvc/llvm-project?rev=219299&view=rev
Log:
Simplify switch statement in ARM subtarget align access
This switch can be reduced to a simpler if/else statement.
Patch by Charlie Turner.
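
A minimal standalone sketch of the reduction, with the enum values taken
from the diff below but the helper function and the ComputedDefault
parameter introduced here only for illustration (the real code assigns
AllowsUnalignedMem in place rather than returning it):

  // Hypothetical stand-in for the subtarget's alignment mode setting.
  enum AlignMode { DefaultAlign, StrictAlign, NoStrictAlign };

  // The old three-case switch collapses to a single if/else:
  // DefaultAlign keeps the value computed from the target/architecture,
  // StrictAlign forces false, NoStrictAlign forces true.
  static bool allowsUnalignedMem(AlignMode Align, bool ComputedDefault) {
    if (Align == DefaultAlign)
      return ComputedDefault;
    return Align != StrictAlign; // true only for NoStrictAlign
  }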
Modified:
llvm/trunk/lib/Target/ARM/ARMSubtarget.cpp
Modified: llvm/trunk/lib/Target/ARM/ARMSubtarget.cpp
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/ARM/ARMSubtarget.cpp?rev=219299&r1=219298&r2=219299&view=diff
==============================================================================
--- llvm/trunk/lib/Target/ARM/ARMSubtarget.cpp (original)
+++ llvm/trunk/lib/Target/ARM/ARMSubtarget.cpp Wed Oct 8 07:26:13 2014
@@ -292,37 +292,31 @@ void ARMSubtarget::initSubtargetFeatures
SupportsTailCall = !isThumb1Only();
}
- switch (Align) {
- case DefaultAlign:
- // Assume pre-ARMv6 doesn't support unaligned accesses.
- //
- // ARMv6 may or may not support unaligned accesses depending on the
- // SCTLR.U bit, which is architecture-specific. We assume ARMv6
- // Darwin and NetBSD targets support unaligned accesses, and others don't.
- //
- // ARMv7 always has SCTLR.U set to 1, but it has a new SCTLR.A bit
- // which raises an alignment fault on unaligned accesses. Linux
- // defaults this bit to 0 and handles it as a system-wide (not
- // per-process) setting. It is therefore safe to assume that ARMv7+
- // Linux targets support unaligned accesses. The same goes for NaCl.
- //
- // The above behavior is consistent with GCC.
- AllowsUnalignedMem =
- (hasV7Ops() && (isTargetLinux() || isTargetNaCl() ||
- isTargetNetBSD())) ||
- (hasV6Ops() && (isTargetMachO() || isTargetNetBSD()));
- // The one exception is cortex-m0, which despite being v6, does not
- // support unaligned accesses. Rather than make the above boolean
- // expression even more obtuse, just override the value here.
- if (isThumb1Only() && isMClass())
- AllowsUnalignedMem = false;
- break;
- case StrictAlign:
+ if (Align == DefaultAlign) {
+ // Assume pre-ARMv6 doesn't support unaligned accesses.
+ //
+ // ARMv6 may or may not support unaligned accesses depending on the
+ // SCTLR.U bit, which is architecture-specific. We assume ARMv6
+ // Darwin and NetBSD targets support unaligned accesses, and others don't.
+ //
+ // ARMv7 always has SCTLR.U set to 1, but it has a new SCTLR.A bit
+ // which raises an alignment fault on unaligned accesses. Linux
+ // defaults this bit to 0 and handles it as a system-wide (not
+ // per-process) setting. It is therefore safe to assume that ARMv7+
+ // Linux targets support unaligned accesses. The same goes for NaCl.
+ //
+ // The above behavior is consistent with GCC.
+ AllowsUnalignedMem =
+ (hasV7Ops() && (isTargetLinux() || isTargetNaCl() ||
+ isTargetNetBSD())) ||
+ (hasV6Ops() && (isTargetMachO() || isTargetNetBSD()));
+ // The one exception is cortex-m0, which despite being v6, does not
+ // support unaligned accesses. Rather than make the above boolean
+ // expression even more obtuse, just override the value here.
+ if (isThumb1Only() && isMClass())
AllowsUnalignedMem = false;
- break;
- case NoStrictAlign:
- AllowsUnalignedMem = true;
- break;
+ } else {
+ AllowsUnalignedMem = !(Align == StrictAlign);
}
switch (IT) {