[clang] [Driver] Don't alias -mstrict-align to -mno-unaligned-access (PR #85350)
Peter Smith via cfe-commits
cfe-commits at lists.llvm.org
Fri Mar 15 02:45:21 PDT 2024
================
@@ -321,9 +321,11 @@ void aarch64::getAArch64TargetFeatures(const Driver &D,
}
}
-  if (Arg *A = Args.getLastArg(options::OPT_mno_unaligned_access,
-                               options::OPT_munaligned_access)) {
-    if (A->getOption().matches(options::OPT_mno_unaligned_access))
+  if (Arg *A = Args.getLastArg(
+          options::OPT_mstrict_align, options::OPT_mno_strict_align,
+          options::OPT_mno_unaligned_access, options::OPT_munaligned_access)) {
+    if (A->getOption().matches(options::OPT_mstrict_align) ||
+        A->getOption().matches(options::OPT_mno_unaligned_access))
----------------
smithp35 wrote:
Preventing unaligned access can be useful on AArch64; it is an option we use to build our embedded C libraries (not a focus for GCC). It is documented in the toolchain manual: https://developer.arm.com/documentation/101754/0621/armclang-Reference/armclang-Command-line-Options/-munaligned-access---mno-unaligned-access
In summary, we'd like to keep it for AArch64.
AArch64 always has the option of using unaligned accesses, but they can be disabled by writing the SCTLR register, and accesses to Device memory always need to be aligned. Code that runs before the MMU is enabled is treated as if it were accessing Device memory.
```
Unaligned accesses to Normal memory
The behavior of unaligned accesses to Normal memory is dependent on all of the following:
• The instruction causing the memory access.
• The memory attributes of the accessed memory.
• The value of SCTLR_ELx.{A, nAA}.
• Whether or not FEAT_LSE2 is implemented.
```
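For illustration, a minimal sketch (hypothetical, not part of the patch) of source code whose generated AArch64 instructions differ under these flags: with -mstrict-align or -mno-unaligned-access the compiler must assemble the misaligned load from narrower aligned accesses, whereas by default it may emit a single unaligned ldr, which would fault with SCTLR_ELx.A set or when the address maps to Device memory.
```c
#include <stdint.h>

/* Hypothetical example: 'length' sits at offset 1, so reading it is a
 * misaligned 32-bit access at the source level. */
struct __attribute__((packed)) header {
  uint8_t  tag;
  uint32_t length;
};

uint32_t read_length(const struct header *h) {
  /* Built with e.g. clang --target=aarch64-none-elf -O2 -mstrict-align
   * (or -mno-unaligned-access), the backend avoids unaligned loads and
   * combines narrower aligned accesses; without the flag it may emit a
   * single unaligned ldr. */
  return h->length;
}
```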
https://github.com/llvm/llvm-project/pull/85350