[lld] 4a8503c - [lld-macho] Align all cstrings to 16 bytes when deduplicating
Jez Ng via llvm-commits
llvm-commits at lists.llvm.org
Thu Jun 24 13:53:37 PDT 2021
Author: Jez Ng
Date: 2021-06-24T16:53:29-04:00
New Revision: 4a8503c8e04a5252193d58bb57a0111e7df05104
URL: https://github.com/llvm/llvm-project/commit/4a8503c8e04a5252193d58bb57a0111e7df05104
DIFF: https://github.com/llvm/llvm-project/commit/4a8503c8e04a5252193d58bb57a0111e7df05104.diff
LOG: [lld-macho] Align all cstrings to 16 bytes when deduplicating
We previously did this only for x86_64, but it turns out that
arm64 needs this too -- see PR50791.
Ultimately this is a hack, and we should avoid over-aligning strings
that don't need it. I'm just having a hard time figuring out how ld64 is
determining the right alignment.
No new test for this, since we were already testing this behavior for
x86_64 and the extension to arm64 seems too trivial to warrant one.
Reviewed By: #lld-macho, thakis
Differential Revision: https://reviews.llvm.org/D104835
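For context, the failure mode motivating this can be reproduced outside the
linker. Below is a minimal, hypothetical C++ sketch (not from the commit; the
symbol names and literal are invented) of why merging two identical cstrings
that were emitted at different alignments can crash: an aligned SSE load
faults if the surviving deduplicated copy is not 16-byte-aligned, which is
the kind of crash the comment in the diff below describes (PR50563).

    // Hypothetical repro sketch, x86_64 + SSE2 only. aligned_copy and
    // unaligned_copy are invented names; no such symbols appear in the commit.
    #include <emmintrin.h>

    // One translation unit emits a literal it expects to read with a single
    // aligned 16-byte SIMD load...
    alignas(16) static const char aligned_copy[16] = "0123456789abcde";

    // ...another emits an identical literal with no alignment requirement.
    static const char unaligned_copy[16] = "0123456789abcde";

    int main() {
      // _mm_load_si128 compiles to movdqa, which requires a 16-byte-aligned
      // operand. If a deduplicating linker kept only the unaligned copy and
      // redirected references from aligned_copy to it, this load could fault
      // at runtime.
      __m128i v =
          _mm_load_si128(reinterpret_cast<const __m128i *>(aligned_copy));
      return _mm_cvtsi128_si32(v) & 0;
    }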
Added:
Modified:
lld/MachO/SyntheticSections.cpp
Removed:
################################################################################
diff --git a/lld/MachO/SyntheticSections.cpp b/lld/MachO/SyntheticSections.cpp
index a5d21dcbaeb3..563b6e2ab605 100644
--- a/lld/MachO/SyntheticSections.cpp
+++ b/lld/MachO/SyntheticSections.cpp
@@ -1166,19 +1166,20 @@ void BitcodeBundleSection::writeTo(uint8_t *buf) const {
// that only contains a duplicate cstring at a different alignment. See PR50563
// for details.
//
-// In practice, the cstrings we've seen so far that require special alignment
-// are all accessed by x86_64 SIMD operations -- x86_64 requires SIMD accesses
-// to be 16-byte-aligned. So for now, I'm just aligning all strings to 16 bytes
-// on x86_64. This is indeed wasteful, but implementation-wise it's simpler
-// than preserving per-string alignment+offsets. It also avoids the
-// aforementioned crash after deduplication of differently-aligned strings.
-// Finally, the overhead is not huge: using 16-byte alignment (vs no alignment)
-// is only a 0.5% size overhead when linking chromium_framework.
+// On x86_64, the cstrings we've seen so far that require special alignment are
+// all accessed by SIMD operations -- x86_64 requires SIMD accesses to be
+// 16-byte-aligned. arm64 also seems to require 16-byte-alignment in some cases
+// (PR50791), but I haven't tracked down the root cause. So for now, I'm just
+// aligning all strings to 16 bytes. This is indeed wasteful, but
+// implementation-wise it's simpler than preserving per-string
+// alignment+offsets. It also avoids the aforementioned crash after
+// deduplication of differently-aligned strings. Finally, the overhead is not
+// huge: using 16-byte alignment (vs no alignment) is only a 0.5% size overhead
+// when linking chromium_framework on x86_64.
CStringSection::CStringSection()
: SyntheticSection(segment_names::text, section_names::cString),
- builder(StringTableBuilder::RAW,
- /*Alignment=*/target->cpuType == CPU_TYPE_X86_64 ? 16 : 1) {
- align = target->cpuType == CPU_TYPE_X86_64 ? 16 : 1;
+ builder(StringTableBuilder::RAW, /*Alignment=*/16) {
+ align = 16;
flags = S_CSTRING_LITERALS;
}
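As a rough sanity check of the new behavior, here is a standalone sketch that
drives llvm::StringTableBuilder the way the constructor above now does (RAW
kind, 16-byte alignment). It is illustrative only and assumes the
StringTableBuilder API of the time: with /*Alignment=*/16, every deduplicated
string lands on a 16-byte boundary, and duplicate adds resolve to a single
offset.

    #include "llvm/MC/StringTableBuilder.h"
    #include "llvm/Support/raw_ostream.h"
    using namespace llvm;

    int main() {
      // Same configuration as the CStringSection constructor above.
      StringTableBuilder builder(StringTableBuilder::RAW, /*Alignment=*/16);
      builder.add("hello");
      builder.add("world");
      builder.add("hello"); // duplicate; deduplicated against the first add
      builder.finalize();   // assigns final, 16-byte-aligned offsets
      // Both offsets should be multiples of 16, and the duplicate shares the
      // first string's offset.
      outs() << builder.getOffset("hello") << " " << builder.getOffset("world")
             << "\n";
      return 0;
    }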