[PATCH] [Thumb1] Re-write emitThumbRegPlusImmediate
Tim Northover
t.p.northover at gmail.com
Tue Nov 11 14:46:46 PST 2014
Hi Oliver,
I'm not sure how many distinct comments I've really got; I suspect I've picked up on the same issue repeatedly in many cases. But still...
================
Comment at: lib/Target/ARM/Thumb1RegisterInfo.cpp:147-148
@@ -161,1 +146,4 @@
+/// alignments.
+static unsigned RoundUpToOddAlignment(unsigned Value, unsigned Align) {
+ return (Value + Align - 1) / Align * Align;
}
----------------
Why not just fix the implementation of RoundUpToAlignment? Clang seems quite capable of optimising "/Align * Align" when Align is a power of 2, and it looks like the function gets inlined at all of its callsites.
================
Comment at: lib/Target/ARM/Thumb1RegisterInfo.cpp:236
@@ -221,2 +235,3 @@
- unsigned NumMIs = calcNumMI(Opc, ExtraOpc, Bytes, NumBits, Scale);
+ assert((!isMul4 || (std::min(CopyScale, ExtraScale) == 1)) &&
+ "Unaligned offset, but all instructions require alignment");
----------------
isMul4 will be unused in a release build, I think, causing a warning.
================
Comment at: lib/Target/ARM/Thumb1RegisterInfo.cpp:250-253
@@ +249,6 @@
+ unsigned RequiredCopyInstrs = CopyOpc ? 1 : 0;
+ unsigned RangeAfterCopy = (CopyRange > Bytes) ? 0 : (Bytes - CopyRange);
+ unsigned RequiredExtraInstrs;
+ if (ExtraRange)
+ RequiredExtraInstrs = RoundUpToOddAlignment(RangeAfterCopy, ExtraRange) / ExtraRange;
+ else
----------------
This doesn't seem to line up with what actually gets implemented. In particular, I think we're in deep trouble if RangeAfterCopy actually does get rounded up.
================
Comment at: lib/Target/ARM/Thumb1RegisterInfo.cpp:256
@@ -223,1 +255,3 @@
+ RequiredExtraInstrs = UINT_MAX;
+ unsigned RequiredInstrs = RequiredCopyInstrs + RequiredExtraInstrs;
unsigned Threshold = (DestReg == ARM::SP) ? 3 : 2;
----------------
I think this overflows if RequiredExtraInstrs = UINT_MAX and there's a copy instruction.
================
Comment at: lib/Target/ARM/Thumb1RegisterInfo.cpp:269-270
@@ +268,4 @@
+ if (CopyOpc) {
+ unsigned CopyImm = std::min(Bytes, CopyRange) / CopyScale;
+ Bytes -= CopyImm * CopyScale;
+
----------------
I don't think this works with an unaligned copy but an aligned extra. That case doesn't seem to exist anyway, but perhaps it should be made more clear if that's intentional (comments, in the assert, ...).
http://reviews.llvm.org/D6211