[PATCH] [ARM] Add Thumb-2 code size optimization (wide CMP to narrow ADD)
Tim Northover
t.p.northover at gmail.com
Thu Apr 23 07:56:39 PDT 2015
On 23 April 2015 at 05:31, John Brawn <John.Brawn at arm.com> wrote:
> It should be safe to do this transformation even if flags other than the
> Z flag are used: CMP #-imm behaves identically to CMN #imm, and ADDS #imm
> sets the PSR flags in the same manner as CMN #imm.
Yep, when I last thought about this I came to the same conclusion
(AArch64ISelLowering.cpp:1126): the transformation is valid provided imm != 0.
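For reference (my arithmetic, not from the thread), the imm == 0 case
breaks on the carry flag. With Rn = 5:

  CMP  Rn, #0  ->  5 + 0xFFFFFFFF + 1 = 0x100000005  ->  C = 1
  ADDS Rn, #0  ->  5 + 0              = 5            ->  C = 0

so any user of the C flag would observe a difference.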
For the patch:
+ MachineBasicBlock::instr_iterator MBBI = MBB.instr_begin();
+ MachineBasicBlock::instr_iterator E = MBB.instr_end();
+ // Advance the iterator to the CMP instruction, as we are only interested
+ // in the instructions after the CMP instruction.
+ for (; MBBI != E; ++MBBI) {
+ if (MI == MBBI)
+ break;
+ }
Can't we just construct a MachineBasicBlock::iterator from MI here?
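Something like this (just a sketch; the iterator types here are
constructible directly from a MachineInstr pointer) would avoid the
linear scan entirely:

  // Build the iterator straight from the CMP instruction rather than
  // walking the block to find it.
  MachineBasicBlock::instr_iterator MBBI(MI);
  MachineBasicBlock::instr_iterator E = MBB.instr_end();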
+ MachineInstrBuilder MIB = BuildMI(MBB, MI, MI->getDebugLoc(),
+ TII->get(ARM::tADDi8), Reg.getReg());
I'd prefer to preserve Reg's killed state here, and possibly CPSR's
dead state too, since the flags might not actually be used.
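To be concrete, something along these lines (a sketch only;
getKillRegState lives in MachineInstrBuilder.h, and I've elided the
other tADDi8 operands):

  // Re-add the source register with its original kill flag so liveness
  // information stays accurate after the rewrite.
  MIB.addReg(Reg.getReg(), getKillRegState(Reg.isKill()));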
+ AddDefaultPred(MIB);
I don't think we can assume the original instruction is unpredicated
here, can we? This pass seems to get run at least once after
IfConversion.
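If the pass can see predicated code, the conservative fix is to check
and bail out rather than assume AL; sketching with the
getInstrPredicate helper from ARMBaseInstrInfo.h:

  // A predicated CMP can't simply become a flag-setting ADDS, so give up.
  unsigned PredReg = 0;
  if (getInstrPredicate(MI, PredReg) != ARMCC::AL)
    return false;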
Tim.