[PATCH] D75203: Relax existing instructions to reduce the number of nops needed for alignment purposes

Philip Reames via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Wed Feb 26 11:23:25 PST 2020


reames created this revision.
reames added reviewers: MaskRay, jyknight, craig.topper, tstellar.
Herald added subscribers: bollu, hiraditya, mcrosier.
Herald added a project: LLVM.

If we have an explicit align directive, we currently default to emitting nops to fill the space.  As discussed in the context of the prefix padding work for branch alignment (D72225 <https://reviews.llvm.org/D72225>), we're allowed to play other tricks such as extending the size of previous instructions instead.

This patch will convert near jumps to far jumps if doing so decreases the number of bytes of nops needed for a following align.  It does so as a post-pass after relaxation is complete.  It intentionally works without moving any labels or doing anything which might require another round of relaxation.
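The decision is simple byte arithmetic. The following is an illustrative sketch (not the LLVM implementation; all names here are hypothetical), assuming a 2-byte short jump that can be rewritten as a 5-byte near jump ahead of an align directive:

```python
def nops_needed(offset, alignment):
    """Bytes of nop padding required to reach the next alignment boundary."""
    return -offset % alignment

def pad_via_relaxation(jmp_offset, short_size, relaxed_size, alignment):
    """Illustrative model: decide whether widening a preceding short jump
    reduces the nop count for a following align directive.
    Returns (jump_size_to_emit, nops_after)."""
    end_short = jmp_offset + short_size
    end_relaxed = jmp_offset + relaxed_size
    if nops_needed(end_relaxed, alignment) < nops_needed(end_short, alignment):
        return relaxed_size, nops_needed(end_relaxed, alignment)
    return short_size, nops_needed(end_short, alignment)

# A 2-byte short jmp starting at offset 11, followed by a 16-byte align:
# the 5-byte near form ends exactly at 16, needing 0 nops instead of 3.
print(pad_via_relaxation(11, 2, 5, 16))  # (5, 0)

# Starting at offset 13, widening would overshoot the boundary (14 nops),
# so the short form with 1 nop is kept.
print(pad_via_relaxation(13, 2, 5, 16))  # (2, 1)
```

Note that because this runs after relaxation is complete and never shrinks an instruction, no label moves backward and no further relaxation round is triggered.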

The point of this patch is mainly to mock out the approach.  The optimization implemented is real, and possibly useful, but the main point is to demonstrate an approach for implementing such "pad previous instruction" schemes.  The key notion in this patch is to treat padding previous instructions as an optional optimization, not as a core part of relaxation.  The benefit is that we avoid the potential concern of increasing the distance between two labels and thus causing further, potentially non-local, code growth due to relaxation.  The downside is that we may miss some opportunities to avoid nops.

For the moment, this patch only implements a small set of existing relaxations for a single directive type.  Assuming the approach is satisfactory, I plan to extend this to other directive types (org, boundary align) and a broader set of instructions where there are obvious "relaxations" which are roughly performance equivalent.

Note that this patch *doesn't* change which instructions are relaxable.  We may wish to explore that separately to increase optimization opportunity, but I figured that deserved its own separate discussion.

There are possible downsides to this optimization (and all "pad previous instruction" variants).  The two major ones are potentially increasing instruction fetch pressure and perturbing uop caching (i.e. the usual alignment risks).  Specifically:

- If we pad an instruction such that it crosses a fetch window (16 bytes on modern X86), we may cause the decoder to trigger a fetch it wouldn't have otherwise.  This can affect both decode speed and icache pressure.
- Intel's uop cache has particular restrictions on which instruction combinations can fit within a given window.  By moving instructions around, we can both cause misses and change misses into hits.  Many of the most painful cases are around branch density, so I don't expect this to be too bad on the whole.
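The fetch-window concern in the first point is again simple byte arithmetic; a minimal sketch (illustrative names, assuming a 16-byte window as stated above):

```python
def crosses_fetch_window(start, size, window=16):
    """True if an instruction occupying bytes [start, start+size) straddles
    a fetch-window boundary (16 bytes on modern X86, per the discussion)."""
    return start // window != (start + size - 1) // window

# A 5-byte relaxed jump placed at offset 12 spans bytes 12..16 and crosses
# the boundary at byte 16; the 2-byte short form at the same offset does not.
print(crosses_fetch_window(12, 5))  # True
print(crosses_fetch_window(12, 2))  # False
```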

On the whole, I expect to see small swings (i.e. the typical alignment change problem), but nothing major or systematic in either direction.


Repository:
  rG LLVM Github Monorepo

https://reviews.llvm.org/D75203

Files:
  llvm/include/llvm/MC/MCAsmBackend.h
  llvm/include/llvm/MC/MCAssembler.h
  llvm/include/llvm/MC/MCFragment.h
  llvm/lib/MC/MCAssembler.cpp
  llvm/lib/MC/MCFragment.cpp
  llvm/lib/Target/X86/MCTargetDesc/X86AsmBackend.cpp
  llvm/test/MC/X86/align-via-relaxation.s

-------------- next part --------------
A non-text attachment was scrubbed...
Name: D75203.246795.patch
Type: text/x-patch
Size: 16530 bytes
Desc: not available
URL: <http://lists.llvm.org/pipermail/llvm-commits/attachments/20200226/41df2ead/attachment.bin>
