[PATCH] D36059: [memops] Add a new pass to inject fast-path code for specific library function calls.

Chandler Carruth via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Sun Jul 30 06:15:40 PDT 2017


chandlerc created this revision.
Herald added subscribers: eraman, mgorny, mcrosier, mehdi_amini, sanjoy.

The initial motivation is providing fast, inline paths for memset and
memcpy with a dynamic size when that size happens to be small. Because
LLVM is *very* good at forming memset and memcpy out of raw loops and
many other constructs, it is especially important that these remain fast
even when used in circumstances where the library function call overhead
is unacceptably large.

The first attempt at addressing this was https://reviews.llvm.org/D35750, but that proved only to
exacerbate the issue rather than fix it.

It turns out that, at least on x86, we can emit a very minimal loop
behind a dynamic test on the size and dramatically improve performance
for sizes that happen to be small.

To make all of this work *well* requires a lot of careful logic:

- We need to analyze and discover scaling of the size fed to memset and memcpy.
- We can't widen past the alignment.
- We need to emit any loop with *exactly* the right IR to get efficient lowering from the backend.
- It needs to run quite late so that it isn't perturbed by other passes that try to "optimize" the loop.
- We need to avoid this in optsize and minsize functions.
- We need to generate checks for zero-length operations before the loop. This ends up being an even faster path.
- But we need to avoid generating *redundant* checks, which means adding a mini predicate analysis just to find existing zero checks. These turn out to be incredibly common because so many of these routines are created out of loops from which we have already extracted just such a predicate.

There is still more we should do here such as:

1. Don't emit these for cold libcalls.
2. Use value profile data (if available) to bias at least the branch weights and potentially the actual sizes.

However, for at least a few benchmarks here that end up hitting this very hard,
I'm seeing between 20% and 50% improvements already. Naturally, I'll be
gathering more data both on performance impact and code size impact, but
I wanted to go ahead and get this out for review.


https://reviews.llvm.org/D36059

Files:
  include/llvm/Analysis/TargetTransformInfo.h
  include/llvm/Analysis/TargetTransformInfoImpl.h
  include/llvm/InitializePasses.h
  include/llvm/LinkAllPasses.h
  include/llvm/Transforms/Scalar/FastPathLibCalls.h
  lib/Analysis/TargetTransformInfo.cpp
  lib/Passes/PassBuilder.cpp
  lib/Passes/PassRegistry.def
  lib/Target/X86/X86TargetTransformInfo.cpp
  lib/Target/X86/X86TargetTransformInfo.h
  lib/Transforms/Scalar/CMakeLists.txt
  lib/Transforms/Scalar/FastPathLibCalls.cpp
  test/Other/new-pm-defaults.ll
  test/Other/new-pm-thinlto-defaults.ll
  test/Transforms/FastPathLibCalls/X86/lit.local.cfg
  test/Transforms/FastPathLibCalls/X86/memops.ll
  test/Transforms/FastPathLibCalls/basic.ll
  test/Transforms/FastPathLibCalls/memops.ll

-------------- next part --------------
A non-text attachment was scrubbed...
Name: D36059.108829.patch
Type: text/x-patch
Size: 71324 bytes
Desc: not available
URL: <http://lists.llvm.org/pipermail/llvm-commits/attachments/20170730/49f726bd/attachment.bin>
