[llvm] b74c6d2 - [InlineFunction] Disable emission of alignment assumptions by default

Nikita Popov via llvm-commits llvm-commits at lists.llvm.org
Thu Apr 30 14:13:17 PDT 2020


Author: Nikita Popov
Date: 2020-04-30T23:12:54+02:00
New Revision: b74c6d2c9d8e57db96742094cc4daf98a258b412

URL: https://github.com/llvm/llvm-project/commit/b74c6d2c9d8e57db96742094cc4daf98a258b412
DIFF: https://github.com/llvm/llvm-project/commit/b74c6d2c9d8e57db96742094cc4daf98a258b412.diff

LOG: [InlineFunction] Disable emission of alignment assumptions by default

In D74183 clang started emitting alignment for sret parameters
unconditionally. This caused a 1.5% compile-time regression on
tramp3d-v4. The reason is that we now generate many instances of IR like

    %ptrint = ptrtoint %class.GuardLayers* %guards_m to i64
    %maskedptr = and i64 %ptrint, 3
    %maskcond = icmp eq i64 %maskedptr, 0
    tail call void @llvm.assume(i1 %maskcond)

to preserve the alignment information during inlining. Based on IR
analysis, these assumptions also regress optimization. The attached
phase-ordering test case illustrates two issues: one is instruction-count
based optimization heuristics, which are affected by the four additional
instructions of the assumption; the other is blocking of SROA due to the
ptrtoint casts (PR45763).
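
To make the second issue concrete, here is a minimal sketch of the
post-inlining IR (function name invented for illustration, not taken
from this patch): the only use of the alloca besides the store is the
ptrtoint feeding the assumption, yet SROA conservatively treats the
ptrtoint as an escape and does not promote the alloca:

    define void @sroa_blocked_sketch() {
      %alloca = alloca i64, align 8
      ; The alignment assumption materializes its check via ptrtoint.
      %ptrint = ptrtoint i64* %alloca to i64
      %maskedptr = and i64 %ptrint, 7
      %maskcond = icmp eq i64 %maskedptr, 0
      call void @llvm.assume(i1 %maskcond)
      ; Without the ptrtoint above, SROA would eliminate this alloca.
      store i64 0, i64* %alloca, align 8
      ret void
    }

    declare void @llvm.assume(i1)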

We already encountered the same problem in Rust, where we (unlike
Clang) generally prefer to emit alignment information absolutely
everywhere it is available. We were only able to do this after
hardcoding -preserve-alignment-assumptions-during-inlining=false,
because we were seeing significant optimization and compile-time
regressions otherwise.

This patch disables -preserve-alignment-assumptions-during-inlining
by default, because we should not be punishing people for adding
more alignment annotations.
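
The old behavior remains available behind the flag. The opt invocation
below matches the RUN lines of the new test; forwarding the cl::opt
through clang via -mllvm is an assumption about how a frontend user
would flip it, not something exercised by this patch:

    opt -S -O2 -preserve-alignment-assumptions-during-inlining=1 input.ll
    clang -O2 -c -mllvm -preserve-alignment-assumptions-during-inlining=true file.c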

Once the assume bundle work shakes out and we can represent (and use)
alignment assumptions using assume bundles, it should be possible to
re-enable this with reduced overhead.
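
Roughly, the bundle form would attach the alignment directly to the
assume call instead of materializing a check; a sketch of that
representation (not something this patch emits), using the operands
from the example above:

    call void @llvm.assume(i1 true) [ "align"(%class.GuardLayers* %guards_m, i64 4) ]

Because the pointer only appears as a bundle operand, no ptrtoint/and/icmp
sequence is left behind, which should avoid both the instruction-count
and the SROA problems described above.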

Differential Revision: https://reviews.llvm.org/D76886

Added: 
    llvm/test/Transforms/PhaseOrdering/inlining-alignment-assumptions.ll

Modified: 
    llvm/lib/Transforms/Utils/InlineFunction.cpp

Removed: 
    


################################################################################
diff --git a/llvm/lib/Transforms/Utils/InlineFunction.cpp b/llvm/lib/Transforms/Utils/InlineFunction.cpp
index 94e676d5fc2e..bcba256fde16 100644
--- a/llvm/lib/Transforms/Utils/InlineFunction.cpp
+++ b/llvm/lib/Transforms/Utils/InlineFunction.cpp
@@ -79,9 +79,12 @@ EnableNoAliasConversion("enable-noalias-to-md-conversion", cl::init(true),
   cl::Hidden,
   cl::desc("Convert noalias attributes to metadata during inlining."));
 
+// Disabled by default, because the added alignment assumptions may increase
+// compile-time and block optimizations. This option is not suitable for use
+// with frontends that emit comprehensive parameter alignment annotations.
 static cl::opt<bool>
 PreserveAlignmentAssumptions("preserve-alignment-assumptions-during-inlining",
-  cl::init(true), cl::Hidden,
+  cl::init(false), cl::Hidden,
   cl::desc("Convert align attributes to assumptions during inlining."));
 
 static cl::opt<bool> UpdateReturnAttributes(

diff --git a/llvm/test/Transforms/PhaseOrdering/inlining-alignment-assumptions.ll b/llvm/test/Transforms/PhaseOrdering/inlining-alignment-assumptions.ll
new file mode 100644
index 000000000000..5a9d4442d9e3
--- /dev/null
+++ b/llvm/test/Transforms/PhaseOrdering/inlining-alignment-assumptions.ll
@@ -0,0 +1,114 @@
+; NOTE: Assertions have been autogenerated by utils/update_test_checks.py
+; RUN: opt -S -O2 -preserve-alignment-assumptions-during-inlining=0 < %s | FileCheck %s --check-prefixes=CHECK,ASSUMPTIONS-OFF,FALLBACK-0
+; RUN: opt -S -O2 -preserve-alignment-assumptions-during-inlining=1 < %s | FileCheck %s --check-prefixes=CHECK,ASSUMPTIONS-ON,FALLBACK-1
+; RUN: opt -S -O2 < %s | FileCheck %s --check-prefixes=CHECK,ASSUMPTIONS-OFF,FALLBACK-DEFAULT
+
+target datalayout = "e-p:64:64-p5:32:32-A5"
+
+; This illustrates an optimization difference caused by instruction counting
+; heuristics, which are affected by the additional instructions of the
+; alignment assumption.
+
+define internal i1 @callee1(i1 %c, i64* align 8 %ptr) {
+  store volatile i64 0, i64* %ptr
+  ret i1 %c
+}
+
+define void @caller1(i1 %c, i64* align 1 %ptr) {
+; ASSUMPTIONS-OFF-LABEL: @caller1(
+; ASSUMPTIONS-OFF-NEXT:    br i1 [[C:%.*]], label [[TRUE2:%.*]], label [[FALSE2:%.*]]
+; ASSUMPTIONS-OFF:       true2:
+; ASSUMPTIONS-OFF-NEXT:    store volatile i64 0, i64* [[PTR:%.*]], align 8
+; ASSUMPTIONS-OFF-NEXT:    store volatile i64 -1, i64* [[PTR]], align 4
+; ASSUMPTIONS-OFF-NEXT:    store volatile i64 -1, i64* [[PTR]], align 4
+; ASSUMPTIONS-OFF-NEXT:    store volatile i64 -1, i64* [[PTR]], align 4
+; ASSUMPTIONS-OFF-NEXT:    store volatile i64 -1, i64* [[PTR]], align 4
+; ASSUMPTIONS-OFF-NEXT:    store volatile i64 -1, i64* [[PTR]], align 4
+; ASSUMPTIONS-OFF-NEXT:    store volatile i64 2, i64* [[PTR]], align 4
+; ASSUMPTIONS-OFF-NEXT:    ret void
+; ASSUMPTIONS-OFF:       false2:
+; ASSUMPTIONS-OFF-NEXT:    store volatile i64 1, i64* [[PTR]], align 4
+; ASSUMPTIONS-OFF-NEXT:    store volatile i64 0, i64* [[PTR]], align 8
+; ASSUMPTIONS-OFF-NEXT:    store volatile i64 -1, i64* [[PTR]], align 4
+; ASSUMPTIONS-OFF-NEXT:    store volatile i64 -1, i64* [[PTR]], align 4
+; ASSUMPTIONS-OFF-NEXT:    store volatile i64 -1, i64* [[PTR]], align 4
+; ASSUMPTIONS-OFF-NEXT:    store volatile i64 -1, i64* [[PTR]], align 4
+; ASSUMPTIONS-OFF-NEXT:    store volatile i64 -1, i64* [[PTR]], align 4
+; ASSUMPTIONS-OFF-NEXT:    store volatile i64 3, i64* [[PTR]], align 4
+; ASSUMPTIONS-OFF-NEXT:    ret void
+;
+; ASSUMPTIONS-ON-LABEL: @caller1(
+; ASSUMPTIONS-ON-NEXT:    br i1 [[C:%.*]], label [[TRUE1:%.*]], label [[FALSE1:%.*]]
+; ASSUMPTIONS-ON:       true1:
+; ASSUMPTIONS-ON-NEXT:    [[C_PR:%.*]] = phi i1 [ false, [[FALSE1]] ], [ true, [[TMP0:%.*]] ]
+; ASSUMPTIONS-ON-NEXT:    [[PTRINT:%.*]] = ptrtoint i64* [[PTR:%.*]] to i64
+; ASSUMPTIONS-ON-NEXT:    [[MASKEDPTR:%.*]] = and i64 [[PTRINT]], 7
+; ASSUMPTIONS-ON-NEXT:    [[MASKCOND:%.*]] = icmp eq i64 [[MASKEDPTR]], 0
+; ASSUMPTIONS-ON-NEXT:    tail call void @llvm.assume(i1 [[MASKCOND]])
+; ASSUMPTIONS-ON-NEXT:    store volatile i64 0, i64* [[PTR]], align 8
+; ASSUMPTIONS-ON-NEXT:    store volatile i64 -1, i64* [[PTR]], align 8
+; ASSUMPTIONS-ON-NEXT:    store volatile i64 -1, i64* [[PTR]], align 8
+; ASSUMPTIONS-ON-NEXT:    store volatile i64 -1, i64* [[PTR]], align 8
+; ASSUMPTIONS-ON-NEXT:    store volatile i64 -1, i64* [[PTR]], align 8
+; ASSUMPTIONS-ON-NEXT:    store volatile i64 -1, i64* [[PTR]], align 8
+; ASSUMPTIONS-ON-NEXT:    br i1 [[C_PR]], label [[TRUE2:%.*]], label [[FALSE2:%.*]]
+; ASSUMPTIONS-ON:       false1:
+; ASSUMPTIONS-ON-NEXT:    store volatile i64 1, i64* [[PTR]], align 4
+; ASSUMPTIONS-ON-NEXT:    br label [[TRUE1]]
+; ASSUMPTIONS-ON:       true2:
+; ASSUMPTIONS-ON-NEXT:    store volatile i64 2, i64* [[PTR]], align 8
+; ASSUMPTIONS-ON-NEXT:    ret void
+; ASSUMPTIONS-ON:       false2:
+; ASSUMPTIONS-ON-NEXT:    store volatile i64 3, i64* [[PTR]], align 8
+; ASSUMPTIONS-ON-NEXT:    ret void
+;
+  br i1 %c, label %true1, label %false1
+
+true1:
+  %c2 = call i1 @callee1(i1 %c, i64* %ptr)
+  store volatile i64 -1, i64* %ptr
+  store volatile i64 -1, i64* %ptr
+  store volatile i64 -1, i64* %ptr
+  store volatile i64 -1, i64* %ptr
+  store volatile i64 -1, i64* %ptr
+  br i1 %c2, label %true2, label %false2
+
+false1:
+  store volatile i64 1, i64* %ptr
+  br label %true1
+
+true2:
+  store volatile i64 2, i64* %ptr
+  ret void
+
+false2:
+  store volatile i64 3, i64* %ptr
+  ret void
+}
+
+; This test illustrates that alignment assumptions may prevent SROA.
+; See PR45763.
+
+define internal void @callee2(i64* noalias sret align 8 %arg) {
+  store i64 0, i64* %arg, align 8
+  ret void
+}
+
+define amdgpu_kernel void @caller2() {
+; ASSUMPTIONS-OFF-LABEL: @caller2(
+; ASSUMPTIONS-OFF-NEXT:    ret void
+;
+; ASSUMPTIONS-ON-LABEL: @caller2(
+; ASSUMPTIONS-ON-NEXT:    [[ALLOCA:%.*]] = alloca i64, align 8, addrspace(5)
+; ASSUMPTIONS-ON-NEXT:    [[CAST:%.*]] = addrspacecast i64 addrspace(5)* [[ALLOCA]] to i64*
+; ASSUMPTIONS-ON-NEXT:    [[PTRINT:%.*]] = ptrtoint i64* [[CAST]] to i64
+; ASSUMPTIONS-ON-NEXT:    [[MASKEDPTR:%.*]] = and i64 [[PTRINT]], 7
+; ASSUMPTIONS-ON-NEXT:    [[MASKCOND:%.*]] = icmp eq i64 [[MASKEDPTR]], 0
+; ASSUMPTIONS-ON-NEXT:    call void @llvm.assume(i1 [[MASKCOND]])
+; ASSUMPTIONS-ON-NEXT:    ret void
+;
+  %alloca = alloca i64, align 8, addrspace(5)
+  %cast = addrspacecast i64 addrspace(5)* %alloca to i64*
+  call void @callee2(i64* sret align 8 %cast)
+  ret void
+}



