[llvm-commits] [llvm] r173230 - in /llvm/trunk: docs/LangRef.rst include/llvm-c/Core.h include/llvm/IR/Attributes.h lib/AsmParser/LLLexer.cpp lib/AsmParser/LLParser.cpp lib/AsmParser/LLToken.h lib/CodeGen/StackProtector.cpp lib/IR/Attributes.cpp lib/Target/CppBackend/CPPBackend.cpp lib/Transforms/IPO/Inliner.cpp test/CodeGen/X86/stack-protector.ll test/Transforms/Inline/inline_ssp.ll utils/kate/llvm.xml utils/vim/llvm.vim
Bill Wendling
wendling at apple.com
Tue Jan 22 22:45:28 PST 2013
I forgot to mention that the patch is by Josh Magee!
-bw
On Jan 22, 2013, at 10:41 PM, Bill Wendling <isanbard at gmail.com> wrote:
> Author: void
> Date: Wed Jan 23 00:41:41 2013
> New Revision: 173230
>
> URL: http://llvm.org/viewvc/llvm-project?rev=173230&view=rev
> Log:
> Add the IR attribute 'sspstrong'.
>
> SSPStrong applies a heuristic to insert stack protectors in these situations:
>
> * A protector is required for functions which contain an array, regardless of
> its type or length.
>
> * A protector is required for functions which contain a structure/union which
> contains an array, regardless of type or length. Note that there is no limit
> to the depth of nesting.
>
> * A protector is required when the address of a local (i.e., stack-based)
> variable is exposed, e.g., when a local's address is taken as part of the
> RHS of an assignment or passed as a function argument.
>
> This patch implements the SSPStrong attribute as equivalent to
> SSPRequired. This will change in a subsequent patch.
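
[Editor's note: the new attribute is written like any other function attribute. A minimal hand-written sketch, not taken from the patch, with illustrative function and buffer names; it uses the same 2013-era typed-pointer IR syntax as the tests below:]

```llvm
; Hypothetical example: under the strong heuristic described above, the
; local [16 x i8] array alone is enough to require a protector.
@fmt = private unnamed_addr constant [4 x i8] c"%s\0A\00"

define void @copy_into_buf(i8* %src) nounwind sspstrong {
entry:
  %buf = alloca [16 x i8], align 16   ; local array => protector required
  %p = getelementptr inbounds [16 x i8]* %buf, i32 0, i32 0
  %call = call i8* @strcpy(i8* %p, i8* %src)
  ret void
}

declare i8* @strcpy(i8*, i8*) nounwind
```
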
>
> Added:
> llvm/trunk/test/Transforms/Inline/inline_ssp.ll
> Modified:
> llvm/trunk/docs/LangRef.rst
> llvm/trunk/include/llvm-c/Core.h
> llvm/trunk/include/llvm/IR/Attributes.h
> llvm/trunk/lib/AsmParser/LLLexer.cpp
> llvm/trunk/lib/AsmParser/LLParser.cpp
> llvm/trunk/lib/AsmParser/LLToken.h
> llvm/trunk/lib/CodeGen/StackProtector.cpp
> llvm/trunk/lib/IR/Attributes.cpp
> llvm/trunk/lib/Target/CppBackend/CPPBackend.cpp
> llvm/trunk/lib/Transforms/IPO/Inliner.cpp
> llvm/trunk/test/CodeGen/X86/stack-protector.ll
> llvm/trunk/utils/kate/llvm.xml
> llvm/trunk/utils/vim/llvm.vim
>
> Modified: llvm/trunk/docs/LangRef.rst
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/docs/LangRef.rst?rev=173230&r1=173229&r2=173230&view=diff
> ==============================================================================
> --- llvm/trunk/docs/LangRef.rst (original)
> +++ llvm/trunk/docs/LangRef.rst Wed Jan 23 00:41:41 2013
> @@ -837,8 +837,16 @@
>
> If a function that has an ``sspreq`` attribute is inlined into a
> function that doesn't have an ``sspreq`` attribute or which has an
> - ``ssp`` attribute, then the resulting function will have an
> - ``sspreq`` attribute.
> + ``ssp`` or ``sspstrong`` attribute, then the resulting function will have
> + an ``sspreq`` attribute.
> +``sspstrong``
> + This attribute indicates that the function should emit a stack smashing
> + protector. Currently this attribute has the same effect as
> + ``sspreq``. This overrides the ``ssp`` function attribute.
> +
> + If a function that has an ``sspstrong`` attribute is inlined into a
> + function that doesn't have an ``sspstrong`` attribute, then the
> + resulting function will have an ``sspstrong`` attribute.
> ``uwtable``
> This attribute indicates that the ABI being targeted requires that
> an unwind table entry be produce for this function even if we can
>
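
[Editor's note: the inlining rule in the quoted LangRef text can be illustrated with a hand-written sketch, not taken from the patch's tests. The protection levels form a strict ordering, sspreq > sspstrong > ssp, and inlining bumps the caller up to the callee's level:]

```llvm
; Before inlining: the caller has ssp, the callee has sspstrong.
define void @callee() sspstrong {
  ret void
}

define void @caller() ssp {
  call void @callee()
  ret void
}

; After the inliner runs, sspstrong overrides ssp, so the caller
; ends up as:
;   define void @caller() sspstrong { ... }
```
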
> Modified: llvm/trunk/include/llvm-c/Core.h
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/include/llvm-c/Core.h?rev=173230&r1=173229&r2=173230&view=diff
> ==============================================================================
> --- llvm/trunk/include/llvm-c/Core.h (original)
> +++ llvm/trunk/include/llvm-c/Core.h Wed Jan 23 00:41:41 2013
> @@ -173,10 +173,11 @@
> LLVMUWTable = 1 << 30,
> LLVMNonLazyBind = 1 << 31
>
> - /* FIXME: This attribute is currently not included in the C API as
> + /* FIXME: These attributes are currently not included in the C API as
> a temporary measure until the API/ABI impact to the C API is understood
> and the path forward agreed upon.
> - LLVMAddressSafety = 1ULL << 32
> + LLVMAddressSafety = 1ULL << 32,
> + LLVMStackProtectStrongAttribute = 1ULL<<33
> */
> } LLVMAttribute;
>
>
> Modified: llvm/trunk/include/llvm/IR/Attributes.h
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/include/llvm/IR/Attributes.h?rev=173230&r1=173229&r2=173230&view=diff
> ==============================================================================
> --- llvm/trunk/include/llvm/IR/Attributes.h (original)
> +++ llvm/trunk/include/llvm/IR/Attributes.h Wed Jan 23 00:41:41 2013
> @@ -90,6 +90,7 @@
> ///< alignstack=(1))
> StackProtect, ///< Stack protection.
> StackProtectReq, ///< Stack protection required.
> + StackProtectStrong, ///< Strong Stack protection.
> StructRet, ///< Hidden pointer to structure to return
> UWTable, ///< Function must be in a unwind table
> ZExt, ///< Zero extended before/after call
> @@ -463,6 +464,7 @@
> .removeAttribute(Attribute::OptimizeForSize)
> .removeAttribute(Attribute::StackProtect)
> .removeAttribute(Attribute::StackProtectReq)
> + .removeAttribute(Attribute::StackProtectStrong)
> .removeAttribute(Attribute::NoRedZone)
> .removeAttribute(Attribute::NoImplicitFloat)
> .removeAttribute(Attribute::Naked)
>
> Modified: llvm/trunk/lib/AsmParser/LLLexer.cpp
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/AsmParser/LLLexer.cpp?rev=173230&r1=173229&r2=173230&view=diff
> ==============================================================================
> --- llvm/trunk/lib/AsmParser/LLLexer.cpp (original)
> +++ llvm/trunk/lib/AsmParser/LLLexer.cpp Wed Jan 23 00:41:41 2013
> @@ -549,6 +549,7 @@
> KEYWORD(optsize);
> KEYWORD(ssp);
> KEYWORD(sspreq);
> + KEYWORD(sspstrong);
> KEYWORD(noredzone);
> KEYWORD(noimplicitfloat);
> KEYWORD(naked);
>
> Modified: llvm/trunk/lib/AsmParser/LLParser.cpp
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/AsmParser/LLParser.cpp?rev=173230&r1=173229&r2=173230&view=diff
> ==============================================================================
> --- llvm/trunk/lib/AsmParser/LLParser.cpp (original)
> +++ llvm/trunk/lib/AsmParser/LLParser.cpp Wed Jan 23 00:41:41 2013
> @@ -956,6 +956,7 @@
> case lltok::kw_returns_twice: B.addAttribute(Attribute::ReturnsTwice); break;
> case lltok::kw_ssp: B.addAttribute(Attribute::StackProtect); break;
> case lltok::kw_sspreq: B.addAttribute(Attribute::StackProtectReq); break;
> + case lltok::kw_sspstrong: B.addAttribute(Attribute::StackProtectStrong); break;
> case lltok::kw_uwtable: B.addAttribute(Attribute::UWTable); break;
> case lltok::kw_noduplicate: B.addAttribute(Attribute::NoDuplicate); break;
>
> @@ -1050,11 +1051,11 @@
> case lltok::kw_readonly: case lltok::kw_inlinehint:
> case lltok::kw_alwaysinline: case lltok::kw_optsize:
> case lltok::kw_ssp: case lltok::kw_sspreq:
> - case lltok::kw_noredzone: case lltok::kw_noimplicitfloat:
> - case lltok::kw_naked: case lltok::kw_nonlazybind:
> - case lltok::kw_address_safety: case lltok::kw_minsize:
> - case lltok::kw_alignstack: case lltok::kw_align:
> - case lltok::kw_noduplicate:
> + case lltok::kw_sspstrong: case lltok::kw_noimplicitfloat:
> + case lltok::kw_noredzone: case lltok::kw_naked:
> + case lltok::kw_nonlazybind: case lltok::kw_address_safety:
> + case lltok::kw_minsize: case lltok::kw_alignstack:
> + case lltok::kw_align: case lltok::kw_noduplicate:
> HaveError |= Error(Lex.getLoc(), "invalid use of function-only attribute");
> break;
> }
>
> Modified: llvm/trunk/lib/AsmParser/LLToken.h
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/AsmParser/LLToken.h?rev=173230&r1=173229&r2=173230&view=diff
> ==============================================================================
> --- llvm/trunk/lib/AsmParser/LLToken.h (original)
> +++ llvm/trunk/lib/AsmParser/LLToken.h Wed Jan 23 00:41:41 2013
> @@ -110,6 +110,7 @@
> kw_optsize,
> kw_ssp,
> kw_sspreq,
> + kw_sspstrong,
> kw_noredzone,
> kw_noimplicitfloat,
> kw_naked,
>
> Modified: llvm/trunk/lib/CodeGen/StackProtector.cpp
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/CodeGen/StackProtector.cpp?rev=173230&r1=173229&r2=173230&view=diff
> ==============================================================================
> --- llvm/trunk/lib/CodeGen/StackProtector.cpp (original)
> +++ llvm/trunk/lib/CodeGen/StackProtector.cpp Wed Jan 23 00:41:41 2013
> @@ -141,6 +141,12 @@
> Attribute::StackProtectReq))
> return true;
>
> + // FIXME: Dummy SSP-strong implementation. Default to required until
> + // strong heuristic is implemented.
> + if (F->getAttributes().hasAttribute(AttributeSet::FunctionIndex,
> + Attribute::StackProtectStrong))
> + return true;
> +
> if (!F->getAttributes().hasAttribute(AttributeSet::FunctionIndex,
> Attribute::StackProtect))
> return false;
>
> Modified: llvm/trunk/lib/IR/Attributes.cpp
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/IR/Attributes.cpp?rev=173230&r1=173229&r2=173230&view=diff
> ==============================================================================
> --- llvm/trunk/lib/IR/Attributes.cpp (original)
> +++ llvm/trunk/lib/IR/Attributes.cpp Wed Jan 23 00:41:41 2013
> @@ -206,6 +206,8 @@
> Result += "ssp ";
> if (hasAttribute(Attribute::StackProtectReq))
> Result += "sspreq ";
> + if (hasAttribute(Attribute::StackProtectStrong))
> + Result += "sspstrong ";
> if (hasAttribute(Attribute::NoRedZone))
> Result += "noredzone ";
> if (hasAttribute(Attribute::NoImplicitFloat))
> @@ -487,6 +489,7 @@
> case Attribute::AddressSafety: return 1ULL << 32;
> case Attribute::MinSize: return 1ULL << 33;
> case Attribute::NoDuplicate: return 1ULL << 34;
> + case Attribute::StackProtectStrong: return 1ULL << 35;
> }
> llvm_unreachable("Unsupported attribute type");
> }
>
> Modified: llvm/trunk/lib/Target/CppBackend/CPPBackend.cpp
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/CppBackend/CPPBackend.cpp?rev=173230&r1=173229&r2=173230&view=diff
> ==============================================================================
> --- llvm/trunk/lib/Target/CppBackend/CPPBackend.cpp (original)
> +++ llvm/trunk/lib/Target/CppBackend/CPPBackend.cpp Wed Jan 23 00:41:41 2013
> @@ -499,6 +499,7 @@
> HANDLE_ATTR(OptimizeForSize);
> HANDLE_ATTR(StackProtect);
> HANDLE_ATTR(StackProtectReq);
> + HANDLE_ATTR(StackProtectStrong);
> HANDLE_ATTR(NoCapture);
> HANDLE_ATTR(NoRedZone);
> HANDLE_ATTR(NoImplicitFloat);
>
> Modified: llvm/trunk/lib/Transforms/IPO/Inliner.cpp
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Transforms/IPO/Inliner.cpp?rev=173230&r1=173229&r2=173230&view=diff
> ==============================================================================
> --- llvm/trunk/lib/Transforms/IPO/Inliner.cpp (original)
> +++ llvm/trunk/lib/Transforms/IPO/Inliner.cpp Wed Jan 23 00:41:41 2013
> @@ -72,6 +72,40 @@
> typedef DenseMap<ArrayType*, std::vector<AllocaInst*> >
> InlinedArrayAllocasTy;
>
> +/// \brief If the inlined function had a higher stack protection level than the
> +/// calling function, then bump up the caller's stack protection level.
> +static void AdjustCallerSSPLevel(Function *Caller, Function *Callee) {
> + // If upgrading the SSP attribute, clear out the old SSP Attributes first.
> + // Having multiple SSP attributes doesn't actually hurt, but it adds useless
> + // clutter to the IR.
> + AttrBuilder B;
> + B.addAttribute(Attribute::StackProtect)
> + .addAttribute(Attribute::StackProtectStrong);
> + AttributeSet OldSSPAttr = AttributeSet::get(Caller->getContext(),
> + AttributeSet::FunctionIndex,
> + B);
> + AttributeSet CallerAttr = Caller->getAttributes(),
> + CalleeAttr = Callee->getAttributes();
> +
> + if (CalleeAttr.hasAttribute(AttributeSet::FunctionIndex,
> + Attribute::StackProtectReq)) {
> + Caller->removeAttributes(AttributeSet::FunctionIndex, OldSSPAttr);
> + Caller->addFnAttr(Attribute::StackProtectReq);
> + } else if (CalleeAttr.hasAttribute(AttributeSet::FunctionIndex,
> + Attribute::StackProtectStrong) &&
> + !CallerAttr.hasAttribute(AttributeSet::FunctionIndex,
> + Attribute::StackProtectReq)) {
> + Caller->removeAttributes(AttributeSet::FunctionIndex, OldSSPAttr);
> + Caller->addFnAttr(Attribute::StackProtectStrong);
> + } else if (CalleeAttr.hasAttribute(AttributeSet::FunctionIndex,
> + Attribute::StackProtect) &&
> + !CallerAttr.hasAttribute(AttributeSet::FunctionIndex,
> + Attribute::StackProtectReq) &&
> + !CallerAttr.hasAttribute(AttributeSet::FunctionIndex,
> + Attribute::StackProtectStrong))
> + Caller->addFnAttr(Attribute::StackProtect);
> +}
> +
> /// InlineCallIfPossible - If it is possible to inline the specified call site,
> /// do so and update the CallGraph for this operation.
> ///
> @@ -91,16 +125,7 @@
> if (!InlineFunction(CS, IFI, InsertLifetime))
> return false;
>
> - // If the inlined function had a higher stack protection level than the
> - // calling function, then bump up the caller's stack protection level.
> - if (Callee->getAttributes().hasAttribute(AttributeSet::FunctionIndex,
> - Attribute::StackProtectReq))
> - Caller->addFnAttr(Attribute::StackProtectReq);
> - else if (Callee->getAttributes().hasAttribute(AttributeSet::FunctionIndex,
> - Attribute::StackProtect) &&
> - !Caller->getAttributes().hasAttribute(AttributeSet::FunctionIndex,
> - Attribute::StackProtectReq))
> - Caller->addFnAttr(Attribute::StackProtect);
> + AdjustCallerSSPLevel(Caller, Callee);
>
> // Look at all of the allocas that we inlined through this call site. If we
> // have already inlined other allocas through other calls into this function,
>
> Modified: llvm/trunk/test/CodeGen/X86/stack-protector.ll
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/stack-protector.ll?rev=173230&r1=173229&r2=173230&view=diff
> ==============================================================================
> --- llvm/trunk/test/CodeGen/X86/stack-protector.ll (original)
> +++ llvm/trunk/test/CodeGen/X86/stack-protector.ll Wed Jan 23 00:41:41 2013
> @@ -1,28 +1,635 @@
> -; RUN: llc -mtriple=i386-pc-linux-gnu < %s -o - | grep %gs:
> -; RUN: llc -mtriple=x86_64-pc-linux-gnu < %s -o - | grep %fs:
> -; RUN: llc -code-model=kernel -mtriple=x86_64-pc-linux-gnu < %s -o - | grep %gs:
> -; RUN: llc -mtriple=x86_64-apple-darwin < %s -o - | grep "__stack_chk_guard"
> -; RUN: llc -mtriple=x86_64-apple-darwin < %s -o - | grep "__stack_chk_fail"
> -
> -@"\01LC" = internal constant [11 x i8] c"buf == %s\0A\00" ; <[11 x i8]*> [#uses=1]
> -
> -define void @test(i8* %a) nounwind ssp {
> -entry:
> - %a_addr = alloca i8* ; <i8**> [#uses=2]
> - %buf = alloca [8 x i8] ; <[8 x i8]*> [#uses=2]
> - %"alloca point" = bitcast i32 0 to i32 ; <i32> [#uses=0]
> - store i8* %a, i8** %a_addr
> - %buf1 = bitcast [8 x i8]* %buf to i8* ; <i8*> [#uses=1]
> - %0 = load i8** %a_addr, align 4 ; <i8*> [#uses=1]
> - %1 = call i8* @strcpy(i8* %buf1, i8* %0) nounwind ; <i8*> [#uses=0]
> - %buf2 = bitcast [8 x i8]* %buf to i8* ; <i8*> [#uses=1]
> - %2 = call i32 (i8*, ...)* @printf(i8* getelementptr ([11 x i8]* @"\01LC", i32 0, i32 0), i8* %buf2) nounwind ; <i32> [#uses=0]
> - br label %return
> +; RUN: llc -mtriple=i386-pc-linux-gnu < %s -o - | FileCheck --check-prefix=LINUX-I386 %s
> +; RUN: llc -mtriple=x86_64-pc-linux-gnu < %s -o - | FileCheck --check-prefix=LINUX-X64 %s
> +; RUN: llc -code-model=kernel -mtriple=x86_64-pc-linux-gnu < %s -o - | FileCheck --check-prefix=LINUX-KERNEL-X64 %s
> +; RUN: llc -mtriple=x86_64-apple-darwin < %s -o - | FileCheck --check-prefix=DARWIN-X64 %s
> +; FIXME: Update and expand test when strong heuristic is implemented.
>
> -return: ; preds = %entry
> - ret void
> +%struct.foo = type { [16 x i8] }
> +%struct.foo.0 = type { [4 x i8] }
> +
> +@.str = private unnamed_addr constant [4 x i8] c"%s\0A\00", align 1
> +
> +; test1a: array of [16 x i8]
> +; no ssp attribute
> +; Requires no protector.
> +define void @test1a(i8* %a) nounwind uwtable {
> +entry:
> +; LINUX-I386: test1a:
> +; LINUX-I386-NOT: calll __stack_chk_fail
> +; LINUX-I386: .cfi_endproc
> +
> +; LINUX-X64: test1a:
> +; LINUX-X64-NOT: callq __stack_chk_fail
> +; LINUX-X64: .cfi_endproc
> +
> +; LINUX-KERNEL-X64: test1a:
> +; LINUX-KERNEL-X64-NOT: callq __stack_chk_fail
> +; LINUX-KERNEL-X64: .cfi_endproc
> +
> +; DARWIN-X64: test1a:
> +; DARWIN-X64-NOT: callq ___stack_chk_fail
> +; DARWIN-X64: .cfi_endproc
> + %a.addr = alloca i8*, align 8
> + %buf = alloca [16 x i8], align 16
> + store i8* %a, i8** %a.addr, align 8
> + %arraydecay = getelementptr inbounds [16 x i8]* %buf, i32 0, i32 0
> + %0 = load i8** %a.addr, align 8
> + %call = call i8* @strcpy(i8* %arraydecay, i8* %0)
> + %arraydecay1 = getelementptr inbounds [16 x i8]* %buf, i32 0, i32 0
> + %call2 = call i32 (i8*, ...)* @printf(i8* getelementptr inbounds ([4 x i8]* @.str, i32 0, i32 0), i8* %arraydecay1)
> + ret void
> +}
> +
> +; test1b: array of [16 x i8]
> +; ssp attribute
> +; Requires protector.
> +define void @test1b(i8* %a) nounwind uwtable ssp {
> +entry:
> +; LINUX-I386: test1b:
> +; LINUX-I386: mov{{l|q}} %gs:
> +; LINUX-I386: calll __stack_chk_fail
> +
> +; LINUX-X64: test1b:
> +; LINUX-X64: mov{{l|q}} %fs:
> +; LINUX-X64: callq __stack_chk_fail
> +
> +; LINUX-KERNEL-X64: test1b:
> +; LINUX-KERNEL-X64: mov{{l|q}} %gs:
> +; LINUX-KERNEL-X64: callq __stack_chk_fail
> +
> +; DARWIN-X64: test1b:
> +; DARWIN-X64: mov{{l|q}} ___stack_chk_guard
> +; DARWIN-X64: callq ___stack_chk_fail
> + %a.addr = alloca i8*, align 8
> + %buf = alloca [16 x i8], align 16
> + store i8* %a, i8** %a.addr, align 8
> + %arraydecay = getelementptr inbounds [16 x i8]* %buf, i32 0, i32 0
> + %0 = load i8** %a.addr, align 8
> + %call = call i8* @strcpy(i8* %arraydecay, i8* %0)
> + %arraydecay1 = getelementptr inbounds [16 x i8]* %buf, i32 0, i32 0
> + %call2 = call i32 (i8*, ...)* @printf(i8* getelementptr inbounds ([4 x i8]* @.str, i32 0, i32 0), i8* %arraydecay1)
> + ret void
> +}
> +
> +; test1c: array of [16 x i8]
> +; sspstrong attribute
> +; Requires protector.
> +define void @test1c(i8* %a) nounwind uwtable sspstrong {
> +entry:
> +; LINUX-I386: test1c:
> +; LINUX-I386: mov{{l|q}} %gs:
> +; LINUX-I386: calll __stack_chk_fail
> +
> +; LINUX-X64: test1c:
> +; LINUX-X64: mov{{l|q}} %fs:
> +; LINUX-X64: callq __stack_chk_fail
> +
> +; LINUX-KERNEL-X64: test1c:
> +; LINUX-KERNEL-X64: mov{{l|q}} %gs:
> +; LINUX-KERNEL-X64: callq __stack_chk_fail
> +
> +; DARWIN-X64: test1c:
> +; DARWIN-X64: mov{{l|q}} ___stack_chk_guard
> +; DARWIN-X64: callq ___stack_chk_fail
> + %a.addr = alloca i8*, align 8
> + %buf = alloca [16 x i8], align 16
> + store i8* %a, i8** %a.addr, align 8
> + %arraydecay = getelementptr inbounds [16 x i8]* %buf, i32 0, i32 0
> + %0 = load i8** %a.addr, align 8
> + %call = call i8* @strcpy(i8* %arraydecay, i8* %0)
> + %arraydecay1 = getelementptr inbounds [16 x i8]* %buf, i32 0, i32 0
> + %call2 = call i32 (i8*, ...)* @printf(i8* getelementptr inbounds ([4 x i8]* @.str, i32 0, i32 0), i8* %arraydecay1)
> + ret void
> +}
> +
> +; test1d: array of [16 x i8]
> +; sspreq attribute
> +; Requires protector.
> +define void @test1d(i8* %a) nounwind uwtable sspreq {
> +entry:
> +; LINUX-I386: test1d:
> +; LINUX-I386: mov{{l|q}} %gs:
> +; LINUX-I386: calll __stack_chk_fail
> +
> +; LINUX-X64: test1d:
> +; LINUX-X64: mov{{l|q}} %fs:
> +; LINUX-X64: callq __stack_chk_fail
> +
> +; LINUX-KERNEL-X64: test1d:
> +; LINUX-KERNEL-X64: mov{{l|q}} %gs:
> +; LINUX-KERNEL-X64: callq __stack_chk_fail
> +
> +; DARWIN-X64: test1d:
> +; DARWIN-X64: mov{{l|q}} ___stack_chk_guard
> +; DARWIN-X64: callq ___stack_chk_fail
> + %a.addr = alloca i8*, align 8
> + %buf = alloca [16 x i8], align 16
> + store i8* %a, i8** %a.addr, align 8
> + %arraydecay = getelementptr inbounds [16 x i8]* %buf, i32 0, i32 0
> + %0 = load i8** %a.addr, align 8
> + %call = call i8* @strcpy(i8* %arraydecay, i8* %0)
> + %arraydecay1 = getelementptr inbounds [16 x i8]* %buf, i32 0, i32 0
> + %call2 = call i32 (i8*, ...)* @printf(i8* getelementptr inbounds ([4 x i8]* @.str, i32 0, i32 0), i8* %arraydecay1)
> + ret void
> +}
> +
> +; test2a: struct { [16 x i8] }
> +; no ssp attribute
> +; Requires no protector.
> +define void @test2a(i8* %a) nounwind uwtable {
> +entry:
> +; LINUX-I386: test2a:
> +; LINUX-I386-NOT: calll __stack_chk_fail
> +; LINUX-I386: .cfi_endproc
> +
> +; LINUX-X64: test2a:
> +; LINUX-X64-NOT: callq __stack_chk_fail
> +; LINUX-X64: .cfi_endproc
> +
> +; LINUX-KERNEL-X64: test2a:
> +; LINUX-KERNEL-X64-NOT: callq __stack_chk_fail
> +; LINUX-KERNEL-X64: .cfi_endproc
> +
> +; DARWIN-X64: test2a:
> +; DARWIN-X64-NOT: callq ___stack_chk_fail
> +; DARWIN-X64: .cfi_endproc
> + %a.addr = alloca i8*, align 8
> + %b = alloca %struct.foo, align 1
> + store i8* %a, i8** %a.addr, align 8
> + %buf = getelementptr inbounds %struct.foo* %b, i32 0, i32 0
> + %arraydecay = getelementptr inbounds [16 x i8]* %buf, i32 0, i32 0
> + %0 = load i8** %a.addr, align 8
> + %call = call i8* @strcpy(i8* %arraydecay, i8* %0)
> + %buf1 = getelementptr inbounds %struct.foo* %b, i32 0, i32 0
> + %arraydecay2 = getelementptr inbounds [16 x i8]* %buf1, i32 0, i32 0
> + %call3 = call i32 (i8*, ...)* @printf(i8* getelementptr inbounds ([4 x i8]* @.str, i32 0, i32 0), i8* %arraydecay2)
> + ret void
> +}
> +
> +; test2b: struct { [16 x i8] }
> +; ssp attribute
> +; Requires protector.
> +define void @test2b(i8* %a) nounwind uwtable ssp {
> +entry:
> +; LINUX-I386: test2b:
> +; LINUX-I386: mov{{l|q}} %gs:
> +; LINUX-I386: calll __stack_chk_fail
> +
> +; LINUX-X64: test2b:
> +; LINUX-X64: mov{{l|q}} %fs:
> +; LINUX-X64: callq __stack_chk_fail
> +
> +; LINUX-KERNEL-X64: test2b:
> +; LINUX-KERNEL-X64: mov{{l|q}} %gs:
> +; LINUX-KERNEL-X64: callq __stack_chk_fail
> +
> +; DARWIN-X64: test2b:
> +; DARWIN-X64: mov{{l|q}} ___stack_chk_guard
> +; DARWIN-X64: callq ___stack_chk_fail
> + %a.addr = alloca i8*, align 8
> + %b = alloca %struct.foo, align 1
> + store i8* %a, i8** %a.addr, align 8
> + %buf = getelementptr inbounds %struct.foo* %b, i32 0, i32 0
> + %arraydecay = getelementptr inbounds [16 x i8]* %buf, i32 0, i32 0
> + %0 = load i8** %a.addr, align 8
> + %call = call i8* @strcpy(i8* %arraydecay, i8* %0)
> + %buf1 = getelementptr inbounds %struct.foo* %b, i32 0, i32 0
> + %arraydecay2 = getelementptr inbounds [16 x i8]* %buf1, i32 0, i32 0
> + %call3 = call i32 (i8*, ...)* @printf(i8* getelementptr inbounds ([4 x i8]* @.str, i32 0, i32 0), i8* %arraydecay2)
> + ret void
> +}
> +
> +; test2c: struct { [16 x i8] }
> +; sspstrong attribute
> +; Requires protector.
> +define void @test2c(i8* %a) nounwind uwtable sspstrong {
> +entry:
> +; LINUX-I386: test2c:
> +; LINUX-I386: mov{{l|q}} %gs:
> +; LINUX-I386: calll __stack_chk_fail
> +
> +; LINUX-X64: test2c:
> +; LINUX-X64: mov{{l|q}} %fs:
> +; LINUX-X64: callq __stack_chk_fail
> +
> +; LINUX-KERNEL-X64: test2c:
> +; LINUX-KERNEL-X64: mov{{l|q}} %gs:
> +; LINUX-KERNEL-X64: callq __stack_chk_fail
> +
> +; DARWIN-X64: test2c:
> +; DARWIN-X64: mov{{l|q}} ___stack_chk_guard
> +; DARWIN-X64: callq ___stack_chk_fail
> + %a.addr = alloca i8*, align 8
> + %b = alloca %struct.foo, align 1
> + store i8* %a, i8** %a.addr, align 8
> + %buf = getelementptr inbounds %struct.foo* %b, i32 0, i32 0
> + %arraydecay = getelementptr inbounds [16 x i8]* %buf, i32 0, i32 0
> + %0 = load i8** %a.addr, align 8
> + %call = call i8* @strcpy(i8* %arraydecay, i8* %0)
> + %buf1 = getelementptr inbounds %struct.foo* %b, i32 0, i32 0
> + %arraydecay2 = getelementptr inbounds [16 x i8]* %buf1, i32 0, i32 0
> + %call3 = call i32 (i8*, ...)* @printf(i8* getelementptr inbounds ([4 x i8]* @.str, i32 0, i32 0), i8* %arraydecay2)
> + ret void
> +}
> +
> +; test2d: struct { [16 x i8] }
> +; sspreq attribute
> +; Requires protector.
> +define void @test2d(i8* %a) nounwind uwtable sspreq {
> +entry:
> +; LINUX-I386: test2d:
> +; LINUX-I386: mov{{l|q}} %gs:
> +; LINUX-I386: calll __stack_chk_fail
> +
> +; LINUX-X64: test2d:
> +; LINUX-X64: mov{{l|q}} %fs:
> +; LINUX-X64: callq __stack_chk_fail
> +
> +; LINUX-KERNEL-X64: test2d:
> +; LINUX-KERNEL-X64: mov{{l|q}} %gs:
> +; LINUX-KERNEL-X64: callq __stack_chk_fail
> +
> +; DARWIN-X64: test2d:
> +; DARWIN-X64: mov{{l|q}} ___stack_chk_guard
> +; DARWIN-X64: callq ___stack_chk_fail
> + %a.addr = alloca i8*, align 8
> + %b = alloca %struct.foo, align 1
> + store i8* %a, i8** %a.addr, align 8
> + %buf = getelementptr inbounds %struct.foo* %b, i32 0, i32 0
> + %arraydecay = getelementptr inbounds [16 x i8]* %buf, i32 0, i32 0
> + %0 = load i8** %a.addr, align 8
> + %call = call i8* @strcpy(i8* %arraydecay, i8* %0)
> + %buf1 = getelementptr inbounds %struct.foo* %b, i32 0, i32 0
> + %arraydecay2 = getelementptr inbounds [16 x i8]* %buf1, i32 0, i32 0
> + %call3 = call i32 (i8*, ...)* @printf(i8* getelementptr inbounds ([4 x i8]* @.str, i32 0, i32 0), i8* %arraydecay2)
> + ret void
> +}
> +
> +; test3a: array of [4 x i8]
> +; no ssp attribute
> +; Requires no protector.
> +define void @test3a(i8* %a) nounwind uwtable {
> +entry:
> +; LINUX-I386: test3a:
> +; LINUX-I386-NOT: calll __stack_chk_fail
> +; LINUX-I386: .cfi_endproc
> +
> +; LINUX-X64: test3a:
> +; LINUX-X64-NOT: callq __stack_chk_fail
> +; LINUX-X64: .cfi_endproc
> +
> +; LINUX-KERNEL-X64: test3a:
> +; LINUX-KERNEL-X64-NOT: callq __stack_chk_fail
> +; LINUX-KERNEL-X64: .cfi_endproc
> +
> +; DARWIN-X64: test3a:
> +; DARWIN-X64-NOT: callq ___stack_chk_fail
> +; DARWIN-X64: .cfi_endproc
> + %a.addr = alloca i8*, align 8
> + %buf = alloca [4 x i8], align 1
> + store i8* %a, i8** %a.addr, align 8
> + %arraydecay = getelementptr inbounds [4 x i8]* %buf, i32 0, i32 0
> + %0 = load i8** %a.addr, align 8
> + %call = call i8* @strcpy(i8* %arraydecay, i8* %0)
> + %arraydecay1 = getelementptr inbounds [4 x i8]* %buf, i32 0, i32 0
> + %call2 = call i32 (i8*, ...)* @printf(i8* getelementptr inbounds ([4 x i8]* @.str, i32 0, i32 0), i8* %arraydecay1)
> + ret void
> +}
> +
> +; test3b: array [4 x i8]
> +; ssp attribute
> +; Requires no protector.
> +define void @test3b(i8* %a) nounwind uwtable ssp {
> +entry:
> +; LINUX-I386: test3b:
> +; LINUX-I386-NOT: calll __stack_chk_fail
> +; LINUX-I386: .cfi_endproc
> +
> +; LINUX-X64: test3b:
> +; LINUX-X64-NOT: callq __stack_chk_fail
> +; LINUX-X64: .cfi_endproc
> +
> +; LINUX-KERNEL-X64: test3b:
> +; LINUX-KERNEL-X64-NOT: callq __stack_chk_fail
> +; LINUX-KERNEL-X64: .cfi_endproc
> +
> +; DARWIN-X64: test3b:
> +; DARWIN-X64-NOT: callq ___stack_chk_fail
> +; DARWIN-X64: .cfi_endproc
> + %a.addr = alloca i8*, align 8
> + %buf = alloca [4 x i8], align 1
> + store i8* %a, i8** %a.addr, align 8
> + %arraydecay = getelementptr inbounds [4 x i8]* %buf, i32 0, i32 0
> + %0 = load i8** %a.addr, align 8
> + %call = call i8* @strcpy(i8* %arraydecay, i8* %0)
> + %arraydecay1 = getelementptr inbounds [4 x i8]* %buf, i32 0, i32 0
> + %call2 = call i32 (i8*, ...)* @printf(i8* getelementptr inbounds ([4 x i8]* @.str, i32 0, i32 0), i8* %arraydecay1)
> + ret void
> }
>
> -declare i8* @strcpy(i8*, i8*) nounwind
> +; test3c: array of [4 x i8]
> +; sspstrong attribute
> +; Requires protector.
> +define void @test3c(i8* %a) nounwind uwtable sspstrong {
> +entry:
> +; LINUX-I386: test3c:
> +; LINUX-I386: mov{{l|q}} %gs:
> +; LINUX-I386: calll __stack_chk_fail
> +
> +; LINUX-X64: test3c:
> +; LINUX-X64: mov{{l|q}} %fs:
> +; LINUX-X64: callq __stack_chk_fail
> +
> +; LINUX-KERNEL-X64: test3c:
> +; LINUX-KERNEL-X64: mov{{l|q}} %gs:
> +; LINUX-KERNEL-X64: callq __stack_chk_fail
> +
> +; DARWIN-X64: test3c:
> +; DARWIN-X64: mov{{l|q}} ___stack_chk_guard
> +; DARWIN-X64: callq ___stack_chk_fail
> + %a.addr = alloca i8*, align 8
> + %buf = alloca [4 x i8], align 1
> + store i8* %a, i8** %a.addr, align 8
> + %arraydecay = getelementptr inbounds [4 x i8]* %buf, i32 0, i32 0
> + %0 = load i8** %a.addr, align 8
> + %call = call i8* @strcpy(i8* %arraydecay, i8* %0)
> + %arraydecay1 = getelementptr inbounds [4 x i8]* %buf, i32 0, i32 0
> + %call2 = call i32 (i8*, ...)* @printf(i8* getelementptr inbounds ([4 x i8]* @.str, i32 0, i32 0), i8* %arraydecay1)
> + ret void
> +}
> +
> +; test3d: array of [4 x i8]
> +; sspreq attribute
> +; Requires protector.
> +define void @test3d(i8* %a) nounwind uwtable sspreq {
> +entry:
> +; LINUX-I386: test3d:
> +; LINUX-I386: mov{{l|q}} %gs:
> +; LINUX-I386: calll __stack_chk_fail
> +
> +; LINUX-X64: test3d:
> +; LINUX-X64: mov{{l|q}} %fs:
> +; LINUX-X64: callq __stack_chk_fail
> +
> +; LINUX-KERNEL-X64: test3d:
> +; LINUX-KERNEL-X64: mov{{l|q}} %gs:
> +; LINUX-KERNEL-X64: callq __stack_chk_fail
> +
> +; DARWIN-X64: test3d:
> +; DARWIN-X64: mov{{l|q}} ___stack_chk_guard
> +; DARWIN-X64: callq ___stack_chk_fail
> + %a.addr = alloca i8*, align 8
> + %buf = alloca [4 x i8], align 1
> + store i8* %a, i8** %a.addr, align 8
> + %arraydecay = getelementptr inbounds [4 x i8]* %buf, i32 0, i32 0
> + %0 = load i8** %a.addr, align 8
> + %call = call i8* @strcpy(i8* %arraydecay, i8* %0)
> + %arraydecay1 = getelementptr inbounds [4 x i8]* %buf, i32 0, i32 0
> + %call2 = call i32 (i8*, ...)* @printf(i8* getelementptr inbounds ([4 x i8]* @.str, i32 0, i32 0), i8* %arraydecay1)
> + ret void
> +}
> +
> +; test4a: struct { [4 x i8] }
> +; no ssp attribute
> +; Requires no protector.
> +define void @test4a(i8* %a) nounwind uwtable {
> +entry:
> +; LINUX-I386: test4a:
> +; LINUX-I386-NOT: calll __stack_chk_fail
> +; LINUX-I386: .cfi_endproc
> +
> +; LINUX-X64: test4a:
> +; LINUX-X64-NOT: callq __stack_chk_fail
> +; LINUX-X64: .cfi_endproc
> +
> +; LINUX-KERNEL-X64: test4a:
> +; LINUX-KERNEL-X64-NOT: callq __stack_chk_fail
> +; LINUX-KERNEL-X64: .cfi_endproc
> +
> +; DARWIN-X64: test4a:
> +; DARWIN-X64-NOT: callq ___stack_chk_fail
> +; DARWIN-X64: .cfi_endproc
> + %a.addr = alloca i8*, align 8
> + %b = alloca %struct.foo.0, align 1
> + store i8* %a, i8** %a.addr, align 8
> + %buf = getelementptr inbounds %struct.foo.0* %b, i32 0, i32 0
> + %arraydecay = getelementptr inbounds [4 x i8]* %buf, i32 0, i32 0
> + %0 = load i8** %a.addr, align 8
> + %call = call i8* @strcpy(i8* %arraydecay, i8* %0)
> + %buf1 = getelementptr inbounds %struct.foo.0* %b, i32 0, i32 0
> + %arraydecay2 = getelementptr inbounds [4 x i8]* %buf1, i32 0, i32 0
> + %call3 = call i32 (i8*, ...)* @printf(i8* getelementptr inbounds ([4 x i8]* @.str, i32 0, i32 0), i8* %arraydecay2)
> + ret void
> +}
> +
> +; test4b: struct { [4 x i8] }
> +; ssp attribute
> +; Requires no protector.
> +define void @test4b(i8* %a) nounwind uwtable ssp {
> +entry:
> +; LINUX-I386: test4b:
> +; LINUX-I386-NOT: calll __stack_chk_fail
> +; LINUX-I386: .cfi_endproc
> +
> +; LINUX-X64: test4b:
> +; LINUX-X64-NOT: callq __stack_chk_fail
> +; LINUX-X64: .cfi_endproc
> +
> +; LINUX-KERNEL-X64: test4b:
> +; LINUX-KERNEL-X64-NOT: callq __stack_chk_fail
> +; LINUX-KERNEL-X64: .cfi_endproc
> +
> +; DARWIN-X64: test4b:
> +; DARWIN-X64-NOT: callq ___stack_chk_fail
> +; DARWIN-X64: .cfi_endproc
> + %a.addr = alloca i8*, align 8
> + %b = alloca %struct.foo.0, align 1
> + store i8* %a, i8** %a.addr, align 8
> + %buf = getelementptr inbounds %struct.foo.0* %b, i32 0, i32 0
> + %arraydecay = getelementptr inbounds [4 x i8]* %buf, i32 0, i32 0
> + %0 = load i8** %a.addr, align 8
> + %call = call i8* @strcpy(i8* %arraydecay, i8* %0)
> + %buf1 = getelementptr inbounds %struct.foo.0* %b, i32 0, i32 0
> + %arraydecay2 = getelementptr inbounds [4 x i8]* %buf1, i32 0, i32 0
> + %call3 = call i32 (i8*, ...)* @printf(i8* getelementptr inbounds ([4 x i8]* @.str, i32 0, i32 0), i8* %arraydecay2)
> + ret void
> +}
> +
> +; test4c: struct { [4 x i8] }
> +; sspstrong attribute
> +; Requires protector.
> +define void @test4c(i8* %a) nounwind uwtable sspstrong {
> +entry:
> +; LINUX-I386: test4c:
> +; LINUX-I386: mov{{l|q}} %gs:
> +; LINUX-I386: calll __stack_chk_fail
> +
> +; LINUX-X64: test4c:
> +; LINUX-X64: mov{{l|q}} %fs:
> +; LINUX-X64: callq __stack_chk_fail
> +
> +; LINUX-KERNEL-X64: test4c:
> +; LINUX-KERNEL-X64: mov{{l|q}} %gs:
> +; LINUX-KERNEL-X64: callq __stack_chk_fail
> +
> +; DARWIN-X64: test4c:
> +; DARWIN-X64: mov{{l|q}} ___stack_chk_guard
> +; DARWIN-X64: callq ___stack_chk_fail
> + %a.addr = alloca i8*, align 8
> + %b = alloca %struct.foo.0, align 1
> + store i8* %a, i8** %a.addr, align 8
> + %buf = getelementptr inbounds %struct.foo.0* %b, i32 0, i32 0
> + %arraydecay = getelementptr inbounds [4 x i8]* %buf, i32 0, i32 0
> + %0 = load i8** %a.addr, align 8
> + %call = call i8* @strcpy(i8* %arraydecay, i8* %0)
> + %buf1 = getelementptr inbounds %struct.foo.0* %b, i32 0, i32 0
> + %arraydecay2 = getelementptr inbounds [4 x i8]* %buf1, i32 0, i32 0
> + %call3 = call i32 (i8*, ...)* @printf(i8* getelementptr inbounds ([4 x i8]* @.str, i32 0, i32 0), i8* %arraydecay2)
> + ret void
> +}
> +
> +; test4d: struct { [4 x i8] }
> +; sspreq attribute
> +; Requires protector.
> +define void @test4d(i8* %a) nounwind uwtable sspreq {
> +entry:
> +; LINUX-I386: test4d:
> +; LINUX-I386: mov{{l|q}} %gs:
> +; LINUX-I386: calll __stack_chk_fail
> +
> +; LINUX-X64: test4d:
> +; LINUX-X64: mov{{l|q}} %fs:
> +; LINUX-X64: callq __stack_chk_fail
> +
> +; LINUX-KERNEL-X64: test4d:
> +; LINUX-KERNEL-X64: mov{{l|q}} %gs:
> +; LINUX-KERNEL-X64: callq __stack_chk_fail
> +
> +; DARWIN-X64: test4d:
> +; DARWIN-X64: mov{{l|q}} ___stack_chk_guard
> +; DARWIN-X64: callq ___stack_chk_fail
> + %a.addr = alloca i8*, align 8
> + %b = alloca %struct.foo.0, align 1
> + store i8* %a, i8** %a.addr, align 8
> + %buf = getelementptr inbounds %struct.foo.0* %b, i32 0, i32 0
> + %arraydecay = getelementptr inbounds [4 x i8]* %buf, i32 0, i32 0
> + %0 = load i8** %a.addr, align 8
> + %call = call i8* @strcpy(i8* %arraydecay, i8* %0)
> + %buf1 = getelementptr inbounds %struct.foo.0* %b, i32 0, i32 0
> + %arraydecay2 = getelementptr inbounds [4 x i8]* %buf1, i32 0, i32 0
> + %call3 = call i32 (i8*, ...)* @printf(i8* getelementptr inbounds ([4 x i8]* @.str, i32 0, i32 0), i8* %arraydecay2)
> + ret void
> +}
> +
> +; test5a: no arrays / no nested arrays
> +; no ssp attribute
> +; Requires no protector.
> +define void @test5a(i8* %a) nounwind uwtable {
> +entry:
> +; LINUX-I386: test5a:
> +; LINUX-I386-NOT: calll __stack_chk_fail
> +; LINUX-I386: .cfi_endproc
> +
> +; LINUX-X64: test5a:
> +; LINUX-X64-NOT: callq __stack_chk_fail
> +; LINUX-X64: .cfi_endproc
> +
> +; LINUX-KERNEL-X64: test5a:
> +; LINUX-KERNEL-X64-NOT: callq __stack_chk_fail
> +; LINUX-KERNEL-X64: .cfi_endproc
> +
> +; DARWIN-X64: test5a:
> +; DARWIN-X64-NOT: callq ___stack_chk_fail
> +; DARWIN-X64: .cfi_endproc
> + %a.addr = alloca i8*, align 8
> + store i8* %a, i8** %a.addr, align 8
> + %0 = load i8** %a.addr, align 8
> + %call = call i32 (i8*, ...)* @printf(i8* getelementptr inbounds ([4 x i8]* @.str, i32 0, i32 0), i8* %0)
> + ret void
> +}
> +
> +; test5b: no arrays / no nested arrays
> +; ssp attribute
> +; Requires no protector.
> +define void @test5b(i8* %a) nounwind uwtable ssp {
> +entry:
> +; LINUX-I386: test5b:
> +; LINUX-I386-NOT: calll __stack_chk_fail
> +; LINUX-I386: .cfi_endproc
> +
> +; LINUX-X64: test5b:
> +; LINUX-X64-NOT: callq __stack_chk_fail
> +; LINUX-X64: .cfi_endproc
> +
> +; LINUX-KERNEL-X64: test5b:
> +; LINUX-KERNEL-X64-NOT: callq __stack_chk_fail
> +; LINUX-KERNEL-X64: .cfi_endproc
> +
> +; DARWIN-X64: test5b:
> +; DARWIN-X64-NOT: callq ___stack_chk_fail
> +; DARWIN-X64: .cfi_endproc
> + %a.addr = alloca i8*, align 8
> + store i8* %a, i8** %a.addr, align 8
> + %0 = load i8** %a.addr, align 8
> + %call = call i32 (i8*, ...)* @printf(i8* getelementptr inbounds ([4 x i8]* @.str, i32 0, i32 0), i8* %0)
> + ret void
> +}
> +
> +; test5c: no arrays / no nested arrays
> +; sspstrong attribute
> +; Requires protector.
> +; FIXME: Once strong heuristic is implemented, this should _not_ require
> +; a protector
> +define void @test5c(i8* %a) nounwind uwtable sspstrong {
> +entry:
> +; LINUX-I386: test5c:
> +; LINUX-I386: mov{{l|q}} %gs:
> +; LINUX-I386: calll __stack_chk_fail
> +
> +; LINUX-X64: test5c:
> +; LINUX-X64: mov{{l|q}} %fs:
> +; LINUX-X64: callq __stack_chk_fail
> +
> +; LINUX-KERNEL-X64: test5c:
> +; LINUX-KERNEL-X64: mov{{l|q}} %gs:
> +; LINUX-KERNEL-X64: callq __stack_chk_fail
> +
> +; DARWIN-X64: test5c:
> +; DARWIN-X64: mov{{l|q}} ___stack_chk_guard
> +; DARWIN-X64: callq ___stack_chk_fail
> + %a.addr = alloca i8*, align 8
> + store i8* %a, i8** %a.addr, align 8
> + %0 = load i8** %a.addr, align 8
> + %call = call i32 (i8*, ...)* @printf(i8* getelementptr inbounds ([4 x i8]* @.str, i32 0, i32 0), i8* %0)
> + ret void
> +}
> +
> +; test5d: no arrays / no nested arrays
> +; sspreq attribute
> +; Requires protector.
> +define void @test5d(i8* %a) nounwind uwtable sspreq {
> +entry:
> +; LINUX-I386: test5d:
> +; LINUX-I386: mov{{l|q}} %gs:
> +; LINUX-I386: calll __stack_chk_fail
> +
> +; LINUX-X64: test5d:
> +; LINUX-X64: mov{{l|q}} %fs:
> +; LINUX-X64: callq __stack_chk_fail
> +
> +; LINUX-KERNEL-X64: test5d:
> +; LINUX-KERNEL-X64: mov{{l|q}} %gs:
> +; LINUX-KERNEL-X64: callq __stack_chk_fail
> +
> +; DARWIN-X64: test5d:
> +; DARWIN-X64: mov{{l|q}} ___stack_chk_guard
> +; DARWIN-X64: callq ___stack_chk_fail
> + %a.addr = alloca i8*, align 8
> + store i8* %a, i8** %a.addr, align 8
> + %0 = load i8** %a.addr, align 8
> + %call = call i32 (i8*, ...)* @printf(i8* getelementptr inbounds ([4 x i8]* @.str, i32 0, i32 0), i8* %0)
> + ret void
> +}
>
> -declare i32 @printf(i8*, ...) nounwind
> +declare i8* @strcpy(i8*, i8*)
> +declare i32 @printf(i8*, ...)
>
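As a quick aid for reading the tests above: the `sspstrong` heuristic described in the commit message (protect on any array, any struct/union nesting an array at any depth, or any address-taken local) can be sketched as a small predicate. This is a hypothetical Python model for illustration only; the real logic is C++ in lib/CodeGen/StackProtector.cpp, and in this patch `sspstrong` still behaves like `sspreq` (hence the FIXME on test5c).

```python
# Sketch of the sspstrong heuristic from the commit message -- a toy model,
# not the actual StackProtector.cpp implementation. Types are described as
# tuples: ("array",), ("int",), ("struct", [members]), ("union", [members]).

def contains_array(ty):
    """Recursively check a simplified type description for an array,
    with no limit on struct/union nesting depth."""
    if ty[0] == "array":
        return True
    if ty[0] in ("struct", "union"):
        return any(contains_array(member) for member in ty[1])
    return False

def needs_protector_strong(local_types, has_address_taken_local):
    """A protector is required if any local contains an array (directly or
    nested) or if the address of any stack variable is exposed."""
    return has_address_taken_local or any(contains_array(t) for t in local_types)
```

Under this model, test4c (a local `struct { [4 x i8] }`) requires a protector, while test5c (no arrays, no address-taken locals) would not once the heuristic lands.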
> Added: llvm/trunk/test/Transforms/Inline/inline_ssp.ll
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Transforms/Inline/inline_ssp.ll?rev=173230&view=auto
> ==============================================================================
> --- llvm/trunk/test/Transforms/Inline/inline_ssp.ll (added)
> +++ llvm/trunk/test/Transforms/Inline/inline_ssp.ll Wed Jan 23 00:41:41 2013
> @@ -0,0 +1,155 @@
> +; RUN: opt -inline %s -S | FileCheck %s
> +; Ensure SSP attributes are propagated correctly when inlining.
> +
> +@.str = private unnamed_addr constant [11 x i8] c"fun_nossp\0A\00", align 1
> +@.str1 = private unnamed_addr constant [9 x i8] c"fun_ssp\0A\00", align 1
> +@.str2 = private unnamed_addr constant [15 x i8] c"fun_sspstrong\0A\00", align 1
> +@.str3 = private unnamed_addr constant [12 x i8] c"fun_sspreq\0A\00", align 1
> +
> +; These first four functions (@fun_sspreq, @fun_sspstrong, @fun_ssp, @fun_nossp)
> +; are used by the remaining functions to ensure that the SSP attributes are
> +; propagated correctly. The caller should have its SSP attribute set as:
> +; strictest(caller-ssp-attr, callee-ssp-attr), where strictness is ordered as:
> +; sspreq > sspstrong > ssp > [no ssp]
> +define internal void @fun_sspreq() nounwind uwtable sspreq {
> +entry:
> + %call = call i32 (i8*, ...)* @printf(i8* getelementptr inbounds ([12 x i8]* @.str3, i32 0, i32 0))
> + ret void
> +}
> +
> +define internal void @fun_sspstrong() nounwind uwtable sspstrong {
> +entry:
> + %call = call i32 (i8*, ...)* @printf(i8* getelementptr inbounds ([15 x i8]* @.str2, i32 0, i32 0))
> + ret void
> +}
> +
> +define internal void @fun_ssp() nounwind uwtable ssp {
> +entry:
> + %call = call i32 (i8*, ...)* @printf(i8* getelementptr inbounds ([9 x i8]* @.str1, i32 0, i32 0))
> + ret void
> +}
> +
> +define internal void @fun_nossp() nounwind uwtable {
> +entry:
> + %call = call i32 (i8*, ...)* @printf(i8* getelementptr inbounds ([11 x i8]* @.str, i32 0, i32 0))
> + ret void
> +}
> +
> +; Tests start below
> +
> +define void @inline_req_req() nounwind uwtable sspreq {
> +entry:
> +; CHECK: @inline_req_req() nounwind uwtable sspreq
> + call void @fun_sspreq()
> + ret void
> +}
> +
> +define void @inline_req_strong() nounwind uwtable sspstrong {
> +entry:
> +; CHECK: @inline_req_strong() nounwind uwtable sspreq
> + call void @fun_sspreq()
> + ret void
> +}
> +
> +define void @inline_req_ssp() nounwind uwtable ssp {
> +entry:
> +; CHECK: @inline_req_ssp() nounwind uwtable sspreq
> + call void @fun_sspreq()
> + ret void
> +}
> +
> +define void @inline_req_nossp() nounwind uwtable {
> +entry:
> +; CHECK: @inline_req_nossp() nounwind uwtable sspreq
> + call void @fun_sspreq()
> + ret void
> +}
> +
> +define void @inline_strong_req() nounwind uwtable sspreq {
> +entry:
> +; CHECK: @inline_strong_req() nounwind uwtable sspreq
> + call void @fun_sspstrong()
> + ret void
> +}
> +
> +
> +define void @inline_strong_strong() nounwind uwtable sspstrong {
> +entry:
> +; CHECK: @inline_strong_strong() nounwind uwtable sspstrong
> + call void @fun_sspstrong()
> + ret void
> +}
> +
> +define void @inline_strong_ssp() nounwind uwtable ssp {
> +entry:
> +; CHECK: @inline_strong_ssp() nounwind uwtable sspstrong
> + call void @fun_sspstrong()
> + ret void
> +}
> +
> +define void @inline_strong_nossp() nounwind uwtable {
> +entry:
> +; CHECK: @inline_strong_nossp() nounwind uwtable sspstrong
> + call void @fun_sspstrong()
> + ret void
> +}
> +
> +define void @inline_ssp_req() nounwind uwtable sspreq {
> +entry:
> +; CHECK: @inline_ssp_req() nounwind uwtable sspreq
> + call void @fun_ssp()
> + ret void
> +}
> +
> +
> +define void @inline_ssp_strong() nounwind uwtable sspstrong {
> +entry:
> +; CHECK: @inline_ssp_strong() nounwind uwtable sspstrong
> + call void @fun_ssp()
> + ret void
> +}
> +
> +define void @inline_ssp_ssp() nounwind uwtable ssp {
> +entry:
> +; CHECK: @inline_ssp_ssp() nounwind uwtable ssp
> + call void @fun_ssp()
> + ret void
> +}
> +
> +define void @inline_ssp_nossp() nounwind uwtable {
> +entry:
> +; CHECK: @inline_ssp_nossp() nounwind uwtable ssp
> + call void @fun_ssp()
> + ret void
> +}
> +
> +define void @inline_nossp_req() nounwind uwtable sspreq {
> +entry:
> +; CHECK: @inline_nossp_req() nounwind uwtable sspreq
> + call void @fun_nossp()
> + ret void
> +}
> +
> +
> +define void @inline_nossp_strong() nounwind uwtable sspstrong {
> +entry:
> +; CHECK: @inline_nossp_strong() nounwind uwtable sspstrong
> + call void @fun_nossp()
> + ret void
> +}
> +
> +define void @inline_nossp_ssp() nounwind uwtable ssp {
> +entry:
> +; CHECK: @inline_nossp_ssp() nounwind uwtable ssp
> + call void @fun_nossp()
> + ret void
> +}
> +
> +define void @inline_nossp_nossp() nounwind uwtable {
> +entry:
> +; CHECK: @inline_nossp_nossp() nounwind uwtable
> + call void @fun_nossp()
> + ret void
> +}
> +
> +declare i32 @printf(i8*, ...)
>
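The inline_ssp.ll tests above all check one rule: after inlining, the caller ends up with strictest(caller-ssp-attr, callee-ssp-attr) under the ordering sspreq > sspstrong > ssp > no attribute. A minimal model of that merge (a sketch of the rule as stated in the test comments, not the actual Inliner.cpp code):

```python
# Model of the SSP attribute merge the inliner performs on the caller.
# None represents "no ssp attribute".
STRICTNESS = {"sspreq": 3, "sspstrong": 2, "ssp": 1, None: 0}

def merge_ssp(caller_attr, callee_attr):
    """Return the strictest of the two attributes."""
    return max(caller_attr, callee_attr, key=lambda a: STRICTNESS[a])
```

For example, merge_ssp("sspstrong", "sspreq") yields "sspreq", matching the CHECK line in @inline_req_strong.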
> Modified: llvm/trunk/utils/kate/llvm.xml
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/utils/kate/llvm.xml?rev=173230&r1=173229&r2=173230&view=diff
> ==============================================================================
> --- llvm/trunk/utils/kate/llvm.xml (original)
> +++ llvm/trunk/utils/kate/llvm.xml Wed Jan 23 00:41:41 2013
> @@ -90,6 +90,7 @@
> <item> readonly </item>
> <item> ssp </item>
> <item> sspreq </item>
> + <item> sspstrong </item>
> </list>
> <list name="types">
> <item> float </item>
>
> Modified: llvm/trunk/utils/vim/llvm.vim
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/utils/vim/llvm.vim?rev=173230&r1=173229&r2=173230&view=diff
> ==============================================================================
> --- llvm/trunk/utils/vim/llvm.vim (original)
> +++ llvm/trunk/utils/vim/llvm.vim Wed Jan 23 00:41:41 2013
> @@ -51,10 +51,10 @@
> syn keyword llvmKeyword nounwind optsize personality private protected
> syn keyword llvmKeyword ptx_device ptx_kernel readnone readonly release
> syn keyword llvmKeyword returns_twice section seq_cst sideeffect signext
> -syn keyword llvmKeyword singlethread spir_func spir_kernel sret ssp sspreq tail
> -syn keyword llvmKeyword target thread_local to triple unnamed_addr unordered
> -syn keyword llvmKeyword uwtable volatile weak weak_odr x86_fastcallcc
> -syn keyword llvmKeyword x86_stdcallcc x86_thiscallcc zeroext
> +syn keyword llvmKeyword singlethread spir_func spir_kernel sret ssp sspreq
> +syn keyword llvmKeyword sspstrong tail target thread_local to triple
> +syn keyword llvmKeyword unnamed_addr unordered uwtable volatile weak weak_odr
> +syn keyword llvmKeyword x86_fastcallcc x86_stdcallcc x86_thiscallcc zeroext
>
> " Obsolete keywords.
> syn keyword llvmError getresult begin end
>
>