[llvm-dev] static stack depth analysis tool

Ben Craig via llvm-dev llvm-dev at lists.llvm.org
Wed Jul 11 10:05:17 PDT 2018


I have written such a (naïve) tool in the past.  I started with this script ( https://github.com/spotify/linux/blob/master/scripts/checkstack.pl ) and evolved it from there.

Basically, I parsed the assembly.  The results were not perfectly accurate, but they were very useful.  I was able to “handle” indirect calls and recursion, for some definition of “handle”.
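
For illustration, here is a minimal sketch of that kind of naive assembly scraping (not the actual script referenced above), assuming GNU-style x86-64 output from clang -S; it only recognizes the common "subq $N, %rsp" prologue and direct calls:

    #!/usr/bin/env python3
    # Illustrative sketch only: scrape static frame sizes and direct call
    # edges out of x86-64 assembly.  Pushes, dynamic allocas, and indirect
    # calls are deliberately ignored.
    import re
    import sys
    from collections import defaultdict

    LABEL_RE   = re.compile(r'^([A-Za-z_$][\w.$]*):')            # function labels at column 0
    SUB_RSP_RE = re.compile(r'\bsubq?\s+\$(\d+),\s*%rsp')        # static frame allocation
    CALL_RE    = re.compile(r'\bcallq?\s+([A-Za-z_.$][\w.$]*)')  # direct calls only

    def scan(path):
        frame_sizes = defaultdict(int)   # function -> bytes subtracted from %rsp
        call_edges  = defaultdict(set)   # function -> set of direct callees
        current = None
        with open(path) as f:
            for line in f:
                m = LABEL_RE.match(line)
                if m:
                    current = m.group(1)
                    continue
                if current is None:
                    continue
                m = SUB_RSP_RE.search(line)
                if m:
                    frame_sizes[current] += int(m.group(1))
                m = CALL_RE.search(line)
                if m:
                    call_edges[current].add(m.group(1))
        return frame_sizes, call_edges

    if __name__ == '__main__':
        sizes, edges = scan(sys.argv[1])
        for fn in sorted(sizes, key=sizes.get, reverse=True):
            print(fn, sizes[fn], 'bytes, calls', sorted(edges[fn]))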

I think the main value in doing something inside of the LLVM ecosystem is to get (some) indirect call information.  One of the security mitigations (CFI?) already needs to figure out the set of possible targets for an indirect call.  If you can get access to that information, you can provide much more complete call stacks.  My scripts in the past have only “handled” indirect calls by noting that a caller had X number of indirect callees.  For recursion, I picked a canonical recursion path, took one cycle of recursion, and reported that number, along with a flag in the report to say that recursion was involved.

My Python scripts for x86, x86_64, and armv5 total all of about 29 kb, so it wasn’t hard.  Exact byte accuracy, including path dependency, interrupts, and fancy stacks, will, of course, bloat that.

From: llvm-dev <llvm-dev-bounces at lists.llvm.org> On Behalf Of Annie Cherkaev via llvm-dev
Sent: Wednesday, July 11, 2018 11:32 AM
To: llvm-dev at lists.llvm.org
Subject: [llvm-dev] static stack depth analysis tool


Hello llvm-dev!


We are currently building a tool using LLVM that statically computes the worst-case stack depth for programs whose call graphs are statically constrained. While this task is undecidable for general programs, we specifically plan to use it to analyze all entry points into Zircon’s kernel and the vDSO.


Currently, without such a tool, the best option for allocating kernel memory in Zircon is to make a conservative best-guess overestimate of the memory required. This is problematic, however, because allocating too much memory wastes valuable, highly limited RAM, and allocating too little can result in stack overflows, which lead to catastrophic system failure. This tool will eliminate the guesswork by reporting the smallest amount of memory that is sufficient for all possible executions within the kernel.


We would appreciate feedback on our approach, and would like to hear if anyone else has use-cases for such a tool. Below are some implementation details, followed by some Zircon-specific details.


     Implementation details:


This tool will perform the stack depth analysis by extracting the call graph and the stack frame sizes from LLVM in two late passes, scheduled just before codegen. A ModulePass will write a dictionary mapping function names to the list of called functions, and a MachineFunctionPass will write a dictionary mapping function names to their stack frame sizes. A script can then use these dictionaries to annotate the call graph with stack sizes by matching up function names, and walk the annotated call graph to find the execution path with the largest total stack usage.
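
As a rough sketch of what that post-pass script might look like, assuming (hypothetically) that the two passes dump their dictionaries as JSON files named callgraph.json and stacksizes.json (neither the file names nor the format is prescribed above):

    #!/usr/bin/env python3
    # Illustrative sketch of the graph-analysis step.  callgraph.json is
    # assumed to hold {"caller": ["callee", ...]} and stacksizes.json
    # {"function": frame_bytes}; both names are made up for this example.
    import json

    with open('callgraph.json') as f:
        CALLEES = json.load(f)   # function name -> list of called functions
    with open('stacksizes.json') as f:
        FRAME = json.load(f)     # function name -> stack frame size in bytes

    def worst_case(entry):
        """Maximum stack usage, in bytes, over all call paths from `entry`."""
        on_path = set()          # functions on the current DFS path (cycle check)

        def depth(fn):
            if fn in on_path:
                raise RuntimeError('recursion involving %s; no finite bound' % fn)
            on_path.add(fn)
            own = FRAME.get(fn, 0)   # functions with no recorded frame count as 0 here
            deepest = max((depth(c) for c in CALLEES.get(fn, [])), default=0)
            on_path.discard(fn)
            return own + deepest

        return depth(entry)

    if __name__ == '__main__':
        for entry_point in sorted(CALLEES):
            print(entry_point, worst_case(entry_point), 'bytes')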


Backends: We plan to support the x86-64 and aarch64 architectures; adding other architectures should be straightforward.


Fancy stacks: We plan to support using SafeStack & ShadowCallStack.


Hand-written assembly: The Zircon codebase contains a nontrivial amount of hand-written assembly which uses stack space. We will manually audit the stack usage of assembly code, and provide it to the analysis in a lookup table.
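
For example, the audited numbers could live in a small hand-maintained table that is merged into the stack-size dictionary from the sketch above before the graph analysis runs (the symbol names and byte counts here are placeholders, not real audited values):

    # Hand-maintained results of the manual audit; every entry below is a
    # placeholder for illustration, not an audited number.
    HAND_WRITTEN_ASM_STACK_BYTES = {
        'x86_syscall_entry': 0,       # hypothetical symbol
        'arm64_exception_entry': 0,   # hypothetical symbol
    }

    # Merge before running worst_case(), so assembly routines are treated
    # the same as compiler-generated frames.
    FRAME.update(HAND_WRITTEN_ASM_STACK_BYTES)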


Indirect Function Calls & Recursion: We hope not to run into too many call-graph complications in our codebase; we will refactor those we do run into, and add this tool to the build system to enforce that future code remains amenable to this analysis. We currently don’t plan on implementing any additional analysis that LLVM does not already have.


Testing: We’ll validate the correctness of the tool through unit testing, end-to-end testing, and ideally also fuzz testing. We can set up fuzz testing by instrumenting the source code to dynamically monitor the stack size (for instance, by inlining assembly that reports the stack pointer) and comparing that result to the value the tool computes statically. While this wouldn’t validate that the tool reports stack sizes precisely, it would give us a way to automatically check that the stack usage observed by the dynamic check (which is the stack size for *some* execution path) is no larger than the worst-case stack size reported by the static tool.
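
The comparison step itself could be as simple as the sketch below, assuming (purely for illustration) that the instrumented build logs one observed high-water mark per entry point to a file:

    # Sketch of the dynamic-vs-static comparison only.  'observed.log' is
    # imagined to hold lines like "sys_channel_write 1824", one per fuzzed
    # entry point; the name and format are assumptions for this example.
    def check_against_static_bounds(observed_log, static_bounds):
        failures = []
        with open(observed_log) as f:
            for line in f:
                entry_point, used = line.split()
                if int(used) > static_bounds[entry_point]:
                    # Observed usage above the static bound means the tool
                    # under-reported the worst case for this entry point.
                    failures.append((entry_point, int(used), static_bounds[entry_point]))
        return failures

    # e.g. check_against_static_bounds('observed.log',
    #                                  {ep: worst_case(ep) for ep in ENTRY_POINTS})
    # where ENTRY_POINTS is a (hypothetical) list of syscall entry symbols.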


     Zircon-specific details:


Interrupts:

An executing thread can be temporarily halted by an interrupt, which will execute code on the stack of the thread it paused. This means that, to find the true worst-case stack depth, we need to model the case where an interrupt occurs while the thread is at its deepest stack depth. Some interrupts can interrupt other interrupts, so to accurately bound the worst-case stack depth we need to build a model of how interrupts may nest on x86-64 and aarch64, and treat interrupt handlers as additional entry points that are called by every leaf function.
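
In terms of the earlier sketch, one conservative way to fold this in is to boil the worst legal interrupt nesting down to a single cost and add it on top of every thread’s worst case; the handler name below is hypothetical, and computing the nesting cost itself still requires an architecture-specific model of which interrupts can preempt which:

    # Worst-case cost of the deepest legal interrupt nesting, reduced to one
    # number for illustration; 'interrupt_dispatch_common' is a placeholder.
    WORST_INTERRUPT_NESTING_BYTES = worst_case('interrupt_dispatch_common')

    def worst_case_with_interrupts(entry):
        # An interrupt can arrive while the thread is at its deepest point and
        # runs on the same stack, so the bound is the sum of the two worst cases.
        return worst_case(entry) + WORST_INTERRUPT_NESTING_BYTES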


Motivation for analyzing stack usage in the vDSO:

In Zircon, syscalls are exposed through the vDSO (virtual Dynamic Shared Object). The vDSO is an interface that behaves like a shared library, except that it is provided by the OS instead of existing as a literal file. If a process has permission to make syscalls, the vDSO is mapped into its memory space, and the process can dynamically link against syscalls the same way it would against any other shared library. The vDSO requires a certain amount of stack space to safely complete any syscall, and that amount is part of the system’s ABI contract with user code. Currently, however, that amount is unknown, undocumented, and unenforced; this tool will allow us to measure the stack space that currently implemented syscalls use, document the amount that will be contractually required by the ABI, and enforce that future syscalls do not exceed that amount.


Please let us know if you have any comments or suggestions about the implementation, and if you have a project that could benefit from such a tool.


Thanks!


Annie Cherkaev
