r257533 - D9600: Add scan-build python implementation

Laszlo Nagy via cfe-commits cfe-commits at lists.llvm.org
Tue Jan 12 14:38:41 PST 2016


Author: rizsotto
Date: Tue Jan 12 16:38:41 2016
New Revision: 257533

URL: http://llvm.org/viewvc/llvm-project?rev=257533&view=rev
Log:
D9600: Add scan-build python implementation

Added:
    cfe/trunk/tools/scan-build-py/
    cfe/trunk/tools/scan-build-py/README.md
    cfe/trunk/tools/scan-build-py/bin/
    cfe/trunk/tools/scan-build-py/bin/analyze-build
    cfe/trunk/tools/scan-build-py/bin/analyze-c++
    cfe/trunk/tools/scan-build-py/bin/analyze-cc
    cfe/trunk/tools/scan-build-py/bin/intercept-build
    cfe/trunk/tools/scan-build-py/bin/intercept-c++
    cfe/trunk/tools/scan-build-py/bin/intercept-cc
    cfe/trunk/tools/scan-build-py/bin/scan-build
    cfe/trunk/tools/scan-build-py/libear/
    cfe/trunk/tools/scan-build-py/libear/__init__.py
    cfe/trunk/tools/scan-build-py/libear/config.h.in
    cfe/trunk/tools/scan-build-py/libear/ear.c
    cfe/trunk/tools/scan-build-py/libscanbuild/
    cfe/trunk/tools/scan-build-py/libscanbuild/__init__.py
    cfe/trunk/tools/scan-build-py/libscanbuild/analyze.py
    cfe/trunk/tools/scan-build-py/libscanbuild/clang.py
    cfe/trunk/tools/scan-build-py/libscanbuild/command.py
    cfe/trunk/tools/scan-build-py/libscanbuild/intercept.py
    cfe/trunk/tools/scan-build-py/libscanbuild/report.py
    cfe/trunk/tools/scan-build-py/libscanbuild/resources/
    cfe/trunk/tools/scan-build-py/libscanbuild/resources/scanview.css
    cfe/trunk/tools/scan-build-py/libscanbuild/resources/selectable.js
    cfe/trunk/tools/scan-build-py/libscanbuild/resources/sorttable.js
    cfe/trunk/tools/scan-build-py/libscanbuild/runner.py
    cfe/trunk/tools/scan-build-py/libscanbuild/shell.py
    cfe/trunk/tools/scan-build-py/tests/
    cfe/trunk/tools/scan-build-py/tests/__init__.py
    cfe/trunk/tools/scan-build-py/tests/functional/
    cfe/trunk/tools/scan-build-py/tests/functional/__init__.py
    cfe/trunk/tools/scan-build-py/tests/functional/cases/
    cfe/trunk/tools/scan-build-py/tests/functional/cases/__init__.py
    cfe/trunk/tools/scan-build-py/tests/functional/cases/test_create_cdb.py
    cfe/trunk/tools/scan-build-py/tests/functional/cases/test_exec_anatomy.py
    cfe/trunk/tools/scan-build-py/tests/functional/cases/test_from_cdb.py
    cfe/trunk/tools/scan-build-py/tests/functional/cases/test_from_cmd.py
    cfe/trunk/tools/scan-build-py/tests/functional/exec/
    cfe/trunk/tools/scan-build-py/tests/functional/exec/CMakeLists.txt
    cfe/trunk/tools/scan-build-py/tests/functional/exec/config.h.in
    cfe/trunk/tools/scan-build-py/tests/functional/exec/main.c
    cfe/trunk/tools/scan-build-py/tests/functional/src/
    cfe/trunk/tools/scan-build-py/tests/functional/src/broken-one.c
    cfe/trunk/tools/scan-build-py/tests/functional/src/broken-two.c
    cfe/trunk/tools/scan-build-py/tests/functional/src/build/
    cfe/trunk/tools/scan-build-py/tests/functional/src/build/Makefile
    cfe/trunk/tools/scan-build-py/tests/functional/src/clean-one.c
    cfe/trunk/tools/scan-build-py/tests/functional/src/clean-two.c
    cfe/trunk/tools/scan-build-py/tests/functional/src/compilation_database/
    cfe/trunk/tools/scan-build-py/tests/functional/src/compilation_database/build_broken.json.in
    cfe/trunk/tools/scan-build-py/tests/functional/src/compilation_database/build_clean.json.in
    cfe/trunk/tools/scan-build-py/tests/functional/src/compilation_database/build_regular.json.in
    cfe/trunk/tools/scan-build-py/tests/functional/src/emit-one.c
    cfe/trunk/tools/scan-build-py/tests/functional/src/emit-two.c
    cfe/trunk/tools/scan-build-py/tests/functional/src/include/
    cfe/trunk/tools/scan-build-py/tests/functional/src/include/clean-one.h
    cfe/trunk/tools/scan-build-py/tests/functional/src/main.c
    cfe/trunk/tools/scan-build-py/tests/unit/
    cfe/trunk/tools/scan-build-py/tests/unit/__init__.py
    cfe/trunk/tools/scan-build-py/tests/unit/fixtures.py
    cfe/trunk/tools/scan-build-py/tests/unit/test_analyze.py
    cfe/trunk/tools/scan-build-py/tests/unit/test_clang.py
    cfe/trunk/tools/scan-build-py/tests/unit/test_command.py
    cfe/trunk/tools/scan-build-py/tests/unit/test_intercept.py
    cfe/trunk/tools/scan-build-py/tests/unit/test_report.py
    cfe/trunk/tools/scan-build-py/tests/unit/test_runner.py
    cfe/trunk/tools/scan-build-py/tests/unit/test_shell.py

Added: cfe/trunk/tools/scan-build-py/README.md
URL: http://llvm.org/viewvc/llvm-project/cfe/trunk/tools/scan-build-py/README.md?rev=257533&view=auto
==============================================================================
--- cfe/trunk/tools/scan-build-py/README.md (added)
+++ cfe/trunk/tools/scan-build-py/README.md Tue Jan 12 16:38:41 2016
@@ -0,0 +1,120 @@
+scan-build
+==========
+
+A package designed to wrap a build so that all calls to gcc/clang are
+intercepted and logged into a [compilation database][1] and/or piped to
+the clang static analyzer. It includes the intercept-build tool, which logs
+the build, as well as the scan-build tool, which logs the build and runs
+the clang static analyzer on it.
+
+Portability
+-----------
+
+It should work on UNIX operating systems.
+
+- It has been tested on FreeBSD, GNU/Linux and OS X.
+- It is prepared to work on Windows, but help is needed to finish the port.
+
+
+Prerequisites
+-------------
+
+1. **python** interpreter (version 2.7, 3.2, 3.3, 3.4, 3.5).
+
+
+How to use
+----------
+
+Running the Clang static analyzer against a project goes like this:
+
+    $ scan-build <your build command>
+
+Generating a compilation database file goes like this:
+
+    $ intercept-build <your build command>
+
+Running the Clang static analyzer against a project with an existing
+compilation database goes like this:
+
+    $ analyze-build
+
+Use `--help` to learn more about these commands.
+
+
+Limitations
+-----------
+
+Generally speaking, the `intercept-build` and `analyze-build` tools together
+do the same job as `scan-build` does. So, you can expect the same output
+from this line as a plain `scan-build` run would produce:
+
+    $ intercept-build <your build command> && analyze-build
+
+The major difference is how and when the analyzer is run. The `scan-build`
+tool has three distinct modes of running the analyzer:
+
+1.  Use compiler wrappers to do the work.
+    The compiler wrappers run the real compiler and the analyzer.
+    This is the default behaviour; it can be enforced with the
+    `--override-compiler` flag.
+
+2.  Use a special library to intercept compiler calls during the build
+    process. The analyzer runs against each module after the build has
+    finished. Use the `--intercept-first` flag to get this mode.
+
+3.  Use compiler wrappers to intercept compiler calls during the build
+    process. The analyzer runs against each module after the build has
+    finished. Use the `--intercept-first` and `--override-compiler` flags
+    together to get this mode.
+
+Modes 1. and 3. use compiler wrappers, which work only if the build process
+respects the `CC` and `CXX` environment variables. (Some build processes can
+override these variables only as command line parameters. In that case you
+need to pass the compiler wrappers manually, e.g.: `intercept-build
+--override-compiler make CC=intercept-cc CXX=intercept-c++ all` where the
+original build command would have been just `make all`.)
+
+Mode 1. runs the analyzer right after the real compilation. So even if the
+build process removes intermediate modules (generated sources), the analyzer
+output is still kept.
+
+Modes 2. and 3. generate the compilation database first and filter out those
+modules which no longer exist. So they are suitable for incremental analysis
+during development.
+
+Mode 2. is available only on FreeBSD and Linux, where library preloading is
+supported by the dynamic loader. It is not supported on OS X (unless the
+System Integrity Protection feature is turned off).
+
+The `intercept-build` command uses only modes 2. and 3. to generate the
+compilation database. `analyze-build` only runs the analyzer against the
+captured compiler calls.
+
+
+Known problems
+--------------
+
+Because it uses the `LD_PRELOAD` or `DYLD_INSERT_LIBRARIES` environment
+variables, it does not append to them but overrides them. So builds which
+already use these variables might not work. (I don't know of any build tool
+which does that, but please let me know if you do.)
+
+
+Problem reports
+---------------
+
+If you find a bug in this documentation or elsewhere in the program, or would
+like to propose an improvement, please use the project's [issue tracker][3].
+Please describe the bug and where you found it. If you have a suggestion for
+how to fix it, include that as well. Patches are also welcome.
+
+
+License
+-------
+
+The project is licensed under the University of Illinois/NCSA Open Source
+License. See LICENSE.TXT for details.
+
+  [1]: http://clang.llvm.org/docs/JSONCompilationDatabase.html
+  [2]: https://pypi.python.org/pypi/scan-build
+  [3]: https://llvm.org/bugs/enter_bug.cgi?product=clang

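The compilation database referenced in the README ([1]) is a plain JSON file,
usually named `compile_commands.json`: an array of entries, each recording the
working directory, the exact compiler command and the source file it compiles.
A minimal sketch of writing such a file from Python; the entry values below
are made up for illustration:

    import json

    # One entry per compiler invocation. "directory", "command" and "file"
    # are the fields the Clang tooling expects.
    entries = [
        {
            'directory': '/home/user/project',
            'command': 'cc -c -o main.o main.c',
            'file': 'main.c',
        },
    ]

    with open('compile_commands.json', 'w') as handle:
        json.dump(entries, handle, indent=4)
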
Added: cfe/trunk/tools/scan-build-py/bin/analyze-build
URL: http://llvm.org/viewvc/llvm-project/cfe/trunk/tools/scan-build-py/bin/analyze-build?rev=257533&view=auto
==============================================================================
--- cfe/trunk/tools/scan-build-py/bin/analyze-build (added)
+++ cfe/trunk/tools/scan-build-py/bin/analyze-build Tue Jan 12 16:38:41 2016
@@ -0,0 +1,17 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+#                     The LLVM Compiler Infrastructure
+#
+# This file is distributed under the University of Illinois Open Source
+# License. See LICENSE.TXT for details.
+
+import multiprocessing
+multiprocessing.freeze_support()
+
+import sys
+import os.path
+this_dir = os.path.dirname(os.path.realpath(__file__))
+sys.path.append(os.path.dirname(this_dir))
+
+from libscanbuild.analyze import analyze_build_main
+sys.exit(analyze_build_main(this_dir, False))

Added: cfe/trunk/tools/scan-build-py/bin/analyze-c++
URL: http://llvm.org/viewvc/llvm-project/cfe/trunk/tools/scan-build-py/bin/analyze-c%2B%2B?rev=257533&view=auto
==============================================================================
--- cfe/trunk/tools/scan-build-py/bin/analyze-c++ (added)
+++ cfe/trunk/tools/scan-build-py/bin/analyze-c++ Tue Jan 12 16:38:41 2016
@@ -0,0 +1,14 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+#                     The LLVM Compiler Infrastructure
+#
+# This file is distributed under the University of Illinois Open Source
+# License. See LICENSE.TXT for details.
+
+import sys
+import os.path
+this_dir = os.path.dirname(os.path.realpath(__file__))
+sys.path.append(os.path.dirname(this_dir))
+
+from libscanbuild.analyze import analyze_build_wrapper
+sys.exit(analyze_build_wrapper(True))

Added: cfe/trunk/tools/scan-build-py/bin/analyze-cc
URL: http://llvm.org/viewvc/llvm-project/cfe/trunk/tools/scan-build-py/bin/analyze-cc?rev=257533&view=auto
==============================================================================
--- cfe/trunk/tools/scan-build-py/bin/analyze-cc (added)
+++ cfe/trunk/tools/scan-build-py/bin/analyze-cc Tue Jan 12 16:38:41 2016
@@ -0,0 +1,14 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+#                     The LLVM Compiler Infrastructure
+#
+# This file is distributed under the University of Illinois Open Source
+# License. See LICENSE.TXT for details.
+
+import sys
+import os.path
+this_dir = os.path.dirname(os.path.realpath(__file__))
+sys.path.append(os.path.dirname(this_dir))
+
+from libscanbuild.analyze import analyze_build_wrapper
+sys.exit(analyze_build_wrapper(False))

Added: cfe/trunk/tools/scan-build-py/bin/intercept-build
URL: http://llvm.org/viewvc/llvm-project/cfe/trunk/tools/scan-build-py/bin/intercept-build?rev=257533&view=auto
==============================================================================
--- cfe/trunk/tools/scan-build-py/bin/intercept-build (added)
+++ cfe/trunk/tools/scan-build-py/bin/intercept-build Tue Jan 12 16:38:41 2016
@@ -0,0 +1,17 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+#                     The LLVM Compiler Infrastructure
+#
+# This file is distributed under the University of Illinois Open Source
+# License. See LICENSE.TXT for details.
+
+import multiprocessing
+multiprocessing.freeze_support()
+
+import sys
+import os.path
+this_dir = os.path.dirname(os.path.realpath(__file__))
+sys.path.append(os.path.dirname(this_dir))
+
+from libscanbuild.intercept import intercept_build_main
+sys.exit(intercept_build_main(this_dir))

Added: cfe/trunk/tools/scan-build-py/bin/intercept-c++
URL: http://llvm.org/viewvc/llvm-project/cfe/trunk/tools/scan-build-py/bin/intercept-c%2B%2B?rev=257533&view=auto
==============================================================================
--- cfe/trunk/tools/scan-build-py/bin/intercept-c++ (added)
+++ cfe/trunk/tools/scan-build-py/bin/intercept-c++ Tue Jan 12 16:38:41 2016
@@ -0,0 +1,14 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+#                     The LLVM Compiler Infrastructure
+#
+# This file is distributed under the University of Illinois Open Source
+# License. See LICENSE.TXT for details.
+
+import sys
+import os.path
+this_dir = os.path.dirname(os.path.realpath(__file__))
+sys.path.append(os.path.dirname(this_dir))
+
+from libscanbuild.intercept import intercept_build_wrapper
+sys.exit(intercept_build_wrapper(True))

Added: cfe/trunk/tools/scan-build-py/bin/intercept-cc
URL: http://llvm.org/viewvc/llvm-project/cfe/trunk/tools/scan-build-py/bin/intercept-cc?rev=257533&view=auto
==============================================================================
--- cfe/trunk/tools/scan-build-py/bin/intercept-cc (added)
+++ cfe/trunk/tools/scan-build-py/bin/intercept-cc Tue Jan 12 16:38:41 2016
@@ -0,0 +1,14 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+#                     The LLVM Compiler Infrastructure
+#
+# This file is distributed under the University of Illinois Open Source
+# License. See LICENSE.TXT for details.
+
+import sys
+import os.path
+this_dir = os.path.dirname(os.path.realpath(__file__))
+sys.path.append(os.path.dirname(this_dir))
+
+from libscanbuild.intercept import intercept_build_wrapper
+sys.exit(intercept_build_wrapper(False))

Added: cfe/trunk/tools/scan-build-py/bin/scan-build
URL: http://llvm.org/viewvc/llvm-project/cfe/trunk/tools/scan-build-py/bin/scan-build?rev=257533&view=auto
==============================================================================
--- cfe/trunk/tools/scan-build-py/bin/scan-build (added)
+++ cfe/trunk/tools/scan-build-py/bin/scan-build Tue Jan 12 16:38:41 2016
@@ -0,0 +1,17 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+#                     The LLVM Compiler Infrastructure
+#
+# This file is distributed under the University of Illinois Open Source
+# License. See LICENSE.TXT for details.
+
+import multiprocessing
+multiprocessing.freeze_support()
+
+import sys
+import os.path
+this_dir = os.path.dirname(os.path.realpath(__file__))
+sys.path.append(os.path.dirname(this_dir))
+
+from libscanbuild.analyze import analyze_build_main
+sys.exit(analyze_build_main(this_dir, True))

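The seven small launchers above differ only in which entry point they call and
which flag they pass: `scan-build` and `analyze-build` both call
`analyze_build_main` (with `from_build_command` set to True or False),
`intercept-build` calls `intercept_build_main`, and the `*-cc`/`*-c++` scripts
are the compiler wrappers that the main commands interpose through the `CC`
and `CXX` environment variables. A rough sketch of that interposition (the
`bin_dir` path is an assumption; `setup_environment` in
`libscanbuild/analyze.py` further down does the real work):

    import os
    import subprocess

    # Assumption: the directory that holds the launchers shown above.
    bin_dir = '/path/to/scan-build-py/bin'

    # Point the build at the wrappers; each wrapper runs the real compiler
    # and, when ANALYZE_BUILD_CLANG is set, the static analyzer as well.
    env = dict(os.environ,
               CC=os.path.join(bin_dir, 'analyze-cc'),
               CXX=os.path.join(bin_dir, 'analyze-c++'))
    subprocess.call(['make', 'all'], env=env)
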
Added: cfe/trunk/tools/scan-build-py/libear/__init__.py
URL: http://llvm.org/viewvc/llvm-project/cfe/trunk/tools/scan-build-py/libear/__init__.py?rev=257533&view=auto
==============================================================================
--- cfe/trunk/tools/scan-build-py/libear/__init__.py (added)
+++ cfe/trunk/tools/scan-build-py/libear/__init__.py Tue Jan 12 16:38:41 2016
@@ -0,0 +1,260 @@
+# -*- coding: utf-8 -*-
+#                     The LLVM Compiler Infrastructure
+#
+# This file is distributed under the University of Illinois Open Source
+# License. See LICENSE.TXT for details.
+""" This module compiles the intercept library. """
+
+import sys
+import os
+import os.path
+import re
+import tempfile
+import shutil
+import contextlib
+import logging
+
+__all__ = ['build_libear']
+
+
+def build_libear(compiler, dst_dir):
+    """ Returns the full path to the 'libear' library. """
+
+    try:
+        src_dir = os.path.dirname(os.path.realpath(__file__))
+        toolset = make_toolset(src_dir)
+        toolset.set_compiler(compiler)
+        toolset.set_language_standard('c99')
+        toolset.add_definitions(['-D_GNU_SOURCE'])
+
+        configure = do_configure(toolset)
+        configure.check_function_exists('execve', 'HAVE_EXECVE')
+        configure.check_function_exists('execv', 'HAVE_EXECV')
+        configure.check_function_exists('execvpe', 'HAVE_EXECVPE')
+        configure.check_function_exists('execvp', 'HAVE_EXECVP')
+        configure.check_function_exists('execvP', 'HAVE_EXECVP2')
+        configure.check_function_exists('exect', 'HAVE_EXECT')
+        configure.check_function_exists('execl', 'HAVE_EXECL')
+        configure.check_function_exists('execlp', 'HAVE_EXECLP')
+        configure.check_function_exists('execle', 'HAVE_EXECLE')
+        configure.check_function_exists('posix_spawn', 'HAVE_POSIX_SPAWN')
+        configure.check_function_exists('posix_spawnp', 'HAVE_POSIX_SPAWNP')
+        configure.check_symbol_exists('_NSGetEnviron', 'crt_externs.h',
+                                      'HAVE_NSGETENVIRON')
+        configure.write_by_template(
+            os.path.join(src_dir, 'config.h.in'),
+            os.path.join(dst_dir, 'config.h'))
+
+        target = create_shared_library('ear', toolset)
+        target.add_include(dst_dir)
+        target.add_sources('ear.c')
+        target.link_against(toolset.dl_libraries())
+        target.link_against(['pthread'])
+        target.build_release(dst_dir)
+
+        return os.path.join(dst_dir, target.name)
+
+    except Exception:
+        logging.info("Could not build interception library.", exc_info=True)
+        return None
+
+
+def execute(cmd, *args, **kwargs):
+    """ Make subprocess execution silent. """
+
+    import subprocess
+    kwargs.update({'stdout': subprocess.PIPE, 'stderr': subprocess.STDOUT})
+    return subprocess.check_call(cmd, *args, **kwargs)
+
+
+@contextlib.contextmanager
+def TemporaryDirectory(**kwargs):
+    name = tempfile.mkdtemp(**kwargs)
+    try:
+        yield name
+    finally:
+        shutil.rmtree(name)
+
+
+class Toolset(object):
+    """ Abstract class to represent different toolset. """
+
+    def __init__(self, src_dir):
+        self.src_dir = src_dir
+        self.compiler = None
+        self.c_flags = []
+
+    def set_compiler(self, compiler):
+        """ part of public interface """
+        self.compiler = compiler
+
+    def set_language_standard(self, standard):
+        """ part of public interface """
+        self.c_flags.append('-std=' + standard)
+
+    def add_definitions(self, defines):
+        """ part of public interface """
+        self.c_flags.extend(defines)
+
+    def dl_libraries(self):
+        raise NotImplementedError()
+
+    def shared_library_name(self, name):
+        raise NotImplementedError()
+
+    def shared_library_c_flags(self, release):
+        extra = ['-DNDEBUG', '-O3'] if release else []
+        return extra + ['-fPIC'] + self.c_flags
+
+    def shared_library_ld_flags(self, release, name):
+        raise NotImplementedError()
+
+
+class DarwinToolset(Toolset):
+    def __init__(self, src_dir):
+        Toolset.__init__(self, src_dir)
+
+    def dl_libraries(self):
+        return []
+
+    def shared_library_name(self, name):
+        return 'lib' + name + '.dylib'
+
+    def shared_library_ld_flags(self, release, name):
+        extra = ['-dead_strip'] if release else []
+        return extra + ['-dynamiclib', '-install_name', '@rpath/' + name]
+
+
+class UnixToolset(Toolset):
+    def __init__(self, src_dir):
+        Toolset.__init__(self, src_dir)
+
+    def dl_libraries(self):
+        return []
+
+    def shared_library_name(self, name):
+        return 'lib' + name + '.so'
+
+    def shared_library_ld_flags(self, release, name):
+        extra = [] if release else []
+        return extra + ['-shared', '-Wl,-soname,' + name]
+
+
+class LinuxToolset(UnixToolset):
+    def __init__(self, src_dir):
+        UnixToolset.__init__(self, src_dir)
+
+    def dl_libraries(self):
+        return ['dl']
+
+
+def make_toolset(src_dir):
+    platform = sys.platform
+    if platform in {'win32', 'cygwin'}:
+        raise RuntimeError('not implemented on this platform')
+    elif platform == 'darwin':
+        return DarwinToolset(src_dir)
+    elif platform in {'linux', 'linux2'}:
+        return LinuxToolset(src_dir)
+    else:
+        return UnixToolset(src_dir)
+
+
+class Configure(object):
+    def __init__(self, toolset):
+        self.ctx = toolset
+        self.results = {'APPLE': sys.platform == 'darwin'}
+
+    def _try_to_compile_and_link(self, source):
+        try:
+            with TemporaryDirectory() as work_dir:
+                src_file = 'check.c'
+                with open(os.path.join(work_dir, src_file), 'w') as handle:
+                    handle.write(source)
+
+                execute([self.ctx.compiler, src_file] + self.ctx.c_flags,
+                        cwd=work_dir)
+                return True
+        except Exception:
+            return False
+
+    def check_function_exists(self, function, name):
+        template = "int FUNCTION(); int main() { return FUNCTION(); }"
+        source = template.replace("FUNCTION", function)
+
+        logging.debug('Checking function %s', function)
+        found = self._try_to_compile_and_link(source)
+        logging.debug('Checking function %s -- %s', function,
+                      'found' if found else 'not found')
+        self.results.update({name: found})
+
+    def check_symbol_exists(self, symbol, include, name):
+        template = """#include <INCLUDE>
+                      int main() { return ((int*)(&SYMBOL))[0]; }"""
+        source = template.replace('INCLUDE', include).replace("SYMBOL", symbol)
+
+        logging.debug('Checking symbol %s', symbol)
+        found = self._try_to_compile_and_link(source)
+        logging.debug('Checking symbol %s -- %s', symbol,
+                      'found' if found else 'not found')
+        self.results.update({name: found})
+
+    def write_by_template(self, template, output):
+        def transform(line, definitions):
+
+            pattern = re.compile(r'^#cmakedefine\s+(\S+)')
+            m = pattern.match(line)
+            if m:
+                key = m.group(1)
+                if key not in definitions or not definitions[key]:
+                    return '/* #undef {} */\n'.format(key)
+                else:
+                    return '#define {}\n'.format(key)
+            return line
+
+        with open(template, 'r') as src_handle:
+            logging.debug('Writing config to %s', output)
+            with open(output, 'w') as dst_handle:
+                for line in src_handle:
+                    dst_handle.write(transform(line, self.results))
+
+
+def do_configure(toolset):
+    return Configure(toolset)
+
+
+class SharedLibrary(object):
+    def __init__(self, name, toolset):
+        self.name = toolset.shared_library_name(name)
+        self.ctx = toolset
+        self.inc = []
+        self.src = []
+        self.lib = []
+
+    def add_include(self, directory):
+        self.inc.extend(['-I', directory])
+
+    def add_sources(self, source):
+        self.src.append(source)
+
+    def link_against(self, libraries):
+        self.lib.extend(['-l' + lib for lib in libraries])
+
+    def build_release(self, directory):
+        for src in self.src:
+            logging.debug('Compiling %s', src)
+            execute(
+                [self.ctx.compiler, '-c', os.path.join(self.ctx.src_dir, src),
+                 '-o', src + '.o'] + self.inc +
+                self.ctx.shared_library_c_flags(True),
+                cwd=directory)
+        logging.debug('Linking %s', self.name)
+        execute(
+            [self.ctx.compiler] + [src + '.o' for src in self.src] +
+            ['-o', self.name] + self.lib +
+            self.ctx.shared_library_ld_flags(True, self.name),
+            cwd=directory)
+
+
+def create_shared_library(name, toolset):
+    return SharedLibrary(name, toolset)

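`build_libear` is the only name this module exports; it configures and
compiles the interception library into a destination directory and returns
the path of the built shared object, or `None` when the build fails. A
minimal usage sketch, assuming `cc` is an available C compiler:

    import tempfile

    from libear import build_libear

    # Build the preload library into a scratch directory; the returned path
    # points at the resulting libear.so / libear.dylib, or is None on failure.
    work_dir = tempfile.mkdtemp()
    library = build_libear('cc', work_dir)
    print(library)
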
Added: cfe/trunk/tools/scan-build-py/libear/config.h.in
URL: http://llvm.org/viewvc/llvm-project/cfe/trunk/tools/scan-build-py/libear/config.h.in?rev=257533&view=auto
==============================================================================
--- cfe/trunk/tools/scan-build-py/libear/config.h.in (added)
+++ cfe/trunk/tools/scan-build-py/libear/config.h.in Tue Jan 12 16:38:41 2016
@@ -0,0 +1,23 @@
+/* -*- coding: utf-8 -*-
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+*/
+
+#pragma once
+
+#cmakedefine HAVE_EXECVE
+#cmakedefine HAVE_EXECV
+#cmakedefine HAVE_EXECVPE
+#cmakedefine HAVE_EXECVP
+#cmakedefine HAVE_EXECVP2
+#cmakedefine HAVE_EXECT
+#cmakedefine HAVE_EXECL
+#cmakedefine HAVE_EXECLP
+#cmakedefine HAVE_EXECLE
+#cmakedefine HAVE_POSIX_SPAWN
+#cmakedefine HAVE_POSIX_SPAWNP
+#cmakedefine HAVE_NSGETENVIRON
+
+#cmakedefine APPLE

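The `#cmakedefine` lines in this template are rewritten by
`Configure.write_by_template` from the previous file: a probed feature that
was found becomes a `#define`, everything else becomes a commented-out
`#undef`. The substitution rule, restated here with hypothetical probe
results:

    import re

    # Hypothetical probe results, standing in for Configure.results.
    results = {'HAVE_EXECVE': True, 'HAVE_EXECVP2': False}

    def transform(line, definitions):
        # Same rule as Configure.write_by_template above.
        match = re.match(r'^#cmakedefine\s+(\S+)', line)
        if not match:
            return line
        key = match.group(1)
        if definitions.get(key):
            return '#define {}\n'.format(key)
        return '/* #undef {} */\n'.format(key)

    print(transform('#cmakedefine HAVE_EXECVE\n', results).rstrip())
    print(transform('#cmakedefine HAVE_EXECVP2\n', results).rstrip())
    # prints:
    #   #define HAVE_EXECVE
    #   /* #undef HAVE_EXECVP2 */
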
Added: cfe/trunk/tools/scan-build-py/libear/ear.c
URL: http://llvm.org/viewvc/llvm-project/cfe/trunk/tools/scan-build-py/libear/ear.c?rev=257533&view=auto
==============================================================================
--- cfe/trunk/tools/scan-build-py/libear/ear.c (added)
+++ cfe/trunk/tools/scan-build-py/libear/ear.c Tue Jan 12 16:38:41 2016
@@ -0,0 +1,605 @@
+/* -*- coding: utf-8 -*-
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+*/
+
+/**
+ * This file implements a shared library. This library can be pre-loaded by
+ * the dynamic linker of the Operating System (OS). It implements a few
+ * functions related to process creation. By pre-loading this library, the
+ * executed process uses these functions instead of the standard library ones.
+ *
+ * The idea here is to inject some logic before calling the real methods. The
+ * logic is to dump the call into a file. To call the real method, this
+ * library does the job of the dynamic linker.
+ *
+ * The only input for the log writing is the destination directory.
+ * This is passed as an environment variable.
+ */
+
+#include "config.h"
+
+#include <stddef.h>
+#include <stdarg.h>
+#include <stdlib.h>
+#include <stdio.h>
+#include <string.h>
+#include <unistd.h>
+#include <dlfcn.h>
+#include <pthread.h>
+
+#if defined HAVE_POSIX_SPAWN || defined HAVE_POSIX_SPAWNP
+#include <spawn.h>
+#endif
+
+#if defined HAVE_NSGETENVIRON
+# include <crt_externs.h>
+#else
+extern char **environ;
+#endif
+
+#define ENV_OUTPUT "INTERCEPT_BUILD_TARGET_DIR"
+#ifdef APPLE
+# define ENV_FLAT    "DYLD_FORCE_FLAT_NAMESPACE"
+# define ENV_PRELOAD "DYLD_INSERT_LIBRARIES"
+# define ENV_SIZE 3
+#else
+# define ENV_PRELOAD "LD_PRELOAD"
+# define ENV_SIZE 2
+#endif
+
+#define DLSYM(TYPE_, VAR_, SYMBOL_)                                            \
+    union {                                                                    \
+        void *from;                                                            \
+        TYPE_ to;                                                              \
+    } cast;                                                                    \
+    if (0 == (cast.from = dlsym(RTLD_NEXT, SYMBOL_))) {                        \
+        perror("bear: dlsym");                                                 \
+        exit(EXIT_FAILURE);                                                    \
+    }                                                                          \
+    TYPE_ const VAR_ = cast.to;
+
+
+typedef char const * bear_env_t[ENV_SIZE];
+
+static int bear_capture_env_t(bear_env_t *env);
+static int bear_reset_env_t(bear_env_t *env);
+static void bear_release_env_t(bear_env_t *env);
+static char const **bear_update_environment(char *const envp[], bear_env_t *env);
+static char const **bear_update_environ(char const **in, char const *key, char const *value);
+static char **bear_get_environment();
+static void bear_report_call(char const *fun, char const *const argv[]);
+static char const **bear_strings_build(char const *arg, va_list *ap);
+static char const **bear_strings_copy(char const **const in);
+static char const **bear_strings_append(char const **in, char const *e);
+static size_t bear_strings_length(char const *const *in);
+static void bear_strings_release(char const **);
+
+
+static bear_env_t env_names =
+    { ENV_OUTPUT
+    , ENV_PRELOAD
+#ifdef ENV_FLAT
+    , ENV_FLAT
+#endif
+    };
+
+static bear_env_t initial_env =
+    { 0
+    , 0
+#ifdef ENV_FLAT
+    , 0
+#endif
+    };
+
+static int initialized = 0;
+static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
+
+static void on_load(void) __attribute__((constructor));
+static void on_unload(void) __attribute__((destructor));
+
+
+#ifdef HAVE_EXECVE
+static int call_execve(const char *path, char *const argv[],
+                       char *const envp[]);
+#endif
+#ifdef HAVE_EXECVP
+static int call_execvp(const char *file, char *const argv[]);
+#endif
+#ifdef HAVE_EXECVPE
+static int call_execvpe(const char *file, char *const argv[],
+                        char *const envp[]);
+#endif
+#ifdef HAVE_EXECVP2
+static int call_execvP(const char *file, const char *search_path,
+                       char *const argv[]);
+#endif
+#ifdef HAVE_EXECT
+static int call_exect(const char *path, char *const argv[],
+                      char *const envp[]);
+#endif
+#ifdef HAVE_POSIX_SPAWN
+static int call_posix_spawn(pid_t *restrict pid, const char *restrict path,
+                            const posix_spawn_file_actions_t *file_actions,
+                            const posix_spawnattr_t *restrict attrp,
+                            char *const argv[restrict],
+                            char *const envp[restrict]);
+#endif
+#ifdef HAVE_POSIX_SPAWNP
+static int call_posix_spawnp(pid_t *restrict pid, const char *restrict file,
+                             const posix_spawn_file_actions_t *file_actions,
+                             const posix_spawnattr_t *restrict attrp,
+                             char *const argv[restrict],
+                             char *const envp[restrict]);
+#endif
+
+
+/* Initialization method to capture the relevant environment variables.
+ */
+
+static void on_load(void) {
+    pthread_mutex_lock(&mutex);
+    if (!initialized)
+        initialized = bear_capture_env_t(&initial_env);
+    pthread_mutex_unlock(&mutex);
+}
+
+static void on_unload(void) {
+    pthread_mutex_lock(&mutex);
+    bear_release_env_t(&initial_env);
+    initialized = 0;
+    pthread_mutex_unlock(&mutex);
+}
+
+
+/* These are the methods we are trying to hijack.
+ */
+
+#ifdef HAVE_EXECVE
+int execve(const char *path, char *const argv[], char *const envp[]) {
+    bear_report_call(__func__, (char const *const *)argv);
+    return call_execve(path, argv, envp);
+}
+#endif
+
+#ifdef HAVE_EXECV
+#ifndef HAVE_EXECVE
+#error can not implement execv without execve
+#endif
+int execv(const char *path, char *const argv[]) {
+    bear_report_call(__func__, (char const *const *)argv);
+    char * const * envp = bear_get_environment();
+    return call_execve(path, argv, envp);
+}
+#endif
+
+#ifdef HAVE_EXECVPE
+int execvpe(const char *file, char *const argv[], char *const envp[]) {
+    bear_report_call(__func__, (char const *const *)argv);
+    return call_execvpe(file, argv, envp);
+}
+#endif
+
+#ifdef HAVE_EXECVP
+int execvp(const char *file, char *const argv[]) {
+    bear_report_call(__func__, (char const *const *)argv);
+    return call_execvp(file, argv);
+}
+#endif
+
+#ifdef HAVE_EXECVP2
+int execvP(const char *file, const char *search_path, char *const argv[]) {
+    bear_report_call(__func__, (char const *const *)argv);
+    return call_execvP(file, search_path, argv);
+}
+#endif
+
+#ifdef HAVE_EXECT
+int exect(const char *path, char *const argv[], char *const envp[]) {
+    bear_report_call(__func__, (char const *const *)argv);
+    return call_exect(path, argv, envp);
+}
+#endif
+
+#ifdef HAVE_EXECL
+# ifndef HAVE_EXECVE
+#  error can not implement execl without execve
+# endif
+int execl(const char *path, const char *arg, ...) {
+    va_list args;
+    va_start(args, arg);
+    char const **argv = bear_strings_build(arg, &args);
+    va_end(args);
+
+    bear_report_call(__func__, (char const *const *)argv);
+    char * const * envp = bear_get_environment();
+    int const result = call_execve(path, (char *const *)argv, envp);
+
+    bear_strings_release(argv);
+    return result;
+}
+#endif
+
+#ifdef HAVE_EXECLP
+# ifndef HAVE_EXECVP
+#  error can not implement execlp without execvp
+# endif
+int execlp(const char *file, const char *arg, ...) {
+    va_list args;
+    va_start(args, arg);
+    char const **argv = bear_strings_build(arg, &args);
+    va_end(args);
+
+    bear_report_call(__func__, (char const *const *)argv);
+    int const result = call_execvp(file, (char *const *)argv);
+
+    bear_strings_release(argv);
+    return result;
+}
+#endif
+
+#ifdef HAVE_EXECLE
+# ifndef HAVE_EXECVE
+#  error can not implement execle without execve
+# endif
+// int execle(const char *path, const char *arg, ..., char * const envp[]);
+int execle(const char *path, const char *arg, ...) {
+    va_list args;
+    va_start(args, arg);
+    char const **argv = bear_strings_build(arg, &args);
+    char const **envp = va_arg(args, char const **);
+    va_end(args);
+
+    bear_report_call(__func__, (char const *const *)argv);
+    int const result =
+        call_execve(path, (char *const *)argv, (char *const *)envp);
+
+    bear_strings_release(argv);
+    return result;
+}
+#endif
+
+#ifdef HAVE_POSIX_SPAWN
+int posix_spawn(pid_t *restrict pid, const char *restrict path,
+                const posix_spawn_file_actions_t *file_actions,
+                const posix_spawnattr_t *restrict attrp,
+                char *const argv[restrict], char *const envp[restrict]) {
+    bear_report_call(__func__, (char const *const *)argv);
+    return call_posix_spawn(pid, path, file_actions, attrp, argv, envp);
+}
+#endif
+
+#ifdef HAVE_POSIX_SPAWNP
+int posix_spawnp(pid_t *restrict pid, const char *restrict file,
+                 const posix_spawn_file_actions_t *file_actions,
+                 const posix_spawnattr_t *restrict attrp,
+                 char *const argv[restrict], char *const envp[restrict]) {
+    bear_report_call(__func__, (char const *const *)argv);
+    return call_posix_spawnp(pid, file, file_actions, attrp, argv, envp);
+}
+#endif
+
+/* These are the methods which forward the call to the standard implementation.
+ */
+
+#ifdef HAVE_EXECVE
+static int call_execve(const char *path, char *const argv[],
+                       char *const envp[]) {
+    typedef int (*func)(const char *, char *const *, char *const *);
+
+    DLSYM(func, fp, "execve");
+
+    char const **const menvp = bear_update_environment(envp, &initial_env);
+    int const result = (*fp)(path, argv, (char *const *)menvp);
+    bear_strings_release(menvp);
+    return result;
+}
+#endif
+
+#ifdef HAVE_EXECVPE
+static int call_execvpe(const char *file, char *const argv[],
+                        char *const envp[]) {
+    typedef int (*func)(const char *, char *const *, char *const *);
+
+    DLSYM(func, fp, "execvpe");
+
+    char const **const menvp = bear_update_environment(envp, &initial_env);
+    int const result = (*fp)(file, argv, (char *const *)menvp);
+    bear_strings_release(menvp);
+    return result;
+}
+#endif
+
+#ifdef HAVE_EXECVP
+static int call_execvp(const char *file, char *const argv[]) {
+    typedef int (*func)(const char *file, char *const argv[]);
+
+    DLSYM(func, fp, "execvp");
+
+    bear_env_t current_env;
+    bear_capture_env_t(&current_env);
+    bear_reset_env_t(&initial_env);
+    int const result = (*fp)(file, argv);
+    bear_reset_env_t(&current_env);
+    bear_release_env_t(&current_env);
+
+    return result;
+}
+#endif
+
+#ifdef HAVE_EXECVP2
+static int call_execvP(const char *file, const char *search_path,
+                       char *const argv[]) {
+    typedef int (*func)(const char *, const char *, char *const *);
+
+    DLSYM(func, fp, "execvP");
+
+    bear_env_t current_env;
+    bear_capture_env_t(&current_env);
+    bear_reset_env_t(&initial_env);
+    int const result = (*fp)(file, search_path, argv);
+    bear_reset_env_t(&current_env);
+    bear_release_env_t(&current_env);
+
+    return result;
+}
+#endif
+
+#ifdef HAVE_EXECT
+static int call_exect(const char *path, char *const argv[],
+                      char *const envp[]) {
+    typedef int (*func)(const char *, char *const *, char *const *);
+
+    DLSYM(func, fp, "exect");
+
+    char const **const menvp = bear_update_environment(envp, &initial_env);
+    int const result = (*fp)(path, argv, (char *const *)menvp);
+    bear_strings_release(menvp);
+    return result;
+}
+#endif
+
+#ifdef HAVE_POSIX_SPAWN
+static int call_posix_spawn(pid_t *restrict pid, const char *restrict path,
+                            const posix_spawn_file_actions_t *file_actions,
+                            const posix_spawnattr_t *restrict attrp,
+                            char *const argv[restrict],
+                            char *const envp[restrict]) {
+    typedef int (*func)(pid_t *restrict, const char *restrict,
+                        const posix_spawn_file_actions_t *,
+                        const posix_spawnattr_t *restrict,
+                        char *const *restrict, char *const *restrict);
+
+    DLSYM(func, fp, "posix_spawn");
+
+    char const **const menvp = bear_update_environment(envp, &initial_env);
+    int const result =
+        (*fp)(pid, path, file_actions, attrp, argv, (char *const *restrict)menvp);
+    bear_strings_release(menvp);
+    return result;
+}
+#endif
+
+#ifdef HAVE_POSIX_SPAWNP
+static int call_posix_spawnp(pid_t *restrict pid, const char *restrict file,
+                             const posix_spawn_file_actions_t *file_actions,
+                             const posix_spawnattr_t *restrict attrp,
+                             char *const argv[restrict],
+                             char *const envp[restrict]) {
+    typedef int (*func)(pid_t *restrict, const char *restrict,
+                        const posix_spawn_file_actions_t *,
+                        const posix_spawnattr_t *restrict,
+                        char *const *restrict, char *const *restrict);
+
+    DLSYM(func, fp, "posix_spawnp");
+
+    char const **const menvp = bear_update_environment(envp, &initial_env);
+    int const result =
+        (*fp)(pid, file, file_actions, attrp, argv, (char *const *restrict)menvp);
+    bear_strings_release(menvp);
+    return result;
+}
+#endif
+
+/* This method writes the log about the process creation. */
+
+static void bear_report_call(char const *fun, char const *const argv[]) {
+    static int const GS = 0x1d;
+    static int const RS = 0x1e;
+    static int const US = 0x1f;
+
+    if (!initialized)
+        return;
+
+    pthread_mutex_lock(&mutex);
+    const char *cwd = getcwd(NULL, 0);
+    if (0 == cwd) {
+        perror("bear: getcwd");
+        exit(EXIT_FAILURE);
+    }
+    char const * const out_dir = initial_env[0];
+    size_t const path_max_length = strlen(out_dir) + 32;
+    char filename[path_max_length];
+    if (-1 == snprintf(filename, path_max_length, "%s/%d.cmd", out_dir, getpid())) {
+        perror("bear: snprintf");
+        exit(EXIT_FAILURE);
+    }
+    FILE * fd = fopen(filename, "a+");
+    if (0 == fd) {
+        perror("bear: fopen");
+        exit(EXIT_FAILURE);
+    }
+    fprintf(fd, "%d%c", getpid(), RS);
+    fprintf(fd, "%d%c", getppid(), RS);
+    fprintf(fd, "%s%c", fun, RS);
+    fprintf(fd, "%s%c", cwd, RS);
+    size_t const argc = bear_strings_length(argv);
+    for (size_t it = 0; it < argc; ++it) {
+        fprintf(fd, "%s%c", argv[it], US);
+    }
+    fprintf(fd, "%c", GS);
+    if (fclose(fd)) {
+        perror("bear: fclose");
+        exit(EXIT_FAILURE);
+    }
+    free((void *)cwd);
+    pthread_mutex_unlock(&mutex);
+}
+
+/* Update the environment to ensure that child processes will copy the desired
+ * behaviour. */
+
+static int bear_capture_env_t(bear_env_t *env) {
+    int status = 1;
+    for (size_t it = 0; it < ENV_SIZE; ++it) {
+        char const * const env_value = getenv(env_names[it]);
+        char const * const env_copy = (env_value) ? strdup(env_value) : env_value;
+        (*env)[it] = env_copy;
+        status &= (env_copy) ? 1 : 0;
+    }
+    return status;
+}
+
+static int bear_reset_env_t(bear_env_t *env) {
+    int status = 1;
+    for (size_t it = 0; it < ENV_SIZE; ++it) {
+        if ((*env)[it]) {
+            setenv(env_names[it], (*env)[it], 1);
+        } else {
+            unsetenv(env_names[it]);
+        }
+    }
+    return status;
+}
+
+static void bear_release_env_t(bear_env_t *env) {
+    for (size_t it = 0; it < ENV_SIZE; ++it) {
+        free((void *)(*env)[it]);
+        (*env)[it] = 0;
+    }
+}
+
+static char const **bear_update_environment(char *const envp[], bear_env_t *env) {
+    char const **result = bear_strings_copy((char const **)envp);
+    for (size_t it = 0; it < ENV_SIZE && (*env)[it]; ++it)
+        result = bear_update_environ(result, env_names[it], (*env)[it]);
+    return result;
+}
+
+static char const **bear_update_environ(char const *envs[], char const *key, char const * const value) {
+    // find the key if it's there
+    size_t const key_length = strlen(key);
+    char const **it = envs;
+    for (; (it) && (*it); ++it) {
+        if ((0 == strncmp(*it, key, key_length)) &&
+            (strlen(*it) > key_length) && ('=' == (*it)[key_length]))
+            break;
+    }
+    // allocate an environment entry
+    size_t const value_length = strlen(value);
+    size_t const env_length = key_length + value_length + 2;
+    char *env = malloc(env_length);
+    if (0 == env) {
+        perror("bear: malloc [in env_update]");
+        exit(EXIT_FAILURE);
+    }
+    if (-1 == snprintf(env, env_length, "%s=%s", key, value)) {
+        perror("bear: snprintf");
+        exit(EXIT_FAILURE);
+    }
+    // replace or append the environment entry
+    if (it && *it) {
+        free((void *)*it);
+        *it = env;
+        return envs;
+    }
+    return bear_strings_append(envs, env);
+}
+
+static char **bear_get_environment() {
+#if defined HAVE_NSGETENVIRON
+    return *_NSGetEnviron();
+#else
+    return environ;
+#endif
+}
+
+/* util methods to deal with string arrays. environment and process arguments
+ * are both represented as string arrays. */
+
+static char const **bear_strings_build(char const *const arg, va_list *args) {
+    char const **result = 0;
+    size_t size = 0;
+    for (char const *it = arg; it; it = va_arg(*args, char const *)) {
+        result = realloc(result, (size + 1) * sizeof(char const *));
+        if (0 == result) {
+            perror("bear: realloc");
+            exit(EXIT_FAILURE);
+        }
+        char const *copy = strdup(it);
+        if (0 == copy) {
+            perror("bear: strdup");
+            exit(EXIT_FAILURE);
+        }
+        result[size++] = copy;
+    }
+    result = realloc(result, (size + 1) * sizeof(char const *));
+    if (0 == result) {
+        perror("bear: realloc");
+        exit(EXIT_FAILURE);
+    }
+    result[size++] = 0;
+
+    return result;
+}
+
+static char const **bear_strings_copy(char const **const in) {
+    size_t const size = bear_strings_length(in);
+
+    char const **const result = malloc((size + 1) * sizeof(char const *));
+    if (0 == result) {
+        perror("bear: malloc");
+        exit(EXIT_FAILURE);
+    }
+
+    char const **out_it = result;
+    for (char const *const *in_it = in; (in_it) && (*in_it);
+         ++in_it, ++out_it) {
+        *out_it = strdup(*in_it);
+        if (0 == *out_it) {
+            perror("bear: strdup");
+            exit(EXIT_FAILURE);
+        }
+    }
+    *out_it = 0;
+    return result;
+}
+
+static char const **bear_strings_append(char const **const in,
+                                        char const *const e) {
+    size_t size = bear_strings_length(in);
+    char const **result = realloc(in, (size + 2) * sizeof(char const *));
+    if (0 == result) {
+        perror("bear: realloc");
+        exit(EXIT_FAILURE);
+    }
+    result[size++] = e;
+    result[size++] = 0;
+    return result;
+}
+
+static size_t bear_strings_length(char const *const *const in) {
+    size_t result = 0;
+    for (char const *const *it = in; (it) && (*it); ++it)
+        ++result;
+    return result;
+}
+
+static void bear_strings_release(char const **in) {
+    for (char const *const *it = in; (it) && (*it); ++it) {
+        free((void *)*it);
+    }
+    free((void *)in);
+}

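`bear_report_call` above writes one record per intercepted exec call into a
`<pid>.cmd` file inside the directory named by the `INTERCEPT_BUILD_TARGET_DIR`
environment variable, using the ASCII group/record/unit separators (0x1d,
0x1e, 0x1f) as delimiters. A rough sketch of reading such a file back; the
real parsing logic lives elsewhere in the package and is not shown in this
excerpt:

    GS = chr(0x1d)  # terminates a record
    RS = chr(0x1e)  # separates the fixed fields
    US = chr(0x1f)  # separates (and terminates) the argv elements

    def parse_exec_trace(filename):
        """ Yield (pid, ppid, function, cwd, argv) tuples from a .cmd file. """
        with open(filename, 'r') as handle:
            content = handle.read()
        for record in filter(bool, content.split(GS)):
            pid, ppid, function, cwd, args = record.split(RS)
            yield pid, ppid, function, cwd, args.split(US)[:-1]
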
Added: cfe/trunk/tools/scan-build-py/libscanbuild/__init__.py
URL: http://llvm.org/viewvc/llvm-project/cfe/trunk/tools/scan-build-py/libscanbuild/__init__.py?rev=257533&view=auto
==============================================================================
--- cfe/trunk/tools/scan-build-py/libscanbuild/__init__.py (added)
+++ cfe/trunk/tools/scan-build-py/libscanbuild/__init__.py Tue Jan 12 16:38:41 2016
@@ -0,0 +1,82 @@
+# -*- coding: utf-8 -*-
+#                     The LLVM Compiler Infrastructure
+#
+# This file is distributed under the University of Illinois Open Source
+# License. See LICENSE.TXT for details.
+"""
+This module is responsible for running the Clang static analyzer against any
+build and generating reports.
+"""
+
+
+def duplicate_check(method):
+    """ Predicate to detect duplicated entries.
+
+    A unique hash method can be used to detect duplicates. Entries are
+    represented as dictionaries, which have no default hash method.
+    This implementation uses a set datatype to store the unique hash values.
+
+    This method returns a method which can detect the duplicate values. """
+
+    def predicate(entry):
+        entry_hash = predicate.unique(entry)
+        if entry_hash not in predicate.state:
+            predicate.state.add(entry_hash)
+            return False
+        return True
+
+    predicate.unique = method
+    predicate.state = set()
+    return predicate
+
+
+def tempdir():
+    """ Return the default temporary directory. """
+
+    from os import getenv
+    return getenv('TMPDIR', getenv('TEMP', getenv('TMP', '/tmp')))
+
+
+def initialize_logging(verbose_level):
+    """ Output content controlled by the verbosity level. """
+
+    import sys
+    import os.path
+    import logging
+    level = logging.WARNING - min(logging.WARNING, (10 * verbose_level))
+
+    if verbose_level <= 3:
+        fmt_string = '{0}: %(levelname)s: %(message)s'
+    else:
+        fmt_string = '{0}: %(levelname)s: %(funcName)s: %(message)s'
+
+    program = os.path.basename(sys.argv[0])
+    logging.basicConfig(format=fmt_string.format(program), level=level)
+
+
+def command_entry_point(function):
+    """ Decorator for command entry points. """
+
+    import functools
+    import logging
+
+    @functools.wraps(function)
+    def wrapper(*args, **kwargs):
+
+        exit_code = 127
+        try:
+            exit_code = function(*args, **kwargs)
+        except KeyboardInterrupt:
+            logging.warning('Keyboard interrupt')
+        except Exception:
+            logging.exception('Internal error.')
+            if logging.getLogger().isEnabledFor(logging.DEBUG):
+                logging.error("Please report this bug and attach the output "
+                              "to the bug report")
+            else:
+                logging.error("Please run this command again and turn on "
+                              "verbose mode (add '-vvv' as argument).")
+        finally:
+            return exit_code
+
+    return wrapper

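`duplicate_check` above builds a stateful predicate: the first time a given
hash value is seen the predicate returns `False`, and every later occurrence
returns `True`. A small sketch of filtering compilation database entries with
it; the hash function used here is only an illustration:

    from libscanbuild import duplicate_check

    # Treat entries with the same file and command as duplicates.
    is_duplicate = duplicate_check(
        lambda entry: (entry['file'], entry['command']))

    entries = [
        {'file': 'main.c', 'command': 'cc -c main.c'},
        {'file': 'main.c', 'command': 'cc -c main.c'},   # duplicate of the first
        {'file': 'util.c', 'command': 'cc -c util.c'},
    ]
    unique = [entry for entry in entries if not is_duplicate(entry)]
    # 'unique' keeps only the first and the third entry
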
Added: cfe/trunk/tools/scan-build-py/libscanbuild/analyze.py
URL: http://llvm.org/viewvc/llvm-project/cfe/trunk/tools/scan-build-py/libscanbuild/analyze.py?rev=257533&view=auto
==============================================================================
--- cfe/trunk/tools/scan-build-py/libscanbuild/analyze.py (added)
+++ cfe/trunk/tools/scan-build-py/libscanbuild/analyze.py Tue Jan 12 16:38:41 2016
@@ -0,0 +1,502 @@
+# -*- coding: utf-8 -*-
+#                     The LLVM Compiler Infrastructure
+#
+# This file is distributed under the University of Illinois Open Source
+# License. See LICENSE.TXT for details.
+""" This module implements the 'scan-build' command API.
+
+Running the static analyzer against a build is done in multiple steps:
+
+ -- Intercept: capture the compilation commands during the build,
+ -- Analyze:   run the analyzer against the captured commands,
+ -- Report:    create a cover report from the analyzer outputs.  """
+
+import sys
+import re
+import os
+import os.path
+import json
+import argparse
+import logging
+import subprocess
+import multiprocessing
+from libscanbuild import initialize_logging, tempdir, command_entry_point
+from libscanbuild.runner import run
+from libscanbuild.intercept import capture
+from libscanbuild.report import report_directory, document
+from libscanbuild.clang import get_checkers
+from libscanbuild.runner import action_check
+from libscanbuild.command import classify_parameters, classify_source
+
+__all__ = ['analyze_build_main', 'analyze_build_wrapper']
+
+COMPILER_WRAPPER_CC = 'analyze-cc'
+COMPILER_WRAPPER_CXX = 'analyze-c++'
+
+
+@command_entry_point
+def analyze_build_main(bin_dir, from_build_command):
+    """ Entry point for 'analyze-build' and 'scan-build'. """
+
+    parser = create_parser(from_build_command)
+    args = parser.parse_args()
+    validate(parser, args, from_build_command)
+
+    # setup logging
+    initialize_logging(args.verbose)
+    logging.debug('Parsed arguments: %s', args)
+
+    with report_directory(args.output, args.keep_empty) as target_dir:
+        if not from_build_command:
+            # run analyzer only and generate cover report
+            run_analyzer(args, target_dir)
+            number_of_bugs = document(args, target_dir, True)
+            return number_of_bugs if args.status_bugs else 0
+        elif args.intercept_first:
+            # run build command and capture compiler executions
+            exit_code = capture(args, bin_dir)
+            # next step to run the analyzer against the captured commands
+            if need_analyzer(args.build):
+                run_analyzer(args, target_dir)
+                # cover report generation and bug counting
+                number_of_bugs = document(args, target_dir, True)
+                # remove the compilation database when it was not requested
+                if os.path.exists(args.cdb):
+                    os.unlink(args.cdb)
+                # set exit status as it was requested
+                return number_of_bugs if args.status_bugs else exit_code
+            else:
+                return exit_code
+        else:
+            # run the build command with compiler wrappers which
+            # execute the analyzer too. (interposition)
+            environment = setup_environment(args, target_dir, bin_dir)
+            logging.debug('run build in environment: %s', environment)
+            exit_code = subprocess.call(args.build, env=environment)
+            logging.debug('build finished with exit code: %d', exit_code)
+            # cover report generation and bug counting
+            number_of_bugs = document(args, target_dir, False)
+            # set exit status as it was requested
+            return number_of_bugs if args.status_bugs else exit_code
+
+
+def need_analyzer(args):
+    """ Check the intent of the build command.
+
+    When the static analyzer runs against a project's configure step, it
+    should be silent; there is no need to analyze or generate a report.
+
+    Running `scan-build` against the configure step might be necessary
+    when compiler wrappers are used. That's the moment when the build setup
+    checks the compiler and captures its location for the build process. """
+
+    return len(args) and not re.search('configure|autogen', args[0])
+
+
+def run_analyzer(args, output_dir):
+    """ Runs the analyzer against the given compilation database. """
+
+    def exclude(filename):
+        """ Return true when any excluded directory prefixes the filename. """
+        return any(re.match(r'^' + directory, filename)
+                   for directory in args.excludes)
+
+    consts = {
+        'clang': args.clang,
+        'output_dir': output_dir,
+        'output_format': args.output_format,
+        'output_failures': args.output_failures,
+        'direct_args': analyzer_params(args)
+    }
+
+    logging.debug('run analyzer against compilation database')
+    with open(args.cdb, 'r') as handle:
+        generator = (dict(cmd, **consts)
+                     for cmd in json.load(handle) if not exclude(cmd['file']))
+        # when verbose output requested execute sequentially
+        pool = multiprocessing.Pool(1 if args.verbose > 2 else None)
+        for current in pool.imap_unordered(run, generator):
+            if current is not None:
+                # display error message from the static analyzer
+                for line in current['error_output']:
+                    logging.info(line.rstrip())
+        pool.close()
+        pool.join()
+
+
+def setup_environment(args, destination, bin_dir):
+    """ Set up environment for build command to interpose compiler wrapper. """
+
+    environment = dict(os.environ)
+    environment.update({
+        'CC': os.path.join(bin_dir, COMPILER_WRAPPER_CC),
+        'CXX': os.path.join(bin_dir, COMPILER_WRAPPER_CXX),
+        'ANALYZE_BUILD_CC': args.cc,
+        'ANALYZE_BUILD_CXX': args.cxx,
+        'ANALYZE_BUILD_CLANG': args.clang if need_analyzer(args.build) else '',
+        'ANALYZE_BUILD_VERBOSE': 'DEBUG' if args.verbose > 2 else 'WARNING',
+        'ANALYZE_BUILD_REPORT_DIR': destination,
+        'ANALYZE_BUILD_REPORT_FORMAT': args.output_format,
+        'ANALYZE_BUILD_REPORT_FAILURES': 'yes' if args.output_failures else '',
+        'ANALYZE_BUILD_PARAMETERS': ' '.join(analyzer_params(args))
+    })
+    return environment
+
+
+def analyze_build_wrapper(cplusplus):
+    """ Entry point for `analyze-cc` and `analyze-c++` compiler wrappers. """
+
+    # initialize wrapper logging
+    logging.basicConfig(format='analyze: %(levelname)s: %(message)s',
+                        level=os.getenv('ANALYZE_BUILD_VERBOSE', 'INFO'))
+    # execute with real compiler
+    compiler = os.getenv('ANALYZE_BUILD_CXX', 'c++') if cplusplus \
+        else os.getenv('ANALYZE_BUILD_CC', 'cc')
+    compilation = [compiler] + sys.argv[1:]
+    logging.info('execute compiler: %s', compilation)
+    result = subprocess.call(compilation)
+    # exit when it fails, ...
+    if result or not os.getenv('ANALYZE_BUILD_CLANG'):
+        return result
+    # ... and run the analyzer if all went well.
+    try:
+        # collect the needed parameters from environment, crash when missing
+        consts = {
+            'clang': os.getenv('ANALYZE_BUILD_CLANG'),
+            'output_dir': os.getenv('ANALYZE_BUILD_REPORT_DIR'),
+            'output_format': os.getenv('ANALYZE_BUILD_REPORT_FORMAT'),
+            'output_failures': os.getenv('ANALYZE_BUILD_REPORT_FAILURES'),
+            'direct_args': os.getenv('ANALYZE_BUILD_PARAMETERS',
+                                     '').split(' '),
+            'directory': os.getcwd(),
+        }
+        # get relevant parameters from command line arguments
+        args = classify_parameters(sys.argv)
+        filenames = args.pop('files', [])
+        for filename in (name for name in filenames if classify_source(name)):
+            parameters = dict(args, file=filename, **consts)
+            logging.debug('analyzer parameters %s', parameters)
+            current = action_check(parameters)
+            # display error message from the static analyzer
+            if current is not None:
+                for line in current['error_output']:
+                    logging.info(line.rstrip())
+    except Exception:
+        logging.exception("run analyzer inside compiler wrapper failed.")
+    return 0
+
+
+def analyzer_params(args):
+    """ A group of command line arguments can be mapped to command
+    line arguments of the analyzer. This method generates those. """
+
+    def prefix_with(constant, pieces):
+        """ From a sequence create another sequence where every second element
+        is from the original sequence and the odd elements are the prefix.
+
+        eg.: prefix_with(0, [1,2,3]) creates [0, 1, 0, 2, 0, 3] """
+
+        return [elem for piece in pieces for elem in [constant, piece]]
+
+    result = []
+
+    if args.store_model:
+        result.append('-analyzer-store={0}'.format(args.store_model))
+    if args.constraints_model:
+        result.append(
+            '-analyzer-constraints={0}'.format(args.constraints_model))
+    if args.internal_stats:
+        result.append('-analyzer-stats')
+    if args.analyze_headers:
+        result.append('-analyzer-opt-analyze-headers')
+    if args.stats:
+        result.append('-analyzer-checker=debug.Stats')
+    if args.maxloop:
+        result.extend(['-analyzer-max-loop', str(args.maxloop)])
+    if args.output_format:
+        result.append('-analyzer-output={0}'.format(args.output_format))
+    if args.analyzer_config:
+        result.append(args.analyzer_config)
+    if args.verbose >= 4:
+        result.append('-analyzer-display-progress')
+    if args.plugins:
+        result.extend(prefix_with('-load', args.plugins))
+    if args.enable_checker:
+        checkers = ','.join(args.enable_checker)
+        result.extend(['-analyzer-checker', checkers])
+    if args.disable_checker:
+        checkers = ','.join(args.disable_checker)
+        result.extend(['-analyzer-disable-checker', checkers])
+    if os.getenv('UBIVIZ'):
+        result.append('-analyzer-viz-egraph-ubigraph')
+
+    return prefix_with('-Xclang', result)
+
+
+def print_active_checkers(checkers):
+    """ Print active checkers to stdout. """
+
+    for name in sorted(name for name, (_, active) in checkers.items()
+                       if active):
+        print(name)
+
+
+def print_checkers(checkers):
+    """ Print verbose checker help to stdout. """
+
+    print('')
+    print('available checkers:')
+    print('')
+    for name in sorted(checkers.keys()):
+        description, active = checkers[name]
+        prefix = '+' if active else ' '
+        if len(name) > 30:
+            print(' {0} {1}'.format(prefix, name))
+            print(' ' * 35 + description)
+        else:
+            print(' {0} {1: <30}  {2}'.format(prefix, name, description))
+    print('')
+    print('NOTE: "+" indicates that an analysis is enabled by default.')
+    print('')
+
+
+def validate(parser, args, from_build_command):
+    """ Validation done by the parser itself, but semantic check still
+    needs to be done. This method is doing that. """
+
+    if args.help_checkers_verbose:
+        print_checkers(get_checkers(args.clang, args.plugins))
+        parser.exit()
+    elif args.help_checkers:
+        print_active_checkers(get_checkers(args.clang, args.plugins))
+        parser.exit()
+
+    if from_build_command and not args.build:
+        parser.error('missing build command')
+
+
+def create_parser(from_build_command):
+    """ Command line argument parser factory method. """
+
+    parser = argparse.ArgumentParser(
+        formatter_class=argparse.ArgumentDefaultsHelpFormatter)
+
+    parser.add_argument(
+        '--verbose', '-v',
+        action='count',
+        default=0,
+        help="""Enable verbose output from '%(prog)s'. A second and third
+                flag increases verbosity.""")
+    parser.add_argument(
+        '--override-compiler',
+        action='store_true',
+        help="""Always resort to the compiler wrapper even when better
+                interposition methods are available.""")
+    parser.add_argument(
+        '--intercept-first',
+        action='store_true',
+        help="""Run the build commands only, build a compilation database,
+                then run the static analyzer afterwards.
+                Generally speaking it has better coverage on build commands.
+                With '--override-compiler' it use compiler wrapper, but does
+                not run the analyzer till the build is finished. """)
+    parser.add_argument(
+        '--cdb',
+        metavar='<file>',
+        default="compile_commands.json",
+        help="""The JSON compilation database.""")
+
+    parser.add_argument(
+        '--output', '-o',
+        metavar='<path>',
+        default=tempdir(),
+        help="""Specifies the output directory for analyzer reports.
+                Subdirectory will be created if default directory is targeted.
+                """)
+    parser.add_argument(
+        '--status-bugs',
+        action='store_true',
+        help="""By default, the exit status of '%(prog)s' is the same as the
+                executed build command. Specifying this option causes the exit
+                status of '%(prog)s' to be non-zero if it found potential bugs
+                and zero otherwise.""")
+    parser.add_argument(
+        '--html-title',
+        metavar='<title>',
+        help="""Specify the title used on generated HTML pages.
+                If not specified, a default title will be used.""")
+    parser.add_argument(
+        '--analyze-headers',
+        action='store_true',
+        help="""Also analyze functions in #included files. By default, such
+                functions are skipped unless they are called by functions
+                within the main source file.""")
+    format_group = parser.add_mutually_exclusive_group()
+    format_group.add_argument(
+        '--plist', '-plist',
+        dest='output_format',
+        const='plist',
+        default='html',
+        action='store_const',
+        help="""This option outputs the results as a set of .plist files.""")
+    format_group.add_argument(
+        '--plist-html', '-plist-html',
+        dest='output_format',
+        const='plist-html',
+        default='html',
+        action='store_const',
+        help="""This option outputs the results as a set of .html and .plist
+                files.""")
+    # TODO: implement '-view '
+
+    advanced = parser.add_argument_group('advanced options')
+    advanced.add_argument(
+        '--keep-empty',
+        action='store_true',
+        help="""Don't remove the build results directory even if no issues
+                were reported.""")
+    advanced.add_argument(
+        '--no-failure-reports', '-no-failure-reports',
+        dest='output_failures',
+        action='store_false',
+        help="""Do not create a 'failures' subdirectory that includes analyzer
+                crash reports and preprocessed source files.""")
+    advanced.add_argument(
+        '--stats', '-stats',
+        action='store_true',
+        help="""Generates visitation statistics for the project being analyzed.
+                """)
+    advanced.add_argument(
+        '--internal-stats',
+        action='store_true',
+        help="""Generate internal analyzer statistics.""")
+    advanced.add_argument(
+        '--maxloop', '-maxloop',
+        metavar='<loop count>',
+        type=int,
+        help="""Specifiy the number of times a block can be visited before
+                giving up. Increase for more comprehensive coverage at a cost
+                of speed.""")
+    advanced.add_argument(
+        '--store', '-store',
+        metavar='<model>',
+        dest='store_model',
+        choices=['region', 'basic'],
+        help="""Specify the store model used by the analyzer.
+                'region' specifies a field- sensitive store model.
+                'basic' which is far less precise but can more quickly
+                analyze code. 'basic' was the default store model for
+                checker-0.221 and earlier.""")
+    advanced.add_argument(
+        '--constraints', '-constraints',
+        metavar='<model>',
+        dest='constraints_model',
+        choices=['range', 'basic'],
+        help="""Specify the contraint engine used by the analyzer. Specifying
+                'basic' uses a simpler, less powerful constraint model used by
+                checker-0.160 and earlier.""")
+    advanced.add_argument(
+        '--use-analyzer',
+        metavar='<path>',
+        dest='clang',
+        default='clang',
+        help="""'%(prog)s' uses the 'clang' executable relative to itself for
+                static analysis. One can override this behavior with this
+                option by using the 'clang' packaged with Xcode (on OS X) or
+                from the PATH.""")
+    advanced.add_argument(
+        '--use-cc',
+        metavar='<path>',
+        dest='cc',
+        default='cc',
+        help="""When '%(prog)s' analyzes a project by interposing a "fake
+                compiler", which executes a real compiler for compilation and
+                do other tasks (to run the static analyzer or just record the
+                compiler invocation). Because of this interposing, '%(prog)s'
+                does not know what compiler your project normally uses.
+                Instead, it simply overrides the CC environment variable, and
+                guesses your default compiler.
+
+                If you need '%(prog)s' to use a specific compiler for
+                *compilation* then you can use this option to specify a path
+                to that compiler.""")
+    advanced.add_argument(
+        '--use-c++',
+        metavar='<path>',
+        dest='cxx',
+        default='c++',
+        help="""This is the same as "--use-cc" but for C++ code.""")
+    advanced.add_argument(
+        '--analyzer-config', '-analyzer-config',
+        metavar='<options>',
+        help="""Provide options to pass through to the analyzer's
+                -analyzer-config flag. Several options can be given,
+                separated by commas: 'key1=val1,key2=val2'
+
+                Available options:
+                    stable-report-filename=true or false (default)
+
+                Switch the page naming to:
+                report-<filename>-<function/method name>-<id>.html
+                instead of report-XXXXXX.html""")
+    advanced.add_argument(
+        '--exclude',
+        metavar='<directory>',
+        dest='excludes',
+        action='append',
+        default=[],
+        help="""Do not run static analyzer against files found in this
+                directory. (You can specify this option multiple times.)
+                Could be usefull when project contains 3rd party libraries.
+                The directory path shall be absolute path as file names in
+                the compilation database.""")
+
+    plugins = parser.add_argument_group('checker options')
+    plugins.add_argument(
+        '--load-plugin', '-load-plugin',
+        metavar='<plugin library>',
+        dest='plugins',
+        action='append',
+        help="""Loading external checkers using the clang plugin interface.""")
+    plugins.add_argument(
+        '--enable-checker', '-enable-checker',
+        metavar='<checker name>',
+        action=AppendCommaSeparated,
+        help="""Enable specific checker.""")
+    plugins.add_argument(
+        '--disable-checker', '-disable-checker',
+        metavar='<checker name>',
+        action=AppendCommaSeparated,
+        help="""Disable specific checker.""")
+    plugins.add_argument(
+        '--help-checkers',
+        action='store_true',
+        help="""A default group of checkers is run unless explicitly disabled.
+                Exactly which checkers constitute the default group is a
+                function of the operating system in use. These can be printed
+                with this flag.""")
+    plugins.add_argument(
+        '--help-checkers-verbose',
+        action='store_true',
+        help="""Print all available checkers and mark the enabled ones.""")
+
+    if from_build_command:
+        parser.add_argument(
+            dest='build',
+            nargs=argparse.REMAINDER,
+            help="""Command to run.""")
+
+    return parser
+
+
+class AppendCommaSeparated(argparse.Action):
+    """ argparse Action class to support multiple comma separated lists. """
+
+    def __call__(self, __parser, namespace, values, __option_string):
+        # getattr(obj, attr, default) does not really return the default, but None
+        if getattr(namespace, self.dest, None) is None:
+            setattr(namespace, self.dest, [])
+        # once it's fixed we can use it as expected
+        actual = getattr(namespace, self.dest)
+        actual.extend(values.split(','))
+        setattr(namespace, self.dest, actual)
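
For readers skimming the above: the '-Xclang' interleaving done by
analyzer_params() can be sketched in isolation like this (a standalone
illustration with made-up flag values; prefix_with is re-declared here only
because it is nested inside analyzer_params in the patch):

    # prefix_with interleaves a constant before every element, so the
    # analyzer flags can be forwarded through the driver with '-Xclang'.
    def prefix_with(constant, pieces):
        return [elem for piece in pieces for elem in [constant, piece]]

    flags = ['-analyzer-max-loop', '8', '-analyzer-output=html']
    print(prefix_with('-Xclang', flags))
    # ['-Xclang', '-analyzer-max-loop', '-Xclang', '8',
    #  '-Xclang', '-analyzer-output=html']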

Added: cfe/trunk/tools/scan-build-py/libscanbuild/clang.py
URL: http://llvm.org/viewvc/llvm-project/cfe/trunk/tools/scan-build-py/libscanbuild/clang.py?rev=257533&view=auto
==============================================================================
--- cfe/trunk/tools/scan-build-py/libscanbuild/clang.py (added)
+++ cfe/trunk/tools/scan-build-py/libscanbuild/clang.py Tue Jan 12 16:38:41 2016
@@ -0,0 +1,156 @@
+# -*- coding: utf-8 -*-
+#                     The LLVM Compiler Infrastructure
+#
+# This file is distributed under the University of Illinois Open Source
+# License. See LICENSE.TXT for details.
+""" This module is responsible for the Clang executable.
+
+Since the Clang command line interface is very rich, but this project uses
+only a small subset of it, it makes sense to create function-specific
+wrappers. """
+
+import re
+import subprocess
+import logging
+from libscanbuild.shell import decode
+
+__all__ = ['get_version', 'get_arguments', 'get_checkers']
+
+
+def get_version(cmd):
+    """ Returns the compiler version as string. """
+
+    lines = subprocess.check_output([cmd, '-v'], stderr=subprocess.STDOUT)
+    return lines.decode('ascii').splitlines()[0]
+
+
+def get_arguments(command, cwd):
+    """ Capture Clang invocation.
+
+    This method returns the front-end invocation that would be executed as
+    a result of the given driver invocation. """
+
+    def lastline(stream):
+        last = None
+        for line in stream:
+            last = line
+        if last is None:
+            raise Exception("output not found")
+        return last
+
+    cmd = command[:]
+    cmd.insert(1, '-###')
+    logging.debug('exec command in %s: %s', cwd, ' '.join(cmd))
+    child = subprocess.Popen(cmd,
+                             cwd=cwd,
+                             universal_newlines=True,
+                             stdout=subprocess.PIPE,
+                             stderr=subprocess.STDOUT)
+    line = lastline(child.stdout)
+    child.stdout.close()
+    child.wait()
+    if child.returncode == 0:
+        if re.search(r'clang(.*): error:', line):
+            raise Exception(line)
+        return decode(line)
+    else:
+        raise Exception(line)
+
+
+def get_active_checkers(clang, plugins):
+    """ To get the default plugins we execute Clang to print how this
+    compilation would be called.
+
+    For input file we specify stdin and pass only language information. """
+
+    def checkers(language):
+        """ Returns a list of active checkers for the given language. """
+
+        load = [elem
+                for plugin in plugins
+                for elem in ['-Xclang', '-load', '-Xclang', plugin]]
+        cmd = [clang, '--analyze'] + load + ['-x', language, '-']
+        pattern = re.compile(r'^-analyzer-checker=(.*)$')
+        return [pattern.match(arg).group(1)
+                for arg in get_arguments(cmd, '.') if pattern.match(arg)]
+
+    result = set()
+    for language in ['c', 'c++', 'objective-c', 'objective-c++']:
+        result.update(checkers(language))
+    return result
+
+
+def get_checkers(clang, plugins):
+    """ Get all the available checkers from default and from the plugins.
+
+    clang -- the compiler we are using
+    plugins -- list of plugins which was requested by the user
+
+    This method returns a dictionary of all available checkers and status.
+
+    {<plugin name>: (<plugin description>, <is active by default>)} """
+
+    plugins = plugins if plugins else []
+
+    def parse_checkers(stream):
+        """ Parse clang -analyzer-checker-help output.
+
+        Below the line 'CHECKERS:' are the name-description pairs.
+        Most of them fit on one line, but some plugins with long names have
+        the name and the description on separate lines.
+
+        The plugin name is always prefixed with two space characters. The
+        name contains no whitespace. Then, separated by a newline (if the
+        name is too long) or by other space characters, comes the
+        description of the plugin. The description ends with a newline
+        character. """
+
+        # find checkers header
+        for line in stream:
+            if re.match(r'^CHECKERS:', line):
+                break
+        # find entries
+        state = None
+        for line in stream:
+            if state and not re.match(r'^\s\s\S', line):
+                yield (state, line.strip())
+                state = None
+            elif re.match(r'^\s\s\S+$', line.rstrip()):
+                state = line.strip()
+            else:
+                pattern = re.compile(r'^\s\s(?P<key>\S*)\s*(?P<value>.*)')
+                match = pattern.match(line.rstrip())
+                if match:
+                    current = match.groupdict()
+                    yield (current['key'], current['value'])
+
+    def is_active(actives, entry):
+        """ Returns true if plugin name is matching the active plugin names.
+
+        actives -- set of active plugin names (or prefixes).
+        entry -- the current plugin name to judge.
+
+        The active plugin names are specific plugin names or prefix of some
+        names. One example for prefix, when it say 'unix' and it shall match
+        on 'unix.API', 'unix.Malloc' and 'unix.MallocSizeof'. """
+
+        return any(re.match(r'^' + a + r'(\.|$)', entry) for a in actives)
+
+    actives = get_active_checkers(clang, plugins)
+
+    load = [elem for plugin in plugins for elem in ['-load', plugin]]
+    cmd = [clang, '-cc1'] + load + ['-analyzer-checker-help']
+
+    logging.debug('exec command: %s', ' '.join(cmd))
+    child = subprocess.Popen(cmd,
+                             universal_newlines=True,
+                             stdout=subprocess.PIPE,
+                             stderr=subprocess.STDOUT)
+    checkers = {
+        k: (v, is_active(actives, k))
+        for k, v in parse_checkers(child.stdout)
+    }
+    child.stdout.close()
+    child.wait()
+    if child.returncode == 0 and len(checkers):
+        return checkers
+    else:
+        raise Exception('Could not query Clang for available checkers.')
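
The prefix matching used by is_active() above can be tried out with a small
standalone sketch (the checker names below are just examples, not queried
from any real clang binary):

    import re

    def is_active(actives, entry):
        return any(re.match(r'^' + a + r'(\.|$)', entry) for a in actives)

    actives = {'unix', 'core.DivideZero'}
    print(is_active(actives, 'unix.Malloc'))            # True, prefix match
    print(is_active(actives, 'core.NullDereference'))   # False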

Added: cfe/trunk/tools/scan-build-py/libscanbuild/command.py
URL: http://llvm.org/viewvc/llvm-project/cfe/trunk/tools/scan-build-py/libscanbuild/command.py?rev=257533&view=auto
==============================================================================
--- cfe/trunk/tools/scan-build-py/libscanbuild/command.py (added)
+++ cfe/trunk/tools/scan-build-py/libscanbuild/command.py Tue Jan 12 16:38:41 2016
@@ -0,0 +1,133 @@
+# -*- coding: utf-8 -*-
+#                     The LLVM Compiler Infrastructure
+#
+# This file is distributed under the University of Illinois Open Source
+# License. See LICENSE.TXT for details.
+""" This module is responsible for to parse a compiler invocation. """
+
+import re
+import os
+
+__all__ = ['Action', 'classify_parameters', 'classify_source']
+
+
+class Action(object):
+    """ Enumeration class for compiler action. """
+
+    Link, Compile, Ignored = range(3)
+
+
+def classify_parameters(command):
+    """ Parses the command line arguments of the given invocation. """
+
+    # result value of this method.
+    # some values are preset, some will be set only when found.
+    result = {
+        'action': Action.Link,
+        'files': [],
+        'output': None,
+        'compile_options': [],
+        'c++': is_cplusplus_compiler(command[0])
+        # archs_seen
+        # language
+    }
+
+    # data structure to ignore compiler parameters.
+    # key: parameter name, value: number of parameters to ignore afterwards.
+    ignored = {
+        '-g': 0,
+        '-fsyntax-only': 0,
+        '-save-temps': 0,
+        '-install_name': 1,
+        '-exported_symbols_list': 1,
+        '-current_version': 1,
+        '-compatibility_version': 1,
+        '-init': 1,
+        '-e': 1,
+        '-seg1addr': 1,
+        '-bundle_loader': 1,
+        '-multiply_defined': 1,
+        '-sectorder': 3,
+        '--param': 1,
+        '--serialize-diagnostics': 1
+    }
+
+    args = iter(command[1:])
+    for arg in args:
+        # compiler action parameters are the most important ones...
+        if arg in {'-E', '-S', '-cc1', '-M', '-MM', '-###'}:
+            result.update({'action': Action.Ignored})
+        elif arg == '-c':
+            result.update({'action': max(result['action'], Action.Compile)})
+        # arch flags are taken...
+        elif arg == '-arch':
+            archs = result.get('archs_seen', [])
+            result.update({'archs_seen': archs + [next(args)]})
+        # explicit language option taken...
+        elif arg == '-x':
+            result.update({'language': next(args)})
+        # output flag taken...
+        elif arg == '-o':
+            result.update({'output': next(args)})
+        # warning disable options are taken...
+        elif re.match(r'^-Wno-', arg):
+            result['compile_options'].append(arg)
+        # warning options are ignored...
+        elif re.match(r'^-[mW].+', arg):
+            pass
+        # some preprocessor parameters are ignored...
+        elif arg in {'-MD', '-MMD', '-MG', '-MP'}:
+            pass
+        elif arg in {'-MF', '-MT', '-MQ'}:
+            next(args)
+        # linker options are ignored...
+        elif arg in {'-static', '-shared', '-s', '-rdynamic'} or \
+                re.match(r'^-[lL].+', arg):
+            pass
+        elif arg in {'-l', '-L', '-u', '-z', '-T', '-Xlinker'}:
+            next(args)
+        # some other options are ignored...
+        elif arg in ignored.keys():
+            for _ in range(ignored[arg]):
+                next(args)
+        # parameters which look like source files are taken...
+        elif re.match(r'^[^-].+', arg) and classify_source(arg):
+            result['files'].append(arg)
+        # and consider everything else as compile option.
+        else:
+            result['compile_options'].append(arg)
+
+    return result
+
+
+def classify_source(filename, cplusplus=False):
+    """ Return the language from file name extension. """
+
+    mapping = {
+        '.c': 'c++' if cplusplus else 'c',
+        '.i': 'c++-cpp-output' if cplusplus else 'c-cpp-output',
+        '.ii': 'c++-cpp-output',
+        '.m': 'objective-c',
+        '.mi': 'objective-c-cpp-output',
+        '.mm': 'objective-c++',
+        '.mii': 'objective-c++-cpp-output',
+        '.C': 'c++',
+        '.cc': 'c++',
+        '.CC': 'c++',
+        '.cp': 'c++',
+        '.cpp': 'c++',
+        '.cxx': 'c++',
+        '.c++': 'c++',
+        '.C++': 'c++',
+        '.txx': 'c++'
+    }
+
+    __, extension = os.path.splitext(os.path.basename(filename))
+    return mapping.get(extension)
+
+
+def is_cplusplus_compiler(name):
+    """ Returns true when the compiler name refer to a C++ compiler. """
+
+    match = re.match(r'^([^/]*/)*(\w*-)*(\w+\+\+)(-(\d+(\.\d+){0,3}))?$', name)
+    return False if match is None else True
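
A quick, hypothetical example of what classify_parameters and
classify_source return (assuming the libscanbuild package is importable; the
compiler command below is made up):

    from libscanbuild.command import classify_parameters, classify_source, Action

    result = classify_parameters(['g++', '-c', '-Wall', '-o', 'foo.o', 'foo.cpp'])
    assert result['action'] == Action.Compile
    assert result['files'] == ['foo.cpp']
    assert result['output'] == 'foo.o'
    assert result['c++'] is True          # deduced from the 'g++' program name
    print(classify_source('foo.cpp'))     # 'c++'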

Added: cfe/trunk/tools/scan-build-py/libscanbuild/intercept.py
URL: http://llvm.org/viewvc/llvm-project/cfe/trunk/tools/scan-build-py/libscanbuild/intercept.py?rev=257533&view=auto
==============================================================================
--- cfe/trunk/tools/scan-build-py/libscanbuild/intercept.py (added)
+++ cfe/trunk/tools/scan-build-py/libscanbuild/intercept.py Tue Jan 12 16:38:41 2016
@@ -0,0 +1,359 @@
+# -*- coding: utf-8 -*-
+#                     The LLVM Compiler Infrastructure
+#
+# This file is distributed under the University of Illinois Open Source
+# License. See LICENSE.TXT for details.
+""" This module is responsible to capture the compiler invocation of any
+build process. The result of that should be a compilation database.
+
+This implementation is using the LD_PRELOAD or DYLD_INSERT_LIBRARIES
+mechanisms provided by the dynamic linker. The related library is implemented
+in C language and can be found under 'libear' directory.
+
+The 'libear' library is capturing all child process creation and logging the
+relevant information about it into separate files in a specified directory.
+The parameter of this process is the output directory name, where the report
+files shall be placed. This parameter is passed as an environment variable.
+
+The module also implements compiler wrappers to intercept the compiler calls.
+
+The module implements the build command execution and the post-processing of
+the output files, which will condensates into a compilation database. """
+
+import sys
+import os
+import os.path
+import re
+import itertools
+import json
+import glob
+import argparse
+import logging
+import subprocess
+from libear import build_libear, TemporaryDirectory
+from libscanbuild import duplicate_check, tempdir, initialize_logging
+from libscanbuild import command_entry_point
+from libscanbuild.command import Action, classify_parameters
+from libscanbuild.shell import encode, decode
+
+__all__ = ['capture', 'intercept_build_main', 'intercept_build_wrapper']
+
+GS = chr(0x1d)
+RS = chr(0x1e)
+US = chr(0x1f)
+
+COMPILER_WRAPPER_CC = 'intercept-cc'
+COMPILER_WRAPPER_CXX = 'intercept-c++'
+
+
+@command_entry_point
+def intercept_build_main(bin_dir):
+    """ Entry point for 'intercept-build' command. """
+
+    parser = create_parser()
+    args = parser.parse_args()
+
+    initialize_logging(args.verbose)
+    logging.debug('Parsed arguments: %s', args)
+
+    if not args.build:
+        parser.print_help()
+        return 0
+
+    return capture(args, bin_dir)
+
+
+def capture(args, bin_dir):
+    """ The entry point of build command interception. """
+
+    def post_processing(commands):
+        """ To make a compilation database, it needs to filter out commands
+        which are not compiler calls. Needs to find the source file name
+        from the arguments. And do shell escaping on the command.
+
+        To support incremental builds, it is desired to read elements from
+        an existing compilation database from a previous run. These elemets
+        shall be merged with the new elements. """
+
+        # create entries from the current run
+        current = itertools.chain.from_iterable(
+            # creates a sequence of entry generators from an exec,
+            # but filters out non-compiler calls first.
+            (format_entry(x) for x in commands if is_compiler_call(x)))
+        # read entries from previous run
+        if 'append' in args and args.append and os.path.exists(args.cdb):
+            with open(args.cdb) as handle:
+                previous = iter(json.load(handle))
+        else:
+            previous = iter([])
+        # filter out duplicate entries from both
+        duplicate = duplicate_check(entry_hash)
+        return (entry for entry in itertools.chain(previous, current)
+                if os.path.exists(entry['file']) and not duplicate(entry))
+
+    with TemporaryDirectory(prefix='intercept-', dir=tempdir()) as tmp_dir:
+        # run the build command
+        environment = setup_environment(args, tmp_dir, bin_dir)
+        logging.debug('run build in environment: %s', environment)
+        exit_code = subprocess.call(args.build, env=environment)
+        logging.info('build finished with exit code: %d', exit_code)
+        # read the intercepted exec calls
+        commands = itertools.chain.from_iterable(
+            parse_exec_trace(os.path.join(tmp_dir, filename))
+            for filename in sorted(glob.iglob(os.path.join(tmp_dir, '*.cmd'))))
+        # do post processing only if that was requested
+        if 'raw_entries' not in args or not args.raw_entries:
+            entries = post_processing(commands)
+        else:
+            entries = commands
+        # dump the compilation database
+        with open(args.cdb, 'w+') as handle:
+            json.dump(list(entries), handle, sort_keys=True, indent=4)
+        return exit_code
+
+
+def setup_environment(args, destination, bin_dir):
+    """ Sets up the environment for the build command.
+
+    It sets the required environment variables and execute the given command.
+    The exec calls will be logged by the 'libear' preloaded library or by the
+    'wrapper' programs. """
+
+    c_compiler = args.cc if 'cc' in args else 'cc'
+    cxx_compiler = args.cxx if 'cxx' in args else 'c++'
+
+    libear_path = None if args.override_compiler or is_preload_disabled(
+        sys.platform) else build_libear(c_compiler, destination)
+
+    environment = dict(os.environ)
+    environment.update({'INTERCEPT_BUILD_TARGET_DIR': destination})
+
+    if not libear_path:
+        logging.debug('intercept gonna use compiler wrappers')
+        environment.update({
+            'CC': os.path.join(bin_dir, COMPILER_WRAPPER_CC),
+            'CXX': os.path.join(bin_dir, COMPILER_WRAPPER_CXX),
+            'INTERCEPT_BUILD_CC': c_compiler,
+            'INTERCEPT_BUILD_CXX': cxx_compiler,
+            'INTERCEPT_BUILD_VERBOSE': 'DEBUG' if args.verbose > 2 else 'INFO'
+        })
+    elif sys.platform == 'darwin':
+        logging.debug('intercept gonna preload libear on OSX')
+        environment.update({
+            'DYLD_INSERT_LIBRARIES': libear_path,
+            'DYLD_FORCE_FLAT_NAMESPACE': '1'
+        })
+    else:
+        logging.debug('intercept gonna preload libear on UNIX')
+        environment.update({'LD_PRELOAD': libear_path})
+
+    return environment
+
+
+def intercept_build_wrapper(cplusplus):
+    """ Entry point for `intercept-cc` and `intercept-c++` compiler wrappers.
+
+    It generates an execution report into the target directory and executes
+    the wrapped compilation with the real compiler. The parameters for the
+    report and for the execution come from environment variables.
+
+    Those parameters which cannot have meaningful values for the 'libear'
+    library are faked. """
+
+    # initialize wrapper logging
+    logging.basicConfig(format='intercept: %(levelname)s: %(message)s',
+                        level=os.getenv('INTERCEPT_BUILD_VERBOSE', 'INFO'))
+    # write report
+    try:
+        target_dir = os.getenv('INTERCEPT_BUILD_TARGET_DIR')
+        if not target_dir:
+            raise UserWarning('exec report target directory not found')
+        pid = str(os.getpid())
+        target_file = os.path.join(target_dir, pid + '.cmd')
+        logging.debug('writing exec report to: %s', target_file)
+        with open(target_file, 'ab') as handler:
+            working_dir = os.getcwd()
+            command = US.join(sys.argv) + US
+            content = RS.join([pid, pid, 'wrapper', working_dir, command]) + GS
+            handler.write(content.encode('utf-8'))
+    except IOError:
+        logging.exception('writing exec report failed')
+    except UserWarning as warning:
+        logging.warning(warning)
+    # execute with real compiler
+    compiler = os.getenv('INTERCEPT_BUILD_CXX', 'c++') if cplusplus \
+        else os.getenv('INTERCEPT_BUILD_CC', 'cc')
+    compilation = [compiler] + sys.argv[1:]
+    logging.debug('execute compiler: %s', compilation)
+    return subprocess.call(compilation)
+
+
+def parse_exec_trace(filename):
+    """ Parse the file generated by the 'libear' preloaded library.
+
+    The given filename points to a file which contains the basic report
+    generated by the interception library or the wrapper command. A single
+    report file _might_ contain info about multiple process creations. """
+
+    logging.debug('parse exec trace file: %s', filename)
+    with open(filename, 'r') as handler:
+        content = handler.read()
+        for group in filter(bool, content.split(GS)):
+            records = group.split(RS)
+            yield {
+                'pid': records[0],
+                'ppid': records[1],
+                'function': records[2],
+                'directory': records[3],
+                'command': records[4].split(US)[:-1]
+            }
+
+
+def format_entry(entry):
+    """ Generate the desired fields for compilation database entries. """
+
+    def abspath(cwd, name):
+        """ Create normalized absolute path from input filename. """
+        fullname = name if os.path.isabs(name) else os.path.join(cwd, name)
+        return os.path.normpath(fullname)
+
+    logging.debug('format this command: %s', entry['command'])
+    atoms = classify_parameters(entry['command'])
+    if atoms['action'] <= Action.Compile:
+        for source in atoms['files']:
+            compiler = 'c++' if atoms['c++'] else 'cc'
+            flags = atoms['compile_options']
+            flags += ['-o', atoms['output']] if atoms['output'] else []
+            flags += ['-x', atoms['language']] if 'language' in atoms else []
+            flags += [elem
+                      for arch in atoms.get('archs_seen', [])
+                      for elem in ['-arch', arch]]
+            command = [compiler, '-c'] + flags + [source]
+            logging.debug('formated as: %s', command)
+            yield {
+                'directory': entry['directory'],
+                'command': encode(command),
+                'file': abspath(entry['directory'], source)
+            }
+
+
+def is_compiler_call(entry):
+    """ A predicate to decide the entry is a compiler call or not. """
+
+    patterns = [
+        re.compile(r'^([^/]*/)*intercept-c(c|\+\+)$'),
+        re.compile(r'^([^/]*/)*c(c|\+\+)$'),
+        re.compile(r'^([^/]*/)*([^-]*-)*[mg](cc|\+\+)(-\d+(\.\d+){0,2})?$'),
+        re.compile(r'^([^/]*/)*([^-]*-)*clang(\+\+)?(-\d+(\.\d+){0,2})?$'),
+        re.compile(r'^([^/]*/)*llvm-g(cc|\+\+)$'),
+    ]
+    executable = entry['command'][0]
+    return any((pattern.match(executable) for pattern in patterns))
+
+
+def is_preload_disabled(platform):
+    """ Library-based interposition will fail silently if SIP is enabled,
+    so this should be detected. You can detect whether SIP is enabled on
+    Darwin by checking whether (1) there is a binary called 'csrutil' in
+    the path and, if so, (2) whether the output of executing 'csrutil status'
+    contains 'System Integrity Protection status: enabled'.
+
+    The same problem exists on Linux when SELinux is enabled. There the status
+    query program is 'sestatus', and when it's enabled the output contains
+    'SELinux status: enabled'. """
+
+    if platform == 'darwin':
+        pattern = re.compile(r'System Integrity Protection status:\s+enabled')
+        command = ['csrutil', 'status']
+    elif platform in {'linux', 'linux2'}:
+        pattern = re.compile(r'SELinux status:\s+enabled')
+        command = ['sestatus']
+    else:
+        return False
+
+    try:
+        lines = subprocess.check_output(command).decode('utf-8')
+        return any((pattern.match(line) for line in lines.splitlines()))
+    except:
+        return False
+
+
+def entry_hash(entry):
+    """ Implement unique hash method for compilation database entries. """
+
+    # For faster lookup in a set the filename is reversed
+    filename = entry['file'][::-1]
+    # For faster lookup in a set the directory is reversed
+    directory = entry['directory'][::-1]
+    # On OS X the 'cc' and 'c++' compilers are wrappers for
+    # 'clang', therefore both calls would be logged. To avoid
+    # this the hash does not contain the first word of the
+    # command.
+    command = ' '.join(decode(entry['command'])[1:])
+
+    return '<>'.join([filename, directory, command])
+
+
+def create_parser():
+    """ Command line argument parser factory method. """
+
+    parser = argparse.ArgumentParser(
+        formatter_class=argparse.ArgumentDefaultsHelpFormatter)
+
+    parser.add_argument(
+        '--verbose', '-v',
+        action='count',
+        default=0,
+        help="""Enable verbose output from '%(prog)s'. A second and third
+                flag increases verbosity.""")
+    parser.add_argument(
+        '--cdb',
+        metavar='<file>',
+        default="compile_commands.json",
+        help="""The JSON compilation database.""")
+    group = parser.add_mutually_exclusive_group()
+    group.add_argument(
+        '--append',
+        action='store_true',
+        help="""Append new entries to existing compilation database.""")
+    group.add_argument(
+        '--disable-filter', '-n',
+        dest='raw_entries',
+        action='store_true',
+        help="""Intercepted child process creation calls (exec calls) are all
+                logged to the output. The output is not a compilation database.
+                This flag is for debug purposes.""")
+
+    advanced = parser.add_argument_group('advanced options')
+    advanced.add_argument(
+        '--override-compiler',
+        action='store_true',
+        help="""Always resort to the compiler wrapper even when better
+                intercept methods are available.""")
+    advanced.add_argument(
+        '--use-cc',
+        metavar='<path>',
+        dest='cc',
+        default='cc',
+        help="""When '%(prog)s' analyzes a project by interposing a compiler
+                wrapper, which executes a real compiler for compilation and
+                do other tasks (record the compiler invocation). Because of
+                this interposing, '%(prog)s' does not know what compiler your
+                project normally uses. Instead, it simply overrides the CC
+                environment variable, and guesses your default compiler.
+
+                If you need '%(prog)s' to use a specific compiler for
+                *compilation* then you can use this option to specify a path
+                to that compiler.""")
+    advanced.add_argument(
+        '--use-c++',
+        metavar='<path>',
+        dest='cxx',
+        default='c++',
+        help="""This is the same as "--use-cc" but for C++ code.""")
+
+    parser.add_argument(
+        dest='build',
+        nargs=argparse.REMAINDER,
+        help="""Command to run.""")
+
+    return parser
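
The record format written by the wrappers (and by 'libear') and consumed by
parse_exec_trace() can be sketched as follows; this is a standalone
illustration with a made-up pid, directory and command:

    # ASCII group/record/unit separators, as in intercept.py above
    GS, RS, US = chr(0x1d), chr(0x1e), chr(0x1f)

    command = US.join(['cc', '-c', 'main.c']) + US
    record = RS.join(['1234', '1000', 'wrapper', '/tmp/project', command]) + GS

    pid, ppid, function, directory, cmd = record.rstrip(GS).split(RS)
    print(directory)             # /tmp/project
    print(cmd.split(US)[:-1])    # ['cc', '-c', 'main.c']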

Added: cfe/trunk/tools/scan-build-py/libscanbuild/report.py
URL: http://llvm.org/viewvc/llvm-project/cfe/trunk/tools/scan-build-py/libscanbuild/report.py?rev=257533&view=auto
==============================================================================
--- cfe/trunk/tools/scan-build-py/libscanbuild/report.py (added)
+++ cfe/trunk/tools/scan-build-py/libscanbuild/report.py Tue Jan 12 16:38:41 2016
@@ -0,0 +1,530 @@
+# -*- coding: utf-8 -*-
+#                     The LLVM Compiler Infrastructure
+#
+# This file is distributed under the University of Illinois Open Source
+# License. See LICENSE.TXT for details.
+""" This module is responsible to generate 'index.html' for the report.
+
+The input for this step is the output directory, where individual reports
+could be found. It parses those reports and generates 'index.html'. """
+
+import re
+import os
+import os.path
+import sys
+import shutil
+import time
+import tempfile
+import itertools
+import plistlib
+import glob
+import json
+import logging
+import contextlib
+from libscanbuild import duplicate_check
+from libscanbuild.clang import get_version
+
+__all__ = ['report_directory', 'document']
+
+
+@contextlib.contextmanager
+def report_directory(hint, keep):
+    """ Responsible for the report directory.
+
+    hint -- could specify the parent directory of the output directory.
+    keep -- a boolean value to keep or delete the empty report directory. """
+
+    stamp = time.strftime('scan-build-%Y-%m-%d-%H%M%S-', time.localtime())
+    name = tempfile.mkdtemp(prefix=stamp, dir=hint)
+
+    logging.info('Report directory created: %s', name)
+
+    try:
+        yield name
+    finally:
+        if os.listdir(name):
+            msg = "Run 'scan-view %s' to examine bug reports."
+            keep = True
+        else:
+            if keep:
+                msg = "Report directory '%s' contans no report, but kept."
+            else:
+                msg = "Removing directory '%s' because it contains no report."
+        logging.warning(msg, name)
+
+        if not keep:
+            os.rmdir(name)
+
+
+def document(args, output_dir, use_cdb):
+    """ Generates cover report and returns the number of bugs/crashes. """
+
+    html_reports_available = args.output_format in {'html', 'plist-html'}
+
+    logging.debug('count crashes and bugs')
+    crash_count = sum(1 for _ in read_crashes(output_dir))
+    bug_counter = create_counters()
+    for bug in read_bugs(output_dir, html_reports_available):
+        bug_counter(bug)
+    result = crash_count + bug_counter.total
+
+    if html_reports_available and result:
+        logging.debug('generate index.html file')
+        # common prefix for source files to have short filenames
+        prefix = commonprefix_from(args.cdb) if use_cdb else os.getcwd()
+        # assemble the cover from multiple fragments
+        try:
+            fragments = []
+            if bug_counter.total:
+                fragments.append(bug_summary(output_dir, bug_counter))
+                fragments.append(bug_report(output_dir, prefix))
+            if crash_count:
+                fragments.append(crash_report(output_dir, prefix))
+            assemble_cover(output_dir, prefix, args, fragments)
+            # copy additional files to the report
+            copy_resource_files(output_dir)
+            if use_cdb:
+                shutil.copy(args.cdb, output_dir)
+        finally:
+            for fragment in fragments:
+                os.remove(fragment)
+    return result
+
+
+def assemble_cover(output_dir, prefix, args, fragments):
+    """ Put together the fragments into a final report. """
+
+    import getpass
+    import socket
+    import datetime
+
+    if args.html_title is None:
+        args.html_title = os.path.basename(prefix) + ' - analyzer results'
+
+    with open(os.path.join(output_dir, 'index.html'), 'w') as handle:
+        indent = 0
+        handle.write(reindent("""
+        |<!DOCTYPE html>
+        |<html>
+        |  <head>
+        |    <title>{html_title}</title>
+        |    <link type="text/css" rel="stylesheet" href="scanview.css"/>
+        |    <script type='text/javascript' src="sorttable.js"></script>
+        |    <script type='text/javascript' src='selectable.js'></script>
+        |  </head>""", indent).format(html_title=args.html_title))
+        handle.write(comment('SUMMARYENDHEAD'))
+        handle.write(reindent("""
+        |  <body>
+        |    <h1>{html_title}</h1>
+        |    <table>
+        |      <tr><th>User:</th><td>{user_name}@{host_name}</td></tr>
+        |      <tr><th>Working Directory:</th><td>{current_dir}</td></tr>
+        |      <tr><th>Command Line:</th><td>{cmd_args}</td></tr>
+        |      <tr><th>Clang Version:</th><td>{clang_version}</td></tr>
+        |      <tr><th>Date:</th><td>{date}</td></tr>
+        |    </table>""", indent).format(html_title=args.html_title,
+                                         user_name=getpass.getuser(),
+                                         host_name=socket.gethostname(),
+                                         current_dir=prefix,
+                                         cmd_args=' '.join(sys.argv),
+                                         clang_version=get_version(args.clang),
+                                         date=datetime.datetime.today(
+                                         ).strftime('%c')))
+        for fragment in fragments:
+            # copy the content of fragments
+            with open(fragment, 'r') as input_handle:
+                shutil.copyfileobj(input_handle, handle)
+        handle.write(reindent("""
+        |  </body>
+        |</html>""", indent))
+
+
+def bug_summary(output_dir, bug_counter):
+    """ Bug summary is a HTML table to give a better overview of the bugs. """
+
+    name = os.path.join(output_dir, 'summary.html.fragment')
+    with open(name, 'w') as handle:
+        indent = 4
+        handle.write(reindent("""
+        |<h2>Bug Summary</h2>
+        |<table>
+        |  <thead>
+        |    <tr>
+        |      <td>Bug Type</td>
+        |      <td>Quantity</td>
+        |      <td class="sorttable_nosort">Display?</td>
+        |    </tr>
+        |  </thead>
+        |  <tbody>""", indent))
+        handle.write(reindent("""
+        |    <tr style="font-weight:bold">
+        |      <td class="SUMM_DESC">All Bugs</td>
+        |      <td class="Q">{0}</td>
+        |      <td>
+        |        <center>
+        |          <input checked type="checkbox" id="AllBugsCheck"
+        |                 onClick="CopyCheckedStateToCheckButtons(this);"/>
+        |        </center>
+        |      </td>
+        |    </tr>""", indent).format(bug_counter.total))
+        for category, types in bug_counter.categories.items():
+            handle.write(reindent("""
+        |    <tr>
+        |      <th>{0}</th><th colspan=2></th>
+        |    </tr>""", indent).format(category))
+            for bug_type in types.values():
+                handle.write(reindent("""
+        |    <tr>
+        |      <td class="SUMM_DESC">{bug_type}</td>
+        |      <td class="Q">{bug_count}</td>
+        |      <td>
+        |        <center>
+        |          <input checked type="checkbox"
+        |                 onClick="ToggleDisplay(this,'{bug_type_class}');"/>
+        |        </center>
+        |      </td>
+        |    </tr>""", indent).format(**bug_type))
+        handle.write(reindent("""
+        |  </tbody>
+        |</table>""", indent))
+        handle.write(comment('SUMMARYBUGEND'))
+    return name
+
+
+def bug_report(output_dir, prefix):
+    """ Creates a fragment from the analyzer reports. """
+
+    pretty = prettify_bug(prefix, output_dir)
+    bugs = (pretty(bug) for bug in read_bugs(output_dir, True))
+
+    name = os.path.join(output_dir, 'bugs.html.fragment')
+    with open(name, 'w') as handle:
+        indent = 4
+        handle.write(reindent("""
+        |<h2>Reports</h2>
+        |<table class="sortable" style="table-layout:automatic">
+        |  <thead>
+        |    <tr>
+        |      <td>Bug Group</td>
+        |      <td class="sorttable_sorted">
+        |        Bug Type
+        |        <span id="sorttable_sortfwdind"> &#x25BE;</span>
+        |      </td>
+        |      <td>File</td>
+        |      <td>Function/Method</td>
+        |      <td class="Q">Line</td>
+        |      <td class="Q">Path Length</td>
+        |      <td class="sorttable_nosort"></td>
+        |    </tr>
+        |  </thead>
+        |  <tbody>""", indent))
+        handle.write(comment('REPORTBUGCOL'))
+        for current in bugs:
+            handle.write(reindent("""
+        |    <tr class="{bug_type_class}">
+        |      <td class="DESC">{bug_category}</td>
+        |      <td class="DESC">{bug_type}</td>
+        |      <td>{bug_file}</td>
+        |      <td class="DESC">{bug_function}</td>
+        |      <td class="Q">{bug_line}</td>
+        |      <td class="Q">{bug_path_length}</td>
+        |      <td><a href="{report_file}#EndPath">View Report</a></td>
+        |    </tr>""", indent).format(**current))
+            handle.write(comment('REPORTBUG', {'id': current['report_file']}))
+        handle.write(reindent("""
+        |  </tbody>
+        |</table>""", indent))
+        handle.write(comment('REPORTBUGEND'))
+    return name
+
+
+def crash_report(output_dir, prefix):
+    """ Creates a fragment from the compiler crashes. """
+
+    pretty = prettify_crash(prefix, output_dir)
+    crashes = (pretty(crash) for crash in read_crashes(output_dir))
+
+    name = os.path.join(output_dir, 'crashes.html.fragment')
+    with open(name, 'w') as handle:
+        indent = 4
+        handle.write(reindent("""
+        |<h2>Analyzer Failures</h2>
+        |<p>The analyzer had problems processing the following files:</p>
+        |<table>
+        |  <thead>
+        |    <tr>
+        |      <td>Problem</td>
+        |      <td>Source File</td>
+        |      <td>Preprocessed File</td>
+        |      <td>STDERR Output</td>
+        |    </tr>
+        |  </thead>
+        |  <tbody>""", indent))
+        for current in crashes:
+            handle.write(reindent("""
+        |    <tr>
+        |      <td>{problem}</td>
+        |      <td>{source}</td>
+        |      <td><a href="{file}">preprocessor output</a></td>
+        |      <td><a href="{stderr}">analyzer std err</a></td>
+        |    </tr>""", indent).format(**current))
+            handle.write(comment('REPORTPROBLEM', current))
+        handle.write(reindent("""
+        |  </tbody>
+        |</table>""", indent))
+        handle.write(comment('REPORTCRASHES'))
+    return name
+
+
+def read_crashes(output_dir):
+    """ Generate a unique sequence of crashes from given output directory. """
+
+    return (parse_crash(filename)
+            for filename in glob.iglob(os.path.join(output_dir, 'failures',
+                                                    '*.info.txt')))
+
+
+def read_bugs(output_dir, html):
+    """ Generate a unique sequence of bugs from given output directory.
+
+    Duplicates can be in a project if the same module was compiled multiple
+    times with different compiler options. These would be better to show in
+    the final report (cover) only once. """
+
+    parser = parse_bug_html if html else parse_bug_plist
+    pattern = '*.html' if html else '*.plist'
+
+    duplicate = duplicate_check(
+        lambda bug: '{bug_line}.{bug_path_length}:{bug_file}'.format(**bug))
+
+    bugs = itertools.chain.from_iterable(
+        # parser creates a bug generator not the bug itself
+        parser(filename)
+        for filename in glob.iglob(os.path.join(output_dir, pattern)))
+
+    return (bug for bug in bugs if not duplicate(bug))
+
+
+def parse_bug_plist(filename):
+    """ Returns the generator of bugs from a single .plist file. """
+
+    content = plistlib.readPlist(filename)
+    files = content.get('files')
+    for bug in content.get('diagnostics', []):
+        if len(files) <= int(bug['location']['file']):
+            logging.warning('Parsing bug from "%s" failed', filename)
+            continue
+
+        yield {
+            'result': filename,
+            'bug_type': bug['type'],
+            'bug_category': bug['category'],
+            'bug_line': int(bug['location']['line']),
+            'bug_path_length': int(bug['location']['col']),
+            'bug_file': files[int(bug['location']['file'])]
+        }
+
+
+def parse_bug_html(filename):
+    """ Parse out the bug information from HTML output. """
+
+    patterns = [re.compile(r'<!-- BUGTYPE (?P<bug_type>.*) -->$'),
+                re.compile(r'<!-- BUGFILE (?P<bug_file>.*) -->$'),
+                re.compile(r'<!-- BUGPATHLENGTH (?P<bug_path_length>.*) -->$'),
+                re.compile(r'<!-- BUGLINE (?P<bug_line>.*) -->$'),
+                re.compile(r'<!-- BUGCATEGORY (?P<bug_category>.*) -->$'),
+                re.compile(r'<!-- BUGDESC (?P<bug_description>.*) -->$'),
+                re.compile(r'<!-- FUNCTIONNAME (?P<bug_function>.*) -->$')]
+    endsign = re.compile(r'<!-- BUGMETAEND -->')
+
+    bug = {
+        'report_file': filename,
+        'bug_function': 'n/a',  # compatibility with < clang-3.5
+        'bug_category': 'Other',
+        'bug_line': 0,
+        'bug_path_length': 1
+    }
+
+    with open(filename) as handler:
+        for line in handler.readlines():
+            # do not read the file further
+            if endsign.match(line):
+                break
+            # search for the right lines
+            for regex in patterns:
+                match = regex.match(line.strip())
+                if match:
+                    bug.update(match.groupdict())
+                    break
+
+    encode_value(bug, 'bug_line', int)
+    encode_value(bug, 'bug_path_length', int)
+
+    yield bug
+
+
+def parse_crash(filename):
+    """ Parse out the crash information from the report file. """
+
+    match = re.match(r'(.*)\.info\.txt', filename)
+    name = match.group(1) if match else None
+    with open(filename) as handler:
+        lines = handler.readlines()
+        return {
+            'source': lines[0].rstrip(),
+            'problem': lines[1].rstrip(),
+            'file': name,
+            'info': name + '.info.txt',
+            'stderr': name + '.stderr.txt'
+        }
+
+
+def category_type_name(bug):
+    """ Create a new bug attribute from bug by category and type.
+
+    The result will be used as CSS class selector in the final report. """
+
+    def smash(key):
+        """ Make value ready to be HTML attribute value. """
+
+        return bug.get(key, '').lower().replace(' ', '_').replace("'", '')
+
+    return escape('bt_' + smash('bug_category') + '_' + smash('bug_type'))
+
+
+def create_counters():
+    """ Create counters for bug statistics.
+
+    Two entries are maintained: 'total' is an integer that represents the
+    number of bugs. 'categories' is a two-level categorisation of bug
+    counters. The first level is the 'bug category', the second is the
+    'bug type'. Each entry in this classification is a dictionary of
+    'count', 'type' and 'label'. """
+
+    def predicate(bug):
+        bug_category = bug['bug_category']
+        bug_type = bug['bug_type']
+        current_category = predicate.categories.get(bug_category, dict())
+        current_type = current_category.get(bug_type, {
+            'bug_type': bug_type,
+            'bug_type_class': category_type_name(bug),
+            'bug_count': 0
+        })
+        current_type.update({'bug_count': current_type['bug_count'] + 1})
+        current_category.update({bug_type: current_type})
+        predicate.categories.update({bug_category: current_category})
+        predicate.total += 1
+
+    predicate.total = 0
+    predicate.categories = dict()
+    return predicate
+
+
+def prettify_bug(prefix, output_dir):
+    def predicate(bug):
+        """ Make safe this values to embed into HTML. """
+
+        bug['bug_type_class'] = category_type_name(bug)
+
+        encode_value(bug, 'bug_file', lambda x: escape(chop(prefix, x)))
+        encode_value(bug, 'bug_category', escape)
+        encode_value(bug, 'bug_type', escape)
+        encode_value(bug, 'report_file', lambda x: escape(chop(output_dir, x)))
+        return bug
+
+    return predicate
+
+
+def prettify_crash(prefix, output_dir):
+    def predicate(crash):
+        """ Make safe this values to embed into HTML. """
+
+        encode_value(crash, 'source', lambda x: escape(chop(prefix, x)))
+        encode_value(crash, 'problem', escape)
+        encode_value(crash, 'file', lambda x: escape(chop(output_dir, x)))
+        encode_value(crash, 'info', lambda x: escape(chop(output_dir, x)))
+        encode_value(crash, 'stderr', lambda x: escape(chop(output_dir, x)))
+        return crash
+
+    return predicate
+
+
+def copy_resource_files(output_dir):
+    """ Copy the javascript and css files to the report directory. """
+
+    this_dir = os.path.dirname(os.path.realpath(__file__))
+    for resource in os.listdir(os.path.join(this_dir, 'resources')):
+        shutil.copy(os.path.join(this_dir, 'resources', resource), output_dir)
+
+
+def encode_value(container, key, encode):
+    """ Run 'encode' on 'container[key]' value and update it. """
+
+    if key in container:
+        value = encode(container[key])
+        container.update({key: value})
+
+
+def chop(prefix, filename):
+    """ Create 'filename' from '/prefix/filename' """
+
+    return filename if not len(prefix) else os.path.relpath(filename, prefix)
+
+
+def escape(text):
+    """ Paranoid HTML escape method. (Python version independent) """
+
+    escape_table = {
+        '&': '&amp;',
+        '"': '&quot;',
+        "'": '&apos;',
+        '>': '&gt;',
+        '<': '&lt;'
+    }
+    return ''.join(escape_table.get(c, c) for c in text)
+
+
+def reindent(text, indent):
+    """ Utility function to format html output and keep indentation. """
+
+    result = ''
+    for line in text.splitlines():
+        if len(line.strip()):
+            result += ' ' * indent + line.split('|')[1] + os.linesep
+    return result
+
+
+def comment(name, opts=dict()):
+    """ Utility function to format meta information as comment. """
+
+    attributes = ''
+    for key, value in opts.items():
+        attributes += ' {0}="{1}"'.format(key, value)
+
+    return '<!-- {0}{1} -->{2}'.format(name, attributes, os.linesep)
+
+
+def commonprefix_from(filename):
+    """ Create file prefix from a compilation database entries. """
+
+    with open(filename, 'r') as handle:
+        return commonprefix(item['file'] for item in json.load(handle))
+
+
+def commonprefix(files):
+    """ Fixed version of os.path.commonprefix. Return the longest path prefix
+    that is a prefix of all paths in filenames. """
+
+    result = None
+    for current in files:
+        if result is not None:
+            result = os.path.commonprefix([result, current])
+        else:
+            result = current
+
+    if result is None:
+        return ''
+    elif not os.path.isdir(result):
+        return os.path.dirname(result)
+    else:
+        return os.path.abspath(result)
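
As a rough illustration only (not part of the diff above), the path and HTML
helpers at the end of this file would compose like this; the import path is
assumed from the new module layout and the file paths are made up:

    from libscanbuild.report import commonprefix, chop, escape

    files = ['/home/user/project/src/a.c', '/home/user/project/src/b.c']
    prefix = commonprefix(files)          # common directory of all entries
    print(chop(prefix, files[0]))         # path made relative to that prefix
    print(escape('<' + files[0] + '>'))   # '<' and '>' become &lt; and &gt;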

Added: cfe/trunk/tools/scan-build-py/libscanbuild/resources/scanview.css
URL: http://llvm.org/viewvc/llvm-project/cfe/trunk/tools/scan-build-py/libscanbuild/resources/scanview.css?rev=257533&view=auto
==============================================================================
--- cfe/trunk/tools/scan-build-py/libscanbuild/resources/scanview.css (added)
+++ cfe/trunk/tools/scan-build-py/libscanbuild/resources/scanview.css Tue Jan 12 16:38:41 2016
@@ -0,0 +1,62 @@
+body { color:#000000; background-color:#ffffff }
+body { font-family: Helvetica, sans-serif; font-size:9pt }
+h1 { font-size: 14pt; }
+h2 { font-size: 12pt; }
+table { font-size:9pt }
+table { border-spacing: 0px; border: 1px solid black }
+th, table thead {
+  background-color:#eee; color:#666666;
+  font-weight: bold; cursor: default;
+  text-align:center;
+  font-weight: bold; font-family: Verdana;
+  white-space:nowrap;
+}
+.W { font-size:0px }
+th, td { padding:5px; padding-left:8px; text-align:left }
+td.SUMM_DESC { padding-left:12px }
+td.DESC { white-space:pre }
+td.Q { text-align:right }
+td { text-align:left }
+tbody.scrollContent { overflow:auto }
+
+table.form_group {
+    background-color: #ccc;
+    border: 1px solid #333;
+    padding: 2px;
+}
+
+table.form_inner_group {
+    background-color: #ccc;
+    border: 1px solid #333;
+    padding: 0px;
+}
+
+table.form {
+    background-color: #999;
+    border: 1px solid #333;
+    padding: 2px;
+}
+
+td.form_label {
+    text-align: right;
+    vertical-align: top;
+}
+/* For one line entries */
+td.form_clabel {
+    text-align: right;
+    vertical-align: center;
+}
+td.form_value {
+    text-align: left;
+    vertical-align: top;
+}
+td.form_submit {
+    text-align: right;
+    vertical-align: top;
+}
+
+h1.SubmitFail {
+    color: #f00;
+}
+h1.SubmitOk {
+}

Added: cfe/trunk/tools/scan-build-py/libscanbuild/resources/selectable.js
URL: http://llvm.org/viewvc/llvm-project/cfe/trunk/tools/scan-build-py/libscanbuild/resources/selectable.js?rev=257533&view=auto
==============================================================================
--- cfe/trunk/tools/scan-build-py/libscanbuild/resources/selectable.js (added)
+++ cfe/trunk/tools/scan-build-py/libscanbuild/resources/selectable.js Tue Jan 12 16:38:41 2016
@@ -0,0 +1,47 @@
+function SetDisplay(RowClass, DisplayVal)
+{
+  var Rows = document.getElementsByTagName("tr");
+  for ( var i = 0 ; i < Rows.length; ++i ) {
+    if (Rows[i].className == RowClass) {
+      Rows[i].style.display = DisplayVal;
+    }
+  }
+}
+
+function CopyCheckedStateToCheckButtons(SummaryCheckButton) {
+  var Inputs = document.getElementsByTagName("input");
+  for ( var i = 0 ; i < Inputs.length; ++i ) {
+    if (Inputs[i].type == "checkbox") {
+      if(Inputs[i] != SummaryCheckButton) {
+        Inputs[i].checked = SummaryCheckButton.checked;
+        Inputs[i].onclick();
+	  }
+    }
+  }
+}
+
+function returnObjById( id ) {
+    if (document.getElementById)
+        var returnVar = document.getElementById(id);
+    else if (document.all)
+        var returnVar = document.all[id];
+    else if (document.layers)
+        var returnVar = document.layers[id];
+    return returnVar;
+}
+
+var NumUnchecked = 0;
+
+function ToggleDisplay(CheckButton, ClassName) {
+  if (CheckButton.checked) {
+    SetDisplay(ClassName, "");
+    if (--NumUnchecked == 0) {
+      returnObjById("AllBugsCheck").checked = true;
+    }
+  }
+  else {
+    SetDisplay(ClassName, "none");
+    NumUnchecked++;
+    returnObjById("AllBugsCheck").checked = false;
+  }
+}

Added: cfe/trunk/tools/scan-build-py/libscanbuild/resources/sorttable.js
URL: http://llvm.org/viewvc/llvm-project/cfe/trunk/tools/scan-build-py/libscanbuild/resources/sorttable.js?rev=257533&view=auto
==============================================================================
--- cfe/trunk/tools/scan-build-py/libscanbuild/resources/sorttable.js (added)
+++ cfe/trunk/tools/scan-build-py/libscanbuild/resources/sorttable.js Tue Jan 12 16:38:41 2016
@@ -0,0 +1,492 @@
+/*
+  SortTable
+  version 2
+  7th April 2007
+  Stuart Langridge, http://www.kryogenix.org/code/browser/sorttable/
+
+  Instructions:
+  Download this file
+  Add <script src="sorttable.js"></script> to your HTML
+  Add class="sortable" to any table you'd like to make sortable
+  Click on the headers to sort
+
+  Thanks to many, many people for contributions and suggestions.
+  Licenced as X11: http://www.kryogenix.org/code/browser/licence.html
+  This basically means: do what you want with it.
+*/
+
+
+var stIsIE = /*@cc_on!@*/false;
+
+sorttable = {
+  init: function() {
+    // quit if this function has already been called
+    if (arguments.callee.done) return;
+    // flag this function so we don't do the same thing twice
+    arguments.callee.done = true;
+    // kill the timer
+    if (_timer) clearInterval(_timer);
+
+    if (!document.createElement || !document.getElementsByTagName) return;
+
+    sorttable.DATE_RE = /^(\d\d?)[\/\.-](\d\d?)[\/\.-]((\d\d)?\d\d)$/;
+
+    forEach(document.getElementsByTagName('table'), function(table) {
+      if (table.className.search(/\bsortable\b/) != -1) {
+        sorttable.makeSortable(table);
+      }
+    });
+
+  },
+
+  makeSortable: function(table) {
+    if (table.getElementsByTagName('thead').length == 0) {
+      // table doesn't have a tHead. Since it should have, create one and
+      // put the first table row in it.
+      the = document.createElement('thead');
+      the.appendChild(table.rows[0]);
+      table.insertBefore(the,table.firstChild);
+    }
+    // Safari doesn't support table.tHead, sigh
+    if (table.tHead == null) table.tHead = table.getElementsByTagName('thead')[0];
+
+    if (table.tHead.rows.length != 1) return; // can't cope with two header rows
+
+    // Sorttable v1 put rows with a class of "sortbottom" at the bottom (as
+    // "total" rows, for example). This is B&R, since what you're supposed
+    // to do is put them in a tfoot. So, if there are sortbottom rows,
+    // for backward compatibility, move them to tfoot (creating it if needed).
+    sortbottomrows = [];
+    for (var i=0; i<table.rows.length; i++) {
+      if (table.rows[i].className.search(/\bsortbottom\b/) != -1) {
+        sortbottomrows[sortbottomrows.length] = table.rows[i];
+      }
+    }
+    if (sortbottomrows) {
+      if (table.tFoot == null) {
+        // table doesn't have a tfoot. Create one.
+        tfo = document.createElement('tfoot');
+        table.appendChild(tfo);
+      }
+      for (var i=0; i<sortbottomrows.length; i++) {
+        tfo.appendChild(sortbottomrows[i]);
+      }
+      delete sortbottomrows;
+    }
+
+    // work through each column and calculate its type
+    headrow = table.tHead.rows[0].cells;
+    for (var i=0; i<headrow.length; i++) {
+      // manually override the type with a sorttable_type attribute
+      if (!headrow[i].className.match(/\bsorttable_nosort\b/)) { // skip this col
+        mtch = headrow[i].className.match(/\bsorttable_([a-z0-9]+)\b/);
+        if (mtch) { override = mtch[1]; }
+	      if (mtch && typeof sorttable["sort_"+override] == 'function') {
+	        headrow[i].sorttable_sortfunction = sorttable["sort_"+override];
+	      } else {
+	        headrow[i].sorttable_sortfunction = sorttable.guessType(table,i);
+	      }
+	      // make it clickable to sort
+	      headrow[i].sorttable_columnindex = i;
+	      headrow[i].sorttable_tbody = table.tBodies[0];
+	      dean_addEvent(headrow[i],"click", function(e) {
+
+          if (this.className.search(/\bsorttable_sorted\b/) != -1) {
+            // if we're already sorted by this column, just
+            // reverse the table, which is quicker
+            sorttable.reverse(this.sorttable_tbody);
+            this.className = this.className.replace('sorttable_sorted',
+                                                    'sorttable_sorted_reverse');
+            this.removeChild(document.getElementById('sorttable_sortfwdind'));
+            sortrevind = document.createElement('span');
+            sortrevind.id = "sorttable_sortrevind";
+            sortrevind.innerHTML = stIsIE ? '&nbsp<font face="webdings">5</font>' : ' &#x25B4;';
+            this.appendChild(sortrevind);
+            return;
+          }
+          if (this.className.search(/\bsorttable_sorted_reverse\b/) != -1) {
+            // if we're already sorted by this column in reverse, just
+            // re-reverse the table, which is quicker
+            sorttable.reverse(this.sorttable_tbody);
+            this.className = this.className.replace('sorttable_sorted_reverse',
+                                                    'sorttable_sorted');
+            this.removeChild(document.getElementById('sorttable_sortrevind'));
+            sortfwdind = document.createElement('span');
+            sortfwdind.id = "sorttable_sortfwdind";
+            sortfwdind.innerHTML = stIsIE ? '&nbsp<font face="webdings">6</font>' : ' &#x25BE;';
+            this.appendChild(sortfwdind);
+            return;
+          }
+
+          // remove sorttable_sorted classes
+          theadrow = this.parentNode;
+          forEach(theadrow.childNodes, function(cell) {
+            if (cell.nodeType == 1) { // an element
+              cell.className = cell.className.replace('sorttable_sorted_reverse','');
+              cell.className = cell.className.replace('sorttable_sorted','');
+            }
+          });
+          sortfwdind = document.getElementById('sorttable_sortfwdind');
+          if (sortfwdind) { sortfwdind.parentNode.removeChild(sortfwdind); }
+          sortrevind = document.getElementById('sorttable_sortrevind');
+          if (sortrevind) { sortrevind.parentNode.removeChild(sortrevind); }
+
+          this.className += ' sorttable_sorted';
+          sortfwdind = document.createElement('span');
+          sortfwdind.id = "sorttable_sortfwdind";
+          sortfwdind.innerHTML = stIsIE ? '&nbsp<font face="webdings">6</font>' : ' &#x25BE;';
+          this.appendChild(sortfwdind);
+
+	        // build an array to sort. This is a Schwartzian transform thing,
+	        // i.e., we "decorate" each row with the actual sort key,
+	        // sort based on the sort keys, and then put the rows back in order
+	        // which is a lot faster because you only do getInnerText once per row
+	        row_array = [];
+	        col = this.sorttable_columnindex;
+	        rows = this.sorttable_tbody.rows;
+	        for (var j=0; j<rows.length; j++) {
+	          row_array[row_array.length] = [sorttable.getInnerText(rows[j].cells[col]), rows[j]];
+	        }
+	        /* If you want a stable sort, uncomment the following line */
+	        sorttable.shaker_sort(row_array, this.sorttable_sortfunction);
+	        /* and comment out this one */
+	        //row_array.sort(this.sorttable_sortfunction);
+
+	        tb = this.sorttable_tbody;
+	        for (var j=0; j<row_array.length; j++) {
+	          tb.appendChild(row_array[j][1]);
+	        }
+
+	        delete row_array;
+	      });
+	    }
+    }
+  },
+
+  guessType: function(table, column) {
+    // guess the type of a column based on its first non-blank row
+    sortfn = sorttable.sort_alpha;
+    for (var i=0; i<table.tBodies[0].rows.length; i++) {
+      text = sorttable.getInnerText(table.tBodies[0].rows[i].cells[column]);
+      if (text != '') {
+        if (text.match(/^-?[£$¤]?[\d,.]+%?$/)) {
+          return sorttable.sort_numeric;
+        }
+        // check for a date: dd/mm/yyyy or dd/mm/yy
+        // can have / or . or - as separator
+        // can be mm/dd as well
+        possdate = text.match(sorttable.DATE_RE)
+        if (possdate) {
+          // looks like a date
+          first = parseInt(possdate[1]);
+          second = parseInt(possdate[2]);
+          if (first > 12) {
+            // definitely dd/mm
+            return sorttable.sort_ddmm;
+          } else if (second > 12) {
+            return sorttable.sort_mmdd;
+          } else {
+            // looks like a date, but we can't tell which, so assume
+            // that it's dd/mm (English imperialism!) and keep looking
+            sortfn = sorttable.sort_ddmm;
+          }
+        }
+      }
+    }
+    return sortfn;
+  },
+
+  getInnerText: function(node) {
+    // gets the text we want to use for sorting for a cell.
+    // strips leading and trailing whitespace.
+    // this is *not* a generic getInnerText function; it's special to sorttable.
+    // for example, you can override the cell text with a customkey attribute.
+    // it also gets .value for <input> fields.
+
+    hasInputs = (typeof node.getElementsByTagName == 'function') &&
+                 node.getElementsByTagName('input').length;
+
+    if (node.getAttribute("sorttable_customkey") != null) {
+      return node.getAttribute("sorttable_customkey");
+    }
+    else if (typeof node.textContent != 'undefined' && !hasInputs) {
+      return node.textContent.replace(/^\s+|\s+$/g, '');
+    }
+    else if (typeof node.innerText != 'undefined' && !hasInputs) {
+      return node.innerText.replace(/^\s+|\s+$/g, '');
+    }
+    else if (typeof node.text != 'undefined' && !hasInputs) {
+      return node.text.replace(/^\s+|\s+$/g, '');
+    }
+    else {
+      switch (node.nodeType) {
+        case 3:
+          if (node.nodeName.toLowerCase() == 'input') {
+            return node.value.replace(/^\s+|\s+$/g, '');
+          }
+        case 4:
+          return node.nodeValue.replace(/^\s+|\s+$/g, '');
+          break;
+        case 1:
+        case 11:
+          var innerText = '';
+          for (var i = 0; i < node.childNodes.length; i++) {
+            innerText += sorttable.getInnerText(node.childNodes[i]);
+          }
+          return innerText.replace(/^\s+|\s+$/g, '');
+          break;
+        default:
+          return '';
+      }
+    }
+  },
+
+  reverse: function(tbody) {
+    // reverse the rows in a tbody
+    newrows = [];
+    for (var i=0; i<tbody.rows.length; i++) {
+      newrows[newrows.length] = tbody.rows[i];
+    }
+    for (var i=newrows.length-1; i>=0; i--) {
+       tbody.appendChild(newrows[i]);
+    }
+    delete newrows;
+  },
+
+  /* sort functions
+     each sort function takes two parameters, a and b
+     you are comparing a[0] and b[0] */
+  sort_numeric: function(a,b) {
+    aa = parseFloat(a[0].replace(/[^0-9.-]/g,''));
+    if (isNaN(aa)) aa = 0;
+    bb = parseFloat(b[0].replace(/[^0-9.-]/g,''));
+    if (isNaN(bb)) bb = 0;
+    return aa-bb;
+  },
+  sort_alpha: function(a,b) {
+    if (a[0]==b[0]) return 0;
+    if (a[0]<b[0]) return -1;
+    return 1;
+  },
+  sort_ddmm: function(a,b) {
+    mtch = a[0].match(sorttable.DATE_RE);
+    y = mtch[3]; m = mtch[2]; d = mtch[1];
+    if (m.length == 1) m = '0'+m;
+    if (d.length == 1) d = '0'+d;
+    dt1 = y+m+d;
+    mtch = b[0].match(sorttable.DATE_RE);
+    y = mtch[3]; m = mtch[2]; d = mtch[1];
+    if (m.length == 1) m = '0'+m;
+    if (d.length == 1) d = '0'+d;
+    dt2 = y+m+d;
+    if (dt1==dt2) return 0;
+    if (dt1<dt2) return -1;
+    return 1;
+  },
+  sort_mmdd: function(a,b) {
+    mtch = a[0].match(sorttable.DATE_RE);
+    y = mtch[3]; d = mtch[2]; m = mtch[1];
+    if (m.length == 1) m = '0'+m;
+    if (d.length == 1) d = '0'+d;
+    dt1 = y+m+d;
+    mtch = b[0].match(sorttable.DATE_RE);
+    y = mtch[3]; d = mtch[2]; m = mtch[1];
+    if (m.length == 1) m = '0'+m;
+    if (d.length == 1) d = '0'+d;
+    dt2 = y+m+d;
+    if (dt1==dt2) return 0;
+    if (dt1<dt2) return -1;
+    return 1;
+  },
+
+  shaker_sort: function(list, comp_func) {
+    // A stable sort function to allow multi-level sorting of data
+    // see: http://en.wikipedia.org/wiki/Cocktail_sort
+    // thanks to Joseph Nahmias
+    var b = 0;
+    var t = list.length - 1;
+    var swap = true;
+
+    while(swap) {
+        swap = false;
+        for(var i = b; i < t; ++i) {
+            if ( comp_func(list[i], list[i+1]) > 0 ) {
+                var q = list[i]; list[i] = list[i+1]; list[i+1] = q;
+                swap = true;
+            }
+        } // for
+        t--;
+
+        if (!swap) break;
+
+        for(var i = t; i > b; --i) {
+            if ( comp_func(list[i], list[i-1]) < 0 ) {
+                var q = list[i]; list[i] = list[i-1]; list[i-1] = q;
+                swap = true;
+            }
+        } // for
+        b++;
+
+    } // while(swap)
+  }
+}
+
+/* ******************************************************************
+   Supporting functions: bundled here to avoid depending on a library
+   ****************************************************************** */
+
+// Dean Edwards/Matthias Miller/John Resig
+
+/* for Mozilla/Opera9 */
+if (document.addEventListener) {
+    document.addEventListener("DOMContentLoaded", sorttable.init, false);
+}
+
+/* for Internet Explorer */
+/*@cc_on @*/
+/*@if (@_win32)
+    document.write("<script id=__ie_onload defer src=javascript:void(0)><\/script>");
+    var script = document.getElementById("__ie_onload");
+    script.onreadystatechange = function() {
+        if (this.readyState == "complete") {
+            sorttable.init(); // call the onload handler
+        }
+    };
+/*@end @*/
+
+/* for Safari */
+if (/WebKit/i.test(navigator.userAgent)) { // sniff
+    var _timer = setInterval(function() {
+        if (/loaded|complete/.test(document.readyState)) {
+            sorttable.init(); // call the onload handler
+        }
+    }, 10);
+}
+
+/* for other browsers */
+window.onload = sorttable.init;
+
+// written by Dean Edwards, 2005
+// with input from Tino Zijdel, Matthias Miller, Diego Perini
+
+// http://dean.edwards.name/weblog/2005/10/add-event/
+
+function dean_addEvent(element, type, handler) {
+	if (element.addEventListener) {
+		element.addEventListener(type, handler, false);
+	} else {
+		// assign each event handler a unique ID
+		if (!handler.$$guid) handler.$$guid = dean_addEvent.guid++;
+		// create a hash table of event types for the element
+		if (!element.events) element.events = {};
+		// create a hash table of event handlers for each element/event pair
+		var handlers = element.events[type];
+		if (!handlers) {
+			handlers = element.events[type] = {};
+			// store the existing event handler (if there is one)
+			if (element["on" + type]) {
+				handlers[0] = element["on" + type];
+			}
+		}
+		// store the event handler in the hash table
+		handlers[handler.$$guid] = handler;
+		// assign a global event handler to do all the work
+		element["on" + type] = handleEvent;
+	}
+};
+// a counter used to create unique IDs
+dean_addEvent.guid = 1;
+
+function removeEvent(element, type, handler) {
+	if (element.removeEventListener) {
+		element.removeEventListener(type, handler, false);
+	} else {
+		// delete the event handler from the hash table
+		if (element.events && element.events[type]) {
+			delete element.events[type][handler.$$guid];
+		}
+	}
+};
+
+function handleEvent(event) {
+	var returnValue = true;
+	// grab the event object (IE uses a global event object)
+	event = event || fixEvent(((this.ownerDocument || this.document || this).parentWindow || window).event);
+	// get a reference to the hash table of event handlers
+	var handlers = this.events[event.type];
+	// execute each event handler
+	for (var i in handlers) {
+		this.$$handleEvent = handlers[i];
+		if (this.$$handleEvent(event) === false) {
+			returnValue = false;
+		}
+	}
+	return returnValue;
+};
+
+function fixEvent(event) {
+	// add W3C standard event methods
+	event.preventDefault = fixEvent.preventDefault;
+	event.stopPropagation = fixEvent.stopPropagation;
+	return event;
+};
+fixEvent.preventDefault = function() {
+	this.returnValue = false;
+};
+fixEvent.stopPropagation = function() {
+  this.cancelBubble = true;
+}
+
+// Dean's forEach: http://dean.edwards.name/base/forEach.js
+/*
+	forEach, version 1.0
+	Copyright 2006, Dean Edwards
+	License: http://www.opensource.org/licenses/mit-license.php
+*/
+
+// array-like enumeration
+if (!Array.forEach) { // mozilla already supports this
+	Array.forEach = function(array, block, context) {
+		for (var i = 0; i < array.length; i++) {
+			block.call(context, array[i], i, array);
+		}
+	};
+}
+
+// generic enumeration
+Function.prototype.forEach = function(object, block, context) {
+	for (var key in object) {
+		if (typeof this.prototype[key] == "undefined") {
+			block.call(context, object[key], key, object);
+		}
+	}
+};
+
+// character enumeration
+String.forEach = function(string, block, context) {
+	Array.forEach(string.split(""), function(chr, index) {
+		block.call(context, chr, index, string);
+	});
+};
+
+// globally resolve forEach enumeration
+var forEach = function(object, block, context) {
+	if (object) {
+		var resolve = Object; // default
+		if (object instanceof Function) {
+			// functions have a "length" property
+			resolve = Function;
+		} else if (object.forEach instanceof Function) {
+			// the object implements a custom forEach method so use that
+			object.forEach(block, context);
+			return;
+		} else if (typeof object == "string") {
+			// the object is a string
+			resolve = String;
+		} else if (typeof object.length == "number") {
+			// the object is array-like
+			resolve = Array;
+		}
+		resolve.forEach(object, block, context);
+	}
+};

Added: cfe/trunk/tools/scan-build-py/libscanbuild/runner.py
URL: http://llvm.org/viewvc/llvm-project/cfe/trunk/tools/scan-build-py/libscanbuild/runner.py?rev=257533&view=auto
==============================================================================
--- cfe/trunk/tools/scan-build-py/libscanbuild/runner.py (added)
+++ cfe/trunk/tools/scan-build-py/libscanbuild/runner.py Tue Jan 12 16:38:41 2016
@@ -0,0 +1,256 @@
+# -*- coding: utf-8 -*-
+#                     The LLVM Compiler Infrastructure
+#
+# This file is distributed under the University of Illinois Open Source
+# License. See LICENSE.TXT for details.
+""" This module is responsible to run the analyzer commands. """
+
+import os
+import os.path
+import tempfile
+import functools
+import subprocess
+import logging
+from libscanbuild.command import classify_parameters, Action, classify_source
+from libscanbuild.clang import get_arguments, get_version
+from libscanbuild.shell import decode
+
+__all__ = ['run']
+
+
+def require(required):
+    """ Decorator for checking the required values in state.
+
+    It checks the required attributes in the passed state and stops when
+    any of them is missing. """
+
+    def decorator(function):
+        @functools.wraps(function)
+        def wrapper(*args, **kwargs):
+            for key in required:
+                if key not in args[0]:
+                    raise KeyError(
+                        '{0} not passed to {1}'.format(key, function.__name__))
+
+            return function(*args, **kwargs)
+
+        return wrapper
+
+    return decorator
+
+
+@require(['command', 'directory', 'file',  # an entry from compilation database
+          'clang', 'direct_args',  # compiler name, and arguments from command
+          'output_dir', 'output_format', 'output_failures'])
+def run(opts):
+    """ Entry point to run (or not) static analyzer against a single entry
+    of the compilation database.
+
+    This complex task is decomposed into smaller methods which are calling
+    each other in chain. If the analyzis is not possibe the given method
+    just return and break the chain.
+
+    The passed parameter is a python dictionary. Each method first check
+    that the needed parameters received. (This is done by the 'require'
+    decorator. It's like an 'assert' to check the contract between the
+    caller and the called method.) """
+
+    try:
+        command = opts.pop('command')
+        logging.debug("Run analyzer against '%s'", command)
+        opts.update(classify_parameters(decode(command)))
+
+        return action_check(opts)
+    except Exception:
+        logging.error("Problem occured during analyzis.", exc_info=1)
+        return None
+
+
+@require(['report', 'directory', 'clang', 'output_dir', 'language', 'file',
+          'error_type', 'error_output', 'exit_code'])
+def report_failure(opts):
+    """ Create report when analyzer failed.
+
+    The major report is the preprocessor output; its filename is generated
+    randomly. The compiler output is also captured into a '.stderr.txt' file,
+    and some more execution context is saved into an '.info.txt' file. """
+
+    def extension(opts):
+        """ Generate preprocessor file extension. """
+
+        mapping = {'objective-c++': '.mii', 'objective-c': '.mi', 'c++': '.ii'}
+        return mapping.get(opts['language'], '.i')
+
+    def destination(opts):
+        """ Creates failures directory if not exits yet. """
+
+        name = os.path.join(opts['output_dir'], 'failures')
+        if not os.path.isdir(name):
+            os.makedirs(name)
+        return name
+
+    error = opts['error_type']
+    (handle, name) = tempfile.mkstemp(suffix=extension(opts),
+                                      prefix='clang_' + error + '_',
+                                      dir=destination(opts))
+    os.close(handle)
+    cwd = opts['directory']
+    cmd = get_arguments([opts['clang']] + opts['report'] + ['-o', name], cwd)
+    logging.debug('exec command in %s: %s', cwd, ' '.join(cmd))
+    subprocess.call(cmd, cwd=cwd)
+
+    with open(name + '.info.txt', 'w') as handle:
+        handle.write(opts['file'] + os.linesep)
+        handle.write(error.title().replace('_', ' ') + os.linesep)
+        handle.write(' '.join(cmd) + os.linesep)
+        handle.write(' '.join(os.uname()) + os.linesep)
+        handle.write(get_version(cmd[0]))
+        handle.close()
+
+    with open(name + '.stderr.txt', 'w') as handle:
+        handle.writelines(opts['error_output'])
+        handle.close()
+
+    return {
+        'error_output': opts['error_output'],
+        'exit_code': opts['exit_code']
+    }
+
+
+@require(['clang', 'analyze', 'directory', 'output'])
+def run_analyzer(opts, continuation=report_failure):
+    """ It assembles the analysis command line and executes it. Capture the
+    output of the analysis and returns with it. If failure reports are
+    requested, it calls the continuation to generate it. """
+
+    cwd = opts['directory']
+    cmd = get_arguments([opts['clang']] + opts['analyze'] + opts['output'],
+                        cwd)
+    logging.debug('exec command in %s: %s', cwd, ' '.join(cmd))
+    child = subprocess.Popen(cmd,
+                             cwd=cwd,
+                             universal_newlines=True,
+                             stdout=subprocess.PIPE,
+                             stderr=subprocess.STDOUT)
+    output = child.stdout.readlines()
+    child.stdout.close()
+    # generate the failure report details if they were asked for
+    child.wait()
+    if opts.get('output_failures', False) and child.returncode:
+        error_type = 'crash' if child.returncode & 127 else 'other_error'
+        opts.update({
+            'error_type': error_type,
+            'error_output': output,
+            'exit_code': child.returncode
+        })
+        return continuation(opts)
+    return {'error_output': output, 'exit_code': child.returncode}
+
+
+@require(['output_dir'])
+def set_analyzer_output(opts, continuation=run_analyzer):
+    """ Create output file if was requested.
+
+    This plays a role only if .plist files are requested. """
+
+    if opts.get('output_format') in {'plist', 'plist-html'}:
+        with tempfile.NamedTemporaryFile(prefix='report-',
+                                         suffix='.plist',
+                                         delete=False,
+                                         dir=opts['output_dir']) as output:
+            opts.update({'output': ['-o', output.name]})
+            return continuation(opts)
+    else:
+        opts.update({'output': ['-o', opts['output_dir']]})
+        return continuation(opts)
+
+
+@require(['file', 'directory', 'clang', 'direct_args', 'language',
+          'output_dir', 'output_format', 'output_failures'])
+def create_commands(opts, continuation=set_analyzer_output):
+    """ Create command to run analyzer or failure report generation.
+
+    It generates commands (from compilation database entries) which contains
+    enough information to run the analyzer (and the crash report generation
+    if that was requested). """
+
+    common = []
+    if 'arch' in opts:
+        common.extend(['-arch', opts.pop('arch')])
+    common.extend(opts.pop('compile_options', []))
+    common.extend(['-x', opts['language']])
+    common.append(os.path.relpath(opts['file'], opts['directory']))
+
+    opts.update({
+        'analyze': ['--analyze'] + opts['direct_args'] + common,
+        'report': ['-fsyntax-only', '-E'] + common
+    })
+
+    return continuation(opts)
+
+
+@require(['file', 'c++'])
+def language_check(opts, continuation=create_commands):
+    """ Find out the language from command line parameters or file name
+    extension. The decision also influenced by the compiler invocation. """
+
+    accepteds = {
+        'c', 'c++', 'objective-c', 'objective-c++', 'c-cpp-output',
+        'c++-cpp-output', 'objective-c-cpp-output'
+    }
+
+    key = 'language'
+    language = opts[key] if key in opts else \
+        classify_source(opts['file'], opts['c++'])
+
+    if language is None:
+        logging.debug('skip analysis, language not known')
+        return None
+    elif language not in accepteds:
+        logging.debug('skip analysis, language not supported')
+        return None
+    else:
+        logging.debug('analysis, language: %s', language)
+        opts.update({key: language})
+        return continuation(opts)
+
+
+@require([])
+def arch_check(opts, continuation=language_check):
+    """ Do run analyzer through one of the given architectures. """
+
+    disableds = {'ppc', 'ppc64'}
+
+    key = 'archs_seen'
+    if key in opts:
+        # filter out disabled architectures and -arch switches
+        archs = [a for a in opts[key] if a not in disableds]
+
+        if not archs:
+            logging.debug('skip analysis, no supported arch found')
+            return None
+        else:
+            # There should be only one arch given (or the same one multiple
+            # times). If multiple different archs are given, they should not
+            # change the pre-processing step. But that's the only pass we
+            # have before running the analyzer.
+            arch = archs.pop()
+            logging.debug('analysis, on arch: %s', arch)
+
+            opts.update({'arch': arch})
+            del opts[key]
+            return continuation(opts)
+    else:
+        logging.debug('analysis, on default arch')
+        return continuation(opts)
+
+
+@require(['action'])
+def action_check(opts, continuation=arch_check):
+    """ Continue analysis only if it compilation or link. """
+
+    if opts.pop('action') <= Action.Compile:
+        return continuation(opts)
+    else:
+        logging.debug('skip analysis, not compilation nor link')
+        return None
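
A minimal sketch (not part of the diff above) of how the chain in runner.py is
entered; the dictionary values are hypothetical, and in the real tool they come
from a compilation database entry plus the analyzer settings of the caller:

    from libscanbuild.runner import run

    opts = {
        'command': 'cc -c -o hello.o hello.c',  # entry's compile command
        'directory': '/home/user/project',      # entry's working directory
        'file': 'hello.c',
        'clang': 'clang',                       # analyzer executable to invoke
        'direct_args': [],                      # extra analyzer arguments (assumed empty)
        'output_dir': '/tmp/scan-out',
        'output_format': 'html',
        'output_failures': False,
    }
    run(opts)  # action_check -> arch_check -> language_check -> ... -> run_analyzer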

Added: cfe/trunk/tools/scan-build-py/libscanbuild/shell.py
URL: http://llvm.org/viewvc/llvm-project/cfe/trunk/tools/scan-build-py/libscanbuild/shell.py?rev=257533&view=auto
==============================================================================
--- cfe/trunk/tools/scan-build-py/libscanbuild/shell.py (added)
+++ cfe/trunk/tools/scan-build-py/libscanbuild/shell.py Tue Jan 12 16:38:41 2016
@@ -0,0 +1,66 @@
+# -*- coding: utf-8 -*-
+#                     The LLVM Compiler Infrastructure
+#
+# This file is distributed under the University of Illinois Open Source
+# License. See LICENSE.TXT for details.
+""" This module implements basic shell escaping/unescaping methods. """
+
+import re
+import shlex
+
+__all__ = ['encode', 'decode']
+
+
+def encode(command):
+    """ Takes a command as list and returns a string. """
+
+    def needs_quote(word):
+        """ Returns true if arguments needs to be protected by quotes.
+
+        The previous implementation was the shlex.split method, but that's
+        not good for this job. Currently it runs through the string with
+        basic state checking. """
+
+        reserved = {' ', '$', '%', '&', '(', ')', '[', ']', '{', '}', '*', '|',
+                    '<', '>', '@', '?', '!'}
+        state = 0
+        for current in word:
+            if state == 0 and current in reserved:
+                return True
+            elif state == 0 and current == '\\':
+                state = 1
+            elif state == 1 and current in reserved | {'\\'}:
+                state = 0
+            elif state == 0 and current == '"':
+                state = 2
+            elif state == 2 and current == '"':
+                state = 0
+            elif state == 0 and current == "'":
+                state = 3
+            elif state == 3 and current == "'":
+                state = 0
+        return state != 0
+
+    def escape(word):
+        """ Do protect argument if that's needed. """
+
+        table = {'\\': '\\\\', '"': '\\"'}
+        escaped = ''.join([table.get(c, c) for c in word])
+
+        return '"' + escaped + '"' if needs_quote(word) else escaped
+
+    return " ".join([escape(arg) for arg in command])
+
+
+def decode(string):
+    """ Takes a command string and returns as a list. """
+
+    def unescape(arg):
+        """ Gets rid of the escaping characters. """
+
+        if len(arg) >= 2 and arg[0] == arg[-1] and arg[0] == '"':
+            arg = arg[1:-1]
+            return re.sub(r'\\(["\\])', r'\1', arg)
+        return re.sub(r'\\([\\ $%&\(\)\[\]\{\}\*|<>@?!])', r'\1', arg)
+
+    return [unescape(arg) for arg in shlex.split(string)]
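
A quick round-trip sketch (not part of the diff) for the encode/decode pair;
the command line below is made up:

    from libscanbuild.shell import encode, decode

    cmd = ['cc', '-c', 'hello world.c', '-DMSG="hi"']
    line = encode(cmd)          # arguments with spaces or quotes get protected
    assert decode(line) == cmd  # should round-trip back to the original list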

Added: cfe/trunk/tools/scan-build-py/tests/__init__.py
URL: http://llvm.org/viewvc/llvm-project/cfe/trunk/tools/scan-build-py/tests/__init__.py?rev=257533&view=auto
==============================================================================
--- cfe/trunk/tools/scan-build-py/tests/__init__.py (added)
+++ cfe/trunk/tools/scan-build-py/tests/__init__.py Tue Jan 12 16:38:41 2016
@@ -0,0 +1,18 @@
+# -*- coding: utf-8 -*-
+#                     The LLVM Compiler Infrastructure
+#
+# This file is distributed under the University of Illinois Open Source
+# License. See LICENSE.TXT for details.
+
+import unittest
+
+import tests.unit
+import tests.functional.cases
+
+
+def suite():
+    loader = unittest.TestLoader()
+    suite = unittest.TestSuite()
+    suite.addTests(loader.loadTestsFromModule(tests.unit))
+    suite.addTests(loader.loadTestsFromModule(tests.functional.cases))
+    return suite
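
A hypothetical driver for the suite above (the actual runner wiring is not part
of this file):

    import unittest
    import tests

    unittest.TextTestRunner(verbosity=2).run(tests.suite())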

Added: cfe/trunk/tools/scan-build-py/tests/functional/__init__.py
URL: http://llvm.org/viewvc/llvm-project/cfe/trunk/tools/scan-build-py/tests/functional/__init__.py?rev=257533&view=auto
==============================================================================
    (empty)

Added: cfe/trunk/tools/scan-build-py/tests/functional/cases/__init__.py
URL: http://llvm.org/viewvc/llvm-project/cfe/trunk/tools/scan-build-py/tests/functional/cases/__init__.py?rev=257533&view=auto
==============================================================================
--- cfe/trunk/tools/scan-build-py/tests/functional/cases/__init__.py (added)
+++ cfe/trunk/tools/scan-build-py/tests/functional/cases/__init__.py Tue Jan 12 16:38:41 2016
@@ -0,0 +1,71 @@
+# -*- coding: utf-8 -*-
+#                     The LLVM Compiler Infrastructure
+#
+# This file is distributed under the University of Illinois Open Source
+# License. See LICENSE.TXT for details.
+
+import re
+import os.path
+import subprocess
+
+
+def load_tests(loader, suite, pattern):
+    from . import test_from_cdb
+    suite.addTests(loader.loadTestsFromModule(test_from_cdb))
+    from . import test_from_cmd
+    suite.addTests(loader.loadTestsFromModule(test_from_cmd))
+    from . import test_create_cdb
+    suite.addTests(loader.loadTestsFromModule(test_create_cdb))
+    from . import test_exec_anatomy
+    suite.addTests(loader.loadTestsFromModule(test_exec_anatomy))
+    return suite
+
+
+def make_args(target):
+    this_dir, _ = os.path.split(__file__)
+    path = os.path.normpath(os.path.join(this_dir, '..', 'src'))
+    return ['make', 'SRCDIR={}'.format(path), 'OBJDIR={}'.format(target), '-f',
+            os.path.join(path, 'build', 'Makefile')]
+
+
+def silent_call(cmd, *args, **kwargs):
+    kwargs.update({'stdout': subprocess.PIPE, 'stderr': subprocess.STDOUT})
+    return subprocess.call(cmd, *args, **kwargs)
+
+
+def silent_check_call(cmd, *args, **kwargs):
+    kwargs.update({'stdout': subprocess.PIPE, 'stderr': subprocess.STDOUT})
+    return subprocess.check_call(cmd, *args, **kwargs)
+
+
+def call_and_report(analyzer_cmd, build_cmd):
+    child = subprocess.Popen(analyzer_cmd + ['-v'] + build_cmd,
+                             universal_newlines=True,
+                             stdout=subprocess.PIPE,
+                             stderr=subprocess.STDOUT)
+
+    pattern = re.compile('Report directory created: (.+)')
+    directory = None
+    for line in child.stdout.readlines():
+        match = pattern.search(line)
+        if match and match.lastindex == 1:
+            directory = match.group(1)
+            break
+    child.stdout.close()
+    child.wait()
+
+    return (child.returncode, directory)
+
+
+def check_call_and_report(analyzer_cmd, build_cmd):
+    exit_code, result = call_and_report(analyzer_cmd, build_cmd)
+    if exit_code != 0:
+        raise subprocess.CalledProcessError(
+            exit_code, analyzer_cmd + build_cmd, None)
+    else:
+        return result
+
+
+def create_empty_file(filename):
+    with open(filename, 'a') as handle:
+        pass

Added: cfe/trunk/tools/scan-build-py/tests/functional/cases/test_create_cdb.py
URL: http://llvm.org/viewvc/llvm-project/cfe/trunk/tools/scan-build-py/tests/functional/cases/test_create_cdb.py?rev=257533&view=auto
==============================================================================
--- cfe/trunk/tools/scan-build-py/tests/functional/cases/test_create_cdb.py (added)
+++ cfe/trunk/tools/scan-build-py/tests/functional/cases/test_create_cdb.py Tue Jan 12 16:38:41 2016
@@ -0,0 +1,191 @@
+# -*- coding: utf-8 -*-
+#                     The LLVM Compiler Infrastructure
+#
+# This file is distributed under the University of Illinois Open Source
+# License. See LICENSE.TXT for details.
+
+from ...unit import fixtures
+from . import make_args, silent_check_call, silent_call, create_empty_file
+import unittest
+
+import os.path
+import json
+
+
+class CompilationDatabaseTest(unittest.TestCase):
+    @staticmethod
+    def run_intercept(tmpdir, args):
+        result = os.path.join(tmpdir, 'cdb.json')
+        make = make_args(tmpdir) + args
+        silent_check_call(
+            ['intercept-build', '--cdb', result] + make)
+        return result
+
+    @staticmethod
+    def count_entries(filename):
+        with open(filename, 'r') as handler:
+            content = json.load(handler)
+            return len(content)
+
+    def test_successful_build(self):
+        with fixtures.TempDir() as tmpdir:
+            result = self.run_intercept(tmpdir, ['build_regular'])
+            self.assertTrue(os.path.isfile(result))
+            self.assertEqual(5, self.count_entries(result))
+
+    def test_successful_build_with_wrapper(self):
+        with fixtures.TempDir() as tmpdir:
+            result = os.path.join(tmpdir, 'cdb.json')
+            make = make_args(tmpdir) + ['build_regular']
+            silent_check_call(['intercept-build', '--cdb', result,
+                               '--override-compiler'] + make)
+            self.assertTrue(os.path.isfile(result))
+            self.assertEqual(5, self.count_entries(result))
+
+    @unittest.skipIf(os.getenv('TRAVIS'), 'ubuntu make return -11')
+    def test_successful_build_parallel(self):
+        with fixtures.TempDir() as tmpdir:
+            result = self.run_intercept(tmpdir, ['-j', '4', 'build_regular'])
+            self.assertTrue(os.path.isfile(result))
+            self.assertEqual(5, self.count_entries(result))
+
+    @unittest.skipIf(os.getenv('TRAVIS'), 'ubuntu env remove clang from path')
+    def test_successful_build_on_empty_env(self):
+        with fixtures.TempDir() as tmpdir:
+            result = os.path.join(tmpdir, 'cdb.json')
+            make = make_args(tmpdir) + ['CC=clang', 'build_regular']
+            silent_check_call(['intercept-build', '--cdb', result,
+                               'env', '-'] + make)
+            self.assertTrue(os.path.isfile(result))
+            self.assertEqual(5, self.count_entries(result))
+
+    def test_successful_build_all_in_one(self):
+        with fixtures.TempDir() as tmpdir:
+            result = self.run_intercept(tmpdir, ['build_all_in_one'])
+            self.assertTrue(os.path.isfile(result))
+            self.assertEqual(5, self.count_entries(result))
+
+    def test_not_successful_build(self):
+        with fixtures.TempDir() as tmpdir:
+            result = os.path.join(tmpdir, 'cdb.json')
+            make = make_args(tmpdir) + ['build_broken']
+            silent_call(
+                ['intercept-build', '--cdb', result] + make)
+            self.assertTrue(os.path.isfile(result))
+            self.assertEqual(2, self.count_entries(result))
+
+
+class ExitCodeTest(unittest.TestCase):
+    @staticmethod
+    def run_intercept(tmpdir, target):
+        result = os.path.join(tmpdir, 'cdb.json')
+        make = make_args(tmpdir) + [target]
+        return silent_call(
+            ['intercept-build', '--cdb', result] + make)
+
+    def test_successful_build(self):
+        with fixtures.TempDir() as tmpdir:
+            exitcode = self.run_intercept(tmpdir, 'build_clean')
+            self.assertFalse(exitcode)
+
+    def test_not_successful_build(self):
+        with fixtures.TempDir() as tmpdir:
+            exitcode = self.run_intercept(tmpdir, 'build_broken')
+            self.assertTrue(exitcode)
+
+
+class ResumeFeatureTest(unittest.TestCase):
+    @staticmethod
+    def run_intercept(tmpdir, target, args):
+        result = os.path.join(tmpdir, 'cdb.json')
+        make = make_args(tmpdir) + [target]
+        silent_check_call(
+            ['intercept-build', '--cdb', result] + args + make)
+        return result
+
+    @staticmethod
+    def count_entries(filename):
+        with open(filename, 'r') as handler:
+            content = json.load(handler)
+            return len(content)
+
+    def test_overwrite_existing_cdb(self):
+        with fixtures.TempDir() as tmpdir:
+            result = self.run_intercept(tmpdir, 'build_clean', [])
+            self.assertTrue(os.path.isfile(result))
+            result = self.run_intercept(tmpdir, 'build_regular', [])
+            self.assertTrue(os.path.isfile(result))
+            self.assertEqual(2, self.count_entries(result))
+
+    def test_append_to_existing_cdb(self):
+        with fixtures.TempDir() as tmpdir:
+            result = self.run_intercept(tmpdir, 'build_clean', [])
+            self.assertTrue(os.path.isfile(result))
+            result = self.run_intercept(tmpdir, 'build_regular', ['--append'])
+            self.assertTrue(os.path.isfile(result))
+            self.assertEqual(5, self.count_entries(result))
+
+
+class ResultFormatingTest(unittest.TestCase):
+    @staticmethod
+    def run_intercept(tmpdir, command):
+        result = os.path.join(tmpdir, 'cdb.json')
+        silent_check_call(
+            ['intercept-build', '--cdb', result] + command,
+            cwd=tmpdir)
+        with open(result, 'r') as handler:
+            content = json.load(handler)
+            return content
+
+    def assert_creates_number_of_entries(self, command, count):
+        with fixtures.TempDir() as tmpdir:
+            filename = os.path.join(tmpdir, 'test.c')
+            create_empty_file(filename)
+            command.append(filename)
+            cmd = ['sh', '-c', ' '.join(command)]
+            cdb = self.run_intercept(tmpdir, cmd)
+            self.assertEqual(count, len(cdb))
+
+    def test_filter_preprocessor_only_calls(self):
+        self.assert_creates_number_of_entries(['cc', '-c'], 1)
+        self.assert_creates_number_of_entries(['cc', '-c', '-E'], 0)
+        self.assert_creates_number_of_entries(['cc', '-c', '-M'], 0)
+        self.assert_creates_number_of_entries(['cc', '-c', '-MM'], 0)
+
+    def assert_command_creates_entry(self, command, expected):
+        with fixtures.TempDir() as tmpdir:
+            filename = os.path.join(tmpdir, command[-1])
+            create_empty_file(filename)
+            cmd = ['sh', '-c', ' '.join(command)]
+            cdb = self.run_intercept(tmpdir, cmd)
+            self.assertEqual(' '.join(expected), cdb[0]['command'])
+
+    def test_filter_preprocessor_flags(self):
+        self.assert_command_creates_entry(
+            ['cc', '-c', '-MD', 'test.c'],
+            ['cc', '-c', 'test.c'])
+        self.assert_command_creates_entry(
+            ['cc', '-c', '-MMD', 'test.c'],
+            ['cc', '-c', 'test.c'])
+        self.assert_command_creates_entry(
+            ['cc', '-c', '-MD', '-MF', 'test.d', 'test.c'],
+            ['cc', '-c', 'test.c'])
+
+    def test_pass_language_flag(self):
+        self.assert_command_creates_entry(
+            ['cc', '-c', '-x', 'c', 'test.c'],
+            ['cc', '-c', '-x', 'c', 'test.c'])
+        self.assert_command_creates_entry(
+            ['cc', '-c', 'test.c'],
+            ['cc', '-c', 'test.c'])
+
+    def test_pass_arch_flags(self):
+        self.assert_command_creates_entry(
+            ['clang', '-c', 'test.c'],
+            ['cc', '-c', 'test.c'])
+        self.assert_command_creates_entry(
+            ['clang', '-c', '-arch', 'i386', 'test.c'],
+            ['cc', '-c', '-arch', 'i386', 'test.c'])
+        self.assert_command_creates_entry(
+            ['clang', '-c', '-arch', 'i386', '-arch', 'armv7l', 'test.c'],
+            ['cc', '-c', '-arch', 'i386', '-arch', 'armv7l', 'test.c'])

Added: cfe/trunk/tools/scan-build-py/tests/functional/cases/test_exec_anatomy.py
URL: http://llvm.org/viewvc/llvm-project/cfe/trunk/tools/scan-build-py/tests/functional/cases/test_exec_anatomy.py?rev=257533&view=auto
==============================================================================
--- cfe/trunk/tools/scan-build-py/tests/functional/cases/test_exec_anatomy.py (added)
+++ cfe/trunk/tools/scan-build-py/tests/functional/cases/test_exec_anatomy.py Tue Jan 12 16:38:41 2016
@@ -0,0 +1,50 @@
+# -*- coding: utf-8 -*-
+#                     The LLVM Compiler Infrastructure
+#
+# This file is distributed under the University of Illinois Open Source
+# License. See LICENSE.TXT for details.
+
+from ...unit import fixtures
+import unittest
+
+import os.path
+import subprocess
+import json
+
+
+def run(source_dir, target_dir):
+    def execute(cmd):
+        return subprocess.check_call(cmd,
+                                     cwd=target_dir,
+                                     stdout=subprocess.PIPE,
+                                     stderr=subprocess.STDOUT)
+
+    execute(['cmake', source_dir])
+    execute(['make'])
+
+    result_file = os.path.join(target_dir, 'result.json')
+    expected_file = os.path.join(target_dir, 'expected.json')
+    execute(['intercept-build', '--cdb', result_file, './exec',
+             expected_file])
+    return (expected_file, result_file)
+
+
+class ExecAnatomyTest(unittest.TestCase):
+    def assertEqualJson(self, expected, result):
+        def read_json(filename):
+            with open(filename) as handler:
+                return json.load(handler)
+
+        lhs = read_json(expected)
+        rhs = read_json(result)
+        for item in lhs:
+            self.assertTrue(rhs.count(item))
+        for item in rhs:
+            self.assertTrue(lhs.count(item))
+
+    def test_all_exec_calls(self):
+        this_dir, _ = os.path.split(__file__)
+        source_dir = os.path.normpath(os.path.join(this_dir, '..', 'exec'))
+        with fixtures.TempDir() as tmp_dir:
+            expected, result = run(source_dir, tmp_dir)
+            self.assertEqualJson(expected, result)

Added: cfe/trunk/tools/scan-build-py/tests/functional/cases/test_from_cdb.py
URL: http://llvm.org/viewvc/llvm-project/cfe/trunk/tools/scan-build-py/tests/functional/cases/test_from_cdb.py?rev=257533&view=auto
==============================================================================
--- cfe/trunk/tools/scan-build-py/tests/functional/cases/test_from_cdb.py (added)
+++ cfe/trunk/tools/scan-build-py/tests/functional/cases/test_from_cdb.py Tue Jan 12 16:38:41 2016
@@ -0,0 +1,183 @@
+# -*- coding: utf-8 -*-
+#                     The LLVM Compiler Infrastructure
+#
+# This file is distributed under the University of Illinois Open Source
+# License. See LICENSE.TXT for details.
+
+from ...unit import fixtures
+from . import call_and_report
+import unittest
+
+import os.path
+import string
+import subprocess
+import glob
+
+
+def prepare_cdb(name, target_dir):
+    target_file = 'build_{0}.json'.format(name)
+    this_dir, _ = os.path.split(__file__)
+    path = os.path.normpath(os.path.join(this_dir, '..', 'src'))
+    source_dir = os.path.join(path, 'compilation_database')
+    source_file = os.path.join(source_dir, target_file + '.in')
+    target_file = os.path.join(target_dir, 'compile_commands.json')
+    with open(source_file, 'r') as in_handle:
+        with open(target_file, 'w') as out_handle:
+            for line in in_handle:
+                temp = string.Template(line)
+                out_handle.write(temp.substitute(path=path))
+    return target_file
+
+
+def run_analyzer(directory, cdb, args):
+    cmd = ['analyze-build', '--cdb', cdb, '--output', directory] \
+        + args
+    return call_and_report(cmd, [])
+
+
+class OutputDirectoryTest(unittest.TestCase):
+    def test_regular_keeps_report_dir(self):
+        with fixtures.TempDir() as tmpdir:
+            cdb = prepare_cdb('regular', tmpdir)
+            exit_code, reportdir = run_analyzer(tmpdir, cdb, [])
+            self.assertTrue(os.path.isdir(reportdir))
+
+    def test_clear_deletes_report_dir(self):
+        with fixtures.TempDir() as tmpdir:
+            cdb = prepare_cdb('clean', tmpdir)
+            exit_code, reportdir = run_analyzer(tmpdir, cdb, [])
+            self.assertFalse(os.path.isdir(reportdir))
+
+    def test_clear_keeps_report_dir_when_asked(self):
+        with fixtures.TempDir() as tmpdir:
+            cdb = prepare_cdb('clean', tmpdir)
+            exit_code, reportdir = run_analyzer(tmpdir, cdb, ['--keep-empty'])
+            self.assertTrue(os.path.isdir(reportdir))
+
+
+class ExitCodeTest(unittest.TestCase):
+    def test_regular_does_not_set_exit_code(self):
+        with fixtures.TempDir() as tmpdir:
+            cdb = prepare_cdb('regular', tmpdir)
+            exit_code, __ = run_analyzer(tmpdir, cdb, [])
+            self.assertFalse(exit_code)
+
+    def test_clear_does_not_set_exit_code(self):
+        with fixtures.TempDir() as tmpdir:
+            cdb = prepare_cdb('clean', tmpdir)
+            exit_code, __ = run_analyzer(tmpdir, cdb, [])
+            self.assertFalse(exit_code)
+
+    def test_regular_sets_exit_code_if_asked(self):
+        with fixtures.TempDir() as tmpdir:
+            cdb = prepare_cdb('regular', tmpdir)
+            exit_code, __ = run_analyzer(tmpdir, cdb, ['--status-bugs'])
+            self.assertTrue(exit_code)
+
+    def test_clear_does_not_set_exit_code_if_asked(self):
+        with fixtures.TempDir() as tmpdir:
+            cdb = prepare_cdb('clean', tmpdir)
+            exit_code, __ = run_analyzer(tmpdir, cdb, ['--status-bugs'])
+            self.assertFalse(exit_code)
+
+    def test_regular_sets_exit_code_if_asked_from_plist(self):
+        with fixtures.TempDir() as tmpdir:
+            cdb = prepare_cdb('regular', tmpdir)
+            exit_code, __ = run_analyzer(
+                tmpdir, cdb, ['--status-bugs', '--plist'])
+            self.assertTrue(exit_code)
+
+    def test_clear_does_not_set_exit_code_if_asked_from_plist(self):
+        with fixtures.TempDir() as tmpdir:
+            cdb = prepare_cdb('clean', tmpdir)
+            exit_code, __ = run_analyzer(
+                tmpdir, cdb, ['--status-bugs', '--plist'])
+            self.assertFalse(exit_code)
+
+
+class OutputFormatTest(unittest.TestCase):
+    @staticmethod
+    def get_html_count(directory):
+        return len(glob.glob(os.path.join(directory, 'report-*.html')))
+
+    @staticmethod
+    def get_plist_count(directory):
+        return len(glob.glob(os.path.join(directory, 'report-*.plist')))
+
+    def test_default_creates_html_report(self):
+        with fixtures.TempDir() as tmpdir:
+            cdb = prepare_cdb('regular', tmpdir)
+            exit_code, reportdir = run_analyzer(tmpdir, cdb, [])
+            self.assertTrue(
+                os.path.exists(os.path.join(reportdir, 'index.html')))
+            self.assertEqual(self.get_html_count(reportdir), 2)
+            self.assertEqual(self.get_plist_count(reportdir), 0)
+
+    def test_plist_and_html_creates_html_report(self):
+        with fixtures.TempDir() as tmpdir:
+            cdb = prepare_cdb('regular', tmpdir)
+            exit_code, reportdir = run_analyzer(tmpdir, cdb, ['--plist-html'])
+            self.assertTrue(
+                os.path.exists(os.path.join(reportdir, 'index.html')))
+            self.assertEqual(self.get_html_count(reportdir), 2)
+            self.assertEqual(self.get_plist_count(reportdir), 5)
+
+    def test_plist_does_not_creates_html_report(self):
+        with fixtures.TempDir() as tmpdir:
+            cdb = prepare_cdb('regular', tmpdir)
+            exit_code, reportdir = run_analyzer(tmpdir, cdb, ['--plist'])
+            self.assertFalse(
+                os.path.exists(os.path.join(reportdir, 'index.html')))
+            self.assertEqual(self.get_html_count(reportdir), 0)
+            self.assertEqual(self.get_plist_count(reportdir), 5)
+
+
+class FailureReportTest(unittest.TestCase):
+    def test_broken_creates_failure_reports(self):
+        with fixtures.TempDir() as tmpdir:
+            cdb = prepare_cdb('broken', tmpdir)
+            exit_code, reportdir = run_analyzer(tmpdir, cdb, [])
+            self.assertTrue(
+                os.path.isdir(os.path.join(reportdir, 'failures')))
+
+    def test_broken_does_not_creates_failure_reports(self):
+        with fixtures.TempDir() as tmpdir:
+            cdb = prepare_cdb('broken', tmpdir)
+            exit_code, reportdir = run_analyzer(
+                tmpdir, cdb, ['--no-failure-reports'])
+            self.assertFalse(
+                os.path.isdir(os.path.join(reportdir, 'failures')))
+
+
+class TitleTest(unittest.TestCase):
+    def assertTitleEqual(self, directory, expected):
+        import re
+        patterns = [
+            re.compile(r'<title>(?P<page>.*)</title>'),
+            re.compile(r'<h1>(?P<head>.*)</h1>')
+        ]
+        result = dict()
+
+        index = os.path.join(directory, 'index.html')
+        with open(index, 'r') as handler:
+            for line in handler.readlines():
+                for regex in patterns:
+                    match = regex.match(line.strip())
+                    if match:
+                        result.update(match.groupdict())
+                        break
+        self.assertEqual(result['page'], result['head'])
+        self.assertEqual(result['page'], expected)
+
+    def test_default_title_in_report(self):
+        with fixtures.TempDir() as tmpdir:
+            cdb = prepare_cdb('broken', tmpdir)
+            exit_code, reportdir = run_analyzer(tmpdir, cdb, [])
+            self.assertTitleEqual(reportdir, 'src - analyzer results')
+
+    def test_given_title_in_report(self):
+        with fixtures.TempDir() as tmpdir:
+            cdb = prepare_cdb('broken', tmpdir)
+            exit_code, reportdir = run_analyzer(
+                tmpdir, cdb, ['--html-title', 'this is the title'])
+            self.assertTitleEqual(reportdir, 'this is the title')

Added: cfe/trunk/tools/scan-build-py/tests/functional/cases/test_from_cmd.py
URL: http://llvm.org/viewvc/llvm-project/cfe/trunk/tools/scan-build-py/tests/functional/cases/test_from_cmd.py?rev=257533&view=auto
==============================================================================
--- cfe/trunk/tools/scan-build-py/tests/functional/cases/test_from_cmd.py (added)
+++ cfe/trunk/tools/scan-build-py/tests/functional/cases/test_from_cmd.py Tue Jan 12 16:38:41 2016
@@ -0,0 +1,118 @@
+# -*- coding: utf-8 -*-
+#                     The LLVM Compiler Infrastructure
+#
+# This file is distributed under the University of Illinois Open Source
+# License. See LICENSE.TXT for details.
+
+from ...unit import fixtures
+from . import make_args, check_call_and_report, create_empty_file
+import unittest
+
+import os
+import os.path
+import glob
+
+
+class OutputDirectoryTest(unittest.TestCase):
+
+    @staticmethod
+    def run_analyzer(outdir, args, cmd):
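+        # Run scan-build with compiler interception and place the report under
+        # OUTDIR; the helper returns the report directory it created.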
+        return check_call_and_report(
+            ['scan-build', '--intercept-first', '-o', outdir] + args,
+            cmd)
+
+    def test_regular_keeps_report_dir(self):
+        with fixtures.TempDir() as tmpdir:
+            make = make_args(tmpdir) + ['build_regular']
+            outdir = self.run_analyzer(tmpdir, [], make)
+            self.assertTrue(os.path.isdir(outdir))
+
+    def test_clear_deletes_report_dir(self):
+        with fixtures.TempDir() as tmpdir:
+            make = make_args(tmpdir) + ['build_clean']
+            outdir = self.run_analyzer(tmpdir, [], make)
+            self.assertFalse(os.path.isdir(outdir))
+
+    def test_clear_keeps_report_dir_when_asked(self):
+        with fixtures.TempDir() as tmpdir:
+            make = make_args(tmpdir) + ['build_clean']
+            outdir = self.run_analyzer(tmpdir, ['--keep-empty'], make)
+            self.assertTrue(os.path.isdir(outdir))
+
+
+class RunAnalyzerTest(unittest.TestCase):
+
+    @staticmethod
+    def get_plist_count(directory):
+        return len(glob.glob(os.path.join(directory, 'report-*.plist')))
+
+    def test_interposition_works(self):
+        with fixtures.TempDir() as tmpdir:
+            make = make_args(tmpdir) + ['build_regular']
+            outdir = check_call_and_report(
+                ['scan-build', '--plist', '-o', tmpdir, '--override-compiler'],
+                make)
+
+            self.assertTrue(os.path.isdir(outdir))
+            self.assertEqual(self.get_plist_count(outdir), 5)
+
+    def test_intercept_wrapper_works(self):
+        with fixtures.TempDir() as tmpdir:
+            make = make_args(tmpdir) + ['build_regular']
+            outdir = check_call_and_report(
+                ['scan-build', '--plist', '-o', tmpdir, '--intercept-first',
+                 '--override-compiler'],
+                make)
+
+            self.assertTrue(os.path.isdir(outdir))
+            self.assertEqual(self.get_plist_count(outdir), 5)
+
+    def test_intercept_library_works(self):
+        with fixtures.TempDir() as tmpdir:
+            make = make_args(tmpdir) + ['build_regular']
+            outdir = check_call_and_report(
+                ['scan-build', '--plist', '-o', tmpdir, '--intercept-first'],
+                make)
+
+            self.assertTrue(os.path.isdir(outdir))
+            self.assertEqual(self.get_plist_count(outdir), 5)
+
+    @staticmethod
+    def compile_empty_source_file(target_dir, is_cxx):
+        compiler = '$CXX' if is_cxx else '$CC'
+        src_file_name = 'test.cxx' if is_cxx else 'test.c'
+        src_file = os.path.join(target_dir, src_file_name)
+        obj_file = os.path.join(target_dir, 'test.o')
+        create_empty_file(src_file)
+        command = ' '.join([compiler, '-c', src_file, '-o', obj_file])
+        return ['sh', '-c', command]
+
+    def test_interposition_cc_works(self):
+        with fixtures.TempDir() as tmpdir:
+            outdir = check_call_and_report(
+                ['scan-build', '--plist', '-o', tmpdir, '--override-compiler'],
+                self.compile_empty_source_file(tmpdir, False))
+            self.assertEqual(self.get_plist_count(outdir), 1)
+
+    def test_interposition_cxx_works(self):
+        with fixtures.TempDir() as tmpdir:
+            outdir = check_call_and_report(
+                ['scan-build', '--plist', '-o', tmpdir, '--override-compiler'],
+                self.compile_empty_source_file(tmpdir, True))
+            self.assertEqual(self.get_plist_count(outdir), 1)
+
+    def test_intercept_cc_works(self):
+        with fixtures.TempDir() as tmpdir:
+            outdir = check_call_and_report(
+                ['scan-build', '--plist', '-o', tmpdir, '--override-compiler',
+                 '--intercept-first'],
+                self.compile_empty_source_file(tmpdir, False))
+            self.assertEqual(self.get_plist_count(outdir), 1)
+
+    def test_intercept_cxx_works(self):
+        with fixtures.TempDir() as tmpdir:
+            outdir = check_call_and_report(
+                ['scan-build', '--plist', '-o', tmpdir, '--override-compiler',
+                 '--intercept-first'],
+                self.compile_empty_source_file(tmpdir, True))
+            self.assertEqual(self.get_plist_count(outdir), 1)

Added: cfe/trunk/tools/scan-build-py/tests/functional/exec/CMakeLists.txt
URL: http://llvm.org/viewvc/llvm-project/cfe/trunk/tools/scan-build-py/tests/functional/exec/CMakeLists.txt?rev=257533&view=auto
==============================================================================
--- cfe/trunk/tools/scan-build-py/tests/functional/exec/CMakeLists.txt (added)
+++ cfe/trunk/tools/scan-build-py/tests/functional/exec/CMakeLists.txt Tue Jan 12 16:38:41 2016
@@ -0,0 +1,32 @@
+project(exec C)
+
+cmake_minimum_required(VERSION 2.8)
+
+include(CheckCCompilerFlag)
+check_c_compiler_flag("-std=c99" C99_SUPPORTED)
+if (C99_SUPPORTED)
+    set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -std=c99")
+endif()
+
+include(CheckFunctionExists)
+include(CheckSymbolExists)
+
+add_definitions(-D_GNU_SOURCE)
+list(APPEND CMAKE_REQUIRED_DEFINITIONS -D_GNU_SOURCE)
+
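+# Detect which exec-family and posix_spawn functions this platform provides;
+# each result becomes a HAVE_* macro through config.h.in below.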
+check_function_exists(execve HAVE_EXECVE)
+check_function_exists(execv HAVE_EXECV)
+check_function_exists(execvpe HAVE_EXECVPE)
+check_function_exists(execvp HAVE_EXECVP)
+check_function_exists(execvP HAVE_EXECVP2)
+check_function_exists(exect HAVE_EXECT)
+check_function_exists(execl HAVE_EXECL)
+check_function_exists(execlp HAVE_EXECLP)
+check_function_exists(execle HAVE_EXECLE)
+check_function_exists(posix_spawn HAVE_POSIX_SPAWN)
+check_function_exists(posix_spawnp HAVE_POSIX_SPAWNP)
+
+configure_file(${CMAKE_CURRENT_SOURCE_DIR}/config.h.in ${CMAKE_CURRENT_BINARY_DIR}/config.h)
+include_directories(${CMAKE_CURRENT_BINARY_DIR})
+
+add_executable(exec main.c)

Added: cfe/trunk/tools/scan-build-py/tests/functional/exec/config.h.in
URL: http://llvm.org/viewvc/llvm-project/cfe/trunk/tools/scan-build-py/tests/functional/exec/config.h.in?rev=257533&view=auto
==============================================================================
--- cfe/trunk/tools/scan-build-py/tests/functional/exec/config.h.in (added)
+++ cfe/trunk/tools/scan-build-py/tests/functional/exec/config.h.in Tue Jan 12 16:38:41 2016
@@ -0,0 +1,20 @@
+/* -*- coding: utf-8 -*-
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+*/
+
+#pragma once
+
+#cmakedefine HAVE_EXECVE
+#cmakedefine HAVE_EXECV
+#cmakedefine HAVE_EXECVPE
+#cmakedefine HAVE_EXECVP
+#cmakedefine HAVE_EXECVP2
+#cmakedefine HAVE_EXECT
+#cmakedefine HAVE_EXECL
+#cmakedefine HAVE_EXECLP
+#cmakedefine HAVE_EXECLE
+#cmakedefine HAVE_POSIX_SPAWN
+#cmakedefine HAVE_POSIX_SPAWNP

Added: cfe/trunk/tools/scan-build-py/tests/functional/exec/main.c
URL: http://llvm.org/viewvc/llvm-project/cfe/trunk/tools/scan-build-py/tests/functional/exec/main.c?rev=257533&view=auto
==============================================================================
--- cfe/trunk/tools/scan-build-py/tests/functional/exec/main.c (added)
+++ cfe/trunk/tools/scan-build-py/tests/functional/exec/main.c Tue Jan 12 16:38:41 2016
@@ -0,0 +1,307 @@
+/* -*- coding: utf-8 -*-
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+*/
+
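+/* This test program calls every exec-family and posix_spawn variant detected
+   at configure time to compile a small generated source file, and writes the
+   matching expected compilation database entries (JSON) to the file named by
+   argv[1]. */
+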
+#include "config.h"
+
+#include <sys/wait.h>
+#include <unistd.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <paths.h>
+
+#if defined HAVE_POSIX_SPAWN || defined HAVE_POSIX_SPAWNP
+#include <spawn.h>
+#endif
+
+// ..:: environment access fixer - begin ::..
+#ifdef HAVE_NSGETENVIRON
+#include <crt_externs.h>
+#else
+extern char **environ;
+#endif
+
+char **get_environ() {
+#ifdef HAVE_NSGETENVIRON
+    return *_NSGetEnviron();
+#else
+    return environ;
+#endif
+}
+// ..:: environment access fixer - end ::..
+
+// ..:: test fixtures - begin ::..
+static char const *cwd = NULL;
+static FILE *fd = NULL;
+static int need_comma = 0;
+
+void expected_out_open(const char *expected) {
+    cwd = getcwd(NULL, 0);
+    fd = fopen(expected, "w");
+    if (!fd) {
+        perror("fopen");
+        exit(EXIT_FAILURE);
+    }
+    fprintf(fd, "[\n");
+    need_comma = 0;
+}
+
+void expected_out_close() {
+    fprintf(fd, "]\n");
+    fclose(fd);
+    fd = NULL;
+
+    free((void *)cwd);
+    cwd = NULL;
+}
+
+void expected_out(const char *file) {
+    if (need_comma)
+        fprintf(fd, ",\n");
+    else
+        need_comma = 1;
+
+    fprintf(fd, "{\n");
+    fprintf(fd, "  \"directory\": \"%s\",\n", cwd);
+    fprintf(fd, "  \"command\": \"cc -c %s\",\n", file);
+    fprintf(fd, "  \"file\": \"%s/%s\"\n", cwd, file);
+    fprintf(fd, "}\n");
+}
+
+void create_source(char *file) {
+    FILE *fd = fopen(file, "w");
+    if (!fd) {
+        perror("fopen");
+        exit(EXIT_FAILURE);
+    }
+    fprintf(fd, "typedef int score;\n");
+    fclose(fd);
+}
+
+typedef void (*exec_fun)();
+
+void wait_for(pid_t child) {
+    int status;
+    if (-1 == waitpid(child, &status, 0)) {
+        perror("wait");
+        exit(EXIT_FAILURE);
+    }
+    if (WIFEXITED(status) ? WEXITSTATUS(status) : EXIT_FAILURE) {
+        fprintf(stderr, "child process has non-zero exit code\n");
+        exit(EXIT_FAILURE);
+    }
+}
+
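+/* Run FUNC in a forked child; the parent waits for the child and aborts the
+   test if the child failed to exec or exited with a non-zero status. */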
+#define FORK(FUNC)                                                             \
+    {                                                                          \
+        pid_t child = fork();                                                  \
+        if (-1 == child) {                                                     \
+            perror("fork");                                                    \
+            exit(EXIT_FAILURE);                                                \
+        } else if (0 == child) {                                               \
+            FUNC fprintf(stderr, "child process failed to exec\n");            \
+            exit(EXIT_FAILURE);                                                \
+        } else {                                                               \
+            wait_for(child);                                                   \
+        }                                                                      \
+    }
+// ..:: test fixtures - end ::..
+
+#ifdef HAVE_EXECV
+void call_execv() {
+    char *const file = "execv.c";
+    char *const compiler = "/usr/bin/cc";
+    char *const argv[] = {"cc", "-c", file, 0};
+
+    expected_out(file);
+    create_source(file);
+
+    FORK(execv(compiler, argv);)
+}
+#endif
+
+#ifdef HAVE_EXECVE
+void call_execve() {
+    char *const file = "execve.c";
+    char *const compiler = "/usr/bin/cc";
+    char *const argv[] = {compiler, "-c", file, 0};
+    char *const envp[] = {"THIS=THAT", 0};
+
+    expected_out(file);
+    create_source(file);
+
+    FORK(execve(compiler, argv, envp);)
+}
+#endif
+
+#ifdef HAVE_EXECVP
+void call_execvp() {
+    char *const file = "execvp.c";
+    char *const compiler = "cc";
+    char *const argv[] = {compiler, "-c", file, 0};
+
+    expected_out(file);
+    create_source(file);
+
+    FORK(execvp(compiler, argv);)
+}
+#endif
+
+#ifdef HAVE_EXECVP2
+void call_execvP() {
+    char *const file = "execv_p.c";
+    char *const compiler = "cc";
+    char *const argv[] = {compiler, "-c", file, 0};
+
+    expected_out(file);
+    create_source(file);
+
+    FORK(execvP(compiler, _PATH_DEFPATH, argv);)
+}
+#endif
+
+#ifdef HAVE_EXECVPE
+void call_execvpe() {
+    char *const file = "execvpe.c";
+    char *const compiler = "cc";
+    char *const argv[] = {"/usr/bin/cc", "-c", file, 0};
+    char *const envp[] = {"THIS=THAT", 0};
+
+    expected_out(file);
+    create_source(file);
+
+    FORK(execvpe(compiler, argv, envp);)
+}
+#endif
+
+#ifdef HAVE_EXECT
+void call_exect() {
+    char *const file = "exect.c";
+    char *const compiler = "/usr/bin/cc";
+    char *const argv[] = {compiler, "-c", file, 0};
+    char *const envp[] = {"THIS=THAT", 0};
+
+    expected_out(file);
+    create_source(file);
+
+    FORK(exect(compiler, argv, envp);)
+}
+#endif
+
+#ifdef HAVE_EXECL
+void call_execl() {
+    char *const file = "execl.c";
+    char *const compiler = "/usr/bin/cc";
+
+    expected_out(file);
+    create_source(file);
+
+    FORK(execl(compiler, "cc", "-c", file, (char *)0);)
+}
+#endif
+
+#ifdef HAVE_EXECLP
+void call_execlp() {
+    char *const file = "execlp.c";
+    char *const compiler = "cc";
+
+    expected_out(file);
+    create_source(file);
+
+    FORK(execlp(compiler, compiler, "-c", file, (char *)0);)
+}
+#endif
+
+#ifdef HAVE_EXECLE
+void call_execle() {
+    char *const file = "execle.c";
+    char *const compiler = "/usr/bin/cc";
+    char *const envp[] = {"THIS=THAT", 0};
+
+    expected_out(file);
+    create_source(file);
+
+    FORK(execle(compiler, compiler, "-c", file, (char *)0, envp);)
+}
+#endif
+
+#ifdef HAVE_POSIX_SPAWN
+void call_posix_spawn() {
+    char *const file = "posix_spawn.c";
+    char *const compiler = "cc";
+    char *const argv[] = {compiler, "-c", file, 0};
+
+    expected_out(file);
+    create_source(file);
+
+    pid_t child;
+    if (0 != posix_spawn(&child, "/usr/bin/cc", 0, 0, argv, get_environ())) {
+        perror("posix_spawn");
+        exit(EXIT_FAILURE);
+    }
+    wait_for(child);
+}
+#endif
+
+#ifdef HAVE_POSIX_SPAWNP
+void call_posix_spawnp() {
+    char *const file = "posix_spawnp.c";
+    char *const compiler = "cc";
+    char *const argv[] = {compiler, "-c", file, 0};
+
+    expected_out(file);
+    create_source(file);
+
+    pid_t child;
+    if (0 != posix_spawnp(&child, "cc", 0, 0, argv, get_environ())) {
+        perror("posix_spawnp");
+        exit(EXIT_FAILURE);
+    }
+    wait_for(child);
+}
+#endif
+
+int main(int argc, char *const argv[]) {
+    if (argc != 2)
+        exit(EXIT_FAILURE);
+
+    expected_out_open(argv[1]);
+#ifdef HAVE_EXECV
+    call_execv();
+#endif
+#ifdef HAVE_EXECVE
+    call_execve();
+#endif
+#ifdef HAVE_EXECVP
+    call_execvp();
+#endif
+#ifdef HAVE_EXECVP2
+    call_execvP();
+#endif
+#ifdef HAVE_EXECVPE
+    call_execvpe();
+#endif
+#ifdef HAVE_EXECT
+    call_exect();
+#endif
+#ifdef HAVE_EXECL
+    call_execl();
+#endif
+#ifdef HAVE_EXECLP
+    call_execlp();
+#endif
+#ifdef HAVE_EXECLE
+    call_execle();
+#endif
+#ifdef HAVE_POSIX_SPAWN
+    call_posix_spawn();
+#endif
+#ifdef HAVE_POSIX_SPAWNP
+    call_posix_spawnp();
+#endif
+    expected_out_close();
+    return 0;
+}

Added: cfe/trunk/tools/scan-build-py/tests/functional/src/broken-one.c
URL: http://llvm.org/viewvc/llvm-project/cfe/trunk/tools/scan-build-py/tests/functional/src/broken-one.c?rev=257533&view=auto
==============================================================================
--- cfe/trunk/tools/scan-build-py/tests/functional/src/broken-one.c (added)
+++ cfe/trunk/tools/scan-build-py/tests/functional/src/broken-one.c Tue Jan 12 16:38:41 2016
@@ -0,0 +1,6 @@
+#include <notexisting.hpp>
+
+int value(int in)
+{
+    return 2 * in;
+}

Added: cfe/trunk/tools/scan-build-py/tests/functional/src/broken-two.c
URL: http://llvm.org/viewvc/llvm-project/cfe/trunk/tools/scan-build-py/tests/functional/src/broken-two.c?rev=257533&view=auto
==============================================================================
--- cfe/trunk/tools/scan-build-py/tests/functional/src/broken-two.c (added)
+++ cfe/trunk/tools/scan-build-py/tests/functional/src/broken-two.c Tue Jan 12 16:38:41 2016
@@ -0,0 +1 @@
+int test() { ;

Added: cfe/trunk/tools/scan-build-py/tests/functional/src/build/Makefile
URL: http://llvm.org/viewvc/llvm-project/cfe/trunk/tools/scan-build-py/tests/functional/src/build/Makefile?rev=257533&view=auto
==============================================================================
--- cfe/trunk/tools/scan-build-py/tests/functional/src/build/Makefile (added)
+++ cfe/trunk/tools/scan-build-py/tests/functional/src/build/Makefile Tue Jan 12 16:38:41 2016
@@ -0,0 +1,42 @@
+SRCDIR := ..
+OBJDIR := .
+
+CFLAGS = -Wall -DDEBUG -Dvariable="value with space" -I $(SRCDIR)/include
+LDFLAGS =
+PROGRAM = $(OBJDIR)/prg
+
+$(OBJDIR)/main.o: $(SRCDIR)/main.c
+	$(CC) $(CFLAGS) -c -o $@ $(SRCDIR)/main.c
+
+$(OBJDIR)/clean-one.o: $(SRCDIR)/clean-one.c
+	$(CC) $(CFLAGS) -c -o $@ $(SRCDIR)/clean-one.c
+
+$(OBJDIR)/clean-two.o: $(SRCDIR)/clean-two.c
+	$(CC) $(CFLAGS) -c -o $@ $(SRCDIR)/clean-two.c
+
+$(OBJDIR)/emit-one.o: $(SRCDIR)/emit-one.c
+	$(CC) $(CFLAGS) -c -o $@ $(SRCDIR)/emit-one.c
+
+$(OBJDIR)/emit-two.o: $(SRCDIR)/emit-two.c
+	$(CC) $(CFLAGS) -c -o $@ $(SRCDIR)/emit-two.c
+
+$(OBJDIR)/broken-one.o: $(SRCDIR)/broken-one.c
+	$(CC) $(CFLAGS) -c -o $@ $(SRCDIR)/broken-one.c
+
+$(OBJDIR)/broken-two.o: $(SRCDIR)/broken-two.c
+	$(CC) $(CFLAGS) -c -o $@ $(SRCDIR)/broken-two.c
+
+$(PROGRAM): $(OBJDIR)/main.o $(OBJDIR)/clean-one.o $(OBJDIR)/clean-two.o $(OBJDIR)/emit-one.o $(OBJDIR)/emit-two.o
+	$(CC) $(LDFLAGS) -o $@ $(OBJDIR)/main.o $(OBJDIR)/clean-one.o $(OBJDIR)/clean-two.o $(OBJDIR)/emit-one.o $(OBJDIR)/emit-two.o
+
+build_regular: $(PROGRAM)
+
+build_clean: $(OBJDIR)/main.o $(OBJDIR)/clean-one.o $(OBJDIR)/clean-two.o
+
+build_broken: $(OBJDIR)/main.o $(OBJDIR)/broken-one.o $(OBJDIR)/broken-two.o
+
+build_all_in_one: $(SRCDIR)/main.c $(SRCDIR)/clean-one.c $(SRCDIR)/clean-two.c $(SRCDIR)/emit-one.c $(SRCDIR)/emit-two.c
+	$(CC) $(CFLAGS) $(LDFLAGS) -o $(PROGRAM) $(SRCDIR)/main.c $(SRCDIR)/clean-one.c $(SRCDIR)/clean-two.c $(SRCDIR)/emit-one.c $(SRCDIR)/emit-two.c
+
+clean:
+	rm -f $(PROGRAM) $(OBJDIR)/*.o

Added: cfe/trunk/tools/scan-build-py/tests/functional/src/clean-one.c
URL: http://llvm.org/viewvc/llvm-project/cfe/trunk/tools/scan-build-py/tests/functional/src/clean-one.c?rev=257533&view=auto
==============================================================================
--- cfe/trunk/tools/scan-build-py/tests/functional/src/clean-one.c (added)
+++ cfe/trunk/tools/scan-build-py/tests/functional/src/clean-one.c Tue Jan 12 16:38:41 2016
@@ -0,0 +1,13 @@
+#include <clean-one.h>
+
+int do_nothing_loop()
+{
+    int i = 32;
+    int idx = 0;
+
+    for (idx = i; idx > 0; --idx)
+    {
+        i += idx;
+    }
+    return i;
+}

Added: cfe/trunk/tools/scan-build-py/tests/functional/src/clean-two.c
URL: http://llvm.org/viewvc/llvm-project/cfe/trunk/tools/scan-build-py/tests/functional/src/clean-two.c?rev=257533&view=auto
==============================================================================
--- cfe/trunk/tools/scan-build-py/tests/functional/src/clean-two.c (added)
+++ cfe/trunk/tools/scan-build-py/tests/functional/src/clean-two.c Tue Jan 12 16:38:41 2016
@@ -0,0 +1,11 @@
+#include <clean-one.h>
+
+#include <stdlib.h>
+
+unsigned int another_method()
+{
+    unsigned int const size = do_nothing_loop();
+    unsigned int const square = size * size;
+
+    return square;
+}

Added: cfe/trunk/tools/scan-build-py/tests/functional/src/compilation_database/build_broken.json.in
URL: http://llvm.org/viewvc/llvm-project/cfe/trunk/tools/scan-build-py/tests/functional/src/compilation_database/build_broken.json.in?rev=257533&view=auto
==============================================================================
--- cfe/trunk/tools/scan-build-py/tests/functional/src/compilation_database/build_broken.json.in (added)
+++ cfe/trunk/tools/scan-build-py/tests/functional/src/compilation_database/build_broken.json.in Tue Jan 12 16:38:41 2016
@@ -0,0 +1,43 @@
+[
+{
+  "directory": "${path}",
+  "command": "g++ -c -o main.o main.c -Wall -DDEBUG -Dvariable=value",
+  "file": "${path}/main.c"
+}
+,
+{
+  "directory": "${path}",
+  "command": "cc -c -o broken-one.o broken-one.c -Wall -DDEBUG \"-Dvariable=value with space\"",
+  "file": "${path}/broken-one.c"
+}
+,
+{
+  "directory": "${path}",
+  "command": "g++ -c -o broken-two.o broken-two.c -Wall -DDEBUG -Dvariable=value",
+  "file": "${path}/broken-two.c"
+}
+,
+{
+  "directory": "${path}",
+  "command": "cc -c -o clean-one.o clean-one.c -Wall -DDEBUG \"-Dvariable=value with space\" -Iinclude",
+  "file": "${path}/clean-one.c"
+}
+,
+{
+  "directory": "${path}",
+  "command": "g++ -c -o clean-two.o clean-two.c -Wall -DDEBUG -Dvariable=value -I ./include",
+  "file": "${path}/clean-two.c"
+}
+,
+{
+  "directory": "${path}",
+  "command": "cc -c -o emit-one.o emit-one.c -Wall -DDEBUG \"-Dvariable=value with space\"",
+  "file": "${path}/emit-one.c"
+}
+,
+{
+  "directory": "${path}",
+  "command": "g++ -c -o emit-two.o emit-two.c -Wall -DDEBUG -Dvariable=value",
+  "file": "${path}/emit-two.c"
+}
+]

Added: cfe/trunk/tools/scan-build-py/tests/functional/src/compilation_database/build_clean.json.in
URL: http://llvm.org/viewvc/llvm-project/cfe/trunk/tools/scan-build-py/tests/functional/src/compilation_database/build_clean.json.in?rev=257533&view=auto
==============================================================================
--- cfe/trunk/tools/scan-build-py/tests/functional/src/compilation_database/build_clean.json.in (added)
+++ cfe/trunk/tools/scan-build-py/tests/functional/src/compilation_database/build_clean.json.in Tue Jan 12 16:38:41 2016
@@ -0,0 +1,19 @@
+[
+{
+  "directory": "${path}",
+  "command": "g++ -c -o main.o main.c -Wall -DDEBUG -Dvariable=value",
+  "file": "${path}/main.c"
+}
+,
+{
+  "directory": "${path}",
+  "command": "cc -c -o clean-one.o clean-one.c -Wall -DDEBUG \"-Dvariable=value with space\" -Iinclude",
+  "file": "${path}/clean-one.c"
+}
+,
+{
+  "directory": "${path}",
+  "command": "g++ -c -o clean-two.o clean-two.c -Wall -DDEBUG -Dvariable=value -I ./include",
+  "file": "${path}/clean-two.c"
+}
+]

Added: cfe/trunk/tools/scan-build-py/tests/functional/src/compilation_database/build_regular.json.in
URL: http://llvm.org/viewvc/llvm-project/cfe/trunk/tools/scan-build-py/tests/functional/src/compilation_database/build_regular.json.in?rev=257533&view=auto
==============================================================================
--- cfe/trunk/tools/scan-build-py/tests/functional/src/compilation_database/build_regular.json.in (added)
+++ cfe/trunk/tools/scan-build-py/tests/functional/src/compilation_database/build_regular.json.in Tue Jan 12 16:38:41 2016
@@ -0,0 +1,31 @@
+[
+{
+  "directory": "${path}",
+  "command": "g++ -c -o main.o main.c -Wall -DDEBUG -Dvariable=value",
+  "file": "${path}/main.c"
+}
+,
+{
+  "directory": "${path}",
+  "command": "cc -c -o clean-one.o clean-one.c -Wall -DDEBUG \"-Dvariable=value with space\" -Iinclude",
+  "file": "${path}/clean-one.c"
+}
+,
+{
+  "directory": "${path}",
+  "command": "g++ -c -o clean-two.o clean-two.c -Wall -DDEBUG -Dvariable=value -I ./include",
+  "file": "${path}/clean-two.c"
+}
+,
+{
+  "directory": "${path}",
+  "command": "cc -c -o emit-one.o emit-one.c -Wall -DDEBUG \"-Dvariable=value with space\"",
+  "file": "${path}/emit-one.c"
+}
+,
+{
+  "directory": "${path}",
+  "command": "g++ -c -o emit-two.o emit-two.c -Wall -DDEBUG -Dvariable=value",
+  "file": "${path}/emit-two.c"
+}
+]

Added: cfe/trunk/tools/scan-build-py/tests/functional/src/emit-one.c
URL: http://llvm.org/viewvc/llvm-project/cfe/trunk/tools/scan-build-py/tests/functional/src/emit-one.c?rev=257533&view=auto
==============================================================================
--- cfe/trunk/tools/scan-build-py/tests/functional/src/emit-one.c (added)
+++ cfe/trunk/tools/scan-build-py/tests/functional/src/emit-one.c Tue Jan 12 16:38:41 2016
@@ -0,0 +1,23 @@
+#include <assert.h>
+
+int div(int numerator, int denominator)
+{
+    return numerator / denominator;
+}
+
+void div_test()
+{
+    int i = 0;
+    for (i = 0; i < 2; ++i)
+        assert(div(2 * i, i) == 2);
+}
+
+int do_nothing()
+{
+    unsigned int i = 0;
+
+    int k = 100;
+    int j = k + 1;
+
+    return j;
+}

Added: cfe/trunk/tools/scan-build-py/tests/functional/src/emit-two.c
URL: http://llvm.org/viewvc/llvm-project/cfe/trunk/tools/scan-build-py/tests/functional/src/emit-two.c?rev=257533&view=auto
==============================================================================
--- cfe/trunk/tools/scan-build-py/tests/functional/src/emit-two.c (added)
+++ cfe/trunk/tools/scan-build-py/tests/functional/src/emit-two.c Tue Jan 12 16:38:41 2016
@@ -0,0 +1,13 @@
+
+int bad_guy(int * i)
+{
+    *i = 9;
+    return *i;
+}
+
+void bad_guy_test()
+{
+    int * ptr = 0;
+
+    bad_guy(ptr);
+}

Added: cfe/trunk/tools/scan-build-py/tests/functional/src/include/clean-one.h
URL: http://llvm.org/viewvc/llvm-project/cfe/trunk/tools/scan-build-py/tests/functional/src/include/clean-one.h?rev=257533&view=auto
==============================================================================
--- cfe/trunk/tools/scan-build-py/tests/functional/src/include/clean-one.h (added)
+++ cfe/trunk/tools/scan-build-py/tests/functional/src/include/clean-one.h Tue Jan 12 16:38:41 2016
@@ -0,0 +1,6 @@
+#ifndef CLEAN_ONE_H
+#define CLEAN_ONE_H
+
+int do_nothing_loop();
+
+#endif

Added: cfe/trunk/tools/scan-build-py/tests/functional/src/main.c
URL: http://llvm.org/viewvc/llvm-project/cfe/trunk/tools/scan-build-py/tests/functional/src/main.c?rev=257533&view=auto
==============================================================================
--- cfe/trunk/tools/scan-build-py/tests/functional/src/main.c (added)
+++ cfe/trunk/tools/scan-build-py/tests/functional/src/main.c Tue Jan 12 16:38:41 2016
@@ -0,0 +1,4 @@
+int main()
+{
+    return 0;
+}

Added: cfe/trunk/tools/scan-build-py/tests/unit/__init__.py
URL: http://llvm.org/viewvc/llvm-project/cfe/trunk/tools/scan-build-py/tests/unit/__init__.py?rev=257533&view=auto
==============================================================================
--- cfe/trunk/tools/scan-build-py/tests/unit/__init__.py (added)
+++ cfe/trunk/tools/scan-build-py/tests/unit/__init__.py Tue Jan 12 16:38:41 2016
@@ -0,0 +1,24 @@
+# -*- coding: utf-8 -*-
+#                     The LLVM Compiler Infrastructure
+#
+# This file is distributed under the University of Illinois Open Source
+# License. See LICENSE.TXT for details.
+
+from . import test_command
+from . import test_clang
+from . import test_runner
+from . import test_report
+from . import test_analyze
+from . import test_intercept
+from . import test_shell
+
+
+def load_tests(loader, suite, pattern):
+    suite.addTests(loader.loadTestsFromModule(test_command))
+    suite.addTests(loader.loadTestsFromModule(test_clang))
+    suite.addTests(loader.loadTestsFromModule(test_runner))
+    suite.addTests(loader.loadTestsFromModule(test_report))
+    suite.addTests(loader.loadTestsFromModule(test_analyze))
+    suite.addTests(loader.loadTestsFromModule(test_intercept))
+    suite.addTests(loader.loadTestsFromModule(test_shell))
+    return suite

Added: cfe/trunk/tools/scan-build-py/tests/unit/fixtures.py
URL: http://llvm.org/viewvc/llvm-project/cfe/trunk/tools/scan-build-py/tests/unit/fixtures.py?rev=257533&view=auto
==============================================================================
--- cfe/trunk/tools/scan-build-py/tests/unit/fixtures.py (added)
+++ cfe/trunk/tools/scan-build-py/tests/unit/fixtures.py Tue Jan 12 16:38:41 2016
@@ -0,0 +1,40 @@
+# -*- coding: utf-8 -*-
+#                     The LLVM Compiler Infrastructure
+#
+# This file is distributed under the University of Illinois Open Source
+# License. See LICENSE.TXT for details.
+
+import contextlib
+import tempfile
+import shutil
+import unittest
+
+
+class Spy(object):
+    def __init__(self):
+        self.arg = None
+        self.success = 0
+
+    def call(self, params):
+        self.arg = params
+        return self.success
+
+
+@contextlib.contextmanager
+def TempDir():
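+    # Yield a freshly created temporary directory and remove it, together with
+    # its contents, when the with-block exits.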
+    name = tempfile.mkdtemp(prefix='scan-build-test-')
+    try:
+        yield name
+    finally:
+        shutil.rmtree(name)
+
+
+class TestCase(unittest.TestCase):
+    def assertIn(self, element, collection):
+        found = False
+        for it in collection:
+            if element == it:
+                found = True
+
+        self.assertTrue(found, '{0} does not have {1}'.format(collection,
+                                                              element))

Added: cfe/trunk/tools/scan-build-py/tests/unit/test_analyze.py
URL: http://llvm.org/viewvc/llvm-project/cfe/trunk/tools/scan-build-py/tests/unit/test_analyze.py?rev=257533&view=auto
==============================================================================
--- cfe/trunk/tools/scan-build-py/tests/unit/test_analyze.py (added)
+++ cfe/trunk/tools/scan-build-py/tests/unit/test_analyze.py Tue Jan 12 16:38:41 2016
@@ -0,0 +1,8 @@
+# -*- coding: utf-8 -*-
+#                     The LLVM Compiler Infrastructure
+#
+# This file is distributed under the University of Illinois Open Source
+# License. See LICENSE.TXT for details.
+
+import libscanbuild.analyze as sut
+from . import fixtures

Added: cfe/trunk/tools/scan-build-py/tests/unit/test_clang.py
URL: http://llvm.org/viewvc/llvm-project/cfe/trunk/tools/scan-build-py/tests/unit/test_clang.py?rev=257533&view=auto
==============================================================================
--- cfe/trunk/tools/scan-build-py/tests/unit/test_clang.py (added)
+++ cfe/trunk/tools/scan-build-py/tests/unit/test_clang.py Tue Jan 12 16:38:41 2016
@@ -0,0 +1,41 @@
+# -*- coding: utf-8 -*-
+#                     The LLVM Compiler Infrastructure
+#
+# This file is distributed under the University of Illinois Open Source
+# License. See LICENSE.TXT for details.
+
+import libscanbuild.clang as sut
+from . import fixtures
+import os.path
+
+
+class GetClangArgumentsTest(fixtures.TestCase):
+    def test_get_clang_arguments(self):
+        with fixtures.TempDir() as tmpdir:
+            filename = os.path.join(tmpdir, 'test.c')
+            with open(filename, 'w') as handle:
+                handle.write('')
+
+            result = sut.get_arguments(
+                ['clang', '-c', filename, '-DNDEBUG', '-Dvar="this is it"'],
+                tmpdir)
+
+            self.assertIn('NDEBUG', result)
+            self.assertIn('var="this is it"', result)
+
+    def test_get_clang_arguments_fails(self):
+        self.assertRaises(
+            Exception, sut.get_arguments,
+            ['clang', '-###', '-fsyntax-only', '-x', 'c', 'notexist.c'], '.')
+
+
+class GetCheckersTest(fixtures.TestCase):
+    def test_get_checkers(self):
+        # this test only checks that the call does not crash
+        result = sut.get_checkers('clang', [])
+        self.assertTrue(len(result))
+
+    def test_get_active_checkers(self):
+        # this test only checks that the call does not crash
+        result = sut.get_active_checkers('clang', [])
+        self.assertTrue(len(result))

Added: cfe/trunk/tools/scan-build-py/tests/unit/test_command.py
URL: http://llvm.org/viewvc/llvm-project/cfe/trunk/tools/scan-build-py/tests/unit/test_command.py?rev=257533&view=auto
==============================================================================
--- cfe/trunk/tools/scan-build-py/tests/unit/test_command.py (added)
+++ cfe/trunk/tools/scan-build-py/tests/unit/test_command.py Tue Jan 12 16:38:41 2016
@@ -0,0 +1,193 @@
+# -*- coding: utf-8 -*-
+#                     The LLVM Compiler Infrastructure
+#
+# This file is distributed under the University of Illinois Open Source
+# License. See LICENSE.TXT for details.
+
+import libscanbuild.command as sut
+from . import fixtures
+import unittest
+
+
+class ParseTest(unittest.TestCase):
+
+    def test_action(self):
+        def test(expected, cmd):
+            opts = sut.classify_parameters(cmd)
+            self.assertEqual(expected, opts['action'])
+
+        Link = sut.Action.Link
+        test(Link, ['clang', 'source.c'])
+
+        Compile = sut.Action.Compile
+        test(Compile, ['clang', '-c', 'source.c'])
+        test(Compile, ['clang', '-c', 'source.c', '-MF', 'source.d'])
+
+        Preprocess = sut.Action.Ignored
+        test(Preprocess, ['clang', '-E', 'source.c'])
+        test(Preprocess, ['clang', '-c', '-E', 'source.c'])
+        test(Preprocess, ['clang', '-c', '-M', 'source.c'])
+        test(Preprocess, ['clang', '-c', '-MM', 'source.c'])
+
+    def test_optimizations(self):
+        def test(cmd):
+            opts = sut.classify_parameters(cmd)
+            return opts.get('compile_options', [])
+
+        self.assertEqual(['-O'],  test(['clang', '-c', 'source.c', '-O']))
+        self.assertEqual(['-O1'], test(['clang', '-c', 'source.c', '-O1']))
+        self.assertEqual(['-Os'], test(['clang', '-c', 'source.c', '-Os']))
+        self.assertEqual(['-O2'], test(['clang', '-c', 'source.c', '-O2']))
+        self.assertEqual(['-O3'], test(['clang', '-c', 'source.c', '-O3']))
+
+    def test_language(self):
+        def test(cmd):
+            opts = sut.classify_parameters(cmd)
+            return opts.get('language')
+
+        self.assertEqual(None, test(['clang', '-c', 'source.c']))
+        self.assertEqual('c', test(['clang', '-c', 'source.c', '-x', 'c']))
+        self.assertEqual('cpp', test(['clang', '-c', 'source.c', '-x', 'cpp']))
+
+    def test_output(self):
+        def test(cmd):
+            opts = sut.classify_parameters(cmd)
+            return opts.get('output')
+
+        self.assertEqual(None, test(['clang', '-c', 'source.c']))
+        self.assertEqual('source.o',
+                         test(['clang', '-c', '-o', 'source.o', 'source.c']))
+
+    def test_arch(self):
+        def test(cmd):
+            opts = sut.classify_parameters(cmd)
+            return opts.get('archs_seen', [])
+
+        eq = self.assertEqual
+
+        eq([], test(['clang', '-c', 'source.c']))
+        eq(['mips'],
+           test(['clang', '-c', 'source.c', '-arch', 'mips']))
+        eq(['mips', 'i386'],
+           test(['clang', '-c', 'source.c', '-arch', 'mips', '-arch', 'i386']))
+
+    def test_input_file(self):
+        def test(cmd):
+            opts = sut.classify_parameters(cmd)
+            return opts.get('files', [])
+
+        eq = self.assertEqual
+
+        eq(['src.c'], test(['clang', 'src.c']))
+        eq(['src.c'], test(['clang', '-c', 'src.c']))
+        eq(['s1.c', 's2.c'], test(['clang', '-c', 's1.c', 's2.c']))
+
+    def test_include(self):
+        def test(cmd):
+            opts = sut.classify_parameters(cmd)
+            return opts.get('compile_options', [])
+
+        eq = self.assertEqual
+
+        eq([], test(['clang', '-c', 'src.c']))
+        eq(['-include', '/usr/local/include'],
+           test(['clang', '-c', 'src.c', '-include', '/usr/local/include']))
+        eq(['-I.'],
+           test(['clang', '-c', 'src.c', '-I.']))
+        eq(['-I', '.'],
+           test(['clang', '-c', 'src.c', '-I', '.']))
+        eq(['-I/usr/local/include'],
+           test(['clang', '-c', 'src.c', '-I/usr/local/include']))
+        eq(['-I', '/usr/local/include'],
+           test(['clang', '-c', 'src.c', '-I', '/usr/local/include']))
+        eq(['-I/opt', '-I', '/opt/otp/include'],
+           test(['clang', '-c', 'src.c', '-I/opt', '-I', '/opt/otp/include']))
+        eq(['-isystem', '/path'],
+           test(['clang', '-c', 'src.c', '-isystem', '/path']))
+        eq(['-isystem=/path'],
+           test(['clang', '-c', 'src.c', '-isystem=/path']))
+
+    def test_define(self):
+        def test(cmd):
+            opts = sut.classify_parameters(cmd)
+            return opts.get('compile_options', [])
+
+        eq = self.assertEqual
+
+        eq([], test(['clang', '-c', 'src.c']))
+        eq(['-DNDEBUG'],
+           test(['clang', '-c', 'src.c', '-DNDEBUG']))
+        eq(['-UNDEBUG'],
+           test(['clang', '-c', 'src.c', '-UNDEBUG']))
+        eq(['-Dvar1=val1', '-Dvar2=val2'],
+           test(['clang', '-c', 'src.c', '-Dvar1=val1', '-Dvar2=val2']))
+        eq(['-Dvar="val ues"'],
+           test(['clang', '-c', 'src.c', '-Dvar="val ues"']))
+
+    def test_ignored_flags(self):
+        def test(flags):
+            cmd = ['clang', 'src.o']
+            opts = sut.classify_parameters(cmd + flags)
+            self.assertEqual(['src.o'], opts.get('compile_options'))
+
+        test([])
+        test(['-lrt', '-L/opt/company/lib'])
+        test(['-static'])
+        test(['-Wnoexcept', '-Wall'])
+        test(['-mtune=i386', '-mcpu=i386'])
+
+    def test_compile_only_flags(self):
+        def test(cmd):
+            opts = sut.classify_parameters(cmd)
+            return opts.get('compile_options', [])
+
+        eq = self.assertEqual
+
+        eq(['-std=C99'],
+           test(['clang', '-c', 'src.c', '-std=C99']))
+        eq(['-nostdinc'],
+           test(['clang', '-c', 'src.c', '-nostdinc']))
+        eq(['-isystem', '/image/debian'],
+           test(['clang', '-c', 'src.c', '-isystem', '/image/debian']))
+        eq(['-iprefix', '/usr/local'],
+           test(['clang', '-c', 'src.c', '-iprefix', '/usr/local']))
+        eq(['-iquote=me'],
+           test(['clang', '-c', 'src.c', '-iquote=me']))
+        eq(['-iquote', 'me'],
+           test(['clang', '-c', 'src.c', '-iquote', 'me']))
+
+    def test_compile_and_link_flags(self):
+        def test(cmd):
+            opts = sut.classify_parameters(cmd)
+            return opts.get('compile_options', [])
+
+        eq = self.assertEqual
+
+        eq(['-fsinged-char'],
+           test(['clang', '-c', 'src.c', '-fsinged-char']))
+        eq(['-fPIC'],
+           test(['clang', '-c', 'src.c', '-fPIC']))
+        eq(['-stdlib=libc++'],
+           test(['clang', '-c', 'src.c', '-stdlib=libc++']))
+        eq(['--sysroot', '/'],
+           test(['clang', '-c', 'src.c', '--sysroot', '/']))
+        eq(['-isysroot', '/'],
+           test(['clang', '-c', 'src.c', '-isysroot', '/']))
+        eq([],
+           test(['clang', '-c', 'src.c', '-fsyntax-only']))
+        eq([],
+           test(['clang', '-c', 'src.c', '-sectorder', 'a', 'b', 'c']))
+
+    def test_detect_cxx_from_compiler_name(self):
+        def test(cmd):
+            opts = sut.classify_parameters(cmd)
+            return opts.get('c++')
+
+        eq = self.assertEqual
+
+        eq(False, test(['cc', '-c', 'src.c']))
+        eq(True, test(['c++', '-c', 'src.c']))
+        eq(False, test(['clang', '-c', 'src.c']))
+        eq(True, test(['clang++', '-c', 'src.c']))
+        eq(False, test(['gcc', '-c', 'src.c']))
+        eq(True, test(['g++', '-c', 'src.c']))

Added: cfe/trunk/tools/scan-build-py/tests/unit/test_intercept.py
URL: http://llvm.org/viewvc/llvm-project/cfe/trunk/tools/scan-build-py/tests/unit/test_intercept.py?rev=257533&view=auto
==============================================================================
--- cfe/trunk/tools/scan-build-py/tests/unit/test_intercept.py (added)
+++ cfe/trunk/tools/scan-build-py/tests/unit/test_intercept.py Tue Jan 12 16:38:41 2016
@@ -0,0 +1,123 @@
+# -*- coding: utf-8 -*-
+#                     The LLVM Compiler Infrastructure
+#
+# This file is distributed under the University of Illinois Open Source
+# License. See LICENSE.TXT for details.
+
+import libscanbuild.intercept as sut
+from . import fixtures
+import os.path
+
+
+class InterceptUtilTest(fixtures.TestCase):
+
+    def test_is_compiler_call_filter(self):
+        def test(command):
+            return sut.is_compiler_call({'command': [command]})
+
+        self.assertTrue(test('clang'))
+        self.assertTrue(test('clang-3.6'))
+        self.assertTrue(test('clang++'))
+        self.assertTrue(test('clang++-3.5.1'))
+        self.assertTrue(test('cc'))
+        self.assertTrue(test('c++'))
+        self.assertTrue(test('gcc'))
+        self.assertTrue(test('g++'))
+        self.assertTrue(test('/usr/local/bin/gcc'))
+        self.assertTrue(test('/usr/local/bin/g++'))
+        self.assertTrue(test('/usr/local/bin/clang'))
+        self.assertTrue(test('armv7_neon-linux-gnueabi-g++'))
+
+        self.assertFalse(test(''))
+        self.assertFalse(test('ld'))
+        self.assertFalse(test('as'))
+        self.assertFalse(test('/usr/local/bin/compiler'))
+
+    def test_format_entry_filters_action(self):
+        def test(command):
+            return list(sut.format_entry(
+                {'command': command, 'directory': '/opt/src/project'}))
+
+        self.assertTrue(test(['cc', '-c', 'file.c', '-o', 'file.o']))
+        self.assertFalse(test(['cc', '-E', 'file.c']))
+        self.assertFalse(test(['cc', '-MM', 'file.c']))
+        self.assertFalse(test(['cc', 'this.o', 'that.o', '-o', 'a.out']))
+        self.assertFalse(test(['cc', '-print-prog-name']))
+
+    def test_format_entry_normalize_filename(self):
+        directory = os.path.join(os.sep, 'home', 'me', 'project')
+
+        def test(command):
+            result = list(sut.format_entry(
+                {'command': command, 'directory': directory}))
+            return result[0]['file']
+
+        self.assertEqual(test(['cc', '-c', 'file.c']),
+                         os.path.join(directory, 'file.c'))
+        self.assertEqual(test(['cc', '-c', './file.c']),
+                         os.path.join(directory, 'file.c'))
+        self.assertEqual(test(['cc', '-c', '../file.c']),
+                         os.path.join(os.path.dirname(directory), 'file.c'))
+        self.assertEqual(test(['cc', '-c', '/opt/file.c']),
+                         '/opt/file.c')
+
+    def test_sip(self):
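+        # Place fake 'csrutil' (OS X) and 'sestatus' (Linux) commands on the
+        # PATH to simulate System Integrity Protection / SELinux being enabled
+        # or disabled.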
+        def create_status_report(filename, message):
+            content = """#!/usr/bin/env sh
+                         echo 'sa-la-la-la'
+                         echo 'la-la-la'
+                         echo '{0}'
+                         echo 'sa-la-la-la'
+                         echo 'la-la-la'
+                      """.format(message)
+            lines = [line.strip() for line in content.split('\n')]
+            with open(filename, 'w') as handle:
+                handle.write('\n'.join(lines))
+                handle.close()
+            os.chmod(filename, 0x1ff)
+
+        def create_csrutil(dest_dir, status):
+            filename = os.path.join(dest_dir, 'csrutil')
+            message = 'System Integrity Protection status: {0}'.format(status)
+            return create_status_report(filename, message)
+
+        def create_sestatus(dest_dir, status):
+            filename = os.path.join(dest_dir, 'sestatus')
+            message = 'SELinux status:\t{0}'.format(status)
+            return create_status_report(filename, message)
+
+        ENABLED = 'enabled'
+        DISABLED = 'disabled'
+
+        OSX = 'darwin'
+        LINUX = 'linux'
+
+        with fixtures.TempDir() as tmpdir:
+            try:
+                saved = os.environ['PATH']
+                os.environ['PATH'] = tmpdir + ':' + saved
+
+                create_csrutil(tmpdir, ENABLED)
+                self.assertTrue(sut.is_preload_disabled(OSX))
+
+                create_csrutil(tmpdir, DISABLED)
+                self.assertFalse(sut.is_preload_disabled(OSX))
+
+                create_sestatus(tmpdir, ENABLED)
+                self.assertTrue(sut.is_preload_disabled(LINUX))
+
+                create_sestatus(tmpdir, DISABLED)
+                self.assertFalse(sut.is_preload_disabled(LINUX))
+            finally:
+                os.environ['PATH'] = saved
+
+        try:
+            saved = os.environ['PATH']
+            os.environ['PATH'] = ''
+            # should be false when the status tool is not on the PATH
+            self.assertFalse(sut.is_preload_disabled(OSX))
+            self.assertFalse(sut.is_preload_disabled(LINUX))
+
+            self.assertFalse(sut.is_preload_disabled('unix'))
+        finally:
+            os.environ['PATH'] = saved

Added: cfe/trunk/tools/scan-build-py/tests/unit/test_report.py
URL: http://llvm.org/viewvc/llvm-project/cfe/trunk/tools/scan-build-py/tests/unit/test_report.py?rev=257533&view=auto
==============================================================================
--- cfe/trunk/tools/scan-build-py/tests/unit/test_report.py (added)
+++ cfe/trunk/tools/scan-build-py/tests/unit/test_report.py Tue Jan 12 16:38:41 2016
@@ -0,0 +1,146 @@
+# -*- coding: utf-8 -*-
+#                     The LLVM Compiler Infrastructure
+#
+# This file is distributed under the University of Illinois Open Source
+# License. See LICENSE.TXT for details.
+
+import libscanbuild.report as sut
+from . import fixtures
+import unittest
+import os
+import os.path
+
+
+def run_bug_parse(content):
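+    # Write the given lines into a temporary HTML file and return the first
+    # bug record parsed from it.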
+    with fixtures.TempDir() as tmpdir:
+        file_name = os.path.join(tmpdir, 'test.html')
+        with open(file_name, 'w') as handle:
+            handle.writelines(content)
+        for bug in sut.parse_bug_html(file_name):
+            return bug
+
+
+def run_crash_parse(content, preproc):
+    with fixtures.TempDir() as tmpdir:
+        file_name = os.path.join(tmpdir, preproc + '.info.txt')
+        with open(file_name, 'w') as handle:
+            handle.writelines(content)
+        return sut.parse_crash(file_name)
+
+
+class ParseFileTest(unittest.TestCase):
+
+    def test_parse_bug(self):
+        content = [
+            "some header\n",
+            "<!-- BUGDESC Division by zero -->\n",
+            "<!-- BUGTYPE Division by zero -->\n",
+            "<!-- BUGCATEGORY Logic error -->\n",
+            "<!-- BUGFILE xx -->\n",
+            "<!-- BUGLINE 5 -->\n",
+            "<!-- BUGCOLUMN 22 -->\n",
+            "<!-- BUGPATHLENGTH 4 -->\n",
+            "<!-- BUGMETAEND -->\n",
+            "<!-- REPORTHEADER -->\n",
+            "some tails\n"]
+        result = run_bug_parse(content)
+        self.assertEqual(result['bug_category'], 'Logic error')
+        self.assertEqual(result['bug_path_length'], 4)
+        self.assertEqual(result['bug_line'], 5)
+        self.assertEqual(result['bug_description'], 'Division by zero')
+        self.assertEqual(result['bug_type'], 'Division by zero')
+        self.assertEqual(result['bug_file'], 'xx')
+
+    def test_parse_bug_empty(self):
+        content = []
+        result = run_bug_parse(content)
+        self.assertEqual(result['bug_category'], 'Other')
+        self.assertEqual(result['bug_path_length'], 1)
+        self.assertEqual(result['bug_line'], 0)
+
+    def test_parse_crash(self):
+        content = [
+            "/some/path/file.c\n",
+            "Some very serious Error\n",
+            "bla\n",
+            "bla-bla\n"]
+        result = run_crash_parse(content, 'file.i')
+        self.assertEqual(result['source'], content[0].rstrip())
+        self.assertEqual(result['problem'], content[1].rstrip())
+        self.assertEqual(os.path.basename(result['file']),
+                         'file.i')
+        self.assertEqual(os.path.basename(result['info']),
+                         'file.i.info.txt')
+        self.assertEqual(os.path.basename(result['stderr']),
+                         'file.i.stderr.txt')
+
+    def test_parse_real_crash(self):
+        import libscanbuild.runner as sut2
+        import re
+        with fixtures.TempDir() as tmpdir:
+            filename = os.path.join(tmpdir, 'test.c')
+            with open(filename, 'w') as handle:
+                handle.write('int main() { return 0')
+            # produce failure report
+            opts = {'directory': os.getcwd(),
+                    'clang': 'clang',
+                    'file': filename,
+                    'report': ['-fsyntax-only', '-E', filename],
+                    'language': 'c',
+                    'output_dir': tmpdir,
+                    'error_type': 'other_error',
+                    'error_output': 'some output',
+                    'exit_code': 13}
+            sut2.report_failure(opts)
+            # find the info file
+            pp_file = None
+            for root, _, files in os.walk(tmpdir):
+                keys = [os.path.join(root, name) for name in files]
+                for key in keys:
+                    if re.match(r'^(.*/)+clang(.*)\.i$', key):
+                        pp_file = key
+            self.assertIsNot(pp_file, None)
+            # read the failure report back
+            result = sut.parse_crash(pp_file + '.info.txt')
+            self.assertEqual(result['source'], filename)
+            self.assertEqual(result['problem'], 'Other Error')
+            self.assertEqual(result['file'], pp_file)
+            self.assertEqual(result['info'], pp_file + '.info.txt')
+            self.assertEqual(result['stderr'], pp_file + '.stderr.txt')
+
+
+class ReportMethodTest(unittest.TestCase):
+
+    def test_chop(self):
+        self.assertEqual('file', sut.chop('/prefix', '/prefix/file'))
+        self.assertEqual('file', sut.chop('/prefix/', '/prefix/file'))
+        self.assertEqual('lib/file', sut.chop('/prefix/', '/prefix/lib/file'))
+        self.assertEqual('/prefix/file', sut.chop('', '/prefix/file'))
+
+    def test_chop_when_cwd(self):
+        self.assertEqual('../src/file', sut.chop('/cwd', '/src/file'))
+        self.assertEqual('../src/file', sut.chop('/prefix/cwd',
+                                                 '/prefix/src/file'))
+
+
+class GetPrefixFromCompilationDatabaseTest(fixtures.TestCase):
+
+    def test_with_different_filenames(self):
+        self.assertEqual(
+            sut.commonprefix(['/tmp/a.c', '/tmp/b.c']), '/tmp')
+
+    def test_with_different_dirnames(self):
+        self.assertEqual(
+            sut.commonprefix(['/tmp/abs/a.c', '/tmp/ack/b.c']), '/tmp')
+
+    def test_no_common_prefix(self):
+        self.assertEqual(
+            sut.commonprefix(['/tmp/abs/a.c', '/usr/ack/b.c']), '/')
+
+    def test_with_single_file(self):
+        self.assertEqual(
+            sut.commonprefix(['/tmp/a.c']), '/tmp')
+
+    def test_empty(self):
+        self.assertEqual(
+            sut.commonprefix([]), '')

Added: cfe/trunk/tools/scan-build-py/tests/unit/test_runner.py
URL: http://llvm.org/viewvc/llvm-project/cfe/trunk/tools/scan-build-py/tests/unit/test_runner.py?rev=257533&view=auto
==============================================================================
--- cfe/trunk/tools/scan-build-py/tests/unit/test_runner.py (added)
+++ cfe/trunk/tools/scan-build-py/tests/unit/test_runner.py Tue Jan 12 16:38:41 2016
@@ -0,0 +1,213 @@
+# -*- coding: utf-8 -*-
+#                     The LLVM Compiler Infrastructure
+#
+# This file is distributed under the University of Illinois Open Source
+# License. See LICENSE.TXT for details.
+
+import libscanbuild.runner as sut
+from . import fixtures
+import unittest
+import re
+import os
+import os.path
+
+
+def run_analyzer(content, opts):
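+    # Write the given source into a temporary C++ file, run the analyzer step
+    # on it, and capture (via a Spy) whatever would be forwarded to the next
+    # step in the chain.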
+    with fixtures.TempDir() as tmpdir:
+        filename = os.path.join(tmpdir, 'test.cpp')
+        with open(filename, 'w') as handle:
+            handle.write(content)
+
+        opts.update({
+            'directory': os.getcwd(),
+            'clang': 'clang',
+            'file': filename,
+            'language': 'c++',
+            'analyze': ['--analyze', '-x', 'c++', filename],
+            'output': ['-o', tmpdir]})
+        spy = fixtures.Spy()
+        result = sut.run_analyzer(opts, spy.call)
+        return (result, spy.arg)
+
+
+class RunAnalyzerTest(unittest.TestCase):
+
+    def test_run_analyzer(self):
+        content = "int div(int n, int d) { return n / d; }"
+        (result, fwds) = run_analyzer(content, dict())
+        self.assertEqual(None, fwds)
+        self.assertEqual(0, result['exit_code'])
+
+    def test_run_analyzer_crash(self):
+        content = "int div(int n, int d) { return n / d }"
+        (result, fwds) = run_analyzer(content, dict())
+        self.assertEqual(None, fwds)
+        self.assertEqual(1, result['exit_code'])
+
+    def test_run_analyzer_crash_and_forwarded(self):
+        content = "int div(int n, int d) { return n / d }"
+        (_, fwds) = run_analyzer(content, {'output_failures': True})
+        self.assertEqual('crash', fwds['error_type'])
+        self.assertEqual(1, fwds['exit_code'])
+        self.assertTrue(len(fwds['error_output']) > 0)
+
+
+class SetAnalyzerOutputTest(fixtures.TestCase):
+
+    def test_not_defined(self):
+        with fixtures.TempDir() as tmpdir:
+            opts = {'output_dir': tmpdir}
+            spy = fixtures.Spy()
+            sut.set_analyzer_output(opts, spy.call)
+            self.assertTrue(os.path.exists(spy.arg['output'][1]))
+            self.assertTrue(os.path.isdir(spy.arg['output'][1]))
+
+    def test_html(self):
+        with fixtures.TempDir() as tmpdir:
+            opts = {'output_dir': tmpdir, 'output_format': 'html'}
+            spy = fixtures.Spy()
+            sut.set_analyzer_output(opts, spy.call)
+            self.assertTrue(os.path.exists(spy.arg['output'][1]))
+            self.assertTrue(os.path.isdir(spy.arg['output'][1]))
+
+    def test_plist_html(self):
+        with fixtures.TempDir() as tmpdir:
+            opts = {'output_dir': tmpdir, 'output_format': 'plist-html'}
+            spy = fixtures.Spy()
+            sut.set_analyzer_output(opts, spy.call)
+            self.assertTrue(os.path.exists(spy.arg['output'][1]))
+            self.assertTrue(os.path.isfile(spy.arg['output'][1]))
+
+    def test_plist(self):
+        with fixtures.TempDir() as tmpdir:
+            opts = {'output_dir': tmpdir, 'output_format': 'plist'}
+            spy = fixtures.Spy()
+            sut.set_analyzer_output(opts, spy.call)
+            self.assertTrue(os.path.exists(spy.arg['output'][1]))
+            self.assertTrue(os.path.isfile(spy.arg['output'][1]))
+
+
+class ReportFailureTest(fixtures.TestCase):
+
+    def assertUnderFailures(self, path):
+        self.assertEqual('failures', os.path.basename(os.path.dirname(path)))
+
+    def test_report_failure_create_files(self):
+        with fixtures.TempDir() as tmpdir:
+            # create input file
+            filename = os.path.join(tmpdir, 'test.c')
+            with open(filename, 'w') as handle:
+                handle.write('int main() { return 0')
+            uname_msg = ' '.join(os.uname()) + os.linesep
+            error_msg = 'this is my error output'
+            # execute test
+            opts = {'directory': os.getcwd(),
+                    'clang': 'clang',
+                    'file': filename,
+                    'report': ['-fsyntax-only', '-E', filename],
+                    'language': 'c',
+                    'output_dir': tmpdir,
+                    'error_type': 'other_error',
+                    'error_output': error_msg,
+                    'exit_code': 13}
+            sut.report_failure(opts)
+            # verify the result
+            result = dict()
+            pp_file = None
+            for root, _, files in os.walk(tmpdir):
+                keys = [os.path.join(root, name) for name in files]
+                for key in keys:
+                    with open(key, 'r') as handle:
+                        result[key] = handle.readlines()
+                    if re.match(r'^(.*/)+clang(.*)\.i$', key):
+                        pp_file = key
+
+            # preprocessor file generated
+            self.assertUnderFailures(pp_file)
+            # info file generated and content dumped
+            info_file = pp_file + '.info.txt'
+            self.assertIn(info_file, result)
+            self.assertEqual('Other Error\n', result[info_file][1])
+            self.assertEqual(uname_msg, result[info_file][3])
+            # error file generated and content dumped
+            error_file = pp_file + '.stderr.txt'
+            self.assertIn(error_file, result)
+            self.assertEqual([error_msg], result[error_file])
+
+
+class AnalyzerTest(unittest.TestCase):
+
+    def test_set_language(self):
+        def test(expected, input):
+            spy = fixtures.Spy()
+            self.assertEqual(spy.success, sut.language_check(input, spy.call))
+            self.assertEqual(expected, spy.arg['language'])
+
+        l = 'language'
+        f = 'file'
+        i = 'c++'
+        test('c',   {f: 'file.c', l: 'c', i: False})
+        test('c++', {f: 'file.c', l: 'c++', i: False})
+        test('c++', {f: 'file.c', i: True})
+        test('c',   {f: 'file.c', i: False})
+        test('c++', {f: 'file.cxx', i: False})
+        test('c-cpp-output',   {f: 'file.i', i: False})
+        test('c++-cpp-output', {f: 'file.i', i: True})
+        test('c-cpp-output',   {f: 'f.i', l: 'c-cpp-output', i: True})
+
+    def test_arch_loop(self):
+        def test(input):
+            spy = fixtures.Spy()
+            sut.arch_check(input, spy.call)
+            return spy.arg
+
+        input = {'key': 'value'}
+        self.assertEqual(input, test(input))
+
+        input = {'archs_seen': ['i386']}
+        self.assertEqual({'arch': 'i386'}, test(input))
+
+        input = {'archs_seen': ['ppc']}
+        self.assertEqual(None, test(input))
+
+        input = {'archs_seen': ['i386', 'ppc']}
+        self.assertEqual({'arch': 'i386'}, test(input))
+
+        input = {'archs_seen': ['i386', 'sparc']}
+        result = test(input)
+        self.assertTrue(result == {'arch': 'i386'} or
+                        result == {'arch': 'sparc'})
+
+
+@sut.require([])
+def method_without_expecteds(opts):
+    return 0
+
+
+@sut.require(['this', 'that'])
+def method_with_expecteds(opts):
+    return 0
+
+
+@sut.require([])
+def method_exception_from_inside(opts):
+    raise Exception('here is one')
+
+
+class RequireDecoratorTest(unittest.TestCase):
+
+    def test_method_without_expecteds(self):
+        self.assertEqual(method_without_expecteds(dict()), 0)
+        self.assertEqual(method_without_expecteds({}), 0)
+        self.assertEqual(method_without_expecteds({'this': 2}), 0)
+        self.assertEqual(method_without_expecteds({'that': 3}), 0)
+
+    def test_method_with_expecteds(self):
+        self.assertRaises(KeyError, method_with_expecteds, dict())
+        self.assertRaises(KeyError, method_with_expecteds, {})
+        self.assertRaises(KeyError, method_with_expecteds, {'this': 2})
+        self.assertRaises(KeyError, method_with_expecteds, {'that': 3})
+        self.assertEqual(method_with_expecteds({'this': 0, 'that': 3}), 0)
+
+    def test_method_exception_not_caught(self):
+        self.assertRaises(Exception, method_exception_from_inside, dict())
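
For readers unfamiliar with the decorator exercised above, here is a minimal
sketch of a "require" that would satisfy these tests (illustrative only; the
committed libscanbuild implementation may differ in details):

    import functools

    def require(required):
        """Sketch: verify that every key in 'required' is present in the
        options dictionary passed as the first positional argument, and
        raise KeyError otherwise before calling the wrapped function."""
        def decorator(function):
            @functools.wraps(function)
            def wrapper(opts, *args, **kwargs):
                for key in required:
                    if key not in opts:
                        raise KeyError('{0} not passed to {1}'.format(
                            key, function.__name__))
                return function(opts, *args, **kwargs)
            return wrapper
        return decorator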

Added: cfe/trunk/tools/scan-build-py/tests/unit/test_shell.py
URL: http://llvm.org/viewvc/llvm-project/cfe/trunk/tools/scan-build-py/tests/unit/test_shell.py?rev=257533&view=auto
==============================================================================
--- cfe/trunk/tools/scan-build-py/tests/unit/test_shell.py (added)
+++ cfe/trunk/tools/scan-build-py/tests/unit/test_shell.py Tue Jan 12 16:38:41 2016
@@ -0,0 +1,42 @@
+# -*- coding: utf-8 -*-
+#                     The LLVM Compiler Infrastructure
+#
+# This file is distributed under the University of Illinois Open Source
+# License. See LICENSE.TXT for details.
+
+import libscanbuild.shell as sut
+import unittest
+
+
+class ShellTest(unittest.TestCase):
+
+    def test_encode_decode_are_same(self):
+        def test(value):
+            self.assertEqual(sut.encode(sut.decode(value)), value)
+
+        test("")
+        test("clang")
+        test("clang this and that")
+
+    def test_decode_encode_are_same(self):
+        def test(value):
+            self.assertEqual(sut.decode(sut.encode(value)), value)
+
+        test([])
+        test(['clang'])
+        test(['clang', 'this', 'and', 'that'])
+        test(['clang', 'this and', 'that'])
+        test(['clang', "it's me", 'again'])
+        test(['clang', 'some "words" are', 'quoted'])
+
+    def test_encode(self):
+        self.assertEqual(sut.encode(['clang', "it's me", 'again']),
+                         'clang "it\'s me" again')
+        self.assertEqual(sut.encode(['clang', "it(s me", 'again)']),
+                         'clang "it(s me" "again)"')
+        self.assertEqual(sut.encode(['clang', 'redirect > it']),
+                         'clang "redirect > it"')
+        self.assertEqual(sut.encode(['clang', '-DKEY="VALUE"']),
+                         'clang -DKEY=\\"VALUE\\"')
+        self.assertEqual(sut.encode(['clang', '-DKEY="value with spaces"']),
+                         'clang -DKEY=\\"value with spaces\\"')



