[zorg] r249757 - llvmlab bisect tool

Chris Matthews via llvm-commits llvm-commits at lists.llvm.org
Mon Sep 19 17:12:35 PDT 2016


The tool has an open door policy - anyone with a bot can upload their builds if they would like.  So you just need to find a volunteer!

On September 19, 2016 at 5:10:11 PM, Vassil Vassilev (v.g.vassilev at gmail.com) wrote:

On 16/09/16 20:25, Chris Matthews wrote:
Yes those are good changes! Please do add them.
Thanks, done in r281890. The tool is great! Are there plans to upload unix builds, too? It'd be very helpful to us.


On September 16, 2016 at 3:10:27 AM, Vassil Vassilev (v.g.vassilev at gmail.com) wrote:

On 08/10/15 23:52, Chris Matthews via llvm-commits wrote:
> Author: cmatthews
> Date: Thu Oct 8 16:52:50 2015
> New Revision: 249757
>
> URL: http://llvm.org/viewvc/llvm-project?rev=249757&view=rev
> Log:
> llvmlab bisect tool
>
> Added:
> zorg/trunk/llvmbisect/
> zorg/trunk/llvmbisect/bin/
> zorg/trunk/llvmbisect/bin/llvmlab (with props)
> zorg/trunk/llvmbisect/docs/
> zorg/trunk/llvmbisect/docs/Makefile
> zorg/trunk/llvmbisect/docs/builders.rst
> zorg/trunk/llvmbisect/docs/conf.py
> zorg/trunk/llvmbisect/docs/index.rst
> zorg/trunk/llvmbisect/docs/llvmlab_bisect.rst
> zorg/trunk/llvmbisect/llvmlab/
> zorg/trunk/llvmbisect/llvmlab/__init__.py
> zorg/trunk/llvmbisect/llvmlab/algorithm.py
> zorg/trunk/llvmbisect/llvmlab/ci.py
> zorg/trunk/llvmbisect/llvmlab/clang_link (with props)
> zorg/trunk/llvmbisect/llvmlab/gcs.py
> zorg/trunk/llvmbisect/llvmlab/llvmlab.py
> zorg/trunk/llvmbisect/llvmlab/scripts.py
> zorg/trunk/llvmbisect/llvmlab/shell.py
> zorg/trunk/llvmbisect/llvmlab/test_llvmlab.py
> zorg/trunk/llvmbisect/llvmlab/util.py
> zorg/trunk/llvmbisect/setup.py
>
> Added: zorg/trunk/llvmbisect/bin/llvmlab
> URL: http://llvm.org/viewvc/llvm-project/zorg/trunk/llvmbisect/bin/llvmlab?rev=249757&view=auto
> ==============================================================================
> --- zorg/trunk/llvmbisect/bin/llvmlab (added)
> +++ zorg/trunk/llvmbisect/bin/llvmlab Thu Oct 8 16:52:50 2015
> @@ -0,0 +1,27 @@
> +#!/usr/bin/env python
> +
> +import sys
> +import errno
> +
> +from llvmlab.ci import action_fetch, action_ls, action_bisect, action_exec
> +from llvmlab.ci import action_test
> +from llvmlab import scripts
> +
> +
> +tool = scripts.Tool(locals())
> +main = tool.main
> +
> +if __name__ == '__main__':
> + rc = None
> + # Execute the main function in a try block to catch EPIPE exceptions.
> + try:
> + rc = main(sys.argv)
> +
> + # Force a flush on the output pipe to ensure EPIPE shows up here (prior
> + # to sys.stdout shutdown).
> + sys.stdout.flush()
> + sys.stderr.flush()
> + except IOError as e:
> + if e.errno != errno.EPIPE:
> + raise
> + sys.exit(0)
>
> Propchange: zorg/trunk/llvmbisect/bin/llvmlab
> ------------------------------------------------------------------------------
> svn:executable = *
>
> Added: zorg/trunk/llvmbisect/docs/Makefile
> URL: http://llvm.org/viewvc/llvm-project/zorg/trunk/llvmbisect/docs/Makefile?rev=249757&view=auto
> ==============================================================================
> --- zorg/trunk/llvmbisect/docs/Makefile (added)
> +++ zorg/trunk/llvmbisect/docs/Makefile Thu Oct 8 16:52:50 2015
> @@ -0,0 +1,177 @@
> +# Makefile for Sphinx documentation
> +#
> +
> +# You can set these variables from the command line.
> +SPHINXOPTS =
> +SPHINXBUILD = sphinx-build
> +PAPER =
> +BUILDDIR = _build
> +
> +# User-friendly check for sphinx-build
> +ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1)
> +$(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from http://sphinx-doc.org/)
> +endif
> +
> +# Internal variables.
> +PAPEROPT_a4 = -D latex_paper_size=a4
> +PAPEROPT_letter = -D latex_paper_size=letter
> +ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
> +# the i18n builder cannot share the environment and doctrees with the others
> +I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
> +
> +.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext
> +
> +help:
> + @echo "Please use \`make <target>' where <target> is one of"
> + @echo " html to make standalone HTML files"
> + @echo " dirhtml to make HTML files named index.html in directories"
> + @echo " singlehtml to make a single large HTML file"
> + @echo " pickle to make pickle files"
> + @echo " json to make JSON files"
> + @echo " htmlhelp to make HTML files and a HTML help project"
> + @echo " qthelp to make HTML files and a qthelp project"
> + @echo " devhelp to make HTML files and a Devhelp project"
> + @echo " epub to make an epub"
> + @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
> + @echo " latexpdf to make LaTeX files and run them through pdflatex"
> + @echo " latexpdfja to make LaTeX files and run them through platex/dvipdfmx"
> + @echo " text to make text files"
> + @echo " man to make manual pages"
> + @echo " texinfo to make Texinfo files"
> + @echo " info to make Texinfo files and run them through makeinfo"
> + @echo " gettext to make PO message catalogs"
> + @echo " changes to make an overview of all changed/added/deprecated items"
> + @echo " xml to make Docutils-native XML files"
> + @echo " pseudoxml to make pseudoxml-XML files for display purposes"
> + @echo " linkcheck to check all external links for integrity"
> + @echo " doctest to run all doctests embedded in the documentation (if enabled)"
> +
> +clean:
> + rm -rf $(BUILDDIR)/*
> +
> +html:
> + $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
> + @echo
> + @echo "Build finished. The HTML pages are in $(BUILDDIR)/html."
> +
> +dirhtml:
> + $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
> + @echo
> + @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."
> +
> +singlehtml:
> + $(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
> + @echo
> + @echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."
> +
> +pickle:
> + $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
> + @echo
> + @echo "Build finished; now you can process the pickle files."
> +
> +json:
> + $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
> + @echo
> + @echo "Build finished; now you can process the JSON files."
> +
> +htmlhelp:
> + $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
> + @echo
> + @echo "Build finished; now you can run HTML Help Workshop with the" \
> + ".hhp project file in $(BUILDDIR)/htmlhelp."
> +
> +qthelp:
> + $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
> + @echo
> + @echo "Build finished; now you can run "qcollectiongenerator" with the" \
> + ".qhcp project file in $(BUILDDIR)/qthelp, like this:"
> + @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/LLVMLabTools.qhcp"
> + @echo "To view the help file:"
> + @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/LLVMLabTools.qhc"
> +
> +devhelp:
> + $(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
> + @echo
> + @echo "Build finished."
> + @echo "To view the help file:"
> + @echo "# mkdir -p $$HOME/.local/share/devhelp/LLVMLabTools"
> + @echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/LLVMLabTools"
> + @echo "# devhelp"
> +
> +epub:
> + $(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
> + @echo
> + @echo "Build finished. The epub file is in $(BUILDDIR)/epub."
> +
> +latex:
> + $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
> + @echo
> + @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
> + @echo "Run \`make' in that directory to run these through (pdf)latex" \
> + "(use \`make latexpdf' here to do that automatically)."
> +
> +latexpdf:
> + $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
> + @echo "Running LaTeX files through pdflatex..."
> + $(MAKE) -C $(BUILDDIR)/latex all-pdf
> + @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
> +
> +latexpdfja:
> + $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
> + @echo "Running LaTeX files through platex and dvipdfmx..."
> + $(MAKE) -C $(BUILDDIR)/latex all-pdf-ja
> + @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
> +
> +text:
> + $(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
> + @echo
> + @echo "Build finished. The text files are in $(BUILDDIR)/text."
> +
> +man:
> + $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
> + @echo
> + @echo "Build finished. The manual pages are in $(BUILDDIR)/man."
> +
> +texinfo:
> + $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
> + @echo
> + @echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
> + @echo "Run \`make' in that directory to run these through makeinfo" \
> + "(use \`make info' here to do that automatically)."
> +
> +info:
> + $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
> + @echo "Running Texinfo files through makeinfo..."
> + make -C $(BUILDDIR)/texinfo info
> + @echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."
> +
> +gettext:
> + $(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
> + @echo
> + @echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."
> +
> +changes:
> + $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
> + @echo
> + @echo "The overview file is in $(BUILDDIR)/changes."
> +
> +linkcheck:
> + $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
> + @echo
> + @echo "Link check complete; look for any errors in the above output " \
> + "or in $(BUILDDIR)/linkcheck/output.txt."
> +
> +doctest:
> + $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
> + @echo "Testing of doctests in the sources finished, look at the " \
> + "results in $(BUILDDIR)/doctest/output.txt."
> +
> +xml:
> + $(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml
> + @echo
> + @echo "Build finished. The XML files are in $(BUILDDIR)/xml."
> +
> +pseudoxml:
> + $(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml
> + @echo
> + @echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml."
>
> Added: zorg/trunk/llvmbisect/docs/builders.rst
> URL: http://llvm.org/viewvc/llvm-project/zorg/trunk/llvmbisect/docs/builders.rst?rev=249757&view=auto
> ==============================================================================
> --- zorg/trunk/llvmbisect/docs/builders.rst (added)
> +++ zorg/trunk/llvmbisect/docs/builders.rst Thu Oct 8 16:52:50 2015
> @@ -0,0 +1,19 @@
> +.. _builders:
> +
> +Adding your builder to llvmlab bisect
> +=====================================
> +
> +llvmlab bisect compilers are stored on Google Cloud Storage. There is a common
> +bucket called llvm-build-artifacts; within it there is a directory for each
> +build. Builds can be uploaded in two ways: with authorized credentials using
> +the gsutil tool, or, if the builder is in lab.llvm.org, from the labmaster2
> +staging server.
> +
> +On the labmaster2 staging server, any builds placed in
> +``/Library/WebServer/Documents/artifacts/<buildername>/`` will be uploaded by
> +a cron job. Your builder's public key will need to be added to that machine.
> +Rsync or scp can be used to upload the files.
> +
> +llvmbisect uses some regexes in llvmlab.py to parse the compiler information.
> +The tar file you upload will need to match those regexes. For example:
> +``clang-r249497-t13154-b13154.tar.gz``
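> +
> +Putting these together, an upload from a builder in lab.llvm.org might look
> +something like the following (a sketch only; the user name and host are
> +placeholders, not documented values)::
> +
> + $ scp clang-r249497-t13154-b13154.tar.gz \
> +     <user>@labmaster2:/Library/WebServer/Documents/artifacts/<buildername>/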
>
> Added: zorg/trunk/llvmbisect/docs/conf.py
> URL: http://llvm.org/viewvc/llvm-project/zorg/trunk/llvmbisect/docs/conf.py?rev=249757&view=auto
> ==============================================================================
> --- zorg/trunk/llvmbisect/docs/conf.py (added)
> +++ zorg/trunk/llvmbisect/docs/conf.py Thu Oct 8 16:52:50 2015
> @@ -0,0 +1,258 @@
> +# -*- coding: utf-8 -*-
> +#
> +# LLVM Lab Tools documentation build configuration file, created by
> +# sphinx-quickstart on Tue Oct 6 18:29:53 2015.
> +#
> +# This file is execfile()d with the current directory set to its
> +# containing dir.
> +#
> +# Note that not all possible configuration values are present in this
> +# autogenerated file.
> +#
> +# All configuration values have a default; values that are commented out
> +# serve to show the default.
> +
> +import sys
> +import os
> +
> +# If extensions (or modules to document with autodoc) are in another directory,
> +# add these directories to sys.path here. If the directory is relative to the
> +# documentation root, use os.path.abspath to make it absolute, like shown here.
> +#sys.path.insert(0, os.path.abspath('.'))
> +
> +# -- General configuration ------------------------------------------------
> +
> +# If your documentation needs a minimal Sphinx version, state it here.
> +#needs_sphinx = '1.0'
> +
> +# Add any Sphinx extension module names here, as strings. They can be
> +# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
> +# ones.
> +extensions = []
> +
> +# Add any paths that contain templates here, relative to this directory.
> +templates_path = ['_templates']
> +
> +# The suffix of source filenames.
> +source_suffix = '.rst'
> +
> +# The encoding of source files.
> +#source_encoding = 'utf-8-sig'
> +
> +# The master toctree document.
> +master_doc = 'index'
> +
> +# General information about the project.
> +project = u'LLVM Lab Tools'
> +copyright = u'2015, Daniel Dunbar, Chris Matthews'
> +
> +# The version info for the project you're documenting, acts as replacement for
> +# |version| and |release|, also used in various other places throughout the
> +# built documents.
> +#
> +# The short X.Y version.
> +version = '1.0'
> +# The full version, including alpha/beta/rc tags.
> +release = '1.0'
> +
> +# The language for content autogenerated by Sphinx. Refer to documentation
> +# for a list of supported languages.
> +#language = None
> +
> +# There are two options for replacing |today|: either, you set today to some
> +# non-false value, then it is used:
> +#today = ''
> +# Else, today_fmt is used as the format for a strftime call.
> +#today_fmt = '%B %d, %Y'
> +
> +# List of patterns, relative to source directory, that match files and
> +# directories to ignore when looking for source files.
> +exclude_patterns = ['_build']
> +
> +# The reST default role (used for this markup: `text`) to use for all
> +# documents.
> +#default_role = None
> +
> +# If true, '()' will be appended to :func: etc. cross-reference text.
> +#add_function_parentheses = True
> +
> +# If true, the current module name will be prepended to all description
> +# unit titles (such as .. function::).
> +#add_module_names = True
> +
> +# If true, sectionauthor and moduleauthor directives will be shown in the
> +# output. They are ignored by default.
> +#show_authors = False
> +
> +# The name of the Pygments (syntax highlighting) style to use.
> +pygments_style = 'sphinx'
> +
> +# A list of ignored prefixes for module index sorting.
> +#modindex_common_prefix = []
> +
> +# If true, keep warnings as "system message" paragraphs in the built documents.
> +#keep_warnings = False
> +
> +
> +# -- Options for HTML output ----------------------------------------------
> +
> +# The theme to use for HTML and HTML Help pages. See the documentation for
> +# a list of builtin themes.
> +html_theme = 'default'
> +
> +# Theme options are theme-specific and customize the look and feel of a theme
> +# further. For a list of options available for each theme, see the
> +# documentation.
> +#html_theme_options = {}
> +
> +# Add any paths that contain custom themes here, relative to this directory.
> +#html_theme_path = []
> +
> +# The name for this set of Sphinx documents. If None, it defaults to
> +# "<project> v<release> documentation".
> +#html_title = None
> +
> +# A shorter title for the navigation bar. Default is the same as html_title.
> +#html_short_title = None
> +
> +# The name of an image file (relative to this directory) to place at the top
> +# of the sidebar.
> +#html_logo = None
> +
> +# The name of an image file (within the static path) to use as favicon of the
> +# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
> +# pixels large.
> +#html_favicon = None
> +
> +# Add any paths that contain custom static files (such as style sheets) here,
> +# relative to this directory. They are copied after the builtin static files,
> +# so a file named "default.css" will overwrite the builtin "default.css".
> +html_static_path = ['_static']
> +
> +# Add any extra paths that contain custom files (such as robots.txt or
> +# .htaccess) here, relative to this directory. These files are copied
> +# directly to the root of the documentation.
> +#html_extra_path = []
> +
> +# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
> +# using the given strftime format.
> +#html_last_updated_fmt = '%b %d, %Y'
> +
> +# If true, SmartyPants will be used to convert quotes and dashes to
> +# typographically correct entities.
> +#html_use_smartypants = True
> +
> +# Custom sidebar templates, maps document names to template names.
> +#html_sidebars = {}
> +
> +# Additional templates that should be rendered to pages, maps page names to
> +# template names.
> +#html_additional_pages = {}
> +
> +# If false, no module index is generated.
> +#html_domain_indices = True
> +
> +# If false, no index is generated.
> +#html_use_index = True
> +
> +# If true, the index is split into individual pages for each letter.
> +#html_split_index = False
> +
> +# If true, links to the reST sources are added to the pages.
> +#html_show_sourcelink = True
> +
> +# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
> +#html_show_sphinx = True
> +
> +# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
> +#html_show_copyright = True
> +
> +# If true, an OpenSearch description file will be output, and all pages will
> +# contain a <link> tag referring to it. The value of this option must be the
> +# base URL from which the finished HTML is served.
> +#html_use_opensearch = ''
> +
> +# This is the file name suffix for HTML files (e.g. ".xhtml").
> +#html_file_suffix = None
> +
> +# Output file base name for HTML help builder.
> +htmlhelp_basename = 'LLVMLabToolsdoc'
> +
> +
> +# -- Options for LaTeX output ---------------------------------------------
> +
> +latex_elements = {
> +# The paper size ('letterpaper' or 'a4paper').
> +#'papersize': 'letterpaper',
> +
> +# The font size ('10pt', '11pt' or '12pt').
> +#'pointsize': '10pt',
> +
> +# Additional stuff for the LaTeX preamble.
> +#'preamble': '',
> +}
> +
> +# Grouping the document tree into LaTeX files. List of tuples
> +# (source start file, target name, title,
> +# author, documentclass [howto, manual, or own class]).
> +latex_documents = [
> + ('index', 'LLVMLabTools.tex', u'LLVM Lab Tools Documentation',
> + u'Daniel Dunbar, Chris Matthews', 'manual'),
> +]
> +
> +# The name of an image file (relative to this directory) to place at the top of
> +# the title page.
> +#latex_logo = None
> +
> +# For "manual" documents, if this is true, then toplevel headings are parts,
> +# not chapters.
> +#latex_use_parts = False
> +
> +# If true, show page references after internal links.
> +#latex_show_pagerefs = False
> +
> +# If true, show URL addresses after external links.
> +#latex_show_urls = False
> +
> +# Documents to append as an appendix to all manuals.
> +#latex_appendices = []
> +
> +# If false, no module index is generated.
> +#latex_domain_indices = True
> +
> +
> +# -- Options for manual page output ---------------------------------------
> +
> +# One entry per manual page. List of tuples
> +# (source start file, name, description, authors, manual section).
> +man_pages = [
> + ('index', 'llvmlabtools', u'LLVM Lab Tools Documentation',
> + [u'Daniel Dunbar, Chris Matthews'], 1)
> +]
> +
> +# If true, show URL addresses after external links.
> +#man_show_urls = False
> +
> +
> +# -- Options for Texinfo output -------------------------------------------
> +
> +# Grouping the document tree into Texinfo files. List of tuples
> +# (source start file, target name, title, author,
> +# dir menu entry, description, category)
> +texinfo_documents = [
> + ('index', 'LLVMLabTools', u'LLVM Lab Tools Documentation',
> + u'Daniel Dunbar, Chris Matthews', 'LLVMLabTools', 'One line description of project.',
> + 'Miscellaneous'),
> +]
> +
> +# Documents to append as an appendix to all manuals.
> +#texinfo_appendices = []
> +
> +# If false, no module index is generated.
> +#texinfo_domain_indices = True
> +
> +# How to display URL addresses: 'footnote', 'no', or 'inline'.
> +#texinfo_show_urls = 'footnote'
> +
> +# If true, do not generate a @detailmenu in the "Top" node's menu.
> +#texinfo_no_detailmenu = False
>
> Added: zorg/trunk/llvmbisect/docs/index.rst
> URL: http://llvm.org/viewvc/llvm-project/zorg/trunk/llvmbisect/docs/index.rst?rev=249757&view=auto
> ==============================================================================
> --- zorg/trunk/llvmbisect/docs/index.rst (added)
> +++ zorg/trunk/llvmbisect/docs/index.rst Thu Oct 8 16:52:50 2015
> @@ -0,0 +1,23 @@
> +.. LLVM Lab Tools documentation master file, created by
> + sphinx-quickstart on Tue Oct 6 18:29:53 2015.
> + You can adapt this file completely to your liking, but it should at least
> + contain the root `toctree` directive.
> +
> +Welcome to LLVM Lab Tools's documentation!
> +==========================================
> +
> +Contents:
> +
> +.. toctree::
> + :maxdepth: 2
> +
> + llvmlab_bisect
> + builders
> +
> +
> +Indices and tables
> +==================
> +
> +* :ref:`genindex`
> +* :ref:`modindex`
> +* :ref:`search`
>
> Added: zorg/trunk/llvmbisect/docs/llvmlab_bisect.rst
> URL: http://llvm.org/viewvc/llvm-project/zorg/trunk/llvmbisect/docs/llvmlab_bisect.rst?rev=249757&view=auto
> ==============================================================================
> --- zorg/trunk/llvmbisect/docs/llvmlab_bisect.rst (added)
> +++ zorg/trunk/llvmbisect/docs/llvmlab_bisect.rst Thu Oct 8 16:52:50 2015
> @@ -0,0 +1,569 @@
> +.. _llvmlab-bisect:
> +
> +Automatic compiler bisecting with llvmlab bisect
> +==================================================
> +
> +``llvmlab bisect`` is a tool for automatically identifying
> +when a regression was introduced in any build of Clang or LLVM
> +produced by one of our Buildbots or Jenkins jobs.
> +
> +The basics of the tool are very simple: you must provide it with a "test case"
> +which reproduces the problem you are seeing. Once you have done that, the
> +tool will automatically download compiler packages from the cloud
> +(typically produced by our continuous integration system) and will check whether
> +the problem reproduces with that compiler or not. The tool will then attempt to
> +narrow down to the first compiler which broke the test, and will report the last
> +compiler which worked and the first compiler that failed.
> +
> +Getting the tool
> +~~~~~~~~~~~~~~~~
> +
> +The tool is in our tools repo::
> + $ svn checkout https://llvm.org/svn/llvm-project/zorg
Did you mean https://llvm.org/svn/llvm-project/zorg/trunk zorg ?
Otherwise the next command doesn't work.

Also I ended up working around the 'sudo' requirement by:

svn co https://llvm.org/svn/llvm-project/zorg/trunk/ zorg
cd zorg/llvmbisect
LOCAL_PYTHON_INSTALL_PATH=$(pwd)/local_python_packages/lib/python2.7/site-packages/
mkdir -p $LOCAL_PYTHON_INSTALL_PATH
export PYTHONPATH=$LOCAL_PYTHON_INSTALL_PATH:$PYTHONPATH
python setup.py install --prefix=$(pwd)/local_python_packages
export PATH=$(pwd)/bin:$PATH

Do you think it is worth mentioning in the docs?

--Vassil
> + $ cd zorg/llvmbisect
> + $ sudo python setup.py install
> + $ llvmlab ls
> +
> +
> +The Bisection Process
> +~~~~~~~~~~~~~~~~~~~~~
> +
> +There are several parts of the bisection process you should understand to use
> +``llvmlab bisect`` effectively:
> +
> + * How the tool gets compiler builds.
> +
> + * How tests (bisection predicates) are run.
> +
> + * How the bisect process sandboxes tests.
> +
> + * Test filters.
> +
> +Compiler Packages
> ++++++++++++++++++
> +
> +Bisection uses packages produced by the continuous integration
> +system. Currently, it will only consider packages which are produced by one
> +particular "builder". The default is to use the ``clang-stage1-configure-RA_build``
> +line of packages, because that is produced by our mini farm and has a very high
> +granularity and a long history.
> +
> +You can tell the tool to use a particular line of builds using the ``-b`` or
> +``--build`` command line option.
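> +
> +For example, to bisect against a specific line of builds (a sketch; the builder
> +name is the default mentioned above and the test command is a placeholder)::
> +
> + $ llvmlab bisect -b clang-stage1-configure-RA_build ... test command ...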
> +
> +You can see a list of the kinds of builds which are published using::
> +
> + $ ./llvmlab ls
> + clang-stage1-configure-RA_build
> + clang-stage2-foo
> +
> +Each of these corresponds to a particular buildbot/Jenkins builder
> +which is constantly building new revisions and uploading them to
> +the cloud.
> +
> +The important thing to understand is that the particular compiler package in use
> +may impact your test. For example, ``clang-stage1-configure-RA_build`` builds
> +are x86_64 compilers built on Yosemite in release asserts mode. Generally, you
> +should make sure your test explicitly sets anything which could impact the test
> +(like the architecture).
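> +
> +For example, a sketch of a predicate which pins the architecture explicitly
> +(using the Darwin driver's ``-arch`` spelling seen in later examples; the
> +``%(path)s`` substitution is described below)::
> +
> + $ llvmlab bisect '%(path)s/bin/clang' -arch x86_64 -c t.c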
> +
> +The other way this impacts your tests is that some packages are laid out
> +differently than others. Most compiler packages are laid out in a "Unix" style,
> +with ``bin`` and ``lib`` subdirectories. One easy way to see the package layout
> +is to use ``llvmlab fetch`` to grab a build from the particular builder you
> +are using and poke at it. For example::
> +
> + $ llvmlab fetch clang-i386-darwin9-snt
> + downloaded root: clang-r128299-b6960.tgz
> + extracted path :
> + $ ls clang-r128299-b6960
> + bin docs lib share
> +
> +See ``llvmlab fetch --help`` for more information on the ``fetch`` tool.
> +
> +The main exception to remember is that Apple style builds generally will have
> +"root" style layouts, where the package is meant to be installed directly into
> +``/``, and will be laid out with ``usr/bin`` and ``Developer/usr/bin``
> +subdirectories.
> +
> +
> +The Build Cache
> ++++++++++++++++
> +
> +``llvmlab bisect`` can be configured to cache downloaded archives. This is
> +useful for users who frequently bisect things and want the command to run as
> +fast as possible. Note that the tool doesn't try to do anything smart about
> +minimizing the amount of disk space the cache uses, so use this at your own
> +risk.
> +
> +To enable the cache::
> +
> + $ mkdir -p ~/.llvmlab
> + $ echo "[ci]" > ~/.llvmlab/config
> + $ echo "cache_builds = True" >> ~/.llvmlab/config
> +
> +
> +Bisection Predicates
> +++++++++++++++++++++
> +
> +Like most bisection tools, ``llvmlab bisect`` needs to have a way to test
> +whether a particular build "passes" or "fails". ``llvmlab bisect`` uses a
> +format which allows writing most bisection commands on a single command line
> +without having to write extra shell scripts.
> +
> +``llvmlab ci exec`` is an invaluable tool for checking bisection
> +predicates. It accepts the exact same syntax as llvmlab bisect, but prints a
> +bit more information by default and only runs a single command. This is useful
> +for vetting bisection predicates before running a full bisection process.
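> +
> +For example, to vet a predicate by running it once before starting a full
> +bisection (a sketch; this assumes the same substitution syntax described
> +below)::
> +
> + $ llvmlab ci exec '%(path)s/bin/clang' -c t.c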
> +
> +Predicates are written as commands which are expected to exit successfully
> +(i.e., return 0 as the exit code) when the test succeeds
> +[#predicate_tense]_. The command will be run once for each downloaded package to
> +determine if the test passes or fails on that particular build.
> +
> +``llvmlab bisect`` treats all non-optional command line arguments as the
> +command to be run. Each argument will be rewritten to possibly substitute
> +variables, and then the entire command line will be run (i.e., ``exec()``'d) to
> +determine whether the test passes or fails.
> +
> +.. _string: http://docs.python.org/library/stdtypes.html#string-formatting
> +
> +Bisection downloads each package into a separate directory inside a sandbox and
> +provides a mechanism for substituting the path to the package into the command
> +to be run. Variables are substituted using Python string_ formatting syntax with
> +named keys. For the most part, the syntax is like ``printf`` but variable names
> +are written in parentheses before the format specifier [#sh_parens]_.
> +
> +The most important variable is "path", which will be set to the path to the
> +downloaded package. For example::
> +
> + %(path)s/bin/clang
> +
> +would typically be expanded to something like::
> +
> + .../<sandbox>/clang-r128289-b6957/bin/clang
> +
> +before the command is run. You can use the ``-v`` (``--verbose``) command line
> +option to have ``llvmlab bisect`` print the command lines it is running after
> +substitution.
> +
> +The tool provides a few other variables but "path" is the only one needed for
> +all but the rarest bisections. You can see the others in ``llvmlab bisect
> +--help``.
> +
> +The tool optimizes for the situation where downloaded packages include command
> +line executables which are going to be used in the tests, by automatically
> +extending the PATH and DYLD_LIBRARY_PATH variables to point into the downloaded
> +build directory whenever it sees that the downloaded package has ``bin`` or
> +``lib`` directories (the tool will also look for ``/Developer/usr/...``
> +directories). These environment extensions mean that it is usually possible to
> +write simple test commands without requiring any substitutions.
> +
> +For some bisection scenarios, it is easier to write a test script than to try
> +to come up with a single predicate command. For these scenarios, ``llvmlab
> +bisect`` also makes all of the substitution variables available in the command's
> +environment. Each variable is injected into the environment as
> +``TEST_<variable>``. As an example, the following script could be used as a test
> +predicate which just checks that the compile succeeds::
> +
> + #!/bin/sh
> +
> + $TEST_PATH/bin/clang -c t.c
> +
> +Even though llvmlab bisect itself will only run one individual command per
> +build, you can write arbitrarily complicated test predicates by either (a)
> +writing external test scripts, or (b) writing shell "one-liners" and using
> +``/bin/sh -c`` to execute them. For example, the following bisect will test that
> +a particular source file both compiles and executes successfully::
> +
> + llvmlab bisect /bin/sh -c '%(path)s/bin/clang t.c && ./a.out'
> +
> +llvmlab bisect also supports a shortcut for this particular pattern. Separate
> +test commands can be delimited on the command line by a literal "----"
> +argument. Each command will be substituted as usual, but they will be run
> +separately in order, and if any command fails the entire test fails.
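> +
> +For example, the compile-and-run predicate above could instead be written as
> +two commands separated by "----" (a sketch of the same test)::
> +
> + llvmlab bisect '%(path)s/bin/clang' t.c ---- ./a.out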
> +
> +.. [#predicate_tense] Note that ``llvmlab bisect`` always looks for the latest
> + build where a predicate *passes*. This means that it
> + generally expects the predicate to fail on any recent
> + build. If you are used to using tools like ``delta`` you
> + may be used to the predicate having the opposite tense --
> + however, for regression analysis usually one is
> + investigating a failure, and so one expects the test to
> + currently fail.
> +
> +.. [#sh_parens] Most shells will assign a syntax to (foo) so you generally have
> + to quote arguments which require substitution. One day I'll
> + think of a clever way to make such commands even easier to
> + write. Until then, quote away!
> +
> +
> +The Bisection Sandbox
> ++++++++++++++++++++++
> +
> +``llvmlab bisect`` tries to be very lightweight and not modify your working
> +directory or leave stray files around unless asked to. For that reason, it
> +downloads all of the packages and runs all of the tests inside a sandbox. By
> +default, the tool uses a sandbox inside ``/tmp`` and will destroy the sandbox
> +when it is done running tests.
> +
> +The tool also tries to be quiet and minimize command output, so the output of
> +each individual test run is also stored inside the sandbox. Unfortunately, this
> +means when the sandbox is destroyed you will no longer have access to the log
> +files if you think the predicate was not working correctly.
> +
> +For long running or complicated bisects, it is recommended to use the ``-s`` or
> +``--sandbox`` option to tell the tool where to put the sandbox. If this option is used,
> +the sandbox will not be destroyed and you can investigate the log files for each
> +predicate run and the downloaded packages at your leisure.
> +
> +Predicate commands themselves are **NOT** run inside the sandbox; they are
> +always run in the current working directory. This is useful for referring to
> +test input files, but may be a problem if you wish to store the outputs of each
> +individual test run (for example, to analyze later). For that case, one method
> +is to store the test outputs inside the download package directories. The
> +following example will store each generated executable inside the build
> +directory for testing later::
> +
> + llvmlab bisect /bin/sh -c '%(path)s/bin/clang t.c -o %(path)s/foo && %(path)s/foo'
> +
> +
> +Environment Extensions
> +++++++++++++++++++++++
> +
> +``llvmlab bisect`` tries to optimize for the common case where build products
> +have executables or libraries to test, by automatically extending the ``PATH``
> +and ``DYLD_LIBRARY_PATH`` variables when it recognizes that the build package
> +has ``bin`` or ``lib`` subdirectories.
> +
> +For almost all common bisection tasks, this makes it possible to run the tool
> +without having to explicitly specify the substitution variables.
> +
> +For example::
> +
> + llvmlab bisect '%(path)s/bin/clang' -c t.c
> +
> +could just be written as::
> +
> + llvmlab bisect clang -c t.c
> +
> +because the ``clang`` binary in the downloaded package will be found first in
> +the environment lookup.
> +
> +
> +Test Filters
> +++++++++++++
> +
> +For more advanced uses, llvmlab bisect has a syntax for specifying "filters"
> +on individual commands. The syntax for filters is that they should be specified
> +at the start of the command using arguments like "%%<filter expression>%%".
> +
> +The filters are used as a way to specify additional parameters which only apply
> +to particular test commands. The expressions themselves are just Python
> +expressions which should evaluate to a boolean result, which becomes the result
> +of the test.
> +
> +The Python expressions are evaluated in an environment which contains the
> +following predefined variables:
> +
> +``result``
> +
> + The current boolean result of the test predicate (that is, true if the test is
> + "passing"). This may have been modified by preceeding filters.
> +
> +``user_time``, ``sys_time``, ``wall_time``
> +
> + The user, system, and wall time the command took to execute, respectively.
> +
> +These variables can be used to easily construct predicates which fail based on
> +more complex criteria. For example, here is a filter to look for the latest
> +build where the compiler succeeds in less than .5 seconds::
> +
> + llvmlab bisect "%% result and user_time < .5 %%" clang -c t.c
> +
> +
> +Using ``llvmlab bisect``
> +~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +``llvmlab bisect`` is very flexible but takes some getting used to. The
> +following section has example bisection commands for many common scenarios.
> +
> +Compiler Crashes
> +++++++++++++++++
> +
> +This is the simplest case; a bisection for a compiler crash or assertion failure
> +usually looks like::
> +
> + $ llvmlab bisect '%(path)s'/bin/clang -c t.c ... compiler flags ...
> +
> +because when the compiler crashes it will have a non-zero exit code. *For
> +bisecting assertion failures, you should make sure the build being tested has
> +assertions compiled in!*
> +
> +Suppose you are investigating a crash which has been fixed, and you want to know
> +where. Just use the LLVM ``not`` tool to reverse the test::
> +
> + $ llvmlab bisect not '%(path)s'/bin/clang -c t.c ... compiler flags ...
> +
> +By looking for the latest build where ``not clang ...`` *passes* we are
> +effectively looking for the latest broken build. The next build will generally
> +be the one which fixed the problem.
> +
> +
> +Miscompiles
> ++++++++++++
> +
> +Miscompiles usually involve compiling the test case and running the output.
> +
> +The simplest scenario is when the program crashes when run. In that case the
> +simplest method is to use the ``/bin/sh -c "... arguments ..."`` trick to
> +combine the compile and execute steps into one command line::
> +
> + $ llvmlab bisect /bin/sh -c '%(path)s/bin/clang t.c && ./a.out'
> +
> +Note that because we are already quoting the shell command, we can just move the
> +quotes around the entire line and not worry about quoting individual arguments
> +(unless they have spaces!).
> +
> +A more complex scenario is when the program runs but has bad output. Usually
> +this just means you need to grep the output for the expected result. For example, to
> +bisect a program which is supposed to print "OK" (but isn't currently) we could
> +use::
> +
> + $ llvmlab bisect /bin/sh -c '%(path)s/bin/clang t.c && ./a.out | grep "OK"'
> +
> +Beware the pitfalls of exit codes and pipes, and use temporary files if you
> +aren't sure of what you are doing!
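> +
> +For instance, one way to avoid depending on a pipeline's exit status is to
> +write the program output to a temporary file first (a sketch; adapt the file
> +name and expected output to your test)::
> +
> + $ llvmlab bisect /bin/sh -c \
> +     '%(path)s/bin/clang t.c && ./a.out > out.txt && grep "OK" out.txt'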
> +
> +
> +Overlapped Failures
> ++++++++++++++++++++
> +
> +If you are used to using a test case reduction tool like ``delta`` or
> +``bugpoint``, you are probably familiar with the problem of running the tool for
> +hours, only to find that it found a very nice test case for a different problem
> +than what you were looking for.
> +
> +The same problem happens when bisecting a program which was previously broken
> +for a different reason. If you run the tool but the results don't seem to make
> +sense, I recommend saving the sandbox (e.g., ``llvmlab bisect -s /tmp/foo
> +...``) and investigating the log files to make sure bisection looked for the
> +problem you are interested in. If it didn't, usually you should make your
> +predicate more precise, for example by using ``grep`` to search the output for a
> +more precise failure message (like an assertion failure string).
> +
> +
> +Infinite Loops
> +++++++++++++++
> +
> +On occasion, you will want to bisect something that loops forever or takes
> +much longer than usual. This is a problem because you usually don't want to wait
> +for a long time (or infinity) for the predicate to complete.
> +
> +One simple trick which can work is to use the ``ulimit`` command to set a time
> +limit. The following command will look for the latest build where the compiler
> +runs in less than 10 seconds on the given input::
> +
> + $ llvmlab bisect /bin/sh -c 'ulimit -t 10; %(path)s/bin/clang -c t.c'
> +
> +
> +Performance Regressions
> ++++++++++++++++++++++++
> +
> +Bisecting performance regressions is done most easily using the filter
> +expressions. Usually you would start by determining an approximate upper
> +bound on the expected time of the command. Then, use a ``user_time`` filter
> +expression with that bound to cause any test running longer than that to fail.
> +
> +For example, the following shows a real bisection of a performance
> +regression on the ``telecomm-gsm`` benchmark::
> +
> + llvmlab bisect \
> + '%(path)s/bin/clang' -o telecomm-gsm.exe -w -arch x86_64 -O3 \
> + ~/llvm-test-suite/MultiSource/Benchmarks/MiBench/telecomm-gsm/*.c \
> + -lm -DSTUPID_COMPILER -DNeedFunctionPrototypes=1 -DSASR \
> + ---- \
> + "%% user_time < 0.25 %%" ./telecomm-gsm.exe -fps -c \
> + ~/llvm-test-suite/MultiSource/Benchmarks/MiBench/telecomm-gsm/large.au
> +
> +
> +Nightly Test Failures
> ++++++++++++++++++++++
> +
> +If you are bisecting a nightly test failure, it commonly helps to leverage the
> +existing nightly test Makefiles rather than try to write your own step to build
> +or test an executable against the expected output. In particular, the Makefiles
> +generate report files which say whether the test passed or failed.
> +
> +For example, if you are using LNT to run your nightly tests, then the top line
> +of the ``test.log`` file shows the exact command used to run the tests. You can
> +always rerun this command in any subdirectory. For example, here is an example
> +from an i386 Clang run::
> +
> + 2010-10-12 08:54:39: running: "make" "-k" "-j" "1" "report" "report.simple.csv" \
> + "TARGET_LLVMGCC=/Users/ddunbar/llvm.ref/2010-10-12_00-01.install/bin/clang" \
> + "CC_UNDER_TEST_TARGET_IS_I386=1" "ENABLE_HASHED_PROGRAM_OUTPUT=1" "TARGET_CXX=None" \
> + "LLI_OPTFLAGS=-O0" "TARGET_CC=None" \
> + "TARGET_LLVMGXX=/Users/ddunbar/llvm.ref/2010-10-12_00-01.install/bin/clang++" \
> + "TEST=simple" "CC_UNDER_TEST_IS_CLANG=1" "TARGET_LLCFLAGS=" "TARGET_FLAGS=-g -arch i386" \
> + "USE_REFERENCE_OUTPUT=1" "OPTFLAGS=-O0" "SMALL_PROBLEM_SIZE=1" "LLC_OPTFLAGS=-O0" \
> + "ENABLE_OPTIMIZED=1" "ARCH=x86" "DISABLE_CBE=1" "DISABLE_JIT=1"
> +
> +Suppose we wanted to bisect a test failure on something complicated, like
> +``254.gap``. The "easiest" thing to do is:
> +
> + #. Replace the compiler paths with "%(path)s" so that we use the right compiler to test.
> +
> + #. Change into the test directory, in this case ``External/SPEC/CINT2000/254.gap``.
> +
> + #. Each test produces a ``<test name>.simple.execute.report.txt`` text file which will have a line that looks like::
> +
> + TEST-FAIL: exec /Users/ddunbar/nt/clang.i386.O0.g/test-2011-03-25_06-35-35/External/SPEC/CINT2000/254.gap/254.gap
> +
> + Because the tests are make driven, we can tell make to only build this
> + file. In SingleSource directories, this would make sure we don't run any
> + tests we don't need to.
> +
> + In this case, replace the "report" and "report.simple.csv" make targets on
> + the command line with "Output/254.gap.simple.exec.txt".
> +
> + #. Make sure your test predicate removes the Output directory and any ``report...`` files (if
> + you forget this, you won't end up rebuilding the test with the right compiler).
> +
> + #. Add a grep for "TEST-PASS" of the report file.
> +
> +An example of what the final bisect command might look like::
> +
> + $ llvmlab bisect /bin/sh -c \
> + 'rm -rf report.* Output && \
> + "make" "-k" "-j" "1" "Output/254.gap.simple.exec.txt" \
> + "TARGET_LLVMGCC=%(path)s/bin/clang" \
> + "CC_UNDER_TEST_TARGET_IS_I386=1" "ENABLE_HASHED_PROGRAM_OUTPUT=1" "TARGET_CXX=None" \
> + "LLI_OPTFLAGS=-O0" "TARGET_CC=None" \
> + "TARGET_LLVMGXX=%(path)s/bin/clang++" \
> + "TEST=simple" "CC_UNDER_TEST_IS_CLANG=1" "TARGET_LLCFLAGS=" "TARGET_FLAGS=-g -arch i386" \
> + "USE_REFERENCE_OUTPUT=1" "OPTFLAGS=-O0" "SMALL_PROBLEM_SIZE=1" "LLC_OPTFLAGS=-O0" \
> + "ENABLE_OPTIMIZED=1" "ARCH=x86" "DISABLE_CBE=1" "DISABLE_JIT=1" && \
> + grep "TEST-PASS" "Output/254.gap.simple.exec.txt"'
> +
> +
> +Nightly Test Performance Regressions
> +++++++++++++++++++++++++++++++++++++
> +
> +This is similar to the problem of bisecting a nightly test above, but made more
> +complicated because the test predicate needs to do a comparison on the
> +performance result.
> +
> +One way to do this is to extract a script which reproduces the performance
> +regression, and use a filter expression as described previously. However, this
> +requires extracting the exact commands which are run by the ``test-suite``
> +Makefiles.
> +
> +A simpler way is to use the ``test-suite/tools/get-report-time`` script in
> +conjunction with a standard Unix command line tool like ``expr`` to do the
> +performance comparison.
> +
> +The basic process is similar to the one above; the differences are that instead
> +of just using ``grep`` to check the output, we use the ``get-report-time`` tool
> +and a quick script using ``bc`` to compare the result. Here is an example::
> +
> + $ llvmlab bisect -s sandbox /bin/sh -c \
> + 'set -ex; \
> + rm -rf Output && \
> + "make" "-k" "-j" "1" "Output/security-rijndael.simple.compile.report.txt" \
> + "TARGET_LLVMGCC=%(path)s/bin/clang" "ENABLE_HASHED_PROGRAM_OUTPUT=1" "TARGET_CXX=None" \
> + "LLI_OPTFLAGS=-O0" "TARGET_CC=None" \
> + "TARGET_LLVMGXX=%(path)s/bin/clang++" \
> + "TEST=simple" "CC_UNDER_TEST_IS_CLANG=1" "ENABLE_PARALLEL_REPORT=1" "TARGET_FLAGS=-g" \
> + "USE_REFERENCE_OUTPUT=1" "CC_UNDER_TEST_TARGET_IS_X86_64=1" "OPTFLAGS=-O0" \
> + "LLC_OPTFLAGS=-O0" "ENABLE_OPTIMIZED=1" "ARCH=x86_64" "DISABLE_CBE=1" "DISABLE_JIT=1" && \
> + ./check-value.sh'
> +
> +Where ``check-value.sh`` looks like this::
> +
> + #!/bin/sh -x
> +
> + cmd1=`/Volumes/Data/sources/llvm/projects/test-suite/tools/get-report-time \
> + Output/security-rijndael.simple.compile.report.txt`
> + cmd2=`echo "$cmd1 < 0.42" | bc -l`
> +
> + if [ $cmd2 == '1' ]; then
> + exit 0
> + fi
> +
> + exit 1
> +
> +Another trick this particular example uses is the shell's ``set -x`` option
> +to log the commands which get run. This allows us to inspect the
> +log files in the ``sandbox`` directory and see what time was used in the
> +comparison. This is handy in case we aren't exactly sure if the
> +comparison threshold we used is correct.
> +
> +
> +Tests With Interactive Steps
> +++++++++++++++++++++++++++++
> +
> +Sometimes test predicates require some steps that must be performed
> +interactively or are too hard to automate in a test script.
> +
> +In such cases it's still possible to use llvmlab bisect by writing the test
> +script in such a way that it will wait for the user to inform it whether the
> +test passed or failed. For example, here is a real test script that was used
> +for a bisect in which I ran a GUI app and checked for distorted colors as part
> +of the test step.
> +
> +After each step, the GUI app would be launched, I would check the colors, and
> +then type in "yes" or "no" based on whether the app worked or not. Note that
> +because llvmlab bisect hides the test output by default, the prompt itself
> +doesn't show up, but the command still can read stdin.
> +
> +Here is the test script::
> +
> + #!/bin/sh
> +
> + git reset --hard
> +
> + CC=clang
> + COMPILE HERE
> + sudo ditto built_files/ /
> +
> + open /Applications/GUIApp
> +
> + while true; do
> + read -p "OK?" is_ok
> + if [ "$is_ok" == "yes" ]; then
> + echo "OK!"
> + exit 0
> + elif [ "$is_ok" == "no" ]; then
> + echo "FAILED!"
> + exit 1
> + else
> + echo "Answer yes or no you!";
> + fi
> + done
> +
> +And here is a log showing the transcript of the bisect::
> +
> + bash-3.2# ~admin/zorg/utils/llvmlab bisect --max-rev 131837 ./test.sh
> + no
> + FAIL: clang-r131837-b8165
> + no
> + FAIL: clang-r131835-b8164
> + no
> + FAIL: clang-r131832-b8162
> + no
> + FAIL: clang-r131828-b8158
> + yes
> + PASS: clang-r131795-b8146
> + no
> + FAIL: clang-r131809-b8151
> + no
> + FAIL: clang-r131806-b8149
> + no
> + FAIL: clang-r131801-b8147
> + clang-r131795-b8146: first working build
> + clang-r131801-b8147: next failing build
> +
> +Note that it is very easy to make a mistake and type the wrong answer when
> +following this process, in which case the bisect will come up with the wrong
> +answer. It's always worth sanity checking the results (e.g., using ``llvmlab
> +ci exec``) after the bisect is complete.
>
> Added: zorg/trunk/llvmbisect/llvmlab/__init__.py
> URL: http://llvm.org/viewvc/llvm-project/zorg/trunk/llvmbisect/llvmlab/__init__.py?rev=249757&view=auto
> ==============================================================================
> --- zorg/trunk/llvmbisect/llvmlab/__init__.py (added)
> +++ zorg/trunk/llvmbisect/llvmlab/__init__.py Thu Oct 8 16:52:50 2015
> @@ -0,0 +1 @@
> +""""""
>
> Added: zorg/trunk/llvmbisect/llvmlab/algorithm.py
> URL: http://llvm.org/viewvc/llvm-project/zorg/trunk/llvmbisect/llvmlab/algorithm.py?rev=249757&view=auto
> ==============================================================================
> --- zorg/trunk/llvmbisect/llvmlab/algorithm.py (added)
> +++ zorg/trunk/llvmbisect/llvmlab/algorithm.py Thu Oct 8 16:52:50 2015
> @@ -0,0 +1,85 @@
> +"""Handy algorithms."""
> +
> +
> +def bisect(predicate, list):
> + """
> + bisect(predicate, list) -> item or None
> +
> + Given a test predicate and a list of items, search the list and return the
> + first item for which the predicate succeeds, or None if no such item is
> + found.
> +
> + The list is assumed to be ordered such that (predicate(i) for i in list) is
> + monotonic. If this condition is not met, the returned item is guaranteed to
> + satisfy the predicate and the item preceding it is guaranteed to fail the
> + predicate, but that is all. Additionally, if the last item does not pass
> + the predicate, such an item might not be found.
> +
> + This function is optimized for the case where the searched for item is near
> + the beginning of the list.
> + """
> +
> + if not list:
> + return None
> +
> + lo = 0
> + hi = len(list)-1
> +
> + # Check first item immediately.
> + if predicate(list[lo]):
> + return list[lo]
> +
> + # Invariants:
> + # not predicate(list[lo])
> + # predicate(list[hi])
> +
> + # Binary search region.
> + while lo + 1 != hi:
> + mid = (lo + hi) // 2
> + if predicate(list[mid]):
> + hi = mid
> + else:
> + lo = mid
> +
> + return list[hi]
> +
> +
> +def gallop(predicate, list):
> + """
> + gallop(predicate, list) -> list or None
> +
> + Given a test predicate and a list of items, reduce the search space
> + assuming the searched for item is near the beginning of the list.
> +
> + The list is assumed to be ordered such that (predicate(i) for i in list) is
> + monotonic. If this condition is not met, the returned item is guaranteed to
> + satisfy the predicate and the item preceding it is guaranteed to fail the
> + predicate, but that is all. Additionally, if the last item does not pass
> + the predicate, such an item might not be found.
> + """
> +
> + if not list:
> + return None
> +
> + # Check first item immediately.
> + if predicate(list[0]):
> + return list[0:1]
> +
> + # Invariants:
> + # not predicate(list[lo])
> +
> + # Gallop to find initial search range, under the assumption that we are
> + # most likely looking for something at the head of this list.
> + lo = 0
> + hi = 1
> + while hi < len(list):
> + if predicate(list[hi]):
> + break
> + lo, hi = hi, hi + (hi - lo)*2
> +
> + # If we galloped past the end, limit the hi range.
> + if hi >= len(list):
> + hi = len(list) - 1
> + if hi == lo or not predicate(list[hi]):
> + return None
> + return list[lo:hi+1]
>
> Added: zorg/trunk/llvmbisect/llvmlab/ci.py
> URL: http://llvm.org/viewvc/llvm-project/zorg/trunk/llvmbisect/llvmlab/ci.py?rev=249757&view=auto
> ==============================================================================
> --- zorg/trunk/llvmbisect/llvmlab/ci.py (added)
> +++ zorg/trunk/llvmbisect/llvmlab/ci.py Thu Oct 8 16:52:50 2015
> @@ -0,0 +1,650 @@
> +"""
> +Tools for working with llvmlab CI infrastructure.
> +"""
> +
> +import errno
> +import os
> +import resource
> +import shutil
> +import subprocess
> +import sys
> +import tempfile
> +import time
> +
> +
> +from . import shell
> +from . import algorithm
> +from . import llvmlab
> +from . import util
> +from util import warning, fatal, note
> +from . import scripts
> +from . import util
> +
> +from optparse import OptionParser
> +
> +
> +class Command(object):
> + class Filter(object):
> + def __init__(self):
> + pass
> +
> + def evaluate(self, command):
> + raise RuntimeError("Abstract method.")
> +
> + class NotFilter(Filter):
> + def evaluate(self, command):
> + warning("'negate' filter is deprecated, use 'not result' "
> + "filter expression")
> + command.result = not command.result
> +
> + class MaxTimeFilter(Filter):
> + def __init__(self, value):
> + try:
> + self.value = float(value)
> + except:
> + fatal("invalid argument: %r" % time)
> + warning("'max_time' filter is deprecated, use "
> + "'user_time < %.4f' filter expression" % self.value)
> +
> + def evaluate(self, command):
> + if command.metrics["user_time"] >= self.value:
> + command.result = False
> +
> + available_filters = {"negate": NotFilter(), # note this is an instance.
> + "max_time": MaxTimeFilter}
> +
> + def __init__(self, command, stdout_path, stderr_path, env):
> + self.command = command
> + self.stdout_path = stdout_path
> + self.stderr_path = stderr_path
> + self.env = env
> +
> + # Test data.
> + self.metrics = {}
> + self.result = None
> +
> + def execute(self, verbose=False):
> + if verbose:
> + note('executing: %s' % ' '.join("'%s'" % arg
> + for arg in self.command))
> +
> + start_rusage = resource.getrusage(resource.RUSAGE_CHILDREN)
> + start_time = time.time()
> +
> + p = subprocess.Popen(self.command,
> + stdout=open(self.stdout_path, 'w'),
> + stderr=open(self.stderr_path, 'w'),
> + env=self.env)
> + self.result = p.wait() == 0
> +
> + end_time = time.time()
> + end_rusage = resource.getrusage(resource.RUSAGE_CHILDREN)
> + self.metrics["user_time"] = end_rusage.ru_utime - start_rusage.ru_utime
> + self.metrics["sys_time"] = end_rusage.ru_stime - start_rusage.ru_stime
> + self.metrics["wall_time"] = end_time - start_time
> +
> + if verbose:
> + note("command executed in -- "
> + "user: %.4fs, wall: %.4fs, sys: %.4fs" % (
> + self.metrics["user_time"], self.metrics["wall_time"],
> + self.metrics["sys_time"]))
> +
> + def evaluate_filter_spec(self, spec):
> + # Run the filter in an environment with the builtin filters and the
> + # metrics.
> + env = {"result": self.result}
> + env.update(self.available_filters)
> + env.update(self.metrics)
> + result = eval(spec, {}, env)
> +
> + # If the result is a filter object, evaluate it.
> + if isinstance(result, Command.Filter):
> + result.evaluate(self)
> + return
> +
> + # Otherwise, treat the result as a boolean predicate.
> + self.result = bool(result)
> +
> +
> +def execute_sandboxed_test(sandbox, builder, build, args,
> + verbose=False, very_verbose=False,
> + add_path_variables=True,
> + show_command_output=False,
> + reuse_sandbox=False):
> +
> + def split_command_filters(command):
> + for i, arg in enumerate(command):
> + if arg[:2] != "%%" or arg[-2:] != "%%":
> + break
> + else:
> + fatal("invalid command: %s, only contains filter "
> + "specifications" % ("".join('"%s"' % a for a in command)))
> +
> + return ([a[2:-2] for a in command[:i]],
> + command[i:])
> +
> + path = build.tobasename(include_suffix=False)
> + fullpath = build.tobasename()
> +
> + if verbose:
> + note('testing %r' % path)
> +
> + # Create the sandbox directory, if it doesn't exist.
> + is_temp = False
> + if sandbox is None:
> + sandbox = tempfile.mkdtemp()
> + is_temp = True
> + else:
> + # Make absolute.
> + sandbox = os.path.abspath(sandbox)
> + if not os.path.exists(sandbox):
> + os.mkdir(sandbox)
> +
> + # Compute paths and make sure sandbox is clean.
> + root_path = os.path.join(sandbox, fullpath)
> + builddir_path = os.path.join(sandbox, path)
> + need_build = True
> + if reuse_sandbox and (os.path.exists(root_path) and
> + os.path.exists(builddir_path)):
> + need_build = False
> + else:
> + for p in (root_path, builddir_path):
> + if os.path.exists(p):
> + fatal('sandbox is not clean, %r exists' % p)
> +
> + # Fetch and extract the build.
> + if need_build:
> + start_time = time.time()
> + llvmlab.fetch_build_to_path(builder, build, root_path, builddir_path)
> + if very_verbose:
> + note("extracted build in %.2fs" % (time.time() - start_time,))
> +
> + # Attempt to find clang/clang++ in the downloaded build.
> + def find_binary(name):
> + x = subprocess.check_output(['find', builddir_path, '-name', name])\
> + .strip().split("\n")[0]
> + if x == '':
> + x = None
> + return x
> +
> + clang_path = find_binary('clang')
> + clangpp_path = find_binary('clang++')
> + liblto_path = find_binary('libLTO.dylib')
> + if liblto_path is not None:
> + liblto_dir = os.path.dirname(liblto_path)
> + else:
> + liblto_dir = None
> +
> + # Construct the interpolation variables.
> + options = {'sandbox': sandbox,
> + 'path': builddir_path,
> + 'revision': build.revision,
> + 'build': build.build,
> + 'clang': clang_path,
> + 'clang++': clangpp_path,
> + 'libltodir': liblto_dir}
> +
> + # Inject environment variables.
> + env = os.environ.copy()
> + for key, value in options.items():
> + env['TEST_%s' % key.upper()] = str(value)
> +
> + # Extend the environment to include the path to the extracted build.
> + #
> + # FIXME: Ideally, we would be able to read some kind of configuration
> + # information about a builder so that we could just set this up; it doesn't
> + # necessarily belong here as hard-coded information.
> + if add_path_variables:
> + path_extensions = []
> + dyld_library_path_extensions = []
> + toolchains_dir = os.path.join(builddir_path,
> + ('Applications/Xcode.app/Contents/'
> + 'Developer/Toolchains'))
> + toolchain_paths = []
> + if os.path.exists(toolchains_dir):
> + toolchain_paths = [os.path.join(toolchains_dir, name, 'usr')
> + for name in os.listdir(toolchains_dir)]
> + for package_root in ['', 'Developer/usr/'] + toolchain_paths:
> + p = os.path.join(builddir_path, package_root, 'bin')
> + if os.path.exists(p):
> + path_extensions.append(p)
> + p = os.path.join(builddir_path, package_root, 'lib')
> + if os.path.exists(p):
> + dyld_library_path_extensions.append(p)
> + if path_extensions:
> + env['PATH'] = os.pathsep.join(
> + path_extensions + [os.environ.get('PATH', '')])
> + if dyld_library_path_extensions:
> + env['DYLD_LIBRARY_PATH'] = os.pathsep.join(
> + dyld_library_path_extensions + [
> + os.environ.get('DYLD_LIBRARY_PATH', '')])
> +
> + # Split the arguments into distinct commands.
> + #
> + # Extended command syntax allows running multiple commands by separating
> + # them with '----'.
> + test_commands = util.list_split(args, "----")
> +
> + # Split command specs into filters and commands.
> + test_commands = [split_command_filters(spec) for spec in test_commands]
> +
> + # Execute the test.
> + command_objects = []
> + interpolated_variables = False
> + for i, (filters, command) in enumerate(test_commands):
> + # Interpolate arguments.
> + old_command = command
> + command = [a % options for a in command]
> + if old_command != command:
> + interpolated_variables = True
> +
> + # Create the command object...
> + stdout_log_path = os.path.join(sandbox, '%s.%d.stdout' % (path, i))
> + stderr_log_path = os.path.join(sandbox, '%s.%d.stderr' % (path, i))
> + cmd_object = Command(command, stdout_log_path, stderr_log_path, env)
> + command_objects.append(cmd_object)
> +
> + # Execute the command.
> + try:
> + cmd_object.execute(verbose=verbose)
> + except OSError as e:
> + # Python's exceptions are horrible to read, and this one is
> + # incredibly common when people don't use the right syntax (or
> + # misspell something) when writing a predicate. Detect this and
> + # notify the user.
> + if e.errno == errno.ENOENT:
> + fatal("invalid command, executable doesn't exist: %r" % (
> + cmd_object.command[0],))
> + elif e.errno == errno.ENOEXEC:
> + fatal("invalid command, executable has a bad format. Did you "
> + "forget to put a #! at the top of a script?: %r"
> + % (cmd_object.command[0],))
> + else:
> + # Otherwise raise the error again.
> + raise e
> +
> + # Evaluate the filters.
> + for filter in filters:
> + cmd_object.evaluate_filter_spec(filter)
> +
> + if show_command_output:
> + for p, type in ((stdout_log_path, "stdout"),
> + (stderr_log_path, "stderr")):
> + if not os.path.exists(p):
> + continue
> +
> + f = open(p)
> + data = f.read()
> + f.close()
> + if data:
> + print ("-- command %s (note: suppressed by default, "
> + "see sandbox dir for log files) --" % (type))
> + print "--\n%s--\n" % data
> +
> + test_result = cmd_object.result
> + if not test_result:
> + break
> + if not interpolated_variables:
> + warning('no substitutions found. Fetched root ignored?')
> +
> + # Remove the temporary directory.
> + if is_temp:
> + if shell.execute(['rm', '-rf', sandbox]) != 0:
> + note('unable to remove sandbox dir %r' % sandbox)
> +
> + return test_result, command_objects
> +
> +
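
The per-argument interpolation used above is ordinary Python %-formatting against the options dictionary; a small sketch with hypothetical paths shows what happens to each test command argument:

    options = {'path': '/tmp/sandbox/clang-r219899-b808', 'revision': 219899}
    command = ['%(path)s/bin/clang', '-v']
    print [a % options for a in command]
    # -> ['/tmp/sandbox/clang-r219899-b808/bin/clang', '-v']
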
> +def get_best_match(builds, name, key=lambda x: x):
> + builds = list(builds)
> + builds.sort(key=key)
> +
> + if name is None and builds:
> + return builds[-1]
> +
> + to_find = llvmlab.Build.frombasename(name, None)
> +
> + best = None
> + for item in builds:
> + build = key(item)
> + # Check for a prefix match.
> + path = build.tobasename()
> + if path.startswith(name):
> + return item
> +
> + # Check for a revision match.
> + if build.revision == to_find.revision and build.revision is not None:
> + return item
> +
> + # Otherwise, stop when we aren't getting closer.
> + if build > to_find:
> + break
> + best = item
> +
> + return best
> +
> +
> +def action_fetch(name, args):
> + """fetch a build from the server"""
> +
> + parser = OptionParser("""\
> +usage: %%prog %(name)s [options] builder [build-name]
> +
> +Fetch the build from the named builder which matches build-name. If no match is
> +found, get the first build before the given name. If no build name is given,
> +the most recent build is fetched.
> +
> +The available builders can be listed using:
> +
> + %%prog ci ls""" % locals())
> + parser.add_option("-f", "--force", dest="force",
> + help=("always download and extract, overwriting any"
> + "existing files"),
> + action="store_true", default=False)
> + parser.add_option("", "--update-link", dest="update_link", metavar="PATH",
> + help=("update a symbolic link at PATH to point to the "
> + "fetched build (on success)"),
> + action="store", default=None)
> + parser.add_option("-d", "--dry-run", dest='dry_run',
> + help=("Perform all operations except the actual "
> + "downloading and extracting of any files"),
> + action="store_true", default=False)
> +
> + (opts, args) = parser.parse_args(args)
> +
> + if len(args) == 0:
> + parser.error("please specify a builder name")
> + elif len(args) == 1:
> + builder, = args
> + build_name = None
> + elif len(args) == 2:
> + builder, build_name = args
> + else:
> + parser.error("invalid number of arguments")
> +
> + builds = list(llvmlab.fetch_builds(builder))
> + if not builds:
> + parser.error("no builds for builder: %r" % builder)
> +
> + build = get_best_match(builds, build_name)
> + if not build:
> + parser.error("no match for build %r" % build_name)
> +
> + path = build.tobasename()
> + if build_name is not None and not path.startswith(build_name):
> + note('no exact match, fetching %r' % path)
> +
> + # Get the paths to extract to.
> + root_path = path
> + builddir_path = build.tobasename(include_suffix=False)
> +
> + if not opts.dry_run:
> + # Check that the download and extract paths are clean.
> + for p in (root_path, builddir_path):
> + if os.path.exists(p):
> + # If we are using --force, then clean the path.
> + if opts.force:
> + shutil.rmtree(p, ignore_errors=True)
> + continue
> + fatal('current directory is not clean, %r exists' % p)
> + llvmlab.fetch_build_to_path(builder, build, root_path, builddir_path)
> +
> + print 'downloaded root: %s' % root_path
> + print 'extracted path : %s' % builddir_path
> +
> + # Update the symbolic link, if requested.
> + if not opts.dry_run and opts.update_link:
> + # Remove the existing path.
> + try:
> + os.unlink(opts.update_link)
> + except OSError as e:
> + if e.errno != errno.ENOENT:
> + fatal('unable to update symbolic link at %r, cannot unlink' % (
> + opts.update_link))
> +
> + # Create the symbolic link.
> + os.symlink(os.path.abspath(builddir_path), opts.update_link)
> + print 'updated link at: %s' % opts.update_link
> + return os.path.abspath(builddir_path)
> +
> +
> +def action_ls(name, args):
> + """list available build names or builds"""
> +
> + parser = OptionParser("""\
> +usage: %%prog %s [build-name]
> +
> +With no arguments, list the available builder names on 'llvmlab'. With a
> +builder name, list the available builds for that builder.\
> +""" % name)
> +
> + (opts, args) = parser.parse_args(args)
> +
> + if not len(args):
> + available_buildnames = llvmlab.fetch_builders()
> + available_buildnames.sort()
> + for item in available_buildnames:
> + print item
> + return available_buildnames
> +
> + for name in args:
> + if len(args) > 1:
> + if name is not args[0]:
> + print
> + print '%s:' % name
> + available_builds = list(llvmlab.fetch_builds(name))
> + available_builds.sort()
> + available_builds.reverse()
> + for build in available_builds:
> + print build.tobasename(include_suffix=False)
> + min_rev = min([x.revision for x in available_builds])
> + max_rev = max([x.revision for x in available_builds])
> + note("Summary: found {} builds: r{}-r{}".format(len(available_builds),
> + min_rev, max_rev))
> + return available_builds
> +
> +DEFAULT_BUILDER = "clang-stage1-configure-RA_build"
> +
> +
> +def action_bisect(name, args):
> + """find first failing build using binary search"""
> +
> + parser = OptionParser("""\
> +usage: %%prog %(name)s [options] ... test command args ...
> +
> +Look for the first published build where a test failed, using the builds on
> +llvmlab. The command arguments are executed once per build tested, but each
> +argument is first subject to string interpolation. The syntax is
> +"%%(VARIABLE)FORMAT" where FORMAT is a standard printf format, and VARIABLE is
> +one of:
> +
> + 'sandbox' - the path to the sandbox directory.
> + 'path' - the path to the build under test.
> + 'revision' - the revision number of the build.
> + 'build' - the build number of the build under test.
> + 'clang' - the path to the clang binary of the build if it exists.
> + 'clang++' - the path to the clang++ binary of the build if it exists.
> + 'libltodir' - the path to the directory containing libLTO.dylib, if it
> + exists.
> +
> +Each test is run in a sandbox directory. By default, sandbox directories are
> +temporary directories which are created and destroyed for each test (see
> +--sandbox).
> +
> +For use in auxiliary test scripts, each test is also run with each variable
> +available in the environment as TEST_<variable name> (variables are converted
> +to uppercase). For example, a test script could use "TEST_PATH" to find the
> +path to the build under test.
> +
> +The stdout and stderr of the command are logged to files inside the sandbox
> +directory. Use an explicit sandbox directory if you would like to look at
> +them.
> +
> +It is possible to run multiple distinct commands for each test by separating
> +them in the command line arguments by '----'. The failure of any command causes
> +the entire test to fail.\
> +""" % locals())
> +
> + parser.add_option("-b", "--build", dest="build_name", metavar="STR",
> + help="name of build to fetch",
> + action="store", default=DEFAULT_BUILDER)
> + parser.add_option("-s", "--sandbox", dest="sandbox",
> + help="directory to use as a sandbox",
> + action="store", default=None)
> + parser.add_option("-v", "--verbose", dest="verbose",
> + help="output more test notermation",
> + action="store_true", default=False)
> + parser.add_option("-V", "--very-verbose", dest="very_verbose",
> + help="output even more test notermation",
> + action="store_true", default=False)
> + parser.add_option("", "--show-output", dest="show_command_output",
> + help="display command output",
> + action="store_true", default=False)
> + parser.add_option("", "--single-step", dest="single_step",
> + help="single step instead of binary stepping",
> + action="store_true", default=False)
> + parser.add_option("", "--min-rev", dest="min_rev",
> + help="minimum revision to test",
> + type="int", action="store", default=None)
> + parser.add_option("", "--max-rev", dest="max_rev",
> + help="maximum revision to test",
> + type="int", action="store", default=None)
> +
> + parser.disable_interspersed_args()
> +
> + (opts, args) = parser.parse_args(args)
> +
> + if opts.build_name is None:
> + parser.error("no build name given (see --build)")
> +
> + # Very verbose implies verbose.
> + opts.verbose |= opts.very_verbose
> +
> + start_time = time.time()
> + available_builds = list(llvmlab.fetch_builds(opts.build_name))
> + available_builds.sort()
> + available_builds.reverse()
> + if opts.very_verbose:
> + note("fetched builds in %.2fs" % (time.time() - start_time,))
> +
> + if opts.min_rev is not None:
> + available_builds = [b for b in available_builds
> + if b.revision >= opts.min_rev]
> + if opts.max_rev is not None:
> + available_builds = [b for b in available_builds
> + if b.revision <= opts.max_rev]
> +
> + def predicate(item):
> + # Run the sandboxed test.
> + test_result, _ = execute_sandboxed_test(
> + opts.sandbox, opts.build_name, item, args, verbose=opts.verbose,
> + very_verbose=opts.very_verbose,
> + show_command_output=opts.show_command_output or opts.very_verbose)
> +
> + # Print status.
> + print '%s: %s' % (('FAIL', 'PASS')[test_result],
> + item.tobasename(include_suffix=False))
> +
> + return test_result
> +
> + if opts.single_step:
> + for item in available_builds:
> + if predicate(item):
> + break
> + else:
> + item = None
> + else:
> + if opts.min_rev is None or opts.max_rev is None:
> + # Gallop to find initial search range, under the assumption that we
> + # are most likely looking for something at the head of this list.
> + search_space = algorithm.gallop(predicate, available_builds)
> + else:
> + # If both min and max revisions are specified,
> + # don't gallop - bisect the given range.
> + search_space = available_builds
> + item = algorithm.bisect(predicate, search_space)
> +
> + if item is None:
> + fatal('unable to find any passing build!')
> +
> + print '%s: first working build' % item.tobasename(include_suffix=False)
> + index = available_builds.index(item)
> + if index == 0:
> + print 'no failing builds!?'
> + else:
> + print '%s: next failing build' % available_builds[index-1].tobasename(
> + include_suffix=False)
> +
> +
> +def action_exec(name, args):
> + """execute a command against a published root"""
> +
> + parser = OptionParser("""\
> +usage: %%prog %(name)s [options] ... test command args ...
> +
> +Executes the given command against the latest published build. The syntax for
> +commands (and exit code) is exactly the same as for the 'bisect' tool, so this
> +command is useful for testing bisect test commands.
> +
> +See 'bisect' for more information on the exact test syntax.\
> +""" % locals())
> +
> + parser.add_option("-b", "--build", dest="build_name", metavar="STR",
> + help="name of build to fetch",
> + action="store", default=DEFAULT_BUILDER)
> + parser.add_option("-s", "--sandbox", dest="sandbox",
> + help="directory to use as a sandbox",
> + action="store", default=None)
> + parser.add_option("", "--min-rev", dest="min_rev",
> + help="minimum revision to test",
> + type="int", action="store", default=None)
> + parser.add_option("", "--max-rev", dest="max_rev",
> + help="maximum revision to test",
> + type="int", action="store", default=None)
> + parser.add_option("", "--near", dest="near_build",
> + help="use a build near NAME",
> + type="str", action="store", metavar="NAME", default=None)
> +
> + parser.disable_interspersed_args()
> +
> + (opts, args) = parser.parse_args(args)
> +
> + if opts.build_name is None:
> + parser.error("no build name given (see --build)")
> +
> + available_builds = list(llvmlab.fetch_builds(opts.build_name))
> + available_builds.sort()
> + available_builds.reverse()
> +
> + if opts.min_rev is not None:
> + available_builds = [b for b in available_builds
> + if b.revision >= opts.min_rev]
> + if opts.max_rev is not None:
> + available_builds = [b for b in available_builds
> + if b.revision <= opts.max_rev]
> +
> + if len(available_builds) == 0:
> + fatal("No builds available for builder name: %s" % opts.build_name)
> +
> + # Find the best match, if requested.
> + if opts.near_build:
> + build = get_best_match(available_builds, opts.near_build)
> + if not build:
> + parser.error("no match for build %r" % opts.near_build)
> + else:
> + # Otherwise, take the latest build.
> + build = available_builds[0]
> +
> + test_result, _ = execute_sandboxed_test(
> + opts.sandbox, opts.build_name, build, args, verbose=True,
> + show_command_output=True)
> +
> + print '%s: %s' % (('FAIL', 'PASS')[test_result],
> + build.tobasename(include_suffix=False))
> +
> + raise SystemExit(test_result != True)
> +
> +
> +def action_test(name, args):
> + from . import test_llvmlab
> + test_llvmlab.run_tests()
>
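
Putting the actions together, a bisect run can also be driven from Python, mirroring the test_bisect case in test_llvmlab.py further down; this sketch assumes the llvmbisect package is installed and the builder's artifacts are reachable:

    from llvmlab import ci
    ci.action_bisect("llvmlab bisect",
                     ["--min-rev", "219719", "--max-rev", "219899",
                      "bash", "-c", "%(path)s/bin/clang -v | grep b700"])
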
> Added: zorg/trunk/llvmbisect/llvmlab/clang_link
> URL: http://llvm.org/viewvc/llvm-project/zorg/trunk/llvmbisect/llvmlab/clang_link?rev=249757&view=auto
> ==============================================================================
> --- zorg/trunk/llvmbisect/llvmlab/clang_link (added)
> +++ zorg/trunk/llvmbisect/llvmlab/clang_link Thu Oct 8 16:52:50 2015
> @@ -0,0 +1 @@
> +link /Users/cmatthews/src/zorg/llvmbisect/llvmlab/clang-r219899-t2014-10-15_20-42-53-b808
> \ No newline at end of file
>
> Propchange: zorg/trunk/llvmbisect/llvmlab/clang_link
> ------------------------------------------------------------------------------
> svn:special = *
>
> Added: zorg/trunk/llvmbisect/llvmlab/gcs.py
> URL: http://llvm.org/viewvc/llvm-project/zorg/trunk/llvmbisect/llvmlab/gcs.py?rev=249757&view=auto
> ==============================================================================
> --- zorg/trunk/llvmbisect/llvmlab/gcs.py (added)
> +++ zorg/trunk/llvmbisect/llvmlab/gcs.py Thu Oct 8 16:52:50 2015
> @@ -0,0 +1,53 @@
> +"""Integration with Google Cloud Storage.
> +
> +"""
> +import os
> +import requests
> +
> +# Root URL to use for our queries.
> +GCS = "https://www.googleapis.com/storage/v1/"
> +
> +DEFAULT_BUCKET = "llvm-build-artifacts"
> +
> +BUCKET = os.getenv("BUCKET", DEFAULT_BUCKET)
> +
> +
> +def fetch_builders():
> + """Each build kind is stored as a folder in the GCS bucket.
> + List all the folders in the bucket, which is our list of possible
> + compilers.
> + """
> + params = {'delimiter': "/", 'fields': "prefixes"}
> + r = requests.get(GCS + "b/" + BUCKET + "/o", params=params)
> + r.raise_for_status()
> + reply_data = r.json()
> + folders = reply_data['prefixes']
> + no_slashes = [x.replace("/", "") for x in folders]
> + return no_slashes
> +
> +
> +def fetch_builds(project):
> + """Given a builder name, get the list of all the files stored for that
> + builder.
> + """
> + assert project is not None
> + params = {'delimiter': "/",
> + "fields": "kind,items(name, mediaLink)",
> + 'prefix': project + "/"}
> + r = requests.get(GCS + "b/" + BUCKET + "/o", params=params)
> + r.raise_for_status()
> + reply_data = r.json()
> + return reply_data
> +
> +# Download chunk size in bytes (~5 MB); not sure what this could be raised to.
> +CHUNK_SIZE = 5124288
> +
> +
> +def get_compiler(url, filename):
> + """Get the compiler at the url, and save to filename."""
> + r = requests.get(url)
> + r.raise_for_status()
> + with open(filename, 'wb') as fd:
> + for chunk in r.iter_content(CHUNK_SIZE):
> + fd.write(chunk)
> + return filename
>
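
For reference, fetch_builders() above amounts to a single objects-list request against the Google Cloud Storage JSON API; a minimal standalone sketch against the default bucket (network access assumed):

    import requests
    GCS = "https://www.googleapis.com/storage/v1/"
    r = requests.get(GCS + "b/llvm-build-artifacts/o",
                     params={'delimiter': "/", 'fields': "prefixes"})
    r.raise_for_status()
    print [p.replace("/", "") for p in r.json().get('prefixes', [])]
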
> Added: zorg/trunk/llvmbisect/llvmlab/llvmlab.py
> URL: http://llvm.org/viewvc/llvm-project/zorg/trunk/llvmbisect/llvmlab/llvmlab.py?rev=249757&view=auto
> ==============================================================================
> --- zorg/trunk/llvmbisect/llvmlab/llvmlab.py (added)
> +++ zorg/trunk/llvmbisect/llvmlab/llvmlab.py Thu Oct 8 16:52:50 2015
> @@ -0,0 +1,272 @@
> +"""Utilities for accessing stuff from llvmlab."""
> +
> +import json
> +import os
> +import re
> +import shutil
> +import time
> +
> +from . import shell
> +from . import util
> +from . import gcs
> +
> +from .util import fatal
> +
> +
> +class BuilderMap(object):
> + # Expire the buildermap after 24 hours.
> + expiration_time = 24 * 60 * 60
> +
> + @classmethod
> + def frompath(klass, path):
> + with open(path) as f:
> + data = json.load(f)
> + return klass(data['builders'], data['timestamp'])
> +
> + def __init__(self, builders, timestamp):
> + self.builders = builders
> + self.timestamp = timestamp
> +
> + def topath(self, path):
> + with open(path, 'w') as f:
> + data = {'builders': self.builders,
> + 'timestamp': self.timestamp}
> + json.dump(data, f, indent=2)
> +
> + def is_expired(self):
> + return time.time() > self.timestamp + self.expiration_time
> +
> +BUILD_NAME_REGEX = re.compile(
> + r"((apple-)?clang)-([0-9]+)(\.([0-9]+))?(\.([0-9]+))?"
> + r"-([A-Z][A-Za-z]+)(\.(.*))?")
> +
> +
> +class Build(object):
> + @staticmethod
> + def frombasename(str, url=None):
> +
> + str = os.path.basename(str)
> + revision = timestamp = build = None
> +
> + # Check if this is a BNI style build.
> + m = BUILD_NAME_REGEX.match(str)
> + if m:
> + name, _, major_str, _, minor_str, _, micro_str, build, \
> + _, suffix = m.groups()
> + revision = [int(major_str)]
> + if minor_str:
> + revision.append(int(minor_str))
> + if micro_str:
> + revision.append(int(micro_str))
> + return Build(name, tuple(revision), None, build, suffix, url)
> +
> + if '.' in str:
> + str, suffix = str.split('.', 1)
> + else:
> + suffix = None
> +
> + m = re.match(r'(.*)-b([0-9]+)', str)
> + if m:
> + str, build = m.groups()
> + build = int(build)
> +
> + m = re.match(r'(.*)-t([0-9-]{8,10}_[0-9-]{6,8})', str)
> + if m:
> + str, timestamp = m.groups()
> +
> + m = re.match(r'(.*)-r([0-9]+)', str)
> + if m:
> + str, revision = m.groups()
> + revision = int(revision)
> +
> + return Build(str, revision, timestamp, build, suffix, url)
> +
> + @staticmethod
> + def fromdata(data):
> + return Build(data['name'], data['revision'], data['timestamp'],
> + data['build'], data['suffix'])
> +
> + def todata(self):
> + return {'name': self.name,
> + 'revision': self.revision,
> + 'timestamp': self.timestamp,
> + 'build': self.build,
> + 'suffix': self.suffix}
> +
> + def __init__(self, name, revision, timestamp, build, suffix, url=None):
> + self.name = name
> + self.revision = revision
> + self.timestamp = timestamp
> + self.build = build
> + self.suffix = suffix
> + self.url = url
> +
> + def tobasename(self, include_suffix=True):
> + basename = self.name
> + if self.revision is not None:
> + if isinstance(self.revision, (tuple, list)):
> + basename += '-' + '.'.join(str(r) for r in self.revision)
> + else:
> + assert isinstance(self.revision, int)
> + basename += '-r%d' % self.revision
> + if self.timestamp is not None:
> + basename += '-t%s' % self.timestamp
> + if self.build is not None:
> + if isinstance(self.build, str):
> + basename += '-' + self.build
> + else:
> + basename += '-b%d' % self.build
> + if include_suffix and self.suffix is not None:
> + basename += '.%s' % self.suffix
> + return basename
> +
> + def __repr__(self):
> + return "%s%r" % (self.__class__.__name__,
> + (self.name, self.revision, self.timestamp, self.build,
> + self.suffix))
> +
> + def __cmp__(self, other):
> + return cmp((self.revision, self.timestamp,
> + self.build, self.suffix, self.name),
> + ((other.revision, other.timestamp,
> + other.build, other.suffix, other.name)))
> +
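
As a concrete example of the naming scheme handled by frombasename/tobasename, the build referenced by the clang_link test fixture parses as follows (the .tar.gz suffix is hypothetical, and the llvmlab package is assumed to be importable):

    from llvmlab.llvmlab import Build
    b = Build.frombasename("clang-r219899-t2014-10-15_20-42-53-b808.tar.gz")
    print b.name, b.revision, b.timestamp, b.build, b.suffix
    # -> clang 219899 2014-10-15_20-42-53 808 tar.gz
    print b.tobasename(include_suffix=False)
    # -> clang-r219899-t2014-10-15_20-42-53-b808
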
> +
> +def load_builder_map(reload=False):
> + """
> + load_builder_map() -> BuilderMap
> +
> + Load a map of builder names to the server url that holds those artifacts.
> + """
> +
> + prefs = util.get_prefs()
> +
> + # Load the builder map if present (and not reloading).
> + data_path = os.path.join(prefs.path, "ci")
> + buildermap_path = os.path.join(data_path, "build_map.json")
> + if not reload and os.path.exists(buildermap_path):
> + buildermap = BuilderMap.frompath(buildermap_path)
> +
> + # If the buildermap is not out-of-date, return it.
> + if not buildermap.is_expired():
> + return buildermap
> +
> + # Otherwise, we either didn't have a buildermap or it is out of date; compute it.
> + builders = {}
> + for build in gcs.fetch_builders():
> + builders[build] = build
> +
> + # Create the buildermap and save it.
> + buildermap = BuilderMap(builders, time.time())
> + if not os.path.exists(data_path):
> + shell.mkdir_p(data_path)
> + buildermap.topath(buildermap_path)
> +
> + return buildermap
> +
> +
> +def fetch_builders():
> + """
> + fetch_builders() -> [builder-name, ...]
> +
> + Get a list of available builders.
> + """
> +
> + # Handle only_use_cache setting.
> + prefs = util.get_prefs()
> + if prefs.getboolean("ci", "only_use_cache"):
> + cache_path = os.path.join(prefs.path, "ci", "build_cache")
> + return sorted(os.listdir(cache_path))
> +
> + # Otherwise, fetch the builder map.
> + return sorted(load_builder_map().builders.keys())
> +
> +
> +def fetch_builds(name):
> + """
> + fetch_builds(name) -> [Build, ...]
> +
> + Get a sorted list of the available Build objects for the named builder.
> + """
> + # Handle only_use_cache setting.
> + prefs = util.get_prefs()
> + if prefs.getboolean("ci", "only_use_cache"):
> + cache_path = os.path.join(prefs.path, "ci", "build_cache")
> + cache_build_path = os.path.join(cache_path, name)
> + items = os.listdir(cache_build_path)
> + assert False, "Unimplemented?" + str(items)
> + # Otherwise, load the builder map.
> + buildermap = load_builder_map()
> +
> + # If the builder isn't in the builder map, do a forced reload of the
> + # builder map.
> + if name not in buildermap.builders:
> + buildermap = load_builder_map(reload=True)
> +
> + # If the builder doesn't exist, report an error.
> + builder_artifacts = buildermap.builders.get(name)
> + if builder_artifacts is None:
> + fatal("unknown builder name: %r" % (name,))
> +
> + # Otherwise, load the builder list.
> + server_builds = gcs.fetch_builds(builder_artifacts)
> + builds = []
> + for path in server_builds['items']:
> + build = Build.frombasename(path['name'], path['mediaLink'])
> +
> + # Ignore any links which don't at least have a revision component.
> + if build.revision is not None:
> + builds.append(build)
> +
> + # If there were no builds, report an error.
> + if not builds:
> + fatal("builder %r may be misconfigured (no items)" % (name,))
> +
> + # Sort the builds, to make sure we return them ordered properly.
> + builds.sort()
> +
> + return builds
> +
> +
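
Used together, fetch_builders() and fetch_builds() provide the data the 'ls' action prints; a short sketch, assuming the package is installed, the GCS bucket is reachable, and ~/.llvmlab is writable:

    from llvmlab import llvmlab
    print llvmlab.fetch_builders()[:3]
    builds = llvmlab.fetch_builds("clang-stage1-configure-RA_build")
    print builds[-1].tobasename()   # newest build, since the list is sorted
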
> +def fetch_build_to_path(builder, build, root_path, builddir_path):
> + path = build.tobasename()
> +
> + # Check whether we are using a build cache and get the cached build path if
> + # so.
> + prefs = util.get_prefs()
> + cache_build_path = None
> + if prefs.getboolean("ci", "cache_builds"):
> + cache_path = os.path.join(prefs.path, "ci", "build_cache")
> + cache_build_path = os.path.join(cache_path, builder, path)
> +
> + # Copy the build from the cache or download it.
> + if cache_build_path and os.path.exists(cache_build_path):
> + shutil.copy(cache_build_path, root_path)
> + else:
> + # Load the builder map.
> + buildermap = load_builder_map()
> +
> + # If the builder isn't in the builder map, do a forced reload of the
> + # builder map.
> + if builder not in buildermap.builders:
> + buildermap = load_builder_map(reload=True)
> +
> + # If the builder doesn't exist, report an error.
> + builder_artifacts = buildermap.builders.get(builder)
> + if builder_artifacts is None:
> + fatal("unknown builder name: %r" % (builder,))
> +
> + # Otherwise, download the build from its url.
> + gcs.get_compiler(build.url, root_path)
> +
> + # Copy the build into the cache, if enabled.
> + if cache_build_path is not None:
> + shell.mkdir_p(os.path.dirname(cache_build_path))
> + shutil.copy(root_path, cache_build_path)
> +
> + # Create the directory for the build.
> + os.mkdir(builddir_path)
> +
> + # Extract the build.
> + if shell.execute(['tar', '-xf', root_path, '-C', builddir_path]):
> + fatal('unable to extract %r to %r' % (root_path, builddir_path))
>
> Added: zorg/trunk/llvmbisect/llvmlab/scripts.py
> URL: http://llvm.org/viewvc/llvm-project/zorg/trunk/llvmbisect/llvmlab/scripts.py?rev=249757&view=auto
> ==============================================================================
> --- zorg/trunk/llvmbisect/llvmlab/scripts.py (added)
> +++ zorg/trunk/llvmbisect/llvmlab/scripts.py Thu Oct 8 16:52:50 2015
> @@ -0,0 +1,60 @@
> +"""
> +Utilities for building the llvmlab multi-tool.
> +"""
> +
> +import os
> +import sys
> +
> +
> +class Tool(object):
> + """
> + This object defines a generic command line tool instance, which dynamically
> + builds its commands from a module dictionary.
> +
> + Example usage::
> +
> + import scripts
> +
> + def action_foo(name, args):
> + "the foo command"
> +
> + ...
> +
> + tool = scripts.Tool(locals())
> + if __name__ == '__main__':
> + tool.main(sys.argv)
> +
> + Any function beginning with "action_" is considered a tool command. Its
> + name is defined by the function name suffix. Underscores in the function
> + name are converted to '-' in the command line syntax. Actions ending with
> + "-debug" are not listed in the help.
> + """
> +
> + def __init__(self, locals):
> + # Create the list of commands.
> + self.commands = dict((name[7:].replace('_', '-'), f)
> + for name, f in locals.items()
> + if name.startswith('action_'))
> +
> + def usage(self, name):
> + print >>sys.stderr, "Usage: %s command [options]" % (
> + os.path.basename(name))
> + print >>sys.stderr
> + print >>sys.stderr, "Available commands:"
> + cmds_width = max(map(len, self.commands))
> + for name, func in sorted(self.commands.items()):
> + if name.endswith("-debug"):
> + continue
> +
> + print >>sys.stderr, " %-*s - %s" % (cmds_width, name,
> + func.__doc__)
> + sys.exit(1)
> +
> + def main(self, args):
> + if len(args) < 2 or args[1] not in self.commands:
> + if len(args) >= 2:
> + print >>sys.stderr, "error: invalid command %r\n" % args[1]
> + self.usage(args[0])
> +
> + cmd = args[1]
> + return self.commands[cmd]('%s %s' % (args[0], cmd), args[2:])
>
> Added: zorg/trunk/llvmbisect/llvmlab/shell.py
> URL: http://llvm.org/viewvc/llvm-project/zorg/trunk/llvmbisect/llvmlab/shell.py?rev=249757&view=auto
> ==============================================================================
> --- zorg/trunk/llvmbisect/llvmlab/shell.py (added)
> +++ zorg/trunk/llvmbisect/llvmlab/shell.py Thu Oct 8 16:52:50 2015
> @@ -0,0 +1,44 @@
> +"""
> +Shell-like utilities.
> +"""
> +
> +import os
> +
> +
> +def execute(args):
> + """execute(command) - Run the given command (or argv list) in a shell and
> + return the exit code."""
> + import subprocess
> + return subprocess.Popen(args).wait()
> +
> +
> +def capture(args, include_stderr=False):
> + """capture(command) - Run the given command (or argv list) in a shell and
> + return the standard output."""
> + import subprocess
> + stderr = subprocess.PIPE
> + if include_stderr:
> + stderr = subprocess.STDOUT
> + p = subprocess.Popen(args, stdout=subprocess.PIPE, stderr=stderr)
> + out, _ = p.communicate()
> + return p.wait(), out
> +
> +
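
A quick illustration of the two helpers above, assuming a Unix-like environment with the package importable:

    from llvmlab import shell
    print shell.execute(['true'])            # -> 0
    rc, out = shell.capture(['echo', 'hello'])
    print rc, repr(out)                      # -> 0 'hello\n'
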
> +def mkdir_p(path):
> + """mkdir_p(path) - Make the "path" directory, if it does not exist; this
> + will also make directories for any missing parent directories."""
> + import errno
> +
> + if not path or os.path.exists(path):
> + return
> +
> + parent = os.path.dirname(path)
> + if parent != path:
> + mkdir_p(parent)
> +
> + try:
> + os.mkdir(path)
> + except OSError as e:
> + # Ignore EEXIST, which may occur during a race condition.
> + if e.errno != errno.EEXIST:
> + raise
>
> Added: zorg/trunk/llvmbisect/llvmlab/test_llvmlab.py
> URL: http://llvm.org/viewvc/llvm-project/zorg/trunk/llvmbisect/llvmlab/test_llvmlab.py?rev=249757&view=auto
> ==============================================================================
> --- zorg/trunk/llvmbisect/llvmlab/test_llvmlab.py (added)
> +++ zorg/trunk/llvmbisect/llvmlab/test_llvmlab.py Thu Oct 8 16:52:50 2015
> @@ -0,0 +1,50 @@
> +# RUN: python test_llvmlab.py
> +import unittest
> +
> +import os
> +import shutil
> +import tempfile
> +from . import ci
> +
> +class TestLLVMLabCI(unittest.TestCase):
> +
> + def setUp(self):
> + self.workdir = tempfile.mkdtemp()
> + print self.workdir
> + os.chdir(self.workdir)
> +
> + def tearDown(self):
> + shutil.rmtree(self.workdir)
> +
> + def test_bisect(self):
> + ci.action_bisect("llvmlab", ["--min-rev", "219719",
> + "--max-rev", "219899",
> + "bash", "-c",
> + "%(path)s/bin/clang -v | grep b700"])
> +
> + def test_ls(self):
> + """Check that you can """
> + builds = ci.action_ls("llvmlab", [])
> + self.assertIn("clang-stage1-configure-RA_build", builds)
> + compilers = ci.action_ls("llvmlab",
> + ["clang-stage1-configure-RA_build"])
> + compiler_revs = [x.revision for x in compilers]
> + self.assertIn(219899, compiler_revs)
> +
> + def test_fetch_noargs(self):
> + """ """
> + path = ci.action_fetch("llvmlab", ["clang-stage1-configure-RA_build"])
> + self.assertTrue(os.path.isdir(path), "Fetch did not get a compiler?")
> +
> + def test_fetch_arg(self):
> + """ """
> + path = ci.action_fetch("llvmlab",
> + ["--update-link", "clang_link",
> + "clang-stage1-configure-RA_build",
> + "clang-r219899-t2014-10-15_20-42-53-b808"])
> + self.assertTrue(os.path.isdir(path), "Fetch did not get a compiler?")
> +
> +
> +def run_tests():
> + suite = unittest.TestLoader().loadTestsFromTestCase(TestLLVMLabCI)
> + unittest.TextTestRunner(verbosity=2).run(suite)
>
> Added: zorg/trunk/llvmbisect/llvmlab/util.py
> URL: http://llvm.org/viewvc/llvm-project/zorg/trunk/llvmbisect/llvmlab/util.py?rev=249757&view=auto
> ==============================================================================
> --- zorg/trunk/llvmbisect/llvmlab/util.py (added)
> +++ zorg/trunk/llvmbisect/llvmlab/util.py Thu Oct 8 16:52:50 2015
> @@ -0,0 +1,286 @@
> +import ConfigParser
> +import datetime
> +import inspect
> +import os
> +import sys
> +import traceback
> +
> +__all__ = []
> +
> +def _write_message(kind, message):
> + # Get the file/line where this message was generated.
> + f = inspect.currentframe()
> + # Step out of _write_message, and then out of wrapper.
> + f = f.f_back.f_back
> + file,line,_,_,_ = inspect.getframeinfo(f)
> + location = '%s:%d' % (os.path.basename(file), line)
> +
> + print >>sys.stderr, '%s: %s: %s' % (location, kind, message)
> +
> +note = lambda message: _write_message('note', message)
> +warning = lambda message: _write_message('warning', message)
> +error = lambda message: _write_message('error', message)
> +fatal = lambda message: (_write_message('fatal error', message), sys.exit(1))
> +
> +
> +def sorted(l, **kwargs):
> + l = list(l)
> + l.sort(**kwargs)
> + return l
> +
> +def list_split(list, item):
> + parts = []
> + while item in list:
> + index = list.index(item)
> + parts.append(list[:index])
> + list = list[index+1:]
> + parts.append(list)
> + return parts
> +
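
list_split is what implements the '----' command separator used by the bisect and exec actions; for instance:

    from llvmlab import util
    print util.list_split(["clang", "-c", "t.c", "----", "clang", "-v"], "----")
    # -> [['clang', '-c', 't.c'], ['clang', '-v']]
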
> +def pairs(l):
> + return zip(l, l[1:])
> +
> +###
> +
> +class EnumVal(object):
> + def __init__(self, enum, name, value):
> + self.enum = enum
> + self.name = name
> + self.value = value
> +
> + def __repr__(self):
> + return '%s.%s' % (self.enum._name, self.name)
> +
> +class Enum(object):
> + def __init__(self, name, **kwargs):
> + self._name = name
> + self.__items = dict((name, EnumVal(self, name, value))
> + for name,value in kwargs.items())
> + self.__reverse_map = dict((e.value,e.name)
> + for e in self.__items.values())
> + self.__dict__.update(self.__items)
> +
> + def get_value(self, name):
> + return self.__items.get(name)
> +
> + def get_name(self, value):
> + return self.__reverse_map.get(value)
> +
> + def get_by_value(self, value):
> + return self.__items.get(self.__reverse_map.get(value))
> +
> + def contains(self, item):
> + if not isinstance(item, EnumVal):
> + return False
> + return item.enum == self
> +
> +class multidict:
> + def __init__(self, elts=()):
> + self.data = {}
> + for key,value in elts:
> + self[key] = value
> +
> + def __contains__(self, item):
> + return item in self.data
> + def __getitem__(self, item):
> + return self.data[item]
> + def __setitem__(self, key, value):
> + if key in self.data:
> + self.data[key].append(value)
> + else:
> + self.data[key] = [value]
> + def items(self):
> + return self.data.items()
> + def values(self):
> + return self.data.values()
> + def keys(self):
> + return self.data.keys()
> + def __len__(self):
> + return len(self.data)
> + def get(self, key, default=None):
> + return self.data.get(key, default)
> + def todict(self):
> + return self.data.copy()
> +
> +###
> +
> +class Preferences(object):
> + def __init__(self, path):
> + self.path = path
> + self.config_path = os.path.join(path, "config")
> + self.options = ConfigParser.RawConfigParser()
> +
> + # Load the config file, if present.
> + if os.path.exists(self.config_path):
> + self.options.read(self.config_path)
> +
> + def save(self):
> + file = open(self.config_path, "w")
> + try:
> + self.options.write(file)
> + finally:
> + file.close()
> +
> + def get(self, section, option, default = None):
> + if self.options.has_option(section, option):
> + return self.options.get(section, option)
> + else:
> + return default
> +
> + def getboolean(self, section, option, default = None):
> + if self.options.has_option(section, option):
> + return self.options.getboolean(section, option)
> + else:
> + return default
> +
> + def setboolean(self, section, option, value):
> + return self.options.set(section, option, str(value))
> +
> +_prefs = None
> +def get_prefs():
> + global _prefs
> + if _prefs is None:
> + _prefs = Preferences(os.path.expanduser("~/.llvmlab"))
> +
> + # Allow dynamic override of only_use_cache option.
> + if os.environ.get("LLVMLAB_ONLY_USE_CACHE"):
> + _prefs.setboolean("ci", "only_use_cache", True)
> +
> + return _prefs
> +
> +###
> +
> +import threading
> +import Queue
> +
> +def detect_num_cpus():
> + """
> + Detects the number of CPUs on a system. Cribbed from pp.
> + """
> + # Linux, Unix and MacOS:
> + if hasattr(os, "sysconf"):
> + if os.sysconf_names.has_key("SC_NPROCESSORS_ONLN"):
> + # Linux & Unix:
> + ncpus = os.sysconf("SC_NPROCESSORS_ONLN")
> + if isinstance(ncpus, int) and ncpus > 0:
> + return ncpus
> + else: # OSX:
> + return int(os.popen2("sysctl -n hw.ncpu")[1].read())
> + # Windows:
> + if os.environ.has_key("NUMBER_OF_PROCESSORS"):
> + ncpus = int(os.environ["NUMBER_OF_PROCESSORS"])
> + if ncpus > 0:
> + return ncpus
> + return 1 # Default
> +
> +def execute_task_on_threads(fn, iterable, num_threads = None):
> + """execute_task_on_threads(fn, iterable) -> iterable
> +
> + Given a task function to run on an iterable list of work items, execute the
> + task on each item in the list using some number of threads, and yield the
> + results of the task function.
> +
> + If a task function throws an exception, the exception will be
> + printed but not returned to the caller. Clients which wish to
> + control exceptions should handle them inside the task function.
> + """
> + def push_work():
> + for item in iterable:
> + work_queue.put(item)
> +
> + # Push sentinels to cause workers to terminate.
> + for i in range(num_threads):
> + work_queue.put(_sentinel)
> + def do_work():
> + while True:
> + # Read a work item.
> + item = work_queue.get()
> +
> + # If we hit a sentinel, propagate it to the output queue and
> + # terminate.
> + if item is _sentinel:
> + output_queue.put(_sentinel)
> + break
> +
> + # Otherwise, execute the task and push to the output queue.
> + try:
> + output = (None, fn(item))
> + except Exception, e:
> + output = ('error', sys.exc_info())
> +
> + output_queue.put(output)
> +
> + # Compute the number of threads to use.
> + if num_threads is None:
> + num_threads = detect_num_cpus()
> +
> + # Create two queues, one for feeding items to the workers and another for
> + # consuming the output.
> + work_queue = Queue.Queue()
> + output_queue = Queue.Queue()
> +
> + # Create our unique sentinel object.
> + _sentinel = []
> +
> + # Create and run thread to push items onto the work queue.
> + threading.Thread(target=push_work).start()
> +
> + # Create and run the worker threads.
> + for i in range(num_threads):
> + t = threading.Thread(target=do_work)
> + t.daemon = True
> + t.start()
> +
> + # Read items from the output queue until all threads are finished.
> + finished = 0
> + while finished != num_threads:
> + item = output_queue.get()
> +
> + # Check for termination marker.
> + if item is _sentinel:
> + finished += 1
> + continue
> +
> + # Check for exceptions.
> + if item[0] == 'error':
> + _,(t,v,tb) = item
> + traceback.print_exception(t, v, tb)
> + continue
> +
> + assert item[0] is None
> + yield item[1]
> +
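
A minimal use of execute_task_on_threads, assuming the package is importable; results arrive in completion order, hence the sort:

    from llvmlab import util
    print sorted(util.execute_task_on_threads(lambda x: x * x, range(5)))
    # -> [0, 1, 4, 9, 16]
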
> +def timestamp():
> + return datetime.datetime.utcnow().strftime('%Y-%m-%d %H:%M:%S')
> +
> +###
> +
> +import collections
> +
> +class orderedset(object):
> + def __init__(self, items=None):
> + self.base = collections.OrderedDict()
> + if items is not None:
> + self.update(items)
> +
> + def update(self, items):
> + for item in items:
> + self.add(item)
> +
> + def add(self, item):
> + self.base[item] = None
> +
> + def remove(self, item):
> + del self.base[item]
> +
> + def __nonzero__(self):
> + return bool(self.base)
> +
> + def __len__(self):
> + return len(self.base)
> +
> + def __iter__(self):
> + return iter(self.base)
> +
> + def __contains__(self, item):
> + return item in self.base
>
> Added: zorg/trunk/llvmbisect/setup.py
> URL: http://llvm.org/viewvc/llvm-project/zorg/trunk/llvmbisect/setup.py?rev=249757&view=auto
> ==============================================================================
> --- zorg/trunk/llvmbisect/setup.py (added)
> +++ zorg/trunk/llvmbisect/setup.py Thu Oct 8 16:52:50 2015
> @@ -0,0 +1,41 @@
> +import os
> +
> +from setuptools import setup, find_packages
> +
> +# setuptools expects to be invoked from within the directory of setup.py, but it
> +# is nice to allow:
> +# python path/to/setup.py install
> +# to work (for scripts, etc.)
> +os.chdir(os.path.dirname(os.path.abspath(__file__)))
> +
> +setup(
> + name = "llvmbisect",
> + version = "1.0",
> +
> + author = "Daniel Dunbar and Chris Matthews",
> + author_email = "chris.matthews at apple.com",
> + url = 'http://lab.llvm.org',
> + license = 'BSD',
> +
> + description = "Compiler bisection service.",
> + keywords = 'testing compiler performance development llvm',
> +
> + classifiers=[
> + 'Development Status :: 4 - Beta',
> + 'Environment :: Console',
> + 'Intended Audience :: Developers',
> + ('License :: OSI Approved :: '
> + 'University of Illinois/NCSA Open Source License'),
> + 'Natural Language :: English',
> + 'Operating System :: OS Independent',
> + 'Programming Language :: Python',
> + 'Topic :: Software Development :: Quality Assurance',
> + 'Topic :: Software Development :: Testing',
> + ],
> +
> + packages = find_packages(),
> +
> + scripts = ['bin/llvmlab'],
> +
> + install_requires=['requests'],
> +)
>
>
> _______________________________________________
> llvm-commits mailing list
> llvm-commits at lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-commits


