[www] r197619 - More video links.

Tanya Lattner tonic at nondot.org
Wed Dec 18 14:52:28 PST 2013

Author: tbrethou
Date: Wed Dec 18 16:52:28 2013
New Revision: 197619

URL: http://llvm.org/viewvc/llvm-project?rev=197619&view=rev
More video links.


Modified: www/trunk/devmtg/2013-11/index.html
URL: http://llvm.org/viewvc/llvm-project/www/trunk/devmtg/2013-11/index.html?rev=197619&r1=197618&r2=197619&view=diff
--- www/trunk/devmtg/2013-11/index.html (original)
+++ www/trunk/devmtg/2013-11/index.html Wed Dec 18 16:52:28 2013
@@ -98,7 +98,7 @@ Keynote talk celebrating the 10th annive
 <b><a id="talk2">Emscripten: Compiling LLVM bitcode to JavaScript</a></b><br>
 <i>Alon Zakai - Mozilla</i><br>
 <a href="slides/Zakai-Emscripten.pdf">Slides</a><br>
-<a href="videos/">Video</a> (Computer) <a href="videos/">Video</a> (Mobile)<br>
+<a href="videos/Zakai-Emscripten-720.mov">Video</a> (Computer) <a href="videos/Zakai-Emscripten-360.mov">Video</a> (Mobile)<br>
 Emscripten is an open source compiler that converts LLVM bitcode to JavaScript. JavaScript is an unusual compilation target, being a high-level dynamic language rather than a low-level CPU assembly, but compiling to it efficiently is valuable because of the ubiquity of web browsers, which use it as their standard language. This talk will detail how Emscripten uses LLVM and clang to convert C/C++ into JavaScript. It will cover the specific challenges that compiling to JavaScript entails, such as the lack of goto statements, as well as the aspects it makes simpler, such as native exception handling support. Some of these issues stem from JavaScript itself, but the talk will also describe specific challenges in Emscripten's interaction with LLVM, along with opportunities for better integration between the two projects in the future.
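The goto problem the abstract mentions can be sketched in a few lines. This is an illustrative example (function names are hypothetical, not from Emscripten): the first function uses goto-based control flow, which has no JavaScript equivalent; the second shows the structured loop-plus-state-variable form that Emscripten's "relooper" style of transformation produces, which maps directly onto JavaScript's while and switch.

```cpp
#include <cassert>

// Control flow written with goto: no direct JavaScript equivalent.
int count_down_goto(int n) {
  int steps = 0;
top:
  if (n <= 0) goto done;
  --n;
  ++steps;
  goto top;
done:
  return steps;
}

// Structured equivalent: a loop driven by a state variable, which is
// expressible in JavaScript as while + switch. Both functions compute
// the same result.
int count_down_structured(int n) {
  int steps = 0;
  int state = 0;  // 0 = "top" block, 1 = "done" block
  while (state != 1) {
    if (n <= 0) { state = 1; continue; }
    --n;
    ++steps;
  }
  return steps;
}
```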
@@ -106,7 +106,7 @@ Emscripten is an open source compiler th
 <b><a id="talk3">Code Size Reduction using Similar Function Merging</a></b><br>
 <i>Tobias Edler von Koch - University of Edinburgh / QuIC, Pranav Bhandarkar - QuIC</i><br>
 <a href="slides/Koch-FunctionMerging.pdf">Slides</a><br>
-<a href="videos/">Video</a> (Computer) <a href="videos/">Video</a> (Mobile)<br>
+<a href="videos/Koch-CodeSize-720.mov">Video</a> (Computer) <a href="videos/Koch-CodeSize-360.mov">Video</a> (Mobile)<br>
 Code size reduction is a critical goal for compiler optimizations targeting embedded applications. While LLVM continues to improve its performance optimization capabilities, it is currently still lacking a robust set of optimizations specifically targeting code size. In our talk, we will describe an optimization pass that aims to reduce code size by merging similar functions at the IR level. Significantly extending the existing MergeFunctions optimization, the pass is capable of merging multiple functions even if there are minor differences between them. A number of heuristics are used to determine when merging of functions is profitable. Alongside hash tables, these also ensure that compilation time remains at an acceptable level. We will describe our experience of using this new optimization pass to reduce the code size of a significant embedded application at Qualcomm Innovation Center by 2%. 
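The kind of transformation described above can be sketched in source form. This is a hypothetical illustration (the actual pass operates on LLVM IR, not C++ source): two functions that differ only in a constant are rewritten as thin wrappers around one shared body, with the differing value hoisted into a parameter.

```cpp
#include <cassert>

// Two "similar" functions that differ only in one constant.
int scale_by_3(int x) { return x * 3 + 1; }
int scale_by_5(int x) { return x * 5 + 1; }

// After merging: one shared body, with the difference parameterized.
// Only the shared body carries real code; the wrappers are tiny thunks,
// which is where the code-size saving comes from.
static int scale_merged(int x, int k) { return x * k + 1; }
int scale_by_3_merged(int x) { return scale_merged(x, 3); }
int scale_by_5_merged(int x) { return scale_merged(x, 5); }
```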
@@ -114,7 +114,7 @@ Code size reduction is a critical goal f
 <b><a id="talk4">Julia: An LLVM-based approach to scientific computing</a></b><br>
 <i>Keno Fischer - Harvard College/MIT CSAIL</i><br>
 <a href="slides/Fischer-Julia.html">Slides</a><br>
-<a href="videos/">Video</a> (Computer) <a href="videos/">Video</a> (Mobile)<br>
+<a href="videos/Fischer-Julia-720.mov">Video</a> (Computer) <a href="videos/Fischer-Julia-360.mov">Video</a> (Mobile)<br>
 Julia is a new high-level dynamic programming language specifically designed for
 scientific and technical computing, while at the same time not ignoring the 
 need for the expressiveness and the power of a modern general purpose 
@@ -133,7 +133,7 @@ subtle differences between the prototype
 <b><a id="talk5">Verifying optimizations using SMT solvers</a></b><br>
 <i>Nuno Lopes - INESC-ID / U. Lisboa</i><br>
 <a href="slides/">Slides</a>
-<a href="videos/">Video</a> (Computer) <a href="videos/">Video</a> (Mobile)<br>
+<a href="videos/Lopes-SMTSolvers-720.mov">Video</a> (Computer) <a href="videos/Lopes-SMTSolvers-360.mov">Video</a> (Mobile)<br>
 Instcombine and Selection DAG optimizations, although usually simple, can easily hide bugs.
 We've had many cases in the past where these optimizers produced wrong code in certain corner cases.
 In this talk I'll describe a way to prove the correctness of such optimizations using an off-the-shelf SMT solver (bit-vector theory).  I'll give examples of past bugs found in these optimizations, show how to encode them in the SMT-Lib 2 format, and explain how to spot the bugs.
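To make the idea concrete, here is a toy stand-in for the SMT check: an instcombine-style rewrite verified exhaustively over all 8-bit values. The real approach encodes the same identity in SMT-Lib 2 bit-vector theory so the solver can cover 32- and 64-bit widths, where brute force is infeasible.

```cpp
#include <cassert>
#include <cstdint>

// Exhaustively check the rewrite "add %x, %x  ->  shl %x, 1" over all
// 8-bit inputs. Both sides wrap modulo 256, matching LLVM's bit-vector
// (two's-complement) semantics, so the identity holds for every input.
bool rewrite_is_correct() {
  for (int i = 0; i < 256; ++i) {
    uint8_t x = static_cast<uint8_t>(i);
    uint8_t original  = static_cast<uint8_t>(x + x);   // add %x, %x
    uint8_t rewritten = static_cast<uint8_t>(x << 1);  // shl %x, 1
    if (original != rewritten) return false;
  }
  return true;
}
```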
@@ -145,6 +145,7 @@ The encoding to the SMT format, although
 <i>Kostya Serebryany - Google,
 Alexey Samsonov - Google</i><br>
 <a href="slides/Serebryany-ASAN.pdf">Slides</a><br>
+<a href="videos/Serebryany-ASAN-720.mov">Video</a> (Computer) <a href="videos/Serebryany-ASAN-360.mov">Video</a> (Mobile)<br>
 AddressSanitizer is a fast memory error detector that uses LLVM for compile-time instrumentation. In this talk we will present several new features in AddressSanitizer.
 <li>Initialization order checker finds bugs where the program behavior depends on the order in which global variables from different modules are initialized.</li>
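The initialization-order problem above is the classic C++ static-initialization-order issue. A minimal sketch (variable names hypothetical): imagine `base` defined in a different translation unit. C++ gives no ordering guarantee between globals in different translation units, so `derived` could read `base` before it is initialized; that cross-module read is what the checker flags. In a single file, as here, initialization is top-to-bottom and the code behaves.

```cpp
// Imagine this global lives in another translation unit (e.g. a.cpp):
int base = 21;

// In main.cpp: if a.cpp's globals have not been initialized yet, this
// reads garbage. AddressSanitizer's init-order checker detects such
// cross-module reads at startup.
int derived = base * 2;
```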
@@ -159,6 +160,7 @@ AddressSanitizer is a fast memory error
 <b><a id="talk7">A Detailed Look at the R600 Backend</a></b><br>
 <i>Tom Stellard - Advanced Micro Devices Inc.</i><br>
 <a href="slides/Stellard-R600.pdf">Slides</a><br>
+<a href="videos/Stellard-R600-720.mov">Video</a> (Computer) <a href="videos/Stellard-R600-360.mov">Video</a> (Mobile)<br>
 The R600 backend, which targets AMD GPUs, was merged into LLVM prior to
 the 3.3 release.  It is one component of AMD's open source GPU drivers
 which provide support for several popular graphics and compute APIs.
@@ -178,6 +180,7 @@ improvement in the backend as well as co
 <b><a id="talk8">Developer Toolchain for the PlayStation®4</a></b><br>
 <i>Paul T. Robinson - Sony Computer Entertainment America</i><br>
 <a href="slides/Robinson-PS4Toolchain.pdf">Slides</a><br>
+<a href="videos/Robinson-PS4Toolchain-360.mov">Video</a> (Computer) <a href="videos/Robinson-PS4Toolchain-360.mov">Video</a> (Mobile)<br>
 The PlayStation®4 has a developer toolchain centered on Clang as the CPU compiler.  We describe how Clang/LLVM fits into Sony Computer Entertainment's (mostly proprietary) toolchain, focusing on customizations, game-developer experience, and working with the open-source community.
@@ -185,6 +188,7 @@ The PlayStation®4 has a developer too
 <b><a id="talk9">Annotations for Safe Parallelism in Clang</a></b><br>
 <i>Alexandros Tzannes -
 University of Illinois, Urbana-Champaign</i><br>
+<a href="videos/Tzannes-ASaP-720.mov">Video</a> (Computer) <a href="videos/Tzannes-ASaP-360.mov">Video</a> (Mobile)<br>
 The Annotations for Safe Parallelism (ASaP) project at UIUC is implementing a static checker in Clang to allow writing provably safe parallel code. ASaP is inspired by DPJ (Deterministic Parallel Java) but, unlike DPJ, it does not extend the base language. Instead, we rely on the rich C++11 attribute system to enrich C++ types and to pass information to our ASaP checker. The ASaP checker gives strong guarantees such as race freedom, *strong* atomicity, and deadlock freedom for commonly used parallelism patterns, and it is at the prototyping stage where we can prove the parallel safety of simple TBB programs. We are evolving ASaP in collaboration with our Autodesk partners, who help guide its design in order to solve incrementally complex problems faced by real software teams in industry. In this presentation, I will give an overview of how the checker works, what is currently supported, what we have "in the works", and some discussion about incorporating some of the ideas of the thread safety annotations to assist our analysis.
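The thread safety annotations mentioned at the end are the closest shipping analogue to ASaP's attribute-based approach. A minimal sketch using Clang's existing thread-safety attributes (a hand-rolled lock type keeps it self-contained; real code would annotate an actual mutex): the attributes do not change code generation at all, they only feed the static checker invoked with `clang -Wthread-safety`.

```cpp
// The attributes exist only in Clang; expand to nothing elsewhere so the
// example compiles with any C++ compiler.
#if defined(__clang__)
#define CAPABILITY(x)  __attribute__((capability(x)))
#define GUARDED_BY(m)  __attribute__((guarded_by(m)))
#define ACQUIRE(...)   __attribute__((acquire_capability(__VA_ARGS__)))
#define RELEASE(...)   __attribute__((release_capability(__VA_ARGS__)))
#else
#define CAPABILITY(x)
#define GUARDED_BY(m)
#define ACQUIRE(...)
#define RELEASE(...)
#endif

// Toy lock type annotated as a capability (no-op bodies for brevity).
struct CAPABILITY("mutex") Mutex {
  void lock() ACQUIRE() {}
  void unlock() RELEASE() {}
};

Mutex counter_mutex;
int counter GUARDED_BY(counter_mutex) = 0;  // checker rejects unguarded access

int increment() {
  counter_mutex.lock();
  int v = ++counter;  // OK: the checker proves counter_mutex is held here
  counter_mutex.unlock();
  return v;
}
```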
@@ -192,6 +196,7 @@ The Annotations for Safe Parallelism (AS
 <b><a id="talk10">Vectorization in LLVM</a></b><br>
 <i>Nadav Rotem - Apple, Arnold Schwaighofer - Apple</i><br>
 <a href="slides/Rotem-Vectorization.pdf">Slides</a><br>
+<a href="videos/Rotem-Vectorization-720.mov">Video</a> (Computer) <a href="videos/Rotem-Vectorization-360.mov">Video</a> (Mobile)<br>
 Vectorization is a powerful optimization that can accelerate programs in multiple domains.  Over the last year two new vectorization passes were added to LLVM: the Loop-vectorizer, which vectorizes loops, and the SLP-vectorizer, which combines independent scalar calculations into a vector. Both of these optimizations together show a significant performance increase on many applications.  In this talk we’ll present our work on the vectorizers in the past year.  We’ll discuss the overall architecture of these passes, the cost model for deciding when vectorization is profitable, and describe some interesting design tradeoffs. Finally, we want to talk about some ideas to further improve the vectorization infrastructure.
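The two passes described above target different code shapes. A sketch of each (function names are illustrative, not from LLVM): the first is the canonical loop-vectorizer candidate, independent unit-stride iterations; the second is the SLP pattern, a run of independent scalar statements that can be rolled into one vector operation.

```cpp
// Loop-vectorizer shape: each iteration is independent, so consecutive
// iterations can be mapped onto the lanes of one SIMD operation.
void axpy(float a, const float* x, float* y, int n) {
  for (int i = 0; i < n; ++i)
    y[i] = a * x[i] + y[i];
}

// SLP-vectorizer shape: four independent scalar adds on adjacent memory,
// combinable into a single 4-wide vector add.
void add4(const float* a, const float* b, float* out) {
  out[0] = a[0] + b[0];
  out[1] = a[1] + b[1];
  out[2] = a[2] + b[2];
  out[3] = a[3] + b[3];
}
```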
@@ -200,6 +205,7 @@ Vectorization is a powerful optimization
 <i>Reid Kleckner - Google</i><br>
 <a href="slides/Kleckner-ClangVisualC++.pdf">Slides</a><br>
+<a href="videos/Kleckner-ClangVisualC++-360.mov">Video</a> (Computer) <a href="videos/Kleckner-ClangVisualC++-360.mov">Video</a> (Mobile)<br>
 This talk covers the work we've been doing to help make clang and LLVM more
 compatible with Microsoft's Visual C++ toolchain.  With a compatible toolchain,
 we can deliver all of the features that clang and LLVM have to offer, such as
@@ -213,6 +219,7 @@ talk will go into detail about some of t
 <b><a id="talk12">Building a Modern Database with LLVM</a></b><br>
 <i>Skye Wanderman-Milne - Cloudera</i><br>
 <a href="slides/Wanderman-Milne-Cloudera.pdf">Slides</a><br>
+<a href="videos/Wanderman-Milne-Cloudera-720.mov">Video</a> (Computer) <a href="videos/Wanderman-Milne-Cloudera-360.mov">Video</a> (Mobile)<br>
 Cloudera Impala is a low-latency SQL query engine for Apache Hadoop. In order to achieve optimal CPU efficiency and query execution times, Impala uses LLVM to perform JIT code generation, taking advantage of query-specific information unavailable at compile time. For example, code generation allows us to remove many conditionals (and the associated branch misprediction overhead) necessary for handling multiple types, operators, functions, etc.; to inline what would otherwise be virtual function calls; and to propagate query-specific constants. These optimizations can reduce overall query time by almost 300%.
 In this talk, I'll outline the motivation for using LLVM within Impala and go over some examples and results of JIT optimizations we currently perform, as well as ones we'd like to implement in the future.
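The conditional-removal idea can be sketched as follows (types and names are hypothetical stand-ins, not Impala's API): the generic path re-tests the column type on every row, exactly the branch that query-specific codegen eliminates once the type is known from the query plan.

```cpp
// Interpreter-style path: the type tag is checked on every iteration,
// paying a branch (and potential misprediction) per row.
enum class ColType { kInt, kDouble };

double sum_generic(ColType t, const void* data, int n) {
  double s = 0;
  for (int i = 0; i < n; ++i) {
    if (t == ColType::kInt)
      s += static_cast<const int*>(data)[i];
    else
      s += static_cast<const double*>(data)[i];
  }
  return s;
}

// The specialized form JIT codegen would emit once the type is known:
// the per-row conditional is gone and the loop body can be inlined
// into the surrounding query operator.
double sum_int(const int* data, int n) {
  double s = 0;
  for (int i = 0; i < n; ++i) s += data[i];
  return s;
}
```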
@@ -223,6 +230,7 @@ In this talk, I'll outline the motivatio
 <i>Colin Riley - Codeplay</i><br>
 <a href="slides/Riley-DebugginWithLLDB.pdf">Slides</a><br>
+<a href="videos/Riley-DebugginWithLLDB-720.mov">Video</a> (Computer) <a href="videos/Riley-DebugginWithLLDB-360.mov">Video</a> (Mobile)<br>
 LLDB is at the stage of development where support is being added for a wide range of hardware devices. Its modular approach means that adapting it to debug a new system follows a well-defined, step-by-step process that can progress fairly quickly. We present a guide to the implementation steps required to get your hardware supported via LLDB using Remote Debugging, with examples from our ongoing work to support the Hexagon DSP in LLDB.
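LLDB's Remote Debugging path speaks the gdb-remote serial protocol, whose wire format is simple enough to sketch: each packet is `$` followed by the payload, `#`, and a two-hex-digit checksum, where the checksum is the payload bytes summed modulo 256. A minimal packet builder:

```cpp
#include <cstdio>
#include <string>

// Build a gdb-remote packet: '$' <payload> '#' <checksum>, where the
// checksum is the sum of the payload bytes modulo 256, written as two
// lowercase hex digits.
std::string make_packet(const std::string& payload) {
  unsigned sum = 0;
  for (unsigned char c : payload) sum = (sum + c) & 0xff;
  char tail[8];
  std::snprintf(tail, sizeof tail, "#%02x", sum);
  return "$" + payload + tail;
}
```

For example, the capability query a debug stub receives on connection is framed as `make_packet("qSupported")`.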
@@ -233,7 +241,7 @@ LLDB is at the stage of development wher
 Chandler Carruth - Google,
 Diego Novillo - Google</i><br>
 <a href="slides/Carruth-PGO.pdf">Slides</a><br>
-<a href="videos/">Video</a> (Computer) <a href="videos/">Video</a> (Mobile)<br>
+<a href="videos/Carruth-PGO-720.mov">Video</a> (Computer) <a href="videos/Carruth-PGO-360.mov">Video</a> (Mobile)<br>
 Profile Guided Optimization (PGO) is one of the most fundamental weaknesses in the LLVM optimization portfolio. We have had several attempts to build it, and to this day we still lack a holistic platform for driving optimizations through profiling. This talk will consist of three light-speed crash courses on where PGO is in LLVM, where it needs to be, and how several of us are working to get it there.
 First, we will present some motivational background on what PGO is good for and what it isn't. We will cover exactly how profile information interacts with the LLVM optimizations, the strategies we use at a high level to organize and use profile information, and the specific optimizations that are in turn driven by it. Much of this will cover infrastructure as it exists today, with some forward-looking information added into the mix.
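One concrete way profile information interacts with LLVM optimizations is through branch weights. A manual stand-in for what PGO derives automatically (the macro name is ours, the builtin is real): `__builtin_expect` attaches the same kind of branch-weight metadata to the IR that real profile data would populate, steering block layout and inlining decisions.

```cpp
// Hint the branch the profile says is hot. With PGO, this annotation is
// inferred from measured execution counts instead of written by hand.
#if defined(__GNUC__) || defined(__clang__)
#define LIKELY(x) __builtin_expect(!!(x), 1)
#else
#define LIKELY(x) (x)
#endif

int clamp_positive(int v) {
  if (LIKELY(v >= 0))  // "almost always non-negative" per the profile
    return v;
  return 0;
}
```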

More information about the llvm-commits mailing list