[www] r312895 - Update talk list.

Tanya Lattner via llvm-commits llvm-commits at lists.llvm.org
Mon Sep 11 00:32:59 PDT 2017


Author: tbrethou
Date: Mon Sep 11 00:32:59 2017
New Revision: 312895

URL: http://llvm.org/viewvc/llvm-project?rev=312895&view=rev
Log:
Update talk list.

Modified:
    www/trunk/devmtg/2017-10/index.html

Modified: www/trunk/devmtg/2017-10/index.html
URL: http://llvm.org/viewvc/llvm-project/www/trunk/devmtg/2017-10/index.html?rev=312895&r1=312894&r2=312895&view=diff
==============================================================================
--- www/trunk/devmtg/2017-10/index.html (original)
+++ www/trunk/devmtg/2017-10/index.html Mon Sep 11 00:32:59 2017
@@ -139,7 +139,187 @@ Please email Tanya Lattner (tanyalattner
 </p>
 
 <div class="www_sectiontitle" id="program">Program</div>
-<p>The schedule has not been released yet.</p>
+<p>The schedule has not been released yet, but here is a list of the accepted talks. See below for the full listing and abstracts.</p>
+
+<p>Keynotes:<br>
+
+<b><a href="#talk12">Falcon: An optimizing Java JIT
+</a></b> - <i>Philip Reames</i><br>
+
+<b><a href="#talk21">Compiling Android userspace and Linux kernel with LLVM
+</a></b> - <i>Stephen Hines, Nick Desaulniers and Greg Hackmann</i><br>
+</p>
+
+<p>Talks:<br>
+<b><a href="#talk1">Apple LLVM GPU Compiler: Embedded Dragons
+</a></b> - <i>Marcello Maggioni and Charu Chandrasekaran</i><br>
+
+<b><a href="#talk2">Bringing link-time optimization to the embedded world: (Thin)LTO with Linker Scripts
+</a></b> - <i>Tobias Edler von Koch, Sergei Larin, Shankar Easwaran and Hemant Kulkarni</i><br>
+
+<b><a href="#talk3">Advancing Clangd: Bringing persisted indexing to Clang tooling
+</a></b> - <i>Marc-Andre Laperle</i><br>
+
+<b><a href="#talk4">The Further Benefits of Explicit Modularization: Modular Codegen
+</a></b> - <i>David Blaikie</i><br>
+
+<b><a href="#talk5">eval() in C++
+</a></b> - <i>Sean Callanan</i><br>
+
+<b><a href="#talk7">Enabling Parallel Computing in Chapel with Clang and LLVM
+</a></b> - <i>Michael Ferguson</i><br>
+
+<b><a href="#talk8">Structure-aware fuzzing for Clang and LLVM with libprotobuf-mutator
+</a></b> - <i>Kostya Serebryany, Vitaly Buka and Matt Morehouse</i><br>
+
+<b><a href="#talk9">Adding Index‐While‐Building and Refactoring to Clang
+</a></b> - <i>Alex Lorenz and Nathan Hawes</i><br>
+
+<b><a href="#talk10">XRay in LLVM: Function Call Tracing and Analysis
+</a></b> - <i>Dean Michael Berris</i><br>
+
+<b><a href="#talk11">GlobalISel: Past, Present, and Future
+</a></b> - <i>Quentin Colombet and Ahmed Bougacha</i><br>
+
+<b><a href="#talk12">Falcon: An optimizing Java JIT
+</a></b> - <i>Philip Reames</i><br>
+
+<b><a href="#talk13">Dominator Trees and incremental updates that transcend time
+</a></b> - <i>Jakub Kuderski</i><br>
+
+<b><a href="#talk14">Scale, Robust and Regression-Free Loop Optimizations for Scientific Fortran and Modern C++
+</a></b> - <i>Tobias Grosser and Michael Kruse</i><br>
+
+<b><a href="#talk15">Implementing Swift Generics
+</a></b> - <i>Douglas Gregor, Slava Pestov and John McCall</i><br>
+
+<b><a href="#talk16">lld: A Fast, Simple, and Portable Linker
+</a></b> - <i>Rui Ueyama</i><br>
+
+<b><a href="#talk17">Vectorizing Loops with VPlan – Current State and Next Steps
+</a></b> - <i>Ayal Zaks and Gil Rapaport</i><br>
+
+<b><a href="#talk18">LLVM Compile-Time: Challenges. Improvements. Outlook.
+</a></b> - <i>Michael Zolotukhin</i><br>
+
+<b><a href="#talk19">Challenges when building an LLVM bitcode Obfuscator
+</a></b> - <i>Serge Guelton, Adrien Guinet, Juan Manuel Martinez and Pierrick Brunet</i><br>
+
+<b><a href="#talk20">Building Your Product Around LLVM Releases
+</a></b> - <i>Tom Stellard</i><br>
+</p>
+<p>
+BoFs:<br>
+<b><a href="#bof1">Storing Clang data for IDEs and static analysis
+</a></b> - <i>Marc-Andre Laperle</i><br>
+
+<b><a href="#bof2">Source-based Code Coverage BoF
+</a></b> - <i>Eli Friedman and Vedant Kumar</i><br>
+
+<b><a href="#bof3">Clang Static Analyzer BoF
+</a></b> - <i>Devin Coughlin, Artem Dergachev and Anna Zaks</i><br>
+
+<b><a href="#bof4">Co-ordinating RISC-V development in LLVM
+</a></b> - <i>Alex Bradbury</i><br>
+
+<b><a href="#bof5">Thoughts and State for Representing Parallelism with Minimal IR Extensions in LLVM
+</a></b> - <i>Xinmin Tian, Hal Finkel, Tb Schardl, Johannes Doerfert and Vikram Adve</i><br>
+
+<b><a href="#bof6">BoF - Loop and Accelerator Compilation Using Integer Polyhedra
+</a></b> - <i>Tobias Grosser and Hal Finkel</i><br>
+
+<b><a href="#bof7">LLDB Future Directions
+</a></b> - <i>Zachary Turner and David Blaikie</i><br>
+
+<b><a href="#bof8">LLVM Foundation - Status and Involvement
+</a></b> - <i>LLVM Foundation Board of Directors</i><br>
+
+</p>
+
+<p>
+Tutorials:<br>
+<b><a href="#tutorial1">Writing Great Machine Schedulers
+</a></b> - <i>Javed Absar and Florian Hahn</i><br>
+
+<b><a href="#tutorial2">Tutorial: Head First into GlobalISel
+</a></b> - <i>Daniel Sanders, Aditya Nandakumar and Justin Bogner</i><br>
+
+<b><a href="#tutorial3">Welcome to the back-end: The LLVM machine representation.
+</a></b> - <i>Matthias Braun</i><br>
+
+</p>
+<p>
+Lightning Talks:<br>
+
+<b><a href="#lightning1">Porting OpenVMS using LLVM
+</a></b> - <i>John Reagan</i><br>
+
+<b><a href="#lightning2">Porting LeakSanitizer: A Beginner's Guide
+</a></b> - <i>Francis Ricci</i><br>
+
+<b><a href="#lightning3">Introsort based sorting function for libc++
+</a></b> - <i>Divya Shanmughan and Aditya Kumar</i><br>
+
+<b><a href="#lightning4">Code Size Optimization: Interprocedural Outlining at the IR Level
+</a></b> - <i>River Riddle</i><br>
+
+<b><a href="#lightning5">ThreadSanitizer APIs for external libraries
+</a></b> - <i>Kuba Mracek</i><br>
+
+<b><a href="#lightning6">A better shell command-line autocompletion for clang
+</a></b> - <i>Yuka Takahashi</i><br>
+
+<b><a href="#lightning7">A CMake toolkit for migrating C++ projects to clang’s module system.
+</a></b> - <i>Raphael Isemann</i><br>
+
+<b><a href="#lightning8">Debugging of optimized code: Extending the lifetime of local variables
+</a></b> - <i>Wolfgang Pieb</i><br>
+
+<b><a href="#lightning10">An LLVM based Loop Profiler
+</a></b> - <i>Shalini Jain, Kamlesh Kumar, Suresh Purini, Dibyendu Das and Ramakrishna Upadrasta</i><br>
+
+<b><a href="#lightning11">Compiling cross-toolchains with CMake and runtimes build
+</a></b> - <i>Petr Hosek</i><br>
+
+</p>
+
+<p>
+Student Research Competition:<br>
+
+<b><a href="#src1">VPlan + RV: A Proposal
+</a></b> - <i>Simon Moll and Sebastian Hack</i><br>
+
+<b><a href="#src2">Polyhedral Value & Memory Analysis
+</a></b> - <i>Johannes Doerfert and Sebastian Hack</i><br>
+
+<b><a href="#src3">DLVM: A Compiler Framework for Deep Learning DSLs
+</a></b> - <i>Richard Wei, Vikram Adve and Lane Schwartz</i><br>
+
+<b><a href="#src4">Leveraging LLVM to Optimize Parallel Programs
+</a></b> - <i>William Moses</i><br>
+
+<b><a href="#src5">Exploiting and improving LLVM's data flow analysis using superoptimizer
+</a></b> - <i>Jubi Taneja</i><br>
+
+</p>
+<p>
+Posters:<br>
+
+<b><a href="#poster1">Venerable Variadic Vulnerabilities Vanquished
+</a></b> - <i>Priyam Biswas, Alessandro Di Federico, Scott A. Carr, Prabhu Rajasekaran, Stijn Volckaert, Yeoul Na, Michael Franz and Mathias Payer</i><br>
+
+<b><a href="#poster2">Extending LLVM’s masked.gather/scatter Intrinsic to Read/write Contiguous Chunks from/to Arbitrary Locations.
+</a></b> - <i>Farhana Aleen, Elena Demikhovsky and Hideki Saito</i><br>
+
+<b><a href="#poster3">An LLVM based Loop Profiler
+</a></b> - <i>Shalini Jain, Kamlesh Kumar, Suresh Purini, Dibyendu Das and Ramakrishna Upadrasta</i><br>
+
+More coming soon...
+
+</p>
+
+
+
 
 <div class="www_sectiontitle" id="talks">Talk Abstracts</div>
 
@@ -441,6 +621,16 @@ obfuscate C/C++/Objective-C code obfusca
 In this talk, we will look at how everyone from individual users to large organizations can make their LLVM-based products better when building them on top of official LLVM releases. We will cover topics such as best practices for working with upstream, keeping internal branches in sync with the latest git/svn code, designing continuous integration systems that test both public and private branches, and the LLVM release process: how it works and how you can leverage it when releasing your own products.
 </p>
 
+<p>
+<b><a id="talk21">Compiling Android userspace and Linux kernel with LLVM
+</a></b><br>
+<i>Stephen Hines, Nick Desaulniers and Greg Hackmann</i><br>
+[Slides] [Video] (Available after dev mtg)<br>
+A few years ago, a few <s>reckless</s> brave pioneers set out to switch Android to Clang/LLVM for its userland toolchain. Over the past few months, a new band of <s>willing victims</s> adventurers decided that it wasn’t fair to leave the kernel out, so they embarked to finish the quest of the LLVMLinux folks with the Linux kernel. This is the epic tale of their journey. From the Valley of Miscompiles to the peaks of Warning-Clean, we will share the glorious stories of their fiercest battles.
+<br>
+This talk is for anyone interested in deploying Clang/LLVM for a large production software codebase. It will cover both userland and kernel challenges and results. We will focus on our experiences in diagnosing and resolving a multitude of issues that similar <s>knights</s> software engineers might encounter when transitioning other large projects. Best practices that we have discovered will also be shared, in order to help other advocates in their own quests to spread Clang/LLVM to their projects.
+</p>
+
 
 <p>
 <b><a id="bof1">Storing Clang data for IDEs and static analysis
@@ -773,7 +963,28 @@ This proposal is about increasing the re
 Our goal is not limited to exploiting the data flow facts imported from LLVM to help our superoptimizer, Souper. We also improve LLVM’s data flow analysis by finding imprecision and making suggestions. It is harder to implement optimizations with path conditions in the LLVM compiler. To avoid writing fragile optimizations without any additional information, we automatically scan Souper’s optimizations for path conditions that map onto data flow facts already known to LLVM and suggest corresponding optimizations. The interesting set of optimizations found by Souper has also resulted in patches that improve LLVM’s data flow analysis, some of which have already been accepted.
 </p>
 
+<b><a id="poster1">Venerable Variadic Vulnerabilities Vanquished
+</a></b><br><i>Priyam Biswas, Alessandro Di Federico, Scott A. Carr, Prabhu Rajasekaran, Stijn Volckaert, Yeoul Na, Michael Franz and Mathias Payer</i><br>
+[Slides] [Video] (Available after dev mtg)<br>
+Programming languages such as C and C++ support variadic functions, i.e., functions that accept a variable number of arguments (e.g., printf). While variadic functions are flexible, they are inherently not type-safe. In fact, the semantics and parameters of variadic functions are defined implicitly by their implementation. It is left to the programmer to ensure that the caller and callee follow this implicit specification, without the help of a static type checker. An adversary can take advantage of a mismatch between the argument types used by the caller of a variadic function and the types expected by the callee to violate the language semantics and to tamper with memory. Format string attacks are the most popular example of such a mismatch. Indirect function calls can be exploited by an adversary to divert execution through illegal paths. Control Flow Integrity (CFI) restricts call targets according to the function prototype, which, for variadic functions, does not include all the actual parameters. However, as shown by our case study, current CFI implementations are mainly limited to non-variadic functions and fail to address this potential attack vector. Defending against such an attack requires a stateful dynamic check. We present HexVASAN, a compiler-based sanitizer that type-checks variadic calls and thus prevents any attack via variadic functions, whether called directly or indirectly. The key idea is to record metadata at the call site and verify parameters and their types at the callee whenever they are used at runtime. Our evaluation shows that HexVASAN is (i) practically deployable, as the measured overhead is negligible (0.72%), and (ii) effective, as we show in several case studies.
+</p>
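+<p>
+A minimal C sketch (editor's illustration, not taken from the poster) of the kind of caller/callee mismatch described above; because printf's variadic contract is only an implicit convention, the static type checker cannot reject the bad call:
+</p>
+<pre>
+int printf(const char *fmt, ...);   /* variadic: argument types are an implicit contract */
+
+int main(void) {
+    int uid = 42;
+    /* The caller passes an int, but the callee will read a char pointer for %s:
+       undefined behavior that the static type system cannot reject. A HexVASAN-style
+       runtime check of the recorded argument types can flag this call. */
+    printf("user: %s\n", uid);
+    return 0;
+}
+</pre>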
+
+<b><a id="poster2">Extending LLVM’s masked.gather/scatter Intrinsic to Read/write Contiguous Chunks from/to Arbitrary Locations.
+</a></b><br><i>Farhana Aleen, Elena Demikhovsky and Hideki Saito</i><br>
+[Slides] [Video] (Available after dev mtg)<br>
+Vectorization is an important and growing part of the LLVM ecosystem. With new SIMD ISA extensions like gather/scatter instructions, it is not uncommon to vectorize complex, irregular data access patterns, and LLVM’s gather/scatter intrinsics serve these cases well. Today LLVM’s vectorizer represents a group of adjacent interleaved accesses using a wide load followed by shuffle instructions, which are further optimized by target-specific optimizations. This covers the case where multiple strided loads/stores together access a single contiguous chunk of memory, but currently there is no way to represent the case where multiple gathers access a group of contiguous chunks of memory. This poster shows how a group of adjacent non-interleaved accesses can be represented using the wide-vector+shuffles scheme and how it can be further optimized by the targets to provide additional performance gain on top of regular vectorization.
+</p>
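+<p>
+A small C illustration (editor's sketch, not from the poster) of the two access patterns contrasted above: (a) is a strided access group inside one contiguous buffer, which the vectorizer can already express as a wide load plus shuffles; (b) reads contiguous two-element chunks from arbitrary base addresses, which today can only be expressed as independent gathers:
+</p>
+<pre>
+/* (a) Strided accesses into one contiguous buffer. */
+void pairs_strided(const float *a, float *out, int n) {
+    for (int i = 0; i &lt; n; ++i)
+        out[i] = a[3 * i] + a[3 * i + 1];
+}
+
+/* (b) Contiguous pairs read from arbitrary base addresses; each element
+       currently becomes a separate gather lane, even though the two loads
+       in every iteration are adjacent in memory. */
+void pairs_gathered(const float *const *bases, float *out, int n) {
+    for (int i = 0; i &lt; n; ++i)
+        out[i] = bases[i][0] + bases[i][1];
+}
+</pre>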
+
 
+<b><a id="poster3">An LLVM based Loop Profiler
+</a></b><br>Shalini Jain, Kamlesh Kumar, Suresh Purini, Dibyendu Das and Ramakrishna Upadrasta</i><br>
+[Slides] [Video] (Available after dev mtg)<br>
+It is well understood that programs spend most of their time in loops. The application writer may want to know the time taken by each loop in a large program, so that they can focus on those loops when applying optimizations. Loop profiling is a way to calculate loop-based run-time information such as execution time, cache-miss equations and other runtime metrics, which helps us analyze code and fix performance-related issues in the code base. This is achieved by instrumenting/annotating the existing input program. Loop profilers already exist for conventional languages like C++ and Java, in both the open-source and commercial domains. However, as far as we know, no such loop profiler is available for LLVM IR; such a tool would help LLVM users analyze loops in LLVM IR. Our work focuses on developing such a generic loop profiler for LLVM IR, which can thus be used for any language that compiles to LLVM IR.
+<br>
+Our current work proposes an LLVM-based loop profiler that works at the IR level and reports the execution time and total number of clock ticks for each loop. Currently, we focus on the innermost loops as well as each individual loop when collecting run-time profiling data. Our profiler works on LLVM IR and inserts instrumentation code into the entry and exit blocks of each loop. It reports the number of clock ticks and the execution time for each loop of the input program, and it also appends instrumentation code to the exit block of the outermost loop to calculate the total and average number of clocks for each loop. We are currently working to capture other runtime metrics such as the number of cache misses and the number of registers required.
+<br>
+We have results from SPEC CPU 2006 which demonstrate that, for all benchmarks in the suite, very few loops are highly compute-intensive. For most of the other loops, either control does not reach them or they take negligible execution time.
+</p>
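+<p>
+A hypothetical source-level view in C (editor's sketch; the function names and loop id are illustrative, not the tool's actual runtime interface) of what instrumentation inserted into a loop's entry and exit blocks amounts to:
+</p>
+<pre>
+#include &lt;stdio.h&gt;
+#include &lt;time.h&gt;
+
+/* Illustrative reporting hook, standing in for whatever the profiler's runtime records. */
+static void loop_prof_report(int loop_id, clock_t ticks) {
+    printf("loop %d: %ld clock ticks\n", loop_id, (long)ticks);
+}
+
+void scale(float *a, int n) {
+    clock_t start = clock();                  /* inserted in the loop preheader  */
+    for (int i = 0; i &lt; n; ++i)
+        a[i] = 2.0f * a[i] + 1.0f;
+    loop_prof_report(0, clock() - start);     /* inserted in the loop exit block */
+}
+
+int main(void) {
+    float a[1000] = {0};
+    scale(a, 1000);
+    return 0;
+}
+</pre>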
 
 <div class="www_sectiontitle" id="grant">Travel Grants for Students</div>
 <p>



