[www] r192528 - Add blurb about RTSC. Also do some reformatting while in here.

Bill Wendling isanbard at gmail.com
Sat Oct 12 02:09:59 PDT 2013


Author: void
Date: Sat Oct 12 04:09:59 2013
New Revision: 192528

URL: http://llvm.org/viewvc/llvm-project?rev=192528&view=rev
Log:
Add blurb about RTSC. Also do some reformatting while in here.

Modified:
    www/trunk/ProjectsWithLLVM/index.html

Modified: www/trunk/ProjectsWithLLVM/index.html
URL: http://llvm.org/viewvc/llvm-project/www/trunk/ProjectsWithLLVM/index.html?rev=192528&r1=192527&r2=192528&view=diff
==============================================================================
--- www/trunk/ProjectsWithLLVM/index.html (original)
+++ www/trunk/ProjectsWithLLVM/index.html Sat Oct 12 04:09:59 2013
@@ -4,25 +4,26 @@
 <div class="www_text">
 
 <p>This page is an incomplete list of the projects built with LLVM, sorted in
-reverse chronological order.  The idea of this list is to show some of the
-things that have been done with LLVM for various course projects or for other
-purposes, which can be used as a source of ideas for future projects.  Another
-good place to look is the list of <a href="../pubs">published papers and
-theses that use LLVM</a>.</p>
+   reverse chronological order.  The idea of this list is to show some of the
+   things that have been done with LLVM for various course projects or for other
+   purposes, which can be used as a source of ideas for future projects.
+   Another good place to look is the list of <a href="../pubs">published papers
+   and theses that use LLVM</a>.</p>
 
 <p>Note that this page is not intended to reflect the current state of LLVM or
-show endorsement of any particular project over another.  This is just a
-showcase of the hard work various people have done.  It also shows a bit about
-how the capabilities of LLVM have evolved over time.</p>
+   show endorsement of any particular project over another.  This is just a
+   showcase of the hard work various people have done.  It also shows a bit
+   about how the capabilities of LLVM have evolved over time.</p>
 
 <p>We are always looking for new contributions to this page.  If you work on a
-project that uses LLVM for a course or a publication, we would definitely like
-to hear about it, and would like to include your work here as well.  Please just
-send email to <a href="mailto:llvmdev at cs.uiuc.edu">the LLVMdev mailing list</a> with an entry
-like those below.  We're not particularly looking for source code (though we
-welcome source-code contributions through the normal channels), but instead
-would like to put up the "polished results" of your work, including reports,
-papers, presentations, posters, or anything else you have.</p>
+   project that uses LLVM for a course or a publication, we would definitely
+   like to hear about it, and would like to include your work here as well.
+   Please just send email to <a href="mailto:llvmdev at cs.uiuc.edu">the LLVMdev
+   mailing list</a> with an entry like those below.  We're not particularly
+   looking for source code (though we welcome source-code contributions through
+   the normal channels), but instead would like to put up the "polished results"
+   of your work, including reports, papers, presentations, posters, or anything
+   else you have.</p>
 
 </div>
 
@@ -65,6 +66,7 @@ papers, presentations, posters, or anyth
 <li><a href="#emscripten">Emscripten: An LLVM to JavaScript Compiler</a></li>
 <li><a href="#rust">Rust: a safe, concurrent, practical language</a></li>
 <li><a href="#ESL">ESL: Embedded Systems Language</a></li>
+<li><a href="#RTSC">RTSC: The Real-Time Systems Compiler</a></li>
 </ul>
 
 </div>
@@ -79,11 +81,19 @@ By Jérôme Gorin (ARTEMIS/Institut
 </div>
 
 <div class="www_text">
-<p>
-<a href="https://github.com/orcc/jade">Jade</a> (Just-in-time Adaptive Decoder Engine) is a generic video decoder engine using LLVM for just-in-time compilation of video decoder configurations. Those configurations are designed by MPEG Reconfigurable Video Coding (RVC) committee. MPEG RVC standard is built on a stream-based dataflow representation of decoders. It is composed of a standard library of coding tools written in RVC-CAL language and a dataflow configuration -- block diagram -- of a decoder.
-</p>
-<p>
-Jade project is hosted as part of the Open RVC-CAL Compiler (<a href="http://orcc.sf.net">Orcc</a>) and requires it to translate the RVC-CAL standard library of video coding tools into an LLVM assembly code.</p>
+<p><a href="https://github.com/orcc/jade">Jade</a> (Just-in-time Adaptive
+   Decoder Engine) is a generic video decoder engine using LLVM for just-in-time
+   compilation of video decoder configurations. Those configurations are
+   designed by the MPEG Reconfigurable Video Coding (RVC) committee. The MPEG
+   RVC standard is built on a stream-based dataflow representation of decoders.
+   It is composed of a standard library of coding tools written in the RVC-CAL
+   language and a dataflow configuration — block diagram — of a decoder.</p>
+
+<p>The Jade project is hosted as part of the Open RVC-CAL Compiler
+   (<a href="http://orcc.sf.net">Orcc</a>) and requires it to translate the
+   RVC-CAL standard library of video coding tools into LLVM assembly
+   code.</p>
+
 </div>
 
 <!--=========================================================================-->
@@ -93,11 +103,11 @@ Jade project is hosted as part of the Op
 <!--=========================================================================-->
 
 <div class="www_text">
-<p><a href="http://code.google.com/p/crack-language/">Crack</a> aims to provide 
-the ease of development of a scripting language with the performance of a 
-compiled language. The language derives concepts from C++, Java and Python, 
-incorporating object-oriented programming, operator overloading and strong 
-typing.</p>
+<p><a href="http://code.google.com/p/crack-language/">Crack</a> aims to provide
+   the ease of development of a scripting language with the performance of a
+   compiled language. The language derives concepts from C++, Java and Python,
+   incorporating object-oriented programming, operator overloading and strong
+   typing.</p>
 </div>
 
 
@@ -113,8 +123,8 @@ By <a href="http://rubini.us/community.h
 
 <div class="www_text">
 <p><a href="http://github.com/evanphx/rubinius">Rubinius</a> is a new virtual
-machine for Ruby. It leverages LLVM to dynamically compile Ruby code down to
-machine code using LLVM's JIT.</p>
+   machine for Ruby. It leverages LLVM to dynamically compile Ruby code down to
+   machine code using LLVM's JIT.</p>
 </div>
 
 
@@ -124,17 +134,19 @@ machine code using LLVM's JIT.</p>
 </div>
 
 <div class="www_subsubsection">
-By the
-<a href="http://www.macruby.org/project.html">MacRuby Project Team</a>
+By the <a href="http://www.macruby.org/project.html">MacRuby Project Team</a>
 </div>
 
 <div class="www_text">
-<p>
-<a href="http://macruby.org">MacRuby</a> is an implementation of Ruby on top of core Mac OS X technologies, such as the Objective-C common runtime and garbage collector, and the CoreFoundation framework. It is principally developed by Apple and aims at enabling the creation of full-fledged Mac OS X applications.</p>
+<p><a href="http://macruby.org">MacRuby</a> is an implementation of Ruby on top
+   of core Mac OS X technologies, such as the Objective-C common runtime and
+   garbage collector, and the CoreFoundation framework. It is principally
+   developed by Apple and aims at enabling the creation of full-fledged Mac OS X
+   applications.</p>
 
-<p>
-MacRuby uses LLVM for optimization passes, JIT and AOT compilation of Ruby expressions. It also uses zero-cost DWARF exceptions to implement Ruby exception handling.
-</p>
+<p>MacRuby uses LLVM for optimization passes, JIT and AOT compilation of Ruby
+   expressions. It also uses zero-cost DWARF exceptions to implement Ruby
+   exception handling.</p>
 
 </div>
 
@@ -148,21 +160,18 @@ By the Flexible Design Methodologies for
 </div>
 
 <div class="www_text">
-<p>
-<a href="http://tce.cs.tut.fi">TCE</a> is a toolset for designing 
-application-specific processors (ASP) based on
-the Transport triggered architecture (TTA). The toolset provides a complete
-co-design flow from C programs down to synthesizable VHDL and parallel program
-binaries. Processor customization points include the register files, function
-units, supported operations, and the interconnection network.</p>
-
-<p>
-TCE uses llvm-gcc and LLVM for C language support, target independent
-optimizations and also for parts of code generation. TCE generates new
-LLVM-based code generators "on the fly" for the designed TTA processors and
-loads them in to the compiler backend as runtime libraries to avoid per-target
-recompilation of larger parts of the compiler chain.
-</p>
+<p><a href="http://tce.cs.tut.fi">TCE</a> is a toolset for designing
+   application-specific processors (ASP) based on the Transport triggered
+   architecture (TTA). The toolset provides a complete co-design flow from C
+   programs down to synthesizable VHDL and parallel program binaries. Processor
+   customization points include the register files, function units, supported
+   operations, and the interconnection network.</p>
+
+<p>TCE uses llvm-gcc and LLVM for C language support, target independent
+   optimizations and also for parts of code generation. TCE generates new
+   LLVM-based code generators "on the fly" for the designed TTA processors and
+   loads them into the compiler backend as runtime libraries to avoid
+   per-target recompilation of larger parts of the compiler chain.</p>
 </div>
 
 <!--=========================================================================-->
@@ -175,24 +184,22 @@ By Gary Benson (Red Hat, USA)
 </div>
 
 <div class="www_text">
-<p>
-The <a href="http://icedtea.classpath.org/">IcedTea</a>
-project was formed to provide a harness to build OpenJDK
-using only free software build tools and to provide replacements for
-the not-yet free parts of OpenJDK.  Over time, various extensions to
-OpenJDK have been included in IcedTea.
-</p>
-<p> One of these extensions is
-<a href="http://openjdk.java.net/projects/zero/">Zero.</a>
-OpenJDK only supports x86 and SPARC
-processors; Zero is a processor-independent layer that allows OpenJDK
-to build and run using any processor.  Zero contains a JIT compiler
-called <a href="http://icedtea.classpath.org/wiki/ZeroSharkFaq">Shark</a>
-which uses LLVM to provide native code generation without
-introducing processor-dependent code.
-</p>
-<p> The development of Zero and Shark were funded by Red Hat.
-</p>
+<p>The <a href="http://icedtea.classpath.org/">IcedTea</a> project was formed to
+   provide a harness to build OpenJDK using only free software build tools and
+   to provide replacements for the not-yet free parts of OpenJDK.  Over time,
+   various extensions to OpenJDK have been included in IcedTea.</p>
+
+<p>One of these extensions
+   is <a href="http://openjdk.java.net/projects/zero/">Zero</a>.  OpenJDK only
+   supports x86 and SPARC processors; Zero is a processor-independent layer
+   that allows OpenJDK to build and run using any processor.  Zero contains a
+   JIT compiler
+   called <a href="http://icedtea.classpath.org/wiki/ZeroSharkFaq">Shark</a>,
+   which uses LLVM to provide native code generation without introducing
+   processor-dependent code.</p>
+
+<p>The development of Zero and Shark was funded by Red Hat.</p>
+
 </div>
 
 <!--=========================================================================-->
@@ -205,24 +212,23 @@ By Albert Graef, Johannes Gutenberg Univ
 </div>
 
 <div class="www_text">
-<p> <a href="http://pure-lang.googlecode.com/">Pure</a> is an 
-algebraic/functional
-programming language based on term rewriting. Programs are collections
-of equations which are used to evaluate expressions in a symbolic
-fashion. Pure offers dynamic typing, eager and lazy evaluation, lexical
-closures, a hygienic macro system (also based on term rewriting),
-built-in list and matrix support (including list and matrix
-comprehensions) and an easy-to-use C interface. The interpreter uses
-LLVM as a backend to JIT-compile Pure programs to fast native code.</p>
+<p><a href="http://pure-lang.googlecode.com/">Pure</a> is an algebraic /
+   functional programming language based on term rewriting. Programs are
+   collections of equations which are used to evaluate expressions in a symbolic
+   fashion. Pure offers dynamic typing, eager and lazy evaluation, lexical
+   closures, a hygienic macro system (also based on term rewriting), built-in
+   list and matrix support (including list and matrix comprehensions) and an
+   easy-to-use C interface. The interpreter uses LLVM as a backend to
+   JIT-compile Pure programs to fast native code.</p>
 
 <p>In addition to the usual algebraic data structures, Pure also has
-MATLAB-style matrices in order to support numeric computations and
-signal processing in an efficient way. Pure is mainly aimed at
-mathematical applications right now, but it has been designed as a
-general purpose language. The dynamic interpreter environment and the C
-interface make it possible to use it as a kind of functional scripting
-language for many application areas.
-</p>
+   MATLAB-style matrices in order to support numeric computations and signal
+   processing in an efficient way. Pure is mainly aimed at mathematical
+   applications right now, but it has been designed as a general purpose
+   language. The dynamic interpreter environment and the C interface make it
+   possible to use it as a kind of functional scripting language for many
+   application areas.</p>
+
 </div>
 
 <!--=========================================================================-->
@@ -235,12 +241,11 @@ By the <a href="http://groups.google.com
 </div>
 
 <div class="www_text">
-<p>
-<a href="http://www.dsource.org/projects/ldc">LDC</a> is a compiler for the <a 
-href="http://www.digitalmars.com/d">D programming Language</a>. It is based on
-the latest DMD frontend and uses LLVM as its backend.  LLVM provides a
-fast and modern backend for high quality code generation.</a>
-</p>
+<p><a href="http://www.dsource.org/projects/ldc">LDC</a> is a compiler for
+   the <a href="http://www.digitalmars.com/d">D programming Language</a>. It is
+   based on the latest DMD frontend and uses LLVM as its backend.  LLVM provides
+   a fast and modern backend for high quality code generation.</p>
+
 </div>
 
 <!--=========================================================================-->
@@ -253,14 +258,13 @@ By <a href="http://staff.polito.it/silva
 </div>
 
 <div class="www_text">
-<p>
-<a href="http://staff.polito.it/silvano.rivoira/HowToWriteYourOwnCompiler.htm">
-This project</a> describes the development of a compiler front end producing 
-LLVM Assembly Code for a Java-like programming language.  It is used in a 
-course on Compilers to show how to incrementally design and implement the 
-successive phases of the translation process by means of  common tools such 
-as JFlex and Cup.  The source code developed at each step is made available.
-</p>
+<p><a href="http://staff.polito.it/silvano.rivoira/HowToWriteYourOwnCompiler.htm">
+   This project</a> describes the development of a compiler front end producing
+   LLVM Assembly Code for a Java-like programming language.  It is used in a
+   course on Compilers to show how to incrementally design and implement the
+   successive phases of the translation process by means of common tools such as
+   JFlex and Cup.  The source code developed at each step is made available.</p>
+
 </div>
 
 <!--=========================================================================-->
@@ -273,28 +277,21 @@ By Fernando Pereira and Jens Palsberg, U
 </div>
 
 <div class="www_text">
-<p>
-In this project, we have shown that register allocation can be viewed
-as solving a collection of puzzles.
-We model the register file as a puzzle board and
-the program variables as puzzle pieces;
-pre-coloring and register aliasing fit in naturally.
-For architectures such as x86, SPARC V8, and StrongARM,
-we can solve the puzzles in polynomial time, and we have augmented
-the puzzle solver with a simple heuristic for spilling.
-For SPEC CPU2000, our implementation is as fast as
-the extended version of linear scan used by LLVM.
-Our implementation produces Pentium code that is of similar quality to the
-code produced by the slower, state-of-the-art iterated register coalescing
-algorithm of George and Appel augmented with extensions by Smith, Ramsey, and
-Holloway.
-</p>
-
-<p>
-<a href="http://compilers.cs.ucla.edu/fernando/projects/puzzles/">Project
-   page</a> with a link to a tool that verifies the output of LLVM's register 
-   allocator.
-</p>
+<p>In this project, we have shown that register allocation can be viewed as
+   solving a collection of puzzles.  We model the register file as a puzzle
+   board and the program variables as puzzle pieces; pre-coloring and register
+   aliasing fit in naturally.  For architectures such as x86, SPARC V8, and
+   StrongARM, we can solve the puzzles in polynomial time, and we have augmented
+   the puzzle solver with a simple heuristic for spilling.  For SPEC CPU2000,
+   our implementation is as fast as the extended version of linear scan used by
+   LLVM.  Our implementation produces Pentium code that is of similar quality to
+   the code produced by the slower, state-of-the-art iterated register
+   coalescing algorithm of George and Appel augmented with extensions by Smith,
+   Ramsey, and Holloway.</p>
+
+<p><a href="http://compilers.cs.ucla.edu/fernando/projects/puzzles/">Project
+   page</a> with a link to a tool that verifies the output of LLVM's register
+   allocator.</p>
 
 </div>
 
@@ -310,19 +307,17 @@ By <a href="http://www.grame.fr">Grame,
 
 <div class="www_text">
 
-<p>
-FAUST is a compiled language for real-time audio signal processing. The name
-FAUST stands for Functional AUdio STream. Its programming model combines two
-approaches: functional programming and block diagram composition. You can
-think of FAUST as a structured block diagram language with a textual syntax.
-The project aims at developing a new backend for Faust that will directly
-produce LLVM IR instead of the C++ class Faust currently produces. With a
-(yet to come) library version of the Faust compiler, it will allow
-developers to embed Faust + LLVM JIT to dynamically define, compile on the
-fly and execute Faust plug-ins. LLVM IR and tools also allows some nice
-bytecode manipulations like "partial evaluation/specialization" that will
-also be investigated.
-</p>
+<p>FAUST is a compiled language for real-time audio signal processing. The name
+   FAUST stands for Functional AUdio STream. Its programming model combines two
+   approaches: functional programming and block diagram composition. You can
+   think of FAUST as a structured block diagram language with a textual syntax.
+   The project aims at developing a new backend for Faust that will directly
+   produce LLVM IR instead of the C++ class Faust currently produces. A
+   (yet-to-come) library version of the Faust compiler will allow developers to
+   embed Faust + LLVM JIT to dynamically define, compile on the fly, and
+   execute Faust plug-ins. LLVM IR and tools also allow some nice bytecode
+   manipulations like "partial evaluation/specialization" that will also be
+   investigated.</p>
 
 </div>
 
@@ -339,20 +334,19 @@ By Adobe Systems Incorporated
 
 <div class="www_text">
 
-<p>
-Efficient use of the computational resources available for image processing is
-a goal of the <a href="http://labs.adobe.com/wiki/index.php/AIF_Toolkit">Adobe 
-Image Foundation</a> project.  Our language, "Hydra", is used 
-to describe single- and multi-stage image processing kernels, which are then 
-compiled and run on a target machine within a larger application.  Similarly 
-to how its namesake had many heads, our Hydra can be run on the GPU or 
-alternately on the host CPU(s).  AIF uses LLVM for our CPU path.</p>
-
-<p>
-The first Adobe application to use our system is the soon-to-ship After 
-Effects CS3.  We welcome you to try out our public beta found at 
-<a href="http://labs.adobe.com/technologies/aftereffectscs3/">labs.adobe.com</a>.
-</p>
+<p>Efficient use of the computational resources available for image processing
+   is a goal of
+   the <a href="http://labs.adobe.com/wiki/index.php/AIF_Toolkit">Adobe Image
+   Foundation</a> project.  Our language, "Hydra", is used to describe single-
+   and multi-stage image processing kernels, which are then compiled and run on
+   a target machine within a larger application.  Similarly to how its namesake
+   had many heads, our Hydra can be run on the GPU or alternately on the host
+   CPU(s).  AIF uses LLVM for our CPU path.</p>
+
+<p>The first Adobe application to use our system is the soon-to-ship After
+   Effects CS3.  We welcome you to try out our public beta found
+   at <a href="http://labs.adobe.com/technologies/aftereffectscs3/">labs.adobe.com</a>.</p>
+
 </div>
 
 <!--=========================================================================-->
@@ -368,20 +362,16 @@ By Domagoj Babic, UBC.
 
 <div class="www_text">
 
-<p>
-<a href="http://www.cs.ubc.ca/~babic/index_calysto.htm">Calysto</a>
-is a scalable context- and path-sensitive SSA-based static
-assertion checker. Unlike other static
-checkers, Calysto analyzes SSA directly, which means that it not only
-checks the original code, but also the front-end (including
-SSA-optimizations) of the compiler which
-was used to compile the code. The advantage of doing static checking on
-the SSA is language independency and the fact that the checked code is much
-closer to the generated assembly than the source code.
-</p>
+<p><a href="http://www.cs.ubc.ca/~babic/index_calysto.htm">Calysto</a> is a
+   scalable context- and path-sensitive SSA-based static assertion
+   checker. Unlike other static checkers, Calysto analyzes SSA directly, which
+   means that it not only checks the original code, but also the front-end
+   (including SSA-optimizations) of the compiler which was used to compile the
+   code. The advantage of doing static checking on the SSA is language
+   independence and the fact that the checked code is much closer to the
+   generated assembly than the source code.</p>
 
-<p>
-Several main factors contribute to Calysto's scalability:
+<p>Several main factors contribute to Calysto's scalability:
    <ul>
      <li> A novel SSA symbolic execution algorithm that exploits the
      structure of the control flow graph to minimize the number of
@@ -397,15 +387,13 @@ Several main factors contribute to Calys
   </ul>
 </p>
 
-<p>
-Currently, Calysto is still in the development phase, and the first results
-are encouraging. Most likely, the first public release will happen some
-time in the fall 2007.
-<a href="http://www.cs.ubc.ca/~babic/index_spear.htm">Spear</a>
-and Calysto generated
-<a href="http://www.cs.ubc.ca/~babic/index_benchmarks.htm">benchmarks</a>
-are available.
-</p>
+<p>Currently, Calysto is still in the development phase, and the first results
+   are encouraging. Most likely, the first public release will happen sometime
+   in the fall of
+   2007. <a href="http://www.cs.ubc.ca/~babic/index_spear.htm">Spear</a> and
+   Calysto
+   generated <a href="http://www.cs.ubc.ca/~babic/index_benchmarks.htm">benchmarks</a>
+   are available.</p>
 
 </div>
 
@@ -420,42 +408,34 @@ By Fernando Pereira, UCLA.
 </div>
 
 <div class="www_text">
-<p>
-The register allocation problem has an exact polynomial solution when restricted
-to programs in the Single Static Assignment (SSA) form.
-Although striking, this major theoretical accomplishment has yet to be endorsed empirically.
-This project consists in the implementation of a complete
-SSA-based register allocator using the
-<A href="http://llvm.org/" target="blank">LLVM</A> compiler framework.
-We have implemented a static transformation of the target program that simplifies the
-implementation of the register allocator and improves the quality of the code that
-it generates.
-We also describe novel techniques to perform register coalescing and
-SSA-elimination.
-In order to validate our allocation technique, we extensively compare different
-flavors of our method against a modern and heavily tuned extension of
-the linear-scan register allocator described
-<A href="2004-Fall-CS426-LS.pdf">here</A>.
-The proposed algorithm consistently produces faster code when the target
-architecture provides a small number of registers.
-For instance, we have achieved an average speed-up of 9.2% when limiting the
-number of registers to four general purpose and three reserved register.
-By augmenting the algorithm with an aggressive coalescing technique, we have
-
-been able to raise the speed improvement up to 13.0%.
-</p>
 
-<P>
-This project was supported by the google's
-<A href="http://code.google.com/soc/" target="blank">Summer of Code</A>
-initiative. Fernando Pereira is funded by
-<A href="http://www.capes.gov.br/capes/portal/" target="blank">CAPES</A>
-under process number 218603-9.
-</P>
+<p>The register allocation problem has an exact polynomial solution when
+   restricted to programs in the Static Single Assignment (SSA) form.  Although
+   striking, this major theoretical accomplishment has yet to be endorsed
+   empirically.  This project consists in the implementation of a complete
+   SSA-based register allocator using the <a href="http://llvm.org/"
+   target="blank">LLVM</a> compiler framework.  We have implemented a static
+   transformation of the target program that simplifies the implementation of
+   the register allocator and improves the quality of the code that it
+   generates.  We also describe novel techniques to perform register coalescing
+   and SSA-elimination.  In order to validate our allocation technique, we
+   extensively compare different flavors of our method against a modern and
+   heavily tuned extension of the linear-scan register allocator described
+   <a href="2004-Fall-CS426-LS.pdf">here</a>.  The proposed algorithm
+   consistently produces faster code when the target architecture provides a
+   small number of registers.  For instance, we have achieved an average
+   speed-up of 9.2% when limiting the number of registers to four general
+   purpose and three reserved registers.  By augmenting the algorithm with an
+   aggressive coalescing technique, we have been able to raise the speed
+   improvement up to 13.0%.</p>
+
+<p>This project was supported by Google's
+   <a href="http://code.google.com/soc/" target="blank">Summer of Code</a>
+   initiative. Fernando Pereira is funded
+   by <a href="http://www.capes.gov.br/capes/portal/" target="blank">CAPES</a>
+   under process number 218603-9.</p>
 
-<p>
-<a href="http://compilers.cs.ucla.edu/fernando/projects/soc/">Project page.</a>
-</p>
+<p><a href="http://compilers.cs.ucla.edu/fernando/projects/soc/">Project page.</a></p>
 
 </div>
 
@@ -470,32 +450,29 @@ By Michael O. McCracken, UCSD.
 </div>
 
 <div class="www_text">
-<p>
-The <a href="http://www.cs.ucsd.edu/~mmccrack/lens/">LENS Project</a> 
-is intended to improve the task of measuring programs and
-investigating their behavior.  LENS defines an external representation 
-of a program in XML to store useful information
-that is accessible based on program structure, including loop 
-structure information.</p>
-
-<p>Lens defines a flexible naming scheme for program components 
-based on XPath and the LENS XML document structure. This allows
-users and tools to selectively query program behavior from a
-uniform interface, allowing users or tools to ask a variety of
-questions about program components, which can be answered by any
-tool that understands the query. Queries, metrics and program
-structure are all stored in the LENS file, and are annotated with 
-version names that support historical comparison and scientific 
-record-keeping.</p>
-
-<p>Compiler writers can use LENS to expose results of transformations
-and analyses for a program easily, without worrying about display or
-handling information overload. This functionality has been
-demonstrated using LLVM. LENS uses LLVM for two purposes: first, to
-generate the initial program structure file in XML using an LLVM
-pass, and second, as a demonstration of the advantages of selective
-querying for compiler information, with an interface built into LLVM
-that allows LLVM passes to easily respond to queries in a LENS file.</p>
+
+<p>The <a href="http://www.cs.ucsd.edu/~mmccrack/lens/">LENS Project</a> is
+   intended to improve the task of measuring programs and investigating their
+   behavior.  LENS defines an external representation of a program in XML to
+   store useful information that is accessible based on program structure,
+   including loop structure information.</p>
+
+<p>LENS defines a flexible naming scheme for program components based on XPath
+   and the LENS XML document structure. This allows users and tools to
+   selectively query program behavior from a uniform interface, allowing users
+   or tools to ask a variety of questions about program components, which can be
+   answered by any tool that understands the query. Queries, metrics and program
+   structure are all stored in the LENS file, and are annotated with version
+   names that support historical comparison and scientific record-keeping.</p>
+
+<p>Compiler writers can use LENS to expose results of transformations and
+   analyses for a program easily, without worrying about display or handling
+   information overload. This functionality has been demonstrated using
+   LLVM. LENS uses LLVM for two purposes: first, to generate the initial program
+   structure file in XML using an LLVM pass, and second, as a demonstration of
+   the advantages of selective querying for compiler information, with an
+   interface built into LLVM that allows LLVM passes to easily respond to
+   queries in a LENS file.</p>
 
 </div>
 
@@ -510,21 +487,21 @@ By <a href="http://www.lanl.gov/">Los Al
 </div>
 
 <div class="www_text">
-<p>
-<a href="http://trident.sf.net/">Trident</a> is a compiler for
-floating point algorithms written in C, producing Register Transfer
-Level VHDL descriptions of circuits targetted for reconfigurable logic
-devices. Trident automatically extracts parallelism and pipelines loop
-bodies using conventional compiler optimizations and scheduling
-techniques. Trident also provides an open framework for
-experimentation, analysis, and optimization of floating point
-algorithms on FPGAs and the flexibility to easily integrate custom
-floating point libraries.</p>
-
-<p>
-Trident uses the LLVM C/C++ front-end to parse input languages and
-produce low-level platform independent code.</p>
+
+<p><a href="http://trident.sf.net/">Trident</a> is a compiler for floating point
+   algorithms written in C, producing Register Transfer Level VHDL descriptions
+   of circuits targeted for reconfigurable logic devices. Trident automatically
+   extracts parallelism and pipelines loop bodies using conventional compiler
+   optimizations and scheduling techniques. Trident also provides an open
+   framework for experimentation, analysis, and optimization of floating point
+   algorithms on FPGAs and the flexibility to easily integrate custom floating
+   point libraries.</p>
+
+<p>Trident uses the LLVM C/C++ front-end to parse input languages and produce
+   low-level platform independent code.</p>
+
 </div>
+
 <!--=========================================================================-->
 <div class="www_subsection">
   <a name="ascenium">Ascenium Reconfigurable Processor Compiler</a><br>
@@ -536,30 +513,29 @@ By <a href="http://www.ascenium.com/">As
 </div>
 
 <div class="www_text">
-<p>
-Ascenium is a fine-grained continuously reconfigurable processor that
-handles most instructions at hard-wired speeds while retaining the ability
-to be targeted by conventional high level languages, giving users "all the
-performance of hardware, all the ease of software."</p>
-
-<p>
-The Ascenium team prefers LLVM bytecodes as input to its code generator for
-several reasons:
+
+<p>Ascenium is a fine-grained continuously reconfigurable processor that handles
+   most instructions at hard-wired speeds while retaining the ability to be
+   targeted by conventional high level languages, giving users "all the
+   performance of hardware, all the ease of software."</p>
+
+<p>The Ascenium team prefers LLVM bytecodes as input to its code generator for
+   several reasons:
 <ul>
-<li>LLVM's all inclusive format makes global optimizations and
-consolidations such as global data dependency analysis easy.</li>
+<li>LLVM's all-inclusive format makes global optimizations and consolidations
+    such as global data dependency analysis easy.</li>
 <li>LLVM's rich and strictly typed format generally makes subtle and
-sophisticated optimizations easy.</li>
-<li>LLVM's great ancillary tools and documentation make it easy to
-work with -- even hardware geeks can understand it!</li>
+    sophisticated optimizations easy.</li>
+<li>LLVM's great ancillary tools and documentation make it easy to work with
+    — even hardware geeks can understand it!</li>
 </ul>
 </p>
 
 <p>Ascenium's <a href="http://www.hotchips.org/archives/hc17/">HOT CHIPS 17</a>
-<a href="Ascenium.pdf">presentation</a> describes the architecture and compiler
-in more detail.</p>
-</div>
+   <a href="Ascenium.pdf">presentation</a> describes the architecture and
+   compiler in more detail.</p>
 
+</div>
 
 <!--=========================================================================-->
 <div class="www_subsection">
@@ -570,19 +546,19 @@ in more detail.</p>
 <div class="www_subsubsection">By Tobias Nurmiranta</div>
 
 <div class="www_text">
-<p>
-This is a <a href="http://www.ida.liu.se/~tobnu/scheme2llvm/">small scheme
-compiler</a> for LLVM, written in scheme.  It is good enough to compile itself
-and work.</p>
+
+<p>This is a <a href="http://www.ida.liu.se/~tobnu/scheme2llvm/">small scheme
+   compiler</a> for LLVM, written in scheme.  It is good enough to compile
+   itself and work.</p>
 
 <p>The code is quite similar to the code in the book SICP (Structure and
-Interpretation of Computer Programs), chapter five, with the difference that it
-implements the extra functionality that SICP assumes that the explicit control
-evaluator (virtual machine) already have. Much functionality of the compiler is
-implemented in a subset of scheme, llvm-defines, which are compiled to llvm
-functions.</p>
-</div>
+   Interpretation of Computer Programs), chapter five, with the difference that
+   it implements the extra functionality that SICP assumes the explicit control
+   evaluator (virtual machine) already has. Much functionality of the
+   compiler is implemented in a subset of scheme, llvm-defines, which are
+   compiled to llvm functions.</p>
 
+</div>
 
 <!--=========================================================================-->
 <div class="www_subsection">
@@ -599,30 +575,27 @@ By <a href="http://misha.brukman.net/">M
 </div>
 
 <div class="www_text">
-<p>
-The LLVM Visualization Tool (LLVM-TV) can be used to visualize the effects
-of transformations written in the LLVM framework.  Our visualizations
-reflect the state of a compilation unit at a single instant in time,
-between transformations; we call these saved states "snapshots".  A user
-can visualize a sequence of snapshots of the same module---for example,
-as a program is being optimized---or snapshots of different modules,
-for comparison purposes.
-</p>
 
-<p>
-Our target audience consists of developers working within the LLVM
-framework, who are trying to understand the LLVM representation and its
-analyses and transformations.  In addition, LLVM-TV has been designed
-to make it easy to add new kinds of program visualization modules.
-LLVM-TV is based on the <a href="http://www.wxwindows.org">wxWidgets</a>
-cross-platform GUI framework, and uses AT&T Research's
-<a href="http://www.research.att.com/sw/tools/graphviz">GraphViz</a> to
-draw graphs.
-</p>
+<p>The LLVM Visualization Tool (LLVM-TV) can be used to visualize the effects of
+   transformations written in the LLVM framework.  Our visualizations reflect
+   the state of a compilation unit at a single instant in time, between
+   transformations; we call these saved states "snapshots".  A user can
+   visualize a sequence of snapshots of the same module—for example, as a
+   program is being optimized—or snapshots of different modules, for
+   comparison purposes.</p>
+
+<p>Our target audience consists of developers working within the LLVM framework,
+   who are trying to understand the LLVM representation and its analyses and
+   transformations.  In addition, LLVM-TV has been designed to make it easy to
+   add new kinds of program visualization modules.  LLVM-TV is based on
+   the <a href="http://www.wxwindows.org">wxWidgets</a> cross-platform GUI
+   framework, and uses AT&T
+   Research's <a href="http://www.research.att.com/sw/tools/graphviz">GraphViz</a>
+   to draw graphs.</p>
 
 <p><a href="http://wiki.cs.uiuc.edu/cs497rej/LLVM+Visualization+Tool">Wiki
-page</a> with overview; design doc, and user manual.  You can download 
-llvm-tv from LLVM SVN (http://llvm.org/svn/llvm-project/television/trunk).</p>
+   page</a> with an overview, design doc, and user manual.  You can download
+   llvm-tv from LLVM SVN (http://llvm.org/svn/llvm-project/television/trunk).</p>
 
 </div>
 
@@ -639,16 +612,18 @@ By <a href="mailto:alkis at cs.uiuc.edu">Al
 </div>
 
 <div class="www_text">
+
 <p>Linear scan register allocation is a fast global register allocation first
-presented in <a href="http://citeseer.ist.psu.edu/poletto99linear.html">Linear
-Scan Register Allocation</a> as an alternative to the more widely used graph
-coloring approach. In this paper, I apply the linear scan register allocation
-algorithm in a system with SSA form and show how to improve the algorithm by
-taking advantage of lifetime holes and memory operands, and also eliminate the
-need for reserving registers for spill code.</p>
+   presented
+   in <a href="http://citeseer.ist.psu.edu/poletto99linear.html">Linear Scan
+   Register Allocation</a> as an alternative to the more widely used graph
+   coloring approach. In this paper, I apply the linear scan register allocation
+   algorithm in a system with SSA form and show how to improve the algorithm by
+   taking advantage of lifetime holes and memory operands, and also eliminate
+   the need for reserving registers for spill code.</p>
 
-<p>Project report: <a href="2004-Fall-CS426-LS.ps">PS</a>, <a
-href="2004-Fall-CS426-LS.pdf">PDF</a></p>
+<p>Project
+   report: <a href="2004-Fall-CS426-LS.ps">PS</a>, <a href="2004-Fall-CS426-LS.pdf">PDF</a></p>
 
 </div>
 
@@ -666,36 +641,35 @@ By <a href="http://misha.brukman.net/">M
 </div>
 
 <div class="www_text">
-<p>"Traditional architectures use the hardware instruction set for dual
-purposes: first, as a language in which to express the semantics of software
-programs, and second, as a means for controlling the hardware. The thesis of
-the <a href="/pubs/2003-10-01-LLVA.html">Low-Level Virtual Architecture</a>
-project is to decouple these two uses from one another, allowing software to be
-expressed in a semantically richer, more easily-manipulated format, and
-allowing for more powerful optimizations and whole-program analyses directly on
-compiled code.</p>
+
+<p>Traditional architectures use the hardware instruction set for dual purposes:
+   first, as a language in which to express the semantics of software programs,
+   and second, as a means for controlling the hardware. The thesis of
+   the <a href="/pubs/2003-10-01-LLVA.html">Low-Level Virtual Architecture</a>
+   project is to decouple these two uses from one another, allowing software to
+   be expressed in a semantically richer, more easily-manipulated format, and
+   allowing for more powerful optimizations and whole-program analyses directly
+   on compiled code.</p>
 
 <p>The semantically rich format we use in LLVA, which is based on the LLVM
-compiler infrastructure's intermediate representation, can best be understood
-as a "virtual instruction set". This means that while its instructions are
-closely matched to those available in the underlying hardware, they may not
-correspond exactly to the instructions understood by the underlying hardware.
-These underlying instructions we call the "implementation instruction set."
-Between the two layers lives the translation layer, typically implemented in
-software.</p>
+   compiler infrastructure's intermediate representation, can best be understood
+   as a "virtual instruction set". This means that while its instructions are
+   closely matched to those available in the underlying hardware, they may not
+   correspond exactly to the instructions understood by the underlying hardware.
+   These underlying instructions we call the "implementation instruction set."
+   Between the two layers lives the translation layer, typically implemented in
+   software.</p>
 
 <p>In this project, we have taken our next logical steps in this effort by (1)
-porting the entire Linux kernel to LLVA, and (2) engineering an environment in
-which a kernel can be run directly from its LLVM bytecode representation --
-essentially, a minimal, but complete, emulated computer system with LLVA as its
-native instruction set. The emulator we have invented, llva-emu, executes
-kernel code by translating programs "just-in-time" from the LLVM bytecode
-format to the processor's native instruction set.</p>
-
-<p>
-Project report: <a href="2003-Fall-CS497YYZ-LLVA-emu.ps">PS</a>,
-<a href="2003-Fall-CS497YYZ-LLVA-emu.pdf">PDF</a>
-</p>
+   porting the entire Linux kernel to LLVA, and (2) engineering an environment
+   in which a kernel can be run directly from its LLVM bytecode representation
+   — essentially, a minimal, but complete, emulated computer system with
+   LLVA as its native instruction set. The emulator we have invented, llva-emu,
+   executes kernel code by translating programs "just-in-time" from the LLVM
+   bytecode format to the processor's native instruction set.</p>
+
+<p>Project
+   report: <a href="2003-Fall-CS497YYZ-LLVA-emu.ps">PS</a>, <a href="2003-Fall-CS497YYZ-LLVA-emu.pdf">PDF</a></p>
 
 </div>
 
@@ -711,22 +685,23 @@ By Brian Fahs
 </div>
 
 <div class="www_text">
-<p>"As every modern computer user has experienced, software updates and
-upgrades frequently require programs and sometimes the entire operating system
-to be restarted. This can be a painful and annoying experience. What if this
-common annoyance could be avoided completely or at least significantly reduced?
-Imagine only rebooting your system when you wanted to shut your computer down or
-only closing an application when you wanted rather than when an update occurs.
-The purpose of this project is to investigate the potential of performing
-dynamic patching of executables and create a patching tool capable of
-automatically generating patches and applying them to applications that are
-already running. This project should answer questions like: How can dynamic
-updating be performed? What type of analysis is required? Can this analysis be
-effectively automated? What can be updated in the running executable (e.g.,
-algorithms, organization, data, etc.)?"</p>
+
+<p>As every modern computer user has experienced, software updates and upgrades
+   frequently require programs and sometimes the entire operating system to be
+   restarted. This can be a painful and annoying experience. What if this common
+   annoyance could be avoided completely or at least significantly reduced?
+   Imagine only rebooting your system when you wanted to shut your computer down
+   or only closing an application when you wanted rather than when an update
+   occurs.  The purpose of this project is to investigate the potential of
+   performing dynamic patching of executables and create a patching tool capable
+   of automatically generating patches and applying them to applications that
+   are already running. This project should answer questions like: How can
+   dynamic updating be performed? What type of analysis is required? Can this
+   analysis be effectively automated? What can be updated in the running
+   executable (e.g., algorithms, organization, data, etc.)?</p>
 
 <p>Project report: <a href="2003-Fall-CS497YYZ-SPEDI.ps">PS</a>, <a
-href="2003-Fall-CS497YYZ-SPEDI.pdf">PDF</a></p>
+   href="2003-Fall-CS497YYZ-SPEDI.pdf">PDF</a></p>
 
 </div>
 
@@ -744,18 +719,20 @@ Joel Stanley, and Bill Wendling
 </div>
 
 <div class="www_text">
-<p>"In this report we present implementation details, empirical performance
-data, and notable modifications to an algorithm for PRE based on [the 1999
-TOPLAS SSAPRE paper].  In [the 1999 TOPLAS SSAPRE paper], a particular
-realization of PRE, known as SSAPRE, is described, which is more efficient than
-traditional PRE implementations because it relies on useful properties of Static
-Single-Assignment (SSA) form to perform dataflow analysis in a much more sparse
-manner than the traditional bit-vector-based approach.  Our implementation is
-specific to a SSA-based compiler infrastructure known as LLVM (Low-Level Virtual
-Machine)."</p>
+
+<p>In this report we present implementation details, empirical performance data,
+   and notable modifications to an algorithm for PRE based on [the 1999 TOPLAS
+   SSAPRE paper].  In [the 1999 TOPLAS SSAPRE paper], a particular realization
+   of PRE, known as SSAPRE, is described, which is more efficient than
+   traditional PRE implementations because it relies on useful properties of
+   Static Single-Assignment (SSA) form to perform dataflow analysis in a much
+   more sparse manner than the traditional bit-vector-based approach.  Our
+   implementation is specific to an SSA-based compiler infrastructure known as
+   LLVM (Low-Level Virtual Machine).</p>
 
 <p>Project report: <a href="2002-Fall-CS426-SSAPRE.ps">PS</a>, <a
-href="2002-Fall-CS426-SSAPRE.pdf">PDF</a></p>
+   href="2002-Fall-CS426-SSAPRE.pdf">PDF</a></p>
+
 </div>
 
 <!--=========================================================================-->
@@ -774,22 +751,24 @@ By <a href="http://nondot.org/sabre/">Ch
 </div>
 
 <div class="www_text">
-<p>"We present the design and implementation of Jello, a <i>retargetable</i>
-Just-In-Time (JIT) compiler for the Intel IA32 architecture.  The input to Jello
-is a C program statically compiled to Low-Level Virtual Machine (LLVM) bytecode.
-Jello takes advantage of the features of the LLVM bytecode representation to
-permit efficient run-time code generation, while emphasizing retargetability.
-Our approach uses an abstract machine code representation in Static Single
-Assignment form that is machine-independent, but can handle machine-specific
-features such as implicit and explicit register references.  Because this
-representation is target-independent, many phases of code generation can be
-target-independent, making the JIT easily retargetable to new platforms without
-changing the code generator.  Jello's ultimate goal is to provide a flexible
-host for future research in runtime optimization for programs written in
-languages which are traditionally compiled statically."</p>
+
+<p>We present the design and implementation of Jello, a <i>retargetable</i>
+   Just-In-Time (JIT) compiler for the Intel IA32 architecture.  The input to
+   Jello is a C program statically compiled to Low-Level Virtual Machine (LLVM)
+   bytecode.  Jello takes advantage of the features of the LLVM bytecode
+   representation to permit efficient run-time code generation, while
+   emphasizing retargetability.  Our approach uses an abstract machine code
+   representation in Static Single Assignment form that is machine-independent,
+   but can handle machine-specific features such as implicit and explicit
+   register references.  Because this representation is target-independent, many
+   phases of code generation can be target-independent, making the JIT easily
+   retargetable to new platforms without changing the code generator.  Jello's
+   ultimate goal is to provide a flexible host for future research in runtime
+   optimization for programs written in languages which are traditionally
+   compiled statically.</p>
 
 <p>Note that Jello eventually evolved into the current LLVM JIT, which is part
-of the tool <b>lli</b>.</p>
+   of the tool <b>lli</b>.</p>
 
 <p>Project report: <a href="2002-Spring-CS497CZ-Jello.ps">PS</a>, <a
 href="2002-Spring-CS497CZ-Jello.pdf">PDF</a></p>
@@ -806,20 +785,21 @@ Emscripten contributors</a>
 </div>
 
 <div class="www_text">
-<p><a href="http://emscripten.org">Emscripten</a> compiles LLVM bitcode
-into JavaScript, which makes it possible to compile C and C++ source code to
-JavaScript (by first compiling it into LLVM bitcode using Clang), which can be
-run on the web. Emscripten has been used to port large existing C and C++
-codebases, for example Python (the standard CPython implementation),
-the Bullet physics engine, and the eSpeak speech synthesizer, among many
-others.</p>
+
+<p><a href="http://emscripten.org">Emscripten</a> compiles LLVM bitcode into
+   JavaScript, which makes it possible to compile C and C++ source code to
+   JavaScript (by first compiling it into LLVM bitcode using Clang), which can
+   be run on the web. Emscripten has been used to port large existing C and C++
+   codebases, for example Python (the standard CPython implementation), the
+   Bullet physics engine, and the eSpeak speech synthesizer, among many
+   others.</p>
 
 <p>Emscripten itself is written in JavaScript. Significant components include the
-<a href="https://github.com/kripken/emscripten/blob/master/docs/paper.pdf?raw=true">
-Relooper Algorithm</a> which generates high-level JavaScript control flow
-structures ("if", "while", etc.) from the low-level basic block information
-present in LLVM bitcode, as well as a JavaScript parser for LLVM assembly.
-</p>
+   <a href="https://github.com/kripken/emscripten/blob/master/docs/paper.pdf?raw=true">
+   Relooper Algorithm</a> which generates high-level JavaScript control flow
+   structures ("if", "while", etc.) from the low-level basic block information
+   present in LLVM bitcode, as well as a JavaScript parser for LLVM assembly.</p>
+
 </div>
 
 <!--=========================================================================-->
@@ -832,30 +812,69 @@ rust contributors</a>
 </div>
 
 <div class="www_text">
-<p><a href="http://www.rust-lang.org/">rust</a> is a curly-brace, block-structured expression language. It visually resembles the C language family, but differs significantly in syntactic and semantic details. Its design is oriented toward concerns of "programming in the large", that is, of creating and maintaining <i>boundaries</i> - both abstract and operational - that preserve large-system <i>integrity</i>, <i>availability</i> and <i>concurrency</i>.</p>
-<p>
-It supports a mixture of imperative procedural, concurrent actor, object-oriented and pure functional styles. Rust also supports generic programming and metaprogramming, in both static and dynamic styles.
-</p>
+
+<p><a href="http://www.rust-lang.org/">rust</a> is a curly-brace,
+   block-structured expression language. It visually resembles the C language
+   family, but differs significantly in syntactic and semantic details. Its
+   design is oriented toward concerns of "programming in the large", that is, of
+   creating and maintaining <i>boundaries</i> - both abstract and operational -
+   that preserve large-system <i>integrity</i>, <i>availability</i>
+   and <i>concurrency</i>.</p>
+
+<p>It supports a mixture of imperative procedural, concurrent actor,
+   object-oriented and pure functional styles. Rust also supports generic
+   programming and metaprogramming, in both static and dynamic styles.</p>
 
 <p>The static/native compilation is using LLVM.</p>
+
 </div>
+
 <!--=========================================================================-->
 <div class="www_subsection">
   <a name="ESL">Embedded Systems Language</a>
 </div>
 
 <div class="www_text">
-<p><a href="http://code.google.com/p/esl">ESL (Embedded Systems Language)</a>
-is a new programming language designed for efficient implementation
-of embedded systems and other low-level system programming projects.
-ESL is a typed compiled language with features that allow the programmer
-to dictate the concrete representation of data values; useful when dealing,
-for example, with communication protocols or device registers.</p>
-
-<p>Novel features are: no reserved words, procedures can return multiple
-values, automatic endian conversion, methods on types (but no classes).
-The syntax is more Pascal-like than C-like, but with C bracketing.
-The compiler bootstraps from LL IR.</p>
+
+<p><a href="http://code.google.com/p/esl">ESL (Embedded Systems Language)</a> is
+   a new programming language designed for efficient implementation of embedded
+   systems and other low-level system programming projects.  ESL is a typed
+   compiled language with features that allow the programmer to dictate the
+   concrete representation of data values; useful when dealing, for example,
+   with communication protocols or device registers.</p>
+
+<p>Novel features are: no reserved words, procedures can return multiple values,
+   automatic endian conversion, methods on types (but no classes).  The syntax
+   is more Pascal-like than C-like, but with C bracketing.  The compiler
+   bootstraps from LL IR.</p>
+
+</div>
+
+<!--=========================================================================-->
+<div class="www_subsection">
+  <a name="RTSC">RTSC: The Real-Time Systems Compiler</a>
+</div>
+
+<div class="www_text">
+
+<p>The Real-Time Systems Compiler (RTSC) is an operating-system–aware compiler
+   that allows for generic manipulation of the real-time system architecture of
+   a given real-time application.</p>
+
+<p>Currently, its most interesting application is the automatic migration of an
+   event-triggered (i.e., based on OSEK OS) system to a time-triggered (i.e.,
+   based on OSEKTime) one. To achieve this, the RTSC derives an abstract global
+   dependency graph, formed of so-called "Atomic Basic Blocks" (ABBs), from the
+   source code of the application. This "ABB-graph" is free of any dependency on
+   the original operating system but includes all control and data dependencies
+   of the application. This graph can therefore be mapped to another target
+   operating system. Currently, input applications can be written for OSEK OS
+   and then migrated to OSEKTime or POSIX systems.</p>
+
+<p>The LLVM framework is used throughout the whole process. First, the static
+   analysis of the application needed to construct the ABB-graph is performed
+   using LLVM. The manipulation and final generation of the target system are
+   also based on LLVM.</p>
 </div>
 
 <!--#include virtual="../footer.incl" --></html>