Sounds like a good idea to me. One current issue with back-patching in the LLVM JIT, though, is that the patching is not done atomically on some architectures, e.g. Intel x86, and this makes the LLVM JIT non-thread-safe in lazy compilation mode. What we need to make sure of is that the "updating the resolution for a given symbol" you mention is done in an atomic fashion.<br>
<br>Also, how much more overhead does "updating the resolution for a given symbol and asking rt-dyld to re-link the executable code" add compared with simply overwriting an instruction?<br><br>Xin<br><br><div class="gmail_quote">
On Mon, Apr 4, 2011 at 1:50 PM, <span dir="ltr"><<a href="mailto:llvmdev-request@cs.uiuc.edu">llvmdev-request@cs.uiuc.edu</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;">
Send LLVMdev mailing list submissions to<br>
<a href="mailto:llvmdev@cs.uiuc.edu">llvmdev@cs.uiuc.edu</a><br>
<br>
To subscribe or unsubscribe via the World Wide Web, visit<br>
<a href="http://lists.cs.uiuc.edu/mailman/listinfo/llvmdev" target="_blank">http://lists.cs.uiuc.edu/mailman/listinfo/llvmdev</a><br>
or, via email, send a message with subject or body 'help' to<br>
<a href="mailto:llvmdev-request@cs.uiuc.edu">llvmdev-request@cs.uiuc.edu</a><br>
<br>
You can reach the person managing the list at<br>
<a href="mailto:llvmdev-owner@cs.uiuc.edu">llvmdev-owner@cs.uiuc.edu</a><br>
<br>
When replying, please edit your Subject line so it is more specific<br>
than "Re: Contents of LLVMdev digest..."<br>
<br>
<br>
Today's Topics:<br>
<br>
1. Re: GSOC Adaptive Compilation Framework for LLVM JIT Compiler<br>
(Stephen Kyle)<br>
2. Re: GSOC Adaptive Compilation Framework for LLVM JIT Compiler<br>
(Xin Tong Utoronto)<br>
3. Re: GSOC Adaptive Compilation Framework for LLVM JIT Compiler<br>
(Owen Anderson)<br>
<br>
<br>
----------------------------------------------------------------------<br>
<br>
Message: 1<br>
Date: Mon, 4 Apr 2011 18:19:01 +0100<br>
From: Stephen Kyle <<a href="mailto:s.kyle@ed.ac.uk">s.kyle@ed.ac.uk</a>><br>
Subject: Re: [LLVMdev] GSOC Adaptive Compilation Framework for LLVM<br>
JIT Compiler<br>
To: Xin Tong Utoronto <<a href="mailto:x.tong@utoronto.ca">x.tong@utoronto.ca</a>><br>
Cc: Xin Tong <<a href="mailto:xerox.time@gmail.com">xerox.time@gmail.com</a>>, <a href="mailto:llvmdev@cs.uiuc.edu">llvmdev@cs.uiuc.edu</a><br>
Message-ID:<br>
<AANLkTi=<a href="mailto:z_W2Q%2BfRTFEf%2BZ9R2axfO7pDrn2zu5djiYCiZ@mail.gmail.com">z_W2Q+fRTFEf+Z9R2axfO7pDrn2zu5djiYCiZ@mail.gmail.com</a>><br>
Content-Type: text/plain; charset="utf-8"<br>
<br>
On 29 March 2011 12:35, Xin Tong Utoronto <<a href="mailto:x.tong@utoronto.ca">x.tong@utoronto.ca</a>> wrote:<br>
<br>
> *Project Description:*<br>
><br>
> *<br>
> *<br>
><br>
> LLVM has gained much popularity in the programming languages and compiler<br>
> community since it was first developed. Many researchers have used LLVM as<br>
> a framework for their research, and many languages have been ported to<br>
> LLVM IR and interpreted, just-in-time compiled, or statically compiled to<br>
> native code. One current drawback of the LLVM JIT is the lack of an<br>
> adaptive compilation system. All the non-adaptive pieces are already there<br>
> in LLVM: an optimizing compiler with different instruction selectors,<br>
> register allocators, pre-RA schedulers, etc., and a full set of<br>
> optimizations selectable at runtime. What is left is a system that can<br>
> keep track of and dynamically look up the hotness of methods, and<br>
> recompile them with more expensive optimizations as they are executed over<br>
> and over. This should improve both program startup time and execution<br>
> time, and will bring great benefits to all ported languages that intend to<br>
> use the LLVM JIT as one of their execution methods.<br>
><br>
><br>
> *Project Outline:*<br>
><br>
> *<br>
> *<br>
><br>
> Currently, the LLVM JIT serves as a management layer for the executed LLVM<br>
> IR: it manages the compiled code and calls the LLVM code generator to do<br>
> the real work. The LLVM code generator has several optimization levels,<br>
> and depending on how much optimization it is asked to do, the time taken<br>
> can vary significantly. The adaptive compilation mechanism should be able<br>
> to detect when a method is getting hot and compile or recompile it at the<br>
> appropriate optimization level. Moreover, this should happen transparently<br>
> to the running application. To keep track of how many times a JITed<br>
> function is called, instrumentation code is inserted into the function's<br>
> LLVM bitcode before it is sent to the code generator. This code increments<br>
> a counter each time the function is called, and when the counter reaches a<br>
> threshold, the function gives control back to the LLVM JIT. The JIT then<br>
> looks at the hotness of all the methods, finds the one that triggered the<br>
> recompilation threshold, and can choose to raise its optimization level<br>
> based on the algorithm below or other algorithms developed later.<br>
><br>
><br>
> IF (getCompilationCount(method) > 50 in the last 100 samples) => Recompile<br>
> at Aggressive<br>
> ELSE Recompile at the next optimization level.<br>
><br>
><br>
> Even though the invocation counting introduces a few extra instructions,<br>
> the advantages of adaptive optimization should far outweigh that overhead.<br>
> Note that the adaptive compilation framework I propose here is orthogonal<br>
> to LLVM's profile-guided optimizations: profile-guided optimization uses<br>
> profiling or other external information to decide how optimizations are<br>
> performed, whereas the adaptive compilation framework is concerned with<br>
> which level of optimization to apply.<br>
><br>
><br>
> *Project Timeline:*<br>
><br>
> *<br>
> *<br>
><br>
> This is a relatively small project and does not involve a lot of coding,<br>
> but a good portion of the time will be spent benchmarking, tuning and<br>
> experimenting with different algorithms, e.g. what the algorithm for<br>
> raising the compilation level should be when a method's recompilation<br>
> threshold is reached, whether that algorithm can itself be made adaptive,<br>
> etc. Therefore, my timeline for the project is as follows.<br>
><br>
><br>
> Week 1<br>
> Benchmarking the current LLVM JIT compiler, measuring the compilation<br>
> speed differences between the different levels of compilation. This<br>
> information is required to understand why one heuristic outperforms<br>
> another.<br>
><br>
><br>
> Week 2<br>
> Reading LLVM Execution Engine and Code Generator code. Design the LLVM<br>
> adaptive compilation framework<br>
><br>
><br>
> Week 3 - 9<br>
> Implementing and testing the LLVM adaptive compilation framework. The<br>
> general idea of the compilation framework is described in project outline<br>
><br>
><br>
> Week 10 - 13<br>
> Benchmarking, tuning and experimenting with different recompilation<br>
> algorithms. Typical benchmarking test cases would be<br>
><br>
><br>
> Week 14<br>
> Test and organize code. Documentation<br>
><br>
><br>
> *Overall Goals:*<br>
><br>
><br>
><br>
> My main goal at the end of the summer is to have an automated profiling<br>
> and adaptive compilation framework for LLVM. Even though the performance<br>
> improvements are still unclear at this point, I believe this adaptive<br>
> compilation framework will give noticeable performance benefits, as the<br>
> current JIT compilation is either too simple to produce reasonably fast<br>
> code or too expensive to apply to all functions.<br>
><br>
><br>
><br>
> *Background:*<br>
><br>
><br>
><br>
> I have some experience with the Java Just-In-Time compiler and some<br>
> experience with LLVM. I have included my CV for your reference. I don't have<br>
> a specific mentor in mind, but I imagine that the existing mentors from LLVM<br>
> would be extremely helpful.<br>
><br>
><br>
><br>
><br>
><br>
><br>
> Xin* Tong*<br>
><br>
> * *<br>
><br>
> *Email:**<a href="mailto:x.tong@utoronto.ca">x.tong@utoronto.ca</a>*<br>
><br>
><br>
><br>
><br>
><br>
><br>
><br>
> Creative, quality-focused Computer Engineering student brings a strong<br>
> blend of programming, design and analysis skills. Offers solid understanding<br>
> of best practices at each stage of the software development lifecycle.<br>
> Skilled at recognizing and resolving design flaws that have the potential to<br>
> create downstream maintenance, scalability and functionality issues. Adept<br>
> at optimizing complex system processes and dataflows, taking the initiative<br>
> to identify and recommend design and coding modifications to improve overall<br>
> system performance. Excels in dynamic, deadline-sensitive environments that<br>
> demand resourcefulness, astute judgement, and self-motivated quick-study<br>
> talents. Utilizes excellent time management skills to balance a demanding<br>
> academic course of studies with employment and volunteer pursuits, achieving<br>
> excellent results in all endeavours.<br>
><br>
><br>
> STRENGTHS & EXPERTISE<br>
><br>
><br>
><br>
> *Compiler Construction · Compiler Optimization · Computer Architecture ·<br>
> Bottleneck Analysis & Solutions*<br>
><br>
> *Coding & Debugging · Workload Prioritization · Team Collaboration &<br>
> Leadership*<br>
><br>
> *Software Testing & Integration · Test-Driven Development*<br>
><br>
><br>
> EDUCATION & CREDENTIALS<br>
><br>
> * *<br>
><br>
> *BACHELOR OF COMPUTER ENGINEERING*<br>
><br>
> *University** of Toronto, Toronto, ON, Expected Completion 2011*<br>
><br>
> Compiler *·* Operating Systems *·* Computer Architecture<br>
><br>
><br>
><br>
><br>
><br>
> *Cisco Certified Networking Associate*, July 2009<br>
><br>
><br>
> PROFESSIONAL EXPERIENCE<br>
><br>
> * *<br>
><br>
> *Java VIRTUAL MACHINE JIT Developer<br>
> **Aug 2010-May 2011*<br>
><br>
> *IBM, Toronto**, Canada*<br>
><br>
> * *<br>
><br>
> - Working on the PowerPC code generator of IBM Just-in-Time compiler<br>
> for Java Virtual Machine.<br>
> - Benchmarking Just-in-Time compiler performance, analyzing and fixing<br>
> possible regressions.<br>
> - Triaging and fixing defects in the Just-in-Time compiler<br>
> - Acquiring hands-on experience with PowerPC assembly and PowerPC binary<br>
> debugging with gdb and other related tools<br>
><br>
> * *<br>
><br>
> * *<br>
><br>
> *Java Virtual Machine Developer, Extreme Blue<br>
><br>
> **May 2010-Aug 2010***<br>
><br>
> *IBM, Ottawa**, Canada***<br>
><br>
> - Architected a multi-tenancy solution for IBM J9 Java Virtual Machine<br>
> for hosting multiple applications within one Java Virtual Machine. Designed<br>
> solutions to provide good tenant isolation and resource control for all<br>
> tenants running in the same Java Virtual Machine.<br>
> - Worked on Java class libraries and different components of J9 Java<br>
> Virtual Machine, including threading library, garbage collector,<br>
> interpreter, etc.<br>
><br>
><br>
><br>
> * *<br>
><br>
> * *<br>
><br>
> *Continued…*<br>
><br>
> *Xin Tong<br>
> ** **page 2*<br>
><br>
> * *<br>
><br>
> *Graphics Compiler Developer **<br>
> May 2009-May 2010*<br>
><br>
> *Qualcomm,**San Diego**, USA***<br>
><br>
> - Recruited for an internship position with this multinational<br>
> telecommunications company to work on their C++ compiler project.<br>
> - Developed a static verifier program which automatically generates and<br>
> adds intermediate language code to test programs to make them<br>
> self-verifying. These test programs are then used to test the C++<br>
> compiler, ensuring that it compiles code correctly.<br>
> - Utilized in-depth knowledge of LLVM systems and algorithms to<br>
> generate elegant and robust code.<br>
><br>
><br>
><br>
> * *<br>
><br>
> * *<br>
> ACADEMIC PROJECTS<br>
><br>
> * *<br>
><br>
> *COMPILER OPTIMIZER IMPLEMENTATION (Dec. 2010 – Apr. 2011):* Implemented a<br>
> compiler optimizer on the SUIF framework. Implemented control flow analysis,<br>
> data flow analysis, loop invariant code motion, global value numbering, loop<br>
> unrolling and various other local optimizations.<br>
><br>
><br>
><br>
> *GPU COMPILER IMPLEMENTATION (Sept. – Dec. 2010):* Implemented a GPU<br>
> compiler that compiles a subset of the GLSL language to the ARB language,<br>
> which can then be executed on the GPU. Wrote the scanner and parser using<br>
> Lex and Yacc, and a code generator, in an OOP fashion.<br>
><br>
><br>
><br>
> *Malloc Library Implementation** (Oct.-Nov. 2008): *Leveraged a solid<br>
> understanding of the best-fit algorithm and linked-list data structures to<br>
> design a malloc library that performs dynamic memory allocation.<br>
> Implemented the roughly 1000-line library in C, with an emphasis on robust<br>
> and clear code. Optimized the library at the code level to obtain a 6%<br>
> increase in allocation throughput. Used trace files and drivers to test<br>
> and evaluate the malloc library's throughput and memory utilization.<br>
><br>
><br>
><br>
><br>
><br>
><br>
> COMPUTER SKILLS<br>
><br>
><br>
><br>
> *Programming Languages*<br>
><br>
> C *·* C++ *·* Java<br>
><br>
> *Operating Systems*<br>
><br>
> Linux<br>
><br>
> *Software Tools*<br>
><br>
> GDB *·* GCC<br>
><br>
> * *<br>
><br>
><br>
> Extracurricular Activities<br>
><br>
> * *<br>
><br>
> *Elected Officer**, *Institute of Electrical & Electronics Engineers,<br>
> University of Toronto Branch,* Since May 2009*<br>
><br>
> *Member**, *Institute of Electrical & Electronics Engineers,* Since 2007*<br>
><br>
> *Member**, *University of Toronto E-Sports Club*, 2007*<br>
><br>
> *Member**, *University of Toronto Engineering Chinese Culture Club*, 2007*<br>
> *Member**, *University of Toronto Robotics Club*, 2007*<br>
><br>
> --<br>
> Kind Regards<br>
><br>
> Xin Tong<br>
><br>
> _______________________________________________<br>
> LLVM Developers mailing list<br>
> <a href="mailto:LLVMdev@cs.uiuc.edu">LLVMdev@cs.uiuc.edu</a> <a href="http://llvm.cs.uiuc.edu" target="_blank">http://llvm.cs.uiuc.edu</a><br>
> <a href="http://lists.cs.uiuc.edu/mailman/listinfo/llvmdev" target="_blank">http://lists.cs.uiuc.edu/mailman/listinfo/llvmdev</a><br>
><br>
><br>
Hi Xin,<br>
<br>
If I understand the above correctly, this basically means that whenever an<br>
application calls a function it's been given by getPointerToFunction(),<br>
there's a possibility the function is recompiled with more aggressive<br>
optimisations, should that function meet some hotness threshold. Does the<br>
application have to wait while this compilation takes place, before the<br>
function it called is actually executed?<br>
<br>
If so, it's nice that recompilation is transparent to the application, and<br>
so functions just magically become faster over time, but stalling the<br>
application like this may not be desirable.<br>
<br>
I've added an adaptive optimisation system to an instruction set simulator<br>
developed at my university which heavily relies on LLVM for JIT compilation.<br>
It performs all the compilation in a separate thread from where the<br>
interpretation of the simulated program is taking place, meaning it never<br>
needs to wait for any compilation. Adaptive reoptimisation also takes place<br>
in a separate thread, and this has caused me a multitude of headaches, but I<br>
digress...<br>
<br>
Basically: if the initial compilation is done in a separate thread, can you<br>
ensure that any adaptive reoptimisation also happens asynchronously, or will<br>
such use cases have to do without your system?<br>
<br>
Cheers,<br>
Stephen<br>
-------------- next part --------------<br>
An HTML attachment was scrubbed...<br>
URL: <a href="http://lists.cs.uiuc.edu/pipermail/llvmdev/attachments/20110404/edabf84a/attachment-0001.html" target="_blank">http://lists.cs.uiuc.edu/pipermail/llvmdev/attachments/20110404/edabf84a/attachment-0001.html</a><br>
<br>
------------------------------<br>
<br>
Message: 2<br>
Date: Mon, 4 Apr 2011 13:42:49 -0400<br>
From: Xin Tong Utoronto <<a href="mailto:x.tong@utoronto.ca">x.tong@utoronto.ca</a>><br>
Subject: Re: [LLVMdev] GSOC Adaptive Compilation Framework for LLVM<br>
JIT Compiler<br>
To: Stephen Kyle <<a href="mailto:s.kyle@ed.ac.uk">s.kyle@ed.ac.uk</a>><br>
Cc: <a href="mailto:llvmdev@cs.uiuc.edu">llvmdev@cs.uiuc.edu</a><br>
Message-ID: <BANLkTi=<a href="mailto:QCsGwYG3Y-ckf_YJGPdDFKdX0Pw@mail.gmail.com">QCsGwYG3Y-ckf_YJGPdDFKdX0Pw@mail.gmail.com</a>><br>
Content-Type: text/plain; charset="utf-8"<br>
<br>
On Mon, Apr 4, 2011 at 1:19 PM, Stephen Kyle <<a href="mailto:s.kyle@ed.ac.uk">s.kyle@ed.ac.uk</a>> wrote:<br>
<br>
> <snip quoted proposal and CV><br>
> Hi Xin,<br>
><br>
> If I understand the above correctly, this basically means that whenever an<br>
> application calls a function it's been given by getPointerToFunction(),<br>
> there's a possibility the function is recompiled with more aggressive<br>
> optimisations, should that function meet some hotness threshold. Does the<br>
> application have to wait while this compilation takes place, before the<br>
> function it called is actually executed?<br>
><br>
><br>
> If so, it's nice that recompilation is transparent to the application, and<br>
> so functions just magically become faster over time, but stalling the<br>
> application like this may not be desirable.<br>
><br>
> I've added an adaptive optimisation system to an instruction set simulator<br>
> developed at my university which heavily relies on LLVM for JIT compilation.<br>
> It performs all the compilation in a separate thread from where the<br>
> interpretation of the simulated program is taking place, meaning it never<br>
> needs to wait for any compilation. Adaptive reoptimisation also takes place<br>
> in a separate thread, and this has caused me a multitude of headaches, but I<br>
> digress...<br>
><br>
> Basically: if the initial compilation is done in a separate thread, can you<br>
> ensure that any adaptive reoptimisation also happens asynchronously, or will<br>
> such use cases have to do without your system?<br>
><br>
> Cheers,<br>
> Stephen<br>
><br>
<br>
Functions will have to meet some hotness threshold before they are<br>
recompiled at a higher optimization level. The application does not have to<br>
wait for the compilation to finish: recompilation happens asynchronously in<br>
a different thread, so the application keeps using the current (less<br>
optimized) copy and switches to the more optimized copy later. Thank you<br>
for the suggestion.<br>
<br>
Xin<br>
<br>
<br>
<br>
--<br>
Kind Regards<br>
<br>
Xin Tong<br>
-------------- next part --------------<br>
An HTML attachment was scrubbed...<br>
URL: <a href="http://lists.cs.uiuc.edu/pipermail/llvmdev/attachments/20110404/1b15bbca/attachment-0001.html" target="_blank">http://lists.cs.uiuc.edu/pipermail/llvmdev/attachments/20110404/1b15bbca/attachment-0001.html</a><br>
<br>
------------------------------<br>
<br>
Message: 3<br>
Date: Mon, 04 Apr 2011 10:49:53 -0700<br>
From: Owen Anderson <<a href="mailto:resistor@mac.com">resistor@mac.com</a>><br>
Subject: Re: [LLVMdev] GSOC Adaptive Compilation Framework for LLVM<br>
JIT Compiler<br>
To: LLVMdev List <<a href="mailto:llvmdev@cs.uiuc.edu">llvmdev@cs.uiuc.edu</a>><br>
Message-ID: <<a href="mailto:D7B4856C-5708-47B8-A379-66A7ECDFE047@mac.com">D7B4856C-5708-47B8-A379-66A7ECDFE047@mac.com</a>><br>
Content-Type: text/plain; CHARSET=US-ASCII<br>
<br>
<br>
On Apr 3, 2011, at 12:11 PM, Eric Christopher wrote:<br>
<br>
> <snip conversation about call patching><br>
<br>
It seems to me that there's a general feature here that LLVM is lacking, that would be useful in a number of JIT-compilation contexts, namely the ability to mark certain instructions (direct calls, perhaps branches too) as back-patchable.<br>
<br>
The thing that stands out to me is that back-patching a call or branch in a JIT'd program is very much like relocation resolution that a dynamic linker does at launch time for a statically compiled program. It seems like it might be possible to build on top of the runtime-dyld work that Jim has been doing for the MC-JIT to facilitate this. Here's the idea:<br>
<br>
Suppose we had a means of tagging certain calls (and maybe branches) as explicitly requiring relocations. Any back-patchable call would have a relocation in the generated code, and the MC-JIT would be aware of the location and type of the relocations, and rt-dyld would handle the upfront resolution. Backpatching, then, is just a matter of updating the resolution for a given symbol, and asking rt-dyld to re-link the executable code.<br>
<br>
Thoughts?<br>
<br>
--Owen<br>
<br>
<br>
<br>
------------------------------<br>
<br>
_______________________________________________<br>
LLVMdev mailing list<br>
<a href="mailto:LLVMdev@cs.uiuc.edu">LLVMdev@cs.uiuc.edu</a><br>
<a href="http://lists.cs.uiuc.edu/mailman/listinfo/llvmdev" target="_blank">http://lists.cs.uiuc.edu/mailman/listinfo/llvmdev</a><br>
<br>
<br>
End of LLVMdev Digest, Vol 82, Issue 7<br>
**************************************<br>
</blockquote></div><br><br clear="all"><br>-- <br>Kind Regards <br><br>Xin Tong <br>