[LLVMdev] Memory optimizations for LLVM JIT
Micah Villmow
micah.villmow at smachines.com
Tue Aug 20 09:05:53 PDT 2013
I would not expect a JIT to produce code as good as a static compiler's. A JIT is supposed to run relatively fast, whereas a static compiler may take a lot longer.
Micah
From: llvmdev-bounces at cs.uiuc.edu [mailto:llvmdev-bounces at cs.uiuc.edu] On Behalf Of ???
Sent: Tuesday, August 20, 2013 12:23 AM
To: llvmdev at cs.uiuc.edu
Subject: [LLVMdev] Memory optimizations for LLVM JIT
Hello.
I'm new to LLVM and I've run into some problems with the LLVM JIT.
I have set the ExecutionEngine's optimization level to CodeGenOpt::Aggressive and PassManagerBuilder::OptLevel to 3 (mem2reg and GVN are included).
However, the machine code generated by the JIT is not as good as that generated by clang or llc.
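For reference, the engine and pass setup is roughly the following (a simplified sketch with placeholder names; error handling is omitted, and M stands for my llvm::Module*):

    #include "llvm/ExecutionEngine/ExecutionEngine.h"
    #include "llvm/ExecutionEngine/JIT.h"          // pulls in the old JIT
    #include "llvm/IR/Module.h"
    #include "llvm/PassManager.h"
    #include "llvm/Support/TargetSelect.h"
    #include "llvm/Transforms/IPO/PassManagerBuilder.h"
    #include <string>

    llvm::InitializeNativeTarget();

    std::string Err;
    llvm::ExecutionEngine *EE = llvm::EngineBuilder(M)
        .setErrorStr(&Err)
        .setEngineKind(llvm::EngineKind::JIT)
        .setOptLevel(llvm::CodeGenOpt::Aggressive)   // codegen at the aggressive level
        .create();

    llvm::PassManagerBuilder PMB;
    PMB.OptLevel = 3;                                // IR-level -O3 (includes mem2reg/SROA and GVN)

    llvm::FunctionPassManager FPM(M);
    llvm::PassManager MPM;
    PMB.populateFunctionPassManager(FPM);
    PMB.populateModulePassManager(MPM);

    FPM.doInitialization();
    for (llvm::Module::iterator F = M->begin(), E = M->end(); F != E; ++F)
      FPM.run(*F);
    FPM.doFinalization();
    MPM.run(*M);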
Here is an example:
--------------------------------------------------------------------
source fragment                    ==>    clang or llc
struct {
    uint64_t a[10];
} *p;
                                          mov 0x8(%rax),%rdx
p->a[2] = p->a[1];                        mov %rdx,0x10(%rax)
p->a[3] = p->a[1];                 ==>    mov %rdx,0x18(%rax)
p->a[4] = p->a[2];                        mov %rdx,0x20(%rax)
p->a[5] = p->a[4];                        mov %rdx,0x28(%rax)
--------------------------------------------------------------------
JIT (map p to GlobalVariable)             ==>    JIT (map p to constant GlobalVariable)
 1* movabsq $0x18c6b88, %rax                      1* movabsq $0x18c6b88, %rax
 2* movq (%rax), %rcx        // p                 2* movq (%rax), %rax
 3* movq 0x8(%rcx), %rdx     // a[1]              3* movq 0x8(%rax), %rcx
 4* movq %rdx, 0x10(%rcx)    // a[2]              4* movq %rcx, 0x10(%rax)
 5  movq (%rax), %rcx                             5
 6  movq 0x8(%rcx), %rdx                          6  movq 0x8(%rax), %rcx
 7* movq %rdx, 0x18(%rcx)                 ==>     7* movq %rcx, 0x18(%rax)
 8  movq (%rax), %rcx                             8
 9  movq 0x10(%rcx), %rdx                         9  movq 0x10(%rax), %rcx
10* movq %rdx, 0x20(%rcx)                        10* movq %rcx, 0x20(%rax)
11  movq (%rax), %rax                            11
12  movq 0x20(%rax), %rcx                        12
13* movq %rcx, 0x28(%rax)                        13* movq %rcx, 0x28(%rax)
----------------------------------------------------------------------
A GlobalValue was declared and mapped to the variable p.
LLVM IR instructions were then created to mirror those that LLVM generates from the source:
that is, load p, load a[1] through p, load p again, store a[2] through p, and so on.
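Roughly (again a simplified sketch; the struct type "S", HostP and EntryBB are placeholders for the real names in my code):

    #include "llvm/IR/DerivedTypes.h"
    #include "llvm/IR/IRBuilder.h"
    #include <stdint.h>

    struct S { uint64_t a[10]; };
    static S *HostP;                                  // the host-side "p"

    // A global that holds the value of p, mapped to the host variable.
    llvm::LLVMContext &Ctx = M->getContext();
    llvm::Type *I64 = llvm::Type::getInt64Ty(Ctx);
    llvm::Type *Fields[] = { llvm::ArrayType::get(I64, 10) };
    llvm::StructType *STy = llvm::StructType::create(Ctx, Fields, "S");
    llvm::GlobalVariable *GV = new llvm::GlobalVariable(
        *M, llvm::PointerType::getUnqual(STy), /*isConstant=*/false,
        llvm::GlobalValue::ExternalLinkage, /*Initializer=*/0, "p");
    EE->addGlobalMapping(GV, &HostP);

    // IR mirroring the source, e.g. p->a[2] = p->a[1];
    llvm::IRBuilder<> B(EntryBB);                     // EntryBB: entry block of the JITed function
    llvm::Value *P = B.CreateLoad(GV, "p");           // load p
    llvm::Value *Idx1[] = { B.getInt32(0), B.getInt32(0), B.getInt64(1) };
    llvm::Value *A1 = B.CreateLoad(B.CreateInBoundsGEP(P, Idx1), "a1");   // load a[1]
    P = B.CreateLoad(GV, "p");                        // load p again
    llvm::Value *Idx2[] = { B.getInt32(0), B.getInt32(0), B.getInt64(2) };
    B.CreateStore(A1, B.CreateInBoundsGEP(P, Idx2));  // store a[2]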
The machine code turned out to be only slightly optimized, as shown on the left.
Things got better after the GlobalVariable for p was marked constant.
The redundant loads of p (lines 5, 8 and 11) were removed, and so was line 12, thanks to the store on line 10.
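(Concretely, the only difference in the second variant is one extra call on the sketch above:

    GV->setConstant(true);   // promise the optimizer that the pointer value held in p never changes

so repeated loads of p can be folded away.)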
However, I could not improve it any further, even though the optimal machine code needs only the instructions marked with '*'.
It seems that store instructions block the optimizations across them.
That is, lines 3 & 6 and lines 4 & 9 are analogous to lines 10 & 12, yet they are not optimized away.
The intervening store (line 4 or 7) obviously cannot alias them.
My question is: how to make LLVM JIT optimize this code?
Did I miss anything, or do I need to write some kind of optimization pass?
I will be grateful for any help you can provide.