[LLVMdev] tbaa differences in llvm 3.0

Maurice Marks maurice.marks at gmail.com
Fri Jan 27 16:15:09 PST 2012


Our application generates IR that is used to generate x86_64 code for a
JIT. We noticed that code generated with llvm 3.0 is worse in some
circumstances than it was with 2.9. I traced the differences to alias
analysis. Our IR references data structures that have lots of derived
pointers, and we use extensive tbaa metadata to indicate which pointers
don't alias. Some of this seemed to be getting ignored in 3.0. Using opt
on our IR, I found that the ordering of the basicaa and tbaa passes made
all the difference. The code in addInitialAliasAnalysisPasses() adds the
tbaa pass and then the basicaa pass. From Chris' comments on the list
about backward chaining of alias analyses, that order would execute
basicaa before tbaa - basicaa would do the first pass of analysis,
followed by tbaa, possibly overriding the "won't alias" information we
provided in the IR. The comments in the code, however, imply that tbaa
executes first.
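To make that concrete, here is a minimal sketch of the kind of tbaa
metadata we emit (the function and node names are made up for
illustration). !1 and !2 are distinct leaves under the same root, so the
tbaa pass by itself should conclude that %p and %q don't alias:

define i32 @f(i32* %p, i32* %q) {
entry:
  store i32 1, i32* %p, !tbaa !1
  store i32 2, i32* %q, !tbaa !2
  ; with the stores known not to alias, this load can be
  ; forwarded from the first store
  %v = load i32* %p, !tbaa !1
  ret i32 %v
}

!0 = metadata !{ metadata !"jit types" }
!1 = metadata !{ metadata !"derived ptr A", metadata !0 }
!2 = metadata !{ metadata !"derived ptr B", metadata !0 }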

As an experiment I reversed the order of the passes in
addInitialAliasAnalysisPasses, rebuilt llvm 3.0 and our application, and
reran our alias tests. Now we get the same (good) results as llvm 2.9.
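For reference, the experiment was just swapping the two add calls -
roughly this (quoting from memory, so the exact signature may be off):

void PassManagerBuilder::addInitialAliasAnalysisPasses(PassManagerBase &PM) const {
  // Stock 3.0 adds tbaa first and basicaa second:
  //   PM.add(createTypeBasedAliasAnalysisPass());
  //   PM.add(createBasicAliasAnalysisPass());
  // Reversed for the experiment:
  PM.add(createBasicAliasAnalysisPass());
  PM.add(createTypeBasedAliasAnalysisPass());
}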

So my question is: which order is correct? I would expect that explicit
tbaa tags saying that some pointers don't alias should take precedence.

I can provide a fairly small .ll test case if necessary.

Thanks for your help and advice.

/maurice