[LLVMdev] Request for comments: TBAA on call

Filip Pizlo fpizlo at apple.com
Wed Sep 25 17:50:02 PDT 2013

Hi all,

TBAA on loads and stores is super useful for conveying high-level type knowledge to LLVM transformations.  But often translating a high-level language into LLVM IR will result in runtime calls that are known at the high level to only affect (either by reading or by writing) some restricted subset of the heap.  These aren't read-only calls; that is often too strong of a guarantee.  A great example would be a GC barrier or GC allocation that may arise if LLVM is used as a backend for a compiler for a higher-level language.  There are probably other examples that arise in high-level languages, but let's use allocation as an example for now.  I can envision two ways of representing allocation in LLVM IR:

Method #1:

	    %object = call @runtime_allocate(... /* pass size and/or other data */)

Method #2:

	    ... ; code preceding the allocation, ending basic block %SomeBlock
	    %objectFast = ... ; IR for the inlined allocation fast path, which produces %objectFast but may yield null if the fast path fails and we need to take the slow path
	    %didFail = icmp eq i8* %objectFast, null
	    br i1 %didFail, label %AllocationSlowPath, label %Continue
	AllocationSlowPath:
	    %objectSlow = call i8* @runtime_allocate_slow_path(... /* pass size and/or other data */)
	    br label %Continue
	Continue:
	    %object = phi i8* [%objectFast, %SomeBlock], [%objectSlow, %AllocationSlowPath]

In both of these cases, the call instruction will appear to clobber the world, leading to more pessimistic optimization.  Moreover, it's not always correct to mark the allocation as readonly; it may read or write observable state.  But if you don't like the allocation example then you can imagine that a GC store barrier would have an almost identical pattern to #2, and it would definitely not be readonly.  Anyway, such a pattern leads to fewer loads being eliminated or hoisted, fewer stores being eliminated or sunk, etc. - all due to those pesky calls.  But the high-level code generator that produces this IR knows for sure that @runtime_allocate or @runtime_allocate_slow_path can only affect a very narrow subset of the heap: namely, it will only read and write the GC's data structures.  So most of the loads and stores surrounding the allocation, which arose from source-level loads and stores, would definitely not interfere with the call, and this would be easy to convey using TBAA.
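To make the barrier claim concrete, here is a hedged sketch of what a GC store barrier might look like in the Method #2 style; the function and block names are hypothetical:

	    ; store %value into a field of %object, with the barrier check inlined
	    %needsBarrier = ... ; IR that checks whether the GC must be notified of this store
	    br i1 %needsBarrier, label %BarrierSlowPath, label %StoreContinue
	BarrierSlowPath:
	    call void @runtime_store_barrier_slow_path(i8* %object) ; not readonly: it mutates the GC's data structures
	    br label %StoreContinue
	StoreContinue:
	    ... ; the store itself, plus surrounding source-level loads and stores

The call only touches GC state, yet without any metadata it pessimizes every load and store around it.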

Hence if we could decorate the calls to these runtime functions as only reading or writing certain TBAA types, then LLVM would be able to perform more CSE/GVN/LICM and probably other goodness as well.
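For example (a hedged sketch - the metadata numbering and type names here are made up, and this assumes the !tbaa.read/!tbaa.write spelling proposed below), a TBAA hierarchy could separate GC-internal state from the user-visible heap:

	    !0 = metadata !{metadata !"tbaa_root"}
	    !42 = metadata !{metadata !"gc_state", metadata !0}
	    !50 = metadata !{metadata !"user_heap", metadata !0}
	    ...
	    %x = load i32* %p, !tbaa !50                      ; source-level load
	    %object = call i8* @runtime_allocate(...) !tbaa !42 ; only touches gc_state
	    %y = load i32* %p, !tbaa !50                      ; GVN could forward %x here

Since gc_state and user_heap are siblings in the hierarchy, dependency analysis could conclude that the call cannot clobber %p.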

Perhaps there are alternate ways of representing allocation or write barriers - but in my experience, when LLVM is integrated into a larger system, the above IR appears to be optimal for other reasons; and anyway, this issue arises in a bunch of other situations.  I'm currently working on an LLVM-based compiler backend for a JavaScript VM, and in this project the following cases have already arisen:

- GC allocation slow paths
- GC write barrier slow paths
- Object reshaping slow paths (JS VMs keep objects fast by sometimes changing their shape; this isn't observable to the user and only affects certain well-defined fields)
- Array access slow paths (if JS type inference tells me that an array really is an array, but I cannot prove that the access is in-bounds, then I still have some work to do for the out-of-bounds case - but I can prove that it will only read/write some things rather than all of the things)

There are likely to be more such cases.  We out-of-line a lot of gnarly code, and that code is often compiled by a compiler other than LLVM, so we won't necessarily be able to feed the IR of those runtime functions to LLVM.  Sometimes that code is hand-written assembly, so having LLVM analyze it is pretty much a non-starter.  Much of that out-of-lined code has a narrow set of effects.  We can certainly summarize those effects using TBAA, so that seems like the most pragmatic way of doing it.

So here's a strawman for how to represent TBAA on calls.

Ideally a call instruction would list what it reads and what it writes.  In most cases this will be overkill, so if we see:

	call @foo(...) !tbaa !42

Then this ought to be interpreted as both reading and writing whatever !42 is.  We could then also have new kinds of metadata references, !tbaa.read and !tbaa.write, so that saying:

	call @foo(...) !tbaa.read !42 !tbaa.write !84

Indicates that this call to @foo reads whatever !42 is and writes whatever !84 is.

Calls that are not decorated with !tbaa, !tbaa.read, or !tbaa.write must be interpreted as if they may read or write anything, subject of course to function attributes (like readonly).  If you ignore the TBAA metadata on a call then you should also assume that the call may read or write anything, again subject to function attributes.  If a function call has both a readonly attribute and some TBAA metadata, then it should be safe to use either the attribute or the metadata for the purpose of dependency analysis.  If you use both sources of information, then the set of possible memory locations that are written is the intersection of the set allowed by the attributes and the set allowed by the TBAA metadata; for example, if @foo (in the above example) were marked readonly then the !tbaa.write !84 could be ignored.
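To spell out the intersection rule on a variant of the running example (again, the names and metadata numbers are hypothetical):

	    call void @foo(...) readonly !tbaa.read !42 !tbaa.write !84
	    ; attributes:   readonly => the call writes nothing
	    ; metadata:     the call writes at most whatever !84 covers
	    ; intersection: the write set is empty, so !tbaa.write !84 may be
	    ;               ignored, and the call reads at most !42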

I'm not entirely convinced that this representation really is totally general; for example, you can imagine wanting to list the set of things read or written, and I gather that this would be hard using my suggested syntax.  But it's just a strawman - the bigger question is: can anyone see show-stopper issues that would prevent TBAA on calls from ever being a thing?

Also this would be a sizable change - a bunch of dependency analysis stuff would have to be taught to look at calls in addition to loads and stores.  But I think it could be broadly useful, maybe even beyond the high-level language compilers that I'm familiar with.


