[LLVMdev] Loads moving across barriers

Matt Arsenault arsenm2 at gmail.com
Fri Jan 3 20:43:19 PST 2014

On Jan 3, 2014, at 10:17 PM, Andrew Trick <atrick at apple.com> wrote:

>> Because the normal case is nomemfence for most address spaces, something else is needed to avoid having to mark every function with nomemfence(0..max address space) when only a few usually matter. I suggest module metadata named !"llvm.used_addrspaces" that will list the address spaces relevant for the module. The absence of a fence will allow inferring nomemfence for all of these address spaces minus any found potential fences. Without the metadata, only nomemfence(0) will be inferred.
> Just to be clear, only leaf functions (and functions whose callee graph is contained completely within the module) will be marked.
> Is it possible to indicate an unspecified address space? A nomemfence attribute with no address space would imply no fence in any address space. Would that eliminate the need for metadata?

That was my first idea, but I don’t remember why I rejected it originally. I think that would work instead.
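To make the scheme under discussion concrete, here is a rough IR sketch. Note that neither the nomemfence attribute nor the !llvm.used_addrspaces metadata exists in upstream LLVM; the names and shapes below are taken from the proposal text and are purely illustrative.

```llvm
; Hypothetical sketch of the proposal; nomemfence and
; !llvm.used_addrspaces are proposed names, not real LLVM constructs.

; A leaf function known to contain no memory fence in any address space:
define void @leaf() nomemfence {
  ret void
}

; Restricting the guarantee to one address space (here, 3):
define void @no_local_fence(i32 addrspace(1)* %p) nomemfence(3) {
  store i32 0, i32 addrspace(1)* %p
  ret void
}

; Module metadata listing the address spaces relevant to this module,
; so nomemfence can be inferred for just those spaces:
!llvm.used_addrspaces = !{!0}
!0 = !{i32 0, i32 1, i32 3}
```

Under this sketch, the alternative Andrew raises would simply be a bare nomemfence (no address-space operand) meaning "no fence in any address space", which would make the module metadata unnecessary.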

> Do you think such an intrinsic would be useful without target support? Do we want a software-only fence?

I’m not sure. I don’t think I have much use for it.

> If you want a hardware fence but want to limit it to the global address space, then maybe the fence instruction should take an address space.

I think the fence instruction should take an address space, but that's mostly a separate issue. The OpenCL mem_fence/barrier I'm trying to fix isn't really the same as the fence instruction, since it applies to all memory accesses and not just atomics. I also need at least six different target intrinsics with memory fences (for mem_fence, barrier, and all the permutations of address spaces), so a single general memfence intrinsic would be insufficient.
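The distinction being drawn might be sketched as follows. The fence instruction shown is real LLVM IR, but the declared intrinsics are hypothetical: the names and target prefix are made up here purely to illustrate the "at least six intrinsics" point, one per mem_fence/barrier and address-space combination.

```llvm
define void @example() {
  ; The existing fence instruction orders atomic operations only,
  ; and takes no address space:
  fence seq_cst
  ret void
}

; Hypothetical per-address-space target intrinsics of the kind described
; (illustrative names only, not real intrinsics), covering mem_fence and
; barrier across the global/local address-space permutations:
declare void @llvm.tgt.memfence.global()
declare void @llvm.tgt.memfence.local()
declare void @llvm.tgt.memfence.global.local()
declare void @llvm.tgt.barrier.global()
declare void @llvm.tgt.barrier.local()
declare void @llvm.tgt.barrier.global.local()
```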

>> Does this sound good?
> Overall yes. Thanks.
> -Andy

