[llvm] r255737 - [IR] Add support for floating point atomic loads and stores

Philip Reames via llvm-commits llvm-commits at lists.llvm.org
Tue Dec 15 17:28:43 PST 2015


This seems to have broken all the Windows bots and nothing else. I've 
taken a guess at a possible fix in r255743; hopefully that resolves 
the problem.  I'm not entirely clear on why this broke in the first place.

Philip


On 12/15/2015 04:49 PM, Philip Reames via llvm-commits wrote:
> Author: reames
> Date: Tue Dec 15 18:49:36 2015
> New Revision: 255737
>
> URL: http://llvm.org/viewvc/llvm-project?rev=255737&view=rev
> Log:
> [IR] Add support for floating point atomic loads and stores
>
> This patch allows atomic loads and stores of floating point values to be specified in the IR and adds an adapter that allows them to be lowered via the existing backend support for the bitcast-to-equivalent-integer idiom.
>
> Previously, the only way to specify an atomic float operation was to bitcast the pointer to an i32, load the value as an i32, then bitcast to a float. At its most basic, this patch simply moves this expansion step to the point at which we start lowering to the backend.
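>
> For concreteness, here is a minimal sketch of the two forms (the function names are illustrative, not from the patch; the expand-atomic-non-integer.ll test added below checks exactly this rewrite for loads and stores):
>
>   ; New form, now accepted directly in the IR:
>   define float @load_float_atomic(float* %ptr) {
>     %res = load atomic float, float* %ptr unordered, align 4
>     ret float %res
>   }
>
>   ; Equivalent integer idiom that AtomicExpand rewrites the new form into,
>   ; and previously the only way to express an atomic float load:
>   define float @load_float_bitcast_idiom(float* %ptr) {
>     %iptr = bitcast float* %ptr to i32*
>     %ival = load atomic i32, i32* %iptr unordered, align 4
>     %fval = bitcast i32 %ival to float
>     ret float %fval
>   }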
>
> This patch does not add canonicalization rules to convert the bitcast idioms to the appropriate atomic loads. I plan to do that in the future, but for now, let's simply add the support. I'd like to get instruction selection working through at least one backend (x86-64) without the bitcast conversion before canonicalizing into this form.
>
> Similarly, I haven't yet added the target hooks to opt out of the lowering step I added to AtomicExpand. I figured it would make more sense to add those once at least one backend (x86) was ready to actually opt out.
>
> As you can see from the included tests, the generated code quality is not great. I plan on submitting some patches to fix this, but help from others along those lines would be very welcome. I'm not super familiar with the backend, and my ramp-up time may be material.
>
> Differential Revision: http://reviews.llvm.org/D15471
>
>
> Added:
>      llvm/trunk/test/CodeGen/X86/atomic-non-integer.ll
>      llvm/trunk/test/Transforms/AtomicExpand/X86/expand-atomic-non-integer.ll
>      llvm/trunk/test/Verifier/atomics.ll
> Modified:
>      llvm/trunk/docs/LangRef.rst
>      llvm/trunk/lib/CodeGen/AtomicExpandPass.cpp
>      llvm/trunk/lib/IR/Verifier.cpp
>
> Modified: llvm/trunk/docs/LangRef.rst
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/docs/LangRef.rst?rev=255737&r1=255736&r2=255737&view=diff
> ==============================================================================
> --- llvm/trunk/docs/LangRef.rst (original)
> +++ llvm/trunk/docs/LangRef.rst Tue Dec 15 18:49:36 2015
> @@ -6844,12 +6844,12 @@ If the ``load`` is marked as ``atomic``,
>   ``release`` and ``acq_rel`` orderings are not valid on ``load``
>   instructions. Atomic loads produce :ref:`defined <memmodel>` results
>   when they may see multiple atomic stores. The type of the pointee must
> -be an integer type whose bit width is a power of two greater than or
> -equal to eight and less than or equal to a target-specific size limit.
> -``align`` must be explicitly specified on atomic loads, and the load has
> -undefined behavior if the alignment is not set to a value which is at
> -least the size in bytes of the pointee. ``!nontemporal`` does not have
> -any defined semantics for atomic loads.
> +be an integer or floating point type whose bit width is a power of two,
> +greater than or equal to eight, and less than or equal to a
> +target-specific size limit. ``align`` must be explicitly specified on
> +atomic loads, and the load has undefined behavior if the alignment is
> +not set to a value which is at least the size in bytes of the pointee.
> +``!nontemporal`` does not have any defined semantics for atomic loads.
>   
>   The optional constant ``align`` argument specifies the alignment of the
>   operation (that is, the alignment of the memory address). A value of 0
> @@ -6969,12 +6969,13 @@ If the ``store`` is marked as ``atomic``
>   ``acquire`` and ``acq_rel`` orderings aren't valid on ``store``
>   instructions. Atomic loads produce :ref:`defined <memmodel>` results
>   when they may see multiple atomic stores. The type of the pointee must
> -be an integer type whose bit width is a power of two greater than or
> -equal to eight and less than or equal to a target-specific size limit.
> -``align`` must be explicitly specified on atomic stores, and the store
> -has undefined behavior if the alignment is not set to a value which is
> -at least the size in bytes of the pointee. ``!nontemporal`` does not
> -have any defined semantics for atomic stores.
> +be an integer or floating point type whose bit width is a power of two,
> +greater than or equal to eight, and less than or equal to a
> +target-specific size limit.  ``align`` must be explicitly specified
> +on atomic stores, and the store has undefined behavior if the alignment
> +is not set to a value which is at least the size in bytes of the
> +pointee. ``!nontemporal`` does not have any defined semantics for
> +atomic stores.
>   
>   The optional constant ``align`` argument specifies the alignment of the
>   operation (that is, the alignment of the memory address). A value of 0
>
> Modified: llvm/trunk/lib/CodeGen/AtomicExpandPass.cpp
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/CodeGen/AtomicExpandPass.cpp?rev=255737&r1=255736&r2=255737&view=diff
> ==============================================================================
> --- llvm/trunk/lib/CodeGen/AtomicExpandPass.cpp (original)
> +++ llvm/trunk/lib/CodeGen/AtomicExpandPass.cpp Tue Dec 15 18:49:36 2015
> @@ -8,8 +8,10 @@
>   //===----------------------------------------------------------------------===//
>   //
>   // This file contains a pass (at IR level) to replace atomic instructions with
> -// either (intrinsic-based) load-linked/store-conditional loops or
> -// AtomicCmpXchg.
> +// target specific instructions which implement the same semantics in a way
> +// which better fits the target backend.  This can include the use of either
> +// (intrinsic-based) load-linked/store-conditional loops, AtomicCmpXchg, or
> +// type coercions.
>   //
>   //===----------------------------------------------------------------------===//
>   
> @@ -46,9 +48,12 @@ namespace {
>     private:
>       bool bracketInstWithFences(Instruction *I, AtomicOrdering Order,
>                                  bool IsStore, bool IsLoad);
> +    IntegerType *getCorrespondingIntegerType(Type *T, const DataLayout &DL);
> +    LoadInst *convertAtomicLoadToIntegerType(LoadInst *LI);
>       bool tryExpandAtomicLoad(LoadInst *LI);
>       bool expandAtomicLoadToLL(LoadInst *LI);
>       bool expandAtomicLoadToCmpXchg(LoadInst *LI);
> +    StoreInst *convertAtomicStoreToIntegerType(StoreInst *SI);
>       bool expandAtomicStore(StoreInst *SI);
>       bool tryExpandAtomicRMW(AtomicRMWInst *AI);
>       bool expandAtomicOpToLLSC(
> @@ -130,9 +135,27 @@ bool AtomicExpand::runOnFunction(Functio
>       }
>   
>       if (LI) {
> +      if (LI->getType()->isFloatingPointTy()) {
> +        // TODO: add a TLI hook to control this so that each target can
> +        // convert to lowering the original type one at a time.
> +        LI = convertAtomicLoadToIntegerType(LI);
> +        assert(LI->getType()->isIntegerTy() && "invariant broken");
> +        MadeChange = true;
> +      }
> +
>         MadeChange |= tryExpandAtomicLoad(LI);
> -    } else if (SI && TLI->shouldExpandAtomicStoreInIR(SI)) {
> -      MadeChange |= expandAtomicStore(SI);
> +    } else if (SI) {
> +      if (SI->getValueOperand()->getType()->isFloatingPointTy()) {
> +        // TODO: add a TLI hook to control this so that each target can
> +        // convert to lowering the original type one at a time.
> +        SI = convertAtomicStoreToIntegerType(SI);
> +        assert(SI->getValueOperand()->getType()->isIntegerTy() &&
> +               "invariant broken");
> +        MadeChange = true;
> +      }
> +
> +      if (TLI->shouldExpandAtomicStoreInIR(SI))
> +        MadeChange |= expandAtomicStore(SI);
>       } else if (RMWI) {
>         // There are two different ways of expanding RMW instructions:
>         // - into a load if it is idempotent
> @@ -172,6 +195,42 @@ bool AtomicExpand::bracketInstWithFences
>     return (LeadingFence || TrailingFence);
>   }
>   
> +/// Get the iX type with the same bitwidth as T.
> +IntegerType *AtomicExpand::getCorrespondingIntegerType(Type *T,
> +                                                       const DataLayout &DL) {
> +  EVT VT = TLI->getValueType(DL, T);
> +  unsigned BitWidth = VT.getStoreSizeInBits();
> +  assert(BitWidth == VT.getSizeInBits() && "must be a power of two");
> +  return IntegerType::get(T->getContext(), BitWidth);
> +}
> +
> +/// Convert an atomic load of a non-integral type to an integer load of the
> +/// equivalent bitwidth.  See the function comment on
> +/// convertAtomicStoreToIntegerType for background.
> +LoadInst *AtomicExpand::convertAtomicLoadToIntegerType(LoadInst *LI) {
> +  auto *M = LI->getModule();
> +  Type *NewTy = getCorrespondingIntegerType(LI->getType(),
> +                                            M->getDataLayout());
> +
> +  IRBuilder<> Builder(LI);
> +
> +  Value *Addr = LI->getPointerOperand();
> +  Type *PT = PointerType::get(NewTy,
> +                              Addr->getType()->getPointerAddressSpace());
> +  Value *NewAddr = Builder.CreateBitCast(Addr, PT);
> +
> +  auto *NewLI = Builder.CreateLoad(NewAddr);
> +  NewLI->setAlignment(LI->getAlignment());
> +  NewLI->setVolatile(LI->isVolatile());
> +  NewLI->setAtomic(LI->getOrdering(), LI->getSynchScope());
> +  DEBUG(dbgs() << "Replaced " << *LI << " with " << *NewLI << "\n");
> +
> +  Value *NewVal = Builder.CreateBitCast(NewLI, LI->getType());
> +  LI->replaceAllUsesWith(NewVal);
> +  LI->eraseFromParent();
> +  return NewLI;
> +}
> +
>   bool AtomicExpand::tryExpandAtomicLoad(LoadInst *LI) {
>     switch (TLI->shouldExpandAtomicLoadInIR(LI)) {
>     case TargetLoweringBase::AtomicExpansionKind::None:
> @@ -222,6 +281,35 @@ bool AtomicExpand::expandAtomicLoadToCmp
>     return true;
>   }
>   
> +/// Convert an atomic store of a non-integral type to an integer store of the
> +/// equivalent bitwidth.  We used to not support floating point or vector
> +/// atomics in the IR at all.  The backends learned to deal with the bitcast
> +/// idiom because that was the only way of expressing the notion of an atomic
> +/// float or vector store.  The long term plan is to teach each backend to
> +/// instruction select from the original atomic store, but as a migration
> +/// mechanism, we convert back to the old format which the backends understand.
> +/// Each backend will need individual work to recognize the new format.
> +StoreInst *AtomicExpand::convertAtomicStoreToIntegerType(StoreInst *SI) {
> +  IRBuilder<> Builder(SI);
> +  auto *M = SI->getModule();
> +  Type *NewTy = getCorrespondingIntegerType(SI->getValueOperand()->getType(),
> +                                            M->getDataLayout());
> +  Value *NewVal = Builder.CreateBitCast(SI->getValueOperand(), NewTy);
> +
> +  Value *Addr = SI->getPointerOperand();
> +  Type *PT = PointerType::get(NewTy,
> +                              Addr->getType()->getPointerAddressSpace());
> +  Value *NewAddr = Builder.CreateBitCast(Addr, PT);
> +
> +  StoreInst *NewSI = Builder.CreateStore(NewVal, NewAddr);
> +  NewSI->setAlignment(SI->getAlignment());
> +  NewSI->setVolatile(SI->isVolatile());
> +  NewSI->setAtomic(SI->getOrdering(), SI->getSynchScope());
> +  DEBUG(dbgs() << "Replaced " << *SI << " with " << *NewSI << "\n");
> +  SI->eraseFromParent();
> +  return NewSI;
> +}
> +
>   bool AtomicExpand::expandAtomicStore(StoreInst *SI) {
>     // This function is only called on atomic stores that are too large to be
>     // atomic if implemented as a native store. So we replace them by an
>
> Modified: llvm/trunk/lib/IR/Verifier.cpp
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/IR/Verifier.cpp?rev=255737&r1=255736&r2=255737&view=diff
> ==============================================================================
> --- llvm/trunk/lib/IR/Verifier.cpp (original)
> +++ llvm/trunk/lib/IR/Verifier.cpp Tue Dec 15 18:49:36 2015
> @@ -2732,7 +2732,8 @@ void Verifier::visitLoadInst(LoadInst &L
>       Assert(LI.getAlignment() != 0,
>              "Atomic load must specify explicit alignment", &LI);
>       if (!ElTy->isPointerTy()) {
> -      Assert(ElTy->isIntegerTy(), "atomic load operand must have integer type!",
> +      Assert(ElTy->isIntegerTy() || ElTy->isFloatingPointTy(),
> +             "atomic load operand must have integer or floating point type!",
>                &LI, ElTy);
>         unsigned Size = ElTy->getPrimitiveSizeInBits();
>         Assert(Size >= 8 && !(Size & (Size - 1)),
> @@ -2761,8 +2762,9 @@ void Verifier::visitStoreInst(StoreInst
>       Assert(SI.getAlignment() != 0,
>              "Atomic store must specify explicit alignment", &SI);
>       if (!ElTy->isPointerTy()) {
> -      Assert(ElTy->isIntegerTy(),
> -             "atomic store operand must have integer type!", &SI, ElTy);
> +      Assert(ElTy->isIntegerTy() || ElTy->isFloatingPointTy(),
> +             "atomic store operand must have integer or floating point type!",
> +             &SI, ElTy);
>         unsigned Size = ElTy->getPrimitiveSizeInBits();
>         Assert(Size >= 8 && !(Size & (Size - 1)),
>                "atomic store operand must be power-of-two byte-sized integer",
>
> Added: llvm/trunk/test/CodeGen/X86/atomic-non-integer.ll
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/atomic-non-integer.ll?rev=255737&view=auto
> ==============================================================================
> --- llvm/trunk/test/CodeGen/X86/atomic-non-integer.ll (added)
> +++ llvm/trunk/test/CodeGen/X86/atomic-non-integer.ll Tue Dec 15 18:49:36 2015
> @@ -0,0 +1,108 @@
> +; RUN: llc < %s -mtriple=x86_64-linux-generic -verify-machineinstrs -mattr=sse2 | FileCheck %s
> +
> +; Note: This test is testing that the lowering for atomics matches what we
> +; currently emit for non-atomics + the atomic restriction.  The presence of
> +; particular lowering detail in these tests should not be read as requiring
> +; that detail for correctness unless it's related to the atomicity itself.
> +; (Specifically, there were reviewer questions about the lowering for half
> +;  values and their calling convention which remain unresolved.)
> +
> +define void @store_half(half* %fptr, half %v) {
> +; CHECK-LABEL: @store_half
> +; CHECK: movq	%rdi, %rbx
> +; CHECK: callq	__gnu_f2h_ieee
> +; CHECK: movw	%ax, (%rbx)
> +  store atomic half %v, half* %fptr unordered, align 2
> +  ret void
> +}
> +
> +define void @store_float(float* %fptr, float %v) {
> +; CHECK-LABEL: @store_float
> +; CHECK: movd	%xmm0, %eax
> +; CHECK: movl	%eax, (%rdi)
> +  store atomic float %v, float* %fptr unordered, align 4
> +  ret void
> +}
> +
> +define void @store_double(double* %fptr, double %v) {
> +; CHECK-LABEL: @store_double
> +; CHECK: movd	%xmm0, %rax
> +; CHECK: movq	%rax, (%rdi)
> +  store atomic double %v, double* %fptr unordered, align 8
> +  ret void
> +}
> +
> +define void @store_fp128(fp128* %fptr, fp128 %v) {
> +; CHECK-LABEL: @store_fp128
> +; CHECK: callq	__sync_lock_test_and_set_16
> +  store atomic fp128 %v, fp128* %fptr unordered, align 16
> +  ret void
> +}
> +
> +define half @load_half(half* %fptr) {
> +; CHECK-LABEL: @load_half
> +; CHECK: movw	(%rdi), %ax
> +; CHECK: movzwl	%ax, %edi
> +; CHECK: jmp	__gnu_h2f_ieee
> +  %v = load atomic half, half* %fptr unordered, align 2
> +  ret half %v
> +}
> +
> +define float @load_float(float* %fptr) {
> +; CHECK-LABEL: @load_float
> +; CHECK: movl	(%rdi), %eax
> +; CHECK: movd	%eax, %xmm0
> +  %v = load atomic float, float* %fptr unordered, align 4
> +  ret float %v
> +}
> +
> +define double @load_double(double* %fptr) {
> +; CHECK-LABEL: @load_double
> +; CHECK: movq	(%rdi), %rax
> +; CHECK: movd	%rax, %xmm0
> +  %v = load atomic double, double* %fptr unordered, align 8
> +  ret double %v
> +}
> +
> +define fp128 @load_fp128(fp128* %fptr) {
> +; CHECK-LABEL: @load_fp128
> +; CHECK: callq	__sync_val_compare_and_swap_16
> +  %v = load atomic fp128, fp128* %fptr unordered, align 16
> +  ret fp128 %v
> +}
> +
> +
> +; sanity check the seq_cst lowering since that's the
> +; interesting one from an ordering perspective on x86.
> +
> +define void @store_float_seq_cst(float* %fptr, float %v) {
> +; CHECK-LABEL: @store_float_seq_cst
> +; CHECK: movd	%xmm0, %eax
> +; CHECK: xchgl	%eax, (%rdi)
> +  store atomic float %v, float* %fptr seq_cst, align 4
> +  ret void
> +}
> +
> +define void @store_double_seq_cst(double* %fptr, double %v) {
> +; CHECK-LABEL: @store_double_seq_cst
> +; CHECK: movd	%xmm0, %rax
> +; CHECK: xchgq	%rax, (%rdi)
> +  store atomic double %v, double* %fptr seq_cst, align 8
> +  ret void
> +}
> +
> +define float @load_float_seq_cst(float* %fptr) {
> +; CHECK-LABEL: @load_float_seq_cst
> +; CHECK: movl	(%rdi), %eax
> +; CHECK: movd	%eax, %xmm0
> +  %v = load atomic float, float* %fptr seq_cst, align 4
> +  ret float %v
> +}
> +
> +define double @load_double_seq_cst(double* %fptr) {
> +; CHECK-LABEL: @load_double_seq_cst
> +; CHECK: movq	(%rdi), %rax
> +; CHECK: movd	%rax, %xmm0
> +  %v = load atomic double, double* %fptr seq_cst, align 8
> +  ret double %v
> +}
>
> Added: llvm/trunk/test/Transforms/AtomicExpand/X86/expand-atomic-non-integer.ll
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Transforms/AtomicExpand/X86/expand-atomic-non-integer.ll?rev=255737&view=auto
> ==============================================================================
> --- llvm/trunk/test/Transforms/AtomicExpand/X86/expand-atomic-non-integer.ll (added)
> +++ llvm/trunk/test/Transforms/AtomicExpand/X86/expand-atomic-non-integer.ll Tue Dec 15 18:49:36 2015
> @@ -0,0 +1,82 @@
> +; RUN: opt -S %s -atomic-expand -mtriple=x86_64-linux-gnu | FileCheck %s
> +
> +; This file tests the functions `llvm::convertAtomicLoadToIntegerType` and
> +; `llvm::convertAtomicStoreToIntegerType`. If X86 stops using this
> +; functionality, please move this test to a target which still uses it.
> +
> +define float @float_load_expand(float* %ptr) {
> +; CHECK-LABEL: @float_load_expand
> +; CHECK: %1 = bitcast float* %ptr to i32*
> +; CHECK: %2 = load atomic i32, i32* %1 unordered, align 4
> +; CHECK: %3 = bitcast i32 %2 to float
> +; CHECK: ret float %3
> +  %res = load atomic float, float* %ptr unordered, align 4
> +  ret float %res
> +}
> +
> +define float @float_load_expand_seq_cst(float* %ptr) {
> +; CHECK-LABEL: @float_load_expand_seq_cst
> +; CHECK: %1 = bitcast float* %ptr to i32*
> +; CHECK: %2 = load atomic i32, i32* %1 seq_cst, align 4
> +; CHECK: %3 = bitcast i32 %2 to float
> +; CHECK: ret float %3
> +  %res = load atomic float, float* %ptr seq_cst, align 4
> +  ret float %res
> +}
> +
> +define float @float_load_expand_vol(float* %ptr) {
> +; CHECK-LABEL: @float_load_expand_vol
> +; CHECK: %1 = bitcast float* %ptr to i32*
> +; CHECK: %2 = load atomic volatile i32, i32* %1 unordered, align 4
> +; CHECK: %3 = bitcast i32 %2 to float
> +; CHECK: ret float %3
> +  %res = load atomic volatile float, float* %ptr unordered, align 4
> +  ret float %res
> +}
> +
> +define float @float_load_expand_addr1(float addrspace(1)* %ptr) {
> +; CHECK-LABEL: @float_load_expand_addr1
> +; CHECK: %1 = bitcast float addrspace(1)* %ptr to i32 addrspace(1)*
> +; CHECK: %2 = load atomic i32, i32 addrspace(1)* %1 unordered, align 4
> +; CHECK: %3 = bitcast i32 %2 to float
> +; CHECK: ret float %3
> +  %res = load atomic float, float addrspace(1)* %ptr unordered, align 4
> +  ret float %res
> +}
> +
> +define void @float_store_expand(float* %ptr, float %v) {
> +; CHECK-LABEL: @float_store_expand
> +; CHECK: %1 = bitcast float %v to i32
> +; CHECK: %2 = bitcast float* %ptr to i32*
> +; CHECK: store atomic i32 %1, i32* %2 unordered, align 4
> +  store atomic float %v, float* %ptr unordered, align 4
> +  ret void
> +}
> +
> +define void @float_store_expand_seq_cst(float* %ptr, float %v) {
> +; CHECK-LABEL: @float_store_expand_seq_cst
> +; CHECK: %1 = bitcast float %v to i32
> +; CHECK: %2 = bitcast float* %ptr to i32*
> +; CHECK: store atomic i32 %1, i32* %2 seq_cst, align 4
> +  store atomic float %v, float* %ptr seq_cst, align 4
> +  ret void
> +}
> +
> +define void @float_store_expand_vol(float* %ptr, float %v) {
> +; CHECK-LABEL: @float_store_expand_vol
> +; CHECK: %1 = bitcast float %v to i32
> +; CHECK: %2 = bitcast float* %ptr to i32*
> +; CHECK: store atomic volatile i32 %1, i32* %2 unordered, align 4
> +  store atomic volatile float %v, float* %ptr unordered, align 4
> +  ret void
> +}
> +
> +define void @float_store_expand_addr1(float addrspace(1)* %ptr, float %v) {
> +; CHECK-LABEL: @float_store_expand_addr1
> +; CHECK: %1 = bitcast float %v to i32
> +; CHECK: %2 = bitcast float addrspace(1)* %ptr to i32 addrspace(1)*
> +; CHECK: store atomic i32 %1, i32 addrspace(1)* %2 unordered, align 4
> +  store atomic float %v, float addrspace(1)* %ptr unordered, align 4
> +  ret void
> +}
> +
>
> Added: llvm/trunk/test/Verifier/atomics.ll
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Verifier/atomics.ll?rev=255737&view=auto
> ==============================================================================
> --- llvm/trunk/test/Verifier/atomics.ll (added)
> +++ llvm/trunk/test/Verifier/atomics.ll Tue Dec 15 18:49:36 2015
> @@ -0,0 +1,14 @@
> +; RUN: not opt -verify < %s 2>&1 | FileCheck %s
> +
> +; CHECK: atomic store operand must have integer or floating point type!
> +; CHECK: atomic load operand must have integer or floating point type!
> +
> +define void @foo(x86_mmx* %P, x86_mmx %v) {
> +  store atomic x86_mmx %v, x86_mmx* %P unordered, align 8
> +  ret void
> +}
> +
> +define x86_mmx @bar(x86_mmx* %P) {
> +  %v = load atomic x86_mmx, x86_mmx* %P unordered, align 8
> +  ret x86_mmx %v
> +}
>
>
> _______________________________________________
> llvm-commits mailing list
> llvm-commits at lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-commits


