[clang] [llvm] [LoongArch] Support sc.q instruction for 128bit cmpxchg operation (PR #116771)
via cfe-commits
cfe-commits at lists.llvm.org
Thu Nov 28 00:00:24 PST 2024
================
@@ -0,0 +1,342 @@
+; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py
+; RUN: llc --mtriple=loongarch64 -mattr=+d,-scq,-ld-seq-sa < %s | FileCheck %s --check-prefix=LA64
+; RUN: llc --mtriple=loongarch64 -mattr=+d,+scq,-ld-seq-sa < %s | FileCheck %s --check-prefixes=LA64-SCQ,NO-LD-SEQ-SA
+; RUN: llc --mtriple=loongarch64 -mattr=+d,+scq,+ld-seq-sa < %s | FileCheck %s --check-prefixes=LA64-SCQ,LD-SEQ-SA
+
+define void @cmpxchg_i128_acquire_acquire(ptr %ptr, i128 %cmp, i128 %val) nounwind {
+; LA64-LABEL: cmpxchg_i128_acquire_acquire:
+; LA64: # %bb.0:
+; LA64-NEXT: addi.d $sp, $sp, -32
+; LA64-NEXT: st.d $ra, $sp, 24 # 8-byte Folded Spill
+; LA64-NEXT: move $a6, $a4
+; LA64-NEXT: st.d $a2, $sp, 8
+; LA64-NEXT: st.d $a1, $sp, 0
+; LA64-NEXT: addi.d $a1, $sp, 0
+; LA64-NEXT: ori $a4, $zero, 2
+; LA64-NEXT: ori $a5, $zero, 2
+; LA64-NEXT: move $a2, $a3
+; LA64-NEXT: move $a3, $a6
+; LA64-NEXT: bl %plt(__atomic_compare_exchange_16)
+; LA64-NEXT: ld.d $ra, $sp, 24 # 8-byte Folded Reload
+; LA64-NEXT: addi.d $sp, $sp, 32
+; LA64-NEXT: ret
+;
+; LA64-SCQ-LABEL: cmpxchg_i128_acquire_acquire:
+; LA64-SCQ: # %bb.0:
+; LA64-SCQ-NEXT: .LBB0_1: # =>This Inner Loop Header: Depth=1
+; LA64-SCQ-NEXT: ll.d $a5, $a0, 0
+; LA64-SCQ-NEXT: ld.d $a6, $a0, 8
----------------
heiher wrote:
For all cases, insert a `dbar acquire` between `ll.d` and `ld.d` to prevent reordering.
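A minimal sketch of what the LL/SC loop could look like once that barrier is added, based on the register assignments visible in the test above. The `dbar 20` encoding is an assumption here (it is the hint LLVM already uses elsewhere for acquire barriers on LoongArch); the labels and surrounding control flow are illustrative, not the PR's final output:

```
.LBB0_1:                          # =>This Inner Loop Header: Depth=1
    ll.d    $a5, $a0, 0           # load-linked low 64 bits
    dbar    20                    # acquire barrier (assumed encoding): keeps the
                                  # plain load below from being reordered with ll.d
    ld.d    $a6, $a0, 8           # plain load of the high 64 bits
    bne     $a5, $a1, .LBB0_3     # low halves differ -> fail
    bne     $a6, $a2, .LBB0_3     # high halves differ -> fail
# %bb.2:                          # in Loop: Header=BB0_1 Depth=1
    move    $a7, $a3              # $a7 = new low half ($a4 holds the new high half)
    sc.q    $a7, $a4, $a0         # store-conditional all 128 bits; $a7 = success flag
    beqz    $a7, .LBB0_1          # reservation lost -> retry
.LBB0_3:
```

The separate NO-LD-SEQ-SA/LD-SEQ-SA check prefixes in the RUN lines suggest the barrier is only expected when the target lacks the `ld-seq-sa` feature, i.e. when the hardware does not already guarantee the load ordering this expansion relies on.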
https://github.com/llvm/llvm-project/pull/116771