[PATCH] D135948: [LoongArch] Add codegen support for cmpxchg on LA64

hev via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Fri Oct 14 02:34:53 PDT 2022


hev added inline comments.


================
Comment at: llvm/test/CodeGen/LoongArch/ir-instruction/atomic-cmpxchg.ll:20
+; LA64-NEXT:  .LBB0_1: # =>This Inner Loop Header: Depth=1
+; LA64-NEXT:    dbar 0
+; LA64-NEXT:    ll.w $a4, $a3, 0
----------------
xry111 wrote:
> xry111 wrote:
> > hev wrote:
> > > Not needed.
> > > 
> > > ll: full-membar + load-exclusive
> > Jiaxun told me (via linux-mips):
> > 
> > > I had checked with Loongson guys and they confirmed that the workaround still needs to be applied to latest 3A4000
> > > processors, including 3A4000 for MIPS and 3A5000 for LoongArch. Though, the reason behind the workaround varies with the evaluation
> > > of their uArch, for GS464V based core, barrier is required as the uArch design allows regular load to be reordered after an atomic linked
> > > load, and that would break assumption of compiler atomic constraints.
> > 
> > In GCC we use `dbar 0x700`, so in the future HW engineers can fix this issue and make `dbar 0x700` no-op.
> Ouch, I mean the `dbar 1792` instruction at `LBB0_3`.  Yes this one can be removed for 3A5000.
> 
> But what should we do if `LLDBAR` bit is 0 in CPUCFG?
Nice question.

In what case do we need a memory barrier before the atomic op? An atomic op with store-release semantics? If so, why not implement `sc` as membar + store-conditional instead?

If the memory barrier semantics of `ll` remain membar + load-exclusive on future hardware, and the atomic op needs load-acquire semantics, then I don't think we can ever make `dbar 0x700` a no-op.


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D135948/new/

https://reviews.llvm.org/D135948


