[PATCH] D79955: [GlobalISel][InlineAsm] Add early return for memory inputs that need to be indirectified

Leonard Chan via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Thu May 21 16:49:04 PDT 2020


leonardchan added a comment.

Hi! I believe commits 5425cdc3adf9998aeaf587d93417bd2f4f1373c9 <https://reviews.llvm.org/rG5425cdc3adf9998aeaf587d93417bd2f4f1373c9> and 91063cf85a4038537731f582a27936187fb0759c <https://reviews.llvm.org/rG91063cf85a4038537731f582a27936187fb0759c> lead to the crash we're seeing in https://bugs.llvm.org/show_bug.cgi?id=46028.

The bug has full details, but the tl;dr is that with those 2 commits, we're seeing bad assembly generated for the following code:

  inline Atomic64 NoBarrier_CompareAndSwap(volatile Atomic64* ptr,
                                           Atomic64 old_value,
                                           Atomic64 new_value) {
    Atomic64 prev;
    int32_t temp;
    __asm__ __volatile__ (  // NOLINT
      "0:                                    \n\t"
      "ldxr %[prev], %[ptr]                  \n\t"
      "cmp %[prev], %[old_value]             \n\t"
      "bne 1f                                \n\t"
      "stxr %w[temp], %[new_value], %[ptr]   \n\t"
      "cbnz %w[temp], 0b                     \n\t"
      "1:                                    \n\t"
      : [prev]"=&r" (prev),
        [temp]"=&r" (temp),
        [ptr]"+Q" (*ptr)
      : [old_value]"IJr" (old_value),
        [new_value]"r" (new_value)
      : "cc", "memory"
    );  // NOLINT
    return prev;
  }

produces

  2c: 08 7d 5f c8                   ldxr    x8, [x8]
  30: 1f 01 0a eb                   cmp     x8, x10
  34: 61 00 00 54                   b.ne    0x40 <_ZN6google8protobuf8internal24NoBarrier_CompareAndSwapEPVlll+0x40>
  38: 0b 7d 09 c8                   stxr    w9, x11, [x8]
  3c: 89 ff ff 35                   cbnz    w9, 0x2c <_ZN6google8protobuf8internal24NoBarrier_CompareAndSwapEPVlll+0x2c>
  40: e8 0b 00 f9                   str     x8, [sp, #16]
  44: e9 0f 00 b9                   str     w9, [sp, #12]

when it should instead produce

  2c: 0c 7d 5f c8                   ldxr    x12, [x8]
  30: 9f 01 0a eb                   cmp     x12, x10
  34: 61 00 00 54                   b.ne    0x40 <_ZN6google8protobuf8internal24NoBarrier_CompareAndSwapEPVlll+0x40>
  38: 0b 7d 09 c8                   stxr    w9, x11, [x8]
  3c: 89 ff ff 35                   cbnz    w9, 0x2c <_ZN6google8protobuf8internal24NoBarrier_CompareAndSwapEPVlll+0x2c>
  40: ec 0b 00 f9                   str     x12, [sp, #16]
  44: e9 0f 00 b9                   str     w9, [sp, #12]

The difference is that the first case clobbers `x8`, which can lead to reading from a bad address in `ldxr x8, [x8]`. We're only seeing this on our AArch64 debug builds, which I believe is the only configuration where GlobalISel is used.
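
In case it helps narrow things down, here is a smaller test in the same spirit (a hypothetical reduction I just wrote, not code from our tree; the `LoadExclusive` name is made up). The point is that the register holding the address for the `"+Q"` memory operand must stay distinct from the `"=&r"` early-clobber output across the asm:

  #include <cstdint>

  // Reduced sketch: an early-clobber register output plus an indirect
  // ("Q"-constrained) memory operand. A correct lowering must not assign
  // %[out] the same register that holds the address of *ptr.
  inline int64_t LoadExclusive(volatile int64_t* ptr) {
    int64_t out;
    __asm__ __volatile__(
      "ldxr %[out], %[mem]  \n\t"
      "clrex                \n\t"
      : [out] "=&r" (out),
        [mem] "+Q" (*ptr)
      :
      : "memory");
    return out;
  }

If the memory-input handling regresses the same way here, I would expect the `ldxr` destination and the address to collapse into a single register, just like the `ldxr x8, [x8]` above.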

Could you take a look or revert those 2 patches? Thanks.


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D79955/new/

https://reviews.llvm.org/D79955




