[PATCH] D49994: Allow constraining virtual register's class within reason

Alexey Zhikhartsev via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Tue Aug 7 12:13:48 PDT 2018


alexey.zhikhar added a reviewer: t.p.northover.
alexey.zhikhar added subscribers: rengolin, t.p.northover, asl.
alexey.zhikhar added a comment.

@t.p.northover @asl @rengolin

For some of the failing ARM/AArch64 tests, I see additional `mov`s being emitted; for example, in `CodeGen/AArch64/and-sink.ll`: if you look at the assembly after applying the patch, you will see an additional `mov` on the path where `%c` (`w1`) is zero, because the two return paths now merge into a shared exit block that copies `w8` into `w0`. I'm not sure how important this is, so I would appreciate some feedback from the ARM/AArch64 backend people. Please note that performance is unchanged for the performance-critical atomic compare-and-swap operations.

  ; RUN: llc -mtriple=aarch64-linux-gnu -verify-machineinstrs < %s | FileCheck %s
  
  @A = global i32 zeroinitializer
  
  ; Test that and is sunk into cmp block to form tbz.
  define i32 @and_sink1(i32 %a, i1 %c) {
    %and = and i32 %a, 4
    br i1 %c, label %bb0, label %bb2
  bb0:
    %cmp = icmp eq i32 %and, 0
    store i32 0, i32* @A
    br i1 %cmp, label %bb1, label %bb2
  bb1:
    ret i32 1
  bb2:
    ret i32 0
  }

Original assembly:

  and_sink1:                              // @and_sink1
          .cfi_startproc
  // %bb.0:
          tbz     w1, #0, .LBB0_3
  // %bb.1:                               // %bb0
          adrp    x8, A
          str     wzr, [x8, :lo12:A]
          tbnz    w0, #2, .LBB0_3
  // %bb.2:
          orr     w0, wzr, #0x1
          ret
  .LBB0_3:                                // %bb2
          mov     w0, wzr
          ret
  .Lfunc_end0:
          .size   and_sink1, .Lfunc_end0-and_sink1
          .cfi_endproc

Assembly after applying the patch:

  and_sink1:                              // @and_sink1
          .cfi_startproc
  // %bb.0:
          tbz     w1, #0, .LBB0_2
  // %bb.1:                               // %bb0
          adrp    x8, A
          str     wzr, [x8, :lo12:A]
          orr     w8, wzr, #0x1
          tbz     w0, #2, .LBB0_3
  .LBB0_2:                                // %bb2
          mov     w8, wzr
  .LBB0_3:                                // %bb1
          mov     w0, w8
          ret
  .Lfunc_end0:
          .size   and_sink1, .Lfunc_end0-and_sink1
          .cfi_endproc

Here is a list of the ARM/AArch64 tests that look suspicious due to additional `mov` operations:

  LLVM :: CodeGen/AArch64/redundant-copy-elim-empty-mbb.ll
  LLVM :: CodeGen/ARM/2011-04-11-MachineLICMBug.ll
  LLVM :: CodeGen/AArch64/and-sink.ll
  LLVM :: CodeGen/AArch64/arm64-fast-isel-conversion-fallback.ll
  LLVM :: CodeGen/AArch64/optimize-cond-branch.ll

Also, `CodeGen/ARM/2011-08-25-ldmia_ret.ll` spills one additional register.
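
For reviewers less familiar with the API under review, below is a minimal sketch of a typical `MachineRegisterInfo::constrainRegClass` call site, assuming the in-tree signature at the time of this review; the `tryConstrain` helper and its names are hypothetical illustration, not code from this patch.

  // Sketch only: attempt to narrow a virtual register's class to RC.
  // constrainRegClass() returns the resulting register class on
  // success, or nullptr if Reg cannot be constrained to RC (e.g. the
  // two classes do not have a common subclass).
  #include "llvm/CodeGen/MachineRegisterInfo.h"
  #include "llvm/CodeGen/TargetRegisterInfo.h"
  
  using namespace llvm;
  
  static bool tryConstrain(MachineRegisterInfo &MRI, unsigned Reg,
                           const TargetRegisterClass *RC) {
    return MRI.constrainRegClass(Reg, RC) != nullptr;
  }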


https://reviews.llvm.org/D49994




