[PATCH] D79870: [RISCV] Add matching of codegen patterns to RISCV Bit Manipulation Zbb asm instructions

Alex Bradbury via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Thu May 28 07:35:14 PDT 2020


asb added a comment.

Sorry for the delay on this - the lockdown situation is really hurting my review time, though it looks like my childcare situation will improve from the week after next.

Quite a lot of comments below, but I think this is actually almost ready to go with a few extra patterns, or at least reworked test cases. Thanks again for your work.

I've added a few comments inline, beyond that:

- For the test files, I'd probably just have RV32IB and RV64IB check lines. The RV32I-NOT/RV64I-NOT lines are helpful defence in depth, but they're not very compatible with update_llc_test_checks.py, which we prefer to use whenever possible. This would be like the float-*.ll test files. Alternatively, we could just let update_llc_test_checks.py generate check lines for the non-bitmanip targets as well (more like the mul.ll test).
- Use update_llc_test_checks.py to generate and maintain the check lines
- Missing slliu.w?
- No immediate variants? e.g. sloi, sroi
- Missing some W instructions such as SLOW and SROW
- I'd suggest having one file called something like rvZbb.ll which contains tests that are relevant to both RV32 and RV64 (see the sketch of possible RUN lines just after this list). Note that this likely does include both i64 and i32 test cases - we want to ensure reasonable codegen for i32 values on RV64 and for i64 values on RV32, in both cases using hardware instructions where possible. If there are tests that really would just be noise for RV32, then put those in rv64Zbb.ll.
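
Something along these lines for the shared file's header (a rough sketch only - adjust the -mattr spelling to whatever this patch uses for the experimental bitmanip feature):

  ; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py
  ; RUN: llc -mtriple=riscv32 -mattr=+experimental-b -verify-machineinstrs < %s \
  ; RUN:   | FileCheck %s -check-prefix=RV32IB
  ; RUN: llc -mtriple=riscv64 -mattr=+experimental-b -verify-machineinstrs < %s \
  ; RUN:   | FileCheck %s -check-prefix=RV64IB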

I think we do have flexibility to commit something that falls short of handling codegen for all Zbb instructions, but if we do so, it would be helpful to note the cases that aren't yet handled in e.g. the test file (ideally even with tests!), so it's easy to return to them. If it's easy enough to add the missing cases, that would be preferred.
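
For example, something like this (min is purely an illustrative stand-in here - substitute whichever Zbb patterns are still missing):

  ; TODO: this should select Zbb's min once the corresponding pattern is added.
  define i32 @min_i32(i32 %a, i32 %b) nounwind {
    %cmp = icmp slt i32 %a, %b
    %res = select i1 %cmp, i32 %a, i32 %b
    ret i32 %res
  }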



================
Comment at: llvm/test/CodeGen/RISCV/rv32Zbb.ll:41
+; RV32IB:       # %bb.0:
+; RV32IB-NEXT:    beqz a0, .LBB2_2
+; RV32IB-NEXT:  # %bb.1: # %cond.false
----------------
clz on a zero input is a well-defined operation that returns XLEN, so shouldn't this just lower to clz and ret?
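
i.e. I'd expect something along these lines (a sketch only - the function name is illustrative, and I'm assuming the usual llvm.ctlz intrinsic with the zero-is-undef flag set to false):

  define i32 @ctlz_i32(i32 %a) nounwind {
  ; RV32IB-LABEL: ctlz_i32:
  ; RV32IB:       # %bb.0:
  ; RV32IB-NEXT:    clz a0, a0
  ; RV32IB-NEXT:    ret
    %1 = call i32 @llvm.ctlz.i32(i32 %a, i1 false)
    ret i32 %1
  }
  declare i32 @llvm.ctlz.i32(i32, i1)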


================
Comment at: llvm/test/CodeGen/RISCV/rv32Zbb.ll:61
+; RV32IB-NEXT:  # %bb.1: # %cond.false
+; RV32IB-NEXT:    ctz a0, a0
+; RV32IB-NEXT:    ret
----------------
Same comment as for clz above


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D79870/new/

https://reviews.llvm.org/D79870




