[PATCH] D62857: [RISCV] Prevent re-ordering some adds after shifts

Alex Bradbury via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Fri Jun 7 23:42:35 PDT 2019


asb added a comment.

I added some minor comments, along with a bigger suggestion. What do you think about adding a getIntMatCost overload taking an APInt and IsRV64, which would split the immediate into XLEN-sized chunks (see the inline comment for a reference to similar code in AArch64)?



================
Comment at: llvm/lib/Target/RISCV/RISCVISelLowering.cpp:848
+  // The following folds are only desirable if constant `c1 << c2` can be
+  // materialised in fewer instructions than the constant `c1`:
+  //   (shl (add x, c1), c2) -> (add (shl x, c2), c1 << c2)
----------------
Actually, with the logic as it stands, I think we're checking that (add val, c1) takes fewer instructions than (add val', c1 << c2), which is a similar condition but not quite the same. That is, both c1 and c1 << c2 might be materialisable with a single instruction, but if c1 << c2 is materialised using a single lui it's still unprofitable, as we can't merge it into the add.


================
Comment at: llvm/lib/Target/RISCV/RISCVISelLowering.cpp:860
+
+      // We can materialise `c1 << c2` into a shift immediate, so it's free,
+      // and we should let the combine happen, so maybe more can happen later
----------------
shift immediate -> add immediate


================
Comment at: llvm/lib/Target/RISCV/RISCVISelLowering.cpp:861
+      // We can materialise `c1 << c2` into a shift immediate, so it's free,
+      // and we should let the combine happen, so maybe more can happen later
+      if (isLegalAddImmediate(ShiftedC1Int.getSExtValue()))
----------------
End comment with a full stop


================
Comment at: llvm/lib/Target/RISCV/RISCVISelLowering.cpp:870
+
+      if ((VT == MVT::i64 || VT == MVT::i32) && C1Int.isSignedIntN(64) &&
+          ShiftedC1Int.isSignedIntN(64)) {
----------------
I'm not totally comfortable calling getIntMatCost with something that isn't logically equivalent to IsRV64 (you could of course have MVT::i64 on RV32 pre-legalisation). The cost of materialising a 64-bit constant on RV64 also isn't necessarily the same as materialising an i64 split into two i32 on RV32.

In an ideal world, we'd have a getIntMatCost taking an APInt+IsRV64, with similar logic to `int AArch64TTIImpl::getIntImmCost(const APInt &Imm, Type *Ty)` - i.e. splitting into 32-bit/64-bit chunks. Hard to imagine this being a big deal for this particular case, but it's good infrastructure to have.
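The chunk-splitting shape being suggested could look roughly like the sketch below. This is only an illustration: getChunkCost is a toy stand-in for the real RISCVMatInt logic (which also considers shifts and addiw), and the sketch assumes a 64-bit immediate rather than an arbitrary-width APInt.

```cpp
#include <cstdint>

// Toy per-chunk materialisation cost, a simplification of RISCVMatInt:
// one instruction for an addi- or lui-sized value, otherwise lui+addi,
// with a rough upper bound for anything wider than 32 bits.
static int getChunkCost(int64_t Val) {
  if (Val >= -2048 && Val < 2048)
    return 1; // single addi
  if ((Val & 0xFFF) == 0 && Val == (int64_t)(int32_t)Val)
    return 1; // single lui
  if (Val == (int64_t)(int32_t)Val)
    return 2; // lui + addi
  return 3;   // crude bound; the real logic uses shift sequences
}

// Split a 64-bit immediate into XLEN-sized chunks and sum their costs,
// mirroring the shape of AArch64TTIImpl::getIntImmCost.
static int getIntMatCost(int64_t Imm, bool IsRV64) {
  unsigned XLen = IsRV64 ? 64 : 32;
  unsigned NumChunks = 64 / XLen;
  int Cost = 0;
  for (unsigned I = 0; I < NumChunks; ++I) {
    int64_t Chunk = Imm >> (I * XLen);
    if (XLen == 32)
      Chunk = (int64_t)(int32_t)Chunk; // sign-extend each 32-bit chunk
    Cost += getChunkCost(Chunk);
  }
  return Cost;
}
```

With this shape, the same constant naturally costs differently per target: the immediate 1 costs one instruction on RV64 but two chunks (low word plus a zero high word) on RV32.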



Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D62857/new/

https://reviews.llvm.org/D62857





More information about the llvm-commits mailing list