[llvm] [RISCV][TTI] Fix a costing mistake for truncate/fp_round with LMUL>m1 (PR #101051)
Shih-Po Hung via llvm-commits
llvm-commits at lists.llvm.org
Tue Jul 30 19:33:32 PDT 2024
================
@@ -1108,60 +1108,60 @@ define void @trunc() {
; RV32-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %nxv1i64_nxv1i1 = trunc <vscale x 1 x i64> undef to <vscale x 1 x i1>
; RV32-NEXT: Cost Model: Found an estimated cost of 1 for instruction: %nxv2i16_nxv2i8 = trunc <vscale x 2 x i16> undef to <vscale x 2 x i8>
; RV32-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %nxv2i32_nxv2i8 = trunc <vscale x 2 x i32> undef to <vscale x 2 x i8>
-; RV32-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %nxv2i64_nxv2i8 = trunc <vscale x 2 x i64> undef to <vscale x 2 x i8>
+; RV32-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %nxv2i64_nxv2i8 = trunc <vscale x 2 x i64> undef to <vscale x 2 x i8>
; RV32-NEXT: Cost Model: Found an estimated cost of 1 for instruction: %nxv2i32_nxv2i16 = trunc <vscale x 2 x i32> undef to <vscale x 2 x i16>
-; RV32-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %nxv2i64_nxv2i16 = trunc <vscale x 2 x i64> undef to <vscale x 2 x i16>
-; RV32-NEXT: Cost Model: Found an estimated cost of 1 for instruction: %nxv2i64_nxv2i32 = trunc <vscale x 2 x i64> undef to <vscale x 2 x i32>
+; RV32-NEXT: Cost Model: Found an estimated cost of 3 for instruction: %nxv2i64_nxv2i16 = trunc <vscale x 2 x i64> undef to <vscale x 2 x i16>
+; RV32-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %nxv2i64_nxv2i32 = trunc <vscale x 2 x i64> undef to <vscale x 2 x i32>
----------------
arcbbb wrote:
The throughput tests show that the processing time doubles when the LMUL doubles.
I am not sure I understand it correctly, but it looks like vnsrl on this machine can output VLEN/4 bits per cycle,
based on the observation that the CPI is ~1 for mf4, ~2 for mf2, ~4 for m1, and ~8 for m2, regardless of SEW.
From that, I would say the number of uops for each vnsrl is LMUL*4.
This is just an assumption, since I don't know the micro-architecture.
Based on my understanding, vnsrl on sifive-x280 can handle DLEN/2 bits per cycle, while vnsrl on sifive-p670 can handle DLEN bits per cycle.
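To make that arithmetic concrete, here is a minimal standalone sketch (not LLVM code) that reproduces the scaling described above. The VLEN value and the bits-per-cycle throughput are the assumptions stated in this comment, not confirmed micro-architectural facts:
```cpp
// Sketch: if vnsrl retires a fixed number of bits of the register group per
// cycle, its cycle count scales linearly with LMUL.
#include <cstdio>

// LMUL expressed as a fraction (e.g. mf4 == 1/4, m2 == 2/1).
struct LMul {
  unsigned Num;
  unsigned Den;
};

// Estimated cycles for one vnsrl: the register group holds VLen * LMUL bits,
// and the unit is assumed to process BitsPerCycle bits per cycle.
double vnsrlCycles(unsigned VLen, LMul L, unsigned BitsPerCycle) {
  double GroupBits = static_cast<double>(VLen) * L.Num / L.Den;
  return GroupBits / BitsPerCycle;
}

int main() {
  // Hypothetical machine from this comment: VLEN = 512 and a vnsrl
  // throughput of VLEN/4 bits per cycle (i.e. DLEN/2 when DLEN = VLEN/2).
  unsigned VLen = 512;
  unsigned BitsPerCycle = VLen / 4;

  LMul LMuls[] = {{1, 4}, {1, 2}, {1, 1}, {2, 1}};
  const char *Names[] = {"mf4", "mf2", "m1", "m2"};
  for (int I = 0; I < 4; ++I)
    std::printf("%-3s : ~%.0f cycle(s)\n", Names[I],
                vnsrlCycles(VLen, LMuls[I], BitsPerCycle));
  // Prints ~1 for mf4, ~2 for mf2, ~4 for m1, ~8 for m2, matching the
  // CPI measurements quoted above, independent of SEW.
  return 0;
}
```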
https://github.com/llvm/llvm-project/pull/101051