[PATCH] D155459: [AArch64] Change the cost of vector insert/extract to 2

Eli Friedman via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Mon Jul 17 07:57:26 PDT 2023


efriedma added a comment.

I noted a couple of regressions on test cases which explicitly say they're not supposed to be vectorized.

A lot of the numbers for intrinsics are pretty clearly off by a very large amount.  If nobody is going to look at them, maybe we should just kill off the tests in question so reviewers don't have to read meaningless updates to them?
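If anyone wants to reproduce these numbers locally, something along these lines works (a rough sketch; the exact RUN lines, triple, and cost kind in the individual tests may differ):

  ; repro.ll -- reduced to the fshr case flagged below
  declare <4 x i32> @llvm.fshr.v4i32(<4 x i32>, <4 x i32>, <4 x i32>)

  define <4 x i32> @fshr_v4i32_3rd_arg_var(<4 x i32> %a, <4 x i32> %b, <4 x i32> %c) {
    %fshr = tail call <4 x i32> @llvm.fshr.v4i32(<4 x i32> %a, <4 x i32> %b, <4 x i32> %c)
    ret <4 x i32> %fshr
  }

  $ opt -mtriple=aarch64--linux-gnu -passes="print<cost-model>" -disable-output repro.ll 2>&1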



================
Comment at: llvm/test/Analysis/CostModel/AArch64/fshr.ll:183
 ; CHECK-LABEL: 'fshr_v4i32_3rd_arg_var'
-; CHECK-NEXT:  Cost Model: Found an estimated cost of 36 for instruction: %fshr = tail call <4 x i32> @llvm.fshr.v4i32(<4 x i32> %a, <4 x i32> %b, <4 x i32> %c)
+; CHECK-NEXT:  Cost Model: Found an estimated cost of 34 for instruction: %fshr = tail call <4 x i32> @llvm.fshr.v4i32(<4 x i32> %a, <4 x i32> %b, <4 x i32> %c)
 ; CHECK-NEXT:  Cost Model: Found an estimated cost of 0 for instruction: ret <4 x i32> %fshr
----------------
Weird cost modeling.


================
Comment at: llvm/test/Analysis/CostModel/AArch64/getIntrinsicInstrCost-vector-reverse.ll:25
+; CHECK-NEXT:  Cost Model: Found an estimated cost of 28 for instruction: %15 = call <8 x bfloat> @llvm.experimental.vector.reverse.v8bf16(<8 x bfloat> undef)
+; CHECK-NEXT:  Cost Model: Found an estimated cost of 56 for instruction: %16 = call <16 x bfloat> @llvm.experimental.vector.reverse.v16bf16(<16 x bfloat> undef)
 ; CHECK-NEXT:  Cost Model: Found an estimated cost of 0 for instruction: ret void
----------------
Weird cost modeling.


================
Comment at: llvm/test/Analysis/CostModel/AArch64/shuffle-select.ll:35
 ; COST-LABEL: sel.v8i16
-; COST:       Found an estimated cost of 42 for instruction: %tmp0 = shufflevector <8 x i16> %v0, <8 x i16> %v1, <8 x i32> <i32 0, i32 9, i32 2, i32 11, i32 4, i32 13, i32 6, i32 15>
+; COST:       Found an estimated cost of 28 for instruction: %tmp0 = shufflevector <8 x i16> %v0, <8 x i16> %v1, <8 x i32> <i32 0, i32 9, i32 2, i32 11, i32 4, i32 13, i32 6, i32 15>
 ; CODE-LABEL: sel.v8i16
----------------
Weird cost modeling.


================
Comment at: llvm/test/Analysis/CostModel/AArch64/vector-select.ll:692
 ; COST-LABEL: v8f16_select_une
-; COST-NOFP16-NEXT:  Cost Model: Found an estimated cost of 29 for instruction:   %cmp.1 = fcmp une <8 x half> %a, %b
+; COST-NOFP16-NEXT:  Cost Model: Found an estimated cost of 22 for instruction:   %cmp.1 = fcmp une <8 x half> %a, %b
 ; COST-NOFP16-NEXT:  Cost Model: Found an estimated cost of 2 for instruction:   %s.1 = select <8 x i1> %cmp.1, <8 x half> %a, <8 x half> %c
----------------
Cost modeling is weird.


================
Comment at: llvm/test/Transforms/LoopVectorize/AArch64/strict-fadd-cost.ll:53
 ; CHECK-VF4: Found an estimated cost of 1 for VF 4 For instruction:   %muladd = tail call float @llvm.fmuladd.f32(float %0, float %1, float %sum.07)
-; CHECK-VF8: Found an estimated cost of 38 for VF 8 For instruction:   %muladd = tail call float @llvm.fmuladd.f32(float %0, float %1, float %sum.07)
+; CHECK-VF8: Found an estimated cost of 32 for VF 8 For instruction:   %muladd = tail call float @llvm.fmuladd.f32(float %0, float %1, float %sum.07)
 
----------------
This cost modeling is weird.


================
Comment at: llvm/test/Transforms/SLPVectorizer/AArch64/slp-fma-loss.ll:204
 
 ; Test case where not vectorizing is more profitable because multiple
 ; fmul/{fadd,fsub} pairs can be lowered to fma instructions.
----------------
Regression?
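For reference, the pattern this file is guarding looks roughly like the following (a reduced sketch of my own, not the actual test body): with contract/fast flags the backend folds each scalar fmul/fadd pair into a single fmadd, and SLP-vectorizing the pairs gives that up.

  define double @fmul_fadd_pair(double %a, double %b, double %acc) {
    ; kept scalar, this pair lowers to a single fmadd on AArch64
    %m = fmul fast double %a, %b
    %r = fadd fast double %m, %acc
    ret double %r
  }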


================
Comment at: llvm/test/Transforms/VectorCombine/AArch64/load-extractelement-scalarization.ll:521
 ; Scalarizing the load for multiple extracts is profitable in this case,
 ; because the vector large vector requires 2 vector registers.
 define i32 @load_multiple_extracts_with_constant_idx_profitable(ptr %x) {
----------------
Regression?
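A rough sketch of the scalarization this test wants to keep (hypothetical types and indices, not copied from the test): rather than loading the whole wide vector, which needs two vector registers, VectorCombine loads just the extracted lanes.

  define i32 @sketch_scalarized_extracts(ptr %x) {
    ; before: load <8 x i32> from %x, then extractelement lanes 0 and 6
    ; after scalarization, only the two needed lanes are loaded:
    %e0 = load i32, ptr %x
    %p6 = getelementptr inbounds i32, ptr %x, i64 6
    %e1 = load i32, ptr %p6
    %r = add i32 %e0, %e1
    ret i32 %r
  }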


CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D155459/new/

https://reviews.llvm.org/D155459


