[PATCH] D37170: [TargetTransformInfo] Add a new public interface getInstructionCost

Hal Finkel via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Sat Sep 2 11:51:38 PDT 2017


hfinkel added a comment.

This is looking good. A few more comments.

You can add an implementation of getInstructionLatency to include/llvm/CodeGen/BasicTTIImpl.h, something like:

  int getInstructionLatency(const Instruction *I) {
    if (isa<LoadInst>(I))
    return getST()->getSchedModel().DefaultLoadLatency;
  
    return BaseT::getInstructionLatency(I);
  }

The default value of DefaultLoadLatency is also 4, but I'd like this to happen before we end up with any actual users of this interface. Plus...

The other thing that needs to happen is that we need some way of testing this. Can you please add a command-line option to the CostModel analysis that allows us to switch which cost model is being queried (the default can still be the reciprocal-throughput model)? Then you can add a couple of simple tests for the latency model (and the code-size model) as well. This will provide us with an independent way of testing the model (and that's important).
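
For concreteness, here is a rough sketch of what that option could look like in lib/Analysis/CostModel.cpp. The -cost-kind flag name and the TCK_Latency/TCK_CodeSize member names are placeholders for illustration, not something the patch already defines:

  // Hypothetical sketch; assumes llvm/Support/CommandLine.h and
  // llvm/Analysis/TargetTransformInfo.h are already included.
  static cl::opt<TargetTransformInfo::TargetCostKind> CostKind(
      "cost-kind", cl::desc("Target cost kind"),
      cl::init(TargetTransformInfo::TCK_RecipThroughput),
      cl::values(clEnumValN(TargetTransformInfo::TCK_RecipThroughput,
                            "throughput", "Reciprocal throughput"),
                 clEnumValN(TargetTransformInfo::TCK_Latency,
                            "latency", "Instruction latency"),
                 clEnumValN(TargetTransformInfo::TCK_CodeSize,
                            "code-size", "Code size")));

The analysis would then pass CostKind through to getInstructionCost when it prints per-instruction costs, and the lit tests for the latency and code-size models could just run the cost-model analysis with the corresponding -cost-kind value.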



================
Comment at: include/llvm/Analysis/TargetTransformInfo.h:112
+  ///
+  /// Now we support following 3 kinds of cost model.
+  enum TargetCostKind {
----------------
Please replace with something like:

  /// There are several different cost models that can be customized by the target. The normalization of each cost model may be target specific.


================
Comment at: include/llvm/Analysis/TargetTransformInfo.h:114
+  enum TargetCostKind {
+    TCK_Throughput, ///< The traditional cost model, it actually
+                    ///< means reciprocal throughput.
----------------
I'd remove the "The traditional cost model, it actually means", because both this model and the "user cost" model could be considered traditional (arguably, the latter is actually older).


================
Comment at: include/llvm/Analysis/TargetTransformInfo.h:114
+  enum TargetCostKind {
+    TCK_Throughput, ///< The traditional cost model, it actually
+                    ///< means reciprocal throughput.
----------------
hfinkel wrote:
> I'd remove the "The traditional cost model, it actually means", because both this model and the "user cost" model could be considered traditional (arguably, the latter is actually older).
I'd prefer this be named `TCK_RecipThroughput`, or maybe just `TCK_RThroughput`, to make it clear what it means.
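
With that renaming, the enum could end up looking something like this (the other two member names are just my guess at what the latency and code-size models might be called):

  enum TargetCostKind {
    TCK_RecipThroughput, ///< Reciprocal throughput.
    TCK_Latency,         ///< The latency of instruction.
    TCK_CodeSize         ///< Instruction code size.
  };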


================
Comment at: include/llvm/Analysis/TargetTransformInfo.h:123
+  /// Clients should mainly use this interface to query the cost of the
+  /// specified instruction.
+  int getInstructionCost(const Instruction *I, enum TargetCostKind kind) const {
----------------
  /// Clients should use this interface to query the cost of an existing instruction. The instruction must have a valid parent (basic block).


================
Comment at: include/llvm/Analysis/TargetTransformInfo.h:136
+    default:
+      assert("Wrong cost kind passed in.");
+      return 0;
----------------
llvm_unreachable("Unknown instruction cost kind");
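
Putting the pieces together, the dispatching switch could then read roughly as below. The per-kind helper names here are placeholders for whatever the patch ends up calling them, and mapping the code-size kind to the user cost is only an assumption:

  int getInstructionCost(const Instruction *I, enum TargetCostKind kind) const {
    switch (kind) {
    case TCK_RecipThroughput:
      return getInstructionThroughput(I); // placeholder helper name
    case TCK_Latency:
      return getInstructionLatency(I);
    case TCK_CodeSize:
      return getUserCost(I);              // assumption: code size ~ user cost
    default:
      llvm_unreachable("Unknown instruction cost kind");
    }
  }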


================
Comment at: include/llvm/Analysis/TargetTransformInfo.h:882
+  /// Returns -1 if the cost is unknown.
+  /// Note, this method does not cache the cost calculation and it
+  /// can be expensive in some cases.
----------------
Move the note about expense to the comment on `getInstructionCost`. This is true for the two existing cost models, and may become true for the latency estimate as well.


https://reviews.llvm.org/D37170