[PATCH] D36896: [TargetTransformInfo] Call target getMemoryOpCost for LoadInst from getUserCost

Guozhi Wei via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Wed Aug 23 14:07:01 PDT 2017


Carrot added a comment.

>> Thank you for the clarification that they actually belong to two different cost models.
>> My problem is in SimplifyCFG (if-conversion): I want to compute the latency of a dependence chain. Currently I use getUserCost for each instruction, but it returns 1 for a memory load, which is much lower than reality; that gap makes the estimate almost useless. So which API should I use to get a more precise latency for instructions?
> 
> We don't have one (at the IR level). We should. I really want to see TTI have one cost model that returns <reciprocal throughput, latency, size>, all normalized, for each query. Then each client would be able to pick the right numbers for what it wants. While I dislike suggesting that someone take on a big project to fix a small problem, maybe we can do this incrementally somehow, so that we can keep things moving in the right direction. I suggest that you change the TTI cost interfaces to return some kind of tuple or structure, and then, make all of the latencies default to 1 for scalar operations, 5 for FP/vector ops (or 4 or 6 or whatever you think makes a reasonable default), and STI.getSchedModel().LoadLatency for loads. That will get you the right load latency, and provide an interface we can improve over time to get latency estimates for the other operations.

Thank you for the good suggestion.
I will add the following new public interface to TTI:

enum TargetCostKind {
  TCK_THROUGHPUT,
  TCK_LATENCY,
  TCK_CODESIZE,
};

int getInstructionCost(Instruction *I, enum TargetCostKind kind);

So clients can use this consistent and clear API.

I will also add simple implementations of getInstructionLatency and getInstructionSize.


https://reviews.llvm.org/D36896
