[llvm] [TTI] Simplify implementation (NFCI) (PR #136674)
via llvm-commits
llvm-commits at lists.llvm.org
Tue Apr 22 04:30:56 PDT 2025
llvmbot wrote:
@llvm/pr-subscribers-backend-systemz
@llvm/pr-subscribers-backend-aarch64
Author: Sergei Barannikov (s-barannikov)
<details>
<summary>Changes</summary>
Replace "concept based polymorphism" with simpler PImpl idiom.
This pursues two goals:
* Enforce static type checking. Previously, target implementations hid the base class methods, so their signatures were never checked against the interface. Now that they `override` the methods, the compiler rejects mismatched signatures (see the sketch after this list).
* Make the code easier to navigate. Previously, if you asked your favorite LSP server to show a method (e.g. `getInstructionCost()`), it would show you the definitions from `TTI`, `TTI::Concept`, `TTI::Model`, `TTIImplBase`, and the target overrides. Now it shows two fewer :)
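To illustrate the first point, here is a minimal sketch of the static type checking gained by virtual dispatch. The names `ImplBase`, `MyTargetImpl`, and `getFoo` are invented for illustration and are not the real TTI interface:

```cpp
#include <cstdio>

// Hypothetical stand-in for the implementation base class after this change:
// its methods are virtual, so targets must override them.
struct ImplBase {
  virtual ~ImplBase() = default;
  virtual int getFoo(int X) const { return X; } // default implementation
};

struct MyTargetImpl : ImplBase {
  // OK: the signature matches, so `override` is accepted.
  int getFoo(int X) const override { return X + 1; }

  // Under the old method-hiding scheme, a mismatched signature like the one
  // below would silently *shadow* the base method; now it is a compile error:
  // int getFoo(long X) const override { return X; } // error: does not override
};

int main() {
  MyTargetImpl Impl;
  const ImplBase &Base = Impl;
  std::printf("%d\n", Base.getFoo(1)); // prints 2: virtual dispatch
}
```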
The change is split into three commits to (hopefully) simplify the review.
The first commit removes `TTI::Model` by deriving `TargetTransformInfoImplBase` from `TTI::Concept`. This is possible because the two implement the same set of methods with identical signatures (sketched below).
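The following sketch shows why deriving the implementation base from `Concept` makes the `Model` wrapper redundant. Again, the class and method names (`TTIish`, `getFoo`) are simplified stand-ins, not the real interface:

```cpp
#include <memory>
#include <utility>

// Before: TTI held a type-erased wrapper. Model<T> forwarded every call to a
// concrete implementation T that merely mirrored Concept's interface.
struct Concept {
  virtual ~Concept() = default;
  virtual int getFoo(int X) const = 0;
};

template <typename T> struct Model final : Concept {
  T Impl;
  explicit Model(T I) : Impl(std::move(I)) {}
  int getFoo(int X) const override { return Impl.getFoo(X); }
};

// After commit 1: the implementation base derives from Concept directly, so
// every concrete implementation *is a* Concept and Model becomes dead code.
struct ImplBase : Concept {
  int getFoo(int X) const override { return X; }
};

class TTIish { // hypothetical stand-in for TargetTransformInfo
  std::unique_ptr<const Concept> Impl;
public:
  explicit TTIish(std::unique_ptr<const Concept> I) : Impl(std::move(I)) {}
  int getFoo(int X) const { return Impl->getFoo(X); }
};
```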
The first commit also makes `TargetTransformInfoImplBase` polymorphic, which means all derived classes must `override` its methods. That part is done in the second commit to keep the first one smaller. It proved infeasible to extract it into a separate PR, because landing the first commit on its own would trigger tons of `-Woverloaded-virtual` warnings (and break `-Werror` builds).
The third commit eliminates `TTI::Concept` by merging it into its only derived class, `TargetTransformInfoImplBase` (sketched below). This commit could have been a separate PR, but it touches the same lines in `TargetTransformInfoImpl.h` (removing the `override`s added by the second commit and adding `virtual`), so I thought it made sense to land the two together.
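Continuing the simplified sketch from above, the merge of commit 3 collapses the two classes into one. The `override`s added in commit 2 become plain `virtual` declarations with default bodies:

```cpp
// Once Model is gone, ImplBase is Concept's only subclass, so the two can be
// merged: the implementation base itself carries the virtual interface.
struct MergedImplBase {
  virtual ~MergedImplBase() = default;
  virtual int getFoo(int X) const { return X; } // was pure virtual in Concept
};
```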
I don't expect any compile-time regressions or improvements here (there is still exactly one virtual call per method). Building might get slightly faster, as `TargetTransformInfo.h` now declares three times fewer methods.
I plan to do more cleanup in follow-up PRs.
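For reference, a hedged sketch of the construction path implied by the constructor change in the diff below. `MyTargetTTIImpl` and `makeTTI` are hypothetical; the real per-target change is the one-line `std::make_unique` call in each `TargetMachine`:

```cpp
#include <memory>

struct TargetTransformInfoImplBase { // stand-in for the real class
  virtual ~TargetTransformInfoImplBase() = default;
};

struct MyTargetTTIImpl : TargetTransformInfoImplBase {}; // hypothetical target impl

class TargetTransformInfo { // stand-in for the real class
  std::unique_ptr<const TargetTransformInfoImplBase> TTIImpl;
public:
  explicit TargetTransformInfo(
      std::unique_ptr<const TargetTransformInfoImplBase> Impl)
      : TTIImpl(std::move(Impl)) {}
};

TargetTransformInfo makeTTI() {
  // Targets now hand TTI a unique_ptr to their implementation directly,
  // instead of passing it by value through a template constructor.
  return TargetTransformInfo(std::make_unique<MyTargetTTIImpl>());
}
```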
---
Patch is 315.61 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/136674.diff
45 Files Affected:
- (modified) llvm/include/llvm/Analysis/TargetTransformInfo.h (+4-1354)
- (modified) llvm/include/llvm/Analysis/TargetTransformInfoImpl.h (+399-329)
- (modified) llvm/include/llvm/CodeGen/BasicTTIImpl.h (+150-138)
- (modified) llvm/lib/Analysis/TargetTransformInfo.cpp (+6-2)
- (modified) llvm/lib/CodeGen/CodeGenTargetMachineImpl.cpp (+1-1)
- (modified) llvm/lib/Target/AArch64/AArch64TargetMachine.cpp (+1-1)
- (modified) llvm/lib/Target/AArch64/AArch64TargetTransformInfo.h (+120-109)
- (modified) llvm/lib/Target/AMDGPU/AMDGPUTargetMachine.cpp (+1-1)
- (modified) llvm/lib/Target/AMDGPU/AMDGPUTargetTransformInfo.h (+64-59)
- (modified) llvm/lib/Target/AMDGPU/R600TargetMachine.cpp (+1-1)
- (modified) llvm/lib/Target/AMDGPU/R600TargetTransformInfo.h (+12-11)
- (modified) llvm/lib/Target/ARC/ARCTargetMachine.cpp (+1-1)
- (modified) llvm/lib/Target/ARM/ARMTargetMachine.cpp (+1-1)
- (modified) llvm/lib/Target/ARM/ARMTargetTransformInfo.h (+86-76)
- (modified) llvm/lib/Target/BPF/BPFTargetMachine.cpp (+1-1)
- (modified) llvm/lib/Target/BPF/BPFTargetTransformInfo.h (+6-9)
- (modified) llvm/lib/Target/DirectX/DirectXTargetMachine.cpp (+1-1)
- (modified) llvm/lib/Target/DirectX/DirectXTargetTransformInfo.h (+4-4)
- (modified) llvm/lib/Target/Hexagon/HexagonTargetMachine.cpp (+1-1)
- (modified) llvm/lib/Target/Hexagon/HexagonTargetTransformInfo.h (+55-47)
- (modified) llvm/lib/Target/Lanai/LanaiTargetMachine.cpp (+1-1)
- (modified) llvm/lib/Target/Lanai/LanaiTargetTransformInfo.h (+11-11)
- (modified) llvm/lib/Target/LoongArch/LoongArchTargetMachine.cpp (+1-1)
- (modified) llvm/lib/Target/LoongArch/LoongArchTargetTransformInfo.h (+8-6)
- (modified) llvm/lib/Target/Mips/MipsTargetMachine.cpp (+1-1)
- (modified) llvm/lib/Target/Mips/MipsTargetTransformInfo.h (+2-2)
- (modified) llvm/lib/Target/NVPTX/NVPTXTargetMachine.cpp (+1-1)
- (modified) llvm/lib/Target/NVPTX/NVPTXTargetTransformInfo.h (+31-29)
- (modified) llvm/lib/Target/PowerPC/PPCTargetMachine.cpp (+1-1)
- (modified) llvm/lib/Target/PowerPC/PPCTargetTransformInfo.cpp (+5-1)
- (modified) llvm/lib/Target/PowerPC/PPCTargetTransformInfo.h (+58-55)
- (modified) llvm/lib/Target/RISCV/RISCVTargetMachine.cpp (+1-1)
- (modified) llvm/lib/Target/RISCV/RISCVTargetTransformInfo.h (+102-93)
- (modified) llvm/lib/Target/SPIRV/SPIRVTargetMachine.cpp (+1-1)
- (modified) llvm/lib/Target/SPIRV/SPIRVTargetTransformInfo.h (+1-1)
- (modified) llvm/lib/Target/SystemZ/SystemZTargetMachine.cpp (+1-1)
- (modified) llvm/lib/Target/SystemZ/SystemZTargetTransformInfo.h (+44-45)
- (modified) llvm/lib/Target/VE/VETargetMachine.cpp (+1-1)
- (modified) llvm/lib/Target/VE/VETargetTransformInfo.h (+11-10)
- (modified) llvm/lib/Target/WebAssembly/WebAssemblyTargetMachine.cpp (+1-1)
- (modified) llvm/lib/Target/WebAssembly/WebAssemblyTargetTransformInfo.h (+21-21)
- (modified) llvm/lib/Target/X86/X86TargetMachine.cpp (+1-1)
- (modified) llvm/lib/Target/X86/X86TargetTransformInfo.h (+94-90)
- (modified) llvm/lib/Target/XCore/XCoreTargetMachine.cpp (+1-1)
- (modified) llvm/lib/Target/XCore/XCoreTargetTransformInfo.h (+1-1)
``````````diff
diff --git a/llvm/include/llvm/Analysis/TargetTransformInfo.h b/llvm/include/llvm/Analysis/TargetTransformInfo.h
index ab5306b7b614e..76a77737394b8 100644
--- a/llvm/include/llvm/Analysis/TargetTransformInfo.h
+++ b/llvm/include/llvm/Analysis/TargetTransformInfo.h
@@ -210,6 +210,7 @@ struct TailFoldingInfo {
class TargetTransformInfo;
typedef TargetTransformInfo TTI;
+class TargetTransformInfoImplBase;
/// This pass provides access to the codegen interfaces that are needed
/// for IR-level transformations.
@@ -226,7 +227,8 @@ class TargetTransformInfo {
///
/// This is used by targets to construct a TTI wrapping their target-specific
/// implementation that encodes appropriate costs for their target.
- template <typename T> TargetTransformInfo(T Impl);
+ explicit TargetTransformInfo(
+ std::unique_ptr<const TargetTransformInfoImplBase> Impl);
/// Construct a baseline TTI object using a minimal implementation of
/// the \c Concept API below.
@@ -1915,1361 +1917,9 @@ class TargetTransformInfo {
SmallVectorImpl<std::pair<StringRef, int64_t>> &LB) const;
private:
- /// The abstract base class used to type erase specific TTI
- /// implementations.
- class Concept;
-
- /// The template model for the base class which wraps a concrete
- /// implementation in a type erased interface.
- template <typename T> class Model;
-
- std::unique_ptr<const Concept> TTIImpl;
-};
-
-class TargetTransformInfo::Concept {
-public:
- virtual ~Concept() = 0;
- virtual const DataLayout &getDataLayout() const = 0;
- virtual InstructionCost getGEPCost(Type *PointeeType, const Value *Ptr,
- ArrayRef<const Value *> Operands,
- Type *AccessType,
- TTI::TargetCostKind CostKind) const = 0;
- virtual InstructionCost
- getPointersChainCost(ArrayRef<const Value *> Ptrs, const Value *Base,
- const TTI::PointersChainInfo &Info, Type *AccessTy,
- TTI::TargetCostKind CostKind) const = 0;
- virtual unsigned getInliningThresholdMultiplier() const = 0;
- virtual unsigned getInliningCostBenefitAnalysisSavingsMultiplier() const = 0;
- virtual unsigned
- getInliningCostBenefitAnalysisProfitableMultiplier() const = 0;
- virtual int getInliningLastCallToStaticBonus() const = 0;
- virtual unsigned adjustInliningThreshold(const CallBase *CB) const = 0;
- virtual int getInlinerVectorBonusPercent() const = 0;
- virtual unsigned getCallerAllocaCost(const CallBase *CB,
- const AllocaInst *AI) const = 0;
- virtual InstructionCost getMemcpyCost(const Instruction *I) const = 0;
- virtual uint64_t getMaxMemIntrinsicInlineSizeThreshold() const = 0;
- virtual unsigned
- getEstimatedNumberOfCaseClusters(const SwitchInst &SI, unsigned &JTSize,
- ProfileSummaryInfo *PSI,
- BlockFrequencyInfo *BFI) const = 0;
- virtual InstructionCost getInstructionCost(const User *U,
- ArrayRef<const Value *> Operands,
- TargetCostKind CostKind) const = 0;
- virtual BranchProbability getPredictableBranchThreshold() const = 0;
- virtual InstructionCost getBranchMispredictPenalty() const = 0;
- virtual bool hasBranchDivergence(const Function *F = nullptr) const = 0;
- virtual bool isSourceOfDivergence(const Value *V) const = 0;
- virtual bool isAlwaysUniform(const Value *V) const = 0;
- virtual bool isValidAddrSpaceCast(unsigned FromAS, unsigned ToAS) const = 0;
- virtual bool addrspacesMayAlias(unsigned AS0, unsigned AS1) const = 0;
- virtual unsigned getFlatAddressSpace() const = 0;
- virtual bool collectFlatAddressOperands(SmallVectorImpl<int> &OpIndexes,
- Intrinsic::ID IID) const = 0;
- virtual bool isNoopAddrSpaceCast(unsigned FromAS, unsigned ToAS) const = 0;
- virtual bool
- canHaveNonUndefGlobalInitializerInAddressSpace(unsigned AS) const = 0;
- virtual unsigned getAssumedAddrSpace(const Value *V) const = 0;
- virtual bool isSingleThreaded() const = 0;
- virtual std::pair<const Value *, unsigned>
- getPredicatedAddrSpace(const Value *V) const = 0;
- virtual Value *rewriteIntrinsicWithAddressSpace(IntrinsicInst *II,
- Value *OldV,
- Value *NewV) const = 0;
- virtual bool isLoweredToCall(const Function *F) const = 0;
- virtual void
- getUnrollingPreferences(Loop *L, ScalarEvolution &, UnrollingPreferences &UP,
- OptimizationRemarkEmitter *ORE) const = 0;
- virtual void getPeelingPreferences(Loop *L, ScalarEvolution &SE,
- PeelingPreferences &PP) const = 0;
- virtual bool isHardwareLoopProfitable(Loop *L, ScalarEvolution &SE,
- AssumptionCache &AC,
- TargetLibraryInfo *LibInfo,
- HardwareLoopInfo &HWLoopInfo) const = 0;
- virtual unsigned getEpilogueVectorizationMinVF() const = 0;
- virtual bool preferPredicateOverEpilogue(TailFoldingInfo *TFI) const = 0;
- virtual TailFoldingStyle
- getPreferredTailFoldingStyle(bool IVUpdateMayOverflow = true) const = 0;
- virtual std::optional<Instruction *>
- instCombineIntrinsic(InstCombiner &IC, IntrinsicInst &II) const = 0;
- virtual std::optional<Value *>
- simplifyDemandedUseBitsIntrinsic(InstCombiner &IC, IntrinsicInst &II,
- APInt DemandedMask, KnownBits &Known,
- bool &KnownBitsComputed) const = 0;
- virtual std::optional<Value *> simplifyDemandedVectorEltsIntrinsic(
- InstCombiner &IC, IntrinsicInst &II, APInt DemandedElts, APInt &UndefElts,
- APInt &UndefElts2, APInt &UndefElts3,
- std::function<void(Instruction *, unsigned, APInt, APInt &)>
- SimplifyAndSetOp) const = 0;
- virtual bool isLegalAddImmediate(int64_t Imm) const = 0;
- virtual bool isLegalAddScalableImmediate(int64_t Imm) const = 0;
- virtual bool isLegalICmpImmediate(int64_t Imm) const = 0;
- virtual bool isLegalAddressingMode(Type *Ty, GlobalValue *BaseGV,
- int64_t BaseOffset, bool HasBaseReg,
- int64_t Scale, unsigned AddrSpace,
- Instruction *I,
- int64_t ScalableOffset) const = 0;
- virtual bool isLSRCostLess(const TargetTransformInfo::LSRCost &C1,
- const TargetTransformInfo::LSRCost &C2) const = 0;
- virtual bool isNumRegsMajorCostOfLSR() const = 0;
- virtual bool shouldDropLSRSolutionIfLessProfitable() const = 0;
- virtual bool isProfitableLSRChainElement(Instruction *I) const = 0;
- virtual bool canMacroFuseCmp() const = 0;
- virtual bool canSaveCmp(Loop *L, BranchInst **BI, ScalarEvolution *SE,
- LoopInfo *LI, DominatorTree *DT, AssumptionCache *AC,
- TargetLibraryInfo *LibInfo) const = 0;
- virtual AddressingModeKind
- getPreferredAddressingMode(const Loop *L, ScalarEvolution *SE) const = 0;
- virtual bool isLegalMaskedStore(Type *DataType, Align Alignment,
- unsigned AddressSpace) const = 0;
- virtual bool isLegalMaskedLoad(Type *DataType, Align Alignment,
- unsigned AddressSpace) const = 0;
- virtual bool isLegalNTStore(Type *DataType, Align Alignment) const = 0;
- virtual bool isLegalNTLoad(Type *DataType, Align Alignment) const = 0;
- virtual bool isLegalBroadcastLoad(Type *ElementTy,
- ElementCount NumElements) const = 0;
- virtual bool isLegalMaskedScatter(Type *DataType, Align Alignment) const = 0;
- virtual bool isLegalMaskedGather(Type *DataType, Align Alignment) const = 0;
- virtual bool forceScalarizeMaskedGather(VectorType *DataType,
- Align Alignment) const = 0;
- virtual bool forceScalarizeMaskedScatter(VectorType *DataType,
- Align Alignment) const = 0;
- virtual bool isLegalMaskedCompressStore(Type *DataType,
- Align Alignment) const = 0;
- virtual bool isLegalMaskedExpandLoad(Type *DataType,
- Align Alignment) const = 0;
- virtual bool isLegalStridedLoadStore(Type *DataType,
- Align Alignment) const = 0;
- virtual bool isLegalInterleavedAccessType(VectorType *VTy, unsigned Factor,
- Align Alignment,
- unsigned AddrSpace) const = 0;
-
- virtual bool isLegalMaskedVectorHistogram(Type *AddrType,
- Type *DataType) const = 0;
- virtual bool isLegalAltInstr(VectorType *VecTy, unsigned Opcode0,
- unsigned Opcode1,
- const SmallBitVector &OpcodeMask) const = 0;
- virtual bool enableOrderedReductions() const = 0;
- virtual bool hasDivRemOp(Type *DataType, bool IsSigned) const = 0;
- virtual bool hasVolatileVariant(Instruction *I, unsigned AddrSpace) const = 0;
- virtual bool prefersVectorizedAddressing() const = 0;
- virtual InstructionCost getScalingFactorCost(Type *Ty, GlobalValue *BaseGV,
- StackOffset BaseOffset,
- bool HasBaseReg, int64_t Scale,
- unsigned AddrSpace) const = 0;
- virtual bool LSRWithInstrQueries() const = 0;
- virtual bool isTruncateFree(Type *Ty1, Type *Ty2) const = 0;
- virtual bool isProfitableToHoist(Instruction *I) const = 0;
- virtual bool useAA() const = 0;
- virtual bool isTypeLegal(Type *Ty) const = 0;
- virtual unsigned getRegUsageForType(Type *Ty) const = 0;
- virtual bool shouldBuildLookupTables() const = 0;
- virtual bool shouldBuildLookupTablesForConstant(Constant *C) const = 0;
- virtual bool shouldBuildRelLookupTables() const = 0;
- virtual bool useColdCCForColdCall(Function &F) const = 0;
- virtual bool
- isTargetIntrinsicTriviallyScalarizable(Intrinsic::ID ID) const = 0;
- virtual bool
- isTargetIntrinsicWithScalarOpAtArg(Intrinsic::ID ID,
- unsigned ScalarOpdIdx) const = 0;
- virtual bool isTargetIntrinsicWithOverloadTypeAtArg(Intrinsic::ID ID,
- int OpdIdx) const = 0;
- virtual bool
- isTargetIntrinsicWithStructReturnOverloadAtField(Intrinsic::ID ID,
- int RetIdx) const = 0;
- virtual InstructionCost
- getScalarizationOverhead(VectorType *Ty, const APInt &DemandedElts,
- bool Insert, bool Extract, TargetCostKind CostKind,
- ArrayRef<Value *> VL = {}) const = 0;
- virtual InstructionCost
- getOperandsScalarizationOverhead(ArrayRef<const Value *> Args,
- ArrayRef<Type *> Tys,
- TargetCostKind CostKind) const = 0;
- virtual bool supportsEfficientVectorElementLoadStore() const = 0;
- virtual bool supportsTailCalls() const = 0;
- virtual bool supportsTailCallFor(const CallBase *CB) const = 0;
- virtual bool enableAggressiveInterleaving(bool LoopHasReductions) const = 0;
- virtual MemCmpExpansionOptions
- enableMemCmpExpansion(bool OptSize, bool IsZeroCmp) const = 0;
- virtual bool enableSelectOptimize() const = 0;
- virtual bool shouldTreatInstructionLikeSelect(const Instruction *I) const = 0;
- virtual bool enableInterleavedAccessVectorization() const = 0;
- virtual bool enableMaskedInterleavedAccessVectorization() const = 0;
- virtual bool isFPVectorizationPotentiallyUnsafe() const = 0;
- virtual bool allowsMisalignedMemoryAccesses(LLVMContext &Context,
- unsigned BitWidth,
- unsigned AddressSpace,
- Align Alignment,
- unsigned *Fast) const = 0;
- virtual PopcntSupportKind
- getPopcntSupport(unsigned IntTyWidthInBit) const = 0;
- virtual bool haveFastSqrt(Type *Ty) const = 0;
- virtual bool
- isExpensiveToSpeculativelyExecute(const Instruction *I) const = 0;
- virtual bool isFCmpOrdCheaperThanFCmpZero(Type *Ty) const = 0;
- virtual InstructionCost getFPOpCost(Type *Ty) const = 0;
- virtual InstructionCost getIntImmCodeSizeCost(unsigned Opc, unsigned Idx,
- const APInt &Imm,
- Type *Ty) const = 0;
- virtual InstructionCost getIntImmCost(const APInt &Imm, Type *Ty,
- TargetCostKind CostKind) const = 0;
- virtual InstructionCost
- getIntImmCostInst(unsigned Opc, unsigned Idx, const APInt &Imm, Type *Ty,
- TargetCostKind CostKind,
- Instruction *Inst = nullptr) const = 0;
- virtual InstructionCost
- getIntImmCostIntrin(Intrinsic::ID IID, unsigned Idx, const APInt &Imm,
- Type *Ty, TargetCostKind CostKind) const = 0;
- virtual bool preferToKeepConstantsAttached(const Instruction &Inst,
- const Function &Fn) const = 0;
- virtual unsigned getNumberOfRegisters(unsigned ClassID) const = 0;
- virtual bool hasConditionalLoadStoreForType(Type *Ty, bool IsStore) const = 0;
- virtual unsigned getRegisterClassForType(bool Vector,
- Type *Ty = nullptr) const = 0;
- virtual const char *getRegisterClassName(unsigned ClassID) const = 0;
- virtual TypeSize getRegisterBitWidth(RegisterKind K) const = 0;
- virtual unsigned getMinVectorRegisterBitWidth() const = 0;
- virtual std::optional<unsigned> getMaxVScale() const = 0;
- virtual std::optional<unsigned> getVScaleForTuning() const = 0;
- virtual bool isVScaleKnownToBeAPowerOfTwo() const = 0;
- virtual bool
- shouldMaximizeVectorBandwidth(TargetTransformInfo::RegisterKind K) const = 0;
- virtual ElementCount getMinimumVF(unsigned ElemWidth,
- bool IsScalable) const = 0;
- virtual unsigned getMaximumVF(unsigned ElemWidth, unsigned Opcode) const = 0;
- virtual unsigned getStoreMinimumVF(unsigned VF, Type *ScalarMemTy,
- Type *ScalarValTy) const = 0;
- virtual bool shouldConsiderAddressTypePromotion(
- const Instruction &I, bool &AllowPromotionWithoutCommonHeader) const = 0;
- virtual unsigned getCacheLineSize() const = 0;
- virtual std::optional<unsigned> getCacheSize(CacheLevel Level) const = 0;
- virtual std::optional<unsigned> getCacheAssociativity(CacheLevel Level)
- const = 0;
- virtual std::optional<unsigned> getMinPageSize() const = 0;
-
- /// \return How much before a load we should place the prefetch
- /// instruction. This is currently measured in number of
- /// instructions.
- virtual unsigned getPrefetchDistance() const = 0;
-
- /// \return Some HW prefetchers can handle accesses up to a certain
- /// constant stride. This is the minimum stride in bytes where it
- /// makes sense to start adding SW prefetches. The default is 1,
- /// i.e. prefetch with any stride. Sometimes prefetching is beneficial
- /// even below the HW prefetcher limit, and the arguments provided are
- /// meant to serve as a basis for deciding this for a particular loop.
- virtual unsigned getMinPrefetchStride(unsigned NumMemAccesses,
- unsigned NumStridedMemAccesses,
- unsigned NumPrefetches,
- bool HasCall) const = 0;
-
- /// \return The maximum number of iterations to prefetch ahead. If
- /// the required number of iterations is more than this number, no
- /// prefetching is performed.
- virtual unsigned getMaxPrefetchIterationsAhead() const = 0;
-
- /// \return True if prefetching should also be done for writes.
- virtual bool enableWritePrefetching() const = 0;
-
- /// \return if target want to issue a prefetch in address space \p AS.
- virtual bool shouldPrefetchAddressSpace(unsigned AS) const = 0;
-
- /// \return The cost of a partial reduction, which is a reduction from a
- /// vector to another vector with fewer elements of larger size. They are
- /// represented by the llvm.experimental.partial.reduce.add intrinsic, which
- /// takes an accumulator and a binary operation operand that itself is fed by
- /// two extends. An example of an operation that uses a partial reduction is a
- /// dot product, which reduces two vectors to another of 4 times fewer and 4
- /// times larger elements.
- virtual InstructionCost
- getPartialReductionCost(unsigned Opcode, Type *InputTypeA, Type *InputTypeB,
- Type *AccumType, ElementCount VF,
- PartialReductionExtendKind OpAExtend,
- PartialReductionExtendKind OpBExtend,
- std::optional<unsigned> BinOp) const = 0;
-
- virtual unsigned getMaxInterleaveFactor(ElementCount VF) const = 0;
- virtual InstructionCost
- getArithmeticInstrCost(unsigned Opcode, Type *Ty,
- TTI::TargetCostKind CostKind,
- OperandValueInfo Opd1Info, OperandValueInfo Opd2Info,
- ArrayRef<const Value *> Args,
- const Instruction *CxtI = nullptr) const = 0;
- virtual InstructionCost getAltInstrCost(
- VectorType *VecTy, unsigned Opcode0, unsigned Opcode1,
- const SmallBitVector &OpcodeMask,
- TTI::TargetCostKind CostKind = TTI::TCK_RecipThroughput) const = 0;
-
- virtual InstructionCost getShuffleCost(ShuffleKind Kind, VectorType *Tp,
- ArrayRef<int> Mask,
- TTI::TargetCostKind CostKind,
- int Index, VectorType *SubTp,
- ArrayRef<const Value *> Args,
- const Instruction *CxtI) const = 0;
- virtual InstructionCost getCastInstrCost(unsigned Opcode, Type *Dst,
- Type *Src, CastContextHint CCH,
- TTI::TargetCostKind CostKind,
- const Instruction *I) const = 0;
- virtual InstructionCost getExtractWithExtendCost(unsigned Opcode, Type *Dst,
- VectorType *VecTy,
- unsigned Index) const = 0;
- virtual InstructionCost
- getCFInstrCost(unsigned Opcode, TTI::TargetCostKind CostKind,
- const Instruction *I = nullptr) const = 0;
- virtual InstructionCost
- getCmpSelInstrCost(unsigned Opcode, Type *ValTy, Type *CondTy,
- CmpInst::Predicate VecPred, TTI::TargetCostKind CostKind,
- OperandValueInfo Op1Info, OperandValueInfo Op2Info,
- const Instruction *I) const = 0;
- virtual InstructionCost getVectorInstrCost(unsigned Opcode, Type *Val,
- TTI::TargetCostKind CostKind,
- unsigned Index, Value *Op0,
- Value *Op1) const = 0;
-
- /// \param ScalarUserAndIdx encodes the information about extracts from a
- /// vector with 'Scalar' being the value being extracted,'User' being the user
- /// of the extract(nullptr if user is not known before vectorization) and
- /// 'Idx' being the extract lane.
- virtual InstructionCost getVectorInstrCost(
- unsigned Opcode, Type *Val, TTI::TargetCostKind CostKind, unsigned Index,
- Value *Scalar,
- ArrayRef<std::tuple<Value *, User *, int>> ScalarUserAndIdx) const = 0;
-
- virtual InstructionCost getVectorInstrCost(const Instruction &I, Type *Val,
- TTI::Targ...
[truncated]
``````````
</details>
https://github.com/llvm/llvm-project/pull/136674