[llvm-dev] RFC: Exposing TargetTransformInfo factories from TargetMachine
Eric Christopher via llvm-dev
llvm-dev at lists.llvm.org
Wed Dec 20 11:36:41 PST 2017
SGTM. :)
Also, you should have per function target attributes from the start.
-eric
On Wed, Dec 20, 2017 at 11:30 AM Sanjoy Das <sanjoy at playingwithpointers.com>
wrote:
> On Mon, Dec 18, 2017 at 6:26 PM, Eric Christopher <echristo at gmail.com>
> wrote:
> > Instead, is there any reason why TTI for a given Subtarget shouldn't
> > live on the Subtarget? Just construct it the same way we do
> > TargetLowering, etc?
>
> Then stuff that depends on Analysis today will have to depend on
> CodeGen, which is probably not ideal.
>
> I think I'll go with Matthias' idea -- remove the function pointer
> indirection altogether, and instead just have a virtual
> getTargetTransformInfo(Function&) in TargetMachine.
>
> I was also wondering if it makes sense to make the llvm::Function
> argument optional -- that'll help make XLA's use case slightly cleaner
> since we don't (yet) have per-function target attributes.
>
> -- Sanjoy
>
> >
> > -eric
> >
> >
> > On Fri, Dec 15, 2017 at 10:13 AM Sanjoy Das via llvm-dev
> > <llvm-dev at lists.llvm.org> wrote:
> >>
> >> On Fri, Dec 15, 2017 at 5:30 AM, Hal Finkel <hfinkel at anl.gov> wrote:
> >> > Are there reasons why we might not want to do this? Other options we
> >> > should
> >> > consider?
> >>
> >> It does make the TargetMachine -> TargetIRAnalysis path less abstract,
> >> but given that all targets have the same pattern of instantiating a
> >> TargetIRAnalysis with a Function->TargetTransformInfo hook, the
> >> abstraction does not seem particularly useful.
> >>
> >> I might do even a simpler form of the patch though -- instead of
> >> returning a function pointer from TargetMachine, just add a virtual
> >> function to TargetMachine that creates the TargetTransformInfo
> >> directly from a Function.
> >>
> >> -- Sanjoy
> >>
> >> >
> >> > -Hal
> >> >
> >> >>
> >> >> [0]: XLA is a machine-learning-focused linear algebra compiler
> >> >> (https://www.tensorflow.org/performance/xla/) that uses LLVM for
> >> >> its CPU and GPU backends.
> >> >>
> >> >> -- Sanjoy
> >> >> _______________________________________________
> >> >> LLVM Developers mailing list
> >> >> llvm-dev at lists.llvm.org
> >> >> http://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev
> >> >
> >> >
> >> > --
> >> > Hal Finkel
> >> > Lead, Compiler Technology and Programming Languages
> >> > Leadership Computing Facility
> >> > Argonne National Laboratory
> >> >
>