[llvm-dev] RFC: A change in InstCombine canonical form
Chandler Carruth via llvm-dev
llvm-dev at lists.llvm.org
Mon Mar 28 14:18:20 PDT 2016
Sorry for my delay in responding; I've finally caught up on my email to this
point and read through the whole thread.
First and foremost: we should *definitely* not sit on our hands and wait
for typeless pointers to arrive. However, we also shouldn't (IMO) take on
lots of technical debt instead of working to make typeless pointers arrive
sooner. That said, I don't think any of the options here are likely to
incur large technical debt, so IMO we should feel free to pursue either
approach.
I do actually think that in the face of typeless pointers, we will likely
want to use integer loads and stores in the absence of some operation that
makes a particular type a better fit. I feel this way because integer types
will give us more consistent, and thus more "canonical", results from
different input programs.
I think leaning on the pointee type to pick a better type when lowering
memcpy is a bad idea, because it will essentially mask the optimizations
that are currently blocked by pointer bitcasts, so we never learn about
them.
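To make that concrete, here is a rough sketch of what the integer-driven
lowering of an 8-byte memcpy looks like; the helper name and the 3.x-era
IRBuilder calls are mine, not code from InstCombine or MemCpyOpt. The
alternative being discussed would instead load and store through the
original typed pointers (float*, %struct.foo*, etc.) with no bitcasts.

  // Hypothetical sketch: lower an 8-byte memcpy as an integer load/store,
  // assuming the LLVM 3.x-era IRBuilder API where loads are created from a
  // pointer value alone.
  #include "llvm/IR/IRBuilder.h"
  using namespace llvm;

  static void lowerSmallMemCpyAsInt(IRBuilder<> &B, Value *Dst, Value *Src) {
    // Canonical form today: bitcast both pointers to i64* and copy through an
    // integer load/store, regardless of what Dst and Src nominally point to.
    Type *I64Ptr = B.getInt64Ty()->getPointerTo();
    Value *SrcI = B.CreateBitCast(Src, I64Ptr);
    Value *DstI = B.CreateBitCast(Dst, I64Ptr);
    Value *V = B.CreateLoad(SrcI);   // load i64
    B.CreateStore(V, DstI);          // store i64
  }

The point is that the integer form comes out the same no matter what the
frontend said the pointers point to, which is exactly what makes it a good
canonical form.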
I have been advocating in other places that we should keep canonicalizing
exactly as we currently do, and teach every optimization pass to look
through pointer bitcasts (until they go away). The particular reason I
advocate for this is that I expect this to make it *easier to get to
typeless pointers*. Every time we fix optimization passes to look through
bitcasts we get the rest of the optimizer closer in semantics to the world
with typeless pointers. As an example, some passes may need work to support
this, and that work can proceed in parallel with the other typeless pointer
work.
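As a rough sketch of the kind of fix I mean (the helper below is
hypothetical, not a patch to any particular pass):

  // Instead of giving up when a pointer operand is a bitcast, strip pointer
  // casts before matching, so the pass already behaves the way it will have
  // to once pointers are typeless.
  #include "llvm/IR/Instructions.h"
  using namespace llvm;

  static bool loadsFromSameObject(LoadInst *A, LoadInst *B) {
    // stripPointerCasts walks through bitcasts (and other no-op casts), which
    // is the view of the world typeless pointers will give us for free.
    return A->getPointerOperand()->stripPointerCasts() ==
           B->getPointerOperand()->stripPointerCasts();
  }

Once pointers are typeless, the bitcasts that call is stripping simply won't
exist, and the call can be deleted; that is why I think this work lines up
with, rather than against, the typeless pointer effort.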
When typeless pointers arrive, yes, we will have lots of code that looks
through bitcasts that are no longer there, but I suspect that code will be
both harmless and easy to find and remove. By contrast, adding new uses of
pointer types runs the risk of further barriers to typeless pointers
creeping into the optimization layers.
So, I vote for approach #2 above FWIW.
-Chandler
On Wed, Mar 23, 2016 at 8:58 AM Ehsan Amiri via llvm-dev <
llvm-dev at lists.llvm.org> wrote:
> OK. I will do some experiments with (1) on Power PC. Will update this
> email chain about the results.
>
>