[llvm-dev] Questions on LLVM vectorization diagnostics
Renato Golin via llvm-dev
llvm-dev at lists.llvm.org
Sat Aug 27 07:15:28 PDT 2016
On 25 August 2016 at 05:46, Saito, Hideki via llvm-dev
<llvm-dev at lists.llvm.org> wrote:
> Now, I have one question. Suppose we'd like to split the vectorization decision as an Analysis pass and vectorization
> transformation as a Transformation pass. Is it acceptable if an Analysis pass creates new Instructions and new BasicBlocks,
> keep them unreachable from the underlying IR of the Function/Loop, and pass those to the Transformation pass as
> part of Analysis's internal data? We've been operating under the assumption that such Analysis pass behavior is unacceptable.
First let me say, impressive work you guys are planning for the
vectoriser. Outer loop vectorisation is not an easy task, so feel free
to share your ideas early and often, as that would probably mean a lot
less work for you guys, too.
Regarding generation of dead code, I don't remember any pass doing
this (though I haven't looked at many). Most passes do some kind of
clean-up at the end, and DCE ends up getting rid of spurious things
here and there, so you can't *rely* on dead code surviving. It's even
worse than metadata: metadata is normally left alone *unless* it needs
to be destroyed, whereas dead code is purposely destroyed.
But analysis passes shouldn't be touching the code in the first place.
Of course, creating additional dead code is not strictly changing
existing code, but it could cause code bloat, leaks, or degrade other
analyses. My personal view is that this is a bad move.
> Please let us know if this is a generally acceptable way for an Analysis pass to work ---- this might make our development
> move quicker. Why we'd want to do this? As mentioned above, we need to "pseudo-massage inner loop control flow"
> before deciding where/whether to vectorize. Hope someone can give us a clear answer.
We have discussed the split of analysis vs transformation with Polly
years ago, and it was considered "a good idea". But that relied
exclusively on metadata.
So, first, the vectorisers and Polly would pass over the IR as
analysis passes, leaving a trail of width/unroll factors, loop
dependency trackers, recommended skew factors, etc. Then the
transformation passes (Loop/SLP/Polly) would use that information to
transform the loop as best they can, and clean up the metadata,
leaving only a single "width=1", which means "don't try to vectorise
any more". Clean-ups as required, after the transformation pass.
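For concreteness, the existing loop metadata already works roughly
this way; a minimal sketch (the hint names are the real ones the
current vectoriser recognises, the values are illustrative):

```llvm
; Before transformation: the analysis trail attached to the loop's
; latch branch (chosen width, interleave factor).
br i1 %exitcond, label %exit, label %loop.body, !llvm.loop !0

!0 = distinct !{!0, !1, !2}
!1 = !{!"llvm.loop.vectorize.width", i32 4}
!2 = !{!"llvm.loop.interleave.count", i32 2}

; After transformation: a single width=1, i.e. "don't try to
; vectorise any more".
!3 = distinct !{!3, !4}
!4 = !{!"llvm.loop.vectorize.width", i32 1}
```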
The current loop vectoriser is split into three stages: validity, cost
and transformation. We only check the cost if we know of a valid
transformation, and we only transform if we know of a cost better than
width=1. Where the cost analysis would live depends on how we arrange
Polly, the Loop and SLP vectorisers, and their analysis passes.
Conservatively, I'd leave the cost analysis with the transformation,
so we only do it once.
The outer loop proposal, then, suffers from the cost analysis not
being done at the same time as the validity analysis. It would also
complicate things a lot to pass more than one kind of possible
vectorisation technique via the same metadata structure, which will
probably already be complex enough. This is the main reason why we
haven't done the split yet.
Given that scenario of split responsibility, I'm curious about your
opinion on carrying (and sharing) metadata between different
vectorisation analysis passes and different transformation passes.