<div dir="ltr"><div dir="ltr">On Sat, 31 Jul 2021 at 00:33, Luke Kenneth Casson Leighton via llvm-dev <<a href="mailto:llvm-dev@lists.llvm.org">llvm-dev@lists.llvm.org</a>> wrote:<br></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">if however instead of an NxM problem this was turned into N+M,<br>
separating out "scalar base" from "augmentation" throughout the IR,<br>
the problem disappears entirely.<br></blockquote><div><br></div><div>Hi Luke,</div><div><br></div><div>It's not entirely clear to me what you are suggesting here.</div><div><br></div><div>For context:</div><div> * Historically, we have tried to keep as many operations as possible as native IR instructions, precisely to avoid the explosion of intrinsics you describe.</div><div> * However, intrinsics are traditionally justified because they reduce the number of instructions in a basic block rather than increasing it, so there is always a balance to strike.</div><div> * For example, some reduction intrinsics were added to address bloat, but no target is forced to use them.</div><div> * If you can represent the operation as a series of native IR instructions, by all means, you should do so.</div><div><br></div><div>I get that a lot of intrinsics are repeated patterns across all the variations, and that most targets don't have that many, so it's "ok".<br></div><div><br></div><div>I also get that most SIMD vector operations aren't inherently vector operations, but expansions of scalar operations for the benefit of vectorisation (plus predication, to avoid undefined behaviour and to allow "funny" patterns, etc.).</div><div><br></div><div>But it's not clear to me what the "augmentation" part would be in other targets.</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">even permute / shuffle Vector/SIMD operations are separateable into<br>
"base" and "abstract Vector Concept": the "base" operation in that<br>
case being "MV.X" (scalar register copy, indexable - reg[RT] =<br>
reg[reg[RA]] and immediate variant reg[RT] = reg[RA+imm])</blockquote><div><br></div><div>Shuffles are already represented as native IR instructions (shufflevector, plus insertelement/extractelement), so I'm not sure this clarifies much.</div><div><br></div><div>Have you looked at the current scalable vector implementation?</div><div><br></div><div>It allows a set of operations on open-ended vectors, controlled by a predicate, which may be the "augmentation" that you're looking for.</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">the issue is that this is a massive intrusive change, effectively a<br>
low-level redesign of LLVM IR internals for every single back-end.</blockquote><div><br></div><div>Not necessarily.</div><div><br></div><div>For example, scalable vectors are being introduced in a way that non-scalable back-ends (mostly) won't notice.</div><div>And it wasn't just a matter of adding a few intrinsics; the very concept of vectors in the IR was changed.</div><div>There could be a (set of) construct(s) for your particular back-end that is invisible to the others.</div><div><br></div><div>Of course, the more invisible constructs there are, the harder it is to validate and change the code where they intersect, so the change must really be worth the extra hassle.</div><div>With both Arm and RISC-V implementing scalable extensions, that change was deemed worthwhile, and work is progressing.</div><div>So, if you could leverage the existing code to your advantage, you'd avoid having to convince a huge community to implement a large breaking change.</div><div>And you'd also give us one more reason for the scalable extension to exist. :)</div><div><br></div><div>Hope this helps.</div><div><br></div><div>cheers,</div><div>--renato <br></div></div></div>
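<div><br></div><div>P.S. A rough sketch of what I mean, in IR form. The shuffle and reduction parts are plain current LLVM IR; the predicated part uses the in-progress vector-predication (VP) intrinsics, so take the exact names and signatures there as illustrative rather than final:</div>

```llvm
declare i32 @llvm.vector.reduce.add.v4i32(<4 x i32>)
declare <vscale x 4 x i32> @llvm.vp.add.nxv4i32(
    <vscale x 4 x i32>, <vscale x 4 x i32>, <vscale x 4 x i1>, i32)

define i32 @fixed_width(<4 x i32> %a, <4 x i32> %b) {
  ; A shuffle is a first-class IR instruction, not an intrinsic:
  %v = shufflevector <4 x i32> %a, <4 x i32> %b,
                     <4 x i32> <i32 0, i32 4, i32 1, i32 5>
  ; One of the reduction intrinsics mentioned above, added to
  ; replace long chains of extractelement/add:
  %sum = call i32 @llvm.vector.reduce.add.v4i32(<4 x i32> %v)
  ret i32 %sum
}

define <vscale x 4 x i32> @scalable_predicated(
    <vscale x 4 x i32> %x, <vscale x 4 x i32> %y,
    <vscale x 4 x i1> %m, i32 %evl) {
  ; Scalable type (<vscale x 4 x i32>) plus mask %m and explicit
  ; vector length %evl: the base operation is still just "add";
  ; the predicate/length is the "augmentation" layered on top.
  %r = call <vscale x 4 x i32> @llvm.vp.add.nxv4i32(
           <vscale x 4 x i32> %x, <vscale x 4 x i32> %y,
           <vscale x 4 x i1> %m, i32 %evl)
  ret <vscale x 4 x i32> %r
}
```

<div>That last function is roughly the shape of thing I'd suggest looking at before proposing a new base/augmentation split across the whole IR.</div>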