<html><head><meta http-equiv="Content-Type" content="text/html; charset=windows-1252"></head><body style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space;" class=""><br class=""><div><blockquote type="cite" class=""><div class="">On Oct 24, 2014, at 10:57 AM, Adam Nemet &lt;<a href="mailto:anemet@apple.com" class="">anemet@apple.com</a>&gt; wrote:</div><br class="Apple-interchange-newline"><div class=""><meta http-equiv="Content-Type" content="text/html; charset=windows-1252" class=""><div style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space;" class=""><div class="">On Oct 24, 2014, at 4:24 AM, Demikhovsky, Elena &lt;<a href="mailto:elena.demikhovsky@intel.com" class="">elena.demikhovsky@intel.com</a>&gt; wrote:</div><div class=""><br class="Apple-interchange-newline"><blockquote type="cite" class=""><div style="font-family: Helvetica; font-size: 14px; font-style: normal; font-variant: normal; font-weight: normal; letter-spacing: normal; line-height: normal; orphans: auto; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; widows: auto; word-spacing: 0px; -webkit-text-stroke-width: 0px;" class=""><font face="Calibri" size="2" class=""><span style="font-size: 11pt;" class=""><div class=""><font color="#1F497D" class="">Hi,</font></div><div class=""><font color="#1F497D" class=""> </font></div><div class=""><font color="#1F497D" class="">We would like to add support for masked vector loads and stores by introducing new target-independent intrinsics. The loop vectorizer will then be enhanced to optimize loops containing conditional memory accesses by generating these intrinsics for existing targets such as AVX2 and AVX-512. The vectorizer will first ask the target about availability of masked vector loads and stores. 
The SLP vectorizer can potentially be enhanced to use these intrinsics as well.</font></div><div class=""><font color="#1F497D" class=""> </font></div></span></font></div></blockquote></div></div></div></blockquote><div><br class=""></div><div>I am happy to hear that you are working on this because it means that in the future we would be able to teach the SLP Vectorizer to vectorize types of &lt;3 x float&gt;. </div><br class=""><blockquote type="cite" class=""><div class=""><div style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space;" class=""><div class=""><blockquote type="cite" class=""><div style="font-family: Helvetica; font-size: 14px; font-style: normal; font-variant: normal; font-weight: normal; letter-spacing: normal; line-height: normal; orphans: auto; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; widows: auto; word-spacing: 0px; -webkit-text-stroke-width: 0px;" class=""><font face="Calibri" size="2" class=""><span style="font-size: 11pt;" class=""><div class=""><font color="#1F497D" class="">The intrinsics would be legal for all targets; targets that do not support masked vector loads or stores will scalarize them.</font></div></span></font></div></blockquote><div class=""><br class=""></div></div></div></div></blockquote><div><br class=""></div><div>+1. I think that this is an important requirement. </div><br class=""><blockquote type="cite" class=""><div class=""><div style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space;" class=""><div class=""><div class="">I do agree that we would like to have one IR node to capture these so that they survive until ISel and that their specific semantics can be expressed. 
However, can you discuss the other options (new IR instructions, target-specific intrinsics) and why you went with target-independent intrinsics?</div><div class=""><br class=""></div></div></div></div></blockquote><div><br class=""></div><div>I agree with the approach of adding target-independent masked memory intrinsics. One reason is that I would like to keep the vectorizers target-independent (and use the target transform info to query the backends). I oppose adding new first-class instructions because we would need to teach all of the existing optimizations about the new instructions, and considering the limited usefulness of masked operations it is not worth the effort. </div><div><br class=""></div><blockquote type="cite" class=""><div class=""><div style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space;" class=""><div class=""><div class="">My intuition would have been to go with target-specific intrinsics until we have something solid implemented, and then potentially turn this into native IR instructions as the next step (for other targets, etc.). I am particularly worried whether we really want to generate these for targets that don’t have vector predication support.</div></div></div></div></blockquote><div><br class=""></div><div>Probably not, but this is a cost-benefit decision that the vectorizers would need to make. </div><div><br class=""></div><blockquote type="cite" class=""><div class=""><div style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space;" class=""><div class=""><div class=""><br class=""></div><div class=""><div class="">There is also the related question of vector predicating any other instruction beyond just loads and stores, which AVX512 supports. 
This is probably a smaller gain, but it should be part of the plan as well.</div><div class=""><br class=""></div></div><div class="">Adam</div><br class=""><blockquote type="cite" class=""><div style="font-family: Helvetica; font-size: 14px; font-style: normal; font-variant: normal; font-weight: normal; letter-spacing: normal; line-height: normal; orphans: auto; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; widows: auto; word-spacing: 0px; -webkit-text-stroke-width: 0px;" class=""><font face="Calibri" size="2" class=""><span style="font-size: 11pt;" class=""><div class=""><font color="#1F497D" class="">The addressed memory will not be touched for masked-off lanes. In particular, if all lanes are masked off, no address will be accessed.</font></div><div class=""><font color="#1F497D" class=""> </font></div><div class=""><font color="#1F497D" class=""> call void @llvm.masked.store (i32* %addr, &lt;16 x i32&gt; %data, i32 4, &lt;16 x i1&gt; %mask)</font></div><div class=""><font color="#1F497D" class=""> </font></div><div class=""><font color="#1F497D" class=""> %data = call &lt;8 x i32&gt; @llvm.masked.load (i32* %addr, &lt;8 x i32&gt; %passthru, i32 4, &lt;8 x i1&gt; %mask)</font></div><div class=""><font color="#1F497D" class=""> </font></div><div class=""><font color="#1F497D" class="">where %passthru is used to fill the elements of %data that are masked off (if any; it can be zeroinitializer or undef).</font></div><div class=""><font color="#1F497D" class=""> </font></div><div class=""><font color="#1F497D" class="">Comments so far, before we dive into more details?</font></div><div class=""><font color="#1F497D" class=""> </font></div><div class=""><font color="#1F497D" class="">Thank you.</font></div><div class=""><font color="#1F497D" class=""> </font></div><div class=""><font color="#1F497D" class="">- Elena and Ayal</font></div><div class=""><font color="#1F497D" class=""> </font></div><div class=""> </div></span></font><p 
class="">---------------------------------------------------------------------<br class="">Intel Israel (74) Limited</p><p class="">This e-mail and any attachments may contain confidential material for<br class="">the sole use of the intended recipient(s). Any review or distribution<br class="">by others is strictly prohibited. If you are not the intended<br class="">recipient, please contact the sender and delete all copies.</p></div></blockquote></div><br class=""></div></div></blockquote></div><br class=""></body></html>