[PATCH] D89031: [SVE] Add support to vectorize_width loop pragma for scalable vectors

Sander de Smalen via Phabricator via cfe-commits <cfe-commits at lists.llvm.org>
Tue Nov 3 01:18:24 PST 2020


sdesmalen requested changes to this revision.
sdesmalen added inline comments.
This revision now requires changes to proceed.


================
Comment at: clang/docs/LanguageExtensions.rst:3039
+useful for specifying the optimal width/count of the set of target
+architectures supported by your application.
 
----------------
Can you add a comment saying that the use of `"scalable"` is still experimental and is currently only intended to work for targets that support scalable vectors?
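
It would also help to add a short usage example to the docs there, something along these lines (just a sketch, assuming the `vectorize_width(N, fixed|scalable)` form this patch adds; the function and names are only illustrative):

  void saxpy(float a, const float *x, float *y, int n) {
    // Request a scalable vectorization factor of 4, i.e. ask the vectorizer
    // to use <vscale x 4 x float> rather than <4 x float>.
  #pragma clang loop vectorize_width(4, scalable)
    for (int i = 0; i < n; ++i)
      y[i] = a * x[i] + y[i];
  }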


================
Comment at: clang/lib/Sema/SemaStmtAttr.cpp:144
+      assert(ValueExpr && "Attribute must have a valid value expression.");
+      if (S.CheckLoopHintExpr(ValueExpr, St->getBeginLoc()))
+        return nullptr;
----------------
fhahn wrote:
> sdesmalen wrote:
> > david-arm wrote:
> > > fhahn wrote:
> > > > Is there a way to only accept `fixed_width/scalable` for targets that support it? Not sure if we have enough information here, but we might be able to reject it, e.g. on a per-target basis or something
> > > Hi @fhahn, I think if possible we'd prefer not to reject scalable vectors at this point. Theoretically there is no reason why we can't perform scalable vectorisation for targets that don't have hardware support for scalable vectors. In this case it simply means that vscale is 1. If you want, we could add some kind of opt-remark in the vectoriser that says something like "target does not support scalable vectors, vectorising for vscale=1"?
> > I agree with @david-arm that we shouldn't prevent this in the front-end. Even if the architecture may not support scalable vectors natively, there may still be reasons to want to create scalable vectors in IR, for example to have more portable IR.
> Hm, I am just a bit worried that it might be confusing to users who do not know what scalable vectors are (it is obvious to anyone who knows all about SVE, but I would assume most people don't necessarily know what this means). I guess that is not the biggest deal, as long as `vectorize_width(X, scalable)` works for every target.
> 
> >  Even if the architecture may not support scalable vectors natively, there may still be reasons to want to create scalable vectors in IR, for example to have more portable IR.
> 
> Sure, but there are so many other target-specific things encoded that make the IR really un-portable between targets. Granted, it is not impossible to convert IR between some architectures (as in arm64_32)
Sorry, forgot to reply to this.

> Hm, I am just a bit worried that it might be confusing to users who do not know what scalable vectors are (it is obvious to anyone who knows all about SVE, but I would assume most people don't necessarily know what this means). I guess that is not the biggest deal, as long as `vectorize_width(X, scalable)` works for every target.
At the moment this feature is still experimental, so I don't think any target would be able to answer `true` when asked whether this is supported :) That said, I agree that the compiler shouldn't crash for other targets once support in the loop-vectorizer stops being experimental. So I'm changing my mind here, and am happy to go with your suggestion to ignore the flag for other targets. When some default mechanism is added to lower scalable vectors to fixed-width vectors (for targets that don't natively support them), this check can probably be removed. @david-arm, can you add some target hook to ignore the hint?
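
To make concrete what I mean by "ignore the hint", here is a small self-contained model of the guard (not actual LLVM code; `SupportsScalableVectors` is a hypothetical stand-in for whatever target hook we end up adding):

  #include <cstdio>

  struct VectorizationHint {
    unsigned MinWidth; // e.g. 4 for vectorize_width(4, scalable)
    bool Scalable;     // true if the user asked for a scalable VF
  };

  struct Target {
    bool SupportsScalableVectors; // the hypothetical hook
  };

  // If the user asked for a scalable VF on a target without scalable-vector
  // support, drop the 'scalable' part (i.e. treat it as the fixed VF you
  // would get with vscale = 1) and emit a remark, rather than rejecting the
  // pragma in Sema or crashing later on.
  static VectorizationHint applyHint(VectorizationHint Hint, const Target &T) {
    if (Hint.Scalable && !T.SupportsScalableVectors) {
      std::printf("remark: ignoring 'scalable' in vectorize_width hint; "
                  "target has no scalable-vector support\n");
      Hint.Scalable = false;
    }
    return Hint;
  }

  int main() {
    VectorizationHint H = applyHint({4, true}, {false});
    std::printf("using VF %u, scalable = %d\n", H.MinWidth, (int)H.Scalable);
  }

This is only to illustrate the intended behaviour; where exactly the check lives (Sema vs. the vectorizer) is up to you.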

> Sure, but there are so many other target-specific things encoded that make the IR really un-portable between targets. Granted, it is not impossible to convert IR between some architectures (as in arm64_32)
I didn't mean portable between targets, but rather keeping the vector length agnostic in the IR and leaving it until code generation to pick a suitable/available vector extension, so that the same IR could be used to generate code for Neon or 256-bit SVE, for example. This is more of a hypothetical use-case at the moment though.


CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D89031/new/

https://reviews.llvm.org/D89031


