[PATCH] D42724: [X86] Don't make 512-bit vectors legal when preferred vector width is 256 bits and 512 bits aren't required

Craig Topper via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Tue Aug 22 09:42:14 PDT 2023


craig.topper added inline comments.


================
Comment at: llvm/trunk/lib/Target/X86/X86TargetMachine.cpp:275
+  // Extract required-vector-width attribute.
+  unsigned RequiredVectorWidth = UINT32_MAX;
+  if (F.hasFnAttribute("required-vector-width")) {
----------------
danilaml wrote:
> @craig.topper Sorry for commenting on such an old review, but I was investigating some codegen differences for very similar IR and come across this code (the attribute later changed to the `min-legal-vector-width` but otherwise it's the same on main). Is RequiredVectorWidth intended to be initialized to `UINT32_MAX`? What is the rationale? It forces maximum vector width if the function is missing the attribute for some reason, ignoring the `prefer-*` attributes. To me it seems that the conservative approach would be to set it to `0` and increase according to the attribute, since zero length vectors are always legal/"required".
It is intentionally set to UINT32_MAX when the attribute is missing. If the IR contains any 512-bit inline assembly, function arguments, returns, or X86-specific vector intrinsics, the backend will crash or violate the ABI unless 512-bit vectors are legal. The presence of the attribute indicates that those cases have been checked and nothing requires 512-bit vectors.
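
For reference, a minimal sketch of the defaulting logic being discussed, written against the current `min-legal-vector-width` spelling. This is a simplified reconstruction for illustration, not the exact in-tree code; the helper name getRequiredVectorWidth is hypothetical.

  #include "llvm/ADT/StringRef.h"
  #include "llvm/IR/Function.h"
  #include <cstdint>

  using namespace llvm;

  // Hypothetical helper: derive the vector width the function requires
  // to be legal. UINT32_MAX is the conservative default: with no
  // attribute present, the IR may still contain 512-bit arguments,
  // returns, or inline assembly, so 512-bit types must stay legal.
  static unsigned getRequiredVectorWidth(const Function &F) {
    unsigned RequiredVectorWidth = UINT32_MAX;
    if (F.hasFnAttribute("min-legal-vector-width")) {
      StringRef Val =
          F.getFnAttribute("min-legal-vector-width").getValueAsString();
      unsigned Width;
      // getAsInteger returns true on parse failure; keep the
      // conservative default in that case.
      if (!Val.getAsInteger(0, Width))
        RequiredVectorWidth = Width;
    }
    return RequiredVectorWidth;
  }

Defaulting to 0 instead would silently narrow the legal vector types for any function whose attribute was dropped (e.g. by a pass or frontend that does not emit it), which is exactly the ABI hazard described above.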


Repository:
  rL LLVM

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D42724/new/

https://reviews.llvm.org/D42724


