[llvm] [RISCV][GISel] Don't custom legalize load/store of vector of pointers if ELEN < XLEN. (PR #101565)

Michael Maitland via llvm-commits llvm-commits at lists.llvm.org
Tue Aug 13 11:51:53 PDT 2024


================
@@ -302,38 +317,28 @@ RISCVLegalizerInfo::RISCVLegalizerInfo(const RISCVSubtarget &ST)
                                                {nxv8s32, p0, nxv8s32, 32},
                                                {nxv16s32, p0, nxv16s32, 32}});
 
-  auto &ExtLoadActions =
-      getActionDefinitionsBuilder({G_SEXTLOAD, G_ZEXTLOAD})
-          .legalForTypesWithMemDesc({{s32, p0, s8, 8}, {s32, p0, s16, 16}});
-  if (XLen == 64) {
-    LoadStoreActions.legalForTypesWithMemDesc({{s64, p0, s8, 8},
-                                               {s64, p0, s16, 16},
-                                               {s64, p0, s32, 32},
-                                               {s64, p0, s64, 64}});
-    ExtLoadActions.legalForTypesWithMemDesc(
-        {{s64, p0, s8, 8}, {s64, p0, s16, 16}, {s64, p0, s32, 32}});
-  } else if (ST.hasStdExtD()) {
-    LoadStoreActions.legalForTypesWithMemDesc({{s64, p0, s64, 64}});
-  }
-  if (ST.hasVInstructions() && ST.getELen() == 64)
-    LoadStoreActions.legalForTypesWithMemDesc({{nxv1s8, p0, nxv1s8, 8},
-                                               {nxv1s16, p0, nxv1s16, 16},
-                                               {nxv1s32, p0, nxv1s32, 32}});
+    if (ST.getELen() == 64)
+      LoadStoreActions.legalForTypesWithMemDesc({{nxv1s8, p0, nxv1s8, 8},
+                                                 {nxv1s16, p0, nxv1s16, 16},
+                                                 {nxv1s32, p0, nxv1s32, 32}});
+
+    if (ST.hasVInstructionsI64())
+      LoadStoreActions.legalForTypesWithMemDesc({{nxv1s64, p0, nxv1s64, 64},
+                                                 {nxv2s64, p0, nxv2s64, 64},
+                                                 {nxv4s64, p0, nxv4s64, 64},
+                                                 {nxv8s64, p0, nxv8s64, 64}});
 
-  if (ST.hasVInstructionsI64())
-    LoadStoreActions.legalForTypesWithMemDesc({{nxv1s64, p0, nxv1s64, 64},
+    // we will take the custom lowering logic if we have scalable vector types
+    // with non-standard alignments
+    LoadStoreActions.customIf(typeIsLegalIntOrFPVec(0, IntOrFPVecTys, ST));
 
-                                               {nxv2s64, p0, nxv2s64, 64},
-                                               {nxv4s64, p0, nxv4s64, 64},
-                                               {nxv8s64, p0, nxv8s64, 64}});
+    // Pointers require that XLen sized elements are legal.
+    if (XLen <= ST.getELen())
+      LoadStoreActions.customIf(typeIsLegalPtrVec(0, PtrVecTys, ST));
----------------
michaelmaitland wrote:

> On the other hand calling allowsMemoryAccessForAlignment allows us to handle FeatureUnalignedVectorMem correctly. So maybe that's a vote for removing the legalIf and just letting the custom handle always handle it?

Do we care about FeatureUnalignedVectorMem in the aligned cases? If we were to remove the legalIf, that could probably be done in a separate patch, since it isn't related to pointers.

https://github.com/llvm/llvm-project/pull/101565