[llvm] [X86] LowerStore - always split 512-bit concatenated stores (PR #143426)
via llvm-commits
llvm-commits at lists.llvm.org
Mon Jun 9 12:16:46 PDT 2025
llvmbot wrote:
<!--LLVM PR SUMMARY COMMENT-->
@llvm/pr-subscribers-backend-x86
Author: Simon Pilgrim (RKSimon)
<details>
<summary>Changes</summary>
All BWI regressions have now been addressed, so remove the special case handling.
---
Full diff: https://github.com/llvm/llvm-project/pull/143426.diff
1 File Affected:
- (modified) llvm/lib/Target/X86/X86ISelLowering.cpp (+5-7)
``````````diff
diff --git a/llvm/lib/Target/X86/X86ISelLowering.cpp b/llvm/lib/Target/X86/X86ISelLowering.cpp
index 9ea45513cc019..0cae8dbcc3bc7 100644
--- a/llvm/lib/Target/X86/X86ISelLowering.cpp
+++ b/llvm/lib/Target/X86/X86ISelLowering.cpp
@@ -25549,14 +25549,12 @@ static SDValue LowerStore(SDValue Op, const X86Subtarget &Subtarget,
if (St->isTruncatingStore())
return SDValue();
- // If this is a 256-bit store of concatenated ops, we are better off splitting
- // that store into two 128-bit stores. This avoids spurious use of 256-bit ops
- // and each half can execute independently. Some cores would split the op into
- // halves anyway, so the concat (vinsertf128) is purely an extra op.
+ // If this is a 256/512-bit store of concatenated ops, we are better off
+ // splitting that store into two half-size stores. This avoids spurious use of
+ // concatenated ops and each half can execute independently. Some cores would
+ // split the op into halves anyway, so the concat is purely an extra op.
MVT StoreVT = StoredVal.getSimpleValueType();
- if (StoreVT.is256BitVector() ||
- ((StoreVT == MVT::v32i16 || StoreVT == MVT::v64i8) &&
- !Subtarget.hasBWI())) {
+ if (StoreVT.is256BitVector() || StoreVT.is512BitVector()) {
if (StoredVal.hasOneUse() && isFreeToSplitVector(StoredVal, DAG))
return splitVectorStore(St, DAG);
return SDValue();
``````````
</details>
https://github.com/llvm/llvm-project/pull/143426