[clang] [llvm] [clang-tools-extra] [X86] Use plain load/store instead of cmpxchg16b for atomics with AVX (PR #74275)
James Y Knight via cfe-commits
cfe-commits at lists.llvm.org
Mon Jan 8 20:10:06 PST 2024
================
@@ -30113,32 +30120,40 @@ TargetLoweringBase::AtomicExpansionKind
 X86TargetLowering::shouldExpandAtomicStoreInIR(StoreInst *SI) const {
   Type *MemType = SI->getValueOperand()->getType();

-  bool NoImplicitFloatOps =
-      SI->getFunction()->hasFnAttribute(Attribute::NoImplicitFloat);
-  if (MemType->getPrimitiveSizeInBits() == 64 && !Subtarget.is64Bit() &&
-      !Subtarget.useSoftFloat() && !NoImplicitFloatOps &&
-      (Subtarget.hasSSE1() || Subtarget.hasX87()))
-    return AtomicExpansionKind::None;
+  if (!SI->getFunction()->hasFnAttribute(Attribute::NoImplicitFloat) &&
+      !Subtarget.useSoftFloat()) {
+    if (MemType->getPrimitiveSizeInBits() == 64 && !Subtarget.is64Bit() &&
+        (Subtarget.hasSSE1() || Subtarget.hasX87()))
+      return AtomicExpansionKind::None;
+
+    if (MemType->getPrimitiveSizeInBits() == 128 && Subtarget.is64Bit() &&
+        Subtarget.hasAVX())
+      return AtomicExpansionKind::None;
+  }

   return needsCmpXchgNb(MemType) ? AtomicExpansionKind::Expand
                                  : AtomicExpansionKind::None;
 }

 // Note: this turns large loads into lock cmpxchg8b/16b.
-// TODO: In 32-bit mode, use MOVLPS when SSE1 is available?
 TargetLowering::AtomicExpansionKind
 X86TargetLowering::shouldExpandAtomicLoadInIR(LoadInst *LI) const {
   Type *MemType = LI->getType();

-  // If this is a 64 bit atomic load on a 32-bit target and SSE2 is enabled,
-  // we can use movq to do the load. If we have X87 we can load into an
-  // 80-bit X87 register and store it to a stack temporary.
-  bool NoImplicitFloatOps =
-      LI->getFunction()->hasFnAttribute(Attribute::NoImplicitFloat);
-  if (MemType->getPrimitiveSizeInBits() == 64 && !Subtarget.is64Bit() &&
-      !Subtarget.useSoftFloat() && !NoImplicitFloatOps &&
-      (Subtarget.hasSSE1() || Subtarget.hasX87()))
-    return AtomicExpansionKind::None;
+  if (!LI->getFunction()->hasFnAttribute(Attribute::NoImplicitFloat) &&
+      !Subtarget.useSoftFloat()) {
+    // If this is a 64 bit atomic load on a 32-bit target and SSE2 is enabled,
+    // we can use movq to do the load. If we have X87 we can load into an
+    // 80-bit X87 register and store it to a stack temporary.
+    if (MemType->getPrimitiveSizeInBits() == 64 && !Subtarget.is64Bit() &&
----------------
jyknight wrote:
Yikes, that's a nasty footgun -- obvious once you think about it, but I never would've looked out for that issue! The API of `getPrimitiveSizeInBits` is just _asking_ for mistakes like this. It ought to either assert when called on a non-primitive, or else return an optional, rather than simply returning 0 on failure.
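For concreteness, here's a minimal standalone sketch of the trap (my illustration, not code from this patch; it uses only the public `Type`/`PointerType` APIs):

```c++
#include "llvm/IR/DerivedTypes.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Type.h"
#include "llvm/Support/raw_ostream.h"

int main() {
  llvm::LLVMContext Ctx;

  // A genuine primitive type reports its width, as expected.
  llvm::Type *I64 = llvm::Type::getInt64Ty(Ctx);
  llvm::outs() << I64->getPrimitiveSizeInBits().getFixedValue() << "\n"; // 64

  // A pointer is not "primitive": the call silently returns 0, not the
  // target's pointer width (only DataLayout can tell you that).
  llvm::Type *Ptr = llvm::PointerType::get(Ctx, /*AddressSpace=*/0);
  llvm::outs() << Ptr->getPrimitiveSizeInBits().getFixedValue() << "\n"; // 0
  return 0;
}
```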
But, looking through this code and the rest of the related atomics support on X86, I believe there is actually _not_ a correctness impact, because (not coincidentally!) atomics always work "normally" on pointer-sized values. That is, here, `getPrimitiveSizeInBits()` will return 0 rather than the perhaps-expected `is64Bit() ? 64 : 32`. Despite being unexpected, though, the logic still works out fine in that case: a width of 0 matches neither the 64-bit nor the 128-bit special case, and `needsCmpXchgNb` returns false for it as well, so pointer atomics fall through to `AtomicExpansionKind::None` -- a plain load/store, which is exactly right for a pointer-sized access. The sketch below walks through it.
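Here's a condensed model of that fall-through (a sketch only -- the real code queries the `Subtarget`, and `needsCmpXchgNb` also checks CMPXCHG8B/CMPXCHG16B feature availability):

```c++
#include <cstdint>

enum class ExpansionKind { None, Expand };

// Simplified stand-in for X86's needsCmpXchgNb.
static bool needsCmpXchgNb(uint64_t WidthInBits, bool Is64Bit) {
  if (WidthInBits == 64)
    return !Is64Bit; // lock cmpxchg8b on 32-bit targets
  if (WidthInBits == 128)
    return Is64Bit;  // lock cmpxchg16b on 64-bit targets
  return false;      // width 0 (a pointer type) lands here
}

static ExpansionKind shouldExpandAtomicLoad(uint64_t WidthInBits,
                                            bool Is64Bit, bool HasAVX) {
  if (WidthInBits == 64 && !Is64Bit)
    return ExpansionKind::None; // movq / X87 path on 32-bit targets
  if (WidthInBits == 128 && Is64Bit && HasAVX)
    return ExpansionKind::None; // the new plain-load path in this patch
  // A pointer's "primitive size" of 0 matches neither case above, and
  // needsCmpXchgNb(0, ...) is false, so the result is still None -- i.e.
  // a plain load, which is correct for a pointer-sized atomic.
  return needsCmpXchgNb(WidthInBits, Is64Bit) ? ExpansionKind::Expand
                                              : ExpansionKind::None;
}
```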
So, given that this is a longstanding pre-existing issue -- which appears not to affect correctness -- I'd like to leave it alone in this patch.
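For reference, the optional-returning shape I mean might look roughly like this (a hypothetical helper, not an existing LLVM API):

```c++
#include <optional>

#include "llvm/IR/Type.h"
#include "llvm/Support/TypeSize.h"

// Hypothetical hardened variant of Type::getPrimitiveSizeInBits(): it makes
// the "not a primitive type" case impossible to ignore, instead of folding
// it into a 0 that silently fails == 64 / == 128 comparisons.
std::optional<llvm::TypeSize> tryGetPrimitiveSizeInBits(const llvm::Type *Ty) {
  llvm::TypeSize Size = Ty->getPrimitiveSizeInBits();
  if (Size.isZero()) // pointers, aggregates, void, etc. all report 0
    return std::nullopt;
  return Size;
}
```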
https://github.com/llvm/llvm-project/pull/74275