[llvm] [SystemZ] Simplify handling of AtomicRMW instructions. (PR #74789)
Ulrich Weigand via llvm-commits
llvm-commits at lists.llvm.org
Fri Dec 8 05:18:37 PST 2023
================
@@ -8023,39 +8004,28 @@ MachineBasicBlock *SystemZTargetLowering::emitAtomicLoadBinary(
BuildMI(MBB, DL, TII->get(SystemZ::PHI), OldVal)
.addReg(OrigVal).addMBB(StartMBB)
.addReg(Dest).addMBB(LoopMBB);
- if (IsSubWord)
- BuildMI(MBB, DL, TII->get(SystemZ::RLL), RotatedOldVal)
- .addReg(OldVal).addReg(BitShift).addImm(0);
+ BuildMI(MBB, DL, TII->get(SystemZ::RLL), RotatedOldVal)
+ .addReg(OldVal).addReg(BitShift).addImm(0);
if (Invert) {
// Perform the operation normally and then invert every bit of the field.
- Register Tmp = MRI.createVirtualRegister(RC);
+ Register Tmp = MRI.createVirtualRegister(&SystemZ::GR32BitRegClass);
BuildMI(MBB, DL, TII->get(BinOpcode), Tmp).addReg(RotatedOldVal).add(Src2);
- if (BitSize <= 32)
- // XILF with the upper BitSize bits set.
- BuildMI(MBB, DL, TII->get(SystemZ::XILF), RotatedNewVal)
- .addReg(Tmp).addImm(-1U << (32 - BitSize));
- else {
- // Use LCGR and add -1 to the result, which is more compact than
- // an XILF, XILH pair.
----------------
uweigand wrote:
As an aside, if LCGR/AGHI is really a more efficient way to implement a 64-bit NOT, we should do this generally (not just special-cased inside atomic ops). This can be looked at as a follow-on.
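
[Editorial note: the reason LCGR followed by an add of -1 computes a NOT is the two's-complement identity ~x == -x - 1. The snippet below is a plain, standalone C++ sketch of that identity for illustration only; the helper name is invented and this is not SystemZ backend code.]

```cpp
// Sketch of the identity behind the LCGR/AGHI trick: in two's complement,
// ~x == -x - 1, so a 64-bit NOT can be formed by a negate (LCGR) followed
// by adding -1 (AGHI -1).  Unsigned arithmetic keeps the wraparound
// well-defined in C++.
#include <cassert>
#include <cstdint>

uint64_t not_via_negate(uint64_t X) {
  uint64_t Neg = 0 - X;   // corresponds to LCGR (load complement, 64-bit)
  return Neg - 1;         // corresponds to AGHI with immediate -1
}

int main() {
  for (uint64_t X : {UINT64_C(0), UINT64_C(1), UINT64_C(42), ~UINT64_C(0)})
    assert(not_via_negate(X) == ~X);
  return 0;
}
```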
https://github.com/llvm/llvm-project/pull/74789