[llvm] [TargetInstrInfo] enable foldMemoryOperand for InlineAsm (PR #70743)

via llvm-commits llvm-commits at lists.llvm.org
Tue Nov 14 02:18:26 PST 2023


================
@@ -565,6 +565,64 @@ static MachineInstr *foldPatchpoint(MachineFunction &MF, MachineInstr &MI,
   return NewMI;
 }
 
+static void foldInlineAsmMemOperand(MachineInstr *MI, unsigned OpNo, int FI,
+                                    const TargetInstrInfo &TII) {
+  MachineOperand &MO = MI->getOperand(OpNo);
+  const VirtRegInfo &RI = AnalyzeVirtRegInBundle(*MI, MO.getReg());
+
+  // If the machine operand is tied, untie it first.
+  if (MO.isTied()) {
+    unsigned TiedTo = MI->findTiedOperandIdx(OpNo);
+    MI->untieRegOperand(OpNo);
----------------
qcolombet wrote:

TL;DR: Sounds fine to me; we're not creating new problems.

I agree with everything you're saying, but my point is that when we replace operands with memory locations, the resulting inline assembly may not be correct; put differently, I don't see where we check that the resulting asm is correct.
I actually realize now that this is not specific to tied operands.

E.g., consider the following inline asm constraints on x86 (with `someop` standing in for something that makes sense):
```
asm("someop %0, %1, %2"
    : "=rm"(dst)
    : "rm"(a1), "rm"(a2));
```

The resulting code will be fine as long as regalloc spills at most one operand. If it spills two or more, we'll end up with more than one memory location on `someop`. Assuming `someop` is a single x86 instruction, I believe that will produce something invalid.
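For concreteness, here is a minimal sketch of the same pattern as a full function (the wrapper name, the register choice, and the stack offsets in the comment are made up), showing why spilling two operands goes wrong:

```
/* Hypothetical sketch: same constraints as above. If regalloc happened to
 * spill both a1 and a2, the substituted text would be roughly
 * "someop -8(%rsp), -16(%rsp), %eax" (offsets made up), i.e. two memory
 * operands, which a single x86 instruction cannot encode. */
static int someop_wrapper(int a1, int a2) {
  int dst;
  asm("someop %0, %1, %2" : "=rm"(dst) : "rm"(a1), "rm"(a2));
  return dst;
}
```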

With tied operands, I believe we just make this more likely to happen, but considering that we were previously just using the `m` constraint in these cases, we already had this issue anyway.
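For reference, a hedged sketch of the tied-operand shape being discussed (again with the placeholder `someop`; the function name is made up); a `+` constraint is what produces the tied use/def:

```
/* Hypothetical: "+rm" makes %0 a tied read-write operand. If dst is spilled,
 * the tied def and use share one memory location, and any additional spilled
 * "rm" input adds a second memory operand, i.e. the same encodability
 * problem as above. */
static int tied_someop(int dst, int a1) {
  asm("someop %1, %0" : "+rm"(dst) : "rm"(a1));
  return dst;
}
```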

Therefore, nothing to worry about; I guess it is the developers' responsibility to write constraints that make sense.

https://github.com/llvm/llvm-project/pull/70743

