[PATCH] D57970: [WinEH] Allocate unique stack slots for xmm CSRs in funclets

Andy Kaylor via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Fri Feb 8 13:39:15 PST 2019


andrew.w.kaylor marked an inline comment as done.
andrew.w.kaylor added inline comments.


================
Comment at: test/CodeGen/X86/catchpad-realign-savexmm.ll:65-66
+; CHECK:	leaq	80(%rdx), %rbp
+; CHECK:	movapd	%xmm6, -32(%rbp)        # 16-byte Spill
+; CHECK:	.seh_savexmm 6, 48
+; CHECK:	.seh_endprologue
----------------
rnk wrote:
> I don't think this will work long in practice because we are storing XMM into the parent function's stack frame, but the .seh_savexmm directive describes locations relative the the funclet's RSP. So, when the stack is unwound, (throw an exception out of a catch block) XMM CSRs will not be restored correctly.
> 
> Another idea I had for fixing this was to change X86FrameLowering::getFrameIndexReference to do something special for XMM CSRs (we should be able to find the list of them somewhere), and resolve them to some SP-relative offset in the funclet's frame. We'll have to adjust the "SUB RSP, 32" that we currently emit for every funclet as well for that to work.
I see what you mean; that makes the hardcoded-offset suggestion more sensible.

That said, I think this would probably "work" (i.e., not fail) in a lot of cases. If the funclet throws an exception, the unwinder will put garbage in the XMM CSRs, but if the exception isn't caught by the parent function, the correct values will be filled in when the parent's own stack frame unwinds. If my thinking is correct, we'd only keep incorrect values when a funclet throws an exception and the parent catches it. I'm not suggesting that this is in any way good, but it's slightly less bad than what we're doing now.

I'll see if I can figure out how to do it the way you're suggesting. In the meantime, do you think what I have here is useful as a workaround for desperate people (of which I am currently one)? And if so, do you think it's worth committing even if I'm not going to have a proper fix soon?


Repository:
  rL LLVM

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D57970/new/

https://reviews.llvm.org/D57970




