[PATCH] D80131: [x86] favor vector constant load to avoid GPR to XMM transfer, part 2

Craig Topper via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Thu May 21 12:27:20 PDT 2020


craig.topper added a comment.

In D80131#2049497 <https://reviews.llvm.org/D80131#2049497>, @RKSimon wrote:

> Ideally we'd have a way to fold vzext_load ops inside X86InstrInfo::foldMemoryOperandCustom (by zero padding the constant pool entry where necessary) but I'm not certain how easy that is.
>
> So we probably want to go with this variant (sorry for the wild goose chase).
>
> @craig.topper any thoughts?


I think the caller of foldMemoryOperandImpl is responsible for copying the memoperand over to the new instruction, so changing the memory reference out from under it will break that at the very least. We'd also be deferring our usual load folding to the peephole pass, which isn't quite as strong as SelectionDAG, I think.

If the load is in a loop, we potentially unfold it in MachineLICM and hoist it out of the loop. So maybe what we really want is a later constant-pool-shrinking pass that runs after MachineLICM. We have a similar issue with broadcasts from the constant pool, don't we? Lowering of build_vector favors forming broadcasts of constants without knowing whether we can fold.
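As a rough illustration of the broadcast case (a hypothetical reduced example, not from the patch under review): a build_vector of identical constants is lowered to a broadcast from a small constant pool entry before we know whether the load could instead be folded into the using instruction.

```llvm
; Hypothetical example: a splat constant operand.
define <4 x float> @splat_const(<4 x float> %x) {
  %r = fadd <4 x float> %x, <float 1.0, float 1.0, float 1.0, float 1.0>
  ret <4 x float> %r
}
; With AVX this typically lowers to a broadcast from a 4-byte
; constant pool entry:
;   vbroadcastss LCPI0_0(%rip), %xmm1
;   vaddps       %xmm1, %xmm0, %xmm0
; whereas folding the load directly into vaddps would need a full
; 16-byte constant pool entry instead.
```

The trade-off is made at lowering time, which is the same "don't yet know if we can fold" problem described above.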


CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D80131/new/

https://reviews.llvm.org/D80131





More information about the llvm-commits mailing list