[PATCH] D122900: [x86] Disable optimization involved in an infinite loop inside DAG combiner.

pierre gousseau via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Fri Apr 1 09:56:09 PDT 2022


pgousseau added a comment.

In D122900#3422413 <https://reviews.llvm.org/D122900#3422413>, @lebedev.ri wrote:

> In D122900#3422365 <https://reviews.llvm.org/D122900#3422365>, @spatel wrote:
>
>> In D122900#3422344 <https://reviews.llvm.org/D122900#3422344>, @pgousseau wrote:
>>
>>> In D122900#3422266 <https://reviews.llvm.org/D122900#3422266>, @lebedev.ri wrote:
>>>
>>>> Do you know which commit introduced the problem, or which transforms conflict?
>>>> This looks like a pretty heavy hammer.
>>>
>>> Thank you Roman for reviewing!
>>> The regression appeared after https://github.com/llvm/llvm-project/commit/8a156d1c2795189389fadbf33702384f522f2506 (thank you @wristow for finding it).
>>> As @RKSimon found out, the original C++ test case no longer produces this IR, but the issue is still present.
>>> I am hopeful we can reintroduce the optimization (at a later codegen stage, maybe?) but I would like to do some performance analysis first to see whether it is still a win.
>>
>> That change might have exposed the bug from source/IR, but the backend loop has existed since at least the 10.0 release based on a godbolt check:
>> https://godbolt.org/z/Tfnaevboz
>
> Then I think just fixing it may be better than removing some chunks of code.

Yes, I agree fixing would be best; unfortunately, I can't see an obvious way to fix it inside the DAG combiner, nor can I tell whether one of the transformations involved is doing something wrong.
My understanding of the issue is that the optimizations disagree on which form of the node is canonical, causing a ping-pong effect: each combine rewrites the node back into the form the other just removed, so the worklist never reaches a fixed point. I thought we could eventually fix it by moving the optimization to the instruction selection stage; any other ideas?
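To make the ping-pong concrete, here is a minimal standalone C++ sketch (not actual DAGCombiner code; the node forms and combine rules are hypothetical) of two combines that each treat their own output as canonical, so a naive fixed-point worklist loop between them never terminates:

  #include <cstdio>

  // Hypothetical node forms: two equally legal encodings of the same value.
  // (These stand in for real SelectionDAG node patterns; names are made up.)
  enum class Form { ShiftThenMask, MaskThenShift };

  // Combine A canonicalizes toward shift-then-mask; combine B toward
  // mask-then-shift. Each fires whenever it sees the other's output.
  static bool combineA(Form &F) {
    if (F == Form::MaskThenShift) { F = Form::ShiftThenMask; return true; }
    return false;
  }

  static bool combineB(Form &F) {
    if (F == Form::ShiftThenMask) { F = Form::MaskThenShift; return true; }
    return false;
  }

  int main() {
    Form F = Form::MaskThenShift;
    // A fixed-point loop in the spirit of DAGCombiner's worklist: keep
    // iterating while any combine made a change. Because A and B undo each
    // other, 'Changed' is true every time; the loop would spin forever, so
    // it is capped here for demonstration.
    for (int Iter = 0; Iter < 6; ++Iter) {
      bool Changed = combineA(F);
      Changed |= combineB(F);
      std::printf("iter %d: changed=%d\n", Iter, (int)Changed);
      if (!Changed)
        break; // never reached: the two rules ping-pong indefinitely
    }
    return 0;
  }

Under that (assumed) model, the fix would be to make one of the two rules strictly canonicalizing, or to guard one behind a profitability check, so the pair cannot oscillate; the hard part in the real DAG combiner is deciding which of the conflicting transforms should yield.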


CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D122900/new/

https://reviews.llvm.org/D122900


