<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Sun, Sep 21, 2014 at 6:57 PM, Jiangning Liu <span dir="ltr"><<a href="mailto:liujiangning1@gmail.com" target="_blank">liujiangning1@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">HI Dan,<div class="gmail_extra"><br><div class="gmail_quote"><span class=""><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="gmail_quote"><span><div>To be a little more forceful here:<br></div></span><div><br></div><div>The base invariant of all of these kinds of algorithms (value range analysis, constant propagation, etc) is that values only go in one direction on the lattice.</div></div></blockquote><div><br></div></span><div>I don't really see the wrong lowering direction.</div></div></div></div></blockquote><div><br></div><div><br></div><div>You are taking values that got evaluated, went "down" the lattice to overdefined, and reevaluating them, and raising them back *up* the lattice to "not overdefined".</div><div>That is definitely going in the wrong direction on the lattice.</div><div><br>What am I missing?<br></div><div>Besides the compile time problems, in such situations, you can also make the algorithm no longer fixpoint, ever.</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><span class=""><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="gmail_quote"><div>If you have found a case where this is not true, either</div><div>1. The implementation is buggy/broken</div><div>2. The algorithm isn't powerful enough to handle what you really want to happen, and you should change algorithms :)</div></div></blockquote><div><br></div></span><div>I personally think this algorithm itself is powerful. But I'm not sure what would happen if we don't bail out of the lowering early before we could lower the value completely as possible as we could at compile time. I was told the algorithm was initially designed to save compile time. I personally think it might true in terms of the number of values to be checked, but I don't really have data to justify it. At least for the case I'm fixing with the new patch, the large number of basic block is really a problem.</div><div> </div><div>Thanks,</div><div>-Jiangning</div><span class=""><div><br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="gmail_quote"><div><br></div><div>One of these is the real problem.</div><div><br></div><div>There is no case where the right solution should involved reevaluating values and moving them in the wrong direction on the lattice.</div><div>Besides hiding whatever the real problem is, it also changes the time bounds of the algorithm.</div></div>
>> If you have found a case where this is not true, either
>> 1. The implementation is buggy/broken
>> 2. The algorithm isn't powerful enough to handle what you really want
>>    to happen, and you should change algorithms :)
>
> I personally think the algorithm itself is powerful, but I'm not sure
> what would happen if we didn't bail out of the lowering early and
> instead lowered each value as completely as possible at compile time. I
> was told the algorithm was originally designed to save compile time. I
> suspect that is true in terms of the number of values to be checked, but
> I don't have data to justify it. At least for the case I'm fixing with
> the new patch, the large number of basic blocks is a real problem.
>
> Thanks,
> -Jiangning
>
>> One of these is the real problem.
>>
>> There is no case where the right solution should involve reevaluating
>> values and moving them in the wrong direction on the lattice. Besides
>> hiding whatever the real problem is, it also changes the time bounds of
>> the algorithm.