[llvm-dev] Debugify and Verify-each mode

Son Tuan VU via llvm-dev llvm-dev at lists.llvm.org
Fri Mar 16 15:54:53 PDT 2018


Mhm, I see now. Thanks for your explanation!

Son Tuan Vu

On Fri, Mar 16, 2018 at 10:58 PM, Vedant Kumar <vsk at apple.com> wrote:

>
> On Mar 16, 2018, at 2:30 PM, Son Tuan VU <sontuan.vu119 at gmail.com> wrote:
>
> Hi Vedant,
>
> Thank you for your reply. I think I can implement this debugify-each mode,
> but I guess this is reserved for your GSoC project?
>
>
> No, there's no reserved work. If you'd like to work on this I encourage
> you to do so. There's plenty of other work slated for the GSoC project.
> That said, let's sync up on the mailing lists to make sure work isn't
> being duplicated.
>
>
> However, if I understand correctly, we do not want to take the output of
> the first check-debugify (I mean the .ll file with potentially all the
> WARNINGs and ERRORs after the first pass) as input for the second debugify.
> What we need is to take the fresh output of clang -S -emit-llvm -Xclang
> -disable-O0-optnone and iteratively test each optimization. Am I right?
>
>
> The intermediate textual output is all irrelevant, and clang isn't in the
> picture here. Opt's regular mode of operation is to run pass1, then pass2,
> etc. In the debugify-each mode, this instead looks like: debugify, pass1,
> check-debugify, strip debug info, debugify, pass2, and so on.
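>
> A minimal sketch of that interleaving, assuming debugify and check-debugify
> are exposed as pass factories (createDebugifyModulePass and
> createCheckDebugifyModulePass are assumed names here, not an existing API)
> and that the synthetic debug info is stripped between iterations:
>
>   #include "llvm/IR/LegacyPassManager.h"
>   #include "llvm/Transforms/IPO.h" // createStripSymbolsPass
>   using namespace llvm;
>
>   class DebugifyEachPassManager : public legacy::PassManager {
>   public:
>     void add(Pass *P) override {
>       // Wrap every scheduled pass: attach synthetic debug info, run the
>       // pass, report what survived, then strip the synthetic debug info
>       // so the next pass starts from a clean slate.
>       legacy::PassManager::add(createDebugifyModulePass());      // assumed factory
>       legacy::PassManager::add(P);
>       legacy::PassManager::add(createCheckDebugifyModulePass()); // assumed factory
>       legacy::PassManager::add(createStripSymbolsPass(/*OnlyDebugInfo=*/true));
>     }
>   };
>
> opt could then instantiate a manager like this instead of the plain
> legacy::PassManager when the proposed debugify-each mode is enabled.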
>
> vedant
>
>
>
> Cheers,
>
>
> Son Tuan Vu
>
> On Thu, Mar 15, 2018 at 4:05 AM, Vedant Kumar <vsk at apple.com> wrote:
>
>> Hi Son Tuan,
>>
>> Thanks for taking a look at this :). Responses inline --
>>
>> On Mar 14, 2018, at 8:11 AM, Son Tuan VU <sontuan.vu119 at gmail.com> wrote:
>>
>> Hi Vedant, hi all,
>>
>> My goal is to measure the debug info loss caused by *each* optimization
>> pass in LLVM. I am trying to create a debugify-each mode in opt, inspired
>> by the verify-each mode, which is supposed to already work.
>>
>>
>> + Anastasis, who's interested in working on this as well. There's
>> definitely enough work to go around: once we can measure debug info loss
>> after each pass, we'll need a testing harness.
>>
>>
>> However, if I understand correctly, the verify-each mode (triggered by the
>> -verify-each option in opt) only works when we provide a pass list or a
>> pass pipeline.
>>
>>
>> Yes, you're correct.
>>
>>
>> Is this intended? I mean, why not let people verify each pass in the
>> -O{1,2,3} pipelines?
>>
>>
>> That's a good question! Like you, I assumed -verify-each "does the right
>> thing" when you pass -O1/-O2/etc. to opt.
>>
>> I'm not sure if the current behavior is intended (hopefully others will
>> chime in about this :). If no one does, please file a bug.
>>
>> I imagine this is pretty simple to fix. You can just define and use
>> custom pass managers within opt which inject debugify passes as needed:
>>
>> // In opt.cpp:
>>
>>
>> class DebugifyEachFunctionPassManager
>>     : public legacy::FunctionPassManager {
>> public:
>>   explicit DebugifyEachFunctionPassManager(Module *M)
>>       : FunctionPassManager(M) {}
>>
>>   void add(Pass *P) override {
>>     // Schedule a debugify pass before P (and a check-debugify pass after
>>     // it), e.g.:
>>     // FunctionPassManager::add(<debugify>);
>>     FunctionPassManager::add(P);
>>     // FunctionPassManager::add(<check-debugify>);
>>   }
>> };
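>>
>> A hypothetical way opt could select this manager (the DebugifyEach flag is
>> an assumed new cl::opt; FPasses and M are assumed to be opt.cpp's existing
>> function pass manager and module variables):
>>
>>   std::unique_ptr<legacy::FunctionPassManager> FPasses;
>>   if (DebugifyEach)
>>     FPasses.reset(new DebugifyEachFunctionPassManager(M.get()));
>>   else
>>     FPasses.reset(new legacy::FunctionPassManager(M.get()));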
>>
>>
>>
>> My second question is more about debugify: what would be the best way to
>> debugify each pass? Adding a debugify-each mode would make the output
>> unreadable!
>>
>>
>> The intermediate output is all irrelevant. I think it'd be best to simply
>> throw it away. What really matters are the debug info loss statistics: we
>> should capture these stats after each pass and dump them as JSON at the
>> end of the pipeline.
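>>
>> A rough sketch of how those statistics could be accumulated and dumped;
>> the DebugifyStats structure, the stats map, and the pass names used as
>> keys are assumptions, not an existing API:
>>
>>   #include "llvm/Support/raw_ostream.h"
>>   #include <map>
>>   #include <string>
>>
>>   struct DebugifyStats {
>>     unsigned MissingLocations = 0;
>>     unsigned MissingVariables = 0;
>>   };
>>
>>   // Filled in by each check-debugify run, keyed by the pass it followed.
>>   static std::map<std::string, DebugifyStats> DebugifyStatsMap;
>>
>>   // Called once, after the whole pipeline has run.
>>   static void dumpDebugifyStats(llvm::raw_ostream &OS) {
>>     OS << "{";
>>     const char *Sep = "";
>>     for (const auto &Entry : DebugifyStatsMap) {
>>       OS << Sep << "\"" << Entry.first << "\": {\"missing-locations\": "
>>          << Entry.second.MissingLocations << ", \"missing-variables\": "
>>          << Entry.second.MissingVariables << "}";
>>       Sep = ", ";
>>     }
>>     OS << "}\n";
>>   }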
>>
>> vedant
>>
>>
>> Maybe write a script that collects all optimization options (like
>> -mem2reg or -constmerge), then passes each one of them to opt with
>> -enable-debugify, so that we have one output file for each debugified pass?
>>
>> Thank you for your help,
>>
>> Son Tuan Vu
>>
>>
>>
>
>