[LLVMdev] Debug Info Slowing Things Down?!

Manman Ren manman.ren at gmail.com
Mon Nov 18 14:03:42 PST 2013


On Mon, Nov 18, 2013 at 11:55 AM, Eric Christopher <echristo at gmail.com> wrote:

> On Mon, Nov 18, 2013 at 11:06 AM, Manman Ren <manman.ren at gmail.com> wrote:
> >
> >
> >
> > On Mon, Nov 18, 2013 at 10:55 AM, Eric Christopher <echristo at gmail.com>
> > wrote:
> >>
> >> On Sun, Nov 17, 2013 at 6:35 PM, Manman Ren <manman.ren at gmail.com>
> >> wrote:
> >> > Hi Bill,
> >> >
> >> > Thanks for the test case. Most of the time is spent in the debug info
> >> > verifier. I fixed a bug in r194974, and now debug info verification
> >> > takes too long to run.
> >> >
> >> > The debug info verifier is part of the verifier, which is a function
> >> > pass. ToT currently tries to pull in all reachable debug info MDNodes
> >> > in each function, which is too time-consuming. The correct fix seems to
> >> > be separating debug info verification into its own module pass.
> >> >
> >> > I will disable the debug info verifier until a correct fix is found.
> >> >
> >>
> >> Most likely what's going on here is that we're verifying the same sets
> >> of debug info multiple times. A way to quickly speed it up would be to
> >> cache/memoize the nodes we've already visited.
> >
> >
> > Yes, we are verifying some debug info MDNodes multiple times. I am not
> > sure whether caching the MDNodes visited in one function makes sense.
> > For each function, we apply a series of passes, so optimizations run
> > between verifying one function and verifying the next. The contents of
> > MDNodes verified in one function can therefore change by the time we
> > verify another function.
> >
>
> I figured this was happening at bitcode load time. If we're verifying
> anywhere else, then we should still cache per-function what we've
> analyzed.
>

The verifier can be called several times; for LTO, it is called three times.
It is common to run the verifier before optimization and again on completion
of optimization.
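
As a rough illustration of that pattern (just a sketch against the 3.4-era
legacy PassManager, not the actual pass setup; runPipeline is a made-up
driver), the verifier is added both before and after the optimization passes:

  #include "llvm/Analysis/Verifier.h"
  #include "llvm/IR/Module.h"
  #include "llvm/PassManager.h"

  // Sketch: verify the input, run optimizations, verify the result.
  void runPipeline(llvm::Module &M) {
    llvm::PassManager PM;
    PM.add(llvm::createVerifierPass()); // verify before optimization
    // ... optimization passes would be added here ...
    PM.add(llvm::createVerifierPass()); // verify on completion
    PM.run(M);
  }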

In the current source code, the verifier is grouped with a few analysis
passes, so it should be okay to cache the results. But there is no guarantee
that the verifier will not be grouped together with optimization passes in a
single FPPassManager; in that case, caching may not make sense, since the
optimization passes can modify the MDNodes.
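
For reference, here is a minimal sketch of the kind of memoization Eric
suggests, assuming the cache is explicitly invalidated whenever metadata may
have been mutated. Only SmallPtrSet and MDNode are real LLVM types;
DebugInfoNodeCache and its methods are invented names:

  #include "llvm/ADT/SmallPtrSet.h"
  #include "llvm/IR/Metadata.h"

  // Sketch only: remember which debug info MDNodes were already verified so
  // each node is checked at most once per verifier run.
  class DebugInfoNodeCache {
    llvm::SmallPtrSet<const llvm::MDNode *, 32> Visited;

  public:
    // Returns true the first time a node is seen (it still needs to be
    // verified) and false on later queries.
    bool shouldVisit(const llvm::MDNode *N) {
      if (Visited.count(N))
        return false;
      Visited.insert(N);
      return true;
    }

    // Must be called whenever a pass may have modified metadata; otherwise
    // the cache can hide nodes whose contents changed since the last check.
    void invalidate() { Visited.clear(); }
  };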

Currently we perform module-level verification in doFinalization, which is
not ideal. For example, if we have two verifier passes, doFinalization for
both passes is run at the end, which is pure duplication: they end up
verifying the same module twice, at the end of the optimizations.

So that is one advantage of moving the debug info verifier into its own
module pass.
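
To make that direction concrete, a standalone module pass could reach every
debug info node exactly once through the !llvm.dbg.cu named metadata. The
following is only a sketch against the legacy pass manager: ModulePass,
NamedMDNode, and llvm.dbg.cu are real, while the pass name and the
checkCompileUnit helper are invented for illustration:

  #include "llvm/IR/Metadata.h"
  #include "llvm/IR/Module.h"
  #include "llvm/Pass.h"

  namespace {
  // Sketch only: verify debug info once per module instead of re-walking
  // all reachable MDNodes for every function.
  struct DebugInfoVerifierSketch : public llvm::ModulePass {
    static char ID;
    DebugInfoVerifierSketch() : llvm::ModulePass(ID) {}

    virtual bool runOnModule(llvm::Module &M) {
      // Every compile unit hangs off the !llvm.dbg.cu named metadata, so a
      // module pass can visit each debug info node exactly once from here.
      if (llvm::NamedMDNode *CUs = M.getNamedMetadata("llvm.dbg.cu"))
        for (unsigned i = 0, e = CUs->getNumOperands(); i != e; ++i)
          checkCompileUnit(CUs->getOperand(i));
      return false; // verification never modifies the module
    }

  private:
    void checkCompileUnit(llvm::MDNode *CU) {
      (void)CU; // placeholder: walk the subprogram/type/variable lists and
                // report malformed nodes here
    }
  };
  char DebugInfoVerifierSketch::ID = 0;
  } // end anonymous namespace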

Manman


> -eric
>
> > Manman
> >
> >>
> >>
> >> -eric
> >>
> >> > Thanks,
> >> > Manman
> >> >
> >> >
> >> >
> >> > On Sun, Nov 17, 2013 at 6:04 PM, Bill Wendling <isanbard at gmail.com>
> >> > wrote:
> >> >>
> >> >> I think it might be. I’m attaching a preprocessed file that can show
> >> >> the
> >> >> problem. Compile it with ToT.
> >> >>
> >> >> $ clang++ -g -fvisibility-inlines-hidden -fno-exceptions -fno-rtti
> >> >> -fno-common -Woverloaded-virtual -Wcast-qual -fno-strict-aliasing -m64
> >> >> -pedantic -Wno-long-long -Wall -W -Wno-unused-parameter -Wwrite-strings
> >> >> -Wcovered-switch-default -Wno-uninitialized -Wno-missing-field-initializers
> >> >> -c LLVMTidyModule.ii
> >> >>
> >> >> -bw
> >> >>
> >> >>
> >> >>
>

