[cfe-dev] libclang: Memory management

Alex L via cfe-dev cfe-dev at lists.llvm.org
Fri Mar 10 02:39:53 PST 2017


On 10 March 2017 at 10:21, Jusufadis Bakamovic via cfe-dev <
cfe-dev at lists.llvm.org> wrote:

> Reproducing the issue is fairly easy. If you like, I could provide a
> smallish demo which monitors the (system) memory consumption at various
> steps of execution?
>

Hi,

I am interested in running the demo and investigating the issue.
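
In case it is useful in the meantime, this is roughly what I would expect
such a demo to look like (just a sketch of my own, not your code; Linux-only
since it reads VmRSS from /proc/self/status, and the compile flags and the
file list taken from argv are placeholders):

// Rough sketch of a monitoring demo: parse the files given on the command
// line, then print the process RSS before parsing, after parsing, and after
// cleanup. Linux-only (reads /proc/self/status).
#include <clang-c/Index.h>
#include <fstream>
#include <iostream>
#include <string>
#include <vector>

// Returns the current resident set size in kB, or -1 if it cannot be read.
static long rss_kb() {
  std::ifstream status("/proc/self/status");
  std::string line;
  while (std::getline(status, line))
    if (line.rfind("VmRSS:", 0) == 0)
      return std::stol(line.substr(6)); // line looks like "VmRSS:   12345 kB"
  return -1;
}

int main(int argc, char **argv) {
  const char *cmd_args[] = {"-std=c++11"}; // placeholder compile flags
  std::vector<CXTranslationUnit> tunits;

  // 1. Create an index.
  CXIndex idx = clang_createIndex(/*excludeDeclarationsFromPCH=*/1,
                                  /*displayDiagnostics=*/1);
  std::cout << "before parsing: " << rss_kb() << " kB\n";

  // 2. Parse every file given on the command line and keep the TUs alive.
  for (int i = 1; i < argc; ++i) {
    CXTranslationUnit tu;
    if (clang_parseTranslationUnit2(
            idx, argv[i], cmd_args, 1, /*unsaved_files=*/nullptr, 0,
            CXTranslationUnit_DetailedPreprocessingRecord, &tu) ==
        CXError_Success)
      tunits.push_back(tu);
  }
  std::cout << "after parsing:  " << rss_kb() << " kB\n";

  // 3. Dispose everything and look at what stays resident.
  for (CXTranslationUnit tu : tunits)
    clang_disposeTranslationUnit(tu);
  clang_disposeIndex(idx);
  std::cout << "after cleanup:  " << rss_kb() << " kB\n";
  return 0;
}

Pointing it at a few hundred files should show how much of the resident
memory actually comes back after the cleanup step.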


>
> A better question is whether there is anything we can do to improve this. I
> believe this is a highly important aspect of the library.
>

If it's caused by data that we don't need but that's still hanging around,
then yes.
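
As a data point, it would also be interesting to see whether a reduced parse
behaves any differently, e.g. skipping function bodies instead of asking for
the detailed preprocessing record (again only a sketch, reusing the names
from your excerpt; whether this changes what stays resident after disposal
is exactly what needs checking):

// Same loop body as in the excerpt below, with a lower-footprint parse.
// idx, file, cmd_args, cmd_args_len and tunits are the names from that code.
CXTranslationUnit tu;
enum CXErrorCode err = clang_parseTranslationUnit2(
    idx, file.path().c_str(), cmd_args, cmd_args_len,
    /*unsaved_files=*/nullptr, /*num_unsaved_files=*/0,
    CXTranslationUnit_SkipFunctionBodies, &tu);
if (err == CXError_Success)
    tunits.push_back(tu);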

Alex


>
> On 9 March 2017 at 17:53, Manuel Klimek <klimek at google.com> wrote:
>
>> Benjamin had tried to come up with a repro at some point, too, iirc
>>
>> On Thu, Mar 9, 2017 at 4:48 PM Jusufadis Bakamovic via cfe-dev <
>> cfe-dev at lists.llvm.org> wrote:
>>
>>> Hi,
>>>
>>> I am a little bit puzzled about the memory management in libclang. What
>>> I observed when running it on a mid-sized code base is a very high
>>> (understandable) memory consumption, but it also seems it is not possible
>>> to reclaim all of that memory once we are finished with whatever we have
>>> been doing with libclang. For example, this code excerpt should explain
>>> what I mean:
>>>
>>> // 1. Create an index
>>> vector<CXTranslationUnit> tunits;
>>> CXIndex idx = clang_createIndex(1, 1);
>>>
>>> // 2. Parse each file found in the directory and store the corresponding
>>> // TU in the vector
>>> for (auto& file : directory) {
>>>     CXTranslationUnit tu;
>>>     if (clang_parseTranslationUnit2(idx, file.path().c_str(),
>>>             cmd_args, cmd_args_len, 0, 0,
>>>             CXTranslationUnit_DetailedPreprocessingRecord, &tu)
>>>             == CXError_Success)
>>>         tunits.push_back(tu);
>>> }
>>>
>>> // 3. Cleanup
>>> for (auto& tu : tunits) {
>>>     clang_disposeTranslationUnit(tu);
>>> }
>>> clang_disposeIndex(idx);
>>>
>>>
>>> If I run this code on the `cppcheck` code base (
>>> https://github.com/danmar/cppcheck), which is a mid-sized project
>>> (roughly 300 C/C++ files), I get the following memory consumption
>>> figures for that particular process (based on `htop` output):
>>>
>>> * app memory consumption after 2nd step: virt(5763M) / res(5533M) /
>>> shr(380M)
>>> * app memory consumption after 3rd step:  virt(4423M) / res(4288M) /
>>> shr(65188)
>>>
>>>
>>> So, as can be seen from the figures, even after the cleanup stage both
>>> the virtual and resident memory figures are still very high. It seems
>>> the only memory that has been reclaimed is the memory associated with
>>> the TUs. All the other parsing artifacts are, I can only guess, still
>>> held somewhere in memory that we cannot access or flush through the
>>> libclang API. I even ran the code under valgrind and no memory leaks
>>> were detected (only `still reachable` blocks).
>>>
>>> Either I am missing something here or this might cause memory issues
>>> for long-running, non-single-shot tools (e.g. an indexer).
>>>
>>> Can anyone comment on this issue?
>>>
>>> Thanks,
>>> Adi
>>>
>>>
>>> _______________________________________________
>>> cfe-dev mailing list
>>> cfe-dev at lists.llvm.org
>>> http://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-dev
>>>
>>
>
> _______________________________________________
> cfe-dev mailing list
> cfe-dev at lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-dev
>
>