[cfe-commits] [PATCH] cindex.py optimization

Francisco Lopes da Silva oblita at gmail.com
Sat Aug 18 17:52:58 PDT 2012


Hi Tobias, here is the data with and without the string-comparison avoidance:

WITHOUT:

libclang code completion
========================
File: /Users/francisco/Desktop/sample/simple_bimap.cpp
Line: 56, Column: 10

    std::

libclang code completion -                    Get TU: 0.001s (  0.1% )
libclang code completion -             Code Complete: 0.252s ( 39.2% )
libclang code completion -    Count # Results (1740): 0.002s (  0.2% )
libclang code completion -                    Filter: 0.000s (  0.0% )
libclang code completion -                      Sort: 0.007s (  1.1% )
libclang code completion -                    Format: 0.293s ( 45.7% )
libclang code completion -       Load into vimscript: 0.023s (  3.6% )
libclang code completion -      vimscript + snippets: 0.065s ( 10.1% )

Overall: 0.642 s
========================

clang_complete: completion time (library) 0.642779

WITH:

libclang code completion
========================
File: /Users/francisco/Desktop/sample/simple_bimap.cpp
Line: 55, Column: 10

    std::

libclang code completion -                    Get TU: 0.001s (  0.2%)
libclang code completion -             Code Complete: 0.235s ( 49.2%)
libclang code completion -    Count # Results (1740): 0.001s (  0.2%)
libclang code completion -                    Filter: 0.000s (  0.0%)
libclang code completion -                      Sort: 0.007s (  1.5%)
libclang code completion -                    Format: 0.145s ( 30.3%)
libclang code completion -       Load into vimscript: 0.023s (  4.9%)
libclang code completion -      vimscript + snippets: 0.066s ( 13.8%)

Overall: 0.478 s
========================

clang_complete: completion time (library) 0.479182


The ratio between the two Format times is 0.293/0.145 = 2.02. So nearly all of the benefit in the Format
phase comes from avoiding the string comparisons; the caching contributes almost nothing in this case because,
in clang_complete, the methods that cache their results are hardly ever called more than once.
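
To make the string-comparison point concrete, here is a minimal sketch of the difference (hypothetical names and numeric values, not the exact cindex.py internals): checking a chunk's kind through a name lookup does a dictionary access plus a string comparison on every call, while keeping the raw integer that libclang already returns reduces the check to a single integer comparison.

    # Hypothetical illustration -- names and numeric values are assumptions,
    # not the exact cindex.py internals.
    KIND_TYPED_TEXT = 1  # assumed numeric value of the TypedText chunk kind
    KIND_NAMES = {0: 'Optional', 1: 'TypedText', 2: 'Text'}  # abridged

    class Chunk(object):
        def __init__(self, kind_number):
            self.kind_number = kind_number  # raw int from the C interface

        # Slower check: dictionary lookup plus string comparison per call.
        def is_typed_text_by_name(self):
            return KIND_NAMES[self.kind_number] == 'TypedText'

        # Faster check: a single integer comparison.
        def is_typed_text(self):
            return self.kind_number == KIND_TYPED_TEXT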


Regards,
    Francisco.

On 18/08/2012, at 21:12, Tobias Grosser <tobias at grosser.es> wrote:

> On 08/18/2012 11:56 PM, Francisco Lopes da Silva wrote:
>> Hi, this patch contains optimizations for the Python cindex binding,
>> specifically for the CompletionChunk class. It tries to avoid calls to
>> the C interface by caching the results; it also improves the internal
>> checks by avoiding string comparisons and dictionary lookups.
> 
> Hi Francisco,
> 
> It is very impressive to see how much the caching of properties actually benefits clang_complete. Such a speedup is a very good reason to enable caching of properties.
> The way the caching is implemented is correct; however, I expect that we will want to do more and more caching in the future, so limiting the code bloat for caching seems important. I remember Gregory Szorc had a patch available that introduced a @CachedProperty decorator that allows us to cache properties by just adding an attribute: https://github.com/indygreg/clang/commit/a4b533cea8cfce5d211d8e0477dd12fd66b35f5d
> 
> What about using this opportunity to add CachedProperty to the current cindex.py and to use it to implement the caching that you propose for the CompletionChunks?
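
A minimal sketch of how such a cached-property descriptor could look (an assumed shape, not necessarily the implementation in the linked commit): on first access it calls the real getter, then stores the value on the instance so later accesses never reach the C interface.

    class CachedProperty(object):
        # Non-data descriptor: caches the first result on the instance.
        def __init__(self, wrapped):
            self.wrapped = wrapped

        def __get__(self, instance, instance_type=None):
            if instance is None:
                return self
            value = self.wrapped(instance)
            # Shadow the descriptor: later attribute lookups find the cached
            # value in the instance dict and never call __get__ again.
            setattr(instance, self.wrapped.__name__, value)
            return value

    # Hypothetical usage: the getter body (and its C call) runs only once.
    class CompletionChunk(object):
        @CachedProperty
        def spelling(self):
            return some_libclang_call(self)  # placeholder for the real C call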
> 
> Also, I did some experiments myself and I had the feeling a large part of the speedup you show came from caching, whereas avoiding string comparisons and dictionary lookups did not make such a big difference. Hence, I propose to first add the caching to cindex.py and then reevaluate the other changes on their own to see if the performance change they cause is worth the added complexity.
> 
> Cheers
> Tobi
> 
