Hi Tobias, here's the data without and with the string-comparison avoidance.

WITHOUT:

libclang code completion
========================
File: /Users/francisco/Desktop/sample/simple_bimap.cpp
Line: 56, Column: 10

 std::

 libclang code completion - Get TU: 0.001s ( 0.1%)
 libclang code completion - Code Complete: 0.252s (39.2%)
 libclang code completion - Count # Results (1740): 0.002s ( 0.2%)
 libclang code completion - Filter: 0.000s ( 0.0%)
 libclang code completion - Sort: 0.007s ( 1.1%)
 libclang code completion - Format: 0.293s (45.7%)
 libclang code completion - Load into vimscript: 0.023s ( 3.6%)
 libclang code completion - vimscript + snippets: 0.065s (10.1%)

 Overall: 0.642 s
========================

clang_complete: completion time (library) 0.642779

WITH:

libclang code completion
========================
File: /Users/francisco/Desktop/sample/simple_bimap.cpp
Line: 55, Column: 10

 std::

 libclang code completion - Get TU: 0.001s ( 0.2%)
 libclang code completion - Code Complete: 0.235s (49.2%)
 libclang code completion - Count # Results (1740): 0.001s ( 0.2%)
 libclang code completion - Filter: 0.000s ( 0.0%)
 libclang code completion - Sort: 0.007s ( 1.5%)
 libclang code completion - Format: 0.145s (30.3%)
 libclang code completion - Load into vimscript: 0.023s ( 4.9%)
 libclang code completion - vimscript + snippets: 0.066s (13.8%)

 Overall: 0.478 s
========================

clang_complete: completion time (library) 0.479182


The ratio between the two Format times is 0.293/0.145 = 2.02. So nearly all of
the benefit in the Format phase comes from avoiding the string comparisons; the
caching contributes almost nothing *in this case*, because in clang_complete
there are hardly ever repeated calls to the methods that cache their results.
A rough sketch of the kind of change I mean follows below.
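To make the comparison a bit more concrete, here is a small, self-contained
sketch of the style of change involved. It is only illustrative, not the actual
cindex.py or clang_complete code: the constants, the helper names, and the
stubbed libclang call are stand-ins I made up for the example.

# Illustrative sketch only -- contrasts the two styles of kind checks: going
# through a kind-name map and comparing names versus comparing the raw integer
# that libclang returns, fetched once and then reused.

# Raw CXCompletionChunkKind values (assumed from clang-c/Index.h).
CHUNK_KIND_TYPED_TEXT = 1
CHUNK_KIND_INFORMATIVE = 4

completion_chunk_kind_names = {
    CHUNK_KIND_TYPED_TEXT: 'TypedText',
    CHUNK_KIND_INFORMATIVE: 'Informative',
}

def get_chunk_kind_from_libclang(completion_string, key):
    # Stand-in for the ctypes call clang_getCompletionChunkKind(); in the real
    # binding this crosses into libclang every time it is invoked.
    return CHUNK_KIND_TYPED_TEXT

class CompletionChunk(object):
    def __init__(self, completion_string, key):
        self.cs = completion_string
        self.key = key
        self._kind_number = None  # raw kind, fetched lazily and reused

    @property
    def kind_number(self):
        if self._kind_number is None:
            self._kind_number = get_chunk_kind_from_libclang(self.cs, self.key)
        return self._kind_number

    # Slower style: dictionary lookup plus string comparison on every check.
    def is_kind_typed_text_slow(self):
        return completion_chunk_kind_names[self.kind_number] == 'TypedText'

    # Faster style: plain integer comparison against a constant.
    def is_kind_typed_text(self):
        return self.kind_number == CHUNK_KIND_TYPED_TEXT

During formatting, checks like these run for every chunk of each of the 1740
results, so the per-call overhead adds up quickly.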
Regards,
    Francisco.

On 18/08/2012, at 21:12, Tobias Grosser <tobias@grosser.es> wrote:

> On 08/18/2012 11:56 PM, Francisco Lopes da Silva wrote:
>> Hi, this patch contains optimizations for the python cindex binding,
>> specifically for the CompletionChunk class. It tries to avoid calls to
>> the C interface by caching the results; it also improves the internal
>> checks by avoiding string comparisons and dictionary lookups.
>
> Hi Francisco,
>
> it is very impressive to see how much the caching of properties actually
> benefits clang_complete. Such a speedup is a very good reason to enable
> caching of properties.
>
> The way the caching is implemented is correct; however, I expect that we
> will want to do more and more caching in the future. Hence, limiting the
> code bloat needed for caching seems to be important. I remember Gregory
> Szorc had a patch available that introduced a @CachedProperty decorator
> that allows us to cache properties by just adding an attribute:
> https://github.com/indygreg/clang/commit/a4b533cea8cfce5d211d8e0477dd12fd66b35f5d
>
> What about using this opportunity to add CachedProperty to the current
> cindex.py and to use it to implement the caching that you propose for
> the CompletionChunks?
>
> Also, I did some experiments myself, and I had the feeling that a large
> part of the speedup you show came from the caching, whereas avoiding
> string comparisons and dictionary lookups did not make such a big
> difference. Hence, I propose to first add the caching to cindex.py and
> then reevaluate the other changes on their own, to see whether the
> performance change they cause is worth the added complexity.
>
> Cheers,
> Tobi
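For reference, a minimal sketch of the sort of @CachedProperty decorator
discussed above. It shows only the general idea, not the code from the commit
Tobias links to; the example class and the stubbed libclang call are made up
for illustration.

class CachedProperty(object):
    """Non-data descriptor: computes the value on first access, then stores it
    on the instance under the same name, so later reads never call the wrapped
    function (or the C interface behind it) again."""

    def __init__(self, wrapped):
        self.wrapped = wrapped
        try:
            self.__doc__ = wrapped.__doc__
        except AttributeError:
            pass

    def __get__(self, instance, instance_type=None):
        if instance is None:
            return self
        value = self.wrapped(instance)
        # Shadow the descriptor with the computed value; the next attribute
        # lookup finds the plain instance attribute and skips __get__ entirely.
        setattr(instance, self.wrapped.__name__, value)
        return value


def get_chunk_text_from_libclang(completion_string, key):
    # Stand-in for a ctypes call such as clang_getCompletionChunkText().
    return "std::"


class CompletionChunk(object):
    def __init__(self, completion_string, key):
        self.cs = completion_string
        self.key = key

    @CachedProperty
    def spelling(self):
        # Crosses into libclang only on the first access per chunk.
        return get_chunk_text_from_libclang(self.cs, self.key)

Because it is a non-data descriptor, the cache costs one extra instance
attribute and nothing on subsequent accesses, which keeps the per-property
boilerplate low compared to hand-written caching in every getter.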