[cfe-commits] PATCH: Add support for C++ namespace-aware typo correcting

Kaelyn Uhrain rikka at google.com
Fri Jun 10 10:36:17 PDT 2011


On Thu, Jun 9, 2011 at 6:34 PM, Chandler Carruth <chandlerc at google.com> wrote:

> On Thu, Jun 9, 2011 at 6:18 PM, Kaelyn Uhrain <rikka at google.com> wrote:
>
>> Quick question about doing this: would it be more cost-effective to use
>> a std::multimap, iterating through the namespaces in ascending order of
>> NNS length and exiting the loop once NNS length + edit distance is worse
>> than the best, or to use an llvm::SmallVector instead, always looping
>> over all of the namespaces but skipping the lookup when (NNS length +
>> ED) is too long?
>
>
> Iterating over a std::map or std::multimap is rather slow. Why can't we
> just keep the SmallVector sorted by the NNS length?
>

I had a feeling that was the case. The issue I'm having is in sorting the
NNSes by length: they vary based on the context of the typo correction, and
the NNS, its length, and the DeclContext it points to all have to remain
associated. Right now the list of known namespaces is a simple SmallPtrSet,
and the NNS and its length are computed on demand--between when a symbol is
successfully looked up in a given DeclContext and when it is added with its
qualifier to the TypoCorrectionConsumer--and cached in a std::map for the
duration of the CorrectTypo call, for reuse with other identifiers in the
same DeclContext. I'm not sure how the cost of computing the
NestedNameSpecifiers for every namespace is affected by PCH files, but I'm
guessing it is less than what is saved by avoiding lookups in namespaces
stored in PCH files.
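
To make the tradeoff concrete, here is a rough sketch of the sorted
SmallVector approach (SpecifierInfo, findBestQuality, and
lookupAndGetEditDistance are made-up names for illustration, not what the
patch actually uses; DeclContext, NestedNameSpecifier, and IdentifierInfo
are the usual Clang types):

  #include "llvm/ADT/SmallVector.h"
  #include <algorithm>

  // Hypothetical record keeping a namespace's DeclContext, the
  // NestedNameSpecifier naming it from the typo's context, and the
  // length of that specifier when written out.
  struct SpecifierInfo {
    DeclContext *DC;
    NestedNameSpecifier *NNS;
    unsigned NNSLength;
  };

  static bool isShorterNNS(const SpecifierInfo &A,
                           const SpecifierInfo &B) {
    return A.NNSLength < B.NNSLength;
  }

  // Returns the smallest (NNS length + edit distance) over all
  // candidate namespaces, scanning them in ascending NNS length.
  static unsigned
  findBestQuality(llvm::SmallVector<SpecifierInfo, 16> &Specifiers,
                  const IdentifierInfo *Typo) {
    std::sort(Specifiers.begin(), Specifiers.end(), isShorterNNS);

    unsigned BestQuality = ~0U;
    for (unsigned I = 0, E = Specifiers.size(); I != E; ++I) {
      // Entries are sorted by NNS length and the best possible edit
      // distance is 0, so once NNSLength alone reaches BestQuality,
      // neither this entry nor any later one can win.
      if (Specifiers[I].NNSLength >= BestQuality)
        break;
      // Hypothetical helper: look Typo up in the DeclContext and
      // return the edit distance of the closest match found there.
      unsigned ED = lookupAndGetEditDistance(Specifiers[I].DC, Typo);
      BestQuality = std::min(BestQuality,
                             Specifiers[I].NNSLength + ED);
    }
    return BestQuality;
  }

The early break is what the multimap version would have bought; keeping the
vector sorted gets the same pruning without the per-node iteration cost.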
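
And for reference, the per-call caching described above looks roughly like
this (NNSInfo, buildNNSForContext, and nnsLength are placeholder names, not
the actual code):

  #include <map>
  #include <utility>

  // Hypothetical cached record: the NestedNameSpecifier naming a
  // DeclContext from the point of the typo, plus its length.
  struct NNSInfo {
    NestedNameSpecifier *NNS;
    unsigned Length;
  };

  // Lives only for the duration of one CorrectTypo call, so repeated
  // hits in the same DeclContext don't rebuild the specifier.
  std::map<DeclContext*, NNSInfo> NNSCache;

  NNSInfo &getNNSInfo(DeclContext *DC) {
    std::map<DeclContext*, NNSInfo>::iterator I = NNSCache.find(DC);
    if (I != NNSCache.end())
      return I->second;
    NNSInfo Info;
    Info.NNS = buildNNSForContext(DC); // hypothetical: build the NNS
    Info.Length = nnsLength(Info.NNS); // hypothetical: printed length
    return NNSCache.insert(std::make_pair(DC, Info)).first->second;
  }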