<html><head></head><body style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space; ">Hi Matthieu,<div><br><div><div>On Nov 8, 2010, at 3:05 PM, Matthieu Monrocq wrote:</div><br class="Apple-interchange-newline"><blockquote type="cite">Hi Doug,<br><br>We had a discussion about a month ago, I think, about the spell-correction algorithm used in Clang, notably for auto-completion, based on the Levenshtein distance.<br><br>It turns out I just came upon this link today: <a href="http://nlp.stanford.edu/IR-book/html/htmledition/k-gram-indexes-for-spelling-correction-1.html">http://nlp.stanford.edu/IR-book/html/htmledition/k-gram-indexes-for-spelling-correction-1.html</a><br>
<br>The idea is to use bigrams (two-letter substrings of a word) to build an index of the form (bigram -> all words containing that bigram), then use this index to retrieve all the words with enough bigrams in common with the word you are trying to approximate.<br>
<br>This drastically reduces the set of identifiers on which to compute the Levenshtein distance, especially if we first trim those whose length is incompatible anyway. Furthermore, the results may be ordered by the number of common bigrams, so the Levenshtein distance can be computed on the most promising candidates first, making the maximum edit distance drop quickly.<br>
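For concreteness, here is a minimal Python sketch of the scheme described above (function and variable names are my own, and the candidate ordering is a simplification of what the prototype does): it builds the bigram index, trims candidates whose length is incompatible with the allowed edit distance, and orders the survivors by the number of bigrams they share with the query.

```python
from collections import defaultdict

def bigrams(word):
    """All two-letter substrings of a word (lowercased)."""
    w = word.lower()
    return {w[i:i + 2] for i in range(len(w) - 1)}

def build_index(words):
    """Map each bigram to the set of words containing it."""
    index = defaultdict(set)
    for word in words:
        for bg in bigrams(word):
            index[bg].add(word)
    return index

def candidates(index, query, max_dist=2):
    """Words sharing at least one bigram with `query`, with an early
    length filter, ordered by number of shared bigrams (best first)."""
    counts = defaultdict(int)
    for bg in bigrams(query):
        for word in index.get(bg, ()):
            # Trim words whose length is incompatible with the
            # allowed edit distance before any distance computation.
            if abs(len(word) - len(query)) <= max_dist:
                counts[word] += 1
    return sorted(counts, key=counts.get, reverse=True)
```

The expensive Levenshtein computation then only runs over the front of this candidate list, stopping early once the best distance found so far can no longer be beaten.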
<br>I have implemented a quick algorithm in Python (~70 lines, few comments though) to test it out, and I find the results quite promising.<br><br>I have used the following corpus: <a href="http://norvig.com/big.txt">http://norvig.com/big.txt</a> (~6 MB), which can be found on Peter Norvig's page: <a href="http://norvig.com/spell-correct.html">http://norvig.com/spell-correct.html</a><br>
<br>For example, looking for the word "neaer":<br><br>Number of distinct Words from the corpus: 48911<br>Number of distinct Bigrams from the corpus: 1528<br>Number of results: [(1, 34), (2, 1133)]<br>Most promising results: (1, ['Kearney', 'NEAR', 'nearing', 'linear', 'learner', 'uneasy', 'nearest', 'neat', 'lineage', 'leaned', 'Nearer', 'Learned', 'Nearly', 'cleaner', 'cleaned', 'neatly', 'nearer', 'earned', 'neared', 'nearly', 'learned', 'nearby', 'Nearest', 'near', 'meanest', 'earnest', 'Near', 'beneath', 'gleaned', 'Beneath', 'kneaded', 'weaned', 'Freneau', 'guineas'])<br>
<br>Or for the word "sppose":<br><br>Number of distinct Words from the corpus: 48911<br>Number of distinct Bigrams from the corpus: 1528<br>Number of results: [(1, 14), (2, 136), (3, 991)]<br>Most promising results: (1, ['disposal', 'opposed', 'opposing', 'Oppose', 'suppose', 'Dispose', 'dispose', 'apposed', 'disposed', 'oppose', 'supposed', 'opposite', 'supposes', 'Suppose'])<br>
<br><br>Do you think the method is worth investigating?<br></blockquote></div><br></div><div>Yes, I do. Typo correction performance is very important (and has historically been very poor). </div><div><br></div><div>Note that there are also more efficient algorithms for computing the Levenshtein distance (e.g., O(d*min(m, n)) rather than O(m*n)). We should also consider those.</div><div><br></div><div><span class="Apple-tab-span" style="white-space:pre"> </span>- Doug</div></body></html>
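The O(d*min(m, n)) variant Doug alludes to is usually done by restricting the dynamic-programming table to a band of width 2*d+1 around the diagonal: no alignment with distance at most d can leave that band. A sketch of that standard technique (not Clang's actual implementation) in the same Python as the prototype:

```python
def banded_levenshtein(a, b, max_dist):
    """Levenshtein distance computed only within a band of width
    2*max_dist+1 around the diagonal, i.e. O(max_dist * min(m, n)) time.
    Returns the exact distance if it is <= max_dist, else max_dist + 1."""
    if len(a) > len(b):
        a, b = b, a  # make `a` the shorter string
    if len(b) - len(a) > max_dist:
        return max_dist + 1  # length difference alone exceeds the bound
    INF = max_dist + 1
    # First row: cells outside the band are effectively infinite.
    prev = [j if j <= max_dist else INF for j in range(len(b) + 1)]
    for i in range(1, len(a) + 1):
        lo = max(1, i - max_dist)
        hi = min(len(b), i + max_dist)
        curr = [INF] * (len(b) + 1)
        curr[0] = i if i <= max_dist else INF
        for j in range(lo, hi + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            curr[j] = min(prev[j] + 1,       # deletion
                          curr[j - 1] + 1,   # insertion
                          prev[j - 1] + cost)  # substitution / match
        prev = curr
    return min(prev[len(b)], INF)
```

Combined with the bigram index, this means each surviving candidate costs only a narrow band of cells rather than the full m*n table, and candidates that blow past the current best distance are rejected cheaply.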