<div style="font-family: arial, helvetica, sans-serif"><font size="2">On Tue, Jun 19, 2012 at 2:53 PM, Evgeny Panasyuk <span dir="ltr"><<a href="mailto:evgeny.panasyuk@gmail.com" target="_blank">evgeny.panasyuk@gmail.com</a>></span> wrote:<br>
<div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div bgcolor="#FFFFFF" text="#000000"><div class="im">
19.06.2012 16:35, Manuel Klimek wrote:<br>
<blockquote type="cite">
<div style="font-family:arial,helvetica,sans-serif"><font size="2">
<div class="gmail_quote">
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div bgcolor="#FFFFFF" text="#000000">
<div>
<blockquote type="cite">
<div style="font-family:arial,helvetica,sans-serif"><font size="2">
<div class="gmail_quote">
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div bgcolor="#FFFFFF" text="#000000">
<div>
<blockquote type="cite">
<div style="font-family:arial,helvetica,sans-serif"><font size="2">
<div class="gmail_quote">
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div bgcolor="#FFFFFF" text="#000000"> Or maybe
an interactive
(maybe GUI) tool for
building predicates? I
remember that Chandler
mentioned something
similar at <a href="http://www.youtube.com/watch?v=yuIOGfcOH0k&t=27m56s" target="_blank">http://www.youtube.com/watch?v=yuIOGfcOH0k&t=27m56s</a>
<br>
</div>
</blockquote>
<div><br>
</div>
<div>Now we're talking about the
next step :) Yeah, having a GUI
would be *great* (and just so
we're clear: by GUI I mean a
web page :P)</div>
</div>
</font></div>
</blockquote>
<br>
</div>
And maybe an AST database optimized
for fast predicate matches :)<br>
</div>
</blockquote>
<div><br>
</div>
<div>For small projects this might be
interesting - for us the question is how
that would scale. We've actually found
parsing the C++ code to be an interesting
way to scale the AST, at the small price
of needing 3-4 seconds per TU (on average).
Denormalizing the AST itself produces a
huge amount of data, and denormalizing
even more seems like a non-starter.</div>
<div><br>
</div>
<div>Thoughts?</div>
</div>
</font></div>
</blockquote>
<br>
</div>
It depends on how much you would like to scale. And yes,
it also depends on project size.<br>
For instance, if the required scaling is one task per TU,
that is one case.<br>
</div>
</blockquote>
<div><br>
</div>
<div>Perhaps I need to expand on what I mean here:</div>
<div>Imagine you have on the order of 100MLOC.</div>
<div>
If you want an "AST database" for predicate
matches, the question is which indexes you
create. If you basically want to create an
extra index per "matcher", the denormalization
requires too much data. If you don't create
an index per matcher, how do you efficiently
evaluate the matchers?</div>
</div>
</font></div>
</blockquote>
<br></div>
I understood that part of the previous message.<br>
My point was that if you have 1k translation units and need to
scale up to 100k parallel tasks, then "task per TU" is obviously
not sufficient, and another approach is needed (maybe
pre-parsing and splitting the AST).<br></div></blockquote><div><br></div><div>I don't understand the point you're trying to make here yet :)</div><div>Are you talking about having the same (parametrized) task done 100k times in parallel (like: find all references to X, done by many engineers), or something else? How would a pre-parsed AST help? Perhaps you can expand on the "obvious" part ;)</div>
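<div><br></div><div>To make the "index per matcher" trade-off concrete, here is a minimal self-contained sketch. Everything in it is hypothetical - the Node struct, the AstIndex class, and the string keys are invented for illustration and are nothing like Clang's real matcher API (which composes combinators such as recordDecl(hasName(...))). The point is just that each registered matcher gets its own precomputed posting list, and those per-matcher lists are exactly the denormalized data whose size is in question:</div>

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>
#include <vector>

// Toy AST node - a stand-in for illustration only; real Clang AST
// nodes are far richer and are not persisted like this.
struct Node {
    int id;
    std::string kind;  // e.g. "RecordDecl", "CallExpr"
    std::string name;
};

using Predicate = std::function<bool(const Node&)>;

// Hypothetical "AST database" keeping one precomputed posting list
// (index) per registered matcher. Lookups become cheap list reads,
// at the cost of storing one node-id list per matcher.
class AstIndex {
public:
    void registerMatcher(const std::string& key, Predicate p) {
        // Backfill the new matcher's index over existing nodes.
        for (const auto& n : nodes_)
            if (p(n)) index_[key].push_back(n.id);
        predicates_.emplace(key, std::move(p));
    }
    void addNode(Node n) {
        // Keep every matcher's posting list up to date on insert.
        for (const auto& kv : predicates_)
            if (kv.second(n)) index_[kv.first].push_back(n.id);
        nodes_.push_back(std::move(n));
    }
    // Answer a matcher query without re-evaluating the predicate
    // against every node.
    std::vector<int> lookup(const std::string& key) const {
        auto it = index_.find(key);
        return it == index_.end() ? std::vector<int>{} : it->second;
    }
private:
    std::vector<Node> nodes_;
    std::map<std::string, Predicate> predicates_;
    std::map<std::string, std::vector<int>> index_;
};
```

<div>Even in this toy form the scaling concern is visible: the space cost is proportional to (number of matchers) &times; (matching nodes), on top of the AST itself.</div>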
<div><br></div><div>Cheers,</div><div>/Manuel</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div bgcolor="#FFFFFF" text="#000000">
<br>
Best Regards,<br>
Evgeny<br>
</div>
</blockquote></div><br></font></div>