[cfe-dev] FW: RFC: YAML as an intermediate format for clang::tooling::Replacement data on disk
Manuel Klimek
klimek at google.com
Wed Jul 31 13:35:30 PDT 2013
On Wed, Jul 31, 2013 at 10:24 PM, Chandler Carruth <chandlerc at google.com> wrote:
> On Wed, Jul 31, 2013 at 11:03 AM, Manuel Klimek <klimek at google.com> wrote:
>
>> On Wed, Jul 31, 2013 at 7:59 PM, Chandler Carruth <chandlerc at google.com> wrote:
>>
>>>
>>> On Wed, Jul 31, 2013 at 8:49 AM, Manuel Klimek <klimek at google.com> wrote:
>>>
>>>> As noted, I think the design generally makes sense - my main concern is
>>>> that the tool that applies the replacements should not be
>>>> cpp11-migrator-specific. This would be useful for basically *all*
>>>> refactorings.
>>>>
>>>> I think in the end we'll want 2 things:
>>>> - some functions in lib/Tooling/ that allow outputting those
>>>> replacement collections from clang-tools
>>>> - a tool to aggregate, deduplicate and apply all the changes
>>>>
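[For concreteness, a replacement collection serialized to disk per the subject line might look something like the YAML below. This is purely a hypothetical sketch; the field names and layout are illustrative, not a format anyone in the thread has committed to.]

```yaml
# Hypothetical on-disk form of a set of clang::tooling::Replacement
# objects produced by one tool invocation. All field names are
# illustrative, not a finalized schema.
MainSourceFile: a.cc
Replacements:
  - FilePath:        a.cc
    Offset:          12        # byte offset into the file
    Length:          3         # number of bytes to replace
    ReplacementText: auto
  - FilePath:        a.cc
    Offset:          40
    Length:          0         # length 0 means a pure insertion
    ReplacementText: "override "
```

A per-TU file like this is what the aggregation tool would collect, deduplicate, and apply across all invocations.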
>>>
>>> Just to clarify an important design constraint from my perspective:
>>>
>>> It shouldn't be just any tool to aggregate, deduplicate, and apply all
>>> the changes. It should specifically be the *same* code path that a tool
>>> uses when it runs over all TUs in-process. This is to me *really* important
>>> to ensure we get consistent behavior across these two possible workflows:
>>>
>>> 1) Run tool X over code in a.cc, b.cc, and c.cc in a single invocation.
>>> Internally hands rewrites to a library in Clang that de-dups and applies
>>> the edits.
>>>
>>> 2) Run tool X over code in a.cc, b.cc, and c.cc; one invocation per
>>> file. Each run writes out its edits in some form. Run tool Y which reads
>>> these edits and hands them to a library in Clang that de-dups and applies
>>> the edits.
>>>
>>>
>>> So, I essentially think that it *can't* be a separate system from Clang
>>> itself, it is intrinsically tied or it will yield different behavior (and
>>> endless confusion).
>>>
>>
>> I agree in principle, but I think the common core should live in
>> lib/Tooling (and already does), and it's really really small (as it simply
>> deduplicates the edits, perhaps reports conflicts, and then just uses the
>> Rewriter to apply the changes - those are around 10 LOC).
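[The core Manuel describes can be sketched in roughly the following shape. Note this is a self-contained illustration, not the actual lib/Tooling API: `Replacement` and `applyReplacements` here are stand-ins, and the real code applies edits through the `Rewriter` rather than plain string surgery.]

```cpp
#include <cassert>
#include <set>
#include <string>
#include <tuple>
#include <vector>

// Illustrative stand-in for a clang::tooling::Replacement: an edit is
// identified by file path, byte offset, length, and replacement text.
// (NOT the real clang API, just a self-contained sketch.)
struct Replacement {
  std::string File;
  unsigned Offset;
  unsigned Length;
  std::string Text;
  bool operator<(const Replacement &O) const {
    return std::tie(File, Offset, Length, Text) <
           std::tie(O.File, O.Offset, O.Length, O.Text);
  }
};

// Deduplicate identical edits via a std::set, then apply them
// back-to-front so earlier offsets stay valid while later text changes.
// Assumes all edits refer to `Code` and do not overlap.
std::string applyReplacements(std::string Code,
                              const std::vector<Replacement> &Edits) {
  std::set<Replacement> Unique(Edits.begin(), Edits.end());
  for (auto It = Unique.rbegin(); It != Unique.rend(); ++It)
    Code.replace(It->Offset, It->Length, It->Text);
  return Code;
}
```

The point of keeping this tiny core in one place is exactly the constraint above: both the in-process and the on-disk workflow funnel through the same dedup-and-apply path.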
>>
>> Everything else in the extra tool is about reading the changes from disk,
>>
>
> This makes sense to be in the helper tool...
>
>
>> and using multi-threading to apply them in a scalable way.
>>
>
> I think this should be in the core code. If you have 10k files that you
> want to edit and you can analyze them in-process fast enough (may become
> realistic w/ modules), we should also apply the edits in a scalable way.
>
I think the sequential part of that step should be in the core code, but
the parallelization will live in a (small) tool. This is also where
special-case logic for custom networked file systems or unusual version
control systems would go.
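[One way that parallel step could look, purely as an illustration of the split being discussed: edits to different files are independent, so the tool can group edits by file and rewrite each file on its own thread while the per-file apply logic stays sequential core code. The types and names below are hypothetical.]

```cpp
#include <cassert>
#include <map>
#include <string>
#include <thread>
#include <vector>

// A single edit within one file (illustrative, not the clang API).
struct Edit {
  unsigned Offset;
  unsigned Length;
  std::string Text;
};

// Rewrite each file on its own thread. Files are independent, so the
// only synchronization needed is joining the workers at the end.
// Assumes each file's edits are sorted by ascending offset and disjoint.
void applyPerFile(std::map<std::string, std::string> &Files,
                  const std::map<std::string, std::vector<Edit>> &EditsByFile) {
  std::vector<std::thread> Workers;
  for (const auto &Entry : EditsByFile) {
    std::string &Code = Files[Entry.first];
    // Copy this file's edits into the worker; apply back-to-front so
    // earlier offsets remain valid while later text is replaced.
    Workers.emplace_back([&Code, Edits = Entry.second] {
      for (auto It = Edits.rbegin(); It != Edits.rend(); ++It)
        Code.replace(It->Offset, It->Length, It->Text);
    });
  }
  for (auto &W : Workers)
    W.join();
}
```

The sequential inner loop is the piece that belongs in the core library; the threading, batching, and any file-system special cases are what the small driver tool would own.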
> Clearly, this can always be an incremental thing, I'm just trying to
> clarify the important constraint for me on the end state.
>
Agreed.