[llvm-dev] RFC: XRay in the LLVM Library
Eric Christopher via llvm-dev
llvm-dev at lists.llvm.org
Tue Jan 3 17:13:11 PST 2017
Sorry for coming into this thread late.
I can see a few uses for different formats, but I'm not quite convinced of
the usefulness of a universal exchange library. That said, if Dean really
wants to implement a way of converting between all of these things, I'm not
going to stop him. I'd probably suggest just dumping some formats and using
some sort of human-readable format for input as a way of testing, but
that's just me.
On Thu, Dec 1, 2016 at 2:06 PM David Blaikie via llvm-dev <
llvm-dev at lists.llvm.org> wrote:
> On Wed, Nov 30, 2016 at 3:26 AM Renato Golin <renato.golin at linaro.org> wrote:
> On 30 November 2016 at 05:08, Dean Michael Berris via llvm-dev
> <llvm-dev at lists.llvm.org> wrote:
> > - Is there a preference between the two options provided above?
> > - Any other alternatives we should consider?
> > - Which parts of which options do you prefer, and is there a synthesis
> of either of those options that appeals to you?
> Hi Dean,
> I haven't followed the XRay project that closely, but I have been
> around while file formats were being formed, and either of your two
> approaches (which are pretty standard) will fail in different ways. But
> that's ok, because the "fixes" work; they're just not great.
> Take LLVM IR: there were lots of changes, but we always aimed to have
> one canonical representation. Not just in the syntax of each
> instruction/construct, but in how complex behaviour is represented in
> the same series of instructions, so that all back-ends can identify
> and work with it. Of course, the second (semantic) level is less
> stringent than the first (syntactic), but we try to make it as
> strict as possible.
> This hasn't come for free. The two main costs were destructive
> semantics (for example, when we lower C++ classes into arrays and
> turn all the accesses into jumbled reads and writes, because IR readers
> don't need to understand the ABI of every target) and backwards
> incompatibility (for example, when we completely changed how exception
> handling is lowered, from special basic blocks to special constructs
> at the heads/tails of common basic blocks). That price was cheaper than
> the alternative, but it's still not free.
> Another approach I followed was SwissProt, a manually curated,
> machine-readable text file with protein information for cross
> referencing. Cutting to the chase: they introduced "line types"
> with strict formatting for the most common information, and one line
> type called "comment" where free text was allowed, for additional
> information. With time, adding a new line type became impossible, so
> all new fields ended up being added in the comment lines, with
> pseudo-strict formatting, which was (and probably still is) a nightmare
> for parsers and humans alike.
> Between the two, the LLVM IR policy for changes is orders of magnitude
> better. I suggest you follow that.
> I also suggest you keep a single canonical representation, and
> create tools to convert any other format to and from the canonical one.
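The single-canonical-form suggestion amounts to converters that all round-trip through one in-memory record type. A minimal sketch (the record fields and the text encoding below are purely illustrative, not XRay's actual schema):

```cpp
#include <cassert>
#include <string>

// Hypothetical trace record; field names are illustrative only.
struct TraceRecord {
  unsigned FuncId;
  unsigned long long Timestamp;
  bool IsEntry;
};

// Each non-canonical reader would produce the same in-memory records,
// and a single canonical writer would serialize them. Here the
// "canonical" form is one simple text line per record.
std::string toCanonical(const TraceRecord &R) {
  return (R.IsEntry ? std::string("enter ") : std::string("exit ")) +
         std::to_string(R.FuncId) + " @" + std::to_string(R.Timestamp);
}

TraceRecord fromCanonical(const std::string &Line) {
  TraceRecord R{};
  R.IsEntry = Line.rfind("enter ", 0) == 0;
  R.FuncId = std::stoul(Line.substr(Line.find(' ') + 1));
  R.Timestamp = std::stoull(Line.substr(Line.find('@') + 1));
  return R;
}
```

Any pair of formats can then interoperate through the canonical form, so adding an Nth format costs one reader/writer pair rather than N-1 converters.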
> Finally, I'd separate the design in two phases:
> 1. Experimental, where the canonical form changes constantly in light
> of new input and there are no backwards/forwards compatibility
> guarantees at all. This is where all of you get creative and try to
> sort out the problems in the best way possible.
> 2. Stable, when most of the problems have been solved, and you
> document a final, stable version of the representation. Every new input
> will have to be represented as a combination of existing constructs, so
> make them generic enough. If real change is needed, make sure you have
> a process that identifies versions and compatibility (for example, a
> version tag on every dump), and that the canonical tool knows about all
> of the known issues.
> This last point is important if you want to keep reading old files:
> succeed when they don't have a compatibility issue, warn when they do
> but it's irrelevant, and error when they do and reading would produce
> garbage. You can also write more efficient conversion tools.
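The version policy described here (read old files, warn where the difference is irrelevant, error where reading would produce garbage) can be sketched as a header check. The version fields and thresholds are illustrative, not the real XRay log header:

```cpp
#include <cassert>

// Illustrative version-tagged dump header; the actual XRay header differs.
struct DumpHeader {
  unsigned Major;
  unsigned Minor;
};

enum class Compat { Ok, WarnOldButReadable, ErrorIncompatible };

// Sketch of the policy: same-major files load cleanly, older majors load
// with a warning (a converter may be needed), and anything newer than
// the tool errors out rather than silently producing garbage.
Compat checkVersion(const DumpHeader &H, unsigned ToolMajor,
                    unsigned ToolMinor) {
  if (H.Major > ToolMajor || (H.Major == ToolMajor && H.Minor > ToolMinor))
    return Compat::ErrorIncompatible;  // file is newer than the tool
  if (H.Major < ToolMajor)
    return Compat::WarnOldButReadable; // old format, still understood
  return Compat::Ok;
}
```

The key property is that the check happens before any record is interpreted, so a mismatched file never reaches the parsing code.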
> From what I understand of XRay, you could in theory keep the data
> for years on a tape somewhere in the attic and want to read it later
> to compare against a current run, so being compatible is important; but
> having a canonical form that can be converted to and from other forms
> is more important, or the comparison tools will get really messy
> really quickly.
> Not sure I quite follow here - perhaps some misunderstanding.
> My mental model here is that the formats are semantically equivalent -
> with a common in-memory representation (like LLVM IR APIs). It
> doesn't/shouldn't complicate a comparison tool to support both LLVM IR and
> bitcode input (or some other hypothetical formats that are semantically
> equivalent that we could integrate into a common reading API). At least
> that's my mental model.
> Is there something different here?
> What I'm picturing is that we need an API for reading all these
> formats. Either we use that API only in the conversion tool, and users
> then have to run the conversion tool before running the tool they
> actually want; or we sink that API into a common place and have all
> tools use it to load inputs, making the user experience simpler (no
> extra conversion step/tool) without, as far as I can see, making the
> development experience any more complicated/messy/difficult.
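The "sink the API into a common place" option amounts to one load entry point that sniffs the format itself, so no tool needs a separate conversion step. A hypothetical sketch (the magic string, record layout, and function names are invented for illustration):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical record type; the eventual library interface may differ.
struct Record {
  unsigned FuncId;
};

// One entry point every tool calls; the format is detected from the
// input bytes rather than chosen by the user.
std::vector<Record> loadTrace(const std::string &Bytes) {
  std::vector<Record> Out;
  if (Bytes.rfind("RAWX", 0) == 0) {
    // "Binary" reader: one byte per record after the magic (illustrative).
    for (size_t I = 4; I < Bytes.size(); ++I)
      Out.push_back({static_cast<unsigned>(
          static_cast<unsigned char>(Bytes[I]))});
  } else {
    // "Text" reader: comma-separated function ids.
    size_t Pos = 0;
    while (Pos < Bytes.size()) {
      size_t Comma = Bytes.find(',', Pos);
      if (Comma == std::string::npos)
        Comma = Bytes.size();
      Out.push_back(
          {static_cast<unsigned>(std::stoul(Bytes.substr(Pos, Comma - Pos)))});
      Pos = Comma + 1;
    }
  }
  return Out;
}
```

Because both branches yield the same in-memory records, a comparison tool fed one trace of each format never notices the difference, which is the point David is making.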
> - Dave
> Hope that helps,
>  http://web.expasy.org/docs/swiss-prot_guideline.html