[llvm-dev] [RFC] Placing profile name data, and coverage data, outside of object files

Vedant Kumar via llvm-dev llvm-dev at lists.llvm.org
Mon Jul 3 23:29:13 PDT 2017


> On Jun 30, 2017, at 8:16 PM, Xinliang David Li <davidxl at google.com> wrote:
> 
> 
> 
> On Fri, Jun 30, 2017 at 5:54 PM,  <vsk at apple.com <mailto:vsk at apple.com>> wrote:
> Problem
> -------
> 
> Instrumentation for PGO and frontend-based coverage places a large amount of
> data in object files, even though the majority of this data is not needed at
> run-time. All the data is needlessly duplicated while generating archives, and
> again while linking. PGO name data is written out into raw profiles by
> instrumented programs, slowing down the training and code coverage workflows.
> 
> Here are some numbers from a coverage + RA build of ToT clang:
> 
>   * Size of the build directory: 4.3 GB
> 
>   * Wall time needed to run "clang -help" with an SSD: 0.5 seconds
> 
>   * Size of the clang binary: 725.24 MB
> 
>   * Space wasted on duplicate name/coverage data (*.o + *.a): 923.49 MB
>     - Size contributed by __llvm_covmap sections: 1.02 GB
>       \_ Just within clang: 340.48 MB
> 
>     - Size contributed by __llvm_prf_names sections: 327.46 MB
>       \_ Just within clang: 106.76 MB
> 
>     => Space wasted within the clang binary: 447.24 MB
> 
> Running an instrumented clang binary triggers a 143MB raw profile write which
> is slow even with an SSD. This problem is particularly bad for frontend-based
> coverage because it generates a lot of extra name data; however, the situation
> can also be improved for PGO instrumentation.
> 
> 
> I want to point out that this is a problem specific to FE instrumentation with coverage turned on. Without coverage turned on, the name section size will be significantly smaller.

Yes, it's 15MB, or about 7 times smaller.

> With IR PGO, the name section size is even smaller. For instance, the IR-instrumented clang is 122.3MB, while the name section is only 2.3MB, so the space wasted is < 2%.

It's also worth pointing out that with r306561, name data is only written out by the runtime a constant number of times per program, and not on every program invocation. That's a big win.

The other side to this is that there are valid use cases the online profile merging mode doesn't support, e.g. generating separate sets of profiles by training on different inputs, or generating separate coverage reports for each test case. In these cases, having the option to not write out name data is a win.
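To make the per-test-case use case concrete, here is a minimal sketch. LLVM_PROFILE_FILE is the real environment variable the profile runtime honors; the instrumented binary is replaced by a stand-in shell function so the example is self-contained:

```shell
# Sketch of per-test-case profile collection. A real instrumented
# binary writes a raw profile to $LLVM_PROFILE_FILE on exit; the
# stand-in function below fakes that so the example runs anywhere.
run_instrumented_test() {
  echo "raw-profile-for-$1" > "$LLVM_PROFILE_FILE"
}

for t in test1 test2; do
  # One raw profile per test case, rather than one merged profile.
  LLVM_PROFILE_FILE="$t.profraw" run_instrumented_test "$t"
done

ls test1.profraw test2.profraw
```

Each test now has its own raw profile, from which a separate coverage report can be generated — exactly the workflow that online merging cannot express.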

> 
> Proposal
> --------
> 
> Place PGO name data and coverage data outside of object files. This would
> eliminate data duplication in *.a/*.o files, shrink binaries, shrink raw
> profiles, and speed up instrumented programs.
> 
> 
> 
> This sounds fine as long as the behavior (for names) is controlled by an option. For IR PGO, name size is *not* an issue, so keeping the name data in the binary and dumping it with the profile data has a usability advantage -- the profile data is self-contained.  Having coverage trigger the behavior difference is one possible choice.

The splitting behavior would be opt-in.

> As for coverage mapping data, splitting it out by default seems to be a more desirable behavior.

This can't be a default behavior. The user/build system/IDE would need to specify a metadata file/directory.

> The data embedded in the binary is not even used by the profile runtime (of course the runtime can choose to dump it so that llvm-cov does not need to look for the executable binary). The sole purpose of emitting it with the object file is to treat the executable/object as the mapping data container.  The usability of llvm-cov won't be reduced by the proposed change.
> 
>  
> In more detail:
> 
> 1. The frontends get a new `-fprofile-metadata-dir=<path>` option. This lets
> users specify where llvm will store profile metadata. If the metadata starts to
> take up too much space, there's just one directory to clean.
> 
> 
> Why not leverage the -fcoverage-mapping option -- i.e. add a new flavor of this option that accepts the metadata path: -fcoverage-mapping=<path>.   If the path is not specified, the data will be emitted with the object files.

This would limit the ability to store name data outside of a binary to FE-style coverage users. I recognize that large name sections aren't always as problematic when using IR/FE PGO, but this seems like an unnecessary restriction.

> 2. The frontends continue emitting PGO name data and coverage data in the same
> llvm::Module. So does LLVM's IR-based PGO implementation. No change here.
> 
> 3. If the InstrProf lowering pass sees that a metadata directory is available,
> it constructs a new module, copies the name/coverage data into it, hashes the
> module, and attempts to write that module to:
> 
>   <metadata-dir>/<module-hash>.bc   (the metadata module)
> 
> If this write operation fails, it scraps the new module: it keeps all the
> metadata in the original module, and there are no changes from the current
> process. I.e., with this proposal we preserve backwards compatibility.
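The fallback in step 3 can be illustrated with a self-contained simulation (this is not the real InstrProf pass; the directory and hash are placeholders):

```shell
# Simulation of the step-3 fallback: attempt to write the metadata
# module; if the write fails, keep all metadata in the original module,
# preserving today's behavior.
metadata_dir="/nonexistent-metadata-dir"   # deliberately unwritable
module_hash="abc123"                       # placeholder module hash

if echo "name+coverage data" 2>/dev/null > "$metadata_dir/$module_hash.bc"; then
  result="split: metadata externalized to $metadata_dir/$module_hash.bc"
else
  result="fallback: metadata stays in the original module"
fi
echo "$result"
```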
> 
> 
> Or simply emit the raw data as coverage notes files (.gcno).

After reading through the comments, I think it would be better to have the build system specify where the external data goes, and to have just one external file formed post-link.

> 4. Once the metadata module is written, the name/coverage data are entirely
> stripped out of the original module. They are replaced by a path to the
> metadata module:
> 
>   @__llvm_profiling_metadata = "<metadata-dir>/<module-hash>.bc",
>                                section "__llvm_prf_link"
> 
> This allows incremental builds to work properly, which is an important use case
> for code coverage users. When an object is rebuilt, it gets a fresh link to a
> fresh profiling metadata file. Although stale files can accumulate in the
> metadata directory, the stale files cannot ever be used.
> 
> Why is this needed for incremental builds? The file emitted is simply a build artifact, not an input to the build.

If llvm-cov just has a path to a directory, it can only load all of the data in the directory. But the aggregate data would not be self-consistent:

$ ninja foo
<Rename/delete/edit a file.>
$ ninja foo
<Multiple instances of coverage data appear.>
<Incorrect coverage reports generated.>

This isn't a problem if there is only one external metadata file (that the build system knows about).
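For what it's worth, the content-hash naming scheme from step 4 can be simulated in isolation (cksum stands in for the real module hash; this is not the actual implementation):

```shell
# Simulation of the <metadata-dir>/<module-hash>.bc scheme: a rebuild
# changes the module hash, so the link always names a fresh file; old
# files go stale on disk but are never referenced.
mkdir -p metadata-dir

hash=$(printf 'module-contents-v1' | cksum | cut -d' ' -f1)
touch "metadata-dir/$hash.bc"
link="metadata-dir/$hash.bc"   # stands in for @__llvm_profiling_metadata

# Edit a source file and rebuild: the module contents, and hash, change.
hash=$(printf 'module-contents-v2' | cksum | cut -d' ' -f1)
touch "metadata-dir/$hash.bc"
link="metadata-dir/$hash.bc"   # fresh link; the v1 file is now stale

ls metadata-dir                # both files accumulate on disk...
echo "$link"                   # ...but only the fresh one is linked
```

This is also why a tool that merely scans the directory can pick up stale data, while a tool that follows the link (or a single build-system-managed file) cannot.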

> In an IDE like Xcode, since there's just one target binary per scheme, it's
> possible to clean the metadata directory by removing the modules which aren't
> referenced by the target binary.
> 
> 5. The raw profile format is updated so that links to metadata files are written
> out in each profile. This makes it possible for all existing llvm-profdata and
> llvm-cov commands to work, seamlessly.
> 
> 
> It may not be as smooth as you hope: the directory containing the build artifact may not be accessible when the llvm-profdata tool is run. This is especially true for distributed build systems -- without telling the build system, the metadata won't even be copied back to the user.

This is another reason the build system should be aware of any metadata stored outside of the object file.
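For context, the workflow under discussion looks roughly like this. The -fprofile-metadata-dir flag is the addition proposed in this RFC; the other flags and tools are the existing interface (untested sketch; requires a clang/llvm toolchain):

```shell
# Build with instrumentation; the proposed flag redirects name/coverage
# metadata into ./metadata instead of the object files.
clang -fprofile-instr-generate -fcoverage-mapping \
      -fprofile-metadata-dir=./metadata foo.c -o foo

# Train, then index the raw profile and query coverage as today.
LLVM_PROFILE_FILE=foo.profraw ./foo
llvm-profdata merge -o foo.profdata foo.profraw
llvm-cov report ./foo -instr-profile=foo.profdata
```

The question above is whether the last two steps can stay seamless when ./metadata lives on a build machine the user never sees.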

> Since the user explicitly asks for emitting the data into a directory, it won't be a usability regression to require the user to specify the path to locate the metadata -- this is especially true for llvm-cov, which requires the user to specify the binary path anyway.
> 
> This requirement can simplify the implementation even more, as there seems to be no need to write any link data in the binary.


This is from a later email, but I'd like to follow up to this comment here:

> For coverage mapping data, another possible solution is to introduce a post-link tool that strips and compresses the coverage mapping data from the final binary and copies it to a different file. This step can be done manually by the user, or by the compiler driver when coverage mapping is on. The name data can be copied too, but that requires a slight llvm-profdata workflow change under a flag.

I've already alluded to this: this sounds like a simpler plan. Kinda like dsymutil + strip.
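For readers unfamiliar with the analogy: on Darwin, debug info is already handled this way — extracted into a separate bundle after the link, then stripped from the binary (real tools, shown for illustration; requires a macOS toolchain):

```shell
# dsymutil collects DWARF from the object files into a .dSYM bundle;
# strip then removes what the binary no longer needs to carry.
dsymutil a.out -o a.out.dSYM
strip a.out
```

The proposed post-link coverage tool would play the dsymutil role for __llvm_covmap (and optionally __llvm_prf_names) data.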

I'm currently traveling (sorry for the delayed responses), and will send out a revised proposal in a week or so.

thanks,
vedant

> The indexed profile format will *not* be updated: i.e., it will contain a full
> symbol table, and no links. This simplifies the coverage mapping reader, because
> a full symbol table is guaranteed to exist before any function records are
> parsed. It also reduces the amount of coding, and makes it easier to preserve
> backwards compatibility :).
> 
> 6. The raw profile reader will learn how to read links, open up the metadata
> modules it finds links to, and collect name data from those modules.
> 
> See above; I think it is better to explicitly pass the directory to the reader.
>  
> 
> 7. The coverage reader will learn how to read the __llvm_prf_link section, open
> up metadata modules, and lazily read coverage mapping data.
> 
> Alternate Solutions
> -------------------
> 
> 1. Instead of copying name data into an external metadata module, just copy the
> coverage mapping data.
> 
> I've actually prototyped this. This might be a good way to split up patches,
> although I don't see why we wouldn't want to tackle the name data problem
> eventually.
> 
> 
> 
> I think this can be a good first step.
> 
>  
> 2. Instead of emitting links to external metadata modules, modify llvm-cov and
> llvm-profdata so that they require a path to the metadata directory.
> 
> I second this.
>  
> 
> The issue with this is that it's way too easy to read stale metadata. It's also
> less user-friendly, which hurts adoption.
> 
> I don't think it will be less user-friendly. See reasons mentioned above.
> 
> 3. Use something other than llvm bitcode for the metadata module format.
> 
> Since we're mostly writing large binary blobs (compressed name data or
> pre-encoded source range mapping info), using bitcode shouldn't be too slow, and
> we're not likely to get better compression with a different format.
> 
> Bitcode is also convenient, and is nice for backwards compatibility.
> 
> Or a simpler wrapper format. Some data is probably needed to justify the decision.
> 
> David
>  
> 
> --------------------------------------------------------------------------------
> 
> If you've made it this far, thanks for taking a look! I'd appreciate any
> feedback.
> 
> vedant
