[PATCH] D81515: [llvm] Release-mode ML InlineAdvisor

Mircea Trofin via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Tue Jun 9 20:25:19 PDT 2020


mtrofin added a comment.

In D81515#2083938 <https://reviews.llvm.org/D81515#2083938>, @efriedma wrote:

> Including the models in the LLVM tree is problematic.
>
> I'm not sure there's a formal policy on this, but generally part of being an open-source project is that the source is available in human-readable format.  With the exception of a few regression tests for binary parsers, the entire LLVM  tree is human-readable.  A model clearly doesn't count as human-readable.
>
> If it isn't practical to train the model as part of the LLVM build (because it would take too long), it might make sense to commit binary files.  There's some precedent for this in-tree: lowering for shuffles on some targets is based on a precomputed table, built using a utility that isn't run as part of the normal build process. But I would expect reproducible instructions for how to generate the files.


Indeed, training as part of the build would be impractical, but that still doesn't mean we need binary files.

I believe there are 2 concerns:

- binary files: I agree with the sentiment about binaries. We want to explore a way to offer the model in a text format (see the sketch after this list for one possible direction). That would require changes to the AOT compiler. We decided to start with what we had, believing that, since this part of the project is a build-time opt-in, it shouldn't cause much hindrance in the interim, until we develop a text format.

- how to train a model: the RFC gives a high-level description of how we trained the model, and, as outlined there, we intend to open-source a reference training tool as the next step.
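
To illustrate what a text format could look like, here is a minimal sketch (not what this patch does) assuming the released model is a TensorFlow SavedModel: its graph can be dumped as a text protobuf, which is human-readable and diffable. The model path and signature name below are hypothetical.

  # Sketch only: dump a SavedModel's graph as a text protobuf.
  # Assumes TensorFlow 2.x; the path and signature name are hypothetical.
  import tensorflow as tf
  from google.protobuf import text_format

  saved = tf.saved_model.load("path/to/inliner/model")   # hypothetical path
  concrete = saved.signatures["serving_default"]         # hypothetical signature

  # FuncGraph -> GraphDef -> human-readable .pbtxt
  graph_def = concrete.graph.as_graph_def()
  with open("inliner_model.pbtxt", "w") as f:
      f.write(text_format.MessageToString(graph_def))

Whether such a dump (or something leaner) is the right text representation is exactly the question we'd want to settle before changing the AOT compiler.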




Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D81515/new/

https://reviews.llvm.org/D81515




