[llvm-dev] Google’s TensorFlow team would like to contribute MLIR to the LLVM Foundation

Finkel, Hal J. via llvm-dev llvm-dev at lists.llvm.org
Mon Sep 9 10:46:34 PDT 2019


Hi, Chris, et al.,

I support adding MLIR as an LLVM subproject. Here are my thoughts:

 1. MLIR uses LLVM, and LLVM IR is modeled as one of MLIR's dialects. MLIR is compiler infrastructure, and it fits as a natural part of our ecosystem (see the sketch after this list).

 2. As a community, we have a lot of different LLVM frontends, many of which have their own IRs on which higher-level transformations are performed. We currently offer little infrastructure to support the development of these pre-LLVM transformations. MLIR provides a base on which many of these kinds of implementations can be constructed, and I believe that will add value to the overall ecosystem.

 3. As a specific example of the above, the current development of the new Flang compiler depends on MLIR. Flang is becoming an LLVM subproject, and so MLIR should be part of LLVM as well.

 4. The MLIR project has developed capabilities, such as the analysis of multidimensional loops, that can be moved into LLVM and used by both LLVM- and MLIR-level transformations. As we work to improve LLVM's loop-optimization capabilities, making the ongoing improvements to MLIR's loop infrastructure available within LLVM as well will benefit many of us.

 5. As a community, we have been moving toward increasing support for heterogeneous computing and accelerators (and given industry trends, I expect this to continue), and MLIR can facilitate that support in many cases (although I expect we'll see further enhancements in the core LLVM libraries as well).
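
As a concrete illustration of (1), here is a minimal sketch of what "LLVM as an MLIR dialect" looks like in practice: a small C++ program that parses a module written in the 'llvm' dialect through MLIR's C++ API. The header paths and API names (getOrLoadDialect, parseSourceString) are assumed from recent MLIR sources rather than taken from this proposal, and may differ between releases:

  // Minimal sketch (assumed API): LLVM IR constructs appear as ordinary
  // MLIR operations in the 'llvm' dialect.
  #include "mlir/Dialect/LLVMIR/LLVMDialect.h"
  #include "mlir/IR/BuiltinOps.h"
  #include "mlir/IR/MLIRContext.h"
  #include "mlir/Parser/Parser.h"
  #include "llvm/Support/raw_ostream.h"

  int main() {
    mlir::MLIRContext context;
    context.getOrLoadDialect<mlir::LLVM::LLVMDialect>();

    // A function expressed in the LLVM dialect rather than as textual LLVM IR.
    const char *ir = R"mlir(
      llvm.func @add(%a: i32, %b: i32) -> i32 {
        %0 = llvm.add %a, %b : i32
        llvm.return %0 : i32
      }
    )mlir";

    mlir::OwningOpRef<mlir::ModuleOp> module =
        mlir::parseSourceString<mlir::ModuleOp>(ir, &context);
    if (!module)
      return 1;

    // Round-trip the module back to its textual form.
    module->print(llvm::outs());
    return 0;
  }

Building it requires linking against the corresponding MLIR and LLVM libraries.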

That all having been said, I think it's going to be very important to develop documentation on how a frontend author looking to use LLVM backend technology, or a developer looking to implement new kinds of functionality, might reasonably choose whether to target or enhance MLIR components, LLVM components, or both. I expect that this kind of advice will evolve over time, but I'm sure we'll need it sooner rather than later.

Thanks again,

Hal

On 9/9/19 10:30 AM, Chris Lattner via llvm-dev wrote:
Hi all,

The TensorFlow team at Google has been leading the charge to build a new set of compiler infrastructure, known as the MLIR project<https://github.com/tensorflow/mlir>.  The initial focus has been on machine learning infrastructure, high-performance accelerators, heterogeneous compute, and HPC-style computations.  That said, the implementation and design of this infrastructure is state of the art, is not specific to these applications, and is already being adopted by, for example, the Flang compiler<https://llvm.org/devmtg/2019-10/talk-abstracts.html#tech19>.  If you are interested in learning more about MLIR and its technical design, I’d encourage you to look at the MLIR Keynote and Tutorial from the last LLVM Developer Meeting<http://llvm.org/devmtg/2019-04/>.

MLIR is already open source on GitHub<https://medium.com/tensorflow/mlir-a-new-intermediate-representation-and-compiler-framework-beba999ed18d>, and includes a significant amount of code in two repositories.  “MLIR Core” is located in github/tensorflow/mlir<https://github.com/tensorflow/mlir>, and includes an application-independent IR, code generation infrastructure, common graph transformation infrastructure, declarative operation definition and rewrite infrastructure, polyhedral transformations, etc.  The primary TensorFlow repository at github/tensorflow/tensorflow<https://github.com/tensorflow/tensorflow/> contains TensorFlow-specific functionality built using the MLIR Core infrastructure.

In discussions with a large number of industry partners<https://blog.google/technology/ai/mlir-accelerating-ai-open-source-infrastructure/>, we’ve reached consensus that it would be best to build a shared ML compiler infrastructure under a common umbrella with well-known, neutral governance.  As such, we’d like to propose that MLIR Core join the non-profit LLVM Foundation as a new subproject!  We plan to follow the LLVM Developer Policy<http://llvm.org/docs/DeveloperPolicy.html>, and we have been following an LLVM-style development process from the beginning, including all relevant coding and testing styles, and we build on core LLVM infrastructure pervasively.

We think that MLIR is a nice complement to existing LLVM functionality: it provides common infrastructure for higher-level optimization and transformation problems, and it dovetails naturally with LLVM IR optimizations and code generation.  Please let us know if you have any thoughts, questions, or concerns!

-Chris






--
Hal Finkel
Lead, Compiler Technology and Programming Languages
Leadership Computing Facility
Argonne National Laboratory