[llvm-dev] Google’s TensorFlow team would like to contribute MLIR to the LLVM Foundation

Adve, Vikram Sadanand via llvm-dev llvm-dev at lists.llvm.org
Wed Sep 11 08:25:00 PDT 2019


FWIW, I think this could be a very positive step for LLVM and the community.  Most of the discussion has been about the details (documentation, in-tree vs. out-of-tree, etc.), but I think there are key bigger-picture reasons to do this:


  1.  Obviously, ML languages and frameworks are becoming widespread, and a number of teams are investing resources in compilers for them.  Having a successful LLVM project that provides good infrastructure for these compilers would be valuable.  Whether or not the TensorFlow compiler remains a Google project, having the core infrastructure available to the wider community should be a major benefit.
  2.  Related to #1: ML models, and even the core data types and approaches, are evolving rapidly, so there is a lot of research happening on the underlying system infrastructure, from hardware to compilers to languages.  If MLIR can become the infrastructure of choice for these research projects (as LLVM did for the scalar and vector compiler world 15 years ago), that would be a big win.
  3.  As Hal said, the LLVM infrastructure has not provided explicit support for high-level analyses and transformations, such as loop restructuring and operations on multidimensional arrays.  Having good infrastructure for these will make a number of languages and hardware targets easier to implement and target (see the sketch after this list).
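
For a concrete sense of what point #3 means, here is a minimal sketch of an elementwise add over 2-D arrays written in MLIR's affine dialect (the function name and shapes are invented for illustration, and the exact textual syntax varies across MLIR versions).  The loop nest and the multidimensional subscripts are first-class IR constructs here, which is exactly the kind of structure LLVM IR can only recover through analysis:

    // Hypothetical example: C = A + B over 64x64 buffers, expressed with
    // explicit affine loops and multidimensional memref accesses.
    func.func @elementwise_add(%A: memref<64x64xf32>, %B: memref<64x64xf32>,
                               %C: memref<64x64xf32>) {
      affine.for %i = 0 to 64 {
        affine.for %j = 0 to 64 {
          %a = affine.load %A[%i, %j] : memref<64x64xf32>
          %b = affine.load %B[%i, %j] : memref<64x64xf32>
          %s = arith.addf %a, %b : f32
          affine.store %s, %C[%i, %j] : memref<64x64xf32>
        }
      }
      return
    }

Transformations such as tiling, interchange, or fusion can be reasoned about directly on this form, before any lowering to LLVM IR.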

--Vikram Adve

+ Donald B. Gillies Professor of Computer Science, University of Illinois at Urbana-Champaign
+ Scheduling: Kimberly Baker – kabaker at illinois.edu
+ Skype: vikramsadve || Zoom: https://illinois.zoom.us/j/2173900467
+ Home page: http://vikram.cs.illinois.edu
+ Center for Digital Agriculture: https://digitalag.illinois.edu



Date: Mon, 9 Sep 2019 17:46:34 +0000
From: "Finkel, Hal J. via llvm-dev" <llvm-dev at lists.llvm.org>
To: Chris Lattner <clattner at google.com>, llvm-dev <llvm-dev at lists.llvm.org>
Cc: Reid Tatge <tatge at google.com>, Mehdi Amini <aminim at google.com>, Tatiana Shpeisman <shpeisman at google.com>
Subject: Re: [llvm-dev] Google’s TensorFlow team would like to contribute MLIR to the LLVM Foundation

Hi, Chris, et al.,

I support adding MLIR as an LLVM subproject. Here are my thoughts:

1. MLIR uses LLVM: LLVM IR is one of MLIR's dialects, MLIR is itself compiler infrastructure, and it fits as a natural part of our ecosystem (see the sketch after this list).

2. As a community, we have a lot of different LLVM frontends, many of which have their own IRs on which higher-level transformations are performed. We don't currently offer much infrastructure to support the development of these pre-LLVM transformations. MLIR provides a base on which many of these kinds of implementations can be constructed, and I believe that will add value to the overall ecosystem.

3. As a specific example of the above, the current development of the new Flang compiler depends on MLIR. Flang is becoming a subproject of LLVM, so MLIR should be part of LLVM as well.

4. The MLIR project has developed capabilities, such as the analysis of multidimensional loops, that can be moved into LLVM and used by both LLVM- and MLIR-level transformations. As we work to improve LLVM's loop-optimization capabilities, being able to leverage the continuing work on MLIR's loop infrastructure within LLVM as well will benefit many of us.

5. As a community, we have been moving toward increasing support for heterogeneous computing and accelerators (and given industry trends, I expect this to continue), and MLIR can facilitate that support in many cases (although I expect we'll see further enhancements in the core LLVM libraries as well).
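
As a minimal sketch of point 1, LLVM IR appears inside MLIR as just another dialect, so lowerings from higher-level dialects can stay within a single infrastructure until the final translation to LLVM IR proper (the function below is invented for illustration, and the exact textual syntax varies across MLIR versions):

    // Hypothetical function expressed in MLIR's 'llvm' dialect, which mirrors
    // LLVM IR operation-for-operation; this form is what gets translated to
    // LLVM IR for the existing optimization and code generation pipelines.
    llvm.func @square(%x: i32) -> i32 {
      %0 = llvm.mul %x, %x : i32
      llvm.return %0 : i32
    }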

That all having been said, I think it is going to be very important to develop documentation on how a frontend author looking to use LLVM backend technology, or a developer looking to implement different kinds of functionality, might reasonably choose whether to target or enhance MLIR components, LLVM components, or both. I expect that this kind of advice will evolve over time, but I'm sure we'll need it sooner rather than later.

Thanks again,

Hal

On 9/9/19 10:30 AM, Chris Lattner via llvm-dev wrote:
Hi all,

The TensorFlow team at Google has been leading the charge to build a new set of compiler infrastructure, known as the MLIR project (https://github.com/tensorflow/mlir).  The initial focus has been on machine learning infrastructure, high-performance accelerators, heterogeneous compute, and HPC-style computations.  That said, the implementation and design of this infrastructure are state of the art, are not specific to these applications, and are already being adopted, e.g., by the Flang compiler (https://llvm.org/devmtg/2019-10/talk-abstracts.html#tech19).  If you are interested in learning more about MLIR and its technical design, I’d encourage you to look at the MLIR Keynote and Tutorial from the last LLVM Developer Meeting (http://llvm.org/devmtg/2019-04/).

MLIR is already open source on GitHub (https://medium.com/tensorflow/mlir-a-new-intermediate-representation-and-compiler-framework-beba999ed18d), and includes a significant amount of code in two repositories.  “MLIR Core” is located in github/tensorflow/mlir (https://github.com/tensorflow/mlir) and includes an application-independent IR, the code generation infrastructure, common graph transformation infrastructure, declarative operation definition and rewrite infrastructure, polyhedral transformations, etc.  The primary TensorFlow repository at github/tensorflow/tensorflow (https://github.com/tensorflow/tensorflow/) contains TensorFlow-specific functionality built using the MLIR Core infrastructure.

In discussions with a large number of industry partners (https://blog.google/technology/ai/mlir-accelerating-ai-open-source-infrastructure/), we’ve achieved consensus that it would be best to build a shared ML compiler infrastructure under a common umbrella with well-known, neutral governance.  As such, we’d like to propose that MLIR Core join the non-profit LLVM Foundation as a new subproject!  We plan to follow the LLVM Developer Policy (http://llvm.org/docs/DeveloperPolicy.html), have been following an LLVM-style development process from the beginning, including all relevant coding and testing styles, and we build on core LLVM infrastructure pervasively.

We think that MLIR is a nice complement to existing LLVM functionality: it provides common infrastructure for higher-level optimization and transformation problems and dovetails naturally with LLVM IR optimizations and code generation.  Please let us know if you have any thoughts, questions, or concerns!

-Chris






--
Hal Finkel
Lead, Compiler Technology and Programming Languages
Leadership Computing Facility
Argonne National Laboratory

