[llvm-dev] [RFC] Proposal for TLX: Tensor LLVM eXtensions

Chris Lattner via llvm-dev llvm-dev at lists.llvm.org
Sat Nov 27 17:57:29 PST 2021


Thank you for the interesting proposal, Akash (et al.).  I have a few other questions:

Florian pointed out that this is a very large proposal introducing a bunch of new concepts, which makes it difficult to review.  My major concern is that it proposes a single tensor model for LLVM, something that is inappropriate for a wide variety of frameworks and doesn’t appear to be very general.  For example, it isn’t clear how to model the strided tensor model of PyTorch; the proposal doesn’t appear to support dynamic shapes or sparse tensors; and it isn’t clear (on a quick reading) what the op-extensibility story is.  Further, a number of design decisions inherent to this approach (e.g. putting the layout information on the ops, instead of in the types) make certain optimizations (e.g. layout transformations) more complicated.
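To make the strided case concrete, here is a rough pseudo-IR sketch (the descriptor intrinsic below is invented purely for illustration; it is not taken from the RFC):

  ; A PyTorch-style non-contiguous view - e.g. a 2x3 view into a 4x8
  ; buffer - has shape [2, 3] but a row stride of 8 elements.  A
  ; hypothetical descriptor that records only the shape:
  %td = call token @tensor.typeinfo.sketch(ptr %buf, i32 2, i32 3)
  ; ...has no way to say "advance 8 elements between rows", so such a
  ; view would have to be materialized with an explicit copy first.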

This isn’t to say that this is the _wrong_ design, merely that it is only one of many plausible and important designs.  Standardizing “one thing” in LLVM can have a chilling effect on innovation (particularly in such a rapidly evolving field), which is one of the reasons that MLIR favors an “open extensibility” approach.

In terms of detailed design, it isn’t clear to me that representing heap-allocated things like this as a token type will work out well.  There have been a variety of proposals over the years (including adding F90-style arrays as a first-class entity) that haven’t worked out, because of a wide variety of design assumptions baked into LLVM.  The token type <https://llvm.org/docs/LangRef.html#token-type> in particular is not composable with control flow, function calls, and other things, and ML models frequently have loops and other control flow in them - how do you plan to represent that?
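As a minimal sketch of the control-flow problem (assuming tensor values are represented as tokens, as in the proposal; the value names are just for illustration):

  ; LangRef forbids phi (and select) instructions of token type, so a
  ; tensor value updated on each loop iteration has no way to be carried
  ; into the loop header:
  loop:
    %acc = phi token [ %init, %entry ], [ %next, %loop ]
    ; ^ invalid IR: the verifier rejects a phi of token type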

In your convolution operation, it doesn’t look like you’re handling the various edge conditions (replicating, mirroring, etc.) common in ML frameworks.  How do you plan to handle those?  Similarly, how do you handle quantization?
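For instance, with a border of width one on a 1-D signal [a, b, c, d], the common edge modes disagree on the padded values:

  zero:      0 a b c d 0
  replicate: a a b c d d
  reflect:   b a b c d c

Each of these shows up in real models, and they change which elements a boundary convolution reads.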

In the motivation section, you point out that “Crucially, however, MLIR does not have a low-level code generation framework that is retargetable to diverse hardware: it relies on LLVM for this purpose.”  I happen to agree with you, but the lack of this in MLIR isn’t evidence that LLVM IR is the natural place to put matrix lowering support.  Why do you think LLVM IR is a better place for this than a higher-level IR?  Whether it is MLIR, XLA, or something else, there seems to be a very clear separation of concerns here, and (as you point out) LLVM is already being used successfully as the backend for a wide variety of tensor compilers.

Finally, I’m also a bit concerned because the IR extensions are not really the meat of this proposal - it is effectively proposing something akin to the entire LLVM “CodeGen” framework, but for tensors.  The IR abstractions and the framework need to be co-designed, and it isn’t clear how general or powerful the framework will turn out to be.  We’ve seen a *LOT* of ML compiler frameworks (including, notably, Glow, XLA, and TVM) that are successful at handling important subsets of the ML inference space, but very few have scaled up to solving the problem in its full generality.

-Chris


> On Nov 15, 2021, at 10:18 AM, Kothari, Akash via llvm-dev <llvm-dev at lists.llvm.org> wrote:
> 
> For those who may have been having trouble viewing the RFC in plain text format, we have our proposal in a Google doc: https://docs.google.com/document/d/1IW6VIJ4lMYbGRTOle7S5QXP7Sb5UlucZ3gf-L-4Ccfs/edit?usp=sharing. It would be great if y’all could comment in the google doc or respond via email.
> 
> Thanks,
> Akash Kothari
> 
>> On Nov 12, 2021, at 1:28 PM, Kothari, Akash <akashk4 at illinois.edu> wrote:
>> 
>> **** Proposal for TLX: Tensor LLVM eXtensions
>> ===================================================================================
>>  
>> Authors: Akash Kothari (UIUC), Abdul Rafae Noor (UIUC), Dounia Khaldi (Intel),
>>     Vikram Adve (UIUC), Yuanke Luo(Intel), Sudipta Sengupta (Amazon AWS),
>>     Milind Girkar (Intel), Charith Mendis (UIUC)
>>  
>> ------------------------------------------------------------------------------------
>>  
>>  
> 
