[cfe-dev] Matrix Support in Clang

Florian Hahn via cfe-dev cfe-dev at lists.llvm.org
Thu Jan 16 11:48:12 PST 2020


Bump.

I’d like to share a bit more context/motivation for the proposal. We are still looking for feedback and would appreciate any comments or concerns on the overall proposal or any of the details!

I’ve uploaded WIP patches that add the various proposed builtins and linked them to the initial commit: https://reviews.llvm.org/D72281

Besides that, I’ve prepared two examples to illustrate the use of the builtins. First, I’ve put up a patch for the test-suite that adds a set of tests checking that the matrix builtins match the spec and a naive loop-based implementation: https://reviews.llvm.org/D72770

Second, I ran a few benchmarks comparing the performance of the matrix builtins and Eigen for operations on small matrixes (ranging from 3x3 to 16x16). The benchmarks compare the performance of a single matrix multiply, a matrix multiply-add, and a larger matrix expression. I’ve shared the numbers below. For most sizes smaller than 16x16, the matrix builtins comfortably beat Eigen (between 1.5x and 3x speedups).

Currently, Eigen still outperforms the matrix builtins in the following cases:

	• 3x3 matrixes (we are aware of the issue there and have a good idea of how to improve those cases without much effort)
	• Larger matrixes (roughly 15x15+)

The regression on larger matrixes is not surprising at the moment, as we have not implemented any sort of tiling for the matrix builtins yet. This is certainly something we plan to implement in the future, extending the number of cases the matrix builtins can handle well.
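To make the idea concrete, the sketch below contrasts a naive column-major multiply with a blocked (tiled) version. The function names and the tile size are made up for illustration and are not part of the proposal; the point is only that tiling restructures the loops so a small working set gets reused, which is what we would like the lowering to do for larger matrixes:

```cpp
#include <array>
#include <cassert>

// Naive column-major NxN multiply vs. a tiled version. The tile size TS
// is an illustrative value; a real lowering would pick it based on the
// target's register file and cache sizes.
constexpr int N = 16, TS = 4;
using Mat = std::array<double, N * N>; // column-major: elt(r,c) = m[c*N + r]

Mat naiveMul(const Mat &A, const Mat &B) {
  Mat C{};
  for (int c = 0; c < N; ++c)
    for (int r = 0; r < N; ++r)
      for (int k = 0; k < N; ++k)
        C[c * N + r] += A[k * N + r] * B[c * N + k];
  return C;
}

Mat tiledMul(const Mat &A, const Mat &B) {
  Mat C{};
  // Process TS x TS blocks so each block of A, B and C is reused while hot.
  for (int cc = 0; cc < N; cc += TS)
    for (int kk = 0; kk < N; kk += TS)
      for (int rr = 0; rr < N; rr += TS)
        for (int c = cc; c < cc + TS; ++c)
          for (int k = kk; k < kk + TS; ++k)
            for (int r = rr; r < rr + TS; ++r)
              C[c * N + r] += A[k * N + r] * B[c * N + k];
  return C;
}
```

Both versions accumulate each element in the same k-order, so they produce bit-identical results.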

To summarize, we think that Eigen (and similar libraries) could get a nice speedup for operations on small matrixes by using the builtins, while also likely simplifying their implementation by off-loading vector code generation to the compiler, rather than using target-specific intrinsics.

The benchmark code can be found here: https://gist.github.com/fhahn/03796abb21bfc242c083cf7333ac960c

The numbers were gathered with LLVM master as of today, with the Clang patches applied on top. The benchmarks were built with -O3.


Cheers,
Florian


Benchmark numbers (CPU time in ns shown). Values < 1 in the (Matrix builtins / Eigen) column mean the matrix builtin version outperforms Eigen.

X86 macOS

name                                              Matrix builtins      Eigen  (Matrix builtins / Eigen)
BM_GEMM_Mult_Square<float, 3, 3, 3, 3>                       4.61       6.65  0.693
BM_GEMM_Mult_Square<double, 3, 3, 3, 3>                      3.93       6.17  0.638
BM_GEMM_Mult_Square<float, 5, 5, 5, 5>                      13.06      25.44  0.513
BM_GEMM_Mult_Square<double, 5, 5, 5, 5>                     18.53      42.29  0.438
BM_GEMM_Mult_Square<float, 8, 8, 8, 8>                      34.26     108.32  0.316
BM_GEMM_Mult_Square<double, 8, 8, 8, 8>                     70.07     178.25  0.393
BM_GEMM_Mult_Square<float, 11, 11, 11, 11>                 151.37     306.53  0.494
BM_GEMM_Mult_Square<double, 11, 11, 11, 11>                215.47     422.73  0.51
BM_GEMM_Mult_Square<float, 16, 16, 16, 16>                 357.9      466.22  0.768
BM_GEMM_Mult_Square<double, 16, 16, 16, 16>                719.86     722.31  0.997
BM_GEMM_Mult_Add_Square<float, 3, 3, 3, 3>                   6.44       7.43  0.867
BM_GEMM_Mult_Add_Square<double, 3, 3, 3, 3>                  4.77       7.94  0.601
BM_GEMM_Mult_Add_Square<float, 5, 5, 5, 5>                  13.7       27.09  0.506
BM_GEMM_Mult_Add_Square<double, 5, 5, 5, 5>                 20.83      44.82  0.465
BM_GEMM_Mult_Add_Square<float, 8, 8, 8, 8>                  39.36     113.99  0.345
BM_GEMM_Mult_Add_Square<double, 8, 8, 8, 8>                 75.4      186.2   0.405
BM_GEMM_Mult_Add_Square<float, 11, 11, 11, 11>             169.78     310.76  0.546
BM_GEMM_Mult_Add_Square<double, 11, 11, 11, 11>            243.95     497.39  0.49
BM_GEMM_Mult_Add_Square<float, 16, 16, 16, 16>             430.09     487.65  0.882
BM_GEMM_Mult_Add_Square<double, 16, 16, 16, 16>            854.13     843.71  1.012
BM_GEMM_Expr_Square<float, 3, 3, 3, 3>                      10.89      18.13  0.6
BM_GEMM_Expr_Square<double, 3, 3, 3, 3>                      9.41      17.17  0.548
BM_GEMM_Expr_Square<float, 5, 5, 5, 5>                      28.07      54.42  0.516
BM_GEMM_Expr_Square<double, 5, 5, 5, 5>                     40.45      72.36  0.559
BM_GEMM_Expr_Square<float, 8, 8, 8, 8>                      79.37     222.89  0.356
BM_GEMM_Expr_Square<double, 8, 8, 8, 8>                    152.4      393.13  0.388
BM_GEMM_Expr_Square<float, 11, 11, 11, 11>                 299.12     659.46  0.454
BM_GEMM_Expr_Square<double, 11, 11, 11, 11>                444.06     862.66  0.515
BM_GEMM_Expr_Square<float, 16, 16, 16, 16>                 772.21     842.29  0.917
BM_GEMM_Expr_Square<double, 16, 16, 16, 16>               1580.45    1578.02  1.002




ARM64 Darwin
name                                              Matrix builtins      Eigen  (Matrix builtins / Eigen)
BM_GEMM_Mult_Square<float, 3, 3, 3, 3>                       6.29       6     1.048
BM_GEMM_Mult_Square<double, 3, 3, 3, 3>                      5.14       4.87  1.056
BM_GEMM_Mult_Square<float, 5, 5, 5, 5>                      14.8       37.46  0.395
BM_GEMM_Mult_Square<double, 5, 5, 5, 5>                     21         65.01  0.323
BM_GEMM_Mult_Square<float, 8, 8, 8, 8>                      39.56      88.73  0.446
BM_GEMM_Mult_Square<double, 8, 8, 8, 8>                     84.58     156.45  0.541
BM_GEMM_Mult_Square<float, 11, 11, 11, 11>                 184.59     298.33  0.619
BM_GEMM_Mult_Square<double, 11, 11, 11, 11>                270.03     399.78  0.675
BM_GEMM_Mult_Square<float, 16, 16, 16, 16>                 430.07     345.05  1.246
BM_GEMM_Mult_Square<double, 16, 16, 16, 16>                891.57     608.66  1.465
BM_GEMM_Mult_Add_Square<float, 3, 3, 3, 3>                   8.87       6.77  1.31
BM_GEMM_Mult_Add_Square<double, 3, 3, 3, 3>                  7.1        6.8   1.044
BM_GEMM_Mult_Add_Square<float, 5, 5, 5, 5>                  16.23      37.89  0.428
BM_GEMM_Mult_Add_Square<double, 5, 5, 5, 5>                 23.32      68.01  0.343
BM_GEMM_Mult_Add_Square<float, 8, 8, 8, 8>                  42.61      91.3   0.467
BM_GEMM_Mult_Add_Square<double, 8, 8, 8, 8>                 89.15     162.19  0.55
BM_GEMM_Mult_Add_Square<float, 11, 11, 11, 11>             216.04     304.33  0.71
BM_GEMM_Mult_Add_Square<double, 11, 11, 11, 11>            300.04     423.92  0.708
BM_GEMM_Mult_Add_Square<float, 16, 16, 16, 16>             440.06     374.63  1.175
BM_GEMM_Mult_Add_Square<double, 16, 16, 16, 16>            913        777.27  1.175
BM_GEMM_Expr_Square<float, 3, 3, 3, 3>                      16.69      32.46  0.514
BM_GEMM_Expr_Square<double, 3, 3, 3, 3>                     14.36      30.56  0.47
BM_GEMM_Expr_Square<float, 5, 5, 5, 5>                      33.48      72.54  0.461
BM_GEMM_Expr_Square<double, 5, 5, 5, 5>                     47.15     108.44  0.435
BM_GEMM_Expr_Square<float, 8, 8, 8, 8>                      91.3      205.74  0.444
BM_GEMM_Expr_Square<double, 8, 8, 8, 8>                    187.31     385.48  0.486
BM_GEMM_Expr_Square<float, 11, 11, 11, 11>                 400.05     660.08  0.606
BM_GEMM_Expr_Square<double, 11, 11, 11, 11>                582.93     874.4   0.667
BM_GEMM_Expr_Square<float, 16, 16, 16, 16>                 938.7      788.69  1.19
BM_GEMM_Expr_Square<double, 16, 16, 16, 16>               1900.25    1543.1   1.231


> 
> Ping.
> 
> I’ve also put up 2 patches on Phabricator, to illustrate how the implementation could look:
> 
> 1. [Matrix] Add matrix type to Clang (WIP). https://reviews.llvm.org/D72281
> 2. [Matrix] Add __builtin_matrix_insert to Clang (WIP). https://reviews.llvm.org/D72283
> 
> 
> Cheers,
> Florian
> 
>> On Dec 20, 2019, at 18:31, Florian Hahn via cfe-dev <cfe-dev at lists.llvm.org> wrote:
>> 
>> Hello,
>> 
>> This is a Clang-focused follow up to the original proposal on llvm-dev (http://lists.llvm.org/pipermail/llvm-dev/2019-October/136240.html). On the LLVM side, we recently landed the first commit adding matrix intrinsics as proposed.
>> 
>> On the Clang side, we would like to propose adding support for matrix math operations to Clang. This includes adding a new matrix type (similar to ext_vector_type) and a set of builtins to operate on values of the matrix type.
>> 
>> Our main motivation for the matrix support in Clang is to give users a way to
>> 	• Guarantee generation of high-quality code for matrix operations. For isolated operations, we can guarantee vector code generation suitable for the target. For trees of operations, the proposed value type helps with eliminating temporary loads & stores. 
>> 	• Make use of specialized matrix ISA extensions, like the new matrix instructions in ARM v8.6 or various proprietary matrix accelerators, in their C/C++ code. 
>> 	• Move optimizations from matrix wrapper libraries into the compiler. We use it internally to simplify an Eigen-style matrix library, by relying on LLVM for generating tiled & fused loops for matrix operations. 
>> The rest of this RFC is structured as follows: First we propose a draft specification for the matrix type and accompanying builtins. Next we show an example of how matrix operations will be lowered by Clang, followed by a discussion of the contributing criteria for new extensions.  We wrap up the RFC by discussing possible extensions to the matrix type.
>> Draft Specification
>> 
>> Matrix Type Attribute
>> 
>> The attribute-token matrix_type is used to declare a matrix type. It shall appear at most once in each attribute-list. The attribute shall only appertain to a typedef-name of a typedef of a non-volatile type that is a signed integer type, an unsigned integer type, or a floating-point type. An attribute-argument-clause must be present and it shall have the form:
>> 
>> (constant-expression, constant-expression)
>> 
>> Both constant-expressions shall be positive non-zero integral constant expressions. The maximum of the product of the constants is implementation-defined. If that implementation-defined limit is exceeded, the program is ill-formed.
>> 
>> An attribute of the form matrix_type(R, C) forms a matrix type with an element type of the cv-qualified type the attribute appertains to and R rows and C columns.
>> 
>> If a declaration of a typedef-name has a matrix_type attribute, then all declarations of that typedef-name shall have a matrix_type attribute with the same element type, number of rows, and number of columns.
>> 
>> Matrix Type
>> 
>> A matrix type has an underlying element type, a constant number of rows, and a constant number of columns. Matrix types with the same element type, rows, and columns are the same type. A value of a matrix type contains rows * columns values of the element type laid out in column-major order without padding in a way compatible with an array of at least that many elements of the underlying element type.
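>> As a non-normative illustration of the layout rule, the element at (row, col) of an R x C matrix lives at flat index col * R + row. The struct and names below are ours, for illustration only, not part of the proposal:

```cpp
#include <cassert>

// Illustrative model of the proposed layout: column-major, no padding,
// compatible with an array of R * C elements of the element type.
template <typename T, int R, int C> struct MatrixValue {
  T data[R * C];
  T &at(int row, int col) { return data[col * R + row]; } // column-major
};
```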
>> 
>> A matrix type is a scalar type with the same alignment as its underlying element type, but objects of matrix type are not usable in constant expressions.
>> 
>> TODO: Allow reinterpret_cast from pointer to element type. Make aliasing work.
>> Future Work: Initialization syntax.
>> Future Work: Access syntax. m[col][row].
>> Future Work: Conversions between matrix types with const qualified and unqualified element types.
>> Future Work: Conversions between matrix types with different element types.
>> 
>> Matrix Type builtin Operations
>> 
>> Each matrix type supports a collection of builtin expressions that look like function calls but do not form an overload set. Here they are described as function declarations with rules for how to construct the argument list types and return type and the library description elements from [library.description.structure.specifications]/3 in the C++ standard. 
>> 
>> Definitions:
>> 	• M, M1, M2, M3 - Matrix types 
>> 	• T - Element type 
>> 	• row, col - Row and column arguments respectively. 
>> All operations on matrix types match the behavior of the underlying element type with respect to signed overflows.
>> 
>> 
>> Element Operations
>> 
>> Preconditions: row and col are in the ranges [0, rows in M) and [0, columns in M) respectively.
>> 
>> M __builtin_matrix_insert(M matrix, int row, int col, T elt)
>> 
>> Remarks: The return type and the type T are inferred from the cv-unqualified type of the matrix argument and its cv-unqualified element type respectively.
>> 
>> Returns: a copy of matrix with the element at the specified row and column set to elt.
>> 
>> 
>> T __builtin_matrix_extract(M matrix, int row, int col)
>> 
>> Remarks: The return type is inferred from the cv-unqualified type of the matrix argument’s element type.
>> 
>> Returns: a copy of the element at the specified row and column.
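>> For clarity, the value semantics of these two builtins (the matrix argument is taken by value; insert returns a new matrix and leaves the original untouched) can be modeled with ordinary functions over a flat column-major array. This is a sketch with our own names, not the proposed API:

```cpp
#include <array>
#include <cassert>

// Illustrative value semantics only; the real builtins operate on the
// proposed opaque matrix type, not on std::array.
template <typename T, int R, int C>
using Matrix = std::array<T, R * C>; // column-major flat storage

template <typename T, int R, int C>
Matrix<T, R, C> matrixInsert(Matrix<T, R, C> m, int row, int col, T elt) {
  assert(row >= 0 && row < R && col >= 0 && col < C); // spec precondition
  m[col * R + row] = elt; // m is a copy; the caller's value is unchanged
  return m;
}

template <typename T, int R, int C>
T matrixExtract(const Matrix<T, R, C> &m, int row, int col) {
  assert(row >= 0 && row < R && col >= 0 && col < C); // spec precondition
  return m[col * R + row];
}
```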
>> 
>> 
>> Simple Binary Operations
>> 
>> For the following binary operations matrix1 and matrix2 shall be matrix values of the same cv-unqualified type, and the return type is the cv-unqualified version of that type. 
>> 
>> M __builtin_matrix_add(M matrix1, M matrix2)
>> 
>> Returns: A matrix Res equivalent to the code below, where col refers to the number of columns of M, row to the number of rows of M and EltTy to the element type of M.
>> M Res;
>> for (int C = 0; C < col; ++C) {
>>   for (int R = 0; R < row; ++R) {
>>     EltTy Elt = __builtin_matrix_extract(matrix1, R, C) +
>>                 __builtin_matrix_extract(matrix2, R, C);
>>     Res = __builtin_matrix_insert(Res, R, C, Elt);
>>   }
>> }
>> 
>> 
>> M __builtin_matrix_sub(M matrix1, M matrix2)
>> 
>> Returns: A matrix Res equivalent to the code below, where col refers to the number of columns of M, row to the number of rows of M and EltTy to the element type of M.
>> M Res;
>> for (int C = 0; C < col; ++C) {
>>   for (int R = 0; R < row; ++R) {
>>     EltTy Elt = __builtin_matrix_extract(matrix1, R, C) -
>>                 __builtin_matrix_extract(matrix2, R, C);
>>     Res = __builtin_matrix_insert(Res, R, C, Elt);
>>   }
>> }
>> 
>> 
>> Other Operations
>> 
>> M3 __builtin_matrix_multiply(M1 matrix1, M2 matrix2)
>> 
>> Mandates: M1 and M2 shall be matrix types with the same cv-unqualified element type and with M1’s number of columns matching M2’s number of rows.
>> 
>> Remarks: The return type is a cv-unqualified matrix type whose element type is the common element type of M1 and M2 if both element types are const, or the cv-unqualified element type otherwise. It has the same number of rows as M1 and the same number of columns as M2.
>> 
>> Returns: A matrix Res equivalent to the code below, where col refers to the number of columns of M3, row to the number of rows of M3, EltTy to the element type of M3 and inner to the number of columns of M1.
>> M3 Res;
>> for (int C = 0; C < col; ++C) {
>>   for (int R = 0; R < row; ++R) {
>>     EltTy Elt = 0;
>>     for (int K = 0; K < inner; ++K) {
>>       Elt += __builtin_matrix_extract(matrix1, R, K) *
>>              __builtin_matrix_extract(matrix2, K, C);
>>     }
>>     Res = __builtin_matrix_insert(Res, R, C, Elt);
>>   }
>> }
>> Remarks: With respect to rounding errors, the operation preserves the behavior of the separate multiply and add operations by default. We propose to provide a Clang option to override this behavior and allow contraction of those operations (e.g. -ffp-contract=matrix).
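>> The rounding difference contraction makes can be seen already with scalars: a fused multiply-add rounds once, keeping low bits of the product that the separate operations discard. For example (plain ISO C++, nothing matrix-specific):

```cpp
#include <cassert>
#include <cmath>

// 0.1 * 10.0 rounds to exactly 1.0 in double, so multiplying then adding
// (0.1, 10.0, -1.0) yields 0.0, while std::fma computes the product
// exactly before rounding and exposes the residual error (2^-54).
double separateMulAdd(double a, double b, double c) {
  volatile double p = a * b; // volatile blocks compiler contraction into fma
  return p + c;
}
double fusedMulAdd(double a, double b, double c) { return std::fma(a, b, c); }
```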
>> 
>> 
>> M2 __builtin_matrix_transpose(M1 matrix)
>> 
>> Remarks: The return type is a cv-unqualified matrix type that has the same element type as M1 and has the same number of rows as M1 has columns and the same number of columns as M1 has rows.
>> 
>> Returns: A matrix Res equivalent to the code below, where col refers to the number of columns of M1, and row to the number of rows of M1.
>> M2 Res;
>> for (int C = 0; C < col; ++C) {
>>   for (int R = 0; R < row; ++R) {
>>     EltTy Elt = __builtin_matrix_extract(matrix, R, C);
>>     Res = __builtin_matrix_insert(Res, C, R, Elt);
>>   }
>> }
>> 
>> 
>> M __builtin_matrix_column_load(T *ptr, int row, int col, int stride)
>> 
>> Mandates: row and col shall be integral constants greater than 0. 
>> 
>> Preconditions: stride >= row.
>> 
>> Remarks: The return type is a cv-unqualified matrix type with an element type of the cv-unqualified version of T and a number of rows and columns equal to row and col respectively.
>> 
>> Returns: A matrix Res equivalent to:
>> M Res;
>> for (int C = 0; C < col; ++C) {
>>   for (int R = 0; R < row; ++R)
>>     Res = __builtin_matrix_insert(Res, R, C, ptr[R]);
>>   ptr += stride;
>> }
>> 
>> 
>> void __builtin_matrix_column_store(M matrix, T *ptr, int stride)
>> 
>> Preconditions: stride is greater than or equal to the number of rows in M.
>> 
>> Effects: Equivalent to:
>> for (int C = 0; C < columns in M; ++C) {
>>   for (int R = 0; R < rows in M; ++R)
>>     ptr[R] = __builtin_matrix_extract(matrix, R, C);
>>   ptr += stride;
>> }
>> Remarks: The type T is the const-unqualified version of the matrix argument’s element type.
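>> To illustrate the strided access pattern of the two memory builtins, here is a reference version over plain pointers (the names are ours; the builtins themselves operate on the opaque matrix type). With stride greater than the number of rows, the elements between consecutive columns are skipped on load and left untouched on store:

```cpp
#include <cassert>

// Illustrative reference semantics: load/store a rows x cols matrix
// (column-major flat array 'mat') from/to memory where consecutive
// columns start 'stride' elements apart (precondition: stride >= rows).
template <typename T>
void columnLoad(T *mat, const T *ptr, int rows, int cols, int stride) {
  for (int c = 0; c < cols; ++c) {
    for (int r = 0; r < rows; ++r)
      mat[c * rows + r] = ptr[r];
    ptr += stride;
  }
}

template <typename T>
void columnStore(const T *mat, T *ptr, int rows, int cols, int stride) {
  for (int c = 0; c < cols; ++c) {
    for (int r = 0; r < rows; ++r)
      ptr[r] = mat[c * rows + r];
    ptr += stride;
  }
}
```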
>> 
>> M __builtin_matrix_scalar_multiply(M matrix, T scalar)
>> 
>> Returns: A matrix Res equivalent to the code below, where col refers to the number of columns of M, and row to the number of rows of M.
>> M Res;
>> for (int C = 0; C < col; ++C) {
>>   for (int R = 0; R < row; ++R) {
>>     EltTy Elt = __builtin_matrix_extract(matrix, R, C) * scalar;
>>     Res = __builtin_matrix_insert(Res, R, C, Elt);
>>   }
>> }
>> Remarks: The return type and the type T are the cv-unqualified type of the matrix argument and its cv-unqualified element type respectively.
>> 
>> Example 
>> 
>> This code performs a matrix-multiply of two 4x4 matrices followed by a matrix addition:
>> typedef float m4x4_t __attribute__((matrix_type(4, 4)));
>> void f(m4x4_t *a, m4x4_t *b, m4x4_t *c, m4x4_t *r) {
>>   *r = __builtin_matrix_add(__builtin_matrix_multiply(*a, *b), *c);
>> }
>> This will get lowered by Clang to the LLVM IR below. In our current implementation, we use LLVM’s array type as storage type for the matrix data. Before accessing the data, we cast the array to a vector type. This allows us to use the element width as alignment, without running into issues with LLVM’s large default alignment for vector types, which is problematic in structs.
>> define void @f([16 x float]* %a, [16 x float]* %b, [16 x float]* %c, [16 x float]* %r) #0 {
>> entry:
>>   %a.addr = alloca [16 x float]*, align 8
>>   %b.addr = alloca [16 x float]*, align 8
>>   %c.addr = alloca [16 x float]*, align 8
>>   %r.addr = alloca [16 x float]*, align 8
>>   store [16 x float]* %a, [16 x float]** %a.addr, align 8
>>   store [16 x float]* %b, [16 x float]** %b.addr, align 8
>>   store [16 x float]* %c, [16 x float]** %c.addr, align 8
>>   store [16 x float]* %r, [16 x float]** %r.addr, align 8
>>   %0 = load [16 x float]*, [16 x float]** %a.addr, align 8
>>   %1 = bitcast [16 x float]* %0 to <16 x float>*
>>   %2 = load <16 x float>, <16 x float>* %1, align 4
>>   %3 = load [16 x float]*, [16 x float]** %b.addr, align 8
>>   %4 = bitcast [16 x float]* %3 to <16 x float>*
>>   %5 = load <16 x float>, <16 x float>* %4, align 4
>>   %6 = call <16 x float> @llvm.matrix.multiply.v16f32.v16f32.v16f32(<16 x float> %2, <16 x float> %5, i32 4, i32 4, i32 4)
>>   %7 = load [16 x float]*, [16 x float]** %c.addr, align 8
>>   %8 = bitcast [16 x float]* %7 to <16 x float>*
>>   %9 = load <16 x float>, <16 x float>* %8, align 4
>>   %10 = fadd <16 x float> %6, %9
>>   %11 = load [16 x float]*, [16 x float]** %r.addr, align 8
>>   %12 = bitcast [16 x float]* %11 to <16 x float>*
>>   store <16 x float> %10, <16 x float>* %12, align 4
>>   ret void
>> }
>> declare <16 x float> @llvm.matrix.multiply.v16f32.v16f32.v16f32(<16 x float>, <16 x float>, i32 immarg, i32 immarg, i32 immarg)
>> 
>> Contributing Criteria
>> 
>> Evidence of a significant user community: This is based on a number of factors, including an existing user community, the perceived likelihood that users would adopt such a feature if it were available, and any secondary effects that come from, e.g., a library adopting the feature and providing benefits to its users.
>> Currently this is part of one of our compiler toolchains and is used on a few large internal codebases. The matrix type can be used by matrix libraries like Eigen, to offload some of the optimization responsibility from the library to the compiler. It would also be a suitable target for implementing a standard matrix library. It also provides functionality similar to various libraries for matrix math on small matrixes, like https://developer.apple.com/documentation/accelerate/working_with_matrices, with more flexibility (it supports any combination of input dimensions).
>> 
>> A specific need to reside within the Clang tree: There are some extensions that would be better expressed as a separate tool, and should remain as separate tools even if they end up being hosted as part of the LLVM umbrella project.
>> We want to expose this feature at the C/C++ level. For that, it needs to be part of Clang.
>> 
>> A specification: The specification must be sufficient to understand the design of the feature as well as interpret the meaning of specific examples. The specification should be detailed enough that another compiler vendor could implement the feature.
>> We currently have the design above and will work on a more comprehensive spec.
>> 
>> Representation within the appropriate governing organization: For extensions to a language governed by a standards committee (C, C++, OpenCL), the extension itself must have an active proposal and proponent within that committee and have a reasonable chance of acceptance. Clang should drive the standard, not diverge from it. This criterion does not apply to all extensions, since some extensions fall outside of the realm of the standards bodies.
>> We think this extension would fall outside of the realm of the standards bodies. It is an implementation detail used to implement matrix math libraries and such, much like the vector extensions are an implementation detail for SIMD libraries.
>> 
>> A long-term support plan: increasingly large or complex extensions to Clang need matching commitments to supporting them over time, including improving their implementation and specification as Clang evolves. The capacity of the contributor to make that commitment is as important as the commitment itself.
>> We are using this internally and adding this feature to Clang upstream means we intend to support it as part of our ongoing Clang work.
>> 
>> A high-quality implementation: The implementation must fit well into Clang's architecture, follow LLVM's coding conventions, and meet Clang's quality standards, including diagnostics and complete AST representations. This is particularly important for language extensions, because users will learn how those extensions work through the behavior of the compiler.
>> We will provide a series of patches to implement the extension soon and look forward to any feedback to make sure the patches meet the quality requirements.
>> 
>> A test suite: Extensive testing is crucial to ensure that the language extension is not broken by ongoing maintenance in Clang. The test suite should be complete enough that another compiler vendor could conceivably validate their implementation of the feature against it.
>> We will provide this as part of Clang’s unit tests and test-suite.
>> Extensions
>> 
>> Initially we want to focus on 2D matrixes without padding in column-major layout as a concrete use case. This is similar to the defaults for the Matrix type in Eigen, for example. But our proposed type can be extended naturally to
>> 	• Support N (known constant) dimensions by turning matrix_type attribute into a variadic attribute. 
>> 	• Support column/row-wise padding, by adding a column_padding clause to the attribute.
>> Dealing with the padding could be exclusively handled on the frontend side, by emitting additional shufflevector instructions to extract the data. If there is a desire to exploit the padding more on the LLVM side, we can add a set of intrinsics for that. 
>> 	• Support row & column major layouts, by adding a layout clause to the attribute.
>> Again, this naively could be handled while lowering to LLVM IR in Clang using shufflevector to produce flattened vectors with the required layout. For better optimisations, the LLVM intrinsics relying on shape/layout information can be extended to take the layout as additional argument. Through propagating the layout information similar to the dimensions, we should be able to optimise the points where we need to transform the layout of the underlying matrixes. 
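>> Conceptually, such a layout change on the flattened value is just a fixed index permutation, exactly the kind of reshuffle a single shufflevector expresses. A sketch (our naming, not part of the proposal):

```cpp
#include <array>
#include <cassert>

// Converting a flattened R x C matrix between column-major and row-major
// order is a fixed permutation of the flat indices -- the reshuffle a
// frontend could emit as one shufflevector on the flattened vector.
template <typename T, int R, int C>
std::array<T, R * C> colMajorToRowMajor(const std::array<T, R * C> &in) {
  std::array<T, R * C> out{};
  for (int r = 0; r < R; ++r)
    for (int c = 0; c < C; ++c)
      out[r * C + c] = in[c * R + r];
  return out;
}
```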
>> In all cases, we require known integer constants as dimensions and we do not plan to support dynamic dimensions for now, as the main optimization potential comes from the fact that we know the dimensions. Supporting dynamic dimensions should be fairly straightforward, but it means we lose the ability to type check matrix expressions at compile time and we also have to rely on dynamic dimensions during code generation.
>> 
>> Cheers,
>>  Florian
>> _______________________________________________
>> cfe-dev mailing list
>> cfe-dev at lists.llvm.org
>> https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-dev
> 
> _______________________________________________
> cfe-dev mailing list
> cfe-dev at lists.llvm.org
> https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-dev
