[LLVMdev] Parametric polymorphism
DeLesley Hutchins
delesley.spambox at googlemail.com
Wed Feb 18 11:53:25 PST 2009
Thanks for the detailed response! :-)
> This is by design. LLVM's type system is very low-level...
Yes, and it should remain low-level. :-)
> Expecting it to directly support generics seems a third-order-of-
> magnitude leap of faith. :) But there is good news for the faithful?
Let us distinguish between generics as found in Java or .NET,
and parametric polymorphism in general. Generics are intimately
tied to classes, which are big, complex, OOP-centric, and definitely
not low-level. Parametric polymorphism only means that the LLVM IR
should support type variables.
I do not regard type variables as being high-level, or a leap of
faith. The LLVM type system already supports recursive types,
which are a whole lot more complicated than simple type variables.
In fact, the current representation of recursive types already has
type variables -- they are called "up references".
In type theory, recursive types are generally represented using
the syntax:
rec X. <type>
where <type> is a type expression. The variable X can appear
within <type> and acts as a recursive reference to the type being
defined. The LLVM type { int, \1* } would thus ordinarily be
written as:
rec X. { int, X* }.
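For comparison, the same recursive structure can already be written
with a named, self-referential type in LLVM assembly; the up-reference
form is essentially the anonymous version of this:
%pair = type { i32, %pair* }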
I am proposing to extend the LLVM type system with types of the
form:
forall X. <type>
Conceptually, this is not any higher-level than a recursive type.
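Purely as an illustration (the concrete syntax here is hypothetical,
not something LLVM accepts today), such types might be written:
forall X. { i32, X* }        ; a struct parameterized by the type variable X
forall X. void (X*, X*)      ; a function operating on pointers to some X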
> One could argue that LLVM could have much better support for type
> genericity by simply allowing full use of abstract data types (those
> containing opaque types) to be valid in IR, but not for codegen.
That's more or less what I'm suggesting. A type variable refers
to an unknown, or "opaque" type, which then becomes known later
when the variable is instantiated. However, abstract data types
are not necessarily invalid for codegen; it depends on how the
types are used. More on this below...
> Still, there are a large number of potential foibles here. For
> instance, passing an argument can require platform-specific
> contortions to conform to the platform ABI...
Are those contortions done by the native code generator back-end,
or are they done when the C compiler generates LLVM IR? I'm
assuming they're done by the back-end, because it would be bad
if the C compiler had to generate different IR for every platform.
If ABI conformance is handled by the back-end, however, then that's
a good reason to put type specialization in LLVM, where the back-end
can see it. :-)
> Given your concerns, you clearly have strong ideas about how type
> specialization should be implemented; why do you think having LLVM
> make the decision for you internally would be better than making the
> decision yourself, as you can do today?
I want LLVM to do the specialization, because specialization is
inextricably tied to similar optimizations, like inlining and
partial evaluation. Doing it within LLVM has many advantages,
such as JIT support and link-time optimization.
Moreover, specialization should really be done at the codegen level
in order to do it properly. C++ templates are a great example of
why *NOT* to do specialization within the high-level language.
The first problem with C++ templates is that they don't support
separate compilation, which makes it a royal pain to design and use
template libraries. The library has to be distributed in source
code form, compilation times go through the roof, and the linker
has to weed out all the duplicate instantiations.
The second problem with C++ templates is that every template
instantiation always generates completely new code, even when it
doesn't need to. Consider the following two functions:
template <typename T> struct PairPtr { T* first; T* second; };
template <typename T> struct Pair    { T first;  T second;  };

template <typename T> T* getSecondPtr(PairPtr<T>* pair) { return pair->second; }
template <typename T> T  getSecond(Pair<T>* pair)       { return pair->second; }
The generated code for getSecondPtr() is the same for every T.
There is absolutely no need to generate a bazillion specialized
copies. Notice that the .NET value-type/reference-type distinction
would be overly naive in this case: we can instantiate T to a value
type and still get the exact same generated code.
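To make that concrete, a single shared compilation of getSecondPtr
(sketched here by hand in C; the name is made up) covers every
instantiation, assuming all data pointers share one representation:
/* PairPtr<T> is always laid out as two pointers, so index 1 is 'second'. */
void* getSecondPtr_shared(void* pair) {
  return ((void**) pair)[1];
}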
The generated code for getSecond() needs to know the size of T in
order to calculate the correct offset into the record. However,
we still don't need to generate a bazillion specialized copies;
we can simply pass the size of T as an argument:
void getSecond(int sizeT, void* pair, void* result) {
  /* memcpy is from <string.h>; sizeT is both the offset of 'second'
     within Pair<T> and the number of bytes to copy. */
  memcpy(result, (char*) pair + sizeT, sizeT);
}
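A call site where T happens to be double (hypothetical names) would
then look like:
double out;
getSecond(sizeof(double), pairOfDoubles, &out);  /* pairOfDoubles is a Pair<double>* */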
Transformations of this kind are most definitely not high-level.
In order to avoid code bloat, I have been forced to completely
bypass the LLVM type system. LLVM is supposed to take care of
determining offsets and allocating space for return values on
the stack. By trying to do that myself, I may have created
alignment issues on certain architectures, and possibly broken
the platform ABI. Moreover, existing LLVM optimization passes
probably have no idea how to deal with the code.
The code above is somewhat contrived, but I do have a real reason
for wanting to pass offsets as arguments. Offsets are an elegant
way of implementing mixin classes, e.g.:
class Mix<X> extends X { ... }
If I implement Mix as a template, then every instantiation of Mix
would generate new code for every method. However, if I simply
pass the size of X as a hidden argument to each method, then the
code can be generated once, and compiled to a separate library.
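As a rough sketch (hypothetical names, plain C, and ignoring any
alignment padding between X and Mix's own fields), one compiled copy
of a Mix method could look like this:
/* Mix's own fields live immediately after the unknown base X. */
struct MixFields { int count; };

/* sizeX tells the single compiled copy where the base object ends
   and Mix's fields begin, for any choice of X. */
int Mix_getCount(void* self, unsigned sizeX) {
  struct MixFields* fields = (struct MixFields*) ((char*) self + sizeX);
  return fields->count;
}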
-DeLesley