[llvm-dev] Can LLVM emit machine code faster with no optimization passes?

mats petersson via llvm-dev llvm-dev at lists.llvm.org
Thu Oct 13 04:00:44 PDT 2016


It is clear that there are passes in LLVM that are non-linear with respect
to the size of the function - or rather, the size of the generated machine
code - and badly generated IR can easily trigger this with very few,
relatively simple IR instructions: in particular, loads/stores of large data
structures. LLVM doesn't "understand" that such an access is unsuitable and
should perhaps be converted to a memcpy() [or a loop, or whatever]. In fact,
I think the relevant unit is the basic block rather than the function, but
I've not investigated the exact details. From memory, and from my relatively
limited digging, the cost is in "selecting the right instruction(s)", which
turns into a "for each instruction, check all the other instructions being
generated" problem - O(N^2) in complexity.
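
To make that concrete, here is a minimal sketch (hypothetical type and
function names, not taken from the original post) of the kind of IR I mean -
a first-class load and store of a large aggregate:

%big = type [4096 x i32]

define void @copy_big(%big* %dst, %big* %src) {
entry:
  ; A 16 KB aggregate copied as a single value: the backend has to
  ; scalarise this into thousands of individual loads and stores, and the
  ; per-block work on those grows much worse than linearly.
  %tmp = load %big, %big* %src
  store %big %tmp, %big* %dst
  ret void
}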

I suggested, but never completed, a pass to translate "large load/store to
memcpy" - probably as a separate pass, rather than part of the current
"memcpy optimisation" pass, which does the opposite: it takes small calls to
memcpy and translates them into the relevant load and store operations.
Maybe, in one of those months full of Sundays, I'll get around to it...
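
For illustration, the converted form could look roughly like this (a sketch
only - the memcpy intrinsic signature and the size/alignment operands are
just what I'd expect for the type above, not something I've run through llc):

%big = type [4096 x i32]

declare void @llvm.memcpy.p0i8.p0i8.i64(i8*, i8*, i64, i32, i1)

define void @copy_big(%big* %dst, %big* %src) {
entry:
  %d = bitcast %big* %dst to i8*
  %s = bitcast %big* %src to i8*
  ; One memcpy of 16384 bytes (4096 x i32, 4-byte aligned, not volatile)
  ; instead of an element-by-element scalarised copy.
  call void @llvm.memcpy.p0i8.p0i8.i64(i8* %d, i8* %s, i64 16384, i32 4,
                                       i1 false)
  ret void
}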

Of course, without actually knowing what the original code and/or generated
IR looks like, it's hard to say whether the problem Jonas and I have seen is
actually the problem the original post is about. It is certainly a PLAUSIBLE
scenario, but perhaps not the only one.

--
Mats

On 12 October 2016 at 20:58, Jonas Maebe via llvm-dev <
llvm-dev at lists.llvm.org> wrote:

> On 12/10/16 20:32, Matthias Braun via llvm-dev wrote:
>
>> But just as food for thought: what if msvc did some minimal
>> optimisations, found out that half the source code is unreachable and
>> removed it, while llvm with no optimisations just compiled everything?
>>
>
> llvm is actually extremely slow when it has to remove lots of dead code. I
> experienced that in the beginning when working on our llvm backend. I had
> some bugs in our code generator that caused about half of the llvm IR code
> to be dead, and compiling that code with -O1 made llvm extremely slow.
>
> Another thing that makes llvm incredibly slow is loading/storing large
> aggregates directly (I know, now, that you're not supposed to do that). I
> guess it's the generation of the resulting spilling code that takes
> forever. See e.g. http://pastebin.com/krXhuEzF
>
> All that said: we will also keep our original code generators in our
> compiler, and keep llvm as an option for extra optimisation. In terms of
> speed, our code generators are much less complex and hence much faster
> than llvm's. We don't have instruction selection; instead we generate
> assembler directly via virtual methods of our parse tree node classes.
> That would be very hard to beat, even if things have gotten slower lately
> due to the addition of extra abstraction layers to support generating JVM
> bytecode and, yes, LLVM IR :)
>
> There are also a few other reasons, but they're not relevant to this
> thread. (*)
>
>
> Jonas
>
> (*) We support several platforms that LLVM no longer supports and/or will
> probably never support (OS/2, 16 and 32 bit MS-DOS, Gameboy Advance, Amiga,
> Darwin/PowerPC), and some of our code generator/optimisation developers
> prefer to write Pascal rather than C++ (our compiler is a self-hosted
> Pascal compiler).
>