[LLVMdev] -march and -mtune options on x86

Ghassan Shobaki ghassan_shobaki at yahoo.com
Tue Jan 17 04:38:39 PST 2012


That explains most of my results very well. Explaining the huge positive impact of the -m32 switch on bwaves certainly requires a careful analysis of the generated code. Unfortunately, this is not the focus of my group at this point, but it may be an interesting exercise to do some time in the near future. We will definitely report any interesting findings. I don't know why you are implying that anyone who is not willing to do performance analysis or submit patches should not be posting to this mailing list. I meant to report some results that I could not explain with my limited LLVM experience. I thought that people who know the impact of each flag might be able to explain the results, and you have basically done that. Also, discussing flag combinations may help some people achieve better performance simply by trying different flags. Of course, you can always ignore any posting that is not interesting to you. Thank you for the explanation!


-Ghassan



________________________________
 From: Chandler Carruth <chandlerc at google.com>
To: Ghassan Shobaki <ghassan_shobaki at yahoo.com> 
Cc: Gordon Keiser <gkeiser at arxan.com>; "llvmdev at cs.uiuc.edu" <llvmdev at cs.uiuc.edu> 
Sent: Monday, January 16, 2012 10:46 AM
Subject: Re: [LLVMdev] -march and -mtune options on x86
 

On Mon, Jan 16, 2012 at 12:29 AM, Ghassan Shobaki <ghassan_shobaki at yahoo.com> wrote:

>Let me describe more precisely what I am doing and why the results I got may help improve LLVM's performance on modern x86-64 processors regardless of the front end (GCC, Clang or DragonEgg).
>
>I am running ALL my tests on an Intel Xeon E5540 processor, which is an x86-64 Nehalem processor. The OS is a 64-bit version of Ubuntu. So, I am running all my tests on the same x86-64 machine and am only experimenting with compiler options. What I meant by running in x86-32 mode is simply using the -m32 option (passed through the llvm-gcc front end, but I believe the front end is irrelevant with regard to this matter, any confirmation?).

That is not quite accurate. This option is inherently a frontend option. The frontend lowers it into various options to the backend, and what those are varies from frontend to frontend. Please try using Clang or DragonEgg. That said, I think I've got a pretty good understanding of what went on with these options... read my detailed responses below.
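
(For anyone who wants to see exactly what the driver hands to the backend, here is a minimal sketch, assuming Clang on Linux; the file name is made up for illustration. The -### flag prints the commands the driver would run without executing them, and the most visible change under -m32 is the target triple passed to the backend.)

/* probe.c - a trivial input just to inspect the driver invocation */
int main(void) { return 0; }

/*
 *   clang -O3      -### probe.c   # the cc1 line carries a triple like x86_64-...-linux-gnu
 *   clang -O3 -m32 -### probe.c   # the triple becomes i386-...-linux-gnu
 *
 * llvm-gcc and DragonEgg go through their own option handling before
 * reaching the LLVM backend, which is why the frontend matters here.
 */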


>The original reason for using this option was to force the back end to work with a smaller number of physical registers in order to study register pressure reduction under more stringent constraints (as part of an academic research project). However, the observation that I am trying to communicate in this posting may not be related to register pressure.
>
>The observation is that using the -march=core2 and -mtune=core2 options makes a significant positive difference in x86-32 mode (that is, with the -m32 option), while it does not make any significant difference in x86-64 mode (without the -m32 option).

This is exactly the phenomenon we were describing to you.

With no '-march' options, the default architecture for a 32-bit build is i386, which is very old and has a quite limited instruction set available.

With no '-march' options, the default architecture for a 64-bit build is x86-64, which is newer than i686 and has a broad and very useful instruction set available to it.

The delta of architectures from i386 -> core2 is *massive*. The delta from x86-64 -> core2 is quite small.
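
(One cheap way to see a slice of that delta, using the llvm-gcc setup described above; the file name is just for illustration. CMOV, for example, only appeared with the i686/PentiumPro generation, so an i386 target cannot use it at all while core2 can.)

/* max.c - compare the code generated for an i386 target vs. a core2 target */
int max(int a, int b) {
    return a > b ? a : b;
}

/*
 *   llvm-gcc -O3 -m32 -march=i386  -S max.c -o max-i386.s    # compare and branch
 *   llvm-gcc -O3 -m32 -march=core2 -S max.c -o max-core2.s   # typically a single cmov
 */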

>My hypothesis is that the LLVM back end is making some good target-specific optimizations when the -m32 option is used but those good optimizations are disabled when this option is not used.

That is not how the backend works. The target-specific optimizations are being *turned off* by specifying '-m32' in order to hit the very old target of i386.

>Here are the SPEC CPU2006 geometric-mean scores that I am measuring:
>
>Native x86-64 mode (without the -m32 option):
>Using -O3 only:                               INT score: 19.24     FP score: 15.64
>Using -O3 -march=core2 -mtune=core2:          INT score: 19.16     FP score: 15.57
>
>So, there is no significant difference in this case. The small difference may just be random variation.
>
>x86-32 mode (adding the -m32 option):
>Using -O3 -m32 only:                          INT score: 16.86     FP score: 14.09
>Using -O3 -m32 -march=core2 -mtune=core2:     INT score: 17.02     FP score: 15.24

Note that in both cases these scores are still lower than the ones x86-64 got. I'm not sure why you think some optimization was enabled here rather than disabled...
 
>So, there is a significant difference in this case. In fact, the 8% geometric-mean improvement on FP2006 is a huge improvement, driven by double-digit percentage gains on several individual benchmarks. The biggest improvement is 48% on gromacs, which is a CPU-intensive FP benchmark.

I haven't tested it, but I'd be willing to bet this is a difference of using x87 FPU versus the SSE FP operations... I'd have to look into the compiled result much more to tell, but genuinely it isn't interesting. The behavior you describe is exactly what I would expect from these options.
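
(If anyone wants to test that hypothesis quickly, here is a minimal sketch with the same llvm-gcc setup; the names are made up. Without -march, the 32-bit default has no SSE, so doubles go through the x87 stack; with -march=core2 the LLVM backend uses scalar SSE2 for them. Plain GCC would additionally need -mfpmath=sse in 32-bit mode, but the LLVM code generator picks SSE on its own once it is available.)

/* fpmul.c - check whether scalar FP math is lowered to x87 or SSE */
double scale(double x, double y) {
    return x * y + 1.0;
}

/*
 *   llvm-gcc -O3 -m32              -S fpmul.c -o fpmul-default.s   # fmul/fadd (x87)
 *   llvm-gcc -O3 -m32 -march=core2 -S fpmul.c -o fpmul-core2.s     # mulsd/addsd (SSE2)
 *
 * In 64-bit mode SSE2 is part of the baseline, which is one reason
 * -march makes so little difference there for this kind of code.
 */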
 
>The above geometric means are consistent with the logical expectation that LLVM's performance in the native x86-64 mode is better, because more spill code is generated when the -m32 option is used. However, these aggregate numbers hide the fact that LLVM generates much faster code for some benchmarks when the -m32 option is used. An extreme example is the bwaves benchmark, which has a score of 15.5 in the native mode and a score of 23 (a 48% speedup) when the -m32 option is used. If LLVM is capable of achieving the higher score in the -m32 mode, it should be able to achieve at least the same score in the native mode. So, the question is: are there any good optimizations that we are losing in the native x86-64 mode? If yes, can we enable them to get better performance on x86-64?

Have you looked at the generated code for optimizations made in 32-bit mode that are missed in 64-bit mode? That's the hard part. I know for a fact (having done such an investigation) that there are some, but I strongly doubt they are responsible for the huge swings in performance you're observing.

There is a much bigger difference between 32-bit and 64-bit binaries than any backend optimizations: pointers are half the size in 32-bit mode. This can significantly shrink in-memory data structures, which can mean significantly denser cache lines and significantly better performance. That would seem like a much better explanation for a few benchmarks being significantly faster in 32-bit mode.
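
(A tiny illustration of the pointer-size point; the struct is made up, and the sizes assume the usual ILP32 vs. LP64 Linux ABIs.)

/* node.c - how much pointer width alone changes data density */
#include <stdio.h>

struct node {
    struct node *next;
    struct node *prev;
    int key;
};

int main(void) {
    /* roughly 12 bytes with -m32 and 24 bytes in a default 64-bit build
       (two pointers double in size and alignment padding grows), so a
       64-byte cache line holds about twice as many nodes in 32-bit mode */
    printf("sizeof(struct node) = %u\n", (unsigned)sizeof(struct node));
    return 0;
}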

It might still be worth investigating to see if there are missed optimizations here, but all of the ones I have found have been subtle results of different code patterns due to differently sized and structured data. I think you're going to need to do a lot of very detailed analysis to draw any conclusions about missed optimizations.
 
>It seems to me that the back end is somehow assuming that modern x86-64 machines magically solve all the scheduling and tuning problems and do not need any help from the compiler. Any truth to this?

None at all. Please go look at the x86-64 backend before making wild guesses. For reference, the 64-bit and 32-bit x86 backends *share the same code*.
 
>Can anyone who is interested in performance on x86-64 try rerunning his/her tests using the -m32 mode to see if that gives any speedup?

I do this all the time. If you want to help, please actually investigate the performance differences you find, isolate specific missed optimizations with clear testcases demonstrating the transform that should be applied and isn't, and file bugs or submit patches.

