patch submission: x86 trapsleds

Todd Mortimer via llvm-commits llvm-commits at lists.llvm.org
Sat Jul 22 21:34:30 PDT 2017


On Thu, Jul 20, 2017 at 11:02:16AM +0200, Joerg Sonnenberger wrote:
> On Wed, Jul 19, 2017 at 09:35:58PM -0400, Todd Mortimer via llvm-commits wrote:
> > I have attached a patch that converts NOP padding emitted by
> > X86AsmBackend::writeNopData into a short JMP over a sequence of INT3
> > instructions. The idea is to remove potentially convenient NOP sleds
> > which may be used in ROP attacks. Programs which would have normally
> > executed through a NOP sled will now just JMP over the INT3s, but an
> > attacker hoping to hit the NOP sled on their way to some code will now
> > get a core dump.
> 
> I don't believe turning a single padding instruction into a jump is a
> good idea for any intra-function place. You should at the very least
> demonstrate the performance (non-)regression by running LNT. You will
> also need at least a functional test case before this can be further
> considered. Note that the lld case is completely different -- it is only
> about inter-function padding.

Okay, I have run LNT with this and compared against the baseline
performance. I ran lnt nightly with --user-perf=1 and --multisample=5.

The build with trapsleds showed

Performance improvement on:
SingleSource/Benchmarks/Misc/salsa20:              4.70s vs 4.83s

Performance regression on:
SingleSource/Benchmarks/Shootout-C++/fibo:         1.51s vs 1.40s
MultiSource/Benchmarks/Ptrdist/yacr2/yacr2:        0.55s vs 0.51s
SingleSource/Benchmarks/Shootout-C++/ackermann:    0.97s vs 0.91s
SingleSource/Benchmarks/BenchmarkGame/recursive:   0.53s vs 0.52s
MultiSource/Applications/lambda-0_1_1/lambda:      2.86s vs 2.81s
MultiSource/Applications/ash/aha:                  1.53s vs 1.50s
SingleSource/Benchmarks/Adobe-C++/functionobjects: 2.77s vs 2.74s

All other performance tests benchmarked the same.

I don't know if these results are significant enough to be of concern,
or if it would be preferable to put trapsled generation behind an option.

What is the next step for moving this along, if it can be considered for
inclusion?
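
For reference, the padding change described at the top of the thread
boils down to something like the following. This is only a minimal
sketch assuming a raw byte buffer is being filled; the function name,
buffer handling, and the small-count fallback are illustrative and not
taken from the patch itself:

    // Sketch of trapsled-style padding: instead of a run of NOPs, emit a
    // short JMP over a run of INT3 bytes, so legitimate fall-through still
    // works but landing anywhere inside the pad traps.
    #include <cstdint>
    #include <cstring>

    static void writeTrapSled(uint8_t *Buf, uint64_t Count) {
      // Assumes Count fits behind a single rel8 jump (<= 129 bytes),
      // which covers typical alignment padding.
      if (Count <= 2) {
        // Not enough room for a JMP plus at least one trap byte;
        // fall back to 1-byte NOPs.
        std::memset(Buf, 0x90, Count);              // 0x90 = NOP
        return;
      }
      Buf[0] = 0xEB;                                // JMP rel8 (short jump)
      Buf[1] = static_cast<uint8_t>(Count - 2);     // skip the rest of the pad
      std::memset(Buf + 2, 0xCC, Count - 2);        // 0xCC = INT3
    }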

Thank you,
Todd
