[PATCH] D88396: [X86] Replace movaps with movups when avx is enabled.

Roman Lebedev via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Wed Sep 30 01:58:50 PDT 2020


lebedev.ri added a comment.

In D88396#2302727 <https://reviews.llvm.org/D88396#2302727>, @LuoYuanke wrote:

>>> Compiling for SSE, this code will likely use the memory form of addps, which will fault on the misalignment. I know this patch only targets AVX.
>>>
>>> I don’t think you can motivate this change by showing what code you want to accept if the code would crash when compiled with the default SSE2 target.
>>
>> Note that even if x86 codegen always emits unaligned ops (which will prompt new questions/bug reports),
>> the original IR will still contain UB, and it is only a matter of time until that causes some other 'miscompile'.
>> I really think this should be approached from the front-end diagnostics side.
>
> Sorry, what does 'UB' mean?

undefined behavior
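
Concretely, a minimal hypothetical sketch of the kind of code in question (not taken from the patch): p is only guaranteed 4-byte aligned, but _mm_load_ps requires 16-byte alignment, so the IR load carries align 16 no matter which instruction the backend later picks.

  #include <xmmintrin.h>

  /* Hypothetical sketch, not from the review: if p is in fact misaligned
   * at run time, this is UB whether the backend selects movaps or movups,
   * because the IR load already promises 16-byte alignment. */
  float sum4(const float *p) {          /* p may be only 4-byte aligned */
      __m128 v = _mm_load_ps(p);        /* UB when p is not 16-byte aligned */
      float tmp[4];
      _mm_storeu_ps(tmp, v);
      return tmp[0] + tmp[1] + tmp[2] + tmp[3];
  }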

> Why would it cause a 'miscompile'? The compiler still thinks the address is aligned.

That is very precisely my point.
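
To spell it out with another hypothetical sketch (again, not code from this patch): the align-16 fact recorded in the IR is visible to every pass, not just to instruction selection, so the optimizer is entitled to fold a run-time alignment check like the one below to "always aligned". Whether a given LLVM version actually does so is beside the point; the rules allow it, and selecting movups at the very end does not take that permission back.

  #include <stdint.h>
  #include <xmmintrin.h>

  /* Hypothetical illustration: _mm_load_ps tells the compiler that p is
   * 16-byte aligned (anything else is UB), so the later run-time check
   * may be folded to "aligned" and the fallback path dropped, regardless
   * of whether codegen emits movaps or movups. */
  int load_checked(const float *p, __m128 *out) {
      *out = _mm_load_ps(p);            /* records the align-16 assumption */
      if (((uintptr_t)p & 15) != 0)     /* may be folded away as dead code */
          return -1;                    /* "misaligned" fallback */
      return 0;
  }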

> Selecting movups doesn't break the compiler's assumptions. Is there any reason movaps is better than movups? To detect the alignment exception?
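
(For readers of the archive, a hypothetical demo of the difference being asked about; this is not code from the review. Built for a plain SSE2 target, the aligned intrinsic is typically selected as movaps and traps on the misaligned address, i.e. the fault "detects" the problem, while _mm_loadu_ps / movups would load silently. The aligned call on a misaligned pointer is UB either way, so none of this behavior is guaranteed.)

  #include <stdio.h>
  #include <xmmintrin.h>

  int main(void) {
      _Alignas(16) float buf[8] = {0, 1, 2, 3, 4, 5, 6, 7};
      float out[4];

      /* buf is 16-byte aligned, so buf + 1 is definitely misaligned for a
       * 16-byte load; with typical SSE2 codegen this movaps traps. */
      __m128 v = _mm_load_ps(buf + 1);  /* UB; movaps would fault here */
      _mm_storeu_ps(out, v);
      printf("%f %f %f %f\n", out[0], out[1], out[2], out[3]);
      return 0;
  }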




Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D88396/new/

https://reviews.llvm.org/D88396


