[PATCH] D88396: [X86] Replace movaps with movups when avx is enabled.

LuoYuanke via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Wed Sep 30 02:51:02 PDT 2020


LuoYuanke added a comment.

In D88396#2302728 <https://reviews.llvm.org/D88396#2302728>, @lebedev.ri wrote:

> In D88396#2302727 <https://reviews.llvm.org/D88396#2302727>, @LuoYuanke wrote:
>
>>>> Compiling for SSE this code will likely use the memory form of addps which will fault on the misalignment. I know this patch only targets AVX.
>>>>
>>>> I don’t think you can motivate this change by showing what code you want to accept if the code would crash when compiled with the default SSE2 target.
>>>
>>> Note that even if x86 codegen will always emit unaligned ops (which will cause new questions/bugreports),
>>> the original IR will still contain UB, and it will be only a question of time until that causes some other 'miscompile'.
>>> I really think this should be approached from the front-end diagnostics side.
>>
>> Sorry, what does 'UB' mean?
>
> undefined behavior
>
>> Why would it cause a 'miscompile'? The compiler still thinks the address is aligned.
>
> That is very precisely my point.
>
>> Selecting movups doesn't break the compiler's assumptions. Is there any reason movaps is better than movups? To detect the alignment exception?

Why do we need to detect the alignment exception? This is just like an assert; it can be done in debug mode. So can we select movaps in a debug build, and select movups in a non-debug build?


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D88396/new/

https://reviews.llvm.org/D88396
