[PATCH] D99565: [X86] Support replacing aligned vector moves with unaligned moves when avx is enabled.
Sergey Maslov via Phabricator via cfe-commits
cfe-commits at lists.llvm.org
Thu Apr 8 15:15:36 PDT 2021
smaslov added a comment.
> I really don't think this should go in.
Here are more arguments for why I think this is a useful option, in no particular order:
1. This was requested by and added for users of the Intel Compiler. Having a similar option in LLVM would make the two compilers more compatible and ease the transition of those users to LLVM.
2. This fixes an inconsistency in optimization. Suppose a load is folded into another instruction (e.g., a load followed by an add becomes `add [memop]`). If a misaligned pointer reaches the two-instruction sequence, the aligned load raises an exception; if the same pointer reaches the folded `add [memop]`, it works. Thus the observable behavior of a misaligned access depends on which optimization levels and passes were applied, and small source changes can make the issue appear and disappear. It is better for the user to consistently use unaligned loads/stores, which also improves the debug experience.
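The asymmetry in point 2 can be sketched at the instruction level (registers chosen for illustration only). Under AVX, the explicitly aligned move `vmovaps` enforces alignment, while a VEX-encoded memory operand folded into an arithmetic instruction has no alignment requirement:

```asm
; Separate load + add: vmovaps requires a 32-byte-aligned address,
; so a misaligned pointer in %rdi raises #GP here.
vmovaps (%rdi), %ymm0
vaddps  %ymm1, %ymm0, %ymm0

; Same computation after load folding: VEX-encoded memory operands
; (other than the explicitly aligned moves) are not alignment-checked,
; so the same misaligned %rdi works, just possibly slower.
vaddps  (%rdi), %ymm1, %ymm0
```

Whether the fault occurs therefore depends on whether the optimizer happened to fold the load, not on anything in the source program.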
3. It makes good use of hardware that handles misaligned data gracefully. The misalignment is not necessarily a bug in the user's code; it may come from a third-party library. For example, this would allow linking against a library built long ago, when stack alignment was only 4 bytes.
If you still think this can suppress a desired exception on a misaligned access (I would argue that "going slower" is better than "raising an exception"), then let's consider adding this as an option that is OFF by default.
That would give everyone the most flexibility.
Repository:
rG LLVM Github Monorepo
CHANGES SINCE LAST ACTION
https://reviews.llvm.org/D99565/new/
https://reviews.llvm.org/D99565