[llvm] 4326497 - [X86] vec_insert-5.ll - ensure we build with +mmx as we reference x86_mmx types

Simon Pilgrim via llvm-commits llvm-commits at lists.llvm.org
Mon Oct 30 05:43:36 PDT 2023


Author: Simon Pilgrim
Date: 2023-10-30T12:43:18Z
New Revision: 432649700db1bcfd5c991296242195129f03b4b1

URL: https://github.com/llvm/llvm-project/commit/432649700db1bcfd5c991296242195129f03b4b1
DIFF: https://github.com/llvm/llvm-project/commit/432649700db1bcfd5c991296242195129f03b4b1.diff

LOG: [X86] vec_insert-5.ll - ensure we build with +mmx as we reference x86_mmx types

Enabling SSE doesn't guarantee MMX is enabled on all targets

Avoids a crash in D152928 (although we still see a regression with that patch applied, resulting in MMX codegen)
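
For illustration only (this snippet is hypothetical and not taken from vec_insert-5.ll): the x86_mmx IR type lowers to the MMX register class, so a function that references it needs the mmx target feature regardless of which SSE levels are enabled:

    ; illustrative.ll - hypothetical example of a function referencing x86_mmx
    define <1 x i64> @illustrative_mmx_use(ptr %p) nounwind {
      %v = load x86_mmx, ptr %p                ; x86_mmx value, needs the mmx feature to lower
      %r = bitcast x86_mmx %v to <1 x i64>
      ret <1 x i64> %r
    }

    ; Spelling out the feature explicitly, as this patch does for the test's RUN lines:
    ;   llc < illustrative.ll -mtriple=x86_64-unknown -mattr=+mmx,+sse2,+ssse3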

Added: 
    

Modified: 
    llvm/test/CodeGen/X86/vec_insert-5.ll

Removed: 
    


################################################################################
diff --git a/llvm/test/CodeGen/X86/vec_insert-5.ll b/llvm/test/CodeGen/X86/vec_insert-5.ll
index be155969e0b5e22..34280aa647aab77 100644
--- a/llvm/test/CodeGen/X86/vec_insert-5.ll
+++ b/llvm/test/CodeGen/X86/vec_insert-5.ll
@@ -1,7 +1,7 @@
 ; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py
-; RUN: llc < %s -mtriple=i386-unknown -mattr=+sse2,+ssse3 | FileCheck %s --check-prefix=X86
-; RUN: llc < %s -mtriple=x86_64-unknown -mattr=+sse2,+ssse3 | FileCheck %s --check-prefixes=X64,ALIGN
-; RUN: llc < %s -mtriple=x86_64-unknown -mattr=+sse2,+ssse3,sse-unaligned-mem | FileCheck %s --check-prefixes=X64,UNALIGN
+; RUN: llc < %s -mtriple=i386-unknown -mattr=+mmx,+sse2,+ssse3 | FileCheck %s --check-prefix=X86
+; RUN: llc < %s -mtriple=x86_64-unknown -mattr=+mmx,+sse2,+ssse3 | FileCheck %s --check-prefixes=X64,ALIGN
+; RUN: llc < %s -mtriple=x86_64-unknown -mattr=+mmx,+sse2,+ssse3,sse-unaligned-mem | FileCheck %s --check-prefixes=X64,UNALIGN
 
 ; There are no MMX operations in @t1
 
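
Aside (not part of the commit): the CHECK lines are autogenerated (see the NOTE at the top of the test), so the usual follow-up after touching RUN lines is to regenerate them and re-run the test. The invocation below is a sketch that assumes a monorepo checkout with a build tree named 'build':

    # Regenerate the autogenerated CHECK lines
    llvm/utils/update_llc_test_checks.py --llc-binary build/bin/llc \
        llvm/test/CodeGen/X86/vec_insert-5.ll

    # Re-run just this test
    build/bin/llvm-lit -v llvm/test/CodeGen/X86/vec_insert-5.ll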


        

