[llvm] r209577 - AArch64/ARM64: move ARM64 into AArch64's place
Jim Grosbach
grosbach at apple.com
Wed May 28 12:25:43 PDT 2014
Absolutely fantastic. I'll kinda miss the "arm64" name, though. ;)
> On May 24, 2014, at 5:50 AM, Tim Northover <tnorthover at apple.com> wrote:
>
> Author: tnorthover
> Date: Sat May 24 07:50:23 2014
> New Revision: 209577
>
> URL: http://llvm.org/viewvc/llvm-project?rev=209577&view=rev
> Log:
> AArch64/ARM64: move ARM64 into AArch64's place
>
> This commit starts with a "git mv ARM64 AArch64" and continues out
> from there, renaming the C++ classes, intrinsics, and other
> target-local objects for consistency.
>
> "ARM64" test directories are also moved, and tests that began their
> life in ARM64 use an arm64 triple, those from AArch64 use an aarch64
> triple. Both should be equivalent though.
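
Since the two spellings are meant to be equivalent now, a quick sanity
check for any out-of-tree test is to run it under both triples. This is
only an illustrative sketch of mine (the function name, RUN lines, and
CHECK patterns are made up for the example, not taken from the patch):

  ; Illustrative only; not part of r209577. Either RUN line should go
  ; through the merged AArch64 backend and emit the same add instruction.
  ; RUN: llc -mtriple=arm64-apple-ios -o - %s | FileCheck %s
  ; RUN: llc -mtriple=aarch64-none-linux-gnu -o - %s | FileCheck %s

  define i64 @add_example(i64 %a, i64 %b) {
  ; CHECK-LABEL: add_example
  ; CHECK: add x0, x0, x1
    %sum = add i64 %a, %b
    ret i64 %sum
  }
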
>
> This finishes the AArch64 merge, and everyone should feel free to
> continue committing as normal now.
>
> Added:
> llvm/trunk/include/llvm/IR/IntrinsicsAArch64.td
> - copied, changed from r209576, llvm/trunk/include/llvm/IR/IntrinsicsARM64.td
> llvm/trunk/lib/Target/AArch64/AArch64.h
> llvm/trunk/lib/Target/AArch64/AArch64.td
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/ARM64.td
> llvm/trunk/lib/Target/AArch64/AArch64AddressTypePromotion.cpp
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/ARM64AddressTypePromotion.cpp
> llvm/trunk/lib/Target/AArch64/AArch64AdvSIMDScalarPass.cpp
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/ARM64AdvSIMDScalarPass.cpp
> llvm/trunk/lib/Target/AArch64/AArch64AsmPrinter.cpp
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/ARM64AsmPrinter.cpp
> llvm/trunk/lib/Target/AArch64/AArch64BranchRelaxation.cpp
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/ARM64BranchRelaxation.cpp
> llvm/trunk/lib/Target/AArch64/AArch64CallingConv.h
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/ARM64CallingConv.h
> llvm/trunk/lib/Target/AArch64/AArch64CallingConvention.td
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/ARM64CallingConvention.td
> llvm/trunk/lib/Target/AArch64/AArch64CleanupLocalDynamicTLSPass.cpp
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/ARM64CleanupLocalDynamicTLSPass.cpp
> llvm/trunk/lib/Target/AArch64/AArch64CollectLOH.cpp
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/ARM64CollectLOH.cpp
> llvm/trunk/lib/Target/AArch64/AArch64ConditionalCompares.cpp
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/ARM64ConditionalCompares.cpp
> llvm/trunk/lib/Target/AArch64/AArch64DeadRegisterDefinitionsPass.cpp
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/ARM64DeadRegisterDefinitionsPass.cpp
> llvm/trunk/lib/Target/AArch64/AArch64ExpandPseudoInsts.cpp
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/ARM64ExpandPseudoInsts.cpp
> llvm/trunk/lib/Target/AArch64/AArch64FastISel.cpp
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/ARM64FastISel.cpp
> llvm/trunk/lib/Target/AArch64/AArch64FrameLowering.cpp
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/ARM64FrameLowering.cpp
> llvm/trunk/lib/Target/AArch64/AArch64FrameLowering.h
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/ARM64FrameLowering.h
> llvm/trunk/lib/Target/AArch64/AArch64ISelDAGToDAG.cpp
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/ARM64ISelDAGToDAG.cpp
> llvm/trunk/lib/Target/AArch64/AArch64ISelLowering.cpp
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/ARM64ISelLowering.cpp
> llvm/trunk/lib/Target/AArch64/AArch64ISelLowering.h
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/ARM64ISelLowering.h
> llvm/trunk/lib/Target/AArch64/AArch64InstrAtomics.td
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/ARM64InstrAtomics.td
> llvm/trunk/lib/Target/AArch64/AArch64InstrFormats.td
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/ARM64InstrFormats.td
> llvm/trunk/lib/Target/AArch64/AArch64InstrInfo.cpp
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/ARM64InstrInfo.cpp
> llvm/trunk/lib/Target/AArch64/AArch64InstrInfo.h
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/ARM64InstrInfo.h
> llvm/trunk/lib/Target/AArch64/AArch64InstrInfo.td
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/ARM64InstrInfo.td
> llvm/trunk/lib/Target/AArch64/AArch64LoadStoreOptimizer.cpp
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/ARM64LoadStoreOptimizer.cpp
> llvm/trunk/lib/Target/AArch64/AArch64MCInstLower.cpp
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/ARM64MCInstLower.cpp
> llvm/trunk/lib/Target/AArch64/AArch64MCInstLower.h
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/ARM64MCInstLower.h
> llvm/trunk/lib/Target/AArch64/AArch64MachineFunctionInfo.h
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/ARM64MachineFunctionInfo.h
> llvm/trunk/lib/Target/AArch64/AArch64PerfectShuffle.h
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/ARM64PerfectShuffle.h
> llvm/trunk/lib/Target/AArch64/AArch64PromoteConstant.cpp
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/ARM64PromoteConstant.cpp
> llvm/trunk/lib/Target/AArch64/AArch64RegisterInfo.cpp
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/ARM64RegisterInfo.cpp
> llvm/trunk/lib/Target/AArch64/AArch64RegisterInfo.h
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/ARM64RegisterInfo.h
> llvm/trunk/lib/Target/AArch64/AArch64RegisterInfo.td
> llvm/trunk/lib/Target/AArch64/AArch64SchedA53.td
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/ARM64SchedA53.td
> llvm/trunk/lib/Target/AArch64/AArch64SchedCyclone.td
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/ARM64SchedCyclone.td
> llvm/trunk/lib/Target/AArch64/AArch64Schedule.td
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/ARM64Schedule.td
> llvm/trunk/lib/Target/AArch64/AArch64SelectionDAGInfo.cpp
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/ARM64SelectionDAGInfo.cpp
> llvm/trunk/lib/Target/AArch64/AArch64SelectionDAGInfo.h
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/ARM64SelectionDAGInfo.h
> llvm/trunk/lib/Target/AArch64/AArch64StorePairSuppress.cpp
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/ARM64StorePairSuppress.cpp
> llvm/trunk/lib/Target/AArch64/AArch64Subtarget.cpp
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/ARM64Subtarget.cpp
> llvm/trunk/lib/Target/AArch64/AArch64Subtarget.h
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/ARM64Subtarget.h
> llvm/trunk/lib/Target/AArch64/AArch64TargetMachine.cpp
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/ARM64TargetMachine.cpp
> llvm/trunk/lib/Target/AArch64/AArch64TargetMachine.h
> llvm/trunk/lib/Target/AArch64/AArch64TargetObjectFile.cpp
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/ARM64TargetObjectFile.cpp
> llvm/trunk/lib/Target/AArch64/AArch64TargetObjectFile.h
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/ARM64TargetObjectFile.h
> llvm/trunk/lib/Target/AArch64/AArch64TargetTransformInfo.cpp
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/ARM64TargetTransformInfo.cpp
> llvm/trunk/lib/Target/AArch64/AsmParser/AArch64AsmParser.cpp
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/AsmParser/ARM64AsmParser.cpp
> llvm/trunk/lib/Target/AArch64/AsmParser/CMakeLists.txt
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/AsmParser/CMakeLists.txt
> llvm/trunk/lib/Target/AArch64/AsmParser/LLVMBuild.txt
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/AsmParser/LLVMBuild.txt
> llvm/trunk/lib/Target/AArch64/AsmParser/Makefile
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/AsmParser/Makefile
> llvm/trunk/lib/Target/AArch64/CMakeLists.txt
> llvm/trunk/lib/Target/AArch64/Disassembler/AArch64Disassembler.cpp
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/Disassembler/ARM64Disassembler.cpp
> llvm/trunk/lib/Target/AArch64/Disassembler/AArch64Disassembler.h
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/Disassembler/ARM64Disassembler.h
> llvm/trunk/lib/Target/AArch64/Disassembler/AArch64ExternalSymbolizer.cpp
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/Disassembler/ARM64ExternalSymbolizer.cpp
> llvm/trunk/lib/Target/AArch64/Disassembler/AArch64ExternalSymbolizer.h
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/Disassembler/ARM64ExternalSymbolizer.h
> llvm/trunk/lib/Target/AArch64/Disassembler/CMakeLists.txt
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/Disassembler/CMakeLists.txt
> llvm/trunk/lib/Target/AArch64/Disassembler/LLVMBuild.txt
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/Disassembler/LLVMBuild.txt
> llvm/trunk/lib/Target/AArch64/Disassembler/Makefile
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/Disassembler/Makefile
> llvm/trunk/lib/Target/AArch64/InstPrinter/AArch64InstPrinter.cpp
> llvm/trunk/lib/Target/AArch64/InstPrinter/AArch64InstPrinter.h
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/InstPrinter/ARM64InstPrinter.h
> llvm/trunk/lib/Target/AArch64/InstPrinter/CMakeLists.txt
> llvm/trunk/lib/Target/AArch64/InstPrinter/LLVMBuild.txt
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/InstPrinter/LLVMBuild.txt
> llvm/trunk/lib/Target/AArch64/InstPrinter/Makefile
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/InstPrinter/Makefile
> llvm/trunk/lib/Target/AArch64/LLVMBuild.txt
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/LLVMBuild.txt
> llvm/trunk/lib/Target/AArch64/MCTargetDesc/AArch64AddressingModes.h
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/MCTargetDesc/ARM64AddressingModes.h
> llvm/trunk/lib/Target/AArch64/MCTargetDesc/AArch64AsmBackend.cpp
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/MCTargetDesc/ARM64AsmBackend.cpp
> llvm/trunk/lib/Target/AArch64/MCTargetDesc/AArch64ELFObjectWriter.cpp
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/MCTargetDesc/ARM64ELFObjectWriter.cpp
> llvm/trunk/lib/Target/AArch64/MCTargetDesc/AArch64ELFStreamer.cpp
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/MCTargetDesc/ARM64ELFStreamer.cpp
> llvm/trunk/lib/Target/AArch64/MCTargetDesc/AArch64ELFStreamer.h
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/MCTargetDesc/ARM64ELFStreamer.h
> llvm/trunk/lib/Target/AArch64/MCTargetDesc/AArch64FixupKinds.h
> llvm/trunk/lib/Target/AArch64/MCTargetDesc/AArch64MCAsmInfo.cpp
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/MCTargetDesc/ARM64MCAsmInfo.cpp
> llvm/trunk/lib/Target/AArch64/MCTargetDesc/AArch64MCAsmInfo.h
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/MCTargetDesc/ARM64MCAsmInfo.h
> llvm/trunk/lib/Target/AArch64/MCTargetDesc/AArch64MCCodeEmitter.cpp
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/MCTargetDesc/ARM64MCCodeEmitter.cpp
> llvm/trunk/lib/Target/AArch64/MCTargetDesc/AArch64MCExpr.cpp
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/MCTargetDesc/ARM64MCExpr.cpp
> llvm/trunk/lib/Target/AArch64/MCTargetDesc/AArch64MCExpr.h
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/MCTargetDesc/ARM64MCExpr.h
> llvm/trunk/lib/Target/AArch64/MCTargetDesc/AArch64MCTargetDesc.cpp
> llvm/trunk/lib/Target/AArch64/MCTargetDesc/AArch64MCTargetDesc.h
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/MCTargetDesc/ARM64MCTargetDesc.h
> llvm/trunk/lib/Target/AArch64/MCTargetDesc/AArch64MachObjectWriter.cpp
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/MCTargetDesc/ARM64MachObjectWriter.cpp
> llvm/trunk/lib/Target/AArch64/MCTargetDesc/CMakeLists.txt
> llvm/trunk/lib/Target/AArch64/MCTargetDesc/LLVMBuild.txt
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/MCTargetDesc/LLVMBuild.txt
> llvm/trunk/lib/Target/AArch64/MCTargetDesc/Makefile
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/MCTargetDesc/Makefile
> llvm/trunk/lib/Target/AArch64/Makefile
> llvm/trunk/lib/Target/AArch64/TargetInfo/AArch64TargetInfo.cpp
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/TargetInfo/ARM64TargetInfo.cpp
> llvm/trunk/lib/Target/AArch64/TargetInfo/CMakeLists.txt
> llvm/trunk/lib/Target/AArch64/TargetInfo/LLVMBuild.txt
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/Utils/LLVMBuild.txt
> llvm/trunk/lib/Target/AArch64/TargetInfo/Makefile
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/TargetInfo/Makefile
> llvm/trunk/lib/Target/AArch64/Utils/AArch64BaseInfo.cpp
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/Utils/ARM64BaseInfo.cpp
> llvm/trunk/lib/Target/AArch64/Utils/AArch64BaseInfo.h
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/Utils/ARM64BaseInfo.h
> llvm/trunk/lib/Target/AArch64/Utils/CMakeLists.txt
> llvm/trunk/lib/Target/AArch64/Utils/LLVMBuild.txt
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/TargetInfo/LLVMBuild.txt
> llvm/trunk/lib/Target/AArch64/Utils/Makefile
> - copied, changed from r209576, llvm/trunk/lib/Target/ARM64/Utils/Makefile
> llvm/trunk/test/Analysis/CostModel/AArch64/
> llvm/trunk/test/Analysis/CostModel/AArch64/lit.local.cfg
> - copied, changed from r209576, llvm/trunk/test/Analysis/CostModel/ARM64/lit.local.cfg
> llvm/trunk/test/Analysis/CostModel/AArch64/select.ll
> - copied, changed from r209576, llvm/trunk/test/Analysis/CostModel/ARM64/select.ll
> llvm/trunk/test/Analysis/CostModel/AArch64/store.ll
> - copied, changed from r209576, llvm/trunk/test/Analysis/CostModel/ARM64/store.ll
> llvm/trunk/test/CodeGen/AArch64/aarch64-neon-v1i1-setcc.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/aarch64-neon-v1i1-setcc.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-2011-03-09-CPSRSpill.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/2011-03-09-CPSRSpill.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-2011-03-17-AsmPrinterCrash.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/2011-03-17-AsmPrinterCrash.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-2011-03-21-Unaligned-Frame-Index.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/2011-03-21-Unaligned-Frame-Index.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-2011-04-21-CPSRBug.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/2011-04-21-CPSRBug.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-2011-10-18-LdStOptBug.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/2011-10-18-LdStOptBug.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-2012-01-11-ComparisonDAGCrash.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/2012-01-11-ComparisonDAGCrash.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-2012-05-07-DAGCombineVectorExtract.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/2012-05-07-DAGCombineVectorExtract.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-2012-05-07-MemcpyAlignBug.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/2012-05-07-MemcpyAlignBug.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-2012-05-09-LOADgot-bug.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/2012-05-09-LOADgot-bug.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-2012-05-22-LdStOptBug.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/2012-05-22-LdStOptBug.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-2012-06-06-FPToUI.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/2012-06-06-FPToUI.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-2012-07-11-InstrEmitterBug.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/2012-07-11-InstrEmitterBug.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-2013-01-13-ffast-fcmp.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/2013-01-13-ffast-fcmp.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-2013-01-23-frem-crash.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/2013-01-23-frem-crash.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-2013-01-23-sext-crash.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/2013-01-23-sext-crash.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-2013-02-12-shufv8i8.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/2013-02-12-shufv8i8.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-2014-04-16-AnInfiniteLoopInDAGCombine.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/2014-04-16-AnInfiniteLoopInDAGCombine.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-2014-04-28-sqshl-uqshl-i64Contant.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/2014-04-28-sqshl-uqshl-i64Contant.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-2014-04-29-EXT-undef-mask.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/2014-04-29-EXT-undef-mask.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-AdvSIMD-Scalar.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/AdvSIMD-Scalar.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-aapcs.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/aapcs.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-abi-varargs.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/abi-varargs.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-abi.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/abi.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-abi_align.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/abi_align.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-addp.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/addp.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-addr-mode-folding.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/addr-mode-folding.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-addr-type-promotion.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/addr-type-promotion.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-addrmode.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/addrmode.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-alloc-no-stack-realign.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/alloc-no-stack-realign.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-alloca-frame-pointer-offset.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/alloca-frame-pointer-offset.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-andCmpBrToTBZ.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/andCmpBrToTBZ.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-ands-bad-peephole.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/ands-bad-peephole.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-anyregcc-crash.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/anyregcc-crash.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-anyregcc.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/anyregcc.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-arith-saturating.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/arith-saturating.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-arith.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/arith.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-arm64-dead-def-elimination-flag.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/arm64-dead-def-elimination-flag.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-atomic-128.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/atomic-128.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-atomic.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/atomic.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-basic-pic.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/basic-pic.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-big-endian-bitconverts.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/big-endian-bitconverts.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-big-endian-eh.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/big-endian-eh.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-big-endian-varargs.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/big-endian-varargs.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-big-endian-vector-callee.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/big-endian-vector-callee.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-big-endian-vector-caller.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/big-endian-vector-caller.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-big-imm-offsets.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/big-imm-offsets.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-big-stack.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/big-stack.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-bitfield-extract.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/bitfield-extract.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-blockaddress.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/blockaddress.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-build-vector.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/build-vector.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-call-tailcalls.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/call-tailcalls.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-cast-opt.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/cast-opt.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-ccmp-heuristics.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/ccmp-heuristics.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-ccmp.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/ccmp.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-clrsb.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/clrsb.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-coalesce-ext.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/coalesce-ext.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-code-model-large-abs.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/code-model-large-abs.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-collect-loh-garbage-crash.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/collect-loh-garbage-crash.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-collect-loh-str.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/collect-loh-str.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-collect-loh.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/collect-loh.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-complex-copy-noneon.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/complex-copy-noneon.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-complex-ret.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/complex-ret.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-const-addr.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/const-addr.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-convert-v2f64-v2i32.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/convert-v2f64-v2i32.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-convert-v2i32-v2f64.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/convert-v2i32-v2f64.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-copy-tuple.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/copy-tuple.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-crc32.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/crc32.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-crypto.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/crypto.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-cse.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/cse.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-csel.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/csel.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-cvt.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/cvt.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-dagcombiner-convergence.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/dagcombiner-convergence.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-dagcombiner-dead-indexed-load.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/dagcombiner-dead-indexed-load.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-dagcombiner-indexed-load.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/dagcombiner-indexed-load.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-dagcombiner-load-slicing.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/dagcombiner-load-slicing.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-dead-def-frame-index.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/dead-def-frame-index.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-dead-register-def-bug.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/dead-register-def-bug.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-dup.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/dup.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-early-ifcvt.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/early-ifcvt.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-elf-calls.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/elf-calls.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-elf-constpool.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/elf-constpool.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-elf-globals.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/elf-globals.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-ext.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/ext.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-extend-int-to-fp.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/extend-int-to-fp.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-extend.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/extend.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-extern-weak.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/extern-weak.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-extload-knownzero.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/extload-knownzero.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-extract.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/extract.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-extract_subvector.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/extract_subvector.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-fast-isel-addr-offset.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/fast-isel-addr-offset.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-fast-isel-alloca.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/fast-isel-alloca.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-fast-isel-br.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/fast-isel-br.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-fast-isel-call.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/fast-isel-call.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-fast-isel-conversion.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/fast-isel-conversion.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-fast-isel-fcmp.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/fast-isel-fcmp.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-fast-isel-gv.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/fast-isel-gv.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-fast-isel-icmp.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/fast-isel-icmp.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-fast-isel-indirectbr.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/fast-isel-indirectbr.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-fast-isel-intrinsic.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/fast-isel-intrinsic.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-fast-isel-materialize.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/fast-isel-materialize.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-fast-isel-noconvert.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/fast-isel-noconvert.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-fast-isel-rem.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/fast-isel-rem.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-fast-isel-ret.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/fast-isel-ret.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-fast-isel-select.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/fast-isel-select.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-fast-isel.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/fast-isel.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-fastcc-tailcall.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/fastcc-tailcall.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-fastisel-gep-promote-before-add.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/fastisel-gep-promote-before-add.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-fcmp-opt.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/fcmp-opt.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-fcopysign.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/fcopysign.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-fixed-point-scalar-cvt-dagcombine.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/fixed-point-scalar-cvt-dagcombine.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-fmadd.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/fmadd.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-fmax.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/fmax.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-fminv.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-fmuladd.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/fmuladd.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-fold-address.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/fold-address.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-fold-lsl.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/fold-lsl.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-fp-contract-zero.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/fp-contract-zero.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-fp-imm.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/fp-imm.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-fp.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/fp.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-fp128-folding.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/fp128-folding.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-fp128.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/fp128.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-frame-index.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/frame-index.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-frameaddr.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/frameaddr.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-global-address.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/global-address.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-hello.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/hello.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-i16-subreg-extract.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/i16-subreg-extract.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-icmp-opt.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/icmp-opt.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-illegal-float-ops.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/illegal-float-ops.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-indexed-memory.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/indexed-memory.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-indexed-vector-ldst-2.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/indexed-vector-ldst-2.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-indexed-vector-ldst.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/indexed-vector-ldst.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-inline-asm-error-I.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/inline-asm-error-I.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-inline-asm-error-J.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/inline-asm-error-J.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-inline-asm-error-K.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/inline-asm-error-K.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-inline-asm-error-L.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/inline-asm-error-L.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-inline-asm-error-M.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/inline-asm-error-M.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-inline-asm-error-N.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/inline-asm-error-N.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-inline-asm-zero-reg-error.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/inline-asm-zero-reg-error.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-inline-asm.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/inline-asm.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-join-reserved.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/join-reserved.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-jumptable.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/jumptable.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-large-frame.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/aarch64-large-frame.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-ld1.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/ld1.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-ldp.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/ldp.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-ldur.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/ldur.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-ldxr-stxr.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/ldxr-stxr.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-leaf.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/leaf.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-long-shift.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/long-shift.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-memcpy-inline.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/memcpy-inline.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-memset-inline.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/memset-inline.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-memset-to-bzero.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/memset-to-bzero.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-misched-basic-A53.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/misched-basic-A53.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-misched-forwarding-A53.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/misched-forwarding-A53.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-movi.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/movi.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-mul.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/mul.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-named-reg-alloc.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/named-reg-alloc.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-named-reg-notareg.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/named-reg-notareg.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-neg.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/neg.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-neon-2velem-high.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/aarch64-neon-2velem-high.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-neon-2velem.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/aarch64-neon-2velem.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-neon-3vdiff.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/aarch64-neon-3vdiff.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-neon-aba-abd.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/aarch64-neon-aba-abd.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-neon-across.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/aarch64-neon-across.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-neon-add-pairwise.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/aarch64-neon-add-pairwise.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-neon-add-sub.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/aarch64-neon-add-sub.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-neon-compare-instructions.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/neon-compare-instructions.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-neon-copy.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/aarch64-neon-copy.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-neon-copyPhysReg-tuple.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/aarch64-neon-copyPhysReg-tuple.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-neon-mul-div.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/aarch64-neon-mul-div.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-neon-scalar-by-elem-mul.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/aarch64-neon-scalar-by-elem-mul.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-neon-select_cc.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/aarch64-neon-select_cc.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-neon-simd-ldst-one.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/aarch64-neon-simd-ldst-one.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-neon-simd-shift.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/aarch64-neon-simd-shift.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-neon-simd-vget.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/aarch64-neon-simd-vget.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-neon-v1i1-setcc.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/neon-v1i1-setcc.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-neon-vector-list-spill.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/aarch64-neon-vector-list-spill.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-patchpoint.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/patchpoint.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-pic-local-symbol.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/pic-local-symbol.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-platform-reg.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/platform-reg.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-popcnt.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/popcnt.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-prefetch.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/prefetch.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-promote-const.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/promote-const.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-redzone.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/redzone.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-reg-copy-noneon.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/reg-copy-noneon.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-register-offset-addressing.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/register-offset-addressing.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-register-pairing.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/register-pairing.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-regress-f128csel-flags.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/regress-f128csel-flags.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-regress-interphase-shift.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/regress-interphase-shift.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-return-vector.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/return-vector.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-returnaddr.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/returnaddr.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-rev.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/rev.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-rounding.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/rounding.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-scaled_iv.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/scaled_iv.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-scvt.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/scvt.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-shifted-sext.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/shifted-sext.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-simd-scalar-to-vector.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-simplest-elf.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/simplest-elf.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-sincos.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/sincos.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-sitofp-combine-chains.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/sitofp-combine-chains.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-sli-sri-opt.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/sli-sri-opt.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-smaxv.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/smaxv.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-sminv.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/sminv.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-spill-lr.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/spill-lr.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-spill.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/spill.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-st1.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/st1.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-stack-no-frame.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/stack-no-frame.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-stackmap.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/stackmap.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-stackpointer.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/stackpointer.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-stacksave.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/stacksave.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-stp.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/stp.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-strict-align.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/strict-align.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-stur.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/stur.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-subsections.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/subsections.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-subvector-extend.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/subvector-extend.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-swizzle-tbl-i16-layout.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/swizzle-tbl-i16-layout.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-tbl.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-this-return.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/this-return.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-tls-darwin.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/tls-darwin.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-tls-dynamic-together.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/tls-dynamic-together.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-tls-dynamics.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/tls-dynamics.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-tls-execs.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/tls-execs.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-trap.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/trap.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-trn.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/trn.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-trunc-store.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/trunc-store.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-umaxv.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/umaxv.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-uminv.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/uminv.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-umov.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/umov.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-unaligned_ldst.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/unaligned_ldst.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-uzp.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/uzp.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-vaargs.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/vaargs.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-vabs.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/vabs.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-vadd.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/vadd.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-vaddlv.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/vaddlv.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-vaddv.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/vaddv.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-variadic-aapcs.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/variadic-aapcs.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-vbitwise.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/vbitwise.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-vclz.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/vclz.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-vcmp.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/vcmp.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-vcnt.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-vcombine.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/vcombine.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-vcvt.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/vcvt.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-vcvt_f.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/vcvt_f.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-vcvt_f32_su32.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/vcvt_f32_su32.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-vcvt_n.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-vcvt_su32_f32.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/vcvt_su32_f32.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-vcvtxd_f32_f64.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/vcvtxd_f32_f64.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-vecCmpBr.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/vecCmpBr.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-vecFold.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/vecFold.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-vector-ext.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/vector-ext.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-vector-imm.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/vector-imm.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-vector-insertion.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/vector-insertion.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-vector-ldst.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/vector-ldst.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-vext.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/vext.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-vext_reverse.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/vext_reverse.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-vfloatintrinsics.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/vfloatintrinsics.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-vhadd.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/vhadd.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-vhsub.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/vhsub.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-virtual_base.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/virtual_base.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-vmax.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/vmax.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-vminmaxnm.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-vmovn.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/vmovn.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-vmul.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/vmul.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-volatile.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/volatile.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-vpopcnt.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/vpopcnt.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-vqadd.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/vqadd.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-vqsub.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-vselect.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/vselect.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-vsetcc_fp.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/vsetcc_fp.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-vshift.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/vshift.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-vshr.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/vshr.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-vshuffle.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/vshuffle.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-vsqrt.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/vsqrt.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-vsra.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/vsra.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-vsub.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/vsub.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-weak-reference.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/weak-reference.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-xaluo.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/xaluo.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-zero-cycle-regmov.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/zero-cycle-regmov.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-zero-cycle-zeroing.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/zero-cycle-zeroing.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-zext.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/zext.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-zextload-unscaled.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/zextload-unscaled.ll
> llvm/trunk/test/CodeGen/AArch64/arm64-zip.ll
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/zip.ll
> llvm/trunk/test/CodeGen/AArch64/lit.local.cfg
> - copied, changed from r209576, llvm/trunk/test/CodeGen/ARM64/lit.local.cfg
> llvm/trunk/test/DebugInfo/AArch64/struct_by_value.ll
> - copied, changed from r209576, llvm/trunk/test/DebugInfo/ARM64/struct_by_value.ll
> llvm/trunk/test/MC/AArch64/arm64-adr.s
> - copied, changed from r209576, llvm/trunk/test/MC/ARM64/adr.s
> llvm/trunk/test/MC/AArch64/arm64-advsimd.s
> - copied, changed from r209576, llvm/trunk/test/MC/ARM64/advsimd.s
> llvm/trunk/test/MC/AArch64/arm64-aliases.s
> - copied, changed from r209576, llvm/trunk/test/MC/ARM64/aliases.s
> llvm/trunk/test/MC/AArch64/arm64-arithmetic-encoding.s
> - copied, changed from r209576, llvm/trunk/test/MC/ARM64/arithmetic-encoding.s
> llvm/trunk/test/MC/AArch64/arm64-arm64-fixup.s
> - copied, changed from r209576, llvm/trunk/test/MC/ARM64/arm64-fixup.s
> llvm/trunk/test/MC/AArch64/arm64-basic-a64-instructions.s
> - copied, changed from r209576, llvm/trunk/test/MC/ARM64/basic-a64-instructions.s
> llvm/trunk/test/MC/AArch64/arm64-be-datalayout.s
> - copied, changed from r209576, llvm/trunk/test/MC/ARM64/be-datalayout.s
> llvm/trunk/test/MC/AArch64/arm64-bitfield-encoding.s
> - copied, changed from r209576, llvm/trunk/test/MC/ARM64/bitfield-encoding.s
> llvm/trunk/test/MC/AArch64/arm64-branch-encoding.s
> - copied, changed from r209576, llvm/trunk/test/MC/ARM64/branch-encoding.s
> llvm/trunk/test/MC/AArch64/arm64-condbr-without-dots.s
> - copied, changed from r209576, llvm/trunk/test/MC/ARM64/condbr-without-dots.s
> llvm/trunk/test/MC/AArch64/arm64-crypto.s
> - copied, changed from r209576, llvm/trunk/test/MC/ARM64/crypto.s
> llvm/trunk/test/MC/AArch64/arm64-diagno-predicate.s
> - copied, changed from r209576, llvm/trunk/test/MC/ARM64/diagno-predicate.s
> llvm/trunk/test/MC/AArch64/arm64-diags.s
> - copied, changed from r209576, llvm/trunk/test/MC/ARM64/diags.s
> llvm/trunk/test/MC/AArch64/arm64-directive_loh.s
> - copied, changed from r209576, llvm/trunk/test/MC/ARM64/directive_loh.s
> llvm/trunk/test/MC/AArch64/arm64-elf-reloc-condbr.s
> - copied, changed from r209576, llvm/trunk/test/MC/ARM64/elf-reloc-condbr.s
> llvm/trunk/test/MC/AArch64/arm64-elf-relocs.s
> - copied, changed from r209576, llvm/trunk/test/MC/ARM64/elf-relocs.s
> llvm/trunk/test/MC/AArch64/arm64-fp-encoding.s
> - copied, changed from r209576, llvm/trunk/test/MC/ARM64/fp-encoding.s
> llvm/trunk/test/MC/AArch64/arm64-large-relocs.s
> - copied, changed from r209576, llvm/trunk/test/MC/ARM64/large-relocs.s
> llvm/trunk/test/MC/AArch64/arm64-leaf-compact-unwind.s
> - copied, changed from r209576, llvm/trunk/test/MC/ARM64/leaf-compact-unwind.s
> llvm/trunk/test/MC/AArch64/arm64-logical-encoding.s
> - copied, changed from r209576, llvm/trunk/test/MC/ARM64/logical-encoding.s
> llvm/trunk/test/MC/AArch64/arm64-mapping-across-sections.s
> - copied, changed from r209576, llvm/trunk/test/MC/ARM64/mapping-across-sections.s
> llvm/trunk/test/MC/AArch64/arm64-mapping-within-section.s
> - copied, changed from r209576, llvm/trunk/test/MC/ARM64/mapping-within-section.s
> llvm/trunk/test/MC/AArch64/arm64-memory.s
> - copied, changed from r209576, llvm/trunk/test/MC/ARM64/memory.s
> llvm/trunk/test/MC/AArch64/arm64-nv-cond.s
> - copied, changed from r209576, llvm/trunk/test/MC/ARM64/nv-cond.s
> llvm/trunk/test/MC/AArch64/arm64-optional-hash.s
> - copied, changed from r209576, llvm/trunk/test/MC/ARM64/optional-hash.s
> llvm/trunk/test/MC/AArch64/arm64-separator.s
> - copied, changed from r209576, llvm/trunk/test/MC/ARM64/separator.s
> llvm/trunk/test/MC/AArch64/arm64-simd-ldst.s
> - copied, changed from r209576, llvm/trunk/test/MC/ARM64/simd-ldst.s
> llvm/trunk/test/MC/AArch64/arm64-small-data-fixups.s
> - copied, changed from r209576, llvm/trunk/test/MC/ARM64/small-data-fixups.s
> llvm/trunk/test/MC/AArch64/arm64-spsel-sysreg.s
> - copied, changed from r209576, llvm/trunk/test/MC/ARM64/spsel-sysreg.s
> llvm/trunk/test/MC/AArch64/arm64-system-encoding.s
> - copied, changed from r209576, llvm/trunk/test/MC/ARM64/system-encoding.s
> llvm/trunk/test/MC/AArch64/arm64-target-specific-sysreg.s
> - copied, changed from r209576, llvm/trunk/test/MC/ARM64/target-specific-sysreg.s
> llvm/trunk/test/MC/AArch64/arm64-tls-modifiers-darwin.s
> - copied, changed from r209576, llvm/trunk/test/MC/ARM64/tls-modifiers-darwin.s
> llvm/trunk/test/MC/AArch64/arm64-tls-relocs.s
> - copied, changed from r209576, llvm/trunk/test/MC/ARM64/tls-relocs.s
> llvm/trunk/test/MC/AArch64/arm64-v128_lo-diagnostics.s
> - copied, changed from r209576, llvm/trunk/test/MC/ARM64/v128_lo-diagnostics.s
> llvm/trunk/test/MC/AArch64/arm64-variable-exprs.s
> - copied, changed from r209576, llvm/trunk/test/MC/ARM64/variable-exprs.s
> llvm/trunk/test/MC/AArch64/arm64-vector-lists.s
> - copied, changed from r209576, llvm/trunk/test/MC/ARM64/vector-lists.s
> llvm/trunk/test/MC/AArch64/arm64-verbose-vector-case.s
> - copied, changed from r209576, llvm/trunk/test/MC/ARM64/verbose-vector-case.s
> llvm/trunk/test/MC/Disassembler/AArch64/arm64-advsimd.txt
> - copied, changed from r209576, llvm/trunk/test/MC/Disassembler/ARM64/advsimd.txt
> llvm/trunk/test/MC/Disassembler/AArch64/arm64-arithmetic.txt
> - copied, changed from r209576, llvm/trunk/test/MC/Disassembler/ARM64/arithmetic.txt
> llvm/trunk/test/MC/Disassembler/AArch64/arm64-basic-a64-undefined.txt
> - copied, changed from r209576, llvm/trunk/test/MC/Disassembler/ARM64/basic-a64-undefined.txt
> llvm/trunk/test/MC/Disassembler/AArch64/arm64-bitfield.txt
> - copied, changed from r209576, llvm/trunk/test/MC/Disassembler/ARM64/bitfield.txt
> llvm/trunk/test/MC/Disassembler/AArch64/arm64-branch.txt
> - copied, changed from r209576, llvm/trunk/test/MC/Disassembler/ARM64/branch.txt
> llvm/trunk/test/MC/Disassembler/AArch64/arm64-canonical-form.txt
> - copied, changed from r209576, llvm/trunk/test/MC/Disassembler/ARM64/canonical-form.txt
> llvm/trunk/test/MC/Disassembler/AArch64/arm64-crc32.txt
> - copied, changed from r209576, llvm/trunk/test/MC/Disassembler/ARM64/crc32.txt
> llvm/trunk/test/MC/Disassembler/AArch64/arm64-crypto.txt
> - copied, changed from r209576, llvm/trunk/test/MC/Disassembler/ARM64/crypto.txt
> llvm/trunk/test/MC/Disassembler/AArch64/arm64-invalid-logical.txt
> - copied, changed from r209576, llvm/trunk/test/MC/Disassembler/ARM64/invalid-logical.txt
> llvm/trunk/test/MC/Disassembler/AArch64/arm64-logical.txt
> - copied, changed from r209576, llvm/trunk/test/MC/Disassembler/ARM64/logical.txt
> llvm/trunk/test/MC/Disassembler/AArch64/arm64-memory.txt
> - copied, changed from r209576, llvm/trunk/test/MC/Disassembler/ARM64/memory.txt
> llvm/trunk/test/MC/Disassembler/AArch64/arm64-non-apple-fmov.txt
> - copied, changed from r209576, llvm/trunk/test/MC/Disassembler/ARM64/non-apple-fmov.txt
> llvm/trunk/test/MC/Disassembler/AArch64/arm64-scalar-fp.txt
> - copied, changed from r209576, llvm/trunk/test/MC/Disassembler/ARM64/scalar-fp.txt
> llvm/trunk/test/MC/Disassembler/AArch64/arm64-system.txt
> - copied, changed from r209576, llvm/trunk/test/MC/Disassembler/ARM64/system.txt
> llvm/trunk/test/MC/MachO/AArch64/
> llvm/trunk/test/MC/MachO/AArch64/darwin-ARM64-local-label-diff.s
> - copied, changed from r209576, llvm/trunk/test/MC/MachO/ARM64/darwin-ARM64-local-label-diff.s
> llvm/trunk/test/MC/MachO/AArch64/darwin-ARM64-reloc.s
> - copied, changed from r209576, llvm/trunk/test/MC/MachO/ARM64/darwin-ARM64-reloc.s
> llvm/trunk/test/MC/MachO/AArch64/lit.local.cfg
> - copied, changed from r209576, llvm/trunk/test/MC/MachO/ARM64/lit.local.cfg
> llvm/trunk/test/Transforms/ConstantHoisting/AArch64/
> llvm/trunk/test/Transforms/ConstantHoisting/AArch64/const-addr.ll
> - copied, changed from r209576, llvm/trunk/test/Transforms/ConstantHoisting/ARM64/const-addr.ll
> llvm/trunk/test/Transforms/ConstantHoisting/AArch64/large-immediate.ll
> - copied, changed from r209576, llvm/trunk/test/Transforms/ConstantHoisting/ARM64/large-immediate.ll
> llvm/trunk/test/Transforms/ConstantHoisting/AArch64/lit.local.cfg
> - copied, changed from r209576, llvm/trunk/test/Analysis/CostModel/ARM64/lit.local.cfg
> llvm/trunk/test/Transforms/GlobalMerge/AArch64/arm64.ll
> - copied, changed from r209576, llvm/trunk/test/Transforms/GlobalMerge/ARM64/arm64.ll
> llvm/trunk/test/Transforms/GlobalMerge/AArch64/lit.local.cfg
> - copied, changed from r209576, llvm/trunk/test/DebugInfo/ARM64/lit.local.cfg
> llvm/trunk/test/Transforms/LoopStrengthReduce/AArch64/
> llvm/trunk/test/Transforms/LoopStrengthReduce/AArch64/lit.local.cfg
> - copied, changed from r209576, llvm/trunk/test/Transforms/LoopStrengthReduce/ARM64/lit.local.cfg
> llvm/trunk/test/Transforms/LoopStrengthReduce/AArch64/lsr-memcpy.ll
> - copied, changed from r209576, llvm/trunk/test/Transforms/LoopStrengthReduce/ARM64/lsr-memcpy.ll
> llvm/trunk/test/Transforms/LoopStrengthReduce/AArch64/lsr-memset.ll
> - copied, changed from r209576, llvm/trunk/test/Transforms/LoopStrengthReduce/ARM64/lsr-memset.ll
> llvm/trunk/test/Transforms/LoopStrengthReduce/AArch64/req-regs.ll
> - copied, changed from r209576, llvm/trunk/test/Transforms/LoopStrengthReduce/ARM64/req-regs.ll
> llvm/trunk/test/Transforms/LoopVectorize/AArch64/arm64-unroll.ll
> - copied, changed from r209576, llvm/trunk/test/Transforms/LoopVectorize/ARM64/arm64-unroll.ll
> llvm/trunk/test/Transforms/LoopVectorize/AArch64/gather-cost.ll
> - copied, changed from r209576, llvm/trunk/test/Transforms/LoopVectorize/ARM64/gather-cost.ll
> llvm/trunk/test/Transforms/SLPVectorizer/AArch64/
> llvm/trunk/test/Transforms/SLPVectorizer/AArch64/lit.local.cfg
> - copied, changed from r209576, llvm/trunk/test/Analysis/CostModel/ARM64/lit.local.cfg
> llvm/trunk/test/Transforms/SLPVectorizer/AArch64/mismatched-intrinsics.ll
> - copied, changed from r209576, llvm/trunk/test/Transforms/SLPVectorizer/ARM64/mismatched-intrinsics.ll
> Removed:
> llvm/trunk/include/llvm/IR/IntrinsicsARM64.td
> llvm/trunk/lib/Target/ARM64/ARM64.h
> llvm/trunk/lib/Target/ARM64/ARM64.td
> llvm/trunk/lib/Target/ARM64/ARM64AddressTypePromotion.cpp
> llvm/trunk/lib/Target/ARM64/ARM64AdvSIMDScalarPass.cpp
> llvm/trunk/lib/Target/ARM64/ARM64AsmPrinter.cpp
> llvm/trunk/lib/Target/ARM64/ARM64BranchRelaxation.cpp
> llvm/trunk/lib/Target/ARM64/ARM64CallingConv.h
> llvm/trunk/lib/Target/ARM64/ARM64CallingConvention.td
> llvm/trunk/lib/Target/ARM64/ARM64CleanupLocalDynamicTLSPass.cpp
> llvm/trunk/lib/Target/ARM64/ARM64CollectLOH.cpp
> llvm/trunk/lib/Target/ARM64/ARM64ConditionalCompares.cpp
> llvm/trunk/lib/Target/ARM64/ARM64DeadRegisterDefinitionsPass.cpp
> llvm/trunk/lib/Target/ARM64/ARM64ExpandPseudoInsts.cpp
> llvm/trunk/lib/Target/ARM64/ARM64FastISel.cpp
> llvm/trunk/lib/Target/ARM64/ARM64FrameLowering.cpp
> llvm/trunk/lib/Target/ARM64/ARM64FrameLowering.h
> llvm/trunk/lib/Target/ARM64/ARM64ISelDAGToDAG.cpp
> llvm/trunk/lib/Target/ARM64/ARM64ISelLowering.cpp
> llvm/trunk/lib/Target/ARM64/ARM64ISelLowering.h
> llvm/trunk/lib/Target/ARM64/ARM64InstrAtomics.td
> llvm/trunk/lib/Target/ARM64/ARM64InstrFormats.td
> llvm/trunk/lib/Target/ARM64/ARM64InstrInfo.cpp
> llvm/trunk/lib/Target/ARM64/ARM64InstrInfo.h
> llvm/trunk/lib/Target/ARM64/ARM64InstrInfo.td
> llvm/trunk/lib/Target/ARM64/ARM64LoadStoreOptimizer.cpp
> llvm/trunk/lib/Target/ARM64/ARM64MCInstLower.cpp
> llvm/trunk/lib/Target/ARM64/ARM64MCInstLower.h
> llvm/trunk/lib/Target/ARM64/ARM64MachineFunctionInfo.h
> llvm/trunk/lib/Target/ARM64/ARM64PerfectShuffle.h
> llvm/trunk/lib/Target/ARM64/ARM64PromoteConstant.cpp
> llvm/trunk/lib/Target/ARM64/ARM64RegisterInfo.cpp
> llvm/trunk/lib/Target/ARM64/ARM64RegisterInfo.h
> llvm/trunk/lib/Target/ARM64/ARM64RegisterInfo.td
> llvm/trunk/lib/Target/ARM64/ARM64SchedA53.td
> llvm/trunk/lib/Target/ARM64/ARM64SchedCyclone.td
> llvm/trunk/lib/Target/ARM64/ARM64Schedule.td
> llvm/trunk/lib/Target/ARM64/ARM64SelectionDAGInfo.cpp
> llvm/trunk/lib/Target/ARM64/ARM64SelectionDAGInfo.h
> llvm/trunk/lib/Target/ARM64/ARM64StorePairSuppress.cpp
> llvm/trunk/lib/Target/ARM64/ARM64Subtarget.cpp
> llvm/trunk/lib/Target/ARM64/ARM64Subtarget.h
> llvm/trunk/lib/Target/ARM64/ARM64TargetMachine.cpp
> llvm/trunk/lib/Target/ARM64/ARM64TargetMachine.h
> llvm/trunk/lib/Target/ARM64/ARM64TargetObjectFile.cpp
> llvm/trunk/lib/Target/ARM64/ARM64TargetObjectFile.h
> llvm/trunk/lib/Target/ARM64/ARM64TargetTransformInfo.cpp
> llvm/trunk/lib/Target/ARM64/AsmParser/ARM64AsmParser.cpp
> llvm/trunk/lib/Target/ARM64/AsmParser/CMakeLists.txt
> llvm/trunk/lib/Target/ARM64/AsmParser/LLVMBuild.txt
> llvm/trunk/lib/Target/ARM64/AsmParser/Makefile
> llvm/trunk/lib/Target/ARM64/CMakeLists.txt
> llvm/trunk/lib/Target/ARM64/Disassembler/ARM64Disassembler.cpp
> llvm/trunk/lib/Target/ARM64/Disassembler/ARM64Disassembler.h
> llvm/trunk/lib/Target/ARM64/Disassembler/ARM64ExternalSymbolizer.cpp
> llvm/trunk/lib/Target/ARM64/Disassembler/ARM64ExternalSymbolizer.h
> llvm/trunk/lib/Target/ARM64/Disassembler/CMakeLists.txt
> llvm/trunk/lib/Target/ARM64/Disassembler/LLVMBuild.txt
> llvm/trunk/lib/Target/ARM64/Disassembler/Makefile
> llvm/trunk/lib/Target/ARM64/InstPrinter/ARM64InstPrinter.cpp
> llvm/trunk/lib/Target/ARM64/InstPrinter/ARM64InstPrinter.h
> llvm/trunk/lib/Target/ARM64/InstPrinter/CMakeLists.txt
> llvm/trunk/lib/Target/ARM64/InstPrinter/LLVMBuild.txt
> llvm/trunk/lib/Target/ARM64/InstPrinter/Makefile
> llvm/trunk/lib/Target/ARM64/LLVMBuild.txt
> llvm/trunk/lib/Target/ARM64/MCTargetDesc/ARM64AddressingModes.h
> llvm/trunk/lib/Target/ARM64/MCTargetDesc/ARM64AsmBackend.cpp
> llvm/trunk/lib/Target/ARM64/MCTargetDesc/ARM64ELFObjectWriter.cpp
> llvm/trunk/lib/Target/ARM64/MCTargetDesc/ARM64ELFStreamer.cpp
> llvm/trunk/lib/Target/ARM64/MCTargetDesc/ARM64ELFStreamer.h
> llvm/trunk/lib/Target/ARM64/MCTargetDesc/ARM64FixupKinds.h
> llvm/trunk/lib/Target/ARM64/MCTargetDesc/ARM64MCAsmInfo.cpp
> llvm/trunk/lib/Target/ARM64/MCTargetDesc/ARM64MCAsmInfo.h
> llvm/trunk/lib/Target/ARM64/MCTargetDesc/ARM64MCCodeEmitter.cpp
> llvm/trunk/lib/Target/ARM64/MCTargetDesc/ARM64MCExpr.cpp
> llvm/trunk/lib/Target/ARM64/MCTargetDesc/ARM64MCExpr.h
> llvm/trunk/lib/Target/ARM64/MCTargetDesc/ARM64MCTargetDesc.cpp
> llvm/trunk/lib/Target/ARM64/MCTargetDesc/ARM64MCTargetDesc.h
> llvm/trunk/lib/Target/ARM64/MCTargetDesc/ARM64MachObjectWriter.cpp
> llvm/trunk/lib/Target/ARM64/MCTargetDesc/CMakeLists.txt
> llvm/trunk/lib/Target/ARM64/MCTargetDesc/LLVMBuild.txt
> llvm/trunk/lib/Target/ARM64/MCTargetDesc/Makefile
> llvm/trunk/lib/Target/ARM64/Makefile
> llvm/trunk/lib/Target/ARM64/TargetInfo/ARM64TargetInfo.cpp
> llvm/trunk/lib/Target/ARM64/TargetInfo/CMakeLists.txt
> llvm/trunk/lib/Target/ARM64/TargetInfo/LLVMBuild.txt
> llvm/trunk/lib/Target/ARM64/TargetInfo/Makefile
> llvm/trunk/lib/Target/ARM64/Utils/ARM64BaseInfo.cpp
> llvm/trunk/lib/Target/ARM64/Utils/ARM64BaseInfo.h
> llvm/trunk/lib/Target/ARM64/Utils/CMakeLists.txt
> llvm/trunk/lib/Target/ARM64/Utils/LLVMBuild.txt
> llvm/trunk/lib/Target/ARM64/Utils/Makefile
> llvm/trunk/test/Analysis/CostModel/ARM64/lit.local.cfg
> llvm/trunk/test/Analysis/CostModel/ARM64/select.ll
> llvm/trunk/test/Analysis/CostModel/ARM64/store.ll
> llvm/trunk/test/CodeGen/ARM64/2011-03-09-CPSRSpill.ll
> llvm/trunk/test/CodeGen/ARM64/2011-03-17-AsmPrinterCrash.ll
> llvm/trunk/test/CodeGen/ARM64/2011-03-21-Unaligned-Frame-Index.ll
> llvm/trunk/test/CodeGen/ARM64/2011-04-21-CPSRBug.ll
> llvm/trunk/test/CodeGen/ARM64/2011-10-18-LdStOptBug.ll
> llvm/trunk/test/CodeGen/ARM64/2012-01-11-ComparisonDAGCrash.ll
> llvm/trunk/test/CodeGen/ARM64/2012-05-07-DAGCombineVectorExtract.ll
> llvm/trunk/test/CodeGen/ARM64/2012-05-07-MemcpyAlignBug.ll
> llvm/trunk/test/CodeGen/ARM64/2012-05-09-LOADgot-bug.ll
> llvm/trunk/test/CodeGen/ARM64/2012-05-22-LdStOptBug.ll
> llvm/trunk/test/CodeGen/ARM64/2012-06-06-FPToUI.ll
> llvm/trunk/test/CodeGen/ARM64/2012-07-11-InstrEmitterBug.ll
> llvm/trunk/test/CodeGen/ARM64/2013-01-13-ffast-fcmp.ll
> llvm/trunk/test/CodeGen/ARM64/2013-01-23-frem-crash.ll
> llvm/trunk/test/CodeGen/ARM64/2013-01-23-sext-crash.ll
> llvm/trunk/test/CodeGen/ARM64/2013-02-12-shufv8i8.ll
> llvm/trunk/test/CodeGen/ARM64/2014-04-16-AnInfiniteLoopInDAGCombine.ll
> llvm/trunk/test/CodeGen/ARM64/2014-04-28-sqshl-uqshl-i64Contant.ll
> llvm/trunk/test/CodeGen/ARM64/2014-04-29-EXT-undef-mask.ll
> llvm/trunk/test/CodeGen/ARM64/AdvSIMD-Scalar.ll
> llvm/trunk/test/CodeGen/ARM64/aapcs.ll
> llvm/trunk/test/CodeGen/ARM64/aarch64-large-frame.ll
> llvm/trunk/test/CodeGen/ARM64/aarch64-neon-2velem-high.ll
> llvm/trunk/test/CodeGen/ARM64/aarch64-neon-2velem.ll
> llvm/trunk/test/CodeGen/ARM64/aarch64-neon-3vdiff.ll
> llvm/trunk/test/CodeGen/ARM64/aarch64-neon-aba-abd.ll
> llvm/trunk/test/CodeGen/ARM64/aarch64-neon-across.ll
> llvm/trunk/test/CodeGen/ARM64/aarch64-neon-add-pairwise.ll
> llvm/trunk/test/CodeGen/ARM64/aarch64-neon-add-sub.ll
> llvm/trunk/test/CodeGen/ARM64/aarch64-neon-copy.ll
> llvm/trunk/test/CodeGen/ARM64/aarch64-neon-copyPhysReg-tuple.ll
> llvm/trunk/test/CodeGen/ARM64/aarch64-neon-mul-div.ll
> llvm/trunk/test/CodeGen/ARM64/aarch64-neon-scalar-by-elem-mul.ll
> llvm/trunk/test/CodeGen/ARM64/aarch64-neon-select_cc.ll
> llvm/trunk/test/CodeGen/ARM64/aarch64-neon-simd-ldst-one.ll
> llvm/trunk/test/CodeGen/ARM64/aarch64-neon-simd-shift.ll
> llvm/trunk/test/CodeGen/ARM64/aarch64-neon-simd-vget.ll
> llvm/trunk/test/CodeGen/ARM64/aarch64-neon-v1i1-setcc.ll
> llvm/trunk/test/CodeGen/ARM64/aarch64-neon-vector-list-spill.ll
> llvm/trunk/test/CodeGen/ARM64/abi-varargs.ll
> llvm/trunk/test/CodeGen/ARM64/abi.ll
> llvm/trunk/test/CodeGen/ARM64/abi_align.ll
> llvm/trunk/test/CodeGen/ARM64/addp.ll
> llvm/trunk/test/CodeGen/ARM64/addr-mode-folding.ll
> llvm/trunk/test/CodeGen/ARM64/addr-type-promotion.ll
> llvm/trunk/test/CodeGen/ARM64/addrmode.ll
> llvm/trunk/test/CodeGen/ARM64/alloc-no-stack-realign.ll
> llvm/trunk/test/CodeGen/ARM64/alloca-frame-pointer-offset.ll
> llvm/trunk/test/CodeGen/ARM64/andCmpBrToTBZ.ll
> llvm/trunk/test/CodeGen/ARM64/ands-bad-peephole.ll
> llvm/trunk/test/CodeGen/ARM64/anyregcc-crash.ll
> llvm/trunk/test/CodeGen/ARM64/anyregcc.ll
> llvm/trunk/test/CodeGen/ARM64/arith-saturating.ll
> llvm/trunk/test/CodeGen/ARM64/arith.ll
> llvm/trunk/test/CodeGen/ARM64/arm64-dead-def-elimination-flag.ll
> llvm/trunk/test/CodeGen/ARM64/atomic-128.ll
> llvm/trunk/test/CodeGen/ARM64/atomic.ll
> llvm/trunk/test/CodeGen/ARM64/basic-pic.ll
> llvm/trunk/test/CodeGen/ARM64/big-endian-bitconverts.ll
> llvm/trunk/test/CodeGen/ARM64/big-endian-eh.ll
> llvm/trunk/test/CodeGen/ARM64/big-endian-varargs.ll
> llvm/trunk/test/CodeGen/ARM64/big-endian-vector-callee.ll
> llvm/trunk/test/CodeGen/ARM64/big-endian-vector-caller.ll
> llvm/trunk/test/CodeGen/ARM64/big-imm-offsets.ll
> llvm/trunk/test/CodeGen/ARM64/big-stack.ll
> llvm/trunk/test/CodeGen/ARM64/bitfield-extract.ll
> llvm/trunk/test/CodeGen/ARM64/blockaddress.ll
> llvm/trunk/test/CodeGen/ARM64/build-vector.ll
> llvm/trunk/test/CodeGen/ARM64/call-tailcalls.ll
> llvm/trunk/test/CodeGen/ARM64/cast-opt.ll
> llvm/trunk/test/CodeGen/ARM64/ccmp-heuristics.ll
> llvm/trunk/test/CodeGen/ARM64/ccmp.ll
> llvm/trunk/test/CodeGen/ARM64/clrsb.ll
> llvm/trunk/test/CodeGen/ARM64/coalesce-ext.ll
> llvm/trunk/test/CodeGen/ARM64/code-model-large-abs.ll
> llvm/trunk/test/CodeGen/ARM64/collect-loh-garbage-crash.ll
> llvm/trunk/test/CodeGen/ARM64/collect-loh-str.ll
> llvm/trunk/test/CodeGen/ARM64/collect-loh.ll
> llvm/trunk/test/CodeGen/ARM64/compact-unwind-unhandled-cfi.S
> llvm/trunk/test/CodeGen/ARM64/complex-copy-noneon.ll
> llvm/trunk/test/CodeGen/ARM64/complex-ret.ll
> llvm/trunk/test/CodeGen/ARM64/const-addr.ll
> llvm/trunk/test/CodeGen/ARM64/convert-v2f64-v2i32.ll
> llvm/trunk/test/CodeGen/ARM64/convert-v2i32-v2f64.ll
> llvm/trunk/test/CodeGen/ARM64/copy-tuple.ll
> llvm/trunk/test/CodeGen/ARM64/crc32.ll
> llvm/trunk/test/CodeGen/ARM64/crypto.ll
> llvm/trunk/test/CodeGen/ARM64/cse.ll
> llvm/trunk/test/CodeGen/ARM64/csel.ll
> llvm/trunk/test/CodeGen/ARM64/cvt.ll
> llvm/trunk/test/CodeGen/ARM64/dagcombiner-convergence.ll
> llvm/trunk/test/CodeGen/ARM64/dagcombiner-dead-indexed-load.ll
> llvm/trunk/test/CodeGen/ARM64/dagcombiner-indexed-load.ll
> llvm/trunk/test/CodeGen/ARM64/dagcombiner-load-slicing.ll
> llvm/trunk/test/CodeGen/ARM64/dead-def-frame-index.ll
> llvm/trunk/test/CodeGen/ARM64/dead-register-def-bug.ll
> llvm/trunk/test/CodeGen/ARM64/dup.ll
> llvm/trunk/test/CodeGen/ARM64/early-ifcvt.ll
> llvm/trunk/test/CodeGen/ARM64/elf-calls.ll
> llvm/trunk/test/CodeGen/ARM64/elf-constpool.ll
> llvm/trunk/test/CodeGen/ARM64/elf-globals.ll
> llvm/trunk/test/CodeGen/ARM64/ext.ll
> llvm/trunk/test/CodeGen/ARM64/extend-int-to-fp.ll
> llvm/trunk/test/CodeGen/ARM64/extend.ll
> llvm/trunk/test/CodeGen/ARM64/extern-weak.ll
> llvm/trunk/test/CodeGen/ARM64/extload-knownzero.ll
> llvm/trunk/test/CodeGen/ARM64/extract.ll
> llvm/trunk/test/CodeGen/ARM64/extract_subvector.ll
> llvm/trunk/test/CodeGen/ARM64/fast-isel-addr-offset.ll
> llvm/trunk/test/CodeGen/ARM64/fast-isel-alloca.ll
> llvm/trunk/test/CodeGen/ARM64/fast-isel-br.ll
> llvm/trunk/test/CodeGen/ARM64/fast-isel-call.ll
> llvm/trunk/test/CodeGen/ARM64/fast-isel-conversion.ll
> llvm/trunk/test/CodeGen/ARM64/fast-isel-fcmp.ll
> llvm/trunk/test/CodeGen/ARM64/fast-isel-gv.ll
> llvm/trunk/test/CodeGen/ARM64/fast-isel-icmp.ll
> llvm/trunk/test/CodeGen/ARM64/fast-isel-indirectbr.ll
> llvm/trunk/test/CodeGen/ARM64/fast-isel-intrinsic.ll
> llvm/trunk/test/CodeGen/ARM64/fast-isel-materialize.ll
> llvm/trunk/test/CodeGen/ARM64/fast-isel-noconvert.ll
> llvm/trunk/test/CodeGen/ARM64/fast-isel-rem.ll
> llvm/trunk/test/CodeGen/ARM64/fast-isel-ret.ll
> llvm/trunk/test/CodeGen/ARM64/fast-isel-select.ll
> llvm/trunk/test/CodeGen/ARM64/fast-isel.ll
> llvm/trunk/test/CodeGen/ARM64/fastcc-tailcall.ll
> llvm/trunk/test/CodeGen/ARM64/fastisel-gep-promote-before-add.ll
> llvm/trunk/test/CodeGen/ARM64/fcmp-opt.ll
> llvm/trunk/test/CodeGen/ARM64/fcopysign.ll
> llvm/trunk/test/CodeGen/ARM64/fixed-point-scalar-cvt-dagcombine.ll
> llvm/trunk/test/CodeGen/ARM64/fmadd.ll
> llvm/trunk/test/CodeGen/ARM64/fmax.ll
> llvm/trunk/test/CodeGen/ARM64/fminv.ll
> llvm/trunk/test/CodeGen/ARM64/fmuladd.ll
> llvm/trunk/test/CodeGen/ARM64/fold-address.ll
> llvm/trunk/test/CodeGen/ARM64/fold-lsl.ll
> llvm/trunk/test/CodeGen/ARM64/fp-contract-zero.ll
> llvm/trunk/test/CodeGen/ARM64/fp-imm.ll
> llvm/trunk/test/CodeGen/ARM64/fp.ll
> llvm/trunk/test/CodeGen/ARM64/fp128-folding.ll
> llvm/trunk/test/CodeGen/ARM64/fp128.ll
> llvm/trunk/test/CodeGen/ARM64/frame-index.ll
> llvm/trunk/test/CodeGen/ARM64/frameaddr.ll
> llvm/trunk/test/CodeGen/ARM64/global-address.ll
> llvm/trunk/test/CodeGen/ARM64/hello.ll
> llvm/trunk/test/CodeGen/ARM64/i16-subreg-extract.ll
> llvm/trunk/test/CodeGen/ARM64/icmp-opt.ll
> llvm/trunk/test/CodeGen/ARM64/illegal-float-ops.ll
> llvm/trunk/test/CodeGen/ARM64/indexed-memory.ll
> llvm/trunk/test/CodeGen/ARM64/indexed-vector-ldst-2.ll
> llvm/trunk/test/CodeGen/ARM64/indexed-vector-ldst.ll
> llvm/trunk/test/CodeGen/ARM64/inline-asm-error-I.ll
> llvm/trunk/test/CodeGen/ARM64/inline-asm-error-J.ll
> llvm/trunk/test/CodeGen/ARM64/inline-asm-error-K.ll
> llvm/trunk/test/CodeGen/ARM64/inline-asm-error-L.ll
> llvm/trunk/test/CodeGen/ARM64/inline-asm-error-M.ll
> llvm/trunk/test/CodeGen/ARM64/inline-asm-error-N.ll
> llvm/trunk/test/CodeGen/ARM64/inline-asm-zero-reg-error.ll
> llvm/trunk/test/CodeGen/ARM64/inline-asm.ll
> llvm/trunk/test/CodeGen/ARM64/join-reserved.ll
> llvm/trunk/test/CodeGen/ARM64/jumptable.ll
> llvm/trunk/test/CodeGen/ARM64/ld1.ll
> llvm/trunk/test/CodeGen/ARM64/ldp.ll
> llvm/trunk/test/CodeGen/ARM64/ldur.ll
> llvm/trunk/test/CodeGen/ARM64/ldxr-stxr.ll
> llvm/trunk/test/CodeGen/ARM64/leaf.ll
> llvm/trunk/test/CodeGen/ARM64/lit.local.cfg
> llvm/trunk/test/CodeGen/ARM64/long-shift.ll
> llvm/trunk/test/CodeGen/ARM64/memcpy-inline.ll
> llvm/trunk/test/CodeGen/ARM64/memset-inline.ll
> llvm/trunk/test/CodeGen/ARM64/memset-to-bzero.ll
> llvm/trunk/test/CodeGen/ARM64/misched-basic-A53.ll
> llvm/trunk/test/CodeGen/ARM64/misched-forwarding-A53.ll
> llvm/trunk/test/CodeGen/ARM64/movi.ll
> llvm/trunk/test/CodeGen/ARM64/mul.ll
> llvm/trunk/test/CodeGen/ARM64/named-reg-alloc.ll
> llvm/trunk/test/CodeGen/ARM64/named-reg-notareg.ll
> llvm/trunk/test/CodeGen/ARM64/neg.ll
> llvm/trunk/test/CodeGen/ARM64/neon-compare-instructions.ll
> llvm/trunk/test/CodeGen/ARM64/neon-v1i1-setcc.ll
> llvm/trunk/test/CodeGen/ARM64/patchpoint.ll
> llvm/trunk/test/CodeGen/ARM64/pic-local-symbol.ll
> llvm/trunk/test/CodeGen/ARM64/platform-reg.ll
> llvm/trunk/test/CodeGen/ARM64/popcnt.ll
> llvm/trunk/test/CodeGen/ARM64/prefetch.ll
> llvm/trunk/test/CodeGen/ARM64/promote-const.ll
> llvm/trunk/test/CodeGen/ARM64/redzone.ll
> llvm/trunk/test/CodeGen/ARM64/reg-copy-noneon.ll
> llvm/trunk/test/CodeGen/ARM64/register-offset-addressing.ll
> llvm/trunk/test/CodeGen/ARM64/register-pairing.ll
> llvm/trunk/test/CodeGen/ARM64/regress-f128csel-flags.ll
> llvm/trunk/test/CodeGen/ARM64/regress-interphase-shift.ll
> llvm/trunk/test/CodeGen/ARM64/return-vector.ll
> llvm/trunk/test/CodeGen/ARM64/returnaddr.ll
> llvm/trunk/test/CodeGen/ARM64/rev.ll
> llvm/trunk/test/CodeGen/ARM64/rounding.ll
> llvm/trunk/test/CodeGen/ARM64/scaled_iv.ll
> llvm/trunk/test/CodeGen/ARM64/scvt.ll
> llvm/trunk/test/CodeGen/ARM64/shifted-sext.ll
> llvm/trunk/test/CodeGen/ARM64/simd-scalar-to-vector.ll
> llvm/trunk/test/CodeGen/ARM64/simplest-elf.ll
> llvm/trunk/test/CodeGen/ARM64/sincos.ll
> llvm/trunk/test/CodeGen/ARM64/sitofp-combine-chains.ll
> llvm/trunk/test/CodeGen/ARM64/sli-sri-opt.ll
> llvm/trunk/test/CodeGen/ARM64/smaxv.ll
> llvm/trunk/test/CodeGen/ARM64/sminv.ll
> llvm/trunk/test/CodeGen/ARM64/spill-lr.ll
> llvm/trunk/test/CodeGen/ARM64/spill.ll
> llvm/trunk/test/CodeGen/ARM64/st1.ll
> llvm/trunk/test/CodeGen/ARM64/stack-no-frame.ll
> llvm/trunk/test/CodeGen/ARM64/stackmap.ll
> llvm/trunk/test/CodeGen/ARM64/stackpointer.ll
> llvm/trunk/test/CodeGen/ARM64/stacksave.ll
> llvm/trunk/test/CodeGen/ARM64/stp.ll
> llvm/trunk/test/CodeGen/ARM64/strict-align.ll
> llvm/trunk/test/CodeGen/ARM64/stur.ll
> llvm/trunk/test/CodeGen/ARM64/subsections.ll
> llvm/trunk/test/CodeGen/ARM64/subvector-extend.ll
> llvm/trunk/test/CodeGen/ARM64/swizzle-tbl-i16-layout.ll
> llvm/trunk/test/CodeGen/ARM64/tbl.ll
> llvm/trunk/test/CodeGen/ARM64/this-return.ll
> llvm/trunk/test/CodeGen/ARM64/tls-darwin.ll
> llvm/trunk/test/CodeGen/ARM64/tls-dynamic-together.ll
> llvm/trunk/test/CodeGen/ARM64/tls-dynamics.ll
> llvm/trunk/test/CodeGen/ARM64/tls-execs.ll
> llvm/trunk/test/CodeGen/ARM64/trap.ll
> llvm/trunk/test/CodeGen/ARM64/trn.ll
> llvm/trunk/test/CodeGen/ARM64/trunc-store.ll
> llvm/trunk/test/CodeGen/ARM64/umaxv.ll
> llvm/trunk/test/CodeGen/ARM64/uminv.ll
> llvm/trunk/test/CodeGen/ARM64/umov.ll
> llvm/trunk/test/CodeGen/ARM64/unaligned_ldst.ll
> llvm/trunk/test/CodeGen/ARM64/uzp.ll
> llvm/trunk/test/CodeGen/ARM64/vaargs.ll
> llvm/trunk/test/CodeGen/ARM64/vabs.ll
> llvm/trunk/test/CodeGen/ARM64/vadd.ll
> llvm/trunk/test/CodeGen/ARM64/vaddlv.ll
> llvm/trunk/test/CodeGen/ARM64/vaddv.ll
> llvm/trunk/test/CodeGen/ARM64/variadic-aapcs.ll
> llvm/trunk/test/CodeGen/ARM64/vbitwise.ll
> llvm/trunk/test/CodeGen/ARM64/vclz.ll
> llvm/trunk/test/CodeGen/ARM64/vcmp.ll
> llvm/trunk/test/CodeGen/ARM64/vcnt.ll
> llvm/trunk/test/CodeGen/ARM64/vcombine.ll
> llvm/trunk/test/CodeGen/ARM64/vcvt.ll
> llvm/trunk/test/CodeGen/ARM64/vcvt_f.ll
> llvm/trunk/test/CodeGen/ARM64/vcvt_f32_su32.ll
> llvm/trunk/test/CodeGen/ARM64/vcvt_n.ll
> llvm/trunk/test/CodeGen/ARM64/vcvt_su32_f32.ll
> llvm/trunk/test/CodeGen/ARM64/vcvtxd_f32_f64.ll
> llvm/trunk/test/CodeGen/ARM64/vecCmpBr.ll
> llvm/trunk/test/CodeGen/ARM64/vecFold.ll
> llvm/trunk/test/CodeGen/ARM64/vector-ext.ll
> llvm/trunk/test/CodeGen/ARM64/vector-imm.ll
> llvm/trunk/test/CodeGen/ARM64/vector-insertion.ll
> llvm/trunk/test/CodeGen/ARM64/vector-ldst.ll
> llvm/trunk/test/CodeGen/ARM64/vext.ll
> llvm/trunk/test/CodeGen/ARM64/vext_reverse.ll
> llvm/trunk/test/CodeGen/ARM64/vfloatintrinsics.ll
> llvm/trunk/test/CodeGen/ARM64/vhadd.ll
> llvm/trunk/test/CodeGen/ARM64/vhsub.ll
> llvm/trunk/test/CodeGen/ARM64/virtual_base.ll
> llvm/trunk/test/CodeGen/ARM64/vmax.ll
> llvm/trunk/test/CodeGen/ARM64/vminmaxnm.ll
> llvm/trunk/test/CodeGen/ARM64/vmovn.ll
> llvm/trunk/test/CodeGen/ARM64/vmul.ll
> llvm/trunk/test/CodeGen/ARM64/volatile.ll
> llvm/trunk/test/CodeGen/ARM64/vpopcnt.ll
> llvm/trunk/test/CodeGen/ARM64/vqadd.ll
> llvm/trunk/test/CodeGen/ARM64/vqsub.ll
> llvm/trunk/test/CodeGen/ARM64/vselect.ll
> llvm/trunk/test/CodeGen/ARM64/vsetcc_fp.ll
> llvm/trunk/test/CodeGen/ARM64/vshift.ll
> llvm/trunk/test/CodeGen/ARM64/vshr.ll
> llvm/trunk/test/CodeGen/ARM64/vshuffle.ll
> llvm/trunk/test/CodeGen/ARM64/vsqrt.ll
> llvm/trunk/test/CodeGen/ARM64/vsra.ll
> llvm/trunk/test/CodeGen/ARM64/vsub.ll
> llvm/trunk/test/CodeGen/ARM64/weak-reference.ll
> llvm/trunk/test/CodeGen/ARM64/xaluo.ll
> llvm/trunk/test/CodeGen/ARM64/zero-cycle-regmov.ll
> llvm/trunk/test/CodeGen/ARM64/zero-cycle-zeroing.ll
> llvm/trunk/test/CodeGen/ARM64/zext.ll
> llvm/trunk/test/CodeGen/ARM64/zextload-unscaled.ll
> llvm/trunk/test/CodeGen/ARM64/zip.ll
> llvm/trunk/test/DebugInfo/ARM64/lit.local.cfg
> llvm/trunk/test/DebugInfo/ARM64/struct_by_value.ll
> llvm/trunk/test/MC/ARM64/adr.s
> llvm/trunk/test/MC/ARM64/advsimd.s
> llvm/trunk/test/MC/ARM64/aliases.s
> llvm/trunk/test/MC/ARM64/arithmetic-encoding.s
> llvm/trunk/test/MC/ARM64/arm64-fixup.s
> llvm/trunk/test/MC/ARM64/basic-a64-instructions.s
> llvm/trunk/test/MC/ARM64/be-datalayout.s
> llvm/trunk/test/MC/ARM64/bitfield-encoding.s
> llvm/trunk/test/MC/ARM64/branch-encoding.s
> llvm/trunk/test/MC/ARM64/condbr-without-dots.s
> llvm/trunk/test/MC/ARM64/crypto.s
> llvm/trunk/test/MC/ARM64/diagno-predicate.s
> llvm/trunk/test/MC/ARM64/diags.s
> llvm/trunk/test/MC/ARM64/directive_loh.s
> llvm/trunk/test/MC/ARM64/elf-reloc-condbr.s
> llvm/trunk/test/MC/ARM64/elf-relocs.s
> llvm/trunk/test/MC/ARM64/fp-encoding.s
> llvm/trunk/test/MC/ARM64/large-relocs.s
> llvm/trunk/test/MC/ARM64/leaf-compact-unwind.s
> llvm/trunk/test/MC/ARM64/lit.local.cfg
> llvm/trunk/test/MC/ARM64/logical-encoding.s
> llvm/trunk/test/MC/ARM64/mapping-across-sections.s
> llvm/trunk/test/MC/ARM64/mapping-within-section.s
> llvm/trunk/test/MC/ARM64/memory.s
> llvm/trunk/test/MC/ARM64/nv-cond.s
> llvm/trunk/test/MC/ARM64/optional-hash.s
> llvm/trunk/test/MC/ARM64/separator.s
> llvm/trunk/test/MC/ARM64/simd-ldst.s
> llvm/trunk/test/MC/ARM64/small-data-fixups.s
> llvm/trunk/test/MC/ARM64/spsel-sysreg.s
> llvm/trunk/test/MC/ARM64/system-encoding.s
> llvm/trunk/test/MC/ARM64/target-specific-sysreg.s
> llvm/trunk/test/MC/ARM64/tls-modifiers-darwin.s
> llvm/trunk/test/MC/ARM64/tls-relocs.s
> llvm/trunk/test/MC/ARM64/v128_lo-diagnostics.s
> llvm/trunk/test/MC/ARM64/variable-exprs.s
> llvm/trunk/test/MC/ARM64/vector-lists.s
> llvm/trunk/test/MC/ARM64/verbose-vector-case.s
> llvm/trunk/test/MC/Disassembler/ARM64/advsimd.txt
> llvm/trunk/test/MC/Disassembler/ARM64/arithmetic.txt
> llvm/trunk/test/MC/Disassembler/ARM64/basic-a64-undefined.txt
> llvm/trunk/test/MC/Disassembler/ARM64/bitfield.txt
> llvm/trunk/test/MC/Disassembler/ARM64/branch.txt
> llvm/trunk/test/MC/Disassembler/ARM64/canonical-form.txt
> llvm/trunk/test/MC/Disassembler/ARM64/crc32.txt
> llvm/trunk/test/MC/Disassembler/ARM64/crypto.txt
> llvm/trunk/test/MC/Disassembler/ARM64/invalid-logical.txt
> llvm/trunk/test/MC/Disassembler/ARM64/lit.local.cfg
> llvm/trunk/test/MC/Disassembler/ARM64/logical.txt
> llvm/trunk/test/MC/Disassembler/ARM64/memory.txt
> llvm/trunk/test/MC/Disassembler/ARM64/non-apple-fmov.txt
> llvm/trunk/test/MC/Disassembler/ARM64/scalar-fp.txt
> llvm/trunk/test/MC/Disassembler/ARM64/system.txt
> llvm/trunk/test/MC/MachO/ARM64/darwin-ARM64-local-label-diff.s
> llvm/trunk/test/MC/MachO/ARM64/darwin-ARM64-reloc.s
> llvm/trunk/test/MC/MachO/ARM64/lit.local.cfg
> llvm/trunk/test/Transforms/ConstantHoisting/ARM64/const-addr.ll
> llvm/trunk/test/Transforms/ConstantHoisting/ARM64/large-immediate.ll
> llvm/trunk/test/Transforms/ConstantHoisting/ARM64/lit.local.cfg
> llvm/trunk/test/Transforms/GlobalMerge/ARM64/arm64.ll
> llvm/trunk/test/Transforms/GlobalMerge/ARM64/lit.local.cfg
> llvm/trunk/test/Transforms/LoopStrengthReduce/ARM64/lit.local.cfg
> llvm/trunk/test/Transforms/LoopStrengthReduce/ARM64/lsr-memcpy.ll
> llvm/trunk/test/Transforms/LoopStrengthReduce/ARM64/lsr-memset.ll
> llvm/trunk/test/Transforms/LoopStrengthReduce/ARM64/req-regs.ll
> llvm/trunk/test/Transforms/LoopVectorize/ARM64/arm64-unroll.ll
> llvm/trunk/test/Transforms/LoopVectorize/ARM64/gather-cost.ll
> llvm/trunk/test/Transforms/LoopVectorize/ARM64/lit.local.cfg
> llvm/trunk/test/Transforms/SLPVectorizer/ARM64/lit.local.cfg
> llvm/trunk/test/Transforms/SLPVectorizer/ARM64/mismatched-intrinsics.ll
> Modified:
> llvm/trunk/CMakeLists.txt
> llvm/trunk/autoconf/configure.ac
> llvm/trunk/cmake/config-ix.cmake
> llvm/trunk/configure
> llvm/trunk/docs/LangRef.rst
> llvm/trunk/include/llvm/IR/Intrinsics.td
> llvm/trunk/lib/ExecutionEngine/RuntimeDyld/RuntimeDyldMachO.cpp
> llvm/trunk/lib/ExecutionEngine/RuntimeDyld/RuntimeDyldMachO.h
> llvm/trunk/lib/LTO/LTOCodeGenerator.cpp
> llvm/trunk/lib/LTO/LTOModule.cpp
> llvm/trunk/lib/MC/MCObjectFileInfo.cpp
> llvm/trunk/lib/Target/LLVMBuild.txt
> llvm/trunk/lib/Transforms/InstCombine/InstCombineCalls.cpp
> llvm/trunk/test/CodeGen/AArch64/128bit_load_store.ll
> llvm/trunk/test/CodeGen/AArch64/addsub.ll
> llvm/trunk/test/CodeGen/AArch64/addsub_ext.ll
> llvm/trunk/test/CodeGen/AArch64/alloca.ll
> llvm/trunk/test/CodeGen/AArch64/analyze-branch.ll
> llvm/trunk/test/CodeGen/AArch64/atomic-ops-not-barriers.ll
> llvm/trunk/test/CodeGen/AArch64/atomic-ops.ll
> llvm/trunk/test/CodeGen/AArch64/basic-pic.ll
> llvm/trunk/test/CodeGen/AArch64/bitfield-insert-0.ll
> llvm/trunk/test/CodeGen/AArch64/bitfield-insert.ll
> llvm/trunk/test/CodeGen/AArch64/bitfield.ll
> llvm/trunk/test/CodeGen/AArch64/blockaddress.ll
> llvm/trunk/test/CodeGen/AArch64/bool-loads.ll
> llvm/trunk/test/CodeGen/AArch64/breg.ll
> llvm/trunk/test/CodeGen/AArch64/callee-save.ll
> llvm/trunk/test/CodeGen/AArch64/code-model-large-abs.ll
> llvm/trunk/test/CodeGen/AArch64/compare-branch.ll
> llvm/trunk/test/CodeGen/AArch64/complex-copy-noneon.ll
> llvm/trunk/test/CodeGen/AArch64/cond-sel.ll
> llvm/trunk/test/CodeGen/AArch64/directcond.ll
> llvm/trunk/test/CodeGen/AArch64/dp1.ll
> llvm/trunk/test/CodeGen/AArch64/eliminate-trunc.ll
> llvm/trunk/test/CodeGen/AArch64/extern-weak.ll
> llvm/trunk/test/CodeGen/AArch64/fastcc-reserved.ll
> llvm/trunk/test/CodeGen/AArch64/fastcc.ll
> llvm/trunk/test/CodeGen/AArch64/fcmp.ll
> llvm/trunk/test/CodeGen/AArch64/fcvt-fixed.ll
> llvm/trunk/test/CodeGen/AArch64/flags-multiuse.ll
> llvm/trunk/test/CodeGen/AArch64/floatdp_2source.ll
> llvm/trunk/test/CodeGen/AArch64/fp-cond-sel.ll
> llvm/trunk/test/CodeGen/AArch64/fp-dp3.ll
> llvm/trunk/test/CodeGen/AArch64/fp128-folding.ll
> llvm/trunk/test/CodeGen/AArch64/fpimm.ll
> llvm/trunk/test/CodeGen/AArch64/func-argpassing.ll
> llvm/trunk/test/CodeGen/AArch64/func-calls.ll
> llvm/trunk/test/CodeGen/AArch64/global-alignment.ll
> llvm/trunk/test/CodeGen/AArch64/got-abuse.ll
> llvm/trunk/test/CodeGen/AArch64/illegal-float-ops.ll
> llvm/trunk/test/CodeGen/AArch64/init-array.ll
> llvm/trunk/test/CodeGen/AArch64/inline-asm-constraints-badI.ll
> llvm/trunk/test/CodeGen/AArch64/inline-asm-constraints-badK2.ll
> llvm/trunk/test/CodeGen/AArch64/jump-table.ll
> llvm/trunk/test/CodeGen/AArch64/large-consts.ll
> llvm/trunk/test/CodeGen/AArch64/ldst-regoffset.ll
> llvm/trunk/test/CodeGen/AArch64/ldst-unscaledimm.ll
> llvm/trunk/test/CodeGen/AArch64/ldst-unsignedimm.ll
> llvm/trunk/test/CodeGen/AArch64/literal_pools_float.ll
> llvm/trunk/test/CodeGen/AArch64/local_vars.ll
> llvm/trunk/test/CodeGen/AArch64/logical_shifted_reg.ll
> llvm/trunk/test/CodeGen/AArch64/mature-mc-support.ll
> llvm/trunk/test/CodeGen/AArch64/movw-consts.ll
> llvm/trunk/test/CodeGen/AArch64/movw-shift-encoding.ll
> llvm/trunk/test/CodeGen/AArch64/neon-bitcast.ll
> llvm/trunk/test/CodeGen/AArch64/neon-bitwise-instructions.ll
> llvm/trunk/test/CodeGen/AArch64/neon-compare-instructions.ll
> llvm/trunk/test/CodeGen/AArch64/neon-diagnostics.ll
> llvm/trunk/test/CodeGen/AArch64/neon-extract.ll
> llvm/trunk/test/CodeGen/AArch64/neon-fma.ll
> llvm/trunk/test/CodeGen/AArch64/neon-fpround_f128.ll
> llvm/trunk/test/CodeGen/AArch64/neon-idiv.ll
> llvm/trunk/test/CodeGen/AArch64/neon-mla-mls.ll
> llvm/trunk/test/CodeGen/AArch64/neon-mov.ll
> llvm/trunk/test/CodeGen/AArch64/neon-or-combine.ll
> llvm/trunk/test/CodeGen/AArch64/neon-perm.ll
> llvm/trunk/test/CodeGen/AArch64/neon-scalar-by-elem-fma.ll
> llvm/trunk/test/CodeGen/AArch64/neon-scalar-copy.ll
> llvm/trunk/test/CodeGen/AArch64/neon-shift-left-long.ll
> llvm/trunk/test/CodeGen/AArch64/neon-truncStore-extLoad.ll
> llvm/trunk/test/CodeGen/AArch64/pic-eh-stubs.ll
> llvm/trunk/test/CodeGen/AArch64/regress-f128csel-flags.ll
> llvm/trunk/test/CodeGen/AArch64/regress-fp128-livein.ll
> llvm/trunk/test/CodeGen/AArch64/regress-tblgen-chains.ll
> llvm/trunk/test/CodeGen/AArch64/regress-w29-reserved-with-fp.ll
> llvm/trunk/test/CodeGen/AArch64/setcc-takes-i32.ll
> llvm/trunk/test/CodeGen/AArch64/sibling-call.ll
> llvm/trunk/test/CodeGen/AArch64/sincos-expansion.ll
> llvm/trunk/test/CodeGen/AArch64/sincospow-vector-expansion.ll
> llvm/trunk/test/CodeGen/AArch64/tail-call.ll
> llvm/trunk/test/CodeGen/AArch64/zero-reg.ll
> llvm/trunk/test/MC/AArch64/adrp-relocation.s
> llvm/trunk/test/MC/AArch64/basic-a64-diagnostics.s
> llvm/trunk/test/MC/AArch64/basic-a64-instructions.s
> llvm/trunk/test/MC/AArch64/basic-pic.s
> llvm/trunk/test/MC/AArch64/elf-extern.s
> llvm/trunk/test/MC/AArch64/elf-objdump.s
> llvm/trunk/test/MC/AArch64/elf-reloc-addsubimm.s
> llvm/trunk/test/MC/AArch64/elf-reloc-ldrlit.s
> llvm/trunk/test/MC/AArch64/elf-reloc-ldstunsimm.s
> llvm/trunk/test/MC/AArch64/elf-reloc-movw.s
> llvm/trunk/test/MC/AArch64/elf-reloc-pcreladdressing.s
> llvm/trunk/test/MC/AArch64/elf-reloc-tstb.s
> llvm/trunk/test/MC/AArch64/elf-reloc-uncondbrimm.s
> llvm/trunk/test/MC/AArch64/gicv3-regs-diagnostics.s
> llvm/trunk/test/MC/AArch64/gicv3-regs.s
> llvm/trunk/test/MC/AArch64/inline-asm-modifiers.s
> llvm/trunk/test/MC/AArch64/jump-table.s
> llvm/trunk/test/MC/AArch64/lit.local.cfg
> llvm/trunk/test/MC/AArch64/mapping-across-sections.s
> llvm/trunk/test/MC/AArch64/mapping-within-section.s
> llvm/trunk/test/MC/AArch64/neon-3vdiff.s
> llvm/trunk/test/MC/AArch64/neon-aba-abd.s
> llvm/trunk/test/MC/AArch64/neon-add-pairwise.s
> llvm/trunk/test/MC/AArch64/neon-add-sub-instructions.s
> llvm/trunk/test/MC/AArch64/neon-bitwise-instructions.s
> llvm/trunk/test/MC/AArch64/neon-compare-instructions.s
> llvm/trunk/test/MC/AArch64/neon-diagnostics.s
> llvm/trunk/test/MC/AArch64/neon-facge-facgt.s
> llvm/trunk/test/MC/AArch64/neon-frsqrt-frecp.s
> llvm/trunk/test/MC/AArch64/neon-halving-add-sub.s
> llvm/trunk/test/MC/AArch64/neon-max-min-pairwise.s
> llvm/trunk/test/MC/AArch64/neon-max-min.s
> llvm/trunk/test/MC/AArch64/neon-mla-mls-instructions.s
> llvm/trunk/test/MC/AArch64/neon-mov.s
> llvm/trunk/test/MC/AArch64/neon-mul-div-instructions.s
> llvm/trunk/test/MC/AArch64/neon-rounding-halving-add.s
> llvm/trunk/test/MC/AArch64/neon-rounding-shift.s
> llvm/trunk/test/MC/AArch64/neon-saturating-add-sub.s
> llvm/trunk/test/MC/AArch64/neon-saturating-rounding-shift.s
> llvm/trunk/test/MC/AArch64/neon-saturating-shift.s
> llvm/trunk/test/MC/AArch64/neon-scalar-abs.s
> llvm/trunk/test/MC/AArch64/neon-scalar-add-sub.s
> llvm/trunk/test/MC/AArch64/neon-scalar-by-elem-mla.s
> llvm/trunk/test/MC/AArch64/neon-scalar-by-elem-mul.s
> llvm/trunk/test/MC/AArch64/neon-scalar-by-elem-saturating-mla.s
> llvm/trunk/test/MC/AArch64/neon-scalar-by-elem-saturating-mul.s
> llvm/trunk/test/MC/AArch64/neon-scalar-compare.s
> llvm/trunk/test/MC/AArch64/neon-scalar-cvt.s
> llvm/trunk/test/MC/AArch64/neon-scalar-dup.s
> llvm/trunk/test/MC/AArch64/neon-scalar-extract-narrow.s
> llvm/trunk/test/MC/AArch64/neon-scalar-fp-compare.s
> llvm/trunk/test/MC/AArch64/neon-scalar-mul.s
> llvm/trunk/test/MC/AArch64/neon-scalar-neg.s
> llvm/trunk/test/MC/AArch64/neon-scalar-recip.s
> llvm/trunk/test/MC/AArch64/neon-scalar-reduce-pairwise.s
> llvm/trunk/test/MC/AArch64/neon-scalar-rounding-shift.s
> llvm/trunk/test/MC/AArch64/neon-scalar-saturating-add-sub.s
> llvm/trunk/test/MC/AArch64/neon-scalar-saturating-rounding-shift.s
> llvm/trunk/test/MC/AArch64/neon-scalar-saturating-shift.s
> llvm/trunk/test/MC/AArch64/neon-scalar-shift-imm.s
> llvm/trunk/test/MC/AArch64/neon-scalar-shift.s
> llvm/trunk/test/MC/AArch64/neon-shift-left-long.s
> llvm/trunk/test/MC/AArch64/neon-shift.s
> llvm/trunk/test/MC/AArch64/neon-simd-copy.s
> llvm/trunk/test/MC/AArch64/neon-simd-shift.s
> llvm/trunk/test/MC/AArch64/neon-sxtl.s
> llvm/trunk/test/MC/AArch64/neon-uxtl.s
> llvm/trunk/test/MC/AArch64/noneon-diagnostics.s
> llvm/trunk/test/MC/AArch64/optional-hash.s
> llvm/trunk/test/MC/AArch64/tls-relocs.s
> llvm/trunk/test/MC/AArch64/trace-regs-diagnostics.s
> llvm/trunk/test/MC/AArch64/trace-regs.s
> llvm/trunk/test/MC/Disassembler/AArch64/lit.local.cfg
> llvm/trunk/test/Transforms/InstCombine/2012-04-23-Neon-Intrinsics.ll
>
> Modified: llvm/trunk/CMakeLists.txt
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/CMakeLists.txt?rev=209577&r1=209576&r2=209577&view=diff
> ==============================================================================
> --- llvm/trunk/CMakeLists.txt (original)
> +++ llvm/trunk/CMakeLists.txt Sat May 24 07:50:23 2014
> @@ -127,7 +127,7 @@ set(LLVM_INCLUDE_DIR ${CMAKE_CURRENT_BIN
> set(LLVM_LIBDIR_SUFFIX "" CACHE STRING "Define suffix of library directory name (32/64)" )
>
> set(LLVM_ALL_TARGETS
> - ARM64
> + AArch64
> ARM
> CppBackend
> Hexagon
> @@ -143,7 +143,7 @@ set(LLVM_ALL_TARGETS
> )
>
> # List of targets with JIT support:
> -set(LLVM_TARGETS_WITH_JIT X86 PowerPC ARM64 ARM Mips SystemZ)
> +set(LLVM_TARGETS_WITH_JIT X86 PowerPC AArch64 ARM Mips SystemZ)
>
> set(LLVM_TARGETS_TO_BUILD "all"
> CACHE STRING "Semicolon-separated list of targets to build, or \"all\".")
>
> Modified: llvm/trunk/autoconf/configure.ac
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/autoconf/configure.ac?rev=209577&r1=209576&r2=209577&view=diff
> ==============================================================================
> --- llvm/trunk/autoconf/configure.ac (original)
> +++ llvm/trunk/autoconf/configure.ac Sat May 24 07:50:23 2014
> @@ -419,9 +419,9 @@ AC_CACHE_CHECK([target architecture],[ll
> amd64-* | x86_64-*) llvm_cv_target_arch="x86_64" ;;
> sparc*-*) llvm_cv_target_arch="Sparc" ;;
> powerpc*-*) llvm_cv_target_arch="PowerPC" ;;
> - arm64*-*) llvm_cv_target_arch="ARM64" ;;
> + arm64*-*) llvm_cv_target_arch="AArch64" ;;
> arm*-*) llvm_cv_target_arch="ARM" ;;
> - aarch64*-*) llvm_cv_target_arch="ARM64" ;;
> + aarch64*-*) llvm_cv_target_arch="AArch64" ;;
> mips-* | mips64-*) llvm_cv_target_arch="Mips" ;;
> mipsel-* | mips64el-*) llvm_cv_target_arch="Mips" ;;
> xcore-*) llvm_cv_target_arch="XCore" ;;
> @@ -455,9 +455,9 @@ case $host in
> amd64-* | x86_64-*) host_arch="x86_64" ;;
> sparc*-*) host_arch="Sparc" ;;
> powerpc*-*) host_arch="PowerPC" ;;
> - arm64*-*) host_arch="ARM64" ;;
> + arm64*-*) host_arch="AArch64" ;;
> arm*-*) host_arch="ARM" ;;
> - aarch64*-*) host_arch="ARM64" ;;
> + aarch64*-*) host_arch="AArch64" ;;
> mips-* | mips64-*) host_arch="Mips" ;;
> mipsel-* | mips64el-*) host_arch="Mips" ;;
> xcore-*) host_arch="XCore" ;;
> @@ -796,7 +796,7 @@ else
> esac
> fi
>
> -TARGETS_WITH_JIT="ARM ARM64 Mips PowerPC SystemZ X86"
> +TARGETS_WITH_JIT="ARM AArch64 Mips PowerPC SystemZ X86"
> AC_SUBST(TARGETS_WITH_JIT,$TARGETS_WITH_JIT)
>
> dnl Allow enablement of building and installing docs
> @@ -949,7 +949,7 @@ if test "$llvm_cv_enable_crash_overrides
> fi
>
> dnl List all possible targets
> -ALL_TARGETS="X86 Sparc PowerPC ARM ARM64 Mips XCore MSP430 CppBackend NVPTX Hexagon SystemZ R600"
> +ALL_TARGETS="X86 Sparc PowerPC ARM AArch64 Mips XCore MSP430 CppBackend NVPTX Hexagon SystemZ R600"
> AC_SUBST(ALL_TARGETS,$ALL_TARGETS)
>
> dnl Allow specific targets to be specified for building (or not)
> @@ -970,8 +970,8 @@ case "$enableval" in
> x86_64) TARGETS_TO_BUILD="X86 $TARGETS_TO_BUILD" ;;
> sparc) TARGETS_TO_BUILD="Sparc $TARGETS_TO_BUILD" ;;
> powerpc) TARGETS_TO_BUILD="PowerPC $TARGETS_TO_BUILD" ;;
> - aarch64) TARGETS_TO_BUILD="ARM64 $TARGETS_TO_BUILD" ;;
> - arm64) TARGETS_TO_BUILD="ARM64 $TARGETS_TO_BUILD" ;;
> + aarch64) TARGETS_TO_BUILD="AArch64 $TARGETS_TO_BUILD" ;;
> + arm64) TARGETS_TO_BUILD="AArch64 $TARGETS_TO_BUILD" ;;
> arm) TARGETS_TO_BUILD="ARM $TARGETS_TO_BUILD" ;;
> mips) TARGETS_TO_BUILD="Mips $TARGETS_TO_BUILD" ;;
> mipsel) TARGETS_TO_BUILD="Mips $TARGETS_TO_BUILD" ;;
> @@ -989,7 +989,7 @@ case "$enableval" in
> x86_64) TARGETS_TO_BUILD="X86 $TARGETS_TO_BUILD" ;;
> Sparc) TARGETS_TO_BUILD="Sparc $TARGETS_TO_BUILD" ;;
> PowerPC) TARGETS_TO_BUILD="PowerPC $TARGETS_TO_BUILD" ;;
> - AArch64) TARGETS_TO_BUILD="ARM64 $TARGETS_TO_BUILD" ;;
> + AArch64) TARGETS_TO_BUILD="AArch64 $TARGETS_TO_BUILD" ;;
> ARM) TARGETS_TO_BUILD="ARM $TARGETS_TO_BUILD" ;;
> Mips) TARGETS_TO_BUILD="Mips $TARGETS_TO_BUILD" ;;
> XCore) TARGETS_TO_BUILD="XCore $TARGETS_TO_BUILD" ;;
>
> Modified: llvm/trunk/cmake/config-ix.cmake
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/cmake/config-ix.cmake?rev=209577&r1=209576&r2=209577&view=diff
> ==============================================================================
> --- llvm/trunk/cmake/config-ix.cmake (original)
> +++ llvm/trunk/cmake/config-ix.cmake Sat May 24 07:50:23 2014
> @@ -372,7 +372,7 @@ elseif (LLVM_NATIVE_ARCH MATCHES "powerp
> elseif (LLVM_NATIVE_ARCH MATCHES "aarch64")
> set(LLVM_NATIVE_ARCH AArch64)
> elseif (LLVM_NATIVE_ARCH MATCHES "arm64")
> - set(LLVM_NATIVE_ARCH ARM64)
> + set(LLVM_NATIVE_ARCH AArch64)
> elseif (LLVM_NATIVE_ARCH MATCHES "arm")
> set(LLVM_NATIVE_ARCH ARM)
> elseif (LLVM_NATIVE_ARCH MATCHES "mips")
>
> Modified: llvm/trunk/configure
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/configure?rev=209577&r1=209576&r2=209577&view=diff
> ==============================================================================
> --- llvm/trunk/configure (original)
> +++ llvm/trunk/configure Sat May 24 07:50:23 2014
> @@ -4151,9 +4151,9 @@ else
> amd64-* | x86_64-*) llvm_cv_target_arch="x86_64" ;;
> sparc*-*) llvm_cv_target_arch="Sparc" ;;
> powerpc*-*) llvm_cv_target_arch="PowerPC" ;;
> - arm64*-*) llvm_cv_target_arch="ARM64" ;;
> + arm64*-*) llvm_cv_target_arch="AArch64" ;;
> arm*-*) llvm_cv_target_arch="ARM" ;;
> - aarch64*-*) llvm_cv_target_arch="ARM64" ;;
> + aarch64*-*) llvm_cv_target_arch="AArch64" ;;
> mips-* | mips64-*) llvm_cv_target_arch="Mips" ;;
> mipsel-* | mips64el-*) llvm_cv_target_arch="Mips" ;;
> xcore-*) llvm_cv_target_arch="XCore" ;;
> @@ -4188,9 +4188,9 @@ case $host in
> amd64-* | x86_64-*) host_arch="x86_64" ;;
> sparc*-*) host_arch="Sparc" ;;
> powerpc*-*) host_arch="PowerPC" ;;
> - arm64*-*) host_arch="ARM64" ;;
> + arm64*-*) host_arch="AArch64" ;;
> arm*-*) host_arch="ARM" ;;
> - aarch64*-*) host_arch="ARM64" ;;
> + aarch64*-*) host_arch="AArch64" ;;
> mips-* | mips64-*) host_arch="Mips" ;;
> mipsel-* | mips64el-*) host_arch="Mips" ;;
> xcore-*) host_arch="XCore" ;;
> @@ -5120,7 +5120,7 @@ else
> esac
> fi
>
> -TARGETS_WITH_JIT="ARM ARM64 Mips PowerPC SystemZ X86"
> +TARGETS_WITH_JIT="ARM AArch64 Mips PowerPC SystemZ X86"
> TARGETS_WITH_JIT=$TARGETS_WITH_JIT
>
>
> @@ -5357,7 +5357,7 @@ _ACEOF
>
> fi
>
> -ALL_TARGETS="X86 Sparc PowerPC ARM ARM64 Mips XCore MSP430 CppBackend NVPTX Hexagon SystemZ R600"
> +ALL_TARGETS="X86 Sparc PowerPC ARM AArch64 Mips XCore MSP430 CppBackend NVPTX Hexagon SystemZ R600"
> ALL_TARGETS=$ALL_TARGETS
>
>
> @@ -5380,8 +5380,8 @@ case "$enableval" in
> x86_64) TARGETS_TO_BUILD="X86 $TARGETS_TO_BUILD" ;;
> sparc) TARGETS_TO_BUILD="Sparc $TARGETS_TO_BUILD" ;;
> powerpc) TARGETS_TO_BUILD="PowerPC $TARGETS_TO_BUILD" ;;
> - aarch64) TARGETS_TO_BUILD="ARM64 $TARGETS_TO_BUILD" ;;
> - arm64) TARGETS_TO_BUILD="ARM64 $TARGETS_TO_BUILD" ;;
> + aarch64) TARGETS_TO_BUILD="AArch64 $TARGETS_TO_BUILD" ;;
> + arm64) TARGETS_TO_BUILD="AArch64 $TARGETS_TO_BUILD" ;;
> arm) TARGETS_TO_BUILD="ARM $TARGETS_TO_BUILD" ;;
> mips) TARGETS_TO_BUILD="Mips $TARGETS_TO_BUILD" ;;
> mipsel) TARGETS_TO_BUILD="Mips $TARGETS_TO_BUILD" ;;
> @@ -5399,7 +5399,7 @@ case "$enableval" in
> x86_64) TARGETS_TO_BUILD="X86 $TARGETS_TO_BUILD" ;;
> Sparc) TARGETS_TO_BUILD="Sparc $TARGETS_TO_BUILD" ;;
> PowerPC) TARGETS_TO_BUILD="PowerPC $TARGETS_TO_BUILD" ;;
> - AArch64) TARGETS_TO_BUILD="ARM64 $TARGETS_TO_BUILD" ;;
> + AArch64) TARGETS_TO_BUILD="AArch64 $TARGETS_TO_BUILD" ;;
> ARM) TARGETS_TO_BUILD="ARM $TARGETS_TO_BUILD" ;;
> Mips) TARGETS_TO_BUILD="Mips $TARGETS_TO_BUILD" ;;
> XCore) TARGETS_TO_BUILD="XCore $TARGETS_TO_BUILD" ;;
>
> Modified: llvm/trunk/docs/LangRef.rst
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/docs/LangRef.rst?rev=209577&r1=209576&r2=209577&view=diff
> ==============================================================================
> --- llvm/trunk/docs/LangRef.rst (original)
> +++ llvm/trunk/docs/LangRef.rst Sat May 24 07:50:23 2014
> @@ -6877,7 +6877,7 @@ register in surrounding code, including
> allocatable registers are not supported.
>
> Warning: So far it only works with the stack pointer on selected
> -architectures (ARM, ARM64, AArch64, PowerPC and x86_64). Significant amount of
> +architectures (ARM, AArch64, PowerPC and x86_64). Significant amount of
> work is needed to support other registers and even more so, allocatable
> registers.
>
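(An aside on the LangRef hunk just above, since only the architecture list
changes there: that warning sits in the named-register read/write intrinsic
section. A rough sketch of the sort of IR it is talking about, reading the
stack pointer with 3.5-era metadata syntax; this is my own example, not part
of the patch:

  ; Sketch only, not from the patch; function name is made up.
  declare i64 @llvm.read_register.i64(metadata) nounwind

  define i64 @current_sp() nounwind {
    ; Reads the register named by !0 ("sp") in the surrounding frame.
    %sp = call i64 @llvm.read_register.i64(metadata !0)
    ret i64 %sp
  }

  !0 = metadata !{metadata !"sp\00"}

The arm64 stackpointer.ll and named-reg-alloc.ll tests in the lists above
cover this area.)
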
>
> Modified: llvm/trunk/include/llvm/IR/Intrinsics.td
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/include/llvm/IR/Intrinsics.td?rev=209577&r1=209576&r2=209577&view=diff
> ==============================================================================
> --- llvm/trunk/include/llvm/IR/Intrinsics.td (original)
> +++ llvm/trunk/include/llvm/IR/Intrinsics.td Sat May 24 07:50:23 2014
> @@ -533,7 +533,7 @@ def int_clear_cache : Intrinsic<[], [llv
> include "llvm/IR/IntrinsicsPowerPC.td"
> include "llvm/IR/IntrinsicsX86.td"
> include "llvm/IR/IntrinsicsARM.td"
> -include "llvm/IR/IntrinsicsARM64.td"
> +include "llvm/IR/IntrinsicsAArch64.td"
> include "llvm/IR/IntrinsicsXCore.td"
> include "llvm/IR/IntrinsicsHexagon.td"
> include "llvm/IR/IntrinsicsNVVM.td"
>
> Copied: llvm/trunk/include/llvm/IR/IntrinsicsAArch64.td (from r209576, llvm/trunk/include/llvm/IR/IntrinsicsARM64.td)
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/include/llvm/IR/IntrinsicsAArch64.td?p2=llvm/trunk/include/llvm/IR/IntrinsicsAArch64.td&p1=llvm/trunk/include/llvm/IR/IntrinsicsARM64.td&r1=209576&r2=209577&rev=209577&view=diff
> ==============================================================================
> --- llvm/trunk/include/llvm/IR/IntrinsicsARM64.td (original)
> +++ llvm/trunk/include/llvm/IR/IntrinsicsAArch64.td Sat May 24 07:50:23 2014
> @@ -1,4 +1,4 @@
> -//===- IntrinsicsARM64.td - Defines ARM64 intrinsics -------*- tablegen -*-===//
> +//===- IntrinsicsAARCH64.td - Defines AARCH64 intrinsics ---*- tablegen -*-===//
> //
> // The LLVM Compiler Infrastructure
> //
> @@ -7,36 +7,36 @@
> //
> //===----------------------------------------------------------------------===//
> //
> -// This file defines all of the ARM64-specific intrinsics.
> +// This file defines all of the AARCH64-specific intrinsics.
> //
> //===----------------------------------------------------------------------===//
>
> -let TargetPrefix = "arm64" in {
> +let TargetPrefix = "aarch64" in {
>
> -def int_arm64_ldxr : Intrinsic<[llvm_i64_ty], [llvm_anyptr_ty]>;
> -def int_arm64_ldaxr : Intrinsic<[llvm_i64_ty], [llvm_anyptr_ty]>;
> -def int_arm64_stxr : Intrinsic<[llvm_i32_ty], [llvm_i64_ty, llvm_anyptr_ty]>;
> -def int_arm64_stlxr : Intrinsic<[llvm_i32_ty], [llvm_i64_ty, llvm_anyptr_ty]>;
> -
> -def int_arm64_ldxp : Intrinsic<[llvm_i64_ty, llvm_i64_ty], [llvm_ptr_ty]>;
> -def int_arm64_ldaxp : Intrinsic<[llvm_i64_ty, llvm_i64_ty], [llvm_ptr_ty]>;
> -def int_arm64_stxp : Intrinsic<[llvm_i32_ty],
> +def int_aarch64_ldxr : Intrinsic<[llvm_i64_ty], [llvm_anyptr_ty]>;
> +def int_aarch64_ldaxr : Intrinsic<[llvm_i64_ty], [llvm_anyptr_ty]>;
> +def int_aarch64_stxr : Intrinsic<[llvm_i32_ty], [llvm_i64_ty, llvm_anyptr_ty]>;
> +def int_aarch64_stlxr : Intrinsic<[llvm_i32_ty], [llvm_i64_ty, llvm_anyptr_ty]>;
> +
> +def int_aarch64_ldxp : Intrinsic<[llvm_i64_ty, llvm_i64_ty], [llvm_ptr_ty]>;
> +def int_aarch64_ldaxp : Intrinsic<[llvm_i64_ty, llvm_i64_ty], [llvm_ptr_ty]>;
> +def int_aarch64_stxp : Intrinsic<[llvm_i32_ty],
> [llvm_i64_ty, llvm_i64_ty, llvm_ptr_ty]>;
> -def int_arm64_stlxp : Intrinsic<[llvm_i32_ty],
> +def int_aarch64_stlxp : Intrinsic<[llvm_i32_ty],
> [llvm_i64_ty, llvm_i64_ty, llvm_ptr_ty]>;
>
> -def int_arm64_clrex : Intrinsic<[]>;
> +def int_aarch64_clrex : Intrinsic<[]>;
>
> -def int_arm64_sdiv : Intrinsic<[llvm_anyint_ty], [LLVMMatchType<0>,
> +def int_aarch64_sdiv : Intrinsic<[llvm_anyint_ty], [LLVMMatchType<0>,
> LLVMMatchType<0>], [IntrNoMem]>;
> -def int_arm64_udiv : Intrinsic<[llvm_anyint_ty], [LLVMMatchType<0>,
> +def int_aarch64_udiv : Intrinsic<[llvm_anyint_ty], [LLVMMatchType<0>,
> LLVMMatchType<0>], [IntrNoMem]>;
> }
>
> //===----------------------------------------------------------------------===//
> // Advanced SIMD (NEON)
>
> -let TargetPrefix = "arm64" in { // All intrinsics start with "llvm.arm64.".
> +let TargetPrefix = "aarch64" in { // All intrinsics start with "llvm.aarch64.".
> class AdvSIMD_2Scalar_Float_Intrinsic
> : Intrinsic<[llvm_anyfloat_ty], [LLVMMatchType<0>, LLVMMatchType<0>],
> [IntrNoMem]>;
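
For anyone skimming: the TargetPrefix switch in the hunk above is what turns
every llvm.arm64.* intrinsic into llvm.aarch64.* at the IR level; argument and
return types are untouched. A minimal sketch of the effect on the exclusive
load/store intrinsics defined here (my own example, not taken from the patch;
the .p0i8 suffix is just the mangling for the overloaded i8* operand):

  ; Illustrative only; the function name below is made up.
  ; before r209577:  %old = call i64 @llvm.arm64.ldxr.p0i8(i8* %addr)
  ; after r209577:
  declare i64 @llvm.aarch64.ldxr.p0i8(i8*)
  declare i32 @llvm.aarch64.stxr.p0i8(i64, i8*)

  define i32 @exclusive_store_byte(i8* %addr, i64 %val) {
    %old    = call i64 @llvm.aarch64.ldxr.p0i8(i8* %addr)           ; load-exclusive
    %status = call i32 @llvm.aarch64.stxr.p0i8(i64 %val, i8* %addr) ; store-exclusive, 0 on success
    ret i32 %status
  }
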
> @@ -139,269 +139,269 @@ let TargetPrefix = "arm64" in { // All
>
> let Properties = [IntrNoMem] in {
> // Vector Add Across Lanes
> - def int_arm64_neon_saddv : AdvSIMD_1VectorArg_Int_Across_Intrinsic;
> - def int_arm64_neon_uaddv : AdvSIMD_1VectorArg_Int_Across_Intrinsic;
> - def int_arm64_neon_faddv : AdvSIMD_1VectorArg_Float_Across_Intrinsic;
> + def int_aarch64_neon_saddv : AdvSIMD_1VectorArg_Int_Across_Intrinsic;
> + def int_aarch64_neon_uaddv : AdvSIMD_1VectorArg_Int_Across_Intrinsic;
> + def int_aarch64_neon_faddv : AdvSIMD_1VectorArg_Float_Across_Intrinsic;
>
> // Vector Long Add Across Lanes
> - def int_arm64_neon_saddlv : AdvSIMD_1VectorArg_Int_Across_Intrinsic;
> - def int_arm64_neon_uaddlv : AdvSIMD_1VectorArg_Int_Across_Intrinsic;
> + def int_aarch64_neon_saddlv : AdvSIMD_1VectorArg_Int_Across_Intrinsic;
> + def int_aarch64_neon_uaddlv : AdvSIMD_1VectorArg_Int_Across_Intrinsic;
>
> // Vector Halving Add
> - def int_arm64_neon_shadd : AdvSIMD_2VectorArg_Intrinsic;
> - def int_arm64_neon_uhadd : AdvSIMD_2VectorArg_Intrinsic;
> + def int_aarch64_neon_shadd : AdvSIMD_2VectorArg_Intrinsic;
> + def int_aarch64_neon_uhadd : AdvSIMD_2VectorArg_Intrinsic;
>
> // Vector Rounding Halving Add
> - def int_arm64_neon_srhadd : AdvSIMD_2VectorArg_Intrinsic;
> - def int_arm64_neon_urhadd : AdvSIMD_2VectorArg_Intrinsic;
> + def int_aarch64_neon_srhadd : AdvSIMD_2VectorArg_Intrinsic;
> + def int_aarch64_neon_urhadd : AdvSIMD_2VectorArg_Intrinsic;
>
> // Vector Saturating Add
> - def int_arm64_neon_sqadd : AdvSIMD_2IntArg_Intrinsic;
> - def int_arm64_neon_suqadd : AdvSIMD_2IntArg_Intrinsic;
> - def int_arm64_neon_usqadd : AdvSIMD_2IntArg_Intrinsic;
> - def int_arm64_neon_uqadd : AdvSIMD_2IntArg_Intrinsic;
> + def int_aarch64_neon_sqadd : AdvSIMD_2IntArg_Intrinsic;
> + def int_aarch64_neon_suqadd : AdvSIMD_2IntArg_Intrinsic;
> + def int_aarch64_neon_usqadd : AdvSIMD_2IntArg_Intrinsic;
> + def int_aarch64_neon_uqadd : AdvSIMD_2IntArg_Intrinsic;
>
> // Vector Add High-Half
> // FIXME: this is a legacy intrinsic for aarch64_simd.h. Remove it when that
> // header is no longer supported.
> - def int_arm64_neon_addhn : AdvSIMD_2VectorArg_Narrow_Intrinsic;
> + def int_aarch64_neon_addhn : AdvSIMD_2VectorArg_Narrow_Intrinsic;
>
> // Vector Rounding Add High-Half
> - def int_arm64_neon_raddhn : AdvSIMD_2VectorArg_Narrow_Intrinsic;
> + def int_aarch64_neon_raddhn : AdvSIMD_2VectorArg_Narrow_Intrinsic;
>
> // Vector Saturating Doubling Multiply High
> - def int_arm64_neon_sqdmulh : AdvSIMD_2IntArg_Intrinsic;
> + def int_aarch64_neon_sqdmulh : AdvSIMD_2IntArg_Intrinsic;
>
> // Vector Saturating Rounding Doubling Multiply High
> - def int_arm64_neon_sqrdmulh : AdvSIMD_2IntArg_Intrinsic;
> + def int_aarch64_neon_sqrdmulh : AdvSIMD_2IntArg_Intrinsic;
>
> // Vector Polynominal Multiply
> - def int_arm64_neon_pmul : AdvSIMD_2VectorArg_Intrinsic;
> + def int_aarch64_neon_pmul : AdvSIMD_2VectorArg_Intrinsic;
>
> // Vector Long Multiply
> - def int_arm64_neon_smull : AdvSIMD_2VectorArg_Long_Intrinsic;
> - def int_arm64_neon_umull : AdvSIMD_2VectorArg_Long_Intrinsic;
> - def int_arm64_neon_pmull : AdvSIMD_2VectorArg_Long_Intrinsic;
> + def int_aarch64_neon_smull : AdvSIMD_2VectorArg_Long_Intrinsic;
> + def int_aarch64_neon_umull : AdvSIMD_2VectorArg_Long_Intrinsic;
> + def int_aarch64_neon_pmull : AdvSIMD_2VectorArg_Long_Intrinsic;
>
> // 64-bit polynomial multiply really returns an i128, which is not legal. Fake
> // it with a v16i8.
> - def int_arm64_neon_pmull64 :
> + def int_aarch64_neon_pmull64 :
> Intrinsic<[llvm_v16i8_ty], [llvm_i64_ty, llvm_i64_ty], [IntrNoMem]>;
>
> // Vector Extending Multiply
> - def int_arm64_neon_fmulx : AdvSIMD_2FloatArg_Intrinsic {
> + def int_aarch64_neon_fmulx : AdvSIMD_2FloatArg_Intrinsic {
> let Properties = [IntrNoMem, Commutative];
> }
>
> // Vector Saturating Doubling Long Multiply
> - def int_arm64_neon_sqdmull : AdvSIMD_2VectorArg_Long_Intrinsic;
> - def int_arm64_neon_sqdmulls_scalar
> + def int_aarch64_neon_sqdmull : AdvSIMD_2VectorArg_Long_Intrinsic;
> + def int_aarch64_neon_sqdmulls_scalar
> : Intrinsic<[llvm_i64_ty], [llvm_i32_ty, llvm_i32_ty], [IntrNoMem]>;
>
> // Vector Halving Subtract
> - def int_arm64_neon_shsub : AdvSIMD_2VectorArg_Intrinsic;
> - def int_arm64_neon_uhsub : AdvSIMD_2VectorArg_Intrinsic;
> + def int_aarch64_neon_shsub : AdvSIMD_2VectorArg_Intrinsic;
> + def int_aarch64_neon_uhsub : AdvSIMD_2VectorArg_Intrinsic;
>
> // Vector Saturating Subtract
> - def int_arm64_neon_sqsub : AdvSIMD_2IntArg_Intrinsic;
> - def int_arm64_neon_uqsub : AdvSIMD_2IntArg_Intrinsic;
> + def int_aarch64_neon_sqsub : AdvSIMD_2IntArg_Intrinsic;
> + def int_aarch64_neon_uqsub : AdvSIMD_2IntArg_Intrinsic;
>
> // Vector Subtract High-Half
> // FIXME: this is a legacy intrinsic for aarch64_simd.h. Remove it when that
> // header is no longer supported.
> - def int_arm64_neon_subhn : AdvSIMD_2VectorArg_Narrow_Intrinsic;
> + def int_aarch64_neon_subhn : AdvSIMD_2VectorArg_Narrow_Intrinsic;
>
> // Vector Rounding Subtract High-Half
> - def int_arm64_neon_rsubhn : AdvSIMD_2VectorArg_Narrow_Intrinsic;
> + def int_aarch64_neon_rsubhn : AdvSIMD_2VectorArg_Narrow_Intrinsic;
>
> // Vector Compare Absolute Greater-than-or-equal
> - def int_arm64_neon_facge : AdvSIMD_2Arg_FloatCompare_Intrinsic;
> + def int_aarch64_neon_facge : AdvSIMD_2Arg_FloatCompare_Intrinsic;
>
> // Vector Compare Absolute Greater-than
> - def int_arm64_neon_facgt : AdvSIMD_2Arg_FloatCompare_Intrinsic;
> + def int_aarch64_neon_facgt : AdvSIMD_2Arg_FloatCompare_Intrinsic;
>
> // Vector Absolute Difference
> - def int_arm64_neon_sabd : AdvSIMD_2VectorArg_Intrinsic;
> - def int_arm64_neon_uabd : AdvSIMD_2VectorArg_Intrinsic;
> - def int_arm64_neon_fabd : AdvSIMD_2VectorArg_Intrinsic;
> + def int_aarch64_neon_sabd : AdvSIMD_2VectorArg_Intrinsic;
> + def int_aarch64_neon_uabd : AdvSIMD_2VectorArg_Intrinsic;
> + def int_aarch64_neon_fabd : AdvSIMD_2VectorArg_Intrinsic;
>
> // Scalar Absolute Difference
> - def int_arm64_sisd_fabd : AdvSIMD_2Scalar_Float_Intrinsic;
> + def int_aarch64_sisd_fabd : AdvSIMD_2Scalar_Float_Intrinsic;
>
> // Vector Max
> - def int_arm64_neon_smax : AdvSIMD_2VectorArg_Intrinsic;
> - def int_arm64_neon_umax : AdvSIMD_2VectorArg_Intrinsic;
> - def int_arm64_neon_fmax : AdvSIMD_2VectorArg_Intrinsic;
> - def int_arm64_neon_fmaxnmp : AdvSIMD_2VectorArg_Intrinsic;
> + def int_aarch64_neon_smax : AdvSIMD_2VectorArg_Intrinsic;
> + def int_aarch64_neon_umax : AdvSIMD_2VectorArg_Intrinsic;
> + def int_aarch64_neon_fmax : AdvSIMD_2VectorArg_Intrinsic;
> + def int_aarch64_neon_fmaxnmp : AdvSIMD_2VectorArg_Intrinsic;
>
> // Vector Max Across Lanes
> - def int_arm64_neon_smaxv : AdvSIMD_1VectorArg_Int_Across_Intrinsic;
> - def int_arm64_neon_umaxv : AdvSIMD_1VectorArg_Int_Across_Intrinsic;
> - def int_arm64_neon_fmaxv : AdvSIMD_1VectorArg_Float_Across_Intrinsic;
> - def int_arm64_neon_fmaxnmv : AdvSIMD_1VectorArg_Float_Across_Intrinsic;
> + def int_aarch64_neon_smaxv : AdvSIMD_1VectorArg_Int_Across_Intrinsic;
> + def int_aarch64_neon_umaxv : AdvSIMD_1VectorArg_Int_Across_Intrinsic;
> + def int_aarch64_neon_fmaxv : AdvSIMD_1VectorArg_Float_Across_Intrinsic;
> + def int_aarch64_neon_fmaxnmv : AdvSIMD_1VectorArg_Float_Across_Intrinsic;
>
> // Vector Min
> - def int_arm64_neon_smin : AdvSIMD_2VectorArg_Intrinsic;
> - def int_arm64_neon_umin : AdvSIMD_2VectorArg_Intrinsic;
> - def int_arm64_neon_fmin : AdvSIMD_2VectorArg_Intrinsic;
> - def int_arm64_neon_fminnmp : AdvSIMD_2VectorArg_Intrinsic;
> + def int_aarch64_neon_smin : AdvSIMD_2VectorArg_Intrinsic;
> + def int_aarch64_neon_umin : AdvSIMD_2VectorArg_Intrinsic;
> + def int_aarch64_neon_fmin : AdvSIMD_2VectorArg_Intrinsic;
> + def int_aarch64_neon_fminnmp : AdvSIMD_2VectorArg_Intrinsic;
>
> // Vector Min/Max Number
> - def int_arm64_neon_fminnm : AdvSIMD_2FloatArg_Intrinsic;
> - def int_arm64_neon_fmaxnm : AdvSIMD_2FloatArg_Intrinsic;
> + def int_aarch64_neon_fminnm : AdvSIMD_2FloatArg_Intrinsic;
> + def int_aarch64_neon_fmaxnm : AdvSIMD_2FloatArg_Intrinsic;
>
> // Vector Min Across Lanes
> - def int_arm64_neon_sminv : AdvSIMD_1VectorArg_Int_Across_Intrinsic;
> - def int_arm64_neon_uminv : AdvSIMD_1VectorArg_Int_Across_Intrinsic;
> - def int_arm64_neon_fminv : AdvSIMD_1VectorArg_Float_Across_Intrinsic;
> - def int_arm64_neon_fminnmv : AdvSIMD_1VectorArg_Float_Across_Intrinsic;
> + def int_aarch64_neon_sminv : AdvSIMD_1VectorArg_Int_Across_Intrinsic;
> + def int_aarch64_neon_uminv : AdvSIMD_1VectorArg_Int_Across_Intrinsic;
> + def int_aarch64_neon_fminv : AdvSIMD_1VectorArg_Float_Across_Intrinsic;
> + def int_aarch64_neon_fminnmv : AdvSIMD_1VectorArg_Float_Across_Intrinsic;
>
> // Pairwise Add
> - def int_arm64_neon_addp : AdvSIMD_2VectorArg_Intrinsic;
> + def int_aarch64_neon_addp : AdvSIMD_2VectorArg_Intrinsic;
>
> // Long Pairwise Add
> // FIXME: In theory, we shouldn't need intrinsics for saddlp or
> // uaddlp, but tblgen's type inference currently can't handle the
> // pattern fragments this ends up generating.
> - def int_arm64_neon_saddlp : AdvSIMD_1VectorArg_Expand_Intrinsic;
> - def int_arm64_neon_uaddlp : AdvSIMD_1VectorArg_Expand_Intrinsic;
> + def int_aarch64_neon_saddlp : AdvSIMD_1VectorArg_Expand_Intrinsic;
> + def int_aarch64_neon_uaddlp : AdvSIMD_1VectorArg_Expand_Intrinsic;
>
> // Folding Maximum
> - def int_arm64_neon_smaxp : AdvSIMD_2VectorArg_Intrinsic;
> - def int_arm64_neon_umaxp : AdvSIMD_2VectorArg_Intrinsic;
> - def int_arm64_neon_fmaxp : AdvSIMD_2VectorArg_Intrinsic;
> + def int_aarch64_neon_smaxp : AdvSIMD_2VectorArg_Intrinsic;
> + def int_aarch64_neon_umaxp : AdvSIMD_2VectorArg_Intrinsic;
> + def int_aarch64_neon_fmaxp : AdvSIMD_2VectorArg_Intrinsic;
>
> // Folding Minimum
> - def int_arm64_neon_sminp : AdvSIMD_2VectorArg_Intrinsic;
> - def int_arm64_neon_uminp : AdvSIMD_2VectorArg_Intrinsic;
> - def int_arm64_neon_fminp : AdvSIMD_2VectorArg_Intrinsic;
> + def int_aarch64_neon_sminp : AdvSIMD_2VectorArg_Intrinsic;
> + def int_aarch64_neon_uminp : AdvSIMD_2VectorArg_Intrinsic;
> + def int_aarch64_neon_fminp : AdvSIMD_2VectorArg_Intrinsic;
>
> // Reciprocal Estimate/Step
> - def int_arm64_neon_frecps : AdvSIMD_2FloatArg_Intrinsic;
> - def int_arm64_neon_frsqrts : AdvSIMD_2FloatArg_Intrinsic;
> + def int_aarch64_neon_frecps : AdvSIMD_2FloatArg_Intrinsic;
> + def int_aarch64_neon_frsqrts : AdvSIMD_2FloatArg_Intrinsic;
>
> // Reciprocal Exponent
> - def int_arm64_neon_frecpx : AdvSIMD_1FloatArg_Intrinsic;
> + def int_aarch64_neon_frecpx : AdvSIMD_1FloatArg_Intrinsic;
>
> // Vector Saturating Shift Left
> - def int_arm64_neon_sqshl : AdvSIMD_2IntArg_Intrinsic;
> - def int_arm64_neon_uqshl : AdvSIMD_2IntArg_Intrinsic;
> + def int_aarch64_neon_sqshl : AdvSIMD_2IntArg_Intrinsic;
> + def int_aarch64_neon_uqshl : AdvSIMD_2IntArg_Intrinsic;
>
> // Vector Rounding Shift Left
> - def int_arm64_neon_srshl : AdvSIMD_2IntArg_Intrinsic;
> - def int_arm64_neon_urshl : AdvSIMD_2IntArg_Intrinsic;
> + def int_aarch64_neon_srshl : AdvSIMD_2IntArg_Intrinsic;
> + def int_aarch64_neon_urshl : AdvSIMD_2IntArg_Intrinsic;
>
> // Vector Saturating Rounding Shift Left
> - def int_arm64_neon_sqrshl : AdvSIMD_2IntArg_Intrinsic;
> - def int_arm64_neon_uqrshl : AdvSIMD_2IntArg_Intrinsic;
> + def int_aarch64_neon_sqrshl : AdvSIMD_2IntArg_Intrinsic;
> + def int_aarch64_neon_uqrshl : AdvSIMD_2IntArg_Intrinsic;
>
> // Vector Signed->Unsigned Shift Left by Constant
> - def int_arm64_neon_sqshlu : AdvSIMD_2IntArg_Intrinsic;
> + def int_aarch64_neon_sqshlu : AdvSIMD_2IntArg_Intrinsic;
>
> // Vector Signed->Unsigned Narrowing Saturating Shift Right by Constant
> - def int_arm64_neon_sqshrun : AdvSIMD_2Arg_Scalar_Narrow_Intrinsic;
> + def int_aarch64_neon_sqshrun : AdvSIMD_2Arg_Scalar_Narrow_Intrinsic;
>
> // Vector Signed->Unsigned Rounding Narrowing Saturating Shift Right by Const
> - def int_arm64_neon_sqrshrun : AdvSIMD_2Arg_Scalar_Narrow_Intrinsic;
> + def int_aarch64_neon_sqrshrun : AdvSIMD_2Arg_Scalar_Narrow_Intrinsic;
>
> // Vector Narrowing Shift Right by Constant
> - def int_arm64_neon_sqshrn : AdvSIMD_2Arg_Scalar_Narrow_Intrinsic;
> - def int_arm64_neon_uqshrn : AdvSIMD_2Arg_Scalar_Narrow_Intrinsic;
> + def int_aarch64_neon_sqshrn : AdvSIMD_2Arg_Scalar_Narrow_Intrinsic;
> + def int_aarch64_neon_uqshrn : AdvSIMD_2Arg_Scalar_Narrow_Intrinsic;
>
> // Vector Rounding Narrowing Shift Right by Constant
> - def int_arm64_neon_rshrn : AdvSIMD_2Arg_Scalar_Narrow_Intrinsic;
> + def int_aarch64_neon_rshrn : AdvSIMD_2Arg_Scalar_Narrow_Intrinsic;
>
> // Vector Rounding Narrowing Saturating Shift Right by Constant
> - def int_arm64_neon_sqrshrn : AdvSIMD_2Arg_Scalar_Narrow_Intrinsic;
> - def int_arm64_neon_uqrshrn : AdvSIMD_2Arg_Scalar_Narrow_Intrinsic;
> + def int_aarch64_neon_sqrshrn : AdvSIMD_2Arg_Scalar_Narrow_Intrinsic;
> + def int_aarch64_neon_uqrshrn : AdvSIMD_2Arg_Scalar_Narrow_Intrinsic;
>
> // Vector Shift Left
> - def int_arm64_neon_sshl : AdvSIMD_2IntArg_Intrinsic;
> - def int_arm64_neon_ushl : AdvSIMD_2IntArg_Intrinsic;
> + def int_aarch64_neon_sshl : AdvSIMD_2IntArg_Intrinsic;
> + def int_aarch64_neon_ushl : AdvSIMD_2IntArg_Intrinsic;
>
> // Vector Widening Shift Left by Constant
> - def int_arm64_neon_shll : AdvSIMD_2VectorArg_Scalar_Wide_BySize_Intrinsic;
> - def int_arm64_neon_sshll : AdvSIMD_2VectorArg_Scalar_Wide_Intrinsic;
> - def int_arm64_neon_ushll : AdvSIMD_2VectorArg_Scalar_Wide_Intrinsic;
> + def int_aarch64_neon_shll : AdvSIMD_2VectorArg_Scalar_Wide_BySize_Intrinsic;
> + def int_aarch64_neon_sshll : AdvSIMD_2VectorArg_Scalar_Wide_Intrinsic;
> + def int_aarch64_neon_ushll : AdvSIMD_2VectorArg_Scalar_Wide_Intrinsic;
>
> // Vector Shift Right by Constant and Insert
> - def int_arm64_neon_vsri : AdvSIMD_3VectorArg_Scalar_Intrinsic;
> + def int_aarch64_neon_vsri : AdvSIMD_3VectorArg_Scalar_Intrinsic;
>
> // Vector Shift Left by Constant and Insert
> - def int_arm64_neon_vsli : AdvSIMD_3VectorArg_Scalar_Intrinsic;
> + def int_aarch64_neon_vsli : AdvSIMD_3VectorArg_Scalar_Intrinsic;
>
> // Vector Saturating Narrow
> - def int_arm64_neon_scalar_sqxtn: AdvSIMD_1IntArg_Narrow_Intrinsic;
> - def int_arm64_neon_scalar_uqxtn : AdvSIMD_1IntArg_Narrow_Intrinsic;
> - def int_arm64_neon_sqxtn : AdvSIMD_1VectorArg_Narrow_Intrinsic;
> - def int_arm64_neon_uqxtn : AdvSIMD_1VectorArg_Narrow_Intrinsic;
> + def int_aarch64_neon_scalar_sqxtn: AdvSIMD_1IntArg_Narrow_Intrinsic;
> + def int_aarch64_neon_scalar_uqxtn : AdvSIMD_1IntArg_Narrow_Intrinsic;
> + def int_aarch64_neon_sqxtn : AdvSIMD_1VectorArg_Narrow_Intrinsic;
> + def int_aarch64_neon_uqxtn : AdvSIMD_1VectorArg_Narrow_Intrinsic;
>
> // Vector Saturating Extract and Unsigned Narrow
> - def int_arm64_neon_scalar_sqxtun : AdvSIMD_1IntArg_Narrow_Intrinsic;
> - def int_arm64_neon_sqxtun : AdvSIMD_1VectorArg_Narrow_Intrinsic;
> + def int_aarch64_neon_scalar_sqxtun : AdvSIMD_1IntArg_Narrow_Intrinsic;
> + def int_aarch64_neon_sqxtun : AdvSIMD_1VectorArg_Narrow_Intrinsic;
>
> // Vector Absolute Value
> - def int_arm64_neon_abs : AdvSIMD_1IntArg_Intrinsic;
> + def int_aarch64_neon_abs : AdvSIMD_1IntArg_Intrinsic;
>
> // Vector Saturating Absolute Value
> - def int_arm64_neon_sqabs : AdvSIMD_1IntArg_Intrinsic;
> + def int_aarch64_neon_sqabs : AdvSIMD_1IntArg_Intrinsic;
>
> // Vector Saturating Negation
> - def int_arm64_neon_sqneg : AdvSIMD_1IntArg_Intrinsic;
> + def int_aarch64_neon_sqneg : AdvSIMD_1IntArg_Intrinsic;
>
> // Vector Count Leading Sign Bits
> - def int_arm64_neon_cls : AdvSIMD_1VectorArg_Intrinsic;
> + def int_aarch64_neon_cls : AdvSIMD_1VectorArg_Intrinsic;
>
> // Vector Reciprocal Estimate
> - def int_arm64_neon_urecpe : AdvSIMD_1VectorArg_Intrinsic;
> - def int_arm64_neon_frecpe : AdvSIMD_1FloatArg_Intrinsic;
> + def int_aarch64_neon_urecpe : AdvSIMD_1VectorArg_Intrinsic;
> + def int_aarch64_neon_frecpe : AdvSIMD_1FloatArg_Intrinsic;
>
> // Vector Square Root Estimate
> - def int_arm64_neon_ursqrte : AdvSIMD_1VectorArg_Intrinsic;
> - def int_arm64_neon_frsqrte : AdvSIMD_1FloatArg_Intrinsic;
> + def int_aarch64_neon_ursqrte : AdvSIMD_1VectorArg_Intrinsic;
> + def int_aarch64_neon_frsqrte : AdvSIMD_1FloatArg_Intrinsic;
>
> // Vector Bitwise Reverse
> - def int_arm64_neon_rbit : AdvSIMD_1VectorArg_Intrinsic;
> + def int_aarch64_neon_rbit : AdvSIMD_1VectorArg_Intrinsic;
>
> // Vector Conversions Between Half-Precision and Single-Precision.
> - def int_arm64_neon_vcvtfp2hf
> + def int_aarch64_neon_vcvtfp2hf
> : Intrinsic<[llvm_v4i16_ty], [llvm_v4f32_ty], [IntrNoMem]>;
> - def int_arm64_neon_vcvthf2fp
> + def int_aarch64_neon_vcvthf2fp
> : Intrinsic<[llvm_v4f32_ty], [llvm_v4i16_ty], [IntrNoMem]>;
>
> // Vector Conversions Between Floating-point and Fixed-point.
> - def int_arm64_neon_vcvtfp2fxs : AdvSIMD_CvtFPToFx_Intrinsic;
> - def int_arm64_neon_vcvtfp2fxu : AdvSIMD_CvtFPToFx_Intrinsic;
> - def int_arm64_neon_vcvtfxs2fp : AdvSIMD_CvtFxToFP_Intrinsic;
> - def int_arm64_neon_vcvtfxu2fp : AdvSIMD_CvtFxToFP_Intrinsic;
> + def int_aarch64_neon_vcvtfp2fxs : AdvSIMD_CvtFPToFx_Intrinsic;
> + def int_aarch64_neon_vcvtfp2fxu : AdvSIMD_CvtFPToFx_Intrinsic;
> + def int_aarch64_neon_vcvtfxs2fp : AdvSIMD_CvtFxToFP_Intrinsic;
> + def int_aarch64_neon_vcvtfxu2fp : AdvSIMD_CvtFxToFP_Intrinsic;
>
> // Vector FP->Int Conversions
> - def int_arm64_neon_fcvtas : AdvSIMD_FPToIntRounding_Intrinsic;
> - def int_arm64_neon_fcvtau : AdvSIMD_FPToIntRounding_Intrinsic;
> - def int_arm64_neon_fcvtms : AdvSIMD_FPToIntRounding_Intrinsic;
> - def int_arm64_neon_fcvtmu : AdvSIMD_FPToIntRounding_Intrinsic;
> - def int_arm64_neon_fcvtns : AdvSIMD_FPToIntRounding_Intrinsic;
> - def int_arm64_neon_fcvtnu : AdvSIMD_FPToIntRounding_Intrinsic;
> - def int_arm64_neon_fcvtps : AdvSIMD_FPToIntRounding_Intrinsic;
> - def int_arm64_neon_fcvtpu : AdvSIMD_FPToIntRounding_Intrinsic;
> - def int_arm64_neon_fcvtzs : AdvSIMD_FPToIntRounding_Intrinsic;
> - def int_arm64_neon_fcvtzu : AdvSIMD_FPToIntRounding_Intrinsic;
> + def int_aarch64_neon_fcvtas : AdvSIMD_FPToIntRounding_Intrinsic;
> + def int_aarch64_neon_fcvtau : AdvSIMD_FPToIntRounding_Intrinsic;
> + def int_aarch64_neon_fcvtms : AdvSIMD_FPToIntRounding_Intrinsic;
> + def int_aarch64_neon_fcvtmu : AdvSIMD_FPToIntRounding_Intrinsic;
> + def int_aarch64_neon_fcvtns : AdvSIMD_FPToIntRounding_Intrinsic;
> + def int_aarch64_neon_fcvtnu : AdvSIMD_FPToIntRounding_Intrinsic;
> + def int_aarch64_neon_fcvtps : AdvSIMD_FPToIntRounding_Intrinsic;
> + def int_aarch64_neon_fcvtpu : AdvSIMD_FPToIntRounding_Intrinsic;
> + def int_aarch64_neon_fcvtzs : AdvSIMD_FPToIntRounding_Intrinsic;
> + def int_aarch64_neon_fcvtzu : AdvSIMD_FPToIntRounding_Intrinsic;
>
> // Vector FP Rounding: only ties to even is unrepresented by a normal
> // intrinsic.
> - def int_arm64_neon_frintn : AdvSIMD_1FloatArg_Intrinsic;
> + def int_aarch64_neon_frintn : AdvSIMD_1FloatArg_Intrinsic;
>
> // Scalar FP->Int conversions
>
> // Vector FP Inexact Narrowing
> - def int_arm64_neon_fcvtxn : AdvSIMD_1VectorArg_Expand_Intrinsic;
> + def int_aarch64_neon_fcvtxn : AdvSIMD_1VectorArg_Expand_Intrinsic;
>
> // Scalar FP Inexact Narrowing
> - def int_arm64_sisd_fcvtxn : Intrinsic<[llvm_float_ty], [llvm_double_ty],
> + def int_aarch64_sisd_fcvtxn : Intrinsic<[llvm_float_ty], [llvm_double_ty],
> [IntrNoMem]>;
> }
>
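Out-of-tree front ends picking up this rename now need to request the
intrinsics through the new prefix. Below is a minimal sketch of doing that
with the LLVM-3.5-era C++ API; the helper name and the choice of the
<4 x i32> overload of sqsub are illustrative only, not from this patch.

  #include "llvm/IR/DerivedTypes.h"
  #include "llvm/IR/IRBuilder.h"
  #include "llvm/IR/Intrinsics.h"
  #include "llvm/IR/Module.h"

  // Emit a saturating subtract via the renamed, overloaded intrinsic.
  // The request below resolves to "llvm.aarch64.neon.sqsub.v4i32".
  llvm::Value *emitSqSubV4I32(llvm::IRBuilder<> &B, llvm::Module *M,
                              llvm::Value *LHS, llvm::Value *RHS) {
    llvm::Type *Tys[] = {llvm::VectorType::get(B.getInt32Ty(), 4)};
    llvm::Function *SqSub = llvm::Intrinsic::getDeclaration(
        M, llvm::Intrinsic::aarch64_neon_sqsub, Tys);
    llvm::Value *Args[] = {LHS, RHS};
    return B.CreateCall(SqSub, Args);
  }
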
> -let TargetPrefix = "arm64" in { // All intrinsics start with "llvm.arm64.".
> +let TargetPrefix = "aarch64" in { // All intrinsics start with "llvm.aarch64.".
> class AdvSIMD_2Vector2Index_Intrinsic
> : Intrinsic<[llvm_anyvector_ty],
> [llvm_anyvector_ty, llvm_i64_ty, LLVMMatchType<0>, llvm_i64_ty],
> @@ -409,9 +409,9 @@ let TargetPrefix = "arm64" in { // All
> }
>
> // Vector element to element moves
> -def int_arm64_neon_vcopy_lane: AdvSIMD_2Vector2Index_Intrinsic;
> +def int_aarch64_neon_vcopy_lane: AdvSIMD_2Vector2Index_Intrinsic;
>
> -let TargetPrefix = "arm64" in { // All intrinsics start with "llvm.arm64.".
> +let TargetPrefix = "aarch64" in { // All intrinsics start with "llvm.aarch64.".
> class AdvSIMD_1Vec_Load_Intrinsic
> : Intrinsic<[llvm_anyvector_ty], [LLVMAnyPointerType<LLVMMatchType<0>>],
> [IntrReadArgMem]>;
> @@ -482,35 +482,35 @@ let TargetPrefix = "arm64" in { // All
>
> // Memory ops
>
> -def int_arm64_neon_ld1x2 : AdvSIMD_2Vec_Load_Intrinsic;
> -def int_arm64_neon_ld1x3 : AdvSIMD_3Vec_Load_Intrinsic;
> -def int_arm64_neon_ld1x4 : AdvSIMD_4Vec_Load_Intrinsic;
> -
> -def int_arm64_neon_st1x2 : AdvSIMD_2Vec_Store_Intrinsic;
> -def int_arm64_neon_st1x3 : AdvSIMD_3Vec_Store_Intrinsic;
> -def int_arm64_neon_st1x4 : AdvSIMD_4Vec_Store_Intrinsic;
> -
> -def int_arm64_neon_ld2 : AdvSIMD_2Vec_Load_Intrinsic;
> -def int_arm64_neon_ld3 : AdvSIMD_3Vec_Load_Intrinsic;
> -def int_arm64_neon_ld4 : AdvSIMD_4Vec_Load_Intrinsic;
> -
> -def int_arm64_neon_ld2lane : AdvSIMD_2Vec_Load_Lane_Intrinsic;
> -def int_arm64_neon_ld3lane : AdvSIMD_3Vec_Load_Lane_Intrinsic;
> -def int_arm64_neon_ld4lane : AdvSIMD_4Vec_Load_Lane_Intrinsic;
> -
> -def int_arm64_neon_ld2r : AdvSIMD_2Vec_Load_Intrinsic;
> -def int_arm64_neon_ld3r : AdvSIMD_3Vec_Load_Intrinsic;
> -def int_arm64_neon_ld4r : AdvSIMD_4Vec_Load_Intrinsic;
> -
> -def int_arm64_neon_st2 : AdvSIMD_2Vec_Store_Intrinsic;
> -def int_arm64_neon_st3 : AdvSIMD_3Vec_Store_Intrinsic;
> -def int_arm64_neon_st4 : AdvSIMD_4Vec_Store_Intrinsic;
> -
> -def int_arm64_neon_st2lane : AdvSIMD_2Vec_Store_Lane_Intrinsic;
> -def int_arm64_neon_st3lane : AdvSIMD_3Vec_Store_Lane_Intrinsic;
> -def int_arm64_neon_st4lane : AdvSIMD_4Vec_Store_Lane_Intrinsic;
> +def int_aarch64_neon_ld1x2 : AdvSIMD_2Vec_Load_Intrinsic;
> +def int_aarch64_neon_ld1x3 : AdvSIMD_3Vec_Load_Intrinsic;
> +def int_aarch64_neon_ld1x4 : AdvSIMD_4Vec_Load_Intrinsic;
> +
> +def int_aarch64_neon_st1x2 : AdvSIMD_2Vec_Store_Intrinsic;
> +def int_aarch64_neon_st1x3 : AdvSIMD_3Vec_Store_Intrinsic;
> +def int_aarch64_neon_st1x4 : AdvSIMD_4Vec_Store_Intrinsic;
> +
> +def int_aarch64_neon_ld2 : AdvSIMD_2Vec_Load_Intrinsic;
> +def int_aarch64_neon_ld3 : AdvSIMD_3Vec_Load_Intrinsic;
> +def int_aarch64_neon_ld4 : AdvSIMD_4Vec_Load_Intrinsic;
> +
> +def int_aarch64_neon_ld2lane : AdvSIMD_2Vec_Load_Lane_Intrinsic;
> +def int_aarch64_neon_ld3lane : AdvSIMD_3Vec_Load_Lane_Intrinsic;
> +def int_aarch64_neon_ld4lane : AdvSIMD_4Vec_Load_Lane_Intrinsic;
> +
> +def int_aarch64_neon_ld2r : AdvSIMD_2Vec_Load_Intrinsic;
> +def int_aarch64_neon_ld3r : AdvSIMD_3Vec_Load_Intrinsic;
> +def int_aarch64_neon_ld4r : AdvSIMD_4Vec_Load_Intrinsic;
> +
> +def int_aarch64_neon_st2 : AdvSIMD_2Vec_Store_Intrinsic;
> +def int_aarch64_neon_st3 : AdvSIMD_3Vec_Store_Intrinsic;
> +def int_aarch64_neon_st4 : AdvSIMD_4Vec_Store_Intrinsic;
> +
> +def int_aarch64_neon_st2lane : AdvSIMD_2Vec_Store_Lane_Intrinsic;
> +def int_aarch64_neon_st3lane : AdvSIMD_3Vec_Store_Lane_Intrinsic;
> +def int_aarch64_neon_st4lane : AdvSIMD_4Vec_Store_Lane_Intrinsic;
>
> -let TargetPrefix = "arm64" in { // All intrinsics start with "llvm.arm64.".
> +let TargetPrefix = "aarch64" in { // All intrinsics start with "llvm.aarch64.".
> class AdvSIMD_Tbl1_Intrinsic
> : Intrinsic<[llvm_anyvector_ty], [llvm_v16i8_ty, LLVMMatchType<0>],
> [IntrNoMem]>;
> @@ -548,17 +548,17 @@ let TargetPrefix = "arm64" in { // All
> llvm_v16i8_ty, llvm_v16i8_ty, LLVMMatchType<0>],
> [IntrNoMem]>;
> }
> -def int_arm64_neon_tbl1 : AdvSIMD_Tbl1_Intrinsic;
> -def int_arm64_neon_tbl2 : AdvSIMD_Tbl2_Intrinsic;
> -def int_arm64_neon_tbl3 : AdvSIMD_Tbl3_Intrinsic;
> -def int_arm64_neon_tbl4 : AdvSIMD_Tbl4_Intrinsic;
> -
> -def int_arm64_neon_tbx1 : AdvSIMD_Tbx1_Intrinsic;
> -def int_arm64_neon_tbx2 : AdvSIMD_Tbx2_Intrinsic;
> -def int_arm64_neon_tbx3 : AdvSIMD_Tbx3_Intrinsic;
> -def int_arm64_neon_tbx4 : AdvSIMD_Tbx4_Intrinsic;
> +def int_aarch64_neon_tbl1 : AdvSIMD_Tbl1_Intrinsic;
> +def int_aarch64_neon_tbl2 : AdvSIMD_Tbl2_Intrinsic;
> +def int_aarch64_neon_tbl3 : AdvSIMD_Tbl3_Intrinsic;
> +def int_aarch64_neon_tbl4 : AdvSIMD_Tbl4_Intrinsic;
> +
> +def int_aarch64_neon_tbx1 : AdvSIMD_Tbx1_Intrinsic;
> +def int_aarch64_neon_tbx2 : AdvSIMD_Tbx2_Intrinsic;
> +def int_aarch64_neon_tbx3 : AdvSIMD_Tbx3_Intrinsic;
> +def int_aarch64_neon_tbx4 : AdvSIMD_Tbx4_Intrinsic;
>
> -let TargetPrefix = "arm64" in {
> +let TargetPrefix = "aarch64" in {
> class Crypto_AES_DataKey_Intrinsic
> : Intrinsic<[llvm_v16i8_ty], [llvm_v16i8_ty, llvm_v16i8_ty], [IntrNoMem]>;
>
> @@ -592,45 +592,45 @@ let TargetPrefix = "arm64" in {
> }
>
> // AES
> -def int_arm64_crypto_aese : Crypto_AES_DataKey_Intrinsic;
> -def int_arm64_crypto_aesd : Crypto_AES_DataKey_Intrinsic;
> -def int_arm64_crypto_aesmc : Crypto_AES_Data_Intrinsic;
> -def int_arm64_crypto_aesimc : Crypto_AES_Data_Intrinsic;
> +def int_aarch64_crypto_aese : Crypto_AES_DataKey_Intrinsic;
> +def int_aarch64_crypto_aesd : Crypto_AES_DataKey_Intrinsic;
> +def int_aarch64_crypto_aesmc : Crypto_AES_Data_Intrinsic;
> +def int_aarch64_crypto_aesimc : Crypto_AES_Data_Intrinsic;
>
> // SHA1
> -def int_arm64_crypto_sha1c : Crypto_SHA_5Hash4Schedule_Intrinsic;
> -def int_arm64_crypto_sha1p : Crypto_SHA_5Hash4Schedule_Intrinsic;
> -def int_arm64_crypto_sha1m : Crypto_SHA_5Hash4Schedule_Intrinsic;
> -def int_arm64_crypto_sha1h : Crypto_SHA_1Hash_Intrinsic;
> +def int_aarch64_crypto_sha1c : Crypto_SHA_5Hash4Schedule_Intrinsic;
> +def int_aarch64_crypto_sha1p : Crypto_SHA_5Hash4Schedule_Intrinsic;
> +def int_aarch64_crypto_sha1m : Crypto_SHA_5Hash4Schedule_Intrinsic;
> +def int_aarch64_crypto_sha1h : Crypto_SHA_1Hash_Intrinsic;
>
> -def int_arm64_crypto_sha1su0 : Crypto_SHA_12Schedule_Intrinsic;
> -def int_arm64_crypto_sha1su1 : Crypto_SHA_8Schedule_Intrinsic;
> +def int_aarch64_crypto_sha1su0 : Crypto_SHA_12Schedule_Intrinsic;
> +def int_aarch64_crypto_sha1su1 : Crypto_SHA_8Schedule_Intrinsic;
>
> // SHA256
> -def int_arm64_crypto_sha256h : Crypto_SHA_8Hash4Schedule_Intrinsic;
> -def int_arm64_crypto_sha256h2 : Crypto_SHA_8Hash4Schedule_Intrinsic;
> -def int_arm64_crypto_sha256su0 : Crypto_SHA_8Schedule_Intrinsic;
> -def int_arm64_crypto_sha256su1 : Crypto_SHA_12Schedule_Intrinsic;
> +def int_aarch64_crypto_sha256h : Crypto_SHA_8Hash4Schedule_Intrinsic;
> +def int_aarch64_crypto_sha256h2 : Crypto_SHA_8Hash4Schedule_Intrinsic;
> +def int_aarch64_crypto_sha256su0 : Crypto_SHA_8Schedule_Intrinsic;
> +def int_aarch64_crypto_sha256su1 : Crypto_SHA_12Schedule_Intrinsic;
>
> //===----------------------------------------------------------------------===//
> // CRC32
>
> -let TargetPrefix = "arm64" in {
> +let TargetPrefix = "aarch64" in {
>
> -def int_arm64_crc32b : Intrinsic<[llvm_i32_ty], [llvm_i32_ty, llvm_i32_ty],
> +def int_aarch64_crc32b : Intrinsic<[llvm_i32_ty], [llvm_i32_ty, llvm_i32_ty],
> [IntrNoMem]>;
> -def int_arm64_crc32cb : Intrinsic<[llvm_i32_ty], [llvm_i32_ty, llvm_i32_ty],
> +def int_aarch64_crc32cb : Intrinsic<[llvm_i32_ty], [llvm_i32_ty, llvm_i32_ty],
> [IntrNoMem]>;
> -def int_arm64_crc32h : Intrinsic<[llvm_i32_ty], [llvm_i32_ty, llvm_i32_ty],
> +def int_aarch64_crc32h : Intrinsic<[llvm_i32_ty], [llvm_i32_ty, llvm_i32_ty],
> [IntrNoMem]>;
> -def int_arm64_crc32ch : Intrinsic<[llvm_i32_ty], [llvm_i32_ty, llvm_i32_ty],
> +def int_aarch64_crc32ch : Intrinsic<[llvm_i32_ty], [llvm_i32_ty, llvm_i32_ty],
> [IntrNoMem]>;
> -def int_arm64_crc32w : Intrinsic<[llvm_i32_ty], [llvm_i32_ty, llvm_i32_ty],
> +def int_aarch64_crc32w : Intrinsic<[llvm_i32_ty], [llvm_i32_ty, llvm_i32_ty],
> [IntrNoMem]>;
> -def int_arm64_crc32cw : Intrinsic<[llvm_i32_ty], [llvm_i32_ty, llvm_i32_ty],
> +def int_aarch64_crc32cw : Intrinsic<[llvm_i32_ty], [llvm_i32_ty, llvm_i32_ty],
> [IntrNoMem]>;
> -def int_arm64_crc32x : Intrinsic<[llvm_i32_ty], [llvm_i32_ty, llvm_i64_ty],
> +def int_aarch64_crc32x : Intrinsic<[llvm_i32_ty], [llvm_i32_ty, llvm_i64_ty],
> [IntrNoMem]>;
> -def int_arm64_crc32cx : Intrinsic<[llvm_i32_ty], [llvm_i32_ty, llvm_i64_ty],
> +def int_aarch64_crc32cx : Intrinsic<[llvm_i32_ty], [llvm_i32_ty, llvm_i64_ty],
> [IntrNoMem]>;
> }
>
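The CRC32 definitions keep fixed i32/i64 signatures, so there is no overload
suffix to mangle in: each def name maps one-to-one onto an IR name such as
"llvm.aarch64.crc32b". A companion sketch under the same era assumptions,
again with an illustrative helper name:

  #include "llvm/IR/IRBuilder.h"
  #include "llvm/IR/Intrinsics.h"
  #include "llvm/IR/Module.h"

  // Accumulate one byte into a running CRC using the renamed intrinsic.
  llvm::Value *emitCrc32b(llvm::IRBuilder<> &B, llvm::Module *M,
                          llvm::Value *Crc, llvm::Value *Data) {
    // Non-overloaded, so no type list is needed here.
    llvm::Function *Crc32b =
        llvm::Intrinsic::getDeclaration(M, llvm::Intrinsic::aarch64_crc32b);
    llvm::Value *Args[] = {Crc, Data};
    return B.CreateCall(Crc32b, Args);
  }
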
> Removed: llvm/trunk/include/llvm/IR/IntrinsicsARM64.td
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/include/llvm/IR/IntrinsicsARM64.td?rev=209576&view=auto
> ==============================================================================
> --- llvm/trunk/include/llvm/IR/IntrinsicsARM64.td (original)
> +++ llvm/trunk/include/llvm/IR/IntrinsicsARM64.td (removed)
> @@ -1,636 +0,0 @@
> -//===- IntrinsicsARM64.td - Defines ARM64 intrinsics -------*- tablegen -*-===//
> -//
> -// The LLVM Compiler Infrastructure
> -//
> -// This file is distributed under the University of Illinois Open Source
> -// License. See LICENSE.TXT for details.
> -//
> -//===----------------------------------------------------------------------===//
> -//
> -// This file defines all of the ARM64-specific intrinsics.
> -//
> -//===----------------------------------------------------------------------===//
> -
> -let TargetPrefix = "arm64" in {
> -
> -def int_arm64_ldxr : Intrinsic<[llvm_i64_ty], [llvm_anyptr_ty]>;
> -def int_arm64_ldaxr : Intrinsic<[llvm_i64_ty], [llvm_anyptr_ty]>;
> -def int_arm64_stxr : Intrinsic<[llvm_i32_ty], [llvm_i64_ty, llvm_anyptr_ty]>;
> -def int_arm64_stlxr : Intrinsic<[llvm_i32_ty], [llvm_i64_ty, llvm_anyptr_ty]>;
> -
> -def int_arm64_ldxp : Intrinsic<[llvm_i64_ty, llvm_i64_ty], [llvm_ptr_ty]>;
> -def int_arm64_ldaxp : Intrinsic<[llvm_i64_ty, llvm_i64_ty], [llvm_ptr_ty]>;
> -def int_arm64_stxp : Intrinsic<[llvm_i32_ty],
> - [llvm_i64_ty, llvm_i64_ty, llvm_ptr_ty]>;
> -def int_arm64_stlxp : Intrinsic<[llvm_i32_ty],
> - [llvm_i64_ty, llvm_i64_ty, llvm_ptr_ty]>;
> -
> -def int_arm64_clrex : Intrinsic<[]>;
> -
> -def int_arm64_sdiv : Intrinsic<[llvm_anyint_ty], [LLVMMatchType<0>,
> - LLVMMatchType<0>], [IntrNoMem]>;
> -def int_arm64_udiv : Intrinsic<[llvm_anyint_ty], [LLVMMatchType<0>,
> - LLVMMatchType<0>], [IntrNoMem]>;
> -}
> -
> -//===----------------------------------------------------------------------===//
> -// Advanced SIMD (NEON)
> -
> -let TargetPrefix = "arm64" in { // All intrinsics start with "llvm.arm64.".
> - class AdvSIMD_2Scalar_Float_Intrinsic
> - : Intrinsic<[llvm_anyfloat_ty], [LLVMMatchType<0>, LLVMMatchType<0>],
> - [IntrNoMem]>;
> -
> - class AdvSIMD_FPToIntRounding_Intrinsic
> - : Intrinsic<[llvm_anyint_ty], [llvm_anyfloat_ty], [IntrNoMem]>;
> -
> - class AdvSIMD_1IntArg_Intrinsic
> - : Intrinsic<[llvm_anyint_ty], [LLVMMatchType<0>], [IntrNoMem]>;
> - class AdvSIMD_1FloatArg_Intrinsic
> - : Intrinsic<[llvm_anyfloat_ty], [LLVMMatchType<0>], [IntrNoMem]>;
> - class AdvSIMD_1VectorArg_Intrinsic
> - : Intrinsic<[llvm_anyvector_ty], [LLVMMatchType<0>], [IntrNoMem]>;
> - class AdvSIMD_1VectorArg_Expand_Intrinsic
> - : Intrinsic<[llvm_anyvector_ty], [llvm_anyvector_ty], [IntrNoMem]>;
> - class AdvSIMD_1VectorArg_Long_Intrinsic
> - : Intrinsic<[llvm_anyvector_ty], [LLVMTruncatedType<0>], [IntrNoMem]>;
> - class AdvSIMD_1IntArg_Narrow_Intrinsic
> - : Intrinsic<[llvm_anyint_ty], [llvm_anyint_ty], [IntrNoMem]>;
> - class AdvSIMD_1VectorArg_Narrow_Intrinsic
> - : Intrinsic<[llvm_anyint_ty], [LLVMExtendedType<0>], [IntrNoMem]>;
> - class AdvSIMD_1VectorArg_Int_Across_Intrinsic
> - : Intrinsic<[llvm_anyint_ty], [llvm_anyvector_ty], [IntrNoMem]>;
> - class AdvSIMD_1VectorArg_Float_Across_Intrinsic
> - : Intrinsic<[llvm_anyfloat_ty], [llvm_anyvector_ty], [IntrNoMem]>;
> -
> - class AdvSIMD_2IntArg_Intrinsic
> - : Intrinsic<[llvm_anyint_ty], [LLVMMatchType<0>, LLVMMatchType<0>],
> - [IntrNoMem]>;
> - class AdvSIMD_2FloatArg_Intrinsic
> - : Intrinsic<[llvm_anyfloat_ty], [LLVMMatchType<0>, LLVMMatchType<0>],
> - [IntrNoMem]>;
> - class AdvSIMD_2VectorArg_Intrinsic
> - : Intrinsic<[llvm_anyvector_ty], [LLVMMatchType<0>, LLVMMatchType<0>],
> - [IntrNoMem]>;
> - class AdvSIMD_2VectorArg_Compare_Intrinsic
> - : Intrinsic<[llvm_anyvector_ty], [llvm_anyvector_ty, LLVMMatchType<1>],
> - [IntrNoMem]>;
> - class AdvSIMD_2Arg_FloatCompare_Intrinsic
> - : Intrinsic<[llvm_anyint_ty], [llvm_anyfloat_ty, LLVMMatchType<1>],
> - [IntrNoMem]>;
> - class AdvSIMD_2VectorArg_Long_Intrinsic
> - : Intrinsic<[llvm_anyvector_ty],
> - [LLVMTruncatedType<0>, LLVMTruncatedType<0>],
> - [IntrNoMem]>;
> - class AdvSIMD_2VectorArg_Wide_Intrinsic
> - : Intrinsic<[llvm_anyvector_ty],
> - [LLVMMatchType<0>, LLVMTruncatedType<0>],
> - [IntrNoMem]>;
> - class AdvSIMD_2VectorArg_Narrow_Intrinsic
> - : Intrinsic<[llvm_anyvector_ty],
> - [LLVMExtendedType<0>, LLVMExtendedType<0>],
> - [IntrNoMem]>;
> - class AdvSIMD_2Arg_Scalar_Narrow_Intrinsic
> - : Intrinsic<[llvm_anyint_ty],
> - [LLVMExtendedType<0>, llvm_i32_ty],
> - [IntrNoMem]>;
> - class AdvSIMD_2VectorArg_Scalar_Expand_BySize_Intrinsic
> - : Intrinsic<[llvm_anyvector_ty],
> - [llvm_anyvector_ty],
> - [IntrNoMem]>;
> - class AdvSIMD_2VectorArg_Scalar_Wide_BySize_Intrinsic
> - : Intrinsic<[llvm_anyvector_ty],
> - [LLVMTruncatedType<0>],
> - [IntrNoMem]>;
> - class AdvSIMD_2VectorArg_Scalar_Wide_Intrinsic
> - : Intrinsic<[llvm_anyvector_ty],
> - [LLVMTruncatedType<0>, llvm_i32_ty],
> - [IntrNoMem]>;
> - class AdvSIMD_2VectorArg_Tied_Narrow_Intrinsic
> - : Intrinsic<[llvm_anyvector_ty],
> - [LLVMHalfElementsVectorType<0>, llvm_anyvector_ty],
> - [IntrNoMem]>;
> -
> - class AdvSIMD_3VectorArg_Intrinsic
> - : Intrinsic<[llvm_anyvector_ty],
> - [LLVMMatchType<0>, LLVMMatchType<0>, LLVMMatchType<0>],
> - [IntrNoMem]>;
> - class AdvSIMD_3VectorArg_Scalar_Intrinsic
> - : Intrinsic<[llvm_anyvector_ty],
> - [LLVMMatchType<0>, LLVMMatchType<0>, llvm_i32_ty],
> - [IntrNoMem]>;
> - class AdvSIMD_3VectorArg_Tied_Narrow_Intrinsic
> - : Intrinsic<[llvm_anyvector_ty],
> - [LLVMHalfElementsVectorType<0>, llvm_anyvector_ty,
> - LLVMMatchType<1>], [IntrNoMem]>;
> - class AdvSIMD_3VectorArg_Scalar_Tied_Narrow_Intrinsic
> - : Intrinsic<[llvm_anyvector_ty],
> - [LLVMHalfElementsVectorType<0>, llvm_anyvector_ty, llvm_i32_ty],
> - [IntrNoMem]>;
> - class AdvSIMD_CvtFxToFP_Intrinsic
> - : Intrinsic<[llvm_anyfloat_ty], [llvm_anyint_ty, llvm_i32_ty],
> - [IntrNoMem]>;
> - class AdvSIMD_CvtFPToFx_Intrinsic
> - : Intrinsic<[llvm_anyint_ty], [llvm_anyfloat_ty, llvm_i32_ty],
> - [IntrNoMem]>;
> -}
> -
> -// Arithmetic ops
> -
> -let Properties = [IntrNoMem] in {
> - // Vector Add Across Lanes
> - def int_arm64_neon_saddv : AdvSIMD_1VectorArg_Int_Across_Intrinsic;
> - def int_arm64_neon_uaddv : AdvSIMD_1VectorArg_Int_Across_Intrinsic;
> - def int_arm64_neon_faddv : AdvSIMD_1VectorArg_Float_Across_Intrinsic;
> -
> - // Vector Long Add Across Lanes
> - def int_arm64_neon_saddlv : AdvSIMD_1VectorArg_Int_Across_Intrinsic;
> - def int_arm64_neon_uaddlv : AdvSIMD_1VectorArg_Int_Across_Intrinsic;
> -
> - // Vector Halving Add
> - def int_arm64_neon_shadd : AdvSIMD_2VectorArg_Intrinsic;
> - def int_arm64_neon_uhadd : AdvSIMD_2VectorArg_Intrinsic;
> -
> - // Vector Rounding Halving Add
> - def int_arm64_neon_srhadd : AdvSIMD_2VectorArg_Intrinsic;
> - def int_arm64_neon_urhadd : AdvSIMD_2VectorArg_Intrinsic;
> -
> - // Vector Saturating Add
> - def int_arm64_neon_sqadd : AdvSIMD_2IntArg_Intrinsic;
> - def int_arm64_neon_suqadd : AdvSIMD_2IntArg_Intrinsic;
> - def int_arm64_neon_usqadd : AdvSIMD_2IntArg_Intrinsic;
> - def int_arm64_neon_uqadd : AdvSIMD_2IntArg_Intrinsic;
> -
> - // Vector Add High-Half
> - // FIXME: this is a legacy intrinsic for aarch64_simd.h. Remove it when that
> - // header is no longer supported.
> - def int_arm64_neon_addhn : AdvSIMD_2VectorArg_Narrow_Intrinsic;
> -
> - // Vector Rounding Add High-Half
> - def int_arm64_neon_raddhn : AdvSIMD_2VectorArg_Narrow_Intrinsic;
> -
> - // Vector Saturating Doubling Multiply High
> - def int_arm64_neon_sqdmulh : AdvSIMD_2IntArg_Intrinsic;
> -
> - // Vector Saturating Rounding Doubling Multiply High
> - def int_arm64_neon_sqrdmulh : AdvSIMD_2IntArg_Intrinsic;
> -
> - // Vector Polynominal Multiply
> - def int_arm64_neon_pmul : AdvSIMD_2VectorArg_Intrinsic;
> -
> - // Vector Long Multiply
> - def int_arm64_neon_smull : AdvSIMD_2VectorArg_Long_Intrinsic;
> - def int_arm64_neon_umull : AdvSIMD_2VectorArg_Long_Intrinsic;
> - def int_arm64_neon_pmull : AdvSIMD_2VectorArg_Long_Intrinsic;
> -
> - // 64-bit polynomial multiply really returns an i128, which is not legal. Fake
> - // it with a v16i8.
> - def int_arm64_neon_pmull64 :
> - Intrinsic<[llvm_v16i8_ty], [llvm_i64_ty, llvm_i64_ty], [IntrNoMem]>;
> -
> - // Vector Extending Multiply
> - def int_arm64_neon_fmulx : AdvSIMD_2FloatArg_Intrinsic {
> - let Properties = [IntrNoMem, Commutative];
> - }
> -
> - // Vector Saturating Doubling Long Multiply
> - def int_arm64_neon_sqdmull : AdvSIMD_2VectorArg_Long_Intrinsic;
> - def int_arm64_neon_sqdmulls_scalar
> - : Intrinsic<[llvm_i64_ty], [llvm_i32_ty, llvm_i32_ty], [IntrNoMem]>;
> -
> - // Vector Halving Subtract
> - def int_arm64_neon_shsub : AdvSIMD_2VectorArg_Intrinsic;
> - def int_arm64_neon_uhsub : AdvSIMD_2VectorArg_Intrinsic;
> -
> - // Vector Saturating Subtract
> - def int_arm64_neon_sqsub : AdvSIMD_2IntArg_Intrinsic;
> - def int_arm64_neon_uqsub : AdvSIMD_2IntArg_Intrinsic;
> -
> - // Vector Subtract High-Half
> - // FIXME: this is a legacy intrinsic for aarch64_simd.h. Remove it when that
> - // header is no longer supported.
> - def int_arm64_neon_subhn : AdvSIMD_2VectorArg_Narrow_Intrinsic;
> -
> - // Vector Rounding Subtract High-Half
> - def int_arm64_neon_rsubhn : AdvSIMD_2VectorArg_Narrow_Intrinsic;
> -
> - // Vector Compare Absolute Greater-than-or-equal
> - def int_arm64_neon_facge : AdvSIMD_2Arg_FloatCompare_Intrinsic;
> -
> - // Vector Compare Absolute Greater-than
> - def int_arm64_neon_facgt : AdvSIMD_2Arg_FloatCompare_Intrinsic;
> -
> - // Vector Absolute Difference
> - def int_arm64_neon_sabd : AdvSIMD_2VectorArg_Intrinsic;
> - def int_arm64_neon_uabd : AdvSIMD_2VectorArg_Intrinsic;
> - def int_arm64_neon_fabd : AdvSIMD_2VectorArg_Intrinsic;
> -
> - // Scalar Absolute Difference
> - def int_arm64_sisd_fabd : AdvSIMD_2Scalar_Float_Intrinsic;
> -
> - // Vector Max
> - def int_arm64_neon_smax : AdvSIMD_2VectorArg_Intrinsic;
> - def int_arm64_neon_umax : AdvSIMD_2VectorArg_Intrinsic;
> - def int_arm64_neon_fmax : AdvSIMD_2VectorArg_Intrinsic;
> - def int_arm64_neon_fmaxnmp : AdvSIMD_2VectorArg_Intrinsic;
> -
> - // Vector Max Across Lanes
> - def int_arm64_neon_smaxv : AdvSIMD_1VectorArg_Int_Across_Intrinsic;
> - def int_arm64_neon_umaxv : AdvSIMD_1VectorArg_Int_Across_Intrinsic;
> - def int_arm64_neon_fmaxv : AdvSIMD_1VectorArg_Float_Across_Intrinsic;
> - def int_arm64_neon_fmaxnmv : AdvSIMD_1VectorArg_Float_Across_Intrinsic;
> -
> - // Vector Min
> - def int_arm64_neon_smin : AdvSIMD_2VectorArg_Intrinsic;
> - def int_arm64_neon_umin : AdvSIMD_2VectorArg_Intrinsic;
> - def int_arm64_neon_fmin : AdvSIMD_2VectorArg_Intrinsic;
> - def int_arm64_neon_fminnmp : AdvSIMD_2VectorArg_Intrinsic;
> -
> - // Vector Min/Max Number
> - def int_arm64_neon_fminnm : AdvSIMD_2FloatArg_Intrinsic;
> - def int_arm64_neon_fmaxnm : AdvSIMD_2FloatArg_Intrinsic;
> -
> - // Vector Min Across Lanes
> - def int_arm64_neon_sminv : AdvSIMD_1VectorArg_Int_Across_Intrinsic;
> - def int_arm64_neon_uminv : AdvSIMD_1VectorArg_Int_Across_Intrinsic;
> - def int_arm64_neon_fminv : AdvSIMD_1VectorArg_Float_Across_Intrinsic;
> - def int_arm64_neon_fminnmv : AdvSIMD_1VectorArg_Float_Across_Intrinsic;
> -
> - // Pairwise Add
> - def int_arm64_neon_addp : AdvSIMD_2VectorArg_Intrinsic;
> -
> - // Long Pairwise Add
> - // FIXME: In theory, we shouldn't need intrinsics for saddlp or
> - // uaddlp, but tblgen's type inference currently can't handle the
> - // pattern fragments this ends up generating.
> - def int_arm64_neon_saddlp : AdvSIMD_1VectorArg_Expand_Intrinsic;
> - def int_arm64_neon_uaddlp : AdvSIMD_1VectorArg_Expand_Intrinsic;
> -
> - // Folding Maximum
> - def int_arm64_neon_smaxp : AdvSIMD_2VectorArg_Intrinsic;
> - def int_arm64_neon_umaxp : AdvSIMD_2VectorArg_Intrinsic;
> - def int_arm64_neon_fmaxp : AdvSIMD_2VectorArg_Intrinsic;
> -
> - // Folding Minimum
> - def int_arm64_neon_sminp : AdvSIMD_2VectorArg_Intrinsic;
> - def int_arm64_neon_uminp : AdvSIMD_2VectorArg_Intrinsic;
> - def int_arm64_neon_fminp : AdvSIMD_2VectorArg_Intrinsic;
> -
> - // Reciprocal Estimate/Step
> - def int_arm64_neon_frecps : AdvSIMD_2FloatArg_Intrinsic;
> - def int_arm64_neon_frsqrts : AdvSIMD_2FloatArg_Intrinsic;
> -
> - // Reciprocal Exponent
> - def int_arm64_neon_frecpx : AdvSIMD_1FloatArg_Intrinsic;
> -
> - // Vector Saturating Shift Left
> - def int_arm64_neon_sqshl : AdvSIMD_2IntArg_Intrinsic;
> - def int_arm64_neon_uqshl : AdvSIMD_2IntArg_Intrinsic;
> -
> - // Vector Rounding Shift Left
> - def int_arm64_neon_srshl : AdvSIMD_2IntArg_Intrinsic;
> - def int_arm64_neon_urshl : AdvSIMD_2IntArg_Intrinsic;
> -
> - // Vector Saturating Rounding Shift Left
> - def int_arm64_neon_sqrshl : AdvSIMD_2IntArg_Intrinsic;
> - def int_arm64_neon_uqrshl : AdvSIMD_2IntArg_Intrinsic;
> -
> - // Vector Signed->Unsigned Shift Left by Constant
> - def int_arm64_neon_sqshlu : AdvSIMD_2IntArg_Intrinsic;
> -
> - // Vector Signed->Unsigned Narrowing Saturating Shift Right by Constant
> - def int_arm64_neon_sqshrun : AdvSIMD_2Arg_Scalar_Narrow_Intrinsic;
> -
> - // Vector Signed->Unsigned Rounding Narrowing Saturating Shift Right by Const
> - def int_arm64_neon_sqrshrun : AdvSIMD_2Arg_Scalar_Narrow_Intrinsic;
> -
> - // Vector Narrowing Shift Right by Constant
> - def int_arm64_neon_sqshrn : AdvSIMD_2Arg_Scalar_Narrow_Intrinsic;
> - def int_arm64_neon_uqshrn : AdvSIMD_2Arg_Scalar_Narrow_Intrinsic;
> -
> - // Vector Rounding Narrowing Shift Right by Constant
> - def int_arm64_neon_rshrn : AdvSIMD_2Arg_Scalar_Narrow_Intrinsic;
> -
> - // Vector Rounding Narrowing Saturating Shift Right by Constant
> - def int_arm64_neon_sqrshrn : AdvSIMD_2Arg_Scalar_Narrow_Intrinsic;
> - def int_arm64_neon_uqrshrn : AdvSIMD_2Arg_Scalar_Narrow_Intrinsic;
> -
> - // Vector Shift Left
> - def int_arm64_neon_sshl : AdvSIMD_2IntArg_Intrinsic;
> - def int_arm64_neon_ushl : AdvSIMD_2IntArg_Intrinsic;
> -
> - // Vector Widening Shift Left by Constant
> - def int_arm64_neon_shll : AdvSIMD_2VectorArg_Scalar_Wide_BySize_Intrinsic;
> - def int_arm64_neon_sshll : AdvSIMD_2VectorArg_Scalar_Wide_Intrinsic;
> - def int_arm64_neon_ushll : AdvSIMD_2VectorArg_Scalar_Wide_Intrinsic;
> -
> - // Vector Shift Right by Constant and Insert
> - def int_arm64_neon_vsri : AdvSIMD_3VectorArg_Scalar_Intrinsic;
> -
> - // Vector Shift Left by Constant and Insert
> - def int_arm64_neon_vsli : AdvSIMD_3VectorArg_Scalar_Intrinsic;
> -
> - // Vector Saturating Narrow
> - def int_arm64_neon_scalar_sqxtn: AdvSIMD_1IntArg_Narrow_Intrinsic;
> - def int_arm64_neon_scalar_uqxtn : AdvSIMD_1IntArg_Narrow_Intrinsic;
> - def int_arm64_neon_sqxtn : AdvSIMD_1VectorArg_Narrow_Intrinsic;
> - def int_arm64_neon_uqxtn : AdvSIMD_1VectorArg_Narrow_Intrinsic;
> -
> - // Vector Saturating Extract and Unsigned Narrow
> - def int_arm64_neon_scalar_sqxtun : AdvSIMD_1IntArg_Narrow_Intrinsic;
> - def int_arm64_neon_sqxtun : AdvSIMD_1VectorArg_Narrow_Intrinsic;
> -
> - // Vector Absolute Value
> - def int_arm64_neon_abs : AdvSIMD_1IntArg_Intrinsic;
> -
> - // Vector Saturating Absolute Value
> - def int_arm64_neon_sqabs : AdvSIMD_1IntArg_Intrinsic;
> -
> - // Vector Saturating Negation
> - def int_arm64_neon_sqneg : AdvSIMD_1IntArg_Intrinsic;
> -
> - // Vector Count Leading Sign Bits
> - def int_arm64_neon_cls : AdvSIMD_1VectorArg_Intrinsic;
> -
> - // Vector Reciprocal Estimate
> - def int_arm64_neon_urecpe : AdvSIMD_1VectorArg_Intrinsic;
> - def int_arm64_neon_frecpe : AdvSIMD_1FloatArg_Intrinsic;
> -
> - // Vector Square Root Estimate
> - def int_arm64_neon_ursqrte : AdvSIMD_1VectorArg_Intrinsic;
> - def int_arm64_neon_frsqrte : AdvSIMD_1FloatArg_Intrinsic;
> -
> - // Vector Bitwise Reverse
> - def int_arm64_neon_rbit : AdvSIMD_1VectorArg_Intrinsic;
> -
> - // Vector Conversions Between Half-Precision and Single-Precision.
> - def int_arm64_neon_vcvtfp2hf
> - : Intrinsic<[llvm_v4i16_ty], [llvm_v4f32_ty], [IntrNoMem]>;
> - def int_arm64_neon_vcvthf2fp
> - : Intrinsic<[llvm_v4f32_ty], [llvm_v4i16_ty], [IntrNoMem]>;
> -
> - // Vector Conversions Between Floating-point and Fixed-point.
> - def int_arm64_neon_vcvtfp2fxs : AdvSIMD_CvtFPToFx_Intrinsic;
> - def int_arm64_neon_vcvtfp2fxu : AdvSIMD_CvtFPToFx_Intrinsic;
> - def int_arm64_neon_vcvtfxs2fp : AdvSIMD_CvtFxToFP_Intrinsic;
> - def int_arm64_neon_vcvtfxu2fp : AdvSIMD_CvtFxToFP_Intrinsic;
> -
> - // Vector FP->Int Conversions
> - def int_arm64_neon_fcvtas : AdvSIMD_FPToIntRounding_Intrinsic;
> - def int_arm64_neon_fcvtau : AdvSIMD_FPToIntRounding_Intrinsic;
> - def int_arm64_neon_fcvtms : AdvSIMD_FPToIntRounding_Intrinsic;
> - def int_arm64_neon_fcvtmu : AdvSIMD_FPToIntRounding_Intrinsic;
> - def int_arm64_neon_fcvtns : AdvSIMD_FPToIntRounding_Intrinsic;
> - def int_arm64_neon_fcvtnu : AdvSIMD_FPToIntRounding_Intrinsic;
> - def int_arm64_neon_fcvtps : AdvSIMD_FPToIntRounding_Intrinsic;
> - def int_arm64_neon_fcvtpu : AdvSIMD_FPToIntRounding_Intrinsic;
> - def int_arm64_neon_fcvtzs : AdvSIMD_FPToIntRounding_Intrinsic;
> - def int_arm64_neon_fcvtzu : AdvSIMD_FPToIntRounding_Intrinsic;
> -
> - // Vector FP Rounding: only ties to even is unrepresented by a normal
> - // intrinsic.
> - def int_arm64_neon_frintn : AdvSIMD_1FloatArg_Intrinsic;
> -
> - // Scalar FP->Int conversions
> -
> - // Vector FP Inexact Narrowing
> - def int_arm64_neon_fcvtxn : AdvSIMD_1VectorArg_Expand_Intrinsic;
> -
> - // Scalar FP Inexact Narrowing
> - def int_arm64_sisd_fcvtxn : Intrinsic<[llvm_float_ty], [llvm_double_ty],
> - [IntrNoMem]>;
> -}
> -
> -let TargetPrefix = "arm64" in { // All intrinsics start with "llvm.arm64.".
> - class AdvSIMD_2Vector2Index_Intrinsic
> - : Intrinsic<[llvm_anyvector_ty],
> - [llvm_anyvector_ty, llvm_i64_ty, LLVMMatchType<0>, llvm_i64_ty],
> - [IntrNoMem]>;
> -}
> -
> -// Vector element to element moves
> -def int_arm64_neon_vcopy_lane: AdvSIMD_2Vector2Index_Intrinsic;
> -
> -let TargetPrefix = "arm64" in { // All intrinsics start with "llvm.arm64.".
> - class AdvSIMD_1Vec_Load_Intrinsic
> - : Intrinsic<[llvm_anyvector_ty], [LLVMAnyPointerType<LLVMMatchType<0>>],
> - [IntrReadArgMem]>;
> - class AdvSIMD_1Vec_Store_Lane_Intrinsic
> - : Intrinsic<[], [llvm_anyvector_ty, llvm_i64_ty, llvm_anyptr_ty],
> - [IntrReadWriteArgMem, NoCapture<2>]>;
> -
> - class AdvSIMD_2Vec_Load_Intrinsic
> - : Intrinsic<[llvm_anyvector_ty, LLVMMatchType<0>],
> - [LLVMAnyPointerType<LLVMMatchType<0>>],
> - [IntrReadArgMem]>;
> - class AdvSIMD_2Vec_Load_Lane_Intrinsic
> - : Intrinsic<[llvm_anyvector_ty, LLVMMatchType<0>],
> - [LLVMMatchType<0>, LLVMMatchType<0>,
> - llvm_i64_ty, llvm_anyptr_ty],
> - [IntrReadArgMem]>;
> - class AdvSIMD_2Vec_Store_Intrinsic
> - : Intrinsic<[], [llvm_anyvector_ty, LLVMMatchType<0>,
> - LLVMAnyPointerType<LLVMMatchType<0>>],
> - [IntrReadWriteArgMem, NoCapture<2>]>;
> - class AdvSIMD_2Vec_Store_Lane_Intrinsic
> - : Intrinsic<[], [llvm_anyvector_ty, LLVMMatchType<0>,
> - llvm_i64_ty, llvm_anyptr_ty],
> - [IntrReadWriteArgMem, NoCapture<3>]>;
> -
> - class AdvSIMD_3Vec_Load_Intrinsic
> - : Intrinsic<[llvm_anyvector_ty, LLVMMatchType<0>, LLVMMatchType<0>],
> - [LLVMAnyPointerType<LLVMMatchType<0>>],
> - [IntrReadArgMem]>;
> - class AdvSIMD_3Vec_Load_Lane_Intrinsic
> - : Intrinsic<[llvm_anyvector_ty, LLVMMatchType<0>, LLVMMatchType<0>],
> - [LLVMMatchType<0>, LLVMMatchType<0>, LLVMMatchType<0>,
> - llvm_i64_ty, llvm_anyptr_ty],
> - [IntrReadArgMem]>;
> - class AdvSIMD_3Vec_Store_Intrinsic
> - : Intrinsic<[], [llvm_anyvector_ty, LLVMMatchType<0>,
> - LLVMMatchType<0>, LLVMAnyPointerType<LLVMMatchType<0>>],
> - [IntrReadWriteArgMem, NoCapture<3>]>;
> - class AdvSIMD_3Vec_Store_Lane_Intrinsic
> - : Intrinsic<[], [llvm_anyvector_ty,
> - LLVMMatchType<0>, LLVMMatchType<0>,
> - llvm_i64_ty, llvm_anyptr_ty],
> - [IntrReadWriteArgMem, NoCapture<4>]>;
> -
> - class AdvSIMD_4Vec_Load_Intrinsic
> - : Intrinsic<[llvm_anyvector_ty, LLVMMatchType<0>,
> - LLVMMatchType<0>, LLVMMatchType<0>],
> - [LLVMAnyPointerType<LLVMMatchType<0>>],
> - [IntrReadArgMem]>;
> - class AdvSIMD_4Vec_Load_Lane_Intrinsic
> - : Intrinsic<[llvm_anyvector_ty, LLVMMatchType<0>,
> - LLVMMatchType<0>, LLVMMatchType<0>],
> - [LLVMMatchType<0>, LLVMMatchType<0>,
> - LLVMMatchType<0>, LLVMMatchType<0>,
> - llvm_i64_ty, llvm_anyptr_ty],
> - [IntrReadArgMem]>;
> - class AdvSIMD_4Vec_Store_Intrinsic
> - : Intrinsic<[], [llvm_anyvector_ty, LLVMMatchType<0>,
> - LLVMMatchType<0>, LLVMMatchType<0>,
> - LLVMAnyPointerType<LLVMMatchType<0>>],
> - [IntrReadWriteArgMem, NoCapture<4>]>;
> - class AdvSIMD_4Vec_Store_Lane_Intrinsic
> - : Intrinsic<[], [llvm_anyvector_ty, LLVMMatchType<0>,
> - LLVMMatchType<0>, LLVMMatchType<0>,
> - llvm_i64_ty, llvm_anyptr_ty],
> - [IntrReadWriteArgMem, NoCapture<5>]>;
> -}
> -
> -// Memory ops
> -
> -def int_arm64_neon_ld1x2 : AdvSIMD_2Vec_Load_Intrinsic;
> -def int_arm64_neon_ld1x3 : AdvSIMD_3Vec_Load_Intrinsic;
> -def int_arm64_neon_ld1x4 : AdvSIMD_4Vec_Load_Intrinsic;
> -
> -def int_arm64_neon_st1x2 : AdvSIMD_2Vec_Store_Intrinsic;
> -def int_arm64_neon_st1x3 : AdvSIMD_3Vec_Store_Intrinsic;
> -def int_arm64_neon_st1x4 : AdvSIMD_4Vec_Store_Intrinsic;
> -
> -def int_arm64_neon_ld2 : AdvSIMD_2Vec_Load_Intrinsic;
> -def int_arm64_neon_ld3 : AdvSIMD_3Vec_Load_Intrinsic;
> -def int_arm64_neon_ld4 : AdvSIMD_4Vec_Load_Intrinsic;
> -
> -def int_arm64_neon_ld2lane : AdvSIMD_2Vec_Load_Lane_Intrinsic;
> -def int_arm64_neon_ld3lane : AdvSIMD_3Vec_Load_Lane_Intrinsic;
> -def int_arm64_neon_ld4lane : AdvSIMD_4Vec_Load_Lane_Intrinsic;
> -
> -def int_arm64_neon_ld2r : AdvSIMD_2Vec_Load_Intrinsic;
> -def int_arm64_neon_ld3r : AdvSIMD_3Vec_Load_Intrinsic;
> -def int_arm64_neon_ld4r : AdvSIMD_4Vec_Load_Intrinsic;
> -
> -def int_arm64_neon_st2 : AdvSIMD_2Vec_Store_Intrinsic;
> -def int_arm64_neon_st3 : AdvSIMD_3Vec_Store_Intrinsic;
> -def int_arm64_neon_st4 : AdvSIMD_4Vec_Store_Intrinsic;
> -
> -def int_arm64_neon_st2lane : AdvSIMD_2Vec_Store_Lane_Intrinsic;
> -def int_arm64_neon_st3lane : AdvSIMD_3Vec_Store_Lane_Intrinsic;
> -def int_arm64_neon_st4lane : AdvSIMD_4Vec_Store_Lane_Intrinsic;
> -
> -let TargetPrefix = "arm64" in { // All intrinsics start with "llvm.arm64.".
> - class AdvSIMD_Tbl1_Intrinsic
> - : Intrinsic<[llvm_anyvector_ty], [llvm_v16i8_ty, LLVMMatchType<0>],
> - [IntrNoMem]>;
> - class AdvSIMD_Tbl2_Intrinsic
> - : Intrinsic<[llvm_anyvector_ty],
> - [llvm_v16i8_ty, llvm_v16i8_ty, LLVMMatchType<0>], [IntrNoMem]>;
> - class AdvSIMD_Tbl3_Intrinsic
> - : Intrinsic<[llvm_anyvector_ty],
> - [llvm_v16i8_ty, llvm_v16i8_ty, llvm_v16i8_ty,
> - LLVMMatchType<0>],
> - [IntrNoMem]>;
> - class AdvSIMD_Tbl4_Intrinsic
> - : Intrinsic<[llvm_anyvector_ty],
> - [llvm_v16i8_ty, llvm_v16i8_ty, llvm_v16i8_ty, llvm_v16i8_ty,
> - LLVMMatchType<0>],
> - [IntrNoMem]>;
> -
> - class AdvSIMD_Tbx1_Intrinsic
> - : Intrinsic<[llvm_anyvector_ty],
> - [LLVMMatchType<0>, llvm_v16i8_ty, LLVMMatchType<0>],
> - [IntrNoMem]>;
> - class AdvSIMD_Tbx2_Intrinsic
> - : Intrinsic<[llvm_anyvector_ty],
> - [LLVMMatchType<0>, llvm_v16i8_ty, llvm_v16i8_ty,
> - LLVMMatchType<0>],
> - [IntrNoMem]>;
> - class AdvSIMD_Tbx3_Intrinsic
> - : Intrinsic<[llvm_anyvector_ty],
> - [LLVMMatchType<0>, llvm_v16i8_ty, llvm_v16i8_ty,
> - llvm_v16i8_ty, LLVMMatchType<0>],
> - [IntrNoMem]>;
> - class AdvSIMD_Tbx4_Intrinsic
> - : Intrinsic<[llvm_anyvector_ty],
> - [LLVMMatchType<0>, llvm_v16i8_ty, llvm_v16i8_ty,
> - llvm_v16i8_ty, llvm_v16i8_ty, LLVMMatchType<0>],
> - [IntrNoMem]>;
> -}
> -def int_arm64_neon_tbl1 : AdvSIMD_Tbl1_Intrinsic;
> -def int_arm64_neon_tbl2 : AdvSIMD_Tbl2_Intrinsic;
> -def int_arm64_neon_tbl3 : AdvSIMD_Tbl3_Intrinsic;
> -def int_arm64_neon_tbl4 : AdvSIMD_Tbl4_Intrinsic;
> -
> -def int_arm64_neon_tbx1 : AdvSIMD_Tbx1_Intrinsic;
> -def int_arm64_neon_tbx2 : AdvSIMD_Tbx2_Intrinsic;
> -def int_arm64_neon_tbx3 : AdvSIMD_Tbx3_Intrinsic;
> -def int_arm64_neon_tbx4 : AdvSIMD_Tbx4_Intrinsic;
> -
> -let TargetPrefix = "arm64" in {
> - class Crypto_AES_DataKey_Intrinsic
> - : Intrinsic<[llvm_v16i8_ty], [llvm_v16i8_ty, llvm_v16i8_ty], [IntrNoMem]>;
> -
> - class Crypto_AES_Data_Intrinsic
> - : Intrinsic<[llvm_v16i8_ty], [llvm_v16i8_ty], [IntrNoMem]>;
> -
> - // SHA intrinsic taking 5 words of the hash (v4i32, i32) and 4 of the schedule
> - // (v4i32).
> - class Crypto_SHA_5Hash4Schedule_Intrinsic
> - : Intrinsic<[llvm_v4i32_ty], [llvm_v4i32_ty, llvm_i32_ty, llvm_v4i32_ty],
> - [IntrNoMem]>;
> -
> - // SHA intrinsic taking 5 words of the hash (v4i32, i32) and 4 of the schedule
> - // (v4i32).
> - class Crypto_SHA_1Hash_Intrinsic
> - : Intrinsic<[llvm_i32_ty], [llvm_i32_ty], [IntrNoMem]>;
> -
> - // SHA intrinsic taking 8 words of the schedule
> - class Crypto_SHA_8Schedule_Intrinsic
> - : Intrinsic<[llvm_v4i32_ty], [llvm_v4i32_ty, llvm_v4i32_ty], [IntrNoMem]>;
> -
> - // SHA intrinsic taking 12 words of the schedule
> - class Crypto_SHA_12Schedule_Intrinsic
> - : Intrinsic<[llvm_v4i32_ty], [llvm_v4i32_ty, llvm_v4i32_ty, llvm_v4i32_ty],
> - [IntrNoMem]>;
> -
> - // SHA intrinsic taking 8 words of the hash and 4 of the schedule.
> - class Crypto_SHA_8Hash4Schedule_Intrinsic
> - : Intrinsic<[llvm_v4i32_ty], [llvm_v4i32_ty, llvm_v4i32_ty, llvm_v4i32_ty],
> - [IntrNoMem]>;
> -}
> -
> -// AES
> -def int_arm64_crypto_aese : Crypto_AES_DataKey_Intrinsic;
> -def int_arm64_crypto_aesd : Crypto_AES_DataKey_Intrinsic;
> -def int_arm64_crypto_aesmc : Crypto_AES_Data_Intrinsic;
> -def int_arm64_crypto_aesimc : Crypto_AES_Data_Intrinsic;
> -
> -// SHA1
> -def int_arm64_crypto_sha1c : Crypto_SHA_5Hash4Schedule_Intrinsic;
> -def int_arm64_crypto_sha1p : Crypto_SHA_5Hash4Schedule_Intrinsic;
> -def int_arm64_crypto_sha1m : Crypto_SHA_5Hash4Schedule_Intrinsic;
> -def int_arm64_crypto_sha1h : Crypto_SHA_1Hash_Intrinsic;
> -
> -def int_arm64_crypto_sha1su0 : Crypto_SHA_12Schedule_Intrinsic;
> -def int_arm64_crypto_sha1su1 : Crypto_SHA_8Schedule_Intrinsic;
> -
> -// SHA256
> -def int_arm64_crypto_sha256h : Crypto_SHA_8Hash4Schedule_Intrinsic;
> -def int_arm64_crypto_sha256h2 : Crypto_SHA_8Hash4Schedule_Intrinsic;
> -def int_arm64_crypto_sha256su0 : Crypto_SHA_8Schedule_Intrinsic;
> -def int_arm64_crypto_sha256su1 : Crypto_SHA_12Schedule_Intrinsic;
> -
> -//===----------------------------------------------------------------------===//
> -// CRC32
> -
> -let TargetPrefix = "arm64" in {
> -
> -def int_arm64_crc32b : Intrinsic<[llvm_i32_ty], [llvm_i32_ty, llvm_i32_ty],
> - [IntrNoMem]>;
> -def int_arm64_crc32cb : Intrinsic<[llvm_i32_ty], [llvm_i32_ty, llvm_i32_ty],
> - [IntrNoMem]>;
> -def int_arm64_crc32h : Intrinsic<[llvm_i32_ty], [llvm_i32_ty, llvm_i32_ty],
> - [IntrNoMem]>;
> -def int_arm64_crc32ch : Intrinsic<[llvm_i32_ty], [llvm_i32_ty, llvm_i32_ty],
> - [IntrNoMem]>;
> -def int_arm64_crc32w : Intrinsic<[llvm_i32_ty], [llvm_i32_ty, llvm_i32_ty],
> - [IntrNoMem]>;
> -def int_arm64_crc32cw : Intrinsic<[llvm_i32_ty], [llvm_i32_ty, llvm_i32_ty],
> - [IntrNoMem]>;
> -def int_arm64_crc32x : Intrinsic<[llvm_i32_ty], [llvm_i32_ty, llvm_i64_ty],
> - [IntrNoMem]>;
> -def int_arm64_crc32cx : Intrinsic<[llvm_i32_ty], [llvm_i32_ty, llvm_i64_ty],
> - [IntrNoMem]>;
> -}
>
> Modified: llvm/trunk/lib/ExecutionEngine/RuntimeDyld/RuntimeDyldMachO.cpp
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/ExecutionEngine/RuntimeDyld/RuntimeDyldMachO.cpp?rev=209577&r1=209576&r2=209577&view=diff
> ==============================================================================
> --- llvm/trunk/lib/ExecutionEngine/RuntimeDyld/RuntimeDyldMachO.cpp (original)
> +++ llvm/trunk/lib/ExecutionEngine/RuntimeDyld/RuntimeDyldMachO.cpp Sat May 24 07:50:23 2014
> @@ -168,8 +168,9 @@ void RuntimeDyldMachO::resolveRelocation
> case Triple::thumb:
> resolveARMRelocation(RE, Value);
> break;
> + case Triple::aarch64:
> case Triple::arm64:
> - resolveARM64Relocation(RE, Value);
> + resolveAArch64Relocation(RE, Value);
> break;
> }
> }
> @@ -289,8 +290,8 @@ bool RuntimeDyldMachO::resolveARMRelocat
> return false;
> }
>
> -bool RuntimeDyldMachO::resolveARM64Relocation(const RelocationEntry &RE,
> - uint64_t Value) {
> +bool RuntimeDyldMachO::resolveAArch64Relocation(const RelocationEntry &RE,
> + uint64_t Value) {
> const SectionEntry &Section = Sections[RE.SectionID];
> uint8_t* LocalAddress = Section.Address + RE.Offset;
>
>
> Modified: llvm/trunk/lib/ExecutionEngine/RuntimeDyld/RuntimeDyldMachO.h
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/ExecutionEngine/RuntimeDyld/RuntimeDyldMachO.h?rev=209577&r1=209576&r2=209577&view=diff
> ==============================================================================
> --- llvm/trunk/lib/ExecutionEngine/RuntimeDyld/RuntimeDyldMachO.h (original)
> +++ llvm/trunk/lib/ExecutionEngine/RuntimeDyld/RuntimeDyldMachO.h Sat May 24 07:50:23 2014
> @@ -41,7 +41,7 @@ private:
> bool resolveI386Relocation(const RelocationEntry &RE, uint64_t Value);
> bool resolveX86_64Relocation(const RelocationEntry &RE, uint64_t Value);
> bool resolveARMRelocation(const RelocationEntry &RE, uint64_t Value);
> - bool resolveARM64Relocation(const RelocationEntry &RE, uint64_t Value);
> + bool resolveAArch64Relocation(const RelocationEntry &RE, uint64_t Value);
>
> // Populate stubs in __jump_table section.
> void populateJumpTable(MachOObjectFile &Obj, const SectionRef &JTSection,
>
> Modified: llvm/trunk/lib/LTO/LTOCodeGenerator.cpp
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/LTO/LTOCodeGenerator.cpp?rev=209577&r1=209576&r2=209577&view=diff
> ==============================================================================
> --- llvm/trunk/lib/LTO/LTOCodeGenerator.cpp (original)
> +++ llvm/trunk/lib/LTO/LTOCodeGenerator.cpp Sat May 24 07:50:23 2014
> @@ -312,7 +312,8 @@ bool LTOCodeGenerator::determineTarget(s
> MCpu = "core2";
> else if (Triple.getArch() == llvm::Triple::x86)
> MCpu = "yonah";
> - else if (Triple.getArch() == llvm::Triple::arm64)
> + else if (Triple.getArch() == llvm::Triple::arm64 ||
> + Triple.getArch() == llvm::Triple::aarch64)
> MCpu = "cyclone";
> }
>
>
> Modified: llvm/trunk/lib/LTO/LTOModule.cpp
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/LTO/LTOModule.cpp?rev=209577&r1=209576&r2=209577&view=diff
> ==============================================================================
> --- llvm/trunk/lib/LTO/LTOModule.cpp (original)
> +++ llvm/trunk/lib/LTO/LTOModule.cpp Sat May 24 07:50:23 2014
> @@ -168,7 +168,8 @@ LTOModule *LTOModule::makeLTOModule(Memo
> CPU = "core2";
> else if (Triple.getArch() == llvm::Triple::x86)
> CPU = "yonah";
> - else if (Triple.getArch() == llvm::Triple::arm64)
> + else if (Triple.getArch() == llvm::Triple::arm64 ||
> + Triple.getArch() == llvm::Triple::aarch64)
> CPU = "cyclone";
> }
>
>
> Modified: llvm/trunk/lib/MC/MCObjectFileInfo.cpp
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/MC/MCObjectFileInfo.cpp?rev=209577&r1=209576&r2=209577&view=diff
> ==============================================================================
> --- llvm/trunk/lib/MC/MCObjectFileInfo.cpp (original)
> +++ llvm/trunk/lib/MC/MCObjectFileInfo.cpp Sat May 24 07:50:23 2014
> @@ -23,7 +23,8 @@ void MCObjectFileInfo::InitMachOMCObject
> IsFunctionEHFrameSymbolPrivate = false;
> SupportsWeakOmittedEHFrame = false;
>
> - if (T.isOSDarwin() && T.getArch() == Triple::arm64)
> + if (T.isOSDarwin() &&
> + (T.getArch() == Triple::arm64 || T.getArch() == Triple::aarch64))
> SupportsCompactUnwindWithoutEHFrame = true;
>
> PersonalityEncoding = dwarf::DW_EH_PE_indirect | dwarf::DW_EH_PE_pcrel
> @@ -151,7 +152,8 @@ void MCObjectFileInfo::InitMachOMCObject
> COFFDebugSymbolsSection = nullptr;
>
> if ((T.isMacOSX() && !T.isMacOSXVersionLT(10, 6)) ||
> - (T.isOSDarwin() && T.getArch() == Triple::arm64)) {
> + (T.isOSDarwin() &&
> + (T.getArch() == Triple::arm64 || T.getArch() == Triple::aarch64))) {
> CompactUnwindSection =
> Ctx->getMachOSection("__LD", "__compact_unwind",
> MachO::S_ATTR_DEBUG,
> @@ -159,7 +161,7 @@ void MCObjectFileInfo::InitMachOMCObject
>
> if (T.getArch() == Triple::x86_64 || T.getArch() == Triple::x86)
> CompactUnwindDwarfEHFrameOnly = 0x04000000;
> - else if (T.getArch() == Triple::arm64)
> + else if (T.getArch() == Triple::arm64 || T.getArch() == Triple::aarch64)
> CompactUnwindDwarfEHFrameOnly = 0x03000000;
> }
>
> @@ -785,7 +787,7 @@ void MCObjectFileInfo::InitMCObjectFileI
> // cellspu-apple-darwin. Perhaps we should fix in Triple?
> if ((Arch == Triple::x86 || Arch == Triple::x86_64 ||
> Arch == Triple::arm || Arch == Triple::thumb ||
> - Arch == Triple::arm64 ||
> + Arch == Triple::arm64 || Arch == Triple::aarch64 ||
> Arch == Triple::ppc || Arch == Triple::ppc64 ||
> Arch == Triple::UnknownArch) &&
> (T.isOSDarwin() || T.isOSBinFormatMachO())) {
>
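The same two-way arch test (Triple::arm64 or Triple::aarch64, both spellings
stay valid after the merge) recurs in the RuntimeDyld, LTO and MC hunks
above. A hypothetical helper, not part of this commit, that would factor the
check out:

  #include "llvm/ADT/Triple.h"

  // True for either spelling of the 64-bit ARM architecture.
  static bool isAArch64Arch(llvm::Triple::ArchType Arch) {
    return Arch == llvm::Triple::aarch64 || Arch == llvm::Triple::arm64;
  }

  // Usage mirroring the LTO hunks:
  //   if (isAArch64Arch(Triple.getArch()))
  //     MCpu = "cyclone";
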
> Added: llvm/trunk/lib/Target/AArch64/AArch64.h
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/AArch64/AArch64.h?rev=209577&view=auto
> ==============================================================================
> --- llvm/trunk/lib/Target/AArch64/AArch64.h (added)
> +++ llvm/trunk/lib/Target/AArch64/AArch64.h Sat May 24 07:50:23 2014
> @@ -0,0 +1,49 @@
> +//==-- AArch64.h - Top-level interface for AArch64 --------------*- C++ -*-==//
> +//
> +// The LLVM Compiler Infrastructure
> +//
> +// This file is distributed under the University of Illinois Open Source
> +// License. See LICENSE.TXT for details.
> +//
> +//===----------------------------------------------------------------------===//
> +//
> +// This file contains the entry points for global functions defined in the LLVM
> +// AArch64 back-end.
> +//
> +//===----------------------------------------------------------------------===//
> +
> +#ifndef TARGET_AArch64_H
> +#define TARGET_AArch64_H
> +
> +#include "Utils/AArch64BaseInfo.h"
> +#include "MCTargetDesc/AArch64MCTargetDesc.h"
> +#include "llvm/Target/TargetMachine.h"
> +#include "llvm/Support/DataTypes.h"
> +
> +namespace llvm {
> +
> +class AArch64TargetMachine;
> +class FunctionPass;
> +class MachineFunctionPass;
> +
> +FunctionPass *createAArch64DeadRegisterDefinitions();
> +FunctionPass *createAArch64ConditionalCompares();
> +FunctionPass *createAArch64AdvSIMDScalar();
> +FunctionPass *createAArch64BranchRelaxation();
> +FunctionPass *createAArch64ISelDag(AArch64TargetMachine &TM,
> + CodeGenOpt::Level OptLevel);
> +FunctionPass *createAArch64StorePairSuppressPass();
> +FunctionPass *createAArch64ExpandPseudoPass();
> +FunctionPass *createAArch64LoadStoreOptimizationPass();
> +ModulePass *createAArch64PromoteConstantPass();
> +FunctionPass *createAArch64AddressTypePromotionPass();
> +/// \brief Creates an ARM-specific Target Transformation Info pass.
> +ImmutablePass *
> +createAArch64TargetTransformInfoPass(const AArch64TargetMachine *TM);
> +
> +FunctionPass *createAArch64CleanupLocalDynamicTLSPass();
> +
> +FunctionPass *createAArch64CollectLOHPass();
> +} // end namespace llvm
> +
> +#endif
>
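These factory functions are what the target's pass configuration strings
together. The following is a sketch only, written against the era's
TargetPassConfig API, of how a hook might consume two of them; the real
AArch64 pass setup lives in the target machine sources and may differ.

  #include "AArch64.h"
  #include "llvm/CodeGen/Passes.h"

  namespace {
  class ExampleAArch64PassConfig : public llvm::TargetPassConfig {
  public:
    ExampleAArch64PassConfig(llvm::AArch64TargetMachine *TM,
                             llvm::PassManagerBase &PM)
        : TargetPassConfig(TM, PM) {}

    bool addPreEmitPass() override {
      // Relax out-of-range branches once block layout is final, then collect
      // Linker Optimization Hints for Mach-O linkers.
      addPass(llvm::createAArch64BranchRelaxation());
      addPass(llvm::createAArch64CollectLOHPass());
      return true;
    }
  };
  } // end anonymous namespace
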
> Copied: llvm/trunk/lib/Target/AArch64/AArch64.td (from r209576, llvm/trunk/lib/Target/ARM64/ARM64.td)
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/AArch64/AArch64.td?p2=llvm/trunk/lib/Target/AArch64/AArch64.td&p1=llvm/trunk/lib/Target/ARM64/ARM64.td&r1=209576&r2=209577&rev=209577&view=diff
> ==============================================================================
> --- llvm/trunk/lib/Target/ARM64/ARM64.td (original)
> +++ llvm/trunk/lib/Target/AArch64/AArch64.td Sat May 24 07:50:23 2014
> @@ -1,4 +1,4 @@
> -//===- ARM64.td - Describe the ARM64 Target Machine --------*- tablegen -*-===//
> +//=- AArch64.td - Describe the AArch64 Target Machine --------*- tablegen -*-=//
> //
> // The LLVM Compiler Infrastructure
> //
> @@ -17,7 +17,7 @@
> include "llvm/Target/Target.td"
>
> //===----------------------------------------------------------------------===//
> -// ARM64 Subtarget features.
> +// AArch64 Subtarget features.
> //
>
> def FeatureFPARMv8 : SubtargetFeature<"fp-armv8", "HasFPARMv8", "true",
> @@ -44,23 +44,23 @@ def FeatureZCZeroing : SubtargetFeature<
> // Register File Description
> //===----------------------------------------------------------------------===//
>
> -include "ARM64RegisterInfo.td"
> -include "ARM64CallingConvention.td"
> +include "AArch64RegisterInfo.td"
> +include "AArch64CallingConvention.td"
>
> //===----------------------------------------------------------------------===//
> // Instruction Descriptions
> //===----------------------------------------------------------------------===//
>
> -include "ARM64Schedule.td"
> -include "ARM64InstrInfo.td"
> +include "AArch64Schedule.td"
> +include "AArch64InstrInfo.td"
>
> -def ARM64InstrInfo : InstrInfo;
> +def AArch64InstrInfo : InstrInfo;
>
> //===----------------------------------------------------------------------===//
> -// ARM64 Processors supported.
> +// AArch64 Processors supported.
> //
> -include "ARM64SchedA53.td"
> -include "ARM64SchedCyclone.td"
> +include "AArch64SchedA53.td"
> +include "AArch64SchedCyclone.td"
>
> def ProcA53 : SubtargetFeature<"a53", "ARMProcFamily", "CortexA53",
> "Cortex-A53 ARM processors",
> @@ -109,7 +109,7 @@ def AppleAsmParserVariant : AsmParserVar
> //===----------------------------------------------------------------------===//
> // Assembly printer
> //===----------------------------------------------------------------------===//
> -// ARM64 Uses the MC printer for asm output, so make sure the TableGen
> +// AArch64 Uses the MC printer for asm output, so make sure the TableGen
> // AsmWriter bits get associated with the correct class.
> def GenericAsmWriter : AsmWriter {
> string AsmWriterClassName = "InstPrinter";
> @@ -127,8 +127,8 @@ def AppleAsmWriter : AsmWriter {
> // Target Declaration
> //===----------------------------------------------------------------------===//
>
> -def ARM64 : Target {
> - let InstructionSet = ARM64InstrInfo;
> +def AArch64 : Target {
> + let InstructionSet = AArch64InstrInfo;
> let AssemblyParserVariants = [GenericAsmParserVariant, AppleAsmParserVariant];
> let AssemblyWriters = [GenericAsmWriter, AppleAsmWriter];
> }
>
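
One subtle bit in the .td rename: there is exactly one Target def per backend, and TableGen takes the target's name from it. If I remember the emitters right, that name is what shows up in the generated *Gen* base classes, which is why the C++ side of this patch can switch to AArch64GenInstrInfo and friends. A sketch of the relationship, under that assumption:

    // Assumption for illustration: "def AArch64 : Target" makes TableGen
    // emit AArch64GenInstrInfo / AArch64GenRegisterInfo, and the hand-written
    // classes derive from those generated bases.
    #define GET_INSTRINFO_HEADER
    #include "AArch64GenInstrInfo.inc"   // declares class AArch64GenInstrInfo

    class AArch64InstrInfo : public AArch64GenInstrInfo {
      // target-specific hooks go here
    };
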
> Copied: llvm/trunk/lib/Target/AArch64/AArch64AddressTypePromotion.cpp (from r209576, llvm/trunk/lib/Target/ARM64/ARM64AddressTypePromotion.cpp)
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/AArch64/AArch64AddressTypePromotion.cpp?p2=llvm/trunk/lib/Target/AArch64/AArch64AddressTypePromotion.cpp&p1=llvm/trunk/lib/Target/ARM64/ARM64AddressTypePromotion.cpp&r1=209576&r2=209577&rev=209577&view=diff
> ==============================================================================
> --- llvm/trunk/lib/Target/ARM64/ARM64AddressTypePromotion.cpp (original)
> +++ llvm/trunk/lib/Target/AArch64/AArch64AddressTypePromotion.cpp Sat May 24 07:50:23 2014
> @@ -1,5 +1,4 @@
> -
> -//===-- ARM64AddressTypePromotion.cpp --- Promote type for addr accesses -===//
> +//===-- AArch64AddressTypePromotion.cpp --- Promote type for addr accesses -==//
> //
> // The LLVM Compiler Infrastructure
> //
> @@ -29,7 +28,7 @@
> // FIXME: This pass may be useful for other targets too.
> // ===---------------------------------------------------------------------===//
>
> -#include "ARM64.h"
> +#include "AArch64.h"
> #include "llvm/ADT/DenseMap.h"
> #include "llvm/ADT/SmallPtrSet.h"
> #include "llvm/ADT/SmallVector.h"
> @@ -45,38 +44,38 @@
>
> using namespace llvm;
>
> -#define DEBUG_TYPE "arm64-type-promotion"
> +#define DEBUG_TYPE "aarch64-type-promotion"
>
> static cl::opt<bool>
> -EnableAddressTypePromotion("arm64-type-promotion", cl::Hidden,
> +EnableAddressTypePromotion("aarch64-type-promotion", cl::Hidden,
> cl::desc("Enable the type promotion pass"),
> cl::init(true));
> static cl::opt<bool>
> -EnableMerge("arm64-type-promotion-merge", cl::Hidden,
> +EnableMerge("aarch64-type-promotion-merge", cl::Hidden,
> cl::desc("Enable merging of redundant sexts when one is dominating"
> " the other."),
> cl::init(true));
>
> //===----------------------------------------------------------------------===//
> -// ARM64AddressTypePromotion
> +// AArch64AddressTypePromotion
> //===----------------------------------------------------------------------===//
>
> namespace llvm {
> -void initializeARM64AddressTypePromotionPass(PassRegistry &);
> +void initializeAArch64AddressTypePromotionPass(PassRegistry &);
> }
>
> namespace {
> -class ARM64AddressTypePromotion : public FunctionPass {
> +class AArch64AddressTypePromotion : public FunctionPass {
>
> public:
> static char ID;
> - ARM64AddressTypePromotion()
> + AArch64AddressTypePromotion()
> : FunctionPass(ID), Func(nullptr), ConsideredSExtType(nullptr) {
> - initializeARM64AddressTypePromotionPass(*PassRegistry::getPassRegistry());
> + initializeAArch64AddressTypePromotionPass(*PassRegistry::getPassRegistry());
> }
>
> const char *getPassName() const override {
> - return "ARM64 Address Type Promotion";
> + return "AArch64 Address Type Promotion";
> }
>
> /// Iterate over the functions and promote the computation of interesting
> @@ -140,19 +139,19 @@ private:
> };
> } // end anonymous namespace.
>
> -char ARM64AddressTypePromotion::ID = 0;
> +char AArch64AddressTypePromotion::ID = 0;
>
> -INITIALIZE_PASS_BEGIN(ARM64AddressTypePromotion, "arm64-type-promotion",
> - "ARM64 Type Promotion Pass", false, false)
> +INITIALIZE_PASS_BEGIN(AArch64AddressTypePromotion, "aarch64-type-promotion",
> + "AArch64 Type Promotion Pass", false, false)
> INITIALIZE_PASS_DEPENDENCY(DominatorTreeWrapperPass)
> -INITIALIZE_PASS_END(ARM64AddressTypePromotion, "arm64-type-promotion",
> - "ARM64 Type Promotion Pass", false, false)
> +INITIALIZE_PASS_END(AArch64AddressTypePromotion, "aarch64-type-promotion",
> + "AArch64 Type Promotion Pass", false, false)
>
> -FunctionPass *llvm::createARM64AddressTypePromotionPass() {
> - return new ARM64AddressTypePromotion();
> +FunctionPass *llvm::createAArch64AddressTypePromotionPass() {
> + return new AArch64AddressTypePromotion();
> }
>
> -bool ARM64AddressTypePromotion::canGetThrough(const Instruction *Inst) {
> +bool AArch64AddressTypePromotion::canGetThrough(const Instruction *Inst) {
> if (isa<SExtInst>(Inst))
> return true;
>
> @@ -175,7 +174,7 @@ bool ARM64AddressTypePromotion::canGetTh
> return false;
> }
>
> -bool ARM64AddressTypePromotion::shouldGetThrough(const Instruction *Inst) {
> +bool AArch64AddressTypePromotion::shouldGetThrough(const Instruction *Inst) {
> // If the type of the sext is the same as the considered one, this sext
> // will become useless.
> // Otherwise, we will have to do something to preserve the original value,
> @@ -211,7 +210,7 @@ static bool shouldSExtOperand(const Inst
> }
>
> bool
> -ARM64AddressTypePromotion::shouldConsiderSExt(const Instruction *SExt) const {
> +AArch64AddressTypePromotion::shouldConsiderSExt(const Instruction *SExt) const {
> if (SExt->getType() != ConsideredSExtType)
> return false;
>
> @@ -249,7 +248,7 @@ ARM64AddressTypePromotion::shouldConside
> // = a
> // Iterate on 'c'.
> bool
> -ARM64AddressTypePromotion::propagateSignExtension(Instructions &SExtInsts) {
> +AArch64AddressTypePromotion::propagateSignExtension(Instructions &SExtInsts) {
> DEBUG(dbgs() << "*** Propagate Sign Extension ***\n");
>
> bool LocalChange = false;
> @@ -375,8 +374,8 @@ ARM64AddressTypePromotion::propagateSign
> return LocalChange;
> }
>
> -void ARM64AddressTypePromotion::mergeSExts(ValueToInsts &ValToSExtendedUses,
> - SetOfInstructions &ToRemove) {
> +void AArch64AddressTypePromotion::mergeSExts(ValueToInsts &ValToSExtendedUses,
> + SetOfInstructions &ToRemove) {
> DominatorTree &DT = getAnalysis<DominatorTreeWrapperPass>().getDomTree();
>
> for (auto &Entry : ValToSExtendedUses) {
> @@ -414,7 +413,7 @@ void ARM64AddressTypePromotion::mergeSEx
> }
> }
>
> -void ARM64AddressTypePromotion::analyzeSExtension(Instructions &SExtInsts) {
> +void AArch64AddressTypePromotion::analyzeSExtension(Instructions &SExtInsts) {
> DEBUG(dbgs() << "*** Analyze Sign Extensions ***\n");
>
> DenseMap<Value *, Instruction *> SeenChains;
> @@ -479,7 +478,7 @@ void ARM64AddressTypePromotion::analyzeS
> }
> }
>
> -bool ARM64AddressTypePromotion::runOnFunction(Function &F) {
> +bool AArch64AddressTypePromotion::runOnFunction(Function &F) {
> if (!EnableAddressTypePromotion || F.isDeclaration())
> return false;
> Func = &F;
>
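
Note for anyone driving this pass from the command line: the hidden flags and the DEBUG_TYPE string are renamed along with the class, so it is -aarch64-type-promotion / -aarch64-type-promotion-merge (and -debug-only=aarch64-type-promotion) from here on; the old -arm64-* spellings no longer exist. For reference, the two options as they now read, spelled with explicit qualification so the snippet stands alone:

    #include "llvm/Support/CommandLine.h"

    // Same definitions as in the hunk above, just fully qualified.
    static llvm::cl::opt<bool> EnableAddressTypePromotion(
        "aarch64-type-promotion", llvm::cl::Hidden,
        llvm::cl::desc("Enable the type promotion pass"), llvm::cl::init(true));

    static llvm::cl::opt<bool> EnableMerge(
        "aarch64-type-promotion-merge", llvm::cl::Hidden,
        llvm::cl::desc("Enable merging of redundant sexts when one is dominating"
                       " the other."),
        llvm::cl::init(true));
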
> Copied: llvm/trunk/lib/Target/AArch64/AArch64AdvSIMDScalarPass.cpp (from r209576, llvm/trunk/lib/Target/ARM64/ARM64AdvSIMDScalarPass.cpp)
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/AArch64/AArch64AdvSIMDScalarPass.cpp?p2=llvm/trunk/lib/Target/AArch64/AArch64AdvSIMDScalarPass.cpp&p1=llvm/trunk/lib/Target/ARM64/ARM64AdvSIMDScalarPass.cpp&r1=209576&r2=209577&rev=209577&view=diff
> ==============================================================================
> --- llvm/trunk/lib/Target/ARM64/ARM64AdvSIMDScalarPass.cpp (original)
> +++ llvm/trunk/lib/Target/AArch64/AArch64AdvSIMDScalarPass.cpp Sat May 24 07:50:23 2014
> @@ -1,4 +1,4 @@
> -//===-- ARM64AdvSIMDScalar.cpp - Replace dead defs w/ zero reg --===//
> +//===-- AArch64AdvSIMDScalar.cpp - Replace dead defs w/ zero reg --===//
> //
> // The LLVM Compiler Infrastructure
> //
> @@ -33,9 +33,9 @@
> // solution.
> //===----------------------------------------------------------------------===//
>
> -#include "ARM64.h"
> -#include "ARM64InstrInfo.h"
> -#include "ARM64RegisterInfo.h"
> +#include "AArch64.h"
> +#include "AArch64InstrInfo.h"
> +#include "AArch64RegisterInfo.h"
> #include "llvm/ADT/Statistic.h"
> #include "llvm/CodeGen/MachineFunctionPass.h"
> #include "llvm/CodeGen/MachineFunction.h"
> @@ -47,12 +47,12 @@
> #include "llvm/Support/raw_ostream.h"
> using namespace llvm;
>
> -#define DEBUG_TYPE "arm64-simd-scalar"
> +#define DEBUG_TYPE "aarch64-simd-scalar"
>
> // Allow forcing all i64 operations with equivalent SIMD instructions to use
> // them. For stress-testing the transformation function.
> static cl::opt<bool>
> -TransformAll("arm64-simd-scalar-force-all",
> +TransformAll("aarch64-simd-scalar-force-all",
> cl::desc("Force use of AdvSIMD scalar instructions everywhere"),
> cl::init(false), cl::Hidden);
>
> @@ -61,9 +61,9 @@ STATISTIC(NumCopiesDeleted, "Number of c
> STATISTIC(NumCopiesInserted, "Number of cross-class copies inserted");
>
> namespace {
> -class ARM64AdvSIMDScalar : public MachineFunctionPass {
> +class AArch64AdvSIMDScalar : public MachineFunctionPass {
> MachineRegisterInfo *MRI;
> - const ARM64InstrInfo *TII;
> + const AArch64InstrInfo *TII;
>
> private:
> // isProfitableToTransform - Predicate function to determine whether an
> @@ -81,7 +81,7 @@ private:
>
> public:
> static char ID; // Pass identification, replacement for typeid.
> - explicit ARM64AdvSIMDScalar() : MachineFunctionPass(ID) {}
> + explicit AArch64AdvSIMDScalar() : MachineFunctionPass(ID) {}
>
> bool runOnMachineFunction(MachineFunction &F) override;
>
> @@ -94,7 +94,7 @@ public:
> MachineFunctionPass::getAnalysisUsage(AU);
> }
> };
> -char ARM64AdvSIMDScalar::ID = 0;
> +char AArch64AdvSIMDScalar::ID = 0;
> } // end anonymous namespace
>
> static bool isGPR64(unsigned Reg, unsigned SubReg,
> @@ -102,20 +102,20 @@ static bool isGPR64(unsigned Reg, unsign
> if (SubReg)
> return false;
> if (TargetRegisterInfo::isVirtualRegister(Reg))
> - return MRI->getRegClass(Reg)->hasSuperClassEq(&ARM64::GPR64RegClass);
> - return ARM64::GPR64RegClass.contains(Reg);
> + return MRI->getRegClass(Reg)->hasSuperClassEq(&AArch64::GPR64RegClass);
> + return AArch64::GPR64RegClass.contains(Reg);
> }
>
> static bool isFPR64(unsigned Reg, unsigned SubReg,
> const MachineRegisterInfo *MRI) {
> if (TargetRegisterInfo::isVirtualRegister(Reg))
> - return (MRI->getRegClass(Reg)->hasSuperClassEq(&ARM64::FPR64RegClass) &&
> + return (MRI->getRegClass(Reg)->hasSuperClassEq(&AArch64::FPR64RegClass) &&
> SubReg == 0) ||
> - (MRI->getRegClass(Reg)->hasSuperClassEq(&ARM64::FPR128RegClass) &&
> - SubReg == ARM64::dsub);
> + (MRI->getRegClass(Reg)->hasSuperClassEq(&AArch64::FPR128RegClass) &&
> + SubReg == AArch64::dsub);
> // Physical register references just check the register class directly.
> - return (ARM64::FPR64RegClass.contains(Reg) && SubReg == 0) ||
> - (ARM64::FPR128RegClass.contains(Reg) && SubReg == ARM64::dsub);
> + return (AArch64::FPR64RegClass.contains(Reg) && SubReg == 0) ||
> + (AArch64::FPR128RegClass.contains(Reg) && SubReg == AArch64::dsub);
> }
>
> // getSrcFromCopy - Get the original source register for a GPR64 <--> FPR64
> @@ -125,17 +125,18 @@ static unsigned getSrcFromCopy(const Mac
> unsigned &SubReg) {
> SubReg = 0;
> // The "FMOV Xd, Dn" instruction is the typical form.
> - if (MI->getOpcode() == ARM64::FMOVDXr || MI->getOpcode() == ARM64::FMOVXDr)
> + if (MI->getOpcode() == AArch64::FMOVDXr ||
> + MI->getOpcode() == AArch64::FMOVXDr)
> return MI->getOperand(1).getReg();
> // A lane zero extract "UMOV.d Xd, Vn[0]" is equivalent. We shouldn't see
> // these at this stage, but it's easy to check for.
> - if (MI->getOpcode() == ARM64::UMOVvi64 && MI->getOperand(2).getImm() == 0) {
> - SubReg = ARM64::dsub;
> + if (MI->getOpcode() == AArch64::UMOVvi64 && MI->getOperand(2).getImm() == 0) {
> + SubReg = AArch64::dsub;
> return MI->getOperand(1).getReg();
> }
> // Or just a plain COPY instruction. This can be directly to/from FPR64,
> // or it can be a dsub subreg reference to an FPR128.
> - if (MI->getOpcode() == ARM64::COPY) {
> + if (MI->getOpcode() == AArch64::COPY) {
> if (isFPR64(MI->getOperand(0).getReg(), MI->getOperand(0).getSubReg(),
> MRI) &&
> isGPR64(MI->getOperand(1).getReg(), MI->getOperand(1).getSubReg(), MRI))
> @@ -161,10 +162,10 @@ static int getTransformOpcode(unsigned O
> default:
> break;
> // FIXME: Lots more possibilities.
> - case ARM64::ADDXrr:
> - return ARM64::ADDv1i64;
> - case ARM64::SUBXrr:
> - return ARM64::SUBv1i64;
> + case AArch64::ADDXrr:
> + return AArch64::ADDv1i64;
> + case AArch64::SUBXrr:
> + return AArch64::SUBv1i64;
> }
> // No AdvSIMD equivalent, so just return the original opcode.
> return Opc;
> @@ -178,7 +179,8 @@ static bool isTransformable(const Machin
> // isProfitableToTransform - Predicate function to determine whether an
> // instruction should be transformed to its equivalent AdvSIMD scalar
> // instruction. "add Xd, Xn, Xm" ==> "add Dd, Da, Db", for example.
> -bool ARM64AdvSIMDScalar::isProfitableToTransform(const MachineInstr *MI) const {
> +bool
> +AArch64AdvSIMDScalar::isProfitableToTransform(const MachineInstr *MI) const {
> // If this instruction isn't eligible to be transformed (no SIMD equivalent),
> // early exit since that's the common case.
> if (!isTransformable(MI))
> @@ -238,8 +240,8 @@ bool ARM64AdvSIMDScalar::isProfitableToT
> // preferable to have it use the FPR64 in most cases, as if the source
> // vector is an IMPLICIT_DEF, the INSERT_SUBREG just goes away entirely.
> // Ditto for a lane insert.
> - else if (Use->getOpcode() == ARM64::INSERT_SUBREG ||
> - Use->getOpcode() == ARM64::INSvi64gpr)
> + else if (Use->getOpcode() == AArch64::INSERT_SUBREG ||
> + Use->getOpcode() == AArch64::INSvi64gpr)
> ;
> else
> AllUsesAreCopies = false;
> @@ -259,10 +261,10 @@ bool ARM64AdvSIMDScalar::isProfitableToT
> return TransformAll;
> }
>
> -static MachineInstr *insertCopy(const ARM64InstrInfo *TII, MachineInstr *MI,
> +static MachineInstr *insertCopy(const AArch64InstrInfo *TII, MachineInstr *MI,
> unsigned Dst, unsigned Src, bool IsKill) {
> MachineInstrBuilder MIB =
> - BuildMI(*MI->getParent(), MI, MI->getDebugLoc(), TII->get(ARM64::COPY),
> + BuildMI(*MI->getParent(), MI, MI->getDebugLoc(), TII->get(AArch64::COPY),
> Dst)
> .addReg(Src, getKillRegState(IsKill));
> DEBUG(dbgs() << " adding copy: " << *MIB);
> @@ -273,7 +275,7 @@ static MachineInstr *insertCopy(const AR
> // transformInstruction - Perform the transformation of an instruction
> // to its equivalant AdvSIMD scalar instruction. Update inputs and outputs
> // to be the correct register class, minimizing cross-class copies.
> -void ARM64AdvSIMDScalar::transformInstruction(MachineInstr *MI) {
> +void AArch64AdvSIMDScalar::transformInstruction(MachineInstr *MI) {
> DEBUG(dbgs() << "Scalar transform: " << *MI);
>
> MachineBasicBlock *MBB = MI->getParent();
> @@ -316,19 +318,19 @@ void ARM64AdvSIMDScalar::transformInstru
> // copy.
> if (!Src0) {
> SubReg0 = 0;
> - Src0 = MRI->createVirtualRegister(&ARM64::FPR64RegClass);
> + Src0 = MRI->createVirtualRegister(&AArch64::FPR64RegClass);
> insertCopy(TII, MI, Src0, OrigSrc0, true);
> }
> if (!Src1) {
> SubReg1 = 0;
> - Src1 = MRI->createVirtualRegister(&ARM64::FPR64RegClass);
> + Src1 = MRI->createVirtualRegister(&AArch64::FPR64RegClass);
> insertCopy(TII, MI, Src1, OrigSrc1, true);
> }
>
> // Create a vreg for the destination.
> // FIXME: No need to do this if the ultimate user expects an FPR64.
> // Check for that and avoid the copy if possible.
> - unsigned Dst = MRI->createVirtualRegister(&ARM64::FPR64RegClass);
> + unsigned Dst = MRI->createVirtualRegister(&AArch64::FPR64RegClass);
>
> // For now, all of the new instructions have the same simple three-register
> // form, so no need to special case based on what instruction we're
> @@ -349,7 +351,7 @@ void ARM64AdvSIMDScalar::transformInstru
> }
>
> // processMachineBasicBlock - Main optimzation loop.
> -bool ARM64AdvSIMDScalar::processMachineBasicBlock(MachineBasicBlock *MBB) {
> +bool AArch64AdvSIMDScalar::processMachineBasicBlock(MachineBasicBlock *MBB) {
> bool Changed = false;
> for (MachineBasicBlock::iterator I = MBB->begin(), E = MBB->end(); I != E;) {
> MachineInstr *MI = I;
> @@ -363,13 +365,13 @@ bool ARM64AdvSIMDScalar::processMachineB
> }
>
> // runOnMachineFunction - Pass entry point from PassManager.
> -bool ARM64AdvSIMDScalar::runOnMachineFunction(MachineFunction &mf) {
> +bool AArch64AdvSIMDScalar::runOnMachineFunction(MachineFunction &mf) {
> bool Changed = false;
> - DEBUG(dbgs() << "***** ARM64AdvSIMDScalar *****\n");
> + DEBUG(dbgs() << "***** AArch64AdvSIMDScalar *****\n");
>
> const TargetMachine &TM = mf.getTarget();
> MRI = &mf.getRegInfo();
> - TII = static_cast<const ARM64InstrInfo *>(TM.getInstrInfo());
> + TII = static_cast<const AArch64InstrInfo *>(TM.getInstrInfo());
>
> // Just check things on a one-block-at-a-time basis.
> for (MachineFunction::iterator I = mf.begin(), E = mf.end(); I != E; ++I)
> @@ -378,8 +380,8 @@ bool ARM64AdvSIMDScalar::runOnMachineFun
> return Changed;
> }
>
> -// createARM64AdvSIMDScalar - Factory function used by ARM64TargetMachine
> +// createAArch64AdvSIMDScalar - Factory function used by AArch64TargetMachine
> // to add the pass to the PassManager.
> -FunctionPass *llvm::createARM64AdvSIMDScalar() {
> - return new ARM64AdvSIMDScalar();
> +FunctionPass *llvm::createAArch64AdvSIMDScalar() {
> + return new AArch64AdvSIMDScalar();
> }
>
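
The transformation itself is untouched by the rename; for readers new to this pass, the in-source example "add Xd, Xn, Xm" ==> "add Dd, Da, Db" expands to roughly the following, with the COPYs that insertCopy() adds typically ending up as fmov at the asm level (my illustration, not from the patch):

    // Before: 64-bit integer add on the GPR side
    //   add  x0, x1, x2                  ; AArch64::ADDXrr
    // After transformInstruction(), when operands/users live on the FPR side:
    //   fmov d1, x1                      ; cross-class copy, only if needed
    //   fmov d2, x2                      ; cross-class copy, only if needed
    //   add  d0, d1, d2                  ; AArch64::ADDv1i64 (AdvSIMD scalar)
    //   fmov x0, d0                      ; copy back, only if a GPR user remains
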
> Copied: llvm/trunk/lib/Target/AArch64/AArch64AsmPrinter.cpp (from r209576, llvm/trunk/lib/Target/ARM64/ARM64AsmPrinter.cpp)
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/AArch64/AArch64AsmPrinter.cpp?p2=llvm/trunk/lib/Target/AArch64/AArch64AsmPrinter.cpp&p1=llvm/trunk/lib/Target/ARM64/ARM64AsmPrinter.cpp&r1=209576&r2=209577&rev=209577&view=diff
> ==============================================================================
> --- llvm/trunk/lib/Target/ARM64/ARM64AsmPrinter.cpp (original)
> +++ llvm/trunk/lib/Target/AArch64/AArch64AsmPrinter.cpp Sat May 24 07:50:23 2014
> @@ -1,4 +1,4 @@
> -//===-- ARM64AsmPrinter.cpp - ARM64 LLVM assembly writer ------------------===//
> +//===-- AArch64AsmPrinter.cpp - AArch64 LLVM assembly writer --------------===//
> //
> // The LLVM Compiler Infrastructure
> //
> @@ -8,16 +8,16 @@
> //===----------------------------------------------------------------------===//
> //
> // This file contains a printer that converts from our internal representation
> -// of machine-dependent LLVM code to the ARM64 assembly language.
> +// of machine-dependent LLVM code to the AArch64 assembly language.
> //
> //===----------------------------------------------------------------------===//
>
> -#include "ARM64.h"
> -#include "ARM64MachineFunctionInfo.h"
> -#include "ARM64MCInstLower.h"
> -#include "ARM64RegisterInfo.h"
> -#include "ARM64Subtarget.h"
> -#include "InstPrinter/ARM64InstPrinter.h"
> +#include "AArch64.h"
> +#include "AArch64MachineFunctionInfo.h"
> +#include "AArch64MCInstLower.h"
> +#include "AArch64RegisterInfo.h"
> +#include "AArch64Subtarget.h"
> +#include "InstPrinter/AArch64InstPrinter.h"
> #include "llvm/ADT/SmallString.h"
> #include "llvm/ADT/StringSwitch.h"
> #include "llvm/ADT/Twine.h"
> @@ -42,21 +42,24 @@ using namespace llvm;
>
> namespace {
>
> -class ARM64AsmPrinter : public AsmPrinter {
> - /// Subtarget - Keep a pointer to the ARM64Subtarget around so that we can
> +class AArch64AsmPrinter : public AsmPrinter {
> + /// Subtarget - Keep a pointer to the AArch64Subtarget around so that we can
> /// make the right decision when printing asm code for different targets.
> - const ARM64Subtarget *Subtarget;
> + const AArch64Subtarget *Subtarget;
>
> - ARM64MCInstLower MCInstLowering;
> + AArch64MCInstLower MCInstLowering;
> StackMaps SM;
>
> public:
> - ARM64AsmPrinter(TargetMachine &TM, MCStreamer &Streamer)
> - : AsmPrinter(TM, Streamer), Subtarget(&TM.getSubtarget<ARM64Subtarget>()),
> - MCInstLowering(OutContext, *Mang, *this), SM(*this), ARM64FI(nullptr),
> + AArch64AsmPrinter(TargetMachine &TM, MCStreamer &Streamer)
> + : AsmPrinter(TM, Streamer),
> + Subtarget(&TM.getSubtarget<AArch64Subtarget>()),
> + MCInstLowering(OutContext, *Mang, *this), SM(*this), AArch64FI(nullptr),
> LOHLabelCounter(0) {}
>
> - const char *getPassName() const override { return "ARM64 Assembly Printer"; }
> + const char *getPassName() const override {
> + return "AArch64 Assembly Printer";
> + }
>
> /// \brief Wrapper for MCInstLowering.lowerOperand() for the
> /// tblgen'erated pseudo lowering.
> @@ -81,7 +84,7 @@ public:
> }
>
> bool runOnMachineFunction(MachineFunction &F) override {
> - ARM64FI = F.getInfo<ARM64FunctionInfo>();
> + AArch64FI = F.getInfo<AArch64FunctionInfo>();
> return AsmPrinter::runOnMachineFunction(F);
> }
>
> @@ -106,9 +109,9 @@ private:
>
> MCSymbol *GetCPISymbol(unsigned CPID) const override;
> void EmitEndOfAsmFile(Module &M) override;
> - ARM64FunctionInfo *ARM64FI;
> + AArch64FunctionInfo *AArch64FI;
>
> - /// \brief Emit the LOHs contained in ARM64FI.
> + /// \brief Emit the LOHs contained in AArch64FI.
> void EmitLOHs();
>
> typedef std::map<const MachineInstr *, MCSymbol *> MInstToMCSymbol;
> @@ -120,7 +123,7 @@ private:
>
> //===----------------------------------------------------------------------===//
>
> -void ARM64AsmPrinter::EmitEndOfAsmFile(Module &M) {
> +void AArch64AsmPrinter::EmitEndOfAsmFile(Module &M) {
> if (Subtarget->isTargetMachO()) {
> // Funny Darwin hack: This flag tells the linker that no global symbols
> // contain code that falls through to other global symbols (e.g. the obvious
> @@ -156,7 +159,7 @@ void ARM64AsmPrinter::EmitEndOfAsmFile(M
> }
>
> MachineLocation
> -ARM64AsmPrinter::getDebugValueLocation(const MachineInstr *MI) const {
> +AArch64AsmPrinter::getDebugValueLocation(const MachineInstr *MI) const {
> MachineLocation Location;
> assert(MI->getNumOperands() == 4 && "Invalid no. of machine operands!");
> // Frame address. Currently handles register +- offset only.
> @@ -168,10 +171,10 @@ ARM64AsmPrinter::getDebugValueLocation(c
> return Location;
> }
>
> -void ARM64AsmPrinter::EmitLOHs() {
> +void AArch64AsmPrinter::EmitLOHs() {
> SmallVector<MCSymbol *, 3> MCArgs;
>
> - for (const auto &D : ARM64FI->getLOHContainer()) {
> + for (const auto &D : AArch64FI->getLOHContainer()) {
> for (const MachineInstr *MI : D.getArgs()) {
> MInstToMCSymbol::iterator LabelIt = LOHInstToLabel.find(MI);
> assert(LabelIt != LOHInstToLabel.end() &&
> @@ -183,13 +186,13 @@ void ARM64AsmPrinter::EmitLOHs() {
> }
> }
>
> -void ARM64AsmPrinter::EmitFunctionBodyEnd() {
> - if (!ARM64FI->getLOHRelated().empty())
> +void AArch64AsmPrinter::EmitFunctionBodyEnd() {
> + if (!AArch64FI->getLOHRelated().empty())
> EmitLOHs();
> }
>
> /// GetCPISymbol - Return the symbol for the specified constant pool entry.
> -MCSymbol *ARM64AsmPrinter::GetCPISymbol(unsigned CPID) const {
> +MCSymbol *AArch64AsmPrinter::GetCPISymbol(unsigned CPID) const {
> // Darwin uses a linker-private symbol name for constant-pools (to
> // avoid addends on the relocation?), ELF has no such concept and
> // uses a normal private symbol.
> @@ -203,8 +206,8 @@ MCSymbol *ARM64AsmPrinter::GetCPISymbol(
> Twine(getFunctionNumber()) + "_" + Twine(CPID));
> }
>
> -void ARM64AsmPrinter::printOperand(const MachineInstr *MI, unsigned OpNum,
> - raw_ostream &O) {
> +void AArch64AsmPrinter::printOperand(const MachineInstr *MI, unsigned OpNum,
> + raw_ostream &O) {
> const MachineOperand &MO = MI->getOperand(OpNum);
> switch (MO.getType()) {
> default:
> @@ -213,7 +216,7 @@ void ARM64AsmPrinter::printOperand(const
> unsigned Reg = MO.getReg();
> assert(TargetRegisterInfo::isPhysicalRegister(Reg));
> assert(!MO.getSubReg() && "Subregs should be eliminated!");
> - O << ARM64InstPrinter::getRegisterName(Reg);
> + O << AArch64InstPrinter::getRegisterName(Reg);
> break;
> }
> case MachineOperand::MO_Immediate: {
> @@ -224,8 +227,8 @@ void ARM64AsmPrinter::printOperand(const
> }
> }
>
> -bool ARM64AsmPrinter::printAsmMRegister(const MachineOperand &MO, char Mode,
> - raw_ostream &O) {
> +bool AArch64AsmPrinter::printAsmMRegister(const MachineOperand &MO, char Mode,
> + raw_ostream &O) {
> unsigned Reg = MO.getReg();
> switch (Mode) {
> default:
> @@ -238,30 +241,30 @@ bool ARM64AsmPrinter::printAsmMRegister(
> break;
> }
>
> - O << ARM64InstPrinter::getRegisterName(Reg);
> + O << AArch64InstPrinter::getRegisterName(Reg);
> return false;
> }
>
> // Prints the register in MO using class RC using the offset in the
> // new register class. This should not be used for cross class
> // printing.
> -bool ARM64AsmPrinter::printAsmRegInClass(const MachineOperand &MO,
> - const TargetRegisterClass *RC,
> - bool isVector, raw_ostream &O) {
> +bool AArch64AsmPrinter::printAsmRegInClass(const MachineOperand &MO,
> + const TargetRegisterClass *RC,
> + bool isVector, raw_ostream &O) {
> assert(MO.isReg() && "Should only get here with a register!");
> - const ARM64RegisterInfo *RI =
> - static_cast<const ARM64RegisterInfo *>(TM.getRegisterInfo());
> + const AArch64RegisterInfo *RI =
> + static_cast<const AArch64RegisterInfo *>(TM.getRegisterInfo());
> unsigned Reg = MO.getReg();
> unsigned RegToPrint = RC->getRegister(RI->getEncodingValue(Reg));
> assert(RI->regsOverlap(RegToPrint, Reg));
> - O << ARM64InstPrinter::getRegisterName(
> - RegToPrint, isVector ? ARM64::vreg : ARM64::NoRegAltName);
> + O << AArch64InstPrinter::getRegisterName(
> + RegToPrint, isVector ? AArch64::vreg : AArch64::NoRegAltName);
> return false;
> }
>
> -bool ARM64AsmPrinter::PrintAsmOperand(const MachineInstr *MI, unsigned OpNum,
> - unsigned AsmVariant,
> - const char *ExtraCode, raw_ostream &O) {
> +bool AArch64AsmPrinter::PrintAsmOperand(const MachineInstr *MI, unsigned OpNum,
> + unsigned AsmVariant,
> + const char *ExtraCode, raw_ostream &O) {
> const MachineOperand &MO = MI->getOperand(OpNum);
> // Does this asm operand have a single letter operand modifier?
> if (ExtraCode && ExtraCode[0]) {
> @@ -276,8 +279,8 @@ bool ARM64AsmPrinter::PrintAsmOperand(co
> if (MO.isReg())
> return printAsmMRegister(MO, ExtraCode[0], O);
> if (MO.isImm() && MO.getImm() == 0) {
> - unsigned Reg = ExtraCode[0] == 'w' ? ARM64::WZR : ARM64::XZR;
> - O << ARM64InstPrinter::getRegisterName(Reg);
> + unsigned Reg = ExtraCode[0] == 'w' ? AArch64::WZR : AArch64::XZR;
> + O << AArch64InstPrinter::getRegisterName(Reg);
> return false;
> }
> printOperand(MI, OpNum, O);
> @@ -291,19 +294,19 @@ bool ARM64AsmPrinter::PrintAsmOperand(co
> const TargetRegisterClass *RC;
> switch (ExtraCode[0]) {
> case 'b':
> - RC = &ARM64::FPR8RegClass;
> + RC = &AArch64::FPR8RegClass;
> break;
> case 'h':
> - RC = &ARM64::FPR16RegClass;
> + RC = &AArch64::FPR16RegClass;
> break;
> case 's':
> - RC = &ARM64::FPR32RegClass;
> + RC = &AArch64::FPR32RegClass;
> break;
> case 'd':
> - RC = &ARM64::FPR64RegClass;
> + RC = &AArch64::FPR64RegClass;
> break;
> case 'q':
> - RC = &ARM64::FPR128RegClass;
> + RC = &AArch64::FPR128RegClass;
> break;
> default:
> return true;
> @@ -321,33 +324,35 @@ bool ARM64AsmPrinter::PrintAsmOperand(co
> unsigned Reg = MO.getReg();
>
> // If this is a w or x register, print an x register.
> - if (ARM64::GPR32allRegClass.contains(Reg) ||
> - ARM64::GPR64allRegClass.contains(Reg))
> + if (AArch64::GPR32allRegClass.contains(Reg) ||
> + AArch64::GPR64allRegClass.contains(Reg))
> return printAsmMRegister(MO, 'x', O);
>
> // If this is a b, h, s, d, or q register, print it as a v register.
> - return printAsmRegInClass(MO, &ARM64::FPR128RegClass, true /* vector */, O);
> + return printAsmRegInClass(MO, &AArch64::FPR128RegClass, true /* vector */,
> + O);
> }
>
> printOperand(MI, OpNum, O);
> return false;
> }
>
> -bool ARM64AsmPrinter::PrintAsmMemoryOperand(const MachineInstr *MI,
> - unsigned OpNum, unsigned AsmVariant,
> - const char *ExtraCode,
> - raw_ostream &O) {
> +bool AArch64AsmPrinter::PrintAsmMemoryOperand(const MachineInstr *MI,
> + unsigned OpNum,
> + unsigned AsmVariant,
> + const char *ExtraCode,
> + raw_ostream &O) {
> if (ExtraCode && ExtraCode[0])
> return true; // Unknown modifier.
>
> const MachineOperand &MO = MI->getOperand(OpNum);
> assert(MO.isReg() && "unexpected inline asm memory operand");
> - O << "[" << ARM64InstPrinter::getRegisterName(MO.getReg()) << "]";
> + O << "[" << AArch64InstPrinter::getRegisterName(MO.getReg()) << "]";
> return false;
> }
>
> -void ARM64AsmPrinter::PrintDebugValueComment(const MachineInstr *MI,
> - raw_ostream &OS) {
> +void AArch64AsmPrinter::PrintDebugValueComment(const MachineInstr *MI,
> + raw_ostream &OS) {
> unsigned NOps = MI->getNumOperands();
> assert(NOps == 4);
> OS << '\t' << MAI->getCommentString() << "DEBUG_VALUE: ";
> @@ -366,21 +371,21 @@ void ARM64AsmPrinter::PrintDebugValueCom
> printOperand(MI, NOps - 2, OS);
> }
>
> -void ARM64AsmPrinter::LowerSTACKMAP(MCStreamer &OutStreamer, StackMaps &SM,
> - const MachineInstr &MI) {
> +void AArch64AsmPrinter::LowerSTACKMAP(MCStreamer &OutStreamer, StackMaps &SM,
> + const MachineInstr &MI) {
> unsigned NumNOPBytes = MI.getOperand(1).getImm();
>
> SM.recordStackMap(MI);
> // Emit padding.
> assert(NumNOPBytes % 4 == 0 && "Invalid number of NOP bytes requested!");
> for (unsigned i = 0; i < NumNOPBytes; i += 4)
> - EmitToStreamer(OutStreamer, MCInstBuilder(ARM64::HINT).addImm(0));
> + EmitToStreamer(OutStreamer, MCInstBuilder(AArch64::HINT).addImm(0));
> }
>
> // Lower a patchpoint of the form:
> // [<def>], <id>, <numBytes>, <target>, <numArgs>
> -void ARM64AsmPrinter::LowerPATCHPOINT(MCStreamer &OutStreamer, StackMaps &SM,
> - const MachineInstr &MI) {
> +void AArch64AsmPrinter::LowerPATCHPOINT(MCStreamer &OutStreamer, StackMaps &SM,
> + const MachineInstr &MI) {
> SM.recordPatchPoint(MI);
>
> PatchPointOpers Opers(&MI);
> @@ -393,21 +398,21 @@ void ARM64AsmPrinter::LowerPATCHPOINT(MC
> unsigned ScratchReg = MI.getOperand(Opers.getNextScratchIdx()).getReg();
> EncodedBytes = 16;
> // Materialize the jump address:
> - EmitToStreamer(OutStreamer, MCInstBuilder(ARM64::MOVZWi)
> + EmitToStreamer(OutStreamer, MCInstBuilder(AArch64::MOVZWi)
> .addReg(ScratchReg)
> .addImm((CallTarget >> 32) & 0xFFFF)
> .addImm(32));
> - EmitToStreamer(OutStreamer, MCInstBuilder(ARM64::MOVKWi)
> + EmitToStreamer(OutStreamer, MCInstBuilder(AArch64::MOVKWi)
> .addReg(ScratchReg)
> .addReg(ScratchReg)
> .addImm((CallTarget >> 16) & 0xFFFF)
> .addImm(16));
> - EmitToStreamer(OutStreamer, MCInstBuilder(ARM64::MOVKWi)
> + EmitToStreamer(OutStreamer, MCInstBuilder(AArch64::MOVKWi)
> .addReg(ScratchReg)
> .addReg(ScratchReg)
> .addImm(CallTarget & 0xFFFF)
> .addImm(0));
> - EmitToStreamer(OutStreamer, MCInstBuilder(ARM64::BLR).addReg(ScratchReg));
> + EmitToStreamer(OutStreamer, MCInstBuilder(AArch64::BLR).addReg(ScratchReg));
> }
> // Emit padding.
> unsigned NumBytes = Opers.getMetaOper(PatchPointOpers::NBytesPos).getImm();
> @@ -416,19 +421,19 @@ void ARM64AsmPrinter::LowerPATCHPOINT(MC
> assert((NumBytes - EncodedBytes) % 4 == 0 &&
> "Invalid number of NOP bytes requested!");
> for (unsigned i = EncodedBytes; i < NumBytes; i += 4)
> - EmitToStreamer(OutStreamer, MCInstBuilder(ARM64::HINT).addImm(0));
> + EmitToStreamer(OutStreamer, MCInstBuilder(AArch64::HINT).addImm(0));
> }
>
> // Simple pseudo-instructions have their lowering (with expansion to real
> // instructions) auto-generated.
> -#include "ARM64GenMCPseudoLowering.inc"
> +#include "AArch64GenMCPseudoLowering.inc"
>
> -void ARM64AsmPrinter::EmitInstruction(const MachineInstr *MI) {
> +void AArch64AsmPrinter::EmitInstruction(const MachineInstr *MI) {
> // Do any auto-generated pseudo lowerings.
> if (emitPseudoExpansionLowering(OutStreamer, MI))
> return;
>
> - if (ARM64FI->getLOHRelated().count(MI)) {
> + if (AArch64FI->getLOHRelated().count(MI)) {
> // Generate a label for LOH related instruction
> MCSymbol *LOHLabel = GetTempSymbol("loh", LOHLabelCounter++);
> // Associate the instruction with the label
> @@ -440,7 +445,7 @@ void ARM64AsmPrinter::EmitInstruction(co
> switch (MI->getOpcode()) {
> default:
> break;
> - case ARM64::DBG_VALUE: {
> + case AArch64::DBG_VALUE: {
> if (isVerbose() && OutStreamer.hasRawTextSupport()) {
> SmallString<128> TmpStr;
> raw_svector_ostream OS(TmpStr);
> @@ -453,23 +458,23 @@ void ARM64AsmPrinter::EmitInstruction(co
> // Tail calls use pseudo instructions so they have the proper code-gen
> // attributes (isCall, isReturn, etc.). We lower them to the real
> // instruction here.
> - case ARM64::TCRETURNri: {
> + case AArch64::TCRETURNri: {
> MCInst TmpInst;
> - TmpInst.setOpcode(ARM64::BR);
> + TmpInst.setOpcode(AArch64::BR);
> TmpInst.addOperand(MCOperand::CreateReg(MI->getOperand(0).getReg()));
> EmitToStreamer(OutStreamer, TmpInst);
> return;
> }
> - case ARM64::TCRETURNdi: {
> + case AArch64::TCRETURNdi: {
> MCOperand Dest;
> MCInstLowering.lowerOperand(MI->getOperand(0), Dest);
> MCInst TmpInst;
> - TmpInst.setOpcode(ARM64::B);
> + TmpInst.setOpcode(AArch64::B);
> TmpInst.addOperand(Dest);
> EmitToStreamer(OutStreamer, TmpInst);
> return;
> }
> - case ARM64::TLSDESC_BLR: {
> + case AArch64::TLSDESC_BLR: {
> MCOperand Callee, Sym;
> MCInstLowering.lowerOperand(MI->getOperand(0), Callee);
> MCInstLowering.lowerOperand(MI->getOperand(1), Sym);
> @@ -477,14 +482,14 @@ void ARM64AsmPrinter::EmitInstruction(co
> // First emit a relocation-annotation. This expands to no code, but requests
> // the following instruction gets an R_AARCH64_TLSDESC_CALL.
> MCInst TLSDescCall;
> - TLSDescCall.setOpcode(ARM64::TLSDESCCALL);
> + TLSDescCall.setOpcode(AArch64::TLSDESCCALL);
> TLSDescCall.addOperand(Sym);
> EmitToStreamer(OutStreamer, TLSDescCall);
>
> // Other than that it's just a normal indirect call to the function loaded
> // from the descriptor.
> MCInst BLR;
> - BLR.setOpcode(ARM64::BLR);
> + BLR.setOpcode(AArch64::BLR);
> BLR.addOperand(Callee);
> EmitToStreamer(OutStreamer, BLR);
>
> @@ -505,10 +510,10 @@ void ARM64AsmPrinter::EmitInstruction(co
> }
>
> // Force static initialization.
> -extern "C" void LLVMInitializeARM64AsmPrinter() {
> - RegisterAsmPrinter<ARM64AsmPrinter> X(TheARM64leTarget);
> - RegisterAsmPrinter<ARM64AsmPrinter> Y(TheARM64beTarget);
> +extern "C" void LLVMInitializeAArch64AsmPrinter() {
> + RegisterAsmPrinter<AArch64AsmPrinter> X(TheAArch64leTarget);
> + RegisterAsmPrinter<AArch64AsmPrinter> Y(TheAArch64beTarget);
>
> - RegisterAsmPrinter<ARM64AsmPrinter> Z(TheAArch64leTarget);
> - RegisterAsmPrinter<ARM64AsmPrinter> W(TheAArch64beTarget);
> + RegisterAsmPrinter<AArch64AsmPrinter> Z(TheARM64leTarget);
> + RegisterAsmPrinter<AArch64AsmPrinter> W(TheARM64beTarget);
> }
>
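
LowerPATCHPOINT is worth a second look since it is easy to get lost in the builder calls: the call target is materialized into the scratch register as one movz plus two movk, using exactly the (>>32, >>16, >>0) & 0xFFFF split and shift amounts from the hunk, followed by the indirect blr. A tiny standalone sketch of that split (the example address is made up):

    #include <cinttypes>
    #include <cstdio>

    // Illustrative only: reproduce the immediate split LowerPATCHPOINT uses
    // when materializing a patchpoint call target before the indirect call.
    int main() {
      std::uint64_t CallTarget = 0x0000123456789ABCULL;  // example address
      std::printf("movz scratch, #0x%" PRIx64 ", lsl #32\n",
                  (CallTarget >> 32) & 0xFFFF);          // 0x1234
      std::printf("movk scratch, #0x%" PRIx64 ", lsl #16\n",
                  (CallTarget >> 16) & 0xFFFF);          // 0x5678
      std::printf("movk scratch, #0x%" PRIx64 "\n",
                  CallTarget & 0xFFFF);                  // 0x9abc
      std::printf("blr  scratch\n");
      return 0;
    }
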
> Copied: llvm/trunk/lib/Target/AArch64/AArch64BranchRelaxation.cpp (from r209576, llvm/trunk/lib/Target/ARM64/ARM64BranchRelaxation.cpp)
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/AArch64/AArch64BranchRelaxation.cpp?p2=llvm/trunk/lib/Target/AArch64/AArch64BranchRelaxation.cpp&p1=llvm/trunk/lib/Target/ARM64/ARM64BranchRelaxation.cpp&r1=209576&r2=209577&rev=209577&view=diff
> ==============================================================================
> --- llvm/trunk/lib/Target/ARM64/ARM64BranchRelaxation.cpp (original)
> +++ llvm/trunk/lib/Target/AArch64/AArch64BranchRelaxation.cpp Sat May 24 07:50:23 2014
> @@ -1,4 +1,4 @@
> -//===-- ARM64BranchRelaxation.cpp - ARM64 branch relaxation ---------------===//
> +//===-- AArch64BranchRelaxation.cpp - AArch64 branch relaxation -----------===//
> //
> // The LLVM Compiler Infrastructure
> //
> @@ -9,9 +9,9 @@
> //
> //===----------------------------------------------------------------------===//
>
> -#include "ARM64.h"
> -#include "ARM64InstrInfo.h"
> -#include "ARM64MachineFunctionInfo.h"
> +#include "AArch64.h"
> +#include "AArch64InstrInfo.h"
> +#include "AArch64MachineFunctionInfo.h"
> #include "llvm/ADT/SmallVector.h"
> #include "llvm/CodeGen/MachineFunctionPass.h"
> #include "llvm/CodeGen/MachineInstrBuilder.h"
> @@ -23,29 +23,29 @@
> #include "llvm/Support/CommandLine.h"
> using namespace llvm;
>
> -#define DEBUG_TYPE "arm64-branch-relax"
> +#define DEBUG_TYPE "aarch64-branch-relax"
>
> static cl::opt<bool>
> -BranchRelaxation("arm64-branch-relax", cl::Hidden, cl::init(true),
> +BranchRelaxation("aarch64-branch-relax", cl::Hidden, cl::init(true),
> cl::desc("Relax out of range conditional branches"));
>
> static cl::opt<unsigned>
> -TBZDisplacementBits("arm64-tbz-offset-bits", cl::Hidden, cl::init(14),
> +TBZDisplacementBits("aarch64-tbz-offset-bits", cl::Hidden, cl::init(14),
> cl::desc("Restrict range of TB[N]Z instructions (DEBUG)"));
>
> static cl::opt<unsigned>
> -CBZDisplacementBits("arm64-cbz-offset-bits", cl::Hidden, cl::init(19),
> +CBZDisplacementBits("aarch64-cbz-offset-bits", cl::Hidden, cl::init(19),
> cl::desc("Restrict range of CB[N]Z instructions (DEBUG)"));
>
> static cl::opt<unsigned>
> -BCCDisplacementBits("arm64-bcc-offset-bits", cl::Hidden, cl::init(19),
> +BCCDisplacementBits("aarch64-bcc-offset-bits", cl::Hidden, cl::init(19),
> cl::desc("Restrict range of Bcc instructions (DEBUG)"));
>
> STATISTIC(NumSplit, "Number of basic blocks split");
> STATISTIC(NumRelaxed, "Number of conditional branches relaxed");
>
> namespace {
> -class ARM64BranchRelaxation : public MachineFunctionPass {
> +class AArch64BranchRelaxation : public MachineFunctionPass {
> /// BasicBlockInfo - Information about the offset and size of a single
> /// basic block.
> struct BasicBlockInfo {
> @@ -77,7 +77,7 @@ class ARM64BranchRelaxation : public Mac
> SmallVector<BasicBlockInfo, 16> BlockInfo;
>
> MachineFunction *MF;
> - const ARM64InstrInfo *TII;
> + const AArch64InstrInfo *TII;
>
> bool relaxBranchInstructions();
> void scanFunction();
> @@ -92,19 +92,19 @@ class ARM64BranchRelaxation : public Mac
>
> public:
> static char ID;
> - ARM64BranchRelaxation() : MachineFunctionPass(ID) {}
> + AArch64BranchRelaxation() : MachineFunctionPass(ID) {}
>
> bool runOnMachineFunction(MachineFunction &MF) override;
>
> const char *getPassName() const override {
> - return "ARM64 branch relaxation pass";
> + return "AArch64 branch relaxation pass";
> }
> };
> -char ARM64BranchRelaxation::ID = 0;
> +char AArch64BranchRelaxation::ID = 0;
> }
>
> /// verify - check BBOffsets, BBSizes, alignment of islands
> -void ARM64BranchRelaxation::verify() {
> +void AArch64BranchRelaxation::verify() {
> #ifndef NDEBUG
> unsigned PrevNum = MF->begin()->getNumber();
> for (MachineBasicBlock &MBB : *MF) {
> @@ -118,7 +118,7 @@ void ARM64BranchRelaxation::verify() {
> }
>
> /// print block size and offset information - debugging
> -void ARM64BranchRelaxation::dumpBBs() {
> +void AArch64BranchRelaxation::dumpBBs() {
> for (auto &MBB : *MF) {
> const BasicBlockInfo &BBI = BlockInfo[MBB.getNumber()];
> dbgs() << format("BB#%u\toffset=%08x\t", MBB.getNumber(), BBI.Offset)
> @@ -145,7 +145,7 @@ static bool BBHasFallthrough(MachineBasi
>
> /// scanFunction - Do the initial scan of the function, building up
> /// information about each block.
> -void ARM64BranchRelaxation::scanFunction() {
> +void AArch64BranchRelaxation::scanFunction() {
> BlockInfo.clear();
> BlockInfo.resize(MF->getNumBlockIDs());
>
> @@ -162,7 +162,7 @@ void ARM64BranchRelaxation::scanFunction
>
> /// computeBlockSize - Compute the size for MBB.
> /// This function updates BlockInfo directly.
> -void ARM64BranchRelaxation::computeBlockSize(const MachineBasicBlock &MBB) {
> +void AArch64BranchRelaxation::computeBlockSize(const MachineBasicBlock &MBB) {
> unsigned Size = 0;
> for (const MachineInstr &MI : MBB)
> Size += TII->GetInstSizeInBytes(&MI);
> @@ -172,7 +172,7 @@ void ARM64BranchRelaxation::computeBlock
> /// getInstrOffset - Return the current offset of the specified machine
> /// instruction from the start of the function. This offset changes as stuff is
> /// moved around inside the function.
> -unsigned ARM64BranchRelaxation::getInstrOffset(MachineInstr *MI) const {
> +unsigned AArch64BranchRelaxation::getInstrOffset(MachineInstr *MI) const {
> MachineBasicBlock *MBB = MI->getParent();
>
> // The offset is composed of two things: the sum of the sizes of all MBB's
> @@ -188,7 +188,7 @@ unsigned ARM64BranchRelaxation::getInstr
> return Offset;
> }
>
> -void ARM64BranchRelaxation::adjustBlockOffsets(MachineBasicBlock &Start) {
> +void AArch64BranchRelaxation::adjustBlockOffsets(MachineBasicBlock &Start) {
> unsigned PrevNum = Start.getNumber();
> for (auto &MBB : make_range(MachineFunction::iterator(Start), MF->end())) {
> unsigned Num = MBB.getNumber();
> @@ -209,7 +209,7 @@ void ARM64BranchRelaxation::adjustBlockO
> /// and must be updated by the caller! Other transforms follow using this
> /// utility function, so no point updating now rather than waiting.
> MachineBasicBlock *
> -ARM64BranchRelaxation::splitBlockBeforeInstr(MachineInstr *MI) {
> +AArch64BranchRelaxation::splitBlockBeforeInstr(MachineInstr *MI) {
> MachineBasicBlock *OrigBB = MI->getParent();
>
> // Create a new MBB for the code after the OrigBB.
> @@ -226,7 +226,7 @@ ARM64BranchRelaxation::splitBlockBeforeI
> // Note the new unconditional branch is not being recorded.
> // There doesn't seem to be meaningful DebugInfo available; this doesn't
> // correspond to anything in the source.
> - BuildMI(OrigBB, DebugLoc(), TII->get(ARM64::B)).addMBB(NewBB);
> + BuildMI(OrigBB, DebugLoc(), TII->get(AArch64::B)).addMBB(NewBB);
>
> // Insert an entry into BlockInfo to align it properly with the block numbers.
> BlockInfo.insert(BlockInfo.begin() + NewBB->getNumber(), BasicBlockInfo());
> @@ -252,9 +252,9 @@ ARM64BranchRelaxation::splitBlockBeforeI
>
> /// isBlockInRange - Returns true if the distance between specific MI and
> /// specific BB can fit in MI's displacement field.
> -bool ARM64BranchRelaxation::isBlockInRange(MachineInstr *MI,
> - MachineBasicBlock *DestBB,
> - unsigned Bits) {
> +bool AArch64BranchRelaxation::isBlockInRange(MachineInstr *MI,
> + MachineBasicBlock *DestBB,
> + unsigned Bits) {
> unsigned MaxOffs = ((1 << (Bits - 1)) - 1) << 2;
> unsigned BrOffset = getInstrOffset(MI);
> unsigned DestOffset = BlockInfo[DestBB->getNumber()].Offset;
> @@ -275,15 +275,15 @@ static bool isConditionalBranch(unsigned
> switch (Opc) {
> default:
> return false;
> - case ARM64::TBZW:
> - case ARM64::TBNZW:
> - case ARM64::TBZX:
> - case ARM64::TBNZX:
> - case ARM64::CBZW:
> - case ARM64::CBNZW:
> - case ARM64::CBZX:
> - case ARM64::CBNZX:
> - case ARM64::Bcc:
> + case AArch64::TBZW:
> + case AArch64::TBNZW:
> + case AArch64::TBZX:
> + case AArch64::TBNZX:
> + case AArch64::CBZW:
> + case AArch64::CBNZW:
> + case AArch64::CBZX:
> + case AArch64::CBNZX:
> + case AArch64::Bcc:
> return true;
> }
> }
> @@ -292,16 +292,16 @@ static MachineBasicBlock *getDestBlock(M
> switch (MI->getOpcode()) {
> default:
> assert(0 && "unexpected opcode!");
> - case ARM64::TBZW:
> - case ARM64::TBNZW:
> - case ARM64::TBZX:
> - case ARM64::TBNZX:
> + case AArch64::TBZW:
> + case AArch64::TBNZW:
> + case AArch64::TBZX:
> + case AArch64::TBNZX:
> return MI->getOperand(2).getMBB();
> - case ARM64::CBZW:
> - case ARM64::CBNZW:
> - case ARM64::CBZX:
> - case ARM64::CBNZX:
> - case ARM64::Bcc:
> + case AArch64::CBZW:
> + case AArch64::CBNZW:
> + case AArch64::CBZX:
> + case AArch64::CBNZX:
> + case AArch64::Bcc:
> return MI->getOperand(1).getMBB();
> }
> }
> @@ -310,15 +310,15 @@ static unsigned getOppositeConditionOpco
> switch (Opc) {
> default:
> assert(0 && "unexpected opcode!");
> - case ARM64::TBNZW: return ARM64::TBZW;
> - case ARM64::TBNZX: return ARM64::TBZX;
> - case ARM64::TBZW: return ARM64::TBNZW;
> - case ARM64::TBZX: return ARM64::TBNZX;
> - case ARM64::CBNZW: return ARM64::CBZW;
> - case ARM64::CBNZX: return ARM64::CBZX;
> - case ARM64::CBZW: return ARM64::CBNZW;
> - case ARM64::CBZX: return ARM64::CBNZX;
> - case ARM64::Bcc: return ARM64::Bcc; // Condition is an operand for Bcc.
> + case AArch64::TBNZW: return AArch64::TBZW;
> + case AArch64::TBNZX: return AArch64::TBZX;
> + case AArch64::TBZW: return AArch64::TBNZW;
> + case AArch64::TBZX: return AArch64::TBNZX;
> + case AArch64::CBNZW: return AArch64::CBZW;
> + case AArch64::CBNZX: return AArch64::CBZX;
> + case AArch64::CBZW: return AArch64::CBNZW;
> + case AArch64::CBZX: return AArch64::CBNZX;
> + case AArch64::Bcc: return AArch64::Bcc; // Condition is an operand for Bcc.
> }
> }
>
> @@ -326,32 +326,32 @@ static unsigned getBranchDisplacementBit
> switch (Opc) {
> default:
> assert(0 && "unexpected opcode!");
> - case ARM64::TBNZW:
> - case ARM64::TBZW:
> - case ARM64::TBNZX:
> - case ARM64::TBZX:
> + case AArch64::TBNZW:
> + case AArch64::TBZW:
> + case AArch64::TBNZX:
> + case AArch64::TBZX:
> return TBZDisplacementBits;
> - case ARM64::CBNZW:
> - case ARM64::CBZW:
> - case ARM64::CBNZX:
> - case ARM64::CBZX:
> + case AArch64::CBNZW:
> + case AArch64::CBZW:
> + case AArch64::CBNZX:
> + case AArch64::CBZX:
> return CBZDisplacementBits;
> - case ARM64::Bcc:
> + case AArch64::Bcc:
> return BCCDisplacementBits;
> }
> }
>
> static inline void invertBccCondition(MachineInstr *MI) {
> - assert(MI->getOpcode() == ARM64::Bcc && "Unexpected opcode!");
> - ARM64CC::CondCode CC = (ARM64CC::CondCode)MI->getOperand(0).getImm();
> - CC = ARM64CC::getInvertedCondCode(CC);
> + assert(MI->getOpcode() == AArch64::Bcc && "Unexpected opcode!");
> + AArch64CC::CondCode CC = (AArch64CC::CondCode)MI->getOperand(0).getImm();
> + CC = AArch64CC::getInvertedCondCode(CC);
> MI->getOperand(0).setImm((int64_t)CC);
> }
>
> /// fixupConditionalBranch - Fix up a conditional branch whose destination is
> /// too far away to fit in its displacement field. It is converted to an inverse
> /// conditional branch + an unconditional branch to the destination.
> -bool ARM64BranchRelaxation::fixupConditionalBranch(MachineInstr *MI) {
> +bool AArch64BranchRelaxation::fixupConditionalBranch(MachineInstr *MI) {
> MachineBasicBlock *DestBB = getDestBlock(MI);
>
> // Add an unconditional branch to the destination and invert the branch
> @@ -372,7 +372,7 @@ bool ARM64BranchRelaxation::fixupConditi
> if (BMI != MI) {
> if (std::next(MachineBasicBlock::iterator(MI)) ==
> std::prev(MBB->getLastNonDebugInstr()) &&
> - BMI->getOpcode() == ARM64::B) {
> + BMI->getOpcode() == AArch64::B) {
> // Last MI in the BB is an unconditional branch. Can we simply invert the
> // condition and swap destinations:
> // beq L1
> @@ -386,14 +386,15 @@ bool ARM64BranchRelaxation::fixupConditi
> DEBUG(dbgs() << " Invert condition and swap its destination with "
> << *BMI);
> BMI->getOperand(0).setMBB(DestBB);
> - unsigned OpNum =
> - (MI->getOpcode() == ARM64::TBZW || MI->getOpcode() == ARM64::TBNZW ||
> - MI->getOpcode() == ARM64::TBZX || MI->getOpcode() == ARM64::TBNZX)
> - ? 2
> - : 1;
> + unsigned OpNum = (MI->getOpcode() == AArch64::TBZW ||
> + MI->getOpcode() == AArch64::TBNZW ||
> + MI->getOpcode() == AArch64::TBZX ||
> + MI->getOpcode() == AArch64::TBNZX)
> + ? 2
> + : 1;
> MI->getOperand(OpNum).setMBB(NewDest);
> MI->setDesc(TII->get(getOppositeConditionOpcode(MI->getOpcode())));
> - if (MI->getOpcode() == ARM64::Bcc)
> + if (MI->getOpcode() == AArch64::Bcc)
> invertBccCondition(MI);
> return true;
> }
> @@ -429,14 +430,14 @@ bool ARM64BranchRelaxation::fixupConditi
> MachineInstrBuilder MIB = BuildMI(
> MBB, DebugLoc(), TII->get(getOppositeConditionOpcode(MI->getOpcode())))
> .addOperand(MI->getOperand(0));
> - if (MI->getOpcode() == ARM64::TBZW || MI->getOpcode() == ARM64::TBNZW ||
> - MI->getOpcode() == ARM64::TBZX || MI->getOpcode() == ARM64::TBNZX)
> + if (MI->getOpcode() == AArch64::TBZW || MI->getOpcode() == AArch64::TBNZW ||
> + MI->getOpcode() == AArch64::TBZX || MI->getOpcode() == AArch64::TBNZX)
> MIB.addOperand(MI->getOperand(1));
> - if (MI->getOpcode() == ARM64::Bcc)
> + if (MI->getOpcode() == AArch64::Bcc)
> invertBccCondition(MIB);
> MIB.addMBB(NextBB);
> BlockInfo[MBB->getNumber()].Size += TII->GetInstSizeInBytes(&MBB->back());
> - BuildMI(MBB, DebugLoc(), TII->get(ARM64::B)).addMBB(DestBB);
> + BuildMI(MBB, DebugLoc(), TII->get(AArch64::B)).addMBB(DestBB);
> BlockInfo[MBB->getNumber()].Size += TII->GetInstSizeInBytes(&MBB->back());
>
> // Remove the old conditional branch. It may or may not still be in MBB.
> @@ -448,7 +449,7 @@ bool ARM64BranchRelaxation::fixupConditi
> return true;
> }
>
> -bool ARM64BranchRelaxation::relaxBranchInstructions() {
> +bool AArch64BranchRelaxation::relaxBranchInstructions() {
> bool Changed = false;
> // Relaxing branches involves creating new basic blocks, so re-eval
> // end() for termination.
> @@ -465,16 +466,16 @@ bool ARM64BranchRelaxation::relaxBranchI
> return Changed;
> }
>
> -bool ARM64BranchRelaxation::runOnMachineFunction(MachineFunction &mf) {
> +bool AArch64BranchRelaxation::runOnMachineFunction(MachineFunction &mf) {
> MF = &mf;
>
> // If the pass is disabled, just bail early.
> if (!BranchRelaxation)
> return false;
>
> - DEBUG(dbgs() << "***** ARM64BranchRelaxation *****\n");
> + DEBUG(dbgs() << "***** AArch64BranchRelaxation *****\n");
>
> - TII = (const ARM64InstrInfo *)MF->getTarget().getInstrInfo();
> + TII = (const AArch64InstrInfo *)MF->getTarget().getInstrInfo();
>
> // Renumber all of the machine basic blocks in the function, guaranteeing that
> // the numbers agree with the position of the block in the function.
> @@ -502,8 +503,8 @@ bool ARM64BranchRelaxation::runOnMachine
> return MadeChange;
> }
>
> -/// createARM64BranchRelaxation - returns an instance of the constpool
> +/// createAArch64BranchRelaxation - returns an instance of the constpool
> /// island pass.
> -FunctionPass *llvm::createARM64BranchRelaxation() {
> - return new ARM64BranchRelaxation();
> +FunctionPass *llvm::createAArch64BranchRelaxation() {
> + return new AArch64BranchRelaxation();
> }
>
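
Handy to keep in mind when reading isBlockInRange: with the default displacement widths above, the formula ((1 << (Bits - 1)) - 1) << 2 gives the maximum forward reach of each branch family. A quick check (my numbers, derived from the defaults in this file):

    #include <cstdio>

    // Same range formula as isBlockInRange; Bits comes from the cl::opt
    // defaults above (14 for TB[N]Z, 19 for CB[N]Z and Bcc).
    static unsigned maxForwardOffset(unsigned Bits) {
      return ((1u << (Bits - 1)) - 1) << 2;
    }

    int main() {
      std::printf("TB[N]Z: %u bytes\n", maxForwardOffset(14)); // 32764, ~32 KiB
      std::printf("CB[N]Z: %u bytes\n", maxForwardOffset(19)); // 1048572, ~1 MiB
      std::printf("B.cc  : %u bytes\n", maxForwardOffset(19)); // 1048572, ~1 MiB
      return 0;
    }
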
> Copied: llvm/trunk/lib/Target/AArch64/AArch64CallingConv.h (from r209576, llvm/trunk/lib/Target/ARM64/ARM64CallingConv.h)
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/AArch64/AArch64CallingConv.h?p2=llvm/trunk/lib/Target/AArch64/AArch64CallingConv.h&p1=llvm/trunk/lib/Target/ARM64/ARM64CallingConv.h&r1=209576&r2=209577&rev=209577&view=diff
> ==============================================================================
> --- llvm/trunk/lib/Target/ARM64/ARM64CallingConv.h (original)
> +++ llvm/trunk/lib/Target/AArch64/AArch64CallingConv.h Sat May 24 07:50:23 2014
> @@ -1,4 +1,4 @@
> -//=== ARM64CallingConv.h - Custom Calling Convention Routines -*- C++ -*-===//
> +//=== AArch64CallingConv.h - Custom Calling Convention Routines -*- C++ -*-===//
> //
> // The LLVM Compiler Infrastructure
> //
> @@ -7,38 +7,38 @@
> //
> //===----------------------------------------------------------------------===//
> //
> -// This file contains the custom routines for the ARM64 Calling Convention that
> +// This file contains the custom routines for the AArch64 Calling Convention that
> // aren't done by tablegen.
> //
> //===----------------------------------------------------------------------===//
>
> -#ifndef ARM64CALLINGCONV_H
> -#define ARM64CALLINGCONV_H
> +#ifndef AArch64CALLINGCONV_H
> +#define AArch64CALLINGCONV_H
>
> -#include "ARM64InstrInfo.h"
> +#include "AArch64InstrInfo.h"
> #include "llvm/IR/CallingConv.h"
> #include "llvm/CodeGen/CallingConvLower.h"
> #include "llvm/Target/TargetInstrInfo.h"
>
> namespace llvm {
>
> -/// CC_ARM64_Custom_i1i8i16_Reg - customized handling of passing i1/i8/i16 via
> +/// CC_AArch64_Custom_i1i8i16_Reg - customized handling of passing i1/i8/i16 via
> /// register. Here, ValVT can be i1/i8/i16 or i32 depending on whether the
> /// argument is already promoted and LocVT is i1/i8/i16. We only promote the
> /// argument to i32 if we are sure this argument will be passed in register.
> -static bool CC_ARM64_Custom_i1i8i16_Reg(unsigned ValNo, MVT ValVT, MVT LocVT,
> +static bool CC_AArch64_Custom_i1i8i16_Reg(unsigned ValNo, MVT ValVT, MVT LocVT,
> CCValAssign::LocInfo LocInfo,
> ISD::ArgFlagsTy ArgFlags,
> CCState &State,
> bool IsWebKitJS = false) {
> - static const MCPhysReg RegList1[] = { ARM64::W0, ARM64::W1, ARM64::W2,
> - ARM64::W3, ARM64::W4, ARM64::W5,
> - ARM64::W6, ARM64::W7 };
> - static const MCPhysReg RegList2[] = { ARM64::X0, ARM64::X1, ARM64::X2,
> - ARM64::X3, ARM64::X4, ARM64::X5,
> - ARM64::X6, ARM64::X7 };
> - static const MCPhysReg WebKitRegList1[] = { ARM64::W0 };
> - static const MCPhysReg WebKitRegList2[] = { ARM64::X0 };
> + static const MCPhysReg RegList1[] = { AArch64::W0, AArch64::W1, AArch64::W2,
> + AArch64::W3, AArch64::W4, AArch64::W5,
> + AArch64::W6, AArch64::W7 };
> + static const MCPhysReg RegList2[] = { AArch64::X0, AArch64::X1, AArch64::X2,
> + AArch64::X3, AArch64::X4, AArch64::X5,
> + AArch64::X6, AArch64::X7 };
> + static const MCPhysReg WebKitRegList1[] = { AArch64::W0 };
> + static const MCPhysReg WebKitRegList2[] = { AArch64::X0 };
>
> const MCPhysReg *List1 = IsWebKitJS ? WebKitRegList1 : RegList1;
> const MCPhysReg *List2 = IsWebKitJS ? WebKitRegList2 : RegList2;
> @@ -63,22 +63,22 @@ static bool CC_ARM64_Custom_i1i8i16_Reg(
> return false;
> }
>
> -/// CC_ARM64_WebKit_JS_i1i8i16_Reg - customized handling of passing i1/i8/i16
> -/// via register. This behaves the same as CC_ARM64_Custom_i1i8i16_Reg, but only
> +/// CC_AArch64_WebKit_JS_i1i8i16_Reg - customized handling of passing i1/i8/i16
> +/// via register. This behaves the same as CC_AArch64_Custom_i1i8i16_Reg, but only
> /// uses the first register.
> -static bool CC_ARM64_WebKit_JS_i1i8i16_Reg(unsigned ValNo, MVT ValVT, MVT LocVT,
> +static bool CC_AArch64_WebKit_JS_i1i8i16_Reg(unsigned ValNo, MVT ValVT, MVT LocVT,
> CCValAssign::LocInfo LocInfo,
> ISD::ArgFlagsTy ArgFlags,
> CCState &State) {
> - return CC_ARM64_Custom_i1i8i16_Reg(ValNo, ValVT, LocVT, LocInfo, ArgFlags,
> + return CC_AArch64_Custom_i1i8i16_Reg(ValNo, ValVT, LocVT, LocInfo, ArgFlags,
> State, true);
> }
>
> -/// CC_ARM64_Custom_i1i8i16_Stack: customized handling of passing i1/i8/i16 on
> +/// CC_AArch64_Custom_i1i8i16_Stack: customized handling of passing i1/i8/i16 on
> /// stack. Here, ValVT can be i1/i8/i16 or i32 depending on whether the argument
> /// is already promoted and LocVT is i1/i8/i16. If ValVT is already promoted,
> /// it will be truncated back to i1/i8/i16.
> -static bool CC_ARM64_Custom_i1i8i16_Stack(unsigned ValNo, MVT ValVT, MVT LocVT,
> +static bool CC_AArch64_Custom_i1i8i16_Stack(unsigned ValNo, MVT ValVT, MVT LocVT,
> CCValAssign::LocInfo LocInfo,
> ISD::ArgFlagsTy ArgFlags,
> CCState &State) {
>
> Copied: llvm/trunk/lib/Target/AArch64/AArch64CallingConvention.td (from r209576, llvm/trunk/lib/Target/ARM64/ARM64CallingConvention.td)
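
For context, the custom i1/i8/i16 handlers above are what make small integer arguments behave sensibly across the conventions: promote to i32 and take a W register (shadowing the matching X register) while one is free, otherwise fall through to the stack rules, with the WebKit JS variant restricted to the single W0/X0 slot. In C terms (my illustration; the register assignments follow RegList1/RegList2 above):

    // Illustrative only: how small integer parameters land under
    // CC_AArch64_Custom_i1i8i16_Reg for AAPCS/DarwinPCS.
    short f(short a, char b);
    //   a: promoted to i32, assigned W0 (shadows X0)
    //   b: promoted to i32, assigned W1 (shadows X1)
    // Under CC_AArch64_WebKit_JS_i1i8i16_Reg only the first slot (W0/X0) is
    // available; further small-integer arguments go to the stack.
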
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/AArch64/AArch64CallingConvention.td?p2=llvm/trunk/lib/Target/AArch64/AArch64CallingConvention.td&p1=llvm/trunk/lib/Target/ARM64/ARM64CallingConvention.td&r1=209576&r2=209577&rev=209577&view=diff
> ==============================================================================
> --- llvm/trunk/lib/Target/ARM64/ARM64CallingConvention.td (original)
> +++ llvm/trunk/lib/Target/AArch64/AArch64CallingConvention.td Sat May 24 07:50:23 2014
> @@ -1,4 +1,4 @@
> -//===- ARM64CallingConv.td - Calling Conventions for ARM64 -*- tablegen -*-===//
> +//=- AArch64CallingConv.td - Calling Conventions for AArch64 -*- tablegen -*-=//
> //
> // The LLVM Compiler Infrastructure
> //
> @@ -7,7 +7,7 @@
> //
> //===----------------------------------------------------------------------===//
> //
> -// This describes the calling conventions for ARM64 architecture.
> +// This describes the calling conventions for AArch64 architecture.
> //
> //===----------------------------------------------------------------------===//
>
> @@ -22,7 +22,7 @@ class CCIfBigEndian<CCAction A> :
> // ARM AAPCS64 Calling Convention
> //===----------------------------------------------------------------------===//
>
> -def CC_ARM64_AAPCS : CallingConv<[
> +def CC_AArch64_AAPCS : CallingConv<[
> CCIfType<[v2f32], CCBitConvertToType<v2i32>>,
> CCIfType<[v2f64, v4f32], CCBitConvertToType<v2i64>>,
>
> @@ -42,7 +42,7 @@ def CC_ARM64_AAPCS : CallingConv<[
>
> // Handle i1, i8, i16, i32, i64, f32, f64 and v2f64 by passing in registers,
> // up to eight each of GPR and FPR.
> - CCIfType<[i1, i8, i16], CCCustom<"CC_ARM64_Custom_i1i8i16_Reg">>,
> + CCIfType<[i1, i8, i16], CCCustom<"CC_AArch64_Custom_i1i8i16_Reg">>,
> CCIfType<[i32], CCAssignToRegWithShadow<[W0, W1, W2, W3, W4, W5, W6, W7],
> [X0, X1, X2, X3, X4, X5, X6, X7]>>,
> // i128 is split to two i64s, we can't fit half to register X7.
> @@ -73,7 +73,7 @@ def CC_ARM64_AAPCS : CallingConv<[
> CCAssignToStack<16, 16>>
> ]>;
>
> -def RetCC_ARM64_AAPCS : CallingConv<[
> +def RetCC_AArch64_AAPCS : CallingConv<[
> CCIfType<[v2f32], CCBitConvertToType<v2i32>>,
> CCIfType<[v2f64, v4f32], CCBitConvertToType<v2i64>>,
>
> @@ -104,7 +104,7 @@ def RetCC_ARM64_AAPCS : CallingConv<[
> // from the standard one at this level:
> // + i128s (i.e. split i64s) don't need even registers.
> // + Stack slots are sized as needed rather than being at least 64-bit.
> -def CC_ARM64_DarwinPCS : CallingConv<[
> +def CC_AArch64_DarwinPCS : CallingConv<[
> CCIfType<[v2f32], CCBitConvertToType<v2i32>>,
> CCIfType<[v2f64, v4f32, f128], CCBitConvertToType<v2i64>>,
>
> @@ -117,7 +117,7 @@ def CC_ARM64_DarwinPCS : CallingConv<[
>
> // Handle i1, i8, i16, i32, i64, f32, f64 and v2f64 by passing in registers,
> // up to eight each of GPR and FPR.
> - CCIfType<[i1, i8, i16], CCCustom<"CC_ARM64_Custom_i1i8i16_Reg">>,
> + CCIfType<[i1, i8, i16], CCCustom<"CC_AArch64_Custom_i1i8i16_Reg">>,
> CCIfType<[i32], CCAssignToRegWithShadow<[W0, W1, W2, W3, W4, W5, W6, W7],
> [X0, X1, X2, X3, X4, X5, X6, X7]>>,
> // i128 is split into two i64s; we can't fit half into register X7.
> @@ -140,14 +140,14 @@ def CC_ARM64_DarwinPCS : CallingConv<[
> CCAssignToReg<[Q0, Q1, Q2, Q3, Q4, Q5, Q6, Q7]>>,
>
> // If more than will fit in registers, pass them on the stack instead.
> - CCIfType<[i1, i8, i16], CCCustom<"CC_ARM64_Custom_i1i8i16_Stack">>,
> + CCIfType<[i1, i8, i16], CCCustom<"CC_AArch64_Custom_i1i8i16_Stack">>,
> CCIfType<[i32, f32], CCAssignToStack<4, 4>>,
> CCIfType<[i64, f64, v1f64, v2f32, v1i64, v2i32, v4i16, v8i8],
> CCAssignToStack<8, 8>>,
> CCIfType<[v2i64, v4i32, v8i16, v16i8, v4f32, v2f64], CCAssignToStack<16, 16>>
> ]>;
>
> -def CC_ARM64_DarwinPCS_VarArg : CallingConv<[
> +def CC_AArch64_DarwinPCS_VarArg : CallingConv<[
> CCIfType<[v2f32], CCBitConvertToType<v2i32>>,
> CCIfType<[v2f64, v4f32, f128], CCBitConvertToType<v2i64>>,
>
> @@ -166,9 +166,9 @@ def CC_ARM64_DarwinPCS_VarArg : CallingC
> // in register and the remaining arguments on stack. We allow 32bit stack slots,
> // so that WebKit can write partial values in the stack and define the other
> // 32bit quantity as undef.
> -def CC_ARM64_WebKit_JS : CallingConv<[
> +def CC_AArch64_WebKit_JS : CallingConv<[
> // Handle i1, i8, i16, i32, and i64 passing in register X0 (W0).
> - CCIfType<[i1, i8, i16], CCCustom<"CC_ARM64_WebKit_JS_i1i8i16_Reg">>,
> + CCIfType<[i1, i8, i16], CCCustom<"CC_AArch64_WebKit_JS_i1i8i16_Reg">>,
> CCIfType<[i32], CCAssignToRegWithShadow<[W0], [X0]>>,
> CCIfType<[i64], CCAssignToRegWithShadow<[X0], [W0]>>,
>
> @@ -178,7 +178,7 @@ def CC_ARM64_WebKit_JS : CallingConv<[
> CCIfType<[i64, f64], CCAssignToStack<8, 8>>
> ]>;
>
> -def RetCC_ARM64_WebKit_JS : CallingConv<[
> +def RetCC_AArch64_WebKit_JS : CallingConv<[
> CCIfType<[i32], CCAssignToRegWithShadow<[W0, W1, W2, W3, W4, W5, W6, W7],
> [X0, X1, X2, X3, X4, X5, X6, X7]>>,
> CCIfType<[i64], CCAssignToRegWithShadow<[X0, X1, X2, X3, X4, X5, X6, X7],
> @@ -197,7 +197,7 @@ def RetCC_ARM64_WebKit_JS : CallingConv<
> // It would be better to model its preservation semantics properly (create a
> // vreg on entry, use it in RET & tail call generation; make that vreg def if we
> // end up saving LR as part of a call frame). Watch this space...
> -def CSR_ARM64_AAPCS : CalleeSavedRegs<(add LR, FP, X19, X20, X21, X22,
> +def CSR_AArch64_AAPCS : CalleeSavedRegs<(add LR, FP, X19, X20, X21, X22,
> X23, X24, X25, X26, X27, X28,
> D8, D9, D10, D11,
> D12, D13, D14, D15)>;
> @@ -210,24 +210,24 @@ def CSR_ARM64_AAPCS : CalleeSavedRegs<(a
> // (For generic ARM 64-bit ABI code, clang will not generate constructors or
> // destructors with 'this' returns, so this RegMask will not be used in that
> // case)
> -def CSR_ARM64_AAPCS_ThisReturn : CalleeSavedRegs<(add CSR_ARM64_AAPCS, X0)>;
> +def CSR_AArch64_AAPCS_ThisReturn : CalleeSavedRegs<(add CSR_AArch64_AAPCS, X0)>;
>
> // The function used by Darwin to obtain the address of a thread-local variable
> // guarantees more than a normal AAPCS function. x16 and x17 are used on the
> // fast path for calculation, but other registers except X0 (argument/return)
> // and LR (it is a call, after all) are preserved.
> -def CSR_ARM64_TLS_Darwin
> +def CSR_AArch64_TLS_Darwin
> : CalleeSavedRegs<(add (sub (sequence "X%u", 1, 28), X16, X17),
> FP,
> (sequence "Q%u", 0, 31))>;
>
> // The ELF stub used for TLS-descriptor access saves every feasible
> // register. Only X0 and LR are clobbered.
> -def CSR_ARM64_TLS_ELF
> +def CSR_AArch64_TLS_ELF
> : CalleeSavedRegs<(add (sequence "X%u", 1, 28), FP,
> (sequence "Q%u", 0, 31))>;
>
> -def CSR_ARM64_AllRegs
> +def CSR_AArch64_AllRegs
> : CalleeSavedRegs<(add (sequence "W%u", 0, 30), WSP,
> (sequence "X%u", 0, 28), FP, LR, SP,
> (sequence "B%u", 0, 31), (sequence "H%u", 0, 31),
>
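Drive-by note, since the CCCustom hooks above are the least obvious part of these tables: i1/i8/i16 arguments are promoted to i32 first and then placed like any other 32-bit value, W0-W7 while registers last and small stack slots afterwards on the Darwin PCS. A standalone sketch of that placement, just to make the flow concrete; this is plain C++ with made-up names, not LLVM's CCState machinery, and the 4-byte slot size only reflects the Darwin variant:

#include <iostream>
#include <string>
#include <vector>

struct ArgLoc {
  bool InReg;          // true if assigned to a register
  std::string Reg;     // "W0".."W7" when InReg
  unsigned StackOff;   // byte offset when on the stack
};

static std::vector<ArgLoc> assignSmallInts(unsigned NumArgs) {
  std::vector<ArgLoc> Locs;
  unsigned NextReg = 0, NextStack = 0;
  for (unsigned i = 0; i < NumArgs; ++i) {
    // i1/i8/i16 are first promoted to i32, then placed like any i32.
    if (NextReg < 8) {
      Locs.push_back({true, "W" + std::to_string(NextReg++), 0});
    } else {
      Locs.push_back({false, "", NextStack});
      NextStack += 4; // Darwin PCS allows 4-byte slots for these
    }
  }
  return Locs;
}

int main() {
  for (const ArgLoc &L : assignSmallInts(10))
    std::cout << (L.InReg ? L.Reg : "stack+" + std::to_string(L.StackOff))
              << '\n';
}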
> Copied: llvm/trunk/lib/Target/AArch64/AArch64CleanupLocalDynamicTLSPass.cpp (from r209576, llvm/trunk/lib/Target/ARM64/ARM64CleanupLocalDynamicTLSPass.cpp)
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/AArch64/AArch64CleanupLocalDynamicTLSPass.cpp?p2=llvm/trunk/lib/Target/AArch64/AArch64CleanupLocalDynamicTLSPass.cpp&p1=llvm/trunk/lib/Target/ARM64/ARM64CleanupLocalDynamicTLSPass.cpp&r1=209576&r2=209577&rev=209577&view=diff
> ==============================================================================
> --- llvm/trunk/lib/Target/ARM64/ARM64CleanupLocalDynamicTLSPass.cpp (original)
> +++ llvm/trunk/lib/Target/AArch64/AArch64CleanupLocalDynamicTLSPass.cpp Sat May 24 07:50:23 2014
> @@ -1,4 +1,4 @@
> -//===-- ARM64CleanupLocalDynamicTLSPass.cpp -----------------------*- C++ -*-=//
> +//===-- AArch64CleanupLocalDynamicTLSPass.cpp ---------------------*- C++ -*-=//
> //
> // The LLVM Compiler Infrastructure
> //
> @@ -22,10 +22,10 @@
> // pass looks through a function and performs such combinations.
> //
> //===----------------------------------------------------------------------===//
> -#include "ARM64.h"
> -#include "ARM64InstrInfo.h"
> -#include "ARM64MachineFunctionInfo.h"
> -#include "ARM64TargetMachine.h"
> +#include "AArch64.h"
> +#include "AArch64InstrInfo.h"
> +#include "AArch64MachineFunctionInfo.h"
> +#include "AArch64TargetMachine.h"
> #include "llvm/CodeGen/MachineDominators.h"
> #include "llvm/CodeGen/MachineFunction.h"
> #include "llvm/CodeGen/MachineFunctionPass.h"
> @@ -39,7 +39,7 @@ struct LDTLSCleanup : public MachineFunc
> LDTLSCleanup() : MachineFunctionPass(ID) {}
>
> bool runOnMachineFunction(MachineFunction &MF) override {
> - ARM64FunctionInfo *AFI = MF.getInfo<ARM64FunctionInfo>();
> + AArch64FunctionInfo *AFI = MF.getInfo<AArch64FunctionInfo>();
> if (AFI->getNumLocalDynamicTLSAccesses() < 2) {
> // No point folding accesses if there aren't at least two.
> return false;
> @@ -62,7 +62,7 @@ struct LDTLSCleanup : public MachineFunc
> for (MachineBasicBlock::iterator I = BB->begin(), E = BB->end(); I != E;
> ++I) {
> switch (I->getOpcode()) {
> - case ARM64::TLSDESC_BLR:
> + case AArch64::TLSDESC_BLR:
> // Make sure it's a local dynamic access.
> if (!I->getOperand(1).isSymbol() ||
> strcmp(I->getOperand(1).getSymbolName(), "_TLS_MODULE_BASE_"))
> @@ -92,15 +92,15 @@ struct LDTLSCleanup : public MachineFunc
> MachineInstr *replaceTLSBaseAddrCall(MachineInstr *I,
> unsigned TLSBaseAddrReg) {
> MachineFunction *MF = I->getParent()->getParent();
> - const ARM64TargetMachine *TM =
> - static_cast<const ARM64TargetMachine *>(&MF->getTarget());
> - const ARM64InstrInfo *TII = TM->getInstrInfo();
> + const AArch64TargetMachine *TM =
> + static_cast<const AArch64TargetMachine *>(&MF->getTarget());
> + const AArch64InstrInfo *TII = TM->getInstrInfo();
>
> // Insert a Copy from TLSBaseAddrReg to x0, which is where the rest of the
> // code sequence assumes the address will be.
> - MachineInstr *Copy =
> - BuildMI(*I->getParent(), I, I->getDebugLoc(),
> - TII->get(TargetOpcode::COPY), ARM64::X0).addReg(TLSBaseAddrReg);
> + MachineInstr *Copy = BuildMI(*I->getParent(), I, I->getDebugLoc(),
> + TII->get(TargetOpcode::COPY),
> + AArch64::X0).addReg(TLSBaseAddrReg);
>
> // Erase the TLS_base_addr instruction.
> I->eraseFromParent();
> @@ -112,19 +112,19 @@ struct LDTLSCleanup : public MachineFunc
> // inserting a copy instruction after I. Returns the new instruction.
> MachineInstr *setRegister(MachineInstr *I, unsigned *TLSBaseAddrReg) {
> MachineFunction *MF = I->getParent()->getParent();
> - const ARM64TargetMachine *TM =
> - static_cast<const ARM64TargetMachine *>(&MF->getTarget());
> - const ARM64InstrInfo *TII = TM->getInstrInfo();
> + const AArch64TargetMachine *TM =
> + static_cast<const AArch64TargetMachine *>(&MF->getTarget());
> + const AArch64InstrInfo *TII = TM->getInstrInfo();
>
> // Create a virtual register for the TLS base address.
> MachineRegisterInfo &RegInfo = MF->getRegInfo();
> - *TLSBaseAddrReg = RegInfo.createVirtualRegister(&ARM64::GPR64RegClass);
> + *TLSBaseAddrReg = RegInfo.createVirtualRegister(&AArch64::GPR64RegClass);
>
> // Insert a copy from X0 to TLSBaseAddrReg for later.
> MachineInstr *Next = I->getNextNode();
> MachineInstr *Copy = BuildMI(*I->getParent(), Next, I->getDebugLoc(),
> TII->get(TargetOpcode::COPY),
> - *TLSBaseAddrReg).addReg(ARM64::X0);
> + *TLSBaseAddrReg).addReg(AArch64::X0);
>
> return Copy;
> }
> @@ -142,6 +142,6 @@ struct LDTLSCleanup : public MachineFunc
> }
>
> char LDTLSCleanup::ID = 0;
> -FunctionPass *llvm::createARM64CleanupLocalDynamicTLSPass() {
> +FunctionPass *llvm::createAArch64CleanupLocalDynamicTLSPass() {
> return new LDTLSCleanup();
> }
>
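For anyone new to this pass: all it does is keep the first _TLS_MODULE_BASE_ call, stash its X0 result in a virtual register, and turn every later call in the function into a copy from that register. A crude string-level sketch of the rewrite; "vreg_base" is just a stand-in for the vreg that setRegister() creates, not anything in the real code:

#include <iostream>
#include <string>
#include <vector>

int main() {
  std::vector<std::string> In = {
      "tlsdesc_call _TLS_MODULE_BASE_", "add x1, x0, :dtprel:a",
      "tlsdesc_call _TLS_MODULE_BASE_", "add x2, x0, :dtprel:b"};

  std::vector<std::string> Out;
  bool HaveBase = false;
  for (const std::string &I : In) {
    if (I.rfind("tlsdesc_call", 0) == 0) {
      if (!HaveBase) {
        Out.push_back(I);                     // first access: keep the call
        Out.push_back("copy vreg_base, x0");  // remember the base address
        HaveBase = true;
      } else {
        Out.push_back("copy x0, vreg_base");  // later access: no second call
      }
      continue;
    }
    Out.push_back(I); // everything else passes through untouched
  }
  for (const std::string &I : Out)
    std::cout << I << '\n';
}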
> Copied: llvm/trunk/lib/Target/AArch64/AArch64CollectLOH.cpp (from r209576, llvm/trunk/lib/Target/ARM64/ARM64CollectLOH.cpp)
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/AArch64/AArch64CollectLOH.cpp?p2=llvm/trunk/lib/Target/AArch64/AArch64CollectLOH.cpp&p1=llvm/trunk/lib/Target/ARM64/ARM64CollectLOH.cpp&r1=209576&r2=209577&rev=209577&view=diff
> ==============================================================================
> --- llvm/trunk/lib/Target/ARM64/ARM64CollectLOH.cpp (original)
> +++ llvm/trunk/lib/Target/AArch64/AArch64CollectLOH.cpp Sat May 24 07:50:23 2014
> @@ -1,4 +1,4 @@
> -//===-------------- ARM64CollectLOH.cpp - ARM64 collect LOH pass --*- C++ -*-=//
> +//===---------- AArch64CollectLOH.cpp - AArch64 collect LOH pass --*- C++ -*-=//
> //
> // The LLVM Compiler Infrastructure
> //
> @@ -85,8 +85,8 @@
> // This LOH aims at getting rid of redundant ADRP instructions.
> //
> // The overall design for emitting the LOHs is:
> -// 1. ARM64CollectLOH (this pass) records the LOHs in the ARM64FunctionInfo.
> -// 2. ARM64AsmPrinter reads the LOHs from ARM64FunctionInfo and it:
> +// 1. AArch64CollectLOH (this pass) records the LOHs in the AArch64FunctionInfo.
> +// 2. AArch64AsmPrinter reads the LOHs from AArch64FunctionInfo and it:
> // 1. Associates a label with them.
> // 2. Emits them in a MCStreamer (EmitLOHDirective).
> // - The MCMachOStreamer records them into the MCAssembler.
> @@ -98,10 +98,10 @@
> // - Other ObjectWriters ignore them.
> //===----------------------------------------------------------------------===//
>
> -#include "ARM64.h"
> -#include "ARM64InstrInfo.h"
> -#include "ARM64MachineFunctionInfo.h"
> -#include "MCTargetDesc/ARM64AddressingModes.h"
> +#include "AArch64.h"
> +#include "AArch64InstrInfo.h"
> +#include "AArch64MachineFunctionInfo.h"
> +#include "MCTargetDesc/AArch64AddressingModes.h"
> #include "llvm/ADT/BitVector.h"
> #include "llvm/ADT/DenseMap.h"
> #include "llvm/ADT/MapVector.h"
> @@ -122,16 +122,16 @@
> #include "llvm/ADT/Statistic.h"
> using namespace llvm;
>
> -#define DEBUG_TYPE "arm64-collect-loh"
> +#define DEBUG_TYPE "aarch64-collect-loh"
>
> static cl::opt<bool>
> -PreCollectRegister("arm64-collect-loh-pre-collect-register", cl::Hidden,
> +PreCollectRegister("aarch64-collect-loh-pre-collect-register", cl::Hidden,
> cl::desc("Restrict analysis to registers invovled"
> " in LOHs"),
> cl::init(true));
>
> static cl::opt<bool>
> -BasicBlockScopeOnly("arm64-collect-loh-bb-only", cl::Hidden,
> +BasicBlockScopeOnly("aarch64-collect-loh-bb-only", cl::Hidden,
> cl::desc("Restrict analysis at basic block scope"),
> cl::init(true));
>
> @@ -164,20 +164,20 @@ STATISTIC(NumADRSimpleCandidate, "Number
> STATISTIC(NumADRComplexCandidate, "Number of too complex ADRP + ADD");
>
> namespace llvm {
> -void initializeARM64CollectLOHPass(PassRegistry &);
> +void initializeAArch64CollectLOHPass(PassRegistry &);
> }
>
> namespace {
> -struct ARM64CollectLOH : public MachineFunctionPass {
> +struct AArch64CollectLOH : public MachineFunctionPass {
> static char ID;
> - ARM64CollectLOH() : MachineFunctionPass(ID) {
> - initializeARM64CollectLOHPass(*PassRegistry::getPassRegistry());
> + AArch64CollectLOH() : MachineFunctionPass(ID) {
> + initializeAArch64CollectLOHPass(*PassRegistry::getPassRegistry());
> }
>
> bool runOnMachineFunction(MachineFunction &MF) override;
>
> const char *getPassName() const override {
> - return "ARM64 Collect Linker Optimization Hint (LOH)";
> + return "AArch64 Collect Linker Optimization Hint (LOH)";
> }
>
> void getAnalysisUsage(AnalysisUsage &AU) const override {
> @@ -214,14 +214,14 @@ typedef DenseMap<unsigned, unsigned> Map
> typedef SmallVector<unsigned, 32> MapIdToReg;
> } // end anonymous namespace.
>
> -char ARM64CollectLOH::ID = 0;
> +char AArch64CollectLOH::ID = 0;
>
> -INITIALIZE_PASS_BEGIN(ARM64CollectLOH, "arm64-collect-loh",
> - "ARM64 Collect Linker Optimization Hint (LOH)", false,
> +INITIALIZE_PASS_BEGIN(AArch64CollectLOH, "aarch64-collect-loh",
> + "AArch64 Collect Linker Optimization Hint (LOH)", false,
> false)
> INITIALIZE_PASS_DEPENDENCY(MachineDominatorTree)
> -INITIALIZE_PASS_END(ARM64CollectLOH, "arm64-collect-loh",
> - "ARM64 Collect Linker Optimization Hint (LOH)", false,
> +INITIALIZE_PASS_END(AArch64CollectLOH, "aarch64-collect-loh",
> + "AArch64 Collect Linker Optimization Hint (LOH)", false,
> false)
>
> /// Given a pair (MBB, reg) get the corresponding set of instructions from
> @@ -295,7 +295,7 @@ static void initReachingDef(MachineFunct
> BitVector &BBKillSet = Kill[&MBB];
> BBKillSet.resize(NbReg);
> for (const MachineInstr &MI : MBB) {
> - bool IsADRP = MI.getOpcode() == ARM64::ADRP;
> + bool IsADRP = MI.getOpcode() == AArch64::ADRP;
>
> // Process uses first.
> if (IsADRP || !ADRPMode)
> @@ -509,9 +509,9 @@ static bool canDefBePartOfLOH(const Mach
> switch (Opc) {
> default:
> return false;
> - case ARM64::ADRP:
> + case AArch64::ADRP:
> return true;
> - case ARM64::ADDXri:
> + case AArch64::ADDXri:
> // Check immediate to see if the immediate is an address.
> switch (Def->getOperand(2).getType()) {
> default:
> @@ -522,7 +522,7 @@ static bool canDefBePartOfLOH(const Mach
> case MachineOperand::MO_BlockAddress:
> return true;
> }
> - case ARM64::LDRXui:
> + case AArch64::LDRXui:
> // Check immediate to see if the immediate is an address.
> switch (Def->getOperand(2).getType()) {
> default:
> @@ -541,13 +541,13 @@ static bool isCandidateStore(const Machi
> switch (Instr->getOpcode()) {
> default:
> return false;
> - case ARM64::STRBui:
> - case ARM64::STRHui:
> - case ARM64::STRWui:
> - case ARM64::STRXui:
> - case ARM64::STRSui:
> - case ARM64::STRDui:
> - case ARM64::STRQui:
> + case AArch64::STRBui:
> + case AArch64::STRHui:
> + case AArch64::STRWui:
> + case AArch64::STRXui:
> + case AArch64::STRSui:
> + case AArch64::STRDui:
> + case AArch64::STRQui:
> // In case we have str xA, [xA, #imm], this is two different uses
> // of xA and we cannot fold, otherwise the xA stored may be wrong,
> // even if #imm == 0.
> @@ -582,7 +582,7 @@ static void reachedUsesToDefs(InstrToIns
> MapRegToId::const_iterator It;
> // if all the reaching defs are not adrp, this use will not be
> // simplifiable.
> - if ((ADRPMode && Def->getOpcode() != ARM64::ADRP) ||
> + if ((ADRPMode && Def->getOpcode() != AArch64::ADRP) ||
> (!ADRPMode && !canDefBePartOfLOH(Def)) ||
> (!ADRPMode && isCandidateStore(MI) &&
> // stores are LOH candidates iff the end of the chain is used as
> @@ -615,7 +615,7 @@ static void reachedUsesToDefs(InstrToIns
> /// Based on the use to defs information (in ADRPMode), compute the
> /// opportunities of LOH ADRP-related.
> static void computeADRP(const InstrToInstrs &UseToDefs,
> - ARM64FunctionInfo &ARM64FI,
> + AArch64FunctionInfo &AArch64FI,
> const MachineDominatorTree *MDT) {
> DEBUG(dbgs() << "*** Compute LOH for ADRP\n");
> for (const auto &Entry : UseToDefs) {
> @@ -634,7 +634,7 @@ static void computeADRP(const InstrToIns
> SmallVector<const MachineInstr *, 2> Args;
> Args.push_back(L2);
> Args.push_back(L1);
> - ARM64FI.addLOHDirective(MCLOH_AdrpAdrp, Args);
> + AArch64FI.addLOHDirective(MCLOH_AdrpAdrp, Args);
> ++NumADRPSimpleCandidate;
> }
> #ifdef DEBUG
> @@ -656,19 +656,19 @@ static bool isCandidateLoad(const Machin
> switch (Instr->getOpcode()) {
> default:
> return false;
> - case ARM64::LDRSBWui:
> - case ARM64::LDRSBXui:
> - case ARM64::LDRSHWui:
> - case ARM64::LDRSHXui:
> - case ARM64::LDRSWui:
> - case ARM64::LDRBui:
> - case ARM64::LDRHui:
> - case ARM64::LDRWui:
> - case ARM64::LDRXui:
> - case ARM64::LDRSui:
> - case ARM64::LDRDui:
> - case ARM64::LDRQui:
> - if (Instr->getOperand(2).getTargetFlags() & ARM64II::MO_GOT)
> + case AArch64::LDRSBWui:
> + case AArch64::LDRSBXui:
> + case AArch64::LDRSHWui:
> + case AArch64::LDRSHXui:
> + case AArch64::LDRSWui:
> + case AArch64::LDRBui:
> + case AArch64::LDRHui:
> + case AArch64::LDRWui:
> + case AArch64::LDRXui:
> + case AArch64::LDRSui:
> + case AArch64::LDRDui:
> + case AArch64::LDRQui:
> + if (Instr->getOperand(2).getTargetFlags() & AArch64II::MO_GOT)
> return false;
> return true;
> }
> @@ -681,12 +681,12 @@ static bool supportLoadFromLiteral(const
> switch (Instr->getOpcode()) {
> default:
> return false;
> - case ARM64::LDRSWui:
> - case ARM64::LDRWui:
> - case ARM64::LDRXui:
> - case ARM64::LDRSui:
> - case ARM64::LDRDui:
> - case ARM64::LDRQui:
> + case AArch64::LDRSWui:
> + case AArch64::LDRWui:
> + case AArch64::LDRXui:
> + case AArch64::LDRSui:
> + case AArch64::LDRDui:
> + case AArch64::LDRQui:
> return true;
> }
> // Unreachable.
> @@ -705,7 +705,7 @@ static bool isCandidate(const MachineIns
> return false;
>
> const MachineInstr *Def = *UseToDefs.find(Instr)->second.begin();
> - if (Def->getOpcode() != ARM64::ADRP) {
> + if (Def->getOpcode() != AArch64::ADRP) {
> // At this point, Def is ADDXri or LDRXui of the right type of
> // symbol, because we filtered out the uses that were not defined
> // by these kinds of instructions (+ ADRP).
> @@ -728,7 +728,7 @@ static bool isCandidate(const MachineIns
> // - top is ADRP.
> // - check the simple chain property: each intermediate node must
> // dominate the next one.
> - if (Def->getOpcode() == ARM64::ADRP)
> + if (Def->getOpcode() == AArch64::ADRP)
> return MDT->dominates(Def, Instr);
> return false;
> }
> @@ -736,22 +736,22 @@ static bool isCandidate(const MachineIns
> static bool registerADRCandidate(const MachineInstr &Use,
> const InstrToInstrs &UseToDefs,
> const InstrToInstrs *DefsPerColorToUses,
> - ARM64FunctionInfo &ARM64FI,
> + AArch64FunctionInfo &AArch64FI,
> SetOfMachineInstr *InvolvedInLOHs,
> const MapRegToId &RegToId) {
> // Look for opportunities to turn ADRP -> ADD or
> // ADRP -> LDR GOTPAGEOFF into ADR.
> // If ADRP has more than one use, give up.
> - if (Use.getOpcode() != ARM64::ADDXri &&
> - (Use.getOpcode() != ARM64::LDRXui ||
> - !(Use.getOperand(2).getTargetFlags() & ARM64II::MO_GOT)))
> + if (Use.getOpcode() != AArch64::ADDXri &&
> + (Use.getOpcode() != AArch64::LDRXui ||
> + !(Use.getOperand(2).getTargetFlags() & AArch64II::MO_GOT)))
> return false;
> InstrToInstrs::const_iterator It = UseToDefs.find(&Use);
> // The map may contain garbage that we need to ignore.
> if (It == UseToDefs.end() || It->second.empty())
> return false;
> const MachineInstr &Def = **It->second.begin();
> - if (Def.getOpcode() != ARM64::ADRP)
> + if (Def.getOpcode() != AArch64::ADRP)
> return false;
> // Check the number of users of ADRP.
> const SetOfMachineInstr *Users =
> @@ -772,7 +772,7 @@ static bool registerADRCandidate(const M
> Args.push_back(&Def);
> Args.push_back(&Use);
>
> - ARM64FI.addLOHDirective(Use.getOpcode() == ARM64::ADDXri ? MCLOH_AdrpAdd
> + AArch64FI.addLOHDirective(Use.getOpcode() == AArch64::ADDXri ? MCLOH_AdrpAdd
> : MCLOH_AdrpLdrGot,
> Args);
> return true;
> @@ -782,7 +782,7 @@ static bool registerADRCandidate(const M
> /// opportunities of LOH non-ADRP-related
> static void computeOthers(const InstrToInstrs &UseToDefs,
> const InstrToInstrs *DefsPerColorToUses,
> - ARM64FunctionInfo &ARM64FI, const MapRegToId &RegToId,
> + AArch64FunctionInfo &AArch64FI, const MapRegToId &RegToId,
> const MachineDominatorTree *MDT) {
> SetOfMachineInstr *InvolvedInLOHs = nullptr;
> #ifdef DEBUG
> @@ -839,7 +839,7 @@ static void computeOthers(const InstrToI
> const MachineInstr *L1 = Def;
> const MachineInstr *L2 = nullptr;
> unsigned ImmediateDefOpc = Def->getOpcode();
> - if (Def->getOpcode() != ARM64::ADRP) {
> + if (Def->getOpcode() != AArch64::ADRP) {
> // Check the number of users of this node.
> const SetOfMachineInstr *Users =
> getUses(DefsPerColorToUses,
> @@ -899,10 +899,10 @@ static void computeOthers(const InstrToI
> continue;
> }
>
> - bool IsL2Add = (ImmediateDefOpc == ARM64::ADDXri);
> + bool IsL2Add = (ImmediateDefOpc == AArch64::ADDXri);
> // If the chain is three instructions long and ldr is the second element,
> // then this ldr must load from the GOT, otherwise this is not a correct chain.
> - if (L2 && !IsL2Add && L2->getOperand(2).getTargetFlags() != ARM64II::MO_GOT)
> + if (L2 && !IsL2Add && L2->getOperand(2).getTargetFlags() != AArch64II::MO_GOT)
> continue;
> SmallVector<const MachineInstr *, 3> Args;
> MCLOHType Kind;
> @@ -944,18 +944,18 @@ static void computeOthers(const InstrToI
> #ifdef DEBUG
> // get the immediate of the load
> if (Candidate->getOperand(2).getImm() == 0)
> - if (ImmediateDefOpc == ARM64::ADDXri)
> + if (ImmediateDefOpc == AArch64::ADDXri)
> ++NumADDToLDR;
> else
> ++NumLDRToLDR;
> - else if (ImmediateDefOpc == ARM64::ADDXri)
> + else if (ImmediateDefOpc == AArch64::ADDXri)
> ++NumADDToLDRWithImm;
> else
> ++NumLDRToLDRWithImm;
> #endif // DEBUG
> }
> } else {
> - if (ImmediateDefOpc == ARM64::ADRP)
> + if (ImmediateDefOpc == AArch64::ADRP)
> continue;
> else {
>
> @@ -978,23 +978,23 @@ static void computeOthers(const InstrToI
> #ifdef DEBUG
> // get the immediate of the store
> if (Candidate->getOperand(2).getImm() == 0)
> - if (ImmediateDefOpc == ARM64::ADDXri)
> + if (ImmediateDefOpc == AArch64::ADDXri)
> ++NumADDToSTR;
> else
> ++NumLDRToSTR;
> - else if (ImmediateDefOpc == ARM64::ADDXri)
> + else if (ImmediateDefOpc == AArch64::ADDXri)
> ++NumADDToSTRWithImm;
> else
> ++NumLDRToSTRWithImm;
> #endif // DEBUG
> }
> }
> - ARM64FI.addLOHDirective(Kind, Args);
> + AArch64FI.addLOHDirective(Kind, Args);
> }
>
> // Now, we grabbed all the big patterns, check ADR opportunities.
> for (const MachineInstr *Candidate : PotentialADROpportunities)
> - registerADRCandidate(*Candidate, UseToDefs, DefsPerColorToUses, ARM64FI,
> + registerADRCandidate(*Candidate, UseToDefs, DefsPerColorToUses, AArch64FI,
> InvolvedInLOHs, RegToId);
> }
>
> @@ -1041,15 +1041,15 @@ static void collectInvolvedReg(MachineFu
> }
> }
>
> -bool ARM64CollectLOH::runOnMachineFunction(MachineFunction &MF) {
> +bool AArch64CollectLOH::runOnMachineFunction(MachineFunction &MF) {
> const TargetMachine &TM = MF.getTarget();
> const TargetRegisterInfo *TRI = TM.getRegisterInfo();
> const MachineDominatorTree *MDT = &getAnalysis<MachineDominatorTree>();
>
> MapRegToId RegToId;
> MapIdToReg IdToReg;
> - ARM64FunctionInfo *ARM64FI = MF.getInfo<ARM64FunctionInfo>();
> - assert(ARM64FI && "No MachineFunctionInfo for this function!");
> + AArch64FunctionInfo *AArch64FI = MF.getInfo<AArch64FunctionInfo>();
> + assert(AArch64FI && "No MachineFunctionInfo for this function!");
>
> DEBUG(dbgs() << "Looking for LOH in " << MF.getName() << '\n');
>
> @@ -1059,11 +1059,11 @@ bool ARM64CollectLOH::runOnMachineFuncti
>
> MachineInstr *DummyOp = nullptr;
> if (BasicBlockScopeOnly) {
> - const ARM64InstrInfo *TII =
> - static_cast<const ARM64InstrInfo *>(TM.getInstrInfo());
> + const AArch64InstrInfo *TII =
> + static_cast<const AArch64InstrInfo *>(TM.getInstrInfo());
> // For local analysis, create a dummy operation to record uses that are not
> // local.
> - DummyOp = MF.CreateMachineInstr(TII->get(ARM64::COPY), DebugLoc());
> + DummyOp = MF.CreateMachineInstr(TII->get(AArch64::COPY), DebugLoc());
> }
>
> unsigned NbReg = RegToId.size();
> @@ -1084,7 +1084,7 @@ bool ARM64CollectLOH::runOnMachineFuncti
> reachedUsesToDefs(ADRPToReachingDefs, ColorOpToReachedUses, RegToId, true);
>
> // Compute LOH for ADRP.
> - computeADRP(ADRPToReachingDefs, *ARM64FI, MDT);
> + computeADRP(ADRPToReachingDefs, *AArch64FI, MDT);
> delete[] ColorOpToReachedUses;
>
> // Continue with general ADRP -> ADD/LDR -> LDR/STR pattern.
> @@ -1100,7 +1100,7 @@ bool ARM64CollectLOH::runOnMachineFuncti
> reachedUsesToDefs(UsesToReachingDefs, ColorOpToReachedUses, RegToId, false);
>
> // Compute other than AdrpAdrp LOH.
> - computeOthers(UsesToReachingDefs, ColorOpToReachedUses, *ARM64FI, RegToId,
> + computeOthers(UsesToReachingDefs, ColorOpToReachedUses, *AArch64FI, RegToId,
> MDT);
> delete[] ColorOpToReachedUses;
>
> @@ -1110,8 +1110,8 @@ bool ARM64CollectLOH::runOnMachineFuncti
> return Modified;
> }
>
> -/// createARM64CollectLOHPass - returns an instance of the Statistic for
> +/// createAArch64CollectLOHPass - returns an instance of the Statistic for
> /// linker optimization pass.
> -FunctionPass *llvm::createARM64CollectLOHPass() {
> - return new ARM64CollectLOH();
> +FunctionPass *llvm::createAArch64CollectLOHPass() {
> + return new AArch64CollectLOH();
> }
>
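Quick refresher on what this pass is actually recording, since the rename touches every opcode in it: it proves ADRP-rooted chains (ADRP->ADRP, ADRP->ADD, ADRP->LDR-from-GOT, plus longer load/store-terminated forms) and attaches an LOH directive naming the instructions involved. A toy classifier over opcode names, ignoring the reaching-def and dominance checks that do the real work; the three directive names are the ones visible in the diff above, everything else here is illustrative:

#include <iostream>
#include <string>
#include <vector>

static std::string classify(const std::vector<std::string> &Chain) {
  // Only the simplest two-instruction chains are modelled here.
  if (Chain.size() == 2 && Chain[0] == "adrp" && Chain[1] == "adrp")
    return "AdrpAdrp";
  if (Chain.size() == 2 && Chain[0] == "adrp" && Chain[1] == "add")
    return "AdrpAdd";
  if (Chain.size() == 2 && Chain[0] == "adrp" && Chain[1] == "ldr-got")
    return "AdrpLdrGot";
  return "no LOH"; // the real pass also handles chains ending in loads/stores
}

int main() {
  std::cout << classify({"adrp", "add"}) << '\n';     // AdrpAdd
  std::cout << classify({"adrp", "ldr-got"}) << '\n'; // AdrpLdrGot
  std::cout << classify({"adrp", "str"}) << '\n';     // no LOH (not modelled)
}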
> Copied: llvm/trunk/lib/Target/AArch64/AArch64ConditionalCompares.cpp (from r209576, llvm/trunk/lib/Target/ARM64/ARM64ConditionalCompares.cpp)
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/AArch64/AArch64ConditionalCompares.cpp?p2=llvm/trunk/lib/Target/AArch64/AArch64ConditionalCompares.cpp&p1=llvm/trunk/lib/Target/ARM64/ARM64ConditionalCompares.cpp&r1=209576&r2=209577&rev=209577&view=diff
> ==============================================================================
> --- llvm/trunk/lib/Target/ARM64/ARM64ConditionalCompares.cpp (original)
> +++ llvm/trunk/lib/Target/AArch64/AArch64ConditionalCompares.cpp Sat May 24 07:50:23 2014
> @@ -1,4 +1,4 @@
> -//===-- ARM64ConditionalCompares.cpp --- CCMP formation for ARM64 ---------===//
> +//===-- AArch64ConditionalCompares.cpp --- CCMP formation for AArch64 -----===//
> //
> // The LLVM Compiler Infrastructure
> //
> @@ -7,7 +7,7 @@
> //
> //===----------------------------------------------------------------------===//
> //
> -// This file implements the ARM64ConditionalCompares pass which reduces
> +// This file implements the AArch64ConditionalCompares pass which reduces
> // branching and code size by using the conditional compare instructions CCMP,
> // CCMN, and FCMP.
> //
> @@ -17,7 +17,7 @@
> //
> //===----------------------------------------------------------------------===//
>
> -#include "ARM64.h"
> +#include "AArch64.h"
> #include "llvm/ADT/BitVector.h"
> #include "llvm/ADT/DepthFirstIterator.h"
> #include "llvm/ADT/SetVector.h"
> @@ -42,16 +42,16 @@
>
> using namespace llvm;
>
> -#define DEBUG_TYPE "arm64-ccmp"
> +#define DEBUG_TYPE "aarch64-ccmp"
>
> // Absolute maximum number of instructions allowed per speculated block.
> // This bypasses all other heuristics, so it should be set fairly high.
> static cl::opt<unsigned> BlockInstrLimit(
> - "arm64-ccmp-limit", cl::init(30), cl::Hidden,
> + "aarch64-ccmp-limit", cl::init(30), cl::Hidden,
> cl::desc("Maximum number of instructions per speculated block."));
>
> // Stress testing mode - disable heuristics.
> -static cl::opt<bool> Stress("arm64-stress-ccmp", cl::Hidden,
> +static cl::opt<bool> Stress("aarch64-stress-ccmp", cl::Hidden,
> cl::desc("Turn all knobs to 11"));
>
> STATISTIC(NumConsidered, "Number of ccmps considered");
> @@ -98,7 +98,7 @@ STATISTIC(NumCompBranches, "Number of cb
> //
> // The cmp-conversion turns the compare instruction in CmpBB into a conditional
> // compare, and merges CmpBB into Head, speculatively executing its
> -// instructions. The ARM64 conditional compare instructions have an immediate
> +// instructions. The AArch64 conditional compare instructions have an immediate
> // operand that specifies the NZCV flag values when the condition is false and
> // the compare isn't executed. This makes it possible to chain compares with
> // different condition codes.
> @@ -162,13 +162,13 @@ private:
> SmallVector<MachineOperand, 4> HeadCond;
>
> /// The condition code that makes Head branch to CmpBB.
> - ARM64CC::CondCode HeadCmpBBCC;
> + AArch64CC::CondCode HeadCmpBBCC;
>
> /// The branch condition in CmpBB.
> SmallVector<MachineOperand, 4> CmpBBCond;
>
> /// The condition code that makes CmpBB branch to Tail.
> - ARM64CC::CondCode CmpBBTailCC;
> + AArch64CC::CondCode CmpBBTailCC;
>
> /// Check if the Tail PHIs are trivially convertible.
> bool trivialTailPHIs();
> @@ -253,11 +253,11 @@ void SSACCmpConv::updateTailPHIs() {
> }
> }
>
> -// This pass runs before the ARM64DeadRegisterDefinitions pass, so compares are
> -// still writing virtual registers without any uses.
> +// This pass runs before the AArch64DeadRegisterDefinitions pass, so compares
> +// are still writing virtual registers without any uses.
> bool SSACCmpConv::isDeadDef(unsigned DstReg) {
> // Writes to the zero register are dead.
> - if (DstReg == ARM64::WZR || DstReg == ARM64::XZR)
> + if (DstReg == AArch64::WZR || DstReg == AArch64::XZR)
> return true;
> if (!TargetRegisterInfo::isVirtualRegister(DstReg))
> return false;
> @@ -269,11 +269,11 @@ bool SSACCmpConv::isDeadDef(unsigned Dst
> // Parse a condition code returned by AnalyzeBranch, and compute the CondCode
> // corresponding to TBB.
> // Return true if the condition code could be extracted.
> -static bool parseCond(ArrayRef<MachineOperand> Cond, ARM64CC::CondCode &CC) {
> +static bool parseCond(ArrayRef<MachineOperand> Cond, AArch64CC::CondCode &CC) {
> // A normal br.cond simply has the condition code.
> if (Cond[0].getImm() != -1) {
> assert(Cond.size() == 1 && "Unknown Cond array format");
> - CC = (ARM64CC::CondCode)(int)Cond[0].getImm();
> + CC = (AArch64CC::CondCode)(int)Cond[0].getImm();
> return true;
> }
> // For tbz and cbz instruction, the opcode is next.
> @@ -282,15 +282,15 @@ static bool parseCond(ArrayRef<MachineOp
> // This includes tbz / tbnz branches which can't be converted to
> // ccmp + br.cond.
> return false;
> - case ARM64::CBZW:
> - case ARM64::CBZX:
> + case AArch64::CBZW:
> + case AArch64::CBZX:
> assert(Cond.size() == 3 && "Unknown Cond array format");
> - CC = ARM64CC::EQ;
> + CC = AArch64CC::EQ;
> return true;
> - case ARM64::CBNZW:
> - case ARM64::CBNZX:
> + case AArch64::CBNZW:
> + case AArch64::CBNZX:
> assert(Cond.size() == 3 && "Unknown Cond array format");
> - CC = ARM64CC::NE;
> + CC = AArch64CC::NE;
> return true;
> }
> }
> @@ -300,12 +300,12 @@ MachineInstr *SSACCmpConv::findConvertib
> if (I == MBB->end())
> return nullptr;
> // The terminator must be controlled by the flags.
> - if (!I->readsRegister(ARM64::NZCV)) {
> + if (!I->readsRegister(AArch64::NZCV)) {
> switch (I->getOpcode()) {
> - case ARM64::CBZW:
> - case ARM64::CBZX:
> - case ARM64::CBNZW:
> - case ARM64::CBNZX:
> + case AArch64::CBZW:
> + case AArch64::CBZX:
> + case AArch64::CBNZW:
> + case AArch64::CBNZX:
> // These can be converted into a ccmp against #0.
> return I;
> }
> @@ -320,11 +320,11 @@ MachineInstr *SSACCmpConv::findConvertib
> assert(!I->isTerminator() && "Spurious terminator");
> switch (I->getOpcode()) {
> // cmp is an alias for subs with a dead destination register.
> - case ARM64::SUBSWri:
> - case ARM64::SUBSXri:
> + case AArch64::SUBSWri:
> + case AArch64::SUBSXri:
> // cmn is an alias for adds with a dead destination register.
> - case ARM64::ADDSWri:
> - case ARM64::ADDSXri:
> + case AArch64::ADDSWri:
> + case AArch64::ADDSXri:
> // Check that the immediate operand is within range, ccmp wants a uimm5.
> // Rd = SUBSri Rn, imm, shift
> if (I->getOperand(3).getImm() || !isUInt<5>(I->getOperand(2).getImm())) {
> @@ -333,25 +333,25 @@ MachineInstr *SSACCmpConv::findConvertib
> return nullptr;
> }
> // Fall through.
> - case ARM64::SUBSWrr:
> - case ARM64::SUBSXrr:
> - case ARM64::ADDSWrr:
> - case ARM64::ADDSXrr:
> + case AArch64::SUBSWrr:
> + case AArch64::SUBSXrr:
> + case AArch64::ADDSWrr:
> + case AArch64::ADDSXrr:
> if (isDeadDef(I->getOperand(0).getReg()))
> return I;
> DEBUG(dbgs() << "Can't convert compare with live destination: " << *I);
> ++NumLiveDstRejs;
> return nullptr;
> - case ARM64::FCMPSrr:
> - case ARM64::FCMPDrr:
> - case ARM64::FCMPESrr:
> - case ARM64::FCMPEDrr:
> + case AArch64::FCMPSrr:
> + case AArch64::FCMPDrr:
> + case AArch64::FCMPESrr:
> + case AArch64::FCMPEDrr:
> return I;
> }
>
> // Check for flag reads and clobbers.
> MIOperands::PhysRegInfo PRI =
> - MIOperands(I).analyzePhysReg(ARM64::NZCV, TRI);
> + MIOperands(I).analyzePhysReg(AArch64::NZCV, TRI);
>
> if (PRI.Reads) {
> // The ccmp doesn't produce exactly the same flags as the original
> @@ -422,7 +422,7 @@ bool SSACCmpConv::canSpeculateInstrs(Mac
> }
>
> // Only CmpMI is allowed to clobber the flags.
> - if (&I != CmpMI && I.modifiesRegister(ARM64::NZCV, TRI)) {
> + if (&I != CmpMI && I.modifiesRegister(AArch64::NZCV, TRI)) {
> DEBUG(dbgs() << "Clobbers flags: " << I);
> return false;
> }
> @@ -519,7 +519,7 @@ bool SSACCmpConv::canConvert(MachineBasi
> // Make sure the branch direction is right.
> if (TBB != CmpBB) {
> assert(TBB == Tail && "Unexpected TBB");
> - HeadCmpBBCC = ARM64CC::getInvertedCondCode(HeadCmpBBCC);
> + HeadCmpBBCC = AArch64CC::getInvertedCondCode(HeadCmpBBCC);
> }
>
> CmpBBCond.clear();
> @@ -543,10 +543,10 @@ bool SSACCmpConv::canConvert(MachineBasi
> }
>
> if (TBB != Tail)
> - CmpBBTailCC = ARM64CC::getInvertedCondCode(CmpBBTailCC);
> + CmpBBTailCC = AArch64CC::getInvertedCondCode(CmpBBTailCC);
>
> - DEBUG(dbgs() << "Head->CmpBB on " << ARM64CC::getCondCodeName(HeadCmpBBCC)
> - << ", CmpBB->Tail on " << ARM64CC::getCondCodeName(CmpBBTailCC)
> + DEBUG(dbgs() << "Head->CmpBB on " << AArch64CC::getCondCodeName(HeadCmpBBCC)
> + << ", CmpBB->Tail on " << AArch64CC::getCondCodeName(CmpBBTailCC)
> << '\n');
>
> CmpMI = findConvertibleCompare(CmpBB);
> @@ -579,13 +579,13 @@ void SSACCmpConv::convert(SmallVectorImp
> ++NumCompBranches;
> unsigned Opc = 0;
> switch (HeadCond[1].getImm()) {
> - case ARM64::CBZW:
> - case ARM64::CBNZW:
> - Opc = ARM64::SUBSWri;
> + case AArch64::CBZW:
> + case AArch64::CBNZW:
> + Opc = AArch64::SUBSWri;
> break;
> - case ARM64::CBZX:
> - case ARM64::CBNZX:
> - Opc = ARM64::SUBSXri;
> + case AArch64::CBZX:
> + case AArch64::CBNZX:
> + Opc = AArch64::SUBSXri;
> break;
> default:
> llvm_unreachable("Cannot convert Head branch");
> @@ -615,27 +615,27 @@ void SSACCmpConv::convert(SmallVectorImp
> switch (CmpMI->getOpcode()) {
> default:
> llvm_unreachable("Unknown compare opcode");
> - case ARM64::SUBSWri: Opc = ARM64::CCMPWi; break;
> - case ARM64::SUBSWrr: Opc = ARM64::CCMPWr; break;
> - case ARM64::SUBSXri: Opc = ARM64::CCMPXi; break;
> - case ARM64::SUBSXrr: Opc = ARM64::CCMPXr; break;
> - case ARM64::ADDSWri: Opc = ARM64::CCMNWi; break;
> - case ARM64::ADDSWrr: Opc = ARM64::CCMNWr; break;
> - case ARM64::ADDSXri: Opc = ARM64::CCMNXi; break;
> - case ARM64::ADDSXrr: Opc = ARM64::CCMNXr; break;
> - case ARM64::FCMPSrr: Opc = ARM64::FCCMPSrr; FirstOp = 0; break;
> - case ARM64::FCMPDrr: Opc = ARM64::FCCMPDrr; FirstOp = 0; break;
> - case ARM64::FCMPESrr: Opc = ARM64::FCCMPESrr; FirstOp = 0; break;
> - case ARM64::FCMPEDrr: Opc = ARM64::FCCMPEDrr; FirstOp = 0; break;
> - case ARM64::CBZW:
> - case ARM64::CBNZW:
> - Opc = ARM64::CCMPWi;
> + case AArch64::SUBSWri: Opc = AArch64::CCMPWi; break;
> + case AArch64::SUBSWrr: Opc = AArch64::CCMPWr; break;
> + case AArch64::SUBSXri: Opc = AArch64::CCMPXi; break;
> + case AArch64::SUBSXrr: Opc = AArch64::CCMPXr; break;
> + case AArch64::ADDSWri: Opc = AArch64::CCMNWi; break;
> + case AArch64::ADDSWrr: Opc = AArch64::CCMNWr; break;
> + case AArch64::ADDSXri: Opc = AArch64::CCMNXi; break;
> + case AArch64::ADDSXrr: Opc = AArch64::CCMNXr; break;
> + case AArch64::FCMPSrr: Opc = AArch64::FCCMPSrr; FirstOp = 0; break;
> + case AArch64::FCMPDrr: Opc = AArch64::FCCMPDrr; FirstOp = 0; break;
> + case AArch64::FCMPESrr: Opc = AArch64::FCCMPESrr; FirstOp = 0; break;
> + case AArch64::FCMPEDrr: Opc = AArch64::FCCMPEDrr; FirstOp = 0; break;
> + case AArch64::CBZW:
> + case AArch64::CBNZW:
> + Opc = AArch64::CCMPWi;
> FirstOp = 0;
> isZBranch = true;
> break;
> - case ARM64::CBZX:
> - case ARM64::CBNZX:
> - Opc = ARM64::CCMPXi;
> + case AArch64::CBZX:
> + case AArch64::CBNZX:
> + Opc = AArch64::CCMPXi;
> FirstOp = 0;
> isZBranch = true;
> break;
> @@ -646,7 +646,7 @@ void SSACCmpConv::convert(SmallVectorImp
> // The NZCV immediate operand should provide flags for the case where Head
> // would have branched to Tail. These flags should cause the new Head
> // terminator to branch to tail.
> - unsigned NZCV = ARM64CC::getNZCVToSatisfyCondCode(CmpBBTailCC);
> + unsigned NZCV = AArch64CC::getNZCVToSatisfyCondCode(CmpBBTailCC);
> const MCInstrDesc &MCID = TII->get(Opc);
> MRI->constrainRegClass(CmpMI->getOperand(FirstOp).getReg(),
> TII->getRegClass(MCID, 0, TRI, *MF));
> @@ -665,10 +665,10 @@ void SSACCmpConv::convert(SmallVectorImp
> // If CmpMI was a terminator, we need a new conditional branch to replace it.
> // This now becomes a Head terminator.
> if (isZBranch) {
> - bool isNZ = CmpMI->getOpcode() == ARM64::CBNZW ||
> - CmpMI->getOpcode() == ARM64::CBNZX;
> - BuildMI(*Head, CmpMI, CmpMI->getDebugLoc(), TII->get(ARM64::Bcc))
> - .addImm(isNZ ? ARM64CC::NE : ARM64CC::EQ)
> + bool isNZ = CmpMI->getOpcode() == AArch64::CBNZW ||
> + CmpMI->getOpcode() == AArch64::CBNZX;
> + BuildMI(*Head, CmpMI, CmpMI->getDebugLoc(), TII->get(AArch64::Bcc))
> + .addImm(isNZ ? AArch64CC::NE : AArch64CC::EQ)
> .addOperand(CmpMI->getOperand(1)); // Branch target.
> }
> CmpMI->eraseFromParent();
> @@ -687,10 +687,10 @@ int SSACCmpConv::expectedCodeSizeDelta()
> // plus a branch instruction.
> if (HeadCond[0].getImm() == -1) {
> switch (HeadCond[1].getImm()) {
> - case ARM64::CBZW:
> - case ARM64::CBNZW:
> - case ARM64::CBZX:
> - case ARM64::CBNZX:
> + case AArch64::CBZW:
> + case AArch64::CBNZW:
> + case AArch64::CBZX:
> + case AArch64::CBNZX:
> // Therefore delta += 1
> delta = 1;
> break;
> @@ -706,21 +706,21 @@ int SSACCmpConv::expectedCodeSizeDelta()
> default:
> --delta;
> break;
> - case ARM64::CBZW:
> - case ARM64::CBNZW:
> - case ARM64::CBZX:
> - case ARM64::CBNZX:
> + case AArch64::CBZW:
> + case AArch64::CBNZW:
> + case AArch64::CBZX:
> + case AArch64::CBNZX:
> break;
> }
> return delta;
> }
>
> //===----------------------------------------------------------------------===//
> -// ARM64ConditionalCompares Pass
> +// AArch64ConditionalCompares Pass
> //===----------------------------------------------------------------------===//
>
> namespace {
> -class ARM64ConditionalCompares : public MachineFunctionPass {
> +class AArch64ConditionalCompares : public MachineFunctionPass {
> const TargetInstrInfo *TII;
> const TargetRegisterInfo *TRI;
> const MCSchedModel *SchedModel;
> @@ -735,11 +735,11 @@ class ARM64ConditionalCompares : public
>
> public:
> static char ID;
> - ARM64ConditionalCompares() : MachineFunctionPass(ID) {}
> + AArch64ConditionalCompares() : MachineFunctionPass(ID) {}
> void getAnalysisUsage(AnalysisUsage &AU) const override;
> bool runOnMachineFunction(MachineFunction &MF) override;
> const char *getPassName() const override {
> - return "ARM64 Conditional Compares";
> + return "AArch64 Conditional Compares";
> }
>
> private:
> @@ -751,25 +751,25 @@ private:
> };
> } // end anonymous namespace
>
> -char ARM64ConditionalCompares::ID = 0;
> +char AArch64ConditionalCompares::ID = 0;
>
> namespace llvm {
> -void initializeARM64ConditionalComparesPass(PassRegistry &);
> +void initializeAArch64ConditionalComparesPass(PassRegistry &);
> }
>
> -INITIALIZE_PASS_BEGIN(ARM64ConditionalCompares, "arm64-ccmp", "ARM64 CCMP Pass",
> - false, false)
> +INITIALIZE_PASS_BEGIN(AArch64ConditionalCompares, "aarch64-ccmp",
> + "AArch64 CCMP Pass", false, false)
> INITIALIZE_PASS_DEPENDENCY(MachineBranchProbabilityInfo)
> INITIALIZE_PASS_DEPENDENCY(MachineDominatorTree)
> INITIALIZE_PASS_DEPENDENCY(MachineTraceMetrics)
> -INITIALIZE_PASS_END(ARM64ConditionalCompares, "arm64-ccmp", "ARM64 CCMP Pass",
> - false, false)
> +INITIALIZE_PASS_END(AArch64ConditionalCompares, "aarch64-ccmp",
> + "AArch64 CCMP Pass", false, false)
>
> -FunctionPass *llvm::createARM64ConditionalCompares() {
> - return new ARM64ConditionalCompares();
> +FunctionPass *llvm::createAArch64ConditionalCompares() {
> + return new AArch64ConditionalCompares();
> }
>
> -void ARM64ConditionalCompares::getAnalysisUsage(AnalysisUsage &AU) const {
> +void AArch64ConditionalCompares::getAnalysisUsage(AnalysisUsage &AU) const {
> AU.addRequired<MachineBranchProbabilityInfo>();
> AU.addRequired<MachineDominatorTree>();
> AU.addPreserved<MachineDominatorTree>();
> @@ -781,8 +781,8 @@ void ARM64ConditionalCompares::getAnalys
> }
>
> /// Update the dominator tree after if-conversion erased some blocks.
> -void
> -ARM64ConditionalCompares::updateDomTree(ArrayRef<MachineBasicBlock *> Removed) {
> +void AArch64ConditionalCompares::updateDomTree(
> + ArrayRef<MachineBasicBlock *> Removed) {
> // convert() removes CmpBB which was previously dominated by Head.
> // CmpBB children should be transferred to Head.
> MachineDomTreeNode *HeadNode = DomTree->getNode(CmpConv.Head);
> @@ -798,7 +798,7 @@ ARM64ConditionalCompares::updateDomTree(
>
> /// Update LoopInfo after if-conversion.
> void
> -ARM64ConditionalCompares::updateLoops(ArrayRef<MachineBasicBlock *> Removed) {
> +AArch64ConditionalCompares::updateLoops(ArrayRef<MachineBasicBlock *> Removed) {
> if (!Loops)
> return;
> for (unsigned i = 0, e = Removed.size(); i != e; ++i)
> @@ -806,7 +806,7 @@ ARM64ConditionalCompares::updateLoops(Ar
> }
>
> /// Invalidate MachineTraceMetrics before if-conversion.
> -void ARM64ConditionalCompares::invalidateTraces() {
> +void AArch64ConditionalCompares::invalidateTraces() {
> Traces->invalidate(CmpConv.Head);
> Traces->invalidate(CmpConv.CmpBB);
> }
> @@ -814,7 +814,7 @@ void ARM64ConditionalCompares::invalidat
> /// Apply cost model and heuristics to the if-conversion in IfConv.
> /// Return true if the conversion is a good idea.
> ///
> -bool ARM64ConditionalCompares::shouldConvert() {
> +bool AArch64ConditionalCompares::shouldConvert() {
> // Stress testing mode disables all cost considerations.
> if (Stress)
> return true;
> @@ -875,7 +875,7 @@ bool ARM64ConditionalCompares::shouldCon
> return true;
> }
>
> -bool ARM64ConditionalCompares::tryConvert(MachineBasicBlock *MBB) {
> +bool AArch64ConditionalCompares::tryConvert(MachineBasicBlock *MBB) {
> bool Changed = false;
> while (CmpConv.canConvert(MBB) && shouldConvert()) {
> invalidateTraces();
> @@ -888,8 +888,8 @@ bool ARM64ConditionalCompares::tryConver
> return Changed;
> }
>
> -bool ARM64ConditionalCompares::runOnMachineFunction(MachineFunction &MF) {
> - DEBUG(dbgs() << "********** ARM64 Conditional Compares **********\n"
> +bool AArch64ConditionalCompares::runOnMachineFunction(MachineFunction &MF) {
> + DEBUG(dbgs() << "********** AArch64 Conditional Compares **********\n"
> << "********** Function: " << MF.getName() << '\n');
> TII = MF.getTarget().getInstrInfo();
> TRI = MF.getTarget().getRegisterInfo();
>
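One detail worth keeping in mind while reading the converted ccmp sequences: the NZCV immediate is the flag setting the hardware installs when the ccmp's own condition is false, picked so the final branch still resolves the way Head would have branched to Tail. A tiny standalone sketch of that choice; the bit layout is N=8/Z=4/C=2/V=1, and the values are one valid assignment rather than necessarily what getNZCVToSatisfyCondCode returns:

#include <cstdio>

enum Cond { EQ, NE, GT, LE };

// Pick NZCV bits that make the given condition evaluate to true.
static unsigned nzcvToSatisfy(Cond CC) {
  switch (CC) {
  case EQ: return 4; // EQ holds when Z is set
  case NE: return 0; // NE holds when Z is clear
  case GT: return 0; // GT: Z clear and N == V, so all-zero flags suffice
  case LE: return 4; // LE: Z set (or N != V); setting Z alone is enough
  }
  return 0;
}

int main() {
  const char *Names[] = {"eq", "ne", "gt", "le"};
  for (int C = EQ; C <= LE; ++C)
    std::printf("to make '%s' hold afterwards, use #%u\n", Names[C],
                nzcvToSatisfy(static_cast<Cond>(C)));
}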
> Copied: llvm/trunk/lib/Target/AArch64/AArch64DeadRegisterDefinitionsPass.cpp (from r209576, llvm/trunk/lib/Target/ARM64/ARM64DeadRegisterDefinitionsPass.cpp)
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/AArch64/AArch64DeadRegisterDefinitionsPass.cpp?p2=llvm/trunk/lib/Target/AArch64/AArch64DeadRegisterDefinitionsPass.cpp&p1=llvm/trunk/lib/Target/ARM64/ARM64DeadRegisterDefinitionsPass.cpp&r1=209576&r2=209577&rev=209577&view=diff
> ==============================================================================
> --- llvm/trunk/lib/Target/ARM64/ARM64DeadRegisterDefinitionsPass.cpp (original)
> +++ llvm/trunk/lib/Target/AArch64/AArch64DeadRegisterDefinitionsPass.cpp Sat May 24 07:50:23 2014
> @@ -1,4 +1,4 @@
> -//===-- ARM64DeadRegisterDefinitions.cpp - Replace dead defs w/ zero reg --===//
> +//==-- AArch64DeadRegisterDefinitions.cpp - Replace dead defs w/ zero reg --==//
> //
> // The LLVM Compiler Infrastructure
> //
> @@ -11,8 +11,8 @@
> // hardware's register renamer.
> //===----------------------------------------------------------------------===//
>
> -#include "ARM64.h"
> -#include "ARM64RegisterInfo.h"
> +#include "AArch64.h"
> +#include "AArch64RegisterInfo.h"
> #include "llvm/ADT/Statistic.h"
> #include "llvm/CodeGen/MachineFunctionPass.h"
> #include "llvm/CodeGen/MachineFunction.h"
> @@ -21,12 +21,12 @@
> #include "llvm/Support/raw_ostream.h"
> using namespace llvm;
>
> -#define DEBUG_TYPE "arm64-dead-defs"
> +#define DEBUG_TYPE "aarch64-dead-defs"
>
> STATISTIC(NumDeadDefsReplaced, "Number of dead definitions replaced");
>
> namespace {
> -class ARM64DeadRegisterDefinitions : public MachineFunctionPass {
> +class AArch64DeadRegisterDefinitions : public MachineFunctionPass {
> private:
> const TargetRegisterInfo *TRI;
> bool implicitlyDefinesOverlappingReg(unsigned Reg, const MachineInstr &MI);
> @@ -34,7 +34,7 @@ private:
> bool usesFrameIndex(const MachineInstr &MI);
> public:
> static char ID; // Pass identification, replacement for typeid.
> - explicit ARM64DeadRegisterDefinitions() : MachineFunctionPass(ID) {}
> + explicit AArch64DeadRegisterDefinitions() : MachineFunctionPass(ID) {}
>
> virtual bool runOnMachineFunction(MachineFunction &F) override;
>
> @@ -45,10 +45,10 @@ public:
> MachineFunctionPass::getAnalysisUsage(AU);
> }
> };
> -char ARM64DeadRegisterDefinitions::ID = 0;
> +char AArch64DeadRegisterDefinitions::ID = 0;
> } // end anonymous namespace
>
> -bool ARM64DeadRegisterDefinitions::implicitlyDefinesOverlappingReg(
> +bool AArch64DeadRegisterDefinitions::implicitlyDefinesOverlappingReg(
> unsigned Reg, const MachineInstr &MI) {
> for (const MachineOperand &MO : MI.implicit_operands())
> if (MO.isReg() && MO.isDef())
> @@ -57,15 +57,15 @@ bool ARM64DeadRegisterDefinitions::impli
> return false;
> }
>
> -bool ARM64DeadRegisterDefinitions::usesFrameIndex(const MachineInstr &MI) {
> +bool AArch64DeadRegisterDefinitions::usesFrameIndex(const MachineInstr &MI) {
> for (const MachineOperand &Op : MI.uses())
> if (Op.isFI())
> return true;
> return false;
> }
>
> -bool
> -ARM64DeadRegisterDefinitions::processMachineBasicBlock(MachineBasicBlock &MBB) {
> +bool AArch64DeadRegisterDefinitions::processMachineBasicBlock(
> + MachineBasicBlock &MBB) {
> bool Changed = false;
> for (MachineInstr &MI : MBB) {
> if (usesFrameIndex(MI)) {
> @@ -99,11 +99,11 @@ ARM64DeadRegisterDefinitions::processMac
> default:
> DEBUG(dbgs() << " Ignoring, register is not a GPR.\n");
> continue;
> - case ARM64::GPR32RegClassID:
> - NewReg = ARM64::WZR;
> + case AArch64::GPR32RegClassID:
> + NewReg = AArch64::WZR;
> break;
> - case ARM64::GPR64RegClassID:
> - NewReg = ARM64::XZR;
> + case AArch64::GPR64RegClassID:
> + NewReg = AArch64::XZR;
> break;
> }
> DEBUG(dbgs() << " Replacing with zero register. New:\n ");
> @@ -118,10 +118,10 @@ ARM64DeadRegisterDefinitions::processMac
>
> // Scan the function for instructions that have a dead definition of a
> // register. Replace that register with the zero register when possible.
> -bool ARM64DeadRegisterDefinitions::runOnMachineFunction(MachineFunction &MF) {
> +bool AArch64DeadRegisterDefinitions::runOnMachineFunction(MachineFunction &MF) {
> TRI = MF.getTarget().getRegisterInfo();
> bool Changed = false;
> - DEBUG(dbgs() << "***** ARM64DeadRegisterDefinitions *****\n");
> + DEBUG(dbgs() << "***** AArch64DeadRegisterDefinitions *****\n");
>
> for (auto &MBB : MF)
> if (processMachineBasicBlock(MBB))
> @@ -129,6 +129,6 @@ bool ARM64DeadRegisterDefinitions::runOn
> return Changed;
> }
>
> -FunctionPass *llvm::createARM64DeadRegisterDefinitions() {
> - return new ARM64DeadRegisterDefinitions();
> +FunctionPass *llvm::createAArch64DeadRegisterDefinitions() {
> + return new AArch64DeadRegisterDefinitions();
> }
>
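This one is the simplest pass in the batch: if a GPR def is dead (and the instruction doesn't also read a frame index or implicitly define an overlapping register), retarget the destination to WZR/XZR so the renamer never has to allocate a physical register for it. A stripped-down model of just the rewrite decision, with ad-hoc register names rather than MachineOperands:

#include <iostream>
#include <string>
#include <vector>

struct Def {
  std::string Reg; // e.g. "w8" or "x9"
  bool Dead;       // no later reader of this value
};

static std::string rewrite(const Def &D) {
  if (!D.Dead)
    return D.Reg;
  if (D.Reg[0] == 'w')
    return "wzr"; // 32-bit zero register
  if (D.Reg[0] == 'x')
    return "xzr"; // 64-bit zero register
  return D.Reg;   // not a GPR class this sketch handles
}

int main() {
  std::vector<Def> Defs = {{"w8", true}, {"x9", false}, {"x1", true}};
  for (const Def &D : Defs)
    std::cout << D.Reg << " -> " << rewrite(D) << '\n';
}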
> Copied: llvm/trunk/lib/Target/AArch64/AArch64ExpandPseudoInsts.cpp (from r209576, llvm/trunk/lib/Target/ARM64/ARM64ExpandPseudoInsts.cpp)
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/AArch64/AArch64ExpandPseudoInsts.cpp?p2=llvm/trunk/lib/Target/AArch64/AArch64ExpandPseudoInsts.cpp&p1=llvm/trunk/lib/Target/ARM64/ARM64ExpandPseudoInsts.cpp&r1=209576&r2=209577&rev=209577&view=diff
> ==============================================================================
> --- llvm/trunk/lib/Target/ARM64/ARM64ExpandPseudoInsts.cpp (original)
> +++ llvm/trunk/lib/Target/AArch64/AArch64ExpandPseudoInsts.cpp Sat May 24 07:50:23 2014
> @@ -1,4 +1,4 @@
> -//===-- ARM64ExpandPseudoInsts.cpp - Expand pseudo instructions ---*- C++ -*-=//
> +//==-- AArch64ExpandPseudoInsts.cpp - Expand pseudo instructions --*- C++ -*-=//
> //
> // The LLVM Compiler Infrastructure
> //
> @@ -14,25 +14,25 @@
> //
> //===----------------------------------------------------------------------===//
>
> -#include "MCTargetDesc/ARM64AddressingModes.h"
> -#include "ARM64InstrInfo.h"
> +#include "MCTargetDesc/AArch64AddressingModes.h"
> +#include "AArch64InstrInfo.h"
> #include "llvm/CodeGen/MachineFunctionPass.h"
> #include "llvm/CodeGen/MachineInstrBuilder.h"
> #include "llvm/Support/MathExtras.h"
> using namespace llvm;
>
> namespace {
> -class ARM64ExpandPseudo : public MachineFunctionPass {
> +class AArch64ExpandPseudo : public MachineFunctionPass {
> public:
> static char ID;
> - ARM64ExpandPseudo() : MachineFunctionPass(ID) {}
> + AArch64ExpandPseudo() : MachineFunctionPass(ID) {}
>
> - const ARM64InstrInfo *TII;
> + const AArch64InstrInfo *TII;
>
> bool runOnMachineFunction(MachineFunction &Fn) override;
>
> const char *getPassName() const override {
> - return "ARM64 pseudo instruction expansion pass";
> + return "AArch64 pseudo instruction expansion pass";
> }
>
> private:
> @@ -41,7 +41,7 @@ private:
> bool expandMOVImm(MachineBasicBlock &MBB, MachineBasicBlock::iterator MBBI,
> unsigned BitSize);
> };
> -char ARM64ExpandPseudo::ID = 0;
> +char AArch64ExpandPseudo::ID = 0;
> }
>
> /// \brief Transfer implicit operands on the pseudo instruction to the
> @@ -87,17 +87,17 @@ static uint64_t replicateChunk(uint64_t
> static bool tryOrrMovk(uint64_t UImm, uint64_t OrrImm, MachineInstr &MI,
> MachineBasicBlock &MBB,
> MachineBasicBlock::iterator &MBBI,
> - const ARM64InstrInfo *TII, unsigned ChunkIdx) {
> + const AArch64InstrInfo *TII, unsigned ChunkIdx) {
> assert(ChunkIdx < 4 && "Out of range chunk index specified!");
> const unsigned ShiftAmt = ChunkIdx * 16;
>
> uint64_t Encoding;
> - if (ARM64_AM::processLogicalImmediate(OrrImm, 64, Encoding)) {
> + if (AArch64_AM::processLogicalImmediate(OrrImm, 64, Encoding)) {
> // Create the ORR-immediate instruction.
> MachineInstrBuilder MIB =
> - BuildMI(MBB, MBBI, MI.getDebugLoc(), TII->get(ARM64::ORRXri))
> + BuildMI(MBB, MBBI, MI.getDebugLoc(), TII->get(AArch64::ORRXri))
> .addOperand(MI.getOperand(0))
> - .addReg(ARM64::XZR)
> + .addReg(AArch64::XZR)
> .addImm(Encoding);
>
> // Create the MOVK instruction.
> @@ -105,11 +105,11 @@ static bool tryOrrMovk(uint64_t UImm, ui
> const unsigned DstReg = MI.getOperand(0).getReg();
> const bool DstIsDead = MI.getOperand(0).isDead();
> MachineInstrBuilder MIB1 =
> - BuildMI(MBB, MBBI, MI.getDebugLoc(), TII->get(ARM64::MOVKXi))
> + BuildMI(MBB, MBBI, MI.getDebugLoc(), TII->get(AArch64::MOVKXi))
> .addReg(DstReg, RegState::Define | getDeadRegState(DstIsDead))
> .addReg(DstReg)
> .addImm(Imm16)
> - .addImm(ARM64_AM::getShifterImm(ARM64_AM::LSL, ShiftAmt));
> + .addImm(AArch64_AM::getShifterImm(AArch64_AM::LSL, ShiftAmt));
>
> transferImpOps(MI, MIB, MIB1);
> MI.eraseFromParent();
> @@ -124,7 +124,7 @@ static bool tryOrrMovk(uint64_t UImm, ui
> static bool canUseOrr(uint64_t Chunk, uint64_t &Encoding) {
> Chunk = (Chunk << 48) | (Chunk << 32) | (Chunk << 16) | Chunk;
>
> - return ARM64_AM::processLogicalImmediate(Chunk, 64, Encoding);
> + return AArch64_AM::processLogicalImmediate(Chunk, 64, Encoding);
> }
>
> /// \brief Check for identical 16-bit chunks within the constant and if so
> @@ -138,7 +138,7 @@ static bool canUseOrr(uint64_t Chunk, ui
> static bool tryToreplicateChunks(uint64_t UImm, MachineInstr &MI,
> MachineBasicBlock &MBB,
> MachineBasicBlock::iterator &MBBI,
> - const ARM64InstrInfo *TII) {
> + const AArch64InstrInfo *TII) {
> typedef DenseMap<uint64_t, unsigned> CountMap;
> CountMap Counts;
>
> @@ -162,9 +162,9 @@ static bool tryToreplicateChunks(uint64_
> const bool CountThree = Count == 3;
> // Create the ORR-immediate instruction.
> MachineInstrBuilder MIB =
> - BuildMI(MBB, MBBI, MI.getDebugLoc(), TII->get(ARM64::ORRXri))
> + BuildMI(MBB, MBBI, MI.getDebugLoc(), TII->get(AArch64::ORRXri))
> .addOperand(MI.getOperand(0))
> - .addReg(ARM64::XZR)
> + .addReg(AArch64::XZR)
> .addImm(Encoding);
>
> const unsigned DstReg = MI.getOperand(0).getReg();
> @@ -182,12 +182,12 @@ static bool tryToreplicateChunks(uint64_
>
> // Create the first MOVK instruction.
> MachineInstrBuilder MIB1 =
> - BuildMI(MBB, MBBI, MI.getDebugLoc(), TII->get(ARM64::MOVKXi))
> + BuildMI(MBB, MBBI, MI.getDebugLoc(), TII->get(AArch64::MOVKXi))
> .addReg(DstReg,
> RegState::Define | getDeadRegState(DstIsDead && CountThree))
> .addReg(DstReg)
> .addImm(Imm16)
> - .addImm(ARM64_AM::getShifterImm(ARM64_AM::LSL, ShiftAmt));
> + .addImm(AArch64_AM::getShifterImm(AArch64_AM::LSL, ShiftAmt));
>
> // In case we have three instances the whole constant is now materialized
> // and we can exit.
> @@ -207,11 +207,11 @@ static bool tryToreplicateChunks(uint64_
>
> // Create the second MOVK instruction.
> MachineInstrBuilder MIB2 =
> - BuildMI(MBB, MBBI, MI.getDebugLoc(), TII->get(ARM64::MOVKXi))
> + BuildMI(MBB, MBBI, MI.getDebugLoc(), TII->get(AArch64::MOVKXi))
> .addReg(DstReg, RegState::Define | getDeadRegState(DstIsDead))
> .addReg(DstReg)
> .addImm(Imm16)
> - .addImm(ARM64_AM::getShifterImm(ARM64_AM::LSL, ShiftAmt));
> + .addImm(AArch64_AM::getShifterImm(AArch64_AM::LSL, ShiftAmt));
>
> transferImpOps(MI, MIB, MIB2);
> MI.eraseFromParent();
> @@ -272,7 +272,7 @@ static uint64_t updateImm(uint64_t Imm,
> static bool trySequenceOfOnes(uint64_t UImm, MachineInstr &MI,
> MachineBasicBlock &MBB,
> MachineBasicBlock::iterator &MBBI,
> - const ARM64InstrInfo *TII) {
> + const AArch64InstrInfo *TII) {
> const int NotSet = -1;
> const uint64_t Mask = 0xFFFF;
>
> @@ -343,11 +343,11 @@ static bool trySequenceOfOnes(uint64_t U
>
> // Create the ORR-immediate instruction.
> uint64_t Encoding = 0;
> - ARM64_AM::processLogicalImmediate(OrrImm, 64, Encoding);
> + AArch64_AM::processLogicalImmediate(OrrImm, 64, Encoding);
> MachineInstrBuilder MIB =
> - BuildMI(MBB, MBBI, MI.getDebugLoc(), TII->get(ARM64::ORRXri))
> + BuildMI(MBB, MBBI, MI.getDebugLoc(), TII->get(AArch64::ORRXri))
> .addOperand(MI.getOperand(0))
> - .addReg(ARM64::XZR)
> + .addReg(AArch64::XZR)
> .addImm(Encoding);
>
> const unsigned DstReg = MI.getOperand(0).getReg();
> @@ -356,12 +356,13 @@ static bool trySequenceOfOnes(uint64_t U
> const bool SingleMovk = SecondMovkIdx == NotSet;
> // Create the first MOVK instruction.
> MachineInstrBuilder MIB1 =
> - BuildMI(MBB, MBBI, MI.getDebugLoc(), TII->get(ARM64::MOVKXi))
> + BuildMI(MBB, MBBI, MI.getDebugLoc(), TII->get(AArch64::MOVKXi))
> .addReg(DstReg,
> RegState::Define | getDeadRegState(DstIsDead && SingleMovk))
> .addReg(DstReg)
> .addImm(getChunk(UImm, FirstMovkIdx))
> - .addImm(ARM64_AM::getShifterImm(ARM64_AM::LSL, FirstMovkIdx * 16));
> + .addImm(
> + AArch64_AM::getShifterImm(AArch64_AM::LSL, FirstMovkIdx * 16));
>
> // Early exit in case we only need to emit a single MOVK instruction.
> if (SingleMovk) {
> @@ -372,11 +373,12 @@ static bool trySequenceOfOnes(uint64_t U
>
> // Create the second MOVK instruction.
> MachineInstrBuilder MIB2 =
> - BuildMI(MBB, MBBI, MI.getDebugLoc(), TII->get(ARM64::MOVKXi))
> + BuildMI(MBB, MBBI, MI.getDebugLoc(), TII->get(AArch64::MOVKXi))
> .addReg(DstReg, RegState::Define | getDeadRegState(DstIsDead))
> .addReg(DstReg)
> .addImm(getChunk(UImm, SecondMovkIdx))
> - .addImm(ARM64_AM::getShifterImm(ARM64_AM::LSL, SecondMovkIdx * 16));
> + .addImm(
> + AArch64_AM::getShifterImm(AArch64_AM::LSL, SecondMovkIdx * 16));
>
> transferImpOps(MI, MIB, MIB2);
> MI.eraseFromParent();
> @@ -385,9 +387,9 @@ static bool trySequenceOfOnes(uint64_t U
>
> /// \brief Expand a MOVi32imm or MOVi64imm pseudo instruction to one or more
> /// real move-immediate instructions to synthesize the immediate.
> -bool ARM64ExpandPseudo::expandMOVImm(MachineBasicBlock &MBB,
> - MachineBasicBlock::iterator MBBI,
> - unsigned BitSize) {
> +bool AArch64ExpandPseudo::expandMOVImm(MachineBasicBlock &MBB,
> + MachineBasicBlock::iterator MBBI,
> + unsigned BitSize) {
> MachineInstr &MI = *MBBI;
> uint64_t Imm = MI.getOperand(1).getImm();
> const unsigned Mask = 0xFFFF;
> @@ -395,12 +397,12 @@ bool ARM64ExpandPseudo::expandMOVImm(Mac
> // Try a MOVI instruction (aka ORR-immediate with the zero register).
> uint64_t UImm = Imm << (64 - BitSize) >> (64 - BitSize);
> uint64_t Encoding;
> - if (ARM64_AM::processLogicalImmediate(UImm, BitSize, Encoding)) {
> - unsigned Opc = (BitSize == 32 ? ARM64::ORRWri : ARM64::ORRXri);
> + if (AArch64_AM::processLogicalImmediate(UImm, BitSize, Encoding)) {
> + unsigned Opc = (BitSize == 32 ? AArch64::ORRWri : AArch64::ORRXri);
> MachineInstrBuilder MIB =
> BuildMI(MBB, MBBI, MI.getDebugLoc(), TII->get(Opc))
> .addOperand(MI.getOperand(0))
> - .addReg(BitSize == 32 ? ARM64::WZR : ARM64::XZR)
> + .addReg(BitSize == 32 ? AArch64::WZR : AArch64::XZR)
> .addImm(Encoding);
> transferImpOps(MI, MIB, MIB);
> MI.eraseFromParent();
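
A side note on the ORR fast path above, for anyone who has not internalised the bitmask-immediate encoding that processLogicalImmediate tests for: a constant qualifies when it is a 2/4/8/16/32/64-bit element replicated across the register, and the element itself is some rotation of a contiguous run of ones (all-zeros and all-ones never qualify). A rough standalone sketch of that set, purely my own illustration and not the in-tree helper:

  #include <cstdint>
  #include <cstdio>

  // Very rough check: could V be an AArch64 bitmask ("logical") immediate?
  static bool looksLikeLogicalImm(uint64_t V) {
    if (V == 0 || V == ~0ULL)
      return false;                               // never encodable
    for (unsigned Size = 2; Size <= 64; Size *= 2) {
      uint64_t Mask = (Size == 64) ? ~0ULL : ((1ULL << Size) - 1);
      uint64_t Elt = V & Mask;
      bool Replicates = true;                     // element repeats across 64 bits?
      for (unsigned I = Size; I < 64; I += Size)
        if (((V >> I) & Mask) != Elt)
          Replicates = false;
      if (!Replicates)
        continue;
      for (unsigned R = 0; R < Size; ++R) {       // some rotation is 0...01...1?
        uint64_t Rot = R ? (((Elt >> R) | (Elt << (Size - R))) & Mask) : Elt;
        if (Rot != 0 && ((Rot + 1) & Rot) == 0)
          return true;
      }
    }
    return false;
  }

  int main() {
    std::printf("%d %d\n",
                looksLikeLogicalImm(0x00FF00FF00FF00FFULL),   // 1: single orr
                looksLikeLogicalImm(0x0000000012345678ULL));  // 0: needs movz/movk
  }

Anything that fails this shape falls through to the movz/movk paths below.
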
> @@ -504,9 +506,9 @@ bool ARM64ExpandPseudo::expandMOVImm(Mac
> unsigned FirstOpc;
> if (BitSize == 32) {
> Imm &= (1LL << 32) - 1;
> - FirstOpc = (isNeg ? ARM64::MOVNWi : ARM64::MOVZWi);
> + FirstOpc = (isNeg ? AArch64::MOVNWi : AArch64::MOVZWi);
> } else {
> - FirstOpc = (isNeg ? ARM64::MOVNXi : ARM64::MOVZXi);
> + FirstOpc = (isNeg ? AArch64::MOVNXi : AArch64::MOVZXi);
> }
> unsigned Shift = 0; // LSL amount for high bits with MOVZ/MOVN
> unsigned LastShift = 0; // LSL amount for last MOVK
> @@ -524,7 +526,7 @@ bool ARM64ExpandPseudo::expandMOVImm(Mac
> .addReg(DstReg, RegState::Define |
> getDeadRegState(DstIsDead && Shift == LastShift))
> .addImm(Imm16)
> - .addImm(ARM64_AM::getShifterImm(ARM64_AM::LSL, Shift));
> + .addImm(AArch64_AM::getShifterImm(AArch64_AM::LSL, Shift));
>
> // If a MOVN was used for the high bits of a negative value, flip the rest
> // of the bits back for use with MOVK.
> @@ -538,7 +540,7 @@ bool ARM64ExpandPseudo::expandMOVImm(Mac
> }
>
> MachineInstrBuilder MIB2;
> - unsigned Opc = (BitSize == 32 ? ARM64::MOVKWi : ARM64::MOVKXi);
> + unsigned Opc = (BitSize == 32 ? AArch64::MOVKWi : AArch64::MOVKXi);
> while (Shift != LastShift) {
> Shift -= 16;
> Imm16 = (Imm >> Shift) & Mask;
> @@ -550,7 +552,7 @@ bool ARM64ExpandPseudo::expandMOVImm(Mac
> getDeadRegState(DstIsDead && Shift == LastShift))
> .addReg(DstReg)
> .addImm(Imm16)
> - .addImm(ARM64_AM::getShifterImm(ARM64_AM::LSL, Shift));
> + .addImm(AArch64_AM::getShifterImm(AArch64_AM::LSL, Shift));
> }
>
> transferImpOps(MI, MIB1, MIB2);
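
For readers following along: when the constant is not a logical immediate, it gets built 16 bits at a time, a MOVZ (or MOVN when most chunks are all-ones) for the leading interesting chunk and then a MOVK per remaining chunk, which is what the Shift/LastShift loop above walks through. A small sketch of the sequence this produces for a sample value; my own illustration of the idea rather than the actual emission path, and it only covers the MOVZ flavour:

  #include <cstdint>
  #include <cstdio>

  int main() {
    uint64_t Imm = 0x12345678ABCD0000ULL;
    // Highest non-zero 16-bit chunk first, via MOVZ with an LSL shift...
    int Shift = 48;
    while (Shift > 0 && ((Imm >> Shift) & 0xFFFF) == 0)
      Shift -= 16;
    std::printf("movz x0, #0x%llx, lsl #%d\n",
                (unsigned long long)((Imm >> Shift) & 0xFFFF), Shift);
    // ...then a MOVK for each remaining non-zero chunk below it (all-zero
    // chunks can be skipped because MOVZ already cleared them).
    for (int S = Shift - 16; S >= 0; S -= 16)
      if ((Imm >> S) & 0xFFFF)
        std::printf("movk x0, #0x%llx, lsl #%d\n",
                    (unsigned long long)((Imm >> S) & 0xFFFF), S);
  }

For 0x12345678ABCD0000 that prints one movz plus two movk, which matches the shape the pass should emit here if I'm reading the surrounding code right.
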
> @@ -560,7 +562,7 @@ bool ARM64ExpandPseudo::expandMOVImm(Mac
>
> /// \brief If MBBI references a pseudo instruction that should be expanded here,
> /// do the expansion and return true. Otherwise return false.
> -bool ARM64ExpandPseudo::expandMI(MachineBasicBlock &MBB,
> +bool AArch64ExpandPseudo::expandMI(MachineBasicBlock &MBB,
> MachineBasicBlock::iterator MBBI) {
> MachineInstr &MI = *MBBI;
> unsigned Opcode = MI.getOpcode();
> @@ -568,75 +570,76 @@ bool ARM64ExpandPseudo::expandMI(Machine
> default:
> break;
>
> - case ARM64::ADDWrr:
> - case ARM64::SUBWrr:
> - case ARM64::ADDXrr:
> - case ARM64::SUBXrr:
> - case ARM64::ADDSWrr:
> - case ARM64::SUBSWrr:
> - case ARM64::ADDSXrr:
> - case ARM64::SUBSXrr:
> - case ARM64::ANDWrr:
> - case ARM64::ANDXrr:
> - case ARM64::BICWrr:
> - case ARM64::BICXrr:
> - case ARM64::ANDSWrr:
> - case ARM64::ANDSXrr:
> - case ARM64::BICSWrr:
> - case ARM64::BICSXrr:
> - case ARM64::EONWrr:
> - case ARM64::EONXrr:
> - case ARM64::EORWrr:
> - case ARM64::EORXrr:
> - case ARM64::ORNWrr:
> - case ARM64::ORNXrr:
> - case ARM64::ORRWrr:
> - case ARM64::ORRXrr: {
> + case AArch64::ADDWrr:
> + case AArch64::SUBWrr:
> + case AArch64::ADDXrr:
> + case AArch64::SUBXrr:
> + case AArch64::ADDSWrr:
> + case AArch64::SUBSWrr:
> + case AArch64::ADDSXrr:
> + case AArch64::SUBSXrr:
> + case AArch64::ANDWrr:
> + case AArch64::ANDXrr:
> + case AArch64::BICWrr:
> + case AArch64::BICXrr:
> + case AArch64::ANDSWrr:
> + case AArch64::ANDSXrr:
> + case AArch64::BICSWrr:
> + case AArch64::BICSXrr:
> + case AArch64::EONWrr:
> + case AArch64::EONXrr:
> + case AArch64::EORWrr:
> + case AArch64::EORXrr:
> + case AArch64::ORNWrr:
> + case AArch64::ORNXrr:
> + case AArch64::ORRWrr:
> + case AArch64::ORRXrr: {
> unsigned Opcode;
> switch (MI.getOpcode()) {
> default:
> return false;
> - case ARM64::ADDWrr: Opcode = ARM64::ADDWrs; break;
> - case ARM64::SUBWrr: Opcode = ARM64::SUBWrs; break;
> - case ARM64::ADDXrr: Opcode = ARM64::ADDXrs; break;
> - case ARM64::SUBXrr: Opcode = ARM64::SUBXrs; break;
> - case ARM64::ADDSWrr: Opcode = ARM64::ADDSWrs; break;
> - case ARM64::SUBSWrr: Opcode = ARM64::SUBSWrs; break;
> - case ARM64::ADDSXrr: Opcode = ARM64::ADDSXrs; break;
> - case ARM64::SUBSXrr: Opcode = ARM64::SUBSXrs; break;
> - case ARM64::ANDWrr: Opcode = ARM64::ANDWrs; break;
> - case ARM64::ANDXrr: Opcode = ARM64::ANDXrs; break;
> - case ARM64::BICWrr: Opcode = ARM64::BICWrs; break;
> - case ARM64::BICXrr: Opcode = ARM64::BICXrs; break;
> - case ARM64::ANDSWrr: Opcode = ARM64::ANDSWrs; break;
> - case ARM64::ANDSXrr: Opcode = ARM64::ANDSXrs; break;
> - case ARM64::BICSWrr: Opcode = ARM64::BICSWrs; break;
> - case ARM64::BICSXrr: Opcode = ARM64::BICSXrs; break;
> - case ARM64::EONWrr: Opcode = ARM64::EONWrs; break;
> - case ARM64::EONXrr: Opcode = ARM64::EONXrs; break;
> - case ARM64::EORWrr: Opcode = ARM64::EORWrs; break;
> - case ARM64::EORXrr: Opcode = ARM64::EORXrs; break;
> - case ARM64::ORNWrr: Opcode = ARM64::ORNWrs; break;
> - case ARM64::ORNXrr: Opcode = ARM64::ORNXrs; break;
> - case ARM64::ORRWrr: Opcode = ARM64::ORRWrs; break;
> - case ARM64::ORRXrr: Opcode = ARM64::ORRXrs; break;
> + case AArch64::ADDWrr: Opcode = AArch64::ADDWrs; break;
> + case AArch64::SUBWrr: Opcode = AArch64::SUBWrs; break;
> + case AArch64::ADDXrr: Opcode = AArch64::ADDXrs; break;
> + case AArch64::SUBXrr: Opcode = AArch64::SUBXrs; break;
> + case AArch64::ADDSWrr: Opcode = AArch64::ADDSWrs; break;
> + case AArch64::SUBSWrr: Opcode = AArch64::SUBSWrs; break;
> + case AArch64::ADDSXrr: Opcode = AArch64::ADDSXrs; break;
> + case AArch64::SUBSXrr: Opcode = AArch64::SUBSXrs; break;
> + case AArch64::ANDWrr: Opcode = AArch64::ANDWrs; break;
> + case AArch64::ANDXrr: Opcode = AArch64::ANDXrs; break;
> + case AArch64::BICWrr: Opcode = AArch64::BICWrs; break;
> + case AArch64::BICXrr: Opcode = AArch64::BICXrs; break;
> + case AArch64::ANDSWrr: Opcode = AArch64::ANDSWrs; break;
> + case AArch64::ANDSXrr: Opcode = AArch64::ANDSXrs; break;
> + case AArch64::BICSWrr: Opcode = AArch64::BICSWrs; break;
> + case AArch64::BICSXrr: Opcode = AArch64::BICSXrs; break;
> + case AArch64::EONWrr: Opcode = AArch64::EONWrs; break;
> + case AArch64::EONXrr: Opcode = AArch64::EONXrs; break;
> + case AArch64::EORWrr: Opcode = AArch64::EORWrs; break;
> + case AArch64::EORXrr: Opcode = AArch64::EORXrs; break;
> + case AArch64::ORNWrr: Opcode = AArch64::ORNWrs; break;
> + case AArch64::ORNXrr: Opcode = AArch64::ORNXrs; break;
> + case AArch64::ORRWrr: Opcode = AArch64::ORRWrs; break;
> + case AArch64::ORRXrr: Opcode = AArch64::ORRXrs; break;
> }
> MachineInstrBuilder MIB1 =
> BuildMI(MBB, MBBI, MI.getDebugLoc(), TII->get(Opcode),
> MI.getOperand(0).getReg())
> .addOperand(MI.getOperand(1))
> .addOperand(MI.getOperand(2))
> - .addImm(ARM64_AM::getShifterImm(ARM64_AM::LSL, 0));
> + .addImm(AArch64_AM::getShifterImm(AArch64_AM::LSL, 0));
> transferImpOps(MI, MIB1, MIB1);
> MI.eraseFromParent();
> return true;
> }
>
> - case ARM64::FCVTSHpseudo: {
> + case AArch64::FCVTSHpseudo: {
> MachineOperand Src = MI.getOperand(1);
> Src.setImplicit();
> - unsigned SrcH = TII->getRegisterInfo().getSubReg(Src.getReg(), ARM64::hsub);
> - auto MIB = BuildMI(MBB, MBBI, MI.getDebugLoc(), TII->get(ARM64::FCVTSHr))
> + unsigned SrcH =
> + TII->getRegisterInfo().getSubReg(Src.getReg(), AArch64::hsub);
> + auto MIB = BuildMI(MBB, MBBI, MI.getDebugLoc(), TII->get(AArch64::FCVTSHr))
> .addOperand(MI.getOperand(0))
> .addReg(SrcH, RegState::Undef)
> .addOperand(Src);
> @@ -644,33 +647,34 @@ bool ARM64ExpandPseudo::expandMI(Machine
> MI.eraseFromParent();
> return true;
> }
> - case ARM64::LOADgot: {
> + case AArch64::LOADgot: {
> // Expand into ADRP + LDR.
> unsigned DstReg = MI.getOperand(0).getReg();
> const MachineOperand &MO1 = MI.getOperand(1);
> unsigned Flags = MO1.getTargetFlags();
> MachineInstrBuilder MIB1 =
> - BuildMI(MBB, MBBI, MI.getDebugLoc(), TII->get(ARM64::ADRP), DstReg);
> + BuildMI(MBB, MBBI, MI.getDebugLoc(), TII->get(AArch64::ADRP), DstReg);
> MachineInstrBuilder MIB2 =
> - BuildMI(MBB, MBBI, MI.getDebugLoc(), TII->get(ARM64::LDRXui))
> + BuildMI(MBB, MBBI, MI.getDebugLoc(), TII->get(AArch64::LDRXui))
> .addOperand(MI.getOperand(0))
> .addReg(DstReg);
>
> if (MO1.isGlobal()) {
> - MIB1.addGlobalAddress(MO1.getGlobal(), 0, Flags | ARM64II::MO_PAGE);
> + MIB1.addGlobalAddress(MO1.getGlobal(), 0, Flags | AArch64II::MO_PAGE);
> MIB2.addGlobalAddress(MO1.getGlobal(), 0,
> - Flags | ARM64II::MO_PAGEOFF | ARM64II::MO_NC);
> + Flags | AArch64II::MO_PAGEOFF | AArch64II::MO_NC);
> } else if (MO1.isSymbol()) {
> - MIB1.addExternalSymbol(MO1.getSymbolName(), Flags | ARM64II::MO_PAGE);
> + MIB1.addExternalSymbol(MO1.getSymbolName(), Flags | AArch64II::MO_PAGE);
> MIB2.addExternalSymbol(MO1.getSymbolName(),
> - Flags | ARM64II::MO_PAGEOFF | ARM64II::MO_NC);
> + Flags | AArch64II::MO_PAGEOFF | AArch64II::MO_NC);
> } else {
> assert(MO1.isCPI() &&
> "Only expect globals, externalsymbols, or constant pools");
> MIB1.addConstantPoolIndex(MO1.getIndex(), MO1.getOffset(),
> - Flags | ARM64II::MO_PAGE);
> + Flags | AArch64II::MO_PAGE);
> MIB2.addConstantPoolIndex(MO1.getIndex(), MO1.getOffset(),
> - Flags | ARM64II::MO_PAGEOFF | ARM64II::MO_NC);
> + Flags | AArch64II::MO_PAGEOFF |
> + AArch64II::MO_NC);
> }
>
> transferImpOps(MI, MIB1, MIB2);
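
The LOADgot expansion here, and the MOVaddr* expansion just below it, are both the standard two-instruction ADRP pair: ADRP materialises the 4KiB page of the symbol (the MO_PAGE flag), and the second instruction, LDRXui for a GOT slot or ADDXri for a plain address, supplies the low 12 bits (MO_PAGEOFF; MO_NC marks the relocation as not overflow-checked, as far as I recall). Roughly, the split looks like this:

  #include <cstdint>
  #include <cstdio>

  int main() {
    uint64_t Sym = 0x0000000100403A70ULL;    // made-up symbol address, for show
    uint64_t Page    = Sym & ~0xFFFULL;      // what ADRP leaves in the register
    uint64_t PageOff = Sym &  0xFFFULL;      // what the low-12-bit part adds back
    std::printf("page    = 0x%llx\n", (unsigned long long)Page);
    std::printf("pageoff = 0x%llx\n", (unsigned long long)PageOff);
    std::printf("sum     = 0x%llx\n", (unsigned long long)(Page + PageOff));
  }
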
> @@ -678,20 +682,20 @@ bool ARM64ExpandPseudo::expandMI(Machine
> return true;
> }
>
> - case ARM64::MOVaddr:
> - case ARM64::MOVaddrJT:
> - case ARM64::MOVaddrCP:
> - case ARM64::MOVaddrBA:
> - case ARM64::MOVaddrTLS:
> - case ARM64::MOVaddrEXT: {
> + case AArch64::MOVaddr:
> + case AArch64::MOVaddrJT:
> + case AArch64::MOVaddrCP:
> + case AArch64::MOVaddrBA:
> + case AArch64::MOVaddrTLS:
> + case AArch64::MOVaddrEXT: {
> // Expand into ADRP + ADD.
> unsigned DstReg = MI.getOperand(0).getReg();
> MachineInstrBuilder MIB1 =
> - BuildMI(MBB, MBBI, MI.getDebugLoc(), TII->get(ARM64::ADRP), DstReg)
> + BuildMI(MBB, MBBI, MI.getDebugLoc(), TII->get(AArch64::ADRP), DstReg)
> .addOperand(MI.getOperand(1));
>
> MachineInstrBuilder MIB2 =
> - BuildMI(MBB, MBBI, MI.getDebugLoc(), TII->get(ARM64::ADDXri))
> + BuildMI(MBB, MBBI, MI.getDebugLoc(), TII->get(AArch64::ADDXri))
> .addOperand(MI.getOperand(0))
> .addReg(DstReg)
> .addOperand(MI.getOperand(2))
> @@ -702,13 +706,13 @@ bool ARM64ExpandPseudo::expandMI(Machine
> return true;
> }
>
> - case ARM64::MOVi32imm:
> + case AArch64::MOVi32imm:
> return expandMOVImm(MBB, MBBI, 32);
> - case ARM64::MOVi64imm:
> + case AArch64::MOVi64imm:
> return expandMOVImm(MBB, MBBI, 64);
> - case ARM64::RET_ReallyLR:
> - BuildMI(MBB, MBBI, MI.getDebugLoc(), TII->get(ARM64::RET))
> - .addReg(ARM64::LR);
> + case AArch64::RET_ReallyLR:
> + BuildMI(MBB, MBBI, MI.getDebugLoc(), TII->get(AArch64::RET))
> + .addReg(AArch64::LR);
> MI.eraseFromParent();
> return true;
> }
> @@ -717,7 +721,7 @@ bool ARM64ExpandPseudo::expandMI(Machine
>
> /// \brief Iterate over the instructions in basic block MBB and expand any
> /// pseudo instructions. Return true if anything was modified.
> -bool ARM64ExpandPseudo::expandMBB(MachineBasicBlock &MBB) {
> +bool AArch64ExpandPseudo::expandMBB(MachineBasicBlock &MBB) {
> bool Modified = false;
>
> MachineBasicBlock::iterator MBBI = MBB.begin(), E = MBB.end();
> @@ -730,8 +734,8 @@ bool ARM64ExpandPseudo::expandMBB(Machin
> return Modified;
> }
>
> -bool ARM64ExpandPseudo::runOnMachineFunction(MachineFunction &MF) {
> - TII = static_cast<const ARM64InstrInfo *>(MF.getTarget().getInstrInfo());
> +bool AArch64ExpandPseudo::runOnMachineFunction(MachineFunction &MF) {
> + TII = static_cast<const AArch64InstrInfo *>(MF.getTarget().getInstrInfo());
>
> bool Modified = false;
> for (auto &MBB : MF)
> @@ -740,6 +744,6 @@ bool ARM64ExpandPseudo::runOnMachineFunc
> }
>
> /// \brief Returns an instance of the pseudo instruction expansion pass.
> -FunctionPass *llvm::createARM64ExpandPseudoPass() {
> - return new ARM64ExpandPseudo();
> +FunctionPass *llvm::createAArch64ExpandPseudoPass() {
> + return new AArch64ExpandPseudo();
> }
>
> Copied: llvm/trunk/lib/Target/AArch64/AArch64FastISel.cpp (from r209576, llvm/trunk/lib/Target/ARM64/ARM64FastISel.cpp)
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/AArch64/AArch64FastISel.cpp?p2=llvm/trunk/lib/Target/AArch64/AArch64FastISel.cpp&p1=llvm/trunk/lib/Target/ARM64/ARM64FastISel.cpp&r1=209576&r2=209577&rev=209577&view=diff
> ==============================================================================
> --- llvm/trunk/lib/Target/ARM64/ARM64FastISel.cpp (original)
> +++ llvm/trunk/lib/Target/AArch64/AArch64FastISel.cpp Sat May 24 07:50:23 2014
> @@ -1,4 +1,4 @@
> -//===-- ARM6464FastISel.cpp - ARM64 FastISel implementation ---------------===//
> +//===-- AArch6464FastISel.cpp - AArch64 FastISel implementation -----------===//
> //
> // The LLVM Compiler Infrastructure
> //
> @@ -7,17 +7,17 @@
> //
> //===----------------------------------------------------------------------===//
> //
> -// This file defines the ARM64-specific support for the FastISel class. Some
> +// This file defines the AArch64-specific support for the FastISel class. Some
> // of the target-specific code is generated by tablegen in the file
> -// ARM64GenFastISel.inc, which is #included here.
> +// AArch64GenFastISel.inc, which is #included here.
> //
> //===----------------------------------------------------------------------===//
>
> -#include "ARM64.h"
> -#include "ARM64TargetMachine.h"
> -#include "ARM64Subtarget.h"
> -#include "ARM64CallingConv.h"
> -#include "MCTargetDesc/ARM64AddressingModes.h"
> +#include "AArch64.h"
> +#include "AArch64TargetMachine.h"
> +#include "AArch64Subtarget.h"
> +#include "AArch64CallingConv.h"
> +#include "MCTargetDesc/AArch64AddressingModes.h"
> #include "llvm/CodeGen/CallingConvLower.h"
> #include "llvm/CodeGen/FastISel.h"
> #include "llvm/CodeGen/FunctionLoweringInfo.h"
> @@ -40,7 +40,7 @@ using namespace llvm;
>
> namespace {
>
> -class ARM64FastISel : public FastISel {
> +class AArch64FastISel : public FastISel {
>
> class Address {
> public:
> @@ -85,9 +85,9 @@ class ARM64FastISel : public FastISel {
> bool isValid() { return isFIBase() || (isRegBase() && getReg() != 0); }
> };
>
> - /// Subtarget - Keep a pointer to the ARM64Subtarget around so that we can
> + /// Subtarget - Keep a pointer to the AArch64Subtarget around so that we can
> /// make the right decision when generating code for different targets.
> - const ARM64Subtarget *Subtarget;
> + const AArch64Subtarget *Subtarget;
> LLVMContext *Context;
>
> private:
> @@ -130,8 +130,8 @@ private:
> unsigned EmitIntExt(MVT SrcVT, unsigned SrcReg, MVT DestVT, bool isZExt);
> unsigned Emiti1Ext(unsigned SrcReg, MVT DestVT, bool isZExt);
>
> - unsigned ARM64MaterializeFP(const ConstantFP *CFP, MVT VT);
> - unsigned ARM64MaterializeGV(const GlobalValue *GV);
> + unsigned AArch64MaterializeFP(const ConstantFP *CFP, MVT VT);
> + unsigned AArch64MaterializeGV(const GlobalValue *GV);
>
> // Call handling routines.
> private:
> @@ -150,29 +150,29 @@ public:
> unsigned TargetMaterializeAlloca(const AllocaInst *AI) override;
> unsigned TargetMaterializeConstant(const Constant *C) override;
>
> - explicit ARM64FastISel(FunctionLoweringInfo &funcInfo,
> + explicit AArch64FastISel(FunctionLoweringInfo &funcInfo,
> const TargetLibraryInfo *libInfo)
> : FastISel(funcInfo, libInfo) {
> - Subtarget = &TM.getSubtarget<ARM64Subtarget>();
> + Subtarget = &TM.getSubtarget<AArch64Subtarget>();
> Context = &funcInfo.Fn->getContext();
> }
>
> bool TargetSelectInstruction(const Instruction *I) override;
>
> -#include "ARM64GenFastISel.inc"
> +#include "AArch64GenFastISel.inc"
> };
>
> } // end anonymous namespace
>
> -#include "ARM64GenCallingConv.inc"
> +#include "AArch64GenCallingConv.inc"
>
> -CCAssignFn *ARM64FastISel::CCAssignFnForCall(CallingConv::ID CC) const {
> +CCAssignFn *AArch64FastISel::CCAssignFnForCall(CallingConv::ID CC) const {
> if (CC == CallingConv::WebKit_JS)
> - return CC_ARM64_WebKit_JS;
> - return Subtarget->isTargetDarwin() ? CC_ARM64_DarwinPCS : CC_ARM64_AAPCS;
> + return CC_AArch64_WebKit_JS;
> + return Subtarget->isTargetDarwin() ? CC_AArch64_DarwinPCS : CC_AArch64_AAPCS;
> }
>
> -unsigned ARM64FastISel::TargetMaterializeAlloca(const AllocaInst *AI) {
> +unsigned AArch64FastISel::TargetMaterializeAlloca(const AllocaInst *AI) {
> assert(TLI.getValueType(AI->getType(), true) == MVT::i64 &&
> "Alloca should always return a pointer.");
>
> @@ -184,8 +184,8 @@ unsigned ARM64FastISel::TargetMaterializ
> FuncInfo.StaticAllocaMap.find(AI);
>
> if (SI != FuncInfo.StaticAllocaMap.end()) {
> - unsigned ResultReg = createResultReg(&ARM64::GPR64RegClass);
> - BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, DbgLoc, TII.get(ARM64::ADDXri),
> + unsigned ResultReg = createResultReg(&AArch64::GPR64RegClass);
> + BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, DbgLoc, TII.get(AArch64::ADDXri),
> ResultReg)
> .addFrameIndex(SI->second)
> .addImm(0)
> @@ -196,7 +196,7 @@ unsigned ARM64FastISel::TargetMaterializ
> return 0;
> }
>
> -unsigned ARM64FastISel::ARM64MaterializeFP(const ConstantFP *CFP, MVT VT) {
> +unsigned AArch64FastISel::AArch64MaterializeFP(const ConstantFP *CFP, MVT VT) {
> if (VT != MVT::f32 && VT != MVT::f64)
> return 0;
>
> @@ -209,11 +209,11 @@ unsigned ARM64FastISel::ARM64Materialize
> int Imm;
> unsigned Opc;
> if (is64bit) {
> - Imm = ARM64_AM::getFP64Imm(Val);
> - Opc = ARM64::FMOVDi;
> + Imm = AArch64_AM::getFP64Imm(Val);
> + Opc = AArch64::FMOVDi;
> } else {
> - Imm = ARM64_AM::getFP32Imm(Val);
> - Opc = ARM64::FMOVSi;
> + Imm = AArch64_AM::getFP32Imm(Val);
> + Opc = AArch64::FMOVSi;
> }
> unsigned ResultReg = createResultReg(TLI.getRegClassFor(VT));
> BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, DbgLoc, TII.get(Opc), ResultReg)
> @@ -228,19 +228,19 @@ unsigned ARM64FastISel::ARM64Materialize
> Align = DL.getTypeAllocSize(CFP->getType());
>
> unsigned Idx = MCP.getConstantPoolIndex(cast<Constant>(CFP), Align);
> - unsigned ADRPReg = createResultReg(&ARM64::GPR64commonRegClass);
> - BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, DbgLoc, TII.get(ARM64::ADRP),
> - ADRPReg).addConstantPoolIndex(Idx, 0, ARM64II::MO_PAGE);
> + unsigned ADRPReg = createResultReg(&AArch64::GPR64commonRegClass);
> + BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, DbgLoc, TII.get(AArch64::ADRP),
> + ADRPReg).addConstantPoolIndex(Idx, 0, AArch64II::MO_PAGE);
>
> - unsigned Opc = is64bit ? ARM64::LDRDui : ARM64::LDRSui;
> + unsigned Opc = is64bit ? AArch64::LDRDui : AArch64::LDRSui;
> unsigned ResultReg = createResultReg(TLI.getRegClassFor(VT));
> BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, DbgLoc, TII.get(Opc), ResultReg)
> .addReg(ADRPReg)
> - .addConstantPoolIndex(Idx, 0, ARM64II::MO_PAGEOFF | ARM64II::MO_NC);
> + .addConstantPoolIndex(Idx, 0, AArch64II::MO_PAGEOFF | AArch64II::MO_NC);
> return ResultReg;
> }
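
Two paths in the function above: a single FMOV when the constant fits the 8-bit floating-point immediate, otherwise the ADRP + LDR constant-pool load that follows. The encodable set is tiny, values of the form +/-(16+m)/16 * 2^e with m in 0..15 and e in -3..4, so it is easy to check by brute force. This is my own approximation of what getFP32Imm/getFP64Imm accept, not their code:

  #include <cmath>
  #include <cstdio>

  static bool fitsFMOVImmediate(double V) {
    if (V == 0.0 || std::isnan(V) || std::isinf(V))
      return false;                      // zero and non-finite values never fit
    double A = std::fabs(V);
    for (int E = -3; E <= 4; ++E)
      for (int M = 0; M <= 15; ++M)
        if (A == (16.0 + M) / 16.0 * std::ldexp(1.0, E))
          return true;
    return false;
  }

  int main() {
    std::printf("%d %d %d\n",
                fitsFMOVImmediate(1.0),    // 1: fmov d0, #1.0
                fitsFMOVImmediate(0.5),    // 1: fmov d0, #0.5
                fitsFMOVImmediate(0.1));   // 0: falls back to the constant pool
  }
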
>
> -unsigned ARM64FastISel::ARM64MaterializeGV(const GlobalValue *GV) {
> +unsigned AArch64FastISel::AArch64MaterializeGV(const GlobalValue *GV) {
> // We can't handle thread-local variables quickly yet. Unfortunately we have
> // to peer through any aliases to find out if that rule applies.
> const GlobalValue *TLSGV = GV;
> @@ -257,37 +257,37 @@ unsigned ARM64FastISel::ARM64Materialize
> if (!DestEVT.isSimple())
> return 0;
>
> - unsigned ADRPReg = createResultReg(&ARM64::GPR64commonRegClass);
> + unsigned ADRPReg = createResultReg(&AArch64::GPR64commonRegClass);
> unsigned ResultReg;
>
> - if (OpFlags & ARM64II::MO_GOT) {
> + if (OpFlags & AArch64II::MO_GOT) {
> // ADRP + LDRX
> - BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, DbgLoc, TII.get(ARM64::ADRP),
> + BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, DbgLoc, TII.get(AArch64::ADRP),
> ADRPReg)
> - .addGlobalAddress(GV, 0, ARM64II::MO_GOT | ARM64II::MO_PAGE);
> + .addGlobalAddress(GV, 0, AArch64II::MO_GOT | AArch64II::MO_PAGE);
>
> - ResultReg = createResultReg(&ARM64::GPR64RegClass);
> - BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, DbgLoc, TII.get(ARM64::LDRXui),
> + ResultReg = createResultReg(&AArch64::GPR64RegClass);
> + BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, DbgLoc, TII.get(AArch64::LDRXui),
> ResultReg)
> .addReg(ADRPReg)
> - .addGlobalAddress(GV, 0, ARM64II::MO_GOT | ARM64II::MO_PAGEOFF |
> - ARM64II::MO_NC);
> + .addGlobalAddress(GV, 0, AArch64II::MO_GOT | AArch64II::MO_PAGEOFF |
> + AArch64II::MO_NC);
> } else {
> // ADRP + ADDX
> - BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, DbgLoc, TII.get(ARM64::ADRP),
> - ADRPReg).addGlobalAddress(GV, 0, ARM64II::MO_PAGE);
> + BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, DbgLoc, TII.get(AArch64::ADRP),
> + ADRPReg).addGlobalAddress(GV, 0, AArch64II::MO_PAGE);
>
> - ResultReg = createResultReg(&ARM64::GPR64spRegClass);
> - BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, DbgLoc, TII.get(ARM64::ADDXri),
> + ResultReg = createResultReg(&AArch64::GPR64spRegClass);
> + BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, DbgLoc, TII.get(AArch64::ADDXri),
> ResultReg)
> .addReg(ADRPReg)
> - .addGlobalAddress(GV, 0, ARM64II::MO_PAGEOFF | ARM64II::MO_NC)
> + .addGlobalAddress(GV, 0, AArch64II::MO_PAGEOFF | AArch64II::MO_NC)
> .addImm(0);
> }
> return ResultReg;
> }
>
> -unsigned ARM64FastISel::TargetMaterializeConstant(const Constant *C) {
> +unsigned AArch64FastISel::TargetMaterializeConstant(const Constant *C) {
> EVT CEVT = TLI.getValueType(C->getType(), true);
>
> // Only handle simple types.
> @@ -297,15 +297,15 @@ unsigned ARM64FastISel::TargetMaterializ
>
> // FIXME: Handle ConstantInt.
> if (const ConstantFP *CFP = dyn_cast<ConstantFP>(C))
> - return ARM64MaterializeFP(CFP, VT);
> + return AArch64MaterializeFP(CFP, VT);
> else if (const GlobalValue *GV = dyn_cast<GlobalValue>(C))
> - return ARM64MaterializeGV(GV);
> + return AArch64MaterializeGV(GV);
>
> return 0;
> }
>
> // Computes the address to get to an object.
> -bool ARM64FastISel::ComputeAddress(const Value *Obj, Address &Addr) {
> +bool AArch64FastISel::ComputeAddress(const Value *Obj, Address &Addr) {
> const User *U = nullptr;
> unsigned Opcode = Instruction::UserOp1;
> if (const Instruction *I = dyn_cast<Instruction>(Obj)) {
> @@ -413,7 +413,7 @@ bool ARM64FastISel::ComputeAddress(const
> return Addr.isValid();
> }
>
> -bool ARM64FastISel::isTypeLegal(Type *Ty, MVT &VT) {
> +bool AArch64FastISel::isTypeLegal(Type *Ty, MVT &VT) {
> EVT evt = TLI.getValueType(Ty, true);
>
> // Only handle simple types.
> @@ -430,7 +430,7 @@ bool ARM64FastISel::isTypeLegal(Type *Ty
> return TLI.isTypeLegal(VT);
> }
>
> -bool ARM64FastISel::isLoadStoreTypeLegal(Type *Ty, MVT &VT) {
> +bool AArch64FastISel::isLoadStoreTypeLegal(Type *Ty, MVT &VT) {
> if (isTypeLegal(Ty, VT))
> return true;
>
> @@ -442,8 +442,8 @@ bool ARM64FastISel::isLoadStoreTypeLegal
> return false;
> }
>
> -bool ARM64FastISel::SimplifyAddress(Address &Addr, MVT VT, int64_t ScaleFactor,
> - bool UseUnscaled) {
> +bool AArch64FastISel::SimplifyAddress(Address &Addr, MVT VT,
> + int64_t ScaleFactor, bool UseUnscaled) {
> bool needsLowering = false;
> int64_t Offset = Addr.getOffset();
> switch (VT.SimpleTy) {
> @@ -486,9 +486,9 @@ bool ARM64FastISel::SimplifyAddress(Addr
> return true;
> }
>
> -void ARM64FastISel::AddLoadStoreOperands(Address &Addr,
> - const MachineInstrBuilder &MIB,
> - unsigned Flags, bool UseUnscaled) {
> +void AArch64FastISel::AddLoadStoreOperands(Address &Addr,
> + const MachineInstrBuilder &MIB,
> + unsigned Flags, bool UseUnscaled) {
> int64_t Offset = Addr.getOffset();
> // Frame base works a bit differently. Handle it separately.
> if (Addr.getKind() == Address::FrameIndexBase) {
> @@ -507,8 +507,8 @@ void ARM64FastISel::AddLoadStoreOperands
> }
> }
>
> -bool ARM64FastISel::EmitLoad(MVT VT, unsigned &ResultReg, Address Addr,
> - bool UseUnscaled) {
> +bool AArch64FastISel::EmitLoad(MVT VT, unsigned &ResultReg, Address Addr,
> + bool UseUnscaled) {
> // Negative offsets require unscaled, 9-bit, signed immediate offsets.
> // Otherwise, we try using scaled, 12-bit, unsigned immediate offsets.
> if (!UseUnscaled && Addr.getOffset() < 0)
> @@ -525,32 +525,32 @@ bool ARM64FastISel::EmitLoad(MVT VT, uns
> VTIsi1 = true;
> // Intentional fall-through.
> case MVT::i8:
> - Opc = UseUnscaled ? ARM64::LDURBBi : ARM64::LDRBBui;
> - RC = &ARM64::GPR32RegClass;
> + Opc = UseUnscaled ? AArch64::LDURBBi : AArch64::LDRBBui;
> + RC = &AArch64::GPR32RegClass;
> ScaleFactor = 1;
> break;
> case MVT::i16:
> - Opc = UseUnscaled ? ARM64::LDURHHi : ARM64::LDRHHui;
> - RC = &ARM64::GPR32RegClass;
> + Opc = UseUnscaled ? AArch64::LDURHHi : AArch64::LDRHHui;
> + RC = &AArch64::GPR32RegClass;
> ScaleFactor = 2;
> break;
> case MVT::i32:
> - Opc = UseUnscaled ? ARM64::LDURWi : ARM64::LDRWui;
> - RC = &ARM64::GPR32RegClass;
> + Opc = UseUnscaled ? AArch64::LDURWi : AArch64::LDRWui;
> + RC = &AArch64::GPR32RegClass;
> ScaleFactor = 4;
> break;
> case MVT::i64:
> - Opc = UseUnscaled ? ARM64::LDURXi : ARM64::LDRXui;
> - RC = &ARM64::GPR64RegClass;
> + Opc = UseUnscaled ? AArch64::LDURXi : AArch64::LDRXui;
> + RC = &AArch64::GPR64RegClass;
> ScaleFactor = 8;
> break;
> case MVT::f32:
> - Opc = UseUnscaled ? ARM64::LDURSi : ARM64::LDRSui;
> + Opc = UseUnscaled ? AArch64::LDURSi : AArch64::LDRSui;
> RC = TLI.getRegClassFor(VT);
> ScaleFactor = 4;
> break;
> case MVT::f64:
> - Opc = UseUnscaled ? ARM64::LDURDi : ARM64::LDRDui;
> + Opc = UseUnscaled ? AArch64::LDURDi : AArch64::LDRDui;
> RC = TLI.getRegClassFor(VT);
> ScaleFactor = 8;
> break;
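
One note on the UseUnscaled split threaded through EmitLoad and EmitStore: the LDR*ui/STR*ui opcodes take an unsigned 12-bit offset scaled by the access size, while the LDUR*/STUR* opcodes take an unscaled signed 9-bit offset, which is why a negative (or insufficiently aligned) offset forces the unscaled forms chosen above. A quick sketch of the two ranges; the helper names here are mine, not LLVM's:

  #include <cstdint>
  #include <cstdio>

  static bool fitsScaledUImm12(int64_t Offset, int64_t Size) {
    return Offset >= 0 && Offset % Size == 0 && Offset / Size <= 4095;
  }

  static bool fitsUnscaledSImm9(int64_t Offset) {
    return Offset >= -256 && Offset <= 255;
  }

  int main() {
    std::printf("%d %d %d\n",
                fitsScaledUImm12(32760, 8),   // 1: ldr x0, [x1, #32760]
                fitsScaledUImm12(-8, 8),      // 0: not representable scaled
                fitsUnscaledSImm9(-8));       // 1: ldur x0, [x1, #-8]
  }
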
> @@ -577,18 +577,18 @@ bool ARM64FastISel::EmitLoad(MVT VT, uns
>
> // Loading an i1 requires special handling.
> if (VTIsi1) {
> - MRI.constrainRegClass(ResultReg, &ARM64::GPR32RegClass);
> - unsigned ANDReg = createResultReg(&ARM64::GPR32spRegClass);
> - BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, DbgLoc, TII.get(ARM64::ANDWri),
> + MRI.constrainRegClass(ResultReg, &AArch64::GPR32RegClass);
> + unsigned ANDReg = createResultReg(&AArch64::GPR32spRegClass);
> + BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, DbgLoc, TII.get(AArch64::ANDWri),
> ANDReg)
> .addReg(ResultReg)
> - .addImm(ARM64_AM::encodeLogicalImmediate(1, 32));
> + .addImm(AArch64_AM::encodeLogicalImmediate(1, 32));
> ResultReg = ANDReg;
> }
> return true;
> }
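
On the "loading an i1 requires special handling" case just above: only bit 0 of the loaded value is guaranteed meaningful for an i1, so the extra ANDWri with encodeLogicalImmediate(1, 32) normalises the register to 0 or 1 before anything selects or branches on it; the same pattern recurs in EmitStore, SelectBranch and SelectSelect further down. In effect (a sketch of the invariant, not the in-tree logic):

  #include <cstdint>
  #include <cstdio>

  static uint32_t normalizeI1(uint32_t Loaded) {
    return Loaded & 1;                 // the ANDWri Wd, Ws, #0x1 above
  }

  int main() {
    std::printf("%u %u\n", normalizeI1(0xFF),   // 1: high bits were garbage
                normalizeI1(0x02));             // 0: bit 0 is what counts
  }
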
>
> -bool ARM64FastISel::SelectLoad(const Instruction *I) {
> +bool AArch64FastISel::SelectLoad(const Instruction *I) {
> MVT VT;
> // Verify we have a legal type before going any further. Currently, we handle
> // simple types that will directly fit in a register (i32/f32/i64/f64) or
> @@ -609,8 +609,8 @@ bool ARM64FastISel::SelectLoad(const Ins
> return true;
> }
>
> -bool ARM64FastISel::EmitStore(MVT VT, unsigned SrcReg, Address Addr,
> - bool UseUnscaled) {
> +bool AArch64FastISel::EmitStore(MVT VT, unsigned SrcReg, Address Addr,
> + bool UseUnscaled) {
> // Negative offsets require unscaled, 9-bit, signed immediate offsets.
> // Otherwise, we try using scaled, 12-bit, unsigned immediate offsets.
> if (!UseUnscaled && Addr.getOffset() < 0)
> @@ -626,27 +626,27 @@ bool ARM64FastISel::EmitStore(MVT VT, un
> case MVT::i1:
> VTIsi1 = true;
> case MVT::i8:
> - StrOpc = UseUnscaled ? ARM64::STURBBi : ARM64::STRBBui;
> + StrOpc = UseUnscaled ? AArch64::STURBBi : AArch64::STRBBui;
> ScaleFactor = 1;
> break;
> case MVT::i16:
> - StrOpc = UseUnscaled ? ARM64::STURHHi : ARM64::STRHHui;
> + StrOpc = UseUnscaled ? AArch64::STURHHi : AArch64::STRHHui;
> ScaleFactor = 2;
> break;
> case MVT::i32:
> - StrOpc = UseUnscaled ? ARM64::STURWi : ARM64::STRWui;
> + StrOpc = UseUnscaled ? AArch64::STURWi : AArch64::STRWui;
> ScaleFactor = 4;
> break;
> case MVT::i64:
> - StrOpc = UseUnscaled ? ARM64::STURXi : ARM64::STRXui;
> + StrOpc = UseUnscaled ? AArch64::STURXi : AArch64::STRXui;
> ScaleFactor = 8;
> break;
> case MVT::f32:
> - StrOpc = UseUnscaled ? ARM64::STURSi : ARM64::STRSui;
> + StrOpc = UseUnscaled ? AArch64::STURSi : AArch64::STRSui;
> ScaleFactor = 4;
> break;
> case MVT::f64:
> - StrOpc = UseUnscaled ? ARM64::STURDi : ARM64::STRDui;
> + StrOpc = UseUnscaled ? AArch64::STURDi : AArch64::STRDui;
> ScaleFactor = 8;
> break;
> }
> @@ -666,12 +666,12 @@ bool ARM64FastISel::EmitStore(MVT VT, un
>
> // Storing an i1 requires special handling.
> if (VTIsi1) {
> - MRI.constrainRegClass(SrcReg, &ARM64::GPR32RegClass);
> - unsigned ANDReg = createResultReg(&ARM64::GPR32spRegClass);
> - BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, DbgLoc, TII.get(ARM64::ANDWri),
> + MRI.constrainRegClass(SrcReg, &AArch64::GPR32RegClass);
> + unsigned ANDReg = createResultReg(&AArch64::GPR32spRegClass);
> + BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, DbgLoc, TII.get(AArch64::ANDWri),
> ANDReg)
> .addReg(SrcReg)
> - .addImm(ARM64_AM::encodeLogicalImmediate(1, 32));
> + .addImm(AArch64_AM::encodeLogicalImmediate(1, 32));
> SrcReg = ANDReg;
> }
> // Create the base instruction, then add the operands.
> @@ -681,7 +681,7 @@ bool ARM64FastISel::EmitStore(MVT VT, un
> return true;
> }
>
> -bool ARM64FastISel::SelectStore(const Instruction *I) {
> +bool AArch64FastISel::SelectStore(const Instruction *I) {
> MVT VT;
> Value *Op0 = I->getOperand(0);
> // Verify we have a legal type before going any further. Currently, we handle
> @@ -706,53 +706,53 @@ bool ARM64FastISel::SelectStore(const In
> return true;
> }
>
> -static ARM64CC::CondCode getCompareCC(CmpInst::Predicate Pred) {
> +static AArch64CC::CondCode getCompareCC(CmpInst::Predicate Pred) {
> switch (Pred) {
> case CmpInst::FCMP_ONE:
> case CmpInst::FCMP_UEQ:
> default:
> // AL is our "false" for now. The other two need more compares.
> - return ARM64CC::AL;
> + return AArch64CC::AL;
> case CmpInst::ICMP_EQ:
> case CmpInst::FCMP_OEQ:
> - return ARM64CC::EQ;
> + return AArch64CC::EQ;
> case CmpInst::ICMP_SGT:
> case CmpInst::FCMP_OGT:
> - return ARM64CC::GT;
> + return AArch64CC::GT;
> case CmpInst::ICMP_SGE:
> case CmpInst::FCMP_OGE:
> - return ARM64CC::GE;
> + return AArch64CC::GE;
> case CmpInst::ICMP_UGT:
> case CmpInst::FCMP_UGT:
> - return ARM64CC::HI;
> + return AArch64CC::HI;
> case CmpInst::FCMP_OLT:
> - return ARM64CC::MI;
> + return AArch64CC::MI;
> case CmpInst::ICMP_ULE:
> case CmpInst::FCMP_OLE:
> - return ARM64CC::LS;
> + return AArch64CC::LS;
> case CmpInst::FCMP_ORD:
> - return ARM64CC::VC;
> + return AArch64CC::VC;
> case CmpInst::FCMP_UNO:
> - return ARM64CC::VS;
> + return AArch64CC::VS;
> case CmpInst::FCMP_UGE:
> - return ARM64CC::PL;
> + return AArch64CC::PL;
> case CmpInst::ICMP_SLT:
> case CmpInst::FCMP_ULT:
> - return ARM64CC::LT;
> + return AArch64CC::LT;
> case CmpInst::ICMP_SLE:
> case CmpInst::FCMP_ULE:
> - return ARM64CC::LE;
> + return AArch64CC::LE;
> case CmpInst::FCMP_UNE:
> case CmpInst::ICMP_NE:
> - return ARM64CC::NE;
> + return AArch64CC::NE;
> case CmpInst::ICMP_UGE:
> - return ARM64CC::HS;
> + return AArch64CC::HS;
> case CmpInst::ICMP_ULT:
> - return ARM64CC::LO;
> + return AArch64CC::LO;
> }
> }
>
> -bool ARM64FastISel::SelectBranch(const Instruction *I) {
> +bool AArch64FastISel::SelectBranch(const Instruction *I) {
> const BranchInst *BI = cast<BranchInst>(I);
> MachineBasicBlock *TBB = FuncInfo.MBBMap[BI->getSuccessor(0)];
> MachineBasicBlock *FBB = FuncInfo.MBBMap[BI->getSuccessor(1)];
> @@ -760,8 +760,8 @@ bool ARM64FastISel::SelectBranch(const I
> if (const CmpInst *CI = dyn_cast<CmpInst>(BI->getCondition())) {
> if (CI->hasOneUse() && (CI->getParent() == I->getParent())) {
> // We may not handle every CC for now.
> - ARM64CC::CondCode CC = getCompareCC(CI->getPredicate());
> - if (CC == ARM64CC::AL)
> + AArch64CC::CondCode CC = getCompareCC(CI->getPredicate());
> + if (CC == AArch64CC::AL)
> return false;
>
> // Emit the cmp.
> @@ -769,7 +769,7 @@ bool ARM64FastISel::SelectBranch(const I
> return false;
>
> // Emit the branch.
> - BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, DbgLoc, TII.get(ARM64::Bcc))
> + BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, DbgLoc, TII.get(AArch64::Bcc))
> .addImm(CC)
> .addMBB(TBB);
> FuncInfo.MBB->addSuccessor(TBB);
> @@ -788,26 +788,27 @@ bool ARM64FastISel::SelectBranch(const I
> // Issue an extract_subreg to get the lower 32-bits.
> if (SrcVT == MVT::i64)
> CondReg = FastEmitInst_extractsubreg(MVT::i32, CondReg, /*Kill=*/true,
> - ARM64::sub_32);
> + AArch64::sub_32);
>
> - MRI.constrainRegClass(CondReg, &ARM64::GPR32RegClass);
> - unsigned ANDReg = createResultReg(&ARM64::GPR32spRegClass);
> - BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, DbgLoc, TII.get(ARM64::ANDWri),
> - ANDReg)
> + MRI.constrainRegClass(CondReg, &AArch64::GPR32RegClass);
> + unsigned ANDReg = createResultReg(&AArch64::GPR32spRegClass);
> + BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, DbgLoc,
> + TII.get(AArch64::ANDWri), ANDReg)
> .addReg(CondReg)
> - .addImm(ARM64_AM::encodeLogicalImmediate(1, 32));
> - BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, DbgLoc, TII.get(ARM64::SUBSWri))
> + .addImm(AArch64_AM::encodeLogicalImmediate(1, 32));
> + BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, DbgLoc,
> + TII.get(AArch64::SUBSWri))
> .addReg(ANDReg)
> .addReg(ANDReg)
> .addImm(0)
> .addImm(0);
>
> - unsigned CC = ARM64CC::NE;
> + unsigned CC = AArch64CC::NE;
> if (FuncInfo.MBB->isLayoutSuccessor(TBB)) {
> std::swap(TBB, FBB);
> - CC = ARM64CC::EQ;
> + CC = AArch64CC::EQ;
> }
> - BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, DbgLoc, TII.get(ARM64::Bcc))
> + BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, DbgLoc, TII.get(AArch64::Bcc))
> .addImm(CC)
> .addMBB(TBB);
> FuncInfo.MBB->addSuccessor(TBB);
> @@ -818,7 +819,7 @@ bool ARM64FastISel::SelectBranch(const I
> dyn_cast<ConstantInt>(BI->getCondition())) {
> uint64_t Imm = CI->getZExtValue();
> MachineBasicBlock *Target = (Imm == 0) ? FBB : TBB;
> - BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, DbgLoc, TII.get(ARM64::B))
> + BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, DbgLoc, TII.get(AArch64::B))
> .addMBB(Target);
> FuncInfo.MBB->addSuccessor(Target);
> return true;
> @@ -835,19 +836,19 @@ bool ARM64FastISel::SelectBranch(const I
> // Regardless, the compare has been done in the predecessor block,
> // and it left a value for us in a virtual register. Ergo, we test
> // the one-bit value left in the virtual register.
> - BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, DbgLoc, TII.get(ARM64::SUBSWri),
> - ARM64::WZR)
> + BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, DbgLoc, TII.get(AArch64::SUBSWri),
> + AArch64::WZR)
> .addReg(CondReg)
> .addImm(0)
> .addImm(0);
>
> - unsigned CC = ARM64CC::NE;
> + unsigned CC = AArch64CC::NE;
> if (FuncInfo.MBB->isLayoutSuccessor(TBB)) {
> std::swap(TBB, FBB);
> - CC = ARM64CC::EQ;
> + CC = AArch64CC::EQ;
> }
>
> - BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, DbgLoc, TII.get(ARM64::Bcc))
> + BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, DbgLoc, TII.get(AArch64::Bcc))
> .addImm(CC)
> .addMBB(TBB);
> FuncInfo.MBB->addSuccessor(TBB);
> @@ -855,14 +856,14 @@ bool ARM64FastISel::SelectBranch(const I
> return true;
> }
>
> -bool ARM64FastISel::SelectIndirectBr(const Instruction *I) {
> +bool AArch64FastISel::SelectIndirectBr(const Instruction *I) {
> const IndirectBrInst *BI = cast<IndirectBrInst>(I);
> unsigned AddrReg = getRegForValue(BI->getOperand(0));
> if (AddrReg == 0)
> return false;
>
> // Emit the indirect branch.
> - BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, DbgLoc, TII.get(ARM64::BR))
> + BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, DbgLoc, TII.get(AArch64::BR))
> .addReg(AddrReg);
>
> // Make sure the CFG is up-to-date.
> @@ -872,7 +873,7 @@ bool ARM64FastISel::SelectIndirectBr(con
> return true;
> }
>
> -bool ARM64FastISel::EmitCmp(Value *Src1Value, Value *Src2Value, bool isZExt) {
> +bool AArch64FastISel::EmitCmp(Value *Src1Value, Value *Src2Value, bool isZExt) {
> Type *Ty = Src1Value->getType();
> EVT SrcEVT = TLI.getValueType(Ty, true);
> if (!SrcEVT.isSimple())
> @@ -916,26 +917,26 @@ bool ARM64FastISel::EmitCmp(Value *Src1V
> needsExt = true;
> // Intentional fall-through.
> case MVT::i32:
> - ZReg = ARM64::WZR;
> + ZReg = AArch64::WZR;
> if (UseImm)
> - CmpOpc = isNegativeImm ? ARM64::ADDSWri : ARM64::SUBSWri;
> + CmpOpc = isNegativeImm ? AArch64::ADDSWri : AArch64::SUBSWri;
> else
> - CmpOpc = ARM64::SUBSWrr;
> + CmpOpc = AArch64::SUBSWrr;
> break;
> case MVT::i64:
> - ZReg = ARM64::XZR;
> + ZReg = AArch64::XZR;
> if (UseImm)
> - CmpOpc = isNegativeImm ? ARM64::ADDSXri : ARM64::SUBSXri;
> + CmpOpc = isNegativeImm ? AArch64::ADDSXri : AArch64::SUBSXri;
> else
> - CmpOpc = ARM64::SUBSXrr;
> + CmpOpc = AArch64::SUBSXrr;
> break;
> case MVT::f32:
> isICmp = false;
> - CmpOpc = UseImm ? ARM64::FCMPSri : ARM64::FCMPSrr;
> + CmpOpc = UseImm ? AArch64::FCMPSri : AArch64::FCMPSrr;
> break;
> case MVT::f64:
> isICmp = false;
> - CmpOpc = UseImm ? ARM64::FCMPDri : ARM64::FCMPDrr;
> + CmpOpc = UseImm ? AArch64::FCMPDri : AArch64::FCMPDrr;
> break;
> }
>
> @@ -986,12 +987,12 @@ bool ARM64FastISel::EmitCmp(Value *Src1V
> return true;
> }
>
> -bool ARM64FastISel::SelectCmp(const Instruction *I) {
> +bool AArch64FastISel::SelectCmp(const Instruction *I) {
> const CmpInst *CI = cast<CmpInst>(I);
>
> // We may not handle every CC for now.
> - ARM64CC::CondCode CC = getCompareCC(CI->getPredicate());
> - if (CC == ARM64CC::AL)
> + AArch64CC::CondCode CC = getCompareCC(CI->getPredicate());
> + if (CC == AArch64CC::AL)
> return false;
>
> // Emit the cmp.
> @@ -999,19 +1000,19 @@ bool ARM64FastISel::SelectCmp(const Inst
> return false;
>
> // Now set a register based on the comparison.
> - ARM64CC::CondCode invertedCC = getInvertedCondCode(CC);
> - unsigned ResultReg = createResultReg(&ARM64::GPR32RegClass);
> - BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, DbgLoc, TII.get(ARM64::CSINCWr),
> + AArch64CC::CondCode invertedCC = getInvertedCondCode(CC);
> + unsigned ResultReg = createResultReg(&AArch64::GPR32RegClass);
> + BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, DbgLoc, TII.get(AArch64::CSINCWr),
> ResultReg)
> - .addReg(ARM64::WZR)
> - .addReg(ARM64::WZR)
> + .addReg(AArch64::WZR)
> + .addReg(AArch64::WZR)
> .addImm(invertedCC);
>
> UpdateValueMap(I, ResultReg);
> return true;
> }
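
The CSINC in SelectCmp above deserves a remark: CSINC Wd, Wn, Wm, cond yields Wn when cond holds and Wm + 1 otherwise, so with both sources tied to WZR and the inverted condition code it materialises the predicate as a clean 0/1. A tiny model of why the inversion is needed:

  #include <cstdint>
  #include <cstdio>

  // CSINC Wd, Wn, Wm, cond  ==  cond ? Wn : Wm + 1
  static uint32_t csinc(uint32_t Wn, uint32_t Wm, bool cond) {
    return cond ? Wn : Wm + 1;
  }

  int main() {
    // For an equality compare the pass passes NE (the inverse of EQ):
    std::printf("EQ holds -> %u\n", csinc(0, 0, /*NE?*/ false));  // 1
    std::printf("EQ fails -> %u\n", csinc(0, 0, /*NE?*/ true));   // 0
  }
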
>
> -bool ARM64FastISel::SelectSelect(const Instruction *I) {
> +bool AArch64FastISel::SelectSelect(const Instruction *I) {
> const SelectInst *SI = cast<SelectInst>(I);
>
> EVT DestEVT = TLI.getValueType(SI->getType(), true);
> @@ -1034,14 +1035,14 @@ bool ARM64FastISel::SelectSelect(const I
> return false;
>
>
> - MRI.constrainRegClass(CondReg, &ARM64::GPR32RegClass);
> - unsigned ANDReg = createResultReg(&ARM64::GPR32spRegClass);
> - BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, DbgLoc, TII.get(ARM64::ANDWri),
> + MRI.constrainRegClass(CondReg, &AArch64::GPR32RegClass);
> + unsigned ANDReg = createResultReg(&AArch64::GPR32spRegClass);
> + BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, DbgLoc, TII.get(AArch64::ANDWri),
> ANDReg)
> .addReg(CondReg)
> - .addImm(ARM64_AM::encodeLogicalImmediate(1, 32));
> + .addImm(AArch64_AM::encodeLogicalImmediate(1, 32));
>
> - BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, DbgLoc, TII.get(ARM64::SUBSWri))
> + BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, DbgLoc, TII.get(AArch64::SUBSWri))
> .addReg(ANDReg)
> .addReg(ANDReg)
> .addImm(0)
> @@ -1052,16 +1053,16 @@ bool ARM64FastISel::SelectSelect(const I
> default:
> return false;
> case MVT::i32:
> - SelectOpc = ARM64::CSELWr;
> + SelectOpc = AArch64::CSELWr;
> break;
> case MVT::i64:
> - SelectOpc = ARM64::CSELXr;
> + SelectOpc = AArch64::CSELXr;
> break;
> case MVT::f32:
> - SelectOpc = ARM64::FCSELSrrr;
> + SelectOpc = AArch64::FCSELSrrr;
> break;
> case MVT::f64:
> - SelectOpc = ARM64::FCSELDrrr;
> + SelectOpc = AArch64::FCSELDrrr;
> break;
> }
>
> @@ -1070,13 +1071,13 @@ bool ARM64FastISel::SelectSelect(const I
> ResultReg)
> .addReg(TrueReg)
> .addReg(FalseReg)
> - .addImm(ARM64CC::NE);
> + .addImm(AArch64CC::NE);
>
> UpdateValueMap(I, ResultReg);
> return true;
> }
>
> -bool ARM64FastISel::SelectFPExt(const Instruction *I) {
> +bool AArch64FastISel::SelectFPExt(const Instruction *I) {
> Value *V = I->getOperand(0);
> if (!I->getType()->isDoubleTy() || !V->getType()->isFloatTy())
> return false;
> @@ -1085,14 +1086,14 @@ bool ARM64FastISel::SelectFPExt(const In
> if (Op == 0)
> return false;
>
> - unsigned ResultReg = createResultReg(&ARM64::FPR64RegClass);
> - BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, DbgLoc, TII.get(ARM64::FCVTDSr),
> + unsigned ResultReg = createResultReg(&AArch64::FPR64RegClass);
> + BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, DbgLoc, TII.get(AArch64::FCVTDSr),
> ResultReg).addReg(Op);
> UpdateValueMap(I, ResultReg);
> return true;
> }
>
> -bool ARM64FastISel::SelectFPTrunc(const Instruction *I) {
> +bool AArch64FastISel::SelectFPTrunc(const Instruction *I) {
> Value *V = I->getOperand(0);
> if (!I->getType()->isFloatTy() || !V->getType()->isDoubleTy())
> return false;
> @@ -1101,15 +1102,15 @@ bool ARM64FastISel::SelectFPTrunc(const
> if (Op == 0)
> return false;
>
> - unsigned ResultReg = createResultReg(&ARM64::FPR32RegClass);
> - BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, DbgLoc, TII.get(ARM64::FCVTSDr),
> + unsigned ResultReg = createResultReg(&AArch64::FPR32RegClass);
> + BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, DbgLoc, TII.get(AArch64::FCVTSDr),
> ResultReg).addReg(Op);
> UpdateValueMap(I, ResultReg);
> return true;
> }
>
> // FPToUI and FPToSI
> -bool ARM64FastISel::SelectFPToInt(const Instruction *I, bool Signed) {
> +bool AArch64FastISel::SelectFPToInt(const Instruction *I, bool Signed) {
> MVT DestVT;
> if (!isTypeLegal(I->getType(), DestVT) || DestVT.isVector())
> return false;
> @@ -1125,24 +1126,24 @@ bool ARM64FastISel::SelectFPToInt(const
> unsigned Opc;
> if (SrcVT == MVT::f64) {
> if (Signed)
> - Opc = (DestVT == MVT::i32) ? ARM64::FCVTZSUWDr : ARM64::FCVTZSUXDr;
> + Opc = (DestVT == MVT::i32) ? AArch64::FCVTZSUWDr : AArch64::FCVTZSUXDr;
> else
> - Opc = (DestVT == MVT::i32) ? ARM64::FCVTZUUWDr : ARM64::FCVTZUUXDr;
> + Opc = (DestVT == MVT::i32) ? AArch64::FCVTZUUWDr : AArch64::FCVTZUUXDr;
> } else {
> if (Signed)
> - Opc = (DestVT == MVT::i32) ? ARM64::FCVTZSUWSr : ARM64::FCVTZSUXSr;
> + Opc = (DestVT == MVT::i32) ? AArch64::FCVTZSUWSr : AArch64::FCVTZSUXSr;
> else
> - Opc = (DestVT == MVT::i32) ? ARM64::FCVTZUUWSr : ARM64::FCVTZUUXSr;
> + Opc = (DestVT == MVT::i32) ? AArch64::FCVTZUUWSr : AArch64::FCVTZUUXSr;
> }
> unsigned ResultReg = createResultReg(
> - DestVT == MVT::i32 ? &ARM64::GPR32RegClass : &ARM64::GPR64RegClass);
> + DestVT == MVT::i32 ? &AArch64::GPR32RegClass : &AArch64::GPR64RegClass);
> BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, DbgLoc, TII.get(Opc), ResultReg)
> .addReg(SrcReg);
> UpdateValueMap(I, ResultReg);
> return true;
> }
>
> -bool ARM64FastISel::SelectIntToFP(const Instruction *I, bool Signed) {
> +bool AArch64FastISel::SelectIntToFP(const Instruction *I, bool Signed) {
> MVT DestVT;
> if (!isTypeLegal(I->getType(), DestVT) || DestVT.isVector())
> return false;
> @@ -1163,20 +1164,20 @@ bool ARM64FastISel::SelectIntToFP(const
> return false;
> }
>
> - MRI.constrainRegClass(SrcReg, SrcVT == MVT::i64 ? &ARM64::GPR64RegClass
> - : &ARM64::GPR32RegClass);
> + MRI.constrainRegClass(SrcReg, SrcVT == MVT::i64 ? &AArch64::GPR64RegClass
> + : &AArch64::GPR32RegClass);
>
> unsigned Opc;
> if (SrcVT == MVT::i64) {
> if (Signed)
> - Opc = (DestVT == MVT::f32) ? ARM64::SCVTFUXSri : ARM64::SCVTFUXDri;
> + Opc = (DestVT == MVT::f32) ? AArch64::SCVTFUXSri : AArch64::SCVTFUXDri;
> else
> - Opc = (DestVT == MVT::f32) ? ARM64::UCVTFUXSri : ARM64::UCVTFUXDri;
> + Opc = (DestVT == MVT::f32) ? AArch64::UCVTFUXSri : AArch64::UCVTFUXDri;
> } else {
> if (Signed)
> - Opc = (DestVT == MVT::f32) ? ARM64::SCVTFUWSri : ARM64::SCVTFUWDri;
> + Opc = (DestVT == MVT::f32) ? AArch64::SCVTFUWSri : AArch64::SCVTFUWDri;
> else
> - Opc = (DestVT == MVT::f32) ? ARM64::UCVTFUWSri : ARM64::UCVTFUWDri;
> + Opc = (DestVT == MVT::f32) ? AArch64::UCVTFUWSri : AArch64::UCVTFUWDri;
> }
>
> unsigned ResultReg = createResultReg(TLI.getRegClassFor(DestVT));
> @@ -1186,12 +1187,11 @@ bool ARM64FastISel::SelectIntToFP(const
> return true;
> }
>
> -bool ARM64FastISel::ProcessCallArgs(SmallVectorImpl<Value *> &Args,
> - SmallVectorImpl<unsigned> &ArgRegs,
> - SmallVectorImpl<MVT> &ArgVTs,
> - SmallVectorImpl<ISD::ArgFlagsTy> &ArgFlags,
> - SmallVectorImpl<unsigned> &RegArgs,
> - CallingConv::ID CC, unsigned &NumBytes) {
> +bool AArch64FastISel::ProcessCallArgs(
> + SmallVectorImpl<Value *> &Args, SmallVectorImpl<unsigned> &ArgRegs,
> + SmallVectorImpl<MVT> &ArgVTs, SmallVectorImpl<ISD::ArgFlagsTy> &ArgFlags,
> + SmallVectorImpl<unsigned> &RegArgs, CallingConv::ID CC,
> + unsigned &NumBytes) {
> SmallVector<CCValAssign, 16> ArgLocs;
> CCState CCInfo(CC, false, *FuncInfo.MF, TM, ArgLocs, *Context);
> CCInfo.AnalyzeCallOperands(ArgVTs, ArgFlags, CCAssignFnForCall(CC));
> @@ -1258,7 +1258,7 @@ bool ARM64FastISel::ProcessCallArgs(Smal
>
> Address Addr;
> Addr.setKind(Address::RegBase);
> - Addr.setReg(ARM64::SP);
> + Addr.setReg(AArch64::SP);
> Addr.setOffset(VA.getLocMemOffset() + BEAlign);
>
> if (!EmitStore(ArgVT, Arg, Addr))
> @@ -1268,9 +1268,9 @@ bool ARM64FastISel::ProcessCallArgs(Smal
> return true;
> }
>
> -bool ARM64FastISel::FinishCall(MVT RetVT, SmallVectorImpl<unsigned> &UsedRegs,
> - const Instruction *I, CallingConv::ID CC,
> - unsigned &NumBytes) {
> +bool AArch64FastISel::FinishCall(MVT RetVT, SmallVectorImpl<unsigned> &UsedRegs,
> + const Instruction *I, CallingConv::ID CC,
> + unsigned &NumBytes) {
> // Issue CALLSEQ_END
> unsigned AdjStackUp = TII.getCallFrameDestroyOpcode();
> BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, DbgLoc, TII.get(AdjStackUp))
> @@ -1302,8 +1302,8 @@ bool ARM64FastISel::FinishCall(MVT RetVT
> return true;
> }
>
> -bool ARM64FastISel::SelectCall(const Instruction *I,
> - const char *IntrMemName = nullptr) {
> +bool AArch64FastISel::SelectCall(const Instruction *I,
> + const char *IntrMemName = nullptr) {
> const CallInst *CI = cast<CallInst>(I);
> const Value *Callee = CI->getCalledValue();
>
> @@ -1396,7 +1396,7 @@ bool ARM64FastISel::SelectCall(const Ins
>
> // Issue the call.
> MachineInstrBuilder MIB;
> - MIB = BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, DbgLoc, TII.get(ARM64::BL));
> + MIB = BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, DbgLoc, TII.get(AArch64::BL));
> if (!IntrMemName)
> MIB.addGlobalAddress(GV, 0, 0);
> else
> @@ -1421,15 +1421,15 @@ bool ARM64FastISel::SelectCall(const Ins
> return true;
> }
>
> -bool ARM64FastISel::IsMemCpySmall(uint64_t Len, unsigned Alignment) {
> +bool AArch64FastISel::IsMemCpySmall(uint64_t Len, unsigned Alignment) {
> if (Alignment)
> return Len / Alignment <= 4;
> else
> return Len < 32;
> }
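
The heuristic above reads as: inline the memcpy only if it can be done in at most four aligned loads/stores, or, when the alignment is unknown, if it is under 32 bytes; everything else falls back to a real memcpy call. A quick sanity check of what that admits (illustrative only):

  #include <cstdint>
  #include <cstdio>

  static bool isMemCpySmall(uint64_t Len, unsigned Alignment) {
    return Alignment ? (Len / Alignment <= 4) : (Len < 32);
  }

  int main() {
    std::printf("%d %d %d\n",
                isMemCpySmall(32, 8),   // 1: four 8-byte copies
                isMemCpySmall(40, 8),   // 0: would take five
                isMemCpySmall(16, 0));  // 1: unknown alignment but short
  }
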
>
> -bool ARM64FastISel::TryEmitSmallMemCpy(Address Dest, Address Src, uint64_t Len,
> - unsigned Alignment) {
> +bool AArch64FastISel::TryEmitSmallMemCpy(Address Dest, Address Src,
> + uint64_t Len, unsigned Alignment) {
> // Make sure we don't bloat code by inlining very large memcpy's.
> if (!IsMemCpySmall(Len, Alignment))
> return false;
> @@ -1481,7 +1481,7 @@ bool ARM64FastISel::TryEmitSmallMemCpy(A
> return true;
> }
>
> -bool ARM64FastISel::SelectIntrinsicCall(const IntrinsicInst &I) {
> +bool AArch64FastISel::SelectIntrinsicCall(const IntrinsicInst &I) {
> // FIXME: Handle more intrinsics.
> switch (I.getIntrinsicID()) {
> default:
> @@ -1539,7 +1539,7 @@ bool ARM64FastISel::SelectIntrinsicCall(
> return SelectCall(&I, "memset");
> }
> case Intrinsic::trap: {
> - BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, DbgLoc, TII.get(ARM64::BRK))
> + BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, DbgLoc, TII.get(AArch64::BRK))
> .addImm(1);
> return true;
> }
> @@ -1547,7 +1547,7 @@ bool ARM64FastISel::SelectIntrinsicCall(
> return false;
> }
>
> -bool ARM64FastISel::SelectRet(const Instruction *I) {
> +bool AArch64FastISel::SelectRet(const Instruction *I) {
> const ReturnInst *Ret = cast<ReturnInst>(I);
> const Function &F = *I->getParent()->getParent();
>
> @@ -1569,8 +1569,8 @@ bool ARM64FastISel::SelectRet(const Inst
> SmallVector<CCValAssign, 16> ValLocs;
> CCState CCInfo(CC, F.isVarArg(), *FuncInfo.MF, TM, ValLocs,
> I->getContext());
> - CCAssignFn *RetCC = CC == CallingConv::WebKit_JS ? RetCC_ARM64_WebKit_JS
> - : RetCC_ARM64_AAPCS;
> + CCAssignFn *RetCC = CC == CallingConv::WebKit_JS ? RetCC_AArch64_WebKit_JS
> + : RetCC_AArch64_AAPCS;
> CCInfo.AnalyzeReturn(Outs, RetCC);
>
> // Only handle a single return value for now.
> @@ -1631,13 +1631,13 @@ bool ARM64FastISel::SelectRet(const Inst
> }
>
> MachineInstrBuilder MIB = BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, DbgLoc,
> - TII.get(ARM64::RET_ReallyLR));
> + TII.get(AArch64::RET_ReallyLR));
> for (unsigned i = 0, e = RetRegs.size(); i != e; ++i)
> MIB.addReg(RetRegs[i], RegState::Implicit);
> return true;
> }
>
> -bool ARM64FastISel::SelectTrunc(const Instruction *I) {
> +bool AArch64FastISel::SelectTrunc(const Instruction *I) {
> Type *DestTy = I->getType();
> Value *Op = I->getOperand(0);
> Type *SrcTy = Op->getType();
> @@ -1684,14 +1684,14 @@ bool ARM64FastISel::SelectTrunc(const In
> }
> // Issue an extract_subreg to get the lower 32-bits.
> unsigned Reg32 = FastEmitInst_extractsubreg(MVT::i32, SrcReg, /*Kill=*/true,
> - ARM64::sub_32);
> - MRI.constrainRegClass(Reg32, &ARM64::GPR32RegClass);
> + AArch64::sub_32);
> + MRI.constrainRegClass(Reg32, &AArch64::GPR32RegClass);
> // Create the AND instruction which performs the actual truncation.
> - unsigned ANDReg = createResultReg(&ARM64::GPR32spRegClass);
> - BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, DbgLoc, TII.get(ARM64::ANDWri),
> + unsigned ANDReg = createResultReg(&AArch64::GPR32spRegClass);
> + BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, DbgLoc, TII.get(AArch64::ANDWri),
> ANDReg)
> .addReg(Reg32)
> - .addImm(ARM64_AM::encodeLogicalImmediate(Mask, 32));
> + .addImm(AArch64_AM::encodeLogicalImmediate(Mask, 32));
> SrcReg = ANDReg;
> }
>
> @@ -1699,7 +1699,7 @@ bool ARM64FastISel::SelectTrunc(const In
> return true;
> }
>
> -unsigned ARM64FastISel::Emiti1Ext(unsigned SrcReg, MVT DestVT, bool isZExt) {
> +unsigned AArch64FastISel::Emiti1Ext(unsigned SrcReg, MVT DestVT, bool isZExt) {
> assert((DestVT == MVT::i8 || DestVT == MVT::i16 || DestVT == MVT::i32 ||
> DestVT == MVT::i64) &&
> "Unexpected value type.");
> @@ -1708,22 +1708,22 @@ unsigned ARM64FastISel::Emiti1Ext(unsign
> DestVT = MVT::i32;
>
> if (isZExt) {
> - MRI.constrainRegClass(SrcReg, &ARM64::GPR32RegClass);
> - unsigned ResultReg = createResultReg(&ARM64::GPR32spRegClass);
> - BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, DbgLoc, TII.get(ARM64::ANDWri),
> + MRI.constrainRegClass(SrcReg, &AArch64::GPR32RegClass);
> + unsigned ResultReg = createResultReg(&AArch64::GPR32spRegClass);
> + BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, DbgLoc, TII.get(AArch64::ANDWri),
> ResultReg)
> .addReg(SrcReg)
> - .addImm(ARM64_AM::encodeLogicalImmediate(1, 32));
> + .addImm(AArch64_AM::encodeLogicalImmediate(1, 32));
>
> if (DestVT == MVT::i64) {
> // We're ZExt i1 to i64. The ANDWri Wd, Ws, #1 implicitly clears the
> // upper 32 bits. Emit a SUBREG_TO_REG to extend from Wd to Xd.
> - unsigned Reg64 = MRI.createVirtualRegister(&ARM64::GPR64RegClass);
> + unsigned Reg64 = MRI.createVirtualRegister(&AArch64::GPR64RegClass);
> BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, DbgLoc,
> - TII.get(ARM64::SUBREG_TO_REG), Reg64)
> + TII.get(AArch64::SUBREG_TO_REG), Reg64)
> .addImm(0)
> .addReg(ResultReg)
> - .addImm(ARM64::sub_32);
> + .addImm(AArch64::sub_32);
> ResultReg = Reg64;
> }
> return ResultReg;
> @@ -1732,8 +1732,8 @@ unsigned ARM64FastISel::Emiti1Ext(unsign
> // FIXME: We're SExt i1 to i64.
> return 0;
> }
> - unsigned ResultReg = createResultReg(&ARM64::GPR32RegClass);
> - BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, DbgLoc, TII.get(ARM64::SBFMWri),
> + unsigned ResultReg = createResultReg(&AArch64::GPR32RegClass);
> + BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, DbgLoc, TII.get(AArch64::SBFMWri),
> ResultReg)
> .addReg(SrcReg)
> .addImm(0)
> @@ -1742,8 +1742,8 @@ unsigned ARM64FastISel::Emiti1Ext(unsign
> }
> }
>
> -unsigned ARM64FastISel::EmitIntExt(MVT SrcVT, unsigned SrcReg, MVT DestVT,
> - bool isZExt) {
> +unsigned AArch64FastISel::EmitIntExt(MVT SrcVT, unsigned SrcReg, MVT DestVT,
> + bool isZExt) {
> assert(DestVT != MVT::i1 && "ZeroExt/SignExt an i1?");
> unsigned Opc;
> unsigned Imm = 0;
> @@ -1755,21 +1755,21 @@ unsigned ARM64FastISel::EmitIntExt(MVT S
> return Emiti1Ext(SrcReg, DestVT, isZExt);
> case MVT::i8:
> if (DestVT == MVT::i64)
> - Opc = isZExt ? ARM64::UBFMXri : ARM64::SBFMXri;
> + Opc = isZExt ? AArch64::UBFMXri : AArch64::SBFMXri;
> else
> - Opc = isZExt ? ARM64::UBFMWri : ARM64::SBFMWri;
> + Opc = isZExt ? AArch64::UBFMWri : AArch64::SBFMWri;
> Imm = 7;
> break;
> case MVT::i16:
> if (DestVT == MVT::i64)
> - Opc = isZExt ? ARM64::UBFMXri : ARM64::SBFMXri;
> + Opc = isZExt ? AArch64::UBFMXri : AArch64::SBFMXri;
> else
> - Opc = isZExt ? ARM64::UBFMWri : ARM64::SBFMWri;
> + Opc = isZExt ? AArch64::UBFMWri : AArch64::SBFMWri;
> Imm = 15;
> break;
> case MVT::i32:
> assert(DestVT == MVT::i64 && "IntExt i32 to i32?!?");
> - Opc = isZExt ? ARM64::UBFMXri : ARM64::SBFMXri;
> + Opc = isZExt ? AArch64::UBFMXri : AArch64::SBFMXri;
> Imm = 31;
> break;
> }
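
For the UBFM/SBFM opcodes picked above: if I'm reading it right, with the first immediate at 0 and the second at 7, 15 or 31, a bitfield move degenerates into a plain zero or sign extension from i8/i16/i32, which is all EmitIntExt needs. A rough model of that special case (illustrative, not the ARM pseudocode):

  #include <cstdint>
  #include <cstdio>

  static uint64_t ubfm0(uint64_t Src, unsigned Imms) {      // UBFM, immr == 0
    uint64_t Mask = (Imms >= 63) ? ~0ULL : ((1ULL << (Imms + 1)) - 1);
    return Src & Mask;                                      // keep bits [Imms:0]
  }

  static int64_t sbfm0(uint64_t Src, unsigned Imms) {       // SBFM, immr == 0
    uint64_t Field = ubfm0(Src, Imms);
    uint64_t Sign = 1ULL << Imms;
    return (int64_t)((Field ^ Sign) - Sign);                // sign-extend from Imms
  }

  int main() {
    std::printf("%llu %lld\n",
                (unsigned long long)ubfm0(0xFFFFFF80u, 7),  // 128:  zext i8
                (long long)sbfm0(0xFFFFFF80u, 7));          // -128: sext i8
  }
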
> @@ -1778,12 +1778,12 @@ unsigned ARM64FastISel::EmitIntExt(MVT S
> if (DestVT == MVT::i8 || DestVT == MVT::i16)
> DestVT = MVT::i32;
> else if (DestVT == MVT::i64) {
> - unsigned Src64 = MRI.createVirtualRegister(&ARM64::GPR64RegClass);
> + unsigned Src64 = MRI.createVirtualRegister(&AArch64::GPR64RegClass);
> BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, DbgLoc,
> - TII.get(ARM64::SUBREG_TO_REG), Src64)
> + TII.get(AArch64::SUBREG_TO_REG), Src64)
> .addImm(0)
> .addReg(SrcReg)
> - .addImm(ARM64::sub_32);
> + .addImm(AArch64::sub_32);
> SrcReg = Src64;
> }
>
> @@ -1796,7 +1796,7 @@ unsigned ARM64FastISel::EmitIntExt(MVT S
> return ResultReg;
> }
>
> -bool ARM64FastISel::SelectIntExt(const Instruction *I) {
> +bool AArch64FastISel::SelectIntExt(const Instruction *I) {
> // On ARM, in general, integer casts don't involve legal types; this code
> // handles promotable integers. The high bits for a type smaller than
> // the register size are assumed to be undefined.
> @@ -1825,7 +1825,7 @@ bool ARM64FastISel::SelectIntExt(const I
> return true;
> }
>
> -bool ARM64FastISel::SelectRem(const Instruction *I, unsigned ISDOpcode) {
> +bool AArch64FastISel::SelectRem(const Instruction *I, unsigned ISDOpcode) {
> EVT DestEVT = TLI.getValueType(I->getType(), true);
> if (!DestEVT.isSimple())
> return false;
> @@ -1840,13 +1840,13 @@ bool ARM64FastISel::SelectRem(const Inst
> default:
> return false;
> case ISD::SREM:
> - DivOpc = is64bit ? ARM64::SDIVXr : ARM64::SDIVWr;
> + DivOpc = is64bit ? AArch64::SDIVXr : AArch64::SDIVWr;
> break;
> case ISD::UREM:
> - DivOpc = is64bit ? ARM64::UDIVXr : ARM64::UDIVWr;
> + DivOpc = is64bit ? AArch64::UDIVXr : AArch64::UDIVWr;
> break;
> }
> - unsigned MSubOpc = is64bit ? ARM64::MSUBXrrr : ARM64::MSUBWrrr;
> + unsigned MSubOpc = is64bit ? AArch64::MSUBXrrr : AArch64::MSUBWrrr;
> unsigned Src0Reg = getRegForValue(I->getOperand(0));
> if (!Src0Reg)
> return false;
> @@ -1870,7 +1870,7 @@ bool ARM64FastISel::SelectRem(const Inst
> return true;
> }
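
Since AArch64 has no remainder instruction, the SREM/UREM lowering above is a divide followed by an MSUB, leaning on the identity rem = n - (n / d) * d. A quick standalone check of that identity; this mirrors the intent of the DivOpc + MSUB pair, not the exact operand order being emitted:

  #include <cstdint>
  #include <cstdio>

  static int64_t sremViaMsub(int64_t N, int64_t D) {
    int64_t Q = N / D;        // sdiv: C and AArch64 both truncate toward zero
    return N - Q * D;         // msub: N - Q * D
  }

  int main() {
    std::printf("%lld %lld\n",
                (long long)sremViaMsub(23, 7),     //  2
                (long long)sremViaMsub(-23, 7));   // -2, same sign as the numerator
  }
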
>
> -bool ARM64FastISel::SelectMul(const Instruction *I) {
> +bool AArch64FastISel::SelectMul(const Instruction *I) {
> EVT SrcEVT = TLI.getValueType(I->getOperand(0)->getType(), true);
> if (!SrcEVT.isSimple())
> return false;
> @@ -1889,12 +1889,12 @@ bool ARM64FastISel::SelectMul(const Inst
> case MVT::i8:
> case MVT::i16:
> case MVT::i32:
> - ZReg = ARM64::WZR;
> - Opc = ARM64::MADDWrrr;
> + ZReg = AArch64::WZR;
> + Opc = AArch64::MADDWrrr;
> break;
> case MVT::i64:
> - ZReg = ARM64::XZR;
> - Opc = ARM64::MADDXrrr;
> + ZReg = AArch64::XZR;
> + Opc = AArch64::MADDXrrr;
> break;
> }
>
> @@ -1916,7 +1916,7 @@ bool ARM64FastISel::SelectMul(const Inst
> return true;
> }
>
> -bool ARM64FastISel::TargetSelectInstruction(const Instruction *I) {
> +bool AArch64FastISel::TargetSelectInstruction(const Instruction *I) {
> switch (I->getOpcode()) {
> default:
> break;
> @@ -1966,12 +1966,12 @@ bool ARM64FastISel::TargetSelectInstruct
> }
> return false;
> // Silence warnings.
> - (void)&CC_ARM64_DarwinPCS_VarArg;
> + (void)&CC_AArch64_DarwinPCS_VarArg;
> }
>
> namespace llvm {
> -llvm::FastISel *ARM64::createFastISel(FunctionLoweringInfo &funcInfo,
> - const TargetLibraryInfo *libInfo) {
> - return new ARM64FastISel(funcInfo, libInfo);
> +llvm::FastISel *AArch64::createFastISel(FunctionLoweringInfo &funcInfo,
> + const TargetLibraryInfo *libInfo) {
> + return new AArch64FastISel(funcInfo, libInfo);
> }
> }
>
> Copied: llvm/trunk/lib/Target/AArch64/AArch64FrameLowering.cpp (from r209576, llvm/trunk/lib/Target/ARM64/ARM64FrameLowering.cpp)
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/AArch64/AArch64FrameLowering.cpp?p2=llvm/trunk/lib/Target/AArch64/AArch64FrameLowering.cpp&p1=llvm/trunk/lib/Target/ARM64/ARM64FrameLowering.cpp&r1=209576&r2=209577&rev=209577&view=diff
> ==============================================================================
> --- llvm/trunk/lib/Target/ARM64/ARM64FrameLowering.cpp (original)
> +++ llvm/trunk/lib/Target/AArch64/AArch64FrameLowering.cpp Sat May 24 07:50:23 2014
> @@ -1,4 +1,4 @@
> -//===- ARM64FrameLowering.cpp - ARM64 Frame Lowering -----------*- C++ -*-====//
> +//===- AArch64FrameLowering.cpp - AArch64 Frame Lowering -------*- C++ -*-====//
> //
> // The LLVM Compiler Infrastructure
> //
> @@ -7,15 +7,15 @@
> //
> //===----------------------------------------------------------------------===//
> //
> -// This file contains the ARM64 implementation of TargetFrameLowering class.
> +// This file contains the AArch64 implementation of TargetFrameLowering class.
> //
> //===----------------------------------------------------------------------===//
>
> -#include "ARM64FrameLowering.h"
> -#include "ARM64InstrInfo.h"
> -#include "ARM64MachineFunctionInfo.h"
> -#include "ARM64Subtarget.h"
> -#include "ARM64TargetMachine.h"
> +#include "AArch64FrameLowering.h"
> +#include "AArch64InstrInfo.h"
> +#include "AArch64MachineFunctionInfo.h"
> +#include "AArch64Subtarget.h"
> +#include "AArch64TargetMachine.h"
> #include "llvm/ADT/Statistic.h"
> #include "llvm/IR/DataLayout.h"
> #include "llvm/IR/Function.h"
> @@ -33,8 +33,8 @@ using namespace llvm;
>
> #define DEBUG_TYPE "frame-info"
>
> -static cl::opt<bool> EnableRedZone("arm64-redzone",
> - cl::desc("enable use of redzone on ARM64"),
> +static cl::opt<bool> EnableRedZone("aarch64-redzone",
> + cl::desc("enable use of redzone on AArch64"),
> cl::init(false), cl::Hidden);
>
> STATISTIC(NumRedZoneFunctions, "Number of functions using red zone");
> @@ -59,7 +59,7 @@ static unsigned estimateStackSize(Machin
> return (unsigned)Offset;
> }
>
> -bool ARM64FrameLowering::canUseRedZone(const MachineFunction &MF) const {
> +bool AArch64FrameLowering::canUseRedZone(const MachineFunction &MF) const {
> if (!EnableRedZone)
> return false;
> // Don't use the red zone if the function explicitly asks us not to.
> @@ -69,7 +69,7 @@ bool ARM64FrameLowering::canUseRedZone(c
> return false;
>
> const MachineFrameInfo *MFI = MF.getFrameInfo();
> - const ARM64FunctionInfo *AFI = MF.getInfo<ARM64FunctionInfo>();
> + const AArch64FunctionInfo *AFI = MF.getInfo<AArch64FunctionInfo>();
> unsigned NumBytes = AFI->getLocalStackSize();
>
> // Note: currently hasFP() is always true for hasCalls(), but that's an
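For anyone reading the canUseRedZone changes without the full file: the red zone here is the 128 bytes below SP that a leaf function may use without adjusting SP, which is what lets emitPrologue further down skip the stack decrement. A hedged sketch of the decision, with the attribute and hasCalls/hasFP checks reduced to booleans (parameter names are mine):

#include <cassert>
#include <cstdint>

// Simplified model: only a leaf function (no calls, no frame pointer) whose
// locals fit in the 128 bytes below SP qualifies, and only when the
// -aarch64-redzone option is on and the function lacks the noredzone attribute.
bool canUseRedZoneFor(bool redZoneOptEnabled, bool hasNoRedZoneAttr,
                      bool hasCallsOrFP, uint64_t localStackSize) {
  if (!redZoneOptEnabled || hasNoRedZoneAttr)
    return false;
  return !hasCallsOrFP && localStackSize <= 128;
}

int main() {
  assert(canUseRedZoneFor(true, false, false, 96));    // small leaf: yes
  assert(!canUseRedZoneFor(true, false, false, 256));  // too big: no
  return 0;
}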
> @@ -82,13 +82,13 @@ bool ARM64FrameLowering::canUseRedZone(c
>
> /// hasFP - Return true if the specified function should have a dedicated frame
> /// pointer register.
> -bool ARM64FrameLowering::hasFP(const MachineFunction &MF) const {
> +bool AArch64FrameLowering::hasFP(const MachineFunction &MF) const {
> const MachineFrameInfo *MFI = MF.getFrameInfo();
>
> #ifndef NDEBUG
> const TargetRegisterInfo *RegInfo = MF.getTarget().getRegisterInfo();
> assert(!RegInfo->needsStackRealignment(MF) &&
> - "No stack realignment on ARM64!");
> + "No stack realignment on AArch64!");
> #endif
>
> return (MFI->hasCalls() || MFI->hasVarSizedObjects() ||
> @@ -100,15 +100,16 @@ bool ARM64FrameLowering::hasFP(const Mac
> /// immediately on entry to the current function. This eliminates the need for
> /// add/sub sp brackets around call sites. Returns true if the call frame is
> /// included as part of the stack frame.
> -bool ARM64FrameLowering::hasReservedCallFrame(const MachineFunction &MF) const {
> +bool
> +AArch64FrameLowering::hasReservedCallFrame(const MachineFunction &MF) const {
> return !MF.getFrameInfo()->hasVarSizedObjects();
> }
>
> -void ARM64FrameLowering::eliminateCallFramePseudoInstr(
> +void AArch64FrameLowering::eliminateCallFramePseudoInstr(
> MachineFunction &MF, MachineBasicBlock &MBB,
> MachineBasicBlock::iterator I) const {
> - const ARM64InstrInfo *TII =
> - static_cast<const ARM64InstrInfo *>(MF.getTarget().getInstrInfo());
> + const AArch64InstrInfo *TII =
> + static_cast<const AArch64InstrInfo *>(MF.getTarget().getInstrInfo());
> DebugLoc DL = I->getDebugLoc();
> int Opc = I->getOpcode();
> bool IsDestroy = Opc == TII->getCallFrameDestroyOpcode();
> @@ -138,26 +139,26 @@ void ARM64FrameLowering::eliminateCallFr
> // Mostly call frames will be allocated at the start of a function so
> // this is OK, but it is a limitation that needs dealing with.
> assert(Amount > -0xffffff && Amount < 0xffffff && "call frame too large");
> - emitFrameOffset(MBB, I, DL, ARM64::SP, ARM64::SP, Amount, TII);
> + emitFrameOffset(MBB, I, DL, AArch64::SP, AArch64::SP, Amount, TII);
> }
> } else if (CalleePopAmount != 0) {
> // If the calling convention demands that the callee pops arguments from the
> // stack, we want to add it back if we have a reserved call frame.
> assert(CalleePopAmount < 0xffffff && "call frame too large");
> - emitFrameOffset(MBB, I, DL, ARM64::SP, ARM64::SP, -CalleePopAmount, TII);
> + emitFrameOffset(MBB, I, DL, AArch64::SP, AArch64::SP, -CalleePopAmount,
> + TII);
> }
> MBB.erase(I);
> }
>
> -void
> -ARM64FrameLowering::emitCalleeSavedFrameMoves(MachineBasicBlock &MBB,
> - MachineBasicBlock::iterator MBBI,
> - unsigned FramePtr) const {
> +void AArch64FrameLowering::emitCalleeSavedFrameMoves(
> + MachineBasicBlock &MBB, MachineBasicBlock::iterator MBBI,
> + unsigned FramePtr) const {
> MachineFunction &MF = *MBB.getParent();
> MachineFrameInfo *MFI = MF.getFrameInfo();
> MachineModuleInfo &MMI = MF.getMMI();
> const MCRegisterInfo *MRI = MMI.getContext().getRegisterInfo();
> - const ARM64InstrInfo *TII = TM.getInstrInfo();
> + const AArch64InstrInfo *TII = TM.getInstrInfo();
> DebugLoc DL = MBB.findDebugLoc(MBBI);
>
> // Add callee saved registers to move list.
> @@ -185,7 +186,7 @@ ARM64FrameLowering::emitCalleeSavedFrame
> // method automatically generates the directives when frame pointers are
> // used. If we generate CFI directives for the extra "STP"s, the linker will
> // lose track of the correct values for the frame pointer and link register.
> - if (HasFP && (FramePtr == Reg || Reg == ARM64::LR)) {
> + if (HasFP && (FramePtr == Reg || Reg == AArch64::LR)) {
> TotalSkipped += stackGrowth;
> continue;
> }
> @@ -198,15 +199,15 @@ ARM64FrameLowering::emitCalleeSavedFrame
> }
> }
>
> -void ARM64FrameLowering::emitPrologue(MachineFunction &MF) const {
> +void AArch64FrameLowering::emitPrologue(MachineFunction &MF) const {
> MachineBasicBlock &MBB = MF.front(); // Prologue goes in entry BB.
> MachineBasicBlock::iterator MBBI = MBB.begin();
> const MachineFrameInfo *MFI = MF.getFrameInfo();
> const Function *Fn = MF.getFunction();
> - const ARM64RegisterInfo *RegInfo = TM.getRegisterInfo();
> - const ARM64InstrInfo *TII = TM.getInstrInfo();
> + const AArch64RegisterInfo *RegInfo = TM.getRegisterInfo();
> + const AArch64InstrInfo *TII = TM.getInstrInfo();
> MachineModuleInfo &MMI = MF.getMMI();
> - ARM64FunctionInfo *AFI = MF.getInfo<ARM64FunctionInfo>();
> + AArch64FunctionInfo *AFI = MF.getInfo<AArch64FunctionInfo>();
> bool needsFrameMoves = MMI.hasDebugInfo() || Fn->needsUnwindTableEntry();
> bool HasFP = hasFP(MF);
> DebugLoc DL = MBB.findDebugLoc(MBBI);
> @@ -224,7 +225,7 @@ void ARM64FrameLowering::emitPrologue(Ma
> // REDZONE: If the stack size is less than 128 bytes, we don't need
> // to actually allocate.
> if (NumBytes && !canUseRedZone(MF)) {
> - emitFrameOffset(MBB, MBBI, DL, ARM64::SP, ARM64::SP, -NumBytes, TII,
> + emitFrameOffset(MBB, MBBI, DL, AArch64::SP, AArch64::SP, -NumBytes, TII,
> MachineInstr::FrameSetup);
>
> // Encode the stack size of the leaf function.
> @@ -244,9 +245,9 @@ void ARM64FrameLowering::emitPrologue(Ma
> if (HasFP) {
> // First instruction must a) allocate the stack and b) have an immediate
> // that is a multiple of -2.
> - assert((MBBI->getOpcode() == ARM64::STPXpre ||
> - MBBI->getOpcode() == ARM64::STPDpre) &&
> - MBBI->getOperand(3).getReg() == ARM64::SP &&
> + assert((MBBI->getOpcode() == AArch64::STPXpre ||
> + MBBI->getOpcode() == AArch64::STPDpre) &&
> + MBBI->getOperand(3).getReg() == AArch64::SP &&
> MBBI->getOperand(4).getImm() < 0 &&
> (MBBI->getOperand(4).getImm() & 1) == 0);
>
> @@ -258,10 +259,10 @@ void ARM64FrameLowering::emitPrologue(Ma
> }
>
> // Move past the saves of the callee-saved registers.
> - while (MBBI->getOpcode() == ARM64::STPXi ||
> - MBBI->getOpcode() == ARM64::STPDi ||
> - MBBI->getOpcode() == ARM64::STPXpre ||
> - MBBI->getOpcode() == ARM64::STPDpre) {
> + while (MBBI->getOpcode() == AArch64::STPXi ||
> + MBBI->getOpcode() == AArch64::STPDi ||
> + MBBI->getOpcode() == AArch64::STPXpre ||
> + MBBI->getOpcode() == AArch64::STPDpre) {
> ++MBBI;
> NumBytes -= 16;
> }
> @@ -271,7 +272,7 @@ void ARM64FrameLowering::emitPrologue(Ma
> // mov fp,sp when FPOffset is zero.
> // Note: All stores of callee-saved registers are marked as "FrameSetup".
> // This code marks the instruction(s) that set the FP also.
> - emitFrameOffset(MBB, MBBI, DL, ARM64::FP, ARM64::SP, FPOffset, TII,
> + emitFrameOffset(MBB, MBBI, DL, AArch64::FP, AArch64::SP, FPOffset, TII,
> MachineInstr::FrameSetup);
> }
>
> @@ -282,7 +283,7 @@ void ARM64FrameLowering::emitPrologue(Ma
> if (NumBytes) {
> // If we're a leaf function, try using the red zone.
> if (!canUseRedZone(MF))
> - emitFrameOffset(MBB, MBBI, DL, ARM64::SP, ARM64::SP, -NumBytes, TII,
> + emitFrameOffset(MBB, MBBI, DL, AArch64::SP, AArch64::SP, -NumBytes, TII,
> MachineInstr::FrameSetup);
> }
>
> @@ -295,7 +296,7 @@ void ARM64FrameLowering::emitPrologue(Ma
> // needed.
> //
> if (RegInfo->hasBasePointer(MF))
> - TII->copyPhysReg(MBB, MBBI, DL, ARM64::X19, ARM64::SP, false);
> + TII->copyPhysReg(MBB, MBBI, DL, AArch64::X19, AArch64::SP, false);
>
> if (needsFrameMoves) {
> const DataLayout *TD = MF.getTarget().getDataLayout();
> @@ -377,7 +378,7 @@ void ARM64FrameLowering::emitPrologue(Ma
> .addCFIIndex(CFIIndex);
>
> // Record the location of the stored LR
> - unsigned LR = RegInfo->getDwarfRegNum(ARM64::LR, true);
> + unsigned LR = RegInfo->getDwarfRegNum(AArch64::LR, true);
> CFIIndex = MMI.addFrameInst(
> MCCFIInstruction::createOffset(nullptr, LR, StackGrowth));
> BuildMI(MBB, MBBI, DL, TII->get(TargetOpcode::CFI_INSTRUCTION))
> @@ -410,15 +411,16 @@ static bool isCalleeSavedRegister(unsign
>
> static bool isCSRestore(MachineInstr *MI, const MCPhysReg *CSRegs) {
> unsigned RtIdx = 0;
> - if (MI->getOpcode() == ARM64::LDPXpost || MI->getOpcode() == ARM64::LDPDpost)
> + if (MI->getOpcode() == AArch64::LDPXpost ||
> + MI->getOpcode() == AArch64::LDPDpost)
> RtIdx = 1;
>
> - if (MI->getOpcode() == ARM64::LDPXpost ||
> - MI->getOpcode() == ARM64::LDPDpost || MI->getOpcode() == ARM64::LDPXi ||
> - MI->getOpcode() == ARM64::LDPDi) {
> + if (MI->getOpcode() == AArch64::LDPXpost ||
> + MI->getOpcode() == AArch64::LDPDpost ||
> + MI->getOpcode() == AArch64::LDPXi || MI->getOpcode() == AArch64::LDPDi) {
> if (!isCalleeSavedRegister(MI->getOperand(RtIdx).getReg(), CSRegs) ||
> !isCalleeSavedRegister(MI->getOperand(RtIdx + 1).getReg(), CSRegs) ||
> - MI->getOperand(RtIdx + 2).getReg() != ARM64::SP)
> + MI->getOperand(RtIdx + 2).getReg() != AArch64::SP)
> return false;
> return true;
> }
> @@ -426,25 +428,25 @@ static bool isCSRestore(MachineInstr *MI
> return false;
> }
>
> -void ARM64FrameLowering::emitEpilogue(MachineFunction &MF,
> - MachineBasicBlock &MBB) const {
> +void AArch64FrameLowering::emitEpilogue(MachineFunction &MF,
> + MachineBasicBlock &MBB) const {
> MachineBasicBlock::iterator MBBI = MBB.getLastNonDebugInstr();
> assert(MBBI->isReturn() && "Can only insert epilog into returning blocks");
> MachineFrameInfo *MFI = MF.getFrameInfo();
> - const ARM64InstrInfo *TII =
> - static_cast<const ARM64InstrInfo *>(MF.getTarget().getInstrInfo());
> - const ARM64RegisterInfo *RegInfo =
> - static_cast<const ARM64RegisterInfo *>(MF.getTarget().getRegisterInfo());
> + const AArch64InstrInfo *TII =
> + static_cast<const AArch64InstrInfo *>(MF.getTarget().getInstrInfo());
> + const AArch64RegisterInfo *RegInfo = static_cast<const AArch64RegisterInfo *>(
> + MF.getTarget().getRegisterInfo());
> DebugLoc DL = MBBI->getDebugLoc();
> unsigned RetOpcode = MBBI->getOpcode();
>
> int NumBytes = MFI->getStackSize();
> - const ARM64FunctionInfo *AFI = MF.getInfo<ARM64FunctionInfo>();
> + const AArch64FunctionInfo *AFI = MF.getInfo<AArch64FunctionInfo>();
>
> // Initial and residual are named for consistency with the prologue. Note that
> // in the epilogue, the residual adjustment is executed first.
> uint64_t ArgumentPopSize = 0;
> - if (RetOpcode == ARM64::TCRETURNdi || RetOpcode == ARM64::TCRETURNri) {
> + if (RetOpcode == AArch64::TCRETURNdi || RetOpcode == AArch64::TCRETURNri) {
> MachineOperand &StackAdjust = MBBI->getOperand(1);
>
> // For a tail-call in a callee-pops-arguments environment, some or all of
> @@ -483,8 +485,8 @@ void ARM64FrameLowering::emitEpilogue(Ma
> // So NumBytes = StackSize + BytesInStackArgArea - CalleeArgStackSize
> // = StackSize + ArgumentPopSize
> //
> - // ARM64TargetLowering::LowerCall figures out ArgumentPopSize and keeps
> - // it as the 2nd argument of ARM64ISD::TC_RETURN.
> + // AArch64TargetLowering::LowerCall figures out ArgumentPopSize and keeps
> + // it as the 2nd argument of AArch64ISD::TC_RETURN.
> NumBytes += ArgumentPopSize;
>
> unsigned NumRestores = 0;
> @@ -508,7 +510,8 @@ void ARM64FrameLowering::emitEpilogue(Ma
> // If this was a redzone leaf function, we don't need to restore the
> // stack pointer.
> if (!canUseRedZone(MF))
> - emitFrameOffset(MBB, LastPopI, DL, ARM64::SP, ARM64::SP, NumBytes, TII);
> + emitFrameOffset(MBB, LastPopI, DL, AArch64::SP, AArch64::SP, NumBytes,
> + TII);
> return;
> }
>
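The ArgumentPopSize comment block above is worth keeping in mind when reading the renamed epilogue: for a tail call in a callee-pops-arguments environment the final SP restore has to cover both the local frame and the incoming argument area. A tiny worked example of that arithmetic (the numbers are mine, purely illustrative):

#include <cassert>
#include <cstdint>

// NumBytes fed to emitFrameOffset in the epilogue:
//   ArgumentPopSize = BytesInStackArgArea - CalleeArgStackSize
//   NumBytes        = StackSize + ArgumentPopSize
uint64_t epilogueAdjust(uint64_t stackSize, uint64_t bytesInStackArgArea,
                        uint64_t calleeArgStackSize) {
  uint64_t argumentPopSize = bytesInStackArgArea - calleeArgStackSize;
  return stackSize + argumentPopSize;
}

int main() {
  // 48-byte local frame, 32 bytes of incoming stack args, 16 reused by the callee.
  assert(epilogueAdjust(48, 32, 16) == 64);
  return 0;
}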
> @@ -517,14 +520,14 @@ void ARM64FrameLowering::emitEpilogue(Ma
> // non-post-indexed loads for the restores if we aren't actually going to
> // be able to save any instructions.
> if (NumBytes || MFI->hasVarSizedObjects())
> - emitFrameOffset(MBB, LastPopI, DL, ARM64::SP, ARM64::FP,
> + emitFrameOffset(MBB, LastPopI, DL, AArch64::SP, AArch64::FP,
> -(NumRestores - 1) * 16, TII, MachineInstr::NoFlags);
> }
>
> /// getFrameIndexOffset - Returns the displacement from the frame register to
> /// the stack frame of the specified index.
> -int ARM64FrameLowering::getFrameIndexOffset(const MachineFunction &MF,
> - int FI) const {
> +int AArch64FrameLowering::getFrameIndexOffset(const MachineFunction &MF,
> + int FI) const {
> unsigned FrameReg;
> return getFrameIndexReference(MF, FI, FrameReg);
> }
> @@ -533,19 +536,19 @@ int ARM64FrameLowering::getFrameIndexOff
> /// debug info. It's the same as what we use for resolving the code-gen
> /// references for now. FIXME: This can go wrong when references are
> /// SP-relative and simple call frames aren't used.
> -int ARM64FrameLowering::getFrameIndexReference(const MachineFunction &MF,
> - int FI,
> - unsigned &FrameReg) const {
> +int AArch64FrameLowering::getFrameIndexReference(const MachineFunction &MF,
> + int FI,
> + unsigned &FrameReg) const {
> return resolveFrameIndexReference(MF, FI, FrameReg);
> }
>
> -int ARM64FrameLowering::resolveFrameIndexReference(const MachineFunction &MF,
> - int FI, unsigned &FrameReg,
> - bool PreferFP) const {
> +int AArch64FrameLowering::resolveFrameIndexReference(const MachineFunction &MF,
> + int FI, unsigned &FrameReg,
> + bool PreferFP) const {
> const MachineFrameInfo *MFI = MF.getFrameInfo();
> - const ARM64RegisterInfo *RegInfo =
> - static_cast<const ARM64RegisterInfo *>(MF.getTarget().getRegisterInfo());
> - const ARM64FunctionInfo *AFI = MF.getInfo<ARM64FunctionInfo>();
> + const AArch64RegisterInfo *RegInfo = static_cast<const AArch64RegisterInfo *>(
> + MF.getTarget().getRegisterInfo());
> + const AArch64FunctionInfo *AFI = MF.getInfo<AArch64FunctionInfo>();
> int FPOffset = MFI->getObjectOffset(FI) + 16;
> int Offset = MFI->getObjectOffset(FI) + MFI->getStackSize();
> bool isFixed = MFI->isFixedObjectIndex(FI);
> @@ -587,7 +590,7 @@ int ARM64FrameLowering::resolveFrameInde
> if (RegInfo->hasBasePointer(MF))
> FrameReg = RegInfo->getBaseRegister();
> else {
> - FrameReg = ARM64::SP;
> + FrameReg = AArch64::SP;
> // If we're using the red zone for this function, the SP won't actually
> // be adjusted, so the offsets will be negative. They're also all
> // within range of the signed 9-bit immediate instructions.
> @@ -599,16 +602,16 @@ int ARM64FrameLowering::resolveFrameInde
> }
>
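resolveFrameIndexReference is unchanged apart from the rename, but the two candidate offsets at the top of it are easy to misread in diff form: the FP-relative offset adds 16 for the frame record, while the SP-relative offset is measured from the bottom of the whole frame. A small sketch under those assumptions (the example object offset and frame size are mine):

#include <cassert>
#include <cstdint>

// The two offsets computed above:
//   FPOffset = ObjectOffset + 16          (skip the 16-byte frame record)
//   SPOffset = ObjectOffset + StackSize   (SP points at the frame bottom)
struct FrameIndexOffsets {
  int64_t fpRelative;
  int64_t spRelative;
};

FrameIndexOffsets frameIndexOffsets(int64_t objectOffset, int64_t stackSize) {
  return {objectOffset + 16, objectOffset + stackSize};
}

int main() {
  // An 8-byte local at object offset -24 in a 48-byte frame.
  FrameIndexOffsets o = frameIndexOffsets(-24, 48);
  assert(o.fpRelative == -8);
  assert(o.spRelative == 24);
  return 0;
}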
> static unsigned getPrologueDeath(MachineFunction &MF, unsigned Reg) {
> - if (Reg != ARM64::LR)
> + if (Reg != AArch64::LR)
> return getKillRegState(true);
>
> // LR may be referred to later by an @llvm.returnaddress intrinsic.
> - bool LRLiveIn = MF.getRegInfo().isLiveIn(ARM64::LR);
> + bool LRLiveIn = MF.getRegInfo().isLiveIn(AArch64::LR);
> bool LRKill = !(LRLiveIn && MF.getFrameInfo()->isReturnAddressTaken());
> return getKillRegState(LRKill);
> }
>
> -bool ARM64FrameLowering::spillCalleeSavedRegisters(
> +bool AArch64FrameLowering::spillCalleeSavedRegisters(
> MachineBasicBlock &MBB, MachineBasicBlock::iterator MI,
> const std::vector<CalleeSavedInfo> &CSI,
> const TargetRegisterInfo *TRI) const {
> @@ -645,22 +648,22 @@ bool ARM64FrameLowering::spillCalleeSave
> // Rationale: This sequence saves uop updates compared to a sequence of
> // pre-increment spills like stp xi,xj,[sp,#-16]!
> // Note: Similar rationale and sequence for restores in epilog.
> - if (ARM64::GPR64RegClass.contains(Reg1)) {
> - assert(ARM64::GPR64RegClass.contains(Reg2) &&
> + if (AArch64::GPR64RegClass.contains(Reg1)) {
> + assert(AArch64::GPR64RegClass.contains(Reg2) &&
> "Expected GPR64 callee-saved register pair!");
> // For first spill use pre-increment store.
> if (i == 0)
> - StrOpc = ARM64::STPXpre;
> + StrOpc = AArch64::STPXpre;
> else
> - StrOpc = ARM64::STPXi;
> - } else if (ARM64::FPR64RegClass.contains(Reg1)) {
> - assert(ARM64::FPR64RegClass.contains(Reg2) &&
> + StrOpc = AArch64::STPXi;
> + } else if (AArch64::FPR64RegClass.contains(Reg1)) {
> + assert(AArch64::FPR64RegClass.contains(Reg2) &&
> "Expected FPR64 callee-saved register pair!");
> // For first spill use pre-increment store.
> if (i == 0)
> - StrOpc = ARM64::STPDpre;
> + StrOpc = AArch64::STPDpre;
> else
> - StrOpc = ARM64::STPDi;
> + StrOpc = AArch64::STPDi;
> } else
> llvm_unreachable("Unexpected callee saved register!");
> DEBUG(dbgs() << "CSR spill: (" << TRI->getName(Reg1) << ", "
> @@ -672,19 +675,19 @@ bool ARM64FrameLowering::spillCalleeSave
> assert((Offset >= -64 && Offset <= 63) &&
> "Offset out of bounds for STP immediate");
> MachineInstrBuilder MIB = BuildMI(MBB, MI, DL, TII.get(StrOpc));
> - if (StrOpc == ARM64::STPDpre || StrOpc == ARM64::STPXpre)
> - MIB.addReg(ARM64::SP, RegState::Define);
> + if (StrOpc == AArch64::STPDpre || StrOpc == AArch64::STPXpre)
> + MIB.addReg(AArch64::SP, RegState::Define);
>
> MIB.addReg(Reg2, getPrologueDeath(MF, Reg2))
> .addReg(Reg1, getPrologueDeath(MF, Reg1))
> - .addReg(ARM64::SP)
> + .addReg(AArch64::SP)
> .addImm(Offset) // [sp, #offset * 8], where factor * 8 is implicit
> .setMIFlag(MachineInstr::FrameSetup);
> }
> return true;
> }
>
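One more aside on the spill loop above, since every opcode name in it just changed: the first callee-saved pair uses a pre-increment STP (which also carves out the callee-save area), and the remaining pairs use plain immediate-offset STPs, which is the uop-saving sequence the in-code rationale describes. A trivial model of that opcode choice, with the rough emitted sequence in the comment (register choices are illustrative, not taken from the patch):

#include <cassert>
#include <string>

// Opcode choice for GPR pairs in spillCalleeSavedRegisters. For three pairs
// the result looks roughly like:
//   stp x22, x21, [sp, #-48]!   // STPXpre: first pair also allocates the area
//   stp x20, x19, [sp, #16]     // STPXi
//   stp x29, x30, [sp, #32]     // STPXi: fp/lr become the frame record
std::string gprPairSpillOpcode(unsigned pairIndex) {
  return pairIndex == 0 ? "STPXpre" : "STPXi";
}

int main() {
  assert(gprPairSpillOpcode(0) == "STPXpre");
  assert(gprPairSpillOpcode(2) == "STPXi");
  return 0;
}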
> -bool ARM64FrameLowering::restoreCalleeSavedRegisters(
> +bool AArch64FrameLowering::restoreCalleeSavedRegisters(
> MachineBasicBlock &MBB, MachineBasicBlock::iterator MI,
> const std::vector<CalleeSavedInfo> &CSI,
> const TargetRegisterInfo *TRI) const {
> @@ -716,20 +719,20 @@ bool ARM64FrameLowering::restoreCalleeSa
>
> assert((Count & 1) == 0 && "Odd number of callee-saved regs to spill!");
> assert((i & 1) == 0 && "Odd index for callee-saved reg spill!");
> - if (ARM64::GPR64RegClass.contains(Reg1)) {
> - assert(ARM64::GPR64RegClass.contains(Reg2) &&
> + if (AArch64::GPR64RegClass.contains(Reg1)) {
> + assert(AArch64::GPR64RegClass.contains(Reg2) &&
> "Expected GPR64 callee-saved register pair!");
> if (i == Count - 2)
> - LdrOpc = ARM64::LDPXpost;
> + LdrOpc = AArch64::LDPXpost;
> else
> - LdrOpc = ARM64::LDPXi;
> - } else if (ARM64::FPR64RegClass.contains(Reg1)) {
> - assert(ARM64::FPR64RegClass.contains(Reg2) &&
> + LdrOpc = AArch64::LDPXi;
> + } else if (AArch64::FPR64RegClass.contains(Reg1)) {
> + assert(AArch64::FPR64RegClass.contains(Reg2) &&
> "Expected FPR64 callee-saved register pair!");
> if (i == Count - 2)
> - LdrOpc = ARM64::LDPDpost;
> + LdrOpc = AArch64::LDPDpost;
> else
> - LdrOpc = ARM64::LDPDi;
> + LdrOpc = AArch64::LDPDi;
> } else
> llvm_unreachable("Unexpected callee saved register!");
> DEBUG(dbgs() << "CSR restore: (" << TRI->getName(Reg1) << ", "
> @@ -742,31 +745,31 @@ bool ARM64FrameLowering::restoreCalleeSa
> assert((Offset >= -64 && Offset <= 63) &&
> "Offset out of bounds for LDP immediate");
> MachineInstrBuilder MIB = BuildMI(MBB, MI, DL, TII.get(LdrOpc));
> - if (LdrOpc == ARM64::LDPXpost || LdrOpc == ARM64::LDPDpost)
> - MIB.addReg(ARM64::SP, RegState::Define);
> + if (LdrOpc == AArch64::LDPXpost || LdrOpc == AArch64::LDPDpost)
> + MIB.addReg(AArch64::SP, RegState::Define);
>
> MIB.addReg(Reg2, getDefRegState(true))
> .addReg(Reg1, getDefRegState(true))
> - .addReg(ARM64::SP)
> + .addReg(AArch64::SP)
> .addImm(Offset); // [sp], #offset * 8 or [sp, #offset * 8]
> // where the factor * 8 is implicit
> }
> return true;
> }
>
> -void ARM64FrameLowering::processFunctionBeforeCalleeSavedScan(
> +void AArch64FrameLowering::processFunctionBeforeCalleeSavedScan(
> MachineFunction &MF, RegScavenger *RS) const {
> - const ARM64RegisterInfo *RegInfo =
> - static_cast<const ARM64RegisterInfo *>(MF.getTarget().getRegisterInfo());
> - ARM64FunctionInfo *AFI = MF.getInfo<ARM64FunctionInfo>();
> + const AArch64RegisterInfo *RegInfo = static_cast<const AArch64RegisterInfo *>(
> + MF.getTarget().getRegisterInfo());
> + AArch64FunctionInfo *AFI = MF.getInfo<AArch64FunctionInfo>();
> MachineRegisterInfo *MRI = &MF.getRegInfo();
> SmallVector<unsigned, 4> UnspilledCSGPRs;
> SmallVector<unsigned, 4> UnspilledCSFPRs;
>
> // The frame record needs to be created by saving the appropriate registers
> if (hasFP(MF)) {
> - MRI->setPhysRegUsed(ARM64::FP);
> - MRI->setPhysRegUsed(ARM64::LR);
> + MRI->setPhysRegUsed(AArch64::FP);
> + MRI->setPhysRegUsed(AArch64::LR);
> }
>
> // Spill the BasePtr if it's used. Do this first thing so that the
> @@ -788,10 +791,10 @@ void ARM64FrameLowering::processFunction
>
> const unsigned OddReg = CSRegs[i];
> const unsigned EvenReg = CSRegs[i + 1];
> - assert((ARM64::GPR64RegClass.contains(OddReg) &&
> - ARM64::GPR64RegClass.contains(EvenReg)) ^
> - (ARM64::FPR64RegClass.contains(OddReg) &&
> - ARM64::FPR64RegClass.contains(EvenReg)) &&
> + assert((AArch64::GPR64RegClass.contains(OddReg) &&
> + AArch64::GPR64RegClass.contains(EvenReg)) ^
> + (AArch64::FPR64RegClass.contains(OddReg) &&
> + AArch64::FPR64RegClass.contains(EvenReg)) &&
> "Register class mismatch!");
>
> const bool OddRegUsed = MRI->isPhysRegUsed(OddReg);
> @@ -800,7 +803,7 @@ void ARM64FrameLowering::processFunction
> // Early exit if none of the registers in the register pair is actually
> // used.
> if (!OddRegUsed && !EvenRegUsed) {
> - if (ARM64::GPR64RegClass.contains(OddReg)) {
> + if (AArch64::GPR64RegClass.contains(OddReg)) {
> UnspilledCSGPRs.push_back(OddReg);
> UnspilledCSGPRs.push_back(EvenReg);
> } else {
> @@ -810,7 +813,7 @@ void ARM64FrameLowering::processFunction
> continue;
> }
>
> - unsigned Reg = ARM64::NoRegister;
> + unsigned Reg = AArch64::NoRegister;
> // If only one of the registers of the register pair is used, make sure to
> // mark the other one as used as well.
> if (OddRegUsed ^ EvenRegUsed) {
> @@ -822,17 +825,17 @@ void ARM64FrameLowering::processFunction
> DEBUG(dbgs() << ' ' << PrintReg(OddReg, RegInfo));
> DEBUG(dbgs() << ' ' << PrintReg(EvenReg, RegInfo));
>
> - assert(((OddReg == ARM64::LR && EvenReg == ARM64::FP) ||
> + assert(((OddReg == AArch64::LR && EvenReg == AArch64::FP) ||
> (RegInfo->getEncodingValue(OddReg) + 1 ==
> RegInfo->getEncodingValue(EvenReg))) &&
> "Register pair of non-adjacent registers!");
> - if (ARM64::GPR64RegClass.contains(OddReg)) {
> + if (AArch64::GPR64RegClass.contains(OddReg)) {
> NumGPRSpilled += 2;
> // If it's not a reserved register, we can use it in lieu of an
> // emergency spill slot for the register scavenger.
> // FIXME: It would be better to instead keep looking and choose another
> // unspilled register that isn't reserved, if there is one.
> - if (Reg != ARM64::NoRegister && !RegInfo->isReservedReg(MF, Reg))
> + if (Reg != AArch64::NoRegister && !RegInfo->isReservedReg(MF, Reg))
> ExtraCSSpill = true;
> } else
> NumFPRSpilled += 2;
> @@ -878,7 +881,7 @@ void ARM64FrameLowering::processFunction
> // If we didn't find an extra callee-saved register to spill, create
> // an emergency spill slot.
> if (!ExtraCSSpill) {
> - const TargetRegisterClass *RC = &ARM64::GPR64RegClass;
> + const TargetRegisterClass *RC = &AArch64::GPR64RegClass;
> int FI = MFI->CreateStackObject(RC->getSize(), RC->getAlignment(), false);
> RS->addScavengingFrameIndex(FI);
> DEBUG(dbgs() << "No available CS registers, allocated fi#" << FI
>
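The tail of processFunctionBeforeCalleeSavedScan shows the fallback that survives the rename untouched: if no unspilled, unreserved callee-saved GPR is left over for the register scavenger, an extra GPR64-sized stack slot is created instead. A compressed sketch of just the branch visible in the hunk (any surrounding conditions outside the quoted context are deliberately left out, and the struct is my own):

#include <cassert>

// If an extra callee-saved GPR was marked spilled, the scavenger can use it;
// otherwise reserve a GPR64-sized (8-byte) emergency spill slot.
struct ScavengeDecision {
  bool needsEmergencySlot;
  unsigned slotSizeBytes;  // only meaningful when needsEmergencySlot
};

ScavengeDecision planScavenging(bool extraCSSpill) {
  if (extraCSSpill)
    return {false, 0};
  return {true, 8};  // CreateStackObject(GPR64 size, GPR64 alignment)
}

int main() {
  assert(planScavenging(false).needsEmergencySlot);
  assert(!planScavenging(true).needsEmergencySlot);
  return 0;
}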
> Copied: llvm/trunk/lib/Target/AArch64/AArch64FrameLowering.h (from r209576, llvm/trunk/lib/Target/ARM64/ARM64FrameLowering.h)
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/AArch64/AArch64FrameLowering.h?p2=llvm/trunk/lib/Target/AArch64/AArch64FrameLowering.h&p1=llvm/trunk/lib/Target/ARM64/ARM64FrameLowering.h&r1=209576&r2=209577&rev=209577&view=diff
> ==============================================================================
> --- llvm/trunk/lib/Target/ARM64/ARM64FrameLowering.h (original)
> +++ llvm/trunk/lib/Target/AArch64/AArch64FrameLowering.h Sat May 24 07:50:23 2014
> @@ -1,4 +1,4 @@
> -//===-- ARM64FrameLowering.h - TargetFrameLowering for ARM64 ----*- C++ -*-===//
> +//==-- AArch64FrameLowering.h - TargetFrameLowering for AArch64 --*- C++ -*-==//
> //
> // The LLVM Compiler Infrastructure
> //
> @@ -11,22 +11,22 @@
> //
> //===----------------------------------------------------------------------===//
>
> -#ifndef ARM64_FRAMELOWERING_H
> -#define ARM64_FRAMELOWERING_H
> +#ifndef AArch64_FRAMELOWERING_H
> +#define AArch64_FRAMELOWERING_H
>
> #include "llvm/Target/TargetFrameLowering.h"
>
> namespace llvm {
>
> -class ARM64Subtarget;
> -class ARM64TargetMachine;
> +class AArch64Subtarget;
> +class AArch64TargetMachine;
>
> -class ARM64FrameLowering : public TargetFrameLowering {
> - const ARM64TargetMachine &TM;
> +class AArch64FrameLowering : public TargetFrameLowering {
> + const AArch64TargetMachine &TM;
>
> public:
> - explicit ARM64FrameLowering(const ARM64TargetMachine &TM,
> - const ARM64Subtarget &STI)
> + explicit AArch64FrameLowering(const AArch64TargetMachine &TM,
> + const AArch64Subtarget &STI)
> : TargetFrameLowering(StackGrowsDown, 16, 0, 16,
> false /*StackRealignable*/),
> TM(TM) {}
>
> Copied: llvm/trunk/lib/Target/AArch64/AArch64ISelDAGToDAG.cpp (from r209576, llvm/trunk/lib/Target/ARM64/ARM64ISelDAGToDAG.cpp)
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/AArch64/AArch64ISelDAGToDAG.cpp?p2=llvm/trunk/lib/Target/AArch64/AArch64ISelDAGToDAG.cpp&p1=llvm/trunk/lib/Target/ARM64/ARM64ISelDAGToDAG.cpp&r1=209576&r2=209577&rev=209577&view=diff
> ==============================================================================
> --- llvm/trunk/lib/Target/ARM64/ARM64ISelDAGToDAG.cpp (original)
> +++ llvm/trunk/lib/Target/AArch64/AArch64ISelDAGToDAG.cpp Sat May 24 07:50:23 2014
> @@ -1,4 +1,4 @@
> -//===-- ARM64ISelDAGToDAG.cpp - A dag to dag inst selector for ARM64 ------===//
> +//===-- AArch64ISelDAGToDAG.cpp - A dag to dag inst selector for AArch64 --===//
> //
> // The LLVM Compiler Infrastructure
> //
> @@ -7,12 +7,12 @@
> //
> //===----------------------------------------------------------------------===//
> //
> -// This file defines an instruction selector for the ARM64 target.
> +// This file defines an instruction selector for the AArch64 target.
> //
> //===----------------------------------------------------------------------===//
>
> -#include "ARM64TargetMachine.h"
> -#include "MCTargetDesc/ARM64AddressingModes.h"
> +#include "AArch64TargetMachine.h"
> +#include "MCTargetDesc/AArch64AddressingModes.h"
> #include "llvm/ADT/APSInt.h"
> #include "llvm/CodeGen/SelectionDAGISel.h"
> #include "llvm/IR/Function.h" // To access function attributes.
> @@ -25,30 +25,31 @@
>
> using namespace llvm;
>
> -#define DEBUG_TYPE "arm64-isel"
> +#define DEBUG_TYPE "aarch64-isel"
>
> //===--------------------------------------------------------------------===//
> -/// ARM64DAGToDAGISel - ARM64 specific code to select ARM64 machine
> +/// AArch64DAGToDAGISel - AArch64 specific code to select AArch64 machine
> /// instructions for SelectionDAG operations.
> ///
> namespace {
>
> -class ARM64DAGToDAGISel : public SelectionDAGISel {
> - ARM64TargetMachine &TM;
> +class AArch64DAGToDAGISel : public SelectionDAGISel {
> + AArch64TargetMachine &TM;
>
> - /// Subtarget - Keep a pointer to the ARM64Subtarget around so that we can
> + /// Subtarget - Keep a pointer to the AArch64Subtarget around so that we can
> /// make the right decision when generating code for different targets.
> - const ARM64Subtarget *Subtarget;
> + const AArch64Subtarget *Subtarget;
>
> bool ForCodeSize;
>
> public:
> - explicit ARM64DAGToDAGISel(ARM64TargetMachine &tm, CodeGenOpt::Level OptLevel)
> - : SelectionDAGISel(tm, OptLevel), TM(tm),
> - Subtarget(nullptr), ForCodeSize(false) {}
> + explicit AArch64DAGToDAGISel(AArch64TargetMachine &tm,
> + CodeGenOpt::Level OptLevel)
> + : SelectionDAGISel(tm, OptLevel), TM(tm), Subtarget(nullptr),
> + ForCodeSize(false) {}
>
> const char *getPassName() const override {
> - return "ARM64 Instruction Selection";
> + return "AArch64 Instruction Selection";
> }
>
> bool runOnMachineFunction(MachineFunction &MF) override {
> @@ -57,7 +58,7 @@ public:
> FnAttrs.hasAttribute(AttributeSet::FunctionIndex,
> Attribute::OptimizeForSize) ||
> FnAttrs.hasAttribute(AttributeSet::FunctionIndex, Attribute::MinSize);
> - Subtarget = &TM.getSubtarget<ARM64Subtarget>();
> + Subtarget = &TM.getSubtarget<AArch64Subtarget>();
> return SelectionDAGISel::runOnMachineFunction(MF);
> }
>
> @@ -161,7 +162,7 @@ public:
> SDNode *SelectLIBM(SDNode *N);
>
> // Include the pieces autogenerated from the target description.
> -#include "ARM64GenDAGISel.inc"
> +#include "AArch64GenDAGISel.inc"
>
> private:
> bool SelectShiftedRegister(SDValue N, bool AllowROR, SDValue &Reg,
> @@ -214,10 +215,10 @@ static bool isOpcWithIntImmediate(const
> isIntImmediate(N->getOperand(1).getNode(), Imm);
> }
>
> -bool ARM64DAGToDAGISel::SelectInlineAsmMemoryOperand(
> +bool AArch64DAGToDAGISel::SelectInlineAsmMemoryOperand(
> const SDValue &Op, char ConstraintCode, std::vector<SDValue> &OutOps) {
> assert(ConstraintCode == 'm' && "unexpected asm memory constraint");
> - // Require the address to be in a register. That is safe for all ARM64
> + // Require the address to be in a register. That is safe for all AArch64
> // variants and it is hard to do anything much smarter without knowing
> // how the operand is used.
> OutOps.push_back(Op);
> @@ -227,8 +228,8 @@ bool ARM64DAGToDAGISel::SelectInlineAsmM
> /// SelectArithImmed - Select an immediate value that can be represented as
> /// a 12-bit value shifted left by either 0 or 12. If so, return true with
> /// Val set to the 12-bit value and Shift set to the shifter operand.
> -bool ARM64DAGToDAGISel::SelectArithImmed(SDValue N, SDValue &Val,
> - SDValue &Shift) {
> +bool AArch64DAGToDAGISel::SelectArithImmed(SDValue N, SDValue &Val,
> + SDValue &Shift) {
> // This function is called from the addsub_shifted_imm ComplexPattern,
> // which lists [imm] as the list of opcodes it's interested in; however,
> // we still need to check whether the operand is actually an immediate
> @@ -248,7 +249,7 @@ bool ARM64DAGToDAGISel::SelectArithImmed
> } else
> return false;
>
> - unsigned ShVal = ARM64_AM::getShifterImm(ARM64_AM::LSL, ShiftAmt);
> + unsigned ShVal = AArch64_AM::getShifterImm(AArch64_AM::LSL, ShiftAmt);
> Val = CurDAG->getTargetConstant(Immed, MVT::i32);
> Shift = CurDAG->getTargetConstant(ShVal, MVT::i32);
> return true;
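SelectArithImmed is a nice one to sanity-check after the AArch64_AM rename: add/sub immediates are 12 bits, optionally shifted left by 12, so 0x123 and 0x123000 both fold while 0x123456 does not. A standalone restatement of that predicate (it mirrors the check in the hunk rather than calling into the backend, and the names are mine):

#include <cassert>
#include <cstdint>

// Returns true and fills val/shift if imm fits the add/sub immediate form:
// a 12-bit value shifted left by 0 or by 12.
bool fitsArithImmed(uint64_t imm, uint64_t &val, unsigned &shift) {
  if (imm >> 12 == 0) {
    val = imm;
    shift = 0;
    return true;
  }
  if ((imm & 0xfff) == 0 && imm >> 24 == 0) {
    val = imm >> 12;
    shift = 12;
    return true;
  }
  return false;
}

int main() {
  uint64_t v; unsigned s;
  assert(fitsArithImmed(0x123, v, s) && s == 0);
  assert(fitsArithImmed(0x123000, v, s) && v == 0x123 && s == 12);
  assert(!fitsArithImmed(0x123456, v, s));
  return 0;
}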
> @@ -256,8 +257,8 @@ bool ARM64DAGToDAGISel::SelectArithImmed
>
> /// SelectNegArithImmed - As above, but negates the value before trying to
> /// select it.
> -bool ARM64DAGToDAGISel::SelectNegArithImmed(SDValue N, SDValue &Val,
> - SDValue &Shift) {
> +bool AArch64DAGToDAGISel::SelectNegArithImmed(SDValue N, SDValue &Val,
> + SDValue &Shift) {
> // This function is called from the addsub_shifted_imm ComplexPattern,
> // which lists [imm] as the list of opcodes it's interested in; however,
> // we still need to check whether the operand is actually an immediate
> @@ -288,23 +289,23 @@ bool ARM64DAGToDAGISel::SelectNegArithIm
>
> /// getShiftTypeForNode - Translate a shift node to the corresponding
> /// ShiftType value.
> -static ARM64_AM::ShiftExtendType getShiftTypeForNode(SDValue N) {
> +static AArch64_AM::ShiftExtendType getShiftTypeForNode(SDValue N) {
> switch (N.getOpcode()) {
> default:
> - return ARM64_AM::InvalidShiftExtend;
> + return AArch64_AM::InvalidShiftExtend;
> case ISD::SHL:
> - return ARM64_AM::LSL;
> + return AArch64_AM::LSL;
> case ISD::SRL:
> - return ARM64_AM::LSR;
> + return AArch64_AM::LSR;
> case ISD::SRA:
> - return ARM64_AM::ASR;
> + return AArch64_AM::ASR;
> case ISD::ROTR:
> - return ARM64_AM::ROR;
> + return AArch64_AM::ROR;
> }
> }
>
> /// \brief Determine whether it is worth folding V into an extended register.
> -bool ARM64DAGToDAGISel::isWorthFolding(SDValue V) const {
> +bool AArch64DAGToDAGISel::isWorthFolding(SDValue V) const {
> // it hurts if a value is used at least twice, unless we are optimizing
> // for code size.
> if (ForCodeSize || V.hasOneUse())
> @@ -317,18 +318,18 @@ bool ARM64DAGToDAGISel::isWorthFolding(S
> /// instructions allow the shifted register to be rotated, but the arithmetic
> /// instructions do not. The AllowROR parameter specifies whether ROR is
> /// supported.
> -bool ARM64DAGToDAGISel::SelectShiftedRegister(SDValue N, bool AllowROR,
> - SDValue &Reg, SDValue &Shift) {
> - ARM64_AM::ShiftExtendType ShType = getShiftTypeForNode(N);
> - if (ShType == ARM64_AM::InvalidShiftExtend)
> +bool AArch64DAGToDAGISel::SelectShiftedRegister(SDValue N, bool AllowROR,
> + SDValue &Reg, SDValue &Shift) {
> + AArch64_AM::ShiftExtendType ShType = getShiftTypeForNode(N);
> + if (ShType == AArch64_AM::InvalidShiftExtend)
> return false;
> - if (!AllowROR && ShType == ARM64_AM::ROR)
> + if (!AllowROR && ShType == AArch64_AM::ROR)
> return false;
>
> if (ConstantSDNode *RHS = dyn_cast<ConstantSDNode>(N.getOperand(1))) {
> unsigned BitSize = N.getValueType().getSizeInBits();
> unsigned Val = RHS->getZExtValue() & (BitSize - 1);
> - unsigned ShVal = ARM64_AM::getShifterImm(ShType, Val);
> + unsigned ShVal = AArch64_AM::getShifterImm(ShType, Val);
>
> Reg = N.getOperand(0);
> Shift = CurDAG->getTargetConstant(ShVal, MVT::i32);
> @@ -340,7 +341,7 @@ bool ARM64DAGToDAGISel::SelectShiftedReg
>
> /// getExtendTypeForNode - Translate an extend node to the corresponding
> /// ExtendType value.
> -static ARM64_AM::ShiftExtendType
> +static AArch64_AM::ShiftExtendType
> getExtendTypeForNode(SDValue N, bool IsLoadStore = false) {
> if (N.getOpcode() == ISD::SIGN_EXTEND ||
> N.getOpcode() == ISD::SIGN_EXTEND_INREG) {
> @@ -351,51 +352,51 @@ getExtendTypeForNode(SDValue N, bool IsL
> SrcVT = N.getOperand(0).getValueType();
>
> if (!IsLoadStore && SrcVT == MVT::i8)
> - return ARM64_AM::SXTB;
> + return AArch64_AM::SXTB;
> else if (!IsLoadStore && SrcVT == MVT::i16)
> - return ARM64_AM::SXTH;
> + return AArch64_AM::SXTH;
> else if (SrcVT == MVT::i32)
> - return ARM64_AM::SXTW;
> + return AArch64_AM::SXTW;
> assert(SrcVT != MVT::i64 && "extend from 64-bits?");
>
> - return ARM64_AM::InvalidShiftExtend;
> + return AArch64_AM::InvalidShiftExtend;
> } else if (N.getOpcode() == ISD::ZERO_EXTEND ||
> N.getOpcode() == ISD::ANY_EXTEND) {
> EVT SrcVT = N.getOperand(0).getValueType();
> if (!IsLoadStore && SrcVT == MVT::i8)
> - return ARM64_AM::UXTB;
> + return AArch64_AM::UXTB;
> else if (!IsLoadStore && SrcVT == MVT::i16)
> - return ARM64_AM::UXTH;
> + return AArch64_AM::UXTH;
> else if (SrcVT == MVT::i32)
> - return ARM64_AM::UXTW;
> + return AArch64_AM::UXTW;
> assert(SrcVT != MVT::i64 && "extend from 64-bits?");
>
> - return ARM64_AM::InvalidShiftExtend;
> + return AArch64_AM::InvalidShiftExtend;
> } else if (N.getOpcode() == ISD::AND) {
> ConstantSDNode *CSD = dyn_cast<ConstantSDNode>(N.getOperand(1));
> if (!CSD)
> - return ARM64_AM::InvalidShiftExtend;
> + return AArch64_AM::InvalidShiftExtend;
> uint64_t AndMask = CSD->getZExtValue();
>
> switch (AndMask) {
> default:
> - return ARM64_AM::InvalidShiftExtend;
> + return AArch64_AM::InvalidShiftExtend;
> case 0xFF:
> - return !IsLoadStore ? ARM64_AM::UXTB : ARM64_AM::InvalidShiftExtend;
> + return !IsLoadStore ? AArch64_AM::UXTB : AArch64_AM::InvalidShiftExtend;
> case 0xFFFF:
> - return !IsLoadStore ? ARM64_AM::UXTH : ARM64_AM::InvalidShiftExtend;
> + return !IsLoadStore ? AArch64_AM::UXTH : AArch64_AM::InvalidShiftExtend;
> case 0xFFFFFFFF:
> - return ARM64_AM::UXTW;
> + return AArch64_AM::UXTW;
> }
> }
>
> - return ARM64_AM::InvalidShiftExtend;
> + return AArch64_AM::InvalidShiftExtend;
> }
>
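getExtendTypeForNode's AND-mask case reads a bit awkwardly as a diff, so for reference the mapping it implements is just mask -> implicit zero-extend width: 0xFF is UXTB, 0xFFFF is UXTH, 0xFFFFFFFF is UXTW, and the byte/halfword forms are rejected for load/store addressing. A small standalone restatement (enum and names are mine):

#include <cassert>
#include <cstdint>

enum class Extend { Invalid, UXTB, UXTH, UXTW };

// (x & mask) is treated as a zero-extend of the corresponding width.
Extend extendFromAndMask(uint64_t mask, bool isLoadStore) {
  switch (mask) {
  case 0xFF:       return isLoadStore ? Extend::Invalid : Extend::UXTB;
  case 0xFFFF:     return isLoadStore ? Extend::Invalid : Extend::UXTH;
  case 0xFFFFFFFF: return Extend::UXTW;
  default:         return Extend::Invalid;
  }
}

int main() {
  assert(extendFromAndMask(0xFF, false) == Extend::UXTB);
  assert(extendFromAndMask(0xFF, true) == Extend::Invalid);
  assert(extendFromAndMask(0xFFFFFFFF, true) == Extend::UXTW);
  assert(extendFromAndMask(0x7F, false) == Extend::Invalid);
  return 0;
}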
> // Helper for SelectMLAV64LaneV128 - Recognize high lane extracts.
> static bool checkHighLaneIndex(SDNode *DL, SDValue &LaneOp, int &LaneIdx) {
> - if (DL->getOpcode() != ARM64ISD::DUPLANE16 &&
> - DL->getOpcode() != ARM64ISD::DUPLANE32)
> + if (DL->getOpcode() != AArch64ISD::DUPLANE16 &&
> + DL->getOpcode() != AArch64ISD::DUPLANE32)
> return false;
>
> SDValue SV = DL->getOperand(0);
> @@ -428,10 +429,10 @@ static bool checkV64LaneV128(SDValue Op0
> return true;
> }
>
> -/// SelectMLAV64LaneV128 - ARM64 supports vector MLAs where one multiplicand is
> -/// a lane in the upper half of a 128-bit vector. Recognize and select this so
> -/// that we don't emit unnecessary lane extracts.
> -SDNode *ARM64DAGToDAGISel::SelectMLAV64LaneV128(SDNode *N) {
> +/// SelectMLAV64LaneV128 - AArch64 supports vector MLAs where one multiplicand
> +/// is a lane in the upper half of a 128-bit vector. Recognize and select this
> +/// so that we don't emit unnecessary lane extracts.
> +SDNode *AArch64DAGToDAGISel::SelectMLAV64LaneV128(SDNode *N) {
> SDValue Op0 = N->getOperand(0);
> SDValue Op1 = N->getOperand(1);
> SDValue MLAOp1; // Will hold ordinary multiplicand for MLA.
> @@ -458,23 +459,23 @@ SDNode *ARM64DAGToDAGISel::SelectMLAV64L
> default:
> llvm_unreachable("Unrecognized MLA.");
> case MVT::v4i16:
> - MLAOpc = ARM64::MLAv4i16_indexed;
> + MLAOpc = AArch64::MLAv4i16_indexed;
> break;
> case MVT::v8i16:
> - MLAOpc = ARM64::MLAv8i16_indexed;
> + MLAOpc = AArch64::MLAv8i16_indexed;
> break;
> case MVT::v2i32:
> - MLAOpc = ARM64::MLAv2i32_indexed;
> + MLAOpc = AArch64::MLAv2i32_indexed;
> break;
> case MVT::v4i32:
> - MLAOpc = ARM64::MLAv4i32_indexed;
> + MLAOpc = AArch64::MLAv4i32_indexed;
> break;
> }
>
> return CurDAG->getMachineNode(MLAOpc, SDLoc(N), N->getValueType(0), Ops);
> }
>
> -SDNode *ARM64DAGToDAGISel::SelectMULLV64LaneV128(unsigned IntNo, SDNode *N) {
> +SDNode *AArch64DAGToDAGISel::SelectMULLV64LaneV128(unsigned IntNo, SDNode *N) {
> SDValue SMULLOp0;
> SDValue SMULLOp1;
> int LaneIdx;
> @@ -489,26 +490,26 @@ SDNode *ARM64DAGToDAGISel::SelectMULLV64
>
> unsigned SMULLOpc = ~0U;
>
> - if (IntNo == Intrinsic::arm64_neon_smull) {
> + if (IntNo == Intrinsic::aarch64_neon_smull) {
> switch (N->getSimpleValueType(0).SimpleTy) {
> default:
> llvm_unreachable("Unrecognized SMULL.");
> case MVT::v4i32:
> - SMULLOpc = ARM64::SMULLv4i16_indexed;
> + SMULLOpc = AArch64::SMULLv4i16_indexed;
> break;
> case MVT::v2i64:
> - SMULLOpc = ARM64::SMULLv2i32_indexed;
> + SMULLOpc = AArch64::SMULLv2i32_indexed;
> break;
> }
> - } else if (IntNo == Intrinsic::arm64_neon_umull) {
> + } else if (IntNo == Intrinsic::aarch64_neon_umull) {
> switch (N->getSimpleValueType(0).SimpleTy) {
> default:
> llvm_unreachable("Unrecognized SMULL.");
> case MVT::v4i32:
> - SMULLOpc = ARM64::UMULLv4i16_indexed;
> + SMULLOpc = AArch64::UMULLv4i16_indexed;
> break;
> case MVT::v2i64:
> - SMULLOpc = ARM64::UMULLv2i32_indexed;
> + SMULLOpc = AArch64::UMULLv2i32_indexed;
> break;
> }
> } else
> @@ -525,7 +526,7 @@ static SDValue narrowIfNeeded(SelectionD
> if (N.getValueType() == MVT::i32)
> return N;
>
> - SDValue SubReg = CurDAG->getTargetConstant(ARM64::sub_32, MVT::i32);
> + SDValue SubReg = CurDAG->getTargetConstant(AArch64::sub_32, MVT::i32);
> MachineSDNode *Node = CurDAG->getMachineNode(TargetOpcode::EXTRACT_SUBREG,
> SDLoc(N), MVT::i32, N, SubReg);
> return SDValue(Node, 0);
> @@ -534,10 +535,10 @@ static SDValue narrowIfNeeded(SelectionD
>
> /// SelectArithExtendedRegister - Select a "extended register" operand. This
> /// operand folds in an extend followed by an optional left shift.
> -bool ARM64DAGToDAGISel::SelectArithExtendedRegister(SDValue N, SDValue &Reg,
> - SDValue &Shift) {
> +bool AArch64DAGToDAGISel::SelectArithExtendedRegister(SDValue N, SDValue &Reg,
> + SDValue &Shift) {
> unsigned ShiftVal = 0;
> - ARM64_AM::ShiftExtendType Ext;
> + AArch64_AM::ShiftExtendType Ext;
>
> if (N.getOpcode() == ISD::SHL) {
> ConstantSDNode *CSD = dyn_cast<ConstantSDNode>(N.getOperand(1));
> @@ -548,24 +549,24 @@ bool ARM64DAGToDAGISel::SelectArithExten
> return false;
>
> Ext = getExtendTypeForNode(N.getOperand(0));
> - if (Ext == ARM64_AM::InvalidShiftExtend)
> + if (Ext == AArch64_AM::InvalidShiftExtend)
> return false;
>
> Reg = N.getOperand(0).getOperand(0);
> } else {
> Ext = getExtendTypeForNode(N);
> - if (Ext == ARM64_AM::InvalidShiftExtend)
> + if (Ext == AArch64_AM::InvalidShiftExtend)
> return false;
>
> Reg = N.getOperand(0);
> }
>
> - // ARM64 mandates that the RHS of the operation must use the smallest
> + // AArch64 mandates that the RHS of the operation must use the smallest
> // register class that could contain the size being extended from. Thus,
> // if we're folding a (sext i8), we need the RHS to be a GPR32, even though
> // there might not be an actual 32-bit value in the program. We can
> // (harmlessly) synthesize one by injecting an EXTRACT_SUBREG here.
> - assert(Ext != ARM64_AM::UXTX && Ext != ARM64_AM::SXTX);
> + assert(Ext != AArch64_AM::UXTX && Ext != AArch64_AM::SXTX);
> Reg = narrowIfNeeded(CurDAG, Reg);
> Shift = CurDAG->getTargetConstant(getArithExtendImm(Ext, ShiftVal), MVT::i32);
> return isWorthFolding(N);
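On the "smallest register class" comment just above: the operand SelectArithExtendedRegister folds is the usual arithmetic extended-register form, i.e. the RHS is viewed through its smallest containing width, extended to 64 bits, then shifted left by 0..4, which is why narrowIfNeeded injects the sub_32 EXTRACT_SUBREG. A sketch of those operand semantics as I read them (widths, signs, and example values are mine, not from the patch):

#include <cassert>
#include <cstdint>

// Value contributed by an extended-register operand: extend the low
// 'widthBits' of Rm to 64 bits, then shift left by 0..4.
uint64_t arithExtendedOperand(uint64_t rm, unsigned widthBits, bool isSigned,
                              unsigned shift) {
  uint64_t mask = (widthBits == 64) ? ~0ULL : ((1ULL << widthBits) - 1);
  uint64_t v = rm & mask;
  if (isSigned && widthBits < 64 && ((v >> (widthBits - 1)) & 1))
    v |= ~mask;  // sign-extend
  return v << shift;
}

int main() {
  // add x0, x1, w2, sxtb #2  with w2 = 0x80 (-128 as i8): contributes -512.
  assert(arithExtendedOperand(0x80, 8, true, 2) == uint64_t(-512));
  // add x0, x1, w2, uxth     with w2 = 0x1ffff: only the low 16 bits count.
  assert(arithExtendedOperand(0x1ffff, 16, false, 0) == 0xffff);
  return 0;
}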
> @@ -574,7 +575,7 @@ bool ARM64DAGToDAGISel::SelectArithExten
> /// SelectAddrModeIndexed - Select a "register plus scaled unsigned 12-bit
> /// immediate" address. The "Size" argument is the size in bytes of the memory
> /// reference, which determines the scale.
> -bool ARM64DAGToDAGISel::SelectAddrModeIndexed(SDValue N, unsigned Size,
> +bool AArch64DAGToDAGISel::SelectAddrModeIndexed(SDValue N, unsigned Size,
> SDValue &Base, SDValue &OffImm) {
> const TargetLowering *TLI = getTargetLowering();
> if (N.getOpcode() == ISD::FrameIndex) {
> @@ -584,7 +585,7 @@ bool ARM64DAGToDAGISel::SelectAddrModeIn
> return true;
> }
>
> - if (N.getOpcode() == ARM64ISD::ADDlow) {
> + if (N.getOpcode() == AArch64ISD::ADDlow) {
> GlobalAddressSDNode *GAN =
> dyn_cast<GlobalAddressSDNode>(N.getOperand(1).getNode());
> Base = N.getOperand(0);
> @@ -637,8 +638,9 @@ bool ARM64DAGToDAGISel::SelectAddrModeIn
> /// is not valid for a scaled immediate addressing mode. The "Size" argument
> /// is the size in bytes of the memory reference, which is needed here to know
> /// what is valid for a scaled immediate.
> -bool ARM64DAGToDAGISel::SelectAddrModeUnscaled(SDValue N, unsigned Size,
> - SDValue &Base, SDValue &OffImm) {
> +bool AArch64DAGToDAGISel::SelectAddrModeUnscaled(SDValue N, unsigned Size,
> + SDValue &Base,
> + SDValue &OffImm) {
> if (!CurDAG->isBaseWithConstantOffset(N))
> return false;
> if (ConstantSDNode *RHS = dyn_cast<ConstantSDNode>(N.getOperand(1))) {
> @@ -662,7 +664,7 @@ bool ARM64DAGToDAGISel::SelectAddrModeUn
> }
>
> static SDValue Widen(SelectionDAG *CurDAG, SDValue N) {
> - SDValue SubReg = CurDAG->getTargetConstant(ARM64::sub_32, MVT::i32);
> + SDValue SubReg = CurDAG->getTargetConstant(AArch64::sub_32, MVT::i32);
> SDValue ImpDef = SDValue(
> CurDAG->getMachineNode(TargetOpcode::IMPLICIT_DEF, SDLoc(N), MVT::i64),
> 0);
> @@ -673,21 +675,22 @@ static SDValue Widen(SelectionDAG *CurDA
>
> /// \brief Check if the given SHL node (\p N), can be used to form an
> /// extended register for an addressing mode.
> -bool ARM64DAGToDAGISel::SelectExtendedSHL(SDValue N, unsigned Size,
> - bool WantExtend, SDValue &Offset,
> - SDValue &SignExtend) {
> +bool AArch64DAGToDAGISel::SelectExtendedSHL(SDValue N, unsigned Size,
> + bool WantExtend, SDValue &Offset,
> + SDValue &SignExtend) {
> assert(N.getOpcode() == ISD::SHL && "Invalid opcode.");
> ConstantSDNode *CSD = dyn_cast<ConstantSDNode>(N.getOperand(1));
> if (!CSD || (CSD->getZExtValue() & 0x7) != CSD->getZExtValue())
> return false;
>
> if (WantExtend) {
> - ARM64_AM::ShiftExtendType Ext = getExtendTypeForNode(N.getOperand(0), true);
> - if (Ext == ARM64_AM::InvalidShiftExtend)
> + AArch64_AM::ShiftExtendType Ext =
> + getExtendTypeForNode(N.getOperand(0), true);
> + if (Ext == AArch64_AM::InvalidShiftExtend)
> return false;
>
> Offset = narrowIfNeeded(CurDAG, N.getOperand(0).getOperand(0));
> - SignExtend = CurDAG->getTargetConstant(Ext == ARM64_AM::SXTW, MVT::i32);
> + SignExtend = CurDAG->getTargetConstant(Ext == AArch64_AM::SXTW, MVT::i32);
> } else {
> Offset = N.getOperand(0);
> SignExtend = CurDAG->getTargetConstant(0, MVT::i32);
> @@ -705,10 +708,10 @@ bool ARM64DAGToDAGISel::SelectExtendedSH
> return false;
> }
>
> -bool ARM64DAGToDAGISel::SelectAddrModeWRO(SDValue N, unsigned Size,
> - SDValue &Base, SDValue &Offset,
> - SDValue &SignExtend,
> - SDValue &DoShift) {
> +bool AArch64DAGToDAGISel::SelectAddrModeWRO(SDValue N, unsigned Size,
> + SDValue &Base, SDValue &Offset,
> + SDValue &SignExtend,
> + SDValue &DoShift) {
> if (N.getOpcode() != ISD::ADD)
> return false;
> SDValue LHS = N.getOperand(0);
> @@ -750,23 +753,25 @@ bool ARM64DAGToDAGISel::SelectAddrModeWR
> // There was no shift, whatever else we find.
> DoShift = CurDAG->getTargetConstant(false, MVT::i32);
>
> - ARM64_AM::ShiftExtendType Ext = ARM64_AM::InvalidShiftExtend;
> + AArch64_AM::ShiftExtendType Ext = AArch64_AM::InvalidShiftExtend;
> // Try to match an unshifted extend on the LHS.
> if (IsExtendedRegisterWorthFolding &&
> - (Ext = getExtendTypeForNode(LHS, true)) != ARM64_AM::InvalidShiftExtend) {
> + (Ext = getExtendTypeForNode(LHS, true)) !=
> + AArch64_AM::InvalidShiftExtend) {
> Base = RHS;
> Offset = narrowIfNeeded(CurDAG, LHS.getOperand(0));
> - SignExtend = CurDAG->getTargetConstant(Ext == ARM64_AM::SXTW, MVT::i32);
> + SignExtend = CurDAG->getTargetConstant(Ext == AArch64_AM::SXTW, MVT::i32);
> if (isWorthFolding(LHS))
> return true;
> }
>
> // Try to match an unshifted extend on the RHS.
> if (IsExtendedRegisterWorthFolding &&
> - (Ext = getExtendTypeForNode(RHS, true)) != ARM64_AM::InvalidShiftExtend) {
> + (Ext = getExtendTypeForNode(RHS, true)) !=
> + AArch64_AM::InvalidShiftExtend) {
> Base = LHS;
> Offset = narrowIfNeeded(CurDAG, RHS.getOperand(0));
> - SignExtend = CurDAG->getTargetConstant(Ext == ARM64_AM::SXTW, MVT::i32);
> + SignExtend = CurDAG->getTargetConstant(Ext == AArch64_AM::SXTW, MVT::i32);
> if (isWorthFolding(RHS))
> return true;
> }
> @@ -774,10 +779,10 @@ bool ARM64DAGToDAGISel::SelectAddrModeWR
> return false;
> }
>
> -bool ARM64DAGToDAGISel::SelectAddrModeXRO(SDValue N, unsigned Size,
> - SDValue &Base, SDValue &Offset,
> - SDValue &SignExtend,
> - SDValue &DoShift) {
> +bool AArch64DAGToDAGISel::SelectAddrModeXRO(SDValue N, unsigned Size,
> + SDValue &Base, SDValue &Offset,
> + SDValue &SignExtend,
> + SDValue &DoShift) {
> if (N.getOpcode() != ISD::ADD)
> return false;
> SDValue LHS = N.getOperand(0);
> @@ -825,27 +830,27 @@ bool ARM64DAGToDAGISel::SelectAddrModeXR
> return true;
> }
>
> -SDValue ARM64DAGToDAGISel::createDTuple(ArrayRef<SDValue> Regs) {
> - static unsigned RegClassIDs[] = { ARM64::DDRegClassID, ARM64::DDDRegClassID,
> - ARM64::DDDDRegClassID };
> - static unsigned SubRegs[] = { ARM64::dsub0, ARM64::dsub1,
> - ARM64::dsub2, ARM64::dsub3 };
> +SDValue AArch64DAGToDAGISel::createDTuple(ArrayRef<SDValue> Regs) {
> + static unsigned RegClassIDs[] = {
> + AArch64::DDRegClassID, AArch64::DDDRegClassID, AArch64::DDDDRegClassID};
> + static unsigned SubRegs[] = { AArch64::dsub0, AArch64::dsub1,
> + AArch64::dsub2, AArch64::dsub3 };
>
> return createTuple(Regs, RegClassIDs, SubRegs);
> }
>
> -SDValue ARM64DAGToDAGISel::createQTuple(ArrayRef<SDValue> Regs) {
> - static unsigned RegClassIDs[] = { ARM64::QQRegClassID, ARM64::QQQRegClassID,
> - ARM64::QQQQRegClassID };
> - static unsigned SubRegs[] = { ARM64::qsub0, ARM64::qsub1,
> - ARM64::qsub2, ARM64::qsub3 };
> +SDValue AArch64DAGToDAGISel::createQTuple(ArrayRef<SDValue> Regs) {
> + static unsigned RegClassIDs[] = {
> + AArch64::QQRegClassID, AArch64::QQQRegClassID, AArch64::QQQQRegClassID};
> + static unsigned SubRegs[] = { AArch64::qsub0, AArch64::qsub1,
> + AArch64::qsub2, AArch64::qsub3 };
>
> return createTuple(Regs, RegClassIDs, SubRegs);
> }
>
> -SDValue ARM64DAGToDAGISel::createTuple(ArrayRef<SDValue> Regs,
> - unsigned RegClassIDs[],
> - unsigned SubRegs[]) {
> +SDValue AArch64DAGToDAGISel::createTuple(ArrayRef<SDValue> Regs,
> + unsigned RegClassIDs[],
> + unsigned SubRegs[]) {
> // There's no special register-class for a vector-list of 1 element: it's just
> // a vector.
> if (Regs.size() == 1)
> @@ -872,8 +877,8 @@ SDValue ARM64DAGToDAGISel::createTuple(A
> return SDValue(N, 0);
> }
>
> -SDNode *ARM64DAGToDAGISel::SelectTable(SDNode *N, unsigned NumVecs,
> - unsigned Opc, bool isExt) {
> +SDNode *AArch64DAGToDAGISel::SelectTable(SDNode *N, unsigned NumVecs,
> + unsigned Opc, bool isExt) {
> SDLoc dl(N);
> EVT VT = N->getValueType(0);
>
> @@ -893,7 +898,7 @@ SDNode *ARM64DAGToDAGISel::SelectTable(S
> return CurDAG->getMachineNode(Opc, dl, VT, Ops);
> }
>
> -SDNode *ARM64DAGToDAGISel::SelectIndexedLoad(SDNode *N, bool &Done) {
> +SDNode *AArch64DAGToDAGISel::SelectIndexedLoad(SDNode *N, bool &Done) {
> LoadSDNode *LD = cast<LoadSDNode>(N);
> if (LD->isUnindexed())
> return nullptr;
> @@ -910,14 +915,14 @@ SDNode *ARM64DAGToDAGISel::SelectIndexed
> ISD::LoadExtType ExtType = LD->getExtensionType();
> bool InsertTo64 = false;
> if (VT == MVT::i64)
> - Opcode = IsPre ? ARM64::LDRXpre : ARM64::LDRXpost;
> + Opcode = IsPre ? AArch64::LDRXpre : AArch64::LDRXpost;
> else if (VT == MVT::i32) {
> if (ExtType == ISD::NON_EXTLOAD)
> - Opcode = IsPre ? ARM64::LDRWpre : ARM64::LDRWpost;
> + Opcode = IsPre ? AArch64::LDRWpre : AArch64::LDRWpost;
> else if (ExtType == ISD::SEXTLOAD)
> - Opcode = IsPre ? ARM64::LDRSWpre : ARM64::LDRSWpost;
> + Opcode = IsPre ? AArch64::LDRSWpre : AArch64::LDRSWpost;
> else {
> - Opcode = IsPre ? ARM64::LDRWpre : ARM64::LDRWpost;
> + Opcode = IsPre ? AArch64::LDRWpre : AArch64::LDRWpost;
> InsertTo64 = true;
> // The result of the load is only i32. It's the subreg_to_reg that makes
> // it into an i64.
> @@ -926,11 +931,11 @@ SDNode *ARM64DAGToDAGISel::SelectIndexed
> } else if (VT == MVT::i16) {
> if (ExtType == ISD::SEXTLOAD) {
> if (DstVT == MVT::i64)
> - Opcode = IsPre ? ARM64::LDRSHXpre : ARM64::LDRSHXpost;
> + Opcode = IsPre ? AArch64::LDRSHXpre : AArch64::LDRSHXpost;
> else
> - Opcode = IsPre ? ARM64::LDRSHWpre : ARM64::LDRSHWpost;
> + Opcode = IsPre ? AArch64::LDRSHWpre : AArch64::LDRSHWpost;
> } else {
> - Opcode = IsPre ? ARM64::LDRHHpre : ARM64::LDRHHpost;
> + Opcode = IsPre ? AArch64::LDRHHpre : AArch64::LDRHHpost;
> InsertTo64 = DstVT == MVT::i64;
> // The result of the load is only i32. It's the subreg_to_reg that makes
> // it into an i64.
> @@ -939,22 +944,22 @@ SDNode *ARM64DAGToDAGISel::SelectIndexed
> } else if (VT == MVT::i8) {
> if (ExtType == ISD::SEXTLOAD) {
> if (DstVT == MVT::i64)
> - Opcode = IsPre ? ARM64::LDRSBXpre : ARM64::LDRSBXpost;
> + Opcode = IsPre ? AArch64::LDRSBXpre : AArch64::LDRSBXpost;
> else
> - Opcode = IsPre ? ARM64::LDRSBWpre : ARM64::LDRSBWpost;
> + Opcode = IsPre ? AArch64::LDRSBWpre : AArch64::LDRSBWpost;
> } else {
> - Opcode = IsPre ? ARM64::LDRBBpre : ARM64::LDRBBpost;
> + Opcode = IsPre ? AArch64::LDRBBpre : AArch64::LDRBBpost;
> InsertTo64 = DstVT == MVT::i64;
> // The result of the load is only i32. It's the subreg_to_reg that makes
> // it into an i64.
> DstVT = MVT::i32;
> }
> } else if (VT == MVT::f32) {
> - Opcode = IsPre ? ARM64::LDRSpre : ARM64::LDRSpost;
> + Opcode = IsPre ? AArch64::LDRSpre : AArch64::LDRSpost;
> } else if (VT == MVT::f64 || VT.is64BitVector()) {
> - Opcode = IsPre ? ARM64::LDRDpre : ARM64::LDRDpost;
> + Opcode = IsPre ? AArch64::LDRDpre : AArch64::LDRDpost;
> } else if (VT.is128BitVector()) {
> - Opcode = IsPre ? ARM64::LDRQpre : ARM64::LDRQpost;
> + Opcode = IsPre ? AArch64::LDRQpre : AArch64::LDRQpost;
> } else
> return nullptr;
> SDValue Chain = LD->getChain();
> @@ -969,11 +974,11 @@ SDNode *ARM64DAGToDAGISel::SelectIndexed
> Done = true;
> SDValue LoadedVal = SDValue(Res, 1);
> if (InsertTo64) {
> - SDValue SubReg = CurDAG->getTargetConstant(ARM64::sub_32, MVT::i32);
> + SDValue SubReg = CurDAG->getTargetConstant(AArch64::sub_32, MVT::i32);
> LoadedVal =
> - SDValue(CurDAG->getMachineNode(ARM64::SUBREG_TO_REG, SDLoc(N), MVT::i64,
> - CurDAG->getTargetConstant(0, MVT::i64),
> - LoadedVal, SubReg),
> + SDValue(CurDAG->getMachineNode(
> + AArch64::SUBREG_TO_REG, SDLoc(N), MVT::i64,
> + CurDAG->getTargetConstant(0, MVT::i64), LoadedVal, SubReg),
> 0);
> }
>
> @@ -984,8 +989,8 @@ SDNode *ARM64DAGToDAGISel::SelectIndexed
> return nullptr;
> }
>
> -SDNode *ARM64DAGToDAGISel::SelectLoad(SDNode *N, unsigned NumVecs, unsigned Opc,
> - unsigned SubRegIdx) {
> +SDNode *AArch64DAGToDAGISel::SelectLoad(SDNode *N, unsigned NumVecs,
> + unsigned Opc, unsigned SubRegIdx) {
> SDLoc dl(N);
> EVT VT = N->getValueType(0);
> SDValue Chain = N->getOperand(0);
> @@ -1008,8 +1013,8 @@ SDNode *ARM64DAGToDAGISel::SelectLoad(SD
> return nullptr;
> }
>
> -SDNode *ARM64DAGToDAGISel::SelectPostLoad(SDNode *N, unsigned NumVecs,
> - unsigned Opc, unsigned SubRegIdx) {
> +SDNode *AArch64DAGToDAGISel::SelectPostLoad(SDNode *N, unsigned NumVecs,
> + unsigned Opc, unsigned SubRegIdx) {
> SDLoc dl(N);
> EVT VT = N->getValueType(0);
> SDValue Chain = N->getOperand(0);
> @@ -1043,8 +1048,8 @@ SDNode *ARM64DAGToDAGISel::SelectPostLoa
> return nullptr;
> }
>
> -SDNode *ARM64DAGToDAGISel::SelectStore(SDNode *N, unsigned NumVecs,
> - unsigned Opc) {
> +SDNode *AArch64DAGToDAGISel::SelectStore(SDNode *N, unsigned NumVecs,
> + unsigned Opc) {
> SDLoc dl(N);
> EVT VT = N->getOperand(2)->getValueType(0);
>
> @@ -1062,8 +1067,8 @@ SDNode *ARM64DAGToDAGISel::SelectStore(S
> return St;
> }
>
> -SDNode *ARM64DAGToDAGISel::SelectPostStore(SDNode *N, unsigned NumVecs,
> - unsigned Opc) {
> +SDNode *AArch64DAGToDAGISel::SelectPostStore(SDNode *N, unsigned NumVecs,
> + unsigned Opc) {
> SDLoc dl(N);
> EVT VT = N->getOperand(2)->getValueType(0);
> SmallVector<EVT, 2> ResTys;
> @@ -1102,7 +1107,7 @@ public:
>
> SDValue Undef =
> SDValue(DAG.getMachineNode(TargetOpcode::IMPLICIT_DEF, DL, WideTy), 0);
> - return DAG.getTargetInsertSubreg(ARM64::dsub, DL, WideTy, Undef, V64Reg);
> + return DAG.getTargetInsertSubreg(AArch64::dsub, DL, WideTy, Undef, V64Reg);
> }
> };
>
> @@ -1114,12 +1119,12 @@ static SDValue NarrowVector(SDValue V128
> MVT EltTy = VT.getVectorElementType().getSimpleVT();
> MVT NarrowTy = MVT::getVectorVT(EltTy, WideSize / 2);
>
> - return DAG.getTargetExtractSubreg(ARM64::dsub, SDLoc(V128Reg), NarrowTy,
> + return DAG.getTargetExtractSubreg(AArch64::dsub, SDLoc(V128Reg), NarrowTy,
> V128Reg);
> }
>
> -SDNode *ARM64DAGToDAGISel::SelectLoadLane(SDNode *N, unsigned NumVecs,
> - unsigned Opc) {
> +SDNode *AArch64DAGToDAGISel::SelectLoadLane(SDNode *N, unsigned NumVecs,
> + unsigned Opc) {
> SDLoc dl(N);
> EVT VT = N->getValueType(0);
> bool Narrow = VT.getSizeInBits() == 64;
> @@ -1149,8 +1154,8 @@ SDNode *ARM64DAGToDAGISel::SelectLoadLan
> SDValue SuperReg = SDValue(Ld, 0);
>
> EVT WideVT = RegSeq.getOperand(1)->getValueType(0);
> - static unsigned QSubs[] = { ARM64::qsub0, ARM64::qsub1, ARM64::qsub2,
> - ARM64::qsub3 };
> + static unsigned QSubs[] = { AArch64::qsub0, AArch64::qsub1, AArch64::qsub2,
> + AArch64::qsub3 };
> for (unsigned i = 0; i < NumVecs; ++i) {
> SDValue NV = CurDAG->getTargetExtractSubreg(QSubs[i], dl, WideVT, SuperReg);
> if (Narrow)
> @@ -1163,8 +1168,8 @@ SDNode *ARM64DAGToDAGISel::SelectLoadLan
> return Ld;
> }
>
> -SDNode *ARM64DAGToDAGISel::SelectPostLoadLane(SDNode *N, unsigned NumVecs,
> - unsigned Opc) {
> +SDNode *AArch64DAGToDAGISel::SelectPostLoadLane(SDNode *N, unsigned NumVecs,
> + unsigned Opc) {
> SDLoc dl(N);
> EVT VT = N->getValueType(0);
> bool Narrow = VT.getSizeInBits() == 64;
> @@ -1204,8 +1209,8 @@ SDNode *ARM64DAGToDAGISel::SelectPostLoa
> Narrow ? NarrowVector(SuperReg, *CurDAG) : SuperReg);
> } else {
> EVT WideVT = RegSeq.getOperand(1)->getValueType(0);
> - static unsigned QSubs[] = { ARM64::qsub0, ARM64::qsub1, ARM64::qsub2,
> - ARM64::qsub3 };
> + static unsigned QSubs[] = { AArch64::qsub0, AArch64::qsub1, AArch64::qsub2,
> + AArch64::qsub3 };
> for (unsigned i = 0; i < NumVecs; ++i) {
> SDValue NV = CurDAG->getTargetExtractSubreg(QSubs[i], dl, WideVT,
> SuperReg);
> @@ -1221,8 +1226,8 @@ SDNode *ARM64DAGToDAGISel::SelectPostLoa
> return Ld;
> }
>
> -SDNode *ARM64DAGToDAGISel::SelectStoreLane(SDNode *N, unsigned NumVecs,
> - unsigned Opc) {
> +SDNode *AArch64DAGToDAGISel::SelectStoreLane(SDNode *N, unsigned NumVecs,
> + unsigned Opc) {
> SDLoc dl(N);
> EVT VT = N->getOperand(2)->getValueType(0);
> bool Narrow = VT.getSizeInBits() == 64;
> @@ -1254,8 +1259,8 @@ SDNode *ARM64DAGToDAGISel::SelectStoreLa
> return St;
> }
>
> -SDNode *ARM64DAGToDAGISel::SelectPostStoreLane(SDNode *N, unsigned NumVecs,
> - unsigned Opc) {
> +SDNode *AArch64DAGToDAGISel::SelectPostStoreLane(SDNode *N, unsigned NumVecs,
> + unsigned Opc) {
> SDLoc dl(N);
> EVT VT = N->getOperand(2)->getValueType(0);
> bool Narrow = VT.getSizeInBits() == 64;
> @@ -1374,7 +1379,7 @@ static bool isBitfieldExtractOpFromAnd(S
> // operation.
> MSB = MSB > 31 ? 31 : MSB;
>
> - Opc = VT == MVT::i32 ? ARM64::UBFMWri : ARM64::UBFMXri;
> + Opc = VT == MVT::i32 ? AArch64::UBFMWri : AArch64::UBFMXri;
> return true;
> }
>
> @@ -1410,9 +1415,9 @@ static bool isOneBitExtractOpFromShr(SDN
> // Check whether we really have a one bit extract here.
> if (And_mask >> Srl_imm == 0x1) {
> if (N->getValueType(0) == MVT::i32)
> - Opc = ARM64::UBFMWri;
> + Opc = AArch64::UBFMWri;
> else
> - Opc = ARM64::UBFMXri;
> + Opc = AArch64::UBFMXri;
>
> LSB = MSB = Srl_imm;
>
> @@ -1479,9 +1484,9 @@ static bool isBitfieldExtractOpFromShr(S
> MSB = LSB + Width;
> // SRA requires a signed extraction
> if (VT == MVT::i32)
> - Opc = N->getOpcode() == ISD::SRA ? ARM64::SBFMWri : ARM64::UBFMWri;
> + Opc = N->getOpcode() == ISD::SRA ? AArch64::SBFMWri : AArch64::UBFMWri;
> else
> - Opc = N->getOpcode() == ISD::SRA ? ARM64::SBFMXri : ARM64::UBFMXri;
> + Opc = N->getOpcode() == ISD::SRA ? AArch64::SBFMXri : AArch64::UBFMXri;
> return true;
> }
>
> @@ -1509,10 +1514,10 @@ static bool isBitfieldExtractOp(Selectio
> switch (NOpc) {
> default:
> return false;
> - case ARM64::SBFMWri:
> - case ARM64::UBFMWri:
> - case ARM64::SBFMXri:
> - case ARM64::UBFMXri:
> + case AArch64::SBFMWri:
> + case AArch64::UBFMWri:
> + case AArch64::SBFMXri:
> + case AArch64::UBFMXri:
> Opc = NOpc;
> Opd0 = N->getOperand(0);
> LSB = cast<ConstantSDNode>(N->getOperand(1).getNode())->getZExtValue();
> @@ -1523,7 +1528,7 @@ static bool isBitfieldExtractOp(Selectio
> return false;
> }
>
> -SDNode *ARM64DAGToDAGISel::SelectBitfieldExtractOp(SDNode *N) {
> +SDNode *AArch64DAGToDAGISel::SelectBitfieldExtractOp(SDNode *N) {
> unsigned Opc, LSB, MSB;
> SDValue Opd0;
> if (!isBitfieldExtractOp(CurDAG, N, Opc, Opd0, LSB, MSB))
> @@ -1533,12 +1538,12 @@ SDNode *ARM64DAGToDAGISel::SelectBitfiel
>
> // If the bit extract operation is 64bit but the original type is 32bit, we
> // need to add one EXTRACT_SUBREG.
> - if ((Opc == ARM64::SBFMXri || Opc == ARM64::UBFMXri) && VT == MVT::i32) {
> + if ((Opc == AArch64::SBFMXri || Opc == AArch64::UBFMXri) && VT == MVT::i32) {
> SDValue Ops64[] = {Opd0, CurDAG->getTargetConstant(LSB, MVT::i64),
> CurDAG->getTargetConstant(MSB, MVT::i64)};
>
> SDNode *BFM = CurDAG->getMachineNode(Opc, SDLoc(N), MVT::i64, Ops64);
> - SDValue SubReg = CurDAG->getTargetConstant(ARM64::sub_32, MVT::i32);
> + SDValue SubReg = CurDAG->getTargetConstant(AArch64::sub_32, MVT::i32);
> MachineSDNode *Node =
> CurDAG->getMachineNode(TargetOpcode::EXTRACT_SUBREG, SDLoc(N), MVT::i32,
> SDValue(BFM, 0), SubReg);
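[Aside, not part of the patch: a minimal standalone sketch of what the UBFM immediates computed in the hunks above (LSB/MSB, i.e. immr/imms) mean for a shift-based bitfield extract, and how keeping only the low 32 bits mirrors the EXTRACT_SUBREG on AArch64::sub_32 that the selector inserts. The function names here are hypothetical, purely for illustration.]

    #include <cstdint>

    // With immr = lsb and imms = msb (msb >= lsb), UBFM in its "extract" form
    // places bits [lsb, msb] of the source at bit 0 of the destination.
    uint64_t ubfmExtract(uint64_t x, unsigned lsb, unsigned msb) {
      unsigned width = msb - lsb + 1;
      uint64_t mask = (width == 64) ? ~0ULL : ((1ULL << width) - 1);
      return (x >> lsb) & mask;
    }

    // When the original node was i32 but a 64-bit UBFMXri was selected,
    // truncating to the low 32 bits plays the role of the sub_32 subregister copy.
    uint32_t asSub32(uint64_t x) { return static_cast<uint32_t>(x); }
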
> @@ -1588,7 +1593,7 @@ static void getUsefulBitsFromAndWithImme
> unsigned Depth) {
> uint64_t Imm =
> cast<const ConstantSDNode>(Op.getOperand(1).getNode())->getZExtValue();
> - Imm = ARM64_AM::decodeLogicalImmediate(Imm, UsefulBits.getBitWidth());
> + Imm = AArch64_AM::decodeLogicalImmediate(Imm, UsefulBits.getBitWidth());
> UsefulBits &= APInt(UsefulBits.getBitWidth(), Imm);
> getUsefulBits(Op, UsefulBits, Depth + 1);
> }
> @@ -1638,17 +1643,17 @@ static void getUsefulBitsFromOrWithShift
> Mask.clearAllBits();
> Mask.flipAllBits();
>
> - if (ARM64_AM::getShiftType(ShiftTypeAndValue) == ARM64_AM::LSL) {
> + if (AArch64_AM::getShiftType(ShiftTypeAndValue) == AArch64_AM::LSL) {
> // Shift Left
> - uint64_t ShiftAmt = ARM64_AM::getShiftValue(ShiftTypeAndValue);
> + uint64_t ShiftAmt = AArch64_AM::getShiftValue(ShiftTypeAndValue);
> Mask = Mask.shl(ShiftAmt);
> getUsefulBits(Op, Mask, Depth + 1);
> Mask = Mask.lshr(ShiftAmt);
> - } else if (ARM64_AM::getShiftType(ShiftTypeAndValue) == ARM64_AM::LSR) {
> + } else if (AArch64_AM::getShiftType(ShiftTypeAndValue) == AArch64_AM::LSR) {
> // Shift Right
> - // We do not handle ARM64_AM::ASR, because the sign will change the
> + // We do not handle AArch64_AM::ASR, because the sign will change the
> // number of useful bits
> - uint64_t ShiftAmt = ARM64_AM::getShiftValue(ShiftTypeAndValue);
> + uint64_t ShiftAmt = AArch64_AM::getShiftValue(ShiftTypeAndValue);
> Mask = Mask.lshr(ShiftAmt);
> getUsefulBits(Op, Mask, Depth + 1);
> Mask = Mask.shl(ShiftAmt);
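[Aside, not part of the patch: a sketch of the demanded-bits idea behind the getUsefulBitsFromOrWithShiftedReg hunk above, under the assumption that the result is A | (B shifted by s). A bit of B is needed exactly when the result bit it lands in is needed, which is why the mask is shifted one way before recursing and back afterwards; ASR is deliberately not modelled, matching the comment in the code.]

    #include <cstdint>

    // Demanded bits of the shifted operand B, given the demanded bits of the
    // ORR result and the shift applied to B. Illustrative only.
    uint64_t demandedOfShiftedOperand(uint64_t demandedOfResult,
                                      unsigned shiftAmt, bool isLSL) {
      if (shiftAmt >= 64)
        return 0;                                   // everything shifts out
      return isLSL ? demandedOfResult >> shiftAmt   // B<<s: bit i -> bit i+s
                   : demandedOfResult << shiftAmt;  // B>>s: bit i -> bit i-s
    }
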
> @@ -1695,25 +1700,25 @@ static void getUsefulBitsForUse(SDNode *
> switch (UserNode->getMachineOpcode()) {
> default:
> return;
> - case ARM64::ANDSWri:
> - case ARM64::ANDSXri:
> - case ARM64::ANDWri:
> - case ARM64::ANDXri:
> + case AArch64::ANDSWri:
> + case AArch64::ANDSXri:
> + case AArch64::ANDWri:
> + case AArch64::ANDXri:
> // We increment Depth only when we call the getUsefulBits
> return getUsefulBitsFromAndWithImmediate(SDValue(UserNode, 0), UsefulBits,
> Depth);
> - case ARM64::UBFMWri:
> - case ARM64::UBFMXri:
> + case AArch64::UBFMWri:
> + case AArch64::UBFMXri:
> return getUsefulBitsFromUBFM(SDValue(UserNode, 0), UsefulBits, Depth);
>
> - case ARM64::ORRWrs:
> - case ARM64::ORRXrs:
> + case AArch64::ORRWrs:
> + case AArch64::ORRXrs:
> if (UserNode->getOperand(1) != Orig)
> return;
> return getUsefulBitsFromOrWithShiftedReg(SDValue(UserNode, 0), UsefulBits,
> Depth);
> - case ARM64::BFMWri:
> - case ARM64::BFMXri:
> + case AArch64::BFMWri:
> + case AArch64::BFMXri:
> return getUsefulBitsFromBFM(SDValue(UserNode, 0), Orig, UsefulBits, Depth);
> }
> }
> @@ -1751,7 +1756,7 @@ static SDValue getLeftShift(SelectionDAG
>
> EVT VT = Op.getValueType();
> unsigned BitWidth = VT.getSizeInBits();
> - unsigned UBFMOpc = BitWidth == 32 ? ARM64::UBFMWri : ARM64::UBFMXri;
> + unsigned UBFMOpc = BitWidth == 32 ? AArch64::UBFMWri : AArch64::UBFMXri;
>
> SDNode *ShiftNode;
> if (ShlAmount > 0) {
> @@ -1833,9 +1838,9 @@ static bool isBitfieldInsertOpFromOr(SDN
> // Set Opc
> EVT VT = N->getValueType(0);
> if (VT == MVT::i32)
> - Opc = ARM64::BFMWri;
> + Opc = AArch64::BFMWri;
> else if (VT == MVT::i64)
> - Opc = ARM64::BFMXri;
> + Opc = AArch64::BFMXri;
> else
> return false;
>
> @@ -1860,8 +1865,8 @@ static bool isBitfieldInsertOpFromOr(SDN
> NumberOfIgnoredLowBits, true)) {
> // Check that the returned opcode is compatible with the pattern,
> // i.e., same type and zero extended (U and not S)
> - if ((BFXOpc != ARM64::UBFMXri && VT == MVT::i64) ||
> - (BFXOpc != ARM64::UBFMWri && VT == MVT::i32))
> + if ((BFXOpc != AArch64::UBFMXri && VT == MVT::i64) ||
> + (BFXOpc != AArch64::UBFMWri && VT == MVT::i32))
> continue;
>
> // Compute the width of the bitfield insertion
> @@ -1919,7 +1924,7 @@ static bool isBitfieldInsertOpFromOr(SDN
> return false;
> }
>
> -SDNode *ARM64DAGToDAGISel::SelectBitfieldInsertOp(SDNode *N) {
> +SDNode *AArch64DAGToDAGISel::SelectBitfieldInsertOp(SDNode *N) {
> if (N->getOpcode() != ISD::OR)
> return nullptr;
>
> @@ -1938,11 +1943,11 @@ SDNode *ARM64DAGToDAGISel::SelectBitfiel
> return CurDAG->SelectNodeTo(N, Opc, VT, Ops);
> }
>
> -SDNode *ARM64DAGToDAGISel::SelectLIBM(SDNode *N) {
> +SDNode *AArch64DAGToDAGISel::SelectLIBM(SDNode *N) {
> EVT VT = N->getValueType(0);
> unsigned Variant;
> unsigned Opc;
> - unsigned FRINTXOpcs[] = { ARM64::FRINTXSr, ARM64::FRINTXDr };
> + unsigned FRINTXOpcs[] = { AArch64::FRINTXSr, AArch64::FRINTXDr };
>
> if (VT == MVT::f32) {
> Variant = 0;
> @@ -1958,22 +1963,22 @@ SDNode *ARM64DAGToDAGISel::SelectLIBM(SD
> default:
> return nullptr; // Unrecognized libm ISD node. Fall back on default codegen.
> case ISD::FCEIL: {
> - unsigned FRINTPOpcs[] = { ARM64::FRINTPSr, ARM64::FRINTPDr };
> + unsigned FRINTPOpcs[] = { AArch64::FRINTPSr, AArch64::FRINTPDr };
> Opc = FRINTPOpcs[Variant];
> break;
> }
> case ISD::FFLOOR: {
> - unsigned FRINTMOpcs[] = { ARM64::FRINTMSr, ARM64::FRINTMDr };
> + unsigned FRINTMOpcs[] = { AArch64::FRINTMSr, AArch64::FRINTMDr };
> Opc = FRINTMOpcs[Variant];
> break;
> }
> case ISD::FTRUNC: {
> - unsigned FRINTZOpcs[] = { ARM64::FRINTZSr, ARM64::FRINTZDr };
> + unsigned FRINTZOpcs[] = { AArch64::FRINTZSr, AArch64::FRINTZDr };
> Opc = FRINTZOpcs[Variant];
> break;
> }
> case ISD::FROUND: {
> - unsigned FRINTAOpcs[] = { ARM64::FRINTASr, ARM64::FRINTADr };
> + unsigned FRINTAOpcs[] = { AArch64::FRINTASr, AArch64::FRINTADr };
> Opc = FRINTAOpcs[Variant];
> break;
> }
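[Aside, not part of the patch: the FRINT variants chosen in SelectLIBM above differ only in rounding mode, and the libm calls they replace make that concrete. A tiny standalone illustration, assuming the usual AArch64 naming (P = toward +inf, M = toward -inf, Z = toward zero, A = to nearest, ties away):]

    #include <cmath>
    #include <cstdio>

    int main() {
      double x = -2.5;
      std::printf("FRINTP (FCEIL):  %g\n", std::ceil(x));   // toward +inf -> -2
      std::printf("FRINTM (FFLOOR): %g\n", std::floor(x));  // toward -inf -> -3
      std::printf("FRINTZ (FTRUNC): %g\n", std::trunc(x));  // toward zero -> -2
      std::printf("FRINTA (FROUND): %g\n", std::round(x));  // ties away   -> -3
      return 0;
    }
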
> @@ -1993,14 +1998,14 @@ SDNode *ARM64DAGToDAGISel::SelectLIBM(SD
> }
>
> bool
> -ARM64DAGToDAGISel::SelectCVTFixedPosOperand(SDValue N, SDValue &FixedPos,
> +AArch64DAGToDAGISel::SelectCVTFixedPosOperand(SDValue N, SDValue &FixedPos,
> unsigned RegWidth) {
> APFloat FVal(0.0);
> if (ConstantFPSDNode *CN = dyn_cast<ConstantFPSDNode>(N))
> FVal = CN->getValueAPF();
> else if (LoadSDNode *LN = dyn_cast<LoadSDNode>(N)) {
> // Some otherwise illegal constants are allowed in this case.
> - if (LN->getOperand(1).getOpcode() != ARM64ISD::ADDlow ||
> + if (LN->getOperand(1).getOpcode() != AArch64ISD::ADDlow ||
> !isa<ConstantPoolSDNode>(LN->getOperand(1)->getOperand(1)))
> return false;
>
> @@ -2036,7 +2041,7 @@ ARM64DAGToDAGISel::SelectCVTFixedPosOper
> return true;
> }
>
> -SDNode *ARM64DAGToDAGISel::Select(SDNode *Node) {
> +SDNode *AArch64DAGToDAGISel::Select(SDNode *Node) {
> // Dump information about the Node being selected
> DEBUG(errs() << "Selecting: ");
> DEBUG(Node->dump(CurDAG));
> @@ -2108,10 +2113,10 @@ SDNode *ARM64DAGToDAGISel::Select(SDNode
> default:
> assert(0 && "Unexpected vector element type!");
> case 64:
> - SubReg = ARM64::dsub;
> + SubReg = AArch64::dsub;
> break;
> case 32:
> - SubReg = ARM64::ssub;
> + SubReg = AArch64::ssub;
> break;
> case 16: // FALLTHROUGH
> case 8:
> @@ -2131,10 +2136,10 @@ SDNode *ARM64DAGToDAGISel::Select(SDNode
> if (ConstNode->isNullValue()) {
> if (VT == MVT::i32)
> return CurDAG->getCopyFromReg(CurDAG->getEntryNode(), SDLoc(Node),
> - ARM64::WZR, MVT::i32).getNode();
> + AArch64::WZR, MVT::i32).getNode();
> else if (VT == MVT::i64)
> return CurDAG->getCopyFromReg(CurDAG->getEntryNode(), SDLoc(Node),
> - ARM64::XZR, MVT::i64).getNode();
> + AArch64::XZR, MVT::i64).getNode();
> }
> break;
> }
> @@ -2142,22 +2147,22 @@ SDNode *ARM64DAGToDAGISel::Select(SDNode
> case ISD::FrameIndex: {
> // Selects to ADDXri FI, 0 which in turn will become ADDXri SP, imm.
> int FI = cast<FrameIndexSDNode>(Node)->getIndex();
> - unsigned Shifter = ARM64_AM::getShifterImm(ARM64_AM::LSL, 0);
> + unsigned Shifter = AArch64_AM::getShifterImm(AArch64_AM::LSL, 0);
> const TargetLowering *TLI = getTargetLowering();
> SDValue TFI = CurDAG->getTargetFrameIndex(FI, TLI->getPointerTy());
> SDValue Ops[] = { TFI, CurDAG->getTargetConstant(0, MVT::i32),
> CurDAG->getTargetConstant(Shifter, MVT::i32) };
> - return CurDAG->SelectNodeTo(Node, ARM64::ADDXri, MVT::i64, Ops);
> + return CurDAG->SelectNodeTo(Node, AArch64::ADDXri, MVT::i64, Ops);
> }
> case ISD::INTRINSIC_W_CHAIN: {
> unsigned IntNo = cast<ConstantSDNode>(Node->getOperand(1))->getZExtValue();
> switch (IntNo) {
> default:
> break;
> - case Intrinsic::arm64_ldaxp:
> - case Intrinsic::arm64_ldxp: {
> + case Intrinsic::aarch64_ldaxp:
> + case Intrinsic::aarch64_ldxp: {
> unsigned Op =
> - IntNo == Intrinsic::arm64_ldaxp ? ARM64::LDAXPX : ARM64::LDXPX;
> + IntNo == Intrinsic::aarch64_ldaxp ? AArch64::LDAXPX : AArch64::LDXPX;
> SDValue MemAddr = Node->getOperand(2);
> SDLoc DL(Node);
> SDValue Chain = Node->getOperand(0);
> @@ -2171,10 +2176,10 @@ SDNode *ARM64DAGToDAGISel::Select(SDNode
> cast<MachineSDNode>(Ld)->setMemRefs(MemOp, MemOp + 1);
> return Ld;
> }
> - case Intrinsic::arm64_stlxp:
> - case Intrinsic::arm64_stxp: {
> + case Intrinsic::aarch64_stlxp:
> + case Intrinsic::aarch64_stxp: {
> unsigned Op =
> - IntNo == Intrinsic::arm64_stlxp ? ARM64::STLXPX : ARM64::STXPX;
> + IntNo == Intrinsic::aarch64_stlxp ? AArch64::STLXPX : AArch64::STXPX;
> SDLoc DL(Node);
> SDValue Chain = Node->getOperand(0);
> SDValue ValLo = Node->getOperand(2);
> @@ -2196,203 +2201,203 @@ SDNode *ARM64DAGToDAGISel::Select(SDNode
>
> return St;
> }
> - case Intrinsic::arm64_neon_ld1x2:
> + case Intrinsic::aarch64_neon_ld1x2:
> if (VT == MVT::v8i8)
> - return SelectLoad(Node, 2, ARM64::LD1Twov8b, ARM64::dsub0);
> + return SelectLoad(Node, 2, AArch64::LD1Twov8b, AArch64::dsub0);
> else if (VT == MVT::v16i8)
> - return SelectLoad(Node, 2, ARM64::LD1Twov16b, ARM64::qsub0);
> + return SelectLoad(Node, 2, AArch64::LD1Twov16b, AArch64::qsub0);
> else if (VT == MVT::v4i16)
> - return SelectLoad(Node, 2, ARM64::LD1Twov4h, ARM64::dsub0);
> + return SelectLoad(Node, 2, AArch64::LD1Twov4h, AArch64::dsub0);
> else if (VT == MVT::v8i16)
> - return SelectLoad(Node, 2, ARM64::LD1Twov8h, ARM64::qsub0);
> + return SelectLoad(Node, 2, AArch64::LD1Twov8h, AArch64::qsub0);
> else if (VT == MVT::v2i32 || VT == MVT::v2f32)
> - return SelectLoad(Node, 2, ARM64::LD1Twov2s, ARM64::dsub0);
> + return SelectLoad(Node, 2, AArch64::LD1Twov2s, AArch64::dsub0);
> else if (VT == MVT::v4i32 || VT == MVT::v4f32)
> - return SelectLoad(Node, 2, ARM64::LD1Twov4s, ARM64::qsub0);
> + return SelectLoad(Node, 2, AArch64::LD1Twov4s, AArch64::qsub0);
> else if (VT == MVT::v1i64 || VT == MVT::v1f64)
> - return SelectLoad(Node, 2, ARM64::LD1Twov1d, ARM64::dsub0);
> + return SelectLoad(Node, 2, AArch64::LD1Twov1d, AArch64::dsub0);
> else if (VT == MVT::v2i64 || VT == MVT::v2f64)
> - return SelectLoad(Node, 2, ARM64::LD1Twov2d, ARM64::qsub0);
> + return SelectLoad(Node, 2, AArch64::LD1Twov2d, AArch64::qsub0);
> break;
> - case Intrinsic::arm64_neon_ld1x3:
> + case Intrinsic::aarch64_neon_ld1x3:
> if (VT == MVT::v8i8)
> - return SelectLoad(Node, 3, ARM64::LD1Threev8b, ARM64::dsub0);
> + return SelectLoad(Node, 3, AArch64::LD1Threev8b, AArch64::dsub0);
> else if (VT == MVT::v16i8)
> - return SelectLoad(Node, 3, ARM64::LD1Threev16b, ARM64::qsub0);
> + return SelectLoad(Node, 3, AArch64::LD1Threev16b, AArch64::qsub0);
> else if (VT == MVT::v4i16)
> - return SelectLoad(Node, 3, ARM64::LD1Threev4h, ARM64::dsub0);
> + return SelectLoad(Node, 3, AArch64::LD1Threev4h, AArch64::dsub0);
> else if (VT == MVT::v8i16)
> - return SelectLoad(Node, 3, ARM64::LD1Threev8h, ARM64::qsub0);
> + return SelectLoad(Node, 3, AArch64::LD1Threev8h, AArch64::qsub0);
> else if (VT == MVT::v2i32 || VT == MVT::v2f32)
> - return SelectLoad(Node, 3, ARM64::LD1Threev2s, ARM64::dsub0);
> + return SelectLoad(Node, 3, AArch64::LD1Threev2s, AArch64::dsub0);
> else if (VT == MVT::v4i32 || VT == MVT::v4f32)
> - return SelectLoad(Node, 3, ARM64::LD1Threev4s, ARM64::qsub0);
> + return SelectLoad(Node, 3, AArch64::LD1Threev4s, AArch64::qsub0);
> else if (VT == MVT::v1i64 || VT == MVT::v1f64)
> - return SelectLoad(Node, 3, ARM64::LD1Threev1d, ARM64::dsub0);
> + return SelectLoad(Node, 3, AArch64::LD1Threev1d, AArch64::dsub0);
> else if (VT == MVT::v2i64 || VT == MVT::v2f64)
> - return SelectLoad(Node, 3, ARM64::LD1Threev2d, ARM64::qsub0);
> + return SelectLoad(Node, 3, AArch64::LD1Threev2d, AArch64::qsub0);
> break;
> - case Intrinsic::arm64_neon_ld1x4:
> + case Intrinsic::aarch64_neon_ld1x4:
> if (VT == MVT::v8i8)
> - return SelectLoad(Node, 4, ARM64::LD1Fourv8b, ARM64::dsub0);
> + return SelectLoad(Node, 4, AArch64::LD1Fourv8b, AArch64::dsub0);
> else if (VT == MVT::v16i8)
> - return SelectLoad(Node, 4, ARM64::LD1Fourv16b, ARM64::qsub0);
> + return SelectLoad(Node, 4, AArch64::LD1Fourv16b, AArch64::qsub0);
> else if (VT == MVT::v4i16)
> - return SelectLoad(Node, 4, ARM64::LD1Fourv4h, ARM64::dsub0);
> + return SelectLoad(Node, 4, AArch64::LD1Fourv4h, AArch64::dsub0);
> else if (VT == MVT::v8i16)
> - return SelectLoad(Node, 4, ARM64::LD1Fourv8h, ARM64::qsub0);
> + return SelectLoad(Node, 4, AArch64::LD1Fourv8h, AArch64::qsub0);
> else if (VT == MVT::v2i32 || VT == MVT::v2f32)
> - return SelectLoad(Node, 4, ARM64::LD1Fourv2s, ARM64::dsub0);
> + return SelectLoad(Node, 4, AArch64::LD1Fourv2s, AArch64::dsub0);
> else if (VT == MVT::v4i32 || VT == MVT::v4f32)
> - return SelectLoad(Node, 4, ARM64::LD1Fourv4s, ARM64::qsub0);
> + return SelectLoad(Node, 4, AArch64::LD1Fourv4s, AArch64::qsub0);
> else if (VT == MVT::v1i64 || VT == MVT::v1f64)
> - return SelectLoad(Node, 4, ARM64::LD1Fourv1d, ARM64::dsub0);
> + return SelectLoad(Node, 4, AArch64::LD1Fourv1d, AArch64::dsub0);
> else if (VT == MVT::v2i64 || VT == MVT::v2f64)
> - return SelectLoad(Node, 4, ARM64::LD1Fourv2d, ARM64::qsub0);
> + return SelectLoad(Node, 4, AArch64::LD1Fourv2d, AArch64::qsub0);
> break;
> - case Intrinsic::arm64_neon_ld2:
> + case Intrinsic::aarch64_neon_ld2:
> if (VT == MVT::v8i8)
> - return SelectLoad(Node, 2, ARM64::LD2Twov8b, ARM64::dsub0);
> + return SelectLoad(Node, 2, AArch64::LD2Twov8b, AArch64::dsub0);
> else if (VT == MVT::v16i8)
> - return SelectLoad(Node, 2, ARM64::LD2Twov16b, ARM64::qsub0);
> + return SelectLoad(Node, 2, AArch64::LD2Twov16b, AArch64::qsub0);
> else if (VT == MVT::v4i16)
> - return SelectLoad(Node, 2, ARM64::LD2Twov4h, ARM64::dsub0);
> + return SelectLoad(Node, 2, AArch64::LD2Twov4h, AArch64::dsub0);
> else if (VT == MVT::v8i16)
> - return SelectLoad(Node, 2, ARM64::LD2Twov8h, ARM64::qsub0);
> + return SelectLoad(Node, 2, AArch64::LD2Twov8h, AArch64::qsub0);
> else if (VT == MVT::v2i32 || VT == MVT::v2f32)
> - return SelectLoad(Node, 2, ARM64::LD2Twov2s, ARM64::dsub0);
> + return SelectLoad(Node, 2, AArch64::LD2Twov2s, AArch64::dsub0);
> else if (VT == MVT::v4i32 || VT == MVT::v4f32)
> - return SelectLoad(Node, 2, ARM64::LD2Twov4s, ARM64::qsub0);
> + return SelectLoad(Node, 2, AArch64::LD2Twov4s, AArch64::qsub0);
> else if (VT == MVT::v1i64 || VT == MVT::v1f64)
> - return SelectLoad(Node, 2, ARM64::LD1Twov1d, ARM64::dsub0);
> + return SelectLoad(Node, 2, AArch64::LD1Twov1d, AArch64::dsub0);
> else if (VT == MVT::v2i64 || VT == MVT::v2f64)
> - return SelectLoad(Node, 2, ARM64::LD2Twov2d, ARM64::qsub0);
> + return SelectLoad(Node, 2, AArch64::LD2Twov2d, AArch64::qsub0);
> break;
> - case Intrinsic::arm64_neon_ld3:
> + case Intrinsic::aarch64_neon_ld3:
> if (VT == MVT::v8i8)
> - return SelectLoad(Node, 3, ARM64::LD3Threev8b, ARM64::dsub0);
> + return SelectLoad(Node, 3, AArch64::LD3Threev8b, AArch64::dsub0);
> else if (VT == MVT::v16i8)
> - return SelectLoad(Node, 3, ARM64::LD3Threev16b, ARM64::qsub0);
> + return SelectLoad(Node, 3, AArch64::LD3Threev16b, AArch64::qsub0);
> else if (VT == MVT::v4i16)
> - return SelectLoad(Node, 3, ARM64::LD3Threev4h, ARM64::dsub0);
> + return SelectLoad(Node, 3, AArch64::LD3Threev4h, AArch64::dsub0);
> else if (VT == MVT::v8i16)
> - return SelectLoad(Node, 3, ARM64::LD3Threev8h, ARM64::qsub0);
> + return SelectLoad(Node, 3, AArch64::LD3Threev8h, AArch64::qsub0);
> else if (VT == MVT::v2i32 || VT == MVT::v2f32)
> - return SelectLoad(Node, 3, ARM64::LD3Threev2s, ARM64::dsub0);
> + return SelectLoad(Node, 3, AArch64::LD3Threev2s, AArch64::dsub0);
> else if (VT == MVT::v4i32 || VT == MVT::v4f32)
> - return SelectLoad(Node, 3, ARM64::LD3Threev4s, ARM64::qsub0);
> + return SelectLoad(Node, 3, AArch64::LD3Threev4s, AArch64::qsub0);
> else if (VT == MVT::v1i64 || VT == MVT::v1f64)
> - return SelectLoad(Node, 3, ARM64::LD1Threev1d, ARM64::dsub0);
> + return SelectLoad(Node, 3, AArch64::LD1Threev1d, AArch64::dsub0);
> else if (VT == MVT::v2i64 || VT == MVT::v2f64)
> - return SelectLoad(Node, 3, ARM64::LD3Threev2d, ARM64::qsub0);
> + return SelectLoad(Node, 3, AArch64::LD3Threev2d, AArch64::qsub0);
> break;
> - case Intrinsic::arm64_neon_ld4:
> + case Intrinsic::aarch64_neon_ld4:
> if (VT == MVT::v8i8)
> - return SelectLoad(Node, 4, ARM64::LD4Fourv8b, ARM64::dsub0);
> + return SelectLoad(Node, 4, AArch64::LD4Fourv8b, AArch64::dsub0);
> else if (VT == MVT::v16i8)
> - return SelectLoad(Node, 4, ARM64::LD4Fourv16b, ARM64::qsub0);
> + return SelectLoad(Node, 4, AArch64::LD4Fourv16b, AArch64::qsub0);
> else if (VT == MVT::v4i16)
> - return SelectLoad(Node, 4, ARM64::LD4Fourv4h, ARM64::dsub0);
> + return SelectLoad(Node, 4, AArch64::LD4Fourv4h, AArch64::dsub0);
> else if (VT == MVT::v8i16)
> - return SelectLoad(Node, 4, ARM64::LD4Fourv8h, ARM64::qsub0);
> + return SelectLoad(Node, 4, AArch64::LD4Fourv8h, AArch64::qsub0);
> else if (VT == MVT::v2i32 || VT == MVT::v2f32)
> - return SelectLoad(Node, 4, ARM64::LD4Fourv2s, ARM64::dsub0);
> + return SelectLoad(Node, 4, AArch64::LD4Fourv2s, AArch64::dsub0);
> else if (VT == MVT::v4i32 || VT == MVT::v4f32)
> - return SelectLoad(Node, 4, ARM64::LD4Fourv4s, ARM64::qsub0);
> + return SelectLoad(Node, 4, AArch64::LD4Fourv4s, AArch64::qsub0);
> else if (VT == MVT::v1i64 || VT == MVT::v1f64)
> - return SelectLoad(Node, 4, ARM64::LD1Fourv1d, ARM64::dsub0);
> + return SelectLoad(Node, 4, AArch64::LD1Fourv1d, AArch64::dsub0);
> else if (VT == MVT::v2i64 || VT == MVT::v2f64)
> - return SelectLoad(Node, 4, ARM64::LD4Fourv2d, ARM64::qsub0);
> + return SelectLoad(Node, 4, AArch64::LD4Fourv2d, AArch64::qsub0);
> break;
> - case Intrinsic::arm64_neon_ld2r:
> + case Intrinsic::aarch64_neon_ld2r:
> if (VT == MVT::v8i8)
> - return SelectLoad(Node, 2, ARM64::LD2Rv8b, ARM64::dsub0);
> + return SelectLoad(Node, 2, AArch64::LD2Rv8b, AArch64::dsub0);
> else if (VT == MVT::v16i8)
> - return SelectLoad(Node, 2, ARM64::LD2Rv16b, ARM64::qsub0);
> + return SelectLoad(Node, 2, AArch64::LD2Rv16b, AArch64::qsub0);
> else if (VT == MVT::v4i16)
> - return SelectLoad(Node, 2, ARM64::LD2Rv4h, ARM64::dsub0);
> + return SelectLoad(Node, 2, AArch64::LD2Rv4h, AArch64::dsub0);
> else if (VT == MVT::v8i16)
> - return SelectLoad(Node, 2, ARM64::LD2Rv8h, ARM64::qsub0);
> + return SelectLoad(Node, 2, AArch64::LD2Rv8h, AArch64::qsub0);
> else if (VT == MVT::v2i32 || VT == MVT::v2f32)
> - return SelectLoad(Node, 2, ARM64::LD2Rv2s, ARM64::dsub0);
> + return SelectLoad(Node, 2, AArch64::LD2Rv2s, AArch64::dsub0);
> else if (VT == MVT::v4i32 || VT == MVT::v4f32)
> - return SelectLoad(Node, 2, ARM64::LD2Rv4s, ARM64::qsub0);
> + return SelectLoad(Node, 2, AArch64::LD2Rv4s, AArch64::qsub0);
> else if (VT == MVT::v1i64 || VT == MVT::v1f64)
> - return SelectLoad(Node, 2, ARM64::LD2Rv1d, ARM64::dsub0);
> + return SelectLoad(Node, 2, AArch64::LD2Rv1d, AArch64::dsub0);
> else if (VT == MVT::v2i64 || VT == MVT::v2f64)
> - return SelectLoad(Node, 2, ARM64::LD2Rv2d, ARM64::qsub0);
> + return SelectLoad(Node, 2, AArch64::LD2Rv2d, AArch64::qsub0);
> break;
> - case Intrinsic::arm64_neon_ld3r:
> + case Intrinsic::aarch64_neon_ld3r:
> if (VT == MVT::v8i8)
> - return SelectLoad(Node, 3, ARM64::LD3Rv8b, ARM64::dsub0);
> + return SelectLoad(Node, 3, AArch64::LD3Rv8b, AArch64::dsub0);
> else if (VT == MVT::v16i8)
> - return SelectLoad(Node, 3, ARM64::LD3Rv16b, ARM64::qsub0);
> + return SelectLoad(Node, 3, AArch64::LD3Rv16b, AArch64::qsub0);
> else if (VT == MVT::v4i16)
> - return SelectLoad(Node, 3, ARM64::LD3Rv4h, ARM64::dsub0);
> + return SelectLoad(Node, 3, AArch64::LD3Rv4h, AArch64::dsub0);
> else if (VT == MVT::v8i16)
> - return SelectLoad(Node, 3, ARM64::LD3Rv8h, ARM64::qsub0);
> + return SelectLoad(Node, 3, AArch64::LD3Rv8h, AArch64::qsub0);
> else if (VT == MVT::v2i32 || VT == MVT::v2f32)
> - return SelectLoad(Node, 3, ARM64::LD3Rv2s, ARM64::dsub0);
> + return SelectLoad(Node, 3, AArch64::LD3Rv2s, AArch64::dsub0);
> else if (VT == MVT::v4i32 || VT == MVT::v4f32)
> - return SelectLoad(Node, 3, ARM64::LD3Rv4s, ARM64::qsub0);
> + return SelectLoad(Node, 3, AArch64::LD3Rv4s, AArch64::qsub0);
> else if (VT == MVT::v1i64 || VT == MVT::v1f64)
> - return SelectLoad(Node, 3, ARM64::LD3Rv1d, ARM64::dsub0);
> + return SelectLoad(Node, 3, AArch64::LD3Rv1d, AArch64::dsub0);
> else if (VT == MVT::v2i64 || VT == MVT::v2f64)
> - return SelectLoad(Node, 3, ARM64::LD3Rv2d, ARM64::qsub0);
> + return SelectLoad(Node, 3, AArch64::LD3Rv2d, AArch64::qsub0);
> break;
> - case Intrinsic::arm64_neon_ld4r:
> + case Intrinsic::aarch64_neon_ld4r:
> if (VT == MVT::v8i8)
> - return SelectLoad(Node, 4, ARM64::LD4Rv8b, ARM64::dsub0);
> + return SelectLoad(Node, 4, AArch64::LD4Rv8b, AArch64::dsub0);
> else if (VT == MVT::v16i8)
> - return SelectLoad(Node, 4, ARM64::LD4Rv16b, ARM64::qsub0);
> + return SelectLoad(Node, 4, AArch64::LD4Rv16b, AArch64::qsub0);
> else if (VT == MVT::v4i16)
> - return SelectLoad(Node, 4, ARM64::LD4Rv4h, ARM64::dsub0);
> + return SelectLoad(Node, 4, AArch64::LD4Rv4h, AArch64::dsub0);
> else if (VT == MVT::v8i16)
> - return SelectLoad(Node, 4, ARM64::LD4Rv8h, ARM64::qsub0);
> + return SelectLoad(Node, 4, AArch64::LD4Rv8h, AArch64::qsub0);
> else if (VT == MVT::v2i32 || VT == MVT::v2f32)
> - return SelectLoad(Node, 4, ARM64::LD4Rv2s, ARM64::dsub0);
> + return SelectLoad(Node, 4, AArch64::LD4Rv2s, AArch64::dsub0);
> else if (VT == MVT::v4i32 || VT == MVT::v4f32)
> - return SelectLoad(Node, 4, ARM64::LD4Rv4s, ARM64::qsub0);
> + return SelectLoad(Node, 4, AArch64::LD4Rv4s, AArch64::qsub0);
> else if (VT == MVT::v1i64 || VT == MVT::v1f64)
> - return SelectLoad(Node, 4, ARM64::LD4Rv1d, ARM64::dsub0);
> + return SelectLoad(Node, 4, AArch64::LD4Rv1d, AArch64::dsub0);
> else if (VT == MVT::v2i64 || VT == MVT::v2f64)
> - return SelectLoad(Node, 4, ARM64::LD4Rv2d, ARM64::qsub0);
> + return SelectLoad(Node, 4, AArch64::LD4Rv2d, AArch64::qsub0);
> break;
> - case Intrinsic::arm64_neon_ld2lane:
> + case Intrinsic::aarch64_neon_ld2lane:
> if (VT == MVT::v16i8 || VT == MVT::v8i8)
> - return SelectLoadLane(Node, 2, ARM64::LD2i8);
> + return SelectLoadLane(Node, 2, AArch64::LD2i8);
> else if (VT == MVT::v8i16 || VT == MVT::v4i16)
> - return SelectLoadLane(Node, 2, ARM64::LD2i16);
> + return SelectLoadLane(Node, 2, AArch64::LD2i16);
> else if (VT == MVT::v4i32 || VT == MVT::v2i32 || VT == MVT::v4f32 ||
> VT == MVT::v2f32)
> - return SelectLoadLane(Node, 2, ARM64::LD2i32);
> + return SelectLoadLane(Node, 2, AArch64::LD2i32);
> else if (VT == MVT::v2i64 || VT == MVT::v1i64 || VT == MVT::v2f64 ||
> VT == MVT::v1f64)
> - return SelectLoadLane(Node, 2, ARM64::LD2i64);
> + return SelectLoadLane(Node, 2, AArch64::LD2i64);
> break;
> - case Intrinsic::arm64_neon_ld3lane:
> + case Intrinsic::aarch64_neon_ld3lane:
> if (VT == MVT::v16i8 || VT == MVT::v8i8)
> - return SelectLoadLane(Node, 3, ARM64::LD3i8);
> + return SelectLoadLane(Node, 3, AArch64::LD3i8);
> else if (VT == MVT::v8i16 || VT == MVT::v4i16)
> - return SelectLoadLane(Node, 3, ARM64::LD3i16);
> + return SelectLoadLane(Node, 3, AArch64::LD3i16);
> else if (VT == MVT::v4i32 || VT == MVT::v2i32 || VT == MVT::v4f32 ||
> VT == MVT::v2f32)
> - return SelectLoadLane(Node, 3, ARM64::LD3i32);
> + return SelectLoadLane(Node, 3, AArch64::LD3i32);
> else if (VT == MVT::v2i64 || VT == MVT::v1i64 || VT == MVT::v2f64 ||
> VT == MVT::v1f64)
> - return SelectLoadLane(Node, 3, ARM64::LD3i64);
> + return SelectLoadLane(Node, 3, AArch64::LD3i64);
> break;
> - case Intrinsic::arm64_neon_ld4lane:
> + case Intrinsic::aarch64_neon_ld4lane:
> if (VT == MVT::v16i8 || VT == MVT::v8i8)
> - return SelectLoadLane(Node, 4, ARM64::LD4i8);
> + return SelectLoadLane(Node, 4, AArch64::LD4i8);
> else if (VT == MVT::v8i16 || VT == MVT::v4i16)
> - return SelectLoadLane(Node, 4, ARM64::LD4i16);
> + return SelectLoadLane(Node, 4, AArch64::LD4i16);
> else if (VT == MVT::v4i32 || VT == MVT::v2i32 || VT == MVT::v4f32 ||
> VT == MVT::v2f32)
> - return SelectLoadLane(Node, 4, ARM64::LD4i32);
> + return SelectLoadLane(Node, 4, AArch64::LD4i32);
> else if (VT == MVT::v2i64 || VT == MVT::v1i64 || VT == MVT::v2f64 ||
> VT == MVT::v1f64)
> - return SelectLoadLane(Node, 4, ARM64::LD4i64);
> + return SelectLoadLane(Node, 4, AArch64::LD4i64);
> break;
> }
> } break;
> @@ -2401,32 +2406,32 @@ SDNode *ARM64DAGToDAGISel::Select(SDNode
> switch (IntNo) {
> default:
> break;
> - case Intrinsic::arm64_neon_tbl2:
> - return SelectTable(Node, 2, VT == MVT::v8i8 ? ARM64::TBLv8i8Two
> - : ARM64::TBLv16i8Two,
> + case Intrinsic::aarch64_neon_tbl2:
> + return SelectTable(Node, 2, VT == MVT::v8i8 ? AArch64::TBLv8i8Two
> + : AArch64::TBLv16i8Two,
> false);
> - case Intrinsic::arm64_neon_tbl3:
> - return SelectTable(Node, 3, VT == MVT::v8i8 ? ARM64::TBLv8i8Three
> - : ARM64::TBLv16i8Three,
> + case Intrinsic::aarch64_neon_tbl3:
> + return SelectTable(Node, 3, VT == MVT::v8i8 ? AArch64::TBLv8i8Three
> + : AArch64::TBLv16i8Three,
> false);
> - case Intrinsic::arm64_neon_tbl4:
> - return SelectTable(Node, 4, VT == MVT::v8i8 ? ARM64::TBLv8i8Four
> - : ARM64::TBLv16i8Four,
> + case Intrinsic::aarch64_neon_tbl4:
> + return SelectTable(Node, 4, VT == MVT::v8i8 ? AArch64::TBLv8i8Four
> + : AArch64::TBLv16i8Four,
> false);
> - case Intrinsic::arm64_neon_tbx2:
> - return SelectTable(Node, 2, VT == MVT::v8i8 ? ARM64::TBXv8i8Two
> - : ARM64::TBXv16i8Two,
> + case Intrinsic::aarch64_neon_tbx2:
> + return SelectTable(Node, 2, VT == MVT::v8i8 ? AArch64::TBXv8i8Two
> + : AArch64::TBXv16i8Two,
> true);
> - case Intrinsic::arm64_neon_tbx3:
> - return SelectTable(Node, 3, VT == MVT::v8i8 ? ARM64::TBXv8i8Three
> - : ARM64::TBXv16i8Three,
> + case Intrinsic::aarch64_neon_tbx3:
> + return SelectTable(Node, 3, VT == MVT::v8i8 ? AArch64::TBXv8i8Three
> + : AArch64::TBXv16i8Three,
> true);
> - case Intrinsic::arm64_neon_tbx4:
> - return SelectTable(Node, 4, VT == MVT::v8i8 ? ARM64::TBXv8i8Four
> - : ARM64::TBXv16i8Four,
> + case Intrinsic::aarch64_neon_tbx4:
> + return SelectTable(Node, 4, VT == MVT::v8i8 ? AArch64::TBXv8i8Four
> + : AArch64::TBXv16i8Four,
> true);
> - case Intrinsic::arm64_neon_smull:
> - case Intrinsic::arm64_neon_umull:
> + case Intrinsic::aarch64_neon_smull:
> + case Intrinsic::aarch64_neon_umull:
> if (SDNode *N = SelectMULLV64LaneV128(IntNo, Node))
> return N;
> break;
> @@ -2440,563 +2445,563 @@ SDNode *ARM64DAGToDAGISel::Select(SDNode
> switch (IntNo) {
> default:
> break;
> - case Intrinsic::arm64_neon_st1x2: {
> + case Intrinsic::aarch64_neon_st1x2: {
> if (VT == MVT::v8i8)
> - return SelectStore(Node, 2, ARM64::ST1Twov8b);
> + return SelectStore(Node, 2, AArch64::ST1Twov8b);
> else if (VT == MVT::v16i8)
> - return SelectStore(Node, 2, ARM64::ST1Twov16b);
> + return SelectStore(Node, 2, AArch64::ST1Twov16b);
> else if (VT == MVT::v4i16)
> - return SelectStore(Node, 2, ARM64::ST1Twov4h);
> + return SelectStore(Node, 2, AArch64::ST1Twov4h);
> else if (VT == MVT::v8i16)
> - return SelectStore(Node, 2, ARM64::ST1Twov8h);
> + return SelectStore(Node, 2, AArch64::ST1Twov8h);
> else if (VT == MVT::v2i32 || VT == MVT::v2f32)
> - return SelectStore(Node, 2, ARM64::ST1Twov2s);
> + return SelectStore(Node, 2, AArch64::ST1Twov2s);
> else if (VT == MVT::v4i32 || VT == MVT::v4f32)
> - return SelectStore(Node, 2, ARM64::ST1Twov4s);
> + return SelectStore(Node, 2, AArch64::ST1Twov4s);
> else if (VT == MVT::v2i64 || VT == MVT::v2f64)
> - return SelectStore(Node, 2, ARM64::ST1Twov2d);
> + return SelectStore(Node, 2, AArch64::ST1Twov2d);
> else if (VT == MVT::v1i64 || VT == MVT::v1f64)
> - return SelectStore(Node, 2, ARM64::ST1Twov1d);
> + return SelectStore(Node, 2, AArch64::ST1Twov1d);
> break;
> }
> - case Intrinsic::arm64_neon_st1x3: {
> + case Intrinsic::aarch64_neon_st1x3: {
> if (VT == MVT::v8i8)
> - return SelectStore(Node, 3, ARM64::ST1Threev8b);
> + return SelectStore(Node, 3, AArch64::ST1Threev8b);
> else if (VT == MVT::v16i8)
> - return SelectStore(Node, 3, ARM64::ST1Threev16b);
> + return SelectStore(Node, 3, AArch64::ST1Threev16b);
> else if (VT == MVT::v4i16)
> - return SelectStore(Node, 3, ARM64::ST1Threev4h);
> + return SelectStore(Node, 3, AArch64::ST1Threev4h);
> else if (VT == MVT::v8i16)
> - return SelectStore(Node, 3, ARM64::ST1Threev8h);
> + return SelectStore(Node, 3, AArch64::ST1Threev8h);
> else if (VT == MVT::v2i32 || VT == MVT::v2f32)
> - return SelectStore(Node, 3, ARM64::ST1Threev2s);
> + return SelectStore(Node, 3, AArch64::ST1Threev2s);
> else if (VT == MVT::v4i32 || VT == MVT::v4f32)
> - return SelectStore(Node, 3, ARM64::ST1Threev4s);
> + return SelectStore(Node, 3, AArch64::ST1Threev4s);
> else if (VT == MVT::v2i64 || VT == MVT::v2f64)
> - return SelectStore(Node, 3, ARM64::ST1Threev2d);
> + return SelectStore(Node, 3, AArch64::ST1Threev2d);
> else if (VT == MVT::v1i64 || VT == MVT::v1f64)
> - return SelectStore(Node, 3, ARM64::ST1Threev1d);
> + return SelectStore(Node, 3, AArch64::ST1Threev1d);
> break;
> }
> - case Intrinsic::arm64_neon_st1x4: {
> + case Intrinsic::aarch64_neon_st1x4: {
> if (VT == MVT::v8i8)
> - return SelectStore(Node, 4, ARM64::ST1Fourv8b);
> + return SelectStore(Node, 4, AArch64::ST1Fourv8b);
> else if (VT == MVT::v16i8)
> - return SelectStore(Node, 4, ARM64::ST1Fourv16b);
> + return SelectStore(Node, 4, AArch64::ST1Fourv16b);
> else if (VT == MVT::v4i16)
> - return SelectStore(Node, 4, ARM64::ST1Fourv4h);
> + return SelectStore(Node, 4, AArch64::ST1Fourv4h);
> else if (VT == MVT::v8i16)
> - return SelectStore(Node, 4, ARM64::ST1Fourv8h);
> + return SelectStore(Node, 4, AArch64::ST1Fourv8h);
> else if (VT == MVT::v2i32 || VT == MVT::v2f32)
> - return SelectStore(Node, 4, ARM64::ST1Fourv2s);
> + return SelectStore(Node, 4, AArch64::ST1Fourv2s);
> else if (VT == MVT::v4i32 || VT == MVT::v4f32)
> - return SelectStore(Node, 4, ARM64::ST1Fourv4s);
> + return SelectStore(Node, 4, AArch64::ST1Fourv4s);
> else if (VT == MVT::v2i64 || VT == MVT::v2f64)
> - return SelectStore(Node, 4, ARM64::ST1Fourv2d);
> + return SelectStore(Node, 4, AArch64::ST1Fourv2d);
> else if (VT == MVT::v1i64 || VT == MVT::v1f64)
> - return SelectStore(Node, 4, ARM64::ST1Fourv1d);
> + return SelectStore(Node, 4, AArch64::ST1Fourv1d);
> break;
> }
> - case Intrinsic::arm64_neon_st2: {
> + case Intrinsic::aarch64_neon_st2: {
> if (VT == MVT::v8i8)
> - return SelectStore(Node, 2, ARM64::ST2Twov8b);
> + return SelectStore(Node, 2, AArch64::ST2Twov8b);
> else if (VT == MVT::v16i8)
> - return SelectStore(Node, 2, ARM64::ST2Twov16b);
> + return SelectStore(Node, 2, AArch64::ST2Twov16b);
> else if (VT == MVT::v4i16)
> - return SelectStore(Node, 2, ARM64::ST2Twov4h);
> + return SelectStore(Node, 2, AArch64::ST2Twov4h);
> else if (VT == MVT::v8i16)
> - return SelectStore(Node, 2, ARM64::ST2Twov8h);
> + return SelectStore(Node, 2, AArch64::ST2Twov8h);
> else if (VT == MVT::v2i32 || VT == MVT::v2f32)
> - return SelectStore(Node, 2, ARM64::ST2Twov2s);
> + return SelectStore(Node, 2, AArch64::ST2Twov2s);
> else if (VT == MVT::v4i32 || VT == MVT::v4f32)
> - return SelectStore(Node, 2, ARM64::ST2Twov4s);
> + return SelectStore(Node, 2, AArch64::ST2Twov4s);
> else if (VT == MVT::v2i64 || VT == MVT::v2f64)
> - return SelectStore(Node, 2, ARM64::ST2Twov2d);
> + return SelectStore(Node, 2, AArch64::ST2Twov2d);
> else if (VT == MVT::v1i64 || VT == MVT::v1f64)
> - return SelectStore(Node, 2, ARM64::ST1Twov1d);
> + return SelectStore(Node, 2, AArch64::ST1Twov1d);
> break;
> }
> - case Intrinsic::arm64_neon_st3: {
> + case Intrinsic::aarch64_neon_st3: {
> if (VT == MVT::v8i8)
> - return SelectStore(Node, 3, ARM64::ST3Threev8b);
> + return SelectStore(Node, 3, AArch64::ST3Threev8b);
> else if (VT == MVT::v16i8)
> - return SelectStore(Node, 3, ARM64::ST3Threev16b);
> + return SelectStore(Node, 3, AArch64::ST3Threev16b);
> else if (VT == MVT::v4i16)
> - return SelectStore(Node, 3, ARM64::ST3Threev4h);
> + return SelectStore(Node, 3, AArch64::ST3Threev4h);
> else if (VT == MVT::v8i16)
> - return SelectStore(Node, 3, ARM64::ST3Threev8h);
> + return SelectStore(Node, 3, AArch64::ST3Threev8h);
> else if (VT == MVT::v2i32 || VT == MVT::v2f32)
> - return SelectStore(Node, 3, ARM64::ST3Threev2s);
> + return SelectStore(Node, 3, AArch64::ST3Threev2s);
> else if (VT == MVT::v4i32 || VT == MVT::v4f32)
> - return SelectStore(Node, 3, ARM64::ST3Threev4s);
> + return SelectStore(Node, 3, AArch64::ST3Threev4s);
> else if (VT == MVT::v2i64 || VT == MVT::v2f64)
> - return SelectStore(Node, 3, ARM64::ST3Threev2d);
> + return SelectStore(Node, 3, AArch64::ST3Threev2d);
> else if (VT == MVT::v1i64 || VT == MVT::v1f64)
> - return SelectStore(Node, 3, ARM64::ST1Threev1d);
> + return SelectStore(Node, 3, AArch64::ST1Threev1d);
> break;
> }
> - case Intrinsic::arm64_neon_st4: {
> + case Intrinsic::aarch64_neon_st4: {
> if (VT == MVT::v8i8)
> - return SelectStore(Node, 4, ARM64::ST4Fourv8b);
> + return SelectStore(Node, 4, AArch64::ST4Fourv8b);
> else if (VT == MVT::v16i8)
> - return SelectStore(Node, 4, ARM64::ST4Fourv16b);
> + return SelectStore(Node, 4, AArch64::ST4Fourv16b);
> else if (VT == MVT::v4i16)
> - return SelectStore(Node, 4, ARM64::ST4Fourv4h);
> + return SelectStore(Node, 4, AArch64::ST4Fourv4h);
> else if (VT == MVT::v8i16)
> - return SelectStore(Node, 4, ARM64::ST4Fourv8h);
> + return SelectStore(Node, 4, AArch64::ST4Fourv8h);
> else if (VT == MVT::v2i32 || VT == MVT::v2f32)
> - return SelectStore(Node, 4, ARM64::ST4Fourv2s);
> + return SelectStore(Node, 4, AArch64::ST4Fourv2s);
> else if (VT == MVT::v4i32 || VT == MVT::v4f32)
> - return SelectStore(Node, 4, ARM64::ST4Fourv4s);
> + return SelectStore(Node, 4, AArch64::ST4Fourv4s);
> else if (VT == MVT::v2i64 || VT == MVT::v2f64)
> - return SelectStore(Node, 4, ARM64::ST4Fourv2d);
> + return SelectStore(Node, 4, AArch64::ST4Fourv2d);
> else if (VT == MVT::v1i64 || VT == MVT::v1f64)
> - return SelectStore(Node, 4, ARM64::ST1Fourv1d);
> + return SelectStore(Node, 4, AArch64::ST1Fourv1d);
> break;
> }
> - case Intrinsic::arm64_neon_st2lane: {
> + case Intrinsic::aarch64_neon_st2lane: {
> if (VT == MVT::v16i8 || VT == MVT::v8i8)
> - return SelectStoreLane(Node, 2, ARM64::ST2i8);
> + return SelectStoreLane(Node, 2, AArch64::ST2i8);
> else if (VT == MVT::v8i16 || VT == MVT::v4i16)
> - return SelectStoreLane(Node, 2, ARM64::ST2i16);
> + return SelectStoreLane(Node, 2, AArch64::ST2i16);
> else if (VT == MVT::v4i32 || VT == MVT::v2i32 || VT == MVT::v4f32 ||
> VT == MVT::v2f32)
> - return SelectStoreLane(Node, 2, ARM64::ST2i32);
> + return SelectStoreLane(Node, 2, AArch64::ST2i32);
> else if (VT == MVT::v2i64 || VT == MVT::v1i64 || VT == MVT::v2f64 ||
> VT == MVT::v1f64)
> - return SelectStoreLane(Node, 2, ARM64::ST2i64);
> + return SelectStoreLane(Node, 2, AArch64::ST2i64);
> break;
> }
> - case Intrinsic::arm64_neon_st3lane: {
> + case Intrinsic::aarch64_neon_st3lane: {
> if (VT == MVT::v16i8 || VT == MVT::v8i8)
> - return SelectStoreLane(Node, 3, ARM64::ST3i8);
> + return SelectStoreLane(Node, 3, AArch64::ST3i8);
> else if (VT == MVT::v8i16 || VT == MVT::v4i16)
> - return SelectStoreLane(Node, 3, ARM64::ST3i16);
> + return SelectStoreLane(Node, 3, AArch64::ST3i16);
> else if (VT == MVT::v4i32 || VT == MVT::v2i32 || VT == MVT::v4f32 ||
> VT == MVT::v2f32)
> - return SelectStoreLane(Node, 3, ARM64::ST3i32);
> + return SelectStoreLane(Node, 3, AArch64::ST3i32);
> else if (VT == MVT::v2i64 || VT == MVT::v1i64 || VT == MVT::v2f64 ||
> VT == MVT::v1f64)
> - return SelectStoreLane(Node, 3, ARM64::ST3i64);
> + return SelectStoreLane(Node, 3, AArch64::ST3i64);
> break;
> }
> - case Intrinsic::arm64_neon_st4lane: {
> + case Intrinsic::aarch64_neon_st4lane: {
> if (VT == MVT::v16i8 || VT == MVT::v8i8)
> - return SelectStoreLane(Node, 4, ARM64::ST4i8);
> + return SelectStoreLane(Node, 4, AArch64::ST4i8);
> else if (VT == MVT::v8i16 || VT == MVT::v4i16)
> - return SelectStoreLane(Node, 4, ARM64::ST4i16);
> + return SelectStoreLane(Node, 4, AArch64::ST4i16);
> else if (VT == MVT::v4i32 || VT == MVT::v2i32 || VT == MVT::v4f32 ||
> VT == MVT::v2f32)
> - return SelectStoreLane(Node, 4, ARM64::ST4i32);
> + return SelectStoreLane(Node, 4, AArch64::ST4i32);
> else if (VT == MVT::v2i64 || VT == MVT::v1i64 || VT == MVT::v2f64 ||
> VT == MVT::v1f64)
> - return SelectStoreLane(Node, 4, ARM64::ST4i64);
> + return SelectStoreLane(Node, 4, AArch64::ST4i64);
> break;
> }
> }
> }
> - case ARM64ISD::LD2post: {
> + case AArch64ISD::LD2post: {
> if (VT == MVT::v8i8)
> - return SelectPostLoad(Node, 2, ARM64::LD2Twov8b_POST, ARM64::dsub0);
> + return SelectPostLoad(Node, 2, AArch64::LD2Twov8b_POST, AArch64::dsub0);
> else if (VT == MVT::v16i8)
> - return SelectPostLoad(Node, 2, ARM64::LD2Twov16b_POST, ARM64::qsub0);
> + return SelectPostLoad(Node, 2, AArch64::LD2Twov16b_POST, AArch64::qsub0);
> else if (VT == MVT::v4i16)
> - return SelectPostLoad(Node, 2, ARM64::LD2Twov4h_POST, ARM64::dsub0);
> + return SelectPostLoad(Node, 2, AArch64::LD2Twov4h_POST, AArch64::dsub0);
> else if (VT == MVT::v8i16)
> - return SelectPostLoad(Node, 2, ARM64::LD2Twov8h_POST, ARM64::qsub0);
> + return SelectPostLoad(Node, 2, AArch64::LD2Twov8h_POST, AArch64::qsub0);
> else if (VT == MVT::v2i32 || VT == MVT::v2f32)
> - return SelectPostLoad(Node, 2, ARM64::LD2Twov2s_POST, ARM64::dsub0);
> + return SelectPostLoad(Node, 2, AArch64::LD2Twov2s_POST, AArch64::dsub0);
> else if (VT == MVT::v4i32 || VT == MVT::v4f32)
> - return SelectPostLoad(Node, 2, ARM64::LD2Twov4s_POST, ARM64::qsub0);
> + return SelectPostLoad(Node, 2, AArch64::LD2Twov4s_POST, AArch64::qsub0);
> else if (VT == MVT::v1i64 || VT == MVT::v1f64)
> - return SelectPostLoad(Node, 2, ARM64::LD1Twov1d_POST, ARM64::dsub0);
> + return SelectPostLoad(Node, 2, AArch64::LD1Twov1d_POST, AArch64::dsub0);
> else if (VT == MVT::v2i64 || VT == MVT::v2f64)
> - return SelectPostLoad(Node, 2, ARM64::LD2Twov2d_POST, ARM64::qsub0);
> + return SelectPostLoad(Node, 2, AArch64::LD2Twov2d_POST, AArch64::qsub0);
> break;
> }
> - case ARM64ISD::LD3post: {
> + case AArch64ISD::LD3post: {
> if (VT == MVT::v8i8)
> - return SelectPostLoad(Node, 3, ARM64::LD3Threev8b_POST, ARM64::dsub0);
> + return SelectPostLoad(Node, 3, AArch64::LD3Threev8b_POST, AArch64::dsub0);
> else if (VT == MVT::v16i8)
> - return SelectPostLoad(Node, 3, ARM64::LD3Threev16b_POST, ARM64::qsub0);
> + return SelectPostLoad(Node, 3, AArch64::LD3Threev16b_POST, AArch64::qsub0);
> else if (VT == MVT::v4i16)
> - return SelectPostLoad(Node, 3, ARM64::LD3Threev4h_POST, ARM64::dsub0);
> + return SelectPostLoad(Node, 3, AArch64::LD3Threev4h_POST, AArch64::dsub0);
> else if (VT == MVT::v8i16)
> - return SelectPostLoad(Node, 3, ARM64::LD3Threev8h_POST, ARM64::qsub0);
> + return SelectPostLoad(Node, 3, AArch64::LD3Threev8h_POST, AArch64::qsub0);
> else if (VT == MVT::v2i32 || VT == MVT::v2f32)
> - return SelectPostLoad(Node, 3, ARM64::LD3Threev2s_POST, ARM64::dsub0);
> + return SelectPostLoad(Node, 3, AArch64::LD3Threev2s_POST, AArch64::dsub0);
> else if (VT == MVT::v4i32 || VT == MVT::v4f32)
> - return SelectPostLoad(Node, 3, ARM64::LD3Threev4s_POST, ARM64::qsub0);
> + return SelectPostLoad(Node, 3, AArch64::LD3Threev4s_POST, AArch64::qsub0);
> else if (VT == MVT::v1i64 || VT == MVT::v1f64)
> - return SelectPostLoad(Node, 3, ARM64::LD1Threev1d_POST, ARM64::dsub0);
> + return SelectPostLoad(Node, 3, AArch64::LD1Threev1d_POST, AArch64::dsub0);
> else if (VT == MVT::v2i64 || VT == MVT::v2f64)
> - return SelectPostLoad(Node, 3, ARM64::LD3Threev2d_POST, ARM64::qsub0);
> + return SelectPostLoad(Node, 3, AArch64::LD3Threev2d_POST, AArch64::qsub0);
> break;
> }
> - case ARM64ISD::LD4post: {
> + case AArch64ISD::LD4post: {
> if (VT == MVT::v8i8)
> - return SelectPostLoad(Node, 4, ARM64::LD4Fourv8b_POST, ARM64::dsub0);
> + return SelectPostLoad(Node, 4, AArch64::LD4Fourv8b_POST, AArch64::dsub0);
> else if (VT == MVT::v16i8)
> - return SelectPostLoad(Node, 4, ARM64::LD4Fourv16b_POST, ARM64::qsub0);
> + return SelectPostLoad(Node, 4, AArch64::LD4Fourv16b_POST, AArch64::qsub0);
> else if (VT == MVT::v4i16)
> - return SelectPostLoad(Node, 4, ARM64::LD4Fourv4h_POST, ARM64::dsub0);
> + return SelectPostLoad(Node, 4, AArch64::LD4Fourv4h_POST, AArch64::dsub0);
> else if (VT == MVT::v8i16)
> - return SelectPostLoad(Node, 4, ARM64::LD4Fourv8h_POST, ARM64::qsub0);
> + return SelectPostLoad(Node, 4, AArch64::LD4Fourv8h_POST, AArch64::qsub0);
> else if (VT == MVT::v2i32 || VT == MVT::v2f32)
> - return SelectPostLoad(Node, 4, ARM64::LD4Fourv2s_POST, ARM64::dsub0);
> + return SelectPostLoad(Node, 4, AArch64::LD4Fourv2s_POST, AArch64::dsub0);
> else if (VT == MVT::v4i32 || VT == MVT::v4f32)
> - return SelectPostLoad(Node, 4, ARM64::LD4Fourv4s_POST, ARM64::qsub0);
> + return SelectPostLoad(Node, 4, AArch64::LD4Fourv4s_POST, AArch64::qsub0);
> else if (VT == MVT::v1i64 || VT == MVT::v1f64)
> - return SelectPostLoad(Node, 4, ARM64::LD1Fourv1d_POST, ARM64::dsub0);
> + return SelectPostLoad(Node, 4, AArch64::LD1Fourv1d_POST, AArch64::dsub0);
> else if (VT == MVT::v2i64 || VT == MVT::v2f64)
> - return SelectPostLoad(Node, 4, ARM64::LD4Fourv2d_POST, ARM64::qsub0);
> + return SelectPostLoad(Node, 4, AArch64::LD4Fourv2d_POST, AArch64::qsub0);
> break;
> }
> - case ARM64ISD::LD1x2post: {
> + case AArch64ISD::LD1x2post: {
> if (VT == MVT::v8i8)
> - return SelectPostLoad(Node, 2, ARM64::LD1Twov8b_POST, ARM64::dsub0);
> + return SelectPostLoad(Node, 2, AArch64::LD1Twov8b_POST, AArch64::dsub0);
> else if (VT == MVT::v16i8)
> - return SelectPostLoad(Node, 2, ARM64::LD1Twov16b_POST, ARM64::qsub0);
> + return SelectPostLoad(Node, 2, AArch64::LD1Twov16b_POST, AArch64::qsub0);
> else if (VT == MVT::v4i16)
> - return SelectPostLoad(Node, 2, ARM64::LD1Twov4h_POST, ARM64::dsub0);
> + return SelectPostLoad(Node, 2, AArch64::LD1Twov4h_POST, AArch64::dsub0);
> else if (VT == MVT::v8i16)
> - return SelectPostLoad(Node, 2, ARM64::LD1Twov8h_POST, ARM64::qsub0);
> + return SelectPostLoad(Node, 2, AArch64::LD1Twov8h_POST, AArch64::qsub0);
> else if (VT == MVT::v2i32 || VT == MVT::v2f32)
> - return SelectPostLoad(Node, 2, ARM64::LD1Twov2s_POST, ARM64::dsub0);
> + return SelectPostLoad(Node, 2, AArch64::LD1Twov2s_POST, AArch64::dsub0);
> else if (VT == MVT::v4i32 || VT == MVT::v4f32)
> - return SelectPostLoad(Node, 2, ARM64::LD1Twov4s_POST, ARM64::qsub0);
> + return SelectPostLoad(Node, 2, AArch64::LD1Twov4s_POST, AArch64::qsub0);
> else if (VT == MVT::v1i64 || VT == MVT::v1f64)
> - return SelectPostLoad(Node, 2, ARM64::LD1Twov1d_POST, ARM64::dsub0);
> + return SelectPostLoad(Node, 2, AArch64::LD1Twov1d_POST, AArch64::dsub0);
> else if (VT == MVT::v2i64 || VT == MVT::v2f64)
> - return SelectPostLoad(Node, 2, ARM64::LD1Twov2d_POST, ARM64::qsub0);
> + return SelectPostLoad(Node, 2, AArch64::LD1Twov2d_POST, AArch64::qsub0);
> break;
> }
> - case ARM64ISD::LD1x3post: {
> + case AArch64ISD::LD1x3post: {
> if (VT == MVT::v8i8)
> - return SelectPostLoad(Node, 3, ARM64::LD1Threev8b_POST, ARM64::dsub0);
> + return SelectPostLoad(Node, 3, AArch64::LD1Threev8b_POST, AArch64::dsub0);
> else if (VT == MVT::v16i8)
> - return SelectPostLoad(Node, 3, ARM64::LD1Threev16b_POST, ARM64::qsub0);
> + return SelectPostLoad(Node, 3, AArch64::LD1Threev16b_POST, AArch64::qsub0);
> else if (VT == MVT::v4i16)
> - return SelectPostLoad(Node, 3, ARM64::LD1Threev4h_POST, ARM64::dsub0);
> + return SelectPostLoad(Node, 3, AArch64::LD1Threev4h_POST, AArch64::dsub0);
> else if (VT == MVT::v8i16)
> - return SelectPostLoad(Node, 3, ARM64::LD1Threev8h_POST, ARM64::qsub0);
> + return SelectPostLoad(Node, 3, AArch64::LD1Threev8h_POST, AArch64::qsub0);
> else if (VT == MVT::v2i32 || VT == MVT::v2f32)
> - return SelectPostLoad(Node, 3, ARM64::LD1Threev2s_POST, ARM64::dsub0);
> + return SelectPostLoad(Node, 3, AArch64::LD1Threev2s_POST, AArch64::dsub0);
> else if (VT == MVT::v4i32 || VT == MVT::v4f32)
> - return SelectPostLoad(Node, 3, ARM64::LD1Threev4s_POST, ARM64::qsub0);
> + return SelectPostLoad(Node, 3, AArch64::LD1Threev4s_POST, AArch64::qsub0);
> else if (VT == MVT::v1i64 || VT == MVT::v1f64)
> - return SelectPostLoad(Node, 3, ARM64::LD1Threev1d_POST, ARM64::dsub0);
> + return SelectPostLoad(Node, 3, AArch64::LD1Threev1d_POST, AArch64::dsub0);
> else if (VT == MVT::v2i64 || VT == MVT::v2f64)
> - return SelectPostLoad(Node, 3, ARM64::LD1Threev2d_POST, ARM64::qsub0);
> + return SelectPostLoad(Node, 3, AArch64::LD1Threev2d_POST, AArch64::qsub0);
> break;
> }
> - case ARM64ISD::LD1x4post: {
> + case AArch64ISD::LD1x4post: {
> if (VT == MVT::v8i8)
> - return SelectPostLoad(Node, 4, ARM64::LD1Fourv8b_POST, ARM64::dsub0);
> + return SelectPostLoad(Node, 4, AArch64::LD1Fourv8b_POST, AArch64::dsub0);
> else if (VT == MVT::v16i8)
> - return SelectPostLoad(Node, 4, ARM64::LD1Fourv16b_POST, ARM64::qsub0);
> + return SelectPostLoad(Node, 4, AArch64::LD1Fourv16b_POST, AArch64::qsub0);
> else if (VT == MVT::v4i16)
> - return SelectPostLoad(Node, 4, ARM64::LD1Fourv4h_POST, ARM64::dsub0);
> + return SelectPostLoad(Node, 4, AArch64::LD1Fourv4h_POST, AArch64::dsub0);
> else if (VT == MVT::v8i16)
> - return SelectPostLoad(Node, 4, ARM64::LD1Fourv8h_POST, ARM64::qsub0);
> + return SelectPostLoad(Node, 4, AArch64::LD1Fourv8h_POST, AArch64::qsub0);
> else if (VT == MVT::v2i32 || VT == MVT::v2f32)
> - return SelectPostLoad(Node, 4, ARM64::LD1Fourv2s_POST, ARM64::dsub0);
> + return SelectPostLoad(Node, 4, AArch64::LD1Fourv2s_POST, AArch64::dsub0);
> else if (VT == MVT::v4i32 || VT == MVT::v4f32)
> - return SelectPostLoad(Node, 4, ARM64::LD1Fourv4s_POST, ARM64::qsub0);
> + return SelectPostLoad(Node, 4, AArch64::LD1Fourv4s_POST, AArch64::qsub0);
> else if (VT == MVT::v1i64 || VT == MVT::v1f64)
> - return SelectPostLoad(Node, 4, ARM64::LD1Fourv1d_POST, ARM64::dsub0);
> + return SelectPostLoad(Node, 4, AArch64::LD1Fourv1d_POST, AArch64::dsub0);
> else if (VT == MVT::v2i64 || VT == MVT::v2f64)
> - return SelectPostLoad(Node, 4, ARM64::LD1Fourv2d_POST, ARM64::qsub0);
> + return SelectPostLoad(Node, 4, AArch64::LD1Fourv2d_POST, AArch64::qsub0);
> break;
> }
> - case ARM64ISD::LD1DUPpost: {
> + case AArch64ISD::LD1DUPpost: {
> if (VT == MVT::v8i8)
> - return SelectPostLoad(Node, 1, ARM64::LD1Rv8b_POST, ARM64::dsub0);
> + return SelectPostLoad(Node, 1, AArch64::LD1Rv8b_POST, AArch64::dsub0);
> else if (VT == MVT::v16i8)
> - return SelectPostLoad(Node, 1, ARM64::LD1Rv16b_POST, ARM64::qsub0);
> + return SelectPostLoad(Node, 1, AArch64::LD1Rv16b_POST, AArch64::qsub0);
> else if (VT == MVT::v4i16)
> - return SelectPostLoad(Node, 1, ARM64::LD1Rv4h_POST, ARM64::dsub0);
> + return SelectPostLoad(Node, 1, AArch64::LD1Rv4h_POST, AArch64::dsub0);
> else if (VT == MVT::v8i16)
> - return SelectPostLoad(Node, 1, ARM64::LD1Rv8h_POST, ARM64::qsub0);
> + return SelectPostLoad(Node, 1, AArch64::LD1Rv8h_POST, AArch64::qsub0);
> else if (VT == MVT::v2i32 || VT == MVT::v2f32)
> - return SelectPostLoad(Node, 1, ARM64::LD1Rv2s_POST, ARM64::dsub0);
> + return SelectPostLoad(Node, 1, AArch64::LD1Rv2s_POST, AArch64::dsub0);
> else if (VT == MVT::v4i32 || VT == MVT::v4f32)
> - return SelectPostLoad(Node, 1, ARM64::LD1Rv4s_POST, ARM64::qsub0);
> + return SelectPostLoad(Node, 1, AArch64::LD1Rv4s_POST, AArch64::qsub0);
> else if (VT == MVT::v1i64 || VT == MVT::v1f64)
> - return SelectPostLoad(Node, 1, ARM64::LD1Rv1d_POST, ARM64::dsub0);
> + return SelectPostLoad(Node, 1, AArch64::LD1Rv1d_POST, AArch64::dsub0);
> else if (VT == MVT::v2i64 || VT == MVT::v2f64)
> - return SelectPostLoad(Node, 1, ARM64::LD1Rv2d_POST, ARM64::qsub0);
> + return SelectPostLoad(Node, 1, AArch64::LD1Rv2d_POST, AArch64::qsub0);
> break;
> }
> - case ARM64ISD::LD2DUPpost: {
> + case AArch64ISD::LD2DUPpost: {
> if (VT == MVT::v8i8)
> - return SelectPostLoad(Node, 2, ARM64::LD2Rv8b_POST, ARM64::dsub0);
> + return SelectPostLoad(Node, 2, AArch64::LD2Rv8b_POST, AArch64::dsub0);
> else if (VT == MVT::v16i8)
> - return SelectPostLoad(Node, 2, ARM64::LD2Rv16b_POST, ARM64::qsub0);
> + return SelectPostLoad(Node, 2, AArch64::LD2Rv16b_POST, AArch64::qsub0);
> else if (VT == MVT::v4i16)
> - return SelectPostLoad(Node, 2, ARM64::LD2Rv4h_POST, ARM64::dsub0);
> + return SelectPostLoad(Node, 2, AArch64::LD2Rv4h_POST, AArch64::dsub0);
> else if (VT == MVT::v8i16)
> - return SelectPostLoad(Node, 2, ARM64::LD2Rv8h_POST, ARM64::qsub0);
> + return SelectPostLoad(Node, 2, AArch64::LD2Rv8h_POST, AArch64::qsub0);
> else if (VT == MVT::v2i32 || VT == MVT::v2f32)
> - return SelectPostLoad(Node, 2, ARM64::LD2Rv2s_POST, ARM64::dsub0);
> + return SelectPostLoad(Node, 2, AArch64::LD2Rv2s_POST, AArch64::dsub0);
> else if (VT == MVT::v4i32 || VT == MVT::v4f32)
> - return SelectPostLoad(Node, 2, ARM64::LD2Rv4s_POST, ARM64::qsub0);
> + return SelectPostLoad(Node, 2, AArch64::LD2Rv4s_POST, AArch64::qsub0);
> else if (VT == MVT::v1i64 || VT == MVT::v1f64)
> - return SelectPostLoad(Node, 2, ARM64::LD2Rv1d_POST, ARM64::dsub0);
> + return SelectPostLoad(Node, 2, AArch64::LD2Rv1d_POST, AArch64::dsub0);
> else if (VT == MVT::v2i64 || VT == MVT::v2f64)
> - return SelectPostLoad(Node, 2, ARM64::LD2Rv2d_POST, ARM64::qsub0);
> + return SelectPostLoad(Node, 2, AArch64::LD2Rv2d_POST, AArch64::qsub0);
> break;
> }
> - case ARM64ISD::LD3DUPpost: {
> + case AArch64ISD::LD3DUPpost: {
> if (VT == MVT::v8i8)
> - return SelectPostLoad(Node, 3, ARM64::LD3Rv8b_POST, ARM64::dsub0);
> + return SelectPostLoad(Node, 3, AArch64::LD3Rv8b_POST, AArch64::dsub0);
> else if (VT == MVT::v16i8)
> - return SelectPostLoad(Node, 3, ARM64::LD3Rv16b_POST, ARM64::qsub0);
> + return SelectPostLoad(Node, 3, AArch64::LD3Rv16b_POST, AArch64::qsub0);
> else if (VT == MVT::v4i16)
> - return SelectPostLoad(Node, 3, ARM64::LD3Rv4h_POST, ARM64::dsub0);
> + return SelectPostLoad(Node, 3, AArch64::LD3Rv4h_POST, AArch64::dsub0);
> else if (VT == MVT::v8i16)
> - return SelectPostLoad(Node, 3, ARM64::LD3Rv8h_POST, ARM64::qsub0);
> + return SelectPostLoad(Node, 3, AArch64::LD3Rv8h_POST, AArch64::qsub0);
> else if (VT == MVT::v2i32 || VT == MVT::v2f32)
> - return SelectPostLoad(Node, 3, ARM64::LD3Rv2s_POST, ARM64::dsub0);
> + return SelectPostLoad(Node, 3, AArch64::LD3Rv2s_POST, AArch64::dsub0);
> else if (VT == MVT::v4i32 || VT == MVT::v4f32)
> - return SelectPostLoad(Node, 3, ARM64::LD3Rv4s_POST, ARM64::qsub0);
> + return SelectPostLoad(Node, 3, AArch64::LD3Rv4s_POST, AArch64::qsub0);
> else if (VT == MVT::v1i64 || VT == MVT::v1f64)
> - return SelectPostLoad(Node, 3, ARM64::LD3Rv1d_POST, ARM64::dsub0);
> + return SelectPostLoad(Node, 3, AArch64::LD3Rv1d_POST, AArch64::dsub0);
> else if (VT == MVT::v2i64 || VT == MVT::v2f64)
> - return SelectPostLoad(Node, 3, ARM64::LD3Rv2d_POST, ARM64::qsub0);
> + return SelectPostLoad(Node, 3, AArch64::LD3Rv2d_POST, AArch64::qsub0);
> break;
> }
> - case ARM64ISD::LD4DUPpost: {
> + case AArch64ISD::LD4DUPpost: {
> if (VT == MVT::v8i8)
> - return SelectPostLoad(Node, 4, ARM64::LD4Rv8b_POST, ARM64::dsub0);
> + return SelectPostLoad(Node, 4, AArch64::LD4Rv8b_POST, AArch64::dsub0);
> else if (VT == MVT::v16i8)
> - return SelectPostLoad(Node, 4, ARM64::LD4Rv16b_POST, ARM64::qsub0);
> + return SelectPostLoad(Node, 4, AArch64::LD4Rv16b_POST, AArch64::qsub0);
> else if (VT == MVT::v4i16)
> - return SelectPostLoad(Node, 4, ARM64::LD4Rv4h_POST, ARM64::dsub0);
> + return SelectPostLoad(Node, 4, AArch64::LD4Rv4h_POST, AArch64::dsub0);
> else if (VT == MVT::v8i16)
> - return SelectPostLoad(Node, 4, ARM64::LD4Rv8h_POST, ARM64::qsub0);
> + return SelectPostLoad(Node, 4, AArch64::LD4Rv8h_POST, AArch64::qsub0);
> else if (VT == MVT::v2i32 || VT == MVT::v2f32)
> - return SelectPostLoad(Node, 4, ARM64::LD4Rv2s_POST, ARM64::dsub0);
> + return SelectPostLoad(Node, 4, AArch64::LD4Rv2s_POST, AArch64::dsub0);
> else if (VT == MVT::v4i32 || VT == MVT::v4f32)
> - return SelectPostLoad(Node, 4, ARM64::LD4Rv4s_POST, ARM64::qsub0);
> + return SelectPostLoad(Node, 4, AArch64::LD4Rv4s_POST, AArch64::qsub0);
> else if (VT == MVT::v1i64 || VT == MVT::v1f64)
> - return SelectPostLoad(Node, 4, ARM64::LD4Rv1d_POST, ARM64::dsub0);
> + return SelectPostLoad(Node, 4, AArch64::LD4Rv1d_POST, AArch64::dsub0);
> else if (VT == MVT::v2i64 || VT == MVT::v2f64)
> - return SelectPostLoad(Node, 4, ARM64::LD4Rv2d_POST, ARM64::qsub0);
> + return SelectPostLoad(Node, 4, AArch64::LD4Rv2d_POST, AArch64::qsub0);
> break;
> }
> - case ARM64ISD::LD1LANEpost: {
> + case AArch64ISD::LD1LANEpost: {
> if (VT == MVT::v16i8 || VT == MVT::v8i8)
> - return SelectPostLoadLane(Node, 1, ARM64::LD1i8_POST);
> + return SelectPostLoadLane(Node, 1, AArch64::LD1i8_POST);
> else if (VT == MVT::v8i16 || VT == MVT::v4i16)
> - return SelectPostLoadLane(Node, 1, ARM64::LD1i16_POST);
> + return SelectPostLoadLane(Node, 1, AArch64::LD1i16_POST);
> else if (VT == MVT::v4i32 || VT == MVT::v2i32 || VT == MVT::v4f32 ||
> VT == MVT::v2f32)
> - return SelectPostLoadLane(Node, 1, ARM64::LD1i32_POST);
> + return SelectPostLoadLane(Node, 1, AArch64::LD1i32_POST);
> else if (VT == MVT::v2i64 || VT == MVT::v1i64 || VT == MVT::v2f64 ||
> VT == MVT::v1f64)
> - return SelectPostLoadLane(Node, 1, ARM64::LD1i64_POST);
> + return SelectPostLoadLane(Node, 1, AArch64::LD1i64_POST);
> break;
> }
> - case ARM64ISD::LD2LANEpost: {
> + case AArch64ISD::LD2LANEpost: {
> if (VT == MVT::v16i8 || VT == MVT::v8i8)
> - return SelectPostLoadLane(Node, 2, ARM64::LD2i8_POST);
> + return SelectPostLoadLane(Node, 2, AArch64::LD2i8_POST);
> else if (VT == MVT::v8i16 || VT == MVT::v4i16)
> - return SelectPostLoadLane(Node, 2, ARM64::LD2i16_POST);
> + return SelectPostLoadLane(Node, 2, AArch64::LD2i16_POST);
> else if (VT == MVT::v4i32 || VT == MVT::v2i32 || VT == MVT::v4f32 ||
> VT == MVT::v2f32)
> - return SelectPostLoadLane(Node, 2, ARM64::LD2i32_POST);
> + return SelectPostLoadLane(Node, 2, AArch64::LD2i32_POST);
> else if (VT == MVT::v2i64 || VT == MVT::v1i64 || VT == MVT::v2f64 ||
> VT == MVT::v1f64)
> - return SelectPostLoadLane(Node, 2, ARM64::LD2i64_POST);
> + return SelectPostLoadLane(Node, 2, AArch64::LD2i64_POST);
> break;
> }
> - case ARM64ISD::LD3LANEpost: {
> + case AArch64ISD::LD3LANEpost: {
> if (VT == MVT::v16i8 || VT == MVT::v8i8)
> - return SelectPostLoadLane(Node, 3, ARM64::LD3i8_POST);
> + return SelectPostLoadLane(Node, 3, AArch64::LD3i8_POST);
> else if (VT == MVT::v8i16 || VT == MVT::v4i16)
> - return SelectPostLoadLane(Node, 3, ARM64::LD3i16_POST);
> + return SelectPostLoadLane(Node, 3, AArch64::LD3i16_POST);
> else if (VT == MVT::v4i32 || VT == MVT::v2i32 || VT == MVT::v4f32 ||
> VT == MVT::v2f32)
> - return SelectPostLoadLane(Node, 3, ARM64::LD3i32_POST);
> + return SelectPostLoadLane(Node, 3, AArch64::LD3i32_POST);
> else if (VT == MVT::v2i64 || VT == MVT::v1i64 || VT == MVT::v2f64 ||
> VT == MVT::v1f64)
> - return SelectPostLoadLane(Node, 3, ARM64::LD3i64_POST);
> + return SelectPostLoadLane(Node, 3, AArch64::LD3i64_POST);
> break;
> }
> - case ARM64ISD::LD4LANEpost: {
> + case AArch64ISD::LD4LANEpost: {
> if (VT == MVT::v16i8 || VT == MVT::v8i8)
> - return SelectPostLoadLane(Node, 4, ARM64::LD4i8_POST);
> + return SelectPostLoadLane(Node, 4, AArch64::LD4i8_POST);
> else if (VT == MVT::v8i16 || VT == MVT::v4i16)
> - return SelectPostLoadLane(Node, 4, ARM64::LD4i16_POST);
> + return SelectPostLoadLane(Node, 4, AArch64::LD4i16_POST);
> else if (VT == MVT::v4i32 || VT == MVT::v2i32 || VT == MVT::v4f32 ||
> VT == MVT::v2f32)
> - return SelectPostLoadLane(Node, 4, ARM64::LD4i32_POST);
> + return SelectPostLoadLane(Node, 4, AArch64::LD4i32_POST);
> else if (VT == MVT::v2i64 || VT == MVT::v1i64 || VT == MVT::v2f64 ||
> VT == MVT::v1f64)
> - return SelectPostLoadLane(Node, 4, ARM64::LD4i64_POST);
> + return SelectPostLoadLane(Node, 4, AArch64::LD4i64_POST);
> break;
> }
> - case ARM64ISD::ST2post: {
> + case AArch64ISD::ST2post: {
> VT = Node->getOperand(1).getValueType();
> if (VT == MVT::v8i8)
> - return SelectPostStore(Node, 2, ARM64::ST2Twov8b_POST);
> + return SelectPostStore(Node, 2, AArch64::ST2Twov8b_POST);
> else if (VT == MVT::v16i8)
> - return SelectPostStore(Node, 2, ARM64::ST2Twov16b_POST);
> + return SelectPostStore(Node, 2, AArch64::ST2Twov16b_POST);
> else if (VT == MVT::v4i16)
> - return SelectPostStore(Node, 2, ARM64::ST2Twov4h_POST);
> + return SelectPostStore(Node, 2, AArch64::ST2Twov4h_POST);
> else if (VT == MVT::v8i16)
> - return SelectPostStore(Node, 2, ARM64::ST2Twov8h_POST);
> + return SelectPostStore(Node, 2, AArch64::ST2Twov8h_POST);
> else if (VT == MVT::v2i32 || VT == MVT::v2f32)
> - return SelectPostStore(Node, 2, ARM64::ST2Twov2s_POST);
> + return SelectPostStore(Node, 2, AArch64::ST2Twov2s_POST);
> else if (VT == MVT::v4i32 || VT == MVT::v4f32)
> - return SelectPostStore(Node, 2, ARM64::ST2Twov4s_POST);
> + return SelectPostStore(Node, 2, AArch64::ST2Twov4s_POST);
> else if (VT == MVT::v2i64 || VT == MVT::v2f64)
> - return SelectPostStore(Node, 2, ARM64::ST2Twov2d_POST);
> + return SelectPostStore(Node, 2, AArch64::ST2Twov2d_POST);
> else if (VT == MVT::v1i64 || VT == MVT::v1f64)
> - return SelectPostStore(Node, 2, ARM64::ST1Twov1d_POST);
> + return SelectPostStore(Node, 2, AArch64::ST1Twov1d_POST);
> break;
> }
> - case ARM64ISD::ST3post: {
> + case AArch64ISD::ST3post: {
> VT = Node->getOperand(1).getValueType();
> if (VT == MVT::v8i8)
> - return SelectPostStore(Node, 3, ARM64::ST3Threev8b_POST);
> + return SelectPostStore(Node, 3, AArch64::ST3Threev8b_POST);
> else if (VT == MVT::v16i8)
> - return SelectPostStore(Node, 3, ARM64::ST3Threev16b_POST);
> + return SelectPostStore(Node, 3, AArch64::ST3Threev16b_POST);
> else if (VT == MVT::v4i16)
> - return SelectPostStore(Node, 3, ARM64::ST3Threev4h_POST);
> + return SelectPostStore(Node, 3, AArch64::ST3Threev4h_POST);
> else if (VT == MVT::v8i16)
> - return SelectPostStore(Node, 3, ARM64::ST3Threev8h_POST);
> + return SelectPostStore(Node, 3, AArch64::ST3Threev8h_POST);
> else if (VT == MVT::v2i32 || VT == MVT::v2f32)
> - return SelectPostStore(Node, 3, ARM64::ST3Threev2s_POST);
> + return SelectPostStore(Node, 3, AArch64::ST3Threev2s_POST);
> else if (VT == MVT::v4i32 || VT == MVT::v4f32)
> - return SelectPostStore(Node, 3, ARM64::ST3Threev4s_POST);
> + return SelectPostStore(Node, 3, AArch64::ST3Threev4s_POST);
> else if (VT == MVT::v2i64 || VT == MVT::v2f64)
> - return SelectPostStore(Node, 3, ARM64::ST3Threev2d_POST);
> + return SelectPostStore(Node, 3, AArch64::ST3Threev2d_POST);
> else if (VT == MVT::v1i64 || VT == MVT::v1f64)
> - return SelectPostStore(Node, 3, ARM64::ST1Threev1d_POST);
> + return SelectPostStore(Node, 3, AArch64::ST1Threev1d_POST);
> break;
> }
> - case ARM64ISD::ST4post: {
> + case AArch64ISD::ST4post: {
> VT = Node->getOperand(1).getValueType();
> if (VT == MVT::v8i8)
> - return SelectPostStore(Node, 4, ARM64::ST4Fourv8b_POST);
> + return SelectPostStore(Node, 4, AArch64::ST4Fourv8b_POST);
> else if (VT == MVT::v16i8)
> - return SelectPostStore(Node, 4, ARM64::ST4Fourv16b_POST);
> + return SelectPostStore(Node, 4, AArch64::ST4Fourv16b_POST);
> else if (VT == MVT::v4i16)
> - return SelectPostStore(Node, 4, ARM64::ST4Fourv4h_POST);
> + return SelectPostStore(Node, 4, AArch64::ST4Fourv4h_POST);
> else if (VT == MVT::v8i16)
> - return SelectPostStore(Node, 4, ARM64::ST4Fourv8h_POST);
> + return SelectPostStore(Node, 4, AArch64::ST4Fourv8h_POST);
> else if (VT == MVT::v2i32 || VT == MVT::v2f32)
> - return SelectPostStore(Node, 4, ARM64::ST4Fourv2s_POST);
> + return SelectPostStore(Node, 4, AArch64::ST4Fourv2s_POST);
> else if (VT == MVT::v4i32 || VT == MVT::v4f32)
> - return SelectPostStore(Node, 4, ARM64::ST4Fourv4s_POST);
> + return SelectPostStore(Node, 4, AArch64::ST4Fourv4s_POST);
> else if (VT == MVT::v2i64 || VT == MVT::v2f64)
> - return SelectPostStore(Node, 4, ARM64::ST4Fourv2d_POST);
> + return SelectPostStore(Node, 4, AArch64::ST4Fourv2d_POST);
> else if (VT == MVT::v1i64 || VT == MVT::v1f64)
> - return SelectPostStore(Node, 4, ARM64::ST1Fourv1d_POST);
> + return SelectPostStore(Node, 4, AArch64::ST1Fourv1d_POST);
> break;
> }
> - case ARM64ISD::ST1x2post: {
> + case AArch64ISD::ST1x2post: {
> VT = Node->getOperand(1).getValueType();
> if (VT == MVT::v8i8)
> - return SelectPostStore(Node, 2, ARM64::ST1Twov8b_POST);
> + return SelectPostStore(Node, 2, AArch64::ST1Twov8b_POST);
> else if (VT == MVT::v16i8)
> - return SelectPostStore(Node, 2, ARM64::ST1Twov16b_POST);
> + return SelectPostStore(Node, 2, AArch64::ST1Twov16b_POST);
> else if (VT == MVT::v4i16)
> - return SelectPostStore(Node, 2, ARM64::ST1Twov4h_POST);
> + return SelectPostStore(Node, 2, AArch64::ST1Twov4h_POST);
> else if (VT == MVT::v8i16)
> - return SelectPostStore(Node, 2, ARM64::ST1Twov8h_POST);
> + return SelectPostStore(Node, 2, AArch64::ST1Twov8h_POST);
> else if (VT == MVT::v2i32 || VT == MVT::v2f32)
> - return SelectPostStore(Node, 2, ARM64::ST1Twov2s_POST);
> + return SelectPostStore(Node, 2, AArch64::ST1Twov2s_POST);
> else if (VT == MVT::v4i32 || VT == MVT::v4f32)
> - return SelectPostStore(Node, 2, ARM64::ST1Twov4s_POST);
> + return SelectPostStore(Node, 2, AArch64::ST1Twov4s_POST);
> else if (VT == MVT::v1i64 || VT == MVT::v1f64)
> - return SelectPostStore(Node, 2, ARM64::ST1Twov1d_POST);
> + return SelectPostStore(Node, 2, AArch64::ST1Twov1d_POST);
> else if (VT == MVT::v2i64 || VT == MVT::v2f64)
> - return SelectPostStore(Node, 2, ARM64::ST1Twov2d_POST);
> + return SelectPostStore(Node, 2, AArch64::ST1Twov2d_POST);
> break;
> }
> - case ARM64ISD::ST1x3post: {
> + case AArch64ISD::ST1x3post: {
> VT = Node->getOperand(1).getValueType();
> if (VT == MVT::v8i8)
> - return SelectPostStore(Node, 3, ARM64::ST1Threev8b_POST);
> + return SelectPostStore(Node, 3, AArch64::ST1Threev8b_POST);
> else if (VT == MVT::v16i8)
> - return SelectPostStore(Node, 3, ARM64::ST1Threev16b_POST);
> + return SelectPostStore(Node, 3, AArch64::ST1Threev16b_POST);
> else if (VT == MVT::v4i16)
> - return SelectPostStore(Node, 3, ARM64::ST1Threev4h_POST);
> + return SelectPostStore(Node, 3, AArch64::ST1Threev4h_POST);
> else if (VT == MVT::v8i16)
> - return SelectPostStore(Node, 3, ARM64::ST1Threev8h_POST);
> + return SelectPostStore(Node, 3, AArch64::ST1Threev8h_POST);
> else if (VT == MVT::v2i32 || VT == MVT::v2f32)
> - return SelectPostStore(Node, 3, ARM64::ST1Threev2s_POST);
> + return SelectPostStore(Node, 3, AArch64::ST1Threev2s_POST);
> else if (VT == MVT::v4i32 || VT == MVT::v4f32)
> - return SelectPostStore(Node, 3, ARM64::ST1Threev4s_POST);
> + return SelectPostStore(Node, 3, AArch64::ST1Threev4s_POST);
> else if (VT == MVT::v1i64 || VT == MVT::v1f64)
> - return SelectPostStore(Node, 3, ARM64::ST1Threev1d_POST);
> + return SelectPostStore(Node, 3, AArch64::ST1Threev1d_POST);
> else if (VT == MVT::v2i64 || VT == MVT::v2f64)
> - return SelectPostStore(Node, 3, ARM64::ST1Threev2d_POST);
> + return SelectPostStore(Node, 3, AArch64::ST1Threev2d_POST);
> break;
> }
> - case ARM64ISD::ST1x4post: {
> + case AArch64ISD::ST1x4post: {
> VT = Node->getOperand(1).getValueType();
> if (VT == MVT::v8i8)
> - return SelectPostStore(Node, 4, ARM64::ST1Fourv8b_POST);
> + return SelectPostStore(Node, 4, AArch64::ST1Fourv8b_POST);
> else if (VT == MVT::v16i8)
> - return SelectPostStore(Node, 4, ARM64::ST1Fourv16b_POST);
> + return SelectPostStore(Node, 4, AArch64::ST1Fourv16b_POST);
> else if (VT == MVT::v4i16)
> - return SelectPostStore(Node, 4, ARM64::ST1Fourv4h_POST);
> + return SelectPostStore(Node, 4, AArch64::ST1Fourv4h_POST);
> else if (VT == MVT::v8i16)
> - return SelectPostStore(Node, 4, ARM64::ST1Fourv8h_POST);
> + return SelectPostStore(Node, 4, AArch64::ST1Fourv8h_POST);
> else if (VT == MVT::v2i32 || VT == MVT::v2f32)
> - return SelectPostStore(Node, 4, ARM64::ST1Fourv2s_POST);
> + return SelectPostStore(Node, 4, AArch64::ST1Fourv2s_POST);
> else if (VT == MVT::v4i32 || VT == MVT::v4f32)
> - return SelectPostStore(Node, 4, ARM64::ST1Fourv4s_POST);
> + return SelectPostStore(Node, 4, AArch64::ST1Fourv4s_POST);
> else if (VT == MVT::v1i64 || VT == MVT::v1f64)
> - return SelectPostStore(Node, 4, ARM64::ST1Fourv1d_POST);
> + return SelectPostStore(Node, 4, AArch64::ST1Fourv1d_POST);
> else if (VT == MVT::v2i64 || VT == MVT::v2f64)
> - return SelectPostStore(Node, 4, ARM64::ST1Fourv2d_POST);
> + return SelectPostStore(Node, 4, AArch64::ST1Fourv2d_POST);
> break;
> }
> - case ARM64ISD::ST2LANEpost: {
> + case AArch64ISD::ST2LANEpost: {
> VT = Node->getOperand(1).getValueType();
> if (VT == MVT::v16i8 || VT == MVT::v8i8)
> - return SelectPostStoreLane(Node, 2, ARM64::ST2i8_POST);
> + return SelectPostStoreLane(Node, 2, AArch64::ST2i8_POST);
> else if (VT == MVT::v8i16 || VT == MVT::v4i16)
> - return SelectPostStoreLane(Node, 2, ARM64::ST2i16_POST);
> + return SelectPostStoreLane(Node, 2, AArch64::ST2i16_POST);
> else if (VT == MVT::v4i32 || VT == MVT::v2i32 || VT == MVT::v4f32 ||
> VT == MVT::v2f32)
> - return SelectPostStoreLane(Node, 2, ARM64::ST2i32_POST);
> + return SelectPostStoreLane(Node, 2, AArch64::ST2i32_POST);
> else if (VT == MVT::v2i64 || VT == MVT::v1i64 || VT == MVT::v2f64 ||
> VT == MVT::v1f64)
> - return SelectPostStoreLane(Node, 2, ARM64::ST2i64_POST);
> + return SelectPostStoreLane(Node, 2, AArch64::ST2i64_POST);
> break;
> }
> - case ARM64ISD::ST3LANEpost: {
> + case AArch64ISD::ST3LANEpost: {
> VT = Node->getOperand(1).getValueType();
> if (VT == MVT::v16i8 || VT == MVT::v8i8)
> - return SelectPostStoreLane(Node, 3, ARM64::ST3i8_POST);
> + return SelectPostStoreLane(Node, 3, AArch64::ST3i8_POST);
> else if (VT == MVT::v8i16 || VT == MVT::v4i16)
> - return SelectPostStoreLane(Node, 3, ARM64::ST3i16_POST);
> + return SelectPostStoreLane(Node, 3, AArch64::ST3i16_POST);
> else if (VT == MVT::v4i32 || VT == MVT::v2i32 || VT == MVT::v4f32 ||
> VT == MVT::v2f32)
> - return SelectPostStoreLane(Node, 3, ARM64::ST3i32_POST);
> + return SelectPostStoreLane(Node, 3, AArch64::ST3i32_POST);
> else if (VT == MVT::v2i64 || VT == MVT::v1i64 || VT == MVT::v2f64 ||
> VT == MVT::v1f64)
> - return SelectPostStoreLane(Node, 3, ARM64::ST3i64_POST);
> + return SelectPostStoreLane(Node, 3, AArch64::ST3i64_POST);
> break;
> }
> - case ARM64ISD::ST4LANEpost: {
> + case AArch64ISD::ST4LANEpost: {
> VT = Node->getOperand(1).getValueType();
> if (VT == MVT::v16i8 || VT == MVT::v8i8)
> - return SelectPostStoreLane(Node, 4, ARM64::ST4i8_POST);
> + return SelectPostStoreLane(Node, 4, AArch64::ST4i8_POST);
> else if (VT == MVT::v8i16 || VT == MVT::v4i16)
> - return SelectPostStoreLane(Node, 4, ARM64::ST4i16_POST);
> + return SelectPostStoreLane(Node, 4, AArch64::ST4i16_POST);
> else if (VT == MVT::v4i32 || VT == MVT::v2i32 || VT == MVT::v4f32 ||
> VT == MVT::v2f32)
> - return SelectPostStoreLane(Node, 4, ARM64::ST4i32_POST);
> + return SelectPostStoreLane(Node, 4, AArch64::ST4i32_POST);
> else if (VT == MVT::v2i64 || VT == MVT::v1i64 || VT == MVT::v2f64 ||
> VT == MVT::v1f64)
> - return SelectPostStoreLane(Node, 4, ARM64::ST4i64_POST);
> + return SelectPostStoreLane(Node, 4, AArch64::ST4i64_POST);
> break;
> }
>
> @@ -3022,9 +3027,9 @@ SDNode *ARM64DAGToDAGISel::Select(SDNode
> return ResNode;
> }
>
> -/// createARM64ISelDag - This pass converts a legalized DAG into a
> -/// ARM64-specific DAG, ready for instruction scheduling.
> -FunctionPass *llvm::createARM64ISelDag(ARM64TargetMachine &TM,
> - CodeGenOpt::Level OptLevel) {
> - return new ARM64DAGToDAGISel(TM, OptLevel);
> +/// createAArch64ISelDag - This pass converts a legalized DAG into an
> +/// AArch64-specific DAG, ready for instruction scheduling.
> +FunctionPass *llvm::createAArch64ISelDag(AArch64TargetMachine &TM,
> + CodeGenOpt::Level OptLevel) {
> + return new AArch64DAGToDAGISel(TM, OptLevel);
> }
>
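For anyone skimming the rename: the switch above is the (opcode, value-type) dispatch that picks a concrete post-increment structured load/store instruction. A minimal sketch (not from the patch, assuming arm_neon.h) of the kind of source that should reach the LD3R..._POST path once the pointer bump is folded into the load:

    #include <arm_neon.h>
    #include <cstddef>
    #include <cstdint>

    // Hypothetical example: duplicating 3-element structured loads in a loop.
    // The vld3_dup plus "p += 3" pair is what the post-increment selection
    // above (e.g. LD3Rv8b_POST) is there to catch.
    void loadDupStream(const std::uint8_t *p, std::size_t n, uint8x8x3_t *out) {
      for (std::size_t i = 0; i != n; ++i) {
        out[i] = vld3_dup_u8(p);   // ideally: ld3r { v0.8b, v1.8b, v2.8b }, [x0], #3
        p += 3;
      }
    }
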
> Copied: llvm/trunk/lib/Target/AArch64/AArch64ISelLowering.cpp (from r209576, llvm/trunk/lib/Target/ARM64/ARM64ISelLowering.cpp)
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/AArch64/AArch64ISelLowering.cpp?p2=llvm/trunk/lib/Target/AArch64/AArch64ISelLowering.cpp&p1=llvm/trunk/lib/Target/ARM64/ARM64ISelLowering.cpp&r1=209576&r2=209577&rev=209577&view=diff
> ==============================================================================
> --- llvm/trunk/lib/Target/ARM64/ARM64ISelLowering.cpp (original)
> +++ llvm/trunk/lib/Target/AArch64/AArch64ISelLowering.cpp Sat May 24 07:50:23 2014
> @@ -1,4 +1,4 @@
> -//===-- ARM64ISelLowering.cpp - ARM64 DAG Lowering Implementation --------===//
> +//===-- AArch64ISelLowering.cpp - AArch64 DAG Lowering Implementation ----===//
> //
> // The LLVM Compiler Infrastructure
> //
> @@ -7,18 +7,18 @@
> //
> //===----------------------------------------------------------------------===//
> //
> -// This file implements the ARM64TargetLowering class.
> +// This file implements the AArch64TargetLowering class.
> //
> //===----------------------------------------------------------------------===//
>
> -#include "ARM64ISelLowering.h"
> -#include "ARM64PerfectShuffle.h"
> -#include "ARM64Subtarget.h"
> -#include "ARM64CallingConv.h"
> -#include "ARM64MachineFunctionInfo.h"
> -#include "ARM64TargetMachine.h"
> -#include "ARM64TargetObjectFile.h"
> -#include "MCTargetDesc/ARM64AddressingModes.h"
> +#include "AArch64ISelLowering.h"
> +#include "AArch64PerfectShuffle.h"
> +#include "AArch64Subtarget.h"
> +#include "AArch64CallingConv.h"
> +#include "AArch64MachineFunctionInfo.h"
> +#include "AArch64TargetMachine.h"
> +#include "AArch64TargetObjectFile.h"
> +#include "MCTargetDesc/AArch64AddressingModes.h"
> #include "llvm/ADT/Statistic.h"
> #include "llvm/CodeGen/CallingConvLower.h"
> #include "llvm/CodeGen/MachineFrameInfo.h"
> @@ -34,7 +34,7 @@
> #include "llvm/Target/TargetOptions.h"
> using namespace llvm;
>
> -#define DEBUG_TYPE "arm64-lower"
> +#define DEBUG_TYPE "aarch64-lower"
>
> STATISTIC(NumTailCalls, "Number of tail calls");
> STATISTIC(NumShiftInserts, "Number of vector shift inserts");
> @@ -48,38 +48,38 @@ static cl::opt<AlignMode>
> Align(cl::desc("Load/store alignment support"),
> cl::Hidden, cl::init(NoStrictAlign),
> cl::values(
> - clEnumValN(StrictAlign, "arm64-strict-align",
> + clEnumValN(StrictAlign, "aarch64-strict-align",
> "Disallow all unaligned memory accesses"),
> - clEnumValN(NoStrictAlign, "arm64-no-strict-align",
> + clEnumValN(NoStrictAlign, "aarch64-no-strict-align",
> "Allow unaligned memory accesses"),
> clEnumValEnd));
>
> // Place holder until extr generation is tested fully.
> static cl::opt<bool>
> -EnableARM64ExtrGeneration("arm64-extr-generation", cl::Hidden,
> - cl::desc("Allow ARM64 (or (shift)(shift))->extract"),
> +EnableAArch64ExtrGeneration("aarch64-extr-generation", cl::Hidden,
> + cl::desc("Allow AArch64 (or (shift)(shift))->extract"),
> cl::init(true));
>
> static cl::opt<bool>
> -EnableARM64SlrGeneration("arm64-shift-insert-generation", cl::Hidden,
> - cl::desc("Allow ARM64 SLI/SRI formation"),
> +EnableAArch64SlrGeneration("aarch64-shift-insert-generation", cl::Hidden,
> + cl::desc("Allow AArch64 SLI/SRI formation"),
> cl::init(false));
>
> //===----------------------------------------------------------------------===//
> -// ARM64 Lowering public interface.
> +// AArch64 Lowering public interface.
> //===----------------------------------------------------------------------===//
> static TargetLoweringObjectFile *createTLOF(TargetMachine &TM) {
> - if (TM.getSubtarget<ARM64Subtarget>().isTargetDarwin())
> - return new ARM64_MachoTargetObjectFile();
> + if (TM.getSubtarget<AArch64Subtarget>().isTargetDarwin())
> + return new AArch64_MachoTargetObjectFile();
>
> - return new ARM64_ELFTargetObjectFile();
> + return new AArch64_ELFTargetObjectFile();
> }
>
> -ARM64TargetLowering::ARM64TargetLowering(ARM64TargetMachine &TM)
> +AArch64TargetLowering::AArch64TargetLowering(AArch64TargetMachine &TM)
> : TargetLowering(TM, createTLOF(TM)) {
> - Subtarget = &TM.getSubtarget<ARM64Subtarget>();
> + Subtarget = &TM.getSubtarget<AArch64Subtarget>();
>
> - // ARM64 doesn't have comparisons which set GPRs or setcc instructions, so
> + // AArch64 doesn't have comparisons which set GPRs or setcc instructions, so
> // we have to make something up. Arbitrarily, choose ZeroOrOne.
> setBooleanContents(ZeroOrOneBooleanContent);
> // When comparing vectors the result sets the different elements in the
> @@ -87,19 +87,19 @@ ARM64TargetLowering::ARM64TargetLowering
> setBooleanVectorContents(ZeroOrNegativeOneBooleanContent);
>
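Not from the patch, but worth noting why vectors get ZeroOrNegativeOne here: the NEON compare instructions produce an all-ones mask per true lane. A tiny illustration, assuming arm_neon.h:

    #include <arm_neon.h>

    // NEON lane compares yield 0x00000000 or 0xFFFFFFFF per lane, which is
    // the ZeroOrNegativeOne convention and what BSL-style selects consume.
    uint32x4_t lanesEqual(int32x4_t a, int32x4_t b) {
      return vceqq_s32(a, b);
    }
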
> // Set up the register classes.
> - addRegisterClass(MVT::i32, &ARM64::GPR32allRegClass);
> - addRegisterClass(MVT::i64, &ARM64::GPR64allRegClass);
> + addRegisterClass(MVT::i32, &AArch64::GPR32allRegClass);
> + addRegisterClass(MVT::i64, &AArch64::GPR64allRegClass);
>
> if (Subtarget->hasFPARMv8()) {
> - addRegisterClass(MVT::f16, &ARM64::FPR16RegClass);
> - addRegisterClass(MVT::f32, &ARM64::FPR32RegClass);
> - addRegisterClass(MVT::f64, &ARM64::FPR64RegClass);
> - addRegisterClass(MVT::f128, &ARM64::FPR128RegClass);
> + addRegisterClass(MVT::f16, &AArch64::FPR16RegClass);
> + addRegisterClass(MVT::f32, &AArch64::FPR32RegClass);
> + addRegisterClass(MVT::f64, &AArch64::FPR64RegClass);
> + addRegisterClass(MVT::f128, &AArch64::FPR128RegClass);
> }
>
> if (Subtarget->hasNEON()) {
> - addRegisterClass(MVT::v16i8, &ARM64::FPR8RegClass);
> - addRegisterClass(MVT::v8i16, &ARM64::FPR16RegClass);
> + addRegisterClass(MVT::v16i8, &AArch64::FPR8RegClass);
> + addRegisterClass(MVT::v8i16, &AArch64::FPR16RegClass);
> // Someone set us up the NEON.
> addDRTypeForNEON(MVT::v2f32);
> addDRTypeForNEON(MVT::v8i8);
> @@ -209,8 +209,8 @@ ARM64TargetLowering::ARM64TargetLowering
>
> // Exception handling.
> // FIXME: These are guesses. Has this been defined yet?
> - setExceptionPointerRegister(ARM64::X0);
> - setExceptionSelectorRegister(ARM64::X1);
> + setExceptionPointerRegister(AArch64::X0);
> + setExceptionSelectorRegister(AArch64::X1);
>
> // Constant pool entries
> setOperationAction(ISD::ConstantPool, MVT::i64, Custom);
> @@ -228,17 +228,17 @@ ARM64TargetLowering::ARM64TargetLowering
> setOperationAction(ISD::SUBC, MVT::i64, Custom);
> setOperationAction(ISD::SUBE, MVT::i64, Custom);
>
> - // ARM64 lacks both left-rotate and popcount instructions.
> + // AArch64 lacks both left-rotate and popcount instructions.
> setOperationAction(ISD::ROTL, MVT::i32, Expand);
> setOperationAction(ISD::ROTL, MVT::i64, Expand);
>
> - // ARM64 doesn't have {U|S}MUL_LOHI.
> + // AArch64 doesn't have {U|S}MUL_LOHI.
> setOperationAction(ISD::UMUL_LOHI, MVT::i64, Expand);
> setOperationAction(ISD::SMUL_LOHI, MVT::i64, Expand);
>
>
> // Expand the undefined-at-zero variants to cttz/ctlz to their defined-at-zero
> - // counterparts, which ARM64 supports directly.
> + // counterparts, which AArch64 supports directly.
> setOperationAction(ISD::CTTZ_ZERO_UNDEF, MVT::i32, Expand);
> setOperationAction(ISD::CTLZ_ZERO_UNDEF, MVT::i32, Expand);
> setOperationAction(ISD::CTTZ_ZERO_UNDEF, MVT::i64, Expand);
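
An illustration (mine, not the patch's) of what this expansion buys: the zero-undef builtins map straight onto AArch64's defined-at-zero instructions:

    #include <cstdint>

    // Both builtins are undefined for x == 0 at the C++ level, but after the
    // expansion above they select to instructions that are defined there.
    unsigned leadingZeros(std::uint32_t x)  { return __builtin_clz(x); }  // clz
    unsigned trailingZeros(std::uint32_t x) { return __builtin_ctz(x); }  // rbit + clz
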
> @@ -279,7 +279,7 @@ ARM64TargetLowering::ARM64TargetLowering
> setOperationAction(ISD::FCOPYSIGN, MVT::f64, Custom);
> setOperationAction(ISD::FCOPYSIGN, MVT::f32, Custom);
>
> - // ARM64 has implementations of a lot of rounding-like FP operations.
> + // AArch64 has implementations of a lot of rounding-like FP operations.
> static MVT RoundingTypes[] = { MVT::f32, MVT::f64};
> for (unsigned I = 0; I < array_lengthof(RoundingTypes); ++I) {
> MVT Ty = RoundingTypes[I];
> @@ -304,8 +304,8 @@ ARM64TargetLowering::ARM64TargetLowering
> setOperationAction(ISD::FSINCOS, MVT::f32, Expand);
> }
>
> - // ARM64 does not have floating-point extending loads, i1 sign-extending load,
> - // floating-point truncating stores, or v2i32->v2i16 truncating store.
> + // AArch64 does not have floating-point extending loads, i1 sign-extending
> + // load, floating-point truncating stores, or v2i32->v2i16 truncating store.
> setLoadExtAction(ISD::EXTLOAD, MVT::f32, Expand);
> setLoadExtAction(ISD::EXTLOAD, MVT::f64, Expand);
> setLoadExtAction(ISD::EXTLOAD, MVT::f80, Expand);
> @@ -371,7 +371,7 @@ ARM64TargetLowering::ARM64TargetLowering
> MaxStoresPerMemcpy = MaxStoresPerMemcpyOptSize = 4;
> MaxStoresPerMemmove = MaxStoresPerMemmoveOptSize = 4;
>
> - setStackPointerRegisterToSaveRestore(ARM64::SP);
> + setStackPointerRegisterToSaveRestore(AArch64::SP);
>
> setSchedulingPreference(Sched::Hybrid);
>
> @@ -421,7 +421,7 @@ ARM64TargetLowering::ARM64TargetLowering
>
> setOperationAction(ISD::MUL, MVT::v1i64, Expand);
>
> - // ARM64 doesn't have a direct vector ->f32 conversion instructions for
> + // AArch64 doesn't have a direct vector ->f32 conversion instructions for
> // elements smaller than i32, so promote the input to i32 first.
> setOperationAction(ISD::UINT_TO_FP, MVT::v4i8, Promote);
> setOperationAction(ISD::SINT_TO_FP, MVT::v4i8, Promote);
> @@ -433,7 +433,7 @@ ARM64TargetLowering::ARM64TargetLowering
> setOperationAction(ISD::SINT_TO_FP, MVT::v2i64, Custom);
> setOperationAction(ISD::UINT_TO_FP, MVT::v2i64, Custom);
>
> - // ARM64 doesn't have MUL.2d:
> + // AArch64 doesn't have MUL.2d:
> setOperationAction(ISD::MUL, MVT::v2i64, Expand);
> setOperationAction(ISD::ANY_EXTEND, MVT::v4i32, Legal);
> setTruncStoreAction(MVT::v2i32, MVT::v2i16, Expand);
> @@ -461,7 +461,7 @@ ARM64TargetLowering::ARM64TargetLowering
> setLoadExtAction(ISD::EXTLOAD, (MVT::SimpleValueType)VT, Expand);
> }
>
> - // ARM64 has implementations of a lot of rounding-like FP operations.
> + // AArch64 has implementations of a lot of rounding-like FP operations.
> static MVT RoundingVecTypes[] = {MVT::v2f32, MVT::v4f32, MVT::v2f64 };
> for (unsigned I = 0; I < array_lengthof(RoundingVecTypes); ++I) {
> MVT Ty = RoundingVecTypes[I];
> @@ -475,7 +475,7 @@ ARM64TargetLowering::ARM64TargetLowering
> }
> }
>
> -void ARM64TargetLowering::addTypeForNEON(EVT VT, EVT PromotedBitwiseVT) {
> +void AArch64TargetLowering::addTypeForNEON(EVT VT, EVT PromotedBitwiseVT) {
> if (VT == MVT::v2f32) {
> setOperationAction(ISD::LOAD, VT.getSimpleVT(), Promote);
> AddPromotedToType(ISD::LOAD, VT.getSimpleVT(), MVT::v2i32);
> @@ -543,17 +543,17 @@ void ARM64TargetLowering::addTypeForNEON
> }
> }
>
> -void ARM64TargetLowering::addDRTypeForNEON(MVT VT) {
> - addRegisterClass(VT, &ARM64::FPR64RegClass);
> +void AArch64TargetLowering::addDRTypeForNEON(MVT VT) {
> + addRegisterClass(VT, &AArch64::FPR64RegClass);
> addTypeForNEON(VT, MVT::v2i32);
> }
>
> -void ARM64TargetLowering::addQRTypeForNEON(MVT VT) {
> - addRegisterClass(VT, &ARM64::FPR128RegClass);
> +void AArch64TargetLowering::addQRTypeForNEON(MVT VT) {
> + addRegisterClass(VT, &AArch64::FPR128RegClass);
> addTypeForNEON(VT, MVT::v4i32);
> }
>
> -EVT ARM64TargetLowering::getSetCCResultType(LLVMContext &, EVT VT) const {
> +EVT AArch64TargetLowering::getSetCCResultType(LLVMContext &, EVT VT) const {
> if (!VT.isVector())
> return MVT::i32;
> return VT.changeVectorElementTypeToInteger();
> @@ -562,13 +562,13 @@ EVT ARM64TargetLowering::getSetCCResultT
> /// computeKnownBitsForTargetNode - Determine which of the bits specified in
> /// Mask are known to be either zero or one and return them in the
> /// KnownZero/KnownOne bitsets.
> -void ARM64TargetLowering::computeKnownBitsForTargetNode(
> +void AArch64TargetLowering::computeKnownBitsForTargetNode(
> const SDValue Op, APInt &KnownZero, APInt &KnownOne,
> const SelectionDAG &DAG, unsigned Depth) const {
> switch (Op.getOpcode()) {
> default:
> break;
> - case ARM64ISD::CSEL: {
> + case AArch64ISD::CSEL: {
> APInt KnownZero2, KnownOne2;
> DAG.computeKnownBits(Op->getOperand(0), KnownZero, KnownOne, Depth + 1);
> DAG.computeKnownBits(Op->getOperand(1), KnownZero2, KnownOne2, Depth + 1);
> @@ -581,8 +581,8 @@ void ARM64TargetLowering::computeKnownBi
> Intrinsic::ID IntID = static_cast<Intrinsic::ID>(CN->getZExtValue());
> switch (IntID) {
> default: return;
> - case Intrinsic::arm64_ldaxr:
> - case Intrinsic::arm64_ldxr: {
> + case Intrinsic::aarch64_ldaxr:
> + case Intrinsic::aarch64_ldxr: {
> unsigned BitWidth = KnownOne.getBitWidth();
> EVT VT = cast<MemIntrinsicSDNode>(Op)->getMemoryVT();
> unsigned MemBits = VT.getScalarType().getSizeInBits();
> @@ -598,8 +598,8 @@ void ARM64TargetLowering::computeKnownBi
> switch (IntNo) {
> default:
> break;
> - case Intrinsic::arm64_neon_umaxv:
> - case Intrinsic::arm64_neon_uminv: {
> + case Intrinsic::aarch64_neon_umaxv:
> + case Intrinsic::aarch64_neon_uminv: {
> // Figure out the datatype of the vector operand. The UMINV instruction
> // will zero extend the result, so we can mark as known zero all the
> // bits larger than the element datatype. 32-bit or larger doesn't need
> @@ -622,142 +622,142 @@ void ARM64TargetLowering::computeKnownBi
> }
> }
>
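Context for the umaxv/uminv handling mentioned above (my example, and only suggestive, since the win happens at the IR level where the intrinsic returns i32): once the bits above the element width are known zero, masks and range checks on the reduction result can fold away.

    #include <arm_neon.h>
    #include <cstdint>

    // The horizontal max of 8-bit lanes can never exceed 255; with bits 8..31
    // known zero, the masking below is recognisably redundant.
    std::uint32_t maxLane(uint8x8_t v) {
      return static_cast<std::uint32_t>(vmaxv_u8(v)) & 0xffu;
    }
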
> -MVT ARM64TargetLowering::getScalarShiftAmountTy(EVT LHSTy) const {
> +MVT AArch64TargetLowering::getScalarShiftAmountTy(EVT LHSTy) const {
> return MVT::i64;
> }
>
> -unsigned ARM64TargetLowering::getMaximalGlobalOffset() const {
> - // FIXME: On ARM64, this depends on the type.
> +unsigned AArch64TargetLowering::getMaximalGlobalOffset() const {
> + // FIXME: On AArch64, this depends on the type.
> // Basically, the addressable offsets are 0 to 4095 * Ty.getSizeInBytes().
> // and the offset has to be a multiple of the related size in bytes.
> return 4095;
> }
>
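Restating the offset rule from that comment as a predicate (a sketch with a hypothetical helper name, not code from the tree): the scaled unsigned-immediate form reaches multiples of the access size up to 4095 * size, e.g. 32760 bytes for 8-byte accesses.

    #include <cstdint>

    // Offset must be a multiple of the access size and, once scaled, fit in
    // the 12-bit unsigned immediate field.
    static bool isLegalScaledOffset(std::uint64_t Offset, std::uint64_t AccessBytes) {
      return Offset % AccessBytes == 0 && Offset / AccessBytes <= 4095;
    }
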
> FastISel *
> -ARM64TargetLowering::createFastISel(FunctionLoweringInfo &funcInfo,
> - const TargetLibraryInfo *libInfo) const {
> - return ARM64::createFastISel(funcInfo, libInfo);
> +AArch64TargetLowering::createFastISel(FunctionLoweringInfo &funcInfo,
> + const TargetLibraryInfo *libInfo) const {
> + return AArch64::createFastISel(funcInfo, libInfo);
> }
>
> -const char *ARM64TargetLowering::getTargetNodeName(unsigned Opcode) const {
> +const char *AArch64TargetLowering::getTargetNodeName(unsigned Opcode) const {
> switch (Opcode) {
> default:
> return nullptr;
> - case ARM64ISD::CALL: return "ARM64ISD::CALL";
> - case ARM64ISD::ADRP: return "ARM64ISD::ADRP";
> - case ARM64ISD::ADDlow: return "ARM64ISD::ADDlow";
> - case ARM64ISD::LOADgot: return "ARM64ISD::LOADgot";
> - case ARM64ISD::RET_FLAG: return "ARM64ISD::RET_FLAG";
> - case ARM64ISD::BRCOND: return "ARM64ISD::BRCOND";
> - case ARM64ISD::CSEL: return "ARM64ISD::CSEL";
> - case ARM64ISD::FCSEL: return "ARM64ISD::FCSEL";
> - case ARM64ISD::CSINV: return "ARM64ISD::CSINV";
> - case ARM64ISD::CSNEG: return "ARM64ISD::CSNEG";
> - case ARM64ISD::CSINC: return "ARM64ISD::CSINC";
> - case ARM64ISD::THREAD_POINTER: return "ARM64ISD::THREAD_POINTER";
> - case ARM64ISD::TLSDESC_CALL: return "ARM64ISD::TLSDESC_CALL";
> - case ARM64ISD::ADC: return "ARM64ISD::ADC";
> - case ARM64ISD::SBC: return "ARM64ISD::SBC";
> - case ARM64ISD::ADDS: return "ARM64ISD::ADDS";
> - case ARM64ISD::SUBS: return "ARM64ISD::SUBS";
> - case ARM64ISD::ADCS: return "ARM64ISD::ADCS";
> - case ARM64ISD::SBCS: return "ARM64ISD::SBCS";
> - case ARM64ISD::ANDS: return "ARM64ISD::ANDS";
> - case ARM64ISD::FCMP: return "ARM64ISD::FCMP";
> - case ARM64ISD::FMIN: return "ARM64ISD::FMIN";
> - case ARM64ISD::FMAX: return "ARM64ISD::FMAX";
> - case ARM64ISD::DUP: return "ARM64ISD::DUP";
> - case ARM64ISD::DUPLANE8: return "ARM64ISD::DUPLANE8";
> - case ARM64ISD::DUPLANE16: return "ARM64ISD::DUPLANE16";
> - case ARM64ISD::DUPLANE32: return "ARM64ISD::DUPLANE32";
> - case ARM64ISD::DUPLANE64: return "ARM64ISD::DUPLANE64";
> - case ARM64ISD::MOVI: return "ARM64ISD::MOVI";
> - case ARM64ISD::MOVIshift: return "ARM64ISD::MOVIshift";
> - case ARM64ISD::MOVIedit: return "ARM64ISD::MOVIedit";
> - case ARM64ISD::MOVImsl: return "ARM64ISD::MOVImsl";
> - case ARM64ISD::FMOV: return "ARM64ISD::FMOV";
> - case ARM64ISD::MVNIshift: return "ARM64ISD::MVNIshift";
> - case ARM64ISD::MVNImsl: return "ARM64ISD::MVNImsl";
> - case ARM64ISD::BICi: return "ARM64ISD::BICi";
> - case ARM64ISD::ORRi: return "ARM64ISD::ORRi";
> - case ARM64ISD::BSL: return "ARM64ISD::BSL";
> - case ARM64ISD::NEG: return "ARM64ISD::NEG";
> - case ARM64ISD::EXTR: return "ARM64ISD::EXTR";
> - case ARM64ISD::ZIP1: return "ARM64ISD::ZIP1";
> - case ARM64ISD::ZIP2: return "ARM64ISD::ZIP2";
> - case ARM64ISD::UZP1: return "ARM64ISD::UZP1";
> - case ARM64ISD::UZP2: return "ARM64ISD::UZP2";
> - case ARM64ISD::TRN1: return "ARM64ISD::TRN1";
> - case ARM64ISD::TRN2: return "ARM64ISD::TRN2";
> - case ARM64ISD::REV16: return "ARM64ISD::REV16";
> - case ARM64ISD::REV32: return "ARM64ISD::REV32";
> - case ARM64ISD::REV64: return "ARM64ISD::REV64";
> - case ARM64ISD::EXT: return "ARM64ISD::EXT";
> - case ARM64ISD::VSHL: return "ARM64ISD::VSHL";
> - case ARM64ISD::VLSHR: return "ARM64ISD::VLSHR";
> - case ARM64ISD::VASHR: return "ARM64ISD::VASHR";
> - case ARM64ISD::CMEQ: return "ARM64ISD::CMEQ";
> - case ARM64ISD::CMGE: return "ARM64ISD::CMGE";
> - case ARM64ISD::CMGT: return "ARM64ISD::CMGT";
> - case ARM64ISD::CMHI: return "ARM64ISD::CMHI";
> - case ARM64ISD::CMHS: return "ARM64ISD::CMHS";
> - case ARM64ISD::FCMEQ: return "ARM64ISD::FCMEQ";
> - case ARM64ISD::FCMGE: return "ARM64ISD::FCMGE";
> - case ARM64ISD::FCMGT: return "ARM64ISD::FCMGT";
> - case ARM64ISD::CMEQz: return "ARM64ISD::CMEQz";
> - case ARM64ISD::CMGEz: return "ARM64ISD::CMGEz";
> - case ARM64ISD::CMGTz: return "ARM64ISD::CMGTz";
> - case ARM64ISD::CMLEz: return "ARM64ISD::CMLEz";
> - case ARM64ISD::CMLTz: return "ARM64ISD::CMLTz";
> - case ARM64ISD::FCMEQz: return "ARM64ISD::FCMEQz";
> - case ARM64ISD::FCMGEz: return "ARM64ISD::FCMGEz";
> - case ARM64ISD::FCMGTz: return "ARM64ISD::FCMGTz";
> - case ARM64ISD::FCMLEz: return "ARM64ISD::FCMLEz";
> - case ARM64ISD::FCMLTz: return "ARM64ISD::FCMLTz";
> - case ARM64ISD::NOT: return "ARM64ISD::NOT";
> - case ARM64ISD::BIT: return "ARM64ISD::BIT";
> - case ARM64ISD::CBZ: return "ARM64ISD::CBZ";
> - case ARM64ISD::CBNZ: return "ARM64ISD::CBNZ";
> - case ARM64ISD::TBZ: return "ARM64ISD::TBZ";
> - case ARM64ISD::TBNZ: return "ARM64ISD::TBNZ";
> - case ARM64ISD::TC_RETURN: return "ARM64ISD::TC_RETURN";
> - case ARM64ISD::SITOF: return "ARM64ISD::SITOF";
> - case ARM64ISD::UITOF: return "ARM64ISD::UITOF";
> - case ARM64ISD::SQSHL_I: return "ARM64ISD::SQSHL_I";
> - case ARM64ISD::UQSHL_I: return "ARM64ISD::UQSHL_I";
> - case ARM64ISD::SRSHR_I: return "ARM64ISD::SRSHR_I";
> - case ARM64ISD::URSHR_I: return "ARM64ISD::URSHR_I";
> - case ARM64ISD::SQSHLU_I: return "ARM64ISD::SQSHLU_I";
> - case ARM64ISD::WrapperLarge: return "ARM64ISD::WrapperLarge";
> - case ARM64ISD::LD2post: return "ARM64ISD::LD2post";
> - case ARM64ISD::LD3post: return "ARM64ISD::LD3post";
> - case ARM64ISD::LD4post: return "ARM64ISD::LD4post";
> - case ARM64ISD::ST2post: return "ARM64ISD::ST2post";
> - case ARM64ISD::ST3post: return "ARM64ISD::ST3post";
> - case ARM64ISD::ST4post: return "ARM64ISD::ST4post";
> - case ARM64ISD::LD1x2post: return "ARM64ISD::LD1x2post";
> - case ARM64ISD::LD1x3post: return "ARM64ISD::LD1x3post";
> - case ARM64ISD::LD1x4post: return "ARM64ISD::LD1x4post";
> - case ARM64ISD::ST1x2post: return "ARM64ISD::ST1x2post";
> - case ARM64ISD::ST1x3post: return "ARM64ISD::ST1x3post";
> - case ARM64ISD::ST1x4post: return "ARM64ISD::ST1x4post";
> - case ARM64ISD::LD1DUPpost: return "ARM64ISD::LD1DUPpost";
> - case ARM64ISD::LD2DUPpost: return "ARM64ISD::LD2DUPpost";
> - case ARM64ISD::LD3DUPpost: return "ARM64ISD::LD3DUPpost";
> - case ARM64ISD::LD4DUPpost: return "ARM64ISD::LD4DUPpost";
> - case ARM64ISD::LD1LANEpost: return "ARM64ISD::LD1LANEpost";
> - case ARM64ISD::LD2LANEpost: return "ARM64ISD::LD2LANEpost";
> - case ARM64ISD::LD3LANEpost: return "ARM64ISD::LD3LANEpost";
> - case ARM64ISD::LD4LANEpost: return "ARM64ISD::LD4LANEpost";
> - case ARM64ISD::ST2LANEpost: return "ARM64ISD::ST2LANEpost";
> - case ARM64ISD::ST3LANEpost: return "ARM64ISD::ST3LANEpost";
> - case ARM64ISD::ST4LANEpost: return "ARM64ISD::ST4LANEpost";
> + case AArch64ISD::CALL: return "AArch64ISD::CALL";
> + case AArch64ISD::ADRP: return "AArch64ISD::ADRP";
> + case AArch64ISD::ADDlow: return "AArch64ISD::ADDlow";
> + case AArch64ISD::LOADgot: return "AArch64ISD::LOADgot";
> + case AArch64ISD::RET_FLAG: return "AArch64ISD::RET_FLAG";
> + case AArch64ISD::BRCOND: return "AArch64ISD::BRCOND";
> + case AArch64ISD::CSEL: return "AArch64ISD::CSEL";
> + case AArch64ISD::FCSEL: return "AArch64ISD::FCSEL";
> + case AArch64ISD::CSINV: return "AArch64ISD::CSINV";
> + case AArch64ISD::CSNEG: return "AArch64ISD::CSNEG";
> + case AArch64ISD::CSINC: return "AArch64ISD::CSINC";
> + case AArch64ISD::THREAD_POINTER: return "AArch64ISD::THREAD_POINTER";
> + case AArch64ISD::TLSDESC_CALL: return "AArch64ISD::TLSDESC_CALL";
> + case AArch64ISD::ADC: return "AArch64ISD::ADC";
> + case AArch64ISD::SBC: return "AArch64ISD::SBC";
> + case AArch64ISD::ADDS: return "AArch64ISD::ADDS";
> + case AArch64ISD::SUBS: return "AArch64ISD::SUBS";
> + case AArch64ISD::ADCS: return "AArch64ISD::ADCS";
> + case AArch64ISD::SBCS: return "AArch64ISD::SBCS";
> + case AArch64ISD::ANDS: return "AArch64ISD::ANDS";
> + case AArch64ISD::FCMP: return "AArch64ISD::FCMP";
> + case AArch64ISD::FMIN: return "AArch64ISD::FMIN";
> + case AArch64ISD::FMAX: return "AArch64ISD::FMAX";
> + case AArch64ISD::DUP: return "AArch64ISD::DUP";
> + case AArch64ISD::DUPLANE8: return "AArch64ISD::DUPLANE8";
> + case AArch64ISD::DUPLANE16: return "AArch64ISD::DUPLANE16";
> + case AArch64ISD::DUPLANE32: return "AArch64ISD::DUPLANE32";
> + case AArch64ISD::DUPLANE64: return "AArch64ISD::DUPLANE64";
> + case AArch64ISD::MOVI: return "AArch64ISD::MOVI";
> + case AArch64ISD::MOVIshift: return "AArch64ISD::MOVIshift";
> + case AArch64ISD::MOVIedit: return "AArch64ISD::MOVIedit";
> + case AArch64ISD::MOVImsl: return "AArch64ISD::MOVImsl";
> + case AArch64ISD::FMOV: return "AArch64ISD::FMOV";
> + case AArch64ISD::MVNIshift: return "AArch64ISD::MVNIshift";
> + case AArch64ISD::MVNImsl: return "AArch64ISD::MVNImsl";
> + case AArch64ISD::BICi: return "AArch64ISD::BICi";
> + case AArch64ISD::ORRi: return "AArch64ISD::ORRi";
> + case AArch64ISD::BSL: return "AArch64ISD::BSL";
> + case AArch64ISD::NEG: return "AArch64ISD::NEG";
> + case AArch64ISD::EXTR: return "AArch64ISD::EXTR";
> + case AArch64ISD::ZIP1: return "AArch64ISD::ZIP1";
> + case AArch64ISD::ZIP2: return "AArch64ISD::ZIP2";
> + case AArch64ISD::UZP1: return "AArch64ISD::UZP1";
> + case AArch64ISD::UZP2: return "AArch64ISD::UZP2";
> + case AArch64ISD::TRN1: return "AArch64ISD::TRN1";
> + case AArch64ISD::TRN2: return "AArch64ISD::TRN2";
> + case AArch64ISD::REV16: return "AArch64ISD::REV16";
> + case AArch64ISD::REV32: return "AArch64ISD::REV32";
> + case AArch64ISD::REV64: return "AArch64ISD::REV64";
> + case AArch64ISD::EXT: return "AArch64ISD::EXT";
> + case AArch64ISD::VSHL: return "AArch64ISD::VSHL";
> + case AArch64ISD::VLSHR: return "AArch64ISD::VLSHR";
> + case AArch64ISD::VASHR: return "AArch64ISD::VASHR";
> + case AArch64ISD::CMEQ: return "AArch64ISD::CMEQ";
> + case AArch64ISD::CMGE: return "AArch64ISD::CMGE";
> + case AArch64ISD::CMGT: return "AArch64ISD::CMGT";
> + case AArch64ISD::CMHI: return "AArch64ISD::CMHI";
> + case AArch64ISD::CMHS: return "AArch64ISD::CMHS";
> + case AArch64ISD::FCMEQ: return "AArch64ISD::FCMEQ";
> + case AArch64ISD::FCMGE: return "AArch64ISD::FCMGE";
> + case AArch64ISD::FCMGT: return "AArch64ISD::FCMGT";
> + case AArch64ISD::CMEQz: return "AArch64ISD::CMEQz";
> + case AArch64ISD::CMGEz: return "AArch64ISD::CMGEz";
> + case AArch64ISD::CMGTz: return "AArch64ISD::CMGTz";
> + case AArch64ISD::CMLEz: return "AArch64ISD::CMLEz";
> + case AArch64ISD::CMLTz: return "AArch64ISD::CMLTz";
> + case AArch64ISD::FCMEQz: return "AArch64ISD::FCMEQz";
> + case AArch64ISD::FCMGEz: return "AArch64ISD::FCMGEz";
> + case AArch64ISD::FCMGTz: return "AArch64ISD::FCMGTz";
> + case AArch64ISD::FCMLEz: return "AArch64ISD::FCMLEz";
> + case AArch64ISD::FCMLTz: return "AArch64ISD::FCMLTz";
> + case AArch64ISD::NOT: return "AArch64ISD::NOT";
> + case AArch64ISD::BIT: return "AArch64ISD::BIT";
> + case AArch64ISD::CBZ: return "AArch64ISD::CBZ";
> + case AArch64ISD::CBNZ: return "AArch64ISD::CBNZ";
> + case AArch64ISD::TBZ: return "AArch64ISD::TBZ";
> + case AArch64ISD::TBNZ: return "AArch64ISD::TBNZ";
> + case AArch64ISD::TC_RETURN: return "AArch64ISD::TC_RETURN";
> + case AArch64ISD::SITOF: return "AArch64ISD::SITOF";
> + case AArch64ISD::UITOF: return "AArch64ISD::UITOF";
> + case AArch64ISD::SQSHL_I: return "AArch64ISD::SQSHL_I";
> + case AArch64ISD::UQSHL_I: return "AArch64ISD::UQSHL_I";
> + case AArch64ISD::SRSHR_I: return "AArch64ISD::SRSHR_I";
> + case AArch64ISD::URSHR_I: return "AArch64ISD::URSHR_I";
> + case AArch64ISD::SQSHLU_I: return "AArch64ISD::SQSHLU_I";
> + case AArch64ISD::WrapperLarge: return "AArch64ISD::WrapperLarge";
> + case AArch64ISD::LD2post: return "AArch64ISD::LD2post";
> + case AArch64ISD::LD3post: return "AArch64ISD::LD3post";
> + case AArch64ISD::LD4post: return "AArch64ISD::LD4post";
> + case AArch64ISD::ST2post: return "AArch64ISD::ST2post";
> + case AArch64ISD::ST3post: return "AArch64ISD::ST3post";
> + case AArch64ISD::ST4post: return "AArch64ISD::ST4post";
> + case AArch64ISD::LD1x2post: return "AArch64ISD::LD1x2post";
> + case AArch64ISD::LD1x3post: return "AArch64ISD::LD1x3post";
> + case AArch64ISD::LD1x4post: return "AArch64ISD::LD1x4post";
> + case AArch64ISD::ST1x2post: return "AArch64ISD::ST1x2post";
> + case AArch64ISD::ST1x3post: return "AArch64ISD::ST1x3post";
> + case AArch64ISD::ST1x4post: return "AArch64ISD::ST1x4post";
> + case AArch64ISD::LD1DUPpost: return "AArch64ISD::LD1DUPpost";
> + case AArch64ISD::LD2DUPpost: return "AArch64ISD::LD2DUPpost";
> + case AArch64ISD::LD3DUPpost: return "AArch64ISD::LD3DUPpost";
> + case AArch64ISD::LD4DUPpost: return "AArch64ISD::LD4DUPpost";
> + case AArch64ISD::LD1LANEpost: return "AArch64ISD::LD1LANEpost";
> + case AArch64ISD::LD2LANEpost: return "AArch64ISD::LD2LANEpost";
> + case AArch64ISD::LD3LANEpost: return "AArch64ISD::LD3LANEpost";
> + case AArch64ISD::LD4LANEpost: return "AArch64ISD::LD4LANEpost";
> + case AArch64ISD::ST2LANEpost: return "AArch64ISD::ST2LANEpost";
> + case AArch64ISD::ST3LANEpost: return "AArch64ISD::ST3LANEpost";
> + case AArch64ISD::ST4LANEpost: return "AArch64ISD::ST4LANEpost";
> }
> }
>
> MachineBasicBlock *
> -ARM64TargetLowering::EmitF128CSEL(MachineInstr *MI,
> - MachineBasicBlock *MBB) const {
> +AArch64TargetLowering::EmitF128CSEL(MachineInstr *MI,
> + MachineBasicBlock *MBB) const {
> // We materialise the F128CSEL pseudo-instruction as some control flow and a
> // phi node:
>
> @@ -793,8 +793,8 @@ ARM64TargetLowering::EmitF128CSEL(Machin
> MBB->end());
> EndBB->transferSuccessorsAndUpdatePHIs(MBB);
>
> - BuildMI(MBB, DL, TII->get(ARM64::Bcc)).addImm(CondCode).addMBB(TrueBB);
> - BuildMI(MBB, DL, TII->get(ARM64::B)).addMBB(EndBB);
> + BuildMI(MBB, DL, TII->get(AArch64::Bcc)).addImm(CondCode).addMBB(TrueBB);
> + BuildMI(MBB, DL, TII->get(AArch64::B)).addMBB(EndBB);
> MBB->addSuccessor(TrueBB);
> MBB->addSuccessor(EndBB);
>
> @@ -802,11 +802,11 @@ ARM64TargetLowering::EmitF128CSEL(Machin
> TrueBB->addSuccessor(EndBB);
>
> if (!NZCVKilled) {
> - TrueBB->addLiveIn(ARM64::NZCV);
> - EndBB->addLiveIn(ARM64::NZCV);
> + TrueBB->addLiveIn(AArch64::NZCV);
> + EndBB->addLiveIn(AArch64::NZCV);
> }
>
> - BuildMI(*EndBB, EndBB->begin(), DL, TII->get(ARM64::PHI), DestReg)
> + BuildMI(*EndBB, EndBB->begin(), DL, TII->get(AArch64::PHI), DestReg)
> .addReg(IfTrueReg)
> .addMBB(TrueBB)
> .addReg(IfFalseReg)
> @@ -817,7 +817,7 @@ ARM64TargetLowering::EmitF128CSEL(Machin
> }
>
> MachineBasicBlock *
> -ARM64TargetLowering::EmitInstrWithCustomInserter(MachineInstr *MI,
> +AArch64TargetLowering::EmitInstrWithCustomInserter(MachineInstr *MI,
> MachineBasicBlock *BB) const {
> switch (MI->getOpcode()) {
> default:
> @@ -827,7 +827,7 @@ ARM64TargetLowering::EmitInstrWithCustom
> assert(0 && "Unexpected instruction for custom inserter!");
> break;
>
> - case ARM64::F128CSEL:
> + case AArch64::F128CSEL:
> return EmitF128CSEL(MI, BB);
>
> case TargetOpcode::STACKMAP:
> @@ -838,120 +838,122 @@ ARM64TargetLowering::EmitInstrWithCustom
> }
>
> //===----------------------------------------------------------------------===//
> -// ARM64 Lowering private implementation.
> +// AArch64 Lowering private implementation.
> //===----------------------------------------------------------------------===//
>
> //===----------------------------------------------------------------------===//
> // Lowering Code
> //===----------------------------------------------------------------------===//
>
> -/// changeIntCCToARM64CC - Convert a DAG integer condition code to an ARM64 CC
> -static ARM64CC::CondCode changeIntCCToARM64CC(ISD::CondCode CC) {
> +/// changeIntCCToAArch64CC - Convert a DAG integer condition code to an AArch64
> +/// CC
> +static AArch64CC::CondCode changeIntCCToAArch64CC(ISD::CondCode CC) {
> switch (CC) {
> default:
> llvm_unreachable("Unknown condition code!");
> case ISD::SETNE:
> - return ARM64CC::NE;
> + return AArch64CC::NE;
> case ISD::SETEQ:
> - return ARM64CC::EQ;
> + return AArch64CC::EQ;
> case ISD::SETGT:
> - return ARM64CC::GT;
> + return AArch64CC::GT;
> case ISD::SETGE:
> - return ARM64CC::GE;
> + return AArch64CC::GE;
> case ISD::SETLT:
> - return ARM64CC::LT;
> + return AArch64CC::LT;
> case ISD::SETLE:
> - return ARM64CC::LE;
> + return AArch64CC::LE;
> case ISD::SETUGT:
> - return ARM64CC::HI;
> + return AArch64CC::HI;
> case ISD::SETUGE:
> - return ARM64CC::HS;
> + return AArch64CC::HS;
> case ISD::SETULT:
> - return ARM64CC::LO;
> + return AArch64CC::LO;
> case ISD::SETULE:
> - return ARM64CC::LS;
> + return AArch64CC::LS;
> }
> }
>
> -/// changeFPCCToARM64CC - Convert a DAG fp condition code to an ARM64 CC.
> -static void changeFPCCToARM64CC(ISD::CondCode CC, ARM64CC::CondCode &CondCode,
> - ARM64CC::CondCode &CondCode2) {
> - CondCode2 = ARM64CC::AL;
> +/// changeFPCCToAArch64CC - Convert a DAG fp condition code to an AArch64 CC.
> +static void changeFPCCToAArch64CC(ISD::CondCode CC,
> + AArch64CC::CondCode &CondCode,
> + AArch64CC::CondCode &CondCode2) {
> + CondCode2 = AArch64CC::AL;
> switch (CC) {
> default:
> llvm_unreachable("Unknown FP condition!");
> case ISD::SETEQ:
> case ISD::SETOEQ:
> - CondCode = ARM64CC::EQ;
> + CondCode = AArch64CC::EQ;
> break;
> case ISD::SETGT:
> case ISD::SETOGT:
> - CondCode = ARM64CC::GT;
> + CondCode = AArch64CC::GT;
> break;
> case ISD::SETGE:
> case ISD::SETOGE:
> - CondCode = ARM64CC::GE;
> + CondCode = AArch64CC::GE;
> break;
> case ISD::SETOLT:
> - CondCode = ARM64CC::MI;
> + CondCode = AArch64CC::MI;
> break;
> case ISD::SETOLE:
> - CondCode = ARM64CC::LS;
> + CondCode = AArch64CC::LS;
> break;
> case ISD::SETONE:
> - CondCode = ARM64CC::MI;
> - CondCode2 = ARM64CC::GT;
> + CondCode = AArch64CC::MI;
> + CondCode2 = AArch64CC::GT;
> break;
> case ISD::SETO:
> - CondCode = ARM64CC::VC;
> + CondCode = AArch64CC::VC;
> break;
> case ISD::SETUO:
> - CondCode = ARM64CC::VS;
> + CondCode = AArch64CC::VS;
> break;
> case ISD::SETUEQ:
> - CondCode = ARM64CC::EQ;
> - CondCode2 = ARM64CC::VS;
> + CondCode = AArch64CC::EQ;
> + CondCode2 = AArch64CC::VS;
> break;
> case ISD::SETUGT:
> - CondCode = ARM64CC::HI;
> + CondCode = AArch64CC::HI;
> break;
> case ISD::SETUGE:
> - CondCode = ARM64CC::PL;
> + CondCode = AArch64CC::PL;
> break;
> case ISD::SETLT:
> case ISD::SETULT:
> - CondCode = ARM64CC::LT;
> + CondCode = AArch64CC::LT;
> break;
> case ISD::SETLE:
> case ISD::SETULE:
> - CondCode = ARM64CC::LE;
> + CondCode = AArch64CC::LE;
> break;
> case ISD::SETNE:
> case ISD::SETUNE:
> - CondCode = ARM64CC::NE;
> + CondCode = AArch64CC::NE;
> break;
> }
> }
>
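On the two-condition results above (SETONE becomes MI plus GT, SETUEQ becomes EQ plus VS): the consumer effectively ORs the two predicates over the same FCMP flags. A scalar restatement of the SETONE case, purely illustrative:

    // "Ordered and not equal" with one compare: true if a < b (MI) or a > b
    // (GT); equal and unordered both fall through to f.
    double selectONE(double a, double b, double t, double f) {
      double r = (a < b) ? t : f;   // first conditional select on MI
      return (a > b) ? t : r;       // second select folds in GT
    }
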
> -/// changeVectorFPCCToARM64CC - Convert a DAG fp condition code to an ARM64 CC
> -/// usable with the vector instructions. Fewer operations are available without
> -/// a real NZCV register, so we have to use less efficient combinations to get
> -/// the same effect.
> -static void changeVectorFPCCToARM64CC(ISD::CondCode CC,
> - ARM64CC::CondCode &CondCode,
> - ARM64CC::CondCode &CondCode2,
> - bool &Invert) {
> +/// changeVectorFPCCToAArch64CC - Convert a DAG fp condition code to an AArch64
> +/// CC usable with the vector instructions. Fewer operations are available
> +/// without a real NZCV register, so we have to use less efficient combinations
> +/// to get the same effect.
> +static void changeVectorFPCCToAArch64CC(ISD::CondCode CC,
> + AArch64CC::CondCode &CondCode,
> + AArch64CC::CondCode &CondCode2,
> + bool &Invert) {
> Invert = false;
> switch (CC) {
> default:
> // Mostly the scalar mappings work fine.
> - changeFPCCToARM64CC(CC, CondCode, CondCode2);
> + changeFPCCToAArch64CC(CC, CondCode, CondCode2);
> break;
> case ISD::SETUO:
> Invert = true; // Fallthrough
> case ISD::SETO:
> - CondCode = ARM64CC::MI;
> - CondCode2 = ARM64CC::GE;
> + CondCode = AArch64CC::MI;
> + CondCode2 = AArch64CC::GE;
> break;
> case ISD::SETUEQ:
> case ISD::SETULT:
> @@ -961,13 +963,13 @@ static void changeVectorFPCCToARM64CC(IS
> // All of the compare-mask comparisons are ordered, but we can switch
> // between the two by a double inversion. E.g. ULE == !OGT.
> Invert = true;
> - changeFPCCToARM64CC(getSetCCInverse(CC, false), CondCode, CondCode2);
> + changeFPCCToAArch64CC(getSetCCInverse(CC, false), CondCode, CondCode2);
> break;
> }
> }
>
> static bool isLegalArithImmed(uint64_t C) {
> - // Matches ARM64DAGToDAGISel::SelectArithImmed().
> + // Matches AArch64DAGToDAGISel::SelectArithImmed().
> return (C >> 12 == 0) || ((C & 0xFFFULL) == 0 && C >> 24 == 0);
> }
>
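Concretely, that predicate accepts a plain 12-bit immediate or a 12-bit immediate shifted left by 12. A few sanity-check values (my own, mirroring the expression above):

    #include <cassert>
    #include <cstdint>

    // Local copy of the predicate, kept only to exercise a few values.
    static bool legalArithImmed(std::uint64_t C) {
      return (C >> 12 == 0) || ((C & 0xFFFULL) == 0 && C >> 24 == 0);
    }

    int main() {
      assert(legalArithImmed(4095));      // uimm12
      assert(legalArithImmed(0xFFF000));  // uimm12, LSL #12
      assert(!legalArithImmed(4097));     // would need two instructions
      return 0;
    }
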
> @@ -976,13 +978,13 @@ static SDValue emitComparison(SDValue LH
> EVT VT = LHS.getValueType();
>
> if (VT.isFloatingPoint())
> - return DAG.getNode(ARM64ISD::FCMP, dl, VT, LHS, RHS);
> + return DAG.getNode(AArch64ISD::FCMP, dl, VT, LHS, RHS);
>
> // The CMP instruction is just an alias for SUBS, and representing it as
> // SUBS means that it's possible to get CSE with subtract operations.
> // A later phase can perform the optimization of setting the destination
> // register to WZR/XZR if it ends up being unused.
> - unsigned Opcode = ARM64ISD::SUBS;
> + unsigned Opcode = AArch64ISD::SUBS;
>
> if (RHS.getOpcode() == ISD::SUB && isa<ConstantSDNode>(RHS.getOperand(0)) &&
> cast<ConstantSDNode>(RHS.getOperand(0))->getZExtValue() == 0 &&
> @@ -997,7 +999,7 @@ static SDValue emitComparison(SDValue LH
> // So, finally, the only LLVM-native comparisons that don't mention C and V
> // are SETEQ and SETNE. They're the only ones we can safely use CMN for in
> // the absence of information about op2.
> - Opcode = ARM64ISD::ADDS;
> + Opcode = AArch64ISD::ADDS;
> RHS = RHS.getOperand(1);
> } else if (LHS.getOpcode() == ISD::AND && isa<ConstantSDNode>(RHS) &&
> cast<ConstantSDNode>(RHS)->getZExtValue() == 0 &&
> @@ -1005,7 +1007,7 @@ static SDValue emitComparison(SDValue LH
> // Similarly, (CMP (and X, Y), 0) can be implemented with a TST
> // (a.k.a. ANDS) except that the flags are only guaranteed to work for one
> // of the signed comparisons.
> - Opcode = ARM64ISD::ANDS;
> + Opcode = AArch64ISD::ANDS;
> RHS = LHS.getOperand(1);
> LHS = LHS.getOperand(0);
> }
> @@ -1014,8 +1016,8 @@ static SDValue emitComparison(SDValue LH
> .getValue(1);
> }
>
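The CSE remark in that comment, in source terms (illustrative only; whether the shared SUBS actually materialises depends on the rest of the pipeline):

    #include <cstdint>

    // Modelling the compare as SUBS lets the subtraction and the comparison
    // share one flag-setting instruction, with a CSEL reading NZCV.
    std::int32_t clampedDiff(std::int32_t a, std::int32_t b) {
      std::int32_t d = a - b;
      return (a < b) ? 0 : d;   // hoped-for codegen: subs; csel ..., lt
    }
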
> -static SDValue getARM64Cmp(SDValue LHS, SDValue RHS, ISD::CondCode CC,
> - SDValue &ARM64cc, SelectionDAG &DAG, SDLoc dl) {
> +static SDValue getAArch64Cmp(SDValue LHS, SDValue RHS, ISD::CondCode CC,
> + SDValue &AArch64cc, SelectionDAG &DAG, SDLoc dl) {
> if (ConstantSDNode *RHSC = dyn_cast<ConstantSDNode>(RHS.getNode())) {
> EVT VT = RHS.getValueType();
> uint64_t C = RHSC->getZExtValue();
> @@ -1072,13 +1074,13 @@ static SDValue getARM64Cmp(SDValue LHS,
> }
>
> SDValue Cmp = emitComparison(LHS, RHS, CC, dl, DAG);
> - ARM64CC::CondCode ARM64CC = changeIntCCToARM64CC(CC);
> - ARM64cc = DAG.getConstant(ARM64CC, MVT::i32);
> + AArch64CC::CondCode AArch64CC = changeIntCCToAArch64CC(CC);
> + AArch64cc = DAG.getConstant(AArch64CC, MVT::i32);
> return Cmp;
> }
>
> static std::pair<SDValue, SDValue>
> -getARM64XALUOOp(ARM64CC::CondCode &CC, SDValue Op, SelectionDAG &DAG) {
> +getAArch64XALUOOp(AArch64CC::CondCode &CC, SDValue Op, SelectionDAG &DAG) {
> assert((Op.getValueType() == MVT::i32 || Op.getValueType() == MVT::i64) &&
> "Unsupported value type");
> SDValue Value, Overflow;
> @@ -1090,25 +1092,25 @@ getARM64XALUOOp(ARM64CC::CondCode &CC, S
> default:
> llvm_unreachable("Unknown overflow instruction!");
> case ISD::SADDO:
> - Opc = ARM64ISD::ADDS;
> - CC = ARM64CC::VS;
> + Opc = AArch64ISD::ADDS;
> + CC = AArch64CC::VS;
> break;
> case ISD::UADDO:
> - Opc = ARM64ISD::ADDS;
> - CC = ARM64CC::HS;
> + Opc = AArch64ISD::ADDS;
> + CC = AArch64CC::HS;
> break;
> case ISD::SSUBO:
> - Opc = ARM64ISD::SUBS;
> - CC = ARM64CC::VS;
> + Opc = AArch64ISD::SUBS;
> + CC = AArch64CC::VS;
> break;
> case ISD::USUBO:
> - Opc = ARM64ISD::SUBS;
> - CC = ARM64CC::LO;
> + Opc = AArch64ISD::SUBS;
> + CC = AArch64CC::LO;
> break;
> // Multiply needs a little bit extra work.
> case ISD::SMULO:
> case ISD::UMULO: {
> - CC = ARM64CC::NE;
> + CC = AArch64CC::NE;
> bool IsSigned = (Op.getOpcode() == ISD::SMULO) ? true : false;
> if (Op.getValueType() == MVT::i32) {
> unsigned ExtendOpc = IsSigned ? ISD::SIGN_EXTEND : ISD::ZERO_EXTEND;
> @@ -1121,7 +1123,7 @@ getARM64XALUOOp(ARM64CC::CondCode &CC, S
> SDValue Mul = DAG.getNode(ISD::MUL, DL, MVT::i64, LHS, RHS);
> SDValue Add = DAG.getNode(ISD::ADD, DL, MVT::i64, Mul,
> DAG.getConstant(0, MVT::i64));
> - // On ARM64 the upper 32 bits are always zero extended for a 32 bit
> + // On AArch64 the upper 32 bits are always zero extended for a 32 bit
> // operation. We need to clear out the upper 32 bits, because we used a
> // widening multiply that wrote all 64 bits. In the end this should be a
> // noop.
> @@ -1140,19 +1142,19 @@ getARM64XALUOOp(ARM64CC::CondCode &CC, S
> // It is important that LowerBits is last, otherwise the arithmetic
> // shift will not be folded into the compare (SUBS).
> SDVTList VTs = DAG.getVTList(MVT::i32, MVT::i32);
> - Overflow = DAG.getNode(ARM64ISD::SUBS, DL, VTs, UpperBits, LowerBits)
> + Overflow = DAG.getNode(AArch64ISD::SUBS, DL, VTs, UpperBits, LowerBits)
> .getValue(1);
> } else {
> // The overflow check for unsigned multiply is easy. We only need to
> // check if any of the upper 32 bits are set. This can be done with a
> // CMP (shifted register). For that we need to generate the following
> // pattern:
> - // (i64 ARM64ISD::SUBS i64 0, (i64 srl i64 %Mul, i64 32)
> + // (i64 AArch64ISD::SUBS i64 0, (i64 srl i64 %Mul, i64 32)
> SDValue UpperBits = DAG.getNode(ISD::SRL, DL, MVT::i64, Mul,
> DAG.getConstant(32, MVT::i64));
> SDVTList VTs = DAG.getVTList(MVT::i64, MVT::i32);
> Overflow =
> - DAG.getNode(ARM64ISD::SUBS, DL, VTs, DAG.getConstant(0, MVT::i64),
> + DAG.getNode(AArch64ISD::SUBS, DL, VTs, DAG.getConstant(0, MVT::i64),
> UpperBits).getValue(1);
> }
> break;
> @@ -1167,13 +1169,13 @@ getARM64XALUOOp(ARM64CC::CondCode &CC, S
> // It is important that LowerBits is last, otherwise the arithmetic
> // shift will not be folded into the compare (SUBS).
> SDVTList VTs = DAG.getVTList(MVT::i64, MVT::i32);
> - Overflow = DAG.getNode(ARM64ISD::SUBS, DL, VTs, UpperBits, LowerBits)
> + Overflow = DAG.getNode(AArch64ISD::SUBS, DL, VTs, UpperBits, LowerBits)
> .getValue(1);
> } else {
> SDValue UpperBits = DAG.getNode(ISD::MULHU, DL, MVT::i64, LHS, RHS);
> SDVTList VTs = DAG.getVTList(MVT::i64, MVT::i32);
> Overflow =
> - DAG.getNode(ARM64ISD::SUBS, DL, VTs, DAG.getConstant(0, MVT::i64),
> + DAG.getNode(AArch64ISD::SUBS, DL, VTs, DAG.getConstant(0, MVT::i64),
> UpperBits).getValue(1);
> }
> break;
> @@ -1183,15 +1185,15 @@ getARM64XALUOOp(ARM64CC::CondCode &CC, S
> if (Opc) {
> SDVTList VTs = DAG.getVTList(Op->getValueType(0), MVT::i32);
>
> - // Emit the ARM64 operation with overflow check.
> + // Emit the AArch64 operation with overflow check.
> Value = DAG.getNode(Opc, DL, VTs, LHS, RHS);
> Overflow = Value.getValue(1);
> }
> return std::make_pair(Value, Overflow);
> }
>
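The i32 SMULO check walked through in those comments, restated as plain scalar code (a sketch of the idea, not the DAG node sequence):

    #include <cstdint>

    // Widen, multiply, then compare the high word with the sign extension of
    // the low word; they differ exactly when the product overflows i32.
    static bool smulOverflows32(std::int32_t a, std::int32_t b, std::int32_t &res) {
      const std::int64_t wide =
          static_cast<std::int64_t>(a) * static_cast<std::int64_t>(b);
      res = static_cast<std::int32_t>(wide);
      return (wide >> 32) != (static_cast<std::int64_t>(res) >> 31);
    }
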
> -SDValue ARM64TargetLowering::LowerF128Call(SDValue Op, SelectionDAG &DAG,
> - RTLIB::Libcall Call) const {
> +SDValue AArch64TargetLowering::LowerF128Call(SDValue Op, SelectionDAG &DAG,
> + RTLIB::Libcall Call) const {
> SmallVector<SDValue, 2> Ops;
> for (unsigned i = 0, e = Op->getNumOperands(); i != e; ++i)
> Ops.push_back(Op.getOperand(i));
> @@ -1246,13 +1248,13 @@ static SDValue LowerXOR(SDValue Op, Sele
> // If the constants line up, perform the transform!
> if (CTVal->isNullValue() && CFVal->isAllOnesValue()) {
> SDValue CCVal;
> - SDValue Cmp = getARM64Cmp(LHS, RHS, CC, CCVal, DAG, dl);
> + SDValue Cmp = getAArch64Cmp(LHS, RHS, CC, CCVal, DAG, dl);
>
> FVal = Other;
> TVal = DAG.getNode(ISD::XOR, dl, Other.getValueType(), Other,
> DAG.getConstant(-1ULL, Other.getValueType()));
>
> - return DAG.getNode(ARM64ISD::CSEL, dl, Sel.getValueType(), FVal, TVal,
> + return DAG.getNode(AArch64ISD::CSEL, dl, Sel.getValueType(), FVal, TVal,
> CCVal, Cmp);
> }
>
> @@ -1274,17 +1276,17 @@ static SDValue LowerADDC_ADDE_SUBC_SUBE(
> default:
> assert(0 && "Invalid code");
> case ISD::ADDC:
> - Opc = ARM64ISD::ADDS;
> + Opc = AArch64ISD::ADDS;
> break;
> case ISD::SUBC:
> - Opc = ARM64ISD::SUBS;
> + Opc = AArch64ISD::SUBS;
> break;
> case ISD::ADDE:
> - Opc = ARM64ISD::ADCS;
> + Opc = AArch64ISD::ADCS;
> ExtraOp = true;
> break;
> case ISD::SUBE:
> - Opc = ARM64ISD::SBCS;
> + Opc = AArch64ISD::SBCS;
> ExtraOp = true;
> break;
> }
> @@ -1300,10 +1302,10 @@ static SDValue LowerXALUO(SDValue Op, Se
> if (!DAG.getTargetLoweringInfo().isTypeLegal(Op.getValueType()))
> return SDValue();
>
> - ARM64CC::CondCode CC;
> + AArch64CC::CondCode CC;
> // The actual operation that sets the overflow or carry flag.
> SDValue Value, Overflow;
> - std::tie(Value, Overflow) = getARM64XALUOOp(CC, Op, DAG);
> + std::tie(Value, Overflow) = getAArch64XALUOOp(CC, Op, DAG);
>
> // We use 0 and 1 as false and true values.
> SDValue TVal = DAG.getConstant(1, MVT::i32);
> @@ -1313,8 +1315,8 @@ static SDValue LowerXALUO(SDValue Op, Se
> // too. This will allow it to be selected to a single instruction:
> // CSINC Wd, WZR, WZR, invert(cond).
> SDValue CCVal = DAG.getConstant(getInvertedCondCode(CC), MVT::i32);
> - Overflow = DAG.getNode(ARM64ISD::CSEL, SDLoc(Op), MVT::i32, FVal, TVal, CCVal,
> - Overflow);
> + Overflow = DAG.getNode(AArch64ISD::CSEL, SDLoc(Op), MVT::i32, FVal, TVal,
> + CCVal, Overflow);
>
> SDVTList VTs = DAG.getVTList(Op.getValueType(), MVT::i32);
> return DAG.getNode(ISD::MERGE_VALUES, SDLoc(Op), VTs, Value, Overflow);
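
Aside for readers following the CSINC trick referenced in the comment above: the identity being relied on is CSINC Wd, Wn, Wm, cond = cond ? Wn : Wm + 1, so with both sources tied to WZR and the condition inverted, a single instruction materialises the 0/1 overflow value. A tiny standalone sketch of that identity (the csinc helper below is an illustrative model, not LLVM code):

#include <cstdint>
#include <cstdio>

// CSINC Wd, Wn, Wm, cond  computes  Wd = cond ? Wn : Wm + 1.
static uint32_t csinc(uint32_t n, uint32_t m, bool cond) { return cond ? n : m + 1; }

int main() {
  // With both sources tied to WZR and the condition inverted, the result is
  // exactly the 0/1 overflow value the lowering above wants from one instruction.
  for (bool overflow : {false, true})
    std::printf("overflow=%d -> %u\n", (int)overflow,
                (unsigned)csinc(0, 0, /*invert(cond)=*/!overflow));
  return 0;
}
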
> @@ -1347,12 +1349,12 @@ static SDValue LowerPREFETCH(SDValue Op,
> unsigned PrfOp = (IsWrite << 4) | // Load/Store bit
> (Locality << 1) | // Cache level bits
> (unsigned)IsStream; // Stream bit
> - return DAG.getNode(ARM64ISD::PREFETCH, DL, MVT::Other, Op.getOperand(0),
> + return DAG.getNode(AArch64ISD::PREFETCH, DL, MVT::Other, Op.getOperand(0),
> DAG.getConstant(PrfOp, MVT::i32), Op.getOperand(1));
> }
>
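
Aside on the prfop immediate built just above: it is a straight bit-pack of the three prefetch operands. A quick sketch of the same packing (helper name and example values are illustrative, and any remapping of the IR locality value earlier in the function is ignored, since this hunk doesn't show it):

#include <cstdio>

// Mirror of the packing above: bit 4 = load/store, bits 2:1 = cache level,
// bit 0 = streaming hint.
static unsigned prfOp(bool isWrite, unsigned locality, bool isStream) {
  return (unsigned(isWrite) << 4) | (locality << 1) | unsigned(isStream);
}

int main() {
  // A non-streaming read prefetch with locality value 1, purely as an example.
  std::printf("prfop = %u\n", prfOp(false, 1, false));  // prints 2
  return 0;
}
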
> -SDValue ARM64TargetLowering::LowerFP_EXTEND(SDValue Op,
> - SelectionDAG &DAG) const {
> +SDValue AArch64TargetLowering::LowerFP_EXTEND(SDValue Op,
> + SelectionDAG &DAG) const {
> assert(Op.getValueType() == MVT::f128 && "Unexpected lowering");
>
> RTLIB::Libcall LC;
> @@ -1361,8 +1363,8 @@ SDValue ARM64TargetLowering::LowerFP_EXT
> return LowerF128Call(Op, DAG, LC);
> }
>
> -SDValue ARM64TargetLowering::LowerFP_ROUND(SDValue Op,
> - SelectionDAG &DAG) const {
> +SDValue AArch64TargetLowering::LowerFP_ROUND(SDValue Op,
> + SelectionDAG &DAG) const {
> if (Op.getOperand(0).getValueType() != MVT::f128) {
> // It's legal except when f128 is involved
> return Op;
> @@ -1380,7 +1382,7 @@ SDValue ARM64TargetLowering::LowerFP_ROU
> }
>
> static SDValue LowerVectorFP_TO_INT(SDValue Op, SelectionDAG &DAG) {
> - // Warning: We maintain cost tables in ARM64TargetTransformInfo.cpp.
> + // Warning: We maintain cost tables in AArch64TargetTransformInfo.cpp.
> // Any additional optimization in this function should be recorded
> // in the cost tables.
> EVT InVT = Op.getOperand(0).getValueType();
> @@ -1406,8 +1408,8 @@ static SDValue LowerVectorFP_TO_INT(SDVa
> return SDValue();
> }
>
> -SDValue ARM64TargetLowering::LowerFP_TO_INT(SDValue Op,
> - SelectionDAG &DAG) const {
> +SDValue AArch64TargetLowering::LowerFP_TO_INT(SDValue Op,
> + SelectionDAG &DAG) const {
> if (Op.getOperand(0).getValueType().isVector())
> return LowerVectorFP_TO_INT(Op, DAG);
>
> @@ -1431,7 +1433,7 @@ SDValue ARM64TargetLowering::LowerFP_TO_
> }
>
> static SDValue LowerVectorINT_TO_FP(SDValue Op, SelectionDAG &DAG) {
> - // Warning: We maintain cost tables in ARM64TargetTransformInfo.cpp.
> + // Warning: We maintain cost tables in AArch64TargetTransformInfo.cpp.
> // Any additional optimization in this function should be recorded
> // in the cost tables.
> EVT VT = Op.getValueType();
> @@ -1467,7 +1469,7 @@ static SDValue LowerVectorINT_TO_FP(SDVa
> return DAG.getNode(ISD::BUILD_VECTOR, dl, VT, BuildVectorOps);
> }
>
> -SDValue ARM64TargetLowering::LowerINT_TO_FP(SDValue Op,
> +SDValue AArch64TargetLowering::LowerINT_TO_FP(SDValue Op,
> SelectionDAG &DAG) const {
> if (Op.getValueType().isVector())
> return LowerVectorINT_TO_FP(Op, DAG);
> @@ -1490,7 +1492,8 @@ SDValue ARM64TargetLowering::LowerINT_TO
> return LowerF128Call(Op, DAG, LC);
> }
>
> -SDValue ARM64TargetLowering::LowerFSINCOS(SDValue Op, SelectionDAG &DAG) const {
> +SDValue AArch64TargetLowering::LowerFSINCOS(SDValue Op,
> + SelectionDAG &DAG) const {
> // For iOS, we want to call an alternative entry point: __sincos_stret,
> // which returns the values in two S / D registers.
> SDLoc dl(Op);
> @@ -1520,8 +1523,8 @@ SDValue ARM64TargetLowering::LowerFSINCO
> return CallResult.first;
> }
>
> -SDValue ARM64TargetLowering::LowerOperation(SDValue Op,
> - SelectionDAG &DAG) const {
> +SDValue AArch64TargetLowering::LowerOperation(SDValue Op,
> + SelectionDAG &DAG) const {
> switch (Op.getOpcode()) {
> default:
> llvm_unreachable("unimplemented operand");
> @@ -1621,7 +1624,7 @@ SDValue ARM64TargetLowering::LowerOperat
> }
>
> /// getFunctionAlignment - Return the Log2 alignment of this function.
> -unsigned ARM64TargetLowering::getFunctionAlignment(const Function *F) const {
> +unsigned AArch64TargetLowering::getFunctionAlignment(const Function *F) const {
> return 2;
> }
>
> @@ -1629,26 +1632,26 @@ unsigned ARM64TargetLowering::getFunctio
> // Calling Convention Implementation
> //===----------------------------------------------------------------------===//
>
> -#include "ARM64GenCallingConv.inc"
> +#include "AArch64GenCallingConv.inc"
>
> /// Selects the correct CCAssignFn for the given CallingConvention
> /// value.
> -CCAssignFn *ARM64TargetLowering::CCAssignFnForCall(CallingConv::ID CC,
> - bool IsVarArg) const {
> +CCAssignFn *AArch64TargetLowering::CCAssignFnForCall(CallingConv::ID CC,
> + bool IsVarArg) const {
> switch (CC) {
> default:
> llvm_unreachable("Unsupported calling convention.");
> case CallingConv::WebKit_JS:
> - return CC_ARM64_WebKit_JS;
> + return CC_AArch64_WebKit_JS;
> case CallingConv::C:
> case CallingConv::Fast:
> if (!Subtarget->isTargetDarwin())
> - return CC_ARM64_AAPCS;
> - return IsVarArg ? CC_ARM64_DarwinPCS_VarArg : CC_ARM64_DarwinPCS;
> + return CC_AArch64_AAPCS;
> + return IsVarArg ? CC_AArch64_DarwinPCS_VarArg : CC_AArch64_DarwinPCS;
> }
> }
>
> -SDValue ARM64TargetLowering::LowerFormalArguments(
> +SDValue AArch64TargetLowering::LowerFormalArguments(
> SDValue Chain, CallingConv::ID CallConv, bool isVarArg,
> const SmallVectorImpl<ISD::InputArg> &Ins, SDLoc DL, SelectionDAG &DAG,
> SmallVectorImpl<SDValue> &InVals) const {
> @@ -1662,7 +1665,7 @@ SDValue ARM64TargetLowering::LowerFormal
>
> // At this point, Ins[].VT may already be promoted to i32. To correctly
> // handle passing i8 as i8 instead of i32 on stack, we pass in both i32 and
> - // i8 to CC_ARM64_AAPCS with i32 being ValVT and i8 being LocVT.
> + // i8 to CC_AArch64_AAPCS with i32 being ValVT and i8 being LocVT.
> // Since AnalyzeFormalArguments uses Ins[].VT for both ValVT and LocVT, here
> // we use a special version of AnalyzeFormalArguments to pass in ValVT and
> // LocVT.
> @@ -1718,15 +1721,15 @@ SDValue ARM64TargetLowering::LowerFormal
> const TargetRegisterClass *RC;
>
> if (RegVT == MVT::i32)
> - RC = &ARM64::GPR32RegClass;
> + RC = &AArch64::GPR32RegClass;
> else if (RegVT == MVT::i64)
> - RC = &ARM64::GPR64RegClass;
> + RC = &AArch64::GPR64RegClass;
> else if (RegVT == MVT::f32)
> - RC = &ARM64::FPR32RegClass;
> + RC = &AArch64::FPR32RegClass;
> else if (RegVT == MVT::f64 || RegVT.is64BitVector())
> - RC = &ARM64::FPR64RegClass;
> + RC = &AArch64::FPR64RegClass;
> else if (RegVT == MVT::f128 || RegVT.is128BitVector())
> - RC = &ARM64::FPR128RegClass;
> + RC = &AArch64::FPR128RegClass;
> else
> llvm_unreachable("RegVT not supported by FORMAL_ARGUMENTS Lowering");
>
> @@ -1802,7 +1805,7 @@ SDValue ARM64TargetLowering::LowerFormal
> saveVarArgRegisters(CCInfo, DAG, DL, Chain);
> }
>
> - ARM64FunctionInfo *AFI = MF.getInfo<ARM64FunctionInfo>();
> + AArch64FunctionInfo *AFI = MF.getInfo<AArch64FunctionInfo>();
> // This will point to the next argument passed via stack.
> unsigned StackOffset = CCInfo.getNextStackOffset();
> // We currently pass all varargs at 8-byte alignment.
> @@ -1810,7 +1813,7 @@ SDValue ARM64TargetLowering::LowerFormal
> AFI->setVarArgsStackIndex(MFI->CreateFixedObject(4, StackOffset, true));
> }
>
> - ARM64FunctionInfo *FuncInfo = MF.getInfo<ARM64FunctionInfo>();
> + AArch64FunctionInfo *FuncInfo = MF.getInfo<AArch64FunctionInfo>();
> unsigned StackArgSize = CCInfo.getNextStackOffset();
> bool TailCallOpt = MF.getTarget().Options.GuaranteedTailCallOpt;
> if (DoesCalleeRestoreStack(CallConv, TailCallOpt)) {
> @@ -1834,18 +1837,18 @@ SDValue ARM64TargetLowering::LowerFormal
> return Chain;
> }
>
> -void ARM64TargetLowering::saveVarArgRegisters(CCState &CCInfo,
> - SelectionDAG &DAG, SDLoc DL,
> - SDValue &Chain) const {
> +void AArch64TargetLowering::saveVarArgRegisters(CCState &CCInfo,
> + SelectionDAG &DAG, SDLoc DL,
> + SDValue &Chain) const {
> MachineFunction &MF = DAG.getMachineFunction();
> MachineFrameInfo *MFI = MF.getFrameInfo();
> - ARM64FunctionInfo *FuncInfo = MF.getInfo<ARM64FunctionInfo>();
> + AArch64FunctionInfo *FuncInfo = MF.getInfo<AArch64FunctionInfo>();
>
> SmallVector<SDValue, 8> MemOps;
>
> - static const MCPhysReg GPRArgRegs[] = { ARM64::X0, ARM64::X1, ARM64::X2,
> - ARM64::X3, ARM64::X4, ARM64::X5,
> - ARM64::X6, ARM64::X7 };
> + static const MCPhysReg GPRArgRegs[] = { AArch64::X0, AArch64::X1, AArch64::X2,
> + AArch64::X3, AArch64::X4, AArch64::X5,
> + AArch64::X6, AArch64::X7 };
> static const unsigned NumGPRArgRegs = array_lengthof(GPRArgRegs);
> unsigned FirstVariadicGPR =
> CCInfo.getFirstUnallocated(GPRArgRegs, NumGPRArgRegs);
> @@ -1858,7 +1861,7 @@ void ARM64TargetLowering::saveVarArgRegi
> SDValue FIN = DAG.getFrameIndex(GPRIdx, getPointerTy());
>
> for (unsigned i = FirstVariadicGPR; i < NumGPRArgRegs; ++i) {
> - unsigned VReg = MF.addLiveIn(GPRArgRegs[i], &ARM64::GPR64RegClass);
> + unsigned VReg = MF.addLiveIn(GPRArgRegs[i], &AArch64::GPR64RegClass);
> SDValue Val = DAG.getCopyFromReg(Chain, DL, VReg, MVT::i64);
> SDValue Store =
> DAG.getStore(Val.getValue(1), DL, Val, FIN,
> @@ -1872,9 +1875,9 @@ void ARM64TargetLowering::saveVarArgRegi
> FuncInfo->setVarArgsGPRSize(GPRSaveSize);
>
> if (Subtarget->hasFPARMv8()) {
> - static const MCPhysReg FPRArgRegs[] = { ARM64::Q0, ARM64::Q1, ARM64::Q2,
> - ARM64::Q3, ARM64::Q4, ARM64::Q5,
> - ARM64::Q6, ARM64::Q7 };
> + static const MCPhysReg FPRArgRegs[] = {
> + AArch64::Q0, AArch64::Q1, AArch64::Q2, AArch64::Q3,
> + AArch64::Q4, AArch64::Q5, AArch64::Q6, AArch64::Q7};
> static const unsigned NumFPRArgRegs = array_lengthof(FPRArgRegs);
> unsigned FirstVariadicFPR =
> CCInfo.getFirstUnallocated(FPRArgRegs, NumFPRArgRegs);
> @@ -1887,7 +1890,7 @@ void ARM64TargetLowering::saveVarArgRegi
> SDValue FIN = DAG.getFrameIndex(FPRIdx, getPointerTy());
>
> for (unsigned i = FirstVariadicFPR; i < NumFPRArgRegs; ++i) {
> - unsigned VReg = MF.addLiveIn(FPRArgRegs[i], &ARM64::FPR128RegClass);
> + unsigned VReg = MF.addLiveIn(FPRArgRegs[i], &AArch64::FPR128RegClass);
> SDValue Val = DAG.getCopyFromReg(Chain, DL, VReg, MVT::f128);
>
> SDValue Store =
> @@ -1909,13 +1912,14 @@ void ARM64TargetLowering::saveVarArgRegi
>
> /// LowerCallResult - Lower the result values of a call into the
> /// appropriate copies out of appropriate physical registers.
> -SDValue ARM64TargetLowering::LowerCallResult(
> +SDValue AArch64TargetLowering::LowerCallResult(
> SDValue Chain, SDValue InFlag, CallingConv::ID CallConv, bool isVarArg,
> const SmallVectorImpl<ISD::InputArg> &Ins, SDLoc DL, SelectionDAG &DAG,
> SmallVectorImpl<SDValue> &InVals, bool isThisReturn,
> SDValue ThisVal) const {
> - CCAssignFn *RetCC = CallConv == CallingConv::WebKit_JS ? RetCC_ARM64_WebKit_JS
> - : RetCC_ARM64_AAPCS;
> + CCAssignFn *RetCC = CallConv == CallingConv::WebKit_JS
> + ? RetCC_AArch64_WebKit_JS
> + : RetCC_AArch64_AAPCS;
> // Assign locations to each value returned by this call.
> SmallVector<CCValAssign, 16> RVLocs;
> CCState CCInfo(CallConv, isVarArg, DAG.getMachineFunction(),
> @@ -1956,7 +1960,7 @@ SDValue ARM64TargetLowering::LowerCallRe
> return Chain;
> }
>
> -bool ARM64TargetLowering::isEligibleForTailCallOptimization(
> +bool AArch64TargetLowering::isEligibleForTailCallOptimization(
> SDValue Callee, CallingConv::ID CalleeCC, bool isVarArg,
> bool isCalleeStructRet, bool isCallerStructRet,
> const SmallVectorImpl<ISD::OutputArg> &Outs,
> @@ -2054,17 +2058,17 @@ bool ARM64TargetLowering::isEligibleForT
>
> CCInfo.AnalyzeCallOperands(Outs, CCAssignFnForCall(CalleeCC, isVarArg));
>
> - const ARM64FunctionInfo *FuncInfo = MF.getInfo<ARM64FunctionInfo>();
> + const AArch64FunctionInfo *FuncInfo = MF.getInfo<AArch64FunctionInfo>();
>
> // If the stack arguments for this call would fit into our own save area then
> // the call can be made tail.
> return CCInfo.getNextStackOffset() <= FuncInfo->getBytesInStackArgArea();
> }
>
> -SDValue ARM64TargetLowering::addTokenForArgument(SDValue Chain,
> - SelectionDAG &DAG,
> - MachineFrameInfo *MFI,
> - int ClobberedFI) const {
> +SDValue AArch64TargetLowering::addTokenForArgument(SDValue Chain,
> + SelectionDAG &DAG,
> + MachineFrameInfo *MFI,
> + int ClobberedFI) const {
> SmallVector<SDValue, 8> ArgChains;
> int64_t FirstByte = MFI->getObjectOffset(ClobberedFI);
> int64_t LastByte = FirstByte + MFI->getObjectSize(ClobberedFI) - 1;
> @@ -2094,19 +2098,20 @@ SDValue ARM64TargetLowering::addTokenFor
> return DAG.getNode(ISD::TokenFactor, SDLoc(Chain), MVT::Other, ArgChains);
> }
>
> -bool ARM64TargetLowering::DoesCalleeRestoreStack(CallingConv::ID CallCC,
> - bool TailCallOpt) const {
> +bool AArch64TargetLowering::DoesCalleeRestoreStack(CallingConv::ID CallCC,
> + bool TailCallOpt) const {
> return CallCC == CallingConv::Fast && TailCallOpt;
> }
>
> -bool ARM64TargetLowering::IsTailCallConvention(CallingConv::ID CallCC) const {
> +bool AArch64TargetLowering::IsTailCallConvention(CallingConv::ID CallCC) const {
> return CallCC == CallingConv::Fast;
> }
>
> /// LowerCall - Lower a call to a callseq_start + CALL + callseq_end chain,
> /// and add input and output parameter nodes.
> -SDValue ARM64TargetLowering::LowerCall(CallLoweringInfo &CLI,
> - SmallVectorImpl<SDValue> &InVals) const {
> +SDValue
> +AArch64TargetLowering::LowerCall(CallLoweringInfo &CLI,
> + SmallVectorImpl<SDValue> &InVals) const {
> SelectionDAG &DAG = CLI.DAG;
> SDLoc &DL = CLI.DL;
> SmallVector<ISD::OutputArg, 32> &Outs = CLI.Outs;
> @@ -2122,7 +2127,7 @@ SDValue ARM64TargetLowering::LowerCall(C
> bool IsStructRet = (Outs.empty()) ? false : Outs[0].Flags.isSRet();
> bool IsThisReturn = false;
>
> - ARM64FunctionInfo *FuncInfo = MF.getInfo<ARM64FunctionInfo>();
> + AArch64FunctionInfo *FuncInfo = MF.getInfo<AArch64FunctionInfo>();
> bool TailCallOpt = MF.getTarget().Options.GuaranteedTailCallOpt;
> bool IsSibCall = false;
>
> @@ -2166,7 +2171,7 @@ SDValue ARM64TargetLowering::LowerCall(C
> } else {
> // At this point, Outs[].VT may already be promoted to i32. To correctly
> // handle passing i8 as i8 instead of i32 on stack, we pass in both i32 and
> - // i8 to CC_ARM64_AAPCS with i32 being ValVT and i8 being LocVT.
> + // i8 to CC_AArch64_AAPCS with i32 being ValVT and i8 being LocVT.
> // Since AnalyzeCallOperands uses Ins[].VT for both ValVT and LocVT, here
> // we use a special version of AnalyzeCallOperands to pass in ValVT and
> // LocVT.
> @@ -2234,7 +2239,7 @@ SDValue ARM64TargetLowering::LowerCall(C
> Chain =
> DAG.getCALLSEQ_START(Chain, DAG.getIntPtrConstant(NumBytes, true), DL);
>
> - SDValue StackPtr = DAG.getCopyFromReg(Chain, DL, ARM64::SP, getPointerTy());
> + SDValue StackPtr = DAG.getCopyFromReg(Chain, DL, AArch64::SP, getPointerTy());
>
> SmallVector<std::pair<unsigned, SDValue>, 8> RegsToPass;
> SmallVector<SDValue, 8> MemOpChains;
> @@ -2367,15 +2372,15 @@ SDValue ARM64TargetLowering::LowerCall(C
> Callee = DAG.getTargetGlobalAddress(GV, DL, getPointerTy(), 0, 0);
> else {
> Callee = DAG.getTargetGlobalAddress(GV, DL, getPointerTy(), 0,
> - ARM64II::MO_GOT);
> - Callee = DAG.getNode(ARM64ISD::LOADgot, DL, getPointerTy(), Callee);
> + AArch64II::MO_GOT);
> + Callee = DAG.getNode(AArch64ISD::LOADgot, DL, getPointerTy(), Callee);
> }
> } else if (ExternalSymbolSDNode *S =
> dyn_cast<ExternalSymbolSDNode>(Callee)) {
> const char *Sym = S->getSymbol();
> Callee =
> - DAG.getTargetExternalSymbol(Sym, getPointerTy(), ARM64II::MO_GOT);
> - Callee = DAG.getNode(ARM64ISD::LOADgot, DL, getPointerTy(), Callee);
> + DAG.getTargetExternalSymbol(Sym, getPointerTy(), AArch64II::MO_GOT);
> + Callee = DAG.getNode(AArch64ISD::LOADgot, DL, getPointerTy(), Callee);
> }
> } else if (GlobalAddressSDNode *G = dyn_cast<GlobalAddressSDNode>(Callee)) {
> const GlobalValue *GV = G->getGlobal();
> @@ -2415,7 +2420,8 @@ SDValue ARM64TargetLowering::LowerCall(C
> // Add a register mask operand representing the call-preserved registers.
> const uint32_t *Mask;
> const TargetRegisterInfo *TRI = getTargetMachine().getRegisterInfo();
> - const ARM64RegisterInfo *ARI = static_cast<const ARM64RegisterInfo *>(TRI);
> + const AArch64RegisterInfo *ARI =
> + static_cast<const AArch64RegisterInfo *>(TRI);
> if (IsThisReturn) {
> // For 'this' returns, use the X0-preserving mask if applicable
> Mask = ARI->getThisReturnPreservedMask(CallConv);
> @@ -2437,10 +2443,10 @@ SDValue ARM64TargetLowering::LowerCall(C
> // If we're doing a tail call, use a TC_RETURN here rather than an
> // actual call instruction.
> if (IsTailCall)
> - return DAG.getNode(ARM64ISD::TC_RETURN, DL, NodeTys, Ops);
> + return DAG.getNode(AArch64ISD::TC_RETURN, DL, NodeTys, Ops);
>
> // Returns a chain and a flag for retval copy to use.
> - Chain = DAG.getNode(ARM64ISD::CALL, DL, NodeTys, Ops);
> + Chain = DAG.getNode(AArch64ISD::CALL, DL, NodeTys, Ops);
> InFlag = Chain.getValue(1);
>
> uint64_t CalleePopBytes = DoesCalleeRestoreStack(CallConv, TailCallOpt)
> @@ -2460,24 +2466,26 @@ SDValue ARM64TargetLowering::LowerCall(C
> IsThisReturn ? OutVals[0] : SDValue());
> }
>
> -bool ARM64TargetLowering::CanLowerReturn(
> +bool AArch64TargetLowering::CanLowerReturn(
> CallingConv::ID CallConv, MachineFunction &MF, bool isVarArg,
> const SmallVectorImpl<ISD::OutputArg> &Outs, LLVMContext &Context) const {
> - CCAssignFn *RetCC = CallConv == CallingConv::WebKit_JS ? RetCC_ARM64_WebKit_JS
> - : RetCC_ARM64_AAPCS;
> + CCAssignFn *RetCC = CallConv == CallingConv::WebKit_JS
> + ? RetCC_AArch64_WebKit_JS
> + : RetCC_AArch64_AAPCS;
> SmallVector<CCValAssign, 16> RVLocs;
> CCState CCInfo(CallConv, isVarArg, MF, getTargetMachine(), RVLocs, Context);
> return CCInfo.CheckReturn(Outs, RetCC);
> }
>
> SDValue
> -ARM64TargetLowering::LowerReturn(SDValue Chain, CallingConv::ID CallConv,
> - bool isVarArg,
> - const SmallVectorImpl<ISD::OutputArg> &Outs,
> - const SmallVectorImpl<SDValue> &OutVals,
> - SDLoc DL, SelectionDAG &DAG) const {
> - CCAssignFn *RetCC = CallConv == CallingConv::WebKit_JS ? RetCC_ARM64_WebKit_JS
> - : RetCC_ARM64_AAPCS;
> +AArch64TargetLowering::LowerReturn(SDValue Chain, CallingConv::ID CallConv,
> + bool isVarArg,
> + const SmallVectorImpl<ISD::OutputArg> &Outs,
> + const SmallVectorImpl<SDValue> &OutVals,
> + SDLoc DL, SelectionDAG &DAG) const {
> + CCAssignFn *RetCC = CallConv == CallingConv::WebKit_JS
> + ? RetCC_AArch64_WebKit_JS
> + : RetCC_AArch64_AAPCS;
> SmallVector<CCValAssign, 16> RVLocs;
> CCState CCInfo(CallConv, isVarArg, DAG.getMachineFunction(),
> getTargetMachine(), RVLocs, *DAG.getContext());
> @@ -2513,15 +2521,15 @@ ARM64TargetLowering::LowerReturn(SDValue
> if (Flag.getNode())
> RetOps.push_back(Flag);
>
> - return DAG.getNode(ARM64ISD::RET_FLAG, DL, MVT::Other, RetOps);
> + return DAG.getNode(AArch64ISD::RET_FLAG, DL, MVT::Other, RetOps);
> }
>
> //===----------------------------------------------------------------------===//
> // Other Lowering Code
> //===----------------------------------------------------------------------===//
>
> -SDValue ARM64TargetLowering::LowerGlobalAddress(SDValue Op,
> - SelectionDAG &DAG) const {
> +SDValue AArch64TargetLowering::LowerGlobalAddress(SDValue Op,
> + SelectionDAG &DAG) const {
> EVT PtrVT = getPointerTy();
> SDLoc DL(Op);
> const GlobalValue *GV = cast<GlobalAddressSDNode>(Op)->getGlobal();
> @@ -2532,31 +2540,31 @@ SDValue ARM64TargetLowering::LowerGlobal
> "unexpected offset in global node");
>
> // This also catches the large code model case for Darwin.
> - if ((OpFlags & ARM64II::MO_GOT) != 0) {
> + if ((OpFlags & AArch64II::MO_GOT) != 0) {
> SDValue GotAddr = DAG.getTargetGlobalAddress(GV, DL, PtrVT, 0, OpFlags);
> // FIXME: Once remat is capable of dealing with instructions with register
> // operands, expand this into two nodes instead of using a wrapper node.
> - return DAG.getNode(ARM64ISD::LOADgot, DL, PtrVT, GotAddr);
> + return DAG.getNode(AArch64ISD::LOADgot, DL, PtrVT, GotAddr);
> }
>
> if (getTargetMachine().getCodeModel() == CodeModel::Large) {
> - const unsigned char MO_NC = ARM64II::MO_NC;
> + const unsigned char MO_NC = AArch64II::MO_NC;
> return DAG.getNode(
> - ARM64ISD::WrapperLarge, DL, PtrVT,
> - DAG.getTargetGlobalAddress(GV, DL, PtrVT, 0, ARM64II::MO_G3),
> - DAG.getTargetGlobalAddress(GV, DL, PtrVT, 0, ARM64II::MO_G2 | MO_NC),
> - DAG.getTargetGlobalAddress(GV, DL, PtrVT, 0, ARM64II::MO_G1 | MO_NC),
> - DAG.getTargetGlobalAddress(GV, DL, PtrVT, 0, ARM64II::MO_G0 | MO_NC));
> + AArch64ISD::WrapperLarge, DL, PtrVT,
> + DAG.getTargetGlobalAddress(GV, DL, PtrVT, 0, AArch64II::MO_G3),
> + DAG.getTargetGlobalAddress(GV, DL, PtrVT, 0, AArch64II::MO_G2 | MO_NC),
> + DAG.getTargetGlobalAddress(GV, DL, PtrVT, 0, AArch64II::MO_G1 | MO_NC),
> + DAG.getTargetGlobalAddress(GV, DL, PtrVT, 0, AArch64II::MO_G0 | MO_NC));
> } else {
> // Use ADRP/ADD or ADRP/LDR for everything else: the small model on ELF and
> // the only correct model on Darwin.
> SDValue Hi = DAG.getTargetGlobalAddress(GV, DL, PtrVT, 0,
> - OpFlags | ARM64II::MO_PAGE);
> - unsigned char LoFlags = OpFlags | ARM64II::MO_PAGEOFF | ARM64II::MO_NC;
> + OpFlags | AArch64II::MO_PAGE);
> + unsigned char LoFlags = OpFlags | AArch64II::MO_PAGEOFF | AArch64II::MO_NC;
> SDValue Lo = DAG.getTargetGlobalAddress(GV, DL, PtrVT, 0, LoFlags);
>
> - SDValue ADRP = DAG.getNode(ARM64ISD::ADRP, DL, PtrVT, Hi);
> - return DAG.getNode(ARM64ISD::ADDlow, DL, PtrVT, ADRP, Lo);
> + SDValue ADRP = DAG.getNode(AArch64ISD::ADRP, DL, PtrVT, Hi);
> + return DAG.getNode(AArch64ISD::ADDlow, DL, PtrVT, ADRP, Lo);
> }
> }
>
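
The ADRP/ADDlow pair produced here is the usual page plus low-12-bits split. A minimal model of the address arithmetic the MO_PAGE/MO_PAGEOFF relocations end up encoding (the symbol address is made up, and the fact that ADRP is actually PC-relative is glossed over):

#include <cstdint>
#include <cstdio>

int main() {
  uint64_t sym = 0x12345678ULL;                  // hypothetical symbol address
  uint64_t page    = sym & ~uint64_t(0xfff);     // what the MO_PAGE half resolves to
  uint64_t pageoff = sym &  uint64_t(0xfff);     // what MO_PAGEOFF | MO_NC fills in
  std::printf("page=%#llx pageoff=%#llx sum=%#llx\n",
              (unsigned long long)page, (unsigned long long)pageoff,
              (unsigned long long)(page + pageoff));
  return 0;
}
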
> @@ -2589,8 +2597,8 @@ SDValue ARM64TargetLowering::LowerGlobal
> /// change the first "ldr" instruction to an appropriate "add x0, x0, #imm" for
> /// a slight efficiency gain.
> SDValue
> -ARM64TargetLowering::LowerDarwinGlobalTLSAddress(SDValue Op,
> - SelectionDAG &DAG) const {
> +AArch64TargetLowering::LowerDarwinGlobalTLSAddress(SDValue Op,
> + SelectionDAG &DAG) const {
> assert(Subtarget->isTargetDarwin() && "TLS only supported on Darwin");
>
> SDLoc DL(Op);
> @@ -2598,8 +2606,8 @@ ARM64TargetLowering::LowerDarwinGlobalTL
> const GlobalValue *GV = cast<GlobalAddressSDNode>(Op)->getGlobal();
>
> SDValue TLVPAddr =
> - DAG.getTargetGlobalAddress(GV, DL, PtrVT, 0, ARM64II::MO_TLS);
> - SDValue DescAddr = DAG.getNode(ARM64ISD::LOADgot, DL, PtrVT, TLVPAddr);
> + DAG.getTargetGlobalAddress(GV, DL, PtrVT, 0, AArch64II::MO_TLS);
> + SDValue DescAddr = DAG.getNode(AArch64ISD::LOADgot, DL, PtrVT, TLVPAddr);
>
> // The first entry in the descriptor is a function pointer that we must call
> // to obtain the address of the variable.
> @@ -2616,17 +2624,19 @@ ARM64TargetLowering::LowerDarwinGlobalTL
> // trashed: X0 (it takes an argument), LR (it's a call) and NZCV (let's not be
> // silly).
> const TargetRegisterInfo *TRI = getTargetMachine().getRegisterInfo();
> - const ARM64RegisterInfo *ARI = static_cast<const ARM64RegisterInfo *>(TRI);
> + const AArch64RegisterInfo *ARI =
> + static_cast<const AArch64RegisterInfo *>(TRI);
> const uint32_t *Mask = ARI->getTLSCallPreservedMask();
>
> // Finally, we can make the call. This is just a degenerate version of a
> - // normal ARM64 call node: x0 takes the address of the descriptor, and returns
> - // the address of the variable in this thread.
> - Chain = DAG.getCopyToReg(Chain, DL, ARM64::X0, DescAddr, SDValue());
> - Chain = DAG.getNode(ARM64ISD::CALL, DL, DAG.getVTList(MVT::Other, MVT::Glue),
> - Chain, FuncTLVGet, DAG.getRegister(ARM64::X0, MVT::i64),
> - DAG.getRegisterMask(Mask), Chain.getValue(1));
> - return DAG.getCopyFromReg(Chain, DL, ARM64::X0, PtrVT, Chain.getValue(1));
> + // normal AArch64 call node: x0 takes the address of the descriptor, and
> + // returns the address of the variable in this thread.
> + Chain = DAG.getCopyToReg(Chain, DL, AArch64::X0, DescAddr, SDValue());
> + Chain =
> + DAG.getNode(AArch64ISD::CALL, DL, DAG.getVTList(MVT::Other, MVT::Glue),
> + Chain, FuncTLVGet, DAG.getRegister(AArch64::X0, MVT::i64),
> + DAG.getRegisterMask(Mask), Chain.getValue(1));
> + return DAG.getCopyFromReg(Chain, DL, AArch64::X0, PtrVT, Chain.getValue(1));
> }
>
> /// When accessing thread-local variables under either the general-dynamic or
> @@ -2651,26 +2661,27 @@ ARM64TargetLowering::LowerDarwinGlobalTL
> ///
> /// FIXME: we currently produce an extra, duplicated, ADRP instruction, but this
> /// is harmless.
> -SDValue ARM64TargetLowering::LowerELFTLSDescCall(SDValue SymAddr,
> - SDValue DescAddr, SDLoc DL,
> - SelectionDAG &DAG) const {
> +SDValue AArch64TargetLowering::LowerELFTLSDescCall(SDValue SymAddr,
> + SDValue DescAddr, SDLoc DL,
> + SelectionDAG &DAG) const {
> EVT PtrVT = getPointerTy();
>
> // The function we need to call is simply the first entry in the GOT for this
> // descriptor, load it in preparation.
> - SDValue Func = DAG.getNode(ARM64ISD::LOADgot, DL, PtrVT, SymAddr);
> + SDValue Func = DAG.getNode(AArch64ISD::LOADgot, DL, PtrVT, SymAddr);
>
> // TLS calls preserve all registers except those that absolutely must be
> // trashed: X0 (it takes an argument), LR (it's a call) and NZCV (let's not be
> // silly).
> const TargetRegisterInfo *TRI = getTargetMachine().getRegisterInfo();
> - const ARM64RegisterInfo *ARI = static_cast<const ARM64RegisterInfo *>(TRI);
> + const AArch64RegisterInfo *ARI =
> + static_cast<const AArch64RegisterInfo *>(TRI);
> const uint32_t *Mask = ARI->getTLSCallPreservedMask();
>
> // The function takes only one argument: the address of the descriptor itself
> // in X0.
> SDValue Glue, Chain;
> - Chain = DAG.getCopyToReg(DAG.getEntryNode(), DL, ARM64::X0, DescAddr, Glue);
> + Chain = DAG.getCopyToReg(DAG.getEntryNode(), DL, AArch64::X0, DescAddr, Glue);
> Glue = Chain.getValue(1);
>
> // We're now ready to populate the argument list, as with a normal call:
> @@ -2678,19 +2689,20 @@ SDValue ARM64TargetLowering::LowerELFTLS
> Ops.push_back(Chain);
> Ops.push_back(Func);
> Ops.push_back(SymAddr);
> - Ops.push_back(DAG.getRegister(ARM64::X0, PtrVT));
> + Ops.push_back(DAG.getRegister(AArch64::X0, PtrVT));
> Ops.push_back(DAG.getRegisterMask(Mask));
> Ops.push_back(Glue);
>
> SDVTList NodeTys = DAG.getVTList(MVT::Other, MVT::Glue);
> - Chain = DAG.getNode(ARM64ISD::TLSDESC_CALL, DL, NodeTys, Ops);
> + Chain = DAG.getNode(AArch64ISD::TLSDESC_CALL, DL, NodeTys, Ops);
> Glue = Chain.getValue(1);
>
> - return DAG.getCopyFromReg(Chain, DL, ARM64::X0, PtrVT, Glue);
> + return DAG.getCopyFromReg(Chain, DL, AArch64::X0, PtrVT, Glue);
> }
>
> -SDValue ARM64TargetLowering::LowerELFGlobalTLSAddress(SDValue Op,
> - SelectionDAG &DAG) const {
> +SDValue
> +AArch64TargetLowering::LowerELFGlobalTLSAddress(SDValue Op,
> + SelectionDAG &DAG) const {
> assert(Subtarget->isTargetELF() && "This function expects an ELF target");
> assert(getTargetMachine().getCodeModel() == CodeModel::Small &&
> "ELF TLS only supported in small memory model");
> @@ -2703,23 +2715,24 @@ SDValue ARM64TargetLowering::LowerELFGlo
> SDLoc DL(Op);
> const GlobalValue *GV = GA->getGlobal();
>
> - SDValue ThreadBase = DAG.getNode(ARM64ISD::THREAD_POINTER, DL, PtrVT);
> + SDValue ThreadBase = DAG.getNode(AArch64ISD::THREAD_POINTER, DL, PtrVT);
>
> if (Model == TLSModel::LocalExec) {
> SDValue HiVar = DAG.getTargetGlobalAddress(
> - GV, DL, PtrVT, 0, ARM64II::MO_TLS | ARM64II::MO_G1);
> + GV, DL, PtrVT, 0, AArch64II::MO_TLS | AArch64II::MO_G1);
> SDValue LoVar = DAG.getTargetGlobalAddress(
> - GV, DL, PtrVT, 0, ARM64II::MO_TLS | ARM64II::MO_G0 | ARM64II::MO_NC);
> + GV, DL, PtrVT, 0,
> + AArch64II::MO_TLS | AArch64II::MO_G0 | AArch64II::MO_NC);
>
> - TPOff = SDValue(DAG.getMachineNode(ARM64::MOVZXi, DL, PtrVT, HiVar,
> + TPOff = SDValue(DAG.getMachineNode(AArch64::MOVZXi, DL, PtrVT, HiVar,
> DAG.getTargetConstant(16, MVT::i32)),
> 0);
> - TPOff = SDValue(DAG.getMachineNode(ARM64::MOVKXi, DL, PtrVT, TPOff, LoVar,
> + TPOff = SDValue(DAG.getMachineNode(AArch64::MOVKXi, DL, PtrVT, TPOff, LoVar,
> DAG.getTargetConstant(0, MVT::i32)),
> 0);
> } else if (Model == TLSModel::InitialExec) {
> - TPOff = DAG.getTargetGlobalAddress(GV, DL, PtrVT, 0, ARM64II::MO_TLS);
> - TPOff = DAG.getNode(ARM64ISD::LOADgot, DL, PtrVT, TPOff);
> + TPOff = DAG.getTargetGlobalAddress(GV, DL, PtrVT, 0, AArch64II::MO_TLS);
> + TPOff = DAG.getNode(AArch64ISD::LOADgot, DL, PtrVT, TPOff);
> } else if (Model == TLSModel::LocalDynamic) {
> // Local-dynamic accesses proceed in two phases. A general-dynamic TLS
> // descriptor call against the special symbol _TLS_MODULE_BASE_ to calculate
> @@ -2727,28 +2740,28 @@ SDValue ARM64TargetLowering::LowerELFGlo
> // calculation.
>
> // These accesses will need deduplicating if there's more than one.
> - ARM64FunctionInfo *MFI =
> - DAG.getMachineFunction().getInfo<ARM64FunctionInfo>();
> + AArch64FunctionInfo *MFI =
> + DAG.getMachineFunction().getInfo<AArch64FunctionInfo>();
> MFI->incNumLocalDynamicTLSAccesses();
>
> // Accesses used in this sequence go via the TLS descriptor which lives in
> // the GOT. Prepare an address we can use to handle this.
> SDValue HiDesc = DAG.getTargetExternalSymbol(
> - "_TLS_MODULE_BASE_", PtrVT, ARM64II::MO_TLS | ARM64II::MO_PAGE);
> + "_TLS_MODULE_BASE_", PtrVT, AArch64II::MO_TLS | AArch64II::MO_PAGE);
> SDValue LoDesc = DAG.getTargetExternalSymbol(
> "_TLS_MODULE_BASE_", PtrVT,
> - ARM64II::MO_TLS | ARM64II::MO_PAGEOFF | ARM64II::MO_NC);
> + AArch64II::MO_TLS | AArch64II::MO_PAGEOFF | AArch64II::MO_NC);
>
> // First argument to the descriptor call is the address of the descriptor
> // itself.
> - SDValue DescAddr = DAG.getNode(ARM64ISD::ADRP, DL, PtrVT, HiDesc);
> - DescAddr = DAG.getNode(ARM64ISD::ADDlow, DL, PtrVT, DescAddr, LoDesc);
> + SDValue DescAddr = DAG.getNode(AArch64ISD::ADRP, DL, PtrVT, HiDesc);
> + DescAddr = DAG.getNode(AArch64ISD::ADDlow, DL, PtrVT, DescAddr, LoDesc);
>
> // The call needs a relocation too for linker relaxation. It doesn't make
> // sense to call it MO_PAGE or MO_PAGEOFF though so we need another copy of
> // the address.
> SDValue SymAddr = DAG.getTargetExternalSymbol("_TLS_MODULE_BASE_", PtrVT,
> - ARM64II::MO_TLS);
> + AArch64II::MO_TLS);
>
> // Now we can calculate the offset from TPIDR_EL0 to this module's
> // thread-local area.
> @@ -2757,38 +2770,40 @@ SDValue ARM64TargetLowering::LowerELFGlo
> // Now use :dtprel_whatever: operations to calculate this variable's offset
> // in its thread-storage area.
> SDValue HiVar = DAG.getTargetGlobalAddress(
> - GV, DL, MVT::i64, 0, ARM64II::MO_TLS | ARM64II::MO_G1);
> + GV, DL, MVT::i64, 0, AArch64II::MO_TLS | AArch64II::MO_G1);
> SDValue LoVar = DAG.getTargetGlobalAddress(
> - GV, DL, MVT::i64, 0, ARM64II::MO_TLS | ARM64II::MO_G0 | ARM64II::MO_NC);
> + GV, DL, MVT::i64, 0,
> + AArch64II::MO_TLS | AArch64II::MO_G0 | AArch64II::MO_NC);
>
> SDValue DTPOff =
> - SDValue(DAG.getMachineNode(ARM64::MOVZXi, DL, PtrVT, HiVar,
> + SDValue(DAG.getMachineNode(AArch64::MOVZXi, DL, PtrVT, HiVar,
> DAG.getTargetConstant(16, MVT::i32)),
> 0);
> - DTPOff = SDValue(DAG.getMachineNode(ARM64::MOVKXi, DL, PtrVT, DTPOff, LoVar,
> - DAG.getTargetConstant(0, MVT::i32)),
> - 0);
> + DTPOff =
> + SDValue(DAG.getMachineNode(AArch64::MOVKXi, DL, PtrVT, DTPOff, LoVar,
> + DAG.getTargetConstant(0, MVT::i32)),
> + 0);
>
> TPOff = DAG.getNode(ISD::ADD, DL, PtrVT, TPOff, DTPOff);
> } else if (Model == TLSModel::GeneralDynamic) {
> // Accesses used in this sequence go via the TLS descriptor which lives in
> // the GOT. Prepare an address we can use to handle this.
> SDValue HiDesc = DAG.getTargetGlobalAddress(
> - GV, DL, PtrVT, 0, ARM64II::MO_TLS | ARM64II::MO_PAGE);
> + GV, DL, PtrVT, 0, AArch64II::MO_TLS | AArch64II::MO_PAGE);
> SDValue LoDesc = DAG.getTargetGlobalAddress(
> GV, DL, PtrVT, 0,
> - ARM64II::MO_TLS | ARM64II::MO_PAGEOFF | ARM64II::MO_NC);
> + AArch64II::MO_TLS | AArch64II::MO_PAGEOFF | AArch64II::MO_NC);
>
> // First argument to the descriptor call is the address of the descriptor
> // itself.
> - SDValue DescAddr = DAG.getNode(ARM64ISD::ADRP, DL, PtrVT, HiDesc);
> - DescAddr = DAG.getNode(ARM64ISD::ADDlow, DL, PtrVT, DescAddr, LoDesc);
> + SDValue DescAddr = DAG.getNode(AArch64ISD::ADRP, DL, PtrVT, HiDesc);
> + DescAddr = DAG.getNode(AArch64ISD::ADDlow, DL, PtrVT, DescAddr, LoDesc);
>
> // The call needs a relocation too for linker relaxation. It doesn't make
> // sense to call it MO_PAGE or MO_PAGEOFF though so we need another copy of
> // the address.
> SDValue SymAddr =
> - DAG.getTargetGlobalAddress(GV, DL, PtrVT, 0, ARM64II::MO_TLS);
> + DAG.getTargetGlobalAddress(GV, DL, PtrVT, 0, AArch64II::MO_TLS);
>
> // Finally we can make a call to calculate the offset from tpidr_el0.
> TPOff = LowerELFTLSDescCall(SymAddr, DescAddr, DL, DAG);
> @@ -2798,8 +2813,8 @@ SDValue ARM64TargetLowering::LowerELFGlo
> return DAG.getNode(ISD::ADD, DL, PtrVT, ThreadBase, TPOff);
> }
>
> -SDValue ARM64TargetLowering::LowerGlobalTLSAddress(SDValue Op,
> - SelectionDAG &DAG) const {
> +SDValue AArch64TargetLowering::LowerGlobalTLSAddress(SDValue Op,
> + SelectionDAG &DAG) const {
> if (Subtarget->isTargetDarwin())
> return LowerDarwinGlobalTLSAddress(Op, DAG);
> else if (Subtarget->isTargetELF())
> @@ -2807,7 +2822,7 @@ SDValue ARM64TargetLowering::LowerGlobal
>
> llvm_unreachable("Unexpected platform trying to use TLS");
> }
> -SDValue ARM64TargetLowering::LowerBR_CC(SDValue Op, SelectionDAG &DAG) const {
> +SDValue AArch64TargetLowering::LowerBR_CC(SDValue Op, SelectionDAG &DAG) const {
> SDValue Chain = Op.getOperand(0);
> ISD::CondCode CC = cast<CondCodeSDNode>(Op.getOperand(1))->get();
> SDValue LHS = Op.getOperand(2);
> @@ -2843,15 +2858,15 @@ SDValue ARM64TargetLowering::LowerBR_CC(
> return SDValue();
>
> // The actual operation with overflow check.
> - ARM64CC::CondCode OFCC;
> + AArch64CC::CondCode OFCC;
> SDValue Value, Overflow;
> - std::tie(Value, Overflow) = getARM64XALUOOp(OFCC, LHS.getValue(0), DAG);
> + std::tie(Value, Overflow) = getAArch64XALUOOp(OFCC, LHS.getValue(0), DAG);
>
> if (CC == ISD::SETNE)
> OFCC = getInvertedCondCode(OFCC);
> SDValue CCVal = DAG.getConstant(OFCC, MVT::i32);
>
> - return DAG.getNode(ARM64ISD::BRCOND, SDLoc(LHS), MVT::Other, Chain, Dest,
> + return DAG.getNode(AArch64ISD::BRCOND, SDLoc(LHS), MVT::Other, Chain, Dest,
> CCVal, Overflow);
> }
>
> @@ -2878,11 +2893,11 @@ SDValue ARM64TargetLowering::LowerBR_CC(
> if (Test.getValueType() == MVT::i32)
> Test = DAG.getAnyExtOrTrunc(Test, dl, MVT::i64);
>
> - return DAG.getNode(ARM64ISD::TBZ, dl, MVT::Other, Chain, Test,
> + return DAG.getNode(AArch64ISD::TBZ, dl, MVT::Other, Chain, Test,
> DAG.getConstant(Log2_64(Mask), MVT::i64), Dest);
> }
>
> - return DAG.getNode(ARM64ISD::CBZ, dl, MVT::Other, Chain, LHS, Dest);
> + return DAG.getNode(AArch64ISD::CBZ, dl, MVT::Other, Chain, LHS, Dest);
> } else if (CC == ISD::SETNE) {
> // See if we can use a TBZ to fold in an AND as well.
> // TBZ has a smaller branch displacement than CBZ. If the offset is
> @@ -2898,41 +2913,41 @@ SDValue ARM64TargetLowering::LowerBR_CC(
> if (Test.getValueType() == MVT::i32)
> Test = DAG.getAnyExtOrTrunc(Test, dl, MVT::i64);
>
> - return DAG.getNode(ARM64ISD::TBNZ, dl, MVT::Other, Chain, Test,
> + return DAG.getNode(AArch64ISD::TBNZ, dl, MVT::Other, Chain, Test,
> DAG.getConstant(Log2_64(Mask), MVT::i64), Dest);
> }
>
> - return DAG.getNode(ARM64ISD::CBNZ, dl, MVT::Other, Chain, LHS, Dest);
> + return DAG.getNode(AArch64ISD::CBNZ, dl, MVT::Other, Chain, LHS, Dest);
> }
> }
>
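
The TBZ/TBNZ fold above depends on the AND mask being a single bit, so its bit number can go straight into the instruction immediate. A standalone model of that fold (helper and variable names are illustrative):

#include <cstdint>
#include <cstdio>

// log2 of a power-of-two mask, i.e. the bit number TBNZ would test.
static unsigned log2u64(uint64_t mask) {
  unsigned bit = 0;
  while ((mask >>= 1) != 0)
    ++bit;
  return bit;
}

int main() {
  uint64_t x = 0x50, mask = 0x10;          // mask is a power of two, so the fold applies
  bool taken = (x & mask) != 0;            // what the IR's compare against zero computes
  std::printf("tbnz x, #%u -> branch %staken\n", log2u64(mask), taken ? "" : "not ");
  return 0;
}
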
> SDValue CCVal;
> - SDValue Cmp = getARM64Cmp(LHS, RHS, CC, CCVal, DAG, dl);
> - return DAG.getNode(ARM64ISD::BRCOND, dl, MVT::Other, Chain, Dest, CCVal,
> + SDValue Cmp = getAArch64Cmp(LHS, RHS, CC, CCVal, DAG, dl);
> + return DAG.getNode(AArch64ISD::BRCOND, dl, MVT::Other, Chain, Dest, CCVal,
> Cmp);
> }
>
> assert(LHS.getValueType() == MVT::f32 || LHS.getValueType() == MVT::f64);
>
> - // Unfortunately, the mapping of LLVM FP CC's onto ARM64 CC's isn't totally
> + // Unfortunately, the mapping of LLVM FP CC's onto AArch64 CC's isn't totally
> // clean. Some of them require two branches to implement.
> SDValue Cmp = emitComparison(LHS, RHS, CC, dl, DAG);
> - ARM64CC::CondCode CC1, CC2;
> - changeFPCCToARM64CC(CC, CC1, CC2);
> + AArch64CC::CondCode CC1, CC2;
> + changeFPCCToAArch64CC(CC, CC1, CC2);
> SDValue CC1Val = DAG.getConstant(CC1, MVT::i32);
> SDValue BR1 =
> - DAG.getNode(ARM64ISD::BRCOND, dl, MVT::Other, Chain, Dest, CC1Val, Cmp);
> - if (CC2 != ARM64CC::AL) {
> + DAG.getNode(AArch64ISD::BRCOND, dl, MVT::Other, Chain, Dest, CC1Val, Cmp);
> + if (CC2 != AArch64CC::AL) {
> SDValue CC2Val = DAG.getConstant(CC2, MVT::i32);
> - return DAG.getNode(ARM64ISD::BRCOND, dl, MVT::Other, BR1, Dest, CC2Val,
> + return DAG.getNode(AArch64ISD::BRCOND, dl, MVT::Other, BR1, Dest, CC2Val,
> Cmp);
> }
>
> return BR1;
> }
>
> -SDValue ARM64TargetLowering::LowerFCOPYSIGN(SDValue Op,
> - SelectionDAG &DAG) const {
> +SDValue AArch64TargetLowering::LowerFCOPYSIGN(SDValue Op,
> + SelectionDAG &DAG) const {
> EVT VT = Op.getValueType();
> SDLoc DL(Op);
>
> @@ -2959,9 +2974,9 @@ SDValue ARM64TargetLowering::LowerFCOPYS
> EltMask = DAG.getConstant(0x80000000ULL, EltVT);
>
> if (!VT.isVector()) {
> - VecVal1 = DAG.getTargetInsertSubreg(ARM64::ssub, DL, VecVT,
> + VecVal1 = DAG.getTargetInsertSubreg(AArch64::ssub, DL, VecVT,
> DAG.getUNDEF(VecVT), In1);
> - VecVal2 = DAG.getTargetInsertSubreg(ARM64::ssub, DL, VecVT,
> + VecVal2 = DAG.getTargetInsertSubreg(AArch64::ssub, DL, VecVT,
> DAG.getUNDEF(VecVT), In2);
> } else {
> VecVal1 = DAG.getNode(ISD::BITCAST, DL, VecVT, In1);
> @@ -2977,9 +2992,9 @@ SDValue ARM64TargetLowering::LowerFCOPYS
> EltMask = DAG.getConstant(0, EltVT);
>
> if (!VT.isVector()) {
> - VecVal1 = DAG.getTargetInsertSubreg(ARM64::dsub, DL, VecVT,
> + VecVal1 = DAG.getTargetInsertSubreg(AArch64::dsub, DL, VecVT,
> DAG.getUNDEF(VecVT), In1);
> - VecVal2 = DAG.getTargetInsertSubreg(ARM64::dsub, DL, VecVT,
> + VecVal2 = DAG.getTargetInsertSubreg(AArch64::dsub, DL, VecVT,
> DAG.getUNDEF(VecVT), In2);
> } else {
> VecVal1 = DAG.getNode(ISD::BITCAST, DL, VecVT, In1);
> @@ -3004,17 +3019,17 @@ SDValue ARM64TargetLowering::LowerFCOPYS
> }
>
> SDValue Sel =
> - DAG.getNode(ARM64ISD::BIT, DL, VecVT, VecVal1, VecVal2, BuildVec);
> + DAG.getNode(AArch64ISD::BIT, DL, VecVT, VecVal1, VecVal2, BuildVec);
>
> if (VT == MVT::f32)
> - return DAG.getTargetExtractSubreg(ARM64::ssub, DL, VT, Sel);
> + return DAG.getTargetExtractSubreg(AArch64::ssub, DL, VT, Sel);
> else if (VT == MVT::f64)
> - return DAG.getTargetExtractSubreg(ARM64::dsub, DL, VT, Sel);
> + return DAG.getTargetExtractSubreg(AArch64::dsub, DL, VT, Sel);
> else
> return DAG.getNode(ISD::BITCAST, DL, VT, Sel);
> }
>
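
The BIT node emitted above is a plain bit-select: bits of the first operand are kept where the mask is clear, bits of the second are taken where it is set, and with the mask holding only the sign bit that is exactly copysign. A scalar f64 model of the same computation, assuming (as the operand order suggests) that In1 carries the magnitude and In2 the sign:

#include <cstdint>
#include <cstring>
#include <cstdio>

int main() {
  double magSrc = -1.5;   // operand whose magnitude is kept (In1 above)
  double sgnSrc =  2.0;   // operand whose sign is taken (In2 above)
  uint64_t a, b;
  std::memcpy(&a, &magSrc, sizeof a);
  std::memcpy(&b, &sgnSrc, sizeof b);
  const uint64_t mask = 0x8000000000000000ULL;   // the f64 EltMask above
  uint64_t r = (a & ~mask) | (b & mask);         // bit-select, i.e. BIT(In1, In2, mask)
  double res;
  std::memcpy(&res, &r, sizeof res);
  std::printf("magnitude of %.1f with sign of %.1f -> %.1f\n", magSrc, sgnSrc, res);
  return 0;
}
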
> -SDValue ARM64TargetLowering::LowerCTPOP(SDValue Op, SelectionDAG &DAG) const {
> +SDValue AArch64TargetLowering::LowerCTPOP(SDValue Op, SelectionDAG &DAG) const {
> if (DAG.getMachineFunction().getFunction()->getAttributes().hasAttribute(
> AttributeSet::FunctionIndex, Attribute::NoImplicitFloat))
> return SDValue();
> @@ -3035,8 +3050,8 @@ SDValue ARM64TargetLowering::LowerCTPOP(
> SDValue VecVal;
> if (VT == MVT::i32) {
> VecVal = DAG.getNode(ISD::BITCAST, DL, MVT::f32, Val);
> - VecVal =
> - DAG.getTargetInsertSubreg(ARM64::ssub, DL, MVT::v8i8, ZeroVec, VecVal);
> + VecVal = DAG.getTargetInsertSubreg(AArch64::ssub, DL, MVT::v8i8, ZeroVec,
> + VecVal);
> } else {
> VecVal = DAG.getNode(ISD::BITCAST, DL, MVT::v8i8, Val);
> }
> @@ -3044,14 +3059,14 @@ SDValue ARM64TargetLowering::LowerCTPOP(
> SDValue CtPop = DAG.getNode(ISD::CTPOP, DL, MVT::v8i8, VecVal);
> SDValue UaddLV = DAG.getNode(
> ISD::INTRINSIC_WO_CHAIN, DL, MVT::i32,
> - DAG.getConstant(Intrinsic::arm64_neon_uaddlv, MVT::i32), CtPop);
> + DAG.getConstant(Intrinsic::aarch64_neon_uaddlv, MVT::i32), CtPop);
>
> if (VT == MVT::i64)
> UaddLV = DAG.getNode(ISD::ZERO_EXTEND, DL, MVT::i64, UaddLV);
> return UaddLV;
> }
>
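
The CTPOP lowering above goes through v8i8: a per-byte CNT followed by the across-vector uaddlv sum. A scalar model of why the byte-count sum equals the 64-bit popcount (the loop is purely illustrative):

#include <cstdint>
#include <cstdio>

int main() {
  uint64_t x = 0xF0F0F0F012345678ULL;
  unsigned total = 0;                       // what UADDLV accumulates
  for (int i = 0; i < 8; ++i) {
    uint8_t lane = (x >> (8 * i)) & 0xff;   // one v8i8 lane of the bitcast value
    unsigned cnt = 0;                       // per-lane CNT result
    for (; lane; lane &= lane - 1)
      ++cnt;
    total += cnt;
  }
  std::printf("ctpop = %u\n", total);
  return 0;
}
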
> -SDValue ARM64TargetLowering::LowerSETCC(SDValue Op, SelectionDAG &DAG) const {
> +SDValue AArch64TargetLowering::LowerSETCC(SDValue Op, SelectionDAG &DAG) const {
>
> if (Op.getValueType().isVector())
> return LowerVSETCC(Op, DAG);
> @@ -3082,12 +3097,12 @@ SDValue ARM64TargetLowering::LowerSETCC(
> if (LHS.getValueType().isInteger()) {
> SDValue CCVal;
> SDValue Cmp =
> - getARM64Cmp(LHS, RHS, ISD::getSetCCInverse(CC, true), CCVal, DAG, dl);
> + getAArch64Cmp(LHS, RHS, ISD::getSetCCInverse(CC, true), CCVal, DAG, dl);
>
> // Note that we inverted the condition above, so we reverse the order of
> // the true and false operands here. This will allow the setcc to be
> // matched to a single CSINC instruction.
> - return DAG.getNode(ARM64ISD::CSEL, dl, VT, FVal, TVal, CCVal, Cmp);
> + return DAG.getNode(AArch64ISD::CSEL, dl, VT, FVal, TVal, CCVal, Cmp);
> }
>
> // Now we know we're dealing with FP values.
> @@ -3097,28 +3112,29 @@ SDValue ARM64TargetLowering::LowerSETCC(
> // and do the comparison.
> SDValue Cmp = emitComparison(LHS, RHS, CC, dl, DAG);
>
> - ARM64CC::CondCode CC1, CC2;
> - changeFPCCToARM64CC(CC, CC1, CC2);
> - if (CC2 == ARM64CC::AL) {
> - changeFPCCToARM64CC(ISD::getSetCCInverse(CC, false), CC1, CC2);
> + AArch64CC::CondCode CC1, CC2;
> + changeFPCCToAArch64CC(CC, CC1, CC2);
> + if (CC2 == AArch64CC::AL) {
> + changeFPCCToAArch64CC(ISD::getSetCCInverse(CC, false), CC1, CC2);
> SDValue CC1Val = DAG.getConstant(CC1, MVT::i32);
>
> // Note that we inverted the condition above, so we reverse the order of
> // the true and false operands here. This will allow the setcc to be
> // matched to a single CSINC instruction.
> - return DAG.getNode(ARM64ISD::CSEL, dl, VT, FVal, TVal, CC1Val, Cmp);
> + return DAG.getNode(AArch64ISD::CSEL, dl, VT, FVal, TVal, CC1Val, Cmp);
> } else {
> - // Unfortunately, the mapping of LLVM FP CC's onto ARM64 CC's isn't totally
> - // clean. Some of them require two CSELs to implement. As is in this case,
> - // we emit the first CSEL and then emit a second using the output of the
> - // first as the RHS. We're effectively OR'ing the two CC's together.
> + // Unfortunately, the mapping of LLVM FP CC's onto AArch64 CC's isn't
> + // totally clean. Some of them require two CSELs to implement. As is in
> + // this case, we emit the first CSEL and then emit a second using the output
> + // of the first as the RHS. We're effectively OR'ing the two CC's together.
>
> // FIXME: It would be nice if we could match the two CSELs to two CSINCs.
> SDValue CC1Val = DAG.getConstant(CC1, MVT::i32);
> - SDValue CS1 = DAG.getNode(ARM64ISD::CSEL, dl, VT, TVal, FVal, CC1Val, Cmp);
> + SDValue CS1 =
> + DAG.getNode(AArch64ISD::CSEL, dl, VT, TVal, FVal, CC1Val, Cmp);
>
> SDValue CC2Val = DAG.getConstant(CC2, MVT::i32);
> - return DAG.getNode(ARM64ISD::CSEL, dl, VT, TVal, CS1, CC2Val, Cmp);
> + return DAG.getNode(AArch64ISD::CSEL, dl, VT, TVal, CS1, CC2Val, Cmp);
> }
> }
>
> @@ -3147,7 +3163,8 @@ static bool selectCCOpsAreFMaxCompatible
> return Result->getOpcode() == ISD::FP_EXTEND && Result->getOperand(0) == Cmp;
> }
>
> -SDValue ARM64TargetLowering::LowerSELECT(SDValue Op, SelectionDAG &DAG) const {
> +SDValue AArch64TargetLowering::LowerSELECT(SDValue Op,
> + SelectionDAG &DAG) const {
> SDValue CC = Op->getOperand(0);
> SDValue TVal = Op->getOperand(1);
> SDValue FVal = Op->getOperand(2);
> @@ -3163,13 +3180,13 @@ SDValue ARM64TargetLowering::LowerSELECT
> if (!DAG.getTargetLoweringInfo().isTypeLegal(CC->getValueType(0)))
> return SDValue();
>
> - ARM64CC::CondCode OFCC;
> + AArch64CC::CondCode OFCC;
> SDValue Value, Overflow;
> - std::tie(Value, Overflow) = getARM64XALUOOp(OFCC, CC.getValue(0), DAG);
> + std::tie(Value, Overflow) = getAArch64XALUOOp(OFCC, CC.getValue(0), DAG);
> SDValue CCVal = DAG.getConstant(OFCC, MVT::i32);
>
> - return DAG.getNode(ARM64ISD::CSEL, DL, Op.getValueType(), TVal, FVal, CCVal,
> - Overflow);
> + return DAG.getNode(AArch64ISD::CSEL, DL, Op.getValueType(), TVal, FVal,
> + CCVal, Overflow);
> }
>
> if (CC.getOpcode() == ISD::SETCC)
> @@ -3180,8 +3197,8 @@ SDValue ARM64TargetLowering::LowerSELECT
> FVal, ISD::SETNE);
> }
>
> -SDValue ARM64TargetLowering::LowerSELECT_CC(SDValue Op,
> - SelectionDAG &DAG) const {
> +SDValue AArch64TargetLowering::LowerSELECT_CC(SDValue Op,
> + SelectionDAG &DAG) const {
> ISD::CondCode CC = cast<CondCodeSDNode>(Op.getOperand(4))->get();
> SDValue LHS = Op.getOperand(0);
> SDValue RHS = Op.getOperand(1);
> @@ -3207,7 +3224,7 @@ SDValue ARM64TargetLowering::LowerSELECT
> assert((LHS.getValueType() == RHS.getValueType()) &&
> (LHS.getValueType() == MVT::i32 || LHS.getValueType() == MVT::i64));
>
> - unsigned Opcode = ARM64ISD::CSEL;
> + unsigned Opcode = AArch64ISD::CSEL;
>
> // If both the TVal and the FVal are constants, see if we can swap them in
> // order to form a CSINV or CSINC out of them.
> @@ -3251,9 +3268,9 @@ SDValue ARM64TargetLowering::LowerSELECT
> // inverse/negation/increment of TVal and generate a CSINV/CSNEG/CSINC
> // instead of a CSEL in that case.
> if (TrueVal == ~FalseVal) {
> - Opcode = ARM64ISD::CSINV;
> + Opcode = AArch64ISD::CSINV;
> } else if (TrueVal == -FalseVal) {
> - Opcode = ARM64ISD::CSNEG;
> + Opcode = AArch64ISD::CSNEG;
> } else if (TVal.getValueType() == MVT::i32) {
> // If our operands are only 32-bit wide, make sure we use 32-bit
> // arithmetic for the check whether we can use CSINC. This ensures that
> @@ -3264,7 +3281,7 @@ SDValue ARM64TargetLowering::LowerSELECT
> const uint32_t FalseVal32 = CFVal->getZExtValue();
>
> if ((TrueVal32 == FalseVal32 + 1) || (TrueVal32 + 1 == FalseVal32)) {
> - Opcode = ARM64ISD::CSINC;
> + Opcode = AArch64ISD::CSINC;
>
> if (TrueVal32 > FalseVal32) {
> Swap = true;
> @@ -3272,7 +3289,7 @@ SDValue ARM64TargetLowering::LowerSELECT
> }
> // 64-bit check whether we can use CSINC.
> } else if ((TrueVal == FalseVal + 1) || (TrueVal + 1 == FalseVal)) {
> - Opcode = ARM64ISD::CSINC;
> + Opcode = AArch64ISD::CSINC;
>
> if (TrueVal > FalseVal) {
> Swap = true;
> @@ -3286,7 +3303,7 @@ SDValue ARM64TargetLowering::LowerSELECT
> CC = ISD::getSetCCInverse(CC, true);
> }
>
> - if (Opcode != ARM64ISD::CSEL) {
> + if (Opcode != AArch64ISD::CSEL) {
> // Drop FVal since we can get its value by simply inverting/negating
> // TVal.
> FVal = TVal;
> @@ -3294,7 +3311,7 @@ SDValue ARM64TargetLowering::LowerSELECT
> }
>
> SDValue CCVal;
> - SDValue Cmp = getARM64Cmp(LHS, RHS, CC, CCVal, DAG, dl);
> + SDValue Cmp = getAArch64Cmp(LHS, RHS, CC, CCVal, DAG, dl);
>
> EVT VT = Op.getValueType();
> return DAG.getNode(Opcode, dl, VT, TVal, FVal, CCVal, Cmp);
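
To summarise the constant-pair logic in this hunk: when both select arms are constants and the true value is the bitwise inverse, negation or increment of the false value, the CSEL can become a CSINV/CSNEG/CSINC and one constant can be dropped. A rough standalone model of that choice (enum and helper names are illustrative, and the 32-bit wrap handling and operand swapping above are ignored):

#include <cstdint>
#include <cstdio>

enum class SelOp { CSEL, CSINV, CSNEG, CSINC };

static SelOp pick(int64_t trueVal, int64_t falseVal) {
  if (trueVal == ~falseVal) return SelOp::CSINV;    // true arm = bitwise NOT of false arm
  if (trueVal == -falseVal) return SelOp::CSNEG;    // true arm = negation
  if (trueVal == falseVal + 1) return SelOp::CSINC; // true arm = increment
  return SelOp::CSEL;                               // otherwise keep both constants
}

static const char *name(SelOp op) {
  switch (op) {
  case SelOp::CSEL:  return "csel";
  case SelOp::CSINV: return "csinv";
  case SelOp::CSNEG: return "csneg";
  case SelOp::CSINC: return "csinc";
  }
  return "?";
}

int main() {
  std::printf("%s %s %s %s\n", name(pick(-1, 0)), name(pick(-5, 5)),
              name(pick(4, 3)), name(pick(7, 3)));   // csinv csneg csinc csel
  return 0;
}
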
> @@ -3328,7 +3345,7 @@ SDValue ARM64TargetLowering::LowerSELECT
> case ISD::SETUGE:
> case ISD::SETOGT:
> case ISD::SETOGE:
> - return DAG.getNode(ARM64ISD::FMAX, dl, VT, MinMaxLHS, MinMaxRHS);
> + return DAG.getNode(AArch64ISD::FMAX, dl, VT, MinMaxLHS, MinMaxRHS);
> break;
> case ISD::SETLT:
> case ISD::SETLE:
> @@ -3336,7 +3353,7 @@ SDValue ARM64TargetLowering::LowerSELECT
> case ISD::SETULE:
> case ISD::SETOLT:
> case ISD::SETOLE:
> - return DAG.getNode(ARM64ISD::FMIN, dl, VT, MinMaxLHS, MinMaxRHS);
> + return DAG.getNode(AArch64ISD::FMIN, dl, VT, MinMaxLHS, MinMaxRHS);
> break;
> }
> }
> @@ -3346,26 +3363,26 @@ SDValue ARM64TargetLowering::LowerSELECT
> // and do the comparison.
> SDValue Cmp = emitComparison(LHS, RHS, CC, dl, DAG);
>
> - // Unfortunately, the mapping of LLVM FP CC's onto ARM64 CC's isn't totally
> + // Unfortunately, the mapping of LLVM FP CC's onto AArch64 CC's isn't totally
> // clean. Some of them require two CSELs to implement.
> - ARM64CC::CondCode CC1, CC2;
> - changeFPCCToARM64CC(CC, CC1, CC2);
> + AArch64CC::CondCode CC1, CC2;
> + changeFPCCToAArch64CC(CC, CC1, CC2);
> SDValue CC1Val = DAG.getConstant(CC1, MVT::i32);
> - SDValue CS1 = DAG.getNode(ARM64ISD::CSEL, dl, VT, TVal, FVal, CC1Val, Cmp);
> + SDValue CS1 = DAG.getNode(AArch64ISD::CSEL, dl, VT, TVal, FVal, CC1Val, Cmp);
>
> // If we need a second CSEL, emit it, using the output of the first as the
> // RHS. We're effectively OR'ing the two CC's together.
> - if (CC2 != ARM64CC::AL) {
> + if (CC2 != AArch64CC::AL) {
> SDValue CC2Val = DAG.getConstant(CC2, MVT::i32);
> - return DAG.getNode(ARM64ISD::CSEL, dl, VT, TVal, CS1, CC2Val, Cmp);
> + return DAG.getNode(AArch64ISD::CSEL, dl, VT, TVal, CS1, CC2Val, Cmp);
> }
>
> // Otherwise, return the output of the first CSEL.
> return CS1;
> }
>
> -SDValue ARM64TargetLowering::LowerJumpTable(SDValue Op,
> - SelectionDAG &DAG) const {
> +SDValue AArch64TargetLowering::LowerJumpTable(SDValue Op,
> + SelectionDAG &DAG) const {
> // Jump table entries are PC relative offsets. No additional tweaking
> // is necessary here. Just get the address of the jump table.
> JumpTableSDNode *JT = cast<JumpTableSDNode>(Op);
> @@ -3374,24 +3391,26 @@ SDValue ARM64TargetLowering::LowerJumpTa
>
> if (getTargetMachine().getCodeModel() == CodeModel::Large &&
> !Subtarget->isTargetMachO()) {
> - const unsigned char MO_NC = ARM64II::MO_NC;
> + const unsigned char MO_NC = AArch64II::MO_NC;
> return DAG.getNode(
> - ARM64ISD::WrapperLarge, DL, PtrVT,
> - DAG.getTargetJumpTable(JT->getIndex(), PtrVT, ARM64II::MO_G3),
> - DAG.getTargetJumpTable(JT->getIndex(), PtrVT, ARM64II::MO_G2 | MO_NC),
> - DAG.getTargetJumpTable(JT->getIndex(), PtrVT, ARM64II::MO_G1 | MO_NC),
> - DAG.getTargetJumpTable(JT->getIndex(), PtrVT, ARM64II::MO_G0 | MO_NC));
> + AArch64ISD::WrapperLarge, DL, PtrVT,
> + DAG.getTargetJumpTable(JT->getIndex(), PtrVT, AArch64II::MO_G3),
> + DAG.getTargetJumpTable(JT->getIndex(), PtrVT, AArch64II::MO_G2 | MO_NC),
> + DAG.getTargetJumpTable(JT->getIndex(), PtrVT, AArch64II::MO_G1 | MO_NC),
> + DAG.getTargetJumpTable(JT->getIndex(), PtrVT,
> + AArch64II::MO_G0 | MO_NC));
> }
>
> - SDValue Hi = DAG.getTargetJumpTable(JT->getIndex(), PtrVT, ARM64II::MO_PAGE);
> + SDValue Hi =
> + DAG.getTargetJumpTable(JT->getIndex(), PtrVT, AArch64II::MO_PAGE);
> SDValue Lo = DAG.getTargetJumpTable(JT->getIndex(), PtrVT,
> - ARM64II::MO_PAGEOFF | ARM64II::MO_NC);
> - SDValue ADRP = DAG.getNode(ARM64ISD::ADRP, DL, PtrVT, Hi);
> - return DAG.getNode(ARM64ISD::ADDlow, DL, PtrVT, ADRP, Lo);
> + AArch64II::MO_PAGEOFF | AArch64II::MO_NC);
> + SDValue ADRP = DAG.getNode(AArch64ISD::ADRP, DL, PtrVT, Hi);
> + return DAG.getNode(AArch64ISD::ADDlow, DL, PtrVT, ADRP, Lo);
> }
>
> -SDValue ARM64TargetLowering::LowerConstantPool(SDValue Op,
> - SelectionDAG &DAG) const {
> +SDValue AArch64TargetLowering::LowerConstantPool(SDValue Op,
> + SelectionDAG &DAG) const {
> ConstantPoolSDNode *CP = cast<ConstantPoolSDNode>(Op);
> EVT PtrVT = getPointerTy();
> SDLoc DL(Op);
> @@ -3401,63 +3420,63 @@ SDValue ARM64TargetLowering::LowerConsta
> if (Subtarget->isTargetMachO()) {
> SDValue GotAddr = DAG.getTargetConstantPool(
> CP->getConstVal(), PtrVT, CP->getAlignment(), CP->getOffset(),
> - ARM64II::MO_GOT);
> - return DAG.getNode(ARM64ISD::LOADgot, DL, PtrVT, GotAddr);
> + AArch64II::MO_GOT);
> + return DAG.getNode(AArch64ISD::LOADgot, DL, PtrVT, GotAddr);
> }
>
> - const unsigned char MO_NC = ARM64II::MO_NC;
> + const unsigned char MO_NC = AArch64II::MO_NC;
> return DAG.getNode(
> - ARM64ISD::WrapperLarge, DL, PtrVT,
> + AArch64ISD::WrapperLarge, DL, PtrVT,
> DAG.getTargetConstantPool(CP->getConstVal(), PtrVT, CP->getAlignment(),
> - CP->getOffset(), ARM64II::MO_G3),
> + CP->getOffset(), AArch64II::MO_G3),
> DAG.getTargetConstantPool(CP->getConstVal(), PtrVT, CP->getAlignment(),
> - CP->getOffset(), ARM64II::MO_G2 | MO_NC),
> + CP->getOffset(), AArch64II::MO_G2 | MO_NC),
> DAG.getTargetConstantPool(CP->getConstVal(), PtrVT, CP->getAlignment(),
> - CP->getOffset(), ARM64II::MO_G1 | MO_NC),
> + CP->getOffset(), AArch64II::MO_G1 | MO_NC),
> DAG.getTargetConstantPool(CP->getConstVal(), PtrVT, CP->getAlignment(),
> - CP->getOffset(), ARM64II::MO_G0 | MO_NC));
> + CP->getOffset(), AArch64II::MO_G0 | MO_NC));
> } else {
> // Use ADRP/ADD or ADRP/LDR for everything else: the small memory model on
> // ELF, the only valid one on Darwin.
> SDValue Hi =
> DAG.getTargetConstantPool(CP->getConstVal(), PtrVT, CP->getAlignment(),
> - CP->getOffset(), ARM64II::MO_PAGE);
> + CP->getOffset(), AArch64II::MO_PAGE);
> SDValue Lo = DAG.getTargetConstantPool(
> CP->getConstVal(), PtrVT, CP->getAlignment(), CP->getOffset(),
> - ARM64II::MO_PAGEOFF | ARM64II::MO_NC);
> + AArch64II::MO_PAGEOFF | AArch64II::MO_NC);
>
> - SDValue ADRP = DAG.getNode(ARM64ISD::ADRP, DL, PtrVT, Hi);
> - return DAG.getNode(ARM64ISD::ADDlow, DL, PtrVT, ADRP, Lo);
> + SDValue ADRP = DAG.getNode(AArch64ISD::ADRP, DL, PtrVT, Hi);
> + return DAG.getNode(AArch64ISD::ADDlow, DL, PtrVT, ADRP, Lo);
> }
> }
>
> -SDValue ARM64TargetLowering::LowerBlockAddress(SDValue Op,
> +SDValue AArch64TargetLowering::LowerBlockAddress(SDValue Op,
> SelectionDAG &DAG) const {
> const BlockAddress *BA = cast<BlockAddressSDNode>(Op)->getBlockAddress();
> EVT PtrVT = getPointerTy();
> SDLoc DL(Op);
> if (getTargetMachine().getCodeModel() == CodeModel::Large &&
> !Subtarget->isTargetMachO()) {
> - const unsigned char MO_NC = ARM64II::MO_NC;
> + const unsigned char MO_NC = AArch64II::MO_NC;
> return DAG.getNode(
> - ARM64ISD::WrapperLarge, DL, PtrVT,
> - DAG.getTargetBlockAddress(BA, PtrVT, 0, ARM64II::MO_G3),
> - DAG.getTargetBlockAddress(BA, PtrVT, 0, ARM64II::MO_G2 | MO_NC),
> - DAG.getTargetBlockAddress(BA, PtrVT, 0, ARM64II::MO_G1 | MO_NC),
> - DAG.getTargetBlockAddress(BA, PtrVT, 0, ARM64II::MO_G0 | MO_NC));
> + AArch64ISD::WrapperLarge, DL, PtrVT,
> + DAG.getTargetBlockAddress(BA, PtrVT, 0, AArch64II::MO_G3),
> + DAG.getTargetBlockAddress(BA, PtrVT, 0, AArch64II::MO_G2 | MO_NC),
> + DAG.getTargetBlockAddress(BA, PtrVT, 0, AArch64II::MO_G1 | MO_NC),
> + DAG.getTargetBlockAddress(BA, PtrVT, 0, AArch64II::MO_G0 | MO_NC));
> } else {
> - SDValue Hi = DAG.getTargetBlockAddress(BA, PtrVT, 0, ARM64II::MO_PAGE);
> - SDValue Lo = DAG.getTargetBlockAddress(BA, PtrVT, 0, ARM64II::MO_PAGEOFF |
> - ARM64II::MO_NC);
> - SDValue ADRP = DAG.getNode(ARM64ISD::ADRP, DL, PtrVT, Hi);
> - return DAG.getNode(ARM64ISD::ADDlow, DL, PtrVT, ADRP, Lo);
> + SDValue Hi = DAG.getTargetBlockAddress(BA, PtrVT, 0, AArch64II::MO_PAGE);
> + SDValue Lo = DAG.getTargetBlockAddress(BA, PtrVT, 0, AArch64II::MO_PAGEOFF |
> + AArch64II::MO_NC);
> + SDValue ADRP = DAG.getNode(AArch64ISD::ADRP, DL, PtrVT, Hi);
> + return DAG.getNode(AArch64ISD::ADDlow, DL, PtrVT, ADRP, Lo);
> }
> }
>
> -SDValue ARM64TargetLowering::LowerDarwin_VASTART(SDValue Op,
> +SDValue AArch64TargetLowering::LowerDarwin_VASTART(SDValue Op,
> SelectionDAG &DAG) const {
> - ARM64FunctionInfo *FuncInfo =
> - DAG.getMachineFunction().getInfo<ARM64FunctionInfo>();
> + AArch64FunctionInfo *FuncInfo =
> + DAG.getMachineFunction().getInfo<AArch64FunctionInfo>();
>
> SDLoc DL(Op);
> SDValue FR =
> @@ -3467,12 +3486,12 @@ SDValue ARM64TargetLowering::LowerDarwin
> MachinePointerInfo(SV), false, false, 0);
> }
>
> -SDValue ARM64TargetLowering::LowerAAPCS_VASTART(SDValue Op,
> +SDValue AArch64TargetLowering::LowerAAPCS_VASTART(SDValue Op,
> SelectionDAG &DAG) const {
> // The layout of the va_list struct is specified in the AArch64 Procedure Call
> // Standard, section B.3.
> MachineFunction &MF = DAG.getMachineFunction();
> - ARM64FunctionInfo *FuncInfo = MF.getInfo<ARM64FunctionInfo>();
> + AArch64FunctionInfo *FuncInfo = MF.getInfo<AArch64FunctionInfo>();
> SDLoc DL(Op);
>
> SDValue Chain = Op.getOperand(0);
> @@ -3534,12 +3553,14 @@ SDValue ARM64TargetLowering::LowerAAPCS_
> return DAG.getNode(ISD::TokenFactor, DL, MVT::Other, MemOps);
> }
>
> -SDValue ARM64TargetLowering::LowerVASTART(SDValue Op, SelectionDAG &DAG) const {
> +SDValue AArch64TargetLowering::LowerVASTART(SDValue Op,
> + SelectionDAG &DAG) const {
> return Subtarget->isTargetDarwin() ? LowerDarwin_VASTART(Op, DAG)
> : LowerAAPCS_VASTART(Op, DAG);
> }
>
> -SDValue ARM64TargetLowering::LowerVACOPY(SDValue Op, SelectionDAG &DAG) const {
> +SDValue AArch64TargetLowering::LowerVACOPY(SDValue Op,
> + SelectionDAG &DAG) const {
> // AAPCS has three pointers and two ints (= 32 bytes), Darwin has a single
> // pointer.
> unsigned VaListSize = Subtarget->isTargetDarwin() ? 8 : 32;
> @@ -3552,7 +3573,7 @@ SDValue ARM64TargetLowering::LowerVACOPY
> MachinePointerInfo(SrcSV));
> }
>
> -SDValue ARM64TargetLowering::LowerVAARG(SDValue Op, SelectionDAG &DAG) const {
> +SDValue AArch64TargetLowering::LowerVAARG(SDValue Op, SelectionDAG &DAG) const {
> assert(Subtarget->isTargetDarwin() &&
> "automatic va_arg instruction only works on Darwin");
>
> @@ -3614,15 +3635,16 @@ SDValue ARM64TargetLowering::LowerVAARG(
> false, false, 0);
> }
>
> -SDValue ARM64TargetLowering::LowerFRAMEADDR(SDValue Op,
> - SelectionDAG &DAG) const {
> +SDValue AArch64TargetLowering::LowerFRAMEADDR(SDValue Op,
> + SelectionDAG &DAG) const {
> MachineFrameInfo *MFI = DAG.getMachineFunction().getFrameInfo();
> MFI->setFrameAddressIsTaken(true);
>
> EVT VT = Op.getValueType();
> SDLoc DL(Op);
> unsigned Depth = cast<ConstantSDNode>(Op.getOperand(0))->getZExtValue();
> - SDValue FrameAddr = DAG.getCopyFromReg(DAG.getEntryNode(), DL, ARM64::FP, VT);
> + SDValue FrameAddr =
> + DAG.getCopyFromReg(DAG.getEntryNode(), DL, AArch64::FP, VT);
> while (Depth--)
> FrameAddr = DAG.getLoad(VT, DL, DAG.getEntryNode(), FrameAddr,
> MachinePointerInfo(), false, false, false, 0);
> @@ -3631,18 +3653,18 @@ SDValue ARM64TargetLowering::LowerFRAMEA
>
> // FIXME? Maybe this could be a TableGen attribute on some registers and
> // this table could be generated automatically from RegInfo.
> -unsigned ARM64TargetLowering::getRegisterByName(const char* RegName,
> - EVT VT) const {
> +unsigned AArch64TargetLowering::getRegisterByName(const char* RegName,
> + EVT VT) const {
> unsigned Reg = StringSwitch<unsigned>(RegName)
> - .Case("sp", ARM64::SP)
> + .Case("sp", AArch64::SP)
> .Default(0);
> if (Reg)
> return Reg;
> report_fatal_error("Invalid register name global variable");
> }
>
> -SDValue ARM64TargetLowering::LowerRETURNADDR(SDValue Op,
> - SelectionDAG &DAG) const {
> +SDValue AArch64TargetLowering::LowerRETURNADDR(SDValue Op,
> + SelectionDAG &DAG) const {
> MachineFunction &MF = DAG.getMachineFunction();
> MachineFrameInfo *MFI = MF.getFrameInfo();
> MFI->setReturnAddressIsTaken(true);
> @@ -3659,14 +3681,14 @@ SDValue ARM64TargetLowering::LowerRETURN
> }
>
> // Return LR, which contains the return address. Mark it an implicit live-in.
> - unsigned Reg = MF.addLiveIn(ARM64::LR, &ARM64::GPR64RegClass);
> + unsigned Reg = MF.addLiveIn(AArch64::LR, &AArch64::GPR64RegClass);
> return DAG.getCopyFromReg(DAG.getEntryNode(), DL, Reg, VT);
> }
>
> /// LowerShiftRightParts - Lower SRA_PARTS, which returns two
> /// i64 values and take a 2 x i64 value to shift plus a shift amount.
> -SDValue ARM64TargetLowering::LowerShiftRightParts(SDValue Op,
> - SelectionDAG &DAG) const {
> +SDValue AArch64TargetLowering::LowerShiftRightParts(SDValue Op,
> + SelectionDAG &DAG) const {
> assert(Op.getNumOperands() == 3 && "Not a double-shift!");
> EVT VT = Op.getValueType();
> unsigned VTBits = VT.getSizeInBits();
> @@ -3688,14 +3710,14 @@ SDValue ARM64TargetLowering::LowerShiftR
>
> SDValue Cmp = emitComparison(ExtraShAmt, DAG.getConstant(0, MVT::i64),
> ISD::SETGE, dl, DAG);
> - SDValue CCVal = DAG.getConstant(ARM64CC::GE, MVT::i32);
> + SDValue CCVal = DAG.getConstant(AArch64CC::GE, MVT::i32);
>
> SDValue FalseValLo = DAG.getNode(ISD::OR, dl, VT, Tmp1, Tmp2);
> SDValue TrueValLo = DAG.getNode(Opc, dl, VT, ShOpHi, ExtraShAmt);
> SDValue Lo =
> - DAG.getNode(ARM64ISD::CSEL, dl, VT, TrueValLo, FalseValLo, CCVal, Cmp);
> + DAG.getNode(AArch64ISD::CSEL, dl, VT, TrueValLo, FalseValLo, CCVal, Cmp);
>
> - // ARM64 shifts larger than the register width are wrapped rather than
> + // AArch64 shifts larger than the register width are wrapped rather than
> // clamped, so we can't just emit "hi >> x".
> SDValue FalseValHi = DAG.getNode(Opc, dl, VT, ShOpHi, ShAmt);
> SDValue TrueValHi = Opc == ISD::SRA
> @@ -3703,7 +3725,7 @@ SDValue ARM64TargetLowering::LowerShiftR
> DAG.getConstant(VTBits - 1, MVT::i64))
> : DAG.getConstant(0, VT);
> SDValue Hi =
> - DAG.getNode(ARM64ISD::CSEL, dl, VT, TrueValHi, FalseValHi, CCVal, Cmp);
> + DAG.getNode(AArch64ISD::CSEL, dl, VT, TrueValHi, FalseValHi, CCVal, Cmp);
>
> SDValue Ops[2] = { Lo, Hi };
> return DAG.getMergeValues(Ops, dl);
> @@ -3711,7 +3733,7 @@ SDValue ARM64TargetLowering::LowerShiftR
>
> /// LowerShiftLeftParts - Lower SHL_PARTS, which returns two
> /// i64 values and take a 2 x i64 value to shift plus a shift amount.
> -SDValue ARM64TargetLowering::LowerShiftLeftParts(SDValue Op,
> +SDValue AArch64TargetLowering::LowerShiftLeftParts(SDValue Op,
> SelectionDAG &DAG) const {
> assert(Op.getNumOperands() == 3 && "Not a double-shift!");
> EVT VT = Op.getValueType();
> @@ -3735,45 +3757,46 @@ SDValue ARM64TargetLowering::LowerShiftL
>
> SDValue Cmp = emitComparison(ExtraShAmt, DAG.getConstant(0, MVT::i64),
> ISD::SETGE, dl, DAG);
> - SDValue CCVal = DAG.getConstant(ARM64CC::GE, MVT::i32);
> - SDValue Hi = DAG.getNode(ARM64ISD::CSEL, dl, VT, Tmp3, FalseVal, CCVal, Cmp);
> + SDValue CCVal = DAG.getConstant(AArch64CC::GE, MVT::i32);
> + SDValue Hi =
> + DAG.getNode(AArch64ISD::CSEL, dl, VT, Tmp3, FalseVal, CCVal, Cmp);
>
> - // ARM64 shifts of larger than register sizes are wrapped rather than clamped,
> - // so we can't just emit "lo << a" if a is too big.
> + // AArch64 shifts of larger than register sizes are wrapped rather than
> + // clamped, so we can't just emit "lo << a" if a is too big.
> SDValue TrueValLo = DAG.getConstant(0, VT);
> SDValue FalseValLo = DAG.getNode(ISD::SHL, dl, VT, ShOpLo, ShAmt);
> SDValue Lo =
> - DAG.getNode(ARM64ISD::CSEL, dl, VT, TrueValLo, FalseValLo, CCVal, Cmp);
> + DAG.getNode(AArch64ISD::CSEL, dl, VT, TrueValLo, FalseValLo, CCVal, Cmp);
>
> SDValue Ops[2] = { Lo, Hi };
> return DAG.getMergeValues(Ops, dl);
> }
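
The "wrapped rather than clamped" comments in the two shift-parts lowerings above come down to the fact that AArch64 register shifts only use the low six bits of the amount, hence the compare of ExtraShAmt against 0 and the CSEL on GE. A scalar model of the low half of a 128-bit logical right shift, valid for amounts in [1, 127] (a sketch, not the emitted DAG):

    #include <cstdint>

    uint64_t lshr128_lo(uint64_t Lo, uint64_t Hi, unsigned Amt) {
      int Extra = int(Amt) - 64;                                 // ExtraShAmt
      uint64_t Small = (Lo >> (Amt & 63)) | (Hi << ((64 - Amt) & 63));
      uint64_t Large = Hi >> (Extra & 63);                       // Amt >= 64 case
      return Extra >= 0 ? Large : Small;                         // CSEL on GE
    }
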
>
> -bool
> -ARM64TargetLowering::isOffsetFoldingLegal(const GlobalAddressSDNode *GA) const {
> - // The ARM64 target doesn't support folding offsets into global addresses.
> +bool AArch64TargetLowering::isOffsetFoldingLegal(
> + const GlobalAddressSDNode *GA) const {
> + // The AArch64 target doesn't support folding offsets into global addresses.
> return false;
> }
>
> -bool ARM64TargetLowering::isFPImmLegal(const APFloat &Imm, EVT VT) const {
> +bool AArch64TargetLowering::isFPImmLegal(const APFloat &Imm, EVT VT) const {
> // We can materialize #0.0 as fmov $Rd, XZR for 64-bit and 32-bit cases.
> // FIXME: We should be able to handle f128 as well with a clever lowering.
> if (Imm.isPosZero() && (VT == MVT::f64 || VT == MVT::f32))
> return true;
>
> if (VT == MVT::f64)
> - return ARM64_AM::getFP64Imm(Imm) != -1;
> + return AArch64_AM::getFP64Imm(Imm) != -1;
> else if (VT == MVT::f32)
> - return ARM64_AM::getFP32Imm(Imm) != -1;
> + return AArch64_AM::getFP32Imm(Imm) != -1;
> return false;
> }
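
For reference on isFPImmLegal above: FMOV (immediate) encodes the 256 values of the form plus or minus (16..31)/16 * 2^e with e in [-3, 4], so 0.5, 1.0, 2.0 and 31.0 are legal immediates while 0.1 is not, and +0.0 is the special case materialized from XZR/WZR. A brute-force model of the check (a sketch; the real tests are AArch64_AM::getFP32Imm/getFP64Imm):

    #include <cmath>

    bool isFMOVImmediate(double V) {
      for (int Sign = -1; Sign <= 1; Sign += 2)
        for (int Mant = 16; Mant <= 31; ++Mant)
          for (int Exp = -3; Exp <= 4; ++Exp)
            if (V == Sign * (Mant / 16.0) * std::ldexp(1.0, Exp))
              return true;
      return false;
    }
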
>
> //===----------------------------------------------------------------------===//
> -// ARM64 Optimization Hooks
> +// AArch64 Optimization Hooks
> //===----------------------------------------------------------------------===//
>
> //===----------------------------------------------------------------------===//
> -// ARM64 Inline Assembly Support
> +// AArch64 Inline Assembly Support
> //===----------------------------------------------------------------------===//
>
> // Table of Constraints
> @@ -3802,8 +3825,8 @@ bool ARM64TargetLowering::isFPImmLegal(c
>
> /// getConstraintType - Given a constraint letter, return the type of
> /// constraint it is for this target.
> -ARM64TargetLowering::ConstraintType
> -ARM64TargetLowering::getConstraintType(const std::string &Constraint) const {
> +AArch64TargetLowering::ConstraintType
> +AArch64TargetLowering::getConstraintType(const std::string &Constraint) const {
> if (Constraint.size() == 1) {
> switch (Constraint[0]) {
> default:
> @@ -3826,7 +3849,7 @@ ARM64TargetLowering::getConstraintType(c
> /// This object must already have been set up with the operand type
> /// and the current alternative constraint selected.
> TargetLowering::ConstraintWeight
> -ARM64TargetLowering::getSingleConstraintMatchWeight(
> +AArch64TargetLowering::getSingleConstraintMatchWeight(
> AsmOperandInfo &info, const char *constraint) const {
> ConstraintWeight weight = CW_Invalid;
> Value *CallOperandVal = info.CallOperandVal;
> @@ -3853,32 +3876,32 @@ ARM64TargetLowering::getSingleConstraint
> }
>
> std::pair<unsigned, const TargetRegisterClass *>
> -ARM64TargetLowering::getRegForInlineAsmConstraint(const std::string &Constraint,
> - MVT VT) const {
> +AArch64TargetLowering::getRegForInlineAsmConstraint(
> + const std::string &Constraint, MVT VT) const {
> if (Constraint.size() == 1) {
> switch (Constraint[0]) {
> case 'r':
> if (VT.getSizeInBits() == 64)
> - return std::make_pair(0U, &ARM64::GPR64commonRegClass);
> - return std::make_pair(0U, &ARM64::GPR32commonRegClass);
> + return std::make_pair(0U, &AArch64::GPR64commonRegClass);
> + return std::make_pair(0U, &AArch64::GPR32commonRegClass);
> case 'w':
> if (VT == MVT::f32)
> - return std::make_pair(0U, &ARM64::FPR32RegClass);
> + return std::make_pair(0U, &AArch64::FPR32RegClass);
> if (VT.getSizeInBits() == 64)
> - return std::make_pair(0U, &ARM64::FPR64RegClass);
> + return std::make_pair(0U, &AArch64::FPR64RegClass);
> if (VT.getSizeInBits() == 128)
> - return std::make_pair(0U, &ARM64::FPR128RegClass);
> + return std::make_pair(0U, &AArch64::FPR128RegClass);
> break;
> // The instructions that this constraint is designed for can
> // only take 128-bit registers so just use that regclass.
> case 'x':
> if (VT.getSizeInBits() == 128)
> - return std::make_pair(0U, &ARM64::FPR128_loRegClass);
> + return std::make_pair(0U, &AArch64::FPR128_loRegClass);
> break;
> }
> }
> if (StringRef("{cc}").equals_lower(Constraint))
> - return std::make_pair(unsigned(ARM64::NZCV), &ARM64::CCRRegClass);
> + return std::make_pair(unsigned(AArch64::NZCV), &AArch64::CCRRegClass);
>
> // Use the default implementation in TargetLowering to convert the register
> // constraint into a member of a register class.
> @@ -3897,8 +3920,8 @@ ARM64TargetLowering::getRegForInlineAsmC
> // v0 - v31 are aliases of q0 - q31.
> // By default we'll emit v0-v31 for this unless there's a modifier where
> // we'll emit the correct register as well.
> - Res.first = ARM64::FPR128RegClass.getRegister(RegNo);
> - Res.second = &ARM64::FPR128RegClass;
> + Res.first = AArch64::FPR128RegClass.getRegister(RegNo);
> + Res.second = &AArch64::FPR128RegClass;
> }
> }
> }
> @@ -3908,7 +3931,7 @@ ARM64TargetLowering::getRegForInlineAsmC
>
> /// LowerAsmOperandForConstraint - Lower the specified operand into the Ops
> /// vector. If it is invalid, don't add anything to Ops.
> -void ARM64TargetLowering::LowerAsmOperandForConstraint(
> +void AArch64TargetLowering::LowerAsmOperandForConstraint(
> SDValue Op, std::string &Constraint, std::vector<SDValue> &Ops,
> SelectionDAG &DAG) const {
> SDValue Result;
> @@ -3931,9 +3954,9 @@ void ARM64TargetLowering::LowerAsmOperan
> return;
>
> if (Op.getValueType() == MVT::i64)
> - Result = DAG.getRegister(ARM64::XZR, MVT::i64);
> + Result = DAG.getRegister(AArch64::XZR, MVT::i64);
> else
> - Result = DAG.getRegister(ARM64::WZR, MVT::i32);
> + Result = DAG.getRegister(AArch64::WZR, MVT::i32);
> break;
> }
>
> @@ -3974,11 +3997,11 @@ void ARM64TargetLowering::LowerAsmOperan
> // not a valid bimm64 (L) where 0xaaaaaaaaaaaaaaaa would be valid, and vice
> // versa.
> case 'K':
> - if (ARM64_AM::isLogicalImmediate(CVal, 32))
> + if (AArch64_AM::isLogicalImmediate(CVal, 32))
> break;
> return;
> case 'L':
> - if (ARM64_AM::isLogicalImmediate(CVal, 64))
> + if (AArch64_AM::isLogicalImmediate(CVal, 64))
> break;
> return;
> // The M and N constraints are a superset of K and L respectively, for use
> @@ -3990,7 +4013,7 @@ void ARM64TargetLowering::LowerAsmOperan
> case 'M': {
> if (!isUInt<32>(CVal))
> return;
> - if (ARM64_AM::isLogicalImmediate(CVal, 32))
> + if (AArch64_AM::isLogicalImmediate(CVal, 32))
> break;
> if ((CVal & 0xFFFF) == CVal)
> break;
> @@ -4004,7 +4027,7 @@ void ARM64TargetLowering::LowerAsmOperan
> return;
> }
> case 'N': {
> - if (ARM64_AM::isLogicalImmediate(CVal, 64))
> + if (AArch64_AM::isLogicalImmediate(CVal, 64))
> break;
> if ((CVal & 0xFFFFULL) == CVal)
> break;
> @@ -4043,7 +4066,7 @@ void ARM64TargetLowering::LowerAsmOperan
> }
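
As a source-level illustration of the constraints handled in the two hooks above (a sketch in standard GCC/Clang inline-asm syntax, not part of the patch): 'r' picks a general-purpose register, 'w' an FP/SIMD register, and 'K' accepts a 32-bit logical immediate such as 0xff00ff00.

    #include <cstdint>

    uint64_t mask_then_add(uint64_t X) {
      uint64_t R;
      asm("and %w0, %w1, %2\n\t"
          "add %x0, %x0, %x1"
          : "=&r"(R)
          : "r"(X), "K"(0xff00ff00u));
      return R;
    }

    double twice(double X) {
      double R;
      asm("fadd %d0, %d1, %d1" : "=w"(R) : "w"(X));
      return R;
    }

If a constant does not satisfy its constraint letter, the front end rejects the asm rather than silently picking a different encoding, which is why LowerAsmOperandForConstraint validates the immediates.
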
>
> //===----------------------------------------------------------------------===//
> -// ARM64 Advanced SIMD Support
> +// AArch64 Advanced SIMD Support
> //===----------------------------------------------------------------------===//
>
> /// WidenVector - Given a value in the V64 register class, produce the
> @@ -4075,13 +4098,13 @@ static SDValue NarrowVector(SDValue V128
> MVT NarrowTy = MVT::getVectorVT(EltTy, WideSize / 2);
> SDLoc DL(V128Reg);
>
> - return DAG.getTargetExtractSubreg(ARM64::dsub, DL, NarrowTy, V128Reg);
> + return DAG.getTargetExtractSubreg(AArch64::dsub, DL, NarrowTy, V128Reg);
> }
>
> // Gather data to see if the operation can be modelled as a
> // shuffle in combination with VEXTs.
> -SDValue ARM64TargetLowering::ReconstructShuffle(SDValue Op,
> - SelectionDAG &DAG) const {
> +SDValue AArch64TargetLowering::ReconstructShuffle(SDValue Op,
> + SelectionDAG &DAG) const {
> SDLoc dl(Op);
> EVT VT = Op.getValueType();
> unsigned NumElts = VT.getVectorNumElements();
> @@ -4186,7 +4209,7 @@ SDValue ARM64TargetLowering::Reconstruct
> DAG.getNode(ISD::EXTRACT_SUBVECTOR, dl, VT, SourceVecs[i],
> DAG.getIntPtrConstant(NumElts));
> unsigned Imm = VEXTOffsets[i] * getExtFactor(VEXTSrc1);
> - ShuffleSrcs[i] = DAG.getNode(ARM64ISD::EXT, dl, VT, VEXTSrc1, VEXTSrc2,
> + ShuffleSrcs[i] = DAG.getNode(AArch64ISD::EXT, dl, VT, VEXTSrc1, VEXTSrc2,
> DAG.getConstant(Imm, MVT::i32));
> }
> }
> @@ -4542,13 +4565,13 @@ static SDValue GeneratePerfectShuffle(un
> // VREV divides the vector in half and swaps within the half.
> if (VT.getVectorElementType() == MVT::i32 ||
> VT.getVectorElementType() == MVT::f32)
> - return DAG.getNode(ARM64ISD::REV64, dl, VT, OpLHS);
> + return DAG.getNode(AArch64ISD::REV64, dl, VT, OpLHS);
> // vrev <4 x i16> -> REV32
> if (VT.getVectorElementType() == MVT::i16)
> - return DAG.getNode(ARM64ISD::REV32, dl, VT, OpLHS);
> + return DAG.getNode(AArch64ISD::REV32, dl, VT, OpLHS);
> // vrev <4 x i8> -> REV16
> assert(VT.getVectorElementType() == MVT::i8);
> - return DAG.getNode(ARM64ISD::REV16, dl, VT, OpLHS);
> + return DAG.getNode(AArch64ISD::REV16, dl, VT, OpLHS);
> case OP_VDUP0:
> case OP_VDUP1:
> case OP_VDUP2:
> @@ -4556,13 +4579,13 @@ static SDValue GeneratePerfectShuffle(un
> EVT EltTy = VT.getVectorElementType();
> unsigned Opcode;
> if (EltTy == MVT::i8)
> - Opcode = ARM64ISD::DUPLANE8;
> + Opcode = AArch64ISD::DUPLANE8;
> else if (EltTy == MVT::i16)
> - Opcode = ARM64ISD::DUPLANE16;
> + Opcode = AArch64ISD::DUPLANE16;
> else if (EltTy == MVT::i32 || EltTy == MVT::f32)
> - Opcode = ARM64ISD::DUPLANE32;
> + Opcode = AArch64ISD::DUPLANE32;
> else if (EltTy == MVT::i64 || EltTy == MVT::f64)
> - Opcode = ARM64ISD::DUPLANE64;
> + Opcode = AArch64ISD::DUPLANE64;
> else
> llvm_unreachable("Invalid vector element type?");
>
> @@ -4575,21 +4598,27 @@ static SDValue GeneratePerfectShuffle(un
> case OP_VEXT2:
> case OP_VEXT3: {
> unsigned Imm = (OpNum - OP_VEXT1 + 1) * getExtFactor(OpLHS);
> - return DAG.getNode(ARM64ISD::EXT, dl, VT, OpLHS, OpRHS,
> + return DAG.getNode(AArch64ISD::EXT, dl, VT, OpLHS, OpRHS,
> DAG.getConstant(Imm, MVT::i32));
> }
> case OP_VUZPL:
> - return DAG.getNode(ARM64ISD::UZP1, dl, DAG.getVTList(VT, VT), OpLHS, OpRHS);
> + return DAG.getNode(AArch64ISD::UZP1, dl, DAG.getVTList(VT, VT), OpLHS,
> + OpRHS);
> case OP_VUZPR:
> - return DAG.getNode(ARM64ISD::UZP2, dl, DAG.getVTList(VT, VT), OpLHS, OpRHS);
> + return DAG.getNode(AArch64ISD::UZP2, dl, DAG.getVTList(VT, VT), OpLHS,
> + OpRHS);
> case OP_VZIPL:
> - return DAG.getNode(ARM64ISD::ZIP1, dl, DAG.getVTList(VT, VT), OpLHS, OpRHS);
> + return DAG.getNode(AArch64ISD::ZIP1, dl, DAG.getVTList(VT, VT), OpLHS,
> + OpRHS);
> case OP_VZIPR:
> - return DAG.getNode(ARM64ISD::ZIP2, dl, DAG.getVTList(VT, VT), OpLHS, OpRHS);
> + return DAG.getNode(AArch64ISD::ZIP2, dl, DAG.getVTList(VT, VT), OpLHS,
> + OpRHS);
> case OP_VTRNL:
> - return DAG.getNode(ARM64ISD::TRN1, dl, DAG.getVTList(VT, VT), OpLHS, OpRHS);
> + return DAG.getNode(AArch64ISD::TRN1, dl, DAG.getVTList(VT, VT), OpLHS,
> + OpRHS);
> case OP_VTRNR:
> - return DAG.getNode(ARM64ISD::TRN2, dl, DAG.getVTList(VT, VT), OpLHS, OpRHS);
> + return DAG.getNode(AArch64ISD::TRN2, dl, DAG.getVTList(VT, VT), OpLHS,
> + OpRHS);
> }
> }
>
> @@ -4627,7 +4656,7 @@ static SDValue GenerateTBL(SDValue Op, A
> V1Cst = DAG.getNode(ISD::CONCAT_VECTORS, DL, MVT::v16i8, V1Cst, V1Cst);
> Shuffle = DAG.getNode(
> ISD::INTRINSIC_WO_CHAIN, DL, IndexVT,
> - DAG.getConstant(Intrinsic::arm64_neon_tbl1, MVT::i32), V1Cst,
> + DAG.getConstant(Intrinsic::aarch64_neon_tbl1, MVT::i32), V1Cst,
> DAG.getNode(ISD::BUILD_VECTOR, DL, IndexVT,
> makeArrayRef(TBLMask.data(), IndexLen)));
> } else {
> @@ -4635,19 +4664,19 @@ static SDValue GenerateTBL(SDValue Op, A
> V1Cst = DAG.getNode(ISD::CONCAT_VECTORS, DL, MVT::v16i8, V1Cst, V2Cst);
> Shuffle = DAG.getNode(
> ISD::INTRINSIC_WO_CHAIN, DL, IndexVT,
> - DAG.getConstant(Intrinsic::arm64_neon_tbl1, MVT::i32), V1Cst,
> + DAG.getConstant(Intrinsic::aarch64_neon_tbl1, MVT::i32), V1Cst,
> DAG.getNode(ISD::BUILD_VECTOR, DL, IndexVT,
> makeArrayRef(TBLMask.data(), IndexLen)));
> } else {
> // FIXME: We cannot, for the moment, emit a TBL2 instruction because we
> // cannot currently represent the register constraints on the input
> // table registers.
> - // Shuffle = DAG.getNode(ARM64ISD::TBL2, DL, IndexVT, V1Cst, V2Cst,
> + // Shuffle = DAG.getNode(AArch64ISD::TBL2, DL, IndexVT, V1Cst, V2Cst,
> // DAG.getNode(ISD::BUILD_VECTOR, DL, IndexVT,
> // &TBLMask[0], IndexLen));
> Shuffle = DAG.getNode(
> ISD::INTRINSIC_WO_CHAIN, DL, IndexVT,
> - DAG.getConstant(Intrinsic::arm64_neon_tbl2, MVT::i32), V1Cst, V2Cst,
> + DAG.getConstant(Intrinsic::aarch64_neon_tbl2, MVT::i32), V1Cst, V2Cst,
> DAG.getNode(ISD::BUILD_VECTOR, DL, IndexVT,
> makeArrayRef(TBLMask.data(), IndexLen)));
> }
> @@ -4657,19 +4686,19 @@ static SDValue GenerateTBL(SDValue Op, A
>
> static unsigned getDUPLANEOp(EVT EltType) {
> if (EltType == MVT::i8)
> - return ARM64ISD::DUPLANE8;
> + return AArch64ISD::DUPLANE8;
> if (EltType == MVT::i16)
> - return ARM64ISD::DUPLANE16;
> + return AArch64ISD::DUPLANE16;
> if (EltType == MVT::i32 || EltType == MVT::f32)
> - return ARM64ISD::DUPLANE32;
> + return AArch64ISD::DUPLANE32;
> if (EltType == MVT::i64 || EltType == MVT::f64)
> - return ARM64ISD::DUPLANE64;
> + return AArch64ISD::DUPLANE64;
>
> llvm_unreachable("Invalid vector element type?");
> }
>
> -SDValue ARM64TargetLowering::LowerVECTOR_SHUFFLE(SDValue Op,
> - SelectionDAG &DAG) const {
> +SDValue AArch64TargetLowering::LowerVECTOR_SHUFFLE(SDValue Op,
> + SelectionDAG &DAG) const {
> SDLoc dl(Op);
> EVT VT = Op.getValueType();
>
> @@ -4692,13 +4721,13 @@ SDValue ARM64TargetLowering::LowerVECTOR
> Lane = 0;
>
> if (Lane == 0 && V1.getOpcode() == ISD::SCALAR_TO_VECTOR)
> - return DAG.getNode(ARM64ISD::DUP, dl, V1.getValueType(),
> + return DAG.getNode(AArch64ISD::DUP, dl, V1.getValueType(),
> V1.getOperand(0));
> // Test if V1 is a BUILD_VECTOR and the lane being referenced is a non-
> // constant. If so, we can just reference the lane's definition directly.
> if (V1.getOpcode() == ISD::BUILD_VECTOR &&
> !isa<ConstantSDNode>(V1.getOperand(Lane)))
> - return DAG.getNode(ARM64ISD::DUP, dl, VT, V1.getOperand(Lane));
> + return DAG.getNode(AArch64ISD::DUP, dl, VT, V1.getOperand(Lane));
>
> // Otherwise, duplicate from the lane of the input vector.
> unsigned Opcode = getDUPLANEOp(V1.getValueType().getVectorElementType());
> @@ -4720,11 +4749,11 @@ SDValue ARM64TargetLowering::LowerVECTOR
> }
>
> if (isREVMask(ShuffleMask, VT, 64))
> - return DAG.getNode(ARM64ISD::REV64, dl, V1.getValueType(), V1, V2);
> + return DAG.getNode(AArch64ISD::REV64, dl, V1.getValueType(), V1, V2);
> if (isREVMask(ShuffleMask, VT, 32))
> - return DAG.getNode(ARM64ISD::REV32, dl, V1.getValueType(), V1, V2);
> + return DAG.getNode(AArch64ISD::REV32, dl, V1.getValueType(), V1, V2);
> if (isREVMask(ShuffleMask, VT, 16))
> - return DAG.getNode(ARM64ISD::REV16, dl, V1.getValueType(), V1, V2);
> + return DAG.getNode(AArch64ISD::REV16, dl, V1.getValueType(), V1, V2);
>
> bool ReverseEXT = false;
> unsigned Imm;
> @@ -4732,39 +4761,39 @@ SDValue ARM64TargetLowering::LowerVECTOR
> if (ReverseEXT)
> std::swap(V1, V2);
> Imm *= getExtFactor(V1);
> - return DAG.getNode(ARM64ISD::EXT, dl, V1.getValueType(), V1, V2,
> + return DAG.getNode(AArch64ISD::EXT, dl, V1.getValueType(), V1, V2,
> DAG.getConstant(Imm, MVT::i32));
> } else if (V2->getOpcode() == ISD::UNDEF &&
> isSingletonEXTMask(ShuffleMask, VT, Imm)) {
> Imm *= getExtFactor(V1);
> - return DAG.getNode(ARM64ISD::EXT, dl, V1.getValueType(), V1, V1,
> + return DAG.getNode(AArch64ISD::EXT, dl, V1.getValueType(), V1, V1,
> DAG.getConstant(Imm, MVT::i32));
> }
>
> unsigned WhichResult;
> if (isZIPMask(ShuffleMask, VT, WhichResult)) {
> - unsigned Opc = (WhichResult == 0) ? ARM64ISD::ZIP1 : ARM64ISD::ZIP2;
> + unsigned Opc = (WhichResult == 0) ? AArch64ISD::ZIP1 : AArch64ISD::ZIP2;
> return DAG.getNode(Opc, dl, V1.getValueType(), V1, V2);
> }
> if (isUZPMask(ShuffleMask, VT, WhichResult)) {
> - unsigned Opc = (WhichResult == 0) ? ARM64ISD::UZP1 : ARM64ISD::UZP2;
> + unsigned Opc = (WhichResult == 0) ? AArch64ISD::UZP1 : AArch64ISD::UZP2;
> return DAG.getNode(Opc, dl, V1.getValueType(), V1, V2);
> }
> if (isTRNMask(ShuffleMask, VT, WhichResult)) {
> - unsigned Opc = (WhichResult == 0) ? ARM64ISD::TRN1 : ARM64ISD::TRN2;
> + unsigned Opc = (WhichResult == 0) ? AArch64ISD::TRN1 : AArch64ISD::TRN2;
> return DAG.getNode(Opc, dl, V1.getValueType(), V1, V2);
> }
>
> if (isZIP_v_undef_Mask(ShuffleMask, VT, WhichResult)) {
> - unsigned Opc = (WhichResult == 0) ? ARM64ISD::ZIP1 : ARM64ISD::ZIP2;
> + unsigned Opc = (WhichResult == 0) ? AArch64ISD::ZIP1 : AArch64ISD::ZIP2;
> return DAG.getNode(Opc, dl, V1.getValueType(), V1, V1);
> }
> if (isUZP_v_undef_Mask(ShuffleMask, VT, WhichResult)) {
> - unsigned Opc = (WhichResult == 0) ? ARM64ISD::UZP1 : ARM64ISD::UZP2;
> + unsigned Opc = (WhichResult == 0) ? AArch64ISD::UZP1 : AArch64ISD::UZP2;
> return DAG.getNode(Opc, dl, V1.getValueType(), V1, V1);
> }
> if (isTRN_v_undef_Mask(ShuffleMask, VT, WhichResult)) {
> - unsigned Opc = (WhichResult == 0) ? ARM64ISD::TRN1 : ARM64ISD::TRN2;
> + unsigned Opc = (WhichResult == 0) ? AArch64ISD::TRN1 : AArch64ISD::TRN2;
> return DAG.getNode(Opc, dl, V1.getValueType(), V1, V1);
> }
>
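
The mask checks above correspond to shuffles that can be written directly in source, for example (a sketch using Clang's __builtin_shufflevector; names and masks are illustrative):

    typedef int v4si __attribute__((vector_size(16)));

    // {1,0,3,2}: reverse within each 64-bit half, the isREVMask(.., 64) shape.
    v4si rev_halves(v4si A)         { return __builtin_shufflevector(A, A, 1, 0, 3, 2); }
    // {0,4,1,5}: interleave the low halves, the isZIPMask / ZIP1 shape.
    v4si zip_low(v4si A, v4si B)    { return __builtin_shufflevector(A, B, 0, 4, 1, 5); }
    // {1,2,3,4}: a sliding window over the concatenation, the EXT shape (#4 bytes).
    v4si ext_by_one(v4si A, v4si B) { return __builtin_shufflevector(A, B, 1, 2, 3, 4); }
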
> @@ -4844,8 +4873,8 @@ static bool resolveBuildVector(BuildVect
> return false;
> }
>
> -SDValue ARM64TargetLowering::LowerVectorAND(SDValue Op,
> - SelectionDAG &DAG) const {
> +SDValue AArch64TargetLowering::LowerVectorAND(SDValue Op,
> + SelectionDAG &DAG) const {
> BuildVectorSDNode *BVN =
> dyn_cast<BuildVectorSDNode>(Op.getOperand(1).getNode());
> SDValue LHS = Op.getOperand(0);
> @@ -4870,55 +4899,55 @@ SDValue ARM64TargetLowering::LowerVector
> CnstBits = CnstBits.zextOrTrunc(64);
> uint64_t CnstVal = CnstBits.getZExtValue();
>
> - if (ARM64_AM::isAdvSIMDModImmType1(CnstVal)) {
> - CnstVal = ARM64_AM::encodeAdvSIMDModImmType1(CnstVal);
> + if (AArch64_AM::isAdvSIMDModImmType1(CnstVal)) {
> + CnstVal = AArch64_AM::encodeAdvSIMDModImmType1(CnstVal);
> MVT MovTy = (VT.getSizeInBits() == 128) ? MVT::v4i32 : MVT::v2i32;
> - SDValue Mov = DAG.getNode(ARM64ISD::BICi, dl, MovTy, LHS,
> + SDValue Mov = DAG.getNode(AArch64ISD::BICi, dl, MovTy, LHS,
> DAG.getConstant(CnstVal, MVT::i32),
> DAG.getConstant(0, MVT::i32));
> return DAG.getNode(ISD::BITCAST, dl, VT, Mov);
> }
>
> - if (ARM64_AM::isAdvSIMDModImmType2(CnstVal)) {
> - CnstVal = ARM64_AM::encodeAdvSIMDModImmType2(CnstVal);
> + if (AArch64_AM::isAdvSIMDModImmType2(CnstVal)) {
> + CnstVal = AArch64_AM::encodeAdvSIMDModImmType2(CnstVal);
> MVT MovTy = (VT.getSizeInBits() == 128) ? MVT::v4i32 : MVT::v2i32;
> - SDValue Mov = DAG.getNode(ARM64ISD::BICi, dl, MovTy, LHS,
> + SDValue Mov = DAG.getNode(AArch64ISD::BICi, dl, MovTy, LHS,
> DAG.getConstant(CnstVal, MVT::i32),
> DAG.getConstant(8, MVT::i32));
> return DAG.getNode(ISD::BITCAST, dl, VT, Mov);
> }
>
> - if (ARM64_AM::isAdvSIMDModImmType3(CnstVal)) {
> - CnstVal = ARM64_AM::encodeAdvSIMDModImmType3(CnstVal);
> + if (AArch64_AM::isAdvSIMDModImmType3(CnstVal)) {
> + CnstVal = AArch64_AM::encodeAdvSIMDModImmType3(CnstVal);
> MVT MovTy = (VT.getSizeInBits() == 128) ? MVT::v4i32 : MVT::v2i32;
> - SDValue Mov = DAG.getNode(ARM64ISD::BICi, dl, MovTy, LHS,
> + SDValue Mov = DAG.getNode(AArch64ISD::BICi, dl, MovTy, LHS,
> DAG.getConstant(CnstVal, MVT::i32),
> DAG.getConstant(16, MVT::i32));
> return DAG.getNode(ISD::BITCAST, dl, VT, Mov);
> }
>
> - if (ARM64_AM::isAdvSIMDModImmType4(CnstVal)) {
> - CnstVal = ARM64_AM::encodeAdvSIMDModImmType4(CnstVal);
> + if (AArch64_AM::isAdvSIMDModImmType4(CnstVal)) {
> + CnstVal = AArch64_AM::encodeAdvSIMDModImmType4(CnstVal);
> MVT MovTy = (VT.getSizeInBits() == 128) ? MVT::v4i32 : MVT::v2i32;
> - SDValue Mov = DAG.getNode(ARM64ISD::BICi, dl, MovTy, LHS,
> + SDValue Mov = DAG.getNode(AArch64ISD::BICi, dl, MovTy, LHS,
> DAG.getConstant(CnstVal, MVT::i32),
> DAG.getConstant(24, MVT::i32));
> return DAG.getNode(ISD::BITCAST, dl, VT, Mov);
> }
>
> - if (ARM64_AM::isAdvSIMDModImmType5(CnstVal)) {
> - CnstVal = ARM64_AM::encodeAdvSIMDModImmType5(CnstVal);
> + if (AArch64_AM::isAdvSIMDModImmType5(CnstVal)) {
> + CnstVal = AArch64_AM::encodeAdvSIMDModImmType5(CnstVal);
> MVT MovTy = (VT.getSizeInBits() == 128) ? MVT::v8i16 : MVT::v4i16;
> - SDValue Mov = DAG.getNode(ARM64ISD::BICi, dl, MovTy, LHS,
> + SDValue Mov = DAG.getNode(AArch64ISD::BICi, dl, MovTy, LHS,
> DAG.getConstant(CnstVal, MVT::i32),
> DAG.getConstant(0, MVT::i32));
> return DAG.getNode(ISD::BITCAST, dl, VT, Mov);
> }
>
> - if (ARM64_AM::isAdvSIMDModImmType6(CnstVal)) {
> - CnstVal = ARM64_AM::encodeAdvSIMDModImmType6(CnstVal);
> + if (AArch64_AM::isAdvSIMDModImmType6(CnstVal)) {
> + CnstVal = AArch64_AM::encodeAdvSIMDModImmType6(CnstVal);
> MVT MovTy = (VT.getSizeInBits() == 128) ? MVT::v8i16 : MVT::v4i16;
> - SDValue Mov = DAG.getNode(ARM64ISD::BICi, dl, MovTy, LHS,
> + SDValue Mov = DAG.getNode(AArch64ISD::BICi, dl, MovTy, LHS,
> DAG.getConstant(CnstVal, MVT::i32),
> DAG.getConstant(8, MVT::i32));
> return DAG.getNode(ISD::BITCAST, dl, VT, Mov);
> @@ -4990,12 +5019,12 @@ static SDValue tryLowerToSLI(SDNode *N,
>
> // Is the second op an shl or lshr?
> SDValue Shift = N->getOperand(1);
> - // This will have been turned into: ARM64ISD::VSHL vector, #shift
> - // or ARM64ISD::VLSHR vector, #shift
> + // This will have been turned into: AArch64ISD::VSHL vector, #shift
> + // or AArch64ISD::VLSHR vector, #shift
> unsigned ShiftOpc = Shift.getOpcode();
> - if ((ShiftOpc != ARM64ISD::VSHL && ShiftOpc != ARM64ISD::VLSHR))
> + if ((ShiftOpc != AArch64ISD::VSHL && ShiftOpc != AArch64ISD::VLSHR))
> return SDValue();
> - bool IsShiftRight = ShiftOpc == ARM64ISD::VLSHR;
> + bool IsShiftRight = ShiftOpc == AArch64ISD::VLSHR;
>
> // Is the shift amount constant?
> ConstantSDNode *C2node = dyn_cast<ConstantSDNode>(Shift.getOperand(1));
> @@ -5021,12 +5050,12 @@ static SDValue tryLowerToSLI(SDNode *N,
> SDValue Y = Shift.getOperand(0);
>
> unsigned Intrin =
> - IsShiftRight ? Intrinsic::arm64_neon_vsri : Intrinsic::arm64_neon_vsli;
> + IsShiftRight ? Intrinsic::aarch64_neon_vsri : Intrinsic::aarch64_neon_vsli;
> SDValue ResultSLI =
> DAG.getNode(ISD::INTRINSIC_WO_CHAIN, DL, VT,
> DAG.getConstant(Intrin, MVT::i32), X, Y, Shift.getOperand(1));
>
> - DEBUG(dbgs() << "arm64-lower: transformed: \n");
> + DEBUG(dbgs() << "aarch64-lower: transformed: \n");
> DEBUG(N->dump(&DAG));
> DEBUG(dbgs() << "into: \n");
> DEBUG(ResultSLI->dump(&DAG));
> @@ -5035,10 +5064,10 @@ static SDValue tryLowerToSLI(SDNode *N,
> return ResultSLI;
> }
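
The (or (and X, C1), (shl Y, C2)) shape tryLowerToSLI looks for falls out of ordinary bit-field packing, for instance (a sketch using Clang/GCC vector extensions; when the mask matches the shift this is a candidate for shift-left-and-insert):

    typedef unsigned int v4su __attribute__((vector_size(16)));

    // Keep the low 16 bits of X and insert Y's low 16 bits above them.
    v4su pack16(v4su X, v4su Y) {
      v4su Mask = {0xffff, 0xffff, 0xffff, 0xffff};
      return (X & Mask) | (Y << 16);   // SLI ..., #16 shaped pattern
    }
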
>
> -SDValue ARM64TargetLowering::LowerVectorOR(SDValue Op,
> - SelectionDAG &DAG) const {
> +SDValue AArch64TargetLowering::LowerVectorOR(SDValue Op,
> + SelectionDAG &DAG) const {
> // Attempt to form a vector S[LR]I from (or (and X, C1), (lsl Y, C2))
> - if (EnableARM64SlrGeneration) {
> + if (EnableAArch64SlrGeneration) {
> SDValue Res = tryLowerToSLI(Op.getNode(), DAG);
> if (Res.getNode())
> return Res;
> @@ -5070,55 +5099,55 @@ SDValue ARM64TargetLowering::LowerVector
> CnstBits = CnstBits.zextOrTrunc(64);
> uint64_t CnstVal = CnstBits.getZExtValue();
>
> - if (ARM64_AM::isAdvSIMDModImmType1(CnstVal)) {
> - CnstVal = ARM64_AM::encodeAdvSIMDModImmType1(CnstVal);
> + if (AArch64_AM::isAdvSIMDModImmType1(CnstVal)) {
> + CnstVal = AArch64_AM::encodeAdvSIMDModImmType1(CnstVal);
> MVT MovTy = (VT.getSizeInBits() == 128) ? MVT::v4i32 : MVT::v2i32;
> - SDValue Mov = DAG.getNode(ARM64ISD::ORRi, dl, MovTy, LHS,
> + SDValue Mov = DAG.getNode(AArch64ISD::ORRi, dl, MovTy, LHS,
> DAG.getConstant(CnstVal, MVT::i32),
> DAG.getConstant(0, MVT::i32));
> return DAG.getNode(ISD::BITCAST, dl, VT, Mov);
> }
>
> - if (ARM64_AM::isAdvSIMDModImmType2(CnstVal)) {
> - CnstVal = ARM64_AM::encodeAdvSIMDModImmType2(CnstVal);
> + if (AArch64_AM::isAdvSIMDModImmType2(CnstVal)) {
> + CnstVal = AArch64_AM::encodeAdvSIMDModImmType2(CnstVal);
> MVT MovTy = (VT.getSizeInBits() == 128) ? MVT::v4i32 : MVT::v2i32;
> - SDValue Mov = DAG.getNode(ARM64ISD::ORRi, dl, MovTy, LHS,
> + SDValue Mov = DAG.getNode(AArch64ISD::ORRi, dl, MovTy, LHS,
> DAG.getConstant(CnstVal, MVT::i32),
> DAG.getConstant(8, MVT::i32));
> return DAG.getNode(ISD::BITCAST, dl, VT, Mov);
> }
>
> - if (ARM64_AM::isAdvSIMDModImmType3(CnstVal)) {
> - CnstVal = ARM64_AM::encodeAdvSIMDModImmType3(CnstVal);
> + if (AArch64_AM::isAdvSIMDModImmType3(CnstVal)) {
> + CnstVal = AArch64_AM::encodeAdvSIMDModImmType3(CnstVal);
> MVT MovTy = (VT.getSizeInBits() == 128) ? MVT::v4i32 : MVT::v2i32;
> - SDValue Mov = DAG.getNode(ARM64ISD::ORRi, dl, MovTy, LHS,
> + SDValue Mov = DAG.getNode(AArch64ISD::ORRi, dl, MovTy, LHS,
> DAG.getConstant(CnstVal, MVT::i32),
> DAG.getConstant(16, MVT::i32));
> return DAG.getNode(ISD::BITCAST, dl, VT, Mov);
> }
>
> - if (ARM64_AM::isAdvSIMDModImmType4(CnstVal)) {
> - CnstVal = ARM64_AM::encodeAdvSIMDModImmType4(CnstVal);
> + if (AArch64_AM::isAdvSIMDModImmType4(CnstVal)) {
> + CnstVal = AArch64_AM::encodeAdvSIMDModImmType4(CnstVal);
> MVT MovTy = (VT.getSizeInBits() == 128) ? MVT::v4i32 : MVT::v2i32;
> - SDValue Mov = DAG.getNode(ARM64ISD::ORRi, dl, MovTy, LHS,
> + SDValue Mov = DAG.getNode(AArch64ISD::ORRi, dl, MovTy, LHS,
> DAG.getConstant(CnstVal, MVT::i32),
> DAG.getConstant(24, MVT::i32));
> return DAG.getNode(ISD::BITCAST, dl, VT, Mov);
> }
>
> - if (ARM64_AM::isAdvSIMDModImmType5(CnstVal)) {
> - CnstVal = ARM64_AM::encodeAdvSIMDModImmType5(CnstVal);
> + if (AArch64_AM::isAdvSIMDModImmType5(CnstVal)) {
> + CnstVal = AArch64_AM::encodeAdvSIMDModImmType5(CnstVal);
> MVT MovTy = (VT.getSizeInBits() == 128) ? MVT::v8i16 : MVT::v4i16;
> - SDValue Mov = DAG.getNode(ARM64ISD::ORRi, dl, MovTy, LHS,
> + SDValue Mov = DAG.getNode(AArch64ISD::ORRi, dl, MovTy, LHS,
> DAG.getConstant(CnstVal, MVT::i32),
> DAG.getConstant(0, MVT::i32));
> return DAG.getNode(ISD::BITCAST, dl, VT, Mov);
> }
>
> - if (ARM64_AM::isAdvSIMDModImmType6(CnstVal)) {
> - CnstVal = ARM64_AM::encodeAdvSIMDModImmType6(CnstVal);
> + if (AArch64_AM::isAdvSIMDModImmType6(CnstVal)) {
> + CnstVal = AArch64_AM::encodeAdvSIMDModImmType6(CnstVal);
> MVT MovTy = (VT.getSizeInBits() == 128) ? MVT::v8i16 : MVT::v4i16;
> - SDValue Mov = DAG.getNode(ARM64ISD::ORRi, dl, MovTy, LHS,
> + SDValue Mov = DAG.getNode(AArch64ISD::ORRi, dl, MovTy, LHS,
> DAG.getConstant(CnstVal, MVT::i32),
> DAG.getConstant(8, MVT::i32));
> return DAG.getNode(ISD::BITCAST, dl, VT, Mov);
> @@ -5137,8 +5166,8 @@ FailedModImm:
> return Op;
> }
>
> -SDValue ARM64TargetLowering::LowerBUILD_VECTOR(SDValue Op,
> - SelectionDAG &DAG) const {
> +SDValue AArch64TargetLowering::LowerBUILD_VECTOR(SDValue Op,
> + SelectionDAG &DAG) const {
> BuildVectorSDNode *BVN = cast<BuildVectorSDNode>(Op.getNode());
> SDLoc dl(Op);
> EVT VT = Op.getValueType();
> @@ -5163,186 +5192,186 @@ SDValue ARM64TargetLowering::LowerBUILD_
> return Op;
>
> // The many faces of MOVI...
> - if (ARM64_AM::isAdvSIMDModImmType10(CnstVal)) {
> - CnstVal = ARM64_AM::encodeAdvSIMDModImmType10(CnstVal);
> + if (AArch64_AM::isAdvSIMDModImmType10(CnstVal)) {
> + CnstVal = AArch64_AM::encodeAdvSIMDModImmType10(CnstVal);
> if (VT.getSizeInBits() == 128) {
> - SDValue Mov = DAG.getNode(ARM64ISD::MOVIedit, dl, MVT::v2i64,
> + SDValue Mov = DAG.getNode(AArch64ISD::MOVIedit, dl, MVT::v2i64,
> DAG.getConstant(CnstVal, MVT::i32));
> return DAG.getNode(ISD::BITCAST, dl, VT, Mov);
> }
>
> // Support the V64 version via subregister insertion.
> - SDValue Mov = DAG.getNode(ARM64ISD::MOVIedit, dl, MVT::f64,
> + SDValue Mov = DAG.getNode(AArch64ISD::MOVIedit, dl, MVT::f64,
> DAG.getConstant(CnstVal, MVT::i32));
> return DAG.getNode(ISD::BITCAST, dl, VT, Mov);
> }
>
> - if (ARM64_AM::isAdvSIMDModImmType1(CnstVal)) {
> - CnstVal = ARM64_AM::encodeAdvSIMDModImmType1(CnstVal);
> + if (AArch64_AM::isAdvSIMDModImmType1(CnstVal)) {
> + CnstVal = AArch64_AM::encodeAdvSIMDModImmType1(CnstVal);
> MVT MovTy = (VT.getSizeInBits() == 128) ? MVT::v4i32 : MVT::v2i32;
> - SDValue Mov = DAG.getNode(ARM64ISD::MOVIshift, dl, MovTy,
> + SDValue Mov = DAG.getNode(AArch64ISD::MOVIshift, dl, MovTy,
> DAG.getConstant(CnstVal, MVT::i32),
> DAG.getConstant(0, MVT::i32));
> return DAG.getNode(ISD::BITCAST, dl, VT, Mov);
> }
>
> - if (ARM64_AM::isAdvSIMDModImmType2(CnstVal)) {
> - CnstVal = ARM64_AM::encodeAdvSIMDModImmType2(CnstVal);
> + if (AArch64_AM::isAdvSIMDModImmType2(CnstVal)) {
> + CnstVal = AArch64_AM::encodeAdvSIMDModImmType2(CnstVal);
> MVT MovTy = (VT.getSizeInBits() == 128) ? MVT::v4i32 : MVT::v2i32;
> - SDValue Mov = DAG.getNode(ARM64ISD::MOVIshift, dl, MovTy,
> + SDValue Mov = DAG.getNode(AArch64ISD::MOVIshift, dl, MovTy,
> DAG.getConstant(CnstVal, MVT::i32),
> DAG.getConstant(8, MVT::i32));
> return DAG.getNode(ISD::BITCAST, dl, VT, Mov);
> }
>
> - if (ARM64_AM::isAdvSIMDModImmType3(CnstVal)) {
> - CnstVal = ARM64_AM::encodeAdvSIMDModImmType3(CnstVal);
> + if (AArch64_AM::isAdvSIMDModImmType3(CnstVal)) {
> + CnstVal = AArch64_AM::encodeAdvSIMDModImmType3(CnstVal);
> MVT MovTy = (VT.getSizeInBits() == 128) ? MVT::v4i32 : MVT::v2i32;
> - SDValue Mov = DAG.getNode(ARM64ISD::MOVIshift, dl, MovTy,
> + SDValue Mov = DAG.getNode(AArch64ISD::MOVIshift, dl, MovTy,
> DAG.getConstant(CnstVal, MVT::i32),
> DAG.getConstant(16, MVT::i32));
> return DAG.getNode(ISD::BITCAST, dl, VT, Mov);
> }
>
> - if (ARM64_AM::isAdvSIMDModImmType4(CnstVal)) {
> - CnstVal = ARM64_AM::encodeAdvSIMDModImmType4(CnstVal);
> + if (AArch64_AM::isAdvSIMDModImmType4(CnstVal)) {
> + CnstVal = AArch64_AM::encodeAdvSIMDModImmType4(CnstVal);
> MVT MovTy = (VT.getSizeInBits() == 128) ? MVT::v4i32 : MVT::v2i32;
> - SDValue Mov = DAG.getNode(ARM64ISD::MOVIshift, dl, MovTy,
> + SDValue Mov = DAG.getNode(AArch64ISD::MOVIshift, dl, MovTy,
> DAG.getConstant(CnstVal, MVT::i32),
> DAG.getConstant(24, MVT::i32));
> return DAG.getNode(ISD::BITCAST, dl, VT, Mov);
> }
>
> - if (ARM64_AM::isAdvSIMDModImmType5(CnstVal)) {
> - CnstVal = ARM64_AM::encodeAdvSIMDModImmType5(CnstVal);
> + if (AArch64_AM::isAdvSIMDModImmType5(CnstVal)) {
> + CnstVal = AArch64_AM::encodeAdvSIMDModImmType5(CnstVal);
> MVT MovTy = (VT.getSizeInBits() == 128) ? MVT::v8i16 : MVT::v4i16;
> - SDValue Mov = DAG.getNode(ARM64ISD::MOVIshift, dl, MovTy,
> + SDValue Mov = DAG.getNode(AArch64ISD::MOVIshift, dl, MovTy,
> DAG.getConstant(CnstVal, MVT::i32),
> DAG.getConstant(0, MVT::i32));
> return DAG.getNode(ISD::BITCAST, dl, VT, Mov);
> }
>
> - if (ARM64_AM::isAdvSIMDModImmType6(CnstVal)) {
> - CnstVal = ARM64_AM::encodeAdvSIMDModImmType6(CnstVal);
> + if (AArch64_AM::isAdvSIMDModImmType6(CnstVal)) {
> + CnstVal = AArch64_AM::encodeAdvSIMDModImmType6(CnstVal);
> MVT MovTy = (VT.getSizeInBits() == 128) ? MVT::v8i16 : MVT::v4i16;
> - SDValue Mov = DAG.getNode(ARM64ISD::MOVIshift, dl, MovTy,
> + SDValue Mov = DAG.getNode(AArch64ISD::MOVIshift, dl, MovTy,
> DAG.getConstant(CnstVal, MVT::i32),
> DAG.getConstant(8, MVT::i32));
> return DAG.getNode(ISD::BITCAST, dl, VT, Mov);
> }
>
> - if (ARM64_AM::isAdvSIMDModImmType7(CnstVal)) {
> - CnstVal = ARM64_AM::encodeAdvSIMDModImmType7(CnstVal);
> + if (AArch64_AM::isAdvSIMDModImmType7(CnstVal)) {
> + CnstVal = AArch64_AM::encodeAdvSIMDModImmType7(CnstVal);
> MVT MovTy = (VT.getSizeInBits() == 128) ? MVT::v4i32 : MVT::v2i32;
> - SDValue Mov = DAG.getNode(ARM64ISD::MOVImsl, dl, MovTy,
> + SDValue Mov = DAG.getNode(AArch64ISD::MOVImsl, dl, MovTy,
> DAG.getConstant(CnstVal, MVT::i32),
> DAG.getConstant(264, MVT::i32));
> return DAG.getNode(ISD::BITCAST, dl, VT, Mov);
> }
>
> - if (ARM64_AM::isAdvSIMDModImmType8(CnstVal)) {
> - CnstVal = ARM64_AM::encodeAdvSIMDModImmType8(CnstVal);
> + if (AArch64_AM::isAdvSIMDModImmType8(CnstVal)) {
> + CnstVal = AArch64_AM::encodeAdvSIMDModImmType8(CnstVal);
> MVT MovTy = (VT.getSizeInBits() == 128) ? MVT::v4i32 : MVT::v2i32;
> - SDValue Mov = DAG.getNode(ARM64ISD::MOVImsl, dl, MovTy,
> + SDValue Mov = DAG.getNode(AArch64ISD::MOVImsl, dl, MovTy,
> DAG.getConstant(CnstVal, MVT::i32),
> DAG.getConstant(272, MVT::i32));
> return DAG.getNode(ISD::BITCAST, dl, VT, Mov);
> }
>
> - if (ARM64_AM::isAdvSIMDModImmType9(CnstVal)) {
> - CnstVal = ARM64_AM::encodeAdvSIMDModImmType9(CnstVal);
> + if (AArch64_AM::isAdvSIMDModImmType9(CnstVal)) {
> + CnstVal = AArch64_AM::encodeAdvSIMDModImmType9(CnstVal);
> MVT MovTy = (VT.getSizeInBits() == 128) ? MVT::v16i8 : MVT::v8i8;
> - SDValue Mov = DAG.getNode(ARM64ISD::MOVI, dl, MovTy,
> + SDValue Mov = DAG.getNode(AArch64ISD::MOVI, dl, MovTy,
> DAG.getConstant(CnstVal, MVT::i32));
> return DAG.getNode(ISD::BITCAST, dl, VT, Mov);
> }
>
> // The few faces of FMOV...
> - if (ARM64_AM::isAdvSIMDModImmType11(CnstVal)) {
> - CnstVal = ARM64_AM::encodeAdvSIMDModImmType11(CnstVal);
> + if (AArch64_AM::isAdvSIMDModImmType11(CnstVal)) {
> + CnstVal = AArch64_AM::encodeAdvSIMDModImmType11(CnstVal);
> MVT MovTy = (VT.getSizeInBits() == 128) ? MVT::v4f32 : MVT::v2f32;
> - SDValue Mov = DAG.getNode(ARM64ISD::FMOV, dl, MovTy,
> + SDValue Mov = DAG.getNode(AArch64ISD::FMOV, dl, MovTy,
> DAG.getConstant(CnstVal, MVT::i32));
> return DAG.getNode(ISD::BITCAST, dl, VT, Mov);
> }
>
> - if (ARM64_AM::isAdvSIMDModImmType12(CnstVal) &&
> + if (AArch64_AM::isAdvSIMDModImmType12(CnstVal) &&
> VT.getSizeInBits() == 128) {
> - CnstVal = ARM64_AM::encodeAdvSIMDModImmType12(CnstVal);
> - SDValue Mov = DAG.getNode(ARM64ISD::FMOV, dl, MVT::v2f64,
> + CnstVal = AArch64_AM::encodeAdvSIMDModImmType12(CnstVal);
> + SDValue Mov = DAG.getNode(AArch64ISD::FMOV, dl, MVT::v2f64,
> DAG.getConstant(CnstVal, MVT::i32));
> return DAG.getNode(ISD::BITCAST, dl, VT, Mov);
> }
>
> // The many faces of MVNI...
> CnstVal = ~CnstVal;
> - if (ARM64_AM::isAdvSIMDModImmType1(CnstVal)) {
> - CnstVal = ARM64_AM::encodeAdvSIMDModImmType1(CnstVal);
> + if (AArch64_AM::isAdvSIMDModImmType1(CnstVal)) {
> + CnstVal = AArch64_AM::encodeAdvSIMDModImmType1(CnstVal);
> MVT MovTy = (VT.getSizeInBits() == 128) ? MVT::v4i32 : MVT::v2i32;
> - SDValue Mov = DAG.getNode(ARM64ISD::MVNIshift, dl, MovTy,
> + SDValue Mov = DAG.getNode(AArch64ISD::MVNIshift, dl, MovTy,
> DAG.getConstant(CnstVal, MVT::i32),
> DAG.getConstant(0, MVT::i32));
> return DAG.getNode(ISD::BITCAST, dl, VT, Mov);
> }
>
> - if (ARM64_AM::isAdvSIMDModImmType2(CnstVal)) {
> - CnstVal = ARM64_AM::encodeAdvSIMDModImmType2(CnstVal);
> + if (AArch64_AM::isAdvSIMDModImmType2(CnstVal)) {
> + CnstVal = AArch64_AM::encodeAdvSIMDModImmType2(CnstVal);
> MVT MovTy = (VT.getSizeInBits() == 128) ? MVT::v4i32 : MVT::v2i32;
> - SDValue Mov = DAG.getNode(ARM64ISD::MVNIshift, dl, MovTy,
> + SDValue Mov = DAG.getNode(AArch64ISD::MVNIshift, dl, MovTy,
> DAG.getConstant(CnstVal, MVT::i32),
> DAG.getConstant(8, MVT::i32));
> return DAG.getNode(ISD::BITCAST, dl, VT, Mov);
> }
>
> - if (ARM64_AM::isAdvSIMDModImmType3(CnstVal)) {
> - CnstVal = ARM64_AM::encodeAdvSIMDModImmType3(CnstVal);
> + if (AArch64_AM::isAdvSIMDModImmType3(CnstVal)) {
> + CnstVal = AArch64_AM::encodeAdvSIMDModImmType3(CnstVal);
> MVT MovTy = (VT.getSizeInBits() == 128) ? MVT::v4i32 : MVT::v2i32;
> - SDValue Mov = DAG.getNode(ARM64ISD::MVNIshift, dl, MovTy,
> + SDValue Mov = DAG.getNode(AArch64ISD::MVNIshift, dl, MovTy,
> DAG.getConstant(CnstVal, MVT::i32),
> DAG.getConstant(16, MVT::i32));
> return DAG.getNode(ISD::BITCAST, dl, VT, Mov);
> }
>
> - if (ARM64_AM::isAdvSIMDModImmType4(CnstVal)) {
> - CnstVal = ARM64_AM::encodeAdvSIMDModImmType4(CnstVal);
> + if (AArch64_AM::isAdvSIMDModImmType4(CnstVal)) {
> + CnstVal = AArch64_AM::encodeAdvSIMDModImmType4(CnstVal);
> MVT MovTy = (VT.getSizeInBits() == 128) ? MVT::v4i32 : MVT::v2i32;
> - SDValue Mov = DAG.getNode(ARM64ISD::MVNIshift, dl, MovTy,
> + SDValue Mov = DAG.getNode(AArch64ISD::MVNIshift, dl, MovTy,
> DAG.getConstant(CnstVal, MVT::i32),
> DAG.getConstant(24, MVT::i32));
> return DAG.getNode(ISD::BITCAST, dl, VT, Mov);
> }
>
> - if (ARM64_AM::isAdvSIMDModImmType5(CnstVal)) {
> - CnstVal = ARM64_AM::encodeAdvSIMDModImmType5(CnstVal);
> + if (AArch64_AM::isAdvSIMDModImmType5(CnstVal)) {
> + CnstVal = AArch64_AM::encodeAdvSIMDModImmType5(CnstVal);
> MVT MovTy = (VT.getSizeInBits() == 128) ? MVT::v8i16 : MVT::v4i16;
> - SDValue Mov = DAG.getNode(ARM64ISD::MVNIshift, dl, MovTy,
> + SDValue Mov = DAG.getNode(AArch64ISD::MVNIshift, dl, MovTy,
> DAG.getConstant(CnstVal, MVT::i32),
> DAG.getConstant(0, MVT::i32));
> return DAG.getNode(ISD::BITCAST, dl, VT, Mov);
> }
>
> - if (ARM64_AM::isAdvSIMDModImmType6(CnstVal)) {
> - CnstVal = ARM64_AM::encodeAdvSIMDModImmType6(CnstVal);
> + if (AArch64_AM::isAdvSIMDModImmType6(CnstVal)) {
> + CnstVal = AArch64_AM::encodeAdvSIMDModImmType6(CnstVal);
> MVT MovTy = (VT.getSizeInBits() == 128) ? MVT::v8i16 : MVT::v4i16;
> - SDValue Mov = DAG.getNode(ARM64ISD::MVNIshift, dl, MovTy,
> + SDValue Mov = DAG.getNode(AArch64ISD::MVNIshift, dl, MovTy,
> DAG.getConstant(CnstVal, MVT::i32),
> DAG.getConstant(8, MVT::i32));
> return DAG.getNode(ISD::BITCAST, dl, VT, Mov);
> }
>
> - if (ARM64_AM::isAdvSIMDModImmType7(CnstVal)) {
> - CnstVal = ARM64_AM::encodeAdvSIMDModImmType7(CnstVal);
> + if (AArch64_AM::isAdvSIMDModImmType7(CnstVal)) {
> + CnstVal = AArch64_AM::encodeAdvSIMDModImmType7(CnstVal);
> MVT MovTy = (VT.getSizeInBits() == 128) ? MVT::v4i32 : MVT::v2i32;
> - SDValue Mov = DAG.getNode(ARM64ISD::MVNImsl, dl, MovTy,
> + SDValue Mov = DAG.getNode(AArch64ISD::MVNImsl, dl, MovTy,
> DAG.getConstant(CnstVal, MVT::i32),
> DAG.getConstant(264, MVT::i32));
> return DAG.getNode(ISD::BITCAST, dl, VT, Mov);
> }
>
> - if (ARM64_AM::isAdvSIMDModImmType8(CnstVal)) {
> - CnstVal = ARM64_AM::encodeAdvSIMDModImmType8(CnstVal);
> + if (AArch64_AM::isAdvSIMDModImmType8(CnstVal)) {
> + CnstVal = AArch64_AM::encodeAdvSIMDModImmType8(CnstVal);
> MVT MovTy = (VT.getSizeInBits() == 128) ? MVT::v4i32 : MVT::v2i32;
> - SDValue Mov = DAG.getNode(ARM64ISD::MVNImsl, dl, MovTy,
> + SDValue Mov = DAG.getNode(AArch64ISD::MVNImsl, dl, MovTy,
> DAG.getConstant(CnstVal, MVT::i32),
> DAG.getConstant(272, MVT::i32));
> return DAG.getNode(ISD::BITCAST, dl, VT, Mov);
> @@ -5411,7 +5440,7 @@ FailedModImm:
> if (!isConstant) {
> if (Value.getOpcode() != ISD::EXTRACT_VECTOR_ELT ||
> Value.getValueType() != VT)
> - return DAG.getNode(ARM64ISD::DUP, dl, VT, Value);
> + return DAG.getNode(AArch64ISD::DUP, dl, VT, Value);
>
> // This is actually a DUPLANExx operation, which keeps everything vectory.
>
> @@ -5444,7 +5473,7 @@ FailedModImm:
> // is better than the default, which will perform a separate initialization
> // for each lane.
> if (NumConstantLanes > 0 && usesOnlyOneConstantValue) {
> - SDValue Val = DAG.getNode(ARM64ISD::DUP, dl, VT, ConstantValue);
> + SDValue Val = DAG.getNode(AArch64ISD::DUP, dl, VT, ConstantValue);
> // Now insert the non-constant lanes.
> for (unsigned i = 0; i < NumElts; ++i) {
> SDValue V = Op.getOperand(i);
> @@ -5487,7 +5516,7 @@ FailedModImm:
> // b) Allow the register coalescer to fold away the copy if the
> // value is already in an S or D register.
> if (Op0.getOpcode() != ISD::UNDEF && (ElemSize == 32 || ElemSize == 64)) {
> - unsigned SubIdx = ElemSize == 32 ? ARM64::ssub : ARM64::dsub;
> + unsigned SubIdx = ElemSize == 32 ? AArch64::ssub : AArch64::dsub;
> MachineSDNode *N =
> DAG.getMachineNode(TargetOpcode::INSERT_SUBREG, dl, VT, Vec, Op0,
> DAG.getTargetConstant(SubIdx, MVT::i32));
> @@ -5508,8 +5537,8 @@ FailedModImm:
> return SDValue();
> }
>
> -SDValue ARM64TargetLowering::LowerINSERT_VECTOR_ELT(SDValue Op,
> - SelectionDAG &DAG) const {
> +SDValue AArch64TargetLowering::LowerINSERT_VECTOR_ELT(SDValue Op,
> + SelectionDAG &DAG) const {
> assert(Op.getOpcode() == ISD::INSERT_VECTOR_ELT && "Unknown opcode!");
>
> // Check for non-constant lane.
> @@ -5539,8 +5568,9 @@ SDValue ARM64TargetLowering::LowerINSERT
> return NarrowVector(Node, DAG);
> }
>
> -SDValue ARM64TargetLowering::LowerEXTRACT_VECTOR_ELT(SDValue Op,
> - SelectionDAG &DAG) const {
> +SDValue
> +AArch64TargetLowering::LowerEXTRACT_VECTOR_ELT(SDValue Op,
> + SelectionDAG &DAG) const {
> assert(Op.getOpcode() == ISD::EXTRACT_VECTOR_ELT && "Unknown opcode!");
>
> // Check for non-constant lane.
> @@ -5573,8 +5603,8 @@ SDValue ARM64TargetLowering::LowerEXTRAC
> Op.getOperand(1));
> }
>
> -SDValue ARM64TargetLowering::LowerEXTRACT_SUBVECTOR(SDValue Op,
> - SelectionDAG &DAG) const {
> +SDValue AArch64TargetLowering::LowerEXTRACT_SUBVECTOR(SDValue Op,
> + SelectionDAG &DAG) const {
> EVT VT = Op.getOperand(0).getValueType();
> SDLoc dl(Op);
> // Just in case...
> @@ -5590,16 +5620,16 @@ SDValue ARM64TargetLowering::LowerEXTRAC
> if (Val == 0) {
> switch (Size) {
> case 8:
> - return DAG.getTargetExtractSubreg(ARM64::bsub, dl, Op.getValueType(),
> + return DAG.getTargetExtractSubreg(AArch64::bsub, dl, Op.getValueType(),
> Op.getOperand(0));
> case 16:
> - return DAG.getTargetExtractSubreg(ARM64::hsub, dl, Op.getValueType(),
> + return DAG.getTargetExtractSubreg(AArch64::hsub, dl, Op.getValueType(),
> Op.getOperand(0));
> case 32:
> - return DAG.getTargetExtractSubreg(ARM64::ssub, dl, Op.getValueType(),
> + return DAG.getTargetExtractSubreg(AArch64::ssub, dl, Op.getValueType(),
> Op.getOperand(0));
> case 64:
> - return DAG.getTargetExtractSubreg(ARM64::dsub, dl, Op.getValueType(),
> + return DAG.getTargetExtractSubreg(AArch64::dsub, dl, Op.getValueType(),
> Op.getOperand(0));
> default:
> llvm_unreachable("Unexpected vector type in extract_subvector!");
> @@ -5613,8 +5643,8 @@ SDValue ARM64TargetLowering::LowerEXTRAC
> return SDValue();
> }
>
> -bool ARM64TargetLowering::isShuffleMaskLegal(const SmallVectorImpl<int> &M,
> - EVT VT) const {
> +bool AArch64TargetLowering::isShuffleMaskLegal(const SmallVectorImpl<int> &M,
> + EVT VT) const {
> if (VT.getVectorNumElements() == 4 &&
> (VT.is128BitVector() || VT.is64BitVector())) {
> unsigned PFIndexes[4];
> @@ -5700,8 +5730,8 @@ static bool isVShiftRImm(SDValue Op, EVT
> return (Cnt >= 1 && Cnt <= (isNarrow ? ElementBits / 2 : ElementBits));
> }
>
> -SDValue ARM64TargetLowering::LowerVectorSRA_SRL_SHL(SDValue Op,
> - SelectionDAG &DAG) const {
> +SDValue AArch64TargetLowering::LowerVectorSRA_SRL_SHL(SDValue Op,
> + SelectionDAG &DAG) const {
> EVT VT = Op.getValueType();
> SDLoc DL(Op);
> int64_t Cnt;
> @@ -5716,10 +5746,10 @@ SDValue ARM64TargetLowering::LowerVector
>
> case ISD::SHL:
> if (isVShiftLImm(Op.getOperand(1), VT, false, Cnt) && Cnt < EltSize)
> - return DAG.getNode(ARM64ISD::VSHL, SDLoc(Op), VT, Op.getOperand(0),
> + return DAG.getNode(AArch64ISD::VSHL, SDLoc(Op), VT, Op.getOperand(0),
> DAG.getConstant(Cnt, MVT::i32));
> return DAG.getNode(ISD::INTRINSIC_WO_CHAIN, DL, VT,
> - DAG.getConstant(Intrinsic::arm64_neon_ushl, MVT::i32),
> + DAG.getConstant(Intrinsic::aarch64_neon_ushl, MVT::i32),
> Op.getOperand(0), Op.getOperand(1));
> case ISD::SRA:
> case ISD::SRL:
> @@ -5727,7 +5757,7 @@ SDValue ARM64TargetLowering::LowerVector
> if (isVShiftRImm(Op.getOperand(1), VT, false, false, Cnt) &&
> Cnt < EltSize) {
> unsigned Opc =
> - (Op.getOpcode() == ISD::SRA) ? ARM64ISD::VASHR : ARM64ISD::VLSHR;
> + (Op.getOpcode() == ISD::SRA) ? AArch64ISD::VASHR : AArch64ISD::VLSHR;
> return DAG.getNode(Opc, SDLoc(Op), VT, Op.getOperand(0),
> DAG.getConstant(Cnt, MVT::i32));
> }
> @@ -5735,10 +5765,10 @@ SDValue ARM64TargetLowering::LowerVector
> // Right shift register. Note, there is not a shift right register
> // instruction, but the shift left register instruction takes a signed
> // value, where negative numbers specify a right shift.
> - unsigned Opc = (Op.getOpcode() == ISD::SRA) ? Intrinsic::arm64_neon_sshl
> - : Intrinsic::arm64_neon_ushl;
> + unsigned Opc = (Op.getOpcode() == ISD::SRA) ? Intrinsic::aarch64_neon_sshl
> + : Intrinsic::aarch64_neon_ushl;
> // negate the shift amount
> - SDValue NegShift = DAG.getNode(ARM64ISD::NEG, DL, VT, Op.getOperand(1));
> + SDValue NegShift = DAG.getNode(AArch64ISD::NEG, DL, VT, Op.getOperand(1));
> SDValue NegShiftLeft =
> DAG.getNode(ISD::INTRINSIC_WO_CHAIN, DL, VT,
> DAG.getConstant(Opc, MVT::i32), Op.getOperand(0), NegShift);
> @@ -5749,7 +5779,7 @@ SDValue ARM64TargetLowering::LowerVector
> }
>
> static SDValue EmitVectorComparison(SDValue LHS, SDValue RHS,
> - ARM64CC::CondCode CC, bool NoNans, EVT VT,
> + AArch64CC::CondCode CC, bool NoNans, EVT VT,
> SDLoc dl, SelectionDAG &DAG) {
> EVT SrcVT = LHS.getValueType();
>
> @@ -5763,85 +5793,86 @@ static SDValue EmitVectorComparison(SDVa
> switch (CC) {
> default:
> return SDValue();
> - case ARM64CC::NE: {
> + case AArch64CC::NE: {
> SDValue Fcmeq;
> if (IsZero)
> - Fcmeq = DAG.getNode(ARM64ISD::FCMEQz, dl, VT, LHS);
> + Fcmeq = DAG.getNode(AArch64ISD::FCMEQz, dl, VT, LHS);
> else
> - Fcmeq = DAG.getNode(ARM64ISD::FCMEQ, dl, VT, LHS, RHS);
> - return DAG.getNode(ARM64ISD::NOT, dl, VT, Fcmeq);
> + Fcmeq = DAG.getNode(AArch64ISD::FCMEQ, dl, VT, LHS, RHS);
> + return DAG.getNode(AArch64ISD::NOT, dl, VT, Fcmeq);
> }
> - case ARM64CC::EQ:
> + case AArch64CC::EQ:
> if (IsZero)
> - return DAG.getNode(ARM64ISD::FCMEQz, dl, VT, LHS);
> - return DAG.getNode(ARM64ISD::FCMEQ, dl, VT, LHS, RHS);
> - case ARM64CC::GE:
> + return DAG.getNode(AArch64ISD::FCMEQz, dl, VT, LHS);
> + return DAG.getNode(AArch64ISD::FCMEQ, dl, VT, LHS, RHS);
> + case AArch64CC::GE:
> if (IsZero)
> - return DAG.getNode(ARM64ISD::FCMGEz, dl, VT, LHS);
> - return DAG.getNode(ARM64ISD::FCMGE, dl, VT, LHS, RHS);
> - case ARM64CC::GT:
> + return DAG.getNode(AArch64ISD::FCMGEz, dl, VT, LHS);
> + return DAG.getNode(AArch64ISD::FCMGE, dl, VT, LHS, RHS);
> + case AArch64CC::GT:
> if (IsZero)
> - return DAG.getNode(ARM64ISD::FCMGTz, dl, VT, LHS);
> - return DAG.getNode(ARM64ISD::FCMGT, dl, VT, LHS, RHS);
> - case ARM64CC::LS:
> + return DAG.getNode(AArch64ISD::FCMGTz, dl, VT, LHS);
> + return DAG.getNode(AArch64ISD::FCMGT, dl, VT, LHS, RHS);
> + case AArch64CC::LS:
> if (IsZero)
> - return DAG.getNode(ARM64ISD::FCMLEz, dl, VT, LHS);
> - return DAG.getNode(ARM64ISD::FCMGE, dl, VT, RHS, LHS);
> - case ARM64CC::LT:
> + return DAG.getNode(AArch64ISD::FCMLEz, dl, VT, LHS);
> + return DAG.getNode(AArch64ISD::FCMGE, dl, VT, RHS, LHS);
> + case AArch64CC::LT:
> if (!NoNans)
> return SDValue();
> // If we ignore NaNs then we can use to the MI implementation.
> // Fallthrough.
> - case ARM64CC::MI:
> + case AArch64CC::MI:
> if (IsZero)
> - return DAG.getNode(ARM64ISD::FCMLTz, dl, VT, LHS);
> - return DAG.getNode(ARM64ISD::FCMGT, dl, VT, RHS, LHS);
> + return DAG.getNode(AArch64ISD::FCMLTz, dl, VT, LHS);
> + return DAG.getNode(AArch64ISD::FCMGT, dl, VT, RHS, LHS);
> }
> }
>
> switch (CC) {
> default:
> return SDValue();
> - case ARM64CC::NE: {
> + case AArch64CC::NE: {
> SDValue Cmeq;
> if (IsZero)
> - Cmeq = DAG.getNode(ARM64ISD::CMEQz, dl, VT, LHS);
> + Cmeq = DAG.getNode(AArch64ISD::CMEQz, dl, VT, LHS);
> else
> - Cmeq = DAG.getNode(ARM64ISD::CMEQ, dl, VT, LHS, RHS);
> - return DAG.getNode(ARM64ISD::NOT, dl, VT, Cmeq);
> + Cmeq = DAG.getNode(AArch64ISD::CMEQ, dl, VT, LHS, RHS);
> + return DAG.getNode(AArch64ISD::NOT, dl, VT, Cmeq);
> }
> - case ARM64CC::EQ:
> + case AArch64CC::EQ:
> if (IsZero)
> - return DAG.getNode(ARM64ISD::CMEQz, dl, VT, LHS);
> - return DAG.getNode(ARM64ISD::CMEQ, dl, VT, LHS, RHS);
> - case ARM64CC::GE:
> + return DAG.getNode(AArch64ISD::CMEQz, dl, VT, LHS);
> + return DAG.getNode(AArch64ISD::CMEQ, dl, VT, LHS, RHS);
> + case AArch64CC::GE:
> if (IsZero)
> - return DAG.getNode(ARM64ISD::CMGEz, dl, VT, LHS);
> - return DAG.getNode(ARM64ISD::CMGE, dl, VT, LHS, RHS);
> - case ARM64CC::GT:
> + return DAG.getNode(AArch64ISD::CMGEz, dl, VT, LHS);
> + return DAG.getNode(AArch64ISD::CMGE, dl, VT, LHS, RHS);
> + case AArch64CC::GT:
> if (IsZero)
> - return DAG.getNode(ARM64ISD::CMGTz, dl, VT, LHS);
> - return DAG.getNode(ARM64ISD::CMGT, dl, VT, LHS, RHS);
> - case ARM64CC::LE:
> + return DAG.getNode(AArch64ISD::CMGTz, dl, VT, LHS);
> + return DAG.getNode(AArch64ISD::CMGT, dl, VT, LHS, RHS);
> + case AArch64CC::LE:
> if (IsZero)
> - return DAG.getNode(ARM64ISD::CMLEz, dl, VT, LHS);
> - return DAG.getNode(ARM64ISD::CMGE, dl, VT, RHS, LHS);
> - case ARM64CC::LS:
> - return DAG.getNode(ARM64ISD::CMHS, dl, VT, RHS, LHS);
> - case ARM64CC::LO:
> - return DAG.getNode(ARM64ISD::CMHI, dl, VT, RHS, LHS);
> - case ARM64CC::LT:
> + return DAG.getNode(AArch64ISD::CMLEz, dl, VT, LHS);
> + return DAG.getNode(AArch64ISD::CMGE, dl, VT, RHS, LHS);
> + case AArch64CC::LS:
> + return DAG.getNode(AArch64ISD::CMHS, dl, VT, RHS, LHS);
> + case AArch64CC::LO:
> + return DAG.getNode(AArch64ISD::CMHI, dl, VT, RHS, LHS);
> + case AArch64CC::LT:
> if (IsZero)
> - return DAG.getNode(ARM64ISD::CMLTz, dl, VT, LHS);
> - return DAG.getNode(ARM64ISD::CMGT, dl, VT, RHS, LHS);
> - case ARM64CC::HI:
> - return DAG.getNode(ARM64ISD::CMHI, dl, VT, LHS, RHS);
> - case ARM64CC::HS:
> - return DAG.getNode(ARM64ISD::CMHS, dl, VT, LHS, RHS);
> + return DAG.getNode(AArch64ISD::CMLTz, dl, VT, LHS);
> + return DAG.getNode(AArch64ISD::CMGT, dl, VT, RHS, LHS);
> + case AArch64CC::HI:
> + return DAG.getNode(AArch64ISD::CMHI, dl, VT, LHS, RHS);
> + case AArch64CC::HS:
> + return DAG.getNode(AArch64ISD::CMHS, dl, VT, LHS, RHS);
> }
> }
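
For orientation, the comparisons EmitVectorComparison expands map onto ordinary vector comparisons, which already produce the all-ones / all-zero lanes that the CM* and FCM* nodes define (a sketch using Clang/GCC vector extensions):

    typedef int   v4si __attribute__((vector_size(16)));
    typedef float v4sf __attribute__((vector_size(16)));

    v4si cmp_gt(v4si A, v4si B)  { return A > B; }                         // CMGT shape
    v4si cmp_ge_zero(v4si A)     { v4si Z = {0, 0, 0, 0}; return A >= Z; } // CMGEz shape
    v4si fcmp_eq(v4sf A, v4sf B) { return A == B; }                        // FCMEQ shape
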
>
> -SDValue ARM64TargetLowering::LowerVSETCC(SDValue Op, SelectionDAG &DAG) const {
> +SDValue AArch64TargetLowering::LowerVSETCC(SDValue Op,
> + SelectionDAG &DAG) const {
> ISD::CondCode CC = cast<CondCodeSDNode>(Op.getOperand(2))->get();
> SDValue LHS = Op.getOperand(0);
> SDValue RHS = Op.getOperand(1);
> @@ -5849,19 +5880,19 @@ SDValue ARM64TargetLowering::LowerVSETCC
>
> if (LHS.getValueType().getVectorElementType().isInteger()) {
> assert(LHS.getValueType() == RHS.getValueType());
> - ARM64CC::CondCode ARM64CC = changeIntCCToARM64CC(CC);
> - return EmitVectorComparison(LHS, RHS, ARM64CC, false, Op.getValueType(), dl,
> - DAG);
> + AArch64CC::CondCode AArch64CC = changeIntCCToAArch64CC(CC);
> + return EmitVectorComparison(LHS, RHS, AArch64CC, false, Op.getValueType(),
> + dl, DAG);
> }
>
> assert(LHS.getValueType().getVectorElementType() == MVT::f32 ||
> LHS.getValueType().getVectorElementType() == MVT::f64);
>
> - // Unfortunately, the mapping of LLVM FP CC's onto ARM64 CC's isn't totally
> + // Unfortunately, the mapping of LLVM FP CC's onto AArch64 CC's isn't totally
> // clean. Some of them require two branches to implement.
> - ARM64CC::CondCode CC1, CC2;
> + AArch64CC::CondCode CC1, CC2;
> bool ShouldInvert;
> - changeVectorFPCCToARM64CC(CC, CC1, CC2, ShouldInvert);
> + changeVectorFPCCToAArch64CC(CC, CC1, CC2, ShouldInvert);
>
> bool NoNaNs = getTargetMachine().Options.NoNaNsFPMath;
> SDValue Cmp =
> @@ -5869,7 +5900,7 @@ SDValue ARM64TargetLowering::LowerVSETCC
> if (!Cmp.getNode())
> return SDValue();
>
> - if (CC2 != ARM64CC::AL) {
> + if (CC2 != AArch64CC::AL) {
> SDValue Cmp2 =
> EmitVectorComparison(LHS, RHS, CC2, NoNaNs, Op.getValueType(), dl, DAG);
> if (!Cmp2.getNode())
> @@ -5887,22 +5918,22 @@ SDValue ARM64TargetLowering::LowerVSETCC
> /// getTgtMemIntrinsic - Represent NEON load and store intrinsics as
> /// MemIntrinsicNodes. The associated MachineMemOperands record the alignment
> /// specified in the intrinsic calls.
> -bool ARM64TargetLowering::getTgtMemIntrinsic(IntrinsicInfo &Info,
> - const CallInst &I,
> - unsigned Intrinsic) const {
> +bool AArch64TargetLowering::getTgtMemIntrinsic(IntrinsicInfo &Info,
> + const CallInst &I,
> + unsigned Intrinsic) const {
> switch (Intrinsic) {
> - case Intrinsic::arm64_neon_ld2:
> - case Intrinsic::arm64_neon_ld3:
> - case Intrinsic::arm64_neon_ld4:
> - case Intrinsic::arm64_neon_ld1x2:
> - case Intrinsic::arm64_neon_ld1x3:
> - case Intrinsic::arm64_neon_ld1x4:
> - case Intrinsic::arm64_neon_ld2lane:
> - case Intrinsic::arm64_neon_ld3lane:
> - case Intrinsic::arm64_neon_ld4lane:
> - case Intrinsic::arm64_neon_ld2r:
> - case Intrinsic::arm64_neon_ld3r:
> - case Intrinsic::arm64_neon_ld4r: {
> + case Intrinsic::aarch64_neon_ld2:
> + case Intrinsic::aarch64_neon_ld3:
> + case Intrinsic::aarch64_neon_ld4:
> + case Intrinsic::aarch64_neon_ld1x2:
> + case Intrinsic::aarch64_neon_ld1x3:
> + case Intrinsic::aarch64_neon_ld1x4:
> + case Intrinsic::aarch64_neon_ld2lane:
> + case Intrinsic::aarch64_neon_ld3lane:
> + case Intrinsic::aarch64_neon_ld4lane:
> + case Intrinsic::aarch64_neon_ld2r:
> + case Intrinsic::aarch64_neon_ld3r:
> + case Intrinsic::aarch64_neon_ld4r: {
> Info.opc = ISD::INTRINSIC_W_CHAIN;
> // Conservatively set memVT to the entire set of vectors loaded.
> uint64_t NumElts = getDataLayout()->getTypeAllocSize(I.getType()) / 8;
> @@ -5915,15 +5946,15 @@ bool ARM64TargetLowering::getTgtMemIntri
> Info.writeMem = false;
> return true;
> }
> - case Intrinsic::arm64_neon_st2:
> - case Intrinsic::arm64_neon_st3:
> - case Intrinsic::arm64_neon_st4:
> - case Intrinsic::arm64_neon_st1x2:
> - case Intrinsic::arm64_neon_st1x3:
> - case Intrinsic::arm64_neon_st1x4:
> - case Intrinsic::arm64_neon_st2lane:
> - case Intrinsic::arm64_neon_st3lane:
> - case Intrinsic::arm64_neon_st4lane: {
> + case Intrinsic::aarch64_neon_st2:
> + case Intrinsic::aarch64_neon_st3:
> + case Intrinsic::aarch64_neon_st4:
> + case Intrinsic::aarch64_neon_st1x2:
> + case Intrinsic::aarch64_neon_st1x3:
> + case Intrinsic::aarch64_neon_st1x4:
> + case Intrinsic::aarch64_neon_st2lane:
> + case Intrinsic::aarch64_neon_st3lane:
> + case Intrinsic::aarch64_neon_st4lane: {
> Info.opc = ISD::INTRINSIC_VOID;
> // Conservatively set memVT to the entire set of vectors stored.
> unsigned NumElts = 0;
> @@ -5942,8 +5973,8 @@ bool ARM64TargetLowering::getTgtMemIntri
> Info.writeMem = true;
> return true;
> }
> - case Intrinsic::arm64_ldaxr:
> - case Intrinsic::arm64_ldxr: {
> + case Intrinsic::aarch64_ldaxr:
> + case Intrinsic::aarch64_ldxr: {
> PointerType *PtrTy = cast<PointerType>(I.getArgOperand(0)->getType());
> Info.opc = ISD::INTRINSIC_W_CHAIN;
> Info.memVT = MVT::getVT(PtrTy->getElementType());
> @@ -5955,8 +5986,8 @@ bool ARM64TargetLowering::getTgtMemIntri
> Info.writeMem = false;
> return true;
> }
> - case Intrinsic::arm64_stlxr:
> - case Intrinsic::arm64_stxr: {
> + case Intrinsic::aarch64_stlxr:
> + case Intrinsic::aarch64_stxr: {
> PointerType *PtrTy = cast<PointerType>(I.getArgOperand(1)->getType());
> Info.opc = ISD::INTRINSIC_W_CHAIN;
> Info.memVT = MVT::getVT(PtrTy->getElementType());
> @@ -5968,8 +5999,8 @@ bool ARM64TargetLowering::getTgtMemIntri
> Info.writeMem = true;
> return true;
> }
> - case Intrinsic::arm64_ldaxp:
> - case Intrinsic::arm64_ldxp: {
> + case Intrinsic::aarch64_ldaxp:
> + case Intrinsic::aarch64_ldxp: {
> Info.opc = ISD::INTRINSIC_W_CHAIN;
> Info.memVT = MVT::i128;
> Info.ptrVal = I.getArgOperand(0);
> @@ -5980,8 +6011,8 @@ bool ARM64TargetLowering::getTgtMemIntri
> Info.writeMem = false;
> return true;
> }
> - case Intrinsic::arm64_stlxp:
> - case Intrinsic::arm64_stxp: {
> + case Intrinsic::aarch64_stlxp:
> + case Intrinsic::aarch64_stxp: {
> Info.opc = ISD::INTRINSIC_W_CHAIN;
> Info.memVT = MVT::i128;
> Info.ptrVal = I.getArgOperand(2);
> @@ -6000,7 +6031,7 @@ bool ARM64TargetLowering::getTgtMemIntri
> }
>
> // Truncations from 64-bit GPR to 32-bit GPR is free.
> -bool ARM64TargetLowering::isTruncateFree(Type *Ty1, Type *Ty2) const {
> +bool AArch64TargetLowering::isTruncateFree(Type *Ty1, Type *Ty2) const {
> if (!Ty1->isIntegerTy() || !Ty2->isIntegerTy())
> return false;
> unsigned NumBits1 = Ty1->getPrimitiveSizeInBits();
> @@ -6009,7 +6040,7 @@ bool ARM64TargetLowering::isTruncateFree
> return false;
> return true;
> }
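A quick aside for readers less familiar with the target: the hook above just records that narrowing i64 to i32 costs nothing on AArch64, because the low 32 bits are simply the W view of the same X register. A tiny standalone illustration (my own, not part of the patch):

    #include <cassert>
    #include <cstdint>

    // Truncating i64 to i32 is modelled as free: the low 32 bits are read
    // through the W sub-register, so no extra instruction is needed.
    static uint32_t low32(uint64_t x) {
      return static_cast<uint32_t>(x);   // typically no code at all
    }

    int main() {
      assert(low32(0x123456789abcdef0ull) == 0x9abcdef0u);
      return 0;
    }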
> -bool ARM64TargetLowering::isTruncateFree(EVT VT1, EVT VT2) const {
> +bool AArch64TargetLowering::isTruncateFree(EVT VT1, EVT VT2) const {
> if (!VT1.isInteger() || !VT2.isInteger())
> return false;
> unsigned NumBits1 = VT1.getSizeInBits();
> @@ -6021,7 +6052,7 @@ bool ARM64TargetLowering::isTruncateFree
>
> // All 32-bit GPR operations implicitly zero the high-half of the corresponding
> // 64-bit GPR.
> -bool ARM64TargetLowering::isZExtFree(Type *Ty1, Type *Ty2) const {
> +bool AArch64TargetLowering::isZExtFree(Type *Ty1, Type *Ty2) const {
> if (!Ty1->isIntegerTy() || !Ty2->isIntegerTy())
> return false;
> unsigned NumBits1 = Ty1->getPrimitiveSizeInBits();
> @@ -6030,7 +6061,7 @@ bool ARM64TargetLowering::isZExtFree(Typ
> return true;
> return false;
> }
> -bool ARM64TargetLowering::isZExtFree(EVT VT1, EVT VT2) const {
> +bool AArch64TargetLowering::isZExtFree(EVT VT1, EVT VT2) const {
> if (!VT1.isInteger() || !VT2.isInteger())
> return false;
> unsigned NumBits1 = VT1.getSizeInBits();
> @@ -6040,7 +6071,7 @@ bool ARM64TargetLowering::isZExtFree(EVT
> return false;
> }
>
> -bool ARM64TargetLowering::isZExtFree(SDValue Val, EVT VT2) const {
> +bool AArch64TargetLowering::isZExtFree(SDValue Val, EVT VT2) const {
> EVT VT1 = Val.getValueType();
> if (isZExtFree(VT1, VT2)) {
> return true;
> @@ -6054,8 +6085,8 @@ bool ARM64TargetLowering::isZExtFree(SDV
> VT2.isInteger() && VT1.getSizeInBits() <= 32);
> }
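Likewise, the isZExtFree overloads encode the architectural rule in the comment: writing a W register zeroes bits 63:32 of the corresponding X register. A hedged source-level sketch of what that buys (function name and asserts are mine):

    #include <cassert>
    #include <cstdint>

    static uint64_t widen_after_add(uint32_t a, uint32_t b) {
      uint32_t sum = a + b;               // 32-bit GPR op ("add w0, w1, w2")
      return static_cast<uint64_t>(sum);  // free zext: high half already zero
    }

    int main() {
      assert(widen_after_add(0xffffffffu, 1u) == 0);  // wraps in 32 bits, stays zero-extended
      return 0;
    }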
>
> -bool ARM64TargetLowering::hasPairedLoad(Type *LoadedType,
> - unsigned &RequiredAligment) const {
> +bool AArch64TargetLowering::hasPairedLoad(Type *LoadedType,
> + unsigned &RequiredAligment) const {
> if (!LoadedType->isIntegerTy() && !LoadedType->isFloatTy())
> return false;
> // Cyclone supports unaligned accesses.
> @@ -6064,8 +6095,8 @@ bool ARM64TargetLowering::hasPairedLoad(
> return NumBits == 32 || NumBits == 64;
> }
>
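As I read it, hasPairedLoad tells the pairing logic that 32- and 64-bit scalars may be merged into an LDP, and that Cyclone doesn't mind the alignment. A small illustration of the kind of code that benefits (struct and names are mine; the exact instruction choice is up to the compiler):

    #include <cassert>
    #include <cstdint>

    struct Pair { uint64_t a, b; };

    // Two adjacent 64-bit loads that the backend may merge into one LDP.
    static uint64_t sumPair(const Pair &p) {
      return p.a + p.b;   // e.g. "ldp x8, x9, [x0]" followed by an add
    }

    int main() {
      Pair p{2, 40};
      assert(sumPair(p) == 42);
      return 0;
    }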
> -bool ARM64TargetLowering::hasPairedLoad(EVT LoadedType,
> - unsigned &RequiredAligment) const {
> +bool AArch64TargetLowering::hasPairedLoad(EVT LoadedType,
> + unsigned &RequiredAligment) const {
> if (!LoadedType.isSimple() ||
> (!LoadedType.isInteger() && !LoadedType.isFloatingPoint()))
> return false;
> @@ -6081,10 +6112,11 @@ static bool memOpAlign(unsigned DstAlign
> (DstAlign == 0 || DstAlign % AlignCheck == 0));
> }
>
> -EVT ARM64TargetLowering::getOptimalMemOpType(uint64_t Size, unsigned DstAlign,
> - unsigned SrcAlign, bool IsMemset,
> - bool ZeroMemset, bool MemcpyStrSrc,
> - MachineFunction &MF) const {
> +EVT AArch64TargetLowering::getOptimalMemOpType(uint64_t Size, unsigned DstAlign,
> + unsigned SrcAlign, bool IsMemset,
> + bool ZeroMemset,
> + bool MemcpyStrSrc,
> + MachineFunction &MF) const {
> // Don't use AdvSIMD to implement 16-byte memset. It would have taken one
> // instruction to materialize the v2i64 zero and one store (with restrictive
> // addressing mode). Just do two i64 store of zero-registers.
> @@ -6101,7 +6133,7 @@ EVT ARM64TargetLowering::getOptimalMemOp
> }
>
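The 16-byte memset special case is easy to picture in source form; a standalone sketch of the two-i64-store strategy the comment prefers (helper name is mine, and the instructions in the comments are only the typical outcome):

    #include <cassert>
    #include <cstdint>
    #include <cstring>

    // Zero 16 bytes with two 64-bit stores of zero, rather than
    // materialising a v2i64 zero in a SIMD register.
    static void zero16(void *p) {
      const uint64_t z = 0;
      std::memcpy(static_cast<char *>(p), &z, 8);       // str xzr, [x0]
      std::memcpy(static_cast<char *>(p) + 8, &z, 8);   // str xzr, [x0, #8]
    }

    int main() {
      unsigned char buf[16];
      std::memset(buf, 0xff, sizeof buf);
      zero16(buf);
      for (unsigned char c : buf)
        assert(c == 0);
      return 0;
    }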
> // 12-bit optionally shifted immediates are legal for adds.
> -bool ARM64TargetLowering::isLegalAddImmediate(int64_t Immed) const {
> +bool AArch64TargetLowering::isLegalAddImmediate(int64_t Immed) const {
> if ((Immed >> 12) == 0 || ((Immed & 0xfff) == 0 && Immed >> 24 == 0))
> return true;
> return false;
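The predicate above is compact enough to restate standalone; this is my own reimplementation with a couple of asserts showing which constants fit the 12-bit, optionally LSL #12, ADD/SUB immediate form:

    #include <cassert>
    #include <cstdint>

    static bool isLegalAddImm(int64_t Immed) {
      return (Immed >> 12) == 0 ||
             ((Immed & 0xfff) == 0 && (Immed >> 24) == 0);
    }

    int main() {
      assert(isLegalAddImm(0xfff));       // plain 12-bit immediate
      assert(isLegalAddImm(0xabc000));    // 12-bit immediate, shifted left by 12
      assert(!isLegalAddImm(0x1001));     // needs more than one instruction
      assert(!isLegalAddImm(0x1000000));  // too wide even with the shift
      return 0;
    }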
> @@ -6109,7 +6141,7 @@ bool ARM64TargetLowering::isLegalAddImme
>
> // Integer comparisons are implemented with ADDS/SUBS, so the range of valid
> // immediates is the same as for an add or a sub.
> -bool ARM64TargetLowering::isLegalICmpImmediate(int64_t Immed) const {
> +bool AArch64TargetLowering::isLegalICmpImmediate(int64_t Immed) const {
> if (Immed < 0)
> Immed *= -1;
> return isLegalAddImmediate(Immed);
> @@ -6117,9 +6149,9 @@ bool ARM64TargetLowering::isLegalICmpImm
>
> /// isLegalAddressingMode - Return true if the addressing mode represented
> /// by AM is legal for this target, for a load/store of the specified type.
> -bool ARM64TargetLowering::isLegalAddressingMode(const AddrMode &AM,
> - Type *Ty) const {
> - // ARM64 has five basic addressing modes:
> +bool AArch64TargetLowering::isLegalAddressingMode(const AddrMode &AM,
> + Type *Ty) const {
> + // AArch64 has five basic addressing modes:
> // reg
> // reg + 9-bit signed offset
> // reg + SIZE_IN_BYTES * 12-bit unsigned offset
> @@ -6168,8 +6200,8 @@ bool ARM64TargetLowering::isLegalAddress
> return false;
> }
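For the two immediate-offset forms in the list above, here is a rough standalone sketch of the legality check for a load/store of SizeInBytes bytes (entirely my own helper; the real hook also handles the register-offset forms, globals, and more):

    #include <cassert>
    #include <cstdint>

    static bool isLegalImmOffset(int64_t Offs, uint64_t SizeInBytes) {
      if (Offs >= -256 && Offs <= 255)                  // reg + 9-bit signed (ldur/stur)
        return true;
      if (Offs >= 0 && Offs % (int64_t)SizeInBytes == 0 &&
          (uint64_t)Offs / SizeInBytes <= 0xfff)        // reg + scaled 12-bit unsigned
        return true;
      return false;
    }

    int main() {
      assert(isLegalImmOffset(-256, 8));            // unscaled signed form
      assert(isLegalImmOffset(8 * 0xfff, 8));       // largest scaled offset for an i64
      assert(!isLegalImmOffset(8 * 0xfff + 4, 8));  // misaligned and out of range
      return 0;
    }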
>
> -int ARM64TargetLowering::getScalingFactorCost(const AddrMode &AM,
> - Type *Ty) const {
> +int AArch64TargetLowering::getScalingFactorCost(const AddrMode &AM,
> + Type *Ty) const {
> // Scaling factors are not free at all.
> // Operands | Rt Latency
> // -------------------------------------------
> @@ -6184,7 +6216,7 @@ int ARM64TargetLowering::getScalingFacto
> return -1;
> }
>
> -bool ARM64TargetLowering::isFMAFasterThanFMulAndFAdd(EVT VT) const {
> +bool AArch64TargetLowering::isFMAFasterThanFMulAndFAdd(EVT VT) const {
> VT = VT.getScalarType();
>
> if (!VT.isSimple())
> @@ -6202,17 +6234,18 @@ bool ARM64TargetLowering::isFMAFasterTha
> }
>
> const MCPhysReg *
> -ARM64TargetLowering::getScratchRegisters(CallingConv::ID) const {
> +AArch64TargetLowering::getScratchRegisters(CallingConv::ID) const {
> // LR is a callee-save register, but we must treat it as clobbered by any call
> // site. Hence we include LR in the scratch registers, which are in turn added
> // as implicit-defs for stackmaps and patchpoints.
> static const MCPhysReg ScratchRegs[] = {
> - ARM64::X16, ARM64::X17, ARM64::LR, 0
> + AArch64::X16, AArch64::X17, AArch64::LR, 0
> };
> return ScratchRegs;
> }
>
> -bool ARM64TargetLowering::isDesirableToCommuteWithShift(const SDNode *N) const {
> +bool
> +AArch64TargetLowering::isDesirableToCommuteWithShift(const SDNode *N) const {
> EVT VT = N->getValueType(0);
> // If N is unsigned bit extraction: ((x >> C) & mask), then do not combine
> // it with shift to let it be lowered to UBFX.
> @@ -6227,8 +6260,8 @@ bool ARM64TargetLowering::isDesirableToC
> return true;
> }
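The ((x >> C) & mask) shape the comment mentions is the classic unsigned bit-field extract; a minimal example of what a single UBFX covers (helper is mine, the instruction mapping is my reading of the comment):

    #include <cassert>
    #include <cstdint>

    // Extract `width` bits starting at bit `lsb`.
    static uint64_t extractBits(uint64_t x, unsigned lsb, unsigned width) {
      return (x >> lsb) & ((1ull << width) - 1);   // ubfx xd, xn, #lsb, #width
    }

    int main() {
      assert(extractBits(0xabcd, 4, 8) == 0xbc);
      return 0;
    }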
>
> -bool ARM64TargetLowering::shouldConvertConstantLoadToIntImm(const APInt &Imm,
> - Type *Ty) const {
> +bool AArch64TargetLowering::shouldConvertConstantLoadToIntImm(const APInt &Imm,
> + Type *Ty) const {
> assert(Ty->isIntegerTy());
>
> unsigned BitSize = Ty->getPrimitiveSizeInBits();
> @@ -6236,7 +6269,7 @@ bool ARM64TargetLowering::shouldConvertC
> return false;
>
> int64_t Val = Imm.getSExtValue();
> - if (Val == 0 || ARM64_AM::isLogicalImmediate(Val, BitSize))
> + if (Val == 0 || AArch64_AM::isLogicalImmediate(Val, BitSize))
> return true;
>
> if ((int64_t)Val < 0)
> @@ -6269,10 +6302,10 @@ static SDValue performIntegerAbsCombine(
> N0.getOperand(0));
> // Generate SUBS & CSEL.
> SDValue Cmp =
> - DAG.getNode(ARM64ISD::SUBS, DL, DAG.getVTList(VT, MVT::i32),
> + DAG.getNode(AArch64ISD::SUBS, DL, DAG.getVTList(VT, MVT::i32),
> N0.getOperand(0), DAG.getConstant(0, VT));
> - return DAG.getNode(ARM64ISD::CSEL, DL, VT, N0.getOperand(0), Neg,
> - DAG.getConstant(ARM64CC::PL, MVT::i32),
> + return DAG.getNode(AArch64ISD::CSEL, DL, VT, N0.getOperand(0), Neg,
> + DAG.getConstant(AArch64CC::PL, MVT::i32),
> SDValue(Cmp.getNode(), 1));
> }
> return SDValue();
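As far as I can tell, the pattern this combine matches is the usual branchless abs idiom, which it re-emits as SUBS plus a CSEL on PL (a conditional negate). A sketch of the source shape, assuming an arithmetic right shift (true on the usual compilers, guaranteed only from C++20):

    #include <cassert>
    #include <cstdint>

    static int64_t branchlessAbs(int64_t x) {
      int64_t mask = x >> 63;       // 0 for non-negative, -1 for negative
      return (x + mask) ^ mask;     // the add/xor shape matched above
    }

    int main() {
      assert(branchlessAbs(-5) == 5);
      assert(branchlessAbs(7) == 7);
      return 0;
    }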
> @@ -6281,7 +6314,7 @@ static SDValue performIntegerAbsCombine(
> // performXorCombine - Attempts to handle integer ABS.
> static SDValue performXorCombine(SDNode *N, SelectionDAG &DAG,
> TargetLowering::DAGCombinerInfo &DCI,
> - const ARM64Subtarget *Subtarget) {
> + const AArch64Subtarget *Subtarget) {
> if (DCI.isBeforeLegalizeOps())
> return SDValue();
>
> @@ -6290,7 +6323,7 @@ static SDValue performXorCombine(SDNode
>
> static SDValue performMulCombine(SDNode *N, SelectionDAG &DAG,
> TargetLowering::DAGCombinerInfo &DCI,
> - const ARM64Subtarget *Subtarget) {
> + const AArch64Subtarget *Subtarget) {
> if (DCI.isBeforeLegalizeOps())
> return SDValue();
>
> @@ -6350,7 +6383,7 @@ static SDValue performIntToFpCombine(SDN
> DAG.ReplaceAllUsesOfValueWith(SDValue(LN0, 1), Load.getValue(1));
>
> unsigned Opcode =
> - (N->getOpcode() == ISD::SINT_TO_FP) ? ARM64ISD::SITOF : ARM64ISD::UITOF;
> + (N->getOpcode() == ISD::SINT_TO_FP) ? AArch64ISD::SITOF : AArch64ISD::UITOF;
> return DAG.getNode(Opcode, SDLoc(N), VT, Load);
> }
>
> @@ -6417,7 +6450,7 @@ static SDValue tryCombineToEXTR(SDNode *
> std::swap(ShiftLHS, ShiftRHS);
> }
>
> - return DAG.getNode(ARM64ISD::EXTR, DL, VT, LHS, RHS,
> + return DAG.getNode(AArch64ISD::EXTR, DL, VT, LHS, RHS,
> DAG.getConstant(ShiftRHS, MVT::i64));
> }
>
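The (or (shl VAL1, #N), (srl VAL2, #RegWidth-N)) shape that tryCombineToEXTR looks for is just a funnel shift (a rotate when both inputs are the same value). A minimal sketch, with N restricted to 1..63 so the shifts stay defined (names are mine):

    #include <cassert>
    #include <cstdint>

    static uint64_t funnel(uint64_t hi, uint64_t lo, unsigned n) {
      // (or (shl hi, n), (srl lo, 64 - n))  -->  extr xd, x_hi, x_lo, #(64 - n)
      return (hi << n) | (lo >> (64 - n));
    }

    int main() {
      assert(funnel(0x1, 0x8000000000000000ull, 4) == 0x18);
      return 0;
    }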
> @@ -6461,7 +6494,7 @@ static SDValue tryCombineToBSL(SDNode *N
> }
>
> if (FoundMatch)
> - return DAG.getNode(ARM64ISD::BSL, DL, VT, SDValue(BVN0, 0),
> + return DAG.getNode(AArch64ISD::BSL, DL, VT, SDValue(BVN0, 0),
> N0->getOperand(1 - i), N1->getOperand(1 - j));
> }
>
> @@ -6469,9 +6502,9 @@ static SDValue tryCombineToBSL(SDNode *N
> }
>
> static SDValue performORCombine(SDNode *N, TargetLowering::DAGCombinerInfo &DCI,
> - const ARM64Subtarget *Subtarget) {
> + const AArch64Subtarget *Subtarget) {
> // Attempt to form an EXTR from (or (shl VAL1, #N), (srl VAL2, #RegWidth-N))
> - if (!EnableARM64ExtrGeneration)
> + if (!EnableAArch64ExtrGeneration)
> return SDValue();
> SelectionDAG &DAG = DCI.DAG;
> EVT VT = N->getValueType(0);
> @@ -6517,14 +6550,14 @@ static SDValue performBitcastCombine(SDN
> SDValue Op0 = N->getOperand(0);
> if (Op0->getOpcode() != ISD::EXTRACT_SUBVECTOR &&
> !(Op0->isMachineOpcode() &&
> - Op0->getMachineOpcode() == ARM64::EXTRACT_SUBREG))
> + Op0->getMachineOpcode() == AArch64::EXTRACT_SUBREG))
> return SDValue();
> uint64_t idx = cast<ConstantSDNode>(Op0->getOperand(1))->getZExtValue();
> if (Op0->getOpcode() == ISD::EXTRACT_SUBVECTOR) {
> if (Op0->getValueType(0).getVectorNumElements() != idx && idx != 0)
> return SDValue();
> - } else if (Op0->getMachineOpcode() == ARM64::EXTRACT_SUBREG) {
> - if (idx != ARM64::dsub)
> + } else if (Op0->getMachineOpcode() == AArch64::EXTRACT_SUBREG) {
> + if (idx != AArch64::dsub)
> return SDValue();
> // The dsub reference is equivalent to a lane zero subvector reference.
> idx = 0;
> @@ -6539,7 +6572,7 @@ static SDValue performBitcastCombine(SDN
> if (SVT.getVectorNumElements() != VT.getVectorNumElements() * 2)
> return SDValue();
>
> - DEBUG(dbgs() << "arm64-lower: bitcast extract_subvector simplification\n");
> + DEBUG(dbgs() << "aarch64-lower: bitcast extract_subvector simplification\n");
>
> // Create the simplified form to just extract the low or high half of the
> // vector directly rather than bothering with the bitcasts.
> @@ -6549,7 +6582,7 @@ static SDValue performBitcastCombine(SDN
> SDValue HalfIdx = DAG.getConstant(NumElements, MVT::i64);
> return DAG.getNode(ISD::EXTRACT_SUBVECTOR, dl, VT, Source, HalfIdx);
> } else {
> - SDValue SubReg = DAG.getTargetConstant(ARM64::dsub, MVT::i32);
> + SDValue SubReg = DAG.getTargetConstant(AArch64::dsub, MVT::i32);
> return SDValue(DAG.getMachineNode(TargetOpcode::EXTRACT_SUBREG, dl, VT,
> Source, SubReg),
> 0);
> @@ -6572,7 +6605,7 @@ static SDValue performConcatVectorsCombi
> // canonicalise to that.
> if (N->getOperand(0) == N->getOperand(1) && VT.getVectorNumElements() == 2) {
> assert(VT.getVectorElementType().getSizeInBits() == 64);
> - return DAG.getNode(ARM64ISD::DUPLANE64, dl, VT,
> + return DAG.getNode(AArch64ISD::DUPLANE64, dl, VT,
> WidenVector(N->getOperand(0), DAG),
> DAG.getConstant(0, MVT::i64));
> }
> @@ -6595,7 +6628,7 @@ static SDValue performConcatVectorsCombi
> if (!RHSTy.isVector())
> return SDValue();
>
> - DEBUG(dbgs() << "arm64-lower: concat_vectors bitcast simplification\n");
> + DEBUG(dbgs() << "aarch64-lower: concat_vectors bitcast simplification\n");
>
> MVT ConcatTy = MVT::getVectorVT(RHSTy.getVectorElementType(),
> RHSTy.getVectorNumElements() * 2);
> @@ -6670,13 +6703,13 @@ static SDValue tryExtendDUPToExtractHigh
> // operand saying *which* lane, so we need to know.
> bool IsDUPLANE;
> switch (N.getOpcode()) {
> - case ARM64ISD::DUP:
> + case AArch64ISD::DUP:
> IsDUPLANE = false;
> break;
> - case ARM64ISD::DUPLANE8:
> - case ARM64ISD::DUPLANE16:
> - case ARM64ISD::DUPLANE32:
> - case ARM64ISD::DUPLANE64:
> + case AArch64ISD::DUPLANE8:
> + case AArch64ISD::DUPLANE16:
> + case AArch64ISD::DUPLANE32:
> + case AArch64ISD::DUPLANE64:
> IsDUPLANE = true;
> break;
> default:
> @@ -6696,7 +6729,7 @@ static SDValue tryExtendDUPToExtractHigh
> NewDUP = DAG.getNode(N.getOpcode(), SDLoc(N), NewDUPVT, N.getOperand(0),
> N.getOperand(1));
> else
> - NewDUP = DAG.getNode(ARM64ISD::DUP, SDLoc(N), NewDUPVT, N.getOperand(0));
> + NewDUP = DAG.getNode(AArch64ISD::DUP, SDLoc(N), NewDUPVT, N.getOperand(0));
>
> return DAG.getNode(ISD::EXTRACT_SUBVECTOR, SDLoc(N.getNode()), NarrowTy,
> NewDUP, DAG.getConstant(NumElems, MVT::i64));
> @@ -6717,29 +6750,29 @@ struct GenericSetCCInfo {
> ISD::CondCode CC;
> };
>
> -/// \brief Helper structure to keep track of a SET_CC lowered into ARM64 code.
> -struct ARM64SetCCInfo {
> +/// \brief Helper structure to keep track of a SET_CC lowered into AArch64 code.
> +struct AArch64SetCCInfo {
> const SDValue *Cmp;
> - ARM64CC::CondCode CC;
> + AArch64CC::CondCode CC;
> };
>
> /// \brief Helper structure to keep track of SetCC information.
> union SetCCInfo {
> GenericSetCCInfo Generic;
> - ARM64SetCCInfo ARM64;
> + AArch64SetCCInfo AArch64;
> };
>
> -/// \brief Helper structure to be able to read SetCC information.
> -/// If set to true, IsARM64 field, Info is a ARM64SetCCInfo, otherwise Info is
> -/// a GenericSetCCInfo.
> +/// \brief Helper structure to be able to read SetCC information. If the
> +/// IsAArch64 field is set to true, Info is an AArch64SetCCInfo; otherwise
> +/// Info is a GenericSetCCInfo.
> struct SetCCInfoAndKind {
> SetCCInfo Info;
> - bool IsARM64;
> + bool IsAArch64;
> };
>
> /// \brief Check whether or not \p Op is a SET_CC operation, either a generic or
> /// an
> -/// ARM64 lowered one.
> +/// AArch64 lowered one.
> /// \p SetCCInfo is filled accordingly.
> /// \post SetCCInfo is meaningful only when this function returns true.
> /// \return True when Op is a kind of SET_CC operation.
> @@ -6749,20 +6782,20 @@ static bool isSetCC(SDValue Op, SetCCInf
> SetCCInfo.Info.Generic.Opnd0 = &Op.getOperand(0);
> SetCCInfo.Info.Generic.Opnd1 = &Op.getOperand(1);
> SetCCInfo.Info.Generic.CC = cast<CondCodeSDNode>(Op.getOperand(2))->get();
> - SetCCInfo.IsARM64 = false;
> + SetCCInfo.IsAArch64 = false;
> return true;
> }
> // Otherwise, check if this is a matching csel instruction.
> // In other words:
> // - csel 1, 0, cc
> // - csel 0, 1, !cc
> - if (Op.getOpcode() != ARM64ISD::CSEL)
> + if (Op.getOpcode() != AArch64ISD::CSEL)
> return false;
> // Set the information about the operands.
> // TODO: we want the operands of the Cmp not the csel
> - SetCCInfo.Info.ARM64.Cmp = &Op.getOperand(3);
> - SetCCInfo.IsARM64 = true;
> - SetCCInfo.Info.ARM64.CC = static_cast<ARM64CC::CondCode>(
> + SetCCInfo.Info.AArch64.Cmp = &Op.getOperand(3);
> + SetCCInfo.IsAArch64 = true;
> + SetCCInfo.Info.AArch64.CC = static_cast<AArch64CC::CondCode>(
> cast<ConstantSDNode>(Op.getOperand(2))->getZExtValue());
>
> // Check that the operands matches the constraints:
> @@ -6779,8 +6812,8 @@ static bool isSetCC(SDValue Op, SetCCInf
> if (!TValue->isOne()) {
> // Update the comparison when we are interested in !cc.
> std::swap(TValue, FValue);
> - SetCCInfo.Info.ARM64.CC =
> - ARM64CC::getInvertedCondCode(SetCCInfo.Info.ARM64.CC);
> + SetCCInfo.Info.AArch64.CC =
> + AArch64CC::getInvertedCondCode(SetCCInfo.Info.AArch64.CC);
> }
> return TValue->isOne() && FValue->isNullValue();
> }
> @@ -6813,8 +6846,8 @@ static SDValue performSetccAddFolding(SD
> }
>
> // FIXME: This could be generalized to work for FP comparisons.
> - EVT CmpVT = InfoAndKind.IsARM64
> - ? InfoAndKind.Info.ARM64.Cmp->getOperand(0).getValueType()
> + EVT CmpVT = InfoAndKind.IsAArch64
> + ? InfoAndKind.Info.AArch64.Cmp->getOperand(0).getValueType()
> : InfoAndKind.Info.Generic.Opnd0->getValueType();
> if (CmpVT != MVT::i32 && CmpVT != MVT::i64)
> return SDValue();
> @@ -6822,19 +6855,19 @@ static SDValue performSetccAddFolding(SD
> SDValue CCVal;
> SDValue Cmp;
> SDLoc dl(Op);
> - if (InfoAndKind.IsARM64) {
> + if (InfoAndKind.IsAArch64) {
> CCVal = DAG.getConstant(
> - ARM64CC::getInvertedCondCode(InfoAndKind.Info.ARM64.CC), MVT::i32);
> - Cmp = *InfoAndKind.Info.ARM64.Cmp;
> + AArch64CC::getInvertedCondCode(InfoAndKind.Info.AArch64.CC), MVT::i32);
> + Cmp = *InfoAndKind.Info.AArch64.Cmp;
> } else
> - Cmp = getARM64Cmp(*InfoAndKind.Info.Generic.Opnd0,
> + Cmp = getAArch64Cmp(*InfoAndKind.Info.Generic.Opnd0,
> *InfoAndKind.Info.Generic.Opnd1,
> ISD::getSetCCInverse(InfoAndKind.Info.Generic.CC, true),
> CCVal, DAG, dl);
>
> EVT VT = Op->getValueType(0);
> LHS = DAG.getNode(ISD::ADD, dl, VT, RHS, DAG.getConstant(1, VT));
> - return DAG.getNode(ARM64ISD::CSEL, dl, VT, RHS, LHS, CCVal, Cmp);
> + return DAG.getNode(AArch64ISD::CSEL, dl, VT, RHS, LHS, CCVal, Cmp);
> }
>
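performSetccAddFolding handles adds whose operand is a boolean comparison result; at the source level that is the familiar add-a-flag shape, which ends up as a compare plus a conditional increment rather than materialising a 0/1 value. My reading, sketched with hypothetical names:

    #include <cassert>
    #include <cstdint>

    static uint64_t addCarryLike(uint64_t x, uint64_t a, uint64_t b) {
      return x + (a < b ? 1 : 0);   // roughly: cmp a, b ; cinc x, x, lo
    }

    int main() {
      assert(addCarryLike(10, 1, 2) == 11);
      assert(addCarryLike(10, 3, 2) == 10);
      return 0;
    }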
> // The basic add/sub long vector instructions have variants with "2" on the end
> @@ -6893,8 +6926,8 @@ static SDValue performAddSubLongCombine(
> // Massage DAGs which we can use the high-half "long" operations on into
> // something isel will recognize better. E.g.
> //
> -// (arm64_neon_umull (extract_high vec) (dupv64 scalar)) -->
> -// (arm64_neon_umull (extract_high (v2i64 vec)))
> +// (aarch64_neon_umull (extract_high vec) (dupv64 scalar)) -->
> +// (aarch64_neon_umull (extract_high (v2i64 vec)))
> // (extract_high (v2i64 (dup128 scalar)))))
> //
> static SDValue tryCombineLongOpWithDup(unsigned IID, SDNode *N,
> @@ -6951,24 +6984,24 @@ static SDValue tryCombineShiftImm(unsign
> switch (IID) {
> default:
> llvm_unreachable("Unknown shift intrinsic");
> - case Intrinsic::arm64_neon_sqshl:
> - Opcode = ARM64ISD::SQSHL_I;
> + case Intrinsic::aarch64_neon_sqshl:
> + Opcode = AArch64ISD::SQSHL_I;
> IsRightShift = false;
> break;
> - case Intrinsic::arm64_neon_uqshl:
> - Opcode = ARM64ISD::UQSHL_I;
> + case Intrinsic::aarch64_neon_uqshl:
> + Opcode = AArch64ISD::UQSHL_I;
> IsRightShift = false;
> break;
> - case Intrinsic::arm64_neon_srshl:
> - Opcode = ARM64ISD::SRSHR_I;
> + case Intrinsic::aarch64_neon_srshl:
> + Opcode = AArch64ISD::SRSHR_I;
> IsRightShift = true;
> break;
> - case Intrinsic::arm64_neon_urshl:
> - Opcode = ARM64ISD::URSHR_I;
> + case Intrinsic::aarch64_neon_urshl:
> + Opcode = AArch64ISD::URSHR_I;
> IsRightShift = true;
> break;
> - case Intrinsic::arm64_neon_sqshlu:
> - Opcode = ARM64ISD::SQSHLU_I;
> + case Intrinsic::aarch64_neon_sqshlu:
> + Opcode = AArch64ISD::SQSHLU_I;
> IsRightShift = false;
> break;
> }
> @@ -7001,38 +7034,38 @@ static SDValue tryCombineCRC32(unsigned
>
> static SDValue performIntrinsicCombine(SDNode *N,
> TargetLowering::DAGCombinerInfo &DCI,
> - const ARM64Subtarget *Subtarget) {
> + const AArch64Subtarget *Subtarget) {
> SelectionDAG &DAG = DCI.DAG;
> unsigned IID = getIntrinsicID(N);
> switch (IID) {
> default:
> break;
> - case Intrinsic::arm64_neon_vcvtfxs2fp:
> - case Intrinsic::arm64_neon_vcvtfxu2fp:
> + case Intrinsic::aarch64_neon_vcvtfxs2fp:
> + case Intrinsic::aarch64_neon_vcvtfxu2fp:
> return tryCombineFixedPointConvert(N, DCI, DAG);
> break;
> - case Intrinsic::arm64_neon_fmax:
> - return DAG.getNode(ARM64ISD::FMAX, SDLoc(N), N->getValueType(0),
> + case Intrinsic::aarch64_neon_fmax:
> + return DAG.getNode(AArch64ISD::FMAX, SDLoc(N), N->getValueType(0),
> N->getOperand(1), N->getOperand(2));
> - case Intrinsic::arm64_neon_fmin:
> - return DAG.getNode(ARM64ISD::FMIN, SDLoc(N), N->getValueType(0),
> + case Intrinsic::aarch64_neon_fmin:
> + return DAG.getNode(AArch64ISD::FMIN, SDLoc(N), N->getValueType(0),
> N->getOperand(1), N->getOperand(2));
> - case Intrinsic::arm64_neon_smull:
> - case Intrinsic::arm64_neon_umull:
> - case Intrinsic::arm64_neon_pmull:
> - case Intrinsic::arm64_neon_sqdmull:
> + case Intrinsic::aarch64_neon_smull:
> + case Intrinsic::aarch64_neon_umull:
> + case Intrinsic::aarch64_neon_pmull:
> + case Intrinsic::aarch64_neon_sqdmull:
> return tryCombineLongOpWithDup(IID, N, DCI, DAG);
> - case Intrinsic::arm64_neon_sqshl:
> - case Intrinsic::arm64_neon_uqshl:
> - case Intrinsic::arm64_neon_sqshlu:
> - case Intrinsic::arm64_neon_srshl:
> - case Intrinsic::arm64_neon_urshl:
> + case Intrinsic::aarch64_neon_sqshl:
> + case Intrinsic::aarch64_neon_uqshl:
> + case Intrinsic::aarch64_neon_sqshlu:
> + case Intrinsic::aarch64_neon_srshl:
> + case Intrinsic::aarch64_neon_urshl:
> return tryCombineShiftImm(IID, N, DAG);
> - case Intrinsic::arm64_crc32b:
> - case Intrinsic::arm64_crc32cb:
> + case Intrinsic::aarch64_crc32b:
> + case Intrinsic::aarch64_crc32cb:
> return tryCombineCRC32(0xff, N, DAG);
> - case Intrinsic::arm64_crc32h:
> - case Intrinsic::arm64_crc32ch:
> + case Intrinsic::aarch64_crc32h:
> + case Intrinsic::aarch64_crc32ch:
> return tryCombineCRC32(0xffff, N, DAG);
> }
> return SDValue();
> @@ -7049,8 +7082,8 @@ static SDValue performExtendCombine(SDNo
> N->getOperand(0).getOpcode() == ISD::INTRINSIC_WO_CHAIN) {
> SDNode *ABDNode = N->getOperand(0).getNode();
> unsigned IID = getIntrinsicID(ABDNode);
> - if (IID == Intrinsic::arm64_neon_sabd ||
> - IID == Intrinsic::arm64_neon_uabd) {
> + if (IID == Intrinsic::aarch64_neon_sabd ||
> + IID == Intrinsic::aarch64_neon_uabd) {
> SDValue NewABD = tryCombineLongOpWithDup(IID, ABDNode, DCI, DAG);
> if (!NewABD.getNode())
> return SDValue();
> @@ -7060,7 +7093,7 @@ static SDValue performExtendCombine(SDNo
> }
> }
>
> - // This is effectively a custom type legalization for ARM64.
> + // This is effectively a custom type legalization for AArch64.
> //
> // Type legalization will split an extend of a small, legal, type to a larger
> // illegal type by first splitting the destination type, often creating
> @@ -7074,7 +7107,7 @@ static SDValue performExtendCombine(SDNo
> // %hi = v4i32 sext v4i8 %hisrc
> // Things go rapidly downhill from there.
> //
> - // For ARM64, the [sz]ext vector instructions can only go up one element
> + // For AArch64, the [sz]ext vector instructions can only go up one element
> // size, so we can, e.g., extend from i8 to i16, but going from i8 to i32
> // takes two instructions.
> //
> @@ -7199,7 +7232,7 @@ static SDValue replaceSplatVectorStore(S
> static SDValue performSTORECombine(SDNode *N,
> TargetLowering::DAGCombinerInfo &DCI,
> SelectionDAG &DAG,
> - const ARM64Subtarget *Subtarget) {
> + const AArch64Subtarget *Subtarget) {
> if (!DCI.isBeforeLegalize())
> return SDValue();
>
> @@ -7322,7 +7355,7 @@ static SDValue performPostLD1Combine(SDN
> unsigned NumBytes = VT.getScalarSizeInBits() / 8;
> if (IncVal != NumBytes)
> continue;
> - Inc = DAG.getRegister(ARM64::XZR, MVT::i64);
> + Inc = DAG.getRegister(AArch64::XZR, MVT::i64);
> }
>
> SmallVector<SDValue, 8> Ops;
> @@ -7336,7 +7369,7 @@ static SDValue performPostLD1Combine(SDN
>
> EVT Tys[3] = { VT, MVT::i64, MVT::Other };
> SDVTList SDTys = DAG.getVTList(ArrayRef<EVT>(Tys, 3));
> - unsigned NewOp = IsLaneOp ? ARM64ISD::LD1LANEpost : ARM64ISD::LD1DUPpost;
> + unsigned NewOp = IsLaneOp ? AArch64ISD::LD1LANEpost : AArch64ISD::LD1DUPpost;
> SDValue UpdN = DAG.getMemIntrinsicNode(NewOp, SDLoc(N), SDTys, Ops,
> MemVT,
> LoadSDN->getMemOperand());
> @@ -7387,47 +7420,47 @@ static SDValue performNEONPostLDSTCombin
> unsigned IntNo = cast<ConstantSDNode>(N->getOperand(1))->getZExtValue();
> switch (IntNo) {
> default: llvm_unreachable("unexpected intrinsic for Neon base update");
> - case Intrinsic::arm64_neon_ld2: NewOpc = ARM64ISD::LD2post;
> + case Intrinsic::aarch64_neon_ld2: NewOpc = AArch64ISD::LD2post;
> NumVecs = 2; break;
> - case Intrinsic::arm64_neon_ld3: NewOpc = ARM64ISD::LD3post;
> + case Intrinsic::aarch64_neon_ld3: NewOpc = AArch64ISD::LD3post;
> NumVecs = 3; break;
> - case Intrinsic::arm64_neon_ld4: NewOpc = ARM64ISD::LD4post;
> + case Intrinsic::aarch64_neon_ld4: NewOpc = AArch64ISD::LD4post;
> NumVecs = 4; break;
> - case Intrinsic::arm64_neon_st2: NewOpc = ARM64ISD::ST2post;
> + case Intrinsic::aarch64_neon_st2: NewOpc = AArch64ISD::ST2post;
> NumVecs = 2; IsStore = true; break;
> - case Intrinsic::arm64_neon_st3: NewOpc = ARM64ISD::ST3post;
> + case Intrinsic::aarch64_neon_st3: NewOpc = AArch64ISD::ST3post;
> NumVecs = 3; IsStore = true; break;
> - case Intrinsic::arm64_neon_st4: NewOpc = ARM64ISD::ST4post;
> + case Intrinsic::aarch64_neon_st4: NewOpc = AArch64ISD::ST4post;
> NumVecs = 4; IsStore = true; break;
> - case Intrinsic::arm64_neon_ld1x2: NewOpc = ARM64ISD::LD1x2post;
> + case Intrinsic::aarch64_neon_ld1x2: NewOpc = AArch64ISD::LD1x2post;
> NumVecs = 2; break;
> - case Intrinsic::arm64_neon_ld1x3: NewOpc = ARM64ISD::LD1x3post;
> + case Intrinsic::aarch64_neon_ld1x3: NewOpc = AArch64ISD::LD1x3post;
> NumVecs = 3; break;
> - case Intrinsic::arm64_neon_ld1x4: NewOpc = ARM64ISD::LD1x4post;
> + case Intrinsic::aarch64_neon_ld1x4: NewOpc = AArch64ISD::LD1x4post;
> NumVecs = 4; break;
> - case Intrinsic::arm64_neon_st1x2: NewOpc = ARM64ISD::ST1x2post;
> + case Intrinsic::aarch64_neon_st1x2: NewOpc = AArch64ISD::ST1x2post;
> NumVecs = 2; IsStore = true; break;
> - case Intrinsic::arm64_neon_st1x3: NewOpc = ARM64ISD::ST1x3post;
> + case Intrinsic::aarch64_neon_st1x3: NewOpc = AArch64ISD::ST1x3post;
> NumVecs = 3; IsStore = true; break;
> - case Intrinsic::arm64_neon_st1x4: NewOpc = ARM64ISD::ST1x4post;
> + case Intrinsic::aarch64_neon_st1x4: NewOpc = AArch64ISD::ST1x4post;
> NumVecs = 4; IsStore = true; break;
> - case Intrinsic::arm64_neon_ld2r: NewOpc = ARM64ISD::LD2DUPpost;
> + case Intrinsic::aarch64_neon_ld2r: NewOpc = AArch64ISD::LD2DUPpost;
> NumVecs = 2; IsDupOp = true; break;
> - case Intrinsic::arm64_neon_ld3r: NewOpc = ARM64ISD::LD3DUPpost;
> + case Intrinsic::aarch64_neon_ld3r: NewOpc = AArch64ISD::LD3DUPpost;
> NumVecs = 3; IsDupOp = true; break;
> - case Intrinsic::arm64_neon_ld4r: NewOpc = ARM64ISD::LD4DUPpost;
> + case Intrinsic::aarch64_neon_ld4r: NewOpc = AArch64ISD::LD4DUPpost;
> NumVecs = 4; IsDupOp = true; break;
> - case Intrinsic::arm64_neon_ld2lane: NewOpc = ARM64ISD::LD2LANEpost;
> + case Intrinsic::aarch64_neon_ld2lane: NewOpc = AArch64ISD::LD2LANEpost;
> NumVecs = 2; IsLaneOp = true; break;
> - case Intrinsic::arm64_neon_ld3lane: NewOpc = ARM64ISD::LD3LANEpost;
> + case Intrinsic::aarch64_neon_ld3lane: NewOpc = AArch64ISD::LD3LANEpost;
> NumVecs = 3; IsLaneOp = true; break;
> - case Intrinsic::arm64_neon_ld4lane: NewOpc = ARM64ISD::LD4LANEpost;
> + case Intrinsic::aarch64_neon_ld4lane: NewOpc = AArch64ISD::LD4LANEpost;
> NumVecs = 4; IsLaneOp = true; break;
> - case Intrinsic::arm64_neon_st2lane: NewOpc = ARM64ISD::ST2LANEpost;
> + case Intrinsic::aarch64_neon_st2lane: NewOpc = AArch64ISD::ST2LANEpost;
> NumVecs = 2; IsStore = true; IsLaneOp = true; break;
> - case Intrinsic::arm64_neon_st3lane: NewOpc = ARM64ISD::ST3LANEpost;
> + case Intrinsic::aarch64_neon_st3lane: NewOpc = AArch64ISD::ST3LANEpost;
> NumVecs = 3; IsStore = true; IsLaneOp = true; break;
> - case Intrinsic::arm64_neon_st4lane: NewOpc = ARM64ISD::ST4LANEpost;
> + case Intrinsic::aarch64_neon_st4lane: NewOpc = AArch64ISD::ST4LANEpost;
> NumVecs = 4; IsStore = true; IsLaneOp = true; break;
> }
>
> @@ -7446,7 +7479,7 @@ static SDValue performNEONPostLDSTCombin
> NumBytes /= VecTy.getVectorNumElements();
> if (IncVal != NumBytes)
> continue;
> - Inc = DAG.getRegister(ARM64::XZR, MVT::i64);
> + Inc = DAG.getRegister(AArch64::XZR, MVT::i64);
> }
> SmallVector<SDValue, 8> Ops;
> Ops.push_back(N->getOperand(0)); // Incoming chain
> @@ -7497,11 +7530,11 @@ static SDValue performBRCONDCombine(SDNo
>
> assert(isa<ConstantSDNode>(CCVal) && "Expected a ConstantSDNode here!");
> unsigned CC = cast<ConstantSDNode>(CCVal)->getZExtValue();
> - if (CC != ARM64CC::EQ && CC != ARM64CC::NE)
> + if (CC != AArch64CC::EQ && CC != AArch64CC::NE)
> return SDValue();
>
> unsigned CmpOpc = Cmp.getOpcode();
> - if (CmpOpc != ARM64ISD::ADDS && CmpOpc != ARM64ISD::SUBS)
> + if (CmpOpc != AArch64ISD::ADDS && CmpOpc != AArch64ISD::SUBS)
> return SDValue();
>
> // Only attempt folding if there is only one use of the flag and no use of the
> @@ -7529,10 +7562,10 @@ static SDValue performBRCONDCombine(SDNo
>
> // Fold the compare into the branch instruction.
> SDValue BR;
> - if (CC == ARM64CC::EQ)
> - BR = DAG.getNode(ARM64ISD::CBZ, SDLoc(N), MVT::Other, Chain, LHS, Dest);
> + if (CC == AArch64CC::EQ)
> + BR = DAG.getNode(AArch64ISD::CBZ, SDLoc(N), MVT::Other, Chain, LHS, Dest);
> else
> - BR = DAG.getNode(ARM64ISD::CBNZ, SDLoc(N), MVT::Other, Chain, LHS, Dest);
> + BR = DAG.getNode(AArch64ISD::CBNZ, SDLoc(N), MVT::Other, Chain, LHS, Dest);
>
> // Do not add new nodes to DAG combiner worklist.
> DCI.CombineTo(N, BR, false);
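The BRCOND combine above folds an EQ/NE test against zero straight into the branch. The source shape is as simple as it sounds; sketch is mine, and the instruction in the comment is only the typical outcome:

    #include <cassert>
    #include <cstdint>

    // Branching on a comparison with zero needs no separate compare:
    // the whole test folds into one CBZ (or CBNZ for the inverted case).
    static int isZero(uint64_t x) {
      if (x == 0)        // cbz x0, <taken>
        return 1;
      return 0;
    }

    int main() {
      assert(isZero(0) == 1 && isZero(42) == 0);
      return 0;
    }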
> @@ -7608,8 +7641,8 @@ static SDValue performSelectCombine(SDNo
> return DAG.getSelect(DL, ResVT, Mask, N->getOperand(1), N->getOperand(2));
> }
>
> -SDValue ARM64TargetLowering::PerformDAGCombine(SDNode *N,
> - DAGCombinerInfo &DCI) const {
> +SDValue AArch64TargetLowering::PerformDAGCombine(SDNode *N,
> + DAGCombinerInfo &DCI) const {
> SelectionDAG &DAG = DCI.DAG;
> switch (N->getOpcode()) {
> default:
> @@ -7642,36 +7675,36 @@ SDValue ARM64TargetLowering::PerformDAGC
> return performVSelectCombine(N, DCI.DAG);
> case ISD::STORE:
> return performSTORECombine(N, DCI, DAG, Subtarget);
> - case ARM64ISD::BRCOND:
> + case AArch64ISD::BRCOND:
> return performBRCONDCombine(N, DCI, DAG);
> - case ARM64ISD::DUP:
> + case AArch64ISD::DUP:
> return performPostLD1Combine(N, DCI, false);
> case ISD::INSERT_VECTOR_ELT:
> return performPostLD1Combine(N, DCI, true);
> case ISD::INTRINSIC_VOID:
> case ISD::INTRINSIC_W_CHAIN:
> switch (cast<ConstantSDNode>(N->getOperand(1))->getZExtValue()) {
> - case Intrinsic::arm64_neon_ld2:
> - case Intrinsic::arm64_neon_ld3:
> - case Intrinsic::arm64_neon_ld4:
> - case Intrinsic::arm64_neon_ld1x2:
> - case Intrinsic::arm64_neon_ld1x3:
> - case Intrinsic::arm64_neon_ld1x4:
> - case Intrinsic::arm64_neon_ld2lane:
> - case Intrinsic::arm64_neon_ld3lane:
> - case Intrinsic::arm64_neon_ld4lane:
> - case Intrinsic::arm64_neon_ld2r:
> - case Intrinsic::arm64_neon_ld3r:
> - case Intrinsic::arm64_neon_ld4r:
> - case Intrinsic::arm64_neon_st2:
> - case Intrinsic::arm64_neon_st3:
> - case Intrinsic::arm64_neon_st4:
> - case Intrinsic::arm64_neon_st1x2:
> - case Intrinsic::arm64_neon_st1x3:
> - case Intrinsic::arm64_neon_st1x4:
> - case Intrinsic::arm64_neon_st2lane:
> - case Intrinsic::arm64_neon_st3lane:
> - case Intrinsic::arm64_neon_st4lane:
> + case Intrinsic::aarch64_neon_ld2:
> + case Intrinsic::aarch64_neon_ld3:
> + case Intrinsic::aarch64_neon_ld4:
> + case Intrinsic::aarch64_neon_ld1x2:
> + case Intrinsic::aarch64_neon_ld1x3:
> + case Intrinsic::aarch64_neon_ld1x4:
> + case Intrinsic::aarch64_neon_ld2lane:
> + case Intrinsic::aarch64_neon_ld3lane:
> + case Intrinsic::aarch64_neon_ld4lane:
> + case Intrinsic::aarch64_neon_ld2r:
> + case Intrinsic::aarch64_neon_ld3r:
> + case Intrinsic::aarch64_neon_ld4r:
> + case Intrinsic::aarch64_neon_st2:
> + case Intrinsic::aarch64_neon_st3:
> + case Intrinsic::aarch64_neon_st4:
> + case Intrinsic::aarch64_neon_st1x2:
> + case Intrinsic::aarch64_neon_st1x3:
> + case Intrinsic::aarch64_neon_st1x4:
> + case Intrinsic::aarch64_neon_st2lane:
> + case Intrinsic::aarch64_neon_st3lane:
> + case Intrinsic::aarch64_neon_st4lane:
> return performNEONPostLDSTCombine(N, DCI, DAG);
> default:
> break;
> @@ -7684,7 +7717,8 @@ SDValue ARM64TargetLowering::PerformDAGC
> // we can't perform a tail-call. In particular, we need to check for
> // target ISD nodes that are returns and any other "odd" constructs
> // that the generic analysis code won't necessarily catch.
> -bool ARM64TargetLowering::isUsedByReturnOnly(SDNode *N, SDValue &Chain) const {
> +bool AArch64TargetLowering::isUsedByReturnOnly(SDNode *N,
> + SDValue &Chain) const {
> if (N->getNumValues() != 1)
> return false;
> if (!N->hasNUsesOfValue(1, 0))
> @@ -7704,7 +7738,7 @@ bool ARM64TargetLowering::isUsedByReturn
>
> bool HasRet = false;
> for (SDNode *Node : Copy->uses()) {
> - if (Node->getOpcode() != ARM64ISD::RET_FLAG)
> + if (Node->getOpcode() != AArch64ISD::RET_FLAG)
> return false;
> HasRet = true;
> }
> @@ -7720,18 +7754,18 @@ bool ARM64TargetLowering::isUsedByReturn
> // call. This will cause the optimizers to attempt to move, or duplicate,
> // return instructions to help enable tail call optimizations for this
> // instruction.
> -bool ARM64TargetLowering::mayBeEmittedAsTailCall(CallInst *CI) const {
> +bool AArch64TargetLowering::mayBeEmittedAsTailCall(CallInst *CI) const {
> if (!CI->isTailCall())
> return false;
>
> return true;
> }
>
> -bool ARM64TargetLowering::getIndexedAddressParts(SDNode *Op, SDValue &Base,
> - SDValue &Offset,
> - ISD::MemIndexedMode &AM,
> - bool &IsInc,
> - SelectionDAG &DAG) const {
> +bool AArch64TargetLowering::getIndexedAddressParts(SDNode *Op, SDValue &Base,
> + SDValue &Offset,
> + ISD::MemIndexedMode &AM,
> + bool &IsInc,
> + SelectionDAG &DAG) const {
> if (Op->getOpcode() != ISD::ADD && Op->getOpcode() != ISD::SUB)
> return false;
>
> @@ -7749,10 +7783,10 @@ bool ARM64TargetLowering::getIndexedAddr
> return false;
> }
>
> -bool ARM64TargetLowering::getPreIndexedAddressParts(SDNode *N, SDValue &Base,
> - SDValue &Offset,
> - ISD::MemIndexedMode &AM,
> - SelectionDAG &DAG) const {
> +bool AArch64TargetLowering::getPreIndexedAddressParts(SDNode *N, SDValue &Base,
> + SDValue &Offset,
> + ISD::MemIndexedMode &AM,
> + SelectionDAG &DAG) const {
> EVT VT;
> SDValue Ptr;
> if (LoadSDNode *LD = dyn_cast<LoadSDNode>(N)) {
> @@ -7771,11 +7805,9 @@ bool ARM64TargetLowering::getPreIndexedA
> return true;
> }
>
> -bool ARM64TargetLowering::getPostIndexedAddressParts(SDNode *N, SDNode *Op,
> - SDValue &Base,
> - SDValue &Offset,
> - ISD::MemIndexedMode &AM,
> - SelectionDAG &DAG) const {
> +bool AArch64TargetLowering::getPostIndexedAddressParts(
> + SDNode *N, SDNode *Op, SDValue &Base, SDValue &Offset,
> + ISD::MemIndexedMode &AM, SelectionDAG &DAG) const {
> EVT VT;
> SDValue Ptr;
> if (LoadSDNode *LD = dyn_cast<LoadSDNode>(N)) {
> @@ -7798,9 +7830,8 @@ bool ARM64TargetLowering::getPostIndexed
> return true;
> }
>
> -void ARM64TargetLowering::ReplaceNodeResults(SDNode *N,
> - SmallVectorImpl<SDValue> &Results,
> - SelectionDAG &DAG) const {
> +void AArch64TargetLowering::ReplaceNodeResults(
> + SDNode *N, SmallVectorImpl<SDValue> &Results, SelectionDAG &DAG) const {
> switch (N->getOpcode()) {
> default:
> llvm_unreachable("Don't know how to custom expand this");
> @@ -7812,7 +7843,7 @@ void ARM64TargetLowering::ReplaceNodeRes
> }
> }
>
> -bool ARM64TargetLowering::shouldExpandAtomicInIR(Instruction *Inst) const {
> +bool AArch64TargetLowering::shouldExpandAtomicInIR(Instruction *Inst) const {
> // Loads and stores less than 128-bits are already atomic; ones above that
> // are doomed anyway, so defer to the default libcall and blame the OS when
> // things go wrong:
> @@ -7825,8 +7856,8 @@ bool ARM64TargetLowering::shouldExpandAt
> return Inst->getType()->getPrimitiveSizeInBits() <= 128;
> }
>
> -Value *ARM64TargetLowering::emitLoadLinked(IRBuilder<> &Builder, Value *Addr,
> - AtomicOrdering Ord) const {
> +Value *AArch64TargetLowering::emitLoadLinked(IRBuilder<> &Builder, Value *Addr,
> + AtomicOrdering Ord) const {
> Module *M = Builder.GetInsertBlock()->getParent()->getParent();
> Type *ValTy = cast<PointerType>(Addr->getType())->getElementType();
> bool IsAcquire =
> @@ -7837,7 +7868,7 @@ Value *ARM64TargetLowering::emitLoadLink
> // single i128 here.
> if (ValTy->getPrimitiveSizeInBits() == 128) {
> Intrinsic::ID Int =
> - IsAcquire ? Intrinsic::arm64_ldaxp : Intrinsic::arm64_ldxp;
> + IsAcquire ? Intrinsic::aarch64_ldaxp : Intrinsic::aarch64_ldxp;
> Function *Ldxr = llvm::Intrinsic::getDeclaration(M, Int);
>
> Addr = Builder.CreateBitCast(Addr, Type::getInt8PtrTy(M->getContext()));
> @@ -7853,7 +7884,7 @@ Value *ARM64TargetLowering::emitLoadLink
>
> Type *Tys[] = { Addr->getType() };
> Intrinsic::ID Int =
> - IsAcquire ? Intrinsic::arm64_ldaxr : Intrinsic::arm64_ldxr;
> + IsAcquire ? Intrinsic::aarch64_ldaxr : Intrinsic::aarch64_ldxr;
> Function *Ldxr = llvm::Intrinsic::getDeclaration(M, Int, Tys);
>
> return Builder.CreateTruncOrBitCast(
> @@ -7861,9 +7892,9 @@ Value *ARM64TargetLowering::emitLoadLink
> cast<PointerType>(Addr->getType())->getElementType());
> }
>
> -Value *ARM64TargetLowering::emitStoreConditional(IRBuilder<> &Builder,
> - Value *Val, Value *Addr,
> - AtomicOrdering Ord) const {
> +Value *AArch64TargetLowering::emitStoreConditional(IRBuilder<> &Builder,
> + Value *Val, Value *Addr,
> + AtomicOrdering Ord) const {
> Module *M = Builder.GetInsertBlock()->getParent()->getParent();
> bool IsRelease =
> Ord == Release || Ord == AcquireRelease || Ord == SequentiallyConsistent;
> @@ -7873,7 +7904,7 @@ Value *ARM64TargetLowering::emitStoreCon
> // before the call.
> if (Val->getType()->getPrimitiveSizeInBits() == 128) {
> Intrinsic::ID Int =
> - IsRelease ? Intrinsic::arm64_stlxp : Intrinsic::arm64_stxp;
> + IsRelease ? Intrinsic::aarch64_stlxp : Intrinsic::aarch64_stxp;
> Function *Stxr = Intrinsic::getDeclaration(M, Int);
> Type *Int64Ty = Type::getInt64Ty(M->getContext());
>
> @@ -7884,7 +7915,7 @@ Value *ARM64TargetLowering::emitStoreCon
> }
>
> Intrinsic::ID Int =
> - IsRelease ? Intrinsic::arm64_stlxr : Intrinsic::arm64_stxr;
> + IsRelease ? Intrinsic::aarch64_stlxr : Intrinsic::aarch64_stxr;
> Type *Tys[] = { Addr->getType() };
> Function *Stxr = Intrinsic::getDeclaration(M, Int, Tys);
>
>
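To see those last two hooks from the IR side: the atomic-expansion pass turns an atomic RMW into a load-linked/store-conditional retry loop, and emitLoadLinked/emitStoreConditional supply the ldaxr/ldxr and stlxr/stxr calls for it. A hedged user-level sketch with std::atomic, which compiles down to the same kind of loop:

    #include <atomic>
    #include <cassert>
    #include <cstdint>

    // Fetch-add written as an explicit CAS retry loop; each failed
    // attempt corresponds to the store-conditional reporting that the
    // exclusive monitor was lost.
    static uint64_t fetchAddAcqRel(std::atomic<uint64_t> &v, uint64_t n) {
      uint64_t old = v.load(std::memory_order_relaxed);
      while (!v.compare_exchange_weak(old, old + n,
                                      std::memory_order_acq_rel,
                                      std::memory_order_relaxed)) {
        // retry with the freshly observed value in `old`
      }
      return old;
    }

    int main() {
      std::atomic<uint64_t> v{41};
      assert(fetchAddAcqRel(v, 1) == 41 && v.load() == 42);
      return 0;
    }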
> Copied: llvm/trunk/lib/Target/AArch64/AArch64ISelLowering.h (from r209576, llvm/trunk/lib/Target/ARM64/ARM64ISelLowering.h)
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/AArch64/AArch64ISelLowering.h?p2=llvm/trunk/lib/Target/AArch64/AArch64ISelLowering.h&p1=llvm/trunk/lib/Target/ARM64/ARM64ISelLowering.h&r1=209576&r2=209577&rev=209577&view=diff
> ==============================================================================
> --- llvm/trunk/lib/Target/ARM64/ARM64ISelLowering.h (original)
> +++ llvm/trunk/lib/Target/AArch64/AArch64ISelLowering.h Sat May 24 07:50:23 2014
> @@ -1,4 +1,4 @@
> -//==-- ARM64ISelLowering.h - ARM64 DAG Lowering Interface --------*- C++ -*-==//
> +//==-- AArch64ISelLowering.h - AArch64 DAG Lowering Interface ----*- C++ -*-==//
> //
> // The LLVM Compiler Infrastructure
> //
> @@ -7,13 +7,13 @@
> //
> //===----------------------------------------------------------------------===//
> //
> -// This file defines the interfaces that ARM64 uses to lower LLVM code into a
> +// This file defines the interfaces that AArch64 uses to lower LLVM code into a
> // selection DAG.
> //
> //===----------------------------------------------------------------------===//
>
> -#ifndef LLVM_TARGET_ARM64_ISELLOWERING_H
> -#define LLVM_TARGET_ARM64_ISELLOWERING_H
> +#ifndef LLVM_TARGET_AArch64_ISELLOWERING_H
> +#define LLVM_TARGET_AArch64_ISELLOWERING_H
>
> #include "llvm/CodeGen/CallingConvLower.h"
> #include "llvm/CodeGen/SelectionDAG.h"
> @@ -22,7 +22,7 @@
>
> namespace llvm {
>
> -namespace ARM64ISD {
> +namespace AArch64ISD {
>
> enum {
> FIRST_NUMBER = ISD::BUILTIN_OP_END,
> @@ -188,16 +188,16 @@ enum {
> ST4LANEpost
> };
>
> -} // end namespace ARM64ISD
> +} // end namespace AArch64ISD
>
> -class ARM64Subtarget;
> -class ARM64TargetMachine;
> +class AArch64Subtarget;
> +class AArch64TargetMachine;
>
> -class ARM64TargetLowering : public TargetLowering {
> +class AArch64TargetLowering : public TargetLowering {
> bool RequireStrictAlign;
>
> public:
> - explicit ARM64TargetLowering(ARM64TargetMachine &TM);
> + explicit AArch64TargetLowering(AArch64TargetMachine &TM);
>
> /// Selects the correct CCAssignFn for the given CallingConvention
> /// value.
> @@ -325,9 +325,9 @@ public:
> bool shouldExpandAtomicInIR(Instruction *Inst) const override;
>
> private:
> - /// Subtarget - Keep a pointer to the ARM64Subtarget around so that we can
> + /// Subtarget - Keep a pointer to the AArch64Subtarget around so that we can
> /// make the right decision when generating code for different targets.
> - const ARM64Subtarget *Subtarget;
> + const AArch64Subtarget *Subtarget;
>
> void addTypeForNEON(EVT VT, EVT PromotedBitwiseVT);
> void addDRTypeForNEON(MVT VT);
> @@ -454,11 +454,11 @@ private:
> SelectionDAG &DAG) const override;
> };
>
> -namespace ARM64 {
> +namespace AArch64 {
> FastISel *createFastISel(FunctionLoweringInfo &funcInfo,
> const TargetLibraryInfo *libInfo);
> -} // end namespace ARM64
> +} // end namespace AArch64
>
> } // end namespace llvm
>
> -#endif // LLVM_TARGET_ARM64_ISELLOWERING_H
> +#endif // LLVM_TARGET_AArch64_ISELLOWERING_H
>
> Copied: llvm/trunk/lib/Target/AArch64/AArch64InstrAtomics.td (from r209576, llvm/trunk/lib/Target/ARM64/ARM64InstrAtomics.td)
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/AArch64/AArch64InstrAtomics.td?p2=llvm/trunk/lib/Target/AArch64/AArch64InstrAtomics.td&p1=llvm/trunk/lib/Target/ARM64/ARM64InstrAtomics.td&r1=209576&r2=209577&rev=209577&view=diff
> ==============================================================================
> --- llvm/trunk/lib/Target/ARM64/ARM64InstrAtomics.td (original)
> +++ llvm/trunk/lib/Target/AArch64/AArch64InstrAtomics.td Sat May 24 07:50:23 2014
> @@ -1,4 +1,4 @@
> -//===- ARM64InstrAtomics.td - ARM64 Atomic codegen support -*- tablegen -*-===//
> +//=- AArch64InstrAtomics.td - AArch64 Atomic codegen support -*- tablegen -*-=//
> //
> // The LLVM Compiler Infrastructure
> //
> @@ -7,7 +7,7 @@
> //
> //===----------------------------------------------------------------------===//
> //
> -// ARM64 Atomic operand code-gen constructs.
> +// AArch64 Atomic operand code-gen constructs.
> //
> //===----------------------------------------------------------------------===//
>
> @@ -117,7 +117,7 @@ class releasing_store<PatFrag base>
> return Ordering == Release || Ordering == SequentiallyConsistent;
> }]>;
>
> -// An atomic store operation that doesn't actually need to be atomic on ARM64.
> +// An atomic store operation that doesn't actually need to be atomic on AArch64.
> class relaxed_store<PatFrag base>
> : PatFrag<(ops node:$ptr, node:$val), (base node:$ptr, node:$val), [{
> AtomicOrdering Ordering = cast<AtomicSDNode>(N)->getOrdering();
> @@ -202,19 +202,19 @@ def : Pat<(relaxed_store<atomic_store_64
>
> // Load-exclusives.
>
> -def ldxr_1 : PatFrag<(ops node:$ptr), (int_arm64_ldxr node:$ptr), [{
> +def ldxr_1 : PatFrag<(ops node:$ptr), (int_aarch64_ldxr node:$ptr), [{
> return cast<MemIntrinsicSDNode>(N)->getMemoryVT() == MVT::i8;
> }]>;
>
> -def ldxr_2 : PatFrag<(ops node:$ptr), (int_arm64_ldxr node:$ptr), [{
> +def ldxr_2 : PatFrag<(ops node:$ptr), (int_aarch64_ldxr node:$ptr), [{
> return cast<MemIntrinsicSDNode>(N)->getMemoryVT() == MVT::i16;
> }]>;
>
> -def ldxr_4 : PatFrag<(ops node:$ptr), (int_arm64_ldxr node:$ptr), [{
> +def ldxr_4 : PatFrag<(ops node:$ptr), (int_aarch64_ldxr node:$ptr), [{
> return cast<MemIntrinsicSDNode>(N)->getMemoryVT() == MVT::i32;
> }]>;
>
> -def ldxr_8 : PatFrag<(ops node:$ptr), (int_arm64_ldxr node:$ptr), [{
> +def ldxr_8 : PatFrag<(ops node:$ptr), (int_aarch64_ldxr node:$ptr), [{
> return cast<MemIntrinsicSDNode>(N)->getMemoryVT() == MVT::i64;
> }]>;
>
> @@ -235,19 +235,19 @@ def : Pat<(and (ldxr_4 GPR64sp:$addr), 0
>
> // Load-exclusives.
>
> -def ldaxr_1 : PatFrag<(ops node:$ptr), (int_arm64_ldaxr node:$ptr), [{
> +def ldaxr_1 : PatFrag<(ops node:$ptr), (int_aarch64_ldaxr node:$ptr), [{
> return cast<MemIntrinsicSDNode>(N)->getMemoryVT() == MVT::i8;
> }]>;
>
> -def ldaxr_2 : PatFrag<(ops node:$ptr), (int_arm64_ldaxr node:$ptr), [{
> +def ldaxr_2 : PatFrag<(ops node:$ptr), (int_aarch64_ldaxr node:$ptr), [{
> return cast<MemIntrinsicSDNode>(N)->getMemoryVT() == MVT::i16;
> }]>;
>
> -def ldaxr_4 : PatFrag<(ops node:$ptr), (int_arm64_ldaxr node:$ptr), [{
> +def ldaxr_4 : PatFrag<(ops node:$ptr), (int_aarch64_ldaxr node:$ptr), [{
> return cast<MemIntrinsicSDNode>(N)->getMemoryVT() == MVT::i32;
> }]>;
>
> -def ldaxr_8 : PatFrag<(ops node:$ptr), (int_arm64_ldaxr node:$ptr), [{
> +def ldaxr_8 : PatFrag<(ops node:$ptr), (int_aarch64_ldaxr node:$ptr), [{
> return cast<MemIntrinsicSDNode>(N)->getMemoryVT() == MVT::i64;
> }]>;
>
> @@ -269,22 +269,22 @@ def : Pat<(and (ldaxr_4 GPR64sp:$addr),
> // Store-exclusives.
>
> def stxr_1 : PatFrag<(ops node:$val, node:$ptr),
> - (int_arm64_stxr node:$val, node:$ptr), [{
> + (int_aarch64_stxr node:$val, node:$ptr), [{
> return cast<MemIntrinsicSDNode>(N)->getMemoryVT() == MVT::i8;
> }]>;
>
> def stxr_2 : PatFrag<(ops node:$val, node:$ptr),
> - (int_arm64_stxr node:$val, node:$ptr), [{
> + (int_aarch64_stxr node:$val, node:$ptr), [{
> return cast<MemIntrinsicSDNode>(N)->getMemoryVT() == MVT::i16;
> }]>;
>
> def stxr_4 : PatFrag<(ops node:$val, node:$ptr),
> - (int_arm64_stxr node:$val, node:$ptr), [{
> + (int_aarch64_stxr node:$val, node:$ptr), [{
> return cast<MemIntrinsicSDNode>(N)->getMemoryVT() == MVT::i32;
> }]>;
>
> def stxr_8 : PatFrag<(ops node:$val, node:$ptr),
> - (int_arm64_stxr node:$val, node:$ptr), [{
> + (int_aarch64_stxr node:$val, node:$ptr), [{
> return cast<MemIntrinsicSDNode>(N)->getMemoryVT() == MVT::i64;
> }]>;
>
> @@ -315,22 +315,22 @@ def : Pat<(stxr_4 (and GPR64:$val, 0xfff
> // Store-release-exclusives.
>
> def stlxr_1 : PatFrag<(ops node:$val, node:$ptr),
> - (int_arm64_stlxr node:$val, node:$ptr), [{
> + (int_aarch64_stlxr node:$val, node:$ptr), [{
> return cast<MemIntrinsicSDNode>(N)->getMemoryVT() == MVT::i8;
> }]>;
>
> def stlxr_2 : PatFrag<(ops node:$val, node:$ptr),
> - (int_arm64_stlxr node:$val, node:$ptr), [{
> + (int_aarch64_stlxr node:$val, node:$ptr), [{
> return cast<MemIntrinsicSDNode>(N)->getMemoryVT() == MVT::i16;
> }]>;
>
> def stlxr_4 : PatFrag<(ops node:$val, node:$ptr),
> - (int_arm64_stlxr node:$val, node:$ptr), [{
> + (int_aarch64_stlxr node:$val, node:$ptr), [{
> return cast<MemIntrinsicSDNode>(N)->getMemoryVT() == MVT::i32;
> }]>;
>
> def stlxr_8 : PatFrag<(ops node:$val, node:$ptr),
> - (int_arm64_stlxr node:$val, node:$ptr), [{
> + (int_aarch64_stlxr node:$val, node:$ptr), [{
> return cast<MemIntrinsicSDNode>(N)->getMemoryVT() == MVT::i64;
> }]>;
>
> @@ -361,4 +361,4 @@ def : Pat<(stlxr_4 (and GPR64:$val, 0xff
>
> // And clear exclusive.
>
> -def : Pat<(int_arm64_clrex), (CLREX 0xf)>;
> +def : Pat<(int_aarch64_clrex), (CLREX 0xf)>;
>
> Copied: llvm/trunk/lib/Target/AArch64/AArch64InstrFormats.td (from r209576, llvm/trunk/lib/Target/ARM64/ARM64InstrFormats.td)
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/AArch64/AArch64InstrFormats.td?p2=llvm/trunk/lib/Target/AArch64/AArch64InstrFormats.td&p1=llvm/trunk/lib/Target/ARM64/ARM64InstrFormats.td&r1=209576&r2=209577&rev=209577&view=diff
> ==============================================================================
> --- llvm/trunk/lib/Target/ARM64/ARM64InstrFormats.td (original)
> +++ llvm/trunk/lib/Target/AArch64/AArch64InstrFormats.td Sat May 24 07:50:23 2014
> @@ -1,4 +1,4 @@
> -//===- ARM64InstrFormats.td - ARM64 Instruction Formats ------*- tblgen -*-===//
> +//===- AArch64InstrFormats.td - AArch64 Instruction Formats --*- tblgen -*-===//
> //
> // The LLVM Compiler Infrastructure
> //
> @@ -8,7 +8,7 @@
> //===----------------------------------------------------------------------===//
>
> //===----------------------------------------------------------------------===//
> -// Describe ARM64 instructions format here
> +// Describe AArch64 instructions format here
> //
>
> // Format specifies the encoding used by the instruction. This is part of the
> @@ -21,8 +21,8 @@ class Format<bits<2> val> {
> def PseudoFrm : Format<0>;
> def NormalFrm : Format<1>; // Do we need any others?
>
> -// ARM64 Instruction Format
> -class ARM64Inst<Format f, string cstr> : Instruction {
> +// AArch64 Instruction Format
> +class AArch64Inst<Format f, string cstr> : Instruction {
> field bits<32> Inst; // Instruction encoding.
> // Mask of bits that cause an encoding to be UNPREDICTABLE.
> // If a bit is set, then if the corresponding bit in the
> @@ -32,7 +32,7 @@ class ARM64Inst<Format f, string cstr> :
> // SoftFail is the generic name for this field, but we alias it so
> // as to make it more obvious what it means in ARM-land.
> field bits<32> SoftFail = Unpredictable;
> - let Namespace = "ARM64";
> + let Namespace = "AArch64";
> Format F = f;
> bits<2> Form = F.Value;
> let Pattern = [];
> @@ -41,7 +41,7 @@ class ARM64Inst<Format f, string cstr> :
>
> // Pseudo instructions (don't have encoding information)
> class Pseudo<dag oops, dag iops, list<dag> pattern, string cstr = "">
> - : ARM64Inst<PseudoFrm, cstr> {
> + : AArch64Inst<PseudoFrm, cstr> {
> dag OutOperandList = oops;
> dag InOperandList = iops;
> let Pattern = pattern;
> @@ -49,7 +49,7 @@ class Pseudo<dag oops, dag iops, list<da
> }
>
> // Real instructions (have encoding information)
> -class EncodedI<string cstr, list<dag> pattern> : ARM64Inst<NormalFrm, cstr> {
> +class EncodedI<string cstr, list<dag> pattern> : AArch64Inst<NormalFrm, cstr> {
> let Pattern = pattern;
> let Size = 4;
> }
> @@ -440,11 +440,11 @@ def vecshiftL64 : Operand<i32>, ImmLeaf<
> // Crazy immediate formats used by 32-bit and 64-bit logical immediate
> // instructions for splatting repeating bit patterns across the immediate.
> def logical_imm32_XFORM : SDNodeXForm<imm, [{
> - uint64_t enc = ARM64_AM::encodeLogicalImmediate(N->getZExtValue(), 32);
> + uint64_t enc = AArch64_AM::encodeLogicalImmediate(N->getZExtValue(), 32);
> return CurDAG->getTargetConstant(enc, MVT::i32);
> }]>;
> def logical_imm64_XFORM : SDNodeXForm<imm, [{
> - uint64_t enc = ARM64_AM::encodeLogicalImmediate(N->getZExtValue(), 64);
> + uint64_t enc = AArch64_AM::encodeLogicalImmediate(N->getZExtValue(), 64);
> return CurDAG->getTargetConstant(enc, MVT::i32);
> }]>;
>
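Since logical immediates come up throughout this patch (isLogicalImmediate, encodeLogicalImmediate), a brute-force standalone checker may help readers. This is my own sketch of the encodability rule as I understand it, not LLVM's encoder:

    #include <cassert>
    #include <cstdint>

    static uint64_t rotr64(uint64_t v, unsigned r) {
      return r == 0 ? v : (v >> r) | (v << (64 - r));
    }

    // A 64-bit logical immediate is a rotation of a contiguous run of
    // 1..(size-1) ones replicated across the register, for an element
    // size of 2, 4, 8, 16, 32 or 64 bits.
    static bool isLogicalImm64(uint64_t v) {
      if (v == 0 || v == ~0ull)
        return false;                         // all-zeros/all-ones don't encode
      for (unsigned size = 2; size <= 64; size *= 2)
        for (unsigned ones = 1; ones < size; ++ones) {
          uint64_t elem = (1ull << ones) - 1; // contiguous run of ones
          uint64_t pattern = 0;
          for (unsigned i = 0; i < 64; i += size)
            pattern |= elem << i;             // replicate across 64 bits
          for (unsigned rot = 0; rot < 64; ++rot)
            if (rotr64(pattern, rot) == v)
              return true;
        }
      return false;
    }

    int main() {
      assert(isLogicalImm64(0x00ff00ff00ff00ffull));   // repeating 16-bit element
      assert(isLogicalImm64(0x0000000000000ff0ull));   // rotated run of eight ones
      assert(!isLogicalImm64(0x0000000000001234ull));  // not a splatted run of ones
      return 0;
    }

The 32-bit variant (logical_imm32 above) is the same idea restricted to element sizes up to 32.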
> @@ -457,13 +457,13 @@ def LogicalImm64Operand : AsmOperandClas
> let DiagnosticType = "LogicalSecondSource";
> }
> def logical_imm32 : Operand<i32>, PatLeaf<(imm), [{
> - return ARM64_AM::isLogicalImmediate(N->getZExtValue(), 32);
> + return AArch64_AM::isLogicalImmediate(N->getZExtValue(), 32);
> }], logical_imm32_XFORM> {
> let PrintMethod = "printLogicalImm32";
> let ParserMatchClass = LogicalImm32Operand;
> }
> def logical_imm64 : Operand<i64>, PatLeaf<(imm), [{
> - return ARM64_AM::isLogicalImmediate(N->getZExtValue(), 64);
> + return AArch64_AM::isLogicalImmediate(N->getZExtValue(), 64);
> }], logical_imm64_XFORM> {
> let PrintMethod = "printLogicalImm64";
> let ParserMatchClass = LogicalImm64Operand;
> @@ -661,10 +661,10 @@ class arith_extended_reg32to64<ValueType
> // Floating-point immediate.
> def fpimm32 : Operand<f32>,
> PatLeaf<(f32 fpimm), [{
> - return ARM64_AM::getFP32Imm(N->getValueAPF()) != -1;
> + return AArch64_AM::getFP32Imm(N->getValueAPF()) != -1;
> }], SDNodeXForm<fpimm, [{
> APFloat InVal = N->getValueAPF();
> - uint32_t enc = ARM64_AM::getFP32Imm(InVal);
> + uint32_t enc = AArch64_AM::getFP32Imm(InVal);
> return CurDAG->getTargetConstant(enc, MVT::i32);
> }]>> {
> let ParserMatchClass = FPImmOperand;
> @@ -672,10 +672,10 @@ def fpimm32 : Operand<f32>,
> }
> def fpimm64 : Operand<f64>,
> PatLeaf<(f64 fpimm), [{
> - return ARM64_AM::getFP64Imm(N->getValueAPF()) != -1;
> + return AArch64_AM::getFP64Imm(N->getValueAPF()) != -1;
> }], SDNodeXForm<fpimm, [{
> APFloat InVal = N->getValueAPF();
> - uint32_t enc = ARM64_AM::getFP64Imm(InVal);
> + uint32_t enc = AArch64_AM::getFP64Imm(InVal);
> return CurDAG->getTargetConstant(enc, MVT::i32);
> }]>> {
> let ParserMatchClass = FPImmOperand;
> @@ -743,12 +743,12 @@ def VectorIndexD : Operand<i64>, ImmLeaf
> // are encoded as the eight bit value 'abcdefgh'.
> def simdimmtype10 : Operand<i32>,
> PatLeaf<(f64 fpimm), [{
> - return ARM64_AM::isAdvSIMDModImmType10(N->getValueAPF()
> + return AArch64_AM::isAdvSIMDModImmType10(N->getValueAPF()
> .bitcastToAPInt()
> .getZExtValue());
> }], SDNodeXForm<fpimm, [{
> APFloat InVal = N->getValueAPF();
> - uint32_t enc = ARM64_AM::encodeAdvSIMDModImmType10(N->getValueAPF()
> + uint32_t enc = AArch64_AM::encodeAdvSIMDModImmType10(N->getValueAPF()
> .bitcastToAPInt()
> .getZExtValue());
> return CurDAG->getTargetConstant(enc, MVT::i32);
> @@ -982,7 +982,7 @@ def am_brcond : Operand<OtherVT> {
>
> class BranchCond : I<(outs), (ins ccode:$cond, am_brcond:$target),
> "b", ".$cond\t$target", "",
> - [(ARM64brcond bb:$target, imm:$cond, NZCV)]>,
> + [(AArch64brcond bb:$target, imm:$cond, NZCV)]>,
> Sched<[WriteBr]> {
> let isBranch = 1;
> let isTerminator = 1;
> @@ -1759,7 +1759,7 @@ multiclass AddSubS<bit isSub, string mne
> //---
> def SDTA64EXTR : SDTypeProfile<1, 3, [SDTCisSameAs<0, 1>, SDTCisSameAs<0, 2>,
> SDTCisPtrTy<3>]>;
> -def ARM64Extr : SDNode<"ARM64ISD::EXTR", SDTA64EXTR>;
> +def AArch64Extr : SDNode<"AArch64ISD::EXTR", SDTA64EXTR>;
>
> class BaseExtractImm<RegisterClass regtype, Operand imm_type, string asm,
> list<dag> patterns>
> @@ -1782,7 +1782,7 @@ class BaseExtractImm<RegisterClass regty
> multiclass ExtractImm<string asm> {
> def Wrri : BaseExtractImm<GPR32, imm0_31, asm,
> [(set GPR32:$Rd,
> - (ARM64Extr GPR32:$Rn, GPR32:$Rm, imm0_31:$imm))]> {
> + (AArch64Extr GPR32:$Rn, GPR32:$Rm, imm0_31:$imm))]> {
> let Inst{31} = 0;
> let Inst{22} = 0;
> // imm<5> must be zero.
> @@ -1790,7 +1790,7 @@ multiclass ExtractImm<string asm> {
> }
> def Xrri : BaseExtractImm<GPR64, imm0_63, asm,
> [(set GPR64:$Rd,
> - (ARM64Extr GPR64:$Rn, GPR64:$Rm, imm0_63:$imm))]> {
> + (AArch64Extr GPR64:$Rn, GPR64:$Rm, imm0_63:$imm))]> {
>
> let Inst{31} = 1;
> let Inst{22} = 1;
> @@ -2081,7 +2081,7 @@ class BaseCondSelect<bit op, bits<2> op2
> : I<(outs regtype:$Rd), (ins regtype:$Rn, regtype:$Rm, ccode:$cond),
> asm, "\t$Rd, $Rn, $Rm, $cond", "",
> [(set regtype:$Rd,
> - (ARM64csel regtype:$Rn, regtype:$Rm, (i32 imm:$cond), NZCV))]>,
> + (AArch64csel regtype:$Rn, regtype:$Rm, (i32 imm:$cond), NZCV))]>,
> Sched<[WriteI, ReadI, ReadI]> {
> let Uses = [NZCV];
>
> @@ -2113,7 +2113,7 @@ class BaseCondSelectOp<bit op, bits<2> o
> : I<(outs regtype:$Rd), (ins regtype:$Rn, regtype:$Rm, ccode:$cond),
> asm, "\t$Rd, $Rn, $Rm, $cond", "",
> [(set regtype:$Rd,
> - (ARM64csel regtype:$Rn, (frag regtype:$Rm),
> + (AArch64csel regtype:$Rn, (frag regtype:$Rm),
> (i32 imm:$cond), NZCV))]>,
> Sched<[WriteI, ReadI, ReadI]> {
> let Uses = [NZCV];
> @@ -2133,8 +2133,8 @@ class BaseCondSelectOp<bit op, bits<2> o
> }
>
> def inv_cond_XFORM : SDNodeXForm<imm, [{
> - ARM64CC::CondCode CC = static_cast<ARM64CC::CondCode>(N->getZExtValue());
> - return CurDAG->getTargetConstant(ARM64CC::getInvertedCondCode(CC), MVT::i32);
> + AArch64CC::CondCode CC = static_cast<AArch64CC::CondCode>(N->getZExtValue());
> + return CurDAG->getTargetConstant(AArch64CC::getInvertedCondCode(CC), MVT::i32);
> }]>;
>
> multiclass CondSelectOp<bit op, bits<2> op2, string asm, PatFrag frag> {
> @@ -2145,11 +2145,11 @@ multiclass CondSelectOp<bit op, bits<2>
> let Inst{31} = 1;
> }
>
> - def : Pat<(ARM64csel (frag GPR32:$Rm), GPR32:$Rn, (i32 imm:$cond), NZCV),
> + def : Pat<(AArch64csel (frag GPR32:$Rm), GPR32:$Rn, (i32 imm:$cond), NZCV),
> (!cast<Instruction>(NAME # Wr) GPR32:$Rn, GPR32:$Rm,
> (inv_cond_XFORM imm:$cond))>;
>
> - def : Pat<(ARM64csel (frag GPR64:$Rm), GPR64:$Rn, (i32 imm:$cond), NZCV),
> + def : Pat<(AArch64csel (frag GPR64:$Rm), GPR64:$Rn, (i32 imm:$cond), NZCV),
> (!cast<Instruction>(NAME # Xr) GPR64:$Rn, GPR64:$Rm,
> (inv_cond_XFORM imm:$cond))>;
> }
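
(The inv_cond_XFORM above ends up in AArch64CC::getInvertedCondCode. The A64 condition codes come in adjacent pairs -- EQ/NE, HS/LO, MI/PL, VS/VC, HI/LS, GE/LT, GT/LE -- so inversion is just a flip of bit 0 of the 4-bit encoding. A minimal sketch, assuming the usual encodings and ignoring AL/NV:

#include <cassert>

enum CondCode { EQ = 0, NE, HS, LO, MI, PL, VS, VC, HI, LS, GE, LT, GT, LE, AL, NV };

CondCode invertedCondCodeSketch(CondCode CC) {
  assert(CC != AL && CC != NV && "AL/NV have no meaningful inverse");
  return (CondCode)(CC ^ 0x1);
}

That is why the two Pats above can reuse the same instruction with the operands swapped: matching the fragment on the other operand only costs a condition inversion.)
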
> @@ -2194,7 +2194,7 @@ class uimm12_scaled<int Scale> : Operand
> let ParserMatchClass
> = !cast<AsmOperandClass>("UImm12OffsetScale" # Scale # "Operand");
> let EncoderMethod
> - = "getLdStUImm12OpValue<ARM64::fixup_arm64_ldst_imm12_scale" # Scale # ">";
> + = "getLdStUImm12OpValue<AArch64::fixup_aarch64_ldst_imm12_scale" # Scale # ">";
> let PrintMethod = "printUImm12Offset<" # Scale # ">";
> }
>
> @@ -2782,7 +2782,7 @@ class BasePrefetchRO<bits<2> sz, bit V,
> multiclass PrefetchRO<bits<2> sz, bit V, bits<2> opc, string asm> {
> def roW : BasePrefetchRO<sz, V, opc, (outs),
> (ins prfop:$Rt, GPR64sp:$Rn, GPR32:$Rm, ro_Wextend64:$extend),
> - asm, [(ARM64Prefetch imm:$Rt,
> + asm, [(AArch64Prefetch imm:$Rt,
> (ro_Windexed64 GPR64sp:$Rn, GPR32:$Rm,
> ro_Wextend64:$extend))]> {
> let Inst{13} = 0b0;
> @@ -2790,7 +2790,7 @@ multiclass PrefetchRO<bits<2> sz, bit V,
>
> def roX : BasePrefetchRO<sz, V, opc, (outs),
> (ins prfop:$Rt, GPR64sp:$Rn, GPR64:$Rm, ro_Xextend64:$extend),
> - asm, [(ARM64Prefetch imm:$Rt,
> + asm, [(AArch64Prefetch imm:$Rt,
> (ro_Xindexed64 GPR64sp:$Rn, GPR64:$Rm,
> ro_Xextend64:$extend))]> {
> let Inst{13} = 0b1;
> @@ -3912,7 +3912,7 @@ class BaseFPCondSelect<RegisterClass reg
> : I<(outs regtype:$Rd), (ins regtype:$Rn, regtype:$Rm, ccode:$cond),
> asm, "\t$Rd, $Rn, $Rm, $cond", "",
> [(set regtype:$Rd,
> - (ARM64csel (vt regtype:$Rn), regtype:$Rm,
> + (AArch64csel (vt regtype:$Rn), regtype:$Rm,
> (i32 imm:$cond), NZCV))]>,
> Sched<[WriteF]> {
> bits<5> Rd;
> @@ -5074,28 +5074,28 @@ multiclass SIMDLongThreeVectorSQDMLXTied
> asm, ".4s", ".4h", ".4h",
> [(set (v4i32 V128:$dst),
> (Accum (v4i32 V128:$Rd),
> - (v4i32 (int_arm64_neon_sqdmull (v4i16 V64:$Rn),
> + (v4i32 (int_aarch64_neon_sqdmull (v4i16 V64:$Rn),
> (v4i16 V64:$Rm)))))]>;
> def v8i16_v4i32 : BaseSIMDDifferentThreeVectorTied<U, 0b011, opc,
> V128, V128, V128,
> asm#"2", ".4s", ".8h", ".8h",
> [(set (v4i32 V128:$dst),
> (Accum (v4i32 V128:$Rd),
> - (v4i32 (int_arm64_neon_sqdmull (extract_high_v8i16 V128:$Rn),
> + (v4i32 (int_aarch64_neon_sqdmull (extract_high_v8i16 V128:$Rn),
> (extract_high_v8i16 V128:$Rm)))))]>;
> def v2i32_v2i64 : BaseSIMDDifferentThreeVectorTied<U, 0b100, opc,
> V128, V64, V64,
> asm, ".2d", ".2s", ".2s",
> [(set (v2i64 V128:$dst),
> (Accum (v2i64 V128:$Rd),
> - (v2i64 (int_arm64_neon_sqdmull (v2i32 V64:$Rn),
> + (v2i64 (int_aarch64_neon_sqdmull (v2i32 V64:$Rn),
> (v2i32 V64:$Rm)))))]>;
> def v4i32_v2i64 : BaseSIMDDifferentThreeVectorTied<U, 0b101, opc,
> V128, V128, V128,
> asm#"2", ".2d", ".4s", ".4s",
> [(set (v2i64 V128:$dst),
> (Accum (v2i64 V128:$Rd),
> - (v2i64 (int_arm64_neon_sqdmull (extract_high_v4i32 V128:$Rn),
> + (v2i64 (int_aarch64_neon_sqdmull (extract_high_v4i32 V128:$Rn),
> (extract_high_v4i32 V128:$Rm)))))]>;
> }
>
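
Since int_arm64_neon_sqdmull -> int_aarch64_neon_sqdmull recurs through all of these accumulating multiclasses, a quick per-lane reminder of what the intrinsic computes (signed saturating doubling multiply long); this is a scalar reference sketch, not LLVM code:

#include <cstdint>
#include <limits>

// One 16-bit -> 32-bit lane of sqdmull: widen, multiply, double, saturate.
int32_t sqdmull_lane(int16_t A, int16_t B) {
  int64_t P = 2 * (int64_t)A * (int64_t)B;
  if (P > std::numeric_limits<int32_t>::max())
    return std::numeric_limits<int32_t>::max(); // only hit for A == B == INT16_MIN
  if (P < std::numeric_limits<int32_t>::min())
    return std::numeric_limits<int32_t>::min();
  return (int32_t)P;
}

The Accum operator wrapped around it in the patterns above is then, as far as I can tell, the saturating add or subtract that turns this into SQDMLAL / SQDMLSL.
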
> @@ -5140,7 +5140,7 @@ class BaseSIMDBitwiseExtract<bit size, R
> "{\t$Rd" # kind # ", $Rn" # kind # ", $Rm" # kind # ", $imm" #
> "|" # kind # "\t$Rd, $Rn, $Rm, $imm}", "",
> [(set (vty regtype:$Rd),
> - (ARM64ext regtype:$Rn, regtype:$Rm, (i32 imm:$imm)))]>,
> + (AArch64ext regtype:$Rn, regtype:$Rm, (i32 imm:$imm)))]>,
> Sched<[WriteV]> {
> bits<5> Rd;
> bits<5> Rn;
> @@ -5409,7 +5409,7 @@ class BaseSIMDCmpTwoScalar<bit U, bits<2
>
> class SIMDInexactCvtTwoScalar<bits<5> opcode, string asm>
> : I<(outs FPR32:$Rd), (ins FPR64:$Rn), asm, "\t$Rd, $Rn", "",
> - [(set (f32 FPR32:$Rd), (int_arm64_sisd_fcvtxn (f64 FPR64:$Rn)))]>,
> + [(set (f32 FPR32:$Rd), (int_aarch64_sisd_fcvtxn (f64 FPR64:$Rn)))]>,
> Sched<[WriteV]> {
> bits<5> Rd;
> bits<5> Rn;
> @@ -5627,7 +5627,7 @@ class SIMDDupFromMain<bit Q, bits<5> imm
> : BaseSIMDInsDup<Q, 0, (outs vecreg:$Rd), (ins regtype:$Rn), "dup",
> "{\t$Rd" # size # ", $Rn" #
> "|" # size # "\t$Rd, $Rn}", "",
> - [(set (vectype vecreg:$Rd), (ARM64dup regtype:$Rn))]> {
> + [(set (vectype vecreg:$Rd), (AArch64dup regtype:$Rn))]> {
> let Inst{20-16} = imm5;
> let Inst{14-11} = 0b0001;
> }
> @@ -5646,7 +5646,7 @@ class SIMDDupFromElement<bit Q, string d
>
> class SIMDDup64FromElement
> : SIMDDupFromElement<1, ".2d", ".d", v2i64, v2i64, V128,
> - VectorIndexD, i64, ARM64duplane64> {
> + VectorIndexD, i64, AArch64duplane64> {
> bits<1> idx;
> let Inst{20} = idx;
> let Inst{19-16} = 0b1000;
> @@ -5655,7 +5655,7 @@ class SIMDDup64FromElement
> class SIMDDup32FromElement<bit Q, string size, ValueType vectype,
> RegisterOperand vecreg>
> : SIMDDupFromElement<Q, size, ".s", vectype, v4i32, vecreg,
> - VectorIndexS, i64, ARM64duplane32> {
> + VectorIndexS, i64, AArch64duplane32> {
> bits<2> idx;
> let Inst{20-19} = idx;
> let Inst{18-16} = 0b100;
> @@ -5664,7 +5664,7 @@ class SIMDDup32FromElement<bit Q, string
> class SIMDDup16FromElement<bit Q, string size, ValueType vectype,
> RegisterOperand vecreg>
> : SIMDDupFromElement<Q, size, ".h", vectype, v8i16, vecreg,
> - VectorIndexH, i64, ARM64duplane16> {
> + VectorIndexH, i64, AArch64duplane16> {
> bits<3> idx;
> let Inst{20-18} = idx;
> let Inst{17-16} = 0b10;
> @@ -5673,7 +5673,7 @@ class SIMDDup16FromElement<bit Q, string
> class SIMDDup8FromElement<bit Q, string size, ValueType vectype,
> RegisterOperand vecreg>
> : SIMDDupFromElement<Q, size, ".b", vectype, v16i8, vecreg,
> - VectorIndexB, i64, ARM64duplane8> {
> + VectorIndexB, i64, AArch64duplane8> {
> bits<4> idx;
> let Inst{20-17} = idx;
> let Inst{16} = 1;
> @@ -6312,7 +6312,7 @@ multiclass SIMDFPIndexedSD<bit U, bits<4
> asm, ".2s", ".2s", ".2s", ".s",
> [(set (v2f32 V64:$Rd),
> (OpNode (v2f32 V64:$Rn),
> - (v2f32 (ARM64duplane32 (v4f32 V128:$Rm), VectorIndexS:$idx))))]> {
> + (v2f32 (AArch64duplane32 (v4f32 V128:$Rm), VectorIndexS:$idx))))]> {
> bits<2> idx;
> let Inst{11} = idx{1};
> let Inst{21} = idx{0};
> @@ -6324,7 +6324,7 @@ multiclass SIMDFPIndexedSD<bit U, bits<4
> asm, ".4s", ".4s", ".4s", ".s",
> [(set (v4f32 V128:$Rd),
> (OpNode (v4f32 V128:$Rn),
> - (v4f32 (ARM64duplane32 (v4f32 V128:$Rm), VectorIndexS:$idx))))]> {
> + (v4f32 (AArch64duplane32 (v4f32 V128:$Rm), VectorIndexS:$idx))))]> {
> bits<2> idx;
> let Inst{11} = idx{1};
> let Inst{21} = idx{0};
> @@ -6336,7 +6336,7 @@ multiclass SIMDFPIndexedSD<bit U, bits<4
> asm, ".2d", ".2d", ".2d", ".d",
> [(set (v2f64 V128:$Rd),
> (OpNode (v2f64 V128:$Rn),
> - (v2f64 (ARM64duplane64 (v2f64 V128:$Rm), VectorIndexD:$idx))))]> {
> + (v2f64 (AArch64duplane64 (v2f64 V128:$Rm), VectorIndexD:$idx))))]> {
> bits<1> idx;
> let Inst{11} = idx{0};
> let Inst{21} = 0;
> @@ -6370,35 +6370,35 @@ multiclass SIMDFPIndexedSD<bit U, bits<4
> multiclass SIMDFPIndexedSDTiedPatterns<string INST, SDPatternOperator OpNode> {
> // 2 variants for the .2s version: DUPLANE from 128-bit and DUP scalar.
> def : Pat<(v2f32 (OpNode (v2f32 V64:$Rd), (v2f32 V64:$Rn),
> - (ARM64duplane32 (v4f32 V128:$Rm),
> + (AArch64duplane32 (v4f32 V128:$Rm),
> VectorIndexS:$idx))),
> (!cast<Instruction>(INST # v2i32_indexed)
> V64:$Rd, V64:$Rn, V128:$Rm, VectorIndexS:$idx)>;
> def : Pat<(v2f32 (OpNode (v2f32 V64:$Rd), (v2f32 V64:$Rn),
> - (ARM64dup (f32 FPR32Op:$Rm)))),
> + (AArch64dup (f32 FPR32Op:$Rm)))),
> (!cast<Instruction>(INST # "v2i32_indexed") V64:$Rd, V64:$Rn,
> (SUBREG_TO_REG (i32 0), FPR32Op:$Rm, ssub), (i64 0))>;
>
>
> // 2 variants for the .4s version: DUPLANE from 128-bit and DUP scalar.
> def : Pat<(v4f32 (OpNode (v4f32 V128:$Rd), (v4f32 V128:$Rn),
> - (ARM64duplane32 (v4f32 V128:$Rm),
> + (AArch64duplane32 (v4f32 V128:$Rm),
> VectorIndexS:$idx))),
> (!cast<Instruction>(INST # "v4i32_indexed")
> V128:$Rd, V128:$Rn, V128:$Rm, VectorIndexS:$idx)>;
> def : Pat<(v4f32 (OpNode (v4f32 V128:$Rd), (v4f32 V128:$Rn),
> - (ARM64dup (f32 FPR32Op:$Rm)))),
> + (AArch64dup (f32 FPR32Op:$Rm)))),
> (!cast<Instruction>(INST # "v4i32_indexed") V128:$Rd, V128:$Rn,
> (SUBREG_TO_REG (i32 0), FPR32Op:$Rm, ssub), (i64 0))>;
>
> // 2 variants for the .2d version: DUPLANE from 128-bit and DUP scalar.
> def : Pat<(v2f64 (OpNode (v2f64 V128:$Rd), (v2f64 V128:$Rn),
> - (ARM64duplane64 (v2f64 V128:$Rm),
> + (AArch64duplane64 (v2f64 V128:$Rm),
> VectorIndexD:$idx))),
> (!cast<Instruction>(INST # "v2i64_indexed")
> V128:$Rd, V128:$Rn, V128:$Rm, VectorIndexS:$idx)>;
> def : Pat<(v2f64 (OpNode (v2f64 V128:$Rd), (v2f64 V128:$Rn),
> - (ARM64dup (f64 FPR64Op:$Rm)))),
> + (AArch64dup (f64 FPR64Op:$Rm)))),
> (!cast<Instruction>(INST # "v2i64_indexed") V128:$Rd, V128:$Rn,
> (SUBREG_TO_REG (i32 0), FPR64Op:$Rm, dsub), (i64 0))>;
>
> @@ -6471,7 +6471,7 @@ multiclass SIMDIndexedHS<bit U, bits<4>
> asm, ".4h", ".4h", ".4h", ".h",
> [(set (v4i16 V64:$Rd),
> (OpNode (v4i16 V64:$Rn),
> - (v4i16 (ARM64duplane16 (v8i16 V128_lo:$Rm), VectorIndexH:$idx))))]> {
> + (v4i16 (AArch64duplane16 (v8i16 V128_lo:$Rm), VectorIndexH:$idx))))]> {
> bits<3> idx;
> let Inst{11} = idx{2};
> let Inst{21} = idx{1};
> @@ -6484,7 +6484,7 @@ multiclass SIMDIndexedHS<bit U, bits<4>
> asm, ".8h", ".8h", ".8h", ".h",
> [(set (v8i16 V128:$Rd),
> (OpNode (v8i16 V128:$Rn),
> - (v8i16 (ARM64duplane16 (v8i16 V128_lo:$Rm), VectorIndexH:$idx))))]> {
> + (v8i16 (AArch64duplane16 (v8i16 V128_lo:$Rm), VectorIndexH:$idx))))]> {
> bits<3> idx;
> let Inst{11} = idx{2};
> let Inst{21} = idx{1};
> @@ -6497,7 +6497,7 @@ multiclass SIMDIndexedHS<bit U, bits<4>
> asm, ".2s", ".2s", ".2s", ".s",
> [(set (v2i32 V64:$Rd),
> (OpNode (v2i32 V64:$Rn),
> - (v2i32 (ARM64duplane32 (v4i32 V128:$Rm), VectorIndexS:$idx))))]> {
> + (v2i32 (AArch64duplane32 (v4i32 V128:$Rm), VectorIndexS:$idx))))]> {
> bits<2> idx;
> let Inst{11} = idx{1};
> let Inst{21} = idx{0};
> @@ -6509,7 +6509,7 @@ multiclass SIMDIndexedHS<bit U, bits<4>
> asm, ".4s", ".4s", ".4s", ".s",
> [(set (v4i32 V128:$Rd),
> (OpNode (v4i32 V128:$Rn),
> - (v4i32 (ARM64duplane32 (v4i32 V128:$Rm), VectorIndexS:$idx))))]> {
> + (v4i32 (AArch64duplane32 (v4i32 V128:$Rm), VectorIndexS:$idx))))]> {
> bits<2> idx;
> let Inst{11} = idx{1};
> let Inst{21} = idx{0};
> @@ -6545,7 +6545,7 @@ multiclass SIMDVectorIndexedHS<bit U, bi
> asm, ".4h", ".4h", ".4h", ".h",
> [(set (v4i16 V64:$Rd),
> (OpNode (v4i16 V64:$Rn),
> - (v4i16 (ARM64duplane16 (v8i16 V128_lo:$Rm), VectorIndexH:$idx))))]> {
> + (v4i16 (AArch64duplane16 (v8i16 V128_lo:$Rm), VectorIndexH:$idx))))]> {
> bits<3> idx;
> let Inst{11} = idx{2};
> let Inst{21} = idx{1};
> @@ -6558,7 +6558,7 @@ multiclass SIMDVectorIndexedHS<bit U, bi
> asm, ".8h", ".8h", ".8h", ".h",
> [(set (v8i16 V128:$Rd),
> (OpNode (v8i16 V128:$Rn),
> - (v8i16 (ARM64duplane16 (v8i16 V128_lo:$Rm), VectorIndexH:$idx))))]> {
> + (v8i16 (AArch64duplane16 (v8i16 V128_lo:$Rm), VectorIndexH:$idx))))]> {
> bits<3> idx;
> let Inst{11} = idx{2};
> let Inst{21} = idx{1};
> @@ -6571,7 +6571,7 @@ multiclass SIMDVectorIndexedHS<bit U, bi
> asm, ".2s", ".2s", ".2s", ".s",
> [(set (v2i32 V64:$Rd),
> (OpNode (v2i32 V64:$Rn),
> - (v2i32 (ARM64duplane32 (v4i32 V128:$Rm), VectorIndexS:$idx))))]> {
> + (v2i32 (AArch64duplane32 (v4i32 V128:$Rm), VectorIndexS:$idx))))]> {
> bits<2> idx;
> let Inst{11} = idx{1};
> let Inst{21} = idx{0};
> @@ -6583,7 +6583,7 @@ multiclass SIMDVectorIndexedHS<bit U, bi
> asm, ".4s", ".4s", ".4s", ".s",
> [(set (v4i32 V128:$Rd),
> (OpNode (v4i32 V128:$Rn),
> - (v4i32 (ARM64duplane32 (v4i32 V128:$Rm), VectorIndexS:$idx))))]> {
> + (v4i32 (AArch64duplane32 (v4i32 V128:$Rm), VectorIndexS:$idx))))]> {
> bits<2> idx;
> let Inst{11} = idx{1};
> let Inst{21} = idx{0};
> @@ -6597,7 +6597,7 @@ multiclass SIMDVectorIndexedHSTied<bit U
> asm, ".4h", ".4h", ".4h", ".h",
> [(set (v4i16 V64:$dst),
> (OpNode (v4i16 V64:$Rd),(v4i16 V64:$Rn),
> - (v4i16 (ARM64duplane16 (v8i16 V128_lo:$Rm), VectorIndexH:$idx))))]> {
> + (v4i16 (AArch64duplane16 (v8i16 V128_lo:$Rm), VectorIndexH:$idx))))]> {
> bits<3> idx;
> let Inst{11} = idx{2};
> let Inst{21} = idx{1};
> @@ -6610,7 +6610,7 @@ multiclass SIMDVectorIndexedHSTied<bit U
> asm, ".8h", ".8h", ".8h", ".h",
> [(set (v8i16 V128:$dst),
> (OpNode (v8i16 V128:$Rd), (v8i16 V128:$Rn),
> - (v8i16 (ARM64duplane16 (v8i16 V128_lo:$Rm), VectorIndexH:$idx))))]> {
> + (v8i16 (AArch64duplane16 (v8i16 V128_lo:$Rm), VectorIndexH:$idx))))]> {
> bits<3> idx;
> let Inst{11} = idx{2};
> let Inst{21} = idx{1};
> @@ -6623,7 +6623,7 @@ multiclass SIMDVectorIndexedHSTied<bit U
> asm, ".2s", ".2s", ".2s", ".s",
> [(set (v2i32 V64:$dst),
> (OpNode (v2i32 V64:$Rd), (v2i32 V64:$Rn),
> - (v2i32 (ARM64duplane32 (v4i32 V128:$Rm), VectorIndexS:$idx))))]> {
> + (v2i32 (AArch64duplane32 (v4i32 V128:$Rm), VectorIndexS:$idx))))]> {
> bits<2> idx;
> let Inst{11} = idx{1};
> let Inst{21} = idx{0};
> @@ -6635,7 +6635,7 @@ multiclass SIMDVectorIndexedHSTied<bit U
> asm, ".4s", ".4s", ".4s", ".s",
> [(set (v4i32 V128:$dst),
> (OpNode (v4i32 V128:$Rd), (v4i32 V128:$Rn),
> - (v4i32 (ARM64duplane32 (v4i32 V128:$Rm), VectorIndexS:$idx))))]> {
> + (v4i32 (AArch64duplane32 (v4i32 V128:$Rm), VectorIndexS:$idx))))]> {
> bits<2> idx;
> let Inst{11} = idx{1};
> let Inst{21} = idx{0};
> @@ -6650,7 +6650,7 @@ multiclass SIMDIndexedLongSD<bit U, bits
> asm, ".4s", ".4s", ".4h", ".h",
> [(set (v4i32 V128:$Rd),
> (OpNode (v4i16 V64:$Rn),
> - (v4i16 (ARM64duplane16 (v8i16 V128_lo:$Rm), VectorIndexH:$idx))))]> {
> + (v4i16 (AArch64duplane16 (v8i16 V128_lo:$Rm), VectorIndexH:$idx))))]> {
> bits<3> idx;
> let Inst{11} = idx{2};
> let Inst{21} = idx{1};
> @@ -6663,7 +6663,7 @@ multiclass SIMDIndexedLongSD<bit U, bits
> asm#"2", ".4s", ".4s", ".8h", ".h",
> [(set (v4i32 V128:$Rd),
> (OpNode (extract_high_v8i16 V128:$Rn),
> - (extract_high_v8i16 (ARM64duplane16 (v8i16 V128_lo:$Rm),
> + (extract_high_v8i16 (AArch64duplane16 (v8i16 V128_lo:$Rm),
> VectorIndexH:$idx))))]> {
>
> bits<3> idx;
> @@ -6678,7 +6678,7 @@ multiclass SIMDIndexedLongSD<bit U, bits
> asm, ".2d", ".2d", ".2s", ".s",
> [(set (v2i64 V128:$Rd),
> (OpNode (v2i32 V64:$Rn),
> - (v2i32 (ARM64duplane32 (v4i32 V128:$Rm), VectorIndexS:$idx))))]> {
> + (v2i32 (AArch64duplane32 (v4i32 V128:$Rm), VectorIndexS:$idx))))]> {
> bits<2> idx;
> let Inst{11} = idx{1};
> let Inst{21} = idx{0};
> @@ -6690,7 +6690,7 @@ multiclass SIMDIndexedLongSD<bit U, bits
> asm#"2", ".2d", ".2d", ".4s", ".s",
> [(set (v2i64 V128:$Rd),
> (OpNode (extract_high_v4i32 V128:$Rn),
> - (extract_high_v4i32 (ARM64duplane32 (v4i32 V128:$Rm),
> + (extract_high_v4i32 (AArch64duplane32 (v4i32 V128:$Rm),
> VectorIndexS:$idx))))]> {
> bits<2> idx;
> let Inst{11} = idx{1};
> @@ -6723,9 +6723,9 @@ multiclass SIMDIndexedLongSQDMLXSDTied<b
> asm, ".4s", ".4s", ".4h", ".h",
> [(set (v4i32 V128:$dst),
> (Accum (v4i32 V128:$Rd),
> - (v4i32 (int_arm64_neon_sqdmull
> + (v4i32 (int_aarch64_neon_sqdmull
> (v4i16 V64:$Rn),
> - (v4i16 (ARM64duplane16 (v8i16 V128_lo:$Rm),
> + (v4i16 (AArch64duplane16 (v8i16 V128_lo:$Rm),
> VectorIndexH:$idx))))))]> {
> bits<3> idx;
> let Inst{11} = idx{2};
> @@ -6737,8 +6737,8 @@ multiclass SIMDIndexedLongSQDMLXSDTied<b
> // intermediate EXTRACT_SUBREG would be untyped.
> def : Pat<(i32 (Accum (i32 FPR32Op:$Rd),
> (i32 (vector_extract (v4i32
> - (int_arm64_neon_sqdmull (v4i16 V64:$Rn),
> - (v4i16 (ARM64duplane16 (v8i16 V128_lo:$Rm),
> + (int_aarch64_neon_sqdmull (v4i16 V64:$Rn),
> + (v4i16 (AArch64duplane16 (v8i16 V128_lo:$Rm),
> VectorIndexH:$idx)))),
> (i64 0))))),
> (EXTRACT_SUBREG
> @@ -6753,10 +6753,10 @@ multiclass SIMDIndexedLongSQDMLXSDTied<b
> asm#"2", ".4s", ".4s", ".8h", ".h",
> [(set (v4i32 V128:$dst),
> (Accum (v4i32 V128:$Rd),
> - (v4i32 (int_arm64_neon_sqdmull
> + (v4i32 (int_aarch64_neon_sqdmull
> (extract_high_v8i16 V128:$Rn),
> (extract_high_v8i16
> - (ARM64duplane16 (v8i16 V128_lo:$Rm),
> + (AArch64duplane16 (v8i16 V128_lo:$Rm),
> VectorIndexH:$idx))))))]> {
> bits<3> idx;
> let Inst{11} = idx{2};
> @@ -6770,9 +6770,9 @@ multiclass SIMDIndexedLongSQDMLXSDTied<b
> asm, ".2d", ".2d", ".2s", ".s",
> [(set (v2i64 V128:$dst),
> (Accum (v2i64 V128:$Rd),
> - (v2i64 (int_arm64_neon_sqdmull
> + (v2i64 (int_aarch64_neon_sqdmull
> (v2i32 V64:$Rn),
> - (v2i32 (ARM64duplane32 (v4i32 V128:$Rm),
> + (v2i32 (AArch64duplane32 (v4i32 V128:$Rm),
> VectorIndexS:$idx))))))]> {
> bits<2> idx;
> let Inst{11} = idx{1};
> @@ -6785,10 +6785,10 @@ multiclass SIMDIndexedLongSQDMLXSDTied<b
> asm#"2", ".2d", ".2d", ".4s", ".s",
> [(set (v2i64 V128:$dst),
> (Accum (v2i64 V128:$Rd),
> - (v2i64 (int_arm64_neon_sqdmull
> + (v2i64 (int_aarch64_neon_sqdmull
> (extract_high_v4i32 V128:$Rn),
> (extract_high_v4i32
> - (ARM64duplane32 (v4i32 V128:$Rm),
> + (AArch64duplane32 (v4i32 V128:$Rm),
> VectorIndexS:$idx))))))]> {
> bits<2> idx;
> let Inst{11} = idx{1};
> @@ -6810,7 +6810,7 @@ multiclass SIMDIndexedLongSQDMLXSDTied<b
> asm, ".s", "", "", ".s",
> [(set (i64 FPR64Op:$dst),
> (Accum (i64 FPR64Op:$Rd),
> - (i64 (int_arm64_neon_sqdmulls_scalar
> + (i64 (int_aarch64_neon_sqdmulls_scalar
> (i32 FPR32Op:$Rn),
> (i32 (vector_extract (v4i32 V128:$Rm),
> VectorIndexS:$idx))))))]> {
> @@ -6830,7 +6830,7 @@ multiclass SIMDVectorIndexedLongSD<bit U
> asm, ".4s", ".4s", ".4h", ".h",
> [(set (v4i32 V128:$Rd),
> (OpNode (v4i16 V64:$Rn),
> - (v4i16 (ARM64duplane16 (v8i16 V128_lo:$Rm), VectorIndexH:$idx))))]> {
> + (v4i16 (AArch64duplane16 (v8i16 V128_lo:$Rm), VectorIndexH:$idx))))]> {
> bits<3> idx;
> let Inst{11} = idx{2};
> let Inst{21} = idx{1};
> @@ -6843,7 +6843,7 @@ multiclass SIMDVectorIndexedLongSD<bit U
> asm#"2", ".4s", ".4s", ".8h", ".h",
> [(set (v4i32 V128:$Rd),
> (OpNode (extract_high_v8i16 V128:$Rn),
> - (extract_high_v8i16 (ARM64duplane16 (v8i16 V128_lo:$Rm),
> + (extract_high_v8i16 (AArch64duplane16 (v8i16 V128_lo:$Rm),
> VectorIndexH:$idx))))]> {
>
> bits<3> idx;
> @@ -6858,7 +6858,7 @@ multiclass SIMDVectorIndexedLongSD<bit U
> asm, ".2d", ".2d", ".2s", ".s",
> [(set (v2i64 V128:$Rd),
> (OpNode (v2i32 V64:$Rn),
> - (v2i32 (ARM64duplane32 (v4i32 V128:$Rm), VectorIndexS:$idx))))]> {
> + (v2i32 (AArch64duplane32 (v4i32 V128:$Rm), VectorIndexS:$idx))))]> {
> bits<2> idx;
> let Inst{11} = idx{1};
> let Inst{21} = idx{0};
> @@ -6870,7 +6870,7 @@ multiclass SIMDVectorIndexedLongSD<bit U
> asm#"2", ".2d", ".2d", ".4s", ".s",
> [(set (v2i64 V128:$Rd),
> (OpNode (extract_high_v4i32 V128:$Rn),
> - (extract_high_v4i32 (ARM64duplane32 (v4i32 V128:$Rm),
> + (extract_high_v4i32 (AArch64duplane32 (v4i32 V128:$Rm),
> VectorIndexS:$idx))))]> {
> bits<2> idx;
> let Inst{11} = idx{1};
> @@ -6888,7 +6888,7 @@ multiclass SIMDVectorIndexedLongSDTied<b
> asm, ".4s", ".4s", ".4h", ".h",
> [(set (v4i32 V128:$dst),
> (OpNode (v4i32 V128:$Rd), (v4i16 V64:$Rn),
> - (v4i16 (ARM64duplane16 (v8i16 V128_lo:$Rm), VectorIndexH:$idx))))]> {
> + (v4i16 (AArch64duplane16 (v8i16 V128_lo:$Rm), VectorIndexH:$idx))))]> {
> bits<3> idx;
> let Inst{11} = idx{2};
> let Inst{21} = idx{1};
> @@ -6902,7 +6902,7 @@ multiclass SIMDVectorIndexedLongSDTied<b
> [(set (v4i32 V128:$dst),
> (OpNode (v4i32 V128:$Rd),
> (extract_high_v8i16 V128:$Rn),
> - (extract_high_v8i16 (ARM64duplane16 (v8i16 V128_lo:$Rm),
> + (extract_high_v8i16 (AArch64duplane16 (v8i16 V128_lo:$Rm),
> VectorIndexH:$idx))))]> {
> bits<3> idx;
> let Inst{11} = idx{2};
> @@ -6916,7 +6916,7 @@ multiclass SIMDVectorIndexedLongSDTied<b
> asm, ".2d", ".2d", ".2s", ".s",
> [(set (v2i64 V128:$dst),
> (OpNode (v2i64 V128:$Rd), (v2i32 V64:$Rn),
> - (v2i32 (ARM64duplane32 (v4i32 V128:$Rm), VectorIndexS:$idx))))]> {
> + (v2i32 (AArch64duplane32 (v4i32 V128:$Rm), VectorIndexS:$idx))))]> {
> bits<2> idx;
> let Inst{11} = idx{1};
> let Inst{21} = idx{0};
> @@ -6929,7 +6929,7 @@ multiclass SIMDVectorIndexedLongSDTied<b
> [(set (v2i64 V128:$dst),
> (OpNode (v2i64 V128:$Rd),
> (extract_high_v4i32 V128:$Rn),
> - (extract_high_v4i32 (ARM64duplane32 (v4i32 V128:$Rm),
> + (extract_high_v4i32 (AArch64duplane32 (v4i32 V128:$Rm),
> VectorIndexS:$idx))))]> {
> bits<2> idx;
> let Inst{11} = idx{1};
>
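
One more reading aid before the C++ files: the renamed AArch64dup / AArch64duplane{8,16,32,64} nodes that dominate the indexed multiclasses above all model the same operation -- broadcast a scalar, or one lane of a vector, into every lane of the result. In plain C++ (nothing LLVM-specific; the names are illustrative):

#include <array>
#include <cstddef>

// duplane: replicate lane `Lane` of Src into every lane of an N-lane result.
template <typename T, std::size_t N, std::size_t M>
std::array<T, N> duplane(const std::array<T, M> &Src, unsigned Lane) {
  std::array<T, N> R{};
  for (auto &E : R)
    E = Src[Lane];
  return R;
}

// dup from a scalar register: the same thing with a scalar source.
template <typename T, std::size_t N>
std::array<T, N> dup(T Scalar) {
  std::array<T, N> R{};
  R.fill(Scalar);
  return R;
}

That is also why SIMDFPIndexedSDTiedPatterns needs two Pats per element width: one for a duplane of a lane of a 128-bit register, and one for a dup of a plain FP scalar.
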
> Copied: llvm/trunk/lib/Target/AArch64/AArch64InstrInfo.cpp (from r209576, llvm/trunk/lib/Target/ARM64/ARM64InstrInfo.cpp)
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/AArch64/AArch64InstrInfo.cpp?p2=llvm/trunk/lib/Target/AArch64/AArch64InstrInfo.cpp&p1=llvm/trunk/lib/Target/ARM64/ARM64InstrInfo.cpp&r1=209576&r2=209577&rev=209577&view=diff
> ==============================================================================
> --- llvm/trunk/lib/Target/ARM64/ARM64InstrInfo.cpp (original)
> +++ llvm/trunk/lib/Target/AArch64/AArch64InstrInfo.cpp Sat May 24 07:50:23 2014
> @@ -1,4 +1,4 @@
> -//===- ARM64InstrInfo.cpp - ARM64 Instruction Information -----------------===//
> +//===- AArch64InstrInfo.cpp - AArch64 Instruction Information -------------===//
> //
> // The LLVM Compiler Infrastructure
> //
> @@ -7,13 +7,13 @@
> //
> //===----------------------------------------------------------------------===//
> //
> -// This file contains the ARM64 implementation of the TargetInstrInfo class.
> +// This file contains the AArch64 implementation of the TargetInstrInfo class.
> //
> //===----------------------------------------------------------------------===//
>
> -#include "ARM64InstrInfo.h"
> -#include "ARM64Subtarget.h"
> -#include "MCTargetDesc/ARM64AddressingModes.h"
> +#include "AArch64InstrInfo.h"
> +#include "AArch64Subtarget.h"
> +#include "MCTargetDesc/AArch64AddressingModes.h"
> #include "llvm/CodeGen/MachineFrameInfo.h"
> #include "llvm/CodeGen/MachineInstrBuilder.h"
> #include "llvm/CodeGen/MachineMemOperand.h"
> @@ -26,15 +26,15 @@
> using namespace llvm;
>
> #define GET_INSTRINFO_CTOR_DTOR
> -#include "ARM64GenInstrInfo.inc"
> +#include "AArch64GenInstrInfo.inc"
>
> -ARM64InstrInfo::ARM64InstrInfo(const ARM64Subtarget &STI)
> - : ARM64GenInstrInfo(ARM64::ADJCALLSTACKDOWN, ARM64::ADJCALLSTACKUP),
> +AArch64InstrInfo::AArch64InstrInfo(const AArch64Subtarget &STI)
> + : AArch64GenInstrInfo(AArch64::ADJCALLSTACKDOWN, AArch64::ADJCALLSTACKUP),
> RI(this, &STI), Subtarget(STI) {}
>
> /// GetInstSize - Return the number of bytes of code the specified
> /// instruction may be. This returns the maximum number of bytes.
> -unsigned ARM64InstrInfo::GetInstSizeInBytes(const MachineInstr *MI) const {
> +unsigned AArch64InstrInfo::GetInstSizeInBytes(const MachineInstr *MI) const {
> const MCInstrDesc &Desc = MI->getDesc();
>
> switch (Desc.getOpcode()) {
> @@ -57,23 +57,23 @@ static void parseCondBranch(MachineInstr
> switch (LastInst->getOpcode()) {
> default:
> llvm_unreachable("Unknown branch instruction?");
> - case ARM64::Bcc:
> + case AArch64::Bcc:
> Target = LastInst->getOperand(1).getMBB();
> Cond.push_back(LastInst->getOperand(0));
> break;
> - case ARM64::CBZW:
> - case ARM64::CBZX:
> - case ARM64::CBNZW:
> - case ARM64::CBNZX:
> + case AArch64::CBZW:
> + case AArch64::CBZX:
> + case AArch64::CBNZW:
> + case AArch64::CBNZX:
> Target = LastInst->getOperand(1).getMBB();
> Cond.push_back(MachineOperand::CreateImm(-1));
> Cond.push_back(MachineOperand::CreateImm(LastInst->getOpcode()));
> Cond.push_back(LastInst->getOperand(0));
> break;
> - case ARM64::TBZW:
> - case ARM64::TBZX:
> - case ARM64::TBNZW:
> - case ARM64::TBNZX:
> + case AArch64::TBZW:
> + case AArch64::TBZX:
> + case AArch64::TBNZW:
> + case AArch64::TBNZX:
> Target = LastInst->getOperand(2).getMBB();
> Cond.push_back(MachineOperand::CreateImm(-1));
> Cond.push_back(MachineOperand::CreateImm(LastInst->getOpcode()));
> @@ -83,7 +83,7 @@ static void parseCondBranch(MachineInstr
> }
>
> // Branch analysis.
> -bool ARM64InstrInfo::AnalyzeBranch(MachineBasicBlock &MBB,
> +bool AArch64InstrInfo::AnalyzeBranch(MachineBasicBlock &MBB,
> MachineBasicBlock *&TBB,
> MachineBasicBlock *&FBB,
> SmallVectorImpl<MachineOperand> &Cond,
> @@ -175,40 +175,40 @@ bool ARM64InstrInfo::AnalyzeBranch(Machi
> return true;
> }
>
> -bool ARM64InstrInfo::ReverseBranchCondition(
> +bool AArch64InstrInfo::ReverseBranchCondition(
> SmallVectorImpl<MachineOperand> &Cond) const {
> if (Cond[0].getImm() != -1) {
> // Regular Bcc
> - ARM64CC::CondCode CC = (ARM64CC::CondCode)(int)Cond[0].getImm();
> - Cond[0].setImm(ARM64CC::getInvertedCondCode(CC));
> + AArch64CC::CondCode CC = (AArch64CC::CondCode)(int)Cond[0].getImm();
> + Cond[0].setImm(AArch64CC::getInvertedCondCode(CC));
> } else {
> // Folded compare-and-branch
> switch (Cond[1].getImm()) {
> default:
> llvm_unreachable("Unknown conditional branch!");
> - case ARM64::CBZW:
> - Cond[1].setImm(ARM64::CBNZW);
> + case AArch64::CBZW:
> + Cond[1].setImm(AArch64::CBNZW);
> break;
> - case ARM64::CBNZW:
> - Cond[1].setImm(ARM64::CBZW);
> + case AArch64::CBNZW:
> + Cond[1].setImm(AArch64::CBZW);
> break;
> - case ARM64::CBZX:
> - Cond[1].setImm(ARM64::CBNZX);
> + case AArch64::CBZX:
> + Cond[1].setImm(AArch64::CBNZX);
> break;
> - case ARM64::CBNZX:
> - Cond[1].setImm(ARM64::CBZX);
> + case AArch64::CBNZX:
> + Cond[1].setImm(AArch64::CBZX);
> break;
> - case ARM64::TBZW:
> - Cond[1].setImm(ARM64::TBNZW);
> + case AArch64::TBZW:
> + Cond[1].setImm(AArch64::TBNZW);
> break;
> - case ARM64::TBNZW:
> - Cond[1].setImm(ARM64::TBZW);
> + case AArch64::TBNZW:
> + Cond[1].setImm(AArch64::TBZW);
> break;
> - case ARM64::TBZX:
> - Cond[1].setImm(ARM64::TBNZX);
> + case AArch64::TBZX:
> + Cond[1].setImm(AArch64::TBNZX);
> break;
> - case ARM64::TBNZX:
> - Cond[1].setImm(ARM64::TBZX);
> + case AArch64::TBNZX:
> + Cond[1].setImm(AArch64::TBZX);
> break;
> }
> }
> @@ -216,7 +216,7 @@ bool ARM64InstrInfo::ReverseBranchCondit
> return false;
> }
>
> -unsigned ARM64InstrInfo::RemoveBranch(MachineBasicBlock &MBB) const {
> +unsigned AArch64InstrInfo::RemoveBranch(MachineBasicBlock &MBB) const {
> MachineBasicBlock::iterator I = MBB.end();
> if (I == MBB.begin())
> return 0;
> @@ -246,12 +246,12 @@ unsigned ARM64InstrInfo::RemoveBranch(Ma
> return 2;
> }
>
> -void ARM64InstrInfo::instantiateCondBranch(
> +void AArch64InstrInfo::instantiateCondBranch(
> MachineBasicBlock &MBB, DebugLoc DL, MachineBasicBlock *TBB,
> const SmallVectorImpl<MachineOperand> &Cond) const {
> if (Cond[0].getImm() != -1) {
> // Regular Bcc
> - BuildMI(&MBB, DL, get(ARM64::Bcc)).addImm(Cond[0].getImm()).addMBB(TBB);
> + BuildMI(&MBB, DL, get(AArch64::Bcc)).addImm(Cond[0].getImm()).addMBB(TBB);
> } else {
> // Folded compare-and-branch
> const MachineInstrBuilder MIB =
> @@ -262,7 +262,7 @@ void ARM64InstrInfo::instantiateCondBran
> }
> }
>
> -unsigned ARM64InstrInfo::InsertBranch(
> +unsigned AArch64InstrInfo::InsertBranch(
> MachineBasicBlock &MBB, MachineBasicBlock *TBB, MachineBasicBlock *FBB,
> const SmallVectorImpl<MachineOperand> &Cond, DebugLoc DL) const {
> // Shouldn't be a fall through.
> @@ -270,7 +270,7 @@ unsigned ARM64InstrInfo::InsertBranch(
>
> if (!FBB) {
> if (Cond.empty()) // Unconditional branch?
> - BuildMI(&MBB, DL, get(ARM64::B)).addMBB(TBB);
> + BuildMI(&MBB, DL, get(AArch64::B)).addMBB(TBB);
> else
> instantiateCondBranch(MBB, DL, TBB, Cond);
> return 1;
> @@ -278,7 +278,7 @@ unsigned ARM64InstrInfo::InsertBranch(
>
> // Two-way conditional branch.
> instantiateCondBranch(MBB, DL, TBB, Cond);
> - BuildMI(&MBB, DL, get(ARM64::B)).addMBB(FBB);
> + BuildMI(&MBB, DL, get(AArch64::B)).addMBB(FBB);
> return 2;
> }
>
> @@ -302,52 +302,52 @@ static unsigned canFoldIntoCSel(const Ma
> if (!TargetRegisterInfo::isVirtualRegister(VReg))
> return 0;
>
> - bool Is64Bit = ARM64::GPR64allRegClass.hasSubClassEq(MRI.getRegClass(VReg));
> + bool Is64Bit = AArch64::GPR64allRegClass.hasSubClassEq(MRI.getRegClass(VReg));
> const MachineInstr *DefMI = MRI.getVRegDef(VReg);
> unsigned Opc = 0;
> unsigned SrcOpNum = 0;
> switch (DefMI->getOpcode()) {
> - case ARM64::ADDSXri:
> - case ARM64::ADDSWri:
> + case AArch64::ADDSXri:
> + case AArch64::ADDSWri:
> // if NZCV is used, do not fold.
> - if (DefMI->findRegisterDefOperandIdx(ARM64::NZCV, true) == -1)
> + if (DefMI->findRegisterDefOperandIdx(AArch64::NZCV, true) == -1)
> return 0;
> // fall-through to ADDXri and ADDWri.
> - case ARM64::ADDXri:
> - case ARM64::ADDWri:
> + case AArch64::ADDXri:
> + case AArch64::ADDWri:
> // add x, 1 -> csinc.
> if (!DefMI->getOperand(2).isImm() || DefMI->getOperand(2).getImm() != 1 ||
> DefMI->getOperand(3).getImm() != 0)
> return 0;
> SrcOpNum = 1;
> - Opc = Is64Bit ? ARM64::CSINCXr : ARM64::CSINCWr;
> + Opc = Is64Bit ? AArch64::CSINCXr : AArch64::CSINCWr;
> break;
>
> - case ARM64::ORNXrr:
> - case ARM64::ORNWrr: {
> + case AArch64::ORNXrr:
> + case AArch64::ORNWrr: {
> // not x -> csinv, represented as orn dst, xzr, src.
> unsigned ZReg = removeCopies(MRI, DefMI->getOperand(1).getReg());
> - if (ZReg != ARM64::XZR && ZReg != ARM64::WZR)
> + if (ZReg != AArch64::XZR && ZReg != AArch64::WZR)
> return 0;
> SrcOpNum = 2;
> - Opc = Is64Bit ? ARM64::CSINVXr : ARM64::CSINVWr;
> + Opc = Is64Bit ? AArch64::CSINVXr : AArch64::CSINVWr;
> break;
> }
>
> - case ARM64::SUBSXrr:
> - case ARM64::SUBSWrr:
> + case AArch64::SUBSXrr:
> + case AArch64::SUBSWrr:
> // if NZCV is used, do not fold.
> - if (DefMI->findRegisterDefOperandIdx(ARM64::NZCV, true) == -1)
> + if (DefMI->findRegisterDefOperandIdx(AArch64::NZCV, true) == -1)
> return 0;
> // fall-through to SUBXrr and SUBWrr.
> - case ARM64::SUBXrr:
> - case ARM64::SUBWrr: {
> + case AArch64::SUBXrr:
> + case AArch64::SUBWrr: {
> // neg x -> csneg, represented as sub dst, xzr, src.
> unsigned ZReg = removeCopies(MRI, DefMI->getOperand(1).getReg());
> - if (ZReg != ARM64::XZR && ZReg != ARM64::WZR)
> + if (ZReg != AArch64::XZR && ZReg != AArch64::WZR)
> return 0;
> SrcOpNum = 2;
> - Opc = Is64Bit ? ARM64::CSNEGXr : ARM64::CSNEGWr;
> + Opc = Is64Bit ? AArch64::CSNEGXr : AArch64::CSNEGWr;
> break;
> }
> default:
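
(For reference while reading the canFoldIntoCSel hunk above: the legality of those folds comes straight from the semantics of the conditional-select family, roughly:

#include <cstdint>

// 64-bit forms shown; a reference sketch, not ISA text.
uint64_t csel (bool C, uint64_t N, uint64_t M) { return C ? N : M; }
uint64_t csinc(bool C, uint64_t N, uint64_t M) { return C ? N : M + 1; }
uint64_t csinv(bool C, uint64_t N, uint64_t M) { return C ? N : ~M; }
uint64_t csneg(bool C, uint64_t N, uint64_t M) { return C ? N : 0 - M; }

So "x + 1", "~x" and "-x" feeding one side of a select each collapse into a single instruction, and because the folded operation applies to the false operand, insertSelect inverts the condition whenever the fold fires on the true operand.)
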
> @@ -360,7 +360,7 @@ static unsigned canFoldIntoCSel(const Ma
> return Opc;
> }
>
> -bool ARM64InstrInfo::canInsertSelect(
> +bool AArch64InstrInfo::canInsertSelect(
> const MachineBasicBlock &MBB, const SmallVectorImpl<MachineOperand> &Cond,
> unsigned TrueReg, unsigned FalseReg, int &CondCycles, int &TrueCycles,
> int &FalseCycles) const {
> @@ -376,8 +376,8 @@ bool ARM64InstrInfo::canInsertSelect(
>
> // GPRs are handled by csel.
> // FIXME: Fold in x+1, -x, and ~x when applicable.
> - if (ARM64::GPR64allRegClass.hasSubClassEq(RC) ||
> - ARM64::GPR32allRegClass.hasSubClassEq(RC)) {
> + if (AArch64::GPR64allRegClass.hasSubClassEq(RC) ||
> + AArch64::GPR32allRegClass.hasSubClassEq(RC)) {
> // Single-cycle csel, csinc, csinv, and csneg.
> CondCycles = 1 + ExtraCondLat;
> TrueCycles = FalseCycles = 1;
> @@ -390,8 +390,8 @@ bool ARM64InstrInfo::canInsertSelect(
>
> // Scalar floating point is handled by fcsel.
> // FIXME: Form fabs, fmin, and fmax when applicable.
> - if (ARM64::FPR64RegClass.hasSubClassEq(RC) ||
> - ARM64::FPR32RegClass.hasSubClassEq(RC)) {
> + if (AArch64::FPR64RegClass.hasSubClassEq(RC) ||
> + AArch64::FPR32RegClass.hasSubClassEq(RC)) {
> CondCycles = 5 + ExtraCondLat;
> TrueCycles = FalseCycles = 2;
> return true;
> @@ -401,20 +401,20 @@ bool ARM64InstrInfo::canInsertSelect(
> return false;
> }
>
> -void ARM64InstrInfo::insertSelect(MachineBasicBlock &MBB,
> - MachineBasicBlock::iterator I, DebugLoc DL,
> - unsigned DstReg,
> - const SmallVectorImpl<MachineOperand> &Cond,
> - unsigned TrueReg, unsigned FalseReg) const {
> +void AArch64InstrInfo::insertSelect(MachineBasicBlock &MBB,
> + MachineBasicBlock::iterator I, DebugLoc DL,
> + unsigned DstReg,
> + const SmallVectorImpl<MachineOperand> &Cond,
> + unsigned TrueReg, unsigned FalseReg) const {
> MachineRegisterInfo &MRI = MBB.getParent()->getRegInfo();
>
> // Parse the condition code, see parseCondBranch() above.
> - ARM64CC::CondCode CC;
> + AArch64CC::CondCode CC;
> switch (Cond.size()) {
> default:
> llvm_unreachable("Unknown condition opcode in Cond");
> case 1: // b.cc
> - CC = ARM64CC::CondCode(Cond[0].getImm());
> + CC = AArch64CC::CondCode(Cond[0].getImm());
> break;
> case 3: { // cbz/cbnz
> // We must insert a compare against 0.
> @@ -422,34 +422,34 @@ void ARM64InstrInfo::insertSelect(Machin
> switch (Cond[1].getImm()) {
> default:
> llvm_unreachable("Unknown branch opcode in Cond");
> - case ARM64::CBZW:
> + case AArch64::CBZW:
> Is64Bit = 0;
> - CC = ARM64CC::EQ;
> + CC = AArch64CC::EQ;
> break;
> - case ARM64::CBZX:
> + case AArch64::CBZX:
> Is64Bit = 1;
> - CC = ARM64CC::EQ;
> + CC = AArch64CC::EQ;
> break;
> - case ARM64::CBNZW:
> + case AArch64::CBNZW:
> Is64Bit = 0;
> - CC = ARM64CC::NE;
> + CC = AArch64CC::NE;
> break;
> - case ARM64::CBNZX:
> + case AArch64::CBNZX:
> Is64Bit = 1;
> - CC = ARM64CC::NE;
> + CC = AArch64CC::NE;
> break;
> }
> unsigned SrcReg = Cond[2].getReg();
> if (Is64Bit) {
> // cmp reg, #0 is actually subs xzr, reg, #0.
> - MRI.constrainRegClass(SrcReg, &ARM64::GPR64spRegClass);
> - BuildMI(MBB, I, DL, get(ARM64::SUBSXri), ARM64::XZR)
> + MRI.constrainRegClass(SrcReg, &AArch64::GPR64spRegClass);
> + BuildMI(MBB, I, DL, get(AArch64::SUBSXri), AArch64::XZR)
> .addReg(SrcReg)
> .addImm(0)
> .addImm(0);
> } else {
> - MRI.constrainRegClass(SrcReg, &ARM64::GPR32spRegClass);
> - BuildMI(MBB, I, DL, get(ARM64::SUBSWri), ARM64::WZR)
> + MRI.constrainRegClass(SrcReg, &AArch64::GPR32spRegClass);
> + BuildMI(MBB, I, DL, get(AArch64::SUBSWri), AArch64::WZR)
> .addReg(SrcReg)
> .addImm(0)
> .addImm(0);
> @@ -461,24 +461,26 @@ void ARM64InstrInfo::insertSelect(Machin
> switch (Cond[1].getImm()) {
> default:
> llvm_unreachable("Unknown branch opcode in Cond");
> - case ARM64::TBZW:
> - case ARM64::TBZX:
> - CC = ARM64CC::EQ;
> + case AArch64::TBZW:
> + case AArch64::TBZX:
> + CC = AArch64CC::EQ;
> break;
> - case ARM64::TBNZW:
> - case ARM64::TBNZX:
> - CC = ARM64CC::NE;
> + case AArch64::TBNZW:
> + case AArch64::TBNZX:
> + CC = AArch64CC::NE;
> break;
> }
> // cmp reg, #foo is actually ands xzr, reg, #1<<foo.
> - if (Cond[1].getImm() == ARM64::TBZW || Cond[1].getImm() == ARM64::TBNZW)
> - BuildMI(MBB, I, DL, get(ARM64::ANDSWri), ARM64::WZR)
> + if (Cond[1].getImm() == AArch64::TBZW || Cond[1].getImm() == AArch64::TBNZW)
> + BuildMI(MBB, I, DL, get(AArch64::ANDSWri), AArch64::WZR)
> .addReg(Cond[2].getReg())
> - .addImm(ARM64_AM::encodeLogicalImmediate(1ull << Cond[3].getImm(), 32));
> + .addImm(
> + AArch64_AM::encodeLogicalImmediate(1ull << Cond[3].getImm(), 32));
> else
> - BuildMI(MBB, I, DL, get(ARM64::ANDSXri), ARM64::XZR)
> + BuildMI(MBB, I, DL, get(AArch64::ANDSXri), AArch64::XZR)
> .addReg(Cond[2].getReg())
> - .addImm(ARM64_AM::encodeLogicalImmediate(1ull << Cond[3].getImm(), 64));
> + .addImm(
> + AArch64_AM::encodeLogicalImmediate(1ull << Cond[3].getImm(), 64));
> break;
> }
> }
> @@ -486,20 +488,20 @@ void ARM64InstrInfo::insertSelect(Machin
> unsigned Opc = 0;
> const TargetRegisterClass *RC = nullptr;
> bool TryFold = false;
> - if (MRI.constrainRegClass(DstReg, &ARM64::GPR64RegClass)) {
> - RC = &ARM64::GPR64RegClass;
> - Opc = ARM64::CSELXr;
> + if (MRI.constrainRegClass(DstReg, &AArch64::GPR64RegClass)) {
> + RC = &AArch64::GPR64RegClass;
> + Opc = AArch64::CSELXr;
> TryFold = true;
> - } else if (MRI.constrainRegClass(DstReg, &ARM64::GPR32RegClass)) {
> - RC = &ARM64::GPR32RegClass;
> - Opc = ARM64::CSELWr;
> + } else if (MRI.constrainRegClass(DstReg, &AArch64::GPR32RegClass)) {
> + RC = &AArch64::GPR32RegClass;
> + Opc = AArch64::CSELWr;
> TryFold = true;
> - } else if (MRI.constrainRegClass(DstReg, &ARM64::FPR64RegClass)) {
> - RC = &ARM64::FPR64RegClass;
> - Opc = ARM64::FCSELDrrr;
> - } else if (MRI.constrainRegClass(DstReg, &ARM64::FPR32RegClass)) {
> - RC = &ARM64::FPR32RegClass;
> - Opc = ARM64::FCSELSrrr;
> + } else if (MRI.constrainRegClass(DstReg, &AArch64::FPR64RegClass)) {
> + RC = &AArch64::FPR64RegClass;
> + Opc = AArch64::FCSELDrrr;
> + } else if (MRI.constrainRegClass(DstReg, &AArch64::FPR32RegClass)) {
> + RC = &AArch64::FPR32RegClass;
> + Opc = AArch64::FCSELSrrr;
> }
> assert(RC && "Unsupported regclass");
>
> @@ -510,7 +512,7 @@ void ARM64InstrInfo::insertSelect(Machin
> if (FoldedOpc) {
> // The folded opcodes csinc, csinv and csneg apply the operation to
> // FalseReg, so we need to invert the condition.
> - CC = ARM64CC::getInvertedCondCode(CC);
> + CC = AArch64CC::getInvertedCondCode(CC);
> TrueReg = FalseReg;
> } else
> FoldedOpc = canFoldIntoCSel(MRI, FalseReg, &NewVReg);
> @@ -533,14 +535,14 @@ void ARM64InstrInfo::insertSelect(Machin
> CC);
> }
>
> -bool ARM64InstrInfo::isCoalescableExtInstr(const MachineInstr &MI,
> - unsigned &SrcReg, unsigned &DstReg,
> - unsigned &SubIdx) const {
> +bool AArch64InstrInfo::isCoalescableExtInstr(const MachineInstr &MI,
> + unsigned &SrcReg, unsigned &DstReg,
> + unsigned &SubIdx) const {
> switch (MI.getOpcode()) {
> default:
> return false;
> - case ARM64::SBFMXri: // aka sxtw
> - case ARM64::UBFMXri: // aka uxtw
> + case AArch64::SBFMXri: // aka sxtw
> + case AArch64::UBFMXri: // aka uxtw
> // Check for the 32 -> 64 bit extension case, these instructions can do
> // much more.
> if (MI.getOperand(2).getImm() != 0 || MI.getOperand(3).getImm() != 31)
> @@ -548,7 +550,7 @@ bool ARM64InstrInfo::isCoalescableExtIns
> // This is a signed or unsigned 32 -> 64 bit extension.
> SrcReg = MI.getOperand(1).getReg();
> DstReg = MI.getOperand(0).getReg();
> - SubIdx = ARM64::sub_32;
> + SubIdx = AArch64::sub_32;
> return true;
> }
> }
> @@ -556,49 +558,49 @@ bool ARM64InstrInfo::isCoalescableExtIns
> /// analyzeCompare - For a comparison instruction, return the source registers
> /// in SrcReg and SrcReg2, and the value it compares against in CmpValue.
> /// Return true if the comparison instruction can be analyzed.
> -bool ARM64InstrInfo::analyzeCompare(const MachineInstr *MI, unsigned &SrcReg,
> - unsigned &SrcReg2, int &CmpMask,
> - int &CmpValue) const {
> +bool AArch64InstrInfo::analyzeCompare(const MachineInstr *MI, unsigned &SrcReg,
> + unsigned &SrcReg2, int &CmpMask,
> + int &CmpValue) const {
> switch (MI->getOpcode()) {
> default:
> break;
> - case ARM64::SUBSWrr:
> - case ARM64::SUBSWrs:
> - case ARM64::SUBSWrx:
> - case ARM64::SUBSXrr:
> - case ARM64::SUBSXrs:
> - case ARM64::SUBSXrx:
> - case ARM64::ADDSWrr:
> - case ARM64::ADDSWrs:
> - case ARM64::ADDSWrx:
> - case ARM64::ADDSXrr:
> - case ARM64::ADDSXrs:
> - case ARM64::ADDSXrx:
> + case AArch64::SUBSWrr:
> + case AArch64::SUBSWrs:
> + case AArch64::SUBSWrx:
> + case AArch64::SUBSXrr:
> + case AArch64::SUBSXrs:
> + case AArch64::SUBSXrx:
> + case AArch64::ADDSWrr:
> + case AArch64::ADDSWrs:
> + case AArch64::ADDSWrx:
> + case AArch64::ADDSXrr:
> + case AArch64::ADDSXrs:
> + case AArch64::ADDSXrx:
> // Replace SUBSWrr with SUBWrr if NZCV is not used.
> SrcReg = MI->getOperand(1).getReg();
> SrcReg2 = MI->getOperand(2).getReg();
> CmpMask = ~0;
> CmpValue = 0;
> return true;
> - case ARM64::SUBSWri:
> - case ARM64::ADDSWri:
> - case ARM64::SUBSXri:
> - case ARM64::ADDSXri:
> + case AArch64::SUBSWri:
> + case AArch64::ADDSWri:
> + case AArch64::SUBSXri:
> + case AArch64::ADDSXri:
> SrcReg = MI->getOperand(1).getReg();
> SrcReg2 = 0;
> CmpMask = ~0;
> CmpValue = MI->getOperand(2).getImm();
> return true;
> - case ARM64::ANDSWri:
> - case ARM64::ANDSXri:
> + case AArch64::ANDSWri:
> + case AArch64::ANDSXri:
> // ANDS does not use the same encoding scheme as the other xxxS
> // instructions.
> SrcReg = MI->getOperand(1).getReg();
> SrcReg2 = 0;
> CmpMask = ~0;
> - CmpValue = ARM64_AM::decodeLogicalImmediate(
> + CmpValue = AArch64_AM::decodeLogicalImmediate(
> MI->getOperand(2).getImm(),
> - MI->getOpcode() == ARM64::ANDSWri ? 32 : 64);
> + MI->getOpcode() == AArch64::ANDSWri ? 32 : 64);
> return true;
> }
>
> @@ -646,33 +648,33 @@ static bool UpdateOperandRegClass(Machin
>
> /// optimizeCompareInstr - Convert the instruction supplying the argument to the
> /// comparison into one that sets the zero bit in the flags register.
> -bool ARM64InstrInfo::optimizeCompareInstr(
> +bool AArch64InstrInfo::optimizeCompareInstr(
> MachineInstr *CmpInstr, unsigned SrcReg, unsigned SrcReg2, int CmpMask,
> int CmpValue, const MachineRegisterInfo *MRI) const {
>
> // Replace SUBSWrr with SUBWrr if NZCV is not used.
> - int Cmp_NZCV = CmpInstr->findRegisterDefOperandIdx(ARM64::NZCV, true);
> + int Cmp_NZCV = CmpInstr->findRegisterDefOperandIdx(AArch64::NZCV, true);
> if (Cmp_NZCV != -1) {
> unsigned NewOpc;
> switch (CmpInstr->getOpcode()) {
> default:
> return false;
> - case ARM64::ADDSWrr: NewOpc = ARM64::ADDWrr; break;
> - case ARM64::ADDSWri: NewOpc = ARM64::ADDWri; break;
> - case ARM64::ADDSWrs: NewOpc = ARM64::ADDWrs; break;
> - case ARM64::ADDSWrx: NewOpc = ARM64::ADDWrx; break;
> - case ARM64::ADDSXrr: NewOpc = ARM64::ADDXrr; break;
> - case ARM64::ADDSXri: NewOpc = ARM64::ADDXri; break;
> - case ARM64::ADDSXrs: NewOpc = ARM64::ADDXrs; break;
> - case ARM64::ADDSXrx: NewOpc = ARM64::ADDXrx; break;
> - case ARM64::SUBSWrr: NewOpc = ARM64::SUBWrr; break;
> - case ARM64::SUBSWri: NewOpc = ARM64::SUBWri; break;
> - case ARM64::SUBSWrs: NewOpc = ARM64::SUBWrs; break;
> - case ARM64::SUBSWrx: NewOpc = ARM64::SUBWrx; break;
> - case ARM64::SUBSXrr: NewOpc = ARM64::SUBXrr; break;
> - case ARM64::SUBSXri: NewOpc = ARM64::SUBXri; break;
> - case ARM64::SUBSXrs: NewOpc = ARM64::SUBXrs; break;
> - case ARM64::SUBSXrx: NewOpc = ARM64::SUBXrx; break;
> + case AArch64::ADDSWrr: NewOpc = AArch64::ADDWrr; break;
> + case AArch64::ADDSWri: NewOpc = AArch64::ADDWri; break;
> + case AArch64::ADDSWrs: NewOpc = AArch64::ADDWrs; break;
> + case AArch64::ADDSWrx: NewOpc = AArch64::ADDWrx; break;
> + case AArch64::ADDSXrr: NewOpc = AArch64::ADDXrr; break;
> + case AArch64::ADDSXri: NewOpc = AArch64::ADDXri; break;
> + case AArch64::ADDSXrs: NewOpc = AArch64::ADDXrs; break;
> + case AArch64::ADDSXrx: NewOpc = AArch64::ADDXrx; break;
> + case AArch64::SUBSWrr: NewOpc = AArch64::SUBWrr; break;
> + case AArch64::SUBSWri: NewOpc = AArch64::SUBWri; break;
> + case AArch64::SUBSWrs: NewOpc = AArch64::SUBWrs; break;
> + case AArch64::SUBSWrx: NewOpc = AArch64::SUBWrx; break;
> + case AArch64::SUBSXrr: NewOpc = AArch64::SUBXrr; break;
> + case AArch64::SUBSXri: NewOpc = AArch64::SUBXri; break;
> + case AArch64::SUBSXrs: NewOpc = AArch64::SUBXrs; break;
> + case AArch64::SUBSXrx: NewOpc = AArch64::SUBXrx; break;
> }
>
> const MCInstrDesc &MCID = get(NewOpc);
> @@ -718,8 +720,8 @@ bool ARM64InstrInfo::optimizeCompareInst
> for (--I; I != E; --I) {
> const MachineInstr &Instr = *I;
>
> - if (Instr.modifiesRegister(ARM64::NZCV, TRI) ||
> - Instr.readsRegister(ARM64::NZCV, TRI))
> + if (Instr.modifiesRegister(AArch64::NZCV, TRI) ||
> + Instr.readsRegister(AArch64::NZCV, TRI))
> // This instruction modifies or uses NZCV after the one we want to
> // change. We can't do this transformation.
> return false;
> @@ -732,29 +734,29 @@ bool ARM64InstrInfo::optimizeCompareInst
> switch (MI->getOpcode()) {
> default:
> return false;
> - case ARM64::ADDSWrr:
> - case ARM64::ADDSWri:
> - case ARM64::ADDSXrr:
> - case ARM64::ADDSXri:
> - case ARM64::SUBSWrr:
> - case ARM64::SUBSWri:
> - case ARM64::SUBSXrr:
> - case ARM64::SUBSXri:
> - break;
> - case ARM64::ADDWrr: NewOpc = ARM64::ADDSWrr; break;
> - case ARM64::ADDWri: NewOpc = ARM64::ADDSWri; break;
> - case ARM64::ADDXrr: NewOpc = ARM64::ADDSXrr; break;
> - case ARM64::ADDXri: NewOpc = ARM64::ADDSXri; break;
> - case ARM64::ADCWr: NewOpc = ARM64::ADCSWr; break;
> - case ARM64::ADCXr: NewOpc = ARM64::ADCSXr; break;
> - case ARM64::SUBWrr: NewOpc = ARM64::SUBSWrr; break;
> - case ARM64::SUBWri: NewOpc = ARM64::SUBSWri; break;
> - case ARM64::SUBXrr: NewOpc = ARM64::SUBSXrr; break;
> - case ARM64::SUBXri: NewOpc = ARM64::SUBSXri; break;
> - case ARM64::SBCWr: NewOpc = ARM64::SBCSWr; break;
> - case ARM64::SBCXr: NewOpc = ARM64::SBCSXr; break;
> - case ARM64::ANDWri: NewOpc = ARM64::ANDSWri; break;
> - case ARM64::ANDXri: NewOpc = ARM64::ANDSXri; break;
> + case AArch64::ADDSWrr:
> + case AArch64::ADDSWri:
> + case AArch64::ADDSXrr:
> + case AArch64::ADDSXri:
> + case AArch64::SUBSWrr:
> + case AArch64::SUBSWri:
> + case AArch64::SUBSXrr:
> + case AArch64::SUBSXri:
> + break;
> + case AArch64::ADDWrr: NewOpc = AArch64::ADDSWrr; break;
> + case AArch64::ADDWri: NewOpc = AArch64::ADDSWri; break;
> + case AArch64::ADDXrr: NewOpc = AArch64::ADDSXrr; break;
> + case AArch64::ADDXri: NewOpc = AArch64::ADDSXri; break;
> + case AArch64::ADCWr: NewOpc = AArch64::ADCSWr; break;
> + case AArch64::ADCXr: NewOpc = AArch64::ADCSXr; break;
> + case AArch64::SUBWrr: NewOpc = AArch64::SUBSWrr; break;
> + case AArch64::SUBWri: NewOpc = AArch64::SUBSWri; break;
> + case AArch64::SUBXrr: NewOpc = AArch64::SUBSXrr; break;
> + case AArch64::SUBXri: NewOpc = AArch64::SUBSXri; break;
> + case AArch64::SBCWr: NewOpc = AArch64::SBCSWr; break;
> + case AArch64::SBCXr: NewOpc = AArch64::SBCSXr; break;
> + case AArch64::ANDWri: NewOpc = AArch64::ANDSWri; break;
> + case AArch64::ANDXri: NewOpc = AArch64::ANDSXri; break;
> }
>
> // Scan forward for the use of NZCV.
> @@ -771,11 +773,11 @@ bool ARM64InstrInfo::optimizeCompareInst
> for (unsigned IO = 0, EO = Instr.getNumOperands(); !IsSafe && IO != EO;
> ++IO) {
> const MachineOperand &MO = Instr.getOperand(IO);
> - if (MO.isRegMask() && MO.clobbersPhysReg(ARM64::NZCV)) {
> + if (MO.isRegMask() && MO.clobbersPhysReg(AArch64::NZCV)) {
> IsSafe = true;
> break;
> }
> - if (!MO.isReg() || MO.getReg() != ARM64::NZCV)
> + if (!MO.isReg() || MO.getReg() != AArch64::NZCV)
> continue;
> if (MO.isDef()) {
> IsSafe = true;
> @@ -784,24 +786,24 @@ bool ARM64InstrInfo::optimizeCompareInst
>
> // Decode the condition code.
> unsigned Opc = Instr.getOpcode();
> - ARM64CC::CondCode CC;
> + AArch64CC::CondCode CC;
> switch (Opc) {
> default:
> return false;
> - case ARM64::Bcc:
> - CC = (ARM64CC::CondCode)Instr.getOperand(IO - 2).getImm();
> + case AArch64::Bcc:
> + CC = (AArch64CC::CondCode)Instr.getOperand(IO - 2).getImm();
> break;
> - case ARM64::CSINVWr:
> - case ARM64::CSINVXr:
> - case ARM64::CSINCWr:
> - case ARM64::CSINCXr:
> - case ARM64::CSELWr:
> - case ARM64::CSELXr:
> - case ARM64::CSNEGWr:
> - case ARM64::CSNEGXr:
> - case ARM64::FCSELSrrr:
> - case ARM64::FCSELDrrr:
> - CC = (ARM64CC::CondCode)Instr.getOperand(IO - 1).getImm();
> + case AArch64::CSINVWr:
> + case AArch64::CSINVXr:
> + case AArch64::CSINCWr:
> + case AArch64::CSINCXr:
> + case AArch64::CSELWr:
> + case AArch64::CSELXr:
> + case AArch64::CSNEGWr:
> + case AArch64::CSNEGXr:
> + case AArch64::FCSELSrrr:
> + case AArch64::FCSELDrrr:
> + CC = (AArch64CC::CondCode)Instr.getOperand(IO - 1).getImm();
> break;
> }
>
> @@ -810,12 +812,12 @@ bool ARM64InstrInfo::optimizeCompareInst
> default:
> // NZCV can be used multiple times, we should continue.
> break;
> - case ARM64CC::VS:
> - case ARM64CC::VC:
> - case ARM64CC::GE:
> - case ARM64CC::LT:
> - case ARM64CC::GT:
> - case ARM64CC::LE:
> + case AArch64CC::VS:
> + case AArch64CC::VC:
> + case AArch64CC::GE:
> + case AArch64CC::LT:
> + case AArch64CC::GT:
> + case AArch64CC::LE:
> return false;
> }
> }
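
The condition-code screen just above is the subtle part of optimizeCompareInstr: turning the defining ADD/SUB into its flag-setting form and deleting the compare-against-zero preserves N and Z, but C and V are generally not what the deleted compare would have produced, so any consumer of a V-based condition blocks the transform. A tiny standalone illustration of the N/Z claim (32-bit wrapping add assumed):

#include <cstdint>

struct NZ { bool N, Z; };

// Flags an ADDS would produce on the sum.
NZ flagsFromAdds(uint32_t A, uint32_t B) {
  uint32_t R = A + B;
  return { (int32_t)R < 0, R == 0 };
}

// Flags a following "cmp R, #0" (i.e. subs wzr, R, #0) would produce.
NZ flagsFromCmpZero(uint32_t R) {
  return { (int32_t)R < 0, R == 0 };
}

// For all A, B: flagsFromAdds(A, B) and flagsFromCmpZero(A + B) agree
// field-by-field, while C and V in general do not -- hence the
// VS/VC/GE/LT/GT/LE bail-out above.
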
> @@ -826,7 +828,7 @@ bool ARM64InstrInfo::optimizeCompareInst
> if (!IsSafe) {
> MachineBasicBlock *ParentBlock = CmpInstr->getParent();
> for (auto *MBB : ParentBlock->successors())
> - if (MBB->isLiveIn(ARM64::NZCV))
> + if (MBB->isLiveIn(AArch64::NZCV))
> return false;
> }
>
> @@ -836,47 +838,47 @@ bool ARM64InstrInfo::optimizeCompareInst
> bool succeeded = UpdateOperandRegClass(MI);
> (void)succeeded;
> assert(succeeded && "Some operands reg class are incompatible!");
> - MI->addRegisterDefined(ARM64::NZCV, TRI);
> + MI->addRegisterDefined(AArch64::NZCV, TRI);
> return true;
> }
>
> /// Return true if this instruction has a non-zero immediate
> -bool ARM64InstrInfo::hasShiftedReg(const MachineInstr *MI) const {
> +bool AArch64InstrInfo::hasShiftedReg(const MachineInstr *MI) const {
> switch (MI->getOpcode()) {
> default:
> break;
> - case ARM64::ADDSWrs:
> - case ARM64::ADDSXrs:
> - case ARM64::ADDWrs:
> - case ARM64::ADDXrs:
> - case ARM64::ANDSWrs:
> - case ARM64::ANDSXrs:
> - case ARM64::ANDWrs:
> - case ARM64::ANDXrs:
> - case ARM64::BICSWrs:
> - case ARM64::BICSXrs:
> - case ARM64::BICWrs:
> - case ARM64::BICXrs:
> - case ARM64::CRC32Brr:
> - case ARM64::CRC32CBrr:
> - case ARM64::CRC32CHrr:
> - case ARM64::CRC32CWrr:
> - case ARM64::CRC32CXrr:
> - case ARM64::CRC32Hrr:
> - case ARM64::CRC32Wrr:
> - case ARM64::CRC32Xrr:
> - case ARM64::EONWrs:
> - case ARM64::EONXrs:
> - case ARM64::EORWrs:
> - case ARM64::EORXrs:
> - case ARM64::ORNWrs:
> - case ARM64::ORNXrs:
> - case ARM64::ORRWrs:
> - case ARM64::ORRXrs:
> - case ARM64::SUBSWrs:
> - case ARM64::SUBSXrs:
> - case ARM64::SUBWrs:
> - case ARM64::SUBXrs:
> + case AArch64::ADDSWrs:
> + case AArch64::ADDSXrs:
> + case AArch64::ADDWrs:
> + case AArch64::ADDXrs:
> + case AArch64::ANDSWrs:
> + case AArch64::ANDSXrs:
> + case AArch64::ANDWrs:
> + case AArch64::ANDXrs:
> + case AArch64::BICSWrs:
> + case AArch64::BICSXrs:
> + case AArch64::BICWrs:
> + case AArch64::BICXrs:
> + case AArch64::CRC32Brr:
> + case AArch64::CRC32CBrr:
> + case AArch64::CRC32CHrr:
> + case AArch64::CRC32CWrr:
> + case AArch64::CRC32CXrr:
> + case AArch64::CRC32Hrr:
> + case AArch64::CRC32Wrr:
> + case AArch64::CRC32Xrr:
> + case AArch64::EONWrs:
> + case AArch64::EONXrs:
> + case AArch64::EORWrs:
> + case AArch64::EORXrs:
> + case AArch64::ORNWrs:
> + case AArch64::ORNXrs:
> + case AArch64::ORRWrs:
> + case AArch64::ORRXrs:
> + case AArch64::SUBSWrs:
> + case AArch64::SUBSXrs:
> + case AArch64::SUBWrs:
> + case AArch64::SUBXrs:
> if (MI->getOperand(3).isImm()) {
> unsigned val = MI->getOperand(3).getImm();
> return (val != 0);
> @@ -887,22 +889,22 @@ bool ARM64InstrInfo::hasShiftedReg(const
> }
>
> /// Return true if this instruction has a non-zero immediate
> -bool ARM64InstrInfo::hasExtendedReg(const MachineInstr *MI) const {
> +bool AArch64InstrInfo::hasExtendedReg(const MachineInstr *MI) const {
> switch (MI->getOpcode()) {
> default:
> break;
> - case ARM64::ADDSWrx:
> - case ARM64::ADDSXrx:
> - case ARM64::ADDSXrx64:
> - case ARM64::ADDWrx:
> - case ARM64::ADDXrx:
> - case ARM64::ADDXrx64:
> - case ARM64::SUBSWrx:
> - case ARM64::SUBSXrx:
> - case ARM64::SUBSXrx64:
> - case ARM64::SUBWrx:
> - case ARM64::SUBXrx:
> - case ARM64::SUBXrx64:
> + case AArch64::ADDSWrx:
> + case AArch64::ADDSXrx:
> + case AArch64::ADDSXrx64:
> + case AArch64::ADDWrx:
> + case AArch64::ADDXrx:
> + case AArch64::ADDXrx64:
> + case AArch64::SUBSWrx:
> + case AArch64::SUBSXrx:
> + case AArch64::SUBSXrx64:
> + case AArch64::SUBWrx:
> + case AArch64::SUBXrx:
> + case AArch64::SUBXrx64:
> if (MI->getOperand(3).isImm()) {
> unsigned val = MI->getOperand(3).getImm();
> return (val != 0);
> @@ -915,47 +917,47 @@ bool ARM64InstrInfo::hasExtendedReg(cons
>
> // Return true if this instruction simply sets its single destination register
> // to zero. This is equivalent to a register rename of the zero-register.
> -bool ARM64InstrInfo::isGPRZero(const MachineInstr *MI) const {
> +bool AArch64InstrInfo::isGPRZero(const MachineInstr *MI) const {
> switch (MI->getOpcode()) {
> default:
> break;
> - case ARM64::MOVZWi:
> - case ARM64::MOVZXi: // movz Rd, #0 (LSL #0)
> + case AArch64::MOVZWi:
> + case AArch64::MOVZXi: // movz Rd, #0 (LSL #0)
> if (MI->getOperand(1).isImm() && MI->getOperand(1).getImm() == 0) {
> assert(MI->getDesc().getNumOperands() == 3 &&
> MI->getOperand(2).getImm() == 0 && "invalid MOVZi operands");
> return true;
> }
> break;
> - case ARM64::ANDWri: // and Rd, Rzr, #imm
> - return MI->getOperand(1).getReg() == ARM64::WZR;
> - case ARM64::ANDXri:
> - return MI->getOperand(1).getReg() == ARM64::XZR;
> + case AArch64::ANDWri: // and Rd, Rzr, #imm
> + return MI->getOperand(1).getReg() == AArch64::WZR;
> + case AArch64::ANDXri:
> + return MI->getOperand(1).getReg() == AArch64::XZR;
> case TargetOpcode::COPY:
> - return MI->getOperand(1).getReg() == ARM64::WZR;
> + return MI->getOperand(1).getReg() == AArch64::WZR;
> }
> return false;
> }
>
> // Return true if this instruction simply renames a general register without
> // modifying bits.
> -bool ARM64InstrInfo::isGPRCopy(const MachineInstr *MI) const {
> +bool AArch64InstrInfo::isGPRCopy(const MachineInstr *MI) const {
> switch (MI->getOpcode()) {
> default:
> break;
> case TargetOpcode::COPY: {
> // GPR32 copies will be lowered to ORRXrs
> unsigned DstReg = MI->getOperand(0).getReg();
> - return (ARM64::GPR32RegClass.contains(DstReg) ||
> - ARM64::GPR64RegClass.contains(DstReg));
> + return (AArch64::GPR32RegClass.contains(DstReg) ||
> + AArch64::GPR64RegClass.contains(DstReg));
> }
> - case ARM64::ORRXrs: // orr Xd, Xzr, Xm (LSL #0)
> - if (MI->getOperand(1).getReg() == ARM64::XZR) {
> + case AArch64::ORRXrs: // orr Xd, Xzr, Xm (LSL #0)
> + if (MI->getOperand(1).getReg() == AArch64::XZR) {
> assert(MI->getDesc().getNumOperands() == 4 &&
> MI->getOperand(3).getImm() == 0 && "invalid ORRrs operands");
> return true;
> }
> - case ARM64::ADDXri: // add Xd, Xn, #0 (LSL #0)
> + case AArch64::ADDXri: // add Xd, Xn, #0 (LSL #0)
> if (MI->getOperand(2).getImm() == 0) {
> assert(MI->getDesc().getNumOperands() == 4 &&
> MI->getOperand(3).getImm() == 0 && "invalid ADDXri operands");
> @@ -967,17 +969,17 @@ bool ARM64InstrInfo::isGPRCopy(const Mac
>
> // Return true if this instruction simply renames a floating-point register without
> // modifying bits.
> -bool ARM64InstrInfo::isFPRCopy(const MachineInstr *MI) const {
> +bool AArch64InstrInfo::isFPRCopy(const MachineInstr *MI) const {
> switch (MI->getOpcode()) {
> default:
> break;
> case TargetOpcode::COPY: {
> // FPR64 copies will be lowered to ORR.16b
> unsigned DstReg = MI->getOperand(0).getReg();
> - return (ARM64::FPR64RegClass.contains(DstReg) ||
> - ARM64::FPR128RegClass.contains(DstReg));
> + return (AArch64::FPR64RegClass.contains(DstReg) ||
> + AArch64::FPR128RegClass.contains(DstReg));
> }
> - case ARM64::ORRv16i8:
> + case AArch64::ORRv16i8:
> if (MI->getOperand(1).getReg() == MI->getOperand(2).getReg()) {
> assert(MI->getDesc().getNumOperands() == 3 && MI->getOperand(0).isReg() &&
> "invalid ORRv16i8 operands");
> @@ -987,18 +989,18 @@ bool ARM64InstrInfo::isFPRCopy(const Mac
> return false;
> }
>
> -unsigned ARM64InstrInfo::isLoadFromStackSlot(const MachineInstr *MI,
> - int &FrameIndex) const {
> +unsigned AArch64InstrInfo::isLoadFromStackSlot(const MachineInstr *MI,
> + int &FrameIndex) const {
> switch (MI->getOpcode()) {
> default:
> break;
> - case ARM64::LDRWui:
> - case ARM64::LDRXui:
> - case ARM64::LDRBui:
> - case ARM64::LDRHui:
> - case ARM64::LDRSui:
> - case ARM64::LDRDui:
> - case ARM64::LDRQui:
> + case AArch64::LDRWui:
> + case AArch64::LDRXui:
> + case AArch64::LDRBui:
> + case AArch64::LDRHui:
> + case AArch64::LDRSui:
> + case AArch64::LDRDui:
> + case AArch64::LDRQui:
> if (MI->getOperand(0).getSubReg() == 0 && MI->getOperand(1).isFI() &&
> MI->getOperand(2).isImm() && MI->getOperand(2).getImm() == 0) {
> FrameIndex = MI->getOperand(1).getIndex();
> @@ -1010,18 +1012,18 @@ unsigned ARM64InstrInfo::isLoadFromStack
> return 0;
> }
>
> -unsigned ARM64InstrInfo::isStoreToStackSlot(const MachineInstr *MI,
> - int &FrameIndex) const {
> +unsigned AArch64InstrInfo::isStoreToStackSlot(const MachineInstr *MI,
> + int &FrameIndex) const {
> switch (MI->getOpcode()) {
> default:
> break;
> - case ARM64::STRWui:
> - case ARM64::STRXui:
> - case ARM64::STRBui:
> - case ARM64::STRHui:
> - case ARM64::STRSui:
> - case ARM64::STRDui:
> - case ARM64::STRQui:
> + case AArch64::STRWui:
> + case AArch64::STRXui:
> + case AArch64::STRBui:
> + case AArch64::STRHui:
> + case AArch64::STRSui:
> + case AArch64::STRDui:
> + case AArch64::STRQui:
> if (MI->getOperand(0).getSubReg() == 0 && MI->getOperand(1).isFI() &&
> MI->getOperand(2).isImm() && MI->getOperand(2).getImm() == 0) {
> FrameIndex = MI->getOperand(1).getIndex();
> @@ -1035,66 +1037,66 @@ unsigned ARM64InstrInfo::isStoreToStackS
> /// Return true if this load/store scales or extends its register offset.
> /// This refers to scaling a dynamic index as opposed to scaled immediates.
> /// MI should be a memory op that allows scaled addressing.
> -bool ARM64InstrInfo::isScaledAddr(const MachineInstr *MI) const {
> +bool AArch64InstrInfo::isScaledAddr(const MachineInstr *MI) const {
> switch (MI->getOpcode()) {
> default:
> break;
> - case ARM64::LDRBBroW:
> - case ARM64::LDRBroW:
> - case ARM64::LDRDroW:
> - case ARM64::LDRHHroW:
> - case ARM64::LDRHroW:
> - case ARM64::LDRQroW:
> - case ARM64::LDRSBWroW:
> - case ARM64::LDRSBXroW:
> - case ARM64::LDRSHWroW:
> - case ARM64::LDRSHXroW:
> - case ARM64::LDRSWroW:
> - case ARM64::LDRSroW:
> - case ARM64::LDRWroW:
> - case ARM64::LDRXroW:
> - case ARM64::STRBBroW:
> - case ARM64::STRBroW:
> - case ARM64::STRDroW:
> - case ARM64::STRHHroW:
> - case ARM64::STRHroW:
> - case ARM64::STRQroW:
> - case ARM64::STRSroW:
> - case ARM64::STRWroW:
> - case ARM64::STRXroW:
> - case ARM64::LDRBBroX:
> - case ARM64::LDRBroX:
> - case ARM64::LDRDroX:
> - case ARM64::LDRHHroX:
> - case ARM64::LDRHroX:
> - case ARM64::LDRQroX:
> - case ARM64::LDRSBWroX:
> - case ARM64::LDRSBXroX:
> - case ARM64::LDRSHWroX:
> - case ARM64::LDRSHXroX:
> - case ARM64::LDRSWroX:
> - case ARM64::LDRSroX:
> - case ARM64::LDRWroX:
> - case ARM64::LDRXroX:
> - case ARM64::STRBBroX:
> - case ARM64::STRBroX:
> - case ARM64::STRDroX:
> - case ARM64::STRHHroX:
> - case ARM64::STRHroX:
> - case ARM64::STRQroX:
> - case ARM64::STRSroX:
> - case ARM64::STRWroX:
> - case ARM64::STRXroX:
> + case AArch64::LDRBBroW:
> + case AArch64::LDRBroW:
> + case AArch64::LDRDroW:
> + case AArch64::LDRHHroW:
> + case AArch64::LDRHroW:
> + case AArch64::LDRQroW:
> + case AArch64::LDRSBWroW:
> + case AArch64::LDRSBXroW:
> + case AArch64::LDRSHWroW:
> + case AArch64::LDRSHXroW:
> + case AArch64::LDRSWroW:
> + case AArch64::LDRSroW:
> + case AArch64::LDRWroW:
> + case AArch64::LDRXroW:
> + case AArch64::STRBBroW:
> + case AArch64::STRBroW:
> + case AArch64::STRDroW:
> + case AArch64::STRHHroW:
> + case AArch64::STRHroW:
> + case AArch64::STRQroW:
> + case AArch64::STRSroW:
> + case AArch64::STRWroW:
> + case AArch64::STRXroW:
> + case AArch64::LDRBBroX:
> + case AArch64::LDRBroX:
> + case AArch64::LDRDroX:
> + case AArch64::LDRHHroX:
> + case AArch64::LDRHroX:
> + case AArch64::LDRQroX:
> + case AArch64::LDRSBWroX:
> + case AArch64::LDRSBXroX:
> + case AArch64::LDRSHWroX:
> + case AArch64::LDRSHXroX:
> + case AArch64::LDRSWroX:
> + case AArch64::LDRSroX:
> + case AArch64::LDRWroX:
> + case AArch64::LDRXroX:
> + case AArch64::STRBBroX:
> + case AArch64::STRBroX:
> + case AArch64::STRDroX:
> + case AArch64::STRHHroX:
> + case AArch64::STRHroX:
> + case AArch64::STRQroX:
> + case AArch64::STRSroX:
> + case AArch64::STRWroX:
> + case AArch64::STRXroX:
>
> unsigned Val = MI->getOperand(3).getImm();
> - ARM64_AM::ShiftExtendType ExtType = ARM64_AM::getMemExtendType(Val);
> - return (ExtType != ARM64_AM::UXTX) || ARM64_AM::getMemDoShift(Val);
> + AArch64_AM::ShiftExtendType ExtType = AArch64_AM::getMemExtendType(Val);
> + return (ExtType != AArch64_AM::UXTX) || AArch64_AM::getMemDoShift(Val);
> }
> return false;
> }
>
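
The tail of isScaledAddr above is the interesting bit: operand 3 of the ro (register-offset) forms carries the extend kind plus a "do shift" bit, and the address only counts as plain if it is an unshifted UXTX/LSL of an X register. A minimal model of that decision, using a made-up two-field representation rather than the real AArch64_AM packing:

  #include <cstdio>

  // Simplified two-field stand-in for the extend/shift operand of the ro
  // (register-offset) forms; the real AArch64_AM code packs both into one
  // immediate, which is why the patch calls getMemExtendType/getMemDoShift.
  enum class MemExtend { LSL, UXTW, SXTW, SXTX }; // LSL here means "UXTX, LSL"
  struct MemExtendField {
    MemExtend Ext;
    bool DoShift; // scale the index register by the access size?
  };

  // Same decision as "(ExtType != UXTX) || getMemDoShift(Val)": the address
  // is considered scaled/extended unless it is an unshifted LSL of an X reg.
  static bool isScaledAddrModel(MemExtendField F) {
    return F.Ext != MemExtend::LSL || F.DoShift;
  }

  int main() {
    std::printf("%d\n", (int)isScaledAddrModel({MemExtend::LSL, false}));  // 0: [x0, x1]
    std::printf("%d\n", (int)isScaledAddrModel({MemExtend::LSL, true}));   // 1: [x0, x1, lsl #3]
    std::printf("%d\n", (int)isScaledAddrModel({MemExtend::UXTW, false})); // 1: [x0, w1, uxtw]
  }
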
> /// Check all MachineMemOperands for a hint to suppress pairing.
> -bool ARM64InstrInfo::isLdStPairSuppressed(const MachineInstr *MI) const {
> +bool AArch64InstrInfo::isLdStPairSuppressed(const MachineInstr *MI) const {
> assert(MOSuppressPair < (1 << MachineMemOperand::MOTargetNumBits) &&
> "Too many target MO flags");
> for (auto *MM : MI->memoperands()) {
> @@ -1107,7 +1109,7 @@ bool ARM64InstrInfo::isLdStPairSuppresse
> }
>
> /// Set a flag on the first MachineMemOperand to suppress pairing.
> -void ARM64InstrInfo::suppressLdStPair(MachineInstr *MI) const {
> +void AArch64InstrInfo::suppressLdStPair(MachineInstr *MI) const {
> if (MI->memoperands_empty())
> return;
>
> @@ -1117,22 +1119,23 @@ void ARM64InstrInfo::suppressLdStPair(Ma
> ->setFlags(MOSuppressPair << MachineMemOperand::MOTargetStartBit);
> }
>
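
For the pairing-suppression pair of functions just above: the hint is a target-reserved bit in the MachineMemOperand flags word; suppressLdStPair stamps it on the first memory operand and isLdStPairSuppressed scans all of them for it. Roughly, with a toy flags word and an invented start bit (the real MOTargetStartBit value lives in MachineMemOperand.h):

  #include <cstdint>
  #include <vector>

  // Toy memory operand: just a flags word. Pretend generic flags occupy the
  // low bits and target hints start at bit 8; 8 is only for illustration.
  constexpr unsigned TargetStartBit = 8;
  constexpr unsigned SuppressPair = 1; // the hint bit, before shifting

  struct ToyMMO { std::uint64_t Flags = 0; };
  struct ToyMemInstr { std::vector<ToyMMO> MemOps; };

  // suppressLdStPair: stamp the hint on the first memory operand only.
  static void suppressPairModel(ToyMemInstr &MI) {
    if (MI.MemOps.empty())
      return;
    MI.MemOps.front().Flags |= std::uint64_t(SuppressPair) << TargetStartBit;
  }

  // isLdStPairSuppressed: any memory operand carrying the hint is enough.
  static bool isPairSuppressedModel(const ToyMemInstr &MI) {
    for (const ToyMMO &MMO : MI.MemOps)
      if (MMO.Flags & (std::uint64_t(SuppressPair) << TargetStartBit))
        return true;
    return false;
  }

  int main() {
    ToyMemInstr MI;
    MI.MemOps.push_back(ToyMMO{});
    suppressPairModel(MI);
    return isPairSuppressedModel(MI) ? 0 : 1;
  }
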
> -bool ARM64InstrInfo::getLdStBaseRegImmOfs(MachineInstr *LdSt, unsigned &BaseReg,
> - unsigned &Offset,
> - const TargetRegisterInfo *TRI) const {
> +bool
> +AArch64InstrInfo::getLdStBaseRegImmOfs(MachineInstr *LdSt, unsigned &BaseReg,
> + unsigned &Offset,
> + const TargetRegisterInfo *TRI) const {
> switch (LdSt->getOpcode()) {
> default:
> return false;
> - case ARM64::STRSui:
> - case ARM64::STRDui:
> - case ARM64::STRQui:
> - case ARM64::STRXui:
> - case ARM64::STRWui:
> - case ARM64::LDRSui:
> - case ARM64::LDRDui:
> - case ARM64::LDRQui:
> - case ARM64::LDRXui:
> - case ARM64::LDRWui:
> + case AArch64::STRSui:
> + case AArch64::STRDui:
> + case AArch64::STRQui:
> + case AArch64::STRXui:
> + case AArch64::STRWui:
> + case AArch64::LDRSui:
> + case AArch64::LDRDui:
> + case AArch64::LDRQui:
> + case AArch64::LDRXui:
> + case AArch64::LDRWui:
> if (!LdSt->getOperand(1).isReg() || !LdSt->getOperand(2).isImm())
> return false;
> BaseReg = LdSt->getOperand(1).getReg();
> @@ -1146,9 +1149,9 @@ bool ARM64InstrInfo::getLdStBaseRegImmOf
> /// Detect opportunities for ldp/stp formation.
> ///
> /// Only called for LdSt for which getLdStBaseRegImmOfs returns true.
> -bool ARM64InstrInfo::shouldClusterLoads(MachineInstr *FirstLdSt,
> - MachineInstr *SecondLdSt,
> - unsigned NumLoads) const {
> +bool AArch64InstrInfo::shouldClusterLoads(MachineInstr *FirstLdSt,
> + MachineInstr *SecondLdSt,
> + unsigned NumLoads) const {
> // Only cluster up to a single pair.
> if (NumLoads > 1)
> return false;
> @@ -1164,33 +1167,33 @@ bool ARM64InstrInfo::shouldClusterLoads(
> return Ofs1 + 1 == Ofs2;
> }
>
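
shouldClusterLoads above works in scaled units, so "Ofs1 + 1 == Ofs2" means the second load sits exactly one access-width after the first, i.e. a candidate for ldp once the scheduler keeps them adjacent. A tiny model of the heuristic:

  #include <cassert>

  // Offsets are in scaled units here (8-byte slots for LDRXui and friends),
  // so Ofs1 + 1 == Ofs2 means the two loads touch adjacent memory and are a
  // candidate for ldp formation; NumLoads is how many are already clustered.
  static bool shouldClusterLoadsModel(int Ofs1, int Ofs2, unsigned NumLoads) {
    if (NumLoads > 1)        // only cluster up to a single pair
      return false;
    return Ofs1 + 1 == Ofs2; // adjacent scaled offsets
  }

  int main() {
    assert(shouldClusterLoadsModel(4, 5, 1));  // [sp, #32] then [sp, #40]
    assert(!shouldClusterLoadsModel(4, 6, 1)); // one slot of padding: no pair
    assert(!shouldClusterLoadsModel(4, 5, 2)); // already holding a pair
    return 0;
  }
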
> -bool ARM64InstrInfo::shouldScheduleAdjacent(MachineInstr *First,
> - MachineInstr *Second) const {
> +bool AArch64InstrInfo::shouldScheduleAdjacent(MachineInstr *First,
> + MachineInstr *Second) const {
> // Cyclone can fuse CMN, CMP followed by Bcc.
>
> // FIXME: B0 can also fuse:
> // AND, BIC, ORN, ORR, or EOR (optional S) followed by Bcc or CBZ or CBNZ.
> - if (Second->getOpcode() != ARM64::Bcc)
> + if (Second->getOpcode() != AArch64::Bcc)
> return false;
> switch (First->getOpcode()) {
> default:
> return false;
> - case ARM64::SUBSWri:
> - case ARM64::ADDSWri:
> - case ARM64::ANDSWri:
> - case ARM64::SUBSXri:
> - case ARM64::ADDSXri:
> - case ARM64::ANDSXri:
> + case AArch64::SUBSWri:
> + case AArch64::ADDSWri:
> + case AArch64::ANDSWri:
> + case AArch64::SUBSXri:
> + case AArch64::ADDSXri:
> + case AArch64::ANDSXri:
> return true;
> }
> }
>
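
And the fusion hook right above is just a whitelist: a flag-setting ALU-immediate (which is what CMP/CMN/TST are aliases of) immediately followed by Bcc is kept adjacent for Cyclone. A sketch, with opcode names as strings purely for illustration:

  #include <set>
  #include <string>

  // Whitelist model of the fusion hook: a flag-setting ALU-immediate op
  // (CMP/CMN/TST are aliases of SUBS/ADDS/ANDS) followed directly by Bcc.
  // Opcode names are strings here; the real check compares enum values.
  static bool shouldScheduleAdjacentModel(const std::string &First,
                                          const std::string &Second) {
    if (Second != "Bcc")
      return false;
    static const std::set<std::string> Fusable = {
        "SUBSWri", "ADDSWri", "ANDSWri", "SUBSXri", "ADDSXri", "ANDSXri"};
    return Fusable.count(First) != 0;
  }

  int main() {
    // "cmp w0, #0; b.eq ..." is SUBSWri + Bcc underneath, so it fuses.
    return shouldScheduleAdjacentModel("SUBSWri", "Bcc") ? 0 : 1;
  }
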
> -MachineInstr *ARM64InstrInfo::emitFrameIndexDebugValue(MachineFunction &MF,
> - int FrameIx,
> - uint64_t Offset,
> - const MDNode *MDPtr,
> - DebugLoc DL) const {
> - MachineInstrBuilder MIB = BuildMI(MF, DL, get(ARM64::DBG_VALUE))
> +MachineInstr *AArch64InstrInfo::emitFrameIndexDebugValue(MachineFunction &MF,
> + int FrameIx,
> + uint64_t Offset,
> + const MDNode *MDPtr,
> + DebugLoc DL) const {
> + MachineInstrBuilder MIB = BuildMI(MF, DL, get(AArch64::DBG_VALUE))
> .addFrameIndex(FrameIx)
> .addImm(0)
> .addImm(Offset)
> @@ -1217,12 +1220,10 @@ static bool forwardCopyWillClobberTuple(
> return ((DestReg - SrcReg) & 0x1f) < NumRegs;
> }
>
> -void ARM64InstrInfo::copyPhysRegTuple(MachineBasicBlock &MBB,
> - MachineBasicBlock::iterator I,
> - DebugLoc DL, unsigned DestReg,
> - unsigned SrcReg, bool KillSrc,
> - unsigned Opcode,
> - llvm::ArrayRef<unsigned> Indices) const {
> +void AArch64InstrInfo::copyPhysRegTuple(
> + MachineBasicBlock &MBB, MachineBasicBlock::iterator I, DebugLoc DL,
> + unsigned DestReg, unsigned SrcReg, bool KillSrc, unsigned Opcode,
> + llvm::ArrayRef<unsigned> Indices) const {
> assert(getSubTarget().hasNEON() &&
> "Unexpected register copy without NEON");
> const TargetRegisterInfo *TRI = &getRegisterInfo();
> @@ -1245,258 +1246,263 @@ void ARM64InstrInfo::copyPhysRegTuple(Ma
> }
> }
>
> -void ARM64InstrInfo::copyPhysReg(MachineBasicBlock &MBB,
> - MachineBasicBlock::iterator I, DebugLoc DL,
> - unsigned DestReg, unsigned SrcReg,
> - bool KillSrc) const {
> - if (ARM64::GPR32spRegClass.contains(DestReg) &&
> - (ARM64::GPR32spRegClass.contains(SrcReg) || SrcReg == ARM64::WZR)) {
> +void AArch64InstrInfo::copyPhysReg(MachineBasicBlock &MBB,
> + MachineBasicBlock::iterator I, DebugLoc DL,
> + unsigned DestReg, unsigned SrcReg,
> + bool KillSrc) const {
> + if (AArch64::GPR32spRegClass.contains(DestReg) &&
> + (AArch64::GPR32spRegClass.contains(SrcReg) || SrcReg == AArch64::WZR)) {
> const TargetRegisterInfo *TRI = &getRegisterInfo();
>
> - if (DestReg == ARM64::WSP || SrcReg == ARM64::WSP) {
> + if (DestReg == AArch64::WSP || SrcReg == AArch64::WSP) {
> // If either operand is WSP, expand to ADD #0.
> if (Subtarget.hasZeroCycleRegMove()) {
> // Cyclone recognizes "ADD Xd, Xn, #0" as a zero-cycle register move.
> - unsigned DestRegX = TRI->getMatchingSuperReg(DestReg, ARM64::sub_32,
> - &ARM64::GPR64spRegClass);
> - unsigned SrcRegX = TRI->getMatchingSuperReg(SrcReg, ARM64::sub_32,
> - &ARM64::GPR64spRegClass);
> + unsigned DestRegX = TRI->getMatchingSuperReg(DestReg, AArch64::sub_32,
> + &AArch64::GPR64spRegClass);
> + unsigned SrcRegX = TRI->getMatchingSuperReg(SrcReg, AArch64::sub_32,
> + &AArch64::GPR64spRegClass);
> // This instruction is reading and writing X registers. This may upset
> // the register scavenger and machine verifier, so we need to indicate
> // that we are reading an undefined value from SrcRegX, but a proper
> // value from SrcReg.
> - BuildMI(MBB, I, DL, get(ARM64::ADDXri), DestRegX)
> + BuildMI(MBB, I, DL, get(AArch64::ADDXri), DestRegX)
> .addReg(SrcRegX, RegState::Undef)
> .addImm(0)
> - .addImm(ARM64_AM::getShifterImm(ARM64_AM::LSL, 0))
> + .addImm(AArch64_AM::getShifterImm(AArch64_AM::LSL, 0))
> .addReg(SrcReg, RegState::Implicit | getKillRegState(KillSrc));
> } else {
> - BuildMI(MBB, I, DL, get(ARM64::ADDWri), DestReg)
> + BuildMI(MBB, I, DL, get(AArch64::ADDWri), DestReg)
> .addReg(SrcReg, getKillRegState(KillSrc))
> .addImm(0)
> - .addImm(ARM64_AM::getShifterImm(ARM64_AM::LSL, 0));
> + .addImm(AArch64_AM::getShifterImm(AArch64_AM::LSL, 0));
> }
> - } else if (SrcReg == ARM64::WZR && Subtarget.hasZeroCycleZeroing()) {
> - BuildMI(MBB, I, DL, get(ARM64::MOVZWi), DestReg).addImm(0).addImm(
> - ARM64_AM::getShifterImm(ARM64_AM::LSL, 0));
> + } else if (SrcReg == AArch64::WZR && Subtarget.hasZeroCycleZeroing()) {
> + BuildMI(MBB, I, DL, get(AArch64::MOVZWi), DestReg).addImm(0).addImm(
> + AArch64_AM::getShifterImm(AArch64_AM::LSL, 0));
> } else {
> if (Subtarget.hasZeroCycleRegMove()) {
> // Cyclone recognizes "ORR Xd, XZR, Xm" as a zero-cycle register move.
> - unsigned DestRegX = TRI->getMatchingSuperReg(DestReg, ARM64::sub_32,
> - &ARM64::GPR64spRegClass);
> - unsigned SrcRegX = TRI->getMatchingSuperReg(SrcReg, ARM64::sub_32,
> - &ARM64::GPR64spRegClass);
> + unsigned DestRegX = TRI->getMatchingSuperReg(DestReg, AArch64::sub_32,
> + &AArch64::GPR64spRegClass);
> + unsigned SrcRegX = TRI->getMatchingSuperReg(SrcReg, AArch64::sub_32,
> + &AArch64::GPR64spRegClass);
> // This instruction is reading and writing X registers. This may upset
> // the register scavenger and machine verifier, so we need to indicate
> // that we are reading an undefined value from SrcRegX, but a proper
> // value from SrcReg.
> - BuildMI(MBB, I, DL, get(ARM64::ORRXrr), DestRegX)
> - .addReg(ARM64::XZR)
> + BuildMI(MBB, I, DL, get(AArch64::ORRXrr), DestRegX)
> + .addReg(AArch64::XZR)
> .addReg(SrcRegX, RegState::Undef)
> .addReg(SrcReg, RegState::Implicit | getKillRegState(KillSrc));
> } else {
> // Otherwise, expand to ORR WZR.
> - BuildMI(MBB, I, DL, get(ARM64::ORRWrr), DestReg)
> - .addReg(ARM64::WZR)
> + BuildMI(MBB, I, DL, get(AArch64::ORRWrr), DestReg)
> + .addReg(AArch64::WZR)
> .addReg(SrcReg, getKillRegState(KillSrc));
> }
> }
> return;
> }
>
> - if (ARM64::GPR64spRegClass.contains(DestReg) &&
> - (ARM64::GPR64spRegClass.contains(SrcReg) || SrcReg == ARM64::XZR)) {
> - if (DestReg == ARM64::SP || SrcReg == ARM64::SP) {
> + if (AArch64::GPR64spRegClass.contains(DestReg) &&
> + (AArch64::GPR64spRegClass.contains(SrcReg) || SrcReg == AArch64::XZR)) {
> + if (DestReg == AArch64::SP || SrcReg == AArch64::SP) {
> // If either operand is SP, expand to ADD #0.
> - BuildMI(MBB, I, DL, get(ARM64::ADDXri), DestReg)
> + BuildMI(MBB, I, DL, get(AArch64::ADDXri), DestReg)
> .addReg(SrcReg, getKillRegState(KillSrc))
> .addImm(0)
> - .addImm(ARM64_AM::getShifterImm(ARM64_AM::LSL, 0));
> - } else if (SrcReg == ARM64::XZR && Subtarget.hasZeroCycleZeroing()) {
> - BuildMI(MBB, I, DL, get(ARM64::MOVZXi), DestReg).addImm(0).addImm(
> - ARM64_AM::getShifterImm(ARM64_AM::LSL, 0));
> + .addImm(AArch64_AM::getShifterImm(AArch64_AM::LSL, 0));
> + } else if (SrcReg == AArch64::XZR && Subtarget.hasZeroCycleZeroing()) {
> + BuildMI(MBB, I, DL, get(AArch64::MOVZXi), DestReg).addImm(0).addImm(
> + AArch64_AM::getShifterImm(AArch64_AM::LSL, 0));
> } else {
> // Otherwise, expand to ORR XZR.
> - BuildMI(MBB, I, DL, get(ARM64::ORRXrr), DestReg)
> - .addReg(ARM64::XZR)
> + BuildMI(MBB, I, DL, get(AArch64::ORRXrr), DestReg)
> + .addReg(AArch64::XZR)
> .addReg(SrcReg, getKillRegState(KillSrc));
> }
> return;
> }
>
> // Copy a DDDD register quad by copying the individual sub-registers.
> - if (ARM64::DDDDRegClass.contains(DestReg) &&
> - ARM64::DDDDRegClass.contains(SrcReg)) {
> - static const unsigned Indices[] = { ARM64::dsub0, ARM64::dsub1,
> - ARM64::dsub2, ARM64::dsub3 };
> - copyPhysRegTuple(MBB, I, DL, DestReg, SrcReg, KillSrc, ARM64::ORRv8i8,
> + if (AArch64::DDDDRegClass.contains(DestReg) &&
> + AArch64::DDDDRegClass.contains(SrcReg)) {
> + static const unsigned Indices[] = { AArch64::dsub0, AArch64::dsub1,
> + AArch64::dsub2, AArch64::dsub3 };
> + copyPhysRegTuple(MBB, I, DL, DestReg, SrcReg, KillSrc, AArch64::ORRv8i8,
> Indices);
> return;
> }
>
> // Copy a DDD register triple by copying the individual sub-registers.
> - if (ARM64::DDDRegClass.contains(DestReg) &&
> - ARM64::DDDRegClass.contains(SrcReg)) {
> - static const unsigned Indices[] = { ARM64::dsub0, ARM64::dsub1,
> - ARM64::dsub2 };
> - copyPhysRegTuple(MBB, I, DL, DestReg, SrcReg, KillSrc, ARM64::ORRv8i8,
> + if (AArch64::DDDRegClass.contains(DestReg) &&
> + AArch64::DDDRegClass.contains(SrcReg)) {
> + static const unsigned Indices[] = { AArch64::dsub0, AArch64::dsub1,
> + AArch64::dsub2 };
> + copyPhysRegTuple(MBB, I, DL, DestReg, SrcReg, KillSrc, AArch64::ORRv8i8,
> Indices);
> return;
> }
>
> // Copy a DD register pair by copying the individual sub-registers.
> - if (ARM64::DDRegClass.contains(DestReg) &&
> - ARM64::DDRegClass.contains(SrcReg)) {
> - static const unsigned Indices[] = { ARM64::dsub0, ARM64::dsub1 };
> - copyPhysRegTuple(MBB, I, DL, DestReg, SrcReg, KillSrc, ARM64::ORRv8i8,
> + if (AArch64::DDRegClass.contains(DestReg) &&
> + AArch64::DDRegClass.contains(SrcReg)) {
> + static const unsigned Indices[] = { AArch64::dsub0, AArch64::dsub1 };
> + copyPhysRegTuple(MBB, I, DL, DestReg, SrcReg, KillSrc, AArch64::ORRv8i8,
> Indices);
> return;
> }
>
> // Copy a QQQQ register quad by copying the individual sub-registers.
> - if (ARM64::QQQQRegClass.contains(DestReg) &&
> - ARM64::QQQQRegClass.contains(SrcReg)) {
> - static const unsigned Indices[] = { ARM64::qsub0, ARM64::qsub1,
> - ARM64::qsub2, ARM64::qsub3 };
> - copyPhysRegTuple(MBB, I, DL, DestReg, SrcReg, KillSrc, ARM64::ORRv16i8,
> + if (AArch64::QQQQRegClass.contains(DestReg) &&
> + AArch64::QQQQRegClass.contains(SrcReg)) {
> + static const unsigned Indices[] = { AArch64::qsub0, AArch64::qsub1,
> + AArch64::qsub2, AArch64::qsub3 };
> + copyPhysRegTuple(MBB, I, DL, DestReg, SrcReg, KillSrc, AArch64::ORRv16i8,
> Indices);
> return;
> }
>
> // Copy a QQQ register triple by copying the individual sub-registers.
> - if (ARM64::QQQRegClass.contains(DestReg) &&
> - ARM64::QQQRegClass.contains(SrcReg)) {
> - static const unsigned Indices[] = { ARM64::qsub0, ARM64::qsub1,
> - ARM64::qsub2 };
> - copyPhysRegTuple(MBB, I, DL, DestReg, SrcReg, KillSrc, ARM64::ORRv16i8,
> + if (AArch64::QQQRegClass.contains(DestReg) &&
> + AArch64::QQQRegClass.contains(SrcReg)) {
> + static const unsigned Indices[] = { AArch64::qsub0, AArch64::qsub1,
> + AArch64::qsub2 };
> + copyPhysRegTuple(MBB, I, DL, DestReg, SrcReg, KillSrc, AArch64::ORRv16i8,
> Indices);
> return;
> }
>
> // Copy a QQ register pair by copying the individual sub-registers.
> - if (ARM64::QQRegClass.contains(DestReg) &&
> - ARM64::QQRegClass.contains(SrcReg)) {
> - static const unsigned Indices[] = { ARM64::qsub0, ARM64::qsub1 };
> - copyPhysRegTuple(MBB, I, DL, DestReg, SrcReg, KillSrc, ARM64::ORRv16i8,
> + if (AArch64::QQRegClass.contains(DestReg) &&
> + AArch64::QQRegClass.contains(SrcReg)) {
> + static const unsigned Indices[] = { AArch64::qsub0, AArch64::qsub1 };
> + copyPhysRegTuple(MBB, I, DL, DestReg, SrcReg, KillSrc, AArch64::ORRv16i8,
> Indices);
> return;
> }
>
> - if (ARM64::FPR128RegClass.contains(DestReg) &&
> - ARM64::FPR128RegClass.contains(SrcReg)) {
> + if (AArch64::FPR128RegClass.contains(DestReg) &&
> + AArch64::FPR128RegClass.contains(SrcReg)) {
> if(getSubTarget().hasNEON()) {
> - BuildMI(MBB, I, DL, get(ARM64::ORRv16i8), DestReg).addReg(SrcReg).addReg(
> - SrcReg, getKillRegState(KillSrc));
> + BuildMI(MBB, I, DL, get(AArch64::ORRv16i8), DestReg)
> + .addReg(SrcReg)
> + .addReg(SrcReg, getKillRegState(KillSrc));
> } else {
> - BuildMI(MBB, I, DL, get(ARM64::STRQpre))
> - .addReg(ARM64::SP, RegState::Define)
> + BuildMI(MBB, I, DL, get(AArch64::STRQpre))
> + .addReg(AArch64::SP, RegState::Define)
> .addReg(SrcReg, getKillRegState(KillSrc))
> - .addReg(ARM64::SP)
> + .addReg(AArch64::SP)
> .addImm(-16);
> - BuildMI(MBB, I, DL, get(ARM64::LDRQpre))
> - .addReg(ARM64::SP, RegState::Define)
> + BuildMI(MBB, I, DL, get(AArch64::LDRQpre))
> + .addReg(AArch64::SP, RegState::Define)
> .addReg(DestReg, RegState::Define)
> - .addReg(ARM64::SP)
> + .addReg(AArch64::SP)
> .addImm(16);
> }
> return;
> }
>
> - if (ARM64::FPR64RegClass.contains(DestReg) &&
> - ARM64::FPR64RegClass.contains(SrcReg)) {
> + if (AArch64::FPR64RegClass.contains(DestReg) &&
> + AArch64::FPR64RegClass.contains(SrcReg)) {
> if(getSubTarget().hasNEON()) {
> - DestReg =
> - RI.getMatchingSuperReg(DestReg, ARM64::dsub, &ARM64::FPR128RegClass);
> - SrcReg =
> - RI.getMatchingSuperReg(SrcReg, ARM64::dsub, &ARM64::FPR128RegClass);
> - BuildMI(MBB, I, DL, get(ARM64::ORRv16i8), DestReg).addReg(SrcReg).addReg(
> - SrcReg, getKillRegState(KillSrc));
> + DestReg = RI.getMatchingSuperReg(DestReg, AArch64::dsub,
> + &AArch64::FPR128RegClass);
> + SrcReg = RI.getMatchingSuperReg(SrcReg, AArch64::dsub,
> + &AArch64::FPR128RegClass);
> + BuildMI(MBB, I, DL, get(AArch64::ORRv16i8), DestReg)
> + .addReg(SrcReg)
> + .addReg(SrcReg, getKillRegState(KillSrc));
> } else {
> - BuildMI(MBB, I, DL, get(ARM64::FMOVDr), DestReg)
> + BuildMI(MBB, I, DL, get(AArch64::FMOVDr), DestReg)
> .addReg(SrcReg, getKillRegState(KillSrc));
> }
> return;
> }
>
> - if (ARM64::FPR32RegClass.contains(DestReg) &&
> - ARM64::FPR32RegClass.contains(SrcReg)) {
> + if (AArch64::FPR32RegClass.contains(DestReg) &&
> + AArch64::FPR32RegClass.contains(SrcReg)) {
> if(getSubTarget().hasNEON()) {
> - DestReg =
> - RI.getMatchingSuperReg(DestReg, ARM64::ssub, &ARM64::FPR128RegClass);
> - SrcReg =
> - RI.getMatchingSuperReg(SrcReg, ARM64::ssub, &ARM64::FPR128RegClass);
> - BuildMI(MBB, I, DL, get(ARM64::ORRv16i8), DestReg).addReg(SrcReg).addReg(
> - SrcReg, getKillRegState(KillSrc));
> + DestReg = RI.getMatchingSuperReg(DestReg, AArch64::ssub,
> + &AArch64::FPR128RegClass);
> + SrcReg = RI.getMatchingSuperReg(SrcReg, AArch64::ssub,
> + &AArch64::FPR128RegClass);
> + BuildMI(MBB, I, DL, get(AArch64::ORRv16i8), DestReg)
> + .addReg(SrcReg)
> + .addReg(SrcReg, getKillRegState(KillSrc));
> } else {
> - BuildMI(MBB, I, DL, get(ARM64::FMOVSr), DestReg)
> + BuildMI(MBB, I, DL, get(AArch64::FMOVSr), DestReg)
> .addReg(SrcReg, getKillRegState(KillSrc));
> }
> return;
> }
>
> - if (ARM64::FPR16RegClass.contains(DestReg) &&
> - ARM64::FPR16RegClass.contains(SrcReg)) {
> + if (AArch64::FPR16RegClass.contains(DestReg) &&
> + AArch64::FPR16RegClass.contains(SrcReg)) {
> if(getSubTarget().hasNEON()) {
> - DestReg =
> - RI.getMatchingSuperReg(DestReg, ARM64::hsub, &ARM64::FPR128RegClass);
> - SrcReg =
> - RI.getMatchingSuperReg(SrcReg, ARM64::hsub, &ARM64::FPR128RegClass);
> - BuildMI(MBB, I, DL, get(ARM64::ORRv16i8), DestReg).addReg(SrcReg).addReg(
> - SrcReg, getKillRegState(KillSrc));
> + DestReg = RI.getMatchingSuperReg(DestReg, AArch64::hsub,
> + &AArch64::FPR128RegClass);
> + SrcReg = RI.getMatchingSuperReg(SrcReg, AArch64::hsub,
> + &AArch64::FPR128RegClass);
> + BuildMI(MBB, I, DL, get(AArch64::ORRv16i8), DestReg)
> + .addReg(SrcReg)
> + .addReg(SrcReg, getKillRegState(KillSrc));
> } else {
> - DestReg =
> - RI.getMatchingSuperReg(DestReg, ARM64::hsub, &ARM64::FPR32RegClass);
> - SrcReg =
> - RI.getMatchingSuperReg(SrcReg, ARM64::hsub, &ARM64::FPR32RegClass);
> - BuildMI(MBB, I, DL, get(ARM64::FMOVSr), DestReg)
> + DestReg = RI.getMatchingSuperReg(DestReg, AArch64::hsub,
> + &AArch64::FPR32RegClass);
> + SrcReg = RI.getMatchingSuperReg(SrcReg, AArch64::hsub,
> + &AArch64::FPR32RegClass);
> + BuildMI(MBB, I, DL, get(AArch64::FMOVSr), DestReg)
> .addReg(SrcReg, getKillRegState(KillSrc));
> }
> return;
> }
>
> - if (ARM64::FPR8RegClass.contains(DestReg) &&
> - ARM64::FPR8RegClass.contains(SrcReg)) {
> + if (AArch64::FPR8RegClass.contains(DestReg) &&
> + AArch64::FPR8RegClass.contains(SrcReg)) {
> if(getSubTarget().hasNEON()) {
> - DestReg =
> - RI.getMatchingSuperReg(DestReg, ARM64::bsub, &ARM64::FPR128RegClass);
> - SrcReg =
> - RI.getMatchingSuperReg(SrcReg, ARM64::bsub, &ARM64::FPR128RegClass);
> - BuildMI(MBB, I, DL, get(ARM64::ORRv16i8), DestReg).addReg(SrcReg).addReg(
> - SrcReg, getKillRegState(KillSrc));
> + DestReg = RI.getMatchingSuperReg(DestReg, AArch64::bsub,
> + &AArch64::FPR128RegClass);
> + SrcReg = RI.getMatchingSuperReg(SrcReg, AArch64::bsub,
> + &AArch64::FPR128RegClass);
> + BuildMI(MBB, I, DL, get(AArch64::ORRv16i8), DestReg)
> + .addReg(SrcReg)
> + .addReg(SrcReg, getKillRegState(KillSrc));
> } else {
> - DestReg =
> - RI.getMatchingSuperReg(DestReg, ARM64::bsub, &ARM64::FPR32RegClass);
> - SrcReg =
> - RI.getMatchingSuperReg(SrcReg, ARM64::bsub, &ARM64::FPR32RegClass);
> - BuildMI(MBB, I, DL, get(ARM64::FMOVSr), DestReg)
> + DestReg = RI.getMatchingSuperReg(DestReg, AArch64::bsub,
> + &AArch64::FPR32RegClass);
> + SrcReg = RI.getMatchingSuperReg(SrcReg, AArch64::bsub,
> + &AArch64::FPR32RegClass);
> + BuildMI(MBB, I, DL, get(AArch64::FMOVSr), DestReg)
> .addReg(SrcReg, getKillRegState(KillSrc));
> }
> return;
> }
>
> // Copies between GPR64 and FPR64.
> - if (ARM64::FPR64RegClass.contains(DestReg) &&
> - ARM64::GPR64RegClass.contains(SrcReg)) {
> - BuildMI(MBB, I, DL, get(ARM64::FMOVXDr), DestReg)
> + if (AArch64::FPR64RegClass.contains(DestReg) &&
> + AArch64::GPR64RegClass.contains(SrcReg)) {
> + BuildMI(MBB, I, DL, get(AArch64::FMOVXDr), DestReg)
> .addReg(SrcReg, getKillRegState(KillSrc));
> return;
> }
> - if (ARM64::GPR64RegClass.contains(DestReg) &&
> - ARM64::FPR64RegClass.contains(SrcReg)) {
> - BuildMI(MBB, I, DL, get(ARM64::FMOVDXr), DestReg)
> + if (AArch64::GPR64RegClass.contains(DestReg) &&
> + AArch64::FPR64RegClass.contains(SrcReg)) {
> + BuildMI(MBB, I, DL, get(AArch64::FMOVDXr), DestReg)
> .addReg(SrcReg, getKillRegState(KillSrc));
> return;
> }
> // Copies between GPR32 and FPR32.
> - if (ARM64::FPR32RegClass.contains(DestReg) &&
> - ARM64::GPR32RegClass.contains(SrcReg)) {
> - BuildMI(MBB, I, DL, get(ARM64::FMOVWSr), DestReg)
> + if (AArch64::FPR32RegClass.contains(DestReg) &&
> + AArch64::GPR32RegClass.contains(SrcReg)) {
> + BuildMI(MBB, I, DL, get(AArch64::FMOVWSr), DestReg)
> .addReg(SrcReg, getKillRegState(KillSrc));
> return;
> }
> - if (ARM64::GPR32RegClass.contains(DestReg) &&
> - ARM64::FPR32RegClass.contains(SrcReg)) {
> - BuildMI(MBB, I, DL, get(ARM64::FMOVSWr), DestReg)
> + if (AArch64::GPR32RegClass.contains(DestReg) &&
> + AArch64::FPR32RegClass.contains(SrcReg)) {
> + BuildMI(MBB, I, DL, get(AArch64::FMOVSWr), DestReg)
> .addReg(SrcReg, getKillRegState(KillSrc));
> return;
> }
> @@ -1504,11 +1510,10 @@ void ARM64InstrInfo::copyPhysReg(Machine
> assert(0 && "unimplemented reg-to-reg copy");
> }
>
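
The GPR32 part of copyPhysReg above is the subtle one: ORR cannot read or write WSP, so SP-relative copies go through ADD #0, and on Cyclone both expansions are widened to the 64-bit form because only "ADD Xd, Xn, #0" and "ORR Xd, XZR, Xm" are zero-cycle there. A rough decision-tree sketch that just prints the instruction the code would build (register names and subtarget flags are plain inputs for illustration, not the BuildMI plumbing):

  #include <cstdio>
  #include <string>

  // Decision-tree sketch for a 32-bit GPR copy, mirroring the branches in
  // copyPhysReg; it only prints the instruction that would be built.
  static std::string expandWCopy(const std::string &Dst, const std::string &Src,
                                 bool InvolvesWSP, bool SrcIsWZR,
                                 bool ZeroCycleRegMove, bool ZeroCycleZeroing) {
    if (InvolvesWSP)
      // ORR cannot use WSP, so SP-relative copies go through ADD #0; Cyclone
      // only treats the 64-bit form as a zero-cycle move, hence the widening.
      return ZeroCycleRegMove ? "add xD, xN, #0        // widened, zero-cycle"
                              : "add " + Dst + ", " + Src + ", #0";
    if (SrcIsWZR && ZeroCycleZeroing)
      return "movz " + Dst + ", #0";
    return ZeroCycleRegMove ? "orr xD, xzr, xM       // widened, zero-cycle"
                            : "orr " + Dst + ", wzr, " + Src;
  }

  int main() {
    std::puts(expandWCopy("w1", "w2", false, false, true, true).c_str());
    std::puts(expandWCopy("w1", "wsp", true, false, false, false).c_str());
    std::puts(expandWCopy("w1", "wzr", false, true, false, true).c_str());
  }
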
> -void ARM64InstrInfo::storeRegToStackSlot(MachineBasicBlock &MBB,
> - MachineBasicBlock::iterator MBBI,
> - unsigned SrcReg, bool isKill, int FI,
> - const TargetRegisterClass *RC,
> - const TargetRegisterInfo *TRI) const {
> +void AArch64InstrInfo::storeRegToStackSlot(
> + MachineBasicBlock &MBB, MachineBasicBlock::iterator MBBI, unsigned SrcReg,
> + bool isKill, int FI, const TargetRegisterClass *RC,
> + const TargetRegisterInfo *TRI) const {
> DebugLoc DL;
> if (MBBI != MBB.end())
> DL = MBBI->getDebugLoc();
> @@ -1523,72 +1528,72 @@ void ARM64InstrInfo::storeRegToStackSlot
> bool Offset = true;
> switch (RC->getSize()) {
> case 1:
> - if (ARM64::FPR8RegClass.hasSubClassEq(RC))
> - Opc = ARM64::STRBui;
> + if (AArch64::FPR8RegClass.hasSubClassEq(RC))
> + Opc = AArch64::STRBui;
> break;
> case 2:
> - if (ARM64::FPR16RegClass.hasSubClassEq(RC))
> - Opc = ARM64::STRHui;
> + if (AArch64::FPR16RegClass.hasSubClassEq(RC))
> + Opc = AArch64::STRHui;
> break;
> case 4:
> - if (ARM64::GPR32allRegClass.hasSubClassEq(RC)) {
> - Opc = ARM64::STRWui;
> + if (AArch64::GPR32allRegClass.hasSubClassEq(RC)) {
> + Opc = AArch64::STRWui;
> if (TargetRegisterInfo::isVirtualRegister(SrcReg))
> - MF.getRegInfo().constrainRegClass(SrcReg, &ARM64::GPR32RegClass);
> + MF.getRegInfo().constrainRegClass(SrcReg, &AArch64::GPR32RegClass);
> else
> - assert(SrcReg != ARM64::WSP);
> - } else if (ARM64::FPR32RegClass.hasSubClassEq(RC))
> - Opc = ARM64::STRSui;
> + assert(SrcReg != AArch64::WSP);
> + } else if (AArch64::FPR32RegClass.hasSubClassEq(RC))
> + Opc = AArch64::STRSui;
> break;
> case 8:
> - if (ARM64::GPR64allRegClass.hasSubClassEq(RC)) {
> - Opc = ARM64::STRXui;
> + if (AArch64::GPR64allRegClass.hasSubClassEq(RC)) {
> + Opc = AArch64::STRXui;
> if (TargetRegisterInfo::isVirtualRegister(SrcReg))
> - MF.getRegInfo().constrainRegClass(SrcReg, &ARM64::GPR64RegClass);
> + MF.getRegInfo().constrainRegClass(SrcReg, &AArch64::GPR64RegClass);
> else
> - assert(SrcReg != ARM64::SP);
> - } else if (ARM64::FPR64RegClass.hasSubClassEq(RC))
> - Opc = ARM64::STRDui;
> + assert(SrcReg != AArch64::SP);
> + } else if (AArch64::FPR64RegClass.hasSubClassEq(RC))
> + Opc = AArch64::STRDui;
> break;
> case 16:
> - if (ARM64::FPR128RegClass.hasSubClassEq(RC))
> - Opc = ARM64::STRQui;
> - else if (ARM64::DDRegClass.hasSubClassEq(RC)) {
> + if (AArch64::FPR128RegClass.hasSubClassEq(RC))
> + Opc = AArch64::STRQui;
> + else if (AArch64::DDRegClass.hasSubClassEq(RC)) {
> assert(getSubTarget().hasNEON() &&
> "Unexpected register store without NEON");
> - Opc = ARM64::ST1Twov1d, Offset = false;
> + Opc = AArch64::ST1Twov1d, Offset = false;
> }
> break;
> case 24:
> - if (ARM64::DDDRegClass.hasSubClassEq(RC)) {
> + if (AArch64::DDDRegClass.hasSubClassEq(RC)) {
> assert(getSubTarget().hasNEON() &&
> "Unexpected register store without NEON");
> - Opc = ARM64::ST1Threev1d, Offset = false;
> + Opc = AArch64::ST1Threev1d, Offset = false;
> }
> break;
> case 32:
> - if (ARM64::DDDDRegClass.hasSubClassEq(RC)) {
> + if (AArch64::DDDDRegClass.hasSubClassEq(RC)) {
> assert(getSubTarget().hasNEON() &&
> "Unexpected register store without NEON");
> - Opc = ARM64::ST1Fourv1d, Offset = false;
> - } else if (ARM64::QQRegClass.hasSubClassEq(RC)) {
> + Opc = AArch64::ST1Fourv1d, Offset = false;
> + } else if (AArch64::QQRegClass.hasSubClassEq(RC)) {
> assert(getSubTarget().hasNEON() &&
> "Unexpected register store without NEON");
> - Opc = ARM64::ST1Twov2d, Offset = false;
> + Opc = AArch64::ST1Twov2d, Offset = false;
> }
> break;
> case 48:
> - if (ARM64::QQQRegClass.hasSubClassEq(RC)) {
> + if (AArch64::QQQRegClass.hasSubClassEq(RC)) {
> assert(getSubTarget().hasNEON() &&
> "Unexpected register store without NEON");
> - Opc = ARM64::ST1Threev2d, Offset = false;
> + Opc = AArch64::ST1Threev2d, Offset = false;
> }
> break;
> case 64:
> - if (ARM64::QQQQRegClass.hasSubClassEq(RC)) {
> + if (AArch64::QQQQRegClass.hasSubClassEq(RC)) {
> assert(getSubTarget().hasNEON() &&
> "Unexpected register store without NEON");
> - Opc = ARM64::ST1Fourv2d, Offset = false;
> + Opc = AArch64::ST1Fourv2d, Offset = false;
> }
> break;
> }
> @@ -1603,11 +1608,10 @@ void ARM64InstrInfo::storeRegToStackSlot
> MI.addMemOperand(MMO);
> }
>
> -void ARM64InstrInfo::loadRegFromStackSlot(MachineBasicBlock &MBB,
> - MachineBasicBlock::iterator MBBI,
> - unsigned DestReg, int FI,
> - const TargetRegisterClass *RC,
> - const TargetRegisterInfo *TRI) const {
> +void AArch64InstrInfo::loadRegFromStackSlot(
> + MachineBasicBlock &MBB, MachineBasicBlock::iterator MBBI, unsigned DestReg,
> + int FI, const TargetRegisterClass *RC,
> + const TargetRegisterInfo *TRI) const {
> DebugLoc DL;
> if (MBBI != MBB.end())
> DL = MBBI->getDebugLoc();
> @@ -1622,72 +1626,72 @@ void ARM64InstrInfo::loadRegFromStackSlo
> bool Offset = true;
> switch (RC->getSize()) {
> case 1:
> - if (ARM64::FPR8RegClass.hasSubClassEq(RC))
> - Opc = ARM64::LDRBui;
> + if (AArch64::FPR8RegClass.hasSubClassEq(RC))
> + Opc = AArch64::LDRBui;
> break;
> case 2:
> - if (ARM64::FPR16RegClass.hasSubClassEq(RC))
> - Opc = ARM64::LDRHui;
> + if (AArch64::FPR16RegClass.hasSubClassEq(RC))
> + Opc = AArch64::LDRHui;
> break;
> case 4:
> - if (ARM64::GPR32allRegClass.hasSubClassEq(RC)) {
> - Opc = ARM64::LDRWui;
> + if (AArch64::GPR32allRegClass.hasSubClassEq(RC)) {
> + Opc = AArch64::LDRWui;
> if (TargetRegisterInfo::isVirtualRegister(DestReg))
> - MF.getRegInfo().constrainRegClass(DestReg, &ARM64::GPR32RegClass);
> + MF.getRegInfo().constrainRegClass(DestReg, &AArch64::GPR32RegClass);
> else
> - assert(DestReg != ARM64::WSP);
> - } else if (ARM64::FPR32RegClass.hasSubClassEq(RC))
> - Opc = ARM64::LDRSui;
> + assert(DestReg != AArch64::WSP);
> + } else if (AArch64::FPR32RegClass.hasSubClassEq(RC))
> + Opc = AArch64::LDRSui;
> break;
> case 8:
> - if (ARM64::GPR64allRegClass.hasSubClassEq(RC)) {
> - Opc = ARM64::LDRXui;
> + if (AArch64::GPR64allRegClass.hasSubClassEq(RC)) {
> + Opc = AArch64::LDRXui;
> if (TargetRegisterInfo::isVirtualRegister(DestReg))
> - MF.getRegInfo().constrainRegClass(DestReg, &ARM64::GPR64RegClass);
> + MF.getRegInfo().constrainRegClass(DestReg, &AArch64::GPR64RegClass);
> else
> - assert(DestReg != ARM64::SP);
> - } else if (ARM64::FPR64RegClass.hasSubClassEq(RC))
> - Opc = ARM64::LDRDui;
> + assert(DestReg != AArch64::SP);
> + } else if (AArch64::FPR64RegClass.hasSubClassEq(RC))
> + Opc = AArch64::LDRDui;
> break;
> case 16:
> - if (ARM64::FPR128RegClass.hasSubClassEq(RC))
> - Opc = ARM64::LDRQui;
> - else if (ARM64::DDRegClass.hasSubClassEq(RC)) {
> + if (AArch64::FPR128RegClass.hasSubClassEq(RC))
> + Opc = AArch64::LDRQui;
> + else if (AArch64::DDRegClass.hasSubClassEq(RC)) {
> assert(getSubTarget().hasNEON() &&
> "Unexpected register load without NEON");
> - Opc = ARM64::LD1Twov1d, Offset = false;
> + Opc = AArch64::LD1Twov1d, Offset = false;
> }
> break;
> case 24:
> - if (ARM64::DDDRegClass.hasSubClassEq(RC)) {
> + if (AArch64::DDDRegClass.hasSubClassEq(RC)) {
> assert(getSubTarget().hasNEON() &&
> "Unexpected register load without NEON");
> - Opc = ARM64::LD1Threev1d, Offset = false;
> + Opc = AArch64::LD1Threev1d, Offset = false;
> }
> break;
> case 32:
> - if (ARM64::DDDDRegClass.hasSubClassEq(RC)) {
> + if (AArch64::DDDDRegClass.hasSubClassEq(RC)) {
> assert(getSubTarget().hasNEON() &&
> "Unexpected register load without NEON");
> - Opc = ARM64::LD1Fourv1d, Offset = false;
> - } else if (ARM64::QQRegClass.hasSubClassEq(RC)) {
> + Opc = AArch64::LD1Fourv1d, Offset = false;
> + } else if (AArch64::QQRegClass.hasSubClassEq(RC)) {
> assert(getSubTarget().hasNEON() &&
> "Unexpected register load without NEON");
> - Opc = ARM64::LD1Twov2d, Offset = false;
> + Opc = AArch64::LD1Twov2d, Offset = false;
> }
> break;
> case 48:
> - if (ARM64::QQQRegClass.hasSubClassEq(RC)) {
> + if (AArch64::QQQRegClass.hasSubClassEq(RC)) {
> assert(getSubTarget().hasNEON() &&
> "Unexpected register load without NEON");
> - Opc = ARM64::LD1Threev2d, Offset = false;
> + Opc = AArch64::LD1Threev2d, Offset = false;
> }
> break;
> case 64:
> - if (ARM64::QQQQRegClass.hasSubClassEq(RC)) {
> + if (AArch64::QQQQRegClass.hasSubClassEq(RC)) {
> assert(getSubTarget().hasNEON() &&
> "Unexpected register load without NEON");
> - Opc = ARM64::LD1Fourv2d, Offset = false;
> + Opc = AArch64::LD1Fourv2d, Offset = false;
> }
> break;
> }
> @@ -1704,8 +1708,8 @@ void ARM64InstrInfo::loadRegFromStackSlo
> void llvm::emitFrameOffset(MachineBasicBlock &MBB,
> MachineBasicBlock::iterator MBBI, DebugLoc DL,
> unsigned DestReg, unsigned SrcReg, int Offset,
> - const ARM64InstrInfo *TII, MachineInstr::MIFlag Flag,
> - bool SetNZCV) {
> + const AArch64InstrInfo *TII,
> + MachineInstr::MIFlag Flag, bool SetNZCV) {
> if (DestReg == SrcReg && Offset == 0)
> return;
>
> @@ -1726,9 +1730,9 @@ void llvm::emitFrameOffset(MachineBasicB
>
> unsigned Opc;
> if (SetNZCV)
> - Opc = isSub ? ARM64::SUBSXri : ARM64::ADDSXri;
> + Opc = isSub ? AArch64::SUBSXri : AArch64::ADDSXri;
> else
> - Opc = isSub ? ARM64::SUBXri : ARM64::ADDXri;
> + Opc = isSub ? AArch64::SUBXri : AArch64::ADDXri;
> const unsigned MaxEncoding = 0xfff;
> const unsigned ShiftSize = 12;
> const unsigned MaxEncodableValue = MaxEncoding << ShiftSize;
> @@ -1744,7 +1748,7 @@ void llvm::emitFrameOffset(MachineBasicB
> BuildMI(MBB, MBBI, DL, TII->get(Opc), DestReg)
> .addReg(SrcReg)
> .addImm(ThisVal >> ShiftSize)
> - .addImm(ARM64_AM::getShifterImm(ARM64_AM::LSL, ShiftSize))
> + .addImm(AArch64_AM::getShifterImm(AArch64_AM::LSL, ShiftSize))
> .setMIFlag(Flag);
>
> SrcReg = DestReg;
> @@ -1755,14 +1759,14 @@ void llvm::emitFrameOffset(MachineBasicB
> BuildMI(MBB, MBBI, DL, TII->get(Opc), DestReg)
> .addReg(SrcReg)
> .addImm(Offset)
> - .addImm(ARM64_AM::getShifterImm(ARM64_AM::LSL, 0))
> + .addImm(AArch64_AM::getShifterImm(AArch64_AM::LSL, 0))
> .setMIFlag(Flag);
> }
>
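
emitFrameOffset above leans on the ADD/SUB immediate encoding: 12 bits, optionally shifted left by 12, so each instruction can soak up at most 0xfff << 12 of the remaining offset and anything larger becomes a short chain. A standalone model of the splitting loop that prints the sequence it would emit (xD/xN are placeholders; this is a sketch of the arithmetic, not the register handling):

  #include <cstdio>

  // Standalone model of the splitting loop in emitFrameOffset: an ADD/SUB
  // immediate holds 12 bits, optionally shifted left by 12, so one
  // instruction can absorb at most 0xfff << 12 of the remaining offset.
  static void emitFrameOffsetModel(long Offset) {
    if (Offset == 0)
      return;
    const char *Op = Offset < 0 ? "sub" : "add";
    unsigned long Rem = Offset < 0 ? -Offset : Offset;

    const unsigned long MaxEncoding = 0xfff; // 12-bit immediate
    const unsigned long ShiftSize = 12;      // optional "lsl #12"
    const unsigned long MaxEncodableValue = MaxEncoding << ShiftSize;

    // Peel off "#imm, lsl #12" chunks while the remainder needs them.
    while (Rem >= (1ul << ShiftSize)) {
      unsigned long ThisVal =
          Rem > MaxEncodableValue ? MaxEncodableValue : (Rem & MaxEncodableValue);
      std::printf("  %s xD, xN, #0x%lx, lsl #12\n", Op, ThisVal >> ShiftSize);
      Rem -= ThisVal; // xD becomes the base for the next instruction
    }
    // Whatever is left fits a plain 12-bit immediate.
    if (Rem)
      std::printf("  %s xD, xN, #0x%lx\n", Op, Rem);
  }

  int main() {
    emitFrameOffsetModel(0x1234567); // a couple of lsl #12 chunks plus a tail
  }
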
> MachineInstr *
> -ARM64InstrInfo::foldMemoryOperandImpl(MachineFunction &MF, MachineInstr *MI,
> - const SmallVectorImpl<unsigned> &Ops,
> - int FrameIndex) const {
> +AArch64InstrInfo::foldMemoryOperandImpl(MachineFunction &MF, MachineInstr *MI,
> + const SmallVectorImpl<unsigned> &Ops,
> + int FrameIndex) const {
> // This is a bit of a hack. Consider this instruction:
> //
> // %vreg0<def> = COPY %SP; GPR64all:%vreg0
> @@ -1779,12 +1783,14 @@ ARM64InstrInfo::foldMemoryOperandImpl(Ma
> if (MI->isCopy()) {
> unsigned DstReg = MI->getOperand(0).getReg();
> unsigned SrcReg = MI->getOperand(1).getReg();
> - if (SrcReg == ARM64::SP && TargetRegisterInfo::isVirtualRegister(DstReg)) {
> - MF.getRegInfo().constrainRegClass(DstReg, &ARM64::GPR64RegClass);
> + if (SrcReg == AArch64::SP &&
> + TargetRegisterInfo::isVirtualRegister(DstReg)) {
> + MF.getRegInfo().constrainRegClass(DstReg, &AArch64::GPR64RegClass);
> return nullptr;
> }
> - if (DstReg == ARM64::SP && TargetRegisterInfo::isVirtualRegister(SrcReg)) {
> - MF.getRegInfo().constrainRegClass(SrcReg, &ARM64::GPR64RegClass);
> + if (DstReg == AArch64::SP &&
> + TargetRegisterInfo::isVirtualRegister(SrcReg)) {
> + MF.getRegInfo().constrainRegClass(SrcReg, &AArch64::GPR64RegClass);
> return nullptr;
> }
> }
> @@ -1793,10 +1799,10 @@ ARM64InstrInfo::foldMemoryOperandImpl(Ma
> return nullptr;
> }
>
> -int llvm::isARM64FrameOffsetLegal(const MachineInstr &MI, int &Offset,
> - bool *OutUseUnscaledOp,
> - unsigned *OutUnscaledOp,
> - int *EmittableOffset) {
> +int llvm::isAArch64FrameOffsetLegal(const MachineInstr &MI, int &Offset,
> + bool *OutUseUnscaledOp,
> + unsigned *OutUnscaledOp,
> + int *EmittableOffset) {
> int Scale = 1;
> bool IsSigned = false;
> // The ImmIdx should be changed case by case if it is not 2.
> @@ -1811,162 +1817,162 @@ int llvm::isARM64FrameOffsetLegal(const
> *OutUnscaledOp = 0;
> switch (MI.getOpcode()) {
> default:
> - assert(0 && "unhandled opcode in rewriteARM64FrameIndex");
> + assert(0 && "unhandled opcode in rewriteAArch64FrameIndex");
> // Vector spills/fills can't take an immediate offset.
> - case ARM64::LD1Twov2d:
> - case ARM64::LD1Threev2d:
> - case ARM64::LD1Fourv2d:
> - case ARM64::LD1Twov1d:
> - case ARM64::LD1Threev1d:
> - case ARM64::LD1Fourv1d:
> - case ARM64::ST1Twov2d:
> - case ARM64::ST1Threev2d:
> - case ARM64::ST1Fourv2d:
> - case ARM64::ST1Twov1d:
> - case ARM64::ST1Threev1d:
> - case ARM64::ST1Fourv1d:
> - return ARM64FrameOffsetCannotUpdate;
> - case ARM64::PRFMui:
> + case AArch64::LD1Twov2d:
> + case AArch64::LD1Threev2d:
> + case AArch64::LD1Fourv2d:
> + case AArch64::LD1Twov1d:
> + case AArch64::LD1Threev1d:
> + case AArch64::LD1Fourv1d:
> + case AArch64::ST1Twov2d:
> + case AArch64::ST1Threev2d:
> + case AArch64::ST1Fourv2d:
> + case AArch64::ST1Twov1d:
> + case AArch64::ST1Threev1d:
> + case AArch64::ST1Fourv1d:
> + return AArch64FrameOffsetCannotUpdate;
> + case AArch64::PRFMui:
> Scale = 8;
> - UnscaledOp = ARM64::PRFUMi;
> + UnscaledOp = AArch64::PRFUMi;
> break;
> - case ARM64::LDRXui:
> + case AArch64::LDRXui:
> Scale = 8;
> - UnscaledOp = ARM64::LDURXi;
> + UnscaledOp = AArch64::LDURXi;
> break;
> - case ARM64::LDRWui:
> + case AArch64::LDRWui:
> Scale = 4;
> - UnscaledOp = ARM64::LDURWi;
> + UnscaledOp = AArch64::LDURWi;
> break;
> - case ARM64::LDRBui:
> + case AArch64::LDRBui:
> Scale = 1;
> - UnscaledOp = ARM64::LDURBi;
> + UnscaledOp = AArch64::LDURBi;
> break;
> - case ARM64::LDRHui:
> + case AArch64::LDRHui:
> Scale = 2;
> - UnscaledOp = ARM64::LDURHi;
> + UnscaledOp = AArch64::LDURHi;
> break;
> - case ARM64::LDRSui:
> + case AArch64::LDRSui:
> Scale = 4;
> - UnscaledOp = ARM64::LDURSi;
> + UnscaledOp = AArch64::LDURSi;
> break;
> - case ARM64::LDRDui:
> + case AArch64::LDRDui:
> Scale = 8;
> - UnscaledOp = ARM64::LDURDi;
> + UnscaledOp = AArch64::LDURDi;
> break;
> - case ARM64::LDRQui:
> + case AArch64::LDRQui:
> Scale = 16;
> - UnscaledOp = ARM64::LDURQi;
> + UnscaledOp = AArch64::LDURQi;
> break;
> - case ARM64::LDRBBui:
> + case AArch64::LDRBBui:
> Scale = 1;
> - UnscaledOp = ARM64::LDURBBi;
> + UnscaledOp = AArch64::LDURBBi;
> break;
> - case ARM64::LDRHHui:
> + case AArch64::LDRHHui:
> Scale = 2;
> - UnscaledOp = ARM64::LDURHHi;
> + UnscaledOp = AArch64::LDURHHi;
> break;
> - case ARM64::LDRSBXui:
> + case AArch64::LDRSBXui:
> Scale = 1;
> - UnscaledOp = ARM64::LDURSBXi;
> + UnscaledOp = AArch64::LDURSBXi;
> break;
> - case ARM64::LDRSBWui:
> + case AArch64::LDRSBWui:
> Scale = 1;
> - UnscaledOp = ARM64::LDURSBWi;
> + UnscaledOp = AArch64::LDURSBWi;
> break;
> - case ARM64::LDRSHXui:
> + case AArch64::LDRSHXui:
> Scale = 2;
> - UnscaledOp = ARM64::LDURSHXi;
> + UnscaledOp = AArch64::LDURSHXi;
> break;
> - case ARM64::LDRSHWui:
> + case AArch64::LDRSHWui:
> Scale = 2;
> - UnscaledOp = ARM64::LDURSHWi;
> + UnscaledOp = AArch64::LDURSHWi;
> break;
> - case ARM64::LDRSWui:
> + case AArch64::LDRSWui:
> Scale = 4;
> - UnscaledOp = ARM64::LDURSWi;
> + UnscaledOp = AArch64::LDURSWi;
> break;
>
> - case ARM64::STRXui:
> + case AArch64::STRXui:
> Scale = 8;
> - UnscaledOp = ARM64::STURXi;
> + UnscaledOp = AArch64::STURXi;
> break;
> - case ARM64::STRWui:
> + case AArch64::STRWui:
> Scale = 4;
> - UnscaledOp = ARM64::STURWi;
> + UnscaledOp = AArch64::STURWi;
> break;
> - case ARM64::STRBui:
> + case AArch64::STRBui:
> Scale = 1;
> - UnscaledOp = ARM64::STURBi;
> + UnscaledOp = AArch64::STURBi;
> break;
> - case ARM64::STRHui:
> + case AArch64::STRHui:
> Scale = 2;
> - UnscaledOp = ARM64::STURHi;
> + UnscaledOp = AArch64::STURHi;
> break;
> - case ARM64::STRSui:
> + case AArch64::STRSui:
> Scale = 4;
> - UnscaledOp = ARM64::STURSi;
> + UnscaledOp = AArch64::STURSi;
> break;
> - case ARM64::STRDui:
> + case AArch64::STRDui:
> Scale = 8;
> - UnscaledOp = ARM64::STURDi;
> + UnscaledOp = AArch64::STURDi;
> break;
> - case ARM64::STRQui:
> + case AArch64::STRQui:
> Scale = 16;
> - UnscaledOp = ARM64::STURQi;
> + UnscaledOp = AArch64::STURQi;
> break;
> - case ARM64::STRBBui:
> + case AArch64::STRBBui:
> Scale = 1;
> - UnscaledOp = ARM64::STURBBi;
> + UnscaledOp = AArch64::STURBBi;
> break;
> - case ARM64::STRHHui:
> + case AArch64::STRHHui:
> Scale = 2;
> - UnscaledOp = ARM64::STURHHi;
> + UnscaledOp = AArch64::STURHHi;
> break;
>
> - case ARM64::LDPXi:
> - case ARM64::LDPDi:
> - case ARM64::STPXi:
> - case ARM64::STPDi:
> + case AArch64::LDPXi:
> + case AArch64::LDPDi:
> + case AArch64::STPXi:
> + case AArch64::STPDi:
> IsSigned = true;
> Scale = 8;
> break;
> - case ARM64::LDPQi:
> - case ARM64::STPQi:
> + case AArch64::LDPQi:
> + case AArch64::STPQi:
> IsSigned = true;
> Scale = 16;
> break;
> - case ARM64::LDPWi:
> - case ARM64::LDPSi:
> - case ARM64::STPWi:
> - case ARM64::STPSi:
> + case AArch64::LDPWi:
> + case AArch64::LDPSi:
> + case AArch64::STPWi:
> + case AArch64::STPSi:
> IsSigned = true;
> Scale = 4;
> break;
>
> - case ARM64::LDURXi:
> - case ARM64::LDURWi:
> - case ARM64::LDURBi:
> - case ARM64::LDURHi:
> - case ARM64::LDURSi:
> - case ARM64::LDURDi:
> - case ARM64::LDURQi:
> - case ARM64::LDURHHi:
> - case ARM64::LDURBBi:
> - case ARM64::LDURSBXi:
> - case ARM64::LDURSBWi:
> - case ARM64::LDURSHXi:
> - case ARM64::LDURSHWi:
> - case ARM64::LDURSWi:
> - case ARM64::STURXi:
> - case ARM64::STURWi:
> - case ARM64::STURBi:
> - case ARM64::STURHi:
> - case ARM64::STURSi:
> - case ARM64::STURDi:
> - case ARM64::STURQi:
> - case ARM64::STURBBi:
> - case ARM64::STURHHi:
> + case AArch64::LDURXi:
> + case AArch64::LDURWi:
> + case AArch64::LDURBi:
> + case AArch64::LDURHi:
> + case AArch64::LDURSi:
> + case AArch64::LDURDi:
> + case AArch64::LDURQi:
> + case AArch64::LDURHHi:
> + case AArch64::LDURBBi:
> + case AArch64::LDURSBXi:
> + case AArch64::LDURSBWi:
> + case AArch64::LDURSHXi:
> + case AArch64::LDURSHWi:
> + case AArch64::LDURSWi:
> + case AArch64::STURXi:
> + case AArch64::STURWi:
> + case AArch64::STURBi:
> + case AArch64::STURHi:
> + case AArch64::STURSi:
> + case AArch64::STURDi:
> + case AArch64::STURQi:
> + case AArch64::STURBBi:
> + case AArch64::STURHHi:
> Scale = 1;
> break;
> }
> @@ -2014,21 +2020,21 @@ int llvm::isARM64FrameOffsetLegal(const
> *OutUseUnscaledOp = useUnscaledOp;
> if (OutUnscaledOp)
> *OutUnscaledOp = UnscaledOp;
> - return ARM64FrameOffsetCanUpdate |
> - (Offset == 0 ? ARM64FrameOffsetIsLegal : 0);
> + return AArch64FrameOffsetCanUpdate |
> + (Offset == 0 ? AArch64FrameOffsetIsLegal : 0);
> }
>
> -bool llvm::rewriteARM64FrameIndex(MachineInstr &MI, unsigned FrameRegIdx,
> - unsigned FrameReg, int &Offset,
> - const ARM64InstrInfo *TII) {
> +bool llvm::rewriteAArch64FrameIndex(MachineInstr &MI, unsigned FrameRegIdx,
> + unsigned FrameReg, int &Offset,
> + const AArch64InstrInfo *TII) {
> unsigned Opcode = MI.getOpcode();
> unsigned ImmIdx = FrameRegIdx + 1;
>
> - if (Opcode == ARM64::ADDSXri || Opcode == ARM64::ADDXri) {
> + if (Opcode == AArch64::ADDSXri || Opcode == AArch64::ADDXri) {
> Offset += MI.getOperand(ImmIdx).getImm();
> emitFrameOffset(*MI.getParent(), MI, MI.getDebugLoc(),
> MI.getOperand(0).getReg(), FrameReg, Offset, TII,
> - MachineInstr::NoFlags, (Opcode == ARM64::ADDSXri));
> + MachineInstr::NoFlags, (Opcode == AArch64::ADDSXri));
> MI.eraseFromParent();
> Offset = 0;
> return true;
> @@ -2037,10 +2043,10 @@ bool llvm::rewriteARM64FrameIndex(Machin
> int NewOffset;
> unsigned UnscaledOp;
> bool UseUnscaledOp;
> - int Status = isARM64FrameOffsetLegal(MI, Offset, &UseUnscaledOp, &UnscaledOp,
> - &NewOffset);
> - if (Status & ARM64FrameOffsetCanUpdate) {
> - if (Status & ARM64FrameOffsetIsLegal)
> + int Status = isAArch64FrameOffsetLegal(MI, Offset, &UseUnscaledOp,
> + &UnscaledOp, &NewOffset);
> + if (Status & AArch64FrameOffsetCanUpdate) {
> + if (Status & AArch64FrameOffsetIsLegal)
> // Replace the FrameIndex with FrameReg.
> MI.getOperand(FrameRegIdx).ChangeToRegister(FrameReg, false);
> if (UseUnscaledOp)
> @@ -2053,7 +2059,7 @@ bool llvm::rewriteARM64FrameIndex(Machin
> return false;
> }
>
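
For isAArch64FrameOffsetLegal / rewriteAArch64FrameIndex above, the core question is whether the byte offset fits the instruction at all: the scaled form takes an unsigned 12-bit immediate in units of the access size, and the unscaled LDUR/STUR twin (when one exists) takes a signed 9-bit byte offset. A back-of-envelope model of just that fit check, leaving out the status plumbing:

  #include <cstdio>

  // Fit check in the spirit of isAArch64FrameOffsetLegal: the scaled form
  // takes an unsigned 12-bit immediate in units of the access size, the
  // unscaled (LDUR/STUR) twin a signed 9-bit byte offset. *UseUnscaled
  // reports which form was picked when the offset fits at all.
  static bool frameOffsetFitsModel(long ByteOffset, int Scale,
                                   bool HasUnscaledTwin, bool *UseUnscaled) {
    *UseUnscaled = false;
    if (ByteOffset >= 0 && ByteOffset % Scale == 0 &&
        ByteOffset / Scale <= 0xfff)
      return true; // e.g. ldr x0, [sp, #N] with N a multiple of 8 up to 32760
    if (HasUnscaledTwin && ByteOffset >= -256 && ByteOffset <= 255) {
      *UseUnscaled = true; // switch to ldur/stur
      return true;
    }
    return false; // the leftover part has to be materialized with an ADD
  }

  int main() {
    bool U;
    std::printf("%d %d\n", (int)frameOffsetFitsModel(32760, 8, true, &U), (int)U);
    std::printf("%d %d\n", (int)frameOffsetFitsModel(-16, 8, true, &U), (int)U);
    std::printf("%d %d\n", (int)frameOffsetFitsModel(40000, 8, true, &U), (int)U);
  }

The pair instructions in the switch (LDP/STP) have their own signed, scaled immediates, which is why those cases only set IsSigned and never get an UnscaledOp.
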
> -void ARM64InstrInfo::getNoopForMachoTarget(MCInst &NopInst) const {
> - NopInst.setOpcode(ARM64::HINT);
> +void AArch64InstrInfo::getNoopForMachoTarget(MCInst &NopInst) const {
> + NopInst.setOpcode(AArch64::HINT);
> NopInst.addOperand(MCOperand::CreateImm(0));
> }
>
> Copied: llvm/trunk/lib/Target/AArch64/AArch64InstrInfo.h (from r209576, llvm/trunk/lib/Target/ARM64/ARM64InstrInfo.h)
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/AArch64/AArch64InstrInfo.h?p2=llvm/trunk/lib/Target/AArch64/AArch64InstrInfo.h&p1=llvm/trunk/lib/Target/ARM64/ARM64InstrInfo.h&r1=209576&r2=209577&rev=209577&view=diff
> ==============================================================================
> --- llvm/trunk/lib/Target/ARM64/ARM64InstrInfo.h (original)
> +++ llvm/trunk/lib/Target/AArch64/AArch64InstrInfo.h Sat May 24 07:50:23 2014
> @@ -1,4 +1,4 @@
> -//===- ARM64InstrInfo.h - ARM64 Instruction Information ---------*- C++ -*-===//
> +//===- AArch64InstrInfo.h - AArch64 Instruction Information -----*- C++ -*-===//
> //
> // The LLVM Compiler Infrastructure
> //
> @@ -7,44 +7,44 @@
> //
> //===----------------------------------------------------------------------===//
> //
> -// This file contains the ARM64 implementation of the TargetInstrInfo class.
> +// This file contains the AArch64 implementation of the TargetInstrInfo class.
> //
> //===----------------------------------------------------------------------===//
>
> -#ifndef LLVM_TARGET_ARM64INSTRINFO_H
> -#define LLVM_TARGET_ARM64INSTRINFO_H
> +#ifndef LLVM_TARGET_AArch64INSTRINFO_H
> +#define LLVM_TARGET_AArch64INSTRINFO_H
>
> -#include "ARM64.h"
> -#include "ARM64RegisterInfo.h"
> +#include "AArch64.h"
> +#include "AArch64RegisterInfo.h"
> #include "llvm/Target/TargetInstrInfo.h"
>
> #define GET_INSTRINFO_HEADER
> -#include "ARM64GenInstrInfo.inc"
> +#include "AArch64GenInstrInfo.inc"
>
> namespace llvm {
>
> -class ARM64Subtarget;
> -class ARM64TargetMachine;
> +class AArch64Subtarget;
> +class AArch64TargetMachine;
>
> -class ARM64InstrInfo : public ARM64GenInstrInfo {
> +class AArch64InstrInfo : public AArch64GenInstrInfo {
> // Reserve bits in the MachineMemOperand target hint flags, starting at 1.
> // They will be shifted into MOTargetHintStart when accessed.
> enum TargetMemOperandFlags {
> MOSuppressPair = 1
> };
>
> - const ARM64RegisterInfo RI;
> - const ARM64Subtarget &Subtarget;
> + const AArch64RegisterInfo RI;
> + const AArch64Subtarget &Subtarget;
>
> public:
> - explicit ARM64InstrInfo(const ARM64Subtarget &STI);
> + explicit AArch64InstrInfo(const AArch64Subtarget &STI);
>
> /// getRegisterInfo - TargetInstrInfo is a superset of MRegister info. As
> /// such, whenever a client has an instance of instruction info, it should
> /// always be able to get register info as well (through this method).
> - const ARM64RegisterInfo &getRegisterInfo() const { return RI; }
> + const AArch64RegisterInfo &getRegisterInfo() const { return RI; }
>
> - const ARM64Subtarget &getSubTarget() const { return Subtarget; }
> + const AArch64Subtarget &getSubTarget() const { return Subtarget; }
>
> unsigned GetInstSizeInBytes(const MachineInstr *MI) const;
>
> @@ -60,8 +60,8 @@ public:
> /// is non-zero.
> bool hasShiftedReg(const MachineInstr *MI) const;
>
> - /// Returns true if there is an extendable register and that the extending value
> - /// is non-zero.
> + /// Returns true if there is an extendable register and that the extending
> + /// value is non-zero.
> bool hasExtendedReg(const MachineInstr *MI) const;
>
> /// \brief Does this instruction set its full destination register to zero?
> @@ -168,63 +168,63 @@ private:
> /// if necessary, to be replaced by the scavenger at the end of PEI.
> void emitFrameOffset(MachineBasicBlock &MBB, MachineBasicBlock::iterator MBBI,
> DebugLoc DL, unsigned DestReg, unsigned SrcReg, int Offset,
> - const ARM64InstrInfo *TII,
> + const AArch64InstrInfo *TII,
> MachineInstr::MIFlag = MachineInstr::NoFlags,
> bool SetNZCV = false);
>
> -/// rewriteARM64FrameIndex - Rewrite MI to access 'Offset' bytes from the
> +/// rewriteAArch64FrameIndex - Rewrite MI to access 'Offset' bytes from the
> /// FP. Return false if the offset could not be handled directly in MI, and
> /// return the left-over portion by reference.
> -bool rewriteARM64FrameIndex(MachineInstr &MI, unsigned FrameRegIdx,
> +bool rewriteAArch64FrameIndex(MachineInstr &MI, unsigned FrameRegIdx,
> unsigned FrameReg, int &Offset,
> - const ARM64InstrInfo *TII);
> + const AArch64InstrInfo *TII);
>
> -/// \brief Use to report the frame offset status in isARM64FrameOffsetLegal.
> -enum ARM64FrameOffsetStatus {
> - ARM64FrameOffsetCannotUpdate = 0x0, ///< Offset cannot apply.
> - ARM64FrameOffsetIsLegal = 0x1, ///< Offset is legal.
> - ARM64FrameOffsetCanUpdate = 0x2 ///< Offset can apply, at least partly.
> +/// \brief Use to report the frame offset status in isAArch64FrameOffsetLegal.
> +enum AArch64FrameOffsetStatus {
> + AArch64FrameOffsetCannotUpdate = 0x0, ///< Offset cannot apply.
> + AArch64FrameOffsetIsLegal = 0x1, ///< Offset is legal.
> + AArch64FrameOffsetCanUpdate = 0x2 ///< Offset can apply, at least partly.
> };
>
> /// \brief Check if the @p Offset is a valid frame offset for @p MI.
> /// The returned value reports the validity of the frame offset for @p MI.
> -/// It uses the values defined by ARM64FrameOffsetStatus for that.
> -/// If result == ARM64FrameOffsetCannotUpdate, @p MI cannot be updated to
> +/// It uses the values defined by AArch64FrameOffsetStatus for that.
> +/// If result == AArch64FrameOffsetCannotUpdate, @p MI cannot be updated to
> /// use an offset.
> -/// If result & ARM64FrameOffsetIsLegal, @p Offset can completely be
> +/// If result & AArch64FrameOffsetIsLegal, @p Offset can completely be
> /// rewritten in @p MI.
> -/// If result & ARM64FrameOffsetCanUpdate, @p Offset contains the
> +/// If result & AArch64FrameOffsetCanUpdate, @p Offset contains the
> /// amount that is off the limit of the legal offset.
> /// If set, @p OutUseUnscaledOp will contain whether @p MI should be
> /// turned into an unscaled operator, whose opcode is in @p OutUnscaledOp.
> /// If set, @p EmittableOffset contains the amount that can be set in @p MI
> /// (possibly with @p OutUnscaledOp if OutUseUnscaledOp is true) and that
> /// is a legal offset.
> -int isARM64FrameOffsetLegal(const MachineInstr &MI, int &Offset,
> +int isAArch64FrameOffsetLegal(const MachineInstr &MI, int &Offset,
> bool *OutUseUnscaledOp = nullptr,
> unsigned *OutUnscaledOp = nullptr,
> int *EmittableOffset = nullptr);
>
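
Worth spelling out that these status values are bit flags rather than an ordered enum: callers test CanUpdate first (some of the offset can be folded) and then IsLegal (all of it can). A small consumer-side sketch, mirroring the logic in rewriteAArch64FrameIndex from the .cpp earlier in this diff:

  #include <cstdio>

  enum FrameOffsetStatusModel {
    CannotUpdate = 0x0, // offset cannot apply at all
    IsLegal      = 0x1, // offset is legal as-is
    CanUpdate    = 0x2  // offset can apply, at least partly
  };

  // Consumer-side view of the bit flags, mirroring the structure of
  // rewriteAArch64FrameIndex in the .cpp part of this diff.
  static const char *describe(int Status) {
    if (Status & CanUpdate)
      return (Status & IsLegal)
                 ? "fold the whole offset into the instruction"
                 : "fold what fits; emit an ADD/SUB for the remainder";
    return "leave the frame index alone and materialize the address separately";
  }

  int main() {
    std::printf("%s\n", describe(CanUpdate | IsLegal));
    std::printf("%s\n", describe(CanUpdate));
    std::printf("%s\n", describe(CannotUpdate));
  }
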
> -static inline bool isUncondBranchOpcode(int Opc) { return Opc == ARM64::B; }
> +static inline bool isUncondBranchOpcode(int Opc) { return Opc == AArch64::B; }
>
> static inline bool isCondBranchOpcode(int Opc) {
> switch (Opc) {
> - case ARM64::Bcc:
> - case ARM64::CBZW:
> - case ARM64::CBZX:
> - case ARM64::CBNZW:
> - case ARM64::CBNZX:
> - case ARM64::TBZW:
> - case ARM64::TBZX:
> - case ARM64::TBNZW:
> - case ARM64::TBNZX:
> + case AArch64::Bcc:
> + case AArch64::CBZW:
> + case AArch64::CBZX:
> + case AArch64::CBNZW:
> + case AArch64::CBNZX:
> + case AArch64::TBZW:
> + case AArch64::TBZX:
> + case AArch64::TBNZW:
> + case AArch64::TBNZX:
> return true;
> default:
> return false;
> }
> }
>
> -static inline bool isIndirectBranchOpcode(int Opc) { return Opc == ARM64::BR; }
> +static inline bool isIndirectBranchOpcode(int Opc) { return Opc == AArch64::BR; }
>
> } // end namespace llvm
>
>
> Copied: llvm/trunk/lib/Target/AArch64/AArch64InstrInfo.td (from r209576, llvm/trunk/lib/Target/ARM64/ARM64InstrInfo.td)
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/AArch64/AArch64InstrInfo.td?p2=llvm/trunk/lib/Target/AArch64/AArch64InstrInfo.td&p1=llvm/trunk/lib/Target/ARM64/ARM64InstrInfo.td&r1=209576&r2=209577&rev=209577&view=diff
> ==============================================================================
> --- llvm/trunk/lib/Target/ARM64/ARM64InstrInfo.td (original)
> +++ llvm/trunk/lib/Target/AArch64/AArch64InstrInfo.td Sat May 24 07:50:23 2014
> @@ -1,4 +1,4 @@
> -//===- ARM64InstrInfo.td - Describe the ARM64 Instructions -*- tablegen -*-===//
> +//=- AArch64InstrInfo.td - Describe the AArch64 Instructions -*- tablegen -*-=//
> //
> // The LLVM Compiler Infrastructure
> //
> @@ -7,7 +7,7 @@
> //
> //===----------------------------------------------------------------------===//
> //
> -// ARM64 Instruction definitions.
> +// AArch64 Instruction definitions.
> //
> //===----------------------------------------------------------------------===//
>
> @@ -26,7 +26,7 @@ def IsLE : Predicate<"Subtar
> def IsBE : Predicate<"!Subtarget->isLittleEndian()">;
>
> //===----------------------------------------------------------------------===//
> -// ARM64-specific DAG Nodes.
> +// AArch64-specific DAG Nodes.
> //
>
> // SDTBinaryArithWithFlagsOut - RES1, FLAGS = op LHS, RHS
> @@ -50,196 +50,198 @@ def SDTBinaryArithWithFlagsInOut : SDTyp
> SDTCisVT<1, i32>,
> SDTCisVT<4, i32>]>;
>
> -def SDT_ARM64Brcond : SDTypeProfile<0, 3,
> +def SDT_AArch64Brcond : SDTypeProfile<0, 3,
> [SDTCisVT<0, OtherVT>, SDTCisVT<1, i32>,
> SDTCisVT<2, i32>]>;
> -def SDT_ARM64cbz : SDTypeProfile<0, 2, [SDTCisInt<0>, SDTCisVT<1, OtherVT>]>;
> -def SDT_ARM64tbz : SDTypeProfile<0, 3, [SDTCisInt<0>, SDTCisInt<1>,
> +def SDT_AArch64cbz : SDTypeProfile<0, 2, [SDTCisInt<0>, SDTCisVT<1, OtherVT>]>;
> +def SDT_AArch64tbz : SDTypeProfile<0, 3, [SDTCisInt<0>, SDTCisInt<1>,
> SDTCisVT<2, OtherVT>]>;
>
>
> -def SDT_ARM64CSel : SDTypeProfile<1, 4,
> +def SDT_AArch64CSel : SDTypeProfile<1, 4,
> [SDTCisSameAs<0, 1>,
> SDTCisSameAs<0, 2>,
> SDTCisInt<3>,
> SDTCisVT<4, i32>]>;
> -def SDT_ARM64FCmp : SDTypeProfile<0, 2,
> +def SDT_AArch64FCmp : SDTypeProfile<0, 2,
> [SDTCisFP<0>,
> SDTCisSameAs<0, 1>]>;
> -def SDT_ARM64Dup : SDTypeProfile<1, 1, [SDTCisVec<0>]>;
> -def SDT_ARM64DupLane : SDTypeProfile<1, 2, [SDTCisVec<0>, SDTCisInt<2>]>;
> -def SDT_ARM64Zip : SDTypeProfile<1, 2, [SDTCisVec<0>,
> +def SDT_AArch64Dup : SDTypeProfile<1, 1, [SDTCisVec<0>]>;
> +def SDT_AArch64DupLane : SDTypeProfile<1, 2, [SDTCisVec<0>, SDTCisInt<2>]>;
> +def SDT_AArch64Zip : SDTypeProfile<1, 2, [SDTCisVec<0>,
> SDTCisSameAs<0, 1>,
> SDTCisSameAs<0, 2>]>;
> -def SDT_ARM64MOVIedit : SDTypeProfile<1, 1, [SDTCisInt<1>]>;
> -def SDT_ARM64MOVIshift : SDTypeProfile<1, 2, [SDTCisInt<1>, SDTCisInt<2>]>;
> -def SDT_ARM64vecimm : SDTypeProfile<1, 3, [SDTCisVec<0>, SDTCisSameAs<0,1>,
> +def SDT_AArch64MOVIedit : SDTypeProfile<1, 1, [SDTCisInt<1>]>;
> +def SDT_AArch64MOVIshift : SDTypeProfile<1, 2, [SDTCisInt<1>, SDTCisInt<2>]>;
> +def SDT_AArch64vecimm : SDTypeProfile<1, 3, [SDTCisVec<0>, SDTCisSameAs<0,1>,
> SDTCisInt<2>, SDTCisInt<3>]>;
> -def SDT_ARM64UnaryVec: SDTypeProfile<1, 1, [SDTCisVec<0>, SDTCisSameAs<0,1>]>;
> -def SDT_ARM64ExtVec: SDTypeProfile<1, 3, [SDTCisVec<0>, SDTCisSameAs<0,1>,
> +def SDT_AArch64UnaryVec: SDTypeProfile<1, 1, [SDTCisVec<0>, SDTCisSameAs<0,1>]>;
> +def SDT_AArch64ExtVec: SDTypeProfile<1, 3, [SDTCisVec<0>, SDTCisSameAs<0,1>,
> SDTCisSameAs<0,2>, SDTCisInt<3>]>;
> -def SDT_ARM64vshift : SDTypeProfile<1, 2, [SDTCisSameAs<0,1>, SDTCisInt<2>]>;
> +def SDT_AArch64vshift : SDTypeProfile<1, 2, [SDTCisSameAs<0,1>, SDTCisInt<2>]>;
>
> -def SDT_ARM64unvec : SDTypeProfile<1, 1, [SDTCisVec<0>, SDTCisSameAs<0,1>]>;
> -def SDT_ARM64fcmpz : SDTypeProfile<1, 1, []>;
> -def SDT_ARM64fcmp : SDTypeProfile<1, 2, [SDTCisSameAs<1,2>]>;
> -def SDT_ARM64binvec : SDTypeProfile<1, 2, [SDTCisVec<0>, SDTCisSameAs<0,1>,
> +def SDT_AArch64unvec : SDTypeProfile<1, 1, [SDTCisVec<0>, SDTCisSameAs<0,1>]>;
> +def SDT_AArch64fcmpz : SDTypeProfile<1, 1, []>;
> +def SDT_AArch64fcmp : SDTypeProfile<1, 2, [SDTCisSameAs<1,2>]>;
> +def SDT_AArch64binvec : SDTypeProfile<1, 2, [SDTCisVec<0>, SDTCisSameAs<0,1>,
> SDTCisSameAs<0,2>]>;
> -def SDT_ARM64trivec : SDTypeProfile<1, 3, [SDTCisVec<0>, SDTCisSameAs<0,1>,
> +def SDT_AArch64trivec : SDTypeProfile<1, 3, [SDTCisVec<0>, SDTCisSameAs<0,1>,
> SDTCisSameAs<0,2>,
> SDTCisSameAs<0,3>]>;
> -def SDT_ARM64TCRET : SDTypeProfile<0, 2, [SDTCisPtrTy<0>]>;
> -def SDT_ARM64PREFETCH : SDTypeProfile<0, 2, [SDTCisVT<0, i32>, SDTCisPtrTy<1>]>;
> +def SDT_AArch64TCRET : SDTypeProfile<0, 2, [SDTCisPtrTy<0>]>;
> +def SDT_AArch64PREFETCH : SDTypeProfile<0, 2, [SDTCisVT<0, i32>, SDTCisPtrTy<1>]>;
>
> -def SDT_ARM64ITOF : SDTypeProfile<1, 1, [SDTCisFP<0>, SDTCisSameAs<0,1>]>;
> +def SDT_AArch64ITOF : SDTypeProfile<1, 1, [SDTCisFP<0>, SDTCisSameAs<0,1>]>;
>
> -def SDT_ARM64TLSDescCall : SDTypeProfile<0, -2, [SDTCisPtrTy<0>,
> +def SDT_AArch64TLSDescCall : SDTypeProfile<0, -2, [SDTCisPtrTy<0>,
> SDTCisPtrTy<1>]>;
> -def SDT_ARM64WrapperLarge : SDTypeProfile<1, 4,
> +def SDT_AArch64WrapperLarge : SDTypeProfile<1, 4,
> [SDTCisVT<0, i64>, SDTCisVT<1, i32>,
> SDTCisSameAs<1, 2>, SDTCisSameAs<1, 3>,
> SDTCisSameAs<1, 4>]>;
>
>
> // Node definitions.
> -def ARM64adrp : SDNode<"ARM64ISD::ADRP", SDTIntUnaryOp, []>;
> -def ARM64addlow : SDNode<"ARM64ISD::ADDlow", SDTIntBinOp, []>;
> -def ARM64LOADgot : SDNode<"ARM64ISD::LOADgot", SDTIntUnaryOp>;
> -def ARM64callseq_start : SDNode<"ISD::CALLSEQ_START",
> +def AArch64adrp : SDNode<"AArch64ISD::ADRP", SDTIntUnaryOp, []>;
> +def AArch64addlow : SDNode<"AArch64ISD::ADDlow", SDTIntBinOp, []>;
> +def AArch64LOADgot : SDNode<"AArch64ISD::LOADgot", SDTIntUnaryOp>;
> +def AArch64callseq_start : SDNode<"ISD::CALLSEQ_START",
> SDCallSeqStart<[ SDTCisVT<0, i32> ]>,
> [SDNPHasChain, SDNPOutGlue]>;
> -def ARM64callseq_end : SDNode<"ISD::CALLSEQ_END",
> +def AArch64callseq_end : SDNode<"ISD::CALLSEQ_END",
> SDCallSeqEnd<[ SDTCisVT<0, i32>,
> SDTCisVT<1, i32> ]>,
> [SDNPHasChain, SDNPOptInGlue, SDNPOutGlue]>;
> -def ARM64call : SDNode<"ARM64ISD::CALL",
> +def AArch64call : SDNode<"AArch64ISD::CALL",
> SDTypeProfile<0, -1, [SDTCisPtrTy<0>]>,
> [SDNPHasChain, SDNPOptInGlue, SDNPOutGlue,
> SDNPVariadic]>;
> -def ARM64brcond : SDNode<"ARM64ISD::BRCOND", SDT_ARM64Brcond,
> +def AArch64brcond : SDNode<"AArch64ISD::BRCOND", SDT_AArch64Brcond,
> [SDNPHasChain]>;
> -def ARM64cbz : SDNode<"ARM64ISD::CBZ", SDT_ARM64cbz,
> +def AArch64cbz : SDNode<"AArch64ISD::CBZ", SDT_AArch64cbz,
> [SDNPHasChain]>;
> -def ARM64cbnz : SDNode<"ARM64ISD::CBNZ", SDT_ARM64cbz,
> +def AArch64cbnz : SDNode<"AArch64ISD::CBNZ", SDT_AArch64cbz,
> [SDNPHasChain]>;
> -def ARM64tbz : SDNode<"ARM64ISD::TBZ", SDT_ARM64tbz,
> +def AArch64tbz : SDNode<"AArch64ISD::TBZ", SDT_AArch64tbz,
> [SDNPHasChain]>;
> -def ARM64tbnz : SDNode<"ARM64ISD::TBNZ", SDT_ARM64tbz,
> +def AArch64tbnz : SDNode<"AArch64ISD::TBNZ", SDT_AArch64tbz,
> [SDNPHasChain]>;
>
>
> -def ARM64csel : SDNode<"ARM64ISD::CSEL", SDT_ARM64CSel>;
> -def ARM64csinv : SDNode<"ARM64ISD::CSINV", SDT_ARM64CSel>;
> -def ARM64csneg : SDNode<"ARM64ISD::CSNEG", SDT_ARM64CSel>;
> -def ARM64csinc : SDNode<"ARM64ISD::CSINC", SDT_ARM64CSel>;
> -def ARM64retflag : SDNode<"ARM64ISD::RET_FLAG", SDTNone,
> +def AArch64csel : SDNode<"AArch64ISD::CSEL", SDT_AArch64CSel>;
> +def AArch64csinv : SDNode<"AArch64ISD::CSINV", SDT_AArch64CSel>;
> +def AArch64csneg : SDNode<"AArch64ISD::CSNEG", SDT_AArch64CSel>;
> +def AArch64csinc : SDNode<"AArch64ISD::CSINC", SDT_AArch64CSel>;
> +def AArch64retflag : SDNode<"AArch64ISD::RET_FLAG", SDTNone,
> [SDNPHasChain, SDNPOptInGlue, SDNPVariadic]>;
> -def ARM64adc : SDNode<"ARM64ISD::ADC", SDTBinaryArithWithFlagsIn >;
> -def ARM64sbc : SDNode<"ARM64ISD::SBC", SDTBinaryArithWithFlagsIn>;
> -def ARM64add_flag : SDNode<"ARM64ISD::ADDS", SDTBinaryArithWithFlagsOut,
> +def AArch64adc : SDNode<"AArch64ISD::ADC", SDTBinaryArithWithFlagsIn >;
> +def AArch64sbc : SDNode<"AArch64ISD::SBC", SDTBinaryArithWithFlagsIn>;
> +def AArch64add_flag : SDNode<"AArch64ISD::ADDS", SDTBinaryArithWithFlagsOut,
> [SDNPCommutative]>;
> -def ARM64sub_flag : SDNode<"ARM64ISD::SUBS", SDTBinaryArithWithFlagsOut>;
> -def ARM64and_flag : SDNode<"ARM64ISD::ANDS", SDTBinaryArithWithFlagsOut,
> +def AArch64sub_flag : SDNode<"AArch64ISD::SUBS", SDTBinaryArithWithFlagsOut>;
> +def AArch64and_flag : SDNode<"AArch64ISD::ANDS", SDTBinaryArithWithFlagsOut,
> [SDNPCommutative]>;
> -def ARM64adc_flag : SDNode<"ARM64ISD::ADCS", SDTBinaryArithWithFlagsInOut>;
> -def ARM64sbc_flag : SDNode<"ARM64ISD::SBCS", SDTBinaryArithWithFlagsInOut>;
> +def AArch64adc_flag : SDNode<"AArch64ISD::ADCS", SDTBinaryArithWithFlagsInOut>;
> +def AArch64sbc_flag : SDNode<"AArch64ISD::SBCS", SDTBinaryArithWithFlagsInOut>;
>
> -def ARM64threadpointer : SDNode<"ARM64ISD::THREAD_POINTER", SDTPtrLeaf>;
> +def AArch64threadpointer : SDNode<"AArch64ISD::THREAD_POINTER", SDTPtrLeaf>;
>
> -def ARM64fcmp : SDNode<"ARM64ISD::FCMP", SDT_ARM64FCmp>;
> +def AArch64fcmp : SDNode<"AArch64ISD::FCMP", SDT_AArch64FCmp>;
>
> -def ARM64fmax : SDNode<"ARM64ISD::FMAX", SDTFPBinOp>;
> -def ARM64fmin : SDNode<"ARM64ISD::FMIN", SDTFPBinOp>;
> -
> -def ARM64dup : SDNode<"ARM64ISD::DUP", SDT_ARM64Dup>;
> -def ARM64duplane8 : SDNode<"ARM64ISD::DUPLANE8", SDT_ARM64DupLane>;
> -def ARM64duplane16 : SDNode<"ARM64ISD::DUPLANE16", SDT_ARM64DupLane>;
> -def ARM64duplane32 : SDNode<"ARM64ISD::DUPLANE32", SDT_ARM64DupLane>;
> -def ARM64duplane64 : SDNode<"ARM64ISD::DUPLANE64", SDT_ARM64DupLane>;
> -
> -def ARM64zip1 : SDNode<"ARM64ISD::ZIP1", SDT_ARM64Zip>;
> -def ARM64zip2 : SDNode<"ARM64ISD::ZIP2", SDT_ARM64Zip>;
> -def ARM64uzp1 : SDNode<"ARM64ISD::UZP1", SDT_ARM64Zip>;
> -def ARM64uzp2 : SDNode<"ARM64ISD::UZP2", SDT_ARM64Zip>;
> -def ARM64trn1 : SDNode<"ARM64ISD::TRN1", SDT_ARM64Zip>;
> -def ARM64trn2 : SDNode<"ARM64ISD::TRN2", SDT_ARM64Zip>;
> -
> -def ARM64movi_edit : SDNode<"ARM64ISD::MOVIedit", SDT_ARM64MOVIedit>;
> -def ARM64movi_shift : SDNode<"ARM64ISD::MOVIshift", SDT_ARM64MOVIshift>;
> -def ARM64movi_msl : SDNode<"ARM64ISD::MOVImsl", SDT_ARM64MOVIshift>;
> -def ARM64mvni_shift : SDNode<"ARM64ISD::MVNIshift", SDT_ARM64MOVIshift>;
> -def ARM64mvni_msl : SDNode<"ARM64ISD::MVNImsl", SDT_ARM64MOVIshift>;
> -def ARM64movi : SDNode<"ARM64ISD::MOVI", SDT_ARM64MOVIedit>;
> -def ARM64fmov : SDNode<"ARM64ISD::FMOV", SDT_ARM64MOVIedit>;
> -
> -def ARM64rev16 : SDNode<"ARM64ISD::REV16", SDT_ARM64UnaryVec>;
> -def ARM64rev32 : SDNode<"ARM64ISD::REV32", SDT_ARM64UnaryVec>;
> -def ARM64rev64 : SDNode<"ARM64ISD::REV64", SDT_ARM64UnaryVec>;
> -def ARM64ext : SDNode<"ARM64ISD::EXT", SDT_ARM64ExtVec>;
> -
> -def ARM64vashr : SDNode<"ARM64ISD::VASHR", SDT_ARM64vshift>;
> -def ARM64vlshr : SDNode<"ARM64ISD::VLSHR", SDT_ARM64vshift>;
> -def ARM64vshl : SDNode<"ARM64ISD::VSHL", SDT_ARM64vshift>;
> -def ARM64sqshli : SDNode<"ARM64ISD::SQSHL_I", SDT_ARM64vshift>;
> -def ARM64uqshli : SDNode<"ARM64ISD::UQSHL_I", SDT_ARM64vshift>;
> -def ARM64sqshlui : SDNode<"ARM64ISD::SQSHLU_I", SDT_ARM64vshift>;
> -def ARM64srshri : SDNode<"ARM64ISD::SRSHR_I", SDT_ARM64vshift>;
> -def ARM64urshri : SDNode<"ARM64ISD::URSHR_I", SDT_ARM64vshift>;
> -
> -def ARM64not: SDNode<"ARM64ISD::NOT", SDT_ARM64unvec>;
> -def ARM64bit: SDNode<"ARM64ISD::BIT", SDT_ARM64trivec>;
> -def ARM64bsl: SDNode<"ARM64ISD::BSL", SDT_ARM64trivec>;
> -
> -def ARM64cmeq: SDNode<"ARM64ISD::CMEQ", SDT_ARM64binvec>;
> -def ARM64cmge: SDNode<"ARM64ISD::CMGE", SDT_ARM64binvec>;
> -def ARM64cmgt: SDNode<"ARM64ISD::CMGT", SDT_ARM64binvec>;
> -def ARM64cmhi: SDNode<"ARM64ISD::CMHI", SDT_ARM64binvec>;
> -def ARM64cmhs: SDNode<"ARM64ISD::CMHS", SDT_ARM64binvec>;
> -
> -def ARM64fcmeq: SDNode<"ARM64ISD::FCMEQ", SDT_ARM64fcmp>;
> -def ARM64fcmge: SDNode<"ARM64ISD::FCMGE", SDT_ARM64fcmp>;
> -def ARM64fcmgt: SDNode<"ARM64ISD::FCMGT", SDT_ARM64fcmp>;
> -
> -def ARM64cmeqz: SDNode<"ARM64ISD::CMEQz", SDT_ARM64unvec>;
> -def ARM64cmgez: SDNode<"ARM64ISD::CMGEz", SDT_ARM64unvec>;
> -def ARM64cmgtz: SDNode<"ARM64ISD::CMGTz", SDT_ARM64unvec>;
> -def ARM64cmlez: SDNode<"ARM64ISD::CMLEz", SDT_ARM64unvec>;
> -def ARM64cmltz: SDNode<"ARM64ISD::CMLTz", SDT_ARM64unvec>;
> -def ARM64cmtst : PatFrag<(ops node:$LHS, node:$RHS),
> - (ARM64not (ARM64cmeqz (and node:$LHS, node:$RHS)))>;
> -
> -def ARM64fcmeqz: SDNode<"ARM64ISD::FCMEQz", SDT_ARM64fcmpz>;
> -def ARM64fcmgez: SDNode<"ARM64ISD::FCMGEz", SDT_ARM64fcmpz>;
> -def ARM64fcmgtz: SDNode<"ARM64ISD::FCMGTz", SDT_ARM64fcmpz>;
> -def ARM64fcmlez: SDNode<"ARM64ISD::FCMLEz", SDT_ARM64fcmpz>;
> -def ARM64fcmltz: SDNode<"ARM64ISD::FCMLTz", SDT_ARM64fcmpz>;
> +def AArch64fmax : SDNode<"AArch64ISD::FMAX", SDTFPBinOp>;
> +def AArch64fmin : SDNode<"AArch64ISD::FMIN", SDTFPBinOp>;
> +
> +def AArch64dup : SDNode<"AArch64ISD::DUP", SDT_AArch64Dup>;
> +def AArch64duplane8 : SDNode<"AArch64ISD::DUPLANE8", SDT_AArch64DupLane>;
> +def AArch64duplane16 : SDNode<"AArch64ISD::DUPLANE16", SDT_AArch64DupLane>;
> +def AArch64duplane32 : SDNode<"AArch64ISD::DUPLANE32", SDT_AArch64DupLane>;
> +def AArch64duplane64 : SDNode<"AArch64ISD::DUPLANE64", SDT_AArch64DupLane>;
> +
> +def AArch64zip1 : SDNode<"AArch64ISD::ZIP1", SDT_AArch64Zip>;
> +def AArch64zip2 : SDNode<"AArch64ISD::ZIP2", SDT_AArch64Zip>;
> +def AArch64uzp1 : SDNode<"AArch64ISD::UZP1", SDT_AArch64Zip>;
> +def AArch64uzp2 : SDNode<"AArch64ISD::UZP2", SDT_AArch64Zip>;
> +def AArch64trn1 : SDNode<"AArch64ISD::TRN1", SDT_AArch64Zip>;
> +def AArch64trn2 : SDNode<"AArch64ISD::TRN2", SDT_AArch64Zip>;
> +
> +def AArch64movi_edit : SDNode<"AArch64ISD::MOVIedit", SDT_AArch64MOVIedit>;
> +def AArch64movi_shift : SDNode<"AArch64ISD::MOVIshift", SDT_AArch64MOVIshift>;
> +def AArch64movi_msl : SDNode<"AArch64ISD::MOVImsl", SDT_AArch64MOVIshift>;
> +def AArch64mvni_shift : SDNode<"AArch64ISD::MVNIshift", SDT_AArch64MOVIshift>;
> +def AArch64mvni_msl : SDNode<"AArch64ISD::MVNImsl", SDT_AArch64MOVIshift>;
> +def AArch64movi : SDNode<"AArch64ISD::MOVI", SDT_AArch64MOVIedit>;
> +def AArch64fmov : SDNode<"AArch64ISD::FMOV", SDT_AArch64MOVIedit>;
> +
> +def AArch64rev16 : SDNode<"AArch64ISD::REV16", SDT_AArch64UnaryVec>;
> +def AArch64rev32 : SDNode<"AArch64ISD::REV32", SDT_AArch64UnaryVec>;
> +def AArch64rev64 : SDNode<"AArch64ISD::REV64", SDT_AArch64UnaryVec>;
> +def AArch64ext : SDNode<"AArch64ISD::EXT", SDT_AArch64ExtVec>;
> +
> +def AArch64vashr : SDNode<"AArch64ISD::VASHR", SDT_AArch64vshift>;
> +def AArch64vlshr : SDNode<"AArch64ISD::VLSHR", SDT_AArch64vshift>;
> +def AArch64vshl : SDNode<"AArch64ISD::VSHL", SDT_AArch64vshift>;
> +def AArch64sqshli : SDNode<"AArch64ISD::SQSHL_I", SDT_AArch64vshift>;
> +def AArch64uqshli : SDNode<"AArch64ISD::UQSHL_I", SDT_AArch64vshift>;
> +def AArch64sqshlui : SDNode<"AArch64ISD::SQSHLU_I", SDT_AArch64vshift>;
> +def AArch64srshri : SDNode<"AArch64ISD::SRSHR_I", SDT_AArch64vshift>;
> +def AArch64urshri : SDNode<"AArch64ISD::URSHR_I", SDT_AArch64vshift>;
> +
> +def AArch64not: SDNode<"AArch64ISD::NOT", SDT_AArch64unvec>;
> +def AArch64bit: SDNode<"AArch64ISD::BIT", SDT_AArch64trivec>;
> +def AArch64bsl: SDNode<"AArch64ISD::BSL", SDT_AArch64trivec>;
> +
> +def AArch64cmeq: SDNode<"AArch64ISD::CMEQ", SDT_AArch64binvec>;
> +def AArch64cmge: SDNode<"AArch64ISD::CMGE", SDT_AArch64binvec>;
> +def AArch64cmgt: SDNode<"AArch64ISD::CMGT", SDT_AArch64binvec>;
> +def AArch64cmhi: SDNode<"AArch64ISD::CMHI", SDT_AArch64binvec>;
> +def AArch64cmhs: SDNode<"AArch64ISD::CMHS", SDT_AArch64binvec>;
> +
> +def AArch64fcmeq: SDNode<"AArch64ISD::FCMEQ", SDT_AArch64fcmp>;
> +def AArch64fcmge: SDNode<"AArch64ISD::FCMGE", SDT_AArch64fcmp>;
> +def AArch64fcmgt: SDNode<"AArch64ISD::FCMGT", SDT_AArch64fcmp>;
> +
> +def AArch64cmeqz: SDNode<"AArch64ISD::CMEQz", SDT_AArch64unvec>;
> +def AArch64cmgez: SDNode<"AArch64ISD::CMGEz", SDT_AArch64unvec>;
> +def AArch64cmgtz: SDNode<"AArch64ISD::CMGTz", SDT_AArch64unvec>;
> +def AArch64cmlez: SDNode<"AArch64ISD::CMLEz", SDT_AArch64unvec>;
> +def AArch64cmltz: SDNode<"AArch64ISD::CMLTz", SDT_AArch64unvec>;
> +def AArch64cmtst : PatFrag<(ops node:$LHS, node:$RHS),
> + (AArch64not (AArch64cmeqz (and node:$LHS, node:$RHS)))>;
> +
> +def AArch64fcmeqz: SDNode<"AArch64ISD::FCMEQz", SDT_AArch64fcmpz>;
> +def AArch64fcmgez: SDNode<"AArch64ISD::FCMGEz", SDT_AArch64fcmpz>;
> +def AArch64fcmgtz: SDNode<"AArch64ISD::FCMGTz", SDT_AArch64fcmpz>;
> +def AArch64fcmlez: SDNode<"AArch64ISD::FCMLEz", SDT_AArch64fcmpz>;
> +def AArch64fcmltz: SDNode<"AArch64ISD::FCMLTz", SDT_AArch64fcmpz>;
>
> -def ARM64bici: SDNode<"ARM64ISD::BICi", SDT_ARM64vecimm>;
> -def ARM64orri: SDNode<"ARM64ISD::ORRi", SDT_ARM64vecimm>;
> +def AArch64bici: SDNode<"AArch64ISD::BICi", SDT_AArch64vecimm>;
> +def AArch64orri: SDNode<"AArch64ISD::ORRi", SDT_AArch64vecimm>;
>
> -def ARM64neg : SDNode<"ARM64ISD::NEG", SDT_ARM64unvec>;
> +def AArch64neg : SDNode<"AArch64ISD::NEG", SDT_AArch64unvec>;
>
> -def ARM64tcret: SDNode<"ARM64ISD::TC_RETURN", SDT_ARM64TCRET,
> +def AArch64tcret: SDNode<"AArch64ISD::TC_RETURN", SDT_AArch64TCRET,
> [SDNPHasChain, SDNPOptInGlue, SDNPVariadic]>;
>
> -def ARM64Prefetch : SDNode<"ARM64ISD::PREFETCH", SDT_ARM64PREFETCH,
> +def AArch64Prefetch : SDNode<"AArch64ISD::PREFETCH", SDT_AArch64PREFETCH,
> [SDNPHasChain, SDNPSideEffect]>;
>
> -def ARM64sitof: SDNode<"ARM64ISD::SITOF", SDT_ARM64ITOF>;
> -def ARM64uitof: SDNode<"ARM64ISD::UITOF", SDT_ARM64ITOF>;
> +def AArch64sitof: SDNode<"AArch64ISD::SITOF", SDT_AArch64ITOF>;
> +def AArch64uitof: SDNode<"AArch64ISD::UITOF", SDT_AArch64ITOF>;
>
> -def ARM64tlsdesc_call : SDNode<"ARM64ISD::TLSDESC_CALL", SDT_ARM64TLSDescCall,
> - [SDNPInGlue, SDNPOutGlue, SDNPHasChain,
> - SDNPVariadic]>;
> +def AArch64tlsdesc_call : SDNode<"AArch64ISD::TLSDESC_CALL",
> + SDT_AArch64TLSDescCall,
> + [SDNPInGlue, SDNPOutGlue, SDNPHasChain,
> + SDNPVariadic]>;
>
> -def ARM64WrapperLarge : SDNode<"ARM64ISD::WrapperLarge", SDT_ARM64WrapperLarge>;
> +def AArch64WrapperLarge : SDNode<"AArch64ISD::WrapperLarge",
> + SDT_AArch64WrapperLarge>;
>
>
> //===----------------------------------------------------------------------===//
>
> //===----------------------------------------------------------------------===//
>
> -// ARM64 Instruction Predicate Definitions.
> +// AArch64 Instruction Predicate Definitions.
> //
> def HasZCZ : Predicate<"Subtarget->hasZeroCycleZeroing()">;
> def NoZCZ : Predicate<"!Subtarget->hasZeroCycleZeroing()">;
> @@ -248,7 +250,7 @@ def IsNotDarwin: Predicate<"!Subtarget->
> def ForCodeSize : Predicate<"ForCodeSize">;
> def NotForCodeSize : Predicate<"!ForCodeSize">;
>
> -include "ARM64InstrFormats.td"
> +include "AArch64InstrFormats.td"
>
> //===----------------------------------------------------------------------===//
>
> @@ -258,63 +260,63 @@ include "ARM64InstrFormats.td"
>
> let Defs = [SP], Uses = [SP], hasSideEffects = 1, isCodeGenOnly = 1 in {
> def ADJCALLSTACKDOWN : Pseudo<(outs), (ins i32imm:$amt),
> - [(ARM64callseq_start timm:$amt)]>;
> + [(AArch64callseq_start timm:$amt)]>;
> def ADJCALLSTACKUP : Pseudo<(outs), (ins i32imm:$amt1, i32imm:$amt2),
> - [(ARM64callseq_end timm:$amt1, timm:$amt2)]>;
> + [(AArch64callseq_end timm:$amt1, timm:$amt2)]>;
> } // Defs = [SP], Uses = [SP], hasSideEffects = 1, isCodeGenOnly = 1
>
> let isReMaterializable = 1, isCodeGenOnly = 1 in {
> // FIXME: The following pseudo instructions are only needed because remat
> // cannot handle multiple instructions. When that changes, they can be
> -// removed, along with the ARM64Wrapper node.
> +// removed, along with the AArch64Wrapper node.
>
> let AddedComplexity = 10 in
> def LOADgot : Pseudo<(outs GPR64:$dst), (ins i64imm:$addr),
> - [(set GPR64:$dst, (ARM64LOADgot tglobaladdr:$addr))]>,
> + [(set GPR64:$dst, (AArch64LOADgot tglobaladdr:$addr))]>,
> Sched<[WriteLDAdr]>;
>
> // The MOVaddr instruction should match only when the add is not folded
> // into a load or store address.
> def MOVaddr
> : Pseudo<(outs GPR64:$dst), (ins i64imm:$hi, i64imm:$low),
> - [(set GPR64:$dst, (ARM64addlow (ARM64adrp tglobaladdr:$hi),
> + [(set GPR64:$dst, (AArch64addlow (AArch64adrp tglobaladdr:$hi),
> tglobaladdr:$low))]>,
> Sched<[WriteAdrAdr]>;
> def MOVaddrJT
> : Pseudo<(outs GPR64:$dst), (ins i64imm:$hi, i64imm:$low),
> - [(set GPR64:$dst, (ARM64addlow (ARM64adrp tjumptable:$hi),
> + [(set GPR64:$dst, (AArch64addlow (AArch64adrp tjumptable:$hi),
> tjumptable:$low))]>,
> Sched<[WriteAdrAdr]>;
> def MOVaddrCP
> : Pseudo<(outs GPR64:$dst), (ins i64imm:$hi, i64imm:$low),
> - [(set GPR64:$dst, (ARM64addlow (ARM64adrp tconstpool:$hi),
> + [(set GPR64:$dst, (AArch64addlow (AArch64adrp tconstpool:$hi),
> tconstpool:$low))]>,
> Sched<[WriteAdrAdr]>;
> def MOVaddrBA
> : Pseudo<(outs GPR64:$dst), (ins i64imm:$hi, i64imm:$low),
> - [(set GPR64:$dst, (ARM64addlow (ARM64adrp tblockaddress:$hi),
> + [(set GPR64:$dst, (AArch64addlow (AArch64adrp tblockaddress:$hi),
> tblockaddress:$low))]>,
> Sched<[WriteAdrAdr]>;
> def MOVaddrTLS
> : Pseudo<(outs GPR64:$dst), (ins i64imm:$hi, i64imm:$low),
> - [(set GPR64:$dst, (ARM64addlow (ARM64adrp tglobaltlsaddr:$hi),
> + [(set GPR64:$dst, (AArch64addlow (AArch64adrp tglobaltlsaddr:$hi),
> tglobaltlsaddr:$low))]>,
> Sched<[WriteAdrAdr]>;
> def MOVaddrEXT
> : Pseudo<(outs GPR64:$dst), (ins i64imm:$hi, i64imm:$low),
> - [(set GPR64:$dst, (ARM64addlow (ARM64adrp texternalsym:$hi),
> + [(set GPR64:$dst, (AArch64addlow (AArch64adrp texternalsym:$hi),
> texternalsym:$low))]>,
> Sched<[WriteAdrAdr]>;
>
> } // isReMaterializable, isCodeGenOnly
>
> -def : Pat<(ARM64LOADgot tglobaltlsaddr:$addr),
> +def : Pat<(AArch64LOADgot tglobaltlsaddr:$addr),
> (LOADgot tglobaltlsaddr:$addr)>;
>
> -def : Pat<(ARM64LOADgot texternalsym:$addr),
> +def : Pat<(AArch64LOADgot texternalsym:$addr),
> (LOADgot texternalsym:$addr)>;
>
> -def : Pat<(ARM64LOADgot tconstpool:$addr),
> +def : Pat<(AArch64LOADgot tconstpool:$addr),
> (LOADgot tconstpool:$addr)>;
>
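For anyone carrying local patches, the three AArch64LOADgot fall-back patterns
above are the shape to copy. A purely hypothetical fourth variant, only to show
the form (tjumptable is not handled here in the patch, and whether a GOT load
of a jump table ever comes up is beside the point):

    def : Pat<(AArch64LOADgot tjumptable:$addr),
              (LOADgot tjumptable:$addr)>;
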
> //===----------------------------------------------------------------------===//
> @@ -345,7 +347,7 @@ def MSRpstate: MSRpstateI;
>
> // The thread pointer (on Linux, at least, where this has been implemented) is
> // TPIDR_EL0.
> -def : Pat<(ARM64threadpointer), (MRS 0xde82)>;
> +def : Pat<(AArch64threadpointer), (MRS 0xde82)>;
>
> // Generic system instructions
> def SYSxt : SystemXtI<0, "sys">;
> @@ -464,28 +466,28 @@ def : Pat<(i64 i64imm_32bit:$src),
>
> // Deal with the various forms of (ELF) large addressing with MOVZ/MOVK
> // sequences.
> -def : Pat<(ARM64WrapperLarge tglobaladdr:$g3, tglobaladdr:$g2,
> +def : Pat<(AArch64WrapperLarge tglobaladdr:$g3, tglobaladdr:$g2,
> tglobaladdr:$g1, tglobaladdr:$g0),
> (MOVKXi (MOVKXi (MOVKXi (MOVZXi tglobaladdr:$g3, 48),
> tglobaladdr:$g2, 32),
> tglobaladdr:$g1, 16),
> tglobaladdr:$g0, 0)>;
>
> -def : Pat<(ARM64WrapperLarge tblockaddress:$g3, tblockaddress:$g2,
> +def : Pat<(AArch64WrapperLarge tblockaddress:$g3, tblockaddress:$g2,
> tblockaddress:$g1, tblockaddress:$g0),
> (MOVKXi (MOVKXi (MOVKXi (MOVZXi tblockaddress:$g3, 48),
> tblockaddress:$g2, 32),
> tblockaddress:$g1, 16),
> tblockaddress:$g0, 0)>;
>
> -def : Pat<(ARM64WrapperLarge tconstpool:$g3, tconstpool:$g2,
> +def : Pat<(AArch64WrapperLarge tconstpool:$g3, tconstpool:$g2,
> tconstpool:$g1, tconstpool:$g0),
> (MOVKXi (MOVKXi (MOVKXi (MOVZXi tconstpool:$g3, 48),
> tconstpool:$g2, 32),
> tconstpool:$g1, 16),
> tconstpool:$g0, 0)>;
>
> -def : Pat<(ARM64WrapperLarge tjumptable:$g3, tjumptable:$g2,
> +def : Pat<(AArch64WrapperLarge tjumptable:$g3, tjumptable:$g2,
> tjumptable:$g1, tjumptable:$g0),
> (MOVKXi (MOVKXi (MOVKXi (MOVZXi tjumptable:$g3, 48),
> tjumptable:$g2, 32),
> @@ -498,8 +500,8 @@ def : Pat<(ARM64WrapperLarge tjumptable:
> //===----------------------------------------------------------------------===//
>
> // Add/subtract with carry.
> -defm ADC : AddSubCarry<0, "adc", "adcs", ARM64adc, ARM64adc_flag>;
> -defm SBC : AddSubCarry<1, "sbc", "sbcs", ARM64sbc, ARM64sbc_flag>;
> +defm ADC : AddSubCarry<0, "adc", "adcs", AArch64adc, AArch64adc_flag>;
> +defm SBC : AddSubCarry<1, "sbc", "sbcs", AArch64sbc, AArch64sbc_flag>;
>
> def : InstAlias<"ngc $dst, $src", (SBCWr GPR32:$dst, WZR, GPR32:$src)>;
> def : InstAlias<"ngc $dst, $src", (SBCXr GPR64:$dst, XZR, GPR64:$src)>;
> @@ -519,8 +521,8 @@ def : InstAlias<"mov $dst, $src",
> def : InstAlias<"mov $dst, $src",
> (ADDXri GPR64sp:$dst, GPR64sponly:$src, 0, 0)>;
>
> -defm ADDS : AddSubS<0, "adds", ARM64add_flag, "cmn">;
> -defm SUBS : AddSubS<1, "subs", ARM64sub_flag, "cmp">;
> +defm ADDS : AddSubS<0, "adds", AArch64add_flag, "cmn">;
> +defm SUBS : AddSubS<1, "subs", AArch64sub_flag, "cmp">;
>
> // Use SUBS instead of SUB to enable CSE between SUBS and SUB.
> def : Pat<(sub GPR32sp:$Rn, addsub_shifted_imm32:$imm),
> @@ -558,13 +560,13 @@ def : Pat<(sub GPR64:$Rn, neg_addsub_shi
> // expression (add x, -1) must be transformed to (SUB{W,X}ri x, 1).
> // These patterns capture that transformation.
> let AddedComplexity = 1 in {
> -def : Pat<(ARM64add_flag GPR32:$Rn, neg_addsub_shifted_imm32:$imm),
> +def : Pat<(AArch64add_flag GPR32:$Rn, neg_addsub_shifted_imm32:$imm),
> (SUBSWri GPR32:$Rn, neg_addsub_shifted_imm32:$imm)>;
> -def : Pat<(ARM64add_flag GPR64:$Rn, neg_addsub_shifted_imm64:$imm),
> +def : Pat<(AArch64add_flag GPR64:$Rn, neg_addsub_shifted_imm64:$imm),
> (SUBSXri GPR64:$Rn, neg_addsub_shifted_imm64:$imm)>;
> -def : Pat<(ARM64sub_flag GPR32:$Rn, neg_addsub_shifted_imm32:$imm),
> +def : Pat<(AArch64sub_flag GPR32:$Rn, neg_addsub_shifted_imm32:$imm),
> (ADDSWri GPR32:$Rn, neg_addsub_shifted_imm32:$imm)>;
> -def : Pat<(ARM64sub_flag GPR64:$Rn, neg_addsub_shifted_imm64:$imm),
> +def : Pat<(AArch64sub_flag GPR64:$Rn, neg_addsub_shifted_imm64:$imm),
> (ADDSXri GPR64:$Rn, neg_addsub_shifted_imm64:$imm)>;
> }
>
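A hand-worked instance of what those four patterns buy; the operand spelling in
the comment is paraphrased rather than taken from the matcher, so treat the
exact immediate form as an assumption:

    // A flag-setting add of a negative immediate, e.g.
    //   (AArch64add_flag GPR32:$Rn, -5)
    // has no legal ADDS immediate encoding, so the pattern flips it to the
    // subtract form instead:
    //   (SUBSWri GPR32:$Rn, 5, 0)      // i.e. subs wD, wN, #5
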
> @@ -587,8 +589,8 @@ def : InstAlias<"negs $dst, $src$shift",
> defm UDIV : Div<0, "udiv", udiv>;
> defm SDIV : Div<1, "sdiv", sdiv>;
> let isCodeGenOnly = 1 in {
> -defm UDIV_Int : Div<0, "udiv", int_arm64_udiv>;
> -defm SDIV_Int : Div<1, "sdiv", int_arm64_sdiv>;
> +defm UDIV_Int : Div<0, "udiv", int_aarch64_udiv>;
> +defm SDIV_Int : Div<1, "sdiv", int_aarch64_sdiv>;
> }
>
> // Variable shift
> @@ -653,15 +655,15 @@ def SMULHrr : MulHi<0b010, "smulh", mulh
> def UMULHrr : MulHi<0b110, "umulh", mulhu>;
>
> // CRC32
> -def CRC32Brr : BaseCRC32<0, 0b00, 0, GPR32, int_arm64_crc32b, "crc32b">;
> -def CRC32Hrr : BaseCRC32<0, 0b01, 0, GPR32, int_arm64_crc32h, "crc32h">;
> -def CRC32Wrr : BaseCRC32<0, 0b10, 0, GPR32, int_arm64_crc32w, "crc32w">;
> -def CRC32Xrr : BaseCRC32<1, 0b11, 0, GPR64, int_arm64_crc32x, "crc32x">;
> -
> -def CRC32CBrr : BaseCRC32<0, 0b00, 1, GPR32, int_arm64_crc32cb, "crc32cb">;
> -def CRC32CHrr : BaseCRC32<0, 0b01, 1, GPR32, int_arm64_crc32ch, "crc32ch">;
> -def CRC32CWrr : BaseCRC32<0, 0b10, 1, GPR32, int_arm64_crc32cw, "crc32cw">;
> -def CRC32CXrr : BaseCRC32<1, 0b11, 1, GPR64, int_arm64_crc32cx, "crc32cx">;
> +def CRC32Brr : BaseCRC32<0, 0b00, 0, GPR32, int_aarch64_crc32b, "crc32b">;
> +def CRC32Hrr : BaseCRC32<0, 0b01, 0, GPR32, int_aarch64_crc32h, "crc32h">;
> +def CRC32Wrr : BaseCRC32<0, 0b10, 0, GPR32, int_aarch64_crc32w, "crc32w">;
> +def CRC32Xrr : BaseCRC32<1, 0b11, 0, GPR64, int_aarch64_crc32x, "crc32x">;
> +
> +def CRC32CBrr : BaseCRC32<0, 0b00, 1, GPR32, int_aarch64_crc32cb, "crc32cb">;
> +def CRC32CHrr : BaseCRC32<0, 0b01, 1, GPR32, int_aarch64_crc32ch, "crc32ch">;
> +def CRC32CWrr : BaseCRC32<0, 0b10, 1, GPR32, int_aarch64_crc32cw, "crc32cw">;
> +def CRC32CXrr : BaseCRC32<1, 0b11, 1, GPR64, int_aarch64_crc32cx, "crc32cx">;
>
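The int_arm64_* -> int_aarch64_* switch also changes the textual IR names, so
the first intrinsic above is now spelled llvm.aarch64.crc32b at the IR level
(signature quoted from memory, worth double-checking against
IntrinsicsAArch64.td). A hypothetical out-of-tree pattern written against the
renamed intrinsic would look like:

    // declare i32 @llvm.aarch64.crc32b(i32 %crc, i32 %byte)
    def : Pat<(int_aarch64_crc32b GPR32:$crc, GPR32:$val),
              (CRC32Brr GPR32:$crc, GPR32:$val)>;
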
>
> //===----------------------------------------------------------------------===//
> @@ -669,7 +671,7 @@ def CRC32CXrr : BaseCRC32<1, 0b11, 1, GP
> //===----------------------------------------------------------------------===//
>
> // (immediate)
> -defm ANDS : LogicalImmS<0b11, "ands", ARM64and_flag>;
> +defm ANDS : LogicalImmS<0b11, "ands", AArch64and_flag>;
> defm AND : LogicalImm<0b00, "and", and>;
> defm EOR : LogicalImm<0b10, "eor", xor>;
> defm ORR : LogicalImm<0b01, "orr", or>;
> @@ -684,9 +686,9 @@ def : InstAlias<"mov $dst, $imm", (ORRXr
>
>
> // (register)
> -defm ANDS : LogicalRegS<0b11, 0, "ands", ARM64and_flag>;
> +defm ANDS : LogicalRegS<0b11, 0, "ands", AArch64and_flag>;
> defm BICS : LogicalRegS<0b11, 1, "bics",
> - BinOpFrag<(ARM64and_flag node:$LHS, (not node:$RHS))>>;
> + BinOpFrag<(AArch64and_flag node:$LHS, (not node:$RHS))>>;
> defm AND : LogicalReg<0b00, 0, "and", and>;
> defm BIC : LogicalReg<0b00, 1, "bic",
> BinOpFrag<(and node:$LHS, (not node:$RHS))>>;
> @@ -900,26 +902,26 @@ defm CSINC : CondSelectOp<0, 0b01, "csin
> defm CSINV : CondSelectOp<1, 0b00, "csinv", not>;
> defm CSNEG : CondSelectOp<1, 0b01, "csneg", ineg>;
>
> -def : Pat<(ARM64csinv GPR32:$tval, GPR32:$fval, (i32 imm:$cc), NZCV),
> +def : Pat<(AArch64csinv GPR32:$tval, GPR32:$fval, (i32 imm:$cc), NZCV),
> (CSINVWr GPR32:$tval, GPR32:$fval, (i32 imm:$cc))>;
> -def : Pat<(ARM64csinv GPR64:$tval, GPR64:$fval, (i32 imm:$cc), NZCV),
> +def : Pat<(AArch64csinv GPR64:$tval, GPR64:$fval, (i32 imm:$cc), NZCV),
> (CSINVXr GPR64:$tval, GPR64:$fval, (i32 imm:$cc))>;
> -def : Pat<(ARM64csneg GPR32:$tval, GPR32:$fval, (i32 imm:$cc), NZCV),
> +def : Pat<(AArch64csneg GPR32:$tval, GPR32:$fval, (i32 imm:$cc), NZCV),
> (CSNEGWr GPR32:$tval, GPR32:$fval, (i32 imm:$cc))>;
> -def : Pat<(ARM64csneg GPR64:$tval, GPR64:$fval, (i32 imm:$cc), NZCV),
> +def : Pat<(AArch64csneg GPR64:$tval, GPR64:$fval, (i32 imm:$cc), NZCV),
> (CSNEGXr GPR64:$tval, GPR64:$fval, (i32 imm:$cc))>;
> -def : Pat<(ARM64csinc GPR32:$tval, GPR32:$fval, (i32 imm:$cc), NZCV),
> +def : Pat<(AArch64csinc GPR32:$tval, GPR32:$fval, (i32 imm:$cc), NZCV),
> (CSINCWr GPR32:$tval, GPR32:$fval, (i32 imm:$cc))>;
> -def : Pat<(ARM64csinc GPR64:$tval, GPR64:$fval, (i32 imm:$cc), NZCV),
> +def : Pat<(AArch64csinc GPR64:$tval, GPR64:$fval, (i32 imm:$cc), NZCV),
> (CSINCXr GPR64:$tval, GPR64:$fval, (i32 imm:$cc))>;
>
> -def : Pat<(ARM64csel (i32 0), (i32 1), (i32 imm:$cc), NZCV),
> +def : Pat<(AArch64csel (i32 0), (i32 1), (i32 imm:$cc), NZCV),
> (CSINCWr WZR, WZR, (i32 imm:$cc))>;
> -def : Pat<(ARM64csel (i64 0), (i64 1), (i32 imm:$cc), NZCV),
> +def : Pat<(AArch64csel (i64 0), (i64 1), (i32 imm:$cc), NZCV),
> (CSINCXr XZR, XZR, (i32 imm:$cc))>;
> -def : Pat<(ARM64csel (i32 0), (i32 -1), (i32 imm:$cc), NZCV),
> +def : Pat<(AArch64csel (i32 0), (i32 -1), (i32 imm:$cc), NZCV),
> (CSINVWr WZR, WZR, (i32 imm:$cc))>;
> -def : Pat<(ARM64csel (i64 0), (i64 -1), (i32 imm:$cc), NZCV),
> +def : Pat<(AArch64csel (i64 0), (i64 -1), (i32 imm:$cc), NZCV),
> (CSINVXr XZR, XZR, (i32 imm:$cc))>;
>
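A quick sanity check on the 0/1 and 0/-1 materialisation above, since the
operand roles are easy to misread; this is my reading of the CSINC/CSINV
semantics, not something the patch changes:

    // CSINC Rd, Rn, Rm, cc   ==>   Rd = cc ? Rn : Rm + 1
    // so (AArch64csel 0, 1, cc) -> (CSINCWr WZR, WZR, cc):
    //   cc true  -> WZR     = 0
    //   cc false -> WZR + 1 = 1
    // CSINV gives 0 / -1 the same way, since ~0 is all-ones.
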
> // The inverse of the condition code from the alias instruction is what is used
> @@ -959,12 +961,12 @@ def ADR : ADRI<0, "adr", adrlabel, []>;
> } // neverHasSideEffects = 1
>
> def ADRP : ADRI<1, "adrp", adrplabel,
> - [(set GPR64:$Xd, (ARM64adrp tglobaladdr:$label))]>;
> + [(set GPR64:$Xd, (AArch64adrp tglobaladdr:$label))]>;
> } // isReMaterializable = 1
>
> // page address of a constant pool entry, block address
> -def : Pat<(ARM64adrp tconstpool:$cp), (ADRP tconstpool:$cp)>;
> -def : Pat<(ARM64adrp tblockaddress:$cp), (ADRP tblockaddress:$cp)>;
> +def : Pat<(AArch64adrp tconstpool:$cp), (ADRP tconstpool:$cp)>;
> +def : Pat<(AArch64adrp tblockaddress:$cp), (ADRP tblockaddress:$cp)>;
>
> //===----------------------------------------------------------------------===//
> // Unconditional branch (register) instructions.
> @@ -980,7 +982,7 @@ def ERET : SpecialReturn<0b0100, "eret">
> def : InstAlias<"ret", (RET LR)>;
>
> let isCall = 1, Defs = [LR], Uses = [SP] in {
> -def BLR : BranchReg<0b0001, "blr", [(ARM64call GPR64:$Rn)]>;
> +def BLR : BranchReg<0b0001, "blr", [(AArch64call GPR64:$Rn)]>;
> } // isCall
>
> let isBranch = 1, isTerminator = 1, isBarrier = 1, isIndirectBranch = 1 in {
> @@ -990,7 +992,7 @@ def BR : BranchReg<0b0000, "br", [(brin
> // Create a separate pseudo-instruction for codegen to use so that we don't
> // flag lr as used in every function. It'll be restored before the RET by the
> // epilogue if it's legitimately used.
> -def RET_ReallyLR : Pseudo<(outs), (ins), [(ARM64retflag)]> {
> +def RET_ReallyLR : Pseudo<(outs), (ins), [(AArch64retflag)]> {
> let isTerminator = 1;
> let isBarrier = 1;
> let isReturn = 1;
> @@ -1009,9 +1011,9 @@ def TLSDESCCALL : Pseudo<(outs), (ins i6
> let isCall = 1, Defs = [LR] in
> def TLSDESC_BLR
> : Pseudo<(outs), (ins GPR64:$dest, i64imm:$sym),
> - [(ARM64tlsdesc_call GPR64:$dest, tglobaltlsaddr:$sym)]>;
> + [(AArch64tlsdesc_call GPR64:$dest, tglobaltlsaddr:$sym)]>;
>
> -def : Pat<(ARM64tlsdesc_call GPR64:$dest, texternalsym:$sym),
> +def : Pat<(AArch64tlsdesc_call GPR64:$dest, texternalsym:$sym),
> (TLSDESC_BLR GPR64:$dest, texternalsym:$sym)>;
> //===----------------------------------------------------------------------===//
> // Conditional branch (immediate) instruction.
> @@ -1021,14 +1023,14 @@ def Bcc : BranchCond;
> //===----------------------------------------------------------------------===//
> // Compare-and-branch instructions.
> //===----------------------------------------------------------------------===//
> -defm CBZ : CmpBranch<0, "cbz", ARM64cbz>;
> -defm CBNZ : CmpBranch<1, "cbnz", ARM64cbnz>;
> +defm CBZ : CmpBranch<0, "cbz", AArch64cbz>;
> +defm CBNZ : CmpBranch<1, "cbnz", AArch64cbnz>;
>
> //===----------------------------------------------------------------------===//
> // Test-bit-and-branch instructions.
> //===----------------------------------------------------------------------===//
> -defm TBZ : TestBranch<0, "tbz", ARM64tbz>;
> -defm TBNZ : TestBranch<1, "tbnz", ARM64tbnz>;
> +defm TBZ : TestBranch<0, "tbz", AArch64tbz>;
> +defm TBNZ : TestBranch<1, "tbnz", AArch64tbnz>;
>
> //===----------------------------------------------------------------------===//
> // Unconditional branch (immediate) instructions.
> @@ -1038,9 +1040,9 @@ def B : BranchImm<0, "b", [(br bb:$addr
> } // isBranch, isTerminator, isBarrier
>
> let isCall = 1, Defs = [LR], Uses = [SP] in {
> -def BL : CallImm<1, "bl", [(ARM64call tglobaladdr:$addr)]>;
> +def BL : CallImm<1, "bl", [(AArch64call tglobaladdr:$addr)]>;
> } // isCall
> -def : Pat<(ARM64call texternalsym:$func), (BL texternalsym:$func)>;
> +def : Pat<(AArch64call texternalsym:$func), (BL texternalsym:$func)>;
>
> //===----------------------------------------------------------------------===//
> // Exception generation instructions.
> @@ -1432,7 +1434,7 @@ def : Pat<(i64 (zextloadi32 (am_indexed3
>
> // Pre-fetch.
> def PRFMui : PrefetchUI<0b11, 0, 0b10, "prfm",
> - [(ARM64Prefetch imm:$Rt,
> + [(AArch64Prefetch imm:$Rt,
> (am_indexed64 GPR64sp:$Rn,
> uimm12s8:$offset))]>;
>
> @@ -1451,7 +1453,7 @@ def LDRSWl : LoadLiteral<0b10, 0, GPR64,
>
> // prefetch
> def PRFMl : PrefetchLiteral<0b11, 0, "prfm", []>;
> -// [(ARM64Prefetch imm:$Rt, tglobaladdr:$label)]>;
> +// [(AArch64Prefetch imm:$Rt, tglobaladdr:$label)]>;
>
> //---
> // (unscaled immediate)
> @@ -1650,7 +1652,7 @@ def : InstAlias<"ldrsw $Rt, [$Rn, $offse
>
> // Pre-fetch.
> defm PRFUM : PrefetchUnscaled<0b11, 0, 0b10, "prfum",
> - [(ARM64Prefetch imm:$Rt,
> + [(AArch64Prefetch imm:$Rt,
> (am_unscaled64 GPR64sp:$Rn, simm9:$offset))]>;
>
> //---
> @@ -2187,23 +2189,23 @@ def STXPX : StoreExclusivePair<0b11, 0,
> // Scaled floating point to integer conversion instructions.
> //===----------------------------------------------------------------------===//
>
> -defm FCVTAS : FPToIntegerUnscaled<0b00, 0b100, "fcvtas", int_arm64_neon_fcvtas>;
> -defm FCVTAU : FPToIntegerUnscaled<0b00, 0b101, "fcvtau", int_arm64_neon_fcvtau>;
> -defm FCVTMS : FPToIntegerUnscaled<0b10, 0b000, "fcvtms", int_arm64_neon_fcvtms>;
> -defm FCVTMU : FPToIntegerUnscaled<0b10, 0b001, "fcvtmu", int_arm64_neon_fcvtmu>;
> -defm FCVTNS : FPToIntegerUnscaled<0b00, 0b000, "fcvtns", int_arm64_neon_fcvtns>;
> -defm FCVTNU : FPToIntegerUnscaled<0b00, 0b001, "fcvtnu", int_arm64_neon_fcvtnu>;
> -defm FCVTPS : FPToIntegerUnscaled<0b01, 0b000, "fcvtps", int_arm64_neon_fcvtps>;
> -defm FCVTPU : FPToIntegerUnscaled<0b01, 0b001, "fcvtpu", int_arm64_neon_fcvtpu>;
> +defm FCVTAS : FPToIntegerUnscaled<0b00, 0b100, "fcvtas", int_aarch64_neon_fcvtas>;
> +defm FCVTAU : FPToIntegerUnscaled<0b00, 0b101, "fcvtau", int_aarch64_neon_fcvtau>;
> +defm FCVTMS : FPToIntegerUnscaled<0b10, 0b000, "fcvtms", int_aarch64_neon_fcvtms>;
> +defm FCVTMU : FPToIntegerUnscaled<0b10, 0b001, "fcvtmu", int_aarch64_neon_fcvtmu>;
> +defm FCVTNS : FPToIntegerUnscaled<0b00, 0b000, "fcvtns", int_aarch64_neon_fcvtns>;
> +defm FCVTNU : FPToIntegerUnscaled<0b00, 0b001, "fcvtnu", int_aarch64_neon_fcvtnu>;
> +defm FCVTPS : FPToIntegerUnscaled<0b01, 0b000, "fcvtps", int_aarch64_neon_fcvtps>;
> +defm FCVTPU : FPToIntegerUnscaled<0b01, 0b001, "fcvtpu", int_aarch64_neon_fcvtpu>;
> defm FCVTZS : FPToIntegerUnscaled<0b11, 0b000, "fcvtzs", fp_to_sint>;
> defm FCVTZU : FPToIntegerUnscaled<0b11, 0b001, "fcvtzu", fp_to_uint>;
> defm FCVTZS : FPToIntegerScaled<0b11, 0b000, "fcvtzs", fp_to_sint>;
> defm FCVTZU : FPToIntegerScaled<0b11, 0b001, "fcvtzu", fp_to_uint>;
> let isCodeGenOnly = 1 in {
> -defm FCVTZS_Int : FPToIntegerUnscaled<0b11, 0b000, "fcvtzs", int_arm64_neon_fcvtzs>;
> -defm FCVTZU_Int : FPToIntegerUnscaled<0b11, 0b001, "fcvtzu", int_arm64_neon_fcvtzu>;
> -defm FCVTZS_Int : FPToIntegerScaled<0b11, 0b000, "fcvtzs", int_arm64_neon_fcvtzs>;
> -defm FCVTZU_Int : FPToIntegerScaled<0b11, 0b001, "fcvtzu", int_arm64_neon_fcvtzu>;
> +defm FCVTZS_Int : FPToIntegerUnscaled<0b11, 0b000, "fcvtzs", int_aarch64_neon_fcvtzs>;
> +defm FCVTZU_Int : FPToIntegerUnscaled<0b11, 0b001, "fcvtzu", int_aarch64_neon_fcvtzu>;
> +defm FCVTZS_Int : FPToIntegerScaled<0b11, 0b000, "fcvtzs", int_aarch64_neon_fcvtzs>;
> +defm FCVTZU_Int : FPToIntegerScaled<0b11, 0b001, "fcvtzu", int_aarch64_neon_fcvtzu>;
> }
>
> //===----------------------------------------------------------------------===//
> @@ -2246,10 +2248,10 @@ defm FNEG : SingleOperandFPData<0b0010
> defm FRINTA : SingleOperandFPData<0b1100, "frinta", frnd>;
> defm FRINTI : SingleOperandFPData<0b1111, "frinti", fnearbyint>;
> defm FRINTM : SingleOperandFPData<0b1010, "frintm", ffloor>;
> -defm FRINTN : SingleOperandFPData<0b1000, "frintn", int_arm64_neon_frintn>;
> +defm FRINTN : SingleOperandFPData<0b1000, "frintn", int_aarch64_neon_frintn>;
> defm FRINTP : SingleOperandFPData<0b1001, "frintp", fceil>;
>
> -def : Pat<(v1f64 (int_arm64_neon_frintn (v1f64 FPR64:$Rn))),
> +def : Pat<(v1f64 (int_aarch64_neon_frintn (v1f64 FPR64:$Rn))),
> (FRINTNDr FPR64:$Rn)>;
>
> // FRINTX is inserted to set the flags as required by FENV_ACCESS ON behavior
> @@ -2274,23 +2276,23 @@ defm FADD : TwoOperandFPData<0b0010, "
> let SchedRW = [WriteFDiv] in {
> defm FDIV : TwoOperandFPData<0b0001, "fdiv", fdiv>;
> }
> -defm FMAXNM : TwoOperandFPData<0b0110, "fmaxnm", int_arm64_neon_fmaxnm>;
> -defm FMAX : TwoOperandFPData<0b0100, "fmax", ARM64fmax>;
> -defm FMINNM : TwoOperandFPData<0b0111, "fminnm", int_arm64_neon_fminnm>;
> -defm FMIN : TwoOperandFPData<0b0101, "fmin", ARM64fmin>;
> +defm FMAXNM : TwoOperandFPData<0b0110, "fmaxnm", int_aarch64_neon_fmaxnm>;
> +defm FMAX : TwoOperandFPData<0b0100, "fmax", AArch64fmax>;
> +defm FMINNM : TwoOperandFPData<0b0111, "fminnm", int_aarch64_neon_fminnm>;
> +defm FMIN : TwoOperandFPData<0b0101, "fmin", AArch64fmin>;
> let SchedRW = [WriteFMul] in {
> defm FMUL : TwoOperandFPData<0b0000, "fmul", fmul>;
> defm FNMUL : TwoOperandFPDataNeg<0b1000, "fnmul", fmul>;
> }
> defm FSUB : TwoOperandFPData<0b0011, "fsub", fsub>;
>
> -def : Pat<(v1f64 (ARM64fmax (v1f64 FPR64:$Rn), (v1f64 FPR64:$Rm))),
> +def : Pat<(v1f64 (AArch64fmax (v1f64 FPR64:$Rn), (v1f64 FPR64:$Rm))),
> (FMAXDrr FPR64:$Rn, FPR64:$Rm)>;
> -def : Pat<(v1f64 (ARM64fmin (v1f64 FPR64:$Rn), (v1f64 FPR64:$Rm))),
> +def : Pat<(v1f64 (AArch64fmin (v1f64 FPR64:$Rn), (v1f64 FPR64:$Rm))),
> (FMINDrr FPR64:$Rn, FPR64:$Rm)>;
> -def : Pat<(v1f64 (int_arm64_neon_fmaxnm (v1f64 FPR64:$Rn), (v1f64 FPR64:$Rm))),
> +def : Pat<(v1f64 (int_aarch64_neon_fmaxnm (v1f64 FPR64:$Rn), (v1f64 FPR64:$Rm))),
> (FMAXNMDrr FPR64:$Rn, FPR64:$Rm)>;
> -def : Pat<(v1f64 (int_arm64_neon_fminnm (v1f64 FPR64:$Rn), (v1f64 FPR64:$Rm))),
> +def : Pat<(v1f64 (int_aarch64_neon_fminnm (v1f64 FPR64:$Rn), (v1f64 FPR64:$Rm))),
> (FMINNMDrr FPR64:$Rn, FPR64:$Rm)>;
>
> //===----------------------------------------------------------------------===//
> @@ -2335,7 +2337,7 @@ def : Pat<(f64 (fma FPR64:$Rn, (fneg FPR
> //===----------------------------------------------------------------------===//
>
> defm FCMPE : FPComparison<1, "fcmpe">;
> -defm FCMP : FPComparison<0, "fcmp", ARM64fcmp>;
> +defm FCMP : FPComparison<0, "fcmp", AArch64fcmp>;
>
> //===----------------------------------------------------------------------===//
> // Floating point conditional comparison instructions.
> @@ -2356,7 +2358,7 @@ defm FCSEL : FPCondSelect<"fcsel">;
> def F128CSEL : Pseudo<(outs FPR128:$Rd),
> (ins FPR128:$Rn, FPR128:$Rm, ccode:$cond),
> [(set (f128 FPR128:$Rd),
> - (ARM64csel FPR128:$Rn, FPR128:$Rm,
> + (AArch64csel FPR128:$Rn, FPR128:$Rm,
> (i32 imm:$cond), NZCV))]> {
> let Uses = [NZCV];
> let usesCustomInserter = 1;
> @@ -2375,28 +2377,28 @@ defm FMOV : FPMoveImmediate<"fmov">;
> // Advanced SIMD two vector instructions.
> //===----------------------------------------------------------------------===//
>
> -defm ABS : SIMDTwoVectorBHSD<0, 0b01011, "abs", int_arm64_neon_abs>;
> -defm CLS : SIMDTwoVectorBHS<0, 0b00100, "cls", int_arm64_neon_cls>;
> +defm ABS : SIMDTwoVectorBHSD<0, 0b01011, "abs", int_aarch64_neon_abs>;
> +defm CLS : SIMDTwoVectorBHS<0, 0b00100, "cls", int_aarch64_neon_cls>;
> defm CLZ : SIMDTwoVectorBHS<1, 0b00100, "clz", ctlz>;
> -defm CMEQ : SIMDCmpTwoVector<0, 0b01001, "cmeq", ARM64cmeqz>;
> -defm CMGE : SIMDCmpTwoVector<1, 0b01000, "cmge", ARM64cmgez>;
> -defm CMGT : SIMDCmpTwoVector<0, 0b01000, "cmgt", ARM64cmgtz>;
> -defm CMLE : SIMDCmpTwoVector<1, 0b01001, "cmle", ARM64cmlez>;
> -defm CMLT : SIMDCmpTwoVector<0, 0b01010, "cmlt", ARM64cmltz>;
> +defm CMEQ : SIMDCmpTwoVector<0, 0b01001, "cmeq", AArch64cmeqz>;
> +defm CMGE : SIMDCmpTwoVector<1, 0b01000, "cmge", AArch64cmgez>;
> +defm CMGT : SIMDCmpTwoVector<0, 0b01000, "cmgt", AArch64cmgtz>;
> +defm CMLE : SIMDCmpTwoVector<1, 0b01001, "cmle", AArch64cmlez>;
> +defm CMLT : SIMDCmpTwoVector<0, 0b01010, "cmlt", AArch64cmltz>;
> defm CNT : SIMDTwoVectorB<0, 0b00, 0b00101, "cnt", ctpop>;
> defm FABS : SIMDTwoVectorFP<0, 1, 0b01111, "fabs", fabs>;
>
> -defm FCMEQ : SIMDFPCmpTwoVector<0, 1, 0b01101, "fcmeq", ARM64fcmeqz>;
> -defm FCMGE : SIMDFPCmpTwoVector<1, 1, 0b01100, "fcmge", ARM64fcmgez>;
> -defm FCMGT : SIMDFPCmpTwoVector<0, 1, 0b01100, "fcmgt", ARM64fcmgtz>;
> -defm FCMLE : SIMDFPCmpTwoVector<1, 1, 0b01101, "fcmle", ARM64fcmlez>;
> -defm FCMLT : SIMDFPCmpTwoVector<0, 1, 0b01110, "fcmlt", ARM64fcmltz>;
> -defm FCVTAS : SIMDTwoVectorFPToInt<0,0,0b11100, "fcvtas",int_arm64_neon_fcvtas>;
> -defm FCVTAU : SIMDTwoVectorFPToInt<1,0,0b11100, "fcvtau",int_arm64_neon_fcvtau>;
> +defm FCMEQ : SIMDFPCmpTwoVector<0, 1, 0b01101, "fcmeq", AArch64fcmeqz>;
> +defm FCMGE : SIMDFPCmpTwoVector<1, 1, 0b01100, "fcmge", AArch64fcmgez>;
> +defm FCMGT : SIMDFPCmpTwoVector<0, 1, 0b01100, "fcmgt", AArch64fcmgtz>;
> +defm FCMLE : SIMDFPCmpTwoVector<1, 1, 0b01101, "fcmle", AArch64fcmlez>;
> +defm FCMLT : SIMDFPCmpTwoVector<0, 1, 0b01110, "fcmlt", AArch64fcmltz>;
> +defm FCVTAS : SIMDTwoVectorFPToInt<0,0,0b11100, "fcvtas",int_aarch64_neon_fcvtas>;
> +defm FCVTAU : SIMDTwoVectorFPToInt<1,0,0b11100, "fcvtau",int_aarch64_neon_fcvtau>;
> defm FCVTL : SIMDFPWidenTwoVector<0, 0, 0b10111, "fcvtl">;
> -def : Pat<(v4f32 (int_arm64_neon_vcvthf2fp (v4i16 V64:$Rn))),
> +def : Pat<(v4f32 (int_aarch64_neon_vcvthf2fp (v4i16 V64:$Rn))),
> (FCVTLv4i16 V64:$Rn)>;
> -def : Pat<(v4f32 (int_arm64_neon_vcvthf2fp (extract_subvector (v8i16 V128:$Rn),
> +def : Pat<(v4f32 (int_aarch64_neon_vcvthf2fp (extract_subvector (v8i16 V128:$Rn),
> (i64 4)))),
> (FCVTLv8i16 V128:$Rn)>;
> def : Pat<(v2f64 (fextend (v2f32 V64:$Rn))), (FCVTLv2i32 V64:$Rn)>;
> @@ -2404,41 +2406,41 @@ def : Pat<(v2f64 (fextend (v2f32 (extrac
> (i64 2))))),
> (FCVTLv4i32 V128:$Rn)>;
>
> -defm FCVTMS : SIMDTwoVectorFPToInt<0,0,0b11011, "fcvtms",int_arm64_neon_fcvtms>;
> -defm FCVTMU : SIMDTwoVectorFPToInt<1,0,0b11011, "fcvtmu",int_arm64_neon_fcvtmu>;
> -defm FCVTNS : SIMDTwoVectorFPToInt<0,0,0b11010, "fcvtns",int_arm64_neon_fcvtns>;
> -defm FCVTNU : SIMDTwoVectorFPToInt<1,0,0b11010, "fcvtnu",int_arm64_neon_fcvtnu>;
> +defm FCVTMS : SIMDTwoVectorFPToInt<0,0,0b11011, "fcvtms",int_aarch64_neon_fcvtms>;
> +defm FCVTMU : SIMDTwoVectorFPToInt<1,0,0b11011, "fcvtmu",int_aarch64_neon_fcvtmu>;
> +defm FCVTNS : SIMDTwoVectorFPToInt<0,0,0b11010, "fcvtns",int_aarch64_neon_fcvtns>;
> +defm FCVTNU : SIMDTwoVectorFPToInt<1,0,0b11010, "fcvtnu",int_aarch64_neon_fcvtnu>;
> defm FCVTN : SIMDFPNarrowTwoVector<0, 0, 0b10110, "fcvtn">;
> -def : Pat<(v4i16 (int_arm64_neon_vcvtfp2hf (v4f32 V128:$Rn))),
> +def : Pat<(v4i16 (int_aarch64_neon_vcvtfp2hf (v4f32 V128:$Rn))),
> (FCVTNv4i16 V128:$Rn)>;
> def : Pat<(concat_vectors V64:$Rd,
> - (v4i16 (int_arm64_neon_vcvtfp2hf (v4f32 V128:$Rn)))),
> + (v4i16 (int_aarch64_neon_vcvtfp2hf (v4f32 V128:$Rn)))),
> (FCVTNv8i16 (INSERT_SUBREG (IMPLICIT_DEF), V64:$Rd, dsub), V128:$Rn)>;
> def : Pat<(v2f32 (fround (v2f64 V128:$Rn))), (FCVTNv2i32 V128:$Rn)>;
> def : Pat<(concat_vectors V64:$Rd, (v2f32 (fround (v2f64 V128:$Rn)))),
> (FCVTNv4i32 (INSERT_SUBREG (IMPLICIT_DEF), V64:$Rd, dsub), V128:$Rn)>;
> -defm FCVTPS : SIMDTwoVectorFPToInt<0,1,0b11010, "fcvtps",int_arm64_neon_fcvtps>;
> -defm FCVTPU : SIMDTwoVectorFPToInt<1,1,0b11010, "fcvtpu",int_arm64_neon_fcvtpu>;
> +defm FCVTPS : SIMDTwoVectorFPToInt<0,1,0b11010, "fcvtps",int_aarch64_neon_fcvtps>;
> +defm FCVTPU : SIMDTwoVectorFPToInt<1,1,0b11010, "fcvtpu",int_aarch64_neon_fcvtpu>;
> defm FCVTXN : SIMDFPInexactCvtTwoVector<1, 0, 0b10110, "fcvtxn",
> - int_arm64_neon_fcvtxn>;
> + int_aarch64_neon_fcvtxn>;
> defm FCVTZS : SIMDTwoVectorFPToInt<0, 1, 0b11011, "fcvtzs", fp_to_sint>;
> defm FCVTZU : SIMDTwoVectorFPToInt<1, 1, 0b11011, "fcvtzu", fp_to_uint>;
> let isCodeGenOnly = 1 in {
> defm FCVTZS_Int : SIMDTwoVectorFPToInt<0, 1, 0b11011, "fcvtzs",
> - int_arm64_neon_fcvtzs>;
> + int_aarch64_neon_fcvtzs>;
> defm FCVTZU_Int : SIMDTwoVectorFPToInt<1, 1, 0b11011, "fcvtzu",
> - int_arm64_neon_fcvtzu>;
> + int_aarch64_neon_fcvtzu>;
> }
> defm FNEG : SIMDTwoVectorFP<1, 1, 0b01111, "fneg", fneg>;
> -defm FRECPE : SIMDTwoVectorFP<0, 1, 0b11101, "frecpe", int_arm64_neon_frecpe>;
> +defm FRECPE : SIMDTwoVectorFP<0, 1, 0b11101, "frecpe", int_aarch64_neon_frecpe>;
> defm FRINTA : SIMDTwoVectorFP<1, 0, 0b11000, "frinta", frnd>;
> defm FRINTI : SIMDTwoVectorFP<1, 1, 0b11001, "frinti", fnearbyint>;
> defm FRINTM : SIMDTwoVectorFP<0, 0, 0b11001, "frintm", ffloor>;
> -defm FRINTN : SIMDTwoVectorFP<0, 0, 0b11000, "frintn", int_arm64_neon_frintn>;
> +defm FRINTN : SIMDTwoVectorFP<0, 0, 0b11000, "frintn", int_aarch64_neon_frintn>;
> defm FRINTP : SIMDTwoVectorFP<0, 1, 0b11000, "frintp", fceil>;
> defm FRINTX : SIMDTwoVectorFP<1, 0, 0b11001, "frintx", frint>;
> defm FRINTZ : SIMDTwoVectorFP<0, 1, 0b11001, "frintz", ftrunc>;
> -defm FRSQRTE: SIMDTwoVectorFP<1, 1, 0b11101, "frsqrte", int_arm64_neon_frsqrte>;
> +defm FRSQRTE: SIMDTwoVectorFP<1, 1, 0b11101, "frsqrte", int_aarch64_neon_frsqrte>;
> defm FSQRT : SIMDTwoVectorFP<1, 1, 0b11111, "fsqrt", fsqrt>;
> defm NEG : SIMDTwoVectorBHSD<1, 0b01011, "neg",
> UnOpFrag<(sub immAllZerosV, node:$LHS)> >;
> @@ -2449,22 +2451,22 @@ def : InstAlias<"mvn{ $Vd.8b, $Vn.8b|.8b
> def : InstAlias<"mvn{ $Vd.16b, $Vn.16b|.16b $Vd, $Vn}",
> (NOTv16i8 V128:$Vd, V128:$Vn)>;
>
> -def : Pat<(ARM64neg (v8i8 V64:$Rn)), (NEGv8i8 V64:$Rn)>;
> -def : Pat<(ARM64neg (v16i8 V128:$Rn)), (NEGv16i8 V128:$Rn)>;
> -def : Pat<(ARM64neg (v4i16 V64:$Rn)), (NEGv4i16 V64:$Rn)>;
> -def : Pat<(ARM64neg (v8i16 V128:$Rn)), (NEGv8i16 V128:$Rn)>;
> -def : Pat<(ARM64neg (v2i32 V64:$Rn)), (NEGv2i32 V64:$Rn)>;
> -def : Pat<(ARM64neg (v4i32 V128:$Rn)), (NEGv4i32 V128:$Rn)>;
> -def : Pat<(ARM64neg (v2i64 V128:$Rn)), (NEGv2i64 V128:$Rn)>;
> -
> -def : Pat<(ARM64not (v8i8 V64:$Rn)), (NOTv8i8 V64:$Rn)>;
> -def : Pat<(ARM64not (v16i8 V128:$Rn)), (NOTv16i8 V128:$Rn)>;
> -def : Pat<(ARM64not (v4i16 V64:$Rn)), (NOTv8i8 V64:$Rn)>;
> -def : Pat<(ARM64not (v8i16 V128:$Rn)), (NOTv16i8 V128:$Rn)>;
> -def : Pat<(ARM64not (v2i32 V64:$Rn)), (NOTv8i8 V64:$Rn)>;
> -def : Pat<(ARM64not (v1i64 V64:$Rn)), (NOTv8i8 V64:$Rn)>;
> -def : Pat<(ARM64not (v4i32 V128:$Rn)), (NOTv16i8 V128:$Rn)>;
> -def : Pat<(ARM64not (v2i64 V128:$Rn)), (NOTv16i8 V128:$Rn)>;
> +def : Pat<(AArch64neg (v8i8 V64:$Rn)), (NEGv8i8 V64:$Rn)>;
> +def : Pat<(AArch64neg (v16i8 V128:$Rn)), (NEGv16i8 V128:$Rn)>;
> +def : Pat<(AArch64neg (v4i16 V64:$Rn)), (NEGv4i16 V64:$Rn)>;
> +def : Pat<(AArch64neg (v8i16 V128:$Rn)), (NEGv8i16 V128:$Rn)>;
> +def : Pat<(AArch64neg (v2i32 V64:$Rn)), (NEGv2i32 V64:$Rn)>;
> +def : Pat<(AArch64neg (v4i32 V128:$Rn)), (NEGv4i32 V128:$Rn)>;
> +def : Pat<(AArch64neg (v2i64 V128:$Rn)), (NEGv2i64 V128:$Rn)>;
> +
> +def : Pat<(AArch64not (v8i8 V64:$Rn)), (NOTv8i8 V64:$Rn)>;
> +def : Pat<(AArch64not (v16i8 V128:$Rn)), (NOTv16i8 V128:$Rn)>;
> +def : Pat<(AArch64not (v4i16 V64:$Rn)), (NOTv8i8 V64:$Rn)>;
> +def : Pat<(AArch64not (v8i16 V128:$Rn)), (NOTv16i8 V128:$Rn)>;
> +def : Pat<(AArch64not (v2i32 V64:$Rn)), (NOTv8i8 V64:$Rn)>;
> +def : Pat<(AArch64not (v1i64 V64:$Rn)), (NOTv8i8 V64:$Rn)>;
> +def : Pat<(AArch64not (v4i32 V128:$Rn)), (NOTv16i8 V128:$Rn)>;
> +def : Pat<(AArch64not (v2i64 V128:$Rn)), (NOTv16i8 V128:$Rn)>;
>
> def : Pat<(vnot (v4i16 V64:$Rn)), (NOTv8i8 V64:$Rn)>;
> def : Pat<(vnot (v8i16 V128:$Rn)), (NOTv16i8 V128:$Rn)>;
> @@ -2472,49 +2474,49 @@ def : Pat<(vnot (v2i32 V64:$Rn)), (NOTv
> def : Pat<(vnot (v4i32 V128:$Rn)), (NOTv16i8 V128:$Rn)>;
> def : Pat<(vnot (v2i64 V128:$Rn)), (NOTv16i8 V128:$Rn)>;
>
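One observation on the block above, not something the patch touches: NOT is
lane-size agnostic, which is why every 64-bit vector type reuses the byte-wise
NOTv8i8 and every 128-bit type reuses NOTv16i8. In sketch form:

    // Bitwise ops ignore lane boundaries, so
    //   (AArch64not (v4i16 V64:$Rn))
    // can select the same instruction as
    //   (AArch64not (v8i8 V64:$Rn))   -->   (NOTv8i8 V64:$Rn)
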
> -defm RBIT : SIMDTwoVectorB<1, 0b01, 0b00101, "rbit", int_arm64_neon_rbit>;
> -defm REV16 : SIMDTwoVectorB<0, 0b00, 0b00001, "rev16", ARM64rev16>;
> -defm REV32 : SIMDTwoVectorBH<1, 0b00000, "rev32", ARM64rev32>;
> -defm REV64 : SIMDTwoVectorBHS<0, 0b00000, "rev64", ARM64rev64>;
> +defm RBIT : SIMDTwoVectorB<1, 0b01, 0b00101, "rbit", int_aarch64_neon_rbit>;
> +defm REV16 : SIMDTwoVectorB<0, 0b00, 0b00001, "rev16", AArch64rev16>;
> +defm REV32 : SIMDTwoVectorBH<1, 0b00000, "rev32", AArch64rev32>;
> +defm REV64 : SIMDTwoVectorBHS<0, 0b00000, "rev64", AArch64rev64>;
> defm SADALP : SIMDLongTwoVectorTied<0, 0b00110, "sadalp",
> - BinOpFrag<(add node:$LHS, (int_arm64_neon_saddlp node:$RHS))> >;
> -defm SADDLP : SIMDLongTwoVector<0, 0b00010, "saddlp", int_arm64_neon_saddlp>;
> + BinOpFrag<(add node:$LHS, (int_aarch64_neon_saddlp node:$RHS))> >;
> +defm SADDLP : SIMDLongTwoVector<0, 0b00010, "saddlp", int_aarch64_neon_saddlp>;
> defm SCVTF : SIMDTwoVectorIntToFP<0, 0, 0b11101, "scvtf", sint_to_fp>;
> defm SHLL : SIMDVectorLShiftLongBySizeBHS;
> -defm SQABS : SIMDTwoVectorBHSD<0, 0b00111, "sqabs", int_arm64_neon_sqabs>;
> -defm SQNEG : SIMDTwoVectorBHSD<1, 0b00111, "sqneg", int_arm64_neon_sqneg>;
> -defm SQXTN : SIMDMixedTwoVector<0, 0b10100, "sqxtn", int_arm64_neon_sqxtn>;
> -defm SQXTUN : SIMDMixedTwoVector<1, 0b10010, "sqxtun", int_arm64_neon_sqxtun>;
> -defm SUQADD : SIMDTwoVectorBHSDTied<0, 0b00011, "suqadd",int_arm64_neon_suqadd>;
> +defm SQABS : SIMDTwoVectorBHSD<0, 0b00111, "sqabs", int_aarch64_neon_sqabs>;
> +defm SQNEG : SIMDTwoVectorBHSD<1, 0b00111, "sqneg", int_aarch64_neon_sqneg>;
> +defm SQXTN : SIMDMixedTwoVector<0, 0b10100, "sqxtn", int_aarch64_neon_sqxtn>;
> +defm SQXTUN : SIMDMixedTwoVector<1, 0b10010, "sqxtun", int_aarch64_neon_sqxtun>;
> +defm SUQADD : SIMDTwoVectorBHSDTied<0, 0b00011, "suqadd",int_aarch64_neon_suqadd>;
> defm UADALP : SIMDLongTwoVectorTied<1, 0b00110, "uadalp",
> - BinOpFrag<(add node:$LHS, (int_arm64_neon_uaddlp node:$RHS))> >;
> + BinOpFrag<(add node:$LHS, (int_aarch64_neon_uaddlp node:$RHS))> >;
> defm UADDLP : SIMDLongTwoVector<1, 0b00010, "uaddlp",
> - int_arm64_neon_uaddlp>;
> + int_aarch64_neon_uaddlp>;
> defm UCVTF : SIMDTwoVectorIntToFP<1, 0, 0b11101, "ucvtf", uint_to_fp>;
> -defm UQXTN : SIMDMixedTwoVector<1, 0b10100, "uqxtn", int_arm64_neon_uqxtn>;
> -defm URECPE : SIMDTwoVectorS<0, 1, 0b11100, "urecpe", int_arm64_neon_urecpe>;
> -defm URSQRTE: SIMDTwoVectorS<1, 1, 0b11100, "ursqrte", int_arm64_neon_ursqrte>;
> -defm USQADD : SIMDTwoVectorBHSDTied<1, 0b00011, "usqadd",int_arm64_neon_usqadd>;
> +defm UQXTN : SIMDMixedTwoVector<1, 0b10100, "uqxtn", int_aarch64_neon_uqxtn>;
> +defm URECPE : SIMDTwoVectorS<0, 1, 0b11100, "urecpe", int_aarch64_neon_urecpe>;
> +defm URSQRTE: SIMDTwoVectorS<1, 1, 0b11100, "ursqrte", int_aarch64_neon_ursqrte>;
> +defm USQADD : SIMDTwoVectorBHSDTied<1, 0b00011, "usqadd",int_aarch64_neon_usqadd>;
> defm XTN : SIMDMixedTwoVector<0, 0b10010, "xtn", trunc>;
>
> -def : Pat<(v2f32 (ARM64rev64 V64:$Rn)), (REV64v2i32 V64:$Rn)>;
> -def : Pat<(v4f32 (ARM64rev64 V128:$Rn)), (REV64v4i32 V128:$Rn)>;
> +def : Pat<(v2f32 (AArch64rev64 V64:$Rn)), (REV64v2i32 V64:$Rn)>;
> +def : Pat<(v4f32 (AArch64rev64 V128:$Rn)), (REV64v4i32 V128:$Rn)>;
>
> // Patterns for vector long shift (by element width). These need to match all
> // three of zext, sext and anyext so it's easier to pull the patterns out of the
> // definition.
> multiclass SIMDVectorLShiftLongBySizeBHSPats<SDPatternOperator ext> {
> - def : Pat<(ARM64vshl (v8i16 (ext (v8i8 V64:$Rn))), (i32 8)),
> + def : Pat<(AArch64vshl (v8i16 (ext (v8i8 V64:$Rn))), (i32 8)),
> (SHLLv8i8 V64:$Rn)>;
> - def : Pat<(ARM64vshl (v8i16 (ext (extract_high_v16i8 V128:$Rn))), (i32 8)),
> + def : Pat<(AArch64vshl (v8i16 (ext (extract_high_v16i8 V128:$Rn))), (i32 8)),
> (SHLLv16i8 V128:$Rn)>;
> - def : Pat<(ARM64vshl (v4i32 (ext (v4i16 V64:$Rn))), (i32 16)),
> + def : Pat<(AArch64vshl (v4i32 (ext (v4i16 V64:$Rn))), (i32 16)),
> (SHLLv4i16 V64:$Rn)>;
> - def : Pat<(ARM64vshl (v4i32 (ext (extract_high_v8i16 V128:$Rn))), (i32 16)),
> + def : Pat<(AArch64vshl (v4i32 (ext (extract_high_v8i16 V128:$Rn))), (i32 16)),
> (SHLLv8i16 V128:$Rn)>;
> - def : Pat<(ARM64vshl (v2i64 (ext (v2i32 V64:$Rn))), (i32 32)),
> + def : Pat<(AArch64vshl (v2i64 (ext (v2i32 V64:$Rn))), (i32 32)),
> (SHLLv2i32 V64:$Rn)>;
> - def : Pat<(ARM64vshl (v2i64 (ext (extract_high_v4i32 V128:$Rn))), (i32 32)),
> + def : Pat<(AArch64vshl (v2i64 (ext (extract_high_v4i32 V128:$Rn))), (i32 32)),
> (SHLLv4i32 V128:$Rn)>;
> }
>
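The multiclass above is only half the story; it gets instantiated once per
extension kind, per the comment about zext, sext and anyext. The actual defm
lines sit just below this hunk, so the exact spelling here is an assumption:

    defm : SIMDVectorLShiftLongBySizeBHSPats<anyext>;
    defm : SIMDVectorLShiftLongBySizeBHSPats<zext>;
    defm : SIMDVectorLShiftLongBySizeBHSPats<sext>;
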
> @@ -2527,30 +2529,30 @@ defm : SIMDVectorLShiftLongBySizeBHSPats
> //===----------------------------------------------------------------------===//
>
> defm ADD : SIMDThreeSameVector<0, 0b10000, "add", add>;
> -defm ADDP : SIMDThreeSameVector<0, 0b10111, "addp", int_arm64_neon_addp>;
> -defm CMEQ : SIMDThreeSameVector<1, 0b10001, "cmeq", ARM64cmeq>;
> -defm CMGE : SIMDThreeSameVector<0, 0b00111, "cmge", ARM64cmge>;
> -defm CMGT : SIMDThreeSameVector<0, 0b00110, "cmgt", ARM64cmgt>;
> -defm CMHI : SIMDThreeSameVector<1, 0b00110, "cmhi", ARM64cmhi>;
> -defm CMHS : SIMDThreeSameVector<1, 0b00111, "cmhs", ARM64cmhs>;
> -defm CMTST : SIMDThreeSameVector<0, 0b10001, "cmtst", ARM64cmtst>;
> -defm FABD : SIMDThreeSameVectorFP<1,1,0b11010,"fabd", int_arm64_neon_fabd>;
> -defm FACGE : SIMDThreeSameVectorFPCmp<1,0,0b11101,"facge",int_arm64_neon_facge>;
> -defm FACGT : SIMDThreeSameVectorFPCmp<1,1,0b11101,"facgt",int_arm64_neon_facgt>;
> -defm FADDP : SIMDThreeSameVectorFP<1,0,0b11010,"faddp",int_arm64_neon_addp>;
> +defm ADDP : SIMDThreeSameVector<0, 0b10111, "addp", int_aarch64_neon_addp>;
> +defm CMEQ : SIMDThreeSameVector<1, 0b10001, "cmeq", AArch64cmeq>;
> +defm CMGE : SIMDThreeSameVector<0, 0b00111, "cmge", AArch64cmge>;
> +defm CMGT : SIMDThreeSameVector<0, 0b00110, "cmgt", AArch64cmgt>;
> +defm CMHI : SIMDThreeSameVector<1, 0b00110, "cmhi", AArch64cmhi>;
> +defm CMHS : SIMDThreeSameVector<1, 0b00111, "cmhs", AArch64cmhs>;
> +defm CMTST : SIMDThreeSameVector<0, 0b10001, "cmtst", AArch64cmtst>;
> +defm FABD : SIMDThreeSameVectorFP<1,1,0b11010,"fabd", int_aarch64_neon_fabd>;
> +defm FACGE : SIMDThreeSameVectorFPCmp<1,0,0b11101,"facge",int_aarch64_neon_facge>;
> +defm FACGT : SIMDThreeSameVectorFPCmp<1,1,0b11101,"facgt",int_aarch64_neon_facgt>;
> +defm FADDP : SIMDThreeSameVectorFP<1,0,0b11010,"faddp",int_aarch64_neon_addp>;
> defm FADD : SIMDThreeSameVectorFP<0,0,0b11010,"fadd", fadd>;
> -defm FCMEQ : SIMDThreeSameVectorFPCmp<0, 0, 0b11100, "fcmeq", ARM64fcmeq>;
> -defm FCMGE : SIMDThreeSameVectorFPCmp<1, 0, 0b11100, "fcmge", ARM64fcmge>;
> -defm FCMGT : SIMDThreeSameVectorFPCmp<1, 1, 0b11100, "fcmgt", ARM64fcmgt>;
> +defm FCMEQ : SIMDThreeSameVectorFPCmp<0, 0, 0b11100, "fcmeq", AArch64fcmeq>;
> +defm FCMGE : SIMDThreeSameVectorFPCmp<1, 0, 0b11100, "fcmge", AArch64fcmge>;
> +defm FCMGT : SIMDThreeSameVectorFPCmp<1, 1, 0b11100, "fcmgt", AArch64fcmgt>;
> defm FDIV : SIMDThreeSameVectorFP<1,0,0b11111,"fdiv", fdiv>;
> -defm FMAXNMP : SIMDThreeSameVectorFP<1,0,0b11000,"fmaxnmp", int_arm64_neon_fmaxnmp>;
> -defm FMAXNM : SIMDThreeSameVectorFP<0,0,0b11000,"fmaxnm", int_arm64_neon_fmaxnm>;
> -defm FMAXP : SIMDThreeSameVectorFP<1,0,0b11110,"fmaxp", int_arm64_neon_fmaxp>;
> -defm FMAX : SIMDThreeSameVectorFP<0,0,0b11110,"fmax", ARM64fmax>;
> -defm FMINNMP : SIMDThreeSameVectorFP<1,1,0b11000,"fminnmp", int_arm64_neon_fminnmp>;
> -defm FMINNM : SIMDThreeSameVectorFP<0,1,0b11000,"fminnm", int_arm64_neon_fminnm>;
> -defm FMINP : SIMDThreeSameVectorFP<1,1,0b11110,"fminp", int_arm64_neon_fminp>;
> -defm FMIN : SIMDThreeSameVectorFP<0,1,0b11110,"fmin", ARM64fmin>;
> +defm FMAXNMP : SIMDThreeSameVectorFP<1,0,0b11000,"fmaxnmp", int_aarch64_neon_fmaxnmp>;
> +defm FMAXNM : SIMDThreeSameVectorFP<0,0,0b11000,"fmaxnm", int_aarch64_neon_fmaxnm>;
> +defm FMAXP : SIMDThreeSameVectorFP<1,0,0b11110,"fmaxp", int_aarch64_neon_fmaxp>;
> +defm FMAX : SIMDThreeSameVectorFP<0,0,0b11110,"fmax", AArch64fmax>;
> +defm FMINNMP : SIMDThreeSameVectorFP<1,1,0b11000,"fminnmp", int_aarch64_neon_fminnmp>;
> +defm FMINNM : SIMDThreeSameVectorFP<0,1,0b11000,"fminnm", int_aarch64_neon_fminnm>;
> +defm FMINP : SIMDThreeSameVectorFP<1,1,0b11110,"fminp", int_aarch64_neon_fminp>;
> +defm FMIN : SIMDThreeSameVectorFP<0,1,0b11110,"fmin", AArch64fmin>;
>
> // NOTE: The operands of the PatFrag are reordered on FMLA/FMLS because the
> // instruction expects the addend first, while the fma intrinsic puts it last.
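To make that NOTE concrete: the tied multiclasses take a TriOpFrag whose $LHS
slot is the accumulator, so the fma operands are permuted inside the fragment.
A rough sketch of the shape only -- the multiclass name is guessed by analogy
with the SIMDThreeSameVectorFP / *Tied names above, and the encoding bits are
placeholders:

    defm FMLA : SIMDThreeSameVectorFPTied<0, 0, 0b11001, "fmla",
                  TriOpFrag<(fma node:$RHS, node:$MHS, node:$LHS)> >;
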
> @@ -2570,58 +2572,58 @@ def : Pat<(v4f32 (fma (fneg V128:$Rn), V
> def : Pat<(v2f64 (fma (fneg V128:$Rn), V128:$Rm, V128:$Rd)),
> (FMLSv2f64 V128:$Rd, V128:$Rn, V128:$Rm)>;
>
> -defm FMULX : SIMDThreeSameVectorFP<0,0,0b11011,"fmulx", int_arm64_neon_fmulx>;
> +defm FMULX : SIMDThreeSameVectorFP<0,0,0b11011,"fmulx", int_aarch64_neon_fmulx>;
> defm FMUL : SIMDThreeSameVectorFP<1,0,0b11011,"fmul", fmul>;
> -defm FRECPS : SIMDThreeSameVectorFP<0,0,0b11111,"frecps", int_arm64_neon_frecps>;
> -defm FRSQRTS : SIMDThreeSameVectorFP<0,1,0b11111,"frsqrts", int_arm64_neon_frsqrts>;
> +defm FRECPS : SIMDThreeSameVectorFP<0,0,0b11111,"frecps", int_aarch64_neon_frecps>;
> +defm FRSQRTS : SIMDThreeSameVectorFP<0,1,0b11111,"frsqrts", int_aarch64_neon_frsqrts>;
> defm FSUB : SIMDThreeSameVectorFP<0,1,0b11010,"fsub", fsub>;
> defm MLA : SIMDThreeSameVectorBHSTied<0, 0b10010, "mla",
> TriOpFrag<(add node:$LHS, (mul node:$MHS, node:$RHS))> >;
> defm MLS : SIMDThreeSameVectorBHSTied<1, 0b10010, "mls",
> TriOpFrag<(sub node:$LHS, (mul node:$MHS, node:$RHS))> >;
> defm MUL : SIMDThreeSameVectorBHS<0, 0b10011, "mul", mul>;
> -defm PMUL : SIMDThreeSameVectorB<1, 0b10011, "pmul", int_arm64_neon_pmul>;
> +defm PMUL : SIMDThreeSameVectorB<1, 0b10011, "pmul", int_aarch64_neon_pmul>;
> defm SABA : SIMDThreeSameVectorBHSTied<0, 0b01111, "saba",
> - TriOpFrag<(add node:$LHS, (int_arm64_neon_sabd node:$MHS, node:$RHS))> >;
> -defm SABD : SIMDThreeSameVectorBHS<0,0b01110,"sabd", int_arm64_neon_sabd>;
> -defm SHADD : SIMDThreeSameVectorBHS<0,0b00000,"shadd", int_arm64_neon_shadd>;
> -defm SHSUB : SIMDThreeSameVectorBHS<0,0b00100,"shsub", int_arm64_neon_shsub>;
> -defm SMAXP : SIMDThreeSameVectorBHS<0,0b10100,"smaxp", int_arm64_neon_smaxp>;
> -defm SMAX : SIMDThreeSameVectorBHS<0,0b01100,"smax", int_arm64_neon_smax>;
> -defm SMINP : SIMDThreeSameVectorBHS<0,0b10101,"sminp", int_arm64_neon_sminp>;
> -defm SMIN : SIMDThreeSameVectorBHS<0,0b01101,"smin", int_arm64_neon_smin>;
> -defm SQADD : SIMDThreeSameVector<0,0b00001,"sqadd", int_arm64_neon_sqadd>;
> -defm SQDMULH : SIMDThreeSameVectorHS<0,0b10110,"sqdmulh",int_arm64_neon_sqdmulh>;
> -defm SQRDMULH : SIMDThreeSameVectorHS<1,0b10110,"sqrdmulh",int_arm64_neon_sqrdmulh>;
> -defm SQRSHL : SIMDThreeSameVector<0,0b01011,"sqrshl", int_arm64_neon_sqrshl>;
> -defm SQSHL : SIMDThreeSameVector<0,0b01001,"sqshl", int_arm64_neon_sqshl>;
> -defm SQSUB : SIMDThreeSameVector<0,0b00101,"sqsub", int_arm64_neon_sqsub>;
> -defm SRHADD : SIMDThreeSameVectorBHS<0,0b00010,"srhadd",int_arm64_neon_srhadd>;
> -defm SRSHL : SIMDThreeSameVector<0,0b01010,"srshl", int_arm64_neon_srshl>;
> -defm SSHL : SIMDThreeSameVector<0,0b01000,"sshl", int_arm64_neon_sshl>;
> + TriOpFrag<(add node:$LHS, (int_aarch64_neon_sabd node:$MHS, node:$RHS))> >;
> +defm SABD : SIMDThreeSameVectorBHS<0,0b01110,"sabd", int_aarch64_neon_sabd>;
> +defm SHADD : SIMDThreeSameVectorBHS<0,0b00000,"shadd", int_aarch64_neon_shadd>;
> +defm SHSUB : SIMDThreeSameVectorBHS<0,0b00100,"shsub", int_aarch64_neon_shsub>;
> +defm SMAXP : SIMDThreeSameVectorBHS<0,0b10100,"smaxp", int_aarch64_neon_smaxp>;
> +defm SMAX : SIMDThreeSameVectorBHS<0,0b01100,"smax", int_aarch64_neon_smax>;
> +defm SMINP : SIMDThreeSameVectorBHS<0,0b10101,"sminp", int_aarch64_neon_sminp>;
> +defm SMIN : SIMDThreeSameVectorBHS<0,0b01101,"smin", int_aarch64_neon_smin>;
> +defm SQADD : SIMDThreeSameVector<0,0b00001,"sqadd", int_aarch64_neon_sqadd>;
> +defm SQDMULH : SIMDThreeSameVectorHS<0,0b10110,"sqdmulh",int_aarch64_neon_sqdmulh>;
> +defm SQRDMULH : SIMDThreeSameVectorHS<1,0b10110,"sqrdmulh",int_aarch64_neon_sqrdmulh>;
> +defm SQRSHL : SIMDThreeSameVector<0,0b01011,"sqrshl", int_aarch64_neon_sqrshl>;
> +defm SQSHL : SIMDThreeSameVector<0,0b01001,"sqshl", int_aarch64_neon_sqshl>;
> +defm SQSUB : SIMDThreeSameVector<0,0b00101,"sqsub", int_aarch64_neon_sqsub>;
> +defm SRHADD : SIMDThreeSameVectorBHS<0,0b00010,"srhadd",int_aarch64_neon_srhadd>;
> +defm SRSHL : SIMDThreeSameVector<0,0b01010,"srshl", int_aarch64_neon_srshl>;
> +defm SSHL : SIMDThreeSameVector<0,0b01000,"sshl", int_aarch64_neon_sshl>;
> defm SUB : SIMDThreeSameVector<1,0b10000,"sub", sub>;
> defm UABA : SIMDThreeSameVectorBHSTied<1, 0b01111, "uaba",
> - TriOpFrag<(add node:$LHS, (int_arm64_neon_uabd node:$MHS, node:$RHS))> >;
> -defm UABD : SIMDThreeSameVectorBHS<1,0b01110,"uabd", int_arm64_neon_uabd>;
> -defm UHADD : SIMDThreeSameVectorBHS<1,0b00000,"uhadd", int_arm64_neon_uhadd>;
> -defm UHSUB : SIMDThreeSameVectorBHS<1,0b00100,"uhsub", int_arm64_neon_uhsub>;
> -defm UMAXP : SIMDThreeSameVectorBHS<1,0b10100,"umaxp", int_arm64_neon_umaxp>;
> -defm UMAX : SIMDThreeSameVectorBHS<1,0b01100,"umax", int_arm64_neon_umax>;
> -defm UMINP : SIMDThreeSameVectorBHS<1,0b10101,"uminp", int_arm64_neon_uminp>;
> -defm UMIN : SIMDThreeSameVectorBHS<1,0b01101,"umin", int_arm64_neon_umin>;
> -defm UQADD : SIMDThreeSameVector<1,0b00001,"uqadd", int_arm64_neon_uqadd>;
> -defm UQRSHL : SIMDThreeSameVector<1,0b01011,"uqrshl", int_arm64_neon_uqrshl>;
> -defm UQSHL : SIMDThreeSameVector<1,0b01001,"uqshl", int_arm64_neon_uqshl>;
> -defm UQSUB : SIMDThreeSameVector<1,0b00101,"uqsub", int_arm64_neon_uqsub>;
> -defm URHADD : SIMDThreeSameVectorBHS<1,0b00010,"urhadd", int_arm64_neon_urhadd>;
> -defm URSHL : SIMDThreeSameVector<1,0b01010,"urshl", int_arm64_neon_urshl>;
> -defm USHL : SIMDThreeSameVector<1,0b01000,"ushl", int_arm64_neon_ushl>;
> + TriOpFrag<(add node:$LHS, (int_aarch64_neon_uabd node:$MHS, node:$RHS))> >;
> +defm UABD : SIMDThreeSameVectorBHS<1,0b01110,"uabd", int_aarch64_neon_uabd>;
> +defm UHADD : SIMDThreeSameVectorBHS<1,0b00000,"uhadd", int_aarch64_neon_uhadd>;
> +defm UHSUB : SIMDThreeSameVectorBHS<1,0b00100,"uhsub", int_aarch64_neon_uhsub>;
> +defm UMAXP : SIMDThreeSameVectorBHS<1,0b10100,"umaxp", int_aarch64_neon_umaxp>;
> +defm UMAX : SIMDThreeSameVectorBHS<1,0b01100,"umax", int_aarch64_neon_umax>;
> +defm UMINP : SIMDThreeSameVectorBHS<1,0b10101,"uminp", int_aarch64_neon_uminp>;
> +defm UMIN : SIMDThreeSameVectorBHS<1,0b01101,"umin", int_aarch64_neon_umin>;
> +defm UQADD : SIMDThreeSameVector<1,0b00001,"uqadd", int_aarch64_neon_uqadd>;
> +defm UQRSHL : SIMDThreeSameVector<1,0b01011,"uqrshl", int_aarch64_neon_uqrshl>;
> +defm UQSHL : SIMDThreeSameVector<1,0b01001,"uqshl", int_aarch64_neon_uqshl>;
> +defm UQSUB : SIMDThreeSameVector<1,0b00101,"uqsub", int_aarch64_neon_uqsub>;
> +defm URHADD : SIMDThreeSameVectorBHS<1,0b00010,"urhadd", int_aarch64_neon_urhadd>;
> +defm URSHL : SIMDThreeSameVector<1,0b01010,"urshl", int_aarch64_neon_urshl>;
> +defm USHL : SIMDThreeSameVector<1,0b01000,"ushl", int_aarch64_neon_ushl>;
>
> defm AND : SIMDLogicalThreeVector<0, 0b00, "and", and>;
> defm BIC : SIMDLogicalThreeVector<0, 0b01, "bic",
> BinOpFrag<(and node:$LHS, (vnot node:$RHS))> >;
> defm BIF : SIMDLogicalThreeVector<1, 0b11, "bif">;
> -defm BIT : SIMDLogicalThreeVectorTied<1, 0b10, "bit", ARM64bit>;
> +defm BIT : SIMDLogicalThreeVectorTied<1, 0b10, "bit", AArch64bit>;
> defm BSL : SIMDLogicalThreeVectorTied<1, 0b01, "bsl",
> TriOpFrag<(or (and node:$LHS, node:$MHS), (and (vnot node:$LHS), node:$RHS))>>;
> defm EOR : SIMDLogicalThreeVector<1, 0b00, "eor", xor>;
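
A quick aside on the defm changes above, since they all have the same shape: only the intrinsic prefix changes, so existing IR just swaps "arm64" for "aarch64" in the intrinsic name. A minimal sketch for smax, with the declaration assumed from the usual overloaded naming (the authoritative signatures live in IntrinsicsAArch64.td, not here):

    ; before the patch this was @llvm.arm64.neon.smax.v4i32
    declare <4 x i32> @llvm.aarch64.neon.smax.v4i32(<4 x i32>, <4 x i32>)

    define <4 x i32> @max4(<4 x i32> %a, <4 x i32> %b) {
      ; should still select the same "smax v0.4s, v0.4s, v1.4s"
      %m = call <4 x i32> @llvm.aarch64.neon.smax.v4i32(<4 x i32> %a, <4 x i32> %b)
      ret <4 x i32> %m
    }
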
> @@ -2629,22 +2631,22 @@ defm ORN : SIMDLogicalThreeVector<0, 0b1
> BinOpFrag<(or node:$LHS, (vnot node:$RHS))> >;
> defm ORR : SIMDLogicalThreeVector<0, 0b10, "orr", or>;
>
> -def : Pat<(ARM64bsl (v8i8 V64:$Rd), V64:$Rn, V64:$Rm),
> +def : Pat<(AArch64bsl (v8i8 V64:$Rd), V64:$Rn, V64:$Rm),
> (BSLv8i8 V64:$Rd, V64:$Rn, V64:$Rm)>;
> -def : Pat<(ARM64bsl (v4i16 V64:$Rd), V64:$Rn, V64:$Rm),
> +def : Pat<(AArch64bsl (v4i16 V64:$Rd), V64:$Rn, V64:$Rm),
> (BSLv8i8 V64:$Rd, V64:$Rn, V64:$Rm)>;
> -def : Pat<(ARM64bsl (v2i32 V64:$Rd), V64:$Rn, V64:$Rm),
> +def : Pat<(AArch64bsl (v2i32 V64:$Rd), V64:$Rn, V64:$Rm),
> (BSLv8i8 V64:$Rd, V64:$Rn, V64:$Rm)>;
> -def : Pat<(ARM64bsl (v1i64 V64:$Rd), V64:$Rn, V64:$Rm),
> +def : Pat<(AArch64bsl (v1i64 V64:$Rd), V64:$Rn, V64:$Rm),
> (BSLv8i8 V64:$Rd, V64:$Rn, V64:$Rm)>;
>
> -def : Pat<(ARM64bsl (v16i8 V128:$Rd), V128:$Rn, V128:$Rm),
> +def : Pat<(AArch64bsl (v16i8 V128:$Rd), V128:$Rn, V128:$Rm),
> (BSLv16i8 V128:$Rd, V128:$Rn, V128:$Rm)>;
> -def : Pat<(ARM64bsl (v8i16 V128:$Rd), V128:$Rn, V128:$Rm),
> +def : Pat<(AArch64bsl (v8i16 V128:$Rd), V128:$Rn, V128:$Rm),
> (BSLv16i8 V128:$Rd, V128:$Rn, V128:$Rm)>;
> -def : Pat<(ARM64bsl (v4i32 V128:$Rd), V128:$Rn, V128:$Rm),
> +def : Pat<(AArch64bsl (v4i32 V128:$Rd), V128:$Rn, V128:$Rm),
> (BSLv16i8 V128:$Rd, V128:$Rn, V128:$Rm)>;
> -def : Pat<(ARM64bsl (v2i64 V128:$Rd), V128:$Rn, V128:$Rm),
> +def : Pat<(AArch64bsl (v2i64 V128:$Rd), V128:$Rn, V128:$Rm),
> (BSLv16i8 V128:$Rd, V128:$Rn, V128:$Rm)>;
>
> def : InstAlias<"mov{\t$dst.16b, $src.16b|.16b\t$dst, $src}",
> @@ -2798,40 +2800,40 @@ def : InstAlias<"{faclt\t$dst.2d, $src1.
> //===----------------------------------------------------------------------===//
>
> defm ADD : SIMDThreeScalarD<0, 0b10000, "add", add>;
> -defm CMEQ : SIMDThreeScalarD<1, 0b10001, "cmeq", ARM64cmeq>;
> -defm CMGE : SIMDThreeScalarD<0, 0b00111, "cmge", ARM64cmge>;
> -defm CMGT : SIMDThreeScalarD<0, 0b00110, "cmgt", ARM64cmgt>;
> -defm CMHI : SIMDThreeScalarD<1, 0b00110, "cmhi", ARM64cmhi>;
> -defm CMHS : SIMDThreeScalarD<1, 0b00111, "cmhs", ARM64cmhs>;
> -defm CMTST : SIMDThreeScalarD<0, 0b10001, "cmtst", ARM64cmtst>;
> -defm FABD : SIMDThreeScalarSD<1, 1, 0b11010, "fabd", int_arm64_sisd_fabd>;
> -def : Pat<(v1f64 (int_arm64_neon_fabd (v1f64 FPR64:$Rn), (v1f64 FPR64:$Rm))),
> +defm CMEQ : SIMDThreeScalarD<1, 0b10001, "cmeq", AArch64cmeq>;
> +defm CMGE : SIMDThreeScalarD<0, 0b00111, "cmge", AArch64cmge>;
> +defm CMGT : SIMDThreeScalarD<0, 0b00110, "cmgt", AArch64cmgt>;
> +defm CMHI : SIMDThreeScalarD<1, 0b00110, "cmhi", AArch64cmhi>;
> +defm CMHS : SIMDThreeScalarD<1, 0b00111, "cmhs", AArch64cmhs>;
> +defm CMTST : SIMDThreeScalarD<0, 0b10001, "cmtst", AArch64cmtst>;
> +defm FABD : SIMDThreeScalarSD<1, 1, 0b11010, "fabd", int_aarch64_sisd_fabd>;
> +def : Pat<(v1f64 (int_aarch64_neon_fabd (v1f64 FPR64:$Rn), (v1f64 FPR64:$Rm))),
> (FABD64 FPR64:$Rn, FPR64:$Rm)>;
> defm FACGE : SIMDThreeScalarFPCmp<1, 0, 0b11101, "facge",
> - int_arm64_neon_facge>;
> + int_aarch64_neon_facge>;
> defm FACGT : SIMDThreeScalarFPCmp<1, 1, 0b11101, "facgt",
> - int_arm64_neon_facgt>;
> -defm FCMEQ : SIMDThreeScalarFPCmp<0, 0, 0b11100, "fcmeq", ARM64fcmeq>;
> -defm FCMGE : SIMDThreeScalarFPCmp<1, 0, 0b11100, "fcmge", ARM64fcmge>;
> -defm FCMGT : SIMDThreeScalarFPCmp<1, 1, 0b11100, "fcmgt", ARM64fcmgt>;
> -defm FMULX : SIMDThreeScalarSD<0, 0, 0b11011, "fmulx", int_arm64_neon_fmulx>;
> -defm FRECPS : SIMDThreeScalarSD<0, 0, 0b11111, "frecps", int_arm64_neon_frecps>;
> -defm FRSQRTS : SIMDThreeScalarSD<0, 1, 0b11111, "frsqrts", int_arm64_neon_frsqrts>;
> -defm SQADD : SIMDThreeScalarBHSD<0, 0b00001, "sqadd", int_arm64_neon_sqadd>;
> -defm SQDMULH : SIMDThreeScalarHS< 0, 0b10110, "sqdmulh", int_arm64_neon_sqdmulh>;
> -defm SQRDMULH : SIMDThreeScalarHS< 1, 0b10110, "sqrdmulh", int_arm64_neon_sqrdmulh>;
> -defm SQRSHL : SIMDThreeScalarBHSD<0, 0b01011, "sqrshl",int_arm64_neon_sqrshl>;
> -defm SQSHL : SIMDThreeScalarBHSD<0, 0b01001, "sqshl", int_arm64_neon_sqshl>;
> -defm SQSUB : SIMDThreeScalarBHSD<0, 0b00101, "sqsub", int_arm64_neon_sqsub>;
> -defm SRSHL : SIMDThreeScalarD< 0, 0b01010, "srshl", int_arm64_neon_srshl>;
> -defm SSHL : SIMDThreeScalarD< 0, 0b01000, "sshl", int_arm64_neon_sshl>;
> + int_aarch64_neon_facgt>;
> +defm FCMEQ : SIMDThreeScalarFPCmp<0, 0, 0b11100, "fcmeq", AArch64fcmeq>;
> +defm FCMGE : SIMDThreeScalarFPCmp<1, 0, 0b11100, "fcmge", AArch64fcmge>;
> +defm FCMGT : SIMDThreeScalarFPCmp<1, 1, 0b11100, "fcmgt", AArch64fcmgt>;
> +defm FMULX : SIMDThreeScalarSD<0, 0, 0b11011, "fmulx", int_aarch64_neon_fmulx>;
> +defm FRECPS : SIMDThreeScalarSD<0, 0, 0b11111, "frecps", int_aarch64_neon_frecps>;
> +defm FRSQRTS : SIMDThreeScalarSD<0, 1, 0b11111, "frsqrts", int_aarch64_neon_frsqrts>;
> +defm SQADD : SIMDThreeScalarBHSD<0, 0b00001, "sqadd", int_aarch64_neon_sqadd>;
> +defm SQDMULH : SIMDThreeScalarHS< 0, 0b10110, "sqdmulh", int_aarch64_neon_sqdmulh>;
> +defm SQRDMULH : SIMDThreeScalarHS< 1, 0b10110, "sqrdmulh", int_aarch64_neon_sqrdmulh>;
> +defm SQRSHL : SIMDThreeScalarBHSD<0, 0b01011, "sqrshl",int_aarch64_neon_sqrshl>;
> +defm SQSHL : SIMDThreeScalarBHSD<0, 0b01001, "sqshl", int_aarch64_neon_sqshl>;
> +defm SQSUB : SIMDThreeScalarBHSD<0, 0b00101, "sqsub", int_aarch64_neon_sqsub>;
> +defm SRSHL : SIMDThreeScalarD< 0, 0b01010, "srshl", int_aarch64_neon_srshl>;
> +defm SSHL : SIMDThreeScalarD< 0, 0b01000, "sshl", int_aarch64_neon_sshl>;
> defm SUB : SIMDThreeScalarD< 1, 0b10000, "sub", sub>;
> -defm UQADD : SIMDThreeScalarBHSD<1, 0b00001, "uqadd", int_arm64_neon_uqadd>;
> -defm UQRSHL : SIMDThreeScalarBHSD<1, 0b01011, "uqrshl",int_arm64_neon_uqrshl>;
> -defm UQSHL : SIMDThreeScalarBHSD<1, 0b01001, "uqshl", int_arm64_neon_uqshl>;
> -defm UQSUB : SIMDThreeScalarBHSD<1, 0b00101, "uqsub", int_arm64_neon_uqsub>;
> -defm URSHL : SIMDThreeScalarD< 1, 0b01010, "urshl", int_arm64_neon_urshl>;
> -defm USHL : SIMDThreeScalarD< 1, 0b01000, "ushl", int_arm64_neon_ushl>;
> +defm UQADD : SIMDThreeScalarBHSD<1, 0b00001, "uqadd", int_aarch64_neon_uqadd>;
> +defm UQRSHL : SIMDThreeScalarBHSD<1, 0b01011, "uqrshl",int_aarch64_neon_uqrshl>;
> +defm UQSHL : SIMDThreeScalarBHSD<1, 0b01001, "uqshl", int_aarch64_neon_uqshl>;
> +defm UQSUB : SIMDThreeScalarBHSD<1, 0b00101, "uqsub", int_aarch64_neon_uqsub>;
> +defm URSHL : SIMDThreeScalarD< 1, 0b01010, "urshl", int_aarch64_neon_urshl>;
> +defm USHL : SIMDThreeScalarD< 1, 0b01000, "ushl", int_aarch64_neon_ushl>;
>
> def : InstAlias<"cmls $dst, $src1, $src2",
> (CMHSv1i64 FPR64:$dst, FPR64:$src2, FPR64:$src1), 0>;
> @@ -2862,16 +2864,16 @@ def : InstAlias<"faclt $dst, $src1, $src
> // Advanced SIMD three scalar instructions (mixed operands).
> //===----------------------------------------------------------------------===//
> defm SQDMULL : SIMDThreeScalarMixedHS<0, 0b11010, "sqdmull",
> - int_arm64_neon_sqdmulls_scalar>;
> + int_aarch64_neon_sqdmulls_scalar>;
> defm SQDMLAL : SIMDThreeScalarMixedTiedHS<0, 0b10010, "sqdmlal">;
> defm SQDMLSL : SIMDThreeScalarMixedTiedHS<0, 0b10110, "sqdmlsl">;
>
> -def : Pat<(i64 (int_arm64_neon_sqadd (i64 FPR64:$Rd),
> - (i64 (int_arm64_neon_sqdmulls_scalar (i32 FPR32:$Rn),
> +def : Pat<(i64 (int_aarch64_neon_sqadd (i64 FPR64:$Rd),
> + (i64 (int_aarch64_neon_sqdmulls_scalar (i32 FPR32:$Rn),
> (i32 FPR32:$Rm))))),
> (SQDMLALi32 FPR64:$Rd, FPR32:$Rn, FPR32:$Rm)>;
> -def : Pat<(i64 (int_arm64_neon_sqsub (i64 FPR64:$Rd),
> - (i64 (int_arm64_neon_sqdmulls_scalar (i32 FPR32:$Rn),
> +def : Pat<(i64 (int_aarch64_neon_sqsub (i64 FPR64:$Rd),
> + (i64 (int_aarch64_neon_sqdmulls_scalar (i32 FPR32:$Rn),
> (i32 FPR32:$Rm))))),
> (SQDMLSLi32 FPR64:$Rd, FPR32:$Rn, FPR32:$Rm)>;
>
> @@ -2879,17 +2881,17 @@ def : Pat<(i64 (int_arm64_neon_sqsub (i6
> // Advanced SIMD two scalar instructions.
> //===----------------------------------------------------------------------===//
>
> -defm ABS : SIMDTwoScalarD< 0, 0b01011, "abs", int_arm64_neon_abs>;
> -defm CMEQ : SIMDCmpTwoScalarD< 0, 0b01001, "cmeq", ARM64cmeqz>;
> -defm CMGE : SIMDCmpTwoScalarD< 1, 0b01000, "cmge", ARM64cmgez>;
> -defm CMGT : SIMDCmpTwoScalarD< 0, 0b01000, "cmgt", ARM64cmgtz>;
> -defm CMLE : SIMDCmpTwoScalarD< 1, 0b01001, "cmle", ARM64cmlez>;
> -defm CMLT : SIMDCmpTwoScalarD< 0, 0b01010, "cmlt", ARM64cmltz>;
> -defm FCMEQ : SIMDCmpTwoScalarSD<0, 1, 0b01101, "fcmeq", ARM64fcmeqz>;
> -defm FCMGE : SIMDCmpTwoScalarSD<1, 1, 0b01100, "fcmge", ARM64fcmgez>;
> -defm FCMGT : SIMDCmpTwoScalarSD<0, 1, 0b01100, "fcmgt", ARM64fcmgtz>;
> -defm FCMLE : SIMDCmpTwoScalarSD<1, 1, 0b01101, "fcmle", ARM64fcmlez>;
> -defm FCMLT : SIMDCmpTwoScalarSD<0, 1, 0b01110, "fcmlt", ARM64fcmltz>;
> +defm ABS : SIMDTwoScalarD< 0, 0b01011, "abs", int_aarch64_neon_abs>;
> +defm CMEQ : SIMDCmpTwoScalarD< 0, 0b01001, "cmeq", AArch64cmeqz>;
> +defm CMGE : SIMDCmpTwoScalarD< 1, 0b01000, "cmge", AArch64cmgez>;
> +defm CMGT : SIMDCmpTwoScalarD< 0, 0b01000, "cmgt", AArch64cmgtz>;
> +defm CMLE : SIMDCmpTwoScalarD< 1, 0b01001, "cmle", AArch64cmlez>;
> +defm CMLT : SIMDCmpTwoScalarD< 0, 0b01010, "cmlt", AArch64cmltz>;
> +defm FCMEQ : SIMDCmpTwoScalarSD<0, 1, 0b01101, "fcmeq", AArch64fcmeqz>;
> +defm FCMGE : SIMDCmpTwoScalarSD<1, 1, 0b01100, "fcmge", AArch64fcmgez>;
> +defm FCMGT : SIMDCmpTwoScalarSD<0, 1, 0b01100, "fcmgt", AArch64fcmgtz>;
> +defm FCMLE : SIMDCmpTwoScalarSD<1, 1, 0b01101, "fcmle", AArch64fcmlez>;
> +defm FCMLT : SIMDCmpTwoScalarSD<0, 1, 0b01110, "fcmlt", AArch64fcmltz>;
> defm FCVTAS : SIMDTwoScalarSD< 0, 0, 0b11100, "fcvtas">;
> defm FCVTAU : SIMDTwoScalarSD< 1, 0, 0b11100, "fcvtau">;
> defm FCVTMS : SIMDTwoScalarSD< 0, 0, 0b11011, "fcvtms">;
> @@ -2906,54 +2908,54 @@ defm FRECPX : SIMDTwoScalarSD< 0, 1, 0
> defm FRSQRTE : SIMDTwoScalarSD< 1, 1, 0b11101, "frsqrte">;
> defm NEG : SIMDTwoScalarD< 1, 0b01011, "neg",
> UnOpFrag<(sub immAllZerosV, node:$LHS)> >;
> -defm SCVTF : SIMDTwoScalarCVTSD< 0, 0, 0b11101, "scvtf", ARM64sitof>;
> -defm SQABS : SIMDTwoScalarBHSD< 0, 0b00111, "sqabs", int_arm64_neon_sqabs>;
> -defm SQNEG : SIMDTwoScalarBHSD< 1, 0b00111, "sqneg", int_arm64_neon_sqneg>;
> -defm SQXTN : SIMDTwoScalarMixedBHS< 0, 0b10100, "sqxtn", int_arm64_neon_scalar_sqxtn>;
> -defm SQXTUN : SIMDTwoScalarMixedBHS< 1, 0b10010, "sqxtun", int_arm64_neon_scalar_sqxtun>;
> +defm SCVTF : SIMDTwoScalarCVTSD< 0, 0, 0b11101, "scvtf", AArch64sitof>;
> +defm SQABS : SIMDTwoScalarBHSD< 0, 0b00111, "sqabs", int_aarch64_neon_sqabs>;
> +defm SQNEG : SIMDTwoScalarBHSD< 1, 0b00111, "sqneg", int_aarch64_neon_sqneg>;
> +defm SQXTN : SIMDTwoScalarMixedBHS< 0, 0b10100, "sqxtn", int_aarch64_neon_scalar_sqxtn>;
> +defm SQXTUN : SIMDTwoScalarMixedBHS< 1, 0b10010, "sqxtun", int_aarch64_neon_scalar_sqxtun>;
> defm SUQADD : SIMDTwoScalarBHSDTied< 0, 0b00011, "suqadd",
> - int_arm64_neon_suqadd>;
> -defm UCVTF : SIMDTwoScalarCVTSD< 1, 0, 0b11101, "ucvtf", ARM64uitof>;
> -defm UQXTN : SIMDTwoScalarMixedBHS<1, 0b10100, "uqxtn", int_arm64_neon_scalar_uqxtn>;
> + int_aarch64_neon_suqadd>;
> +defm UCVTF : SIMDTwoScalarCVTSD< 1, 0, 0b11101, "ucvtf", AArch64uitof>;
> +defm UQXTN : SIMDTwoScalarMixedBHS<1, 0b10100, "uqxtn", int_aarch64_neon_scalar_uqxtn>;
> defm USQADD : SIMDTwoScalarBHSDTied< 1, 0b00011, "usqadd",
> - int_arm64_neon_usqadd>;
> + int_aarch64_neon_usqadd>;
>
> -def : Pat<(ARM64neg (v1i64 V64:$Rn)), (NEGv1i64 V64:$Rn)>;
> +def : Pat<(AArch64neg (v1i64 V64:$Rn)), (NEGv1i64 V64:$Rn)>;
>
> -def : Pat<(v1i64 (int_arm64_neon_fcvtas (v1f64 FPR64:$Rn))),
> +def : Pat<(v1i64 (int_aarch64_neon_fcvtas (v1f64 FPR64:$Rn))),
> (FCVTASv1i64 FPR64:$Rn)>;
> -def : Pat<(v1i64 (int_arm64_neon_fcvtau (v1f64 FPR64:$Rn))),
> +def : Pat<(v1i64 (int_aarch64_neon_fcvtau (v1f64 FPR64:$Rn))),
> (FCVTAUv1i64 FPR64:$Rn)>;
> -def : Pat<(v1i64 (int_arm64_neon_fcvtms (v1f64 FPR64:$Rn))),
> +def : Pat<(v1i64 (int_aarch64_neon_fcvtms (v1f64 FPR64:$Rn))),
> (FCVTMSv1i64 FPR64:$Rn)>;
> -def : Pat<(v1i64 (int_arm64_neon_fcvtmu (v1f64 FPR64:$Rn))),
> +def : Pat<(v1i64 (int_aarch64_neon_fcvtmu (v1f64 FPR64:$Rn))),
> (FCVTMUv1i64 FPR64:$Rn)>;
> -def : Pat<(v1i64 (int_arm64_neon_fcvtns (v1f64 FPR64:$Rn))),
> +def : Pat<(v1i64 (int_aarch64_neon_fcvtns (v1f64 FPR64:$Rn))),
> (FCVTNSv1i64 FPR64:$Rn)>;
> -def : Pat<(v1i64 (int_arm64_neon_fcvtnu (v1f64 FPR64:$Rn))),
> +def : Pat<(v1i64 (int_aarch64_neon_fcvtnu (v1f64 FPR64:$Rn))),
> (FCVTNUv1i64 FPR64:$Rn)>;
> -def : Pat<(v1i64 (int_arm64_neon_fcvtps (v1f64 FPR64:$Rn))),
> +def : Pat<(v1i64 (int_aarch64_neon_fcvtps (v1f64 FPR64:$Rn))),
> (FCVTPSv1i64 FPR64:$Rn)>;
> -def : Pat<(v1i64 (int_arm64_neon_fcvtpu (v1f64 FPR64:$Rn))),
> +def : Pat<(v1i64 (int_aarch64_neon_fcvtpu (v1f64 FPR64:$Rn))),
> (FCVTPUv1i64 FPR64:$Rn)>;
>
> -def : Pat<(f32 (int_arm64_neon_frecpe (f32 FPR32:$Rn))),
> +def : Pat<(f32 (int_aarch64_neon_frecpe (f32 FPR32:$Rn))),
> (FRECPEv1i32 FPR32:$Rn)>;
> -def : Pat<(f64 (int_arm64_neon_frecpe (f64 FPR64:$Rn))),
> +def : Pat<(f64 (int_aarch64_neon_frecpe (f64 FPR64:$Rn))),
> (FRECPEv1i64 FPR64:$Rn)>;
> -def : Pat<(v1f64 (int_arm64_neon_frecpe (v1f64 FPR64:$Rn))),
> +def : Pat<(v1f64 (int_aarch64_neon_frecpe (v1f64 FPR64:$Rn))),
> (FRECPEv1i64 FPR64:$Rn)>;
>
> -def : Pat<(f32 (int_arm64_neon_frecpx (f32 FPR32:$Rn))),
> +def : Pat<(f32 (int_aarch64_neon_frecpx (f32 FPR32:$Rn))),
> (FRECPXv1i32 FPR32:$Rn)>;
> -def : Pat<(f64 (int_arm64_neon_frecpx (f64 FPR64:$Rn))),
> +def : Pat<(f64 (int_aarch64_neon_frecpx (f64 FPR64:$Rn))),
> (FRECPXv1i64 FPR64:$Rn)>;
>
> -def : Pat<(f32 (int_arm64_neon_frsqrte (f32 FPR32:$Rn))),
> +def : Pat<(f32 (int_aarch64_neon_frsqrte (f32 FPR32:$Rn))),
> (FRSQRTEv1i32 FPR32:$Rn)>;
> -def : Pat<(f64 (int_arm64_neon_frsqrte (f64 FPR64:$Rn))),
> +def : Pat<(f64 (int_aarch64_neon_frsqrte (f64 FPR64:$Rn))),
> (FRSQRTEv1i64 FPR64:$Rn)>;
> -def : Pat<(v1f64 (int_arm64_neon_frsqrte (v1f64 FPR64:$Rn))),
> +def : Pat<(v1f64 (int_aarch64_neon_frsqrte (v1f64 FPR64:$Rn))),
> (FRSQRTEv1i64 FPR64:$Rn)>;
>
> // If an integer is about to be converted to a floating point value,
> @@ -3047,56 +3049,56 @@ def : Pat <(f64 (uint_to_fp (i32
> // Advanced SIMD three different-sized vector instructions.
> //===----------------------------------------------------------------------===//
>
> -defm ADDHN : SIMDNarrowThreeVectorBHS<0,0b0100,"addhn", int_arm64_neon_addhn>;
> -defm SUBHN : SIMDNarrowThreeVectorBHS<0,0b0110,"subhn", int_arm64_neon_subhn>;
> -defm RADDHN : SIMDNarrowThreeVectorBHS<1,0b0100,"raddhn",int_arm64_neon_raddhn>;
> -defm RSUBHN : SIMDNarrowThreeVectorBHS<1,0b0110,"rsubhn",int_arm64_neon_rsubhn>;
> -defm PMULL : SIMDDifferentThreeVectorBD<0,0b1110,"pmull",int_arm64_neon_pmull>;
> +defm ADDHN : SIMDNarrowThreeVectorBHS<0,0b0100,"addhn", int_aarch64_neon_addhn>;
> +defm SUBHN : SIMDNarrowThreeVectorBHS<0,0b0110,"subhn", int_aarch64_neon_subhn>;
> +defm RADDHN : SIMDNarrowThreeVectorBHS<1,0b0100,"raddhn",int_aarch64_neon_raddhn>;
> +defm RSUBHN : SIMDNarrowThreeVectorBHS<1,0b0110,"rsubhn",int_aarch64_neon_rsubhn>;
> +defm PMULL : SIMDDifferentThreeVectorBD<0,0b1110,"pmull",int_aarch64_neon_pmull>;
> defm SABAL : SIMDLongThreeVectorTiedBHSabal<0,0b0101,"sabal",
> - int_arm64_neon_sabd>;
> + int_aarch64_neon_sabd>;
> defm SABDL : SIMDLongThreeVectorBHSabdl<0, 0b0111, "sabdl",
> - int_arm64_neon_sabd>;
> + int_aarch64_neon_sabd>;
> defm SADDL : SIMDLongThreeVectorBHS< 0, 0b0000, "saddl",
> BinOpFrag<(add (sext node:$LHS), (sext node:$RHS))>>;
> defm SADDW : SIMDWideThreeVectorBHS< 0, 0b0001, "saddw",
> BinOpFrag<(add node:$LHS, (sext node:$RHS))>>;
> defm SMLAL : SIMDLongThreeVectorTiedBHS<0, 0b1000, "smlal",
> - TriOpFrag<(add node:$LHS, (int_arm64_neon_smull node:$MHS, node:$RHS))>>;
> + TriOpFrag<(add node:$LHS, (int_aarch64_neon_smull node:$MHS, node:$RHS))>>;
> defm SMLSL : SIMDLongThreeVectorTiedBHS<0, 0b1010, "smlsl",
> - TriOpFrag<(sub node:$LHS, (int_arm64_neon_smull node:$MHS, node:$RHS))>>;
> -defm SMULL : SIMDLongThreeVectorBHS<0, 0b1100, "smull", int_arm64_neon_smull>;
> + TriOpFrag<(sub node:$LHS, (int_aarch64_neon_smull node:$MHS, node:$RHS))>>;
> +defm SMULL : SIMDLongThreeVectorBHS<0, 0b1100, "smull", int_aarch64_neon_smull>;
> defm SQDMLAL : SIMDLongThreeVectorSQDMLXTiedHS<0, 0b1001, "sqdmlal",
> - int_arm64_neon_sqadd>;
> + int_aarch64_neon_sqadd>;
> defm SQDMLSL : SIMDLongThreeVectorSQDMLXTiedHS<0, 0b1011, "sqdmlsl",
> - int_arm64_neon_sqsub>;
> + int_aarch64_neon_sqsub>;
> defm SQDMULL : SIMDLongThreeVectorHS<0, 0b1101, "sqdmull",
> - int_arm64_neon_sqdmull>;
> + int_aarch64_neon_sqdmull>;
> defm SSUBL : SIMDLongThreeVectorBHS<0, 0b0010, "ssubl",
> BinOpFrag<(sub (sext node:$LHS), (sext node:$RHS))>>;
> defm SSUBW : SIMDWideThreeVectorBHS<0, 0b0011, "ssubw",
> BinOpFrag<(sub node:$LHS, (sext node:$RHS))>>;
> defm UABAL : SIMDLongThreeVectorTiedBHSabal<1, 0b0101, "uabal",
> - int_arm64_neon_uabd>;
> + int_aarch64_neon_uabd>;
> defm UABDL : SIMDLongThreeVectorBHSabdl<1, 0b0111, "uabdl",
> - int_arm64_neon_uabd>;
> + int_aarch64_neon_uabd>;
> defm UADDL : SIMDLongThreeVectorBHS<1, 0b0000, "uaddl",
> BinOpFrag<(add (zext node:$LHS), (zext node:$RHS))>>;
> defm UADDW : SIMDWideThreeVectorBHS<1, 0b0001, "uaddw",
> BinOpFrag<(add node:$LHS, (zext node:$RHS))>>;
> defm UMLAL : SIMDLongThreeVectorTiedBHS<1, 0b1000, "umlal",
> - TriOpFrag<(add node:$LHS, (int_arm64_neon_umull node:$MHS, node:$RHS))>>;
> + TriOpFrag<(add node:$LHS, (int_aarch64_neon_umull node:$MHS, node:$RHS))>>;
> defm UMLSL : SIMDLongThreeVectorTiedBHS<1, 0b1010, "umlsl",
> - TriOpFrag<(sub node:$LHS, (int_arm64_neon_umull node:$MHS, node:$RHS))>>;
> -defm UMULL : SIMDLongThreeVectorBHS<1, 0b1100, "umull", int_arm64_neon_umull>;
> + TriOpFrag<(sub node:$LHS, (int_aarch64_neon_umull node:$MHS, node:$RHS))>>;
> +defm UMULL : SIMDLongThreeVectorBHS<1, 0b1100, "umull", int_aarch64_neon_umull>;
> defm USUBL : SIMDLongThreeVectorBHS<1, 0b0010, "usubl",
> BinOpFrag<(sub (zext node:$LHS), (zext node:$RHS))>>;
> defm USUBW : SIMDWideThreeVectorBHS< 1, 0b0011, "usubw",
> BinOpFrag<(sub node:$LHS, (zext node:$RHS))>>;
>
> // Patterns for 64-bit pmull
> -def : Pat<(int_arm64_neon_pmull64 V64:$Rn, V64:$Rm),
> +def : Pat<(int_aarch64_neon_pmull64 V64:$Rn, V64:$Rm),
> (PMULLv1i64 V64:$Rn, V64:$Rm)>;
> -def : Pat<(int_arm64_neon_pmull64 (vector_extract (v2i64 V128:$Rn), (i64 1)),
> +def : Pat<(int_aarch64_neon_pmull64 (vector_extract (v2i64 V128:$Rn), (i64 1)),
> (vector_extract (v2i64 V128:$Rm), (i64 1))),
> (PMULLv2i64 V128:$Rn, V128:$Rm)>;
>
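
The 64-bit pmull patterns just above cover a scalar (1d) form and a high-lane (2d) form; a hedged IR sketch of the scalar one, with the declaration assumed rather than copied from the patch:

    declare <16 x i8> @llvm.aarch64.neon.pmull64(i64, i64)

    define <16 x i8> @pmull_1q(i64 %a, i64 %b) {
      ; expected to select the 1d -> 1q PMULL via the first Pat<> above
      %r = call <16 x i8> @llvm.aarch64.neon.pmull64(i64 %a, i64 %b)
      ret <16 x i8> %r
    }
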
> @@ -3104,51 +3106,51 @@ def : Pat<(int_arm64_neon_pmull64 (vecto
> // written in LLVM IR without too much difficulty.
>
> // ADDHN
> -def : Pat<(v8i8 (trunc (v8i16 (ARM64vlshr (add V128:$Rn, V128:$Rm), (i32 8))))),
> +def : Pat<(v8i8 (trunc (v8i16 (AArch64vlshr (add V128:$Rn, V128:$Rm), (i32 8))))),
> (ADDHNv8i16_v8i8 V128:$Rn, V128:$Rm)>;
> -def : Pat<(v4i16 (trunc (v4i32 (ARM64vlshr (add V128:$Rn, V128:$Rm),
> +def : Pat<(v4i16 (trunc (v4i32 (AArch64vlshr (add V128:$Rn, V128:$Rm),
> (i32 16))))),
> (ADDHNv4i32_v4i16 V128:$Rn, V128:$Rm)>;
> -def : Pat<(v2i32 (trunc (v2i64 (ARM64vlshr (add V128:$Rn, V128:$Rm),
> +def : Pat<(v2i32 (trunc (v2i64 (AArch64vlshr (add V128:$Rn, V128:$Rm),
> (i32 32))))),
> (ADDHNv2i64_v2i32 V128:$Rn, V128:$Rm)>;
> def : Pat<(concat_vectors (v8i8 V64:$Rd),
> - (trunc (v8i16 (ARM64vlshr (add V128:$Rn, V128:$Rm),
> + (trunc (v8i16 (AArch64vlshr (add V128:$Rn, V128:$Rm),
> (i32 8))))),
> (ADDHNv8i16_v16i8 (SUBREG_TO_REG (i32 0), V64:$Rd, dsub),
> V128:$Rn, V128:$Rm)>;
> def : Pat<(concat_vectors (v4i16 V64:$Rd),
> - (trunc (v4i32 (ARM64vlshr (add V128:$Rn, V128:$Rm),
> + (trunc (v4i32 (AArch64vlshr (add V128:$Rn, V128:$Rm),
> (i32 16))))),
> (ADDHNv4i32_v8i16 (SUBREG_TO_REG (i32 0), V64:$Rd, dsub),
> V128:$Rn, V128:$Rm)>;
> def : Pat<(concat_vectors (v2i32 V64:$Rd),
> - (trunc (v2i64 (ARM64vlshr (add V128:$Rn, V128:$Rm),
> + (trunc (v2i64 (AArch64vlshr (add V128:$Rn, V128:$Rm),
> (i32 32))))),
> (ADDHNv2i64_v4i32 (SUBREG_TO_REG (i32 0), V64:$Rd, dsub),
> V128:$Rn, V128:$Rm)>;
>
> // SUBHN
> -def : Pat<(v8i8 (trunc (v8i16 (ARM64vlshr (sub V128:$Rn, V128:$Rm), (i32 8))))),
> +def : Pat<(v8i8 (trunc (v8i16 (AArch64vlshr (sub V128:$Rn, V128:$Rm), (i32 8))))),
> (SUBHNv8i16_v8i8 V128:$Rn, V128:$Rm)>;
> -def : Pat<(v4i16 (trunc (v4i32 (ARM64vlshr (sub V128:$Rn, V128:$Rm),
> +def : Pat<(v4i16 (trunc (v4i32 (AArch64vlshr (sub V128:$Rn, V128:$Rm),
> (i32 16))))),
> (SUBHNv4i32_v4i16 V128:$Rn, V128:$Rm)>;
> -def : Pat<(v2i32 (trunc (v2i64 (ARM64vlshr (sub V128:$Rn, V128:$Rm),
> +def : Pat<(v2i32 (trunc (v2i64 (AArch64vlshr (sub V128:$Rn, V128:$Rm),
> (i32 32))))),
> (SUBHNv2i64_v2i32 V128:$Rn, V128:$Rm)>;
> def : Pat<(concat_vectors (v8i8 V64:$Rd),
> - (trunc (v8i16 (ARM64vlshr (sub V128:$Rn, V128:$Rm),
> + (trunc (v8i16 (AArch64vlshr (sub V128:$Rn, V128:$Rm),
> (i32 8))))),
> (SUBHNv8i16_v16i8 (SUBREG_TO_REG (i32 0), V64:$Rd, dsub),
> V128:$Rn, V128:$Rm)>;
> def : Pat<(concat_vectors (v4i16 V64:$Rd),
> - (trunc (v4i32 (ARM64vlshr (sub V128:$Rn, V128:$Rm),
> + (trunc (v4i32 (AArch64vlshr (sub V128:$Rn, V128:$Rm),
> (i32 16))))),
> (SUBHNv4i32_v8i16 (SUBREG_TO_REG (i32 0), V64:$Rd, dsub),
> V128:$Rn, V128:$Rm)>;
> def : Pat<(concat_vectors (v2i32 V64:$Rd),
> - (trunc (v2i64 (ARM64vlshr (sub V128:$Rn, V128:$Rm),
> + (trunc (v2i64 (AArch64vlshr (sub V128:$Rn, V128:$Rm),
> (i32 32))))),
> (SUBHNv2i64_v4i32 (SUBREG_TO_REG (i32 0), V64:$Rd, dsub),
> V128:$Rn, V128:$Rm)>;
> @@ -3159,21 +3161,21 @@ def : Pat<(concat_vectors (v2i32 V64:$Rd
>
> defm EXT : SIMDBitwiseExtract<"ext">;
>
> -def : Pat<(v4i16 (ARM64ext V64:$Rn, V64:$Rm, (i32 imm:$imm))),
> +def : Pat<(v4i16 (AArch64ext V64:$Rn, V64:$Rm, (i32 imm:$imm))),
> (EXTv8i8 V64:$Rn, V64:$Rm, imm:$imm)>;
> -def : Pat<(v8i16 (ARM64ext V128:$Rn, V128:$Rm, (i32 imm:$imm))),
> +def : Pat<(v8i16 (AArch64ext V128:$Rn, V128:$Rm, (i32 imm:$imm))),
> (EXTv16i8 V128:$Rn, V128:$Rm, imm:$imm)>;
> -def : Pat<(v2i32 (ARM64ext V64:$Rn, V64:$Rm, (i32 imm:$imm))),
> +def : Pat<(v2i32 (AArch64ext V64:$Rn, V64:$Rm, (i32 imm:$imm))),
> (EXTv8i8 V64:$Rn, V64:$Rm, imm:$imm)>;
> -def : Pat<(v2f32 (ARM64ext V64:$Rn, V64:$Rm, (i32 imm:$imm))),
> +def : Pat<(v2f32 (AArch64ext V64:$Rn, V64:$Rm, (i32 imm:$imm))),
> (EXTv8i8 V64:$Rn, V64:$Rm, imm:$imm)>;
> -def : Pat<(v4i32 (ARM64ext V128:$Rn, V128:$Rm, (i32 imm:$imm))),
> +def : Pat<(v4i32 (AArch64ext V128:$Rn, V128:$Rm, (i32 imm:$imm))),
> (EXTv16i8 V128:$Rn, V128:$Rm, imm:$imm)>;
> -def : Pat<(v4f32 (ARM64ext V128:$Rn, V128:$Rm, (i32 imm:$imm))),
> +def : Pat<(v4f32 (AArch64ext V128:$Rn, V128:$Rm, (i32 imm:$imm))),
> (EXTv16i8 V128:$Rn, V128:$Rm, imm:$imm)>;
> -def : Pat<(v2i64 (ARM64ext V128:$Rn, V128:$Rm, (i32 imm:$imm))),
> +def : Pat<(v2i64 (AArch64ext V128:$Rn, V128:$Rm, (i32 imm:$imm))),
> (EXTv16i8 V128:$Rn, V128:$Rm, imm:$imm)>;
> -def : Pat<(v2f64 (ARM64ext V128:$Rn, V128:$Rm, (i32 imm:$imm))),
> +def : Pat<(v2f64 (AArch64ext V128:$Rn, V128:$Rm, (i32 imm:$imm))),
> (EXTv16i8 V128:$Rn, V128:$Rm, imm:$imm)>;
>
> // We use EXT to handle extract_subvector to copy the upper 64-bits of a
> @@ -3196,12 +3198,12 @@ def : Pat<(v1f64 (extract_subvector V128
> // AdvSIMD zip vector
> //----------------------------------------------------------------------------
>
> -defm TRN1 : SIMDZipVector<0b010, "trn1", ARM64trn1>;
> -defm TRN2 : SIMDZipVector<0b110, "trn2", ARM64trn2>;
> -defm UZP1 : SIMDZipVector<0b001, "uzp1", ARM64uzp1>;
> -defm UZP2 : SIMDZipVector<0b101, "uzp2", ARM64uzp2>;
> -defm ZIP1 : SIMDZipVector<0b011, "zip1", ARM64zip1>;
> -defm ZIP2 : SIMDZipVector<0b111, "zip2", ARM64zip2>;
> +defm TRN1 : SIMDZipVector<0b010, "trn1", AArch64trn1>;
> +defm TRN2 : SIMDZipVector<0b110, "trn2", AArch64trn2>;
> +defm UZP1 : SIMDZipVector<0b001, "uzp1", AArch64uzp1>;
> +defm UZP2 : SIMDZipVector<0b101, "uzp2", AArch64uzp2>;
> +defm ZIP1 : SIMDZipVector<0b011, "zip1", AArch64zip1>;
> +defm ZIP2 : SIMDZipVector<0b111, "zip2", AArch64zip2>;
>
> //----------------------------------------------------------------------------
> // AdvSIMD TBL/TBX instructions
> @@ -3210,15 +3212,15 @@ defm ZIP2 : SIMDZipVector<0b111, "zip2",
> defm TBL : SIMDTableLookup< 0, "tbl">;
> defm TBX : SIMDTableLookupTied<1, "tbx">;
>
> -def : Pat<(v8i8 (int_arm64_neon_tbl1 (v16i8 VecListOne128:$Rn), (v8i8 V64:$Ri))),
> +def : Pat<(v8i8 (int_aarch64_neon_tbl1 (v16i8 VecListOne128:$Rn), (v8i8 V64:$Ri))),
> (TBLv8i8One VecListOne128:$Rn, V64:$Ri)>;
> -def : Pat<(v16i8 (int_arm64_neon_tbl1 (v16i8 V128:$Ri), (v16i8 V128:$Rn))),
> +def : Pat<(v16i8 (int_aarch64_neon_tbl1 (v16i8 V128:$Ri), (v16i8 V128:$Rn))),
> (TBLv16i8One V128:$Ri, V128:$Rn)>;
>
> -def : Pat<(v8i8 (int_arm64_neon_tbx1 (v8i8 V64:$Rd),
> +def : Pat<(v8i8 (int_aarch64_neon_tbx1 (v8i8 V64:$Rd),
> (v16i8 VecListOne128:$Rn), (v8i8 V64:$Ri))),
> (TBXv8i8One V64:$Rd, VecListOne128:$Rn, V64:$Ri)>;
> -def : Pat<(v16i8 (int_arm64_neon_tbx1 (v16i8 V128:$Rd),
> +def : Pat<(v16i8 (int_aarch64_neon_tbx1 (v16i8 V128:$Rd),
> (v16i8 V128:$Ri), (v16i8 V128:$Rn))),
> (TBXv16i8One V128:$Rd, V128:$Ri, V128:$Rn)>;
>
> @@ -3239,31 +3241,31 @@ defm FMAXNMP : SIMDPairwiseScalarSD<1, 0
> defm FMAXP : SIMDPairwiseScalarSD<1, 0, 0b01111, "fmaxp">;
> defm FMINNMP : SIMDPairwiseScalarSD<1, 1, 0b01100, "fminnmp">;
> defm FMINP : SIMDPairwiseScalarSD<1, 1, 0b01111, "fminp">;
> -def : Pat<(i64 (int_arm64_neon_saddv (v2i64 V128:$Rn))),
> +def : Pat<(i64 (int_aarch64_neon_saddv (v2i64 V128:$Rn))),
> (ADDPv2i64p V128:$Rn)>;
> -def : Pat<(i64 (int_arm64_neon_uaddv (v2i64 V128:$Rn))),
> +def : Pat<(i64 (int_aarch64_neon_uaddv (v2i64 V128:$Rn))),
> (ADDPv2i64p V128:$Rn)>;
> -def : Pat<(f32 (int_arm64_neon_faddv (v2f32 V64:$Rn))),
> +def : Pat<(f32 (int_aarch64_neon_faddv (v2f32 V64:$Rn))),
> (FADDPv2i32p V64:$Rn)>;
> -def : Pat<(f32 (int_arm64_neon_faddv (v4f32 V128:$Rn))),
> +def : Pat<(f32 (int_aarch64_neon_faddv (v4f32 V128:$Rn))),
> (FADDPv2i32p (EXTRACT_SUBREG (FADDPv4f32 V128:$Rn, V128:$Rn), dsub))>;
> -def : Pat<(f64 (int_arm64_neon_faddv (v2f64 V128:$Rn))),
> +def : Pat<(f64 (int_aarch64_neon_faddv (v2f64 V128:$Rn))),
> (FADDPv2i64p V128:$Rn)>;
> -def : Pat<(f32 (int_arm64_neon_fmaxnmv (v2f32 V64:$Rn))),
> +def : Pat<(f32 (int_aarch64_neon_fmaxnmv (v2f32 V64:$Rn))),
> (FMAXNMPv2i32p V64:$Rn)>;
> -def : Pat<(f64 (int_arm64_neon_fmaxnmv (v2f64 V128:$Rn))),
> +def : Pat<(f64 (int_aarch64_neon_fmaxnmv (v2f64 V128:$Rn))),
> (FMAXNMPv2i64p V128:$Rn)>;
> -def : Pat<(f32 (int_arm64_neon_fmaxv (v2f32 V64:$Rn))),
> +def : Pat<(f32 (int_aarch64_neon_fmaxv (v2f32 V64:$Rn))),
> (FMAXPv2i32p V64:$Rn)>;
> -def : Pat<(f64 (int_arm64_neon_fmaxv (v2f64 V128:$Rn))),
> +def : Pat<(f64 (int_aarch64_neon_fmaxv (v2f64 V128:$Rn))),
> (FMAXPv2i64p V128:$Rn)>;
> -def : Pat<(f32 (int_arm64_neon_fminnmv (v2f32 V64:$Rn))),
> +def : Pat<(f32 (int_aarch64_neon_fminnmv (v2f32 V64:$Rn))),
> (FMINNMPv2i32p V64:$Rn)>;
> -def : Pat<(f64 (int_arm64_neon_fminnmv (v2f64 V128:$Rn))),
> +def : Pat<(f64 (int_aarch64_neon_fminnmv (v2f64 V128:$Rn))),
> (FMINNMPv2i64p V128:$Rn)>;
> -def : Pat<(f32 (int_arm64_neon_fminv (v2f32 V64:$Rn))),
> +def : Pat<(f32 (int_aarch64_neon_fminv (v2f32 V64:$Rn))),
> (FMINPv2i32p V64:$Rn)>;
> -def : Pat<(f64 (int_arm64_neon_fminv (v2f64 V128:$Rn))),
> +def : Pat<(f64 (int_aarch64_neon_fminv (v2f64 V128:$Rn))),
> (FMINPv2i64p V128:$Rn)>;
>
> //----------------------------------------------------------------------------
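
For the pairwise reductions in the hunk above, here is how one of the Pat<> lines looks from the IR side; the overloaded name below is my assumption, not text from the commit:

    declare float @llvm.aarch64.neon.faddv.f32.v2f32(<2 x float>)

    define float @sum_2s(<2 x float> %v) {
      ; the faddv pattern above should turn this into a single scalar-pairwise FADDP
      %s = call float @llvm.aarch64.neon.faddv.f32.v2f32(<2 x float> %v)
      ret float %s
    }
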
> @@ -3286,27 +3288,27 @@ def DUPv8i16lane : SIMDDup16FromElement<
> def DUPv8i8lane : SIMDDup8FromElement <0, ".8b", v8i8, V64>;
> def DUPv16i8lane : SIMDDup8FromElement <1, ".16b", v16i8, V128>;
>
> -def : Pat<(v2f32 (ARM64dup (f32 FPR32:$Rn))),
> +def : Pat<(v2f32 (AArch64dup (f32 FPR32:$Rn))),
> (v2f32 (DUPv2i32lane
> (INSERT_SUBREG (v4i32 (IMPLICIT_DEF)), FPR32:$Rn, ssub),
> (i64 0)))>;
> -def : Pat<(v4f32 (ARM64dup (f32 FPR32:$Rn))),
> +def : Pat<(v4f32 (AArch64dup (f32 FPR32:$Rn))),
> (v4f32 (DUPv4i32lane
> (INSERT_SUBREG (v4i32 (IMPLICIT_DEF)), FPR32:$Rn, ssub),
> (i64 0)))>;
> -def : Pat<(v2f64 (ARM64dup (f64 FPR64:$Rn))),
> +def : Pat<(v2f64 (AArch64dup (f64 FPR64:$Rn))),
> (v2f64 (DUPv2i64lane
> (INSERT_SUBREG (v4i32 (IMPLICIT_DEF)), FPR64:$Rn, dsub),
> (i64 0)))>;
>
> -def : Pat<(v2f32 (ARM64duplane32 (v4f32 V128:$Rn), VectorIndexS:$imm)),
> +def : Pat<(v2f32 (AArch64duplane32 (v4f32 V128:$Rn), VectorIndexS:$imm)),
> (DUPv2i32lane V128:$Rn, VectorIndexS:$imm)>;
> -def : Pat<(v4f32 (ARM64duplane32 (v4f32 V128:$Rn), VectorIndexS:$imm)),
> +def : Pat<(v4f32 (AArch64duplane32 (v4f32 V128:$Rn), VectorIndexS:$imm)),
> (DUPv4i32lane V128:$Rn, VectorIndexS:$imm)>;
> -def : Pat<(v2f64 (ARM64duplane64 (v2f64 V128:$Rn), VectorIndexD:$imm)),
> +def : Pat<(v2f64 (AArch64duplane64 (v2f64 V128:$Rn), VectorIndexD:$imm)),
> (DUPv2i64lane V128:$Rn, VectorIndexD:$imm)>;
>
> -// If there's an (ARM64dup (vector_extract ...) ...), we can use a duplane
> +// If there's an (AArch64dup (vector_extract ...) ...), we can use a duplane
> // instruction even if the types don't match: we just have to remap the lane
> // carefully. N.b. this trick only applies to truncations.
> def VecIndex_x2 : SDNodeXForm<imm, [{
> @@ -3322,11 +3324,11 @@ def VecIndex_x8 : SDNodeXForm<imm, [{
> multiclass DUPWithTruncPats<ValueType ResVT, ValueType Src64VT,
> ValueType Src128VT, ValueType ScalVT,
> Instruction DUP, SDNodeXForm IdxXFORM> {
> - def : Pat<(ResVT (ARM64dup (ScalVT (vector_extract (Src128VT V128:$Rn),
> + def : Pat<(ResVT (AArch64dup (ScalVT (vector_extract (Src128VT V128:$Rn),
> imm:$idx)))),
> (DUP V128:$Rn, (IdxXFORM imm:$idx))>;
>
> - def : Pat<(ResVT (ARM64dup (ScalVT (vector_extract (Src64VT V64:$Rn),
> + def : Pat<(ResVT (AArch64dup (ScalVT (vector_extract (Src64VT V64:$Rn),
> imm:$idx)))),
> (DUP (SUBREG_TO_REG (i64 0), V64:$Rn, dsub), (IdxXFORM imm:$idx))>;
> }
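
The AArch64duplane32 patterns a little further up are easiest to see with a plain splat-from-lane, which needs no intrinsics at all:

    define <4 x i32> @splat_lane1(<4 x i32> %v) {
      ; a lane splat like this should map to "dup v0.4s, v0.s[1]"
      %s = shufflevector <4 x i32> %v, <4 x i32> undef,
                         <4 x i32> <i32 1, i32 1, i32 1, i32 1>
      ret <4 x i32> %s
    }
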
> @@ -3341,11 +3343,11 @@ defm : DUPWithTruncPats<v8i16, v2i32, v
>
> multiclass DUPWithTrunci64Pats<ValueType ResVT, Instruction DUP,
> SDNodeXForm IdxXFORM> {
> - def : Pat<(ResVT (ARM64dup (i32 (trunc (vector_extract (v2i64 V128:$Rn),
> + def : Pat<(ResVT (AArch64dup (i32 (trunc (vector_extract (v2i64 V128:$Rn),
> imm:$idx))))),
> (DUP V128:$Rn, (IdxXFORM imm:$idx))>;
>
> - def : Pat<(ResVT (ARM64dup (i32 (trunc (vector_extract (v1i64 V64:$Rn),
> + def : Pat<(ResVT (AArch64dup (i32 (trunc (vector_extract (v1i64 V64:$Rn),
> imm:$idx))))),
> (DUP (SUBREG_TO_REG (i64 0), V64:$Rn, dsub), (IdxXFORM imm:$idx))>;
> }
> @@ -3377,7 +3379,7 @@ def : Pat<(sext (i32 (vector_extract (v4
>
> // Extracting i8 or i16 elements will have the zero-extend transformed to
> // an 'and' mask by type legalization since neither i8 nor i16 are legal types
> -// for ARM64. Match these patterns here since UMOV already zeroes out the high
> +// for AArch64. Match these patterns here since UMOV already zeroes out the high
> // bits of the destination register.
> def : Pat<(and (vector_extract (v16i8 V128:$Rn), VectorIndexB:$idx),
> (i32 0xff)),
> @@ -3445,25 +3447,25 @@ def : Pat<(v2f64 (vector_insert (v2f64 V
> // element of another.
> // FIXME refactor to a shared class/dev parameterized on vector type, vector
> // index type and INS extension
> -def : Pat<(v16i8 (int_arm64_neon_vcopy_lane
> +def : Pat<(v16i8 (int_aarch64_neon_vcopy_lane
> (v16i8 V128:$Vd), VectorIndexB:$idx, (v16i8 V128:$Vs),
> VectorIndexB:$idx2)),
> (v16i8 (INSvi8lane
> V128:$Vd, VectorIndexB:$idx, V128:$Vs, VectorIndexB:$idx2)
> )>;
> -def : Pat<(v8i16 (int_arm64_neon_vcopy_lane
> +def : Pat<(v8i16 (int_aarch64_neon_vcopy_lane
> (v8i16 V128:$Vd), VectorIndexH:$idx, (v8i16 V128:$Vs),
> VectorIndexH:$idx2)),
> (v8i16 (INSvi16lane
> V128:$Vd, VectorIndexH:$idx, V128:$Vs, VectorIndexH:$idx2)
> )>;
> -def : Pat<(v4i32 (int_arm64_neon_vcopy_lane
> +def : Pat<(v4i32 (int_aarch64_neon_vcopy_lane
> (v4i32 V128:$Vd), VectorIndexS:$idx, (v4i32 V128:$Vs),
> VectorIndexS:$idx2)),
> (v4i32 (INSvi32lane
> V128:$Vd, VectorIndexS:$idx, V128:$Vs, VectorIndexS:$idx2)
> )>;
> -def : Pat<(v2i64 (int_arm64_neon_vcopy_lane
> +def : Pat<(v2i64 (int_aarch64_neon_vcopy_lane
> (v2i64 V128:$Vd), VectorIndexD:$idx, (v2i64 V128:$Vs),
> VectorIndexD:$idx2)),
> (v2i64 (INSvi64lane
> @@ -3526,7 +3528,7 @@ def : Pat<(vector_extract (v4f32 V128:$R
> ssub))>;
>
> // All concat_vectors operations are canonicalised to act on i64 vectors for
> -// ARM64. In the general case we need an instruction, which had just as well be
> +// AArch64. In the general case we need an instruction, which had just as well be
> // INS.
> class ConcatPat<ValueType DstTy, ValueType SrcTy>
> : Pat<(DstTy (concat_vectors (SrcTy V64:$Rd), V64:$Rn)),
> @@ -3563,10 +3565,10 @@ defm UMAXV : SIMDAcrossLanesBHS<1, 0b0
> defm UMINV : SIMDAcrossLanesBHS<1, 0b11010, "uminv">;
> defm SADDLV : SIMDAcrossLanesHSD<0, 0b00011, "saddlv">;
> defm UADDLV : SIMDAcrossLanesHSD<1, 0b00011, "uaddlv">;
> -defm FMAXNMV : SIMDAcrossLanesS<0b01100, 0, "fmaxnmv", int_arm64_neon_fmaxnmv>;
> -defm FMAXV : SIMDAcrossLanesS<0b01111, 0, "fmaxv", int_arm64_neon_fmaxv>;
> -defm FMINNMV : SIMDAcrossLanesS<0b01100, 1, "fminnmv", int_arm64_neon_fminnmv>;
> -defm FMINV : SIMDAcrossLanesS<0b01111, 1, "fminv", int_arm64_neon_fminv>;
> +defm FMAXNMV : SIMDAcrossLanesS<0b01100, 0, "fmaxnmv", int_aarch64_neon_fmaxnmv>;
> +defm FMAXV : SIMDAcrossLanesS<0b01111, 0, "fmaxv", int_aarch64_neon_fmaxv>;
> +defm FMINNMV : SIMDAcrossLanesS<0b01100, 1, "fminnmv", int_aarch64_neon_fminnmv>;
> +defm FMINV : SIMDAcrossLanesS<0b01111, 1, "fminv", int_aarch64_neon_fminv>;
>
> multiclass SIMDAcrossLanesSignedIntrinsic<string baseOpc, Intrinsic intOp> {
> // If there is a sign extension after this intrinsic, consume it as smov already
> @@ -3745,43 +3747,43 @@ def : Pat<(i64 (intOp (v4i32 V128:$Rn)))
> dsub))>;
> }
>
> -defm : SIMDAcrossLanesSignedIntrinsic<"ADDV", int_arm64_neon_saddv>;
> +defm : SIMDAcrossLanesSignedIntrinsic<"ADDV", int_aarch64_neon_saddv>;
> // vaddv_[su]32 is special; -> ADDP Vd.2S,Vn.2S,Vm.2S; return Vd.s[0];Vn==Vm
> -def : Pat<(i32 (int_arm64_neon_saddv (v2i32 V64:$Rn))),
> +def : Pat<(i32 (int_aarch64_neon_saddv (v2i32 V64:$Rn))),
> (EXTRACT_SUBREG (ADDPv2i32 V64:$Rn, V64:$Rn), ssub)>;
>
> -defm : SIMDAcrossLanesUnsignedIntrinsic<"ADDV", int_arm64_neon_uaddv>;
> +defm : SIMDAcrossLanesUnsignedIntrinsic<"ADDV", int_aarch64_neon_uaddv>;
> // vaddv_[su]32 is special; -> ADDP Vd.2S,Vn.2S,Vm.2S; return Vd.s[0];Vn==Vm
> -def : Pat<(i32 (int_arm64_neon_uaddv (v2i32 V64:$Rn))),
> +def : Pat<(i32 (int_aarch64_neon_uaddv (v2i32 V64:$Rn))),
> (EXTRACT_SUBREG (ADDPv2i32 V64:$Rn, V64:$Rn), ssub)>;
>
> -defm : SIMDAcrossLanesSignedIntrinsic<"SMAXV", int_arm64_neon_smaxv>;
> -def : Pat<(i32 (int_arm64_neon_smaxv (v2i32 V64:$Rn))),
> +defm : SIMDAcrossLanesSignedIntrinsic<"SMAXV", int_aarch64_neon_smaxv>;
> +def : Pat<(i32 (int_aarch64_neon_smaxv (v2i32 V64:$Rn))),
> (EXTRACT_SUBREG (SMAXPv2i32 V64:$Rn, V64:$Rn), ssub)>;
>
> -defm : SIMDAcrossLanesSignedIntrinsic<"SMINV", int_arm64_neon_sminv>;
> -def : Pat<(i32 (int_arm64_neon_sminv (v2i32 V64:$Rn))),
> +defm : SIMDAcrossLanesSignedIntrinsic<"SMINV", int_aarch64_neon_sminv>;
> +def : Pat<(i32 (int_aarch64_neon_sminv (v2i32 V64:$Rn))),
> (EXTRACT_SUBREG (SMINPv2i32 V64:$Rn, V64:$Rn), ssub)>;
>
> -defm : SIMDAcrossLanesUnsignedIntrinsic<"UMAXV", int_arm64_neon_umaxv>;
> -def : Pat<(i32 (int_arm64_neon_umaxv (v2i32 V64:$Rn))),
> +defm : SIMDAcrossLanesUnsignedIntrinsic<"UMAXV", int_aarch64_neon_umaxv>;
> +def : Pat<(i32 (int_aarch64_neon_umaxv (v2i32 V64:$Rn))),
> (EXTRACT_SUBREG (UMAXPv2i32 V64:$Rn, V64:$Rn), ssub)>;
>
> -defm : SIMDAcrossLanesUnsignedIntrinsic<"UMINV", int_arm64_neon_uminv>;
> -def : Pat<(i32 (int_arm64_neon_uminv (v2i32 V64:$Rn))),
> +defm : SIMDAcrossLanesUnsignedIntrinsic<"UMINV", int_aarch64_neon_uminv>;
> +def : Pat<(i32 (int_aarch64_neon_uminv (v2i32 V64:$Rn))),
> (EXTRACT_SUBREG (UMINPv2i32 V64:$Rn, V64:$Rn), ssub)>;
>
> -defm : SIMDAcrossLanesSignedLongIntrinsic<"SADDLV", int_arm64_neon_saddlv>;
> -defm : SIMDAcrossLanesUnsignedLongIntrinsic<"UADDLV", int_arm64_neon_uaddlv>;
> +defm : SIMDAcrossLanesSignedLongIntrinsic<"SADDLV", int_aarch64_neon_saddlv>;
> +defm : SIMDAcrossLanesUnsignedLongIntrinsic<"UADDLV", int_aarch64_neon_uaddlv>;
>
> // The vaddlv_s32 intrinsic gets mapped to SADDLP.
> -def : Pat<(i64 (int_arm64_neon_saddlv (v2i32 V64:$Rn))),
> +def : Pat<(i64 (int_aarch64_neon_saddlv (v2i32 V64:$Rn))),
> (i64 (EXTRACT_SUBREG
> (INSERT_SUBREG (v16i8 (IMPLICIT_DEF)),
> (SADDLPv2i32_v1i64 V64:$Rn), dsub),
> dsub))>;
> // The vaddlv_u32 intrinsic gets mapped to UADDLP.
> -def : Pat<(i64 (int_arm64_neon_uaddlv (v2i32 V64:$Rn))),
> +def : Pat<(i64 (int_aarch64_neon_uaddlv (v2i32 V64:$Rn))),
> (i64 (EXTRACT_SUBREG
> (INSERT_SUBREG (v16i8 (IMPLICIT_DEF)),
> (UADDLPv2i32_v1i64 V64:$Rn), dsub),
> @@ -3792,9 +3794,9 @@ def : Pat<(i64 (int_arm64_neon_uaddlv (v
> //------------------------------------------------------------------------------
>
> // AdvSIMD BIC
> -defm BIC : SIMDModifiedImmVectorShiftTied<1, 0b11, 0b01, "bic", ARM64bici>;
> +defm BIC : SIMDModifiedImmVectorShiftTied<1, 0b11, 0b01, "bic", AArch64bici>;
> // AdvSIMD ORR
> -defm ORR : SIMDModifiedImmVectorShiftTied<0, 0b11, 0b01, "orr", ARM64orri>;
> +defm ORR : SIMDModifiedImmVectorShiftTied<0, 0b11, 0b01, "orr", AArch64orri>;
>
> def : InstAlias<"bic $Vd.4h, $imm", (BICv4i16 V64:$Vd, imm0_255:$imm, 0)>;
> def : InstAlias<"bic $Vd.8h, $imm", (BICv8i16 V128:$Vd, imm0_255:$imm, 0)>;
> @@ -3819,13 +3821,13 @@ def : InstAlias<"orr.4s $Vd, $imm", (ORR
> // AdvSIMD FMOV
> def FMOVv2f64_ns : SIMDModifiedImmVectorNoShift<1, 1, 0b1111, V128, fpimm8,
> "fmov", ".2d",
> - [(set (v2f64 V128:$Rd), (ARM64fmov imm0_255:$imm8))]>;
> + [(set (v2f64 V128:$Rd), (AArch64fmov imm0_255:$imm8))]>;
> def FMOVv2f32_ns : SIMDModifiedImmVectorNoShift<0, 0, 0b1111, V64, fpimm8,
> "fmov", ".2s",
> - [(set (v2f32 V64:$Rd), (ARM64fmov imm0_255:$imm8))]>;
> + [(set (v2f32 V64:$Rd), (AArch64fmov imm0_255:$imm8))]>;
> def FMOVv4f32_ns : SIMDModifiedImmVectorNoShift<1, 0, 0b1111, V128, fpimm8,
> "fmov", ".4s",
> - [(set (v4f32 V128:$Rd), (ARM64fmov imm0_255:$imm8))]>;
> + [(set (v4f32 V128:$Rd), (AArch64fmov imm0_255:$imm8))]>;
>
> // AdvSIMD MOVI
>
> @@ -3835,7 +3837,7 @@ def MOVID : SIMDModifiedImmScalarNo
> [(set FPR64:$Rd, simdimmtype10:$imm8)]>;
> // The movi_edit node has the immediate value already encoded, so we use
> // a plain imm0_255 here.
> -def : Pat<(f64 (ARM64movi_edit imm0_255:$shift)),
> +def : Pat<(f64 (AArch64movi_edit imm0_255:$shift)),
> (MOVID imm0_255:$shift)>;
>
> def : Pat<(v1i64 immAllZerosV), (MOVID (i32 0))>;
> @@ -3856,7 +3858,7 @@ let isReMaterializable = 1, isAsCheapAsA
> def MOVIv2d_ns : SIMDModifiedImmVectorNoShift<1, 1, 0b1110, V128,
> simdimmtype10,
> "movi", ".2d",
> - [(set (v2i64 V128:$Rd), (ARM64movi_edit imm0_255:$imm8))]>;
> + [(set (v2i64 V128:$Rd), (AArch64movi_edit imm0_255:$imm8))]>;
>
>
> // Use movi.2d to materialize 0.0 if the HW does zero-cycle zeroing.
> @@ -3880,8 +3882,8 @@ def : Pat<(v4i32 immAllOnesV), (MOVIv2d_
> def : Pat<(v8i16 immAllOnesV), (MOVIv2d_ns (i32 255))>;
> def : Pat<(v16i8 immAllOnesV), (MOVIv2d_ns (i32 255))>;
>
> -def : Pat<(v2f64 (ARM64dup (f64 fpimm0))), (MOVIv2d_ns (i32 0))>;
> -def : Pat<(v4f32 (ARM64dup (f32 fpimm0))), (MOVIv2d_ns (i32 0))>;
> +def : Pat<(v2f64 (AArch64dup (f64 fpimm0))), (MOVIv2d_ns (i32 0))>;
> +def : Pat<(v4f32 (AArch64dup (f32 fpimm0))), (MOVIv2d_ns (i32 0))>;
>
> // EDIT per word & halfword: 2s, 4h, 4s, & 8h
> defm MOVI : SIMDModifiedImmVectorShift<0, 0b10, 0b00, "movi">;
> @@ -3896,30 +3898,30 @@ def : InstAlias<"movi.8h $Vd, $imm", (MO
> def : InstAlias<"movi.2s $Vd, $imm", (MOVIv2i32 V64:$Vd, imm0_255:$imm, 0), 0>;
> def : InstAlias<"movi.4s $Vd, $imm", (MOVIv4i32 V128:$Vd, imm0_255:$imm, 0), 0>;
>
> -def : Pat<(v2i32 (ARM64movi_shift imm0_255:$imm8, (i32 imm:$shift))),
> +def : Pat<(v2i32 (AArch64movi_shift imm0_255:$imm8, (i32 imm:$shift))),
> (MOVIv2i32 imm0_255:$imm8, imm:$shift)>;
> -def : Pat<(v4i32 (ARM64movi_shift imm0_255:$imm8, (i32 imm:$shift))),
> +def : Pat<(v4i32 (AArch64movi_shift imm0_255:$imm8, (i32 imm:$shift))),
> (MOVIv4i32 imm0_255:$imm8, imm:$shift)>;
> -def : Pat<(v4i16 (ARM64movi_shift imm0_255:$imm8, (i32 imm:$shift))),
> +def : Pat<(v4i16 (AArch64movi_shift imm0_255:$imm8, (i32 imm:$shift))),
> (MOVIv4i16 imm0_255:$imm8, imm:$shift)>;
> -def : Pat<(v8i16 (ARM64movi_shift imm0_255:$imm8, (i32 imm:$shift))),
> +def : Pat<(v8i16 (AArch64movi_shift imm0_255:$imm8, (i32 imm:$shift))),
> (MOVIv8i16 imm0_255:$imm8, imm:$shift)>;
>
> // EDIT per word: 2s & 4s with MSL shifter
> def MOVIv2s_msl : SIMDModifiedImmMoveMSL<0, 0, {1,1,0,?}, V64, "movi", ".2s",
> [(set (v2i32 V64:$Rd),
> - (ARM64movi_msl imm0_255:$imm8, (i32 imm:$shift)))]>;
> + (AArch64movi_msl imm0_255:$imm8, (i32 imm:$shift)))]>;
> def MOVIv4s_msl : SIMDModifiedImmMoveMSL<1, 0, {1,1,0,?}, V128, "movi", ".4s",
> [(set (v4i32 V128:$Rd),
> - (ARM64movi_msl imm0_255:$imm8, (i32 imm:$shift)))]>;
> + (AArch64movi_msl imm0_255:$imm8, (i32 imm:$shift)))]>;
>
> // Per byte: 8b & 16b
> def MOVIv8b_ns : SIMDModifiedImmVectorNoShift<0, 0, 0b1110, V64, imm0_255,
> "movi", ".8b",
> - [(set (v8i8 V64:$Rd), (ARM64movi imm0_255:$imm8))]>;
> + [(set (v8i8 V64:$Rd), (AArch64movi imm0_255:$imm8))]>;
> def MOVIv16b_ns : SIMDModifiedImmVectorNoShift<1, 0, 0b1110, V128, imm0_255,
> "movi", ".16b",
> - [(set (v16i8 V128:$Rd), (ARM64movi imm0_255:$imm8))]>;
> + [(set (v16i8 V128:$Rd), (AArch64movi imm0_255:$imm8))]>;
>
> // AdvSIMD MVNI
>
> @@ -3936,22 +3938,22 @@ def : InstAlias<"mvni.8h $Vd, $imm", (MV
> def : InstAlias<"mvni.2s $Vd, $imm", (MVNIv2i32 V64:$Vd, imm0_255:$imm, 0), 0>;
> def : InstAlias<"mvni.4s $Vd, $imm", (MVNIv4i32 V128:$Vd, imm0_255:$imm, 0), 0>;
>
> -def : Pat<(v2i32 (ARM64mvni_shift imm0_255:$imm8, (i32 imm:$shift))),
> +def : Pat<(v2i32 (AArch64mvni_shift imm0_255:$imm8, (i32 imm:$shift))),
> (MVNIv2i32 imm0_255:$imm8, imm:$shift)>;
> -def : Pat<(v4i32 (ARM64mvni_shift imm0_255:$imm8, (i32 imm:$shift))),
> +def : Pat<(v4i32 (AArch64mvni_shift imm0_255:$imm8, (i32 imm:$shift))),
> (MVNIv4i32 imm0_255:$imm8, imm:$shift)>;
> -def : Pat<(v4i16 (ARM64mvni_shift imm0_255:$imm8, (i32 imm:$shift))),
> +def : Pat<(v4i16 (AArch64mvni_shift imm0_255:$imm8, (i32 imm:$shift))),
> (MVNIv4i16 imm0_255:$imm8, imm:$shift)>;
> -def : Pat<(v8i16 (ARM64mvni_shift imm0_255:$imm8, (i32 imm:$shift))),
> +def : Pat<(v8i16 (AArch64mvni_shift imm0_255:$imm8, (i32 imm:$shift))),
> (MVNIv8i16 imm0_255:$imm8, imm:$shift)>;
>
> // EDIT per word: 2s & 4s with MSL shifter
> def MVNIv2s_msl : SIMDModifiedImmMoveMSL<0, 1, {1,1,0,?}, V64, "mvni", ".2s",
> [(set (v2i32 V64:$Rd),
> - (ARM64mvni_msl imm0_255:$imm8, (i32 imm:$shift)))]>;
> + (AArch64mvni_msl imm0_255:$imm8, (i32 imm:$shift)))]>;
> def MVNIv4s_msl : SIMDModifiedImmMoveMSL<1, 1, {1,1,0,?}, V128, "mvni", ".4s",
> [(set (v4i32 V128:$Rd),
> - (ARM64mvni_msl imm0_255:$imm8, (i32 imm:$shift)))]>;
> + (AArch64mvni_msl imm0_255:$imm8, (i32 imm:$shift)))]>;
>
> //----------------------------------------------------------------------------
> // AdvSIMD indexed element
> @@ -3985,11 +3987,11 @@ multiclass FMLSIndexedAfterNegPatterns<S
> // 3 variants for the .2s version: DUPLANE from 128-bit, DUPLANE from 64-bit
> // and DUP scalar.
> def : Pat<(v2f32 (OpNode (v2f32 V64:$Rd), (v2f32 V64:$Rn),
> - (ARM64duplane32 (v4f32 (fneg V128:$Rm)),
> + (AArch64duplane32 (v4f32 (fneg V128:$Rm)),
> VectorIndexS:$idx))),
> (FMLSv2i32_indexed V64:$Rd, V64:$Rn, V128:$Rm, VectorIndexS:$idx)>;
> def : Pat<(v2f32 (OpNode (v2f32 V64:$Rd), (v2f32 V64:$Rn),
> - (v2f32 (ARM64duplane32
> + (v2f32 (AArch64duplane32
> (v4f32 (insert_subvector undef,
> (v2f32 (fneg V64:$Rm)),
> (i32 0))),
> @@ -3998,19 +4000,19 @@ multiclass FMLSIndexedAfterNegPatterns<S
> (SUBREG_TO_REG (i32 0), V64:$Rm, dsub),
> VectorIndexS:$idx)>;
> def : Pat<(v2f32 (OpNode (v2f32 V64:$Rd), (v2f32 V64:$Rn),
> - (ARM64dup (f32 (fneg FPR32Op:$Rm))))),
> + (AArch64dup (f32 (fneg FPR32Op:$Rm))))),
> (FMLSv2i32_indexed V64:$Rd, V64:$Rn,
> (SUBREG_TO_REG (i32 0), FPR32Op:$Rm, ssub), (i64 0))>;
>
> // 3 variants for the .4s version: DUPLANE from 128-bit, DUPLANE from 64-bit
> // and DUP scalar.
> def : Pat<(v4f32 (OpNode (v4f32 V128:$Rd), (v4f32 V128:$Rn),
> - (ARM64duplane32 (v4f32 (fneg V128:$Rm)),
> + (AArch64duplane32 (v4f32 (fneg V128:$Rm)),
> VectorIndexS:$idx))),
> (FMLSv4i32_indexed V128:$Rd, V128:$Rn, V128:$Rm,
> VectorIndexS:$idx)>;
> def : Pat<(v4f32 (OpNode (v4f32 V128:$Rd), (v4f32 V128:$Rn),
> - (v4f32 (ARM64duplane32
> + (v4f32 (AArch64duplane32
> (v4f32 (insert_subvector undef,
> (v2f32 (fneg V64:$Rm)),
> (i32 0))),
> @@ -4019,19 +4021,19 @@ multiclass FMLSIndexedAfterNegPatterns<S
> (SUBREG_TO_REG (i32 0), V64:$Rm, dsub),
> VectorIndexS:$idx)>;
> def : Pat<(v4f32 (OpNode (v4f32 V128:$Rd), (v4f32 V128:$Rn),
> - (ARM64dup (f32 (fneg FPR32Op:$Rm))))),
> + (AArch64dup (f32 (fneg FPR32Op:$Rm))))),
> (FMLSv4i32_indexed V128:$Rd, V128:$Rn,
> (SUBREG_TO_REG (i32 0), FPR32Op:$Rm, ssub), (i64 0))>;
>
> // 2 variants for the .2d version: DUPLANE from 128-bit, and DUP scalar
> // (DUPLANE from 64-bit would be trivial).
> def : Pat<(v2f64 (OpNode (v2f64 V128:$Rd), (v2f64 V128:$Rn),
> - (ARM64duplane64 (v2f64 (fneg V128:$Rm)),
> + (AArch64duplane64 (v2f64 (fneg V128:$Rm)),
> VectorIndexD:$idx))),
> (FMLSv2i64_indexed
> V128:$Rd, V128:$Rn, V128:$Rm, VectorIndexS:$idx)>;
> def : Pat<(v2f64 (OpNode (v2f64 V128:$Rd), (v2f64 V128:$Rn),
> - (ARM64dup (f64 (fneg FPR64Op:$Rm))))),
> + (AArch64dup (f64 (fneg FPR64Op:$Rm))))),
> (FMLSv2i64_indexed V128:$Rd, V128:$Rn,
> (SUBREG_TO_REG (i32 0), FPR64Op:$Rm, dsub), (i64 0))>;
>
> @@ -4060,50 +4062,50 @@ defm : FMLSIndexedAfterNegPatterns<
> defm : FMLSIndexedAfterNegPatterns<
> TriOpFrag<(fma node:$MHS, node:$RHS, node:$LHS)> >;
>
> -defm FMULX : SIMDFPIndexedSD<1, 0b1001, "fmulx", int_arm64_neon_fmulx>;
> +defm FMULX : SIMDFPIndexedSD<1, 0b1001, "fmulx", int_aarch64_neon_fmulx>;
> defm FMUL : SIMDFPIndexedSD<0, 0b1001, "fmul", fmul>;
>
> -def : Pat<(v2f32 (fmul V64:$Rn, (ARM64dup (f32 FPR32:$Rm)))),
> +def : Pat<(v2f32 (fmul V64:$Rn, (AArch64dup (f32 FPR32:$Rm)))),
> (FMULv2i32_indexed V64:$Rn,
> (INSERT_SUBREG (v4i32 (IMPLICIT_DEF)), FPR32:$Rm, ssub),
> (i64 0))>;
> -def : Pat<(v4f32 (fmul V128:$Rn, (ARM64dup (f32 FPR32:$Rm)))),
> +def : Pat<(v4f32 (fmul V128:$Rn, (AArch64dup (f32 FPR32:$Rm)))),
> (FMULv4i32_indexed V128:$Rn,
> (INSERT_SUBREG (v4i32 (IMPLICIT_DEF)), FPR32:$Rm, ssub),
> (i64 0))>;
> -def : Pat<(v2f64 (fmul V128:$Rn, (ARM64dup (f64 FPR64:$Rm)))),
> +def : Pat<(v2f64 (fmul V128:$Rn, (AArch64dup (f64 FPR64:$Rm)))),
> (FMULv2i64_indexed V128:$Rn,
> (INSERT_SUBREG (v4i32 (IMPLICIT_DEF)), FPR64:$Rm, dsub),
> (i64 0))>;
>
> -defm SQDMULH : SIMDIndexedHS<0, 0b1100, "sqdmulh", int_arm64_neon_sqdmulh>;
> -defm SQRDMULH : SIMDIndexedHS<0, 0b1101, "sqrdmulh", int_arm64_neon_sqrdmulh>;
> +defm SQDMULH : SIMDIndexedHS<0, 0b1100, "sqdmulh", int_aarch64_neon_sqdmulh>;
> +defm SQRDMULH : SIMDIndexedHS<0, 0b1101, "sqrdmulh", int_aarch64_neon_sqrdmulh>;
> defm MLA : SIMDVectorIndexedHSTied<1, 0b0000, "mla",
> TriOpFrag<(add node:$LHS, (mul node:$MHS, node:$RHS))>>;
> defm MLS : SIMDVectorIndexedHSTied<1, 0b0100, "mls",
> TriOpFrag<(sub node:$LHS, (mul node:$MHS, node:$RHS))>>;
> defm MUL : SIMDVectorIndexedHS<0, 0b1000, "mul", mul>;
> defm SMLAL : SIMDVectorIndexedLongSDTied<0, 0b0010, "smlal",
> - TriOpFrag<(add node:$LHS, (int_arm64_neon_smull node:$MHS, node:$RHS))>>;
> + TriOpFrag<(add node:$LHS, (int_aarch64_neon_smull node:$MHS, node:$RHS))>>;
> defm SMLSL : SIMDVectorIndexedLongSDTied<0, 0b0110, "smlsl",
> - TriOpFrag<(sub node:$LHS, (int_arm64_neon_smull node:$MHS, node:$RHS))>>;
> + TriOpFrag<(sub node:$LHS, (int_aarch64_neon_smull node:$MHS, node:$RHS))>>;
> defm SMULL : SIMDVectorIndexedLongSD<0, 0b1010, "smull",
> - int_arm64_neon_smull>;
> + int_aarch64_neon_smull>;
> defm SQDMLAL : SIMDIndexedLongSQDMLXSDTied<0, 0b0011, "sqdmlal",
> - int_arm64_neon_sqadd>;
> + int_aarch64_neon_sqadd>;
> defm SQDMLSL : SIMDIndexedLongSQDMLXSDTied<0, 0b0111, "sqdmlsl",
> - int_arm64_neon_sqsub>;
> -defm SQDMULL : SIMDIndexedLongSD<0, 0b1011, "sqdmull", int_arm64_neon_sqdmull>;
> + int_aarch64_neon_sqsub>;
> +defm SQDMULL : SIMDIndexedLongSD<0, 0b1011, "sqdmull", int_aarch64_neon_sqdmull>;
> defm UMLAL : SIMDVectorIndexedLongSDTied<1, 0b0010, "umlal",
> - TriOpFrag<(add node:$LHS, (int_arm64_neon_umull node:$MHS, node:$RHS))>>;
> + TriOpFrag<(add node:$LHS, (int_aarch64_neon_umull node:$MHS, node:$RHS))>>;
> defm UMLSL : SIMDVectorIndexedLongSDTied<1, 0b0110, "umlsl",
> - TriOpFrag<(sub node:$LHS, (int_arm64_neon_umull node:$MHS, node:$RHS))>>;
> + TriOpFrag<(sub node:$LHS, (int_aarch64_neon_umull node:$MHS, node:$RHS))>>;
> defm UMULL : SIMDVectorIndexedLongSD<1, 0b1010, "umull",
> - int_arm64_neon_umull>;
> + int_aarch64_neon_umull>;
>
> // A scalar sqdmull with the second operand being a vector lane can be
> // handled directly with the indexed instruction encoding.
> -def : Pat<(int_arm64_neon_sqdmulls_scalar (i32 FPR32:$Rn),
> +def : Pat<(int_aarch64_neon_sqdmulls_scalar (i32 FPR32:$Rn),
> (vector_extract (v4i32 V128:$Vm),
> VectorIndexS:$idx)),
> (SQDMULLv1i64_indexed FPR32:$Rn, V128:$Vm, VectorIndexS:$idx)>;
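
One last sketch, for the scalar sqdmull-by-lane fold described in the comment above; treat the intrinsic declaration as an assumption on my part:

    declare i64 @llvm.aarch64.neon.sqdmulls.scalar(i32, i32)

    define i64 @sqdmull_lane(i32 %a, <4 x i32> %v) {
      ; the Pat<> above should fold the extractelement into the indexed SQDMULL
      %b = extractelement <4 x i32> %v, i32 3
      %r = call i64 @llvm.aarch64.neon.sqdmulls.scalar(i32 %a, i32 %b)
      ret i64 %r
    }
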
> @@ -4118,149 +4120,149 @@ defm UCVTF : SIMDScalarRShiftSD<1, 0b11
> // Codegen patterns for the above. We don't put these directly on the
> // instructions because TableGen's type inference can't handle the truth.
> // Having the same base pattern for fp <--> int totally freaks it out.
> -def : Pat<(int_arm64_neon_vcvtfp2fxs FPR32:$Rn, vecshiftR32:$imm),
> +def : Pat<(int_aarch64_neon_vcvtfp2fxs FPR32:$Rn, vecshiftR32:$imm),
> (FCVTZSs FPR32:$Rn, vecshiftR32:$imm)>;
> -def : Pat<(int_arm64_neon_vcvtfp2fxu FPR32:$Rn, vecshiftR32:$imm),
> +def : Pat<(int_aarch64_neon_vcvtfp2fxu FPR32:$Rn, vecshiftR32:$imm),
> (FCVTZUs FPR32:$Rn, vecshiftR32:$imm)>;
> -def : Pat<(i64 (int_arm64_neon_vcvtfp2fxs (f64 FPR64:$Rn), vecshiftR64:$imm)),
> +def : Pat<(i64 (int_aarch64_neon_vcvtfp2fxs (f64 FPR64:$Rn), vecshiftR64:$imm)),
> (FCVTZSd FPR64:$Rn, vecshiftR64:$imm)>;
> -def : Pat<(i64 (int_arm64_neon_vcvtfp2fxu (f64 FPR64:$Rn), vecshiftR64:$imm)),
> +def : Pat<(i64 (int_aarch64_neon_vcvtfp2fxu (f64 FPR64:$Rn), vecshiftR64:$imm)),
> (FCVTZUd FPR64:$Rn, vecshiftR64:$imm)>;
> -def : Pat<(v1i64 (int_arm64_neon_vcvtfp2fxs (v1f64 FPR64:$Rn),
> +def : Pat<(v1i64 (int_aarch64_neon_vcvtfp2fxs (v1f64 FPR64:$Rn),
> vecshiftR64:$imm)),
> (FCVTZSd FPR64:$Rn, vecshiftR64:$imm)>;
> -def : Pat<(v1i64 (int_arm64_neon_vcvtfp2fxu (v1f64 FPR64:$Rn),
> +def : Pat<(v1i64 (int_aarch64_neon_vcvtfp2fxu (v1f64 FPR64:$Rn),
> vecshiftR64:$imm)),
> (FCVTZUd FPR64:$Rn, vecshiftR64:$imm)>;
> -def : Pat<(int_arm64_neon_vcvtfxs2fp FPR32:$Rn, vecshiftR32:$imm),
> +def : Pat<(int_aarch64_neon_vcvtfxs2fp FPR32:$Rn, vecshiftR32:$imm),
> (SCVTFs FPR32:$Rn, vecshiftR32:$imm)>;
> -def : Pat<(int_arm64_neon_vcvtfxu2fp FPR32:$Rn, vecshiftR32:$imm),
> +def : Pat<(int_aarch64_neon_vcvtfxu2fp FPR32:$Rn, vecshiftR32:$imm),
> (UCVTFs FPR32:$Rn, vecshiftR32:$imm)>;
> -def : Pat<(f64 (int_arm64_neon_vcvtfxs2fp (i64 FPR64:$Rn), vecshiftR64:$imm)),
> +def : Pat<(f64 (int_aarch64_neon_vcvtfxs2fp (i64 FPR64:$Rn), vecshiftR64:$imm)),
> (SCVTFd FPR64:$Rn, vecshiftR64:$imm)>;
> -def : Pat<(f64 (int_arm64_neon_vcvtfxu2fp (i64 FPR64:$Rn), vecshiftR64:$imm)),
> +def : Pat<(f64 (int_aarch64_neon_vcvtfxu2fp (i64 FPR64:$Rn), vecshiftR64:$imm)),
> (UCVTFd FPR64:$Rn, vecshiftR64:$imm)>;
> -def : Pat<(v1f64 (int_arm64_neon_vcvtfxs2fp (v1i64 FPR64:$Rn),
> +def : Pat<(v1f64 (int_aarch64_neon_vcvtfxs2fp (v1i64 FPR64:$Rn),
> vecshiftR64:$imm)),
> (SCVTFd FPR64:$Rn, vecshiftR64:$imm)>;
> -def : Pat<(v1f64 (int_arm64_neon_vcvtfxu2fp (v1i64 FPR64:$Rn),
> +def : Pat<(v1f64 (int_aarch64_neon_vcvtfxu2fp (v1i64 FPR64:$Rn),
> vecshiftR64:$imm)),
> (UCVTFd FPR64:$Rn, vecshiftR64:$imm)>;
>
> -defm SHL : SIMDScalarLShiftD< 0, 0b01010, "shl", ARM64vshl>;
> +defm SHL : SIMDScalarLShiftD< 0, 0b01010, "shl", AArch64vshl>;
> defm SLI : SIMDScalarLShiftDTied<1, 0b01010, "sli">;
> defm SQRSHRN : SIMDScalarRShiftBHS< 0, 0b10011, "sqrshrn",
> - int_arm64_neon_sqrshrn>;
> + int_aarch64_neon_sqrshrn>;
> defm SQRSHRUN : SIMDScalarRShiftBHS< 1, 0b10001, "sqrshrun",
> - int_arm64_neon_sqrshrun>;
> -defm SQSHLU : SIMDScalarLShiftBHSD<1, 0b01100, "sqshlu", ARM64sqshlui>;
> -defm SQSHL : SIMDScalarLShiftBHSD<0, 0b01110, "sqshl", ARM64sqshli>;
> + int_aarch64_neon_sqrshrun>;
> +defm SQSHLU : SIMDScalarLShiftBHSD<1, 0b01100, "sqshlu", AArch64sqshlui>;
> +defm SQSHL : SIMDScalarLShiftBHSD<0, 0b01110, "sqshl", AArch64sqshli>;
> defm SQSHRN : SIMDScalarRShiftBHS< 0, 0b10010, "sqshrn",
> - int_arm64_neon_sqshrn>;
> + int_aarch64_neon_sqshrn>;
> defm SQSHRUN : SIMDScalarRShiftBHS< 1, 0b10000, "sqshrun",
> - int_arm64_neon_sqshrun>;
> + int_aarch64_neon_sqshrun>;
> defm SRI : SIMDScalarRShiftDTied< 1, 0b01000, "sri">;
> -defm SRSHR : SIMDScalarRShiftD< 0, 0b00100, "srshr", ARM64srshri>;
> +defm SRSHR : SIMDScalarRShiftD< 0, 0b00100, "srshr", AArch64srshri>;
> defm SRSRA : SIMDScalarRShiftDTied< 0, 0b00110, "srsra",
> TriOpFrag<(add node:$LHS,
> - (ARM64srshri node:$MHS, node:$RHS))>>;
> -defm SSHR : SIMDScalarRShiftD< 0, 0b00000, "sshr", ARM64vashr>;
> + (AArch64srshri node:$MHS, node:$RHS))>>;
> +defm SSHR : SIMDScalarRShiftD< 0, 0b00000, "sshr", AArch64vashr>;
> defm SSRA : SIMDScalarRShiftDTied< 0, 0b00010, "ssra",
> TriOpFrag<(add node:$LHS,
> - (ARM64vashr node:$MHS, node:$RHS))>>;
> + (AArch64vashr node:$MHS, node:$RHS))>>;
> defm UQRSHRN : SIMDScalarRShiftBHS< 1, 0b10011, "uqrshrn",
> - int_arm64_neon_uqrshrn>;
> -defm UQSHL : SIMDScalarLShiftBHSD<1, 0b01110, "uqshl", ARM64uqshli>;
> + int_aarch64_neon_uqrshrn>;
> +defm UQSHL : SIMDScalarLShiftBHSD<1, 0b01110, "uqshl", AArch64uqshli>;
> defm UQSHRN : SIMDScalarRShiftBHS< 1, 0b10010, "uqshrn",
> - int_arm64_neon_uqshrn>;
> -defm URSHR : SIMDScalarRShiftD< 1, 0b00100, "urshr", ARM64urshri>;
> + int_aarch64_neon_uqshrn>;
> +defm URSHR : SIMDScalarRShiftD< 1, 0b00100, "urshr", AArch64urshri>;
> defm URSRA : SIMDScalarRShiftDTied< 1, 0b00110, "ursra",
> TriOpFrag<(add node:$LHS,
> - (ARM64urshri node:$MHS, node:$RHS))>>;
> -defm USHR : SIMDScalarRShiftD< 1, 0b00000, "ushr", ARM64vlshr>;
> + (AArch64urshri node:$MHS, node:$RHS))>>;
> +defm USHR : SIMDScalarRShiftD< 1, 0b00000, "ushr", AArch64vlshr>;
> defm USRA : SIMDScalarRShiftDTied< 1, 0b00010, "usra",
> TriOpFrag<(add node:$LHS,
> - (ARM64vlshr node:$MHS, node:$RHS))>>;
> + (AArch64vlshr node:$MHS, node:$RHS))>>;
>
> //----------------------------------------------------------------------------
> // AdvSIMD vector shift instructions
> //----------------------------------------------------------------------------
> -defm FCVTZS:SIMDVectorRShiftSD<0, 0b11111, "fcvtzs", int_arm64_neon_vcvtfp2fxs>;
> -defm FCVTZU:SIMDVectorRShiftSD<1, 0b11111, "fcvtzu", int_arm64_neon_vcvtfp2fxu>;
> +defm FCVTZS:SIMDVectorRShiftSD<0, 0b11111, "fcvtzs", int_aarch64_neon_vcvtfp2fxs>;
> +defm FCVTZU:SIMDVectorRShiftSD<1, 0b11111, "fcvtzu", int_aarch64_neon_vcvtfp2fxu>;
> defm SCVTF: SIMDVectorRShiftSDToFP<0, 0b11100, "scvtf",
> - int_arm64_neon_vcvtfxs2fp>;
> + int_aarch64_neon_vcvtfxs2fp>;
> defm RSHRN : SIMDVectorRShiftNarrowBHS<0, 0b10001, "rshrn",
> - int_arm64_neon_rshrn>;
> -defm SHL : SIMDVectorLShiftBHSD<0, 0b01010, "shl", ARM64vshl>;
> + int_aarch64_neon_rshrn>;
> +defm SHL : SIMDVectorLShiftBHSD<0, 0b01010, "shl", AArch64vshl>;
> defm SHRN : SIMDVectorRShiftNarrowBHS<0, 0b10000, "shrn",
> - BinOpFrag<(trunc (ARM64vashr node:$LHS, node:$RHS))>>;
> -defm SLI : SIMDVectorLShiftBHSDTied<1, 0b01010, "sli", int_arm64_neon_vsli>;
> -def : Pat<(v1i64 (int_arm64_neon_vsli (v1i64 FPR64:$Rd), (v1i64 FPR64:$Rn),
> + BinOpFrag<(trunc (AArch64vashr node:$LHS, node:$RHS))>>;
> +defm SLI : SIMDVectorLShiftBHSDTied<1, 0b01010, "sli", int_aarch64_neon_vsli>;
> +def : Pat<(v1i64 (int_aarch64_neon_vsli (v1i64 FPR64:$Rd), (v1i64 FPR64:$Rn),
> (i32 vecshiftL64:$imm))),
> (SLId FPR64:$Rd, FPR64:$Rn, vecshiftL64:$imm)>;
> defm SQRSHRN : SIMDVectorRShiftNarrowBHS<0, 0b10011, "sqrshrn",
> - int_arm64_neon_sqrshrn>;
> + int_aarch64_neon_sqrshrn>;
> defm SQRSHRUN: SIMDVectorRShiftNarrowBHS<1, 0b10001, "sqrshrun",
> - int_arm64_neon_sqrshrun>;
> -defm SQSHLU : SIMDVectorLShiftBHSD<1, 0b01100, "sqshlu", ARM64sqshlui>;
> -defm SQSHL : SIMDVectorLShiftBHSD<0, 0b01110, "sqshl", ARM64sqshli>;
> + int_aarch64_neon_sqrshrun>;
> +defm SQSHLU : SIMDVectorLShiftBHSD<1, 0b01100, "sqshlu", AArch64sqshlui>;
> +defm SQSHL : SIMDVectorLShiftBHSD<0, 0b01110, "sqshl", AArch64sqshli>;
> defm SQSHRN : SIMDVectorRShiftNarrowBHS<0, 0b10010, "sqshrn",
> - int_arm64_neon_sqshrn>;
> + int_aarch64_neon_sqshrn>;
> defm SQSHRUN : SIMDVectorRShiftNarrowBHS<1, 0b10000, "sqshrun",
> - int_arm64_neon_sqshrun>;
> -defm SRI : SIMDVectorRShiftBHSDTied<1, 0b01000, "sri", int_arm64_neon_vsri>;
> -def : Pat<(v1i64 (int_arm64_neon_vsri (v1i64 FPR64:$Rd), (v1i64 FPR64:$Rn),
> + int_aarch64_neon_sqshrun>;
> +defm SRI : SIMDVectorRShiftBHSDTied<1, 0b01000, "sri", int_aarch64_neon_vsri>;
> +def : Pat<(v1i64 (int_aarch64_neon_vsri (v1i64 FPR64:$Rd), (v1i64 FPR64:$Rn),
> (i32 vecshiftR64:$imm))),
> (SRId FPR64:$Rd, FPR64:$Rn, vecshiftR64:$imm)>;
> -defm SRSHR : SIMDVectorRShiftBHSD<0, 0b00100, "srshr", ARM64srshri>;
> +defm SRSHR : SIMDVectorRShiftBHSD<0, 0b00100, "srshr", AArch64srshri>;
> defm SRSRA : SIMDVectorRShiftBHSDTied<0, 0b00110, "srsra",
> TriOpFrag<(add node:$LHS,
> - (ARM64srshri node:$MHS, node:$RHS))> >;
> + (AArch64srshri node:$MHS, node:$RHS))> >;
> defm SSHLL : SIMDVectorLShiftLongBHSD<0, 0b10100, "sshll",
> - BinOpFrag<(ARM64vshl (sext node:$LHS), node:$RHS)>>;
> + BinOpFrag<(AArch64vshl (sext node:$LHS), node:$RHS)>>;
>
> -defm SSHR : SIMDVectorRShiftBHSD<0, 0b00000, "sshr", ARM64vashr>;
> +defm SSHR : SIMDVectorRShiftBHSD<0, 0b00000, "sshr", AArch64vashr>;
> defm SSRA : SIMDVectorRShiftBHSDTied<0, 0b00010, "ssra",
> - TriOpFrag<(add node:$LHS, (ARM64vashr node:$MHS, node:$RHS))>>;
> + TriOpFrag<(add node:$LHS, (AArch64vashr node:$MHS, node:$RHS))>>;
> defm UCVTF : SIMDVectorRShiftSDToFP<1, 0b11100, "ucvtf",
> - int_arm64_neon_vcvtfxu2fp>;
> + int_aarch64_neon_vcvtfxu2fp>;
> defm UQRSHRN : SIMDVectorRShiftNarrowBHS<1, 0b10011, "uqrshrn",
> - int_arm64_neon_uqrshrn>;
> -defm UQSHL : SIMDVectorLShiftBHSD<1, 0b01110, "uqshl", ARM64uqshli>;
> + int_aarch64_neon_uqrshrn>;
> +defm UQSHL : SIMDVectorLShiftBHSD<1, 0b01110, "uqshl", AArch64uqshli>;
> defm UQSHRN : SIMDVectorRShiftNarrowBHS<1, 0b10010, "uqshrn",
> - int_arm64_neon_uqshrn>;
> -defm URSHR : SIMDVectorRShiftBHSD<1, 0b00100, "urshr", ARM64urshri>;
> + int_aarch64_neon_uqshrn>;
> +defm URSHR : SIMDVectorRShiftBHSD<1, 0b00100, "urshr", AArch64urshri>;
> defm URSRA : SIMDVectorRShiftBHSDTied<1, 0b00110, "ursra",
> TriOpFrag<(add node:$LHS,
> - (ARM64urshri node:$MHS, node:$RHS))> >;
> + (AArch64urshri node:$MHS, node:$RHS))> >;
> defm USHLL : SIMDVectorLShiftLongBHSD<1, 0b10100, "ushll",
> - BinOpFrag<(ARM64vshl (zext node:$LHS), node:$RHS)>>;
> -defm USHR : SIMDVectorRShiftBHSD<1, 0b00000, "ushr", ARM64vlshr>;
> + BinOpFrag<(AArch64vshl (zext node:$LHS), node:$RHS)>>;
> +defm USHR : SIMDVectorRShiftBHSD<1, 0b00000, "ushr", AArch64vlshr>;
> defm USRA : SIMDVectorRShiftBHSDTied<1, 0b00010, "usra",
> - TriOpFrag<(add node:$LHS, (ARM64vlshr node:$MHS, node:$RHS))> >;
> + TriOpFrag<(add node:$LHS, (AArch64vlshr node:$MHS, node:$RHS))> >;
>
> // SHRN patterns for when a logical right shift was used instead of arithmetic
> // (the immediate guarantees no sign bits actually end up in the result so it
> // doesn't matter).
> -def : Pat<(v8i8 (trunc (ARM64vlshr (v8i16 V128:$Rn), vecshiftR16Narrow:$imm))),
> +def : Pat<(v8i8 (trunc (AArch64vlshr (v8i16 V128:$Rn), vecshiftR16Narrow:$imm))),
> (SHRNv8i8_shift V128:$Rn, vecshiftR16Narrow:$imm)>;
> -def : Pat<(v4i16 (trunc (ARM64vlshr (v4i32 V128:$Rn), vecshiftR32Narrow:$imm))),
> +def : Pat<(v4i16 (trunc (AArch64vlshr (v4i32 V128:$Rn), vecshiftR32Narrow:$imm))),
> (SHRNv4i16_shift V128:$Rn, vecshiftR32Narrow:$imm)>;
> -def : Pat<(v2i32 (trunc (ARM64vlshr (v2i64 V128:$Rn), vecshiftR64Narrow:$imm))),
> +def : Pat<(v2i32 (trunc (AArch64vlshr (v2i64 V128:$Rn), vecshiftR64Narrow:$imm))),
> (SHRNv2i32_shift V128:$Rn, vecshiftR64Narrow:$imm)>;
>
> def : Pat<(v16i8 (concat_vectors (v8i8 V64:$Rd),
> - (trunc (ARM64vlshr (v8i16 V128:$Rn),
> + (trunc (AArch64vlshr (v8i16 V128:$Rn),
> vecshiftR16Narrow:$imm)))),
> (SHRNv16i8_shift (INSERT_SUBREG (IMPLICIT_DEF), V64:$Rd, dsub),
> V128:$Rn, vecshiftR16Narrow:$imm)>;
> def : Pat<(v8i16 (concat_vectors (v4i16 V64:$Rd),
> - (trunc (ARM64vlshr (v4i32 V128:$Rn),
> + (trunc (AArch64vlshr (v4i32 V128:$Rn),
> vecshiftR32Narrow:$imm)))),
> (SHRNv8i16_shift (INSERT_SUBREG (IMPLICIT_DEF), V64:$Rd, dsub),
> V128:$Rn, vecshiftR32Narrow:$imm)>;
> def : Pat<(v4i32 (concat_vectors (v2i32 V64:$Rd),
> - (trunc (ARM64vlshr (v2i64 V128:$Rn),
> + (trunc (AArch64vlshr (v2i64 V128:$Rn),
> vecshiftR64Narrow:$imm)))),
> (SHRNv4i32_shift (INSERT_SUBREG (IMPLICIT_DEF), V64:$Rd, dsub),
> V128:$Rn, vecshiftR32Narrow:$imm)>;
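
A quick aside on the lshr-based SHRN patterns just above: for a narrowing shift the immediate is at most the width of the destination element, so the truncated low bits all come from the original lane and it genuinely doesn't matter whether the shift was logical or arithmetic. A minimal IR sketch of the case these patterns are meant to catch (the function name is just illustrative):

    ; With a shift of 5, the low 8 bits of each shifted i16 lane are original
    ; bits 5..12, identical for lshr and ashr, so this should select a single
    ; "shrn v0.8b, v0.8h, #5" when compiled for an aarch64 (or arm64) triple.
    define <8 x i8> @narrowing_shift(<8 x i16> %v) {
      %s = lshr <8 x i16> %v, <i16 5, i16 5, i16 5, i16 5, i16 5, i16 5, i16 5, i16 5>
      %n = trunc <8 x i16> %s to <8 x i8>
      ret <8 x i8> %n
    }
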
> @@ -4530,30 +4532,30 @@ defm LD4 : SIMDLdSingleSTied<1, 0b101, 0
> defm LD4 : SIMDLdSingleDTied<1, 0b101, 0b01, "ld4", VecListFourd, GPR64pi32>;
> }
>
> -def : Pat<(v8i8 (ARM64dup (i32 (extloadi8 GPR64sp:$Rn)))),
> +def : Pat<(v8i8 (AArch64dup (i32 (extloadi8 GPR64sp:$Rn)))),
> (LD1Rv8b GPR64sp:$Rn)>;
> -def : Pat<(v16i8 (ARM64dup (i32 (extloadi8 GPR64sp:$Rn)))),
> +def : Pat<(v16i8 (AArch64dup (i32 (extloadi8 GPR64sp:$Rn)))),
> (LD1Rv16b GPR64sp:$Rn)>;
> -def : Pat<(v4i16 (ARM64dup (i32 (extloadi16 GPR64sp:$Rn)))),
> +def : Pat<(v4i16 (AArch64dup (i32 (extloadi16 GPR64sp:$Rn)))),
> (LD1Rv4h GPR64sp:$Rn)>;
> -def : Pat<(v8i16 (ARM64dup (i32 (extloadi16 GPR64sp:$Rn)))),
> +def : Pat<(v8i16 (AArch64dup (i32 (extloadi16 GPR64sp:$Rn)))),
> (LD1Rv8h GPR64sp:$Rn)>;
> -def : Pat<(v2i32 (ARM64dup (i32 (load GPR64sp:$Rn)))),
> +def : Pat<(v2i32 (AArch64dup (i32 (load GPR64sp:$Rn)))),
> (LD1Rv2s GPR64sp:$Rn)>;
> -def : Pat<(v4i32 (ARM64dup (i32 (load GPR64sp:$Rn)))),
> +def : Pat<(v4i32 (AArch64dup (i32 (load GPR64sp:$Rn)))),
> (LD1Rv4s GPR64sp:$Rn)>;
> -def : Pat<(v2i64 (ARM64dup (i64 (load GPR64sp:$Rn)))),
> +def : Pat<(v2i64 (AArch64dup (i64 (load GPR64sp:$Rn)))),
> (LD1Rv2d GPR64sp:$Rn)>;
> -def : Pat<(v1i64 (ARM64dup (i64 (load GPR64sp:$Rn)))),
> +def : Pat<(v1i64 (AArch64dup (i64 (load GPR64sp:$Rn)))),
> (LD1Rv1d GPR64sp:$Rn)>;
> // Grab the floating point version too
> -def : Pat<(v2f32 (ARM64dup (f32 (load GPR64sp:$Rn)))),
> +def : Pat<(v2f32 (AArch64dup (f32 (load GPR64sp:$Rn)))),
> (LD1Rv2s GPR64sp:$Rn)>;
> -def : Pat<(v4f32 (ARM64dup (f32 (load GPR64sp:$Rn)))),
> +def : Pat<(v4f32 (AArch64dup (f32 (load GPR64sp:$Rn)))),
> (LD1Rv4s GPR64sp:$Rn)>;
> -def : Pat<(v2f64 (ARM64dup (f64 (load GPR64sp:$Rn)))),
> +def : Pat<(v2f64 (AArch64dup (f64 (load GPR64sp:$Rn)))),
> (LD1Rv2d GPR64sp:$Rn)>;
> -def : Pat<(v1f64 (ARM64dup (f64 (load GPR64sp:$Rn)))),
> +def : Pat<(v1f64 (AArch64dup (f64 (load GPR64sp:$Rn)))),
> (LD1Rv1d GPR64sp:$Rn)>;
>
> class Ld1Lane128Pat<SDPatternOperator scalar_load, Operand VecIndex,
> @@ -4695,22 +4697,22 @@ defm ST4 : SIMDLdSt4SingleAliases<"st4">
> // Crypto extensions
> //----------------------------------------------------------------------------
>
> -def AESErr : AESTiedInst<0b0100, "aese", int_arm64_crypto_aese>;
> -def AESDrr : AESTiedInst<0b0101, "aesd", int_arm64_crypto_aesd>;
> -def AESMCrr : AESInst< 0b0110, "aesmc", int_arm64_crypto_aesmc>;
> -def AESIMCrr : AESInst< 0b0111, "aesimc", int_arm64_crypto_aesimc>;
> -
> -def SHA1Crrr : SHATiedInstQSV<0b000, "sha1c", int_arm64_crypto_sha1c>;
> -def SHA1Prrr : SHATiedInstQSV<0b001, "sha1p", int_arm64_crypto_sha1p>;
> -def SHA1Mrrr : SHATiedInstQSV<0b010, "sha1m", int_arm64_crypto_sha1m>;
> -def SHA1SU0rrr : SHATiedInstVVV<0b011, "sha1su0", int_arm64_crypto_sha1su0>;
> -def SHA256Hrrr : SHATiedInstQQV<0b100, "sha256h", int_arm64_crypto_sha256h>;
> -def SHA256H2rrr : SHATiedInstQQV<0b101, "sha256h2",int_arm64_crypto_sha256h2>;
> -def SHA256SU1rrr :SHATiedInstVVV<0b110, "sha256su1",int_arm64_crypto_sha256su1>;
> -
> -def SHA1Hrr : SHAInstSS< 0b0000, "sha1h", int_arm64_crypto_sha1h>;
> -def SHA1SU1rr : SHATiedInstVV<0b0001, "sha1su1", int_arm64_crypto_sha1su1>;
> -def SHA256SU0rr : SHATiedInstVV<0b0010, "sha256su0",int_arm64_crypto_sha256su0>;
> +def AESErr : AESTiedInst<0b0100, "aese", int_aarch64_crypto_aese>;
> +def AESDrr : AESTiedInst<0b0101, "aesd", int_aarch64_crypto_aesd>;
> +def AESMCrr : AESInst< 0b0110, "aesmc", int_aarch64_crypto_aesmc>;
> +def AESIMCrr : AESInst< 0b0111, "aesimc", int_aarch64_crypto_aesimc>;
> +
> +def SHA1Crrr : SHATiedInstQSV<0b000, "sha1c", int_aarch64_crypto_sha1c>;
> +def SHA1Prrr : SHATiedInstQSV<0b001, "sha1p", int_aarch64_crypto_sha1p>;
> +def SHA1Mrrr : SHATiedInstQSV<0b010, "sha1m", int_aarch64_crypto_sha1m>;
> +def SHA1SU0rrr : SHATiedInstVVV<0b011, "sha1su0", int_aarch64_crypto_sha1su0>;
> +def SHA256Hrrr : SHATiedInstQQV<0b100, "sha256h", int_aarch64_crypto_sha256h>;
> +def SHA256H2rrr : SHATiedInstQQV<0b101, "sha256h2",int_aarch64_crypto_sha256h2>;
> +def SHA256SU1rrr :SHATiedInstVVV<0b110, "sha256su1",int_aarch64_crypto_sha256su1>;
> +
> +def SHA1Hrr : SHAInstSS< 0b0000, "sha1h", int_aarch64_crypto_sha1h>;
> +def SHA1SU1rr : SHATiedInstVV<0b0001, "sha1su1", int_aarch64_crypto_sha1su1>;
> +def SHA256SU0rr : SHATiedInstVV<0b0010, "sha256su0",int_aarch64_crypto_sha256su0>;
>
> //----------------------------------------------------------------------------
> // Compiler-pseudos
> @@ -4799,7 +4801,7 @@ def : Pat<(sra (i64 (sext GPR32:$Rn)), (
> def : Pat<(i32 (trunc GPR64sp:$src)),
> (i32 (EXTRACT_SUBREG GPR64sp:$src, sub_32))>;
>
> -// __builtin_trap() uses the BRK instruction on ARM64.
> +// __builtin_trap() uses the BRK instruction on AArch64.
> def : Pat<(trap), (BRK 1)>;
>
> // Conversions within AdvSIMD types in the same register size are free.
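
For reference, the (trap) -> (BRK 1) pattern a couple of lines up is what the generic trap intrinsic lowers through; a tiny sketch (arbitrary function name):

    ; @llvm.trap() should lower to a BRK with immediate 1 on this target.
    declare void @llvm.trap()
    define void @abort_now() {
      call void @llvm.trap()
      unreachable
    }
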
> @@ -5256,13 +5258,13 @@ def : Pat<(fadd (vector_extract (v4f32 F
> (f32 (FADDPv2i32p (EXTRACT_SUBREG FPR128:$Rn, dsub)))>;
>
> // Scalar 64-bit shifts in FPR64 registers.
> -def : Pat<(i64 (int_arm64_neon_sshl (i64 FPR64:$Rn), (i64 FPR64:$Rm))),
> +def : Pat<(i64 (int_aarch64_neon_sshl (i64 FPR64:$Rn), (i64 FPR64:$Rm))),
> (SSHLv1i64 FPR64:$Rn, FPR64:$Rm)>;
> -def : Pat<(i64 (int_arm64_neon_ushl (i64 FPR64:$Rn), (i64 FPR64:$Rm))),
> +def : Pat<(i64 (int_aarch64_neon_ushl (i64 FPR64:$Rn), (i64 FPR64:$Rm))),
> (USHLv1i64 FPR64:$Rn, FPR64:$Rm)>;
> -def : Pat<(i64 (int_arm64_neon_srshl (i64 FPR64:$Rn), (i64 FPR64:$Rm))),
> +def : Pat<(i64 (int_aarch64_neon_srshl (i64 FPR64:$Rn), (i64 FPR64:$Rm))),
> (SRSHLv1i64 FPR64:$Rn, FPR64:$Rm)>;
> -def : Pat<(i64 (int_arm64_neon_urshl (i64 FPR64:$Rn), (i64 FPR64:$Rm))),
> +def : Pat<(i64 (int_aarch64_neon_urshl (i64 FPR64:$Rn), (i64 FPR64:$Rm))),
> (URSHLv1i64 FPR64:$Rn, FPR64:$Rm)>;
>
> // Tail call return handling. These are all compiler pseudo-instructions,
> @@ -5272,11 +5274,11 @@ let isCall = 1, isTerminator = 1, isRetu
> def TCRETURNri : Pseudo<(outs), (ins tcGPR64:$dst, i32imm:$FPDiff), []>;
> }
>
> -def : Pat<(ARM64tcret tcGPR64:$dst, (i32 timm:$FPDiff)),
> +def : Pat<(AArch64tcret tcGPR64:$dst, (i32 timm:$FPDiff)),
> (TCRETURNri tcGPR64:$dst, imm:$FPDiff)>;
> -def : Pat<(ARM64tcret tglobaladdr:$dst, (i32 timm:$FPDiff)),
> +def : Pat<(AArch64tcret tglobaladdr:$dst, (i32 timm:$FPDiff)),
> (TCRETURNdi texternalsym:$dst, imm:$FPDiff)>;
> -def : Pat<(ARM64tcret texternalsym:$dst, (i32 timm:$FPDiff)),
> +def : Pat<(AArch64tcret texternalsym:$dst, (i32 timm:$FPDiff)),
> (TCRETURNdi texternalsym:$dst, imm:$FPDiff)>;
>
> -include "ARM64InstrAtomics.td"
> +include "AArch64InstrAtomics.td"
>
>
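
One closing illustration of what the rename means at the IR level, using the scalar FPR64 shift patterns above: what was spelled llvm.arm64.neon.sshl is now llvm.aarch64.neon.sshl. The .i64 overload suffix and the declaration below are my assumption about how the scalar form is mangled, so treat this as a sketch rather than a definitive reference:

    ; Assumed spelling of the overloaded scalar intrinsic after the rename; it
    ; should match the (i64 (int_aarch64_neon_sshl ...)) pattern and select an
    ; sshl on D registers (plus fmovs to shuttle the i64 arguments into FPR64).
    declare i64 @llvm.aarch64.neon.sshl.i64(i64, i64)
    define i64 @scalar_sshl(i64 %a, i64 %b) {
      %r = call i64 @llvm.aarch64.neon.sshl.i64(i64 %a, i64 %b)
      ret i64 %r
    }
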