[llvm-commits] [llvm-gcc-4.2] r76781 [1/5] - in /llvm-gcc-4.2/trunk: ./ gcc/ gcc/config/ gcc/config/arm/ gcc/config/rs6000/ gcc/cp/ gcc/doc/ gcc/testsuite/g++.apple/ gcc/testsuite/g++.dg/abi/ gcc/testsuite/gcc.apple/ gcc/testsuite/gcc.target/arm/ gcc/testsuite/gcc.target/arm/neon/ gcc/testsuite/obj-c++.dg/ gcc/testsuite/objc.dg/

Bob Wilson <bob.wilson at apple.com>
Wed Jul 22 13:36:46 PDT 2009


Author: bwilson
Date: Wed Jul 22 15:36:27 2009
New Revision: 76781

URL: http://llvm.org/viewvc/llvm-project?rev=76781&view=rev
Log:
Merge with Apple's gcc r155791.
(This has a lot of ARMv7 changes, but there is still one more merge to do.)

Added:
    llvm-gcc-4.2/trunk/gcc/config/arm/arm_neon.h
    llvm-gcc-4.2/trunk/gcc/config/arm/cortex-a8-neon.md
    llvm-gcc-4.2/trunk/gcc/config/arm/cortex-a8.md
    llvm-gcc-4.2/trunk/gcc/config/arm/cortex-r4.md
    llvm-gcc-4.2/trunk/gcc/config/arm/gen-darwin-multilib-exceptions.sh
    llvm-gcc-4.2/trunk/gcc/config/arm/hwdiv.md
    llvm-gcc-4.2/trunk/gcc/config/arm/neon-docgen.ml
    llvm-gcc-4.2/trunk/gcc/config/arm/neon-gen.ml
    llvm-gcc-4.2/trunk/gcc/config/arm/neon-schedgen.ml
    llvm-gcc-4.2/trunk/gcc/config/arm/neon-testgen.ml
    llvm-gcc-4.2/trunk/gcc/config/arm/neon.md
    llvm-gcc-4.2/trunk/gcc/config/arm/neon.ml
    llvm-gcc-4.2/trunk/gcc/config/arm/thumb2.md
    llvm-gcc-4.2/trunk/gcc/config/arm/vec-common.md
    llvm-gcc-4.2/trunk/gcc/config/arm/vfp11.md
    llvm-gcc-4.2/trunk/gcc/doc/arm-neon-intrinsics.texi
    llvm-gcc-4.2/trunk/gcc/testsuite/g++.apple/visibility-4.C
    llvm-gcc-4.2/trunk/gcc/testsuite/g++.dg/abi/mangle-neon.C
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.apple/6251664.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.apple/condexec-2.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/neon.exp
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/polytypes.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRaddhns16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRaddhns32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRaddhns64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRaddhnu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRaddhnu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRaddhnu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRhaddQs16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRhaddQs32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRhaddQs8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRhaddQu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRhaddQu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRhaddQu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRhadds16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRhadds32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRhadds8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRhaddu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRhaddu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRhaddu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRshlQs16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRshlQs32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRshlQs64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRshlQs8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRshlQu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRshlQu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRshlQu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRshlQu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRshls16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRshls32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRshls64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRshls8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRshlu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRshlu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRshlu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRshlu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRshrQ_ns16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRshrQ_ns32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRshrQ_ns64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRshrQ_ns8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRshrQ_nu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRshrQ_nu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRshrQ_nu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRshrQ_nu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRshr_ns16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRshr_ns32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRshr_ns64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRshr_ns8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRshr_nu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRshr_nu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRshr_nu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRshr_nu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRshrn_ns16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRshrn_ns32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRshrn_ns64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRshrn_nu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRshrn_nu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRshrn_nu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRsraQ_ns16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRsraQ_ns32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRsraQ_ns64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRsraQ_ns8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRsraQ_nu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRsraQ_nu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRsraQ_nu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRsraQ_nu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRsra_ns16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRsra_ns32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRsra_ns64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRsra_ns8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRsra_nu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRsra_nu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRsra_nu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRsra_nu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRsubhns16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRsubhns32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRsubhns64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRsubhnu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRsubhnu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vRsubhnu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vabaQs16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vabaQs32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vabaQs8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vabaQu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vabaQu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vabaQu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vabals16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vabals32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vabals8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vabalu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vabalu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vabalu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vabas16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vabas32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vabas8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vabau16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vabau32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vabau8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vabdQf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vabdQs16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vabdQs32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vabdQs8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vabdQu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vabdQu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vabdQu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vabdf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vabdls16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vabdls32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vabdls8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vabdlu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vabdlu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vabdlu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vabds16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vabds32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vabds8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vabdu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vabdu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vabdu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vabsQf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vabsQs16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vabsQs32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vabsQs8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vabsf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vabss16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vabss32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vabss8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vaddQf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vaddQs16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vaddQs32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vaddQs64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vaddQs8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vaddQu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vaddQu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vaddQu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vaddQu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vaddf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vaddhns16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vaddhns32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vaddhns64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vaddhnu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vaddhnu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vaddhnu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vaddls16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vaddls32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vaddls8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vaddlu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vaddlu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vaddlu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vadds16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vadds32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vadds64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vadds8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vaddu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vaddu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vaddu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vaddu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vaddws16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vaddws32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vaddws8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vaddwu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vaddwu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vaddwu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vandQs16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vandQs32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vandQs64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vandQs8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vandQu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vandQu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vandQu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vandQu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vands16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vands32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vands64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vands8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vandu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vandu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vandu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vandu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vbicQs16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vbicQs32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vbicQs64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vbicQs8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vbicQu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vbicQu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vbicQu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vbicQu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vbics16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vbics32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vbics64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vbics8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vbicu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vbicu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vbicu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vbicu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vbslQf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vbslQp16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vbslQp8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vbslQs16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vbslQs32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vbslQs64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vbslQs8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vbslQu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vbslQu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vbslQu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vbslQu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vbslf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vbslp16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vbslp8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vbsls16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vbsls32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vbsls64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vbsls8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vbslu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vbslu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vbslu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vbslu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcageQf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcagef32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcagtQf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcagtf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcaleQf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcalef32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcaltQf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcaltf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vceqQf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vceqQp8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vceqQs16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vceqQs32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vceqQs8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vceqQu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vceqQu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vceqQu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vceqf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vceqp8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vceqs16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vceqs32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vceqs8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcequ16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcequ32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcequ8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcgeQf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcgeQs16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcgeQs32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcgeQs8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcgeQu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcgeQu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcgeQu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcgef32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcges16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcges32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcges8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcgeu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcgeu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcgeu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcgtQf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcgtQs16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcgtQs32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcgtQs8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcgtQu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcgtQu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcgtQu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcgtf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcgts16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcgts32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcgts8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcgtu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcgtu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcgtu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcleQf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcleQs16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcleQs32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcleQs8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcleQu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcleQu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcleQu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vclef32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcles16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcles32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcles8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcleu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcleu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcleu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vclsQs16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vclsQs32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vclsQs8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vclss16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vclss32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vclss8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcltQf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcltQs16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcltQs32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcltQs8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcltQu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcltQu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcltQu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcltf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vclts16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vclts32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vclts8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcltu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcltu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcltu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vclzQs16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vclzQs32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vclzQs8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vclzQu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vclzQu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vclzQu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vclzs16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vclzs32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vclzs8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vclzu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vclzu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vclzu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcntQp8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcntQs8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcntQu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcntp8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcnts8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcntu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcombinef32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcombinep16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcombinep8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcombines16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcombines32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcombines64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcombines8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcombineu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcombineu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcombineu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcombineu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcreatef32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcreatep16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcreatep8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcreates16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcreates32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcreates64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcreates8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcreateu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcreateu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcreateu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcreateu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcvtQ_nf32_s32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcvtQ_nf32_u32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcvtQ_ns32_f32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcvtQ_nu32_f32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcvtQf32_s32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcvtQf32_u32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcvtQs32_f32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcvtQu32_f32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcvt_nf32_s32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcvt_nf32_u32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcvt_ns32_f32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcvt_nu32_f32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcvtf32_s32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcvtf32_u32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcvts32_f32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vcvtu32_f32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vdupQ_lanef32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vdupQ_lanep16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vdupQ_lanep8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vdupQ_lanes16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vdupQ_lanes32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vdupQ_lanes64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vdupQ_lanes8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vdupQ_laneu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vdupQ_laneu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vdupQ_laneu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vdupQ_laneu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vdupQ_nf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vdupQ_np16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vdupQ_np8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vdupQ_ns16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vdupQ_ns32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vdupQ_ns64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vdupQ_ns8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vdupQ_nu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vdupQ_nu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vdupQ_nu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vdupQ_nu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vdup_lanef32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vdup_lanep16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vdup_lanep8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vdup_lanes16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vdup_lanes32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vdup_lanes64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vdup_lanes8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vdup_laneu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vdup_laneu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vdup_laneu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vdup_laneu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vdup_nf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vdup_np16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vdup_np8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vdup_ns16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vdup_ns32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vdup_ns64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vdup_ns8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vdup_nu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vdup_nu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vdup_nu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vdup_nu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/veorQs16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/veorQs32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/veorQs64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/veorQs8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/veorQu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/veorQu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/veorQu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/veorQu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/veors16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/veors32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/veors64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/veors8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/veoru16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/veoru32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/veoru64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/veoru8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vextQf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vextQp16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vextQp8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vextQs16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vextQs32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vextQs64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vextQs8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vextQu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vextQu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vextQu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vextQu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vextf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vextp16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vextp8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vexts16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vexts32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vexts64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vexts8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vextu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vextu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vextu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vextu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vgetQ_lanef32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vgetQ_lanep16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vgetQ_lanep8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vgetQ_lanes16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vgetQ_lanes32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vgetQ_lanes64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vgetQ_lanes8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vgetQ_laneu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vgetQ_laneu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vgetQ_laneu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vgetQ_laneu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vget_highf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vget_highp16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vget_highp8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vget_highs16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vget_highs32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vget_highs64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vget_highs8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vget_highu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vget_highu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vget_highu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vget_highu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vget_lanef32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vget_lanep16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vget_lanep8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vget_lanes16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vget_lanes32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vget_lanes64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vget_lanes8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vget_laneu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vget_laneu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vget_laneu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vget_laneu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vget_lowf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vget_lowp16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vget_lowp8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vget_lows16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vget_lows32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vget_lows64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vget_lows8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vget_lowu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vget_lowu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vget_lowu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vget_lowu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vhaddQs16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vhaddQs32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vhaddQs8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vhaddQu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vhaddQu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vhaddQu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vhadds16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vhadds32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vhadds8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vhaddu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vhaddu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vhaddu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vhsubQs16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vhsubQs32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vhsubQs8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vhsubQu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vhsubQu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vhsubQu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vhsubs16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vhsubs32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vhsubs8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vhsubu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vhsubu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vhsubu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld1Q_dupf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld1Q_dupp16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld1Q_dupp8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld1Q_dups16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld1Q_dups32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld1Q_dups64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld1Q_dups8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld1Q_dupu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld1Q_dupu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld1Q_dupu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld1Q_dupu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld1Q_lanef32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld1Q_lanep16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld1Q_lanep8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld1Q_lanes16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld1Q_lanes32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld1Q_lanes64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld1Q_lanes8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld1Q_laneu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld1Q_laneu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld1Q_laneu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld1Q_laneu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld1Qf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld1Qp16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld1Qp8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld1Qs16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld1Qs32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld1Qs64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld1Qs8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld1Qu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld1Qu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld1Qu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld1Qu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld1_dupf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld1_dupp16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld1_dupp8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld1_dups16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld1_dups32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld1_dups64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld1_dups8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld1_dupu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld1_dupu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld1_dupu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld1_dupu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld1_lanef32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld1_lanep16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld1_lanep8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld1_lanes16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld1_lanes32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld1_lanes64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld1_lanes8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld1_laneu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld1_laneu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld1_laneu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld1_laneu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld1f32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld1p16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld1p8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld1s16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld1s32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld1s64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld1s8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld1u16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld1u32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld1u64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld1u8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld2Q_lanef32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld2Q_lanep16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld2Q_lanes16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld2Q_lanes32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld2Q_laneu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld2Q_laneu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld2Qf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld2Qp16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld2Qp8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld2Qs16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld2Qs32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld2Qs8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld2Qu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld2Qu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld2Qu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld2_dupf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld2_dupp16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld2_dupp8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld2_dups16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld2_dups32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld2_dups64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld2_dups8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld2_dupu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld2_dupu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld2_dupu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld2_dupu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld2_lanef32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld2_lanep16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld2_lanep8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld2_lanes16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld2_lanes32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld2_lanes8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld2_laneu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld2_laneu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld2_laneu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld2f32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld2p16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld2p8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld2s16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld2s32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld2s64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld2s8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld2u16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld2u32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld2u64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld2u8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld3Q_lanef32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld3Q_lanep16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld3Q_lanes16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld3Q_lanes32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld3Q_laneu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld3Q_laneu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld3Qf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld3Qp16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld3Qp8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld3Qs16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld3Qs32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld3Qs8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld3Qu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld3Qu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld3Qu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld3_dupf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld3_dupp16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld3_dupp8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld3_dups16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld3_dups32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld3_dups64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld3_dups8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld3_dupu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld3_dupu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld3_dupu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld3_dupu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld3_lanef32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld3_lanep16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld3_lanep8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld3_lanes16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld3_lanes32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld3_lanes8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld3_laneu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld3_laneu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld3_laneu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld3f32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld3p16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld3p8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld3s16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld3s32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld3s64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld3s8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld3u16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld3u32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld3u64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld3u8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld4Q_lanef32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld4Q_lanep16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld4Q_lanes16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld4Q_lanes32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld4Q_laneu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld4Q_laneu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld4Qf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld4Qp16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld4Qp8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld4Qs16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld4Qs32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld4Qs8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld4Qu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld4Qu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld4Qu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld4_dupf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld4_dupp16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld4_dupp8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld4_dups16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld4_dups32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld4_dups64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld4_dups8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld4_dupu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld4_dupu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld4_dupu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld4_dupu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld4_lanef32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld4_lanep16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld4_lanep8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld4_lanes16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld4_lanes32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld4_lanes8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld4_laneu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld4_laneu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld4_laneu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld4f32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld4p16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld4p8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld4s16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld4s32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld4s64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld4s8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld4u16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld4u32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld4u64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vld4u8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmaxQf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmaxQs16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmaxQs32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmaxQs8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmaxQu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmaxQu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmaxQu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmaxf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmaxs16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmaxs32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmaxs8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmaxu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmaxu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmaxu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vminQf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vminQs16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vminQs32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vminQs8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vminQu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vminQu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vminQu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vminf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmins16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmins32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmins8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vminu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vminu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vminu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlaQ_lanef32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlaQ_lanes16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlaQ_lanes32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlaQ_laneu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlaQ_laneu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlaQ_nf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlaQ_ns16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlaQ_ns32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlaQ_nu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlaQ_nu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlaQf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlaQs16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlaQs32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlaQs8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlaQu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlaQu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlaQu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmla_lanef32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmla_lanes16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmla_lanes32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmla_laneu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmla_laneu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmla_nf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmla_ns16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmla_ns32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmla_nu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmla_nu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlaf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlal_lanes16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlal_lanes32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlal_laneu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlal_laneu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlal_ns16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlal_ns32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlal_nu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlal_nu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlals16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlals32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlals8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlalu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlalu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlalu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlas16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlas32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlas8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlau16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlau32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlau8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlsQ_lanef32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlsQ_lanes16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlsQ_lanes32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlsQ_laneu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlsQ_laneu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlsQ_nf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlsQ_ns16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlsQ_ns32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlsQ_nu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlsQ_nu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlsQf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlsQs16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlsQs32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlsQs8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlsQu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlsQu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlsQu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmls_lanef32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmls_lanes16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmls_lanes32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmls_laneu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmls_laneu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmls_nf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmls_ns16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmls_ns32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmls_nu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmls_nu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlsf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlsl_lanes16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlsl_lanes32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlsl_laneu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlsl_laneu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlsl_ns16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlsl_ns32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlsl_nu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlsl_nu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlsls16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlsls32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlsls8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlslu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlslu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlslu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlss16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlss32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlss8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlsu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlsu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmlsu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmovQ_nf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmovQ_np16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmovQ_np8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmovQ_ns16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmovQ_ns32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmovQ_ns64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmovQ_ns8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmovQ_nu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmovQ_nu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmovQ_nu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmovQ_nu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmov_nf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmov_np16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmov_np8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmov_ns16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmov_ns32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmov_ns64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmov_ns8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmov_nu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmov_nu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmov_nu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmov_nu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmovls16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmovls32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmovls8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmovlu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmovlu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmovlu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmovns16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmovns32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmovns64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmovnu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmovnu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmovnu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmulQ_lanef32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmulQ_lanes16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmulQ_lanes32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmulQ_laneu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmulQ_laneu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmulQ_nf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmulQ_ns16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmulQ_ns32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmulQ_nu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmulQ_nu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmulQf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmulQp8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmulQs16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmulQs32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmulQs8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmulQu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmulQu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmulQu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmul_lanef32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmul_lanes16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmul_lanes32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmul_laneu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmul_laneu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmul_nf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmul_ns16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmul_ns32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmul_nu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmul_nu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmulf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmull_lanes16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmull_lanes32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmull_laneu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmull_laneu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmull_ns16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmull_ns32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmull_nu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmull_nu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmullp8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmulls16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmulls32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmulls8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmullu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmullu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmullu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmulp8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmuls16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmuls32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmuls8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmulu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmulu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmulu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmvnQp8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmvnQs16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmvnQs32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmvnQs8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmvnQu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmvnQu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmvnQu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmvnp8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmvns16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmvns32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmvns8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmvnu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmvnu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vmvnu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vnegQf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vnegQs16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vnegQs32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vnegQs8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vnegf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vnegs16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vnegs32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vnegs8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vornQs16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vornQs32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vornQs64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vornQs8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vornQu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vornQu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vornQu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vornQu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vorns16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vorns32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vorns64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vorns8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vornu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vornu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vornu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vornu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vorrQs16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vorrQs32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vorrQs64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vorrQs8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vorrQu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vorrQu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vorrQu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vorrQu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vorrs16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vorrs32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vorrs64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vorrs8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vorru16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vorru32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vorru64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vorru8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vpadalQs16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vpadalQs32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vpadalQs8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vpadalQu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vpadalQu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vpadalQu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vpadals16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vpadals32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vpadals8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vpadalu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vpadalu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vpadalu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vpaddf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vpaddlQs16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vpaddlQs32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vpaddlQs8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vpaddlQu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vpaddlQu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vpaddlQu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vpaddls16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vpaddls32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vpaddls8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vpaddlu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vpaddlu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vpaddlu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vpadds16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vpadds32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vpadds8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vpaddu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vpaddu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vpaddu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vpmaxf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vpmaxs16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vpmaxs32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vpmaxs8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vpmaxu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vpmaxu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vpmaxu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vpminf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vpmins16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vpmins32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vpmins8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vpminu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vpminu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vpminu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqRdmulhQ_lanes16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqRdmulhQ_lanes32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqRdmulhQ_ns16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqRdmulhQ_ns32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqRdmulhQs16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqRdmulhQs32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqRdmulh_lanes16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqRdmulh_lanes32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqRdmulh_ns16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqRdmulh_ns32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqRdmulhs16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqRdmulhs32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqRshlQs16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqRshlQs32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqRshlQs64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqRshlQs8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqRshlQu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqRshlQu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqRshlQu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqRshlQu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqRshls16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqRshls32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqRshls64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqRshls8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqRshlu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqRshlu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqRshlu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqRshlu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqRshrn_ns16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqRshrn_ns32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqRshrn_ns64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqRshrn_nu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqRshrn_nu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqRshrn_nu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqRshrun_ns16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqRshrun_ns32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqRshrun_ns64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqabsQs16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqabsQs32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqabsQs8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqabss16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqabss32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqabss8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqaddQs16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqaddQs32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqaddQs64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqaddQs8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqaddQu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqaddQu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqaddQu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqaddQu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqadds16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqadds32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqadds64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqadds8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqaddu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqaddu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqaddu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqaddu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqdmlal_lanes16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqdmlal_lanes32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqdmlal_ns16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqdmlal_ns32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqdmlals16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqdmlals32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqdmlsl_lanes16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqdmlsl_lanes32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqdmlsl_ns16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqdmlsl_ns32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqdmlsls16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqdmlsls32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqdmulhQ_lanes16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqdmulhQ_lanes32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqdmulhQ_ns16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqdmulhQ_ns32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqdmulhQs16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqdmulhQs32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqdmulh_lanes16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqdmulh_lanes32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqdmulh_ns16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqdmulh_ns32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqdmulhs16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqdmulhs32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqdmull_lanes16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqdmull_lanes32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqdmull_ns16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqdmull_ns32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqdmulls16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqdmulls32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqmovns16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqmovns32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqmovns64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqmovnu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqmovnu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqmovnu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqmovuns16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqmovuns32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqmovuns64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqnegQs16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqnegQs32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqnegQs8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqnegs16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqnegs32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqnegs8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqshlQ_ns16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqshlQ_ns32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqshlQ_ns64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqshlQ_ns8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqshlQ_nu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqshlQ_nu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqshlQ_nu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqshlQ_nu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqshlQs16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqshlQs32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqshlQs64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqshlQs8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqshlQu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqshlQu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqshlQu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqshlQu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqshl_ns16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqshl_ns32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqshl_ns64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqshl_ns8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqshl_nu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqshl_nu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqshl_nu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqshl_nu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqshls16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqshls32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqshls64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqshls8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqshlu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqshlu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqshlu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqshlu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqshluQ_ns16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqshluQ_ns32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqshluQ_ns64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqshluQ_ns8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqshlu_ns16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqshlu_ns32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqshlu_ns64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqshlu_ns8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqshrn_ns16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqshrn_ns32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqshrn_ns64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqshrn_nu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqshrn_nu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqshrn_nu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqshrun_ns16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqshrun_ns32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqshrun_ns64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqsubQs16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqsubQs32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqsubQs64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqsubQs8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqsubQu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqsubQu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqsubQu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqsubQu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqsubs16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqsubs32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqsubs64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqsubs8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqsubu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqsubu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqsubu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vqsubu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vrecpeQf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vrecpeQu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vrecpef32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vrecpeu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vrecpsQf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vrecpsf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQf32_p16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQf32_p8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQf32_s16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQf32_s32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQf32_s64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQf32_s8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQf32_u16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQf32_u32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQf32_u64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQf32_u8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQp16_f32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQp16_p8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQp16_s16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQp16_s32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQp16_s64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQp16_s8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQp16_u16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQp16_u32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQp16_u64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQp16_u8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQp8_f32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQp8_p16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQp8_s16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQp8_s32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQp8_s64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQp8_s8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQp8_u16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQp8_u32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQp8_u64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQp8_u8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQs16_f32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQs16_p16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQs16_p8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQs16_s32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQs16_s64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQs16_s8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQs16_u16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQs16_u32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQs16_u64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQs16_u8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQs32_f32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQs32_p16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQs32_p8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQs32_s16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQs32_s64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQs32_s8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQs32_u16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQs32_u32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQs32_u64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQs32_u8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQs64_f32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQs64_p16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQs64_p8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQs64_s16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQs64_s32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQs64_s8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQs64_u16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQs64_u32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQs64_u64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQs64_u8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQs8_f32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQs8_p16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQs8_p8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQs8_s16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQs8_s32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQs8_s64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQs8_u16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQs8_u32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQs8_u64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQs8_u8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQu16_f32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQu16_p16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQu16_p8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQu16_s16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQu16_s32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQu16_s64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQu16_s8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQu16_u32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQu16_u64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQu16_u8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQu32_f32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQu32_p16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQu32_p8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQu32_s16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQu32_s32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQu32_s64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQu32_s8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQu32_u16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQu32_u64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQu32_u8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQu64_f32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQu64_p16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQu64_p8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQu64_s16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQu64_s32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQu64_s64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQu64_s8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQu64_u16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQu64_u32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQu64_u8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQu8_f32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQu8_p16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQu8_p8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQu8_s16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQu8_s32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQu8_s64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQu8_s8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQu8_u16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQu8_u32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretQu8_u64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretf32_p16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretf32_p8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretf32_s16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretf32_s32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretf32_s64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretf32_s8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretf32_u16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretf32_u32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretf32_u64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretf32_u8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretp16_f32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretp16_p8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretp16_s16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretp16_s32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretp16_s64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretp16_s8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretp16_u16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretp16_u32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretp16_u64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretp16_u8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretp8_f32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretp8_p16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretp8_s16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretp8_s32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretp8_s64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretp8_s8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretp8_u16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretp8_u32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretp8_u64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretp8_u8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterprets16_f32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterprets16_p16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterprets16_p8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterprets16_s32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterprets16_s64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterprets16_s8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterprets16_u16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterprets16_u32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterprets16_u64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterprets16_u8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterprets32_f32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterprets32_p16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterprets32_p8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterprets32_s16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterprets32_s64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterprets32_s8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterprets32_u16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterprets32_u32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterprets32_u64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterprets32_u8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterprets64_f32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterprets64_p16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterprets64_p8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterprets64_s16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterprets64_s32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterprets64_s8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterprets64_u16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterprets64_u32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterprets64_u64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterprets64_u8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterprets8_f32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterprets8_p16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterprets8_p8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterprets8_s16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterprets8_s32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterprets8_s64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterprets8_u16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterprets8_u32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterprets8_u64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterprets8_u8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretu16_f32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretu16_p16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretu16_p8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretu16_s16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretu16_s32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretu16_s64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretu16_s8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretu16_u32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretu16_u64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretu16_u8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretu32_f32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretu32_p16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretu32_p8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretu32_s16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretu32_s32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretu32_s64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretu32_s8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretu32_u16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretu32_u64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretu32_u8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretu64_f32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretu64_p16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretu64_p8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretu64_s16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretu64_s32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretu64_s64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretu64_s8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretu64_u16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretu64_u32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretu64_u8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretu8_f32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretu8_p16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretu8_p8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretu8_s16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretu8_s32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretu8_s64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretu8_s8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretu8_u16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretu8_u32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vreinterpretu8_u64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vrev16Qp8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vrev16Qs8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vrev16Qu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vrev16p8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vrev16s8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vrev16u8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vrev32Qp16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vrev32Qp8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vrev32Qs16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vrev32Qs8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vrev32Qu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vrev32Qu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vrev32p16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vrev32p8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vrev32s16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vrev32s8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vrev32u16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vrev32u8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vrev64Qf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vrev64Qp16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vrev64Qp8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vrev64Qs16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vrev64Qs32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vrev64Qs8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vrev64Qu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vrev64Qu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vrev64Qu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vrev64f32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vrev64p16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vrev64p8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vrev64s16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vrev64s32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vrev64s8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vrev64u16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vrev64u32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vrev64u8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vrsqrteQf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vrsqrteQu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vrsqrtef32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vrsqrteu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vrsqrtsQf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vrsqrtsf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsetQ_lanef32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsetQ_lanep16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsetQ_lanep8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsetQ_lanes16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsetQ_lanes32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsetQ_lanes64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsetQ_lanes8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsetQ_laneu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsetQ_laneu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsetQ_laneu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsetQ_laneu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vset_lanef32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vset_lanep16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vset_lanep8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vset_lanes16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vset_lanes32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vset_lanes64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vset_lanes8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vset_laneu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vset_laneu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vset_laneu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vset_laneu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vshlQ_ns16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vshlQ_ns32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vshlQ_ns64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vshlQ_ns8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vshlQ_nu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vshlQ_nu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vshlQ_nu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vshlQ_nu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vshlQs16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vshlQs32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vshlQs64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vshlQs8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vshlQu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vshlQu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vshlQu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vshlQu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vshl_ns16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vshl_ns32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vshl_ns64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vshl_ns8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vshl_nu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vshl_nu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vshl_nu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vshl_nu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vshll_ns16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vshll_ns32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vshll_ns8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vshll_nu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vshll_nu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vshll_nu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vshls16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vshls32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vshls64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vshls8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vshlu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vshlu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vshlu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vshlu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vshrQ_ns16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vshrQ_ns32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vshrQ_ns64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vshrQ_ns8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vshrQ_nu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vshrQ_nu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vshrQ_nu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vshrQ_nu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vshr_ns16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vshr_ns32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vshr_ns64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vshr_ns8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vshr_nu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vshr_nu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vshr_nu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vshr_nu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vshrn_ns16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vshrn_ns32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vshrn_ns64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vshrn_nu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vshrn_nu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vshrn_nu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsliQ_np16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsliQ_np8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsliQ_ns16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsliQ_ns32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsliQ_ns64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsliQ_ns8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsliQ_nu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsliQ_nu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsliQ_nu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsliQ_nu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsli_np16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsli_np8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsli_ns16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsli_ns32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsli_ns64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsli_ns8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsli_nu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsli_nu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsli_nu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsli_nu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsraQ_ns16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsraQ_ns32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsraQ_ns64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsraQ_ns8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsraQ_nu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsraQ_nu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsraQ_nu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsraQ_nu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsra_ns16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsra_ns32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsra_ns64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsra_ns8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsra_nu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsra_nu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsra_nu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsra_nu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsriQ_np16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsriQ_np8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsriQ_ns16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsriQ_ns32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsriQ_ns64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsriQ_ns8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsriQ_nu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsriQ_nu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsriQ_nu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsriQ_nu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsri_np16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsri_np8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsri_ns16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsri_ns32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsri_ns64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsri_ns8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsri_nu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsri_nu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsri_nu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsri_nu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst1Q_lanef32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst1Q_lanep16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst1Q_lanep8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst1Q_lanes16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst1Q_lanes32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst1Q_lanes64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst1Q_lanes8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst1Q_laneu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst1Q_laneu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst1Q_laneu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst1Q_laneu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst1Qf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst1Qp16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst1Qp8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst1Qs16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst1Qs32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst1Qs64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst1Qs8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst1Qu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst1Qu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst1Qu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst1Qu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst1_lanef32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst1_lanep16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst1_lanep8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst1_lanes16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst1_lanes32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst1_lanes64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst1_lanes8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst1_laneu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst1_laneu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst1_laneu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst1_laneu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst1f32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst1p16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst1p8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst1s16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst1s32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst1s64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst1s8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst1u16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst1u32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst1u64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst1u8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst2Q_lanef32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst2Q_lanep16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst2Q_lanes16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst2Q_lanes32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst2Q_laneu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst2Q_laneu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst2Qf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst2Qp16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst2Qp8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst2Qs16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst2Qs32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst2Qs8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst2Qu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst2Qu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst2Qu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst2_lanef32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst2_lanep16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst2_lanep8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst2_lanes16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst2_lanes32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst2_lanes8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst2_laneu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst2_laneu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst2_laneu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst2f32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst2p16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst2p8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst2s16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst2s32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst2s64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst2s8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst2u16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst2u32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst2u64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst2u8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst3Q_lanef32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst3Q_lanep16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst3Q_lanes16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst3Q_lanes32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst3Q_laneu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst3Q_laneu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst3Qf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst3Qp16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst3Qp8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst3Qs16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst3Qs32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst3Qs8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst3Qu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst3Qu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst3Qu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst3_lanef32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst3_lanep16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst3_lanep8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst3_lanes16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst3_lanes32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst3_lanes8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst3_laneu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst3_laneu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst3_laneu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst3f32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst3p16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst3p8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst3s16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst3s32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst3s64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst3s8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst3u16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst3u32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst3u64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst3u8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst4Q_lanef32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst4Q_lanep16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst4Q_lanes16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst4Q_lanes32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst4Q_laneu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst4Q_laneu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst4Qf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst4Qp16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst4Qp8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst4Qs16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst4Qs32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst4Qs8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst4Qu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst4Qu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst4Qu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst4_lanef32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst4_lanep16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst4_lanep8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst4_lanes16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst4_lanes32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst4_lanes8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst4_laneu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst4_laneu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst4_laneu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst4f32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst4p16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst4p8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst4s16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst4s32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst4s64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst4s8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst4u16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst4u32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst4u64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vst4u8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsubQf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsubQs16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsubQs32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsubQs64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsubQs8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsubQu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsubQu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsubQu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsubQu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsubf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsubhns16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsubhns32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsubhns64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsubhnu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsubhnu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsubhnu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsubls16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsubls32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsubls8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsublu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsublu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsublu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsubs16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsubs32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsubs64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsubs8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsubu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsubu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsubu64.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsubu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsubws16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsubws32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsubws8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsubwu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsubwu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vsubwu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vtbl1p8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vtbl1s8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vtbl1u8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vtbl2p8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vtbl2s8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vtbl2u8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vtbl3p8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vtbl3s8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vtbl3u8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vtbl4p8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vtbl4s8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vtbl4u8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vtbx1p8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vtbx1s8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vtbx1u8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vtbx2p8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vtbx2s8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vtbx2u8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vtbx3p8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vtbx3s8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vtbx3u8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vtbx4p8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vtbx4s8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vtbx4u8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vtrnQf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vtrnQp16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vtrnQp8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vtrnQs16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vtrnQs32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vtrnQs8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vtrnQu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vtrnQu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vtrnQu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vtrnf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vtrnp16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vtrnp8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vtrns16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vtrns32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vtrns8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vtrnu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vtrnu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vtrnu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vtstQp8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vtstQs16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vtstQs32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vtstQs8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vtstQu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vtstQu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vtstQu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vtstp8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vtsts16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vtsts32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vtsts8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vtstu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vtstu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vtstu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vuzpQf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vuzpQp16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vuzpQp8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vuzpQs16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vuzpQs32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vuzpQs8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vuzpQu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vuzpQu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vuzpQu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vuzpf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vuzpp16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vuzpp8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vuzps16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vuzps32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vuzps8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vuzpu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vuzpu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vuzpu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vzipQf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vzipQp16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vzipQp8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vzipQs16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vzipQs32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vzipQs8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vzipQu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vzipQu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vzipQu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vzipf32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vzipp16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vzipp8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vzips16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vzips32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vzips8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vzipu16.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vzipu32.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/neon/vzipu8.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/vfp-ldmdbd.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/vfp-ldmdbs.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/vfp-ldmiad.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/vfp-ldmias.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/vfp-stmdbd.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/vfp-stmdbs.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/vfp-stmiad.c
    llvm-gcc-4.2/trunk/gcc/testsuite/gcc.target/arm/vfp-stmias.c
    llvm-gcc-4.2/trunk/gcc/testsuite/obj-c++.dg/objc2-protocol-3.mm
    llvm-gcc-4.2/trunk/gcc/testsuite/obj-c++.dg/property-as-initializer.mm
    llvm-gcc-4.2/trunk/gcc/testsuite/objc.dg/objc2-protocol-3.m
Modified:
    llvm-gcc-4.2/trunk/ChangeLog.apple
    llvm-gcc-4.2/trunk/build_gcc
    llvm-gcc-4.2/trunk/gcc/ChangeLog.apple
    llvm-gcc-4.2/trunk/gcc/Makefile.in
    llvm-gcc-4.2/trunk/gcc/builtin-attrs.def
    llvm-gcc-4.2/trunk/gcc/builtins.def
    llvm-gcc-4.2/trunk/gcc/c-opts.c
    llvm-gcc-4.2/trunk/gcc/c-parser.c
    llvm-gcc-4.2/trunk/gcc/config.gcc
    llvm-gcc-4.2/trunk/gcc/config/arm/aof.h
    llvm-gcc-4.2/trunk/gcc/config/arm/aout.h
    llvm-gcc-4.2/trunk/gcc/config/arm/arm-cores.def
    llvm-gcc-4.2/trunk/gcc/config/arm/arm-modes.def
    llvm-gcc-4.2/trunk/gcc/config/arm/arm-protos.h
    llvm-gcc-4.2/trunk/gcc/config/arm/arm-tune.md
    llvm-gcc-4.2/trunk/gcc/config/arm/arm.c
    llvm-gcc-4.2/trunk/gcc/config/arm/arm.h
    llvm-gcc-4.2/trunk/gcc/config/arm/arm.md
    llvm-gcc-4.2/trunk/gcc/config/arm/arm.opt
    llvm-gcc-4.2/trunk/gcc/config/arm/bpabi.S
    llvm-gcc-4.2/trunk/gcc/config/arm/cirrus.md
    llvm-gcc-4.2/trunk/gcc/config/arm/coff.h
    llvm-gcc-4.2/trunk/gcc/config/arm/constraints.md
    llvm-gcc-4.2/trunk/gcc/config/arm/darwin.h
    llvm-gcc-4.2/trunk/gcc/config/arm/elf.h
    llvm-gcc-4.2/trunk/gcc/config/arm/fpa.md
    llvm-gcc-4.2/trunk/gcc/config/arm/ieee754-df.S
    llvm-gcc-4.2/trunk/gcc/config/arm/ieee754-sf.S
    llvm-gcc-4.2/trunk/gcc/config/arm/iwmmxt.md
    llvm-gcc-4.2/trunk/gcc/config/arm/lib1funcs.asm
    llvm-gcc-4.2/trunk/gcc/config/arm/libunwind.S
    llvm-gcc-4.2/trunk/gcc/config/arm/pr-support.c
    llvm-gcc-4.2/trunk/gcc/config/arm/predicates.md
    llvm-gcc-4.2/trunk/gcc/config/arm/t-arm
    llvm-gcc-4.2/trunk/gcc/config/arm/t-arm-elf
    llvm-gcc-4.2/trunk/gcc/config/arm/t-darwin
    llvm-gcc-4.2/trunk/gcc/config/arm/t-pe
    llvm-gcc-4.2/trunk/gcc/config/arm/unwind-arm.c
    llvm-gcc-4.2/trunk/gcc/config/arm/vfp.md
    llvm-gcc-4.2/trunk/gcc/config/darwin-c.c
    llvm-gcc-4.2/trunk/gcc/config/darwin-driver.c
    llvm-gcc-4.2/trunk/gcc/config/darwin.h
    llvm-gcc-4.2/trunk/gcc/config/rs6000/rs6000.c
    llvm-gcc-4.2/trunk/gcc/configure.ac
    llvm-gcc-4.2/trunk/gcc/cp/ChangeLog.apple
    llvm-gcc-4.2/trunk/gcc/cp/cvt.c
    llvm-gcc-4.2/trunk/gcc/cp/decl2.c
    llvm-gcc-4.2/trunk/gcc/cp/name-lookup.c
    llvm-gcc-4.2/trunk/gcc/doc/extend.texi
    llvm-gcc-4.2/trunk/gcc/doc/invoke.texi
    llvm-gcc-4.2/trunk/gcc/ifcvt.c
    llvm-gcc-4.2/trunk/gcc/libgcov.c
    llvm-gcc-4.2/trunk/gcc/local-alloc.c
    llvm-gcc-4.2/trunk/gcc/varasm.c
    llvm-gcc-4.2/trunk/gcc/version.c

Modified: llvm-gcc-4.2/trunk/ChangeLog.apple
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/ChangeLog.apple?rev=76781&r1=76780&r2=76781&view=diff

==============================================================================
--- llvm-gcc-4.2/trunk/ChangeLog.apple (original)
+++ llvm-gcc-4.2/trunk/ChangeLog.apple Wed Jul 22 15:36:27 2009
@@ -1,3 +1,16 @@
+2009-05-27  Bob Wilson  <bob.wilson at apple.com>
+
+	Radar 6915254
+	* build_gcc: When the Legacy PDK (or whatever B&I uses) is not
+	available, fall back to use the internal iPhone SDK with the tools
+	from the iPhone platform directory.
+
+2009-02-25  Jim Grosbach <grosbach at apple.com>
+
+	Radar 6611402
+	* build_gcc (ARM_MAKEFLAGS, ARM_MULTILIB_ARCHS): New.
+	* build_libgcc (ARM_MAKEFLAGS, ARM_MULTILIB_ARCHS): New.
+
 2008-10-09  Devang Patel  <dpatel at apple.com>
 
        Radar 6184418
@@ -30,6 +43,19 @@
 	* driverdriver.c (main): Instead of aborting, print a fatal error when
 	there is no argument after -o.
 
+2008-08-07  Jim Grosbach <grosbach at apple.com>
+
+	* build_libgcc: enable building fat for v4/v6/v7
+
+2008-06-27  Josh Conner  <jconner at apple.com>
+
+	* build_libgcc: s/[^-]-enable-werror/--enable-werror/
+	Also, strip out --enable-werror for ARM builds.
+
+2008-06-19  Josh Conner  <jconner at apple.com>
+
+	* build_gcc: s/[^-]-enable-werror/--enable-werror/g
+
 2008-06-04  Josh Conner  <jconner at apple.com>
 
 	Radar 5961147

Modified: llvm-gcc-4.2/trunk/build_gcc
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/build_gcc?rev=76781&r1=76780&r2=76781&view=diff

==============================================================================
--- llvm-gcc-4.2/trunk/build_gcc (original)
+++ llvm-gcc-4.2/trunk/build_gcc Wed Jul 22 15:36:27 2009
@@ -332,7 +332,8 @@
    cd $DIR/obj-$BUILD-$t || exit 1
    if [ \! -f Makefile ]; then
     # APPLE LOCAL begin ARM ARM_CONFIGFLAGS
-    $SRC_DIR/configure $CONFIGFLAGS --enable-werror-always \
+    # APPLE LOCAL don't use --enable-werror for ARM builds
+    $SRC_DIR/configure `echo $CONFIGFLAGS | sed -e "s/--enable-werror//"`  \
      `if [ $t = 'arm' ] ; then echo $ARM_CONFIGFLAGS ; else echo $NON_ARM_CONFIGFLAGS ; fi` \
       --program-prefix=$t-apple-darwin$DARWIN_VERS- \
       --host=$BUILD-apple-darwin$DARWIN_VERS --target=$t-apple-darwin$DARWIN_VERS || exit 1
@@ -378,7 +379,8 @@
 
       if [ \! -f Makefile ]; then
 	# APPLE LOCAL begin ARM ARM_CONFIGFLAGS
-        $SRC_DIR/configure $CONFIGFLAGS \
+        # APPLE LOCAL don't use --enable-werror for ARM builds
+        $SRC_DIR/configure `echo $CONFIGFLAGS | sed -e "s/--enable-werror//"`  \
 	  `if [ $t = 'arm' ] && [ $h != 'arm' ] ; then echo $ARM_CONFIGFLAGS ; else echo $NON_ARM_CONFIGFLAGS ; fi` \
           --program-prefix=$pp \
           --host=$h-apple-darwin$DARWIN_VERS --target=$t-apple-darwin$DARWIN_VERS || exit 1
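For context (not part of the patch): the backtick substitution added in both hunks above simply filters `--enable-werror` out of the configure flags before invoking configure. A minimal sketch of that sed transform, using a hypothetical CONFIGFLAGS value:

```shell
# Hypothetical CONFIGFLAGS; the real value is assembled earlier in build_gcc.
CONFIGFLAGS="--prefix=/usr --enable-werror --disable-nls"

# Same idiom as the patch: strip --enable-werror before running configure.
STRIPPED=`echo $CONFIGFLAGS | sed -e "s/--enable-werror//"`

echo "$STRIPPED"   # -> --prefix=/usr  --disable-nls  (a doubled space remains)
```

Note the unquoted `$CONFIGFLAGS` inside the backticks: word splitting collapses whitespace before sed runs, so only the doubled space at the removal point survives, which configure tolerates.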

Modified: llvm-gcc-4.2/trunk/gcc/ChangeLog.apple
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/gcc/ChangeLog.apple?rev=76781&r1=76780&r2=76781&view=diff

==============================================================================
--- llvm-gcc-4.2/trunk/gcc/ChangeLog.apple (original)
+++ llvm-gcc-4.2/trunk/gcc/ChangeLog.apple Wed Jul 22 15:36:27 2009
@@ -1,3 +1,75 @@
+2009-07-09  Jim Grosbach <grosbach at apple.com>
+
+	Radar 6999417
+	* config/darwin.h (DARWIN_DYLIB1_SPEC): -ldylib1.o is not necessary
+	for iPhoneOS >= 3.1
+	(DARWIN_BUNDLE1_SPEC): New. -lbundle1.o is not necessary for
+	iPhoneOS >= 3.1
+	(DARWIN_CRT1_SPEC): iPhoneOS >= 3.1 should use -lcrt1.3.1.o;
+	otherwise, -lcrt1.o.
+
+2009-07-07  Stuart Hastings  <stuart at apple.com>
+
+	Radar 6939151
+	* local-alloc.c (local_alloc): Add comment, limit to a 12MB
+          bitmap.
+
+2009-07-07  Stuart Hastings  <stuart at apple.com>
+
+	Radar 6939151
+	* local-alloc.c (local_alloc): Avoid allocating huge register
+          bitmaps.  Arbitrarily set 64K pseudo-register limit for
+          reg_inheritance analysis.
+
+2009-06-25  Bob Wilson  <bob.wilson at apple.com>
+
+	Radar 6879229
+	* config/arm/arm.c (arm_override_options): Disallow -fasm-blocks.
+	* doc/invoke.texi (-fasm-blocks): Document this restriction.
+
+2009-06-17  Jim Grosbach <grosbach at apple.com>
+
+	Radar 6821124
+	* builtin-attrs.def (DEF_FORMAT_ATTRIBUTE): For Darwin, don't claim
+	the format argument to be nonnull.
+	* builtins.def (BUILT_IN_FPUTS, BUILT_IN_FPUTS_UNLOCKED,
+	BUILT_IN_PUTS, BUILT_IN_PUTS_UNLOCKED): Likewise.
+
+2009-06-17  Jim Grosbach <grosbach at apple.com>
+
+	Radar 6858124
+	* config/arm/arm.c (arm_output_mi_thunk): Stub calls aren't indirect
+	in MI thumb thunks.
+
+2009-06-15  Fariborz Jahanian <fjahanian at apple.com>
+
+	Radar 6936421
+	* cvt.c (force_rvalue): Convert property reference
+	expression to its getter call before converting to
+	rvalue.
+	* obj-c++.dg/property-as-initializer.mm: New
+
+2009-06-12  Jim Grosbach <grosbach at apple.com>
+
+	Radar 6868127
+	* doc/invoke.texi (-mthumb): Specify that thumb is the default for
+	armv7.
+
+2009-05-20  Jim Grosbach <grosbach at apple.com>
+
+	Radar 6902937
+	* config/arm/arm.c (arm_output_epilogue_vfp_restore): New function.
+	(arm_output_epilogue): Factor out common code to
+	arm_output_epilogue_vfp_restore.
+
+2009-05-19  Jim Grosbach <grosbach at apple.com>
+
+        Radar 6902792
+	* config/arm/arm.c: Add includes for stdlib.h and ctype.h.
+	(TARGET_MD_ASM_CLOBBERS): New.
+	(arm_md_asm_clobbers): New function. Add a clobber of the upper
+	half D register when a Q register clobber is used.
+
 2009-04-08  Jim Grosbach <grosbach at apple.com>
 
 	Radar 6738583
@@ -17,6 +89,21 @@
         re-setting the expression location at the end, use the input
         location for Block pointer assignments.
 
+2009-03-18  Jim Grosbach <grosbach at apple.com>
+
+        Radar 6676111 (from Mike's fix to SnowLeopard gcc 4.2 6486153)
+	* fold-const.c (extract_muldiv_1): Copy TYPE_OVERFLOW_UNDEFINED from 
+	other uses in extract_muldiv_1 to avoid optimization when overflow 
+	is defined.
+
+2009-03-18  Bob Wilson  <bob.wilson at apple.com>
+
+	Radar 6545322
+	* gcc/config/arm/ieee754-sf.S (floatundisf): Replace undefined do_itt
+	macro call by do_it with an extra "t" argument.
+	* gcc/config/arm/ieee754-df.S (floatdidf): Likewise.
+	(muldf3): Revert inexplicable change that replaced a BIC with an AND.
+
 2009-02-25  Jim Grosbach <grosbach at apple.com>
 
 	Radar 6465387
@@ -37,6 +124,31 @@
 	flag in call to copy/dispose helper functions.
 	* c-common.h (BLOCK_BYREF_CALLER): New flag.
 
+2009-02-25  Jim Grosbach <grosbach at apple.com>
+
+	Radar 6611402
+	* gcc/config/arm/t-darwin (ARM_MULTILIB_ARCHS): New.
+	* (MULTILIB_OPTIONS, MULTILIB_DIRNAMES, MULTILIB_EXCEPTIONS): 
+	Derive values from list in ARM_MULTILIB_ARCHS.
+	* gcc/config/arm/gen-darwin-multilib-exceptions.sh: New.
+	Helper function to calculate the list for MULTILIB_EXCEPTIONS.
+
+2009-02-02   Jim Grosbach <grosbach at apple.com>
+
+	Radar 5571707
+	* gcc/config/arm/darwin.h (SUBTARGET_OVERRIDE_OPTIONS): Set
+	flag to reserve R9 on v6 if target iPhone SDK is less than 3.0.
+	* gcc/config/arm/arm.c (darwin_reserve_r9_on_v6): New flag.
+	(arm_darwin_subtarget_conditional_register_usage): Conditionalize
+	making R9 available on darwin_reserve_r9_on_v6 when targeting v6.
+
+2009-01-30   Jim Grosbach <grosbach at apple.com>
+
+	Radar 6370037
+	* ifcvt.c (cond_exec_process_if_block): Move loop-local variable
+	initialization inside loop so that further iterations don't
+	incorrectly use a value of the variable from a previous iteration.
+
 2009-01-29  Josh Conner  <jconner at apple.com>
 
 	Radar 6186914, 6541440
@@ -45,6 +157,22 @@
 	(reg_loc_descriptor): Call TARGET_DWARF2_REG_HANDLER if it is defined.
 	* config/arm/arm.h (TARGET_DWARF2_REG_HANDLER): Define.
 
+2009-01-27   Jim Grosbach <grosbach at apple.com>
+
+	Radar 6483528
+	* config/arm/arm.c (arm_output_epilogue): Pop VFP registers in
+	correct order when multiple pop instructions are necessary.
+
+2009-01-22   Jim Grosbach <grosbach at apple.com>
+
+	Radar 5571707
+	* config/arm/darwin.h (SUBTARGET_CONDITIONAL_REGISTER_USAGE):
+	Allow use of R9 on v6 and v7.
+	* config/arm/arm.c (arm_darwin_subtarget_conditional_register_usage):
+	New function.
+	* config/arm/arm-protos.h 
+	(arm_darwin_subtarget_conditional_register_usage): Ditto.
+
 2009-01-22  Stuart Hastings  <stuart at apple.com>
 
 	Radar 6515001
@@ -271,6 +399,39 @@
 	* config/darwin-c.c (darwin_cpp_builtins): Define __weak even when
 	-fobjc-gc is off.
 
+2008-11-18   Jim Grosbach <grosbach at apple.com>
+	Radar 6361608
+	* config/arm/arm.c (arm_output_mi_thunk): C++ thunks for multiple
+	inheritance need to account for the different PC-relative offset
+	bias of the add instruction in Thumb-2 vs. ARM mode (4 vs. 8).
+
+2008-11-13   Jim Grosbach <grosbach at apple.com>
+
+	Radar 6357106
+	* config/arm/arm.c (arm_function_boundary): Thumb2 functions
+	should be 32-bit aligned.
+
+2008-11-13   Jim Grosbach <grosbach at apple.com>
+	Radar 6333007
+	* config/darwin-driver.c (darwin_default_min_version): Bump default
+	iPhoneos-version-min to 3.0.
+	* config/arm/darwin.h (DARWIN_MINVERSION_SPEC,
+	SUBTARGET_OVERRIDE_OPTIONS): Ditto.
+
+2008-11-13   Jim Grosbach <grosbach at apple.com>
+
+	Radar 6345234
+	* config/arm/arm.c (arm_file_start): Emit section directives
+	for text sections at the start of file, as for PPC.
+
+2008-11-11  Fariborz Jahanian <fjahanian at apple.com>
+
+        Radar 6351990
+	* config/darwin.c (machopic_select_section): Accommodate change
+	for meta-data prefix name.
+	* config/darwin-sections.def (__objc_protorefs, __objc_protolist):
+	These sections are now coalesced.
+
 2008-11-4   Jim Grosbach <grosbach at apple.com>
 
 	Radar 6327222
@@ -1616,6 +1777,111 @@
 	* stmt.c (expand_asm_operands): Revise to handle new %ebx
           allocation policy.
 
+2008-10-30  Josh Conner  <jconner at apple.com>
+
+	Radar 6297258
+	* config/arm/arm.c (arm_output_mi_thunk): Emit 32-bit branch
+	for thumb2 target.
+
+2008-10-24  Josh Conner  <jconner at apple.com>
+
+	Radar 6305331
+	* config/arm/arm.c (arm_asm_output_addr_diff_vec): Disable support
+	for compact switch tables with -mlongcall.
+	* config/arm/arm.h (TARGET_COMPACT_SWITCH_TABLES): New definition...
+	(CASE_VECTOR_SHORTEN_MODE): ...use it.
+	* config/arm/arm.md (casesi): ...use it.
+	(thumb_casesi_internal): ...use it.
+
+2008-10-20  Fariborz Jahanian <fjahanian at apple.com>
+
+        Radar 6255595
+	* config/darwin.c (output_objc_section_asm_op): Add two new section names.
+	(objc_internal_variable_name): New routine.
+	(machopic_select_section): Call objc_internal_variable_name.
+	* config/darwin-sections.def: Define two new kinds of
+	__DATA section.
+
+2008-10-16  Josh Conner  <jconner at apple.com>
+
+	Radar 6293989
+	* arm.c (arm_legitimate_address_p): Don't allow pre-inc/dec or
+	post-dec addressing for NEON vector operations.
+	(thumb2_legitimate_address_p): Likewise.
+
+2008-10-16  Josh Conner  <jconner at apple.com>
+
+        Radar 6288519
+        * config/arm/arm.md (casesi): Disallow for TARGET_THUMB &&
+        TARGET_LONG_CALLS.
+        (thumb_casesi_internal): Likewise.
+
+2008-10-13  Josh Conner  <jconner at apple.com>
+
+	Radar 6280380
+	* config/arm/arm.c (arm_final_prescan_insn): Check predicability
+	of insns before allowing them to be predicated.
+
+2008-10-09  Josh Conner  <jconner at apple.com>
+
+	Radar 6279481
+	* config/arm/arm.c (arm_adjust_insn_length): Don't adjust thumb-2
+	epilogue lengths.
+
+2008-10-09  Josh Conner  <jconner at apple.com>
+
+	Radar 6267907
+	* config/arm/thumb2.md (thumb2_casesi_internal): Mark scratch reg as
+	early-clobber.
+	(thumb2_casesi_internal_pic): Remove this define_insn.
+	* config/arm/arm.md (casesi): Don't use thumb2_casesi_internal_pic.
+
+2008-10-03  Josh Conner  <jconner at apple.com>
+
+	Radar 6268204
+	* doc/invoke.texi (-mkernel): Document that -mlong-branch
+	is set for ARM.
+	* config/arm/darwin.h (SUBTARGET_OVERRIDE_OPTIONS): Have -mkernel
+	and -fapple-kext imply -mlong-branch.
+
+2008-10-03  Josh Conner  <jconner at apple.com>
+
+	Radar 6261739
+	* config/arm/thumb2.md (thumb2_cbz): Check for low-register
+	use when calculating cost.
+	(thumb2_cbnz): Likewise.
+
+2008-10-02  Josh Conner  <jconner at apple.com>
+
+	Radar 6134442
+	* config/arm/t-darwin: Re-enable multi-libs for v5.
+
+2008-09-30  Josh Conner  <jconner at apple.com>
+
+	Radar 6197406
+	* config/arm/neon.md (mulsf3addsf_neon, mulsf3subsf_neon):
+	Remove.
+
+2008-09-30  Josh Conner  <jconner at apple.com>
+
+	Radar 6251664
+	* config/arm/neon.md (mulsf3subsf_neon): Reverse operands to
+	minus.
+
+2008-09-30  Josh Conner  <jconner at apple.com>
+
+	Radar 6160917
+	* config/arm/arm.c (neon_vector_mem_operand): Call
+	arm_legitimate_index_p.
+	(neon_reload_in): New function.
+	(neon_reload_out): New function.
+	* config/arm/arm-protos.h (neon_reload_in): New proto.
+	(neon_reload_out): New proto.
+	* config/arm/neon.md (reload_in<mode>): New expand pattern.
+	(reload_out<mode>): New expand pattern.
+	* config/arm/predicates.md (neon_reload_mem_operand): New
+	predicate.
+
 2008-05-12  Fariborz Jahanian <fjahanian at apple.com>
 
         Radar 5925784
@@ -1635,6 +1901,18 @@
 	(TARGET_LIBGCC2_CFLAGS): Remove armv6.
 	(DARWIN_EXTRA_CRT_BUILD_FLAGS): Remove.
 
+2008-09-19  Josh Conner  <jconner at apple.com>
+
+	Radar 6216388
+	* config/arm/arm.c (TARGET_DEFAULT_TARGET_FLAGS): Remove
+	MASK_SCHED_PROLOG.
+
+2008-09-19  Josh Conner  <jconner at apple.com>
+
+	Radar 6196857
+	* config/arm/arm.c (arm_output_epilogue): Use pop instead of ldmfd
+	on Thumb-2.
+
 2008-04-30  Caroline Tice  <ctice at apple.com>
 
         Radar 5811961
@@ -1647,6 +1925,54 @@
         Radar 5803005 (tweaked)
 	* c-typeck.c (build_external_ref): Refactored global decl checks.
 
+2008-09-11  Josh Conner  <jconner at apple.com>
+
+	Radar 6150882
+	* config/arm/ieee754-df.S: Add do_it macros to allow building
+	thumb-2.
+	* config/arm/ieee754-sf.S: Likewise.
+	* config/arm/lib1funcs.asm: Redefine macros to allow building
+	thumb-2.  Disable compact switch table code for thumb-2.
+	* config/arm/arm.c (arm_override_options): armv7a implies thumb-2.
+	(arm_legitimate_index_p): Require VECTOR_MODE_P to use NEON
+	instruction ranges.
+	(thumb2_legitimate_index_p): Likewise.
+	* config/arm/arm.h (TARGET_THUMB): Define.
+	* config/arm/arm.opt (thumb_option): New option.
+
+2008-09-09  Josh Conner  <jconner at apple.com>
+
+	Radar 6195983
+	* config/arm/arm.c (thumb2_output_casesi): Fix table invocation
+	instructions.
+	(arm_asm_output_addr_diff_vec): Emit b.w instead of b for
+	Thumb-2.
+
+2008-09-03  Jim Grosbach <grosbach at apple.com>
+
+	* Radar 6150859
+	* config/arm/arm.c (arm_print_operand): Add 'p' output code to
+	print a d{0-15} reference to a 32 bit register.
+	* config/arm/arm.h (VFP_REGNO_OK_FOR_SINGLE): Restrict SF mode
+	values to even numbered float registers so they can be referenced
+	by the 32x2 NEON instructions.
+	* config/arm/vfp.md (*addsf3_vfp, *subsf3_vfp, *mulsf3_vfp,
+	*mulsf3addsf_vfp, *mulsf3subsf_vfp): Disable for TARGET_NEON
+	since we'll be using the NEON instructions instead.
+	* config/arm/neon.md (*addsf3_neon, *subsf3_neon, *mulsf3_neon,
+	*mulsf3addsf_neon, *mulsf3subsf_neon): New productions.
+
+2008-08-26  Jim Grosbach <grosbach at apple.com>
+
+	Radar 6152801
+	* config/arm/arm.c (thumb2_output_casesi, 
+	arm_asm_output_addr_diff_vec): For thumb2, use direct
+	branches for SImode switch tables (like we do for ARM mode).
+
+2008-08-25  Josh Conner  <jconner at apple.com>
+
+	* config/arm/t-darwin: Disable multi-libs for v5.
+
 2008-04-24  Fariborz Jahanian <fjahanian at apple.com>
 
         Radar 5803005
@@ -1678,6 +2004,24 @@
 	* c-typeck.c (types_are_closure_compatible): Check for underlying
 	pointer types as well.
 
+2008-08-11  Josh Conner  <jconner at apple.com>
+
+	Radar 6134442
+	* config/arm/t-darwin: Enable multi-libs for v5.
+
+2008-08-06  Jim Grosbach <grosbach at apple.com>
+
+	Radar 6093696
+	* config/arm/t-darwin (t-darwin): Enable multi-libs for v7
+
+2008-08-06  Jim Grosbach <grosbach at apple.com>
+
+	Radar 6129445
+	* config/arm/arm.md (mulsidi3, umulsidi3, smulsi3_highpart,
+	umulsi3_highpart): Remove conditionalization on non-v6
+	(mulsidi3_v6, umulsidi3_v6, smulsi3_highpart_v6,
+	umulsi3_highpart_v6): Remove production.
+
 2008-04-23  Fariborz Jahanian <fjahanian at apple.com>
 
         Radar 5882266
@@ -2127,6 +2471,20 @@
 	(convert_to_closure_pointer): New
 	* convert.h (convert_to_closure_pointer): New declaration.
 
+	Radar 6093388
+	* config/arm/arm.c (FL_FOR_ARCH7A): Add FL_NEON.
+	(arm_arch7a): New global flag.
+	(arm_override_options): Set arm_arch7a flag.
+	* config/arm/darwin.h (FPUTYPE_DEFAULT): Default to Neon for v7.
+	(TARGET_DEFAULT_FLOAT_ABI): Default to ARM_FLOAT_ABI_SOFTFP for v7,
+	like we do for v6.
+
+2008-07-23  Josh Conner  <jconner at apple.com>
+
+	Radar 6090740
+	* config/arm/arm.c (arm_output_epilogue): Use SP not IP
+	as scratch when restoring VFP registers.
+
 2008-07-22  Josh Conner  <jconner at apple.com>
 
 	Radar 6077274
@@ -2153,6 +2511,11 @@
 	too.
 	(arm_darwin_encode_section_info): Likewise.
 
+2008-07-01  Jim Grosbach <grosbach at apple.com>
+	Radar 6040923
+	* config/arm/arm.md (*mulsi3_compare0_v6, 
+	*mulsi_compare0_scratch_v6): Remove "&& optimize_size" conditional.
+
 2008-06-25  Josh Conner  <jconner at apple.com>
 
 	Radar 6008578
@@ -2639,6 +3002,33 @@
 	* config/i386/i386.c (ix86_builtins): Need APPLE LOCAL markers
 	  around deletion.
 
+2008-03-07  Jim Grosbach <grosbach at apple.com>
+
+	* global.c (set_preference): The apple local change to not
+	punt on subregs breaks under ARM. Go back to mainline behaviour
+	for that case.
+
+2008-03-07  Jim Grosbach <grosbach at apple.com>
+
+	* config/arm/arm.h (CASE_VECTOR_SHORTEN_MODE): Do not shorten
+	VECTOR_DIFF tables in Thumb2 mode. The length calculations for 
+	many variable length instructions are only approximate, but the
+	shorter tables require them to be exact and are thus often
+	incorrect.
+
+2008-03-07  Jim Grosbach <grosbach at apple.com>
+
+	* config/arm/arm.md: Split out a separate thumb2_jump pattern
+	to enable accurate length attribute calculation.
+
+2008-03-07  Jim Grosbach <grosbach at apple.com>
+
+	* config/arm/arm.h: Functionalize ASM_OUTPUT_ADDR_DIFF_VEC
+	* config/arm/arm-protos.h: Ditto
+	* config/arm/arm.c (arm_asm_output_addr_diff_vec): New function.
+	Adjust offset calculations for Thumb2 as the table layout does
+	not need the Apple specific compact switch table bits.
+
 2008-03-07  Josh Conner  <jconner at apple.com>
 
 	Radar 5782111

Modified: llvm-gcc-4.2/trunk/gcc/Makefile.in
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/gcc/Makefile.in?rev=76781&r1=76780&r2=76781&view=diff

==============================================================================
--- llvm-gcc-4.2/trunk/gcc/Makefile.in (original)
+++ llvm-gcc-4.2/trunk/gcc/Makefile.in Wed Jul 22 15:36:27 2009
@@ -3727,7 +3727,7 @@
 	 gcov.texi trouble.texi bugreport.texi service.texi		\
 	 contribute.texi compat.texi funding.texi gnu.texi gpl.texi	\
 	 fdl.texi contrib.texi cppenv.texi cppopts.texi			\
-	 implement-c.texi sourcecode.texi
+	 implement-c.texi arm-neon-intrinsics.texi sourcecode.texi
 # APPLE LOCAL end GPL compliance
 
 TEXI_GCCINT_FILES = gccint.texi gcc-common.texi gcc-vers.texi		\
@@ -3796,21 +3796,25 @@
       doc/cppinternals.dvi lang.dvi
 
 doc/%.dvi: %.texi
-	$(TEXI2DVI) -I . -I $(abs_docdir) -I $(abs_docdir)/include -o $@ $<
+	# APPLE LOCAL use makeinfo to expand macros
+	$(TEXI2DVI) -e -I . -I $(abs_docdir) -I $(abs_docdir)/include -o $@ $<
 
 # Duplicate entry to handle renaming of gccinstall.dvi
 doc/gccinstall.dvi: $(TEXI_GCCINSTALL_FILES)
-	$(TEXI2DVI) -I . -I $(abs_docdir) -I $(abs_docdir)/include -o $@ $<
+	# APPLE LOCAL use makeinfo to expand macros
+	$(TEXI2DVI) -e -I . -I $(abs_docdir) -I $(abs_docdir)/include -o $@ $<
 
 pdf:: doc/gcc.pdf doc/gccint.pdf doc/gccinstall.pdf doc/cpp.pdf \
       doc/cppinternals.pdf lang.pdf
 
 doc/%.pdf: %.texi
-	$(TEXI2PDF) -I . -I $(abs_docdir) -I $(abs_docdir)/include -o $@ $<
+	# APPLE LOCAL use makeinfo to expand macros
+	$(TEXI2PDF) -e -I . -I $(abs_docdir) -I $(abs_docdir)/include -o $@ $<
 
 # Duplicate entry to handle renaming of gccinstall.pdf
 doc/gccinstall.pdf: $(TEXI_GCCINSTALL_FILES)
-	$(TEXI2PDF) -I . -I $(abs_docdir) -I $(abs_docdir)/include -o $@ $<
+	# APPLE LOCAL use makeinfo to expand macros
+	$(TEXI2PDF) -e -I . -I $(abs_docdir) -I $(abs_docdir)/include -o $@ $<
 
 # List the directories or single hmtl files which are installed by
 # install-html. The lang.html file triggers language fragments to build

Modified: llvm-gcc-4.2/trunk/gcc/builtin-attrs.def
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/gcc/builtin-attrs.def?rev=76781&r1=76780&r2=76781&view=diff

==============================================================================
--- llvm-gcc-4.2/trunk/gcc/builtin-attrs.def (original)
+++ llvm-gcc-4.2/trunk/gcc/builtin-attrs.def Wed Jul 22 15:36:27 2009
@@ -150,11 +150,21 @@
 			ATTR_NOTHROW_NONNULL)
 
 /* Construct a tree for a format attribute.  */
+/* APPLE LOCAL begin 6821124 darwin libc allows null args */
+#ifdef CONFIG_DARWIN_H
+#define DEF_FORMAT_ATTRIBUTE(TYPE, FA, VALUES)				 \
+  DEF_ATTR_TREE_LIST (ATTR_##TYPE##_##VALUES, ATTR_NULL,		 \
+		      ATTR_##TYPE, ATTR_LIST_##VALUES)			 \
+  DEF_ATTR_TREE_LIST (ATTR_FORMAT_##TYPE##_##VALUES, ATTR_FORMAT,	 \
+		      ATTR_##TYPE##_##VALUES, ATTR_NOTHROW_LIST)
+#else
 #define DEF_FORMAT_ATTRIBUTE(TYPE, FA, VALUES)				 \
   DEF_ATTR_TREE_LIST (ATTR_##TYPE##_##VALUES, ATTR_NULL,		 \
 		      ATTR_##TYPE, ATTR_LIST_##VALUES)			 \
   DEF_ATTR_TREE_LIST (ATTR_FORMAT_##TYPE##_##VALUES, ATTR_FORMAT,	 \
 		      ATTR_##TYPE##_##VALUES, ATTR_NOTHROW_NONNULL_##FA)
+#endif
+/* APPLE LOCAL end 6821124 darwin libc allows null args */
 DEF_FORMAT_ATTRIBUTE(PRINTF,1,1_0)
 DEF_FORMAT_ATTRIBUTE(PRINTF,1,1_2)
 DEF_FORMAT_ATTRIBUTE(PRINTF,2,2_0)

Modified: llvm-gcc-4.2/trunk/gcc/builtins.def
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/gcc/builtins.def?rev=76781&r1=76780&r2=76781&view=diff

==============================================================================
--- llvm-gcc-4.2/trunk/gcc/builtins.def (original)
+++ llvm-gcc-4.2/trunk/gcc/builtins.def Wed Jul 22 15:36:27 2009
@@ -531,8 +531,15 @@
 DEF_EXT_LIB_BUILTIN    (BUILT_IN_PUTC_UNLOCKED, "putc_unlocked", BT_FN_INT_INT_FILEPTR, ATTR_NONNULL_LIST)
 DEF_LIB_BUILTIN        (BUILT_IN_FPUTC, "fputc", BT_FN_INT_INT_FILEPTR, ATTR_NONNULL_LIST)
 DEF_EXT_LIB_BUILTIN    (BUILT_IN_FPUTC_UNLOCKED, "fputc_unlocked", BT_FN_INT_INT_FILEPTR, ATTR_NONNULL_LIST)
+/* APPLE LOCAL begin 6821124 darwin libc allows null args */
+#ifdef CONFIG_DARWIN_H
+DEF_LIB_BUILTIN        (BUILT_IN_FPUTS, "fputs", BT_FN_INT_CONST_STRING_FILEPTR, ATTR_NULL)
+DEF_EXT_LIB_BUILTIN    (BUILT_IN_FPUTS_UNLOCKED, "fputs_unlocked", BT_FN_INT_CONST_STRING_FILEPTR, ATTR_NULL)
+#else
 DEF_LIB_BUILTIN        (BUILT_IN_FPUTS, "fputs", BT_FN_INT_CONST_STRING_FILEPTR, ATTR_NONNULL_LIST)
 DEF_EXT_LIB_BUILTIN    (BUILT_IN_FPUTS_UNLOCKED, "fputs_unlocked", BT_FN_INT_CONST_STRING_FILEPTR, ATTR_NONNULL_LIST)
+#endif
+/* APPLE LOCAL end 6821124 darwin libc allows null args */
 DEF_LIB_BUILTIN        (BUILT_IN_FSCANF, "fscanf", BT_FN_INT_FILEPTR_CONST_STRING_VAR, ATTR_FORMAT_SCANF_2_3)
 DEF_LIB_BUILTIN        (BUILT_IN_FWRITE, "fwrite", BT_FN_SIZE_CONST_PTR_SIZE_SIZE_FILEPTR, ATTR_NONNULL_LIST)
 DEF_EXT_LIB_BUILTIN    (BUILT_IN_FWRITE_UNLOCKED, "fwrite_unlocked", BT_FN_SIZE_CONST_PTR_SIZE_SIZE_FILEPTR, ATTR_NONNULL_LIST)
@@ -540,8 +547,15 @@
 DEF_EXT_LIB_BUILTIN    (BUILT_IN_PRINTF_UNLOCKED, "printf_unlocked", BT_FN_INT_CONST_STRING_VAR, ATTR_FORMAT_PRINTF_1_2)
 DEF_LIB_BUILTIN        (BUILT_IN_PUTCHAR, "putchar", BT_FN_INT_INT, ATTR_NULL)
 DEF_EXT_LIB_BUILTIN    (BUILT_IN_PUTCHAR_UNLOCKED, "putchar_unlocked", BT_FN_INT_INT, ATTR_NULL)
+/* APPLE LOCAL begin 6821124 darwin libc allows null args */
+#ifdef CONFIG_DARWIN_H
+DEF_LIB_BUILTIN        (BUILT_IN_PUTS, "puts", BT_FN_INT_CONST_STRING, ATTR_NULL)
+DEF_EXT_LIB_BUILTIN    (BUILT_IN_PUTS_UNLOCKED, "puts_unlocked", BT_FN_INT_CONST_STRING, ATTR_NULL)
+#else
 DEF_LIB_BUILTIN        (BUILT_IN_PUTS, "puts", BT_FN_INT_CONST_STRING, ATTR_NONNULL_LIST)
 DEF_EXT_LIB_BUILTIN    (BUILT_IN_PUTS_UNLOCKED, "puts_unlocked", BT_FN_INT_CONST_STRING, ATTR_NONNULL_LIST)
+#endif
+/* APPLE LOCAL end 6821124 darwin libc allows null args */
 DEF_LIB_BUILTIN        (BUILT_IN_SCANF, "scanf", BT_FN_INT_CONST_STRING_VAR, ATTR_FORMAT_SCANF_1_2)
 DEF_C99_BUILTIN        (BUILT_IN_SNPRINTF, "snprintf", BT_FN_INT_STRING_SIZE_CONST_STRING_VAR, ATTR_FORMAT_PRINTF_3_4)
 DEF_LIB_BUILTIN        (BUILT_IN_SPRINTF, "sprintf", BT_FN_INT_STRING_CONST_STRING_VAR, ATTR_FORMAT_PRINTF_2_3)

Modified: llvm-gcc-4.2/trunk/gcc/c-opts.c
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/gcc/c-opts.c?rev=76781&r1=76780&r2=76781&view=diff

==============================================================================
--- llvm-gcc-4.2/trunk/gcc/c-opts.c (original)
+++ llvm-gcc-4.2/trunk/gcc/c-opts.c Wed Jul 22 15:36:27 2009
@@ -239,7 +239,9 @@
 
   flag_exceptions = c_dialect_cxx ();
   /* LLVM local begin One Definition Rule */
+#ifdef ENABLE_LLVM
   flag_odr = c_dialect_cxx ();
+#endif
   /* LLVM local end */
   warn_pointer_arith = c_dialect_cxx ();
   warn_write_strings = c_dialect_cxx();

Modified: llvm-gcc-4.2/trunk/gcc/c-parser.c
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/gcc/c-parser.c?rev=76781&r1=76780&r2=76781&view=diff

==============================================================================
--- llvm-gcc-4.2/trunk/gcc/c-parser.c (original)
+++ llvm-gcc-4.2/trunk/gcc/c-parser.c Wed Jul 22 15:36:27 2009
@@ -4122,8 +4122,11 @@
 	  iasm_state = iasm_asm;
 	  inside_iasm_block = true;
 	  iasm_kill_regs = true;
-          /* LLVM LOCAL */
+          /* LLVM LOCAL begin */
+#ifdef ENABLE_LLVM
           iasm_label_counter = 0;
+#endif
+          /* LLVM LOCAL end */
 	  iasm_in_decl = false;
 	  c_parser_iasm_line_seq_opt (parser);
 	  stmt = NULL_TREE;
@@ -9023,8 +9026,11 @@
   iasm_state = iasm_asm;
   inside_iasm_block = true;
   iasm_kill_regs = true;
-  /* LLVM LOCAL */
+  /* LLVM LOCAL begin */
+#ifdef ENABLE_LLVM
   iasm_label_counter = 0;
+#endif
+  /* LLVM LOCAL end */
   stmt = c_begin_compound_stmt (true);
   /* Parse an (optional) statement-seq.  */
   c_parser_iasm_line_seq_opt (parser);
@@ -9045,8 +9051,11 @@
   iasm_state = iasm_asm;
   inside_iasm_block = true;
   iasm_kill_regs = true;
-  /* LLVM LOCAL */
+  /* LLVM LOCAL begin */
+#ifdef ENABLE_LLVM
   iasm_label_counter = 0;
+#endif
+  /* LLVM LOCAL end */
   stmt = c_begin_compound_stmt (true);
   if (!c_parser_iasm_bol (parser))
     {    

Modified: llvm-gcc-4.2/trunk/gcc/config.gcc
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/gcc/config.gcc?rev=76781&r1=76780&r2=76781&view=diff

==============================================================================
--- llvm-gcc-4.2/trunk/gcc/config.gcc (original)
+++ llvm-gcc-4.2/trunk/gcc/config.gcc Wed Jul 22 15:36:27 2009
@@ -260,7 +260,8 @@
 	;;
 arm*-*-*)
 	cpu_type=arm
-	extra_headers="mmintrin.h"
+        # APPLE LOCAL ARM v7 support, merge from Codesourcery.
+	extra_headers="mmintrin.h arm_neon.h"
 	;;
 bfin*-*)
 	cpu_type=bfin
@@ -809,7 +810,7 @@
 	extra_options="${extra_options} arm/darwin.opt"
         tm_file="${tm_file} arm/darwin.h"
         tmake_file="${tmake_file} arm/t-darwin arm/t-slibgcc-iphoneos"
-	extra_headers=
+	extra_headers="arm_neon.h"
         ;;
 # APPLE LOCAL end ARM darwin target
 arm*-wince-pe*)
@@ -2779,14 +2780,14 @@
 			fi
 		done
 
-		# LLVM LOCAL begin
 		case "$with_arch" in
+                # APPLE LOCAL begin ARM v7 support.
 		"" \
 		| armv[23456] | armv2a | armv3m | armv4t | armv5t \
 		| armv5te | armv6j |armv6k | armv6z | armv6zk \
 		| armv7-a \
 		| iwmmxt | ep9312)
-		# LLVM LOCAL end
+                # APPLE LOCAL end ARM v7 support.
 			# OK
 			;;
 		*)
@@ -2807,8 +2808,10 @@
 		esac
 
 		case "$with_fpu" in
+                # APPLE LOCAL begin ARM v7 support, merge from Codesourcery.
 		"" \
-		| fpa | fpe2 | fpe3 | maverick | vfp )
+		| fpa | fpe2 | fpe3 | maverick | vfp | vfp3 | neon)
+                # APPLE LOCAL end ARM v7 support, merge from Codesourcery.
 			# OK
 			;;
 		*)

Modified: llvm-gcc-4.2/trunk/gcc/config/arm/aof.h
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/gcc/config/arm/aof.h?rev=76781&r1=76780&r2=76781&view=diff

==============================================================================
--- llvm-gcc-4.2/trunk/gcc/config/arm/aof.h (original)
+++ llvm-gcc-4.2/trunk/gcc/config/arm/aof.h Wed Jul 22 15:36:27 2009
@@ -187,7 +187,12 @@
 #define CTORS_SECTION_ASM_OP "\tAREA\t|C$$gnu_ctorsvec|, DATA, READONLY"
 #define DTORS_SECTION_ASM_OP "\tAREA\t|C$$gnu_dtorsvec|, DATA, READONLY"
 
-/* Output of Assembler Instructions.  */
+/* APPLE LOCAL begin v7 support. Merge from mainline */
+/* Output of Assembler Instructions.  Note that the ?xx registers are
+   there so that VFPv3/NEON registers D16-D31 have the same spacing as D0-D15
+   (each of which is overlaid on two S registers), although there are no
+   actual single-precision registers which correspond to D16-D31.  */
+/* APPLE LOCAL end v7 support. Merge from mainline */
 
 #define REGISTER_NAMES				\
 {						\
@@ -210,7 +215,13 @@
   "s0",  "s1",  "s2",  "s3",  "s4",  "s5",  "s6",  "s7",  \
   "s8",  "s9",  "s10", "s11", "s12", "s13", "s14", "s15", \
   "s16", "s17", "s18", "s19", "s20", "s21", "s22", "s23", \
-  "s24", "s25", "s26", "s27", "s28", "s29", "s30", "s31",  \
+/* APPLE LOCAL begin v7 support. Merge from mainline */
+  "s24", "s25", "s26", "s27", "s28", "s29", "s30", "s31", \
+  "d16", "?16", "d17", "?17", "d18", "?18", "d19", "?19", \
+  "d20", "?20", "d21", "?21", "d22", "?22", "d23", "?23", \
+  "d24", "?24", "d25", "?25", "d26", "?26", "d27", "?27", \
+  "d28", "?28", "d29", "?29", "d30", "?30", "d31", "?31", \
+/* APPLE LOCAL end v7 support. Merge from mainline */
   "vfpcc"					\
 }
 
@@ -232,22 +243,32 @@
   {"r13", 13}, {"sp", 13}, 			\
   {"r14", 14}, {"lr", 14},			\
   {"r15", 15}, {"pc", 15},			\
-  {"d0", 63},					\
+  /* APPLE LOCAL begin v7 support. Merge from Codesourcery */
+  {"d0", 63}, {"q0", 63},			\
   {"d1", 65},					\
-  {"d2", 67},					\
+  {"d2", 67}, {"q1", 67},			\
   {"d3", 69},					\
-  {"d4", 71},					\
+  {"d4", 71}, {"q2", 71},			\
   {"d5", 73},					\
-  {"d6", 75},					\
+  {"d6", 75}, {"q3", 75},			\
   {"d7", 77},					\
-  {"d8", 79},					\
+  {"d8", 79}, {"q4", 79},			\
   {"d9", 81},					\
-  {"d10", 83},					\
+  {"d10", 83}, {"q5", 83},			\
   {"d11", 85},					\
-  {"d12", 87},					\
+  {"d12", 87}, {"q6", 87},			\
   {"d13", 89},					\
-  {"d14", 91},					\
-  {"d15", 93}					\
+  {"d14", 91}, {"q7", 91},			\
+  {"d15", 93},					\
+  {"q8", 95},					\
+  {"q9", 99},					\
+  {"q10", 103},					\
+  {"q11", 107},					\
+  {"q12", 111},					\
+  {"q13", 115},					\
+  {"q14", 119},					\
+  {"q15", 123}					\
+  /* APPLE LOCAL end v7 support. Merge from Codesourcery */
 }
 
 #define REGISTER_PREFIX "__"
@@ -258,18 +279,49 @@
 #define ARM_MCOUNT_NAME "_mcount"
 
 /* Output of Dispatch Tables.  */
-#define ASM_OUTPUT_ADDR_DIFF_ELT(STREAM, BODY, VALUE, REL)			\
-  do										\
-    {										\
-      if (TARGET_ARM)								\
-        fprintf ((STREAM), "\tb\t|L..%d|\n", (VALUE));				\
-      else									\
-        fprintf ((STREAM), "\tDCD\t|L..%d| - |L..%d|\n", (VALUE), (REL));	\
-    }										\
+/* APPLE LOCAL begin v7 support. Merge from mainline */
+#define ASM_OUTPUT_ADDR_DIFF_ELT(STREAM, BODY, VALUE, REL)		\
+  do									\
+    {									\
+      if (TARGET_ARM)							\
+        fprintf ((STREAM), "\tb\t|L..%d|\n", (VALUE));			\
+      else if (TARGET_THUMB1)						\
+        fprintf ((STREAM), "\tDCD\t|L..%d| - |L..%d|\n", (VALUE), (REL)); \
+      else /* Thumb-2 */						\
+	{								\
+	  switch (GET_MODE (BODY))					\
+	    {								\
+	    case QImode: /* TBB */					\
+	      asm_fprintf (STREAM, "\tDCB\t(|L..%d| - |L..%d|)/2\n",	\
+			   VALUE, REL);					\
+	      break;							\
+	    case HImode: /* TBH */					\
+	      asm_fprintf (STREAM, "\tDCW\t(|L..%d| - |L..%d|)/2\n",	\
+			   VALUE, REL);					\
+	      break;							\
+	    case SImode:						\
+	      if (flag_pic)						\
+		asm_fprintf (STREAM, "\tDCD\t|L..%d| + 1 - |L..%d|\n",	\
+			     VALUE, REL);				\
+	      else							\
+		asm_fprintf (STREAM, "\tDCD\t|L..%d| + 1\n", VALUE);	\
+	      break;							\
+	    default:							\
+	      gcc_unreachable();					\
+	    }								\
+	}								\
+    }									\
   while (0)
 
-#define ASM_OUTPUT_ADDR_VEC_ELT(STREAM, VALUE)	\
-  fprintf ((STREAM), "\tDCD\t|L..%d|\n", (VALUE))
+#define ASM_OUTPUT_ADDR_VEC_ELT(STREAM, VALUE)			\
+  do								\
+    {								\
+      gcc_assert (!TARGET_THUMB2);				\
+      fprintf ((STREAM), "\tDCD\t|L..%d|\n", (VALUE));		\
+    }								\
+  while (0)
+
+/* APPLE LOCAL end v7 support. Merge from mainline */
 
 /* A label marking the start of a jump table is a data label.  */
 #define ASM_OUTPUT_CASE_LABEL(STREAM, PREFIX, NUM, TABLE)	\

Modified: llvm-gcc-4.2/trunk/gcc/config/arm/aout.h
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/gcc/config/arm/aout.h?rev=76781&r1=76780&r2=76781&view=diff

==============================================================================
--- llvm-gcc-4.2/trunk/gcc/config/arm/aout.h (original)
+++ llvm-gcc-4.2/trunk/gcc/config/arm/aout.h Wed Jul 22 15:36:27 2009
@@ -47,7 +47,11 @@
 #define LOCAL_LABEL_PREFIX 	""
 #endif
 
-/* The assembler's names for the registers.  */
+/* APPLE LOCAL begin v7 support. Merge from mainline */
+/* The assembler's names for the registers.  Note that the ?xx registers are
+   there so that VFPv3/NEON registers D16-D31 have the same spacing as D0-D15
+   (each of which is overlaid on two S registers), although there are no
+   actual single-precision registers which correspond to D16-D31.  */
 #ifndef REGISTER_NAMES
 #define REGISTER_NAMES				   \
 {				                   \
@@ -68,9 +72,14 @@
   "s8",  "s9",  "s10", "s11", "s12", "s13", "s14", "s15", \
   "s16", "s17", "s18", "s19", "s20", "s21", "s22", "s23", \
   "s24", "s25", "s26", "s27", "s28", "s29", "s30", "s31", \
+  "d16", "?16", "d17", "?17", "d18", "?18", "d19", "?19", \
+  "d20", "?20", "d21", "?21", "d22", "?22", "d23", "?23", \
+  "d24", "?24", "d25", "?25", "d26", "?26", "d27", "?27", \
+  "d28", "?28", "d29", "?29", "d30", "?30", "d31", "?31", \
   "vfpcc"					   \
 }
 #endif
+/* APPLE LOCAL end v7 support. Merge from mainline */
 
 #ifndef ADDITIONAL_REGISTER_NAMES
 #define ADDITIONAL_REGISTER_NAMES		\
@@ -158,22 +167,32 @@
   {"mvdx13", 40},				\
   {"mvdx14", 41},				\
   {"mvdx15", 42},				\
-  {"d0", 63},					\
+/* APPLE LOCAL begin v7 support. Merge from Codesourcery */ \
+  {"d0", 63}, {"q0", 63},			\
   {"d1", 65},					\
-  {"d2", 67},					\
+  {"d2", 67}, {"q1", 67},			\
   {"d3", 69},					\
-  {"d4", 71},					\
+  {"d4", 71}, {"q2", 71},			\
   {"d5", 73},					\
-  {"d6", 75},					\
+  {"d6", 75}, {"q3", 75},			\
   {"d7", 77},					\
-  {"d8", 79},					\
+  {"d8", 79}, {"q4", 79},			\
   {"d9", 81},					\
-  {"d10", 83},					\
+  {"d10", 83}, {"q5", 83},			\
   {"d11", 85},					\
-  {"d12", 87},					\
+  {"d12", 87}, {"q6", 87},			\
   {"d13", 89},					\
-  {"d14", 91},					\
+  {"d14", 91}, {"q7", 91},			\
   {"d15", 93},					\
+  {"q8", 95},					\
+  {"q9", 99},					\
+  {"q10", 103},					\
+  {"q11", 107},					\
+  {"q12", 111},					\
+  {"q13", 115},					\
+  {"q14", 119},					\
+  {"q15", 123}					\
+/* APPLE LOCAL end v7 support. Merge from Codesourcery */ \
 }
 #endif
 
@@ -213,19 +232,53 @@
   sprintf (STRING, "*%s%s%u", LOCAL_LABEL_PREFIX, PREFIX, (unsigned int)(NUM))
 #endif
      
+
+/* APPLE LOCAL begin v7 support. Merge from mainline */
 /* Output an element of a dispatch table.  */
-#define ASM_OUTPUT_ADDR_VEC_ELT(STREAM, VALUE)  \
-  asm_fprintf (STREAM, "\t.word\t%LL%d\n", VALUE)
+#define ASM_OUTPUT_ADDR_VEC_ELT(STREAM, VALUE)			\
+  do								\
+    {								\
+      gcc_assert (!TARGET_THUMB2);				\
+      asm_fprintf (STREAM, "\t.word\t%LL%d\n", VALUE);		\
+    }								\
+  while (0)
+	  
 
+/* Thumb-2 always uses addr_diff_elt so that the Table Branch instructions
+   can be used.  For non-pic code where the offsets are not suitable for
+   TBB/TBH, the elements are output as absolute labels.  */
 #define ASM_OUTPUT_ADDR_DIFF_ELT(STREAM, BODY, VALUE, REL)		\
   do									\
     {									\
       if (TARGET_ARM)							\
 	asm_fprintf (STREAM, "\tb\t%LL%d\n", VALUE);			\
-      else								\
+      else if (TARGET_THUMB1)						\
 	asm_fprintf (STREAM, "\t.word\t%LL%d-%LL%d\n", VALUE, REL);	\
+      else /* Thumb-2 */						\
+	{								\
+	  switch (GET_MODE (BODY))					\
+	    {								\
+	    case QImode: /* TBB */					\
+	      asm_fprintf (STREAM, "\t.byte\t(%LL%d-%LL%d)/2\n",	\
+			   VALUE, REL);					\
+	      break;							\
+	    case HImode: /* TBH */					\
+	      asm_fprintf (STREAM, "\t.2byte\t(%LL%d-%LL%d)/2\n",	\
+			   VALUE, REL);					\
+	      break;							\
+	    case SImode:						\
+	      if (flag_pic)						\
+		asm_fprintf (STREAM, "\t.word\t%LL%d+1-%LL%d\n", VALUE, REL); \
+	      else							\
+		asm_fprintf (STREAM, "\t.word\t%LL%d+1\n", VALUE);	\
+	      break;							\
+	    default:							\
+	      gcc_unreachable();					\
+	    }								\
+	}								\
     }									\
   while (0)
+/* APPLE LOCAL end v7 support. Merge from mainline */
 
 
 #undef  ASM_OUTPUT_ASCII

Modified: llvm-gcc-4.2/trunk/gcc/config/arm/arm-cores.def
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/gcc/config/arm/arm-cores.def?rev=76781&r1=76780&r2=76781&view=diff

==============================================================================
--- llvm-gcc-4.2/trunk/gcc/config/arm/arm-cores.def (original)
+++ llvm-gcc-4.2/trunk/gcc/config/arm/arm-cores.def Wed Jul 22 15:36:27 2009
@@ -115,9 +115,15 @@
 ARM_CORE("arm1176jzf-s",  arm1176jzfs,	6ZK,				 FL_LDSCHED | FL_VFPV2, 9e)
 ARM_CORE("mpcorenovfp",	  mpcorenovfp,	6K,				 FL_LDSCHED, 9e)
 ARM_CORE("mpcore",	  mpcore,	6K,				 FL_LDSCHED | FL_VFPV2, 9e)
-/* LLVM LOCAL begin */
+/* APPLE LOCAL begin v7 support. Merge from mainline */
 ARM_CORE("arm1156t2-s",	  arm1156t2s,	6T2,				 FL_LDSCHED, 9e)
+/* LLVM LOCAL begin */
 ARM_CORE("arm1156t2f-s",  arm1156t2fs,  6T2,				 FL_LDSCHED | FL_VFPV2, 9e)
+/* LLVM LOCAL end */
 ARM_CORE("cortex-a8",	  cortexa8,	7A,				 FL_LDSCHED, 9e)
+/* LLVM LOCAL begin */
 ARM_CORE("cortex-a9",	  cortexa9,	7A,				 FL_LDSCHED, 9e)
 /* LLVM LOCAL end */
+ARM_CORE("cortex-r4",	  cortexr4,	7R,				 FL_LDSCHED, 9e)
+ARM_CORE("cortex-m3",	  cortexm3,	7M,				 FL_LDSCHED, 9e)
+/* APPLE LOCAL end v7 support. Merge from mainline */

Modified: llvm-gcc-4.2/trunk/gcc/config/arm/arm-modes.def
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/gcc/config/arm/arm-modes.def?rev=76781&r1=76780&r2=76781&view=diff

==============================================================================
--- llvm-gcc-4.2/trunk/gcc/config/arm/arm-modes.def (original)
+++ llvm-gcc-4.2/trunk/gcc/config/arm/arm-modes.def Wed Jul 22 15:36:27 2009
@@ -58,3 +58,13 @@
 VECTOR_MODES (FLOAT, 8);      /*            V4HF V2SF */
 VECTOR_MODES (FLOAT, 16);     /*       V8HF V4SF V2DF */
 
+/* APPLE LOCAL begin v7 support. Merge from Codesourcery */
+/* Opaque integer modes for 3, 4, 6 or 8 Neon double registers (2 is
+   TImode).  */
+INT_MODE (EI, 24);
+INT_MODE (OI, 32);
+INT_MODE (CI, 48);
+/* ??? This should actually have 512 bits, but the precision field is
+   only 9 bits wide.  */
+FRACTIONAL_INT_MODE (XI, 511, 64);
+/* APPLE LOCAL end v7 support. Merge from Codesourcery */

Modified: llvm-gcc-4.2/trunk/gcc/config/arm/arm-protos.h
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/gcc/config/arm/arm-protos.h?rev=76781&r1=76780&r2=76781&view=diff

==============================================================================
--- llvm-gcc-4.2/trunk/gcc/config/arm/arm-protos.h (original)
+++ llvm-gcc-4.2/trunk/gcc/config/arm/arm-protos.h Wed Jul 22 15:36:27 2009
@@ -46,6 +46,8 @@
 extern void arm_expand_prologue (void);
 extern const char *arm_strip_name_encoding (const char *);
 extern void arm_asm_output_labelref (FILE *, const char *);
+/* APPLE LOCAL v7 support. Merge from mainline */
+extern void thumb2_asm_output_opcode (FILE *);
 extern unsigned long arm_current_func_type (void);
 extern HOST_WIDE_INT arm_compute_initial_elimination_offset (unsigned int,
 							     unsigned int);
@@ -75,7 +77,10 @@
 extern rtx legitimize_pic_address (rtx, enum machine_mode, rtx);
 extern rtx legitimize_tls_address (rtx, rtx);
 extern int arm_legitimate_address_p  (enum machine_mode, rtx, RTX_CODE, int);
-extern int thumb_legitimate_address_p (enum machine_mode, rtx, int);
+/* APPLE LOCAL begin v7 support. Merge from mainline */
+extern int thumb1_legitimate_address_p (enum machine_mode, rtx, int);
+extern int thumb2_legitimate_address_p  (enum machine_mode, rtx, int);
+/* APPLE LOCAL end v7 support. Merge from mainline */
 extern int thumb_legitimate_offset_p (enum machine_mode, HOST_WIDE_INT);
 extern rtx arm_legitimize_address (rtx, rtx, enum machine_mode);
 extern rtx thumb_legitimize_address (rtx, rtx, enum machine_mode);
@@ -83,16 +88,38 @@
 					    int);
 extern int arm_const_double_rtx (rtx);
 extern int neg_const_double_rtx_ok_for_fpa (rtx);
+/* APPLE LOCAL begin v7 support. Merge from Codesourcery */
+extern int vfp3_const_double_rtx (rtx);
+extern int neon_immediate_valid_for_move (rtx, enum machine_mode, rtx *, int *);
+extern int neon_immediate_valid_for_logic (rtx, enum machine_mode, int, rtx *,
+					   int *);
+extern char *neon_output_logic_immediate (const char *, rtx *,
+					  enum machine_mode, int, int);
+extern void neon_pairwise_reduce (rtx, rtx, enum machine_mode,
+				  rtx (*) (rtx, rtx, rtx));
+extern void neon_expand_vector_init (rtx, rtx);
+extern void neon_reinterpret (rtx, rtx);
+extern void neon_emit_pair_result_insn (enum machine_mode,
+					rtx (*) (rtx, rtx, rtx, rtx),
+					rtx, rtx, rtx);
+extern void neon_disambiguate_copy (rtx *, rtx *, rtx *, unsigned int);
+/* APPLE LOCAL end v7 support. Merge from Codesourcery */
 extern enum reg_class coproc_secondary_reload_class (enum machine_mode, rtx,
 						     bool);
 extern bool arm_tls_referenced_p (rtx);
 
 extern int cirrus_memory_offset (rtx);
 extern int arm_coproc_mem_operand (rtx, bool);
+/* APPLE LOCAL begin v7 support. Merge from Codesourcery */
+extern int neon_vector_mem_operand (rtx, bool);
+extern int neon_struct_mem_operand (rtx);
+/* APPLE LOCAL end v7 support. Merge from Codesourcery */
 extern int arm_no_early_store_addr_dep (rtx, rtx);
 extern int arm_no_early_alu_shift_dep (rtx, rtx);
 extern int arm_no_early_alu_shift_value_dep (rtx, rtx);
 extern int arm_no_early_mul_dep (rtx, rtx);
+/* APPLE LOCAL v7 support. Merge from Codesourcery */
+extern int arm_mac_accumulator_is_mul_result (rtx, rtx);
 
 extern int tls_mentioned_p (rtx);
 extern int symbol_mentioned_p (rtx);
@@ -128,6 +155,11 @@
 extern const char *output_mov_double_fpa_from_arm (rtx *);
 extern const char *output_mov_double_arm_from_fpa (rtx *);
 extern const char *output_move_double (rtx *);
+/* APPLE LOCAL begin v7 support. Merge from Codesourcery */
+extern const char *output_move_quad (rtx *);
+extern const char *output_move_vfp (rtx *operands);
+extern const char *output_move_neon (rtx *operands);
+/* APPLE LOCAL end v7 support. Merge from Codesourcery */
 extern const char *output_add_immediate (rtx *);
 extern const char *arithmetic_instr (rtx, int);
 extern void output_ascii_pseudo_op (FILE *, const unsigned char *, int);
@@ -135,15 +167,21 @@
 extern void arm_poke_function_name (FILE *, const char *);
 extern void arm_print_operand (FILE *, rtx, int);
 extern void arm_print_operand_address (FILE *, rtx);
+/* APPLE LOCAL v7 support. Merge from mainline */
+/* Removed line */
 extern void arm_final_prescan_insn (rtx);
-extern int arm_go_if_legitimate_address (enum machine_mode, rtx);
+/* APPLE LOCAL v7 support. Merge from mainline */
+/* Removed line */
 extern int arm_debugger_arg_offset (int, rtx);
 extern int arm_is_longcall_p (rtx, int, int);
 extern int    arm_emit_vector_const (FILE *, rtx);
 extern const char * arm_output_load_gr (rtx *);
-extern const char *vfp_output_fstmx (rtx *);
+/* APPLE LOCAL v7 support. Merge from mainline */
+extern const char *vfp_output_fstmd (rtx *);
 extern void arm_set_return_address (rtx, rtx);
 extern int arm_eliminable_register (rtx);
+/* APPLE LOCAL v7 support. Merge from mainline */
+extern const char *arm_output_shift(rtx *, int);
 
 extern bool arm_output_addr_const_extra (FILE *, rtx);
 
@@ -171,24 +209,27 @@
 /* Thumb functions.  */
 extern void arm_init_expanders (void);
 extern const char *thumb_unexpanded_epilogue (void);
-extern void thumb_expand_prologue (void);
-extern void thumb_expand_epilogue (void);
+/* APPLE LOCAL begin v7 support. Merge from mainline */
+extern void thumb1_expand_prologue (void);
+extern void thumb1_expand_epilogue (void);
 #ifdef TREE_CODE
 extern int is_called_in_ARM_mode (tree);
 #endif
 extern int thumb_shiftable_const (unsigned HOST_WIDE_INT);
 #ifdef RTX_CODE
-extern void thumb_final_prescan_insn (rtx);
+extern void thumb1_final_prescan_insn (rtx);
+extern void thumb2_final_prescan_insn (rtx);
 extern const char *thumb_load_double_from_address (rtx *);
 extern const char *thumb_output_move_mem_multiple (int, rtx *);
 extern const char *thumb_call_via_reg (rtx);
 extern void thumb_expand_movmemqi (rtx *);
-extern int thumb_go_if_legitimate_address (enum machine_mode, rtx);
 extern rtx arm_return_addr (int, rtx);
 extern void thumb_reload_out_hi (rtx *);
 extern void thumb_reload_in_hi (rtx *);
 extern void thumb_set_return_address (rtx, rtx);
+extern const char *thumb2_output_casesi(rtx *);
 #endif
+/* APPLE LOCAL end v7 support. Merge from mainline */
 
 /* APPLE LOCAL begin ARM enhance conditional insn generation */
 #ifdef BB_HEAD
@@ -215,4 +256,18 @@
 /* APPLE LOCAL 5946347 ms_struct support */
 extern int arm_field_ms_struct_align (tree);
 
+/* APPLE LOCAL begin v7 support. Merge from Codesourcery */
+extern const char * arm_mangle_vector_type (tree);
+
+/* APPLE LOCAL end v7 support. Merge from Codesourcery */
+/* APPLE LOCAL v7 support. Fix compact switch tables */
+extern void arm_asm_output_addr_diff_vec (FILE *file, rtx LABEL, rtx BODY);
+
+/* APPLE LOCAL begin 6160917 */
+extern void neon_reload_in (rtx *, enum machine_mode);
+extern void neon_reload_out (rtx *, enum machine_mode);
+/* APPLE LOCAL end 6160917 */
+/* APPLE LOCAL 5571707 Allow R9 as caller-saved register */
+void arm_darwin_subtarget_conditional_register_usage (void);
+
 #endif /* ! GCC_ARM_PROTOS_H */

Modified: llvm-gcc-4.2/trunk/gcc/config/arm/arm-tune.md
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/gcc/config/arm/arm-tune.md?rev=76781&r1=76780&r2=76781&view=diff

==============================================================================
--- llvm-gcc-4.2/trunk/gcc/config/arm/arm-tune.md (original)
+++ llvm-gcc-4.2/trunk/gcc/config/arm/arm-tune.md Wed Jul 22 15:36:27 2009
@@ -1,6 +1,5 @@
-/* LLVM LOCAL (entire file!) */
 ;; -*- buffer-read-only: t -*-
 ;; Generated automatically by gentune.sh from arm-cores.def
 (define_attr "tune"
-	"arm2,arm250,arm3,arm6,arm60,arm600,arm610,arm620,arm7,arm7d,arm7di,arm70,arm700,arm700i,arm710,arm720,arm710c,arm7100,arm7500,arm7500fe,arm7m,arm7dm,arm7dmi,arm8,arm810,strongarm,strongarm110,strongarm1100,strongarm1110,arm7tdmi,arm7tdmis,arm710t,arm720t,arm740t,arm9,arm9tdmi,arm920,arm920t,arm922t,arm940t,ep9312,arm10tdmi,arm1020t,arm9e,arm946es,arm966es,arm968es,arm10e,arm1020e,arm1022e,xscale,iwmmxt,arm926ejs,arm1026ejs,arm1136js,arm1136jfs,arm1176jzs,arm1176jzfs,mpcorenovfp,mpcore,arm1156t2s,arm1156t2fs,cortexa8,cortexa9"
+	"arm2,arm250,arm3,arm6,arm60,arm600,arm610,arm620,arm7,arm7d,arm7di,arm70,arm700,arm700i,arm710,arm720,arm710c,arm7100,arm7500,arm7500fe,arm7m,arm7dm,arm7dmi,arm8,arm810,strongarm,strongarm110,strongarm1100,strongarm1110,arm7tdmi,arm7tdmis,arm710t,arm720t,arm740t,arm9,arm9tdmi,arm920,arm920t,arm922t,arm940t,ep9312,arm10tdmi,arm1020t,arm9e,arm946es,arm966es,arm968es,arm10e,arm1020e,arm1022e,xscale,iwmmxt,arm926ejs,arm1026ejs,arm1136js,arm1136jfs,arm1176jzs,arm1176jzfs,mpcorenovfp,mpcore,arm1156t2fs,cortexa9,arm1156t2s,cortexa8,cortexr4,cortexm3"
 	(const (symbol_ref "arm_tune")))

Modified: llvm-gcc-4.2/trunk/gcc/config/arm/arm.c
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/gcc/config/arm/arm.c?rev=76781&r1=76780&r2=76781&view=diff

==============================================================================
--- llvm-gcc-4.2/trunk/gcc/config/arm/arm.c (original)
+++ llvm-gcc-4.2/trunk/gcc/config/arm/arm.c Wed Jul 22 15:36:27 2009
@@ -22,6 +22,10 @@
    the Free Software Foundation, 51 Franklin Street, Fifth Floor,
    Boston, MA 02110-1301, USA.  */
 
+/* APPLE LOCAL begin 6902792 Q register clobbers in inline asm */
+#include <stdlib.h>
+#include <ctype.h>
+/* APPLE LOCAL end 6902792 Q register clobbers in inline asm */
 #include "config.h"
 #include "system.h"
 #include "coretypes.h"
@@ -71,10 +75,14 @@
 static unsigned bit_count (unsigned long);
 static int arm_address_register_rtx_p (rtx, int);
 static int arm_legitimate_index_p (enum machine_mode, rtx, RTX_CODE, int);
-static int thumb_base_register_rtx_p (rtx, enum machine_mode, int);
-inline static int thumb_index_register_rtx_p (rtx, int);
+/* APPLE LOCAL begin v7 support. Merge from mainline */
+static int thumb2_legitimate_index_p (enum machine_mode, rtx, int);
+static int thumb1_base_register_rtx_p (rtx, enum machine_mode, int);
+inline static int thumb1_index_register_rtx_p (rtx, int);
 static int thumb_far_jump_used_p (void);
 static bool thumb_force_lr_save (void);
+static unsigned long thumb1_compute_save_reg_mask (void);
+/* APPLE LOCAL end v7 support. Merge from mainline */
 static int const_ok_for_op (HOST_WIDE_INT, enum rtx_code);
 static rtx emit_sfm (int, int);
 static int arm_size_return_regs (void);
@@ -134,7 +142,8 @@
 #endif
 static void arm_output_function_epilogue (FILE *, HOST_WIDE_INT);
 static void arm_output_function_prologue (FILE *, HOST_WIDE_INT);
-static void thumb_output_function_prologue (FILE *, HOST_WIDE_INT);
+/* APPLE LOCAL v7 support. Merge from mainline */
+static void thumb1_output_function_prologue (FILE *, HOST_WIDE_INT);
 static int arm_comp_type_attributes (tree, tree);
 static void arm_set_default_type_attributes (tree);
 static int arm_adjust_cost (rtx, rtx, rtx, int);
@@ -182,6 +191,8 @@
 /* APPLE LOCAL end ARM darwin section_info */
 
 static void arm_file_end (void);
+/* APPLE LOCAL v7 support. Merge from mainline */
+static void arm_file_start (void);
 
 /* APPLE LOCAL begin ARM asm file hooks */
 #if TARGET_MACHO
@@ -211,6 +222,10 @@
 static void arm_unwind_emit (FILE *, rtx);
 static bool arm_output_ttype (rtx);
 #endif
+/* APPLE LOCAL v7 support. Merge from mainline */
+static void arm_dwarf_handle_frame_unspec (const char *, rtx, int);
+/* APPLE LOCAL v7 support. Merge from Codesourcery */
+static rtx arm_dwarf_register_span(rtx);
 
 static tree arm_cxx_guard_type (void);
 static bool arm_cxx_guard_mask_bit (void);
@@ -242,7 +257,15 @@
 static tree arm_handle_gcc_struct_attribute (tree *, tree, tree, int, bool *);
 static bool arm_ms_bitfield_layout_p (tree);
 /* APPLE LOCAL end 5946347 ms_struct support */
+/* APPLE LOCAL ARM 6008578 */
+/* LLVM LOCAL begin */
+#ifndef ENABLE_LLVM
+static HOST_WIDE_INT get_label_pad (rtx, HOST_WIDE_INT);
+#endif
+/* LLVM LOCAL end */
 
+/* APPLE LOCAL 6902792 Q register clobbers in inline asm */
+static tree arm_md_asm_clobbers (tree, tree, tree);
 
 /* Initialize the GCC target structure.  */
 #if TARGET_DLLIMPORT_DECL_ATTRIBUTES
@@ -299,7 +322,8 @@
 #define TARGET_ASM_FUNCTION_EPILOGUE arm_output_function_epilogue
 
 #undef  TARGET_DEFAULT_TARGET_FLAGS
-#define TARGET_DEFAULT_TARGET_FLAGS (TARGET_DEFAULT | MASK_SCHED_PROLOG)
+/* APPLE LOCAL 6216388 Don't schedule prologue by default */
+#define TARGET_DEFAULT_TARGET_FLAGS (TARGET_DEFAULT)
 #undef  TARGET_HANDLE_OPTION
 #define TARGET_HANDLE_OPTION arm_handle_option
 
@@ -428,6 +452,16 @@
 #define TARGET_ARM_EABI_UNWINDER true
 #endif /* TARGET_UNWIND_INFO */
 
+/* APPLE LOCAL begin v7 support. Merge from mainline */
+#undef TARGET_DWARF_HANDLE_FRAME_UNSPEC
+#define TARGET_DWARF_HANDLE_FRAME_UNSPEC arm_dwarf_handle_frame_unspec
+/* APPLE LOCAL end v7 support. Merge from mainline */
+
+/* APPLE LOCAL begin v7 support. Merge from Codesourcery */
+#undef TARGET_DWARF_REGISTER_SPAN
+#define TARGET_DWARF_REGISTER_SPAN arm_dwarf_register_span
+/* APPLE LOCAL end v7 support. Merge from Codesourcery */
+
 #undef  TARGET_CANNOT_COPY_INSN_P
 #define TARGET_CANNOT_COPY_INSN_P arm_cannot_copy_insn_p
 
@@ -440,18 +474,28 @@
 /* APPLE LOCAL ARM -mdynamic-no-pic support */
 #define TARGET_CANNOT_FORCE_CONST_MEM arm_cannot_force_const_mem
 
+/* APPLE LOCAL begin v7 support. Merge from Codesourcery */
+#undef TARGET_MAX_ANCHOR_OFFSET
+#define TARGET_MAX_ANCHOR_OFFSET 4095
+
+/* The minimum is set such that the total size of the block
+   for a particular anchor is 4088 + 1 + 4095 = 8184 bytes, which is
+   divisible by eight, ensuring natural spacing of anchors.  */
+#undef TARGET_MIN_ANCHOR_OFFSET
+#define TARGET_MIN_ANCHOR_OFFSET -4088
+
+/* APPLE LOCAL end v7 support. Merge from Codesourcery */
 /* APPLE LOCAL begin ARM darwin local binding */
 #if TARGET_MACHO
 #undef TARGET_BINDS_LOCAL_P
 #define TARGET_BINDS_LOCAL_P arm_binds_local_p
 #endif
 /* APPLE LOCAL end ARM darwin local binding */
-/* APPLE LOCAL ARM 6008578 */
-/* LLVM LOCAL begin */
-#ifndef ENABLE_LLVM
-static HOST_WIDE_INT get_label_pad (rtx, HOST_WIDE_INT);
-#endif
-/* LLVM LOCAL end */
+
+/* APPLE LOCAL begin v7 support. Merge from Codesourcery */
+#undef TARGET_MANGLE_VECTOR_TYPE
+#define TARGET_MANGLE_VECTOR_TYPE arm_mangle_vector_type
+/* APPLE LOCAL end v7 support. Merge from Codesourcery */
 
 /* APPLE LOCAL begin ARM reliable backtraces */
 #undef TARGET_BUILTIN_SETJMP_FRAME_VALUE
@@ -463,6 +507,11 @@
 #define TARGET_MS_BITFIELD_LAYOUT_P arm_ms_bitfield_layout_p
 /* APPLE LOCAL end 5946347 ms_struct support */
 
+/* APPLE LOCAL begin 6902792 Q register clobbers in inline asm */
+#undef TARGET_MD_ASM_CLOBBERS
+#define TARGET_MD_ASM_CLOBBERS arm_md_asm_clobbers
+/* APPLE LOCAL end 6902792 Q register clobbers in inline asm */
+
 struct gcc_target targetm = TARGET_INITIALIZER;
 
 /* Obstack for minipool constant handling.  */
@@ -475,6 +524,10 @@
 
 extern FILE * asm_out_file;
 
+/* APPLE LOCAL begin 6879229 disallow -fasm-blocks */
+extern int flag_iasm_blocks;
+/* APPLE LOCAL end 6879229 disallow -fasm-blocks */
+
 /* True if we are currently building a constant table.  */
 int making_const_table;
 
@@ -485,6 +538,11 @@
 /* The processor for which instructions should be scheduled.  */
 enum processor_type arm_tune = arm_none;
 
+/* APPLE LOCAL begin v7 support. Merge from mainline */
+/* The default processor used if not overridden on the command line.  */
+static enum processor_type arm_default_cpu = arm_none;
+
+/* APPLE LOCAL end v7 support. Merge from mainline */
 /* Which floating point model to use.  */
 enum arm_fp_model arm_fp_model;
 
@@ -510,6 +568,9 @@
 rtx thumb_call_via_label[14];
 static int thumb_call_reg_needed;
 
+/* APPLE LOCAL 5571707 Allow R9 as caller-saved register */
+static int darwin_reserve_r9_on_v6 = 0;
+
 /* APPLE LOCAL begin ARM compact switch tables */
 /* Keeps track of which *_switch* functions we've used, so we
    can emit the right stubs. */
@@ -538,14 +599,22 @@
 #define FL_WBUF	      (1 << 14)	      /* Schedule for write buffer ops.
 					 Note: ARM6 & 7 derivatives only.  */
 #define FL_ARCH6K     (1 << 15)       /* Architecture rel 6 K extensions.  */
-/* LLVM LOCAL begin */
+/* APPLE LOCAL begin v7 support. Merge from mainline */
 #define FL_THUMB2     (1 << 16)	      /* Thumb-2.  */
-/* LLVM LOCAL end */
+#define FL_NOTM	      (1 << 17)	      /* Instructions not present in the 'M'
+					 profile.  */
+#define FL_DIV	      (1 << 18)	      /* Hardware divide.  */
+#define FL_VFPV3      (1 << 19)       /* Vector Floating Point V3.  */
+/* APPLE LOCAL end v7 support. Merge from mainline */
+/* APPLE LOCAL v7 support. Merge from Codesourcery */
+#define FL_NEON       (1 << 20)       /* Neon instructions.  */
 
 #define FL_IWMMXT     (1 << 29)	      /* XScale v2 or "Intel Wireless MMX technology".  */
 
-#define FL_FOR_ARCH2	0
-#define FL_FOR_ARCH3	FL_MODE32
+/* APPLE LOCAL begin v7 support. Merge from mainline */
+#define FL_FOR_ARCH2	FL_NOTM
+#define FL_FOR_ARCH3	(FL_FOR_ARCH2 | FL_MODE32)
+/* APPLE LOCAL end v7 support. Merge from mainline */
 #define FL_FOR_ARCH3M	(FL_FOR_ARCH3 | FL_ARCH3M)
 #define FL_FOR_ARCH4	(FL_FOR_ARCH3M | FL_ARCH4)
 #define FL_FOR_ARCH4T	(FL_FOR_ARCH4 | FL_THUMB)
@@ -559,10 +628,14 @@
 #define FL_FOR_ARCH6K	(FL_FOR_ARCH6 | FL_ARCH6K)
 #define FL_FOR_ARCH6Z	FL_FOR_ARCH6
 #define FL_FOR_ARCH6ZK	FL_FOR_ARCH6K
-/* LLVM LOCAL begin */
+/* APPLE LOCAL begin v7 support. Merge from mainline */
 #define FL_FOR_ARCH6T2	(FL_FOR_ARCH6 | FL_THUMB2)
-#define FL_FOR_ARCH7A	(FL_FOR_ARCH6T2)
-/* LLVM LOCAL end */
+#define FL_FOR_ARCH7	(FL_FOR_ARCH6T2 &~ FL_NOTM)
+/* APPLE LOCAL 6093388 -mfpu=neon default for v7a */
+#define FL_FOR_ARCH7A	(FL_FOR_ARCH7 | FL_NOTM | FL_NEON)
+#define FL_FOR_ARCH7R	(FL_FOR_ARCH7A | FL_DIV)
+#define FL_FOR_ARCH7M	(FL_FOR_ARCH7 | FL_DIV)
+/* APPLE LOCAL end v7 support. Merge from mainline */
 
 /* The bits in this mask specify which
    instructions we are allowed to generate.  */
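The architecture flag algebra introduced above can be sanity-checked in isolation. A minimal standalone sketch (flag values copied from the patch; `FL_FOR_ARCH6T2` is reduced to just the bits that matter here, so this is an illustration rather than the compiler's full flag set):

```c
#include <assert.h>

/* Flag bits as defined in the patch.  */
#define FL_THUMB2 (1u << 16)
#define FL_NOTM   (1u << 17)  /* insns absent from the 'M' profile */
#define FL_DIV    (1u << 18)  /* hardware divide */
#define FL_NEON   (1u << 20)

/* Stand-in for the full v6T2 flag set; only these bits matter below.  */
#define FL_FOR_ARCH6T2 (FL_THUMB2 | FL_NOTM)

/* The compositions from the patch.  */
#define FL_FOR_ARCH7  (FL_FOR_ARCH6T2 & ~FL_NOTM)
#define FL_FOR_ARCH7A (FL_FOR_ARCH7 | FL_NOTM | FL_NEON)
#define FL_FOR_ARCH7R (FL_FOR_ARCH7A | FL_DIV)
#define FL_FOR_ARCH7M (FL_FOR_ARCH7 | FL_DIV)
```

This makes the profile split visible: v7-A and v7-R keep the non-M-profile instructions while v7-M drops them, which is exactly what the later `TARGET_ARM && !(insn_flags & FL_NOTM)` check in `arm_override_options` relies on.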
@@ -596,12 +669,19 @@
 /* Nonzero if this chip supports the ARM 6K extensions.  */
 int arm_arch6k = 0;
 
+/* APPLE LOCAL begin 6093388 -mfpu=neon default for v7a */
 /* Nonzero if this chip supports the ARM 7A extensions.  */
 int arm_arch7a = 0;
+/* APPLE LOCAL end 6093388 -mfpu=neon default for v7a */
+
+/* APPLE LOCAL begin v7 support. Merge from mainline */
+/* Nonzero if instructions not present in the 'M' profile can be used.  */
+int arm_arch_notm = 0;
 
 /* Nonzero if this chip can benefit from load scheduling.  */
 int arm_ld_sched = 0;
 
+/* APPLE LOCAL end v7 support. Merge from mainline */
 /* Nonzero if this chip is a StrongARM.  */
 int arm_tune_strongarm = 0;
 
@@ -631,11 +711,14 @@
    interworking clean.  */
 int arm_cpp_interwork = 0;
 
-/* LLVM LOCAL begin */
+/* APPLE LOCAL begin v7 support. Merge from mainline */
 /* Nonzero if chip supports Thumb 2.  */
 int arm_arch_thumb2;
-/* LLVM LOCAL end */
 
+/* Nonzero if chip supports integer division instruction.  */
+int arm_arch_hwdiv;
+
+/* APPLE LOCAL end v7 support. Merge from mainline */
 /* In case of a PRE_INC, POST_INC, PRE_DEC, POST_DEC memory reference, we
    must report the mode of the memory reference from PRINT_OPERAND to
    PRINT_OPERAND_ADDRESS.  */
@@ -657,9 +740,19 @@
 
 /* For an explanation of these variables, see final_prescan_insn below.  */
 int arm_ccfsm_state;
+/* APPLE LOCAL begin v7 support. Merge from mainline */
+/* arm_current_cc is also used for Thumb-2 cond_exec blocks.  */
 enum arm_cond_code arm_current_cc;
 rtx arm_target_insn;
 int arm_target_label;
+/* The number of conditionally executed insns, including the current insn.  */
+int arm_condexec_count = 0;
+/* A bitmask specifying the patterns for the IT block.
+   Zero means do not output an IT block before this insn.  */
+int arm_condexec_mask = 0;
+/* The number of bits used in arm_condexec_mask.  */
+int arm_condexec_masklen = 0;
+/* APPLE LOCAL end v7 support. Merge from mainline */
 
 /* The condition codes of the ARM, and the inverse function.  */
 static const char * const arm_condition_codes[] =
@@ -668,7 +761,14 @@
   "hi", "ls", "ge", "lt", "gt", "le", "al", "nv"
 };
 
+/* APPLE LOCAL begin v7 support. Merge from mainline */
+#define ARM_LSL_NAME (TARGET_UNIFIED_ASM ? "lsl" : "asl")
 #define streq(string1, string2) (strcmp (string1, string2) == 0)
+
+#define THUMB2_WORK_REGS (0xff & ~(  (1 << THUMB_HARD_FRAME_POINTER_REGNUM) \
+				   | (1 << SP_REGNUM) | (1 << PC_REGNUM) \
+				   | (1 << PIC_OFFSET_TABLE_REGNUM)))
+/* APPLE LOCAL end v7 support. Merge from mainline */
 
 /* Initialization code.  */
 
@@ -732,10 +832,18 @@
 /* APPLE LOCAL end ARM custom architectures */
   {"armv6z",  arm1176jzs, "6Z",  FL_CO_PROC |             FL_FOR_ARCH6Z, NULL},
   {"armv6zk", arm1176jzs, "6ZK", FL_CO_PROC |             FL_FOR_ARCH6ZK, NULL},
-/* LLVM LOCAL begin */
+/* APPLE LOCAL begin v7 support. Merge from mainline */
   {"armv6t2", arm1156t2s, "6T2", FL_CO_PROC |             FL_FOR_ARCH6T2, NULL},
+  {"armv7",   cortexa8,	  "7",	 FL_CO_PROC |		  FL_FOR_ARCH7, NULL},
+  {"armv7a",  cortexa8,	  "7A",	 FL_CO_PROC |		  FL_FOR_ARCH7A, NULL},
+/* APPLE LOCAL begin v7 support. Merge from Codesourcery */
+  {"armv7r",  cortexr4,	  "7R",	 FL_CO_PROC |		  FL_FOR_ARCH7R, NULL},
+  {"armv7m",  cortexm3,	  "7M",	 FL_CO_PROC |		  FL_FOR_ARCH7M, NULL},
+/* APPLE LOCAL end v7 support. Merge from Codesourcery */
   {"armv7-a", cortexa8,	  "7A",	 FL_CO_PROC |		  FL_FOR_ARCH7A, NULL},
-/* LLVM LOCAL end */
+  {"armv7-r", cortexr4,	  "7R",	 FL_CO_PROC |		  FL_FOR_ARCH7R, NULL},
+  {"armv7-m", cortexm3,	  "7M",	 FL_CO_PROC |		  FL_FOR_ARCH7M, NULL},
+/* APPLE LOCAL end v7 support. Merge from mainline */
   {"ep9312",  ep9312,     "4T",  FL_LDSCHED | FL_CIRRUS | FL_FOR_ARCH4, NULL},
   {"iwmmxt",  iwmmxt,     "5TE", FL_LDSCHED | FL_STRONG | FL_FOR_ARCH5TE | FL_XSCALE | FL_IWMMXT , NULL},
   {NULL, arm_none, NULL, 0 , NULL}
@@ -767,7 +875,10 @@
 
 /* The name of the preprocessor macro to define for this architecture.  */
 
-char arm_arch_name[] = "__ARM_ARCH_0UNK__";
+/* APPLE LOCAL begin v7 support. Merge from Codesourcery */
+#define ARM_ARCH_NAME_SIZE 25
+char arm_arch_name[ARM_ARCH_NAME_SIZE] = "__ARM_ARCH_0UNK__";
+/* APPLE LOCAL end v7 support. Merge from Codesourcery */
 
 struct fpu_desc
 {
@@ -784,7 +895,12 @@
   {"fpe2",	FPUTYPE_FPA_EMU2},
   {"fpe3",	FPUTYPE_FPA_EMU2},
   {"maverick",	FPUTYPE_MAVERICK},
-  {"vfp",	FPUTYPE_VFP}
+/* APPLE LOCAL begin v7 support. Merge from mainline */
+  {"vfp",	FPUTYPE_VFP},
+  {"vfp3",	FPUTYPE_VFP3},
+/* APPLE LOCAL end v7 support. Merge from mainline */
+/* APPLE LOCAL v7 support. Merge from Codesourcery */
+  {"neon",	FPUTYPE_NEON}
 };
 
 
@@ -799,7 +915,12 @@
   ARM_FP_MODEL_FPA,		/* FPUTYPE_FPA_EMU2  */
   ARM_FP_MODEL_FPA,		/* FPUTYPE_FPA_EMU3  */
   ARM_FP_MODEL_MAVERICK,	/* FPUTYPE_MAVERICK  */
-  ARM_FP_MODEL_VFP		/* FPUTYPE_VFP  */
+/* APPLE LOCAL v7 support. Merge from mainline */
+  ARM_FP_MODEL_VFP,		/* FPUTYPE_VFP  */
+/* APPLE LOCAL begin v7 support. Merge from Codesourcery */
+  ARM_FP_MODEL_VFP,		/* FPUTYPE_VFP3  */
+  ARM_FP_MODEL_VFP		/* FPUTYPE_NEON  */
+/* APPLE LOCAL end v7 support. Merge from Codesourcery */
 };
 
 
@@ -1141,6 +1262,8 @@
 arm_override_options (void)
 {
   unsigned i;
+/* APPLE LOCAL v7 support. Merge from Codesourcery */
+  int len;
   enum processor_type target_arch_cpu = arm_none;
 
   /* Set up the flags based on the cpu/architecture selected by the user.  */
@@ -1157,7 +1280,13 @@
               {
 		/* Set the architecture define.  */
 		if (i != ARM_OPT_SET_TUNE)
-		  sprintf (arm_arch_name, "__ARM_ARCH_%s__", sel->arch);
+/* APPLE LOCAL begin v7 support. Merge from Codesourcery */
+		  {
+		    len = snprintf (arm_arch_name, ARM_ARCH_NAME_SIZE,
+				    "__ARM_ARCH_%s__", sel->arch);
+		    gcc_assert (len < ARM_ARCH_NAME_SIZE);
+		  }
+/* APPLE LOCAL end v7 support. Merge from Codesourcery */
 
 		/* Determine the processor core for which we should
 		   tune code-generation.  */
@@ -1291,9 +1420,16 @@
 
 	  insn_flags = sel->flags;
 	}
-      sprintf (arm_arch_name, "__ARM_ARCH_%s__", sel->arch);
+/* APPLE LOCAL begin v7 support. Merge from Codesourcery */
+
+      len = snprintf (arm_arch_name, ARM_ARCH_NAME_SIZE,
+		      "__ARM_ARCH_%s__", sel->arch);
+      gcc_assert (len < ARM_ARCH_NAME_SIZE);
+
+      arm_default_cpu = (enum processor_type) (sel - all_cores);
       if (arm_tune == arm_none)
-	arm_tune = (enum processor_type) (sel - all_cores);
+	arm_tune = arm_default_cpu;
+/* APPLE LOCAL end v7 support. Merge from Codesourcery */
     }
 
   /* The processor for which we should tune should now have been
@@ -1308,6 +1444,11 @@
 
   /* Make sure that the processor choice does not conflict with any of the
      other command line choices.  */
+/* APPLE LOCAL begin v7 support. Merge from mainline */
+  if (TARGET_ARM && !(insn_flags & FL_NOTM))
+    error ("target CPU does not support ARM mode");
+/* APPLE LOCAL end v7 support. Merge from mainline */
+
   if (TARGET_INTERWORK && !(insn_flags & FL_THUMB))
     {
       /* APPLE LOCAL begin ARM interworking */
@@ -1319,10 +1460,19 @@
       /* APPLE LOCAL end ARM interworking */
     }
 
+  /* APPLE LOCAL begin 6150882 use thumb2 by default for v7 */
+  /* If we're compiling for v7, we should default to using thumb2
+     codegen. */
+  if ((insn_flags & FL_FOR_ARCH7A) == FL_FOR_ARCH7A
+      && thumb_option < 0)
+    thumb_option = 1;
+  /* APPLE LOCAL end 6150882 use thumb2 by default for v7 */
+
   if (TARGET_THUMB && !(insn_flags & FL_THUMB))
     {
       warning (0, "target CPU does not support THUMB instructions");
-      target_flags &= ~MASK_THUMB;
+      /* APPLE LOCAL 6150882 use thumb2 by default for v7 */
+      thumb_option = 0;
     }
 
   if (TARGET_APCS_FRAME && TARGET_THUMB)
@@ -1371,6 +1521,11 @@
       && (TARGET_DEFAULT & MASK_APCS_FRAME))
     warning (0, "-g with -mno-apcs-frame may not give sensible debugging");
 
+  /* APPLE LOCAL begin 6879229 disallow -fasm-blocks */
+  if (flag_iasm_blocks)
+    error ("-fasm-blocks option not supported for ARM");
+  /* APPLE LOCAL end 6879229 disallow -fasm-blocks */
+
   /* If stack checking is disabled, we can use r10 as the PIC register,
      which keeps r9 available.  */
   /* APPLE LOCAL ARM pic support */
@@ -1388,10 +1543,12 @@
   arm_arch5e = (insn_flags & FL_ARCH5E) != 0;
   arm_arch6 = (insn_flags & FL_ARCH6) != 0;
   arm_arch6k = (insn_flags & FL_ARCH6K) != 0;
+  /* APPLE LOCAL 6093388 -mfpu=neon default for v7a */
   arm_arch7a = (insn_flags & FL_FOR_ARCH7A) == FL_FOR_ARCH7A;
-  /* LLVM LOCAL begin */
+  /* APPLE LOCAL begin v7 support. Merge from mainline */
+  arm_arch_notm = (insn_flags & FL_NOTM) != 0;
   arm_arch_thumb2 = (insn_flags & FL_THUMB2) != 0;
-  /* LLVM LOCAL end */
+  /* APPLE LOCAL end v7 support. Merge from mainline */
   arm_arch_xscale = (insn_flags & FL_XSCALE) != 0;
   arm_arch_cirrus = (insn_flags & FL_CIRRUS) != 0;
 
@@ -1401,7 +1558,32 @@
   arm_tune_wbuf = (tune_flags & FL_WBUF) != 0;
   arm_tune_xscale = (tune_flags & FL_XSCALE) != 0;
   arm_arch_iwmmxt = (insn_flags & FL_IWMMXT) != 0;
+  /* APPLE LOCAL v7 support. Merge from mainline */
+  arm_arch_hwdiv = (insn_flags & FL_DIV) != 0;
+
+  /* APPLE LOCAL begin v7 support. Merge from Codesourcery */
+  /* If we are not using the default (ARM mode) section anchor offset
+     ranges, then set the correct ranges now.  */
+  if (TARGET_THUMB1)
+    {
+      /* Thumb-1 LDR instructions cannot have negative offsets.
+         Permissible positive offset ranges are 5-bit (for byte loads),
+         6-bit (for halfword loads), or 7-bit (for word loads).
+         Empirical results suggest a 7-bit anchor range gives the best
+         overall code size.  */
+      targetm.min_anchor_offset = 0;
+      targetm.max_anchor_offset = 127;
+    }
+  else if (TARGET_THUMB2)
+    {
+      /* The minimum is set such that the total size of the block
+         for a particular anchor is 248 + 1 + 4095 bytes, which is
+         divisible by eight, ensuring natural spacing of anchors.  */
+      targetm.min_anchor_offset = -248;
+      targetm.max_anchor_offset = 4095;
+    }
 
+  /* APPLE LOCAL end v7 support. Merge from Codesourcery */
   /* APPLE LOCAL begin ARM interworking */
   /* Choose a default interworking setting if not specified on the
      command line.  */
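The Thumb-2 anchor-range comment above can be checked arithmetically. A trivial standalone sketch (not compiler code) of the claim that the block covered by one anchor is divisible by eight:

```c
/* Size in bytes of the block covered by one Thumb-2 section anchor:
   248 bytes below the anchor, the anchor byte itself, and 4095 above,
   matching the "248 + 1 + 4095" arithmetic in the comment.  */
static int
thumb2_anchor_block_size (void)
{
  int min_anchor_offset = -248;
  int max_anchor_offset = 4095;
  return -min_anchor_offset + 1 + max_anchor_offset;  /* 4344 */
}
```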
@@ -1520,6 +1702,12 @@
   if (TARGET_IWMMXT && !TARGET_SOFT_FLOAT)
     sorry ("iWMMXt and hardware floating point");
 
+  /* APPLE LOCAL begin v7 support. Merge from mainline */
+  /* ??? iWMMXt insn patterns need auditing for Thumb-2.  */
+  if (TARGET_THUMB2 && TARGET_IWMMXT)
+    sorry ("Thumb-2 iWMMXt");
+
+  /* APPLE LOCAL end v7 support. Merge from mainline */
   /* If soft-float is specified then don't use FPU.  */
   if (TARGET_SOFT_FLOAT)
     arm_fpu_arch = FPUTYPE_NONE;
@@ -1553,8 +1741,10 @@
 	target_thread_pointer = TP_SOFT;
     }
 
-  if (TARGET_HARD_TP && TARGET_THUMB)
-    error ("can not use -mtp=cp15 with -mthumb");
+  /* APPLE LOCAL begin v7 support. Merge from mainline */
+  if (TARGET_HARD_TP && TARGET_THUMB1)
+    error ("can not use -mtp=cp15 with 16-bit Thumb");
+  /* APPLE LOCAL end v7 support. Merge from mainline */
 
   /* Override the default structure alignment for AAPCS ABI.  */
   if (TARGET_AAPCS_BASED)
@@ -1589,6 +1779,8 @@
 	arm_pic_register = pic_register;
     }
 
+  /* APPLE LOCAL v7 support. Merge from mainline */
+  /* ??? We might want scheduling for thumb2.  */
   if (TARGET_THUMB && flag_schedule_insns)
     {
       /* Don't warn since it's on by default in -O2.  */
@@ -1680,6 +1872,11 @@
   const isr_attribute_arg * ptr;
   const char *              arg;
 
+  /* APPLE LOCAL begin v7 support. Merge from mainline */
+  if (!arm_arch_notm)
+    return ARM_FT_NORMAL | ARM_FT_STACKALIGN;
+
+  /* APPLE LOCAL end v7 support. Merge from mainline */
   /* No argument - default to IRQ.  */
   if (argument == NULL_TREE)
     return ARM_FT_ISR;
@@ -1859,14 +2056,17 @@
 
   func_type = arm_current_func_type ();
 
-  /* Naked functions and volatile functions need special
+  /* APPLE LOCAL begin v7 support. Merge from mainline */
+  /* Naked, volatile and stack alignment functions need special
      consideration.  */
-  if (func_type & (ARM_FT_VOLATILE | ARM_FT_NAKED))
+  if (func_type & (ARM_FT_VOLATILE | ARM_FT_NAKED | ARM_FT_STACKALIGN))
     return 0;
 
-  /* So do interrupt functions that use the frame pointer.  */
-  if (IS_INTERRUPT (func_type) && frame_pointer_needed)
+  /* So do interrupt functions that use the frame pointer and Thumb
+     interrupt functions.  */
+  if (IS_INTERRUPT (func_type) && (frame_pointer_needed || TARGET_THUMB))
     return 0;
+  /* APPLE LOCAL end v7 support. Merge from mainline */
 
   offsets = arm_get_frame_offsets ();
   stack_adjust = offsets->outgoing_args - offsets->saved_regs;
@@ -1899,7 +2099,8 @@
      We test for !arm_arch5 here, because code for any architecture
      less than this could potentially be run on one of the buggy
      chips.  */
-  if (stack_adjust == 4 && !arm_arch5)
+  /* APPLE LOCAL v7 support. Merge from mainline */
+  if (stack_adjust == 4 && !arm_arch5 && TARGET_ARM)
     {
       /* Validate that r3 is a call-clobbered register (always true in
 	 the default abi) ...  */
@@ -2028,18 +2229,39 @@
   if ((i & ~(unsigned HOST_WIDE_INT) 0xff) == 0)
     return TRUE;
 
-  /* Get the number of trailing zeros, rounded down to the nearest even
-     number.  */
-  lowbit = (ffs ((int) i) - 1) & ~1;
+  /* APPLE LOCAL begin v7 support. Merge from mainline */
+  /* Get the number of trailing zeros.  */
+  lowbit = ffs ((int) i) - 1;
+
+  /* Only even shifts are allowed in ARM mode so round down to the
+     nearest even number.  */
+  if (TARGET_ARM)
+    lowbit &= ~1;
 
   if ((i & ~(((unsigned HOST_WIDE_INT) 0xff) << lowbit)) == 0)
     return TRUE;
-  else if (lowbit <= 4
+
+  if (TARGET_ARM)
+    {
+      /* Allow rotated constants in ARM mode.  */
+      if (lowbit <= 4
 	   && ((i & ~0xc000003f) == 0
 	       || (i & ~0xf000000f) == 0
 	       || (i & ~0xfc000003) == 0))
-    return TRUE;
+	return TRUE;
+    }
+  else
+    {
+      HOST_WIDE_INT v;
+
+      /* Allow repeated pattern.  */
+      v = i & 0xff;
+      v |= v << 16;
+      if (i == v || i == (v | (v << 8)))
+	return TRUE;
+    }
 
+  /* APPLE LOCAL end v7 support. Merge from mainline */
   return FALSE;
 }
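The rewritten immediate test above can be modeled standalone. This is a sketch only: it mirrors the checks in the patch (including the deliberate omission of wrap-around rotations in the Thumb-2 path, since Thumb-2 constants here are shifted rather than rotated) and substitutes `__builtin_ctz` for `ffs`:

```c
#include <stdint.h>

/* Standalone model of the const_ok_for_arm logic above.  ARM mode
   accepts an 8-bit value rotated right by an even amount; Thumb-2
   accepts an 8-bit value at any shift, or a byte replicated across
   halfwords or all four bytes.  */
static int
const_ok (uint32_t i, int thumb2)
{
  if ((i & ~0xffu) == 0)
    return 1;

  int lowbit = __builtin_ctz (i);   /* ffs (i) - 1 for nonzero i */
  if (!thumb2)
    lowbit &= ~1;                   /* ARM rotates by even amounts only */

  if ((i & ~(0xffu << lowbit)) == 0)
    return 1;

  if (!thumb2)
    {
      /* ARM rotations that wrap around the top of the word.  */
      if (lowbit <= 4
          && ((i & ~0xc000003fu) == 0
              || (i & ~0xf000000fu) == 0
              || (i & ~0xfc000003u) == 0))
        return 1;
    }
  else
    {
      /* Thumb-2 replicated byte patterns: 0x00XY00XY and 0xXYXYXYXY.  */
      uint32_t v = i & 0xff;
      v |= v << 16;
      if (i == v || i == (v | (v << 8)))
        return 1;
    }
  return 0;
}
```

For example, `0x1fe` (0xff shifted by one) is rejected in ARM mode but accepted in Thumb-2, while the wrapped rotation `0xc000003f` is accepted only in ARM mode.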
 
@@ -2078,6 +2300,8 @@
    either produce a simpler sequence, or we will want to cse the values.
    Return value is the number of insns emitted.  */
 
+/* APPLE LOCAL v7 support. Merge from mainline */
+/* ??? Tweak this for thumb2.  */
 int
 arm_split_constant (enum rtx_code code, enum machine_mode mode, rtx insn,
 		    HOST_WIDE_INT val, rtx target, rtx source, int subtargets)
@@ -2136,6 +2360,10 @@
 			   1);
 }
 
+/* APPLE LOCAL begin v7 support. Merge from mainline */
+/* Return the number of ARM instructions required to synthesize the given
+   constant.  */
+/* APPLE LOCAL end v7 support. Merge from mainline */
 static int
 count_insns_for_constant (HOST_WIDE_INT remainder, int i)
 {
@@ -2177,6 +2405,8 @@
 
 /* As above, but extra parameter GENERATE which, if clear, suppresses
    RTL generation.  */
+/* APPLE LOCAL v7 support. Merge from mainline */
+/* ??? This needs more work for thumb2.  */
 
 static int
 arm_gen_constant (enum rtx_code code, enum machine_mode mode, rtx cond,
@@ -2353,6 +2583,17 @@
   switch (code)
     {
     case SET:
+      /* APPLE LOCAL begin v7 support. Merge from mainline */
+      /* See if we can use movw.  */
+      if (arm_arch_thumb2 && (remainder & 0xffff0000) == 0)
+	{
+	  if (generate)
+	    emit_constant_insn (cond, gen_rtx_SET (VOIDmode, target,
+						   GEN_INT (val)));
+	  return 1;
+	}
+
+      /* APPLE LOCAL end v7 support. Merge from mainline */
       /* See if we can do this by sign_extending a constant that is known
 	 to be negative.  This is a good, way of doing it, since the shift
 	 may well merge into a subsequent insn.  */
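The `movw` shortcut above fires whenever the high halfword of the constant is zero, because Thumb-2 MOVW loads any 16-bit immediate in a single instruction (MOVT can then supply an upper halfword separately). A trivial sketch of the predicate, mirroring the `(remainder & 0xffff0000) == 0` test:

```c
#include <stdint.h>

/* Sketch: a constant fits a single Thumb-2 MOVW when its upper
   halfword is zero, even when it is not an 8-bit rotated form.  */
static int
fits_single_movw (uint32_t val)
{
  return (val & 0xffff0000u) == 0;
}
```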
@@ -2689,64 +2930,73 @@
       can_negate = 0;
     }
 
+  /* APPLE LOCAL begin v7 support. Merge from mainline */
   /* Now try and find a way of doing the job in either two or three
      instructions.
      We start by looking for the largest block of zeros that are aligned on
      a 2-bit boundary, we then fill up the temps, wrapping around to the
      top of the word when we drop off the bottom.
-     In the worst case this code should produce no more than four insns.  */
+     In the worst case this code should produce no more than four insns.
+     Thumb-2 constants are shifted, not rotated, so the MSB is always the
+     best place to start.  */
+
+  /* ??? Use thumb2 replicated constants when the high and low halfwords are
+     the same.  */
   {
     int best_start = 0;
-    int best_consecutive_zeros = 0;
-
-    for (i = 0; i < 32; i += 2)
+    if (!TARGET_THUMB2)
       {
-	int consecutive_zeros = 0;
+	int best_consecutive_zeros = 0;
 
-	if (!(remainder & (3 << i)))
+	for (i = 0; i < 32; i += 2)
 	  {
-	    while ((i < 32) && !(remainder & (3 << i)))
-	      {
-		consecutive_zeros += 2;
-		i += 2;
-	      }
-	    if (consecutive_zeros > best_consecutive_zeros)
+	    int consecutive_zeros = 0;
+
+	    if (!(remainder & (3 << i)))
 	      {
-		best_consecutive_zeros = consecutive_zeros;
-		best_start = i - consecutive_zeros;
+		while ((i < 32) && !(remainder & (3 << i)))
+		  {
+		    consecutive_zeros += 2;
+		    i += 2;
+		  }
+		if (consecutive_zeros > best_consecutive_zeros)
+		  {
+		    best_consecutive_zeros = consecutive_zeros;
+		    best_start = i - consecutive_zeros;
+		  }
+		i -= 2;
 	      }
-	    i -= 2;
 	  }
-      }
 
-    /* So long as it won't require any more insns to do so, it's
-       desirable to emit a small constant (in bits 0...9) in the last
-       insn.  This way there is more chance that it can be combined with
-       a later addressing insn to form a pre-indexed load or store
-       operation.  Consider:
-
-	       *((volatile int *)0xe0000100) = 1;
-	       *((volatile int *)0xe0000110) = 2;
-
-       We want this to wind up as:
-
-		mov rA, #0xe0000000
-		mov rB, #1
-		str rB, [rA, #0x100]
-		mov rB, #2
-		str rB, [rA, #0x110]
-
-       rather than having to synthesize both large constants from scratch.
-
-       Therefore, we calculate how many insns would be required to emit
-       the constant starting from `best_start', and also starting from
-       zero (i.e. with bit 31 first to be output).  If `best_start' doesn't
-       yield a shorter sequence, we may as well use zero.  */
-    if (best_start != 0
-	&& ((((unsigned HOST_WIDE_INT) 1) << best_start) < remainder)
-	&& (count_insns_for_constant (remainder, 0) <=
-	    count_insns_for_constant (remainder, best_start)))
-      best_start = 0;
+	/* So long as it won't require any more insns to do so, it's
+	   desirable to emit a small constant (in bits 0...9) in the last
+	   insn.  This way there is more chance that it can be combined with
+	   a later addressing insn to form a pre-indexed load or store
+	   operation.  Consider:
+
+		   *((volatile int *)0xe0000100) = 1;
+		   *((volatile int *)0xe0000110) = 2;
+
+	   We want this to wind up as:
+
+		    mov rA, #0xe0000000
+		    mov rB, #1
+		    str rB, [rA, #0x100]
+		    mov rB, #2
+		    str rB, [rA, #0x110]
+
+	   rather than having to synthesize both large constants from scratch.
+
+	   Therefore, we calculate how many insns would be required to emit
+	   the constant starting from `best_start', and also starting from
+	   zero (i.e. with bit 31 first to be output).  If `best_start' doesn't
+	   yield a shorter sequence, we may as well use zero.  */
+	if (best_start != 0
+	    && ((((unsigned HOST_WIDE_INT) 1) << best_start) < remainder)
+	    && (count_insns_for_constant (remainder, 0) <=
+		count_insns_for_constant (remainder, best_start)))
+	  best_start = 0;
+      }
 
     /* Now start emitting the insns.  */
     i = best_start;
@@ -2812,12 +3062,21 @@
 	      code = PLUS;
 
 	    insns++;
-	    i -= 6;
+	    if (TARGET_ARM)
+	      i -= 6;
+	    else
+	      i -= 7;
 	  }
-	i -= 2;
+	/* ARM allows rotates by a multiple of two.  Thumb-2 allows arbitrary
+	   shifts.  */
+	if (TARGET_ARM)
+	  i -= 2;
+	else
+	  i--;
       }
     while (remainder);
   }
+  /* APPLE LOCAL end v7 support. Merge from mainline */
 
   return insns;
 }
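The emission loop above peels one 8-bit chunk per instruction, stepping by two bit positions in ARM mode and by one in Thumb-2. A simplified standalone model of the ARM-mode instruction count (this sketch ignores the wrap-around rotations and the `best_start` heuristic, so it is an upper-bound illustration, not the compiler's exact cost):

```c
#include <stdint.h>

/* Simplified sketch of the ARM-mode constant cost: each MOV/ORR
   handles one 8-bit chunk aligned to an even bit position.  */
static int
arm_insns_for_constant (uint32_t x)
{
  int n = 0;
  while (x)
    {
      int b = __builtin_ctz (x) & ~1;  /* even rotate positions only */
      x &= ~(0xffu << b);              /* peel one 8-bit chunk */
      n++;
    }
  return n;
}
```

The `0xe0000100` constant from the comment block costs two instructions under this model, matching the `mov rA, #0xe0000000` plus small-offset pattern the comment argues for.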
@@ -2948,15 +3207,22 @@
 {
   HOST_WIDE_INT size;
 
+  /* APPLE LOCAL begin v7 support. Merge from Codesourcery */
+  size = int_size_in_bytes (type);
+
+  /* Vector values should be returned using ARM registers, not memory (unless
+     they're over 16 bytes, which will break since we only have four
+     call-clobbered registers to play with).  */
+  if (TREE_CODE (type) == VECTOR_TYPE)
+    return (size < 0 || size > (4 * UNITS_PER_WORD));
+
   if (!AGGREGATE_TYPE_P (type) &&
-      (TREE_CODE (type) != VECTOR_TYPE) &&
       !(TARGET_AAPCS_BASED && TREE_CODE (type) == COMPLEX_TYPE))
     /* All simple types are returned in registers.
        For AAPCS, complex types are treated the same as aggregates.  */
     return 0;
 
-  size = int_size_in_bytes (type);
-
+  /* APPLE LOCAL end v7 support. Merge from Codesourcery */
   if (arm_abi != ARM_ABI_APCS)
     {
       /* ATPCS and later return aggregate types in memory only if they are
@@ -2964,11 +3230,8 @@
       return (size < 0 || size > UNITS_PER_WORD);
     }
 
-  /* To maximize backwards compatibility with previous versions of gcc,
-     return vectors up to 4 words in registers.  */
-  if (TREE_CODE (type) == VECTOR_TYPE)
-    return (size < 0 || size > (4 * UNITS_PER_WORD));
-
+  /* APPLE LOCAL v7 support. Merge from Codesourcery */
+  /* Removed lines */
   /* For the arm-wince targets we choose to be compatible with Microsoft's
      ARM and Thumb compilers, which always return aggregates in memory.  */
 #ifndef ARM_WINCE
@@ -3202,7 +3465,8 @@
 {
   int nregs = pcum->nregs;
 
-  if (arm_vector_mode_supported_p (mode))
+  /* APPLE LOCAL v7 support. Merge from Codesourcery */
+  if (TARGET_IWMMXT_ABI && arm_vector_mode_supported_p (mode))
     return 0;
 
   if (NUM_ARG_REGS > nregs
@@ -3622,6 +3886,8 @@
 arm_function_ok_for_sibcall (tree decl, tree exp ATTRIBUTE_UNUSED)
 {
   int call_type = TARGET_LONG_CALLS ? CALL_LONG : CALL_NORMAL;
+  /* APPLE LOCAL v7 support. Merge from mainline */
+  unsigned long func_type;
 
   if (cfun->machine->sibcall_blocked)
     return false;
@@ -3670,10 +3936,17 @@
     return false;
   /* APPLE LOCAL end ARM 4956366 */
 
+  /* APPLE LOCAL begin v7 support. Merge from mainline */
+  func_type = arm_current_func_type ();
   /* Never tailcall from an ISR routine - it needs a special exit sequence.  */
-  if (IS_INTERRUPT (arm_current_func_type ()))
+  if (IS_INTERRUPT (func_type))
     return false;
 
+  /* Never tailcall if function may be called with a misaligned SP.  */
+  if (IS_STACKALIGN (func_type))
+    return false;
+
+  /* APPLE LOCAL end v7 support. Merge from mainline */
   /* Everything else is ok.  */
   return true;
 }
@@ -3826,11 +4099,18 @@
 		  emit_insn (gen_pic_load_addr_arm (address, norig, l1));
 		  emit_insn (gen_pic_add_dot_plus_eight (address, l1, address));
 		}
-	      else
+              /* APPLE LOCAL begin v7 support. Merge from mainline */
+              else if (TARGET_THUMB2)
+                {
+		  emit_insn (gen_pic_load_addr_thumb2 (address, norig, l1));
+		  emit_insn (gen_pic_add_dot_plus_four (address, l1, address));
+                }
+	      else /* TARGET_THUMB1 */
 		{
-		  emit_insn (gen_pic_load_addr_thumb (address, norig, l1));
+		  emit_insn (gen_pic_load_addr_thumb1 (address, norig, l1));
 		  emit_insn (gen_pic_add_dot_plus_four (address, l1, address));
 		}
+              /* APPLE LOCAL end v7 support. Merge from mainline */
 	    }
 	  else
 	    abort ();
@@ -3839,8 +4119,12 @@
 	{
 	  if (TARGET_ARM)
 	    emit_insn (gen_pic_load_addr_arm (address, norig, l1));
-	  else
-	    emit_insn (gen_pic_load_addr_thumb (address, norig, l1));
+          /* APPLE LOCAL begin v7 support. Merge from mainline */
+	  else if (TARGET_THUMB2)
+            emit_insn (gen_pic_load_addr_thumb2 (address, norig, l1));
+          else /* TARGET_THUMB1 */
+	    emit_insn (gen_pic_load_addr_thumb1 (address, norig, l1));
+          /* APPLE LOCAL end v7 support. Merge from mainline */
 	}
       /* APPLE LOCAL end ARM pic support */
 
@@ -3883,6 +4167,19 @@
 	  && XINT (XEXP (orig, 0), 1) == UNSPEC_TLS)
 	return orig;
 
+      /* APPLE LOCAL begin v7 support. Merge from Codesourcery */
+      /* Handle the case where we have:
+         const (plus (UNSPEC_TLS) (ADDEND)).  The ADDEND must be a
+         CONST_INT.  */
+      if (GET_CODE (XEXP (orig, 0)) == PLUS
+          && GET_CODE (XEXP (XEXP (orig, 0), 0)) == UNSPEC
+          && XINT (XEXP (XEXP (orig, 0), 0), 1) == UNSPEC_TLS)
+        {
+	  gcc_assert (GET_CODE (XEXP (XEXP (orig, 0), 1)) == CONST_INT);
+	  return orig;
+	}
+
+      /* APPLE LOCAL end v7 support. Merge from Codesourcery */
       if (reg == 0)
 	{
 	  gcc_assert (!no_new_pseudos);
@@ -3929,7 +4226,8 @@
 }
 
 
-/* Find a spare low register to use during the prolog of a function.  */
+/* APPLE LOCAL v7 support. Merge from mainline */
+/* Find a spare register to use during the prolog of a function.  */
 
 static int
 thumb_find_work_register (unsigned long pushed_regs_mask)
@@ -3978,6 +4276,15 @@
     if (pushed_regs_mask & (1 << reg))
       return reg;
 
+  /* APPLE LOCAL begin v7 support. Merge from mainline */
+  if (TARGET_THUMB2)
+    {
+      /* Thumb-2 can use high regs.  */
+      for (reg = FIRST_HI_REGNUM; reg < 15; reg ++)
+	if (pushed_regs_mask & (1 << reg))
+	  return reg;
+    }
+  /* APPLE LOCAL end v7 support. Merge from mainline */
   /* Something went wrong - thumb_compute_save_reg_mask()
      should have arranged for a suitable register to be pushed.  */
   gcc_unreachable ();
@@ -4027,7 +4334,29 @@
 					     cfun->machine->pic_reg));
       /* APPLE LOCAL end ARM pic support */
     }
-  else
+  /* APPLE LOCAL begin v7 support. Merge from Codesourcery */
+  else if (TARGET_THUMB2)
+    {
+      /* Thumb-2 only allows very limited access to the PC.  Calculate the
+         address in a temporary register.  */
+      if (arm_pic_register != INVALID_REGNUM)
+        {
+          pic_tmp = gen_rtx_REG (SImode,
+                                 thumb_find_work_register (saved_regs));
+        }
+      else
+        {
+          gcc_assert (!no_new_pseudos);
+          pic_tmp = gen_reg_rtx (Pmode);
+        }
+
+      emit_insn (gen_pic_load_addr_thumb2 (cfun->machine->pic_reg,
+                                           pic_rtx, l1));
+      emit_insn (gen_pic_load_dot_plus_four (pic_tmp, labelno));
+      emit_insn (gen_addsi3 (cfun->machine->pic_reg, cfun->machine->pic_reg,
+                             pic_tmp));
+    }
+  else /* TARGET_THUMB1 */
     {
       /* APPLE LOCAL begin ARM pic support */
       if (arm_pic_register != INVALID_REGNUM
@@ -4037,15 +4366,16 @@
 	     able to find a work register.  */
 	  pic_tmp = gen_rtx_REG (SImode,
 				 thumb_find_work_register (saved_regs));
-	  emit_insn (gen_pic_load_addr_thumb (pic_tmp, pic_rtx, l1));
+	  emit_insn (gen_pic_load_addr_thumb1 (pic_tmp, pic_rtx, l1));
 	  emit_insn (gen_movsi (pic_offset_table_rtx, pic_tmp));
 	}
       else
-	emit_insn (gen_pic_load_addr_thumb (cfun->machine->pic_reg, pic_rtx, l1));
+	emit_insn (gen_pic_load_addr_thumb1 (cfun->machine->pic_reg, pic_rtx, l1));
       emit_insn (gen_pic_add_dot_plus_four (cfun->machine->pic_reg, l1,
 					    cfun->machine->pic_reg));
       /* APPLE LOCAL end ARM pic support */
     }
+  /* APPLE LOCAL end v7 support. Merge from Codesourcery */
 
   /* Need to emit this whether or not we obey regdecls,
      since setjmp/longjmp can cause life info to screw up.  */
@@ -4102,6 +4432,12 @@
 	      && (mode == DImode
 		  || (mode == DFmode && (TARGET_SOFT_FLOAT || TARGET_VFP))));
 
+  /* APPLE LOCAL begin 6293989 */
+  if (TARGET_NEON && VECTOR_MODE_P (mode)
+      && (code == PRE_DEC || code == PRE_INC || code == POST_DEC))
+    return 0;
+  /* APPLE LOCAL end 6293989 */
+
   if (code == POST_INC || code == PRE_DEC
       || ((code == PRE_INC || code == POST_DEC)
 	  && (use_ldrd || GET_MODE_SIZE (mode) <= 4)))
@@ -4135,7 +4471,8 @@
 		   && GET_CODE (XEXP (XEXP (x, 0), 1)) == CONST_INT)))
     return 1;
 
-  else if (mode == TImode)
+  /* APPLE LOCAL v7 support. Merge from mainline */
+  else if (mode == TImode || (TARGET_NEON && VALID_NEON_STRUCT_MODE (mode)))
     return 0;
 
   else if (code == PLUS)
@@ -4172,6 +4509,89 @@
   return 0;
 }
 
+/* APPLE LOCAL begin v7 support. Merge from mainline */
+/* Return nonzero if X is a valid Thumb-2 address operand.  */
+int
+thumb2_legitimate_address_p (enum machine_mode mode, rtx x, int strict_p)
+{
+  bool use_ldrd;
+  enum rtx_code code = GET_CODE (x);
+
+  if (arm_address_register_rtx_p (x, strict_p))
+    return 1;
+
+  use_ldrd = (TARGET_LDRD
+	      && (mode == DImode
+		  || (mode == DFmode && (TARGET_SOFT_FLOAT || TARGET_VFP))));
+
+  /* APPLE LOCAL begin 6293989 */
+  if (TARGET_NEON && VECTOR_MODE_P (mode)
+      && (code == PRE_DEC || code == PRE_INC || code == POST_DEC))
+    return 0;
+  /* APPLE LOCAL end 6293989 */
+
+  if (code == POST_INC || code == PRE_DEC
+      || ((code == PRE_INC || code == POST_DEC)
+	  && (use_ldrd || GET_MODE_SIZE (mode) <= 4)))
+    return arm_address_register_rtx_p (XEXP (x, 0), strict_p);
+
+  else if ((code == POST_MODIFY || code == PRE_MODIFY)
+	   && arm_address_register_rtx_p (XEXP (x, 0), strict_p)
+	   && GET_CODE (XEXP (x, 1)) == PLUS
+	   && rtx_equal_p (XEXP (XEXP (x, 1), 0), XEXP (x, 0)))
+    {
+      /* Thumb-2 only has autoincrement by constant.  */
+      rtx addend = XEXP (XEXP (x, 1), 1);
+      HOST_WIDE_INT offset;
+
+      if (GET_CODE (addend) != CONST_INT)
+	return 0;
+
+      offset = INTVAL (addend);
+      if (GET_MODE_SIZE (mode) <= 4)
+	return (offset > -256 && offset < 256);
+
+      return (use_ldrd && offset > -1024 && offset < 1024
+	      && (offset & 3) == 0);
+    }
+
+  /* After reload constants split into minipools will have addresses
+     from a LABEL_REF.  */
+  else if (reload_completed
+	   && (code == LABEL_REF
+	       || (code == CONST
+		   && GET_CODE (XEXP (x, 0)) == PLUS
+		   && GET_CODE (XEXP (XEXP (x, 0), 0)) == LABEL_REF
+		   && GET_CODE (XEXP (XEXP (x, 0), 1)) == CONST_INT)))
+    return 1;
+
+  /* APPLE LOCAL v7 support. Merge from Codesourcery */
+  else if (mode == TImode || (TARGET_NEON && VALID_NEON_STRUCT_MODE (mode)))
+    return 0;
+
+  else if (code == PLUS)
+    {
+      rtx xop0 = XEXP (x, 0);
+      rtx xop1 = XEXP (x, 1);
+
+      return ((arm_address_register_rtx_p (xop0, strict_p)
+	       && thumb2_legitimate_index_p (mode, xop1, strict_p))
+	      || (arm_address_register_rtx_p (xop1, strict_p)
+		  && thumb2_legitimate_index_p (mode, xop0, strict_p)));
+    }
+
+  else if (GET_MODE_CLASS (mode) != MODE_FLOAT
+	   && code == SYMBOL_REF
+	   && CONSTANT_POOL_ADDRESS_P (x)
+	   && ! (flag_pic
+		 && symbol_mentioned_p (get_pool_constant (x))
+		 && ! pcrel_constant_p (get_pool_constant (x))))
+    return 1;
+
+  return 0;
+}
+
+/* APPLE LOCAL end v7 support. Merge from mainline */
 /* Return nonzero if INDEX is valid for an address index operand in
    ARM state.  */
 static int
@@ -4202,6 +4622,17 @@
 		&& (INTVAL (index) & 3) == 0);
     }
 
+  /* APPLE LOCAL begin v7 support. Merge from Codesourcery */
+  if (TARGET_NEON
+      /* APPLE LOCAL 6150882 use thumb2 by default for v7 */
+      && VECTOR_MODE_P (mode)
+      && (VALID_NEON_DREG_MODE (mode) || VALID_NEON_QREG_MODE (mode)))
+    return (code == CONST_INT
+	    && INTVAL (index) < 1016
+	    && INTVAL (index) > -1024
+	    && (INTVAL (index) & 3) == 0);
+
+  /* APPLE LOCAL end v7 support. Merge from Codesourcery */
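As a hedged illustration of the NEON reg+const test added above: a constant offset is accepted only when it is a multiple of 4 and lies strictly between -1024 and 1016. A minimal Python sketch (the function name is illustrative, not from the source):

```python
def neon_const_offset_ok(offset):
    # Mirrors the CONST_INT test above: strictly greater than -1024,
    # strictly less than 1016, and word-aligned (a multiple of 4).
    return -1024 < offset < 1016 and offset % 4 == 0
```

So the largest accepted positive offset is 1012 and the most negative is -1020.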
   if (arm_address_register_rtx_p (index, strict_p)
       && (GET_MODE_SIZE (mode) <= 4))
     return 1;
@@ -4265,10 +4696,101 @@
 	  && INTVAL (index) > -range);
 }
 
-/* Return nonzero if X is valid as a Thumb state base register.  */
+/* APPLE LOCAL begin v7 support. Merge from mainline */
+/* Return true if OP is a valid index scaling factor for Thumb-2 address
+   index operand, i.e. 1, 2, 4 or 8.  */
+static bool
+thumb2_index_mul_operand (rtx op)
+{
+  HOST_WIDE_INT val;
+  
+  if (GET_CODE (op) != CONST_INT)
+    return false;
+
+  val = INTVAL (op);
+  return (val == 1 || val == 2 || val == 4 || val == 8);
+}
+  
+/* Return nonzero if INDEX is a valid Thumb-2 address index operand.  */
+static int
+thumb2_legitimate_index_p (enum machine_mode mode, rtx index, int strict_p)
+{
+  enum rtx_code code = GET_CODE (index);
+
+  /* ??? Combine arm and thumb2 coprocessor addressing modes.  */
+  /* Standard coprocessor addressing modes.  */
+  if (TARGET_HARD_FLOAT
+      && (TARGET_FPA || TARGET_MAVERICK)
+      && (GET_MODE_CLASS (mode) == MODE_FLOAT
+	  || (TARGET_MAVERICK && mode == DImode)))
+    return (code == CONST_INT && INTVAL (index) < 1024
+	    && INTVAL (index) > -1024
+	    && (INTVAL (index) & 3) == 0);
+
+  if (TARGET_REALLY_IWMMXT && VALID_IWMMXT_REG_MODE (mode))
+    return (code == CONST_INT
+	    && INTVAL (index) < 1024
+	    && INTVAL (index) > -1024
+	    && (INTVAL (index) & 3) == 0);
+
+  /* APPLE LOCAL begin v7 support. Merge from Codesourcery */
+  if (TARGET_NEON
+      /* APPLE LOCAL 6150882 use thumb2 by default for v7 */
+      && VECTOR_MODE_P (mode)
+      && (VALID_NEON_DREG_MODE (mode) || VALID_NEON_QREG_MODE (mode)))
+    return (code == CONST_INT
+	    && INTVAL (index) < 1016
+	    && INTVAL (index) > -1024
+	    && (INTVAL (index) & 3) == 0);
+
+  /* APPLE LOCAL end v7 support. Merge from Codesourcery */
+  if (arm_address_register_rtx_p (index, strict_p)
+      && (GET_MODE_SIZE (mode) <= 4))
+    return 1;
+
+  if (mode == DImode || mode == DFmode)
+    {
+      HOST_WIDE_INT val = INTVAL (index);
+      /* ??? Can we assume ldrd for thumb2?  */
+      /* Thumb-2 ldrd only has reg+const addressing modes.  */
+      if (code != CONST_INT)
+	return 0;
+
+      /* ldrd supports offsets of +-1020.
+         However the ldr fallback does not.  */
+      return val > -256 && val < 256 && (val & 3) == 0;
+    }
+
+  if (code == MULT)
+    {
+      rtx xiop0 = XEXP (index, 0);
+      rtx xiop1 = XEXP (index, 1);
+
+      return ((arm_address_register_rtx_p (xiop0, strict_p)
+	       && thumb2_index_mul_operand (xiop1))
+	      || (arm_address_register_rtx_p (xiop1, strict_p)
+		  && thumb2_index_mul_operand (xiop0)));
+    }
+  else if (code == ASHIFT)
+    {
+      rtx op = XEXP (index, 1);
+
+      return (arm_address_register_rtx_p (XEXP (index, 0), strict_p)
+	      && GET_CODE (op) == CONST_INT
+	      && INTVAL (op) > 0
+	      && INTVAL (op) <= 3);
+    }
+
+  return (code == CONST_INT
+	  && INTVAL (index) < 4096
+	  && INTVAL (index) > -256);
+}
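The constant-offset cases of `thumb2_legitimate_index_p` can be summarized in a small sketch, ignoring the coprocessor, iWMMXt, and NEON special cases handled earlier in the function (names and the simplification are assumptions of this sketch, not part of the source):

```python
def thumb2_const_index_ok(offset, mode_size):
    # Simplified model of the CONST_INT paths above: 8-byte modes
    # (DImode/DFmode) are limited to the word-aligned +-255 range that
    # the plain-ldr fallback can handle; other modes accept offsets in
    # the general (-256, 4096) immediate range.
    if mode_size == 8:
        return -256 < offset < 256 and offset % 4 == 0
    return -256 < offset < 4096
```

Register, MULT, and ASHIFT index forms take separate paths in the real function and are not modeled here.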
+
+/* Return nonzero if X is valid as a 16-bit Thumb state base register.  */
 static int
-thumb_base_register_rtx_p (rtx x, enum machine_mode mode, int strict_p)
+thumb1_base_register_rtx_p (rtx x, enum machine_mode mode, int strict_p)
 {
+  /* APPLE LOCAL end v7 support. Merge from mainline */
   int regno;
 
   if (GET_CODE (x) != REG)
@@ -4277,7 +4799,8 @@
   regno = REGNO (x);
 
   if (strict_p)
-    return THUMB_REGNO_MODE_OK_FOR_BASE_P (regno, mode);
+    /* APPLE LOCAL v7 support. Merge from mainline */
+    return THUMB1_REGNO_MODE_OK_FOR_BASE_P (regno, mode);
 
   return (regno <= LAST_LO_REGNUM
 	  || regno > LAST_VIRTUAL_REGISTER
@@ -4291,13 +4814,16 @@
 
 /* Return nonzero if x is a legitimate index register.  This is the case
    for any base register that can access a QImode object.  */
+/* APPLE LOCAL begin v7 support. Merge from mainline */
 inline static int
-thumb_index_register_rtx_p (rtx x, int strict_p)
+thumb1_index_register_rtx_p (rtx x, int strict_p)
 {
-  return thumb_base_register_rtx_p (x, QImode, strict_p);
+  return thumb1_base_register_rtx_p (x, QImode, strict_p);
 }
+/* APPLE LOCAL end v7 support. Merge from mainline */
 
-/* Return nonzero if x is a legitimate Thumb-state address.
+/* APPLE LOCAL v7 support. Merge from mainline */
+/* Return nonzero if x is a legitimate 16-bit Thumb-state address.
 
    The AP may be eliminated to either the SP or the FP, so we use the
    least common denominator, e.g. SImode, and offsets from 0 to 64.
@@ -4315,7 +4841,8 @@
    reload pass starts.  This is so that eliminating such addresses
    into stack based ones won't produce impossible code.  */
 int
-thumb_legitimate_address_p (enum machine_mode mode, rtx x, int strict_p)
+/* APPLE LOCAL v7 support. Merge from mainline */
+thumb1_legitimate_address_p (enum machine_mode mode, rtx x, int strict_p)
 {
   /* ??? Not clear if this is right.  Experiment.  */
   if (GET_MODE_SIZE (mode) < 4
@@ -4329,7 +4856,8 @@
     return 0;
 
   /* Accept any base register.  SP only in SImode or larger.  */
-  else if (thumb_base_register_rtx_p (x, mode, strict_p))
+  /* APPLE LOCAL v7 support. Merge from mainline */
+  else if (thumb1_base_register_rtx_p (x, mode, strict_p))
     return 1;
 
   /* This is PC relative data before arm_reorg runs.  */
@@ -4349,7 +4877,8 @@
 
   /* Post-inc indexing only supported for SImode and larger.  */
   else if (GET_CODE (x) == POST_INC && GET_MODE_SIZE (mode) >= 4
-	   && thumb_index_register_rtx_p (XEXP (x, 0), strict_p))
+  /* APPLE LOCAL v7 support. Merge from mainline */
+	   && thumb1_index_register_rtx_p (XEXP (x, 0), strict_p))
     return 1;
 
   else if (GET_CODE (x) == PLUS)
@@ -4361,12 +4890,15 @@
       if (GET_MODE_SIZE (mode) <= 4
 	  && XEXP (x, 0) != frame_pointer_rtx
 	  && XEXP (x, 1) != frame_pointer_rtx
-	  && thumb_index_register_rtx_p (XEXP (x, 0), strict_p)
-	  && thumb_index_register_rtx_p (XEXP (x, 1), strict_p))
+          /* APPLE LOCAL begin v7 support. Merge from mainline */
+	  && thumb1_index_register_rtx_p (XEXP (x, 0), strict_p)
+	  && thumb1_index_register_rtx_p (XEXP (x, 1), strict_p))
+          /* APPLE LOCAL end v7 support. Merge from mainline */
 	return 1;
 
       /* REG+const has 5-7 bit offset for non-SP registers.  */
-      else if ((thumb_index_register_rtx_p (XEXP (x, 0), strict_p)
+      /* APPLE LOCAL v7 support. Merge from mainline */
+      else if ((thumb1_index_register_rtx_p (XEXP (x, 0), strict_p)
 		|| XEXP (x, 0) == arg_pointer_rtx)
 	       && GET_CODE (XEXP (x, 1)) == CONST_INT
 	       && thumb_legitimate_offset_p (mode, INTVAL (XEXP (x, 1))))
@@ -4498,8 +5030,19 @@
 
   if (TARGET_ARM)
     emit_insn (gen_pic_add_dot_plus_eight (reg, reg, labelno));
-  else
+  /* APPLE LOCAL begin v7 support. Merge from mainline */
+  else if (TARGET_THUMB2)
+    {
+      rtx tmp;
+      /* Thumb-2 only allows very limited access to the PC.  Calculate
+	 the address in a temporary register.  */
+      tmp = gen_reg_rtx (SImode);
+      emit_insn (gen_pic_load_dot_plus_four (tmp, labelno));
+      emit_insn (gen_addsi3 (reg, reg, tmp));
+    }
+  else /* TARGET_THUMB1 */
     emit_insn (gen_pic_add_dot_plus_four (reg, reg, labelno));
+  /* APPLE LOCAL end v7 support. Merge from mainline */
 
   *valuep = emit_library_call_value (get_tls_get_addr (), NULL_RTX, LCT_PURE, /* LCT_CONST?  */
 				     Pmode, 1, reg, Pmode);
@@ -4552,6 +5095,18 @@
 
       if (TARGET_ARM)
 	emit_insn (gen_tls_load_dot_plus_eight (reg, reg, labelno));
+      /* APPLE LOCAL begin v7 support. Merge from mainline */
+      else if (TARGET_THUMB2)
+	{
+	  rtx tmp;
+	  /* Thumb-2 only allows very limited access to the PC.  Calculate
+	     the address in a temporary register.  */
+	  tmp = gen_reg_rtx (SImode);
+	  emit_insn (gen_pic_load_dot_plus_four (tmp, labelno));
+	  emit_insn (gen_addsi3 (reg, reg, tmp));
+	  emit_move_insn (reg, gen_const_mem (SImode, reg));
+	}
+      /* APPLE LOCAL end v7 support. Merge from mainline */
       else
 	{
 	  emit_insn (gen_pic_add_dot_plus_four (reg, reg, labelno));
@@ -4907,7 +5462,8 @@
 #define COSTS_N_INSNS(N) ((N) * 4 - 2)
 #endif
 static inline int
-thumb_rtx_costs (rtx x, enum rtx_code code, enum rtx_code outer)
+/* APPLE LOCAL v7 support. Merge from mainline */
+thumb1_rtx_costs (rtx x, enum rtx_code code, enum rtx_code outer)
 {
   enum machine_mode mode = GET_MODE (x);
 
@@ -5031,7 +5587,7 @@
    anywhere here.)  */
 
 static inline int
-thumb_size_rtx_costs (rtx x, enum rtx_code code, enum rtx_code outer)
+thumb1_size_rtx_costs (rtx x, enum rtx_code code, enum rtx_code outer)
 {
   enum machine_mode mode = GET_MODE (x);
 
@@ -5132,6 +5688,8 @@
 /* APPLE LOCAL end ARM size variant of thumb costs */
 
 /* Worker routine for arm_rtx_costs.  */
+/* APPLE LOCAL v7 support. Merge from mainline */
+/* ??? This needs updating for thumb2.  */
 static inline int
 arm_rtx_costs_1 (rtx x, enum rtx_code code, enum rtx_code outer)
 {
@@ -5180,6 +5738,16 @@
 		 ? 0 : 4));
 
     case MINUS:
+      /* APPLE LOCAL begin v7 support. Merge from mainline */
+      if (GET_CODE (XEXP (x, 1)) == MULT && mode == SImode && arm_arch_thumb2)
+	{
+	  extra_cost = rtx_cost (XEXP (x, 1), code);
+	  if (!REG_OR_SUBREG_REG (XEXP (x, 0)))
+	    extra_cost += 4 * ARM_NUM_REGS (mode);
+	  return extra_cost;
+	}
+
+      /* APPLE LOCAL end v7 support. Merge from mainline */
       if (mode == DImode)
 	return (4 + (REG_OR_SUBREG_REG (XEXP (x, 1)) ? 0 : 8)
 		+ ((REG_OR_SUBREG_REG (XEXP (x, 0))
@@ -5319,6 +5887,8 @@
       return 4 + (mode == DImode ? 4 : 0);
 
     case SIGN_EXTEND:
+      /* APPLE LOCAL v7 support. Merge from mainline */
+      /* ??? value extensions are cheaper on armv6. */
       if (GET_MODE (XEXP (x, 0)) == QImode)
 	return (4 + (mode == DImode ? 4 : 0)
 		+ (GET_CODE (XEXP (x, 0)) == MEM ? 10 : 0));
@@ -5368,7 +5938,8 @@
       return 6;
 
     case CONST_DOUBLE:
-      if (arm_const_double_rtx (x))
+      /* APPLE LOCAL v7 support. Merge from mainline */
+      if (arm_const_double_rtx (x) || vfp3_const_double_rtx (x))
 	return outer == SET ? 2 : -1;
       else if ((outer == COMPARE || outer == PLUS)
 	       && neg_const_double_rtx_ok_for_fpa (x))
@@ -5388,9 +5959,8 @@
 
   if (TARGET_THUMB)
     {
-      /* APPLE LOCAL begin ARM size variant of thumb costs */
-      *total = thumb_size_rtx_costs (x, code, outer_code);
-      /* APPLE LOCAL end ARM size variant of thumb costs */
+      /* APPLE LOCAL v7 support. Merge from mainline */
+      *total = thumb1_size_rtx_costs (x, code, outer_code);
       return true;
     }
 
@@ -5627,7 +6197,10 @@
     }
 }
 
-/* RTX costs for cores with a slow MUL implementation.  */
+/* APPLE LOCAL begin v7 support. Merge from mainline */
+/* RTX costs for cores with a slow MUL implementation.  Thumb-2 is not
+   supported on any "slowmul" cores, so it can be ignored.  */
+/* APPLE LOCAL end v7 support. Merge from mainline */
 
 static bool
 arm_slowmul_rtx_costs (rtx x, int code, int outer_code, int *total)
@@ -5636,7 +6209,8 @@
 
   if (TARGET_THUMB)
     {
-      *total = thumb_rtx_costs (x, code, outer_code);
+      /* APPLE LOCAL v7 support. Merge from mainline */
+      *total = thumb1_rtx_costs (x, code, outer_code);
       return true;
     }
 
@@ -5688,12 +6262,15 @@
 {
   enum machine_mode mode = GET_MODE (x);
 
-  if (TARGET_THUMB)
+  /* APPLE LOCAL begin v7 support. Merge from mainline */
+  if (TARGET_THUMB1)
     {
-      *total = thumb_rtx_costs (x, code, outer_code);
+      *total = thumb1_rtx_costs (x, code, outer_code);
       return true;
     }
 
+  /* ??? should thumb2 use different costs?  */
+  /* APPLE LOCAL end v7 support. Merge from mainline */
   switch (code)
     {
     case MULT:
@@ -5747,7 +6324,10 @@
 }
 
 
-/* RTX cost for XScale CPUs.  */
+/* APPLE LOCAL begin v7 support. Merge from mainline */
+/* RTX cost for XScale CPUs.  Thumb-2 is not supported on any xscale cores,
+   so it can be ignored.  */
+/* APPLE LOCAL end v7 support. Merge from mainline */
 
 static bool
 arm_xscale_rtx_costs (rtx x, int code, int outer_code, int *total)
@@ -5756,7 +6336,8 @@
 
   if (TARGET_THUMB)
     {
-      *total = thumb_rtx_costs (x, code, outer_code);
+      /* APPLE LOCAL v7 support. Merge from mainline */
+      *total = thumb1_rtx_costs (x, code, outer_code);
       return true;
     }
 
@@ -5837,7 +6418,8 @@
   int nonreg_cost;
   int cost;
 
-  if (TARGET_THUMB)
+  /* APPLE LOCAL v7 support. Merge from mainline */
+  if (TARGET_THUMB1)
     {
       switch (code)
 	{
@@ -5846,7 +6428,8 @@
 	  return true;
 
 	default:
-	  *total = thumb_rtx_costs (x, code, outer_code);
+          /* APPLE LOCAL v7 support. Merge from mainline */
+	  *total = thumb1_rtx_costs (x, code, outer_code);
 	  return true;
 	}
     }
@@ -5938,7 +6521,8 @@
 static int
 arm_address_cost (rtx x)
 {
-  return TARGET_ARM ? arm_arm_address_cost (x) : arm_thumb_address_cost (x);
+  /* APPLE LOCAL v7 support. Merge from mainline */
+  return TARGET_32BIT ? arm_arm_address_cost (x) : arm_thumb_address_cost (x);
 }
 
 static int
@@ -6101,35 +6685,495 @@
 
   return 0;
 }
-
-/* Predicates for `match_operand' and `match_operator'.  */
 
-/* Return nonzero if OP is a valid Cirrus memory address pattern.  */
-int
-cirrus_memory_offset (rtx op)
-{
-  /* Reject eliminable registers.  */
-  if (! (reload_in_progress || reload_completed)
-      && (   reg_mentioned_p (frame_pointer_rtx, op)
-	  || reg_mentioned_p (arg_pointer_rtx, op)
-	  || reg_mentioned_p (virtual_incoming_args_rtx, op)
-	  || reg_mentioned_p (virtual_outgoing_args_rtx, op)
-	  || reg_mentioned_p (virtual_stack_dynamic_rtx, op)
-	  || reg_mentioned_p (virtual_stack_vars_rtx, op)))
-    return 0;
 
-  if (GET_CODE (op) == MEM)
-    {
-      rtx ind;
+/* APPLE LOCAL begin v7 support. Merge from mainline */
+/* VFPv3 has a fairly wide range of representable immediates, formed from
+   "quarter-precision" floating-point values. These can be evaluated using this
+   formula (with ^ for exponentiation):
 
-      ind = XEXP (op, 0);
+     -1^s * n * 2^-r
 
-      /* Match: (mem (reg)).  */
-      if (GET_CODE (ind) == REG)
-	return 1;
+   Where 's' is a sign bit (0/1), 'n' and 'r' are integers such that
+   16 <= n <= 31 and 0 <= r <= 7.
 
-      /* Match:
-	 (mem (plus (reg)
+   These values are mapped onto an 8-bit integer ABCDEFGH s.t.
+
+     - A (most-significant) is the sign bit.
+     - BCD are the exponent (encoded as r XOR 3).
+     - EFGH are the mantissa (encoded as n - 16).
+*/
+
+/* Return an integer index for a VFPv3 immediate operand X suitable for the
+   fconst[sd] instruction, or -1 if X isn't suitable.  */
+static int
+vfp3_const_double_index (rtx x)
+{
+  REAL_VALUE_TYPE r, m;
+  int sign, exponent;
+  unsigned HOST_WIDE_INT mantissa, mant_hi;
+  unsigned HOST_WIDE_INT mask;
+  HOST_WIDE_INT m1, m2;
+  int point_pos = 2 * HOST_BITS_PER_WIDE_INT - 1;
+
+  if (!TARGET_VFP3 || GET_CODE (x) != CONST_DOUBLE)
+    return -1;
+
+  REAL_VALUE_FROM_CONST_DOUBLE (r, x);
+
+  /* We can't represent these things, so detect them first.  */
+  if (REAL_VALUE_ISINF (r) || REAL_VALUE_ISNAN (r) || REAL_VALUE_MINUS_ZERO (r))
+    return -1;
+
+  /* Extract sign, exponent and mantissa.  */
+  sign = REAL_VALUE_NEGATIVE (r) ? 1 : 0;
+  r = REAL_VALUE_ABS (r);
+  exponent = REAL_EXP (&r);
+  /* For the mantissa, we expand into two HOST_WIDE_INTS, apart from the
+     highest (sign) bit, with a fixed binary point at bit point_pos.
+     WARNING: If there's ever a VFP version which uses more than 2 * H_W_I - 1
+     bits for the mantissa, this may fail (low bits would be lost).  */
+  real_ldexp (&m, &r, point_pos - exponent);
+  REAL_VALUE_TO_INT (&m1, &m2, m);
+  mantissa = m1;
+  mant_hi = m2;
+
+  /* If there are bits set in the low part of the mantissa, we can't
+     represent this value.  */
+  if (mantissa != 0)
+    return -1;
+
+  /* Now make it so that mantissa contains the most-significant bits, and move
+     the point_pos to indicate that the least-significant bits have been
+     discarded.  */
+  point_pos -= HOST_BITS_PER_WIDE_INT;
+  mantissa = mant_hi;
+
+  /* We can permit four significant bits of mantissa only, plus a high bit
+     which is always 1.  */
+  mask = ((unsigned HOST_WIDE_INT)1 << (point_pos - 5)) - 1;
+  if ((mantissa & mask) != 0)
+    return -1;
+
+  /* Now we know the mantissa is in range, chop off the unneeded bits.  */
+  mantissa >>= point_pos - 5;
+
+  /* The mantissa may be zero. Disallow that case. (It's possible to load the
+     floating-point immediate zero with Neon using an integer-zero load, but
+     that case is handled elsewhere.)  */
+  if (mantissa == 0)
+    return -1;
+
+  gcc_assert (mantissa >= 16 && mantissa <= 31);
+
+  /* The value of 5 here would be 4 if GCC used IEEE754-like encoding (where
+     normalised significands are in the range [1, 2). (Our mantissa is shifted
+     left 4 places at this point relative to normalised IEEE754 values).  GCC
+     internally uses [0.5, 1) (see real.c), so the exponent returned from
+     REAL_EXP must be altered.  */
+  exponent = 5 - exponent;
+
+  if (exponent < 0 || exponent > 7)
+    return -1;
+
+  /* Sign, mantissa and exponent are now in the correct form to plug into the
+     formulae described in the comment above.  */
+  return (sign << 7) | ((exponent ^ 3) << 4) | (mantissa - 16);
+}
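The quarter-precision encoding described in the comment above can be checked by decoding it in the other direction. A minimal sketch, assuming the bit layout given there (the helper name is illustrative, not from the source):

```python
def vfp3_decode_imm8(imm8):
    # Decode the 8-bit ABCDEFGH value back into -1^s * n * 2^-r,
    # where A is the sign, BCD encodes r XOR 3, and EFGH encodes n - 16.
    sign = (imm8 >> 7) & 1
    r = ((imm8 >> 4) & 7) ^ 3
    n = (imm8 & 0xF) + 16
    return (-1.0) ** sign * n * 2.0 ** -r
```

For example, 0x70 decodes as n = 16, r = 4, giving 16 * 2^-4 = 1.0, and 0x00 decodes to 2.0 (n = 16, r = 3).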
+
+/* Return TRUE if rtx X is a valid immediate VFPv3 constant.  */
+int
+vfp3_const_double_rtx (rtx x)
+{
+  if (!TARGET_VFP3)
+    return 0;
+
+  return vfp3_const_double_index (x) != -1;
+}
+
+/* APPLE LOCAL end v7 support. Merge from mainline */
+/* APPLE LOCAL begin v7 support. Merge from Codesourcery */
+/* Recognize immediates which can be used in various Neon instructions. Legal
+   immediates are described by the following table (for VMVN variants, the
+   bitwise inverse of the constant shown is recognized. In either case, VMOV
+   is output and the correct instruction to use for a given constant is chosen
+   by the assembler). The constant shown is replicated across all elements of
+   the destination vector.
+   
+   insn elems variant constant (binary)
+   ---- ----- ------- -----------------
+   vmov  i32     0    00000000 00000000 00000000 abcdefgh
+   vmov  i32     1    00000000 00000000 abcdefgh 00000000
+   vmov  i32     2    00000000 abcdefgh 00000000 00000000
+   vmov  i32     3    abcdefgh 00000000 00000000 00000000
+   vmov  i16     4    00000000 abcdefgh
+   vmov  i16     5    abcdefgh 00000000
+   vmvn  i32     6    00000000 00000000 00000000 abcdefgh
+   vmvn  i32     7    00000000 00000000 abcdefgh 00000000
+   vmvn  i32     8    00000000 abcdefgh 00000000 00000000
+   vmvn  i32     9    abcdefgh 00000000 00000000 00000000
+   vmvn  i16    10    00000000 abcdefgh
+   vmvn  i16    11    abcdefgh 00000000
+   vmov  i32    12    00000000 00000000 abcdefgh 11111111
+   vmvn  i32    13    00000000 00000000 abcdefgh 11111111
+   vmov  i32    14    00000000 abcdefgh 11111111 11111111
+   vmvn  i32    15    00000000 abcdefgh 11111111 11111111
+   vmov   i8    16    abcdefgh
+   vmov  i64    17    aaaaaaaa bbbbbbbb cccccccc dddddddd
+                      eeeeeeee ffffffff gggggggg hhhhhhhh
+   vmov  f32    18    aBbbbbbc defgh000 00000000 00000000
+
+   For case 18, B = !b. Representable values are exactly those accepted by
+   vfp3_const_double_index, but are output as floating-point numbers rather
+   than indices.
+   
+   Variants 0-5 (inclusive) may also be used as immediates for the second
+   operand of VORR/VBIC instructions.
+   
+   The INVERSE argument causes the bitwise inverse of the given operand to be
+   recognized instead (used for recognizing legal immediates for the VAND/VORN
+   pseudo-instructions). If INVERSE is true, the value placed in *MODCONST is
+   *not* inverted (i.e. the pseudo-instruction forms vand/vorn should still be
+   output, rather than the real insns vbic/vorr).
+   
+   INVERSE makes no difference to the recognition of float vectors.
+   
+   The return value is the variant of immediate as shown in the above table, or
+   -1 if the given value doesn't match any of the listed patterns.
+*/
+static int
+neon_valid_immediate (rtx op, enum machine_mode mode, int inverse,
+		      rtx *modconst, int *elementwidth)
+{
+#define CHECK(STRIDE, ELSIZE, CLASS, TEST)	\
+  matches = 1;					\
+  for (i = 0; i < idx; i += (STRIDE))		\
+    if (!(TEST))				\
+      matches = 0;				\
+  if (matches)					\
+    {						\
+      immtype = (CLASS);			\
+      elsize = (ELSIZE);			\
+      break;					\
+    }
+
+  unsigned int i, elsize, idx = 0, n_elts = CONST_VECTOR_NUNITS (op);
+  unsigned int innersize = GET_MODE_SIZE (GET_MODE_INNER (mode));
+  unsigned char bytes[16];
+  int immtype = -1, matches;
+  unsigned int invmask = inverse ? 0xff : 0;
+  
+  /* Vectors of float constants.  */
+  if (GET_MODE_CLASS (mode) == MODE_VECTOR_FLOAT)
+    {
+      rtx el0 = CONST_VECTOR_ELT (op, 0);
+      REAL_VALUE_TYPE r0;
+
+      if (!vfp3_const_double_rtx (el0))
+        return -1;
+
+      REAL_VALUE_FROM_CONST_DOUBLE (r0, el0);
+
+      for (i = 1; i < n_elts; i++)
+        {
+          rtx elt = CONST_VECTOR_ELT (op, i);
+          REAL_VALUE_TYPE re;
+          
+          REAL_VALUE_FROM_CONST_DOUBLE (re, elt);
+
+          if (!REAL_VALUES_EQUAL (r0, re))
+            return -1;
+        }
+
+      if (modconst)
+        *modconst = CONST_VECTOR_ELT (op, 0);
+      
+      if (elementwidth)
+        *elementwidth = 0;
+      
+      return 18;
+    }
+  
+  /* Splat vector constant out into a byte vector.  */
+  for (i = 0; i < n_elts; i++)
+    {
+      rtx el = CONST_VECTOR_ELT (op, i);
+      unsigned HOST_WIDE_INT elpart;
+      unsigned int part, parts;
+
+      if (GET_CODE (el) == CONST_INT)
+        {
+          elpart = INTVAL (el);
+          parts = 1;
+        }
+      else if (GET_CODE (el) == CONST_DOUBLE)
+        {
+          elpart = CONST_DOUBLE_LOW (el);
+          parts = 2;
+        }
+      else
+        gcc_unreachable ();
+      
+      for (part = 0; part < parts; part++)
+        {
+          unsigned int byte;
+          for (byte = 0; byte < innersize; byte++)
+            {
+              bytes[idx++] = (elpart & 0xff) ^ invmask;
+              elpart >>= BITS_PER_UNIT;
+            }
+          if (GET_CODE (el) == CONST_DOUBLE)
+            elpart = CONST_DOUBLE_HIGH (el);
+        }
+    }
+  
+  /* Sanity check.  */
+  gcc_assert (idx == GET_MODE_SIZE (mode));
+  
+  do
+    {
+      CHECK (4, 32, 0, bytes[i] == bytes[0] && bytes[i + 1] == 0
+		       && bytes[i + 2] == 0 && bytes[i + 3] == 0);
+
+      CHECK (4, 32, 1, bytes[i] == 0 && bytes[i + 1] == bytes[1]
+		       && bytes[i + 2] == 0 && bytes[i + 3] == 0);
+
+      CHECK (4, 32, 2, bytes[i] == 0 && bytes[i + 1] == 0
+		       && bytes[i + 2] == bytes[2] && bytes[i + 3] == 0);
+
+      CHECK (4, 32, 3, bytes[i] == 0 && bytes[i + 1] == 0
+		       && bytes[i + 2] == 0 && bytes[i + 3] == bytes[3]);
+
+      CHECK (2, 16, 4, bytes[i] == bytes[0] && bytes[i + 1] == 0);
+
+      CHECK (2, 16, 5, bytes[i] == 0 && bytes[i + 1] == bytes[1]);
+
+      CHECK (4, 32, 6, bytes[i] == bytes[0] && bytes[i + 1] == 0xff
+		       && bytes[i + 2] == 0xff && bytes[i + 3] == 0xff);
+
+      CHECK (4, 32, 7, bytes[i] == 0xff && bytes[i + 1] == bytes[1]
+		       && bytes[i + 2] == 0xff && bytes[i + 3] == 0xff);
+                   
+      CHECK (4, 32, 8, bytes[i] == 0xff && bytes[i + 1] == 0xff
+		       && bytes[i + 2] == bytes[2] && bytes[i + 3] == 0xff);
+
+      CHECK (4, 32, 9, bytes[i] == 0xff && bytes[i + 1] == 0xff
+		       && bytes[i + 2] == 0xff && bytes[i + 3] == bytes[3]);
+      
+      CHECK (2, 16, 10, bytes[i] == bytes[0] && bytes[i + 1] == 0xff);
+
+      CHECK (2, 16, 11, bytes[i] == 0xff && bytes[i + 1] == bytes[1]);
+                    
+      CHECK (4, 32, 12, bytes[i] == 0xff && bytes[i + 1] == bytes[1]
+			&& bytes[i + 2] == 0 && bytes[i + 3] == 0);
+
+      CHECK (4, 32, 13, bytes[i] == 0 && bytes[i + 1] == bytes[1]
+			&& bytes[i + 2] == 0xff && bytes[i + 3] == 0xff);
+      
+      CHECK (4, 32, 14, bytes[i] == 0xff && bytes[i + 1] == 0xff
+			&& bytes[i + 2] == bytes[2] && bytes[i + 3] == 0);
+                    
+      CHECK (4, 32, 15, bytes[i] == 0 && bytes[i + 1] == 0
+			&& bytes[i + 2] == bytes[2] && bytes[i + 3] == 0xff);
+                    
+      CHECK (1, 8, 16, bytes[i] == bytes[0]);
+
+      CHECK (1, 64, 17, (bytes[i] == 0 || bytes[i] == 0xff)
+			&& bytes[i] == bytes[(i + 8) % idx]);
+    }
+  while (0);
+
+  if (immtype == -1)
+    return -1;
+
+  if (elementwidth)
+    *elementwidth = elsize;
+  
+  if (modconst)
+    {
+      unsigned HOST_WIDE_INT imm = 0;
+
+      /* Un-invert bytes of recognized vector, if necessary.  */
+      if (invmask != 0)
+        for (i = 0; i < idx; i++)
+          bytes[i] ^= invmask;
+
+      if (immtype == 17)
+        {
+          /* FIXME: Broken on 32-bit H_W_I hosts.  */
+          gcc_assert (sizeof (HOST_WIDE_INT) == 8);
+          
+          for (i = 0; i < 8; i++)
+            imm |= (unsigned HOST_WIDE_INT) (bytes[i] ? 0xff : 0)
+                   << (i * BITS_PER_UNIT);
+
+          *modconst = GEN_INT (imm);
+        }
+      else
+        {
+          unsigned HOST_WIDE_INT imm = 0;
+
+          for (i = 0; i < elsize / BITS_PER_UNIT; i++)
+            imm |= (unsigned HOST_WIDE_INT) bytes[i] << (i * BITS_PER_UNIT);
+
+          *modconst = GEN_INT (imm);
+        }
+    }
+  
+  return immtype;
+#undef CHECK
+}
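The recognizer above first flattens the vector constant into a little-endian byte image and then matches it against the variant table. A hedged sketch of that byte-splatting step plus two of the table's patterns (variant 16 and variant 0; names are illustrative, not from the source):

```python
def splat_bytes(elems, elem_size):
    # Flatten integer vector elements into their little-endian byte image,
    # as neon_valid_immediate does before pattern matching.
    out = []
    for e in elems:
        for _ in range(elem_size):
            out.append(e & 0xFF)
            e >>= 8
    return out

def is_vmov_i8(b):
    # Variant 16 from the table: one byte replicated into every lane.
    return all(x == b[0] for x in b)

def is_vmov_i32_variant0(b):
    # Variant 0: each 32-bit lane is 00000000 00000000 00000000 abcdefgh.
    return all(b[i] == b[0] and b[i + 1] == 0 and b[i + 2] == 0
               and b[i + 3] == 0 for i in range(0, len(b), 4))
```

The real function runs all 18 `CHECK` patterns in order and records the first match's variant number and element size.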
+
+/* Return TRUE if rtx X is legal for use as either a Neon VMOV (or, implicitly,
+   VMVN) immediate. Write back width per element to *ELEMENTWIDTH (or zero for
+   float elements), and a modified constant (whatever should be output for a
+   VMOV) in *MODCONST.  */
+
+int
+neon_immediate_valid_for_move (rtx op, enum machine_mode mode,
+			       rtx *modconst, int *elementwidth)
+{
+  rtx tmpconst;
+  int tmpwidth;
+  int retval = neon_valid_immediate (op, mode, 0, &tmpconst, &tmpwidth);
+  
+  if (retval == -1)
+    return 0;
+  
+  if (modconst)
+    *modconst = tmpconst;
+  
+  if (elementwidth)
+    *elementwidth = tmpwidth;
+  
+  return 1;
+}
+
+/* Return TRUE if rtx X is legal for use in a VORR or VBIC instruction.  If
+   the immediate is valid, write a constant suitable for using as an operand
+   to VORR/VBIC/VAND/VORN to *MODCONST and the corresponding element width to
+   *ELEMENTWIDTH. See neon_valid_immediate for description of INVERSE.  */
+
+int
+neon_immediate_valid_for_logic (rtx op, enum machine_mode mode, int inverse,
+				rtx *modconst, int *elementwidth)
+{
+  rtx tmpconst;
+  int tmpwidth;
+  int retval = neon_valid_immediate (op, mode, inverse, &tmpconst, &tmpwidth);
+
+  if (retval < 0 || retval > 5)
+    return 0;
+  
+  if (modconst)
+    *modconst = tmpconst;
+  
+  if (elementwidth)
+    *elementwidth = tmpwidth;
+  
+  return 1;
+}
+
+/* Return a string suitable for output of Neon immediate logic operation
+   MNEM.  */
+
+char *
+neon_output_logic_immediate (const char *mnem, rtx *op2, enum machine_mode mode,
+			     int inverse, int quad)
+{
+  int width, is_valid;
+  static char templ[40];
+  
+  is_valid = neon_immediate_valid_for_logic (*op2, mode, inverse, op2, &width);
+  
+  gcc_assert (is_valid != 0);
+  
+  if (quad)
+    sprintf (templ, "%s.i%d\t%%q0, %%2", mnem, width);
+  else
+    sprintf (templ, "%s.i%d\t%%P0, %%2", mnem, width);
+  
+  return templ;
+}
+
+/* Output a sequence of pairwise operations to implement a reduction.
+   NOTE: We do "too much work" here, because pairwise operations work on two
+   registers-worth of operands in one go. Unfortunately we can't exploit those
+   extra calculations to do the full operation in fewer steps, I don't think.
+   Although all vector elements of the result but the first are ignored, we
+   actually calculate the same result in each of the elements. An alternative
+   such as initially loading a vector with zero to use as each of the second
+   operands would use up an additional register and take an extra instruction,
+   for no particular gain.  */
+
+void
+neon_pairwise_reduce (rtx op0, rtx op1, enum machine_mode mode,
+		      rtx (*reduc) (rtx, rtx, rtx))
+{
+  enum machine_mode inner = GET_MODE_INNER (mode);
+  unsigned int i, parts = GET_MODE_SIZE (mode) / GET_MODE_SIZE (inner);
+  rtx tmpsum = op1;
+  
+  for (i = parts / 2; i >= 1; i /= 2)
+    {
+      rtx dest = (i == 1) ? op0 : gen_reg_rtx (mode);
+      emit_insn (reduc (dest, tmpsum, tmpsum));
+      tmpsum = dest;
+    }
+}
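The halving loop above can be modeled directly: feeding the running value to a pairwise op as both operands means that after log2(n) steps every lane, including lane 0, holds the full reduction. A minimal sketch using pairwise addition as the example op (an assumption of this sketch; any associative pairwise op works the same way):

```python
def vpadd(a, b):
    # Model of a NEON pairwise add: sums of adjacent pairs of A,
    # followed by sums of adjacent pairs of B.
    return ([a[i] + a[i + 1] for i in range(0, len(a), 2)]
            + [b[i] + b[i + 1] for i in range(0, len(b), 2)])

def pairwise_reduce(v):
    # Mirrors neon_pairwise_reduce's loop: i walks parts/2, parts/4, ..., 1,
    # applying the pairwise op to the running value with itself each step.
    i = len(v) // 2
    while i >= 1:
        v = vpadd(v, v)
        i //= 2
    return v
```

This also illustrates the comment's point that every element of the final vector ends up holding the same result, even though only lane 0 is used.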
+
+/* Initialise a vector with non-constant elements.  FIXME: We can do better
+   than the current implementation (building a vector on the stack and then
+   loading it) in many cases.  See rs6000.c.  */
+
+void
+neon_expand_vector_init (rtx target, rtx vals)
+{
+  enum machine_mode mode = GET_MODE (target);
+  enum machine_mode inner = GET_MODE_INNER (mode);
+  unsigned int i, n_elts = GET_MODE_NUNITS (mode);
+  rtx mem;
+
+  gcc_assert (VECTOR_MODE_P (mode));
+
+  mem = assign_stack_temp (mode, GET_MODE_SIZE (mode), 0);
+  for (i = 0; i < n_elts; i++)
+    emit_move_insn (adjust_address_nv (mem, inner, i * GET_MODE_SIZE (inner)),
+                   XVECEXP (vals, 0, i));
+
+  emit_move_insn (target, mem);
+}
+
+/* APPLE LOCAL end v7 support. Merge from Codesourcery */
+
+/* Predicates for `match_operand' and `match_operator'.  */
+
+/* Return nonzero if OP is a valid Cirrus memory address pattern.  */
+int
+cirrus_memory_offset (rtx op)
+{
+  /* Reject eliminable registers.  */
+  if (! (reload_in_progress || reload_completed)
+      && (   reg_mentioned_p (frame_pointer_rtx, op)
+	  || reg_mentioned_p (arg_pointer_rtx, op)
+	  || reg_mentioned_p (virtual_incoming_args_rtx, op)
+	  || reg_mentioned_p (virtual_outgoing_args_rtx, op)
+	  || reg_mentioned_p (virtual_stack_dynamic_rtx, op)
+	  || reg_mentioned_p (virtual_stack_vars_rtx, op)))
+    return 0;
+
+  if (GET_CODE (op) == MEM)
+    {
+      rtx ind;
+
+      ind = XEXP (op, 0);
+
+      /* Match: (mem (reg)).  */
+      if (GET_CODE (ind) == REG)
+	return 1;
+
+      /* Match:
+	 (mem (plus (reg)
 	            (const))).  */
       if (GET_CODE (ind) == PLUS
 	  && GET_CODE (XEXP (ind, 0)) == REG
@@ -6141,8 +7185,12 @@
   return 0;
 }
 
+/* APPLE LOCAL begin v7 support. Merge from mainline */
 /* Return TRUE if OP is a valid coprocessor memory address pattern.
-   WB if true if writeback address modes are allowed.  */
+   WB is true if full writeback address modes are allowed and is false
+   if limited writeback address modes (POST_INC and PRE_DEC) are
+   allowed.  */
+/* APPLE LOCAL end v7 support. Merge from mainline */
 
 int
 arm_coproc_mem_operand (rtx op, bool wb)
@@ -6177,13 +7225,18 @@
   if (GET_CODE (ind) == REG)
     return arm_address_register_rtx_p (ind, 0);
 
-  /* Autoincremment addressing modes.  */
-  if (wb
-      && (GET_CODE (ind) == PRE_INC
-	  || GET_CODE (ind) == POST_INC
-	  || GET_CODE (ind) == PRE_DEC
-	  || GET_CODE (ind) == POST_DEC))
+  /* APPLE LOCAL begin v7 support. Merge from mainline */
+  /* Autoincrement addressing modes.  POST_INC and PRE_DEC are
+     acceptable in any case (subject to verification by
+     arm_address_register_rtx_p).  We need WB to be true to accept
+     PRE_INC and POST_DEC.  */
+  if (GET_CODE (ind) == POST_INC
+      || GET_CODE (ind) == PRE_DEC
+      || (wb
+	  && (GET_CODE (ind) == PRE_INC
+	      || GET_CODE (ind) == POST_DEC)))
     return arm_address_register_rtx_p (XEXP (ind, 0), 0);
+  /* APPLE LOCAL end v7 support. Merge from mainline */
 
   if (wb
       && (GET_CODE (ind) == POST_MODIFY || GET_CODE (ind) == PRE_MODIFY)
@@ -6207,6 +7260,112 @@
   return FALSE;
 }
 
+/* APPLE LOCAL begin v7 support. Merge from Codesourcery */
+/* Return TRUE if OP is a memory operand which we can load or store a vector
+   to/from. If CORE is true, we're moving from ARM registers not Neon
+   registers.  */
+int
+neon_vector_mem_operand (rtx op, bool core)
+{
+  rtx ind;
+
+  /* Reject eliminable registers.  */
+  if (! (reload_in_progress || reload_completed)
+      && (   reg_mentioned_p (frame_pointer_rtx, op)
+	  || reg_mentioned_p (arg_pointer_rtx, op)
+	  || reg_mentioned_p (virtual_incoming_args_rtx, op)
+	  || reg_mentioned_p (virtual_outgoing_args_rtx, op)
+	  || reg_mentioned_p (virtual_stack_dynamic_rtx, op)
+	  || reg_mentioned_p (virtual_stack_vars_rtx, op)))
+    return FALSE;
+
+  /* Constants are converted into offsets from labels.  */
+  if (GET_CODE (op) != MEM)
+    return FALSE;
+
+  ind = XEXP (op, 0);
+
+  if (reload_completed
+      && (GET_CODE (ind) == LABEL_REF
+	  || (GET_CODE (ind) == CONST
+	      && GET_CODE (XEXP (ind, 0)) == PLUS
+	      && GET_CODE (XEXP (XEXP (ind, 0), 0)) == LABEL_REF
+	      && GET_CODE (XEXP (XEXP (ind, 0), 1)) == CONST_INT)))
+    return TRUE;
+
+  /* Match: (mem (reg)).  */
+  if (GET_CODE (ind) == REG)
+    return arm_address_register_rtx_p (ind, 0);
+
+  /* Allow post-increment with Neon registers.  */
+  if (!core && GET_CODE (ind) == POST_INC)
+    return arm_address_register_rtx_p (XEXP (ind, 0), 0);
+
+#if 0
+  /* FIXME: We can support this too if we use VLD1/VST1.  */
+  if (!core
+      && GET_CODE (ind) == POST_MODIFY
+      && arm_address_register_rtx_p (XEXP (ind, 0), 0)
+      && GET_CODE (XEXP (ind, 1)) == PLUS
+      && rtx_equal_p (XEXP (XEXP (ind, 1), 0), XEXP (ind, 0)))
+    ind = XEXP (ind, 1);
+#endif
+
+  /* Match:
+     (plus (reg)
+          (const)).  */
+  if (!core
+      && GET_CODE (ind) == PLUS
+      && GET_CODE (XEXP (ind, 0)) == REG
+      && REG_MODE_OK_FOR_BASE_P (XEXP (ind, 0), VOIDmode)
+      /* APPLE LOCAL begin 6160917 */
+      /* Make call consistent with the ones used in neon_reload_{in,out} */
+      && arm_legitimate_index_p (GET_MODE (op), XEXP (ind, 1), SET, 0))
+      /* APPLE LOCAL end 6160917 */
+    return TRUE;
+
+  return FALSE;
+}
+
+/* Return TRUE if OP is a mem suitable for loading/storing a Neon struct
+   type.  */
+int
+neon_struct_mem_operand (rtx op)
+{
+  rtx ind;
+
+  /* Reject eliminable registers.  */
+  if (! (reload_in_progress || reload_completed)
+      && (   reg_mentioned_p (frame_pointer_rtx, op)
+	  || reg_mentioned_p (arg_pointer_rtx, op)
+	  || reg_mentioned_p (virtual_incoming_args_rtx, op)
+	  || reg_mentioned_p (virtual_outgoing_args_rtx, op)
+	  || reg_mentioned_p (virtual_stack_dynamic_rtx, op)
+	  || reg_mentioned_p (virtual_stack_vars_rtx, op)))
+    return FALSE;
+
+  /* Constants are converted into offsets from labels.  */
+  if (GET_CODE (op) != MEM)
+    return FALSE;
+
+  ind = XEXP (op, 0);
+
+  if (reload_completed
+      && (GET_CODE (ind) == LABEL_REF
+	  || (GET_CODE (ind) == CONST
+	      && GET_CODE (XEXP (ind, 0)) == PLUS
+	      && GET_CODE (XEXP (XEXP (ind, 0), 0)) == LABEL_REF
+	      && GET_CODE (XEXP (XEXP (ind, 0), 1)) == CONST_INT)))
+    return TRUE;
+
+  /* Match: (mem (reg)).  */
+  if (GET_CODE (ind) == REG)
+    return arm_address_register_rtx_p (ind, 0);
+
+  return FALSE;
+}
+
+/* APPLE LOCAL end v7 support. Merge from Codesourcery */
 /* Return true if X is a register that will be eliminated later on.  */
 int
 arm_eliminable_register (rtx x)
@@ -6223,6 +7382,14 @@
 enum reg_class
 coproc_secondary_reload_class (enum machine_mode mode, rtx x, bool wb)
 {
+  /* APPLE LOCAL begin v7 support. Merge from Codesourcery */
+  if (TARGET_NEON
+      && (GET_MODE_CLASS (mode) == MODE_VECTOR_INT
+          || GET_MODE_CLASS (mode) == MODE_VECTOR_FLOAT)
+      && neon_vector_mem_operand (x, FALSE))
+     return NO_REGS;
+
+  /* APPLE LOCAL end v7 support. Merge from Codesourcery */
   if (arm_coproc_mem_operand (x, wb) || s_register_operand (x, mode))
     return NO_REGS;
 
@@ -6786,10 +7953,12 @@
   if (unsorted_offsets[order[0]] == 0)
     return 1; /* ldmia */
 
-  if (unsorted_offsets[order[0]] == 4)
+  /* APPLE LOCAL v7 support. Merge from mainline */
+  if (TARGET_ARM && unsorted_offsets[order[0]] == 4)
     return 2; /* ldmib */
 
-  if (unsorted_offsets[order[nops - 1]] == 0)
+  /* APPLE LOCAL v7 support. Merge from mainline */
+  if (TARGET_ARM && unsorted_offsets[order[nops - 1]] == 0)
     return 3; /* ldmda */
 
   if (unsorted_offsets[order[nops - 1]] == -4)
@@ -6845,19 +8014,23 @@
   switch (load_multiple_sequence (operands, nops, regs, &base_reg, &offset))
     {
     case 1:
-      strcpy (buf, "ldm%?ia\t");
+      /* APPLE LOCAL v7 support. Merge from mainline */
+      strcpy (buf, "ldm%(ia%)\t");
       break;
 
     case 2:
-      strcpy (buf, "ldm%?ib\t");
+      /* APPLE LOCAL v7 support. Merge from mainline */
+      strcpy (buf, "ldm%(ib%)\t");
       break;
 
     case 3:
-      strcpy (buf, "ldm%?da\t");
+      /* APPLE LOCAL v7 support. Merge from mainline */
+      strcpy (buf, "ldm%(da%)\t");
       break;
 
     case 4:
-      strcpy (buf, "ldm%?db\t");
+      /* APPLE LOCAL v7 support. Merge from mainline */
+      strcpy (buf, "ldm%(db%)\t");
       break;
 
     case 5:
@@ -6871,7 +8044,8 @@
 		 (long) -offset);
       output_asm_insn (buf, operands);
       base_reg = regs[0];
-      strcpy (buf, "ldm%?ia\t");
+      /* APPLE LOCAL v7 support. Merge from mainline */
+      strcpy (buf, "ldm%(ia%)\t");
       break;
 
     default:
@@ -7034,19 +8208,23 @@
   switch (store_multiple_sequence (operands, nops, regs, &base_reg, &offset))
     {
     case 1:
-      strcpy (buf, "stm%?ia\t");
+      /* APPLE LOCAL v7 support. Merge from mainline */
+      strcpy (buf, "stm%(ia%)\t");
       break;
 
     case 2:
-      strcpy (buf, "stm%?ib\t");
+      /* APPLE LOCAL v7 support. Merge from mainline */
+      strcpy (buf, "stm%(ib%)\t");
       break;
 
     case 3:
-      strcpy (buf, "stm%?da\t");
+      /* APPLE LOCAL v7 support. Merge from mainline */
+      strcpy (buf, "stm%(da%)\t");
       break;
 
     case 4:
-      strcpy (buf, "stm%?db\t");
+      /* APPLE LOCAL v7 support. Merge from mainline */
+      strcpy (buf, "stm%(db%)\t");
       break;
 
     default:
@@ -7623,7 +8801,8 @@
   /* An operation (on Thumb) where we want to test for a single bit.
      This is done by shifting that bit up into the top bit of a
      scratch register; we can then branch on the sign bit.  */
-  if (TARGET_THUMB
+  /* APPLE LOCAL v7 support. Merge from mainline */
+  if (TARGET_THUMB1
       && GET_MODE (x) == SImode
       && (op == EQ || op == NE)
       && GET_CODE (x) == ZERO_EXTRACT
@@ -7634,6 +8813,8 @@
      V flag is not set correctly, so we can only use comparisons where
      this doesn't matter.  (For LT and GE we can use "mi" and "pl"
      instead.)  */
+  /* APPLE LOCAL v7 support. Merge from mainline */
+  /* ??? Does the ZERO_EXTRACT case really apply to thumb2?  */
   if (GET_MODE (x) == SImode
       && y == const0_rtx
       && (op == EQ || op == NE || op == LT || op == GE)
@@ -7644,7 +8825,8 @@
 	  || GET_CODE (x) == LSHIFTRT
 	  || GET_CODE (x) == ASHIFT || GET_CODE (x) == ASHIFTRT
 	  || GET_CODE (x) == ROTATERT
-	  || (TARGET_ARM && GET_CODE (x) == ZERO_EXTRACT)))
+          /* APPLE LOCAL v7 support. Merge from mainline */
+	  || (TARGET_32BIT && GET_CODE (x) == ZERO_EXTRACT)))
     return CC_NOOVmode;
 
   if (GET_MODE (x) == QImode && (op == EQ || op == NE))
@@ -8190,6 +9372,17 @@
 arm_adjust_insn_length (rtx insn, int *length)
 {
   rtx body = PATTERN (insn);
+  /* APPLE LOCAL begin v7 support. Merge from mainline */
+
+  /* Add two bytes to the length of conditionally executed Thumb-2
+     instructions for the IT instruction.  */
+  if (TARGET_THUMB2 && GET_CODE (PATTERN (insn)) == COND_EXEC)
+    {
+      *length += 2;
+      return;
+    }
+
+  /* APPLE LOCAL end v7 support. Merge from mainline */
   if (GET_CODE (body) == UNSPEC_VOLATILE
       && (int) XEXP (body, 1) == VUNSPEC_POOL_STRING)
     {
@@ -8198,7 +9391,7 @@
       len = (len + 3) & ~3;
       *length = len;
     }
-  if (GET_CODE (body) == ADDR_DIFF_VEC)
+  if (!TARGET_THUMB2 && GET_CODE (body) == ADDR_DIFF_VEC)
     {
       /* The obvious sizeof(elt)*nelts, plus sizeof(elt) for the
 	 count.  */
@@ -8218,13 +9411,17 @@
       *length = len;
     }
   if (TARGET_THUMB
+      /* APPLE LOCAL 6279481 */
+      && !TARGET_32BIT
       && GET_CODE (body) == UNSPEC_VOLATILE
       && (int) XEXP (body, 1) == VUNSPEC_EPILOGUE)
     {
       *length = handle_thumb_unexpanded_epilogue (false);
     }
-  if (INSN_CODE (insn) == CODE_FOR_adjustable_thumb_zero_extendhisi2
-      || INSN_CODE (insn) == CODE_FOR_adjustable_thumb_zero_extendhisi2_v6)
+  /* APPLE LOCAL begin v7 support. Merge from mainline */
+  if (INSN_CODE (insn) == CODE_FOR_adjustable_thumb1_zero_extendhisi2
+      || INSN_CODE (insn) == CODE_FOR_adjustable_thumb1_zero_extendhisi2_v6)
+  /* APPLE LOCAL end v7 support. Merge from mainline */
     {
       rtx mem = XEXP (XEXP (body, 1), 0);
       if (GET_CODE (mem) == REG || GET_CODE (mem) == SUBREG)
@@ -8243,8 +9440,10 @@
 	    *length = 2;
 	}
     }
-  if (INSN_CODE (insn) == CODE_FOR_thumb_extendhisi2
-      || INSN_CODE (insn) == CODE_FOR_adjustable_thumb_extendhisi2_insn_v6)
+  /* APPLE LOCAL begin v7 support. Merge from mainline */
+  if (INSN_CODE (insn) == CODE_FOR_thumb1_extendhisi2
+      || INSN_CODE (insn) == CODE_FOR_adjustable_thumb1_extendhisi2_insn_v6)
+  /* APPLE LOCAL end v7 support. Merge from mainline */
     {
       rtx mem = XEXP (XEXP (XVECEXP (body, 0, 0), 1), 0);
       if (GET_CODE (mem) == REG || GET_CODE (mem) == SUBREG)
@@ -8268,7 +9467,8 @@
 	    }
 	}
     }
-  if (INSN_CODE (insn) == CODE_FOR_adjustable_thumb_extendqisi2)
+  /* APPLE LOCAL v7 support. Merge from mainline */
+  if (INSN_CODE (insn) == CODE_FOR_adjustable_thumb1_extendqisi2)
     {
       rtx mem = XEXP (XEXP (body, 1), 0);
       if (GET_CODE (mem) == REG || GET_CODE (mem) == SUBREG)
@@ -8313,7 +9513,8 @@
 	    *length = 4;
 	}
     }
-  if (INSN_CODE (insn) == CODE_FOR_adjustable_thumb_extendqisi2_v6)
+  /* APPLE LOCAL v7 support. Merge from mainline */
+  if (INSN_CODE (insn) == CODE_FOR_adjustable_thumb1_extendqisi2_v6)
     {
       rtx mem = XEXP (XEXP (body, 1), 0);
       if (GET_CODE (mem) == REG || GET_CODE (mem) == SUBREG)
@@ -8358,7 +9559,8 @@
 	    *length = 4;
 	}
     }
-  if (INSN_CODE (insn) == CODE_FOR_adjustable_thumb_movhi_insn)
+  /* APPLE LOCAL v7 support. Merge from mainline */
+  if (INSN_CODE (insn) == CODE_FOR_adjustable_thumb1_movhi_insn)
     {
       rtx mem = XEXP (body, 1);
       if (GET_CODE (mem) != MEM)
@@ -8370,7 +9572,8 @@
       else
 	*length = 2;
     }
-  if (INSN_CODE (insn) == CODE_FOR_adjustable_thumb_movdi_insn)
+  /* APPLE LOCAL v7 support. Merge from mainline */
+  if (INSN_CODE (insn) == CODE_FOR_adjustable_thumb1_movdi_insn)
     {
       rtx op0 = XEXP (body, 0);
       rtx op1 = XEXP (body, 1);
@@ -8474,8 +9677,31 @@
     {
       rtx body = PATTERN (insn);
       int elt = GET_CODE (body) == ADDR_DIFF_VEC ? 1 : 0;
-
-      return GET_MODE_SIZE (GET_MODE (body)) * XVECLEN (body, elt);
+      /* APPLE LOCAL begin v7 support. Merge from mainline */
+      HOST_WIDE_INT size;
+      HOST_WIDE_INT modesize;
+
+      modesize = GET_MODE_SIZE (GET_MODE (body));
+      size = modesize * XVECLEN (body, elt);
+      switch (modesize)
+	{
+	case 1:
+	  /* Round up size of TBB table to a halfword boundary.  */
+	  size = (size + 1) & ~(HOST_WIDE_INT)1;
+	  break;
+	case 2:
+	  /* No padding necessary for TBH.  */
+	  break;
+	case 4:
+	  /* Add two bytes for alignment on Thumb.  */
+	  if (TARGET_THUMB)
+	    size += 2;
+	  break;
+	default:
+	  gcc_unreachable ();
+	}
+      return size;
+      /* APPLE LOCAL end v7 support. Merge from mainline */
     }
 
   return 0;
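The switch above sizes dispatch tables by entry width: TBB tables (1-byte entries) are rounded up to a halfword, TBH tables (2-byte entries) need no padding, and word-sized tables get 2 bytes of alignment padding on Thumb. A sketch of that sizing rule as a standalone function (names are illustrative):

```c
#include <assert.h>

/* Bytes occupied by a dispatch table of 'nelts' entries of
   'modesize' bytes each, following the padding rules above.  */
static long
jump_table_bytes (int modesize, int nelts, int thumb)
{
  long size = (long) modesize * nelts;
  switch (modesize)
    {
    case 1:                         /* TBB: round up to a halfword */
      size = (size + 1) & ~1L;
      break;
    case 2:                         /* TBH: no padding needed */
      break;
    case 4:                         /* word table: align on Thumb */
      if (thumb)
        size += 2;
      break;
    }
  return size;
}
```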
@@ -8941,6 +10167,14 @@
 	      break;
 
 #endif
+/* APPLE LOCAL begin v7 support. Merge from mainline */
+#ifdef HAVE_consttable_16
+	    case 16:
+              scan = emit_insn_after (gen_consttable_16 (mp->value), scan);
+              break;
+
+#endif
+/* APPLE LOCAL end v7 support. Merge from mainline */
 	    default:
 	      gcc_unreachable ();
 	    }
@@ -9637,29 +10871,38 @@
 
   fputc ('\t', stream);
   asm_fprintf (stream, instr, reg);
-  fputs (", {", stream);
+  /* APPLE LOCAL v7 support. Merge from mainline */
+  fputc ('{', stream);
 
   for (i = 0; i <= LAST_ARM_REGNUM; i++)
     if (mask & (1 << i))
       {
-	if (not_first)
-	  fprintf (stream, ", ");
-
-	asm_fprintf (stream, "%r", i);
-	not_first = TRUE;
+        /* APPLE LOCAL begin v7 support. Merge from Codesourcery */
+        if (not_first)
+          fprintf (stream, ", ");
+
+        asm_fprintf (stream, "%r", i);
+        not_first = TRUE;
+        /* APPLE LOCAL end v7 support. Merge from Codesourcery */
       }
 
   fprintf (stream, "}\n");
 }
 
 
-/* Output a FLDMX instruction to STREAM.
+/* APPLE LOCAL begin v7 support. Merge from mainline */
+/* Output a FLDMD instruction to STREAM.
   BASE is the register containing the address.
    REG and COUNT specify the register range.
-   Extra registers may be added to avoid hardware bugs.  */
+   Extra registers may be added to avoid hardware bugs.
+
+   We output FLDMD even for ARMv5 VFP implementations.  Although
+   FLDMD is technically not supported until ARMv6, it is believed
+   that all VFP implementations support its use in this context.  */
 
 static void
-arm_output_fldmx (FILE * stream, unsigned int base, int reg, int count)
+vfp_output_fldmd (FILE * stream, unsigned int base, int reg, int count)
+/* APPLE LOCAL end v7 support. Merge from mainline */
 {
   int i;
 
@@ -9671,8 +10914,19 @@
       count++;
     }
 
+  /* APPLE LOCAL begin v7 support. Merge from mainline */
+  /* FLDMD may not load more than 16 doubleword registers at a time. Split the
+     load into multiple parts if we have to handle more than 16 registers.  */
+  if (count > 16)
+    {
+      vfp_output_fldmd (stream, base, reg, 16);
+      vfp_output_fldmd (stream, base, reg + 16, count - 16);
+      return;
+    }
+
   fputc ('\t', stream);
-  asm_fprintf (stream, "fldmfdx\t%r!, {", base);
+  asm_fprintf (stream, "fldmfdd\t%r!, {", base);
+  /* APPLE LOCAL end v7 support. Merge from mainline */
 
   for (i = reg; i < reg + count; i++)
     {
@@ -9688,14 +10942,16 @@
 /* Output the assembly for a store multiple.  */
 
 const char *
-vfp_output_fstmx (rtx * operands)
+/* APPLE LOCAL v7 support. Merge from mainline */
+vfp_output_fstmd (rtx * operands)
 {
   char pattern[100];
   int p;
   int base;
   int i;
 
-  strcpy (pattern, "fstmfdx\t%m0!, {%P1");
+  /* APPLE LOCAL v7 support. Merge from mainline */
+  strcpy (pattern, "fstmfdd\t%m0!, {%P1");
   p = strlen (pattern);
 
   gcc_assert (GET_CODE (operands[1]) == REG);
@@ -9716,7 +10972,8 @@
    number of bytes pushed.  */
 
 static int
-vfp_emit_fstmx (int base_reg, int count)
+/* APPLE LOCAL v7 support. Merge from mainline */
+vfp_emit_fstmd (int base_reg, int count)
 {
   rtx par;
   rtx dwarf;
@@ -9733,10 +10990,21 @@
       count++;
     }
 
-  /* ??? The frame layout is implementation defined.  We describe
-     standard format 1 (equivalent to a FSTMD insn and unused pad word).
-     We really need some way of representing the whole block so that the
-     unwinder can figure it out at runtime.  */
+  /* APPLE LOCAL begin v7 support. Merge from mainline */
+  /* FSTMD may not store more than 16 doubleword registers at once.  Split
+     larger stores into multiple parts (up to a maximum of two, in
+     practice).  */
+  if (count > 16)
+    {
+      int saved;
+      /* NOTE: base_reg is an internal register number, so each D register
+         counts as 2.  */
+      saved = vfp_emit_fstmd (base_reg + 32, count - 16);
+      saved += vfp_emit_fstmd (base_reg, 16);
+      return saved;
+    }
+
+  /* APPLE LOCAL end v7 support. Merge from mainline */
   par = gen_rtx_PARALLEL (VOIDmode, rtvec_alloc (count));
   dwarf = gen_rtx_SEQUENCE (VOIDmode, rtvec_alloc (count + 1));
 
@@ -9753,7 +11021,8 @@
 				   UNSPEC_PUSH_MULT));
 
   tmp = gen_rtx_SET (VOIDmode, stack_pointer_rtx,
-		     plus_constant (stack_pointer_rtx, -(count * 8 + 4)));
+  /* APPLE LOCAL v7 support. Merge from mainline */
+		     plus_constant (stack_pointer_rtx, -(count * 8)));
   RTX_FRAME_RELATED_P (tmp) = 1;
   XVECEXP (dwarf, 0, 0) = tmp;
 
@@ -9783,7 +11052,8 @@
 				       REG_NOTES (par));
   RTX_FRAME_RELATED_P (par) = 1;
 
-  return count * 8 + 4;
+  /* APPLE LOCAL v7 support. Merge from mainline */
+  return count * 8;
 }
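vfp_emit_fstmd rounds an odd register count up (the hardware-bug workaround above) and recursively splits stores of more than 16 D-registers. A sketch of just the resulting byte-count arithmetic, under those two rules:

```c
#include <assert.h>

/* Bytes pushed by an FSTMD sequence saving 'count' D registers:
   odd counts are rounded up to even, and runs longer than 16
   registers are split across multiple FSTMD instructions.  */
static int
fstmd_bytes_pushed (int count)
{
  if (count & 1)
    count++;                     /* workaround: keep the count even */
  if (count > 16)
    return fstmd_bytes_pushed (count - 16) + fstmd_bytes_pushed (16);
  return count * 8;              /* 8 bytes per D register */
}
```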
 
 
@@ -9862,7 +11132,8 @@
   ops[1] = gen_rtx_REG (SImode, 1 + arm_reg0);
   ops[2] = gen_rtx_REG (SImode, 2 + arm_reg0);
 
-  output_asm_insn ("stm%?fd\t%|sp!, {%0, %1, %2}", ops);
+  /* APPLE LOCAL v7 support. Merge from mainline */
+  output_asm_insn ("stm%(fd%)\t%|sp!, {%0, %1, %2}", ops);
   output_asm_insn ("ldf%?e\t%0, [%|sp], #12", operands);
 
   return "";
@@ -9884,7 +11155,8 @@
   ops[2] = gen_rtx_REG (SImode, 2 + arm_reg0);
 
   output_asm_insn ("stf%?e\t%1, [%|sp, #-12]!", operands);
-  output_asm_insn ("ldm%?fd\t%|sp!, {%0, %1, %2}", ops);
+  /* APPLE LOCAL v7 support. Merge from mainline */
+  output_asm_insn ("ldm%(fd%)\t%|sp!, {%0, %1, %2}", ops);
   return "";
 }
 
@@ -9936,7 +11208,8 @@
 
   ops[0] = gen_rtx_REG (SImode, arm_reg0);
   ops[1] = gen_rtx_REG (SImode, 1 + arm_reg0);
-  output_asm_insn ("stm%?fd\t%|sp!, {%0, %1}", ops);
+  /* APPLE LOCAL v7 support. Merge from mainline */
+  output_asm_insn ("stm%(fd%)\t%|sp!, {%0, %1}", ops);
   output_asm_insn ("ldf%?d\t%0, [%|sp], #8", operands);
   return "";
 }
@@ -9955,7 +11228,8 @@
   ops[0] = gen_rtx_REG (SImode, arm_reg0);
   ops[1] = gen_rtx_REG (SImode, 1 + arm_reg0);
   output_asm_insn ("stf%?d\t%1, [%|sp, #-8]!", operands);
-  output_asm_insn ("ldm%?fd\t%|sp!, {%0, %1}", ops);
+  /* APPLE LOCAL v7 support. Merge from mainline */
+  output_asm_insn ("ldm%(fd%)\t%|sp!, {%0, %1}", ops);
   return "";
 }
 
@@ -9980,25 +11254,34 @@
       switch (GET_CODE (XEXP (operands[1], 0)))
 	{
 	case REG:
-	  output_asm_insn ("ldm%?ia\t%m1, %M0", operands);
+          /* APPLE LOCAL v7 support. Merge from mainline */
+	  output_asm_insn ("ldm%(ia%)\t%m1, %M0", operands);
 	  break;
 
 	case PRE_INC:
 	  gcc_assert (TARGET_LDRD);
-	  output_asm_insn ("ldr%?d\t%0, [%m1, #8]!", operands);
+          /* APPLE LOCAL v7 support. Merge from mainline */
+	  output_asm_insn ("ldr%(d%)\t%0, [%m1, #8]!", operands);
 	  break;
 
 	case PRE_DEC:
-	  output_asm_insn ("ldm%?db\t%m1!, %M0", operands);
+          /* APPLE LOCAL begin v7 support. Merge from mainline */
+	  if (TARGET_LDRD)
+	    output_asm_insn ("ldr%(d%)\t%0, [%m1, #-8]!", operands);
+	  else
+	    output_asm_insn ("ldm%(db%)\t%m1!, %M0", operands);
+          /* APPLE LOCAL end v7 support. Merge from mainline */
 	  break;
 
 	case POST_INC:
-	  output_asm_insn ("ldm%?ia\t%m1!, %M0", operands);
+          /* APPLE LOCAL v7 support. Merge from mainline */
+	  output_asm_insn ("ldm%(ia%)\t%m1!, %M0", operands);
 	  break;
 
 	case POST_DEC:
 	  gcc_assert (TARGET_LDRD);
-	  output_asm_insn ("ldr%?d\t%0, [%m1], #-8", operands);
+          /* APPLE LOCAL v7 support. Merge from mainline */
+	  output_asm_insn ("ldr%(d%)\t%0, [%m1], #-8", operands);
 	  break;
 
 	case PRE_MODIFY:
@@ -10007,19 +11290,20 @@
 	  otherops[1] = XEXP (XEXP (XEXP (operands[1], 0), 1), 0);
 	  otherops[2] = XEXP (XEXP (XEXP (operands[1], 0), 1), 1);
 
+          /* APPLE LOCAL begin v7 support. Merge from mainline */
 	  if (GET_CODE (XEXP (operands[1], 0)) == PRE_MODIFY)
 	    {
 	      if (reg_overlap_mentioned_p (otherops[0], otherops[2]))
 		{
 		  /* Registers overlap so split out the increment.  */
 		  output_asm_insn ("add%?\t%1, %1, %2", otherops);
-		  output_asm_insn ("ldr%?d\t%0, [%1] @split", otherops);
+		  output_asm_insn ("ldr%(d%)\t%0, [%1] @split", otherops);
 		}
 	      else
 		{
-		  /* IWMMXT allows offsets larger than ldrd can handle,
+		  /* IWMMXT allows offsets larger than ARM ldrd can handle,
 		     fix these up with a pair of ldr.  */
-		  if (GET_CODE (otherops[2]) == CONST_INT
+		  if (TARGET_ARM && GET_CODE (otherops[2]) == CONST_INT
 		      && (INTVAL(otherops[2]) <= -256
 			  || INTVAL(otherops[2]) >= 256))
 		    {
@@ -10028,14 +11312,14 @@
 		      output_asm_insn ("ldr%?\t%0, [%1, #4]", otherops);
 		    }
 		  else
-		    output_asm_insn ("ldr%?d\t%0, [%1, %2]!", otherops);
+		    output_asm_insn ("ldr%(d%)\t%0, [%1, %2]!", otherops);
 		}
 	    }
 	  else
 	    {
-	      /* IWMMXT allows offsets larger than ldrd can handle,
+	      /* IWMMXT allows offsets larger than ARM ldrd can handle,
 		 fix these up with a pair of ldr.  */
-	      if (GET_CODE (otherops[2]) == CONST_INT
+	      if (TARGET_ARM && GET_CODE (otherops[2]) == CONST_INT
 		  && (INTVAL(otherops[2]) <= -256
 		      || INTVAL(otherops[2]) >= 256))
 		{
@@ -10046,16 +11330,20 @@
 		}
 	      else
 		/* We only allow constant increments, so this is safe.  */
-		output_asm_insn ("ldr%?d\t%0, [%1], %2", otherops);
+		output_asm_insn ("ldr%(d%)\t%0, [%1], %2", otherops);
 	    }
+          /* APPLE LOCAL end v7 support. Merge from mainline */
 	  break;
 
 	case LABEL_REF:
 	case CONST:
 	  output_asm_insn ("adr%?\t%0, %1", operands);
-	  output_asm_insn ("ldm%?ia\t%0, %M0", operands);
+          /* APPLE LOCAL v7 support. Merge from mainline */
+	  output_asm_insn ("ldm%(ia%)\t%0, %M0", operands);
 	  break;
 
+          /* APPLE LOCAL v7 support. Merge from mainline */
+          /* ??? This needs checking for thumb2.  */
 	default:
 	  if (arm_add_operand (XEXP (XEXP (operands[1], 0), 1),
 			       GET_MODE (XEXP (XEXP (operands[1], 0), 1))))
@@ -10068,18 +11356,24 @@
 		{
 		  if (GET_CODE (otherops[2]) == CONST_INT)
 		    {
+                      /* APPLE LOCAL begin v7 support. Merge from mainline */
 		      switch ((int) INTVAL (otherops[2]))
 			{
 			case -8:
-			  output_asm_insn ("ldm%?db\t%1, %M0", otherops);
+			  output_asm_insn ("ldm%(db%)\t%1, %M0", otherops);
 			  return "";
 			case -4:
-			  output_asm_insn ("ldm%?da\t%1, %M0", otherops);
+			  if (TARGET_THUMB2)
+			    break;
+			  output_asm_insn ("ldm%(da%)\t%1, %M0", otherops);
 			  return "";
 			case 4:
-			  output_asm_insn ("ldm%?ib\t%1, %M0", otherops);
+			  if (TARGET_THUMB2)
+			    break;
+			  output_asm_insn ("ldm%(ib%)\t%1, %M0", otherops);
 			  return "";
 			}
+                      /* APPLE LOCAL end v7 support. Merge from mainline */
 		    }
 		  if (TARGET_LDRD
 		      && (GET_CODE (otherops[2]) == REG
@@ -10100,11 +11394,13 @@
 		      if (reg_overlap_mentioned_p (otherops[0], otherops[2]))
 			{
 			  output_asm_insn ("add%?\t%1, %1, %2", otherops);
-			  output_asm_insn ("ldr%?d\t%0, [%1]",
+                          /* APPLE LOCAL v7 support. Merge from mainline */
+			  output_asm_insn ("ldr%(d%)\t%0, [%1]",
 					   otherops);
 			}
 		      else
-			output_asm_insn ("ldr%?d\t%0, [%1, %2]", otherops);
+                        /* APPLE LOCAL v7 support. Merge from mainline */
+			output_asm_insn ("ldr%(d%)\t%0, [%1, %2]", otherops);
 		      return "";
 		    }
 
@@ -10121,7 +11417,8 @@
 	      else
 		output_asm_insn ("sub%?\t%0, %1, %2", otherops);
 
-	      return "ldm%?ia\t%0, %M0";
+              /* APPLE LOCAL v7 support. Merge from mainline */
+	      return "ldm%(ia%)\t%0, %M0";
 	    }
 	  else
 	    {
@@ -10149,25 +11446,34 @@
       switch (GET_CODE (XEXP (operands[0], 0)))
         {
 	case REG:
-	  output_asm_insn ("stm%?ia\t%m0, %M1", operands);
+          /* APPLE LOCAL v7 support. Merge from mainline */
+	  output_asm_insn ("stm%(ia%)\t%m0, %M1", operands);
 	  break;
 
         case PRE_INC:
 	  gcc_assert (TARGET_LDRD);
-	  output_asm_insn ("str%?d\t%1, [%m0, #8]!", operands);
+          /* APPLE LOCAL v7 support. Merge from mainline */
+	  output_asm_insn ("str%(d%)\t%1, [%m0, #8]!", operands);
 	  break;
 
         case PRE_DEC:
-	  output_asm_insn ("stm%?db\t%m0!, %M1", operands);
+          /* APPLE LOCAL begin v7 support. Merge from mainline */
+	  if (TARGET_LDRD)
+	    output_asm_insn ("str%(d%)\t%1, [%m0, #-8]!", operands);
+	  else
+	    output_asm_insn ("stm%(db%)\t%m0!, %M1", operands);
+          /* APPLE LOCAL end v7 support. Merge from mainline */
 	  break;
 
         case POST_INC:
-	  output_asm_insn ("stm%?ia\t%m0!, %M1", operands);
+          /* APPLE LOCAL v7 support. Merge from mainline */
+	  output_asm_insn ("stm%(ia%)\t%m0!, %M1", operands);
 	  break;
 
         case POST_DEC:
 	  gcc_assert (TARGET_LDRD);
-	  output_asm_insn ("str%?d\t%1, [%m0], #-8", operands);
+          /* APPLE LOCAL v7 support. Merge from mainline */
+	  output_asm_insn ("str%(d%)\t%1, [%m0], #-8", operands);
 	  break;
 
 	case PRE_MODIFY:
@@ -10176,9 +11482,11 @@
 	  otherops[1] = XEXP (XEXP (XEXP (operands[0], 0), 1), 0);
 	  otherops[2] = XEXP (XEXP (XEXP (operands[0], 0), 1), 1);
 
-	  /* IWMMXT allows offsets larger than ldrd can handle,
+          /* APPLE LOCAL begin v7 support. Merge from mainline */
+	  /* IWMMXT allows offsets larger than ARM ldrd can handle,
 	     fix these up with a pair of ldr.  */
-	  if (GET_CODE (otherops[2]) == CONST_INT
+	  if (TARGET_ARM && GET_CODE (otherops[2]) == CONST_INT
+          /* APPLE LOCAL end v7 support. Merge from mainline */
 	      && (INTVAL(otherops[2]) <= -256
 		  || INTVAL(otherops[2]) >= 256))
 	    {
@@ -10199,29 +11507,37 @@
 		}
 	    }
 	  else if (GET_CODE (XEXP (operands[0], 0)) == PRE_MODIFY)
-	    output_asm_insn ("str%?d\t%0, [%1, %2]!", otherops);
+            /* APPLE LOCAL v7 support. Merge from mainline */
+	    output_asm_insn ("str%(d%)\t%0, [%1, %2]!", otherops);
 	  else
-	    output_asm_insn ("str%?d\t%0, [%1], %2", otherops);
+            /* APPLE LOCAL v7 support. Merge from mainline */
+	    output_asm_insn ("str%(d%)\t%0, [%1], %2", otherops);
 	  break;
 
 	case PLUS:
 	  otherops[2] = XEXP (XEXP (operands[0], 0), 1);
 	  if (GET_CODE (otherops[2]) == CONST_INT)
 	    {
+              /* APPLE LOCAL begin v7 support. Merge from mainline */
 	      switch ((int) INTVAL (XEXP (XEXP (operands[0], 0), 1)))
 		{
 		case -8:
-		  output_asm_insn ("stm%?db\t%m0, %M1", operands);
+		  output_asm_insn ("stm%(db%)\t%m0, %M1", operands);
 		  return "";
 
 		case -4:
-		  output_asm_insn ("stm%?da\t%m0, %M1", operands);
+		  if (TARGET_THUMB2)
+		    break;
+		  output_asm_insn ("stm%(da%)\t%m0, %M1", operands);
 		  return "";
 
 		case 4:
-		  output_asm_insn ("stm%?ib\t%m0, %M1", operands);
+		  if (TARGET_THUMB2)
+		    break;
+		  output_asm_insn ("stm%(ib%)\t%m0, %M1", operands);
 		  return "";
 		}
+              /* APPLE LOCAL end v7 support. Merge from mainline */
 	    }
 	  if (TARGET_LDRD
 	      && (GET_CODE (otherops[2]) == REG
@@ -10231,7 +11547,8 @@
 	    {
 	      otherops[0] = operands[1];
 	      otherops[1] = XEXP (XEXP (operands[0], 0), 0);
-	      output_asm_insn ("str%?d\t%0, [%1, %2]", otherops);
+              /* APPLE LOCAL v7 support. Merge from mainline */
+	      output_asm_insn ("str%(d%)\t%0, [%1, %2]", otherops);
 	      return "";
 	    }
 	  /* Fall through */
@@ -10247,6 +11564,261 @@
   return "";
 }
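The PLUS cases above special-case small constant offsets so a single load/store-multiple with the right addressing suffix (db for -8, da for -4, ib for +4) can absorb the offset; Thumb-2 lacks the da and ib forms, hence the TARGET_THUMB2 bail-outs. A sketch of that offset-to-suffix mapping (function name is illustrative, not part of the patch):

```c
#include <assert.h>
#include <string.h>

/* Map a constant offset to an LDM/STM addressing suffix; returns
   NULL when no multiple form absorbs the offset and the caller
   must materialise the address separately.  */
static const char *
ldm_suffix_for_offset (int offset, int thumb2)
{
  if (offset == -8)
    return "db";                 /* decrement-before: ARM and Thumb-2 */
  if (offset == -4 && !thumb2)
    return "da";                 /* decrement-after: ARM only */
  if (offset == 4 && !thumb2)
    return "ib";                 /* increment-before: ARM only */
  return 0;
}
```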
 
+/* APPLE LOCAL begin v7 support. Merge from Codesourcery */
+/* Output a move, load or store for quad-word vectors in ARM registers.  Only
+   handles MEMs accepted by neon_vector_mem_operand with CORE=true.  */
+
+const char *
+output_move_quad (rtx *operands)
+{
+  if (REG_P (operands[0]))
+    {
+      /* Load, or reg->reg move.  */
+
+      if (MEM_P (operands[1]))
+        {
+          switch (GET_CODE (XEXP (operands[1], 0)))
+            {
+            case REG:
+              output_asm_insn ("ldm%(ia%)\t%m1, %M0", operands);
+              break;
+
+            case LABEL_REF:
+            case CONST:
+              output_asm_insn ("adr%?\t%0, %1", operands);
+              output_asm_insn ("ldm%(ia%)\t%0, %M0", operands);
+              break;
+
+            default:
+              gcc_unreachable ();
+            }
+        }
+      else
+        {
+          rtx ops[2];
+          int dest, src, i;
+
+          gcc_assert (REG_P (operands[1]));
+
+          dest = REGNO (operands[0]);
+          src = REGNO (operands[1]);
+          
+          /* This seems pretty dumb, but hopefully GCC won't try to do it
+             very often.  */
+          if (dest < src)
+            for (i = 0; i < 4; i++)
+              {
+                ops[0] = gen_rtx_REG (SImode, dest + i);
+                ops[1] = gen_rtx_REG (SImode, src + i);
+                output_asm_insn ("mov%?\t%0, %1", ops);
+              }
+          else
+            for (i = 3; i >= 0; i--)
+              {
+                ops[0] = gen_rtx_REG (SImode, dest + i);
+                ops[1] = gen_rtx_REG (SImode, src + i);
+                output_asm_insn ("mov%?\t%0, %1", ops);
+              }
+        }
+    }
+  else 
+    {
+      gcc_assert (MEM_P (operands[0]));
+      gcc_assert (REG_P (operands[1]));
+      gcc_assert (!reg_overlap_mentioned_p (operands[1], operands[0]));
+      
+      switch (GET_CODE (XEXP (operands[0], 0)))
+        {
+        case REG:
+          output_asm_insn ("stm%(ia%)\t%m0, %M1", operands);
+          break;
+
+        default:
+          gcc_unreachable ();
+        }
+    }
+  
+  return "";
+}
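When output_move_quad copies four core registers register-to-register, it orders the moves by direction so overlapping ranges are never clobbered: ascending when the destination block starts below the source, descending otherwise. The same guard in plain C:

```c
#include <assert.h>

/* Copy a block of 4 "registers" within one register file, choosing
   the copy direction so no source slot is overwritten before it is
   read, even when the ranges overlap.  */
static void
move_quad (int *regs, int dest, int src)
{
  if (dest < src)
    for (int i = 0; i < 4; i++)
      regs[dest + i] = regs[src + i];
  else
    for (int i = 3; i >= 0; i--)
      regs[dest + i] = regs[src + i];
}
```

With an overlapping pair like dest = 1, src = 0, the descending order is what keeps the copy correct; ascending would overwrite regs[1] before it was read.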
+
+/* Output a VFP load or store instruction.  */
+
+const char *
+output_move_vfp (rtx *operands)
+{
+  rtx reg, mem, addr, ops[2];
+  int load = REG_P (operands[0]);
+  int dp = GET_MODE_SIZE (GET_MODE (operands[0])) == 8;
+  int integer_p = GET_MODE_CLASS (GET_MODE (operands[0])) == MODE_INT;
+  const char *template;
+  char buff[50];
+  enum machine_mode mode;
+
+  reg = operands[!load];
+  mem = operands[load];
+
+  mode = GET_MODE (reg);
+
+  gcc_assert (REG_P (reg));
+  gcc_assert (IS_VFP_REGNUM (REGNO (reg)));
+  gcc_assert (mode == SFmode
+	      || mode == DFmode
+	      || mode == SImode
+	      || mode == DImode
+              || (TARGET_NEON && VALID_NEON_DREG_MODE (mode)));
+  gcc_assert (MEM_P (mem));
+
+  addr = XEXP (mem, 0);
+
+  switch (GET_CODE (addr))
+    {
+    case PRE_DEC:
+      template = "f%smdb%c%%?\t%%0!, {%%%s1}%s";
+      ops[0] = XEXP (addr, 0);
+      ops[1] = reg;
+      break;
+
+    case POST_INC:
+      template = "f%smia%c%%?\t%%0!, {%%%s1}%s";
+      ops[0] = XEXP (addr, 0);
+      ops[1] = reg;
+      break;
+
+    default:
+      template = "f%s%c%%?\t%%%s0, %%1%s";
+      ops[0] = reg;
+      ops[1] = mem;
+      break;
+    }
+
+  sprintf (buff, template,
+	   load ? "ld" : "st",
+	   dp ? 'd' : 's',
+	   dp ? "P" : "",
+	   integer_p ? "\t%@ int" : "");
+  output_asm_insn (buff, ops);
+
+  return "";
+}
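The sprintf in output_move_vfp splices the load/store direction, the precision letter, and the register-print modifier into one assembler template. The expansion of the PRE_DEC template can be checked host-side (an illustrative sketch using the same format string, not compiler code; `expand_vfp_template` is a made-up name):

```c
#include <stdio.h>
#include <string.h>
#include <assert.h>

/* Expand the PRE_DEC template from output_move_vfp on the host, to show
   how sprintf splices direction ("ld"/"st"), precision ('d'/'s'), the
   "P" print modifier for double-precision registers, and an optional
   "int" annotation into the final assembler template.  Illustrative
   sketch only; the real function chooses among three templates.  */
static void expand_vfp_template (char *buff, size_t n,
                                 int load, int dp, int integer_p)
{
  const char *templ = "f%smdb%c%%?\t%%0!, {%%%s1}%s";
  snprintf (buff, n, templ,
            load ? "ld" : "st",
            dp ? 'd' : 's',
            dp ? "P" : "",
            integer_p ? "\t%@ int" : "");
}
```

For a double-precision load this yields `fldmdbd%?\t%0!, {%P1}`, which the operand printer then turns into concrete registers.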
+
+/* Output a Neon quad-word load or store, or a load or store for
+   larger structure modes. We could also support post-modify
+   forms using VLD1/VST1, but we don't do that yet.
+   WARNING, FIXME: The ordering of elements in memory is going to be weird in
+   big-endian mode at present, because we use VSTM instead of VST1, to make
+   it easy to make vector stores via ARM registers write values in the same
+   order as stores direct from Neon registers.  For example, the byte ordering
+   of a quadword vector with 16-bit elements like this:
+
+     [e7:e6:e5:e4:e3:e2:e1:e0]  (highest-numbered element first)
+
+   will be (with lowest address first, h = most-significant byte,
+   l = least-significant byte of element):
+
+     [e3h, e3l, e2h, e2l, e1h, e1l, e0h, e0l,
+      e7h, e7l, e6h, e6l, e5h, e5l, e4h, e4l]
+   
+   When necessary, quadword registers (dN, dN+1) are moved to ARM registers from
+   rN in the order:
+   
+     dN -> (rN+1, rN), dN+1 -> (rN+3, rN+2)
+   
+   So that STM/LDM can be used on vectors in ARM registers, and the same memory
+   layout will result as if VSTM/VLDM were used.
+
+   This memory format (in BE mode) is very likely to change in the future.  */
+
+const char *
+output_move_neon (rtx *operands)
+{
+  rtx reg, mem, addr, ops[2];
+  int regno, load = REG_P (operands[0]);
+  const char *template;
+  char buff[50];
+  enum machine_mode mode;
+  
+  reg = operands[!load];
+  mem = operands[load];
+  
+  mode = GET_MODE (reg);
+  
+  gcc_assert (REG_P (reg));
+  regno = REGNO (reg);
+  gcc_assert (VFP_REGNO_OK_FOR_DOUBLE (regno)
+	      || NEON_REGNO_OK_FOR_QUAD (regno));
+  gcc_assert (VALID_NEON_DREG_MODE (mode)
+	      || VALID_NEON_QREG_MODE (mode)
+	      || VALID_NEON_STRUCT_MODE (mode));
+  gcc_assert (MEM_P (mem));
+  
+  addr = XEXP (mem, 0);
+  
+  /* Strip off const from addresses like (const (plus (...))).  */
+  if (GET_CODE (addr) == CONST && GET_CODE (XEXP (addr, 0)) == PLUS)
+    addr = XEXP (addr, 0);
+  
+  switch (GET_CODE (addr))
+    {
+    case POST_INC:
+      /* FIXME: We should be using vld1/vst1 here in BE mode?  */
+      template = "v%smia%%?\t%%0!, %%h1";
+      ops[0] = XEXP (addr, 0);
+      ops[1] = reg;
+      break;
+    
+    case POST_MODIFY:
+      /* FIXME: Not currently enabled in neon_vector_mem_operand.  */
+      gcc_unreachable ();
+
+    case LABEL_REF:
+    case PLUS:
+      {
+	int nregs = HARD_REGNO_NREGS (REGNO (reg), mode) / 2;
+	int i;
+	int overlap = -1;
+	for (i = 0; i < nregs; i++)
+	  {
+	    /* We're only using DImode here because it's a convenient size.
+	       FIXME: This will need updating if the memory format of vectors
+	       changes.  */
+	    ops[0] = gen_rtx_REG (DImode, REGNO (reg) + 2 * i);
+	    ops[1] = adjust_address (mem, SImode, 8 * i);
+	    if (reg_overlap_mentioned_p (ops[0], mem))
+	      {
+		gcc_assert (overlap == -1);
+		overlap = i;
+	      }
+	    else
+	      {
+		sprintf (buff, "v%sr%%?\t%%P0, %%1", load ? "ld" : "st");
+		output_asm_insn (buff, ops);
+	      }
+	  }
+	if (overlap != -1)
+	  {
+	    ops[0] = gen_rtx_REG (DImode, REGNO (reg) + 2 * overlap);
+	    ops[1] = adjust_address (mem, SImode, 8 * overlap);
+	    sprintf (buff, "v%sr%%?\t%%P0, %%1", load ? "ld" : "st");
+	    output_asm_insn (buff, ops);
+	  }
+
+        return "";
+      }
+
+    default:
+      /* FIXME: See POST_INC.  */
+      template = "v%smia%%?\t%%m0, %%h1";
+      ops[0] = mem;
+      ops[1] = reg;
+    }
+  
+  sprintf (buff, template, load ? "ld" : "st");
+  output_asm_insn (buff, ops);
+  
+  return "";
+}
+
+/* APPLE LOCAL end v7 support. Merge from Codesourcery */
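The big-endian layout described in the output_move_neon comment above can be reproduced with a small host-side simulation (a sketch, not compiler code): it models a VSTM of the two D registers holding a quadword of eight 16-bit elements, where each 64-bit doubleword is written most-significant byte first.

```c
#include <stdint.h>
#include <assert.h>

/* Simulate a big-endian VSTM of a quadword vector of eight 16-bit
   elements e0..e7.  d0 holds e3:e2:e1:e0 (e3 most significant) and
   d1 holds e7:e6:e5:e4; each 64-bit doubleword is stored MSB first,
   giving the [e3h, e3l, ..., e4h, e4l] order from the comment.  */
static void be_vstm_q16 (const uint16_t e[8], uint8_t out[16])
{
  int d, i;
  for (d = 0; d < 2; d++)            /* d0, then d1 */
    {
      uint64_t dw = 0;
      for (i = 0; i < 4; i++)        /* pack e[4d] .. e[4d+3] */
        dw |= (uint64_t) e[4 * d + i] << (16 * i);
      for (i = 0; i < 8; i++)        /* store MSB first (big-endian) */
        out[8 * d + i] = (uint8_t) (dw >> (8 * (7 - i)));
    }
}
```

A VST1 of the same vector would instead store e0 first, which is why the comment flags this format as likely to change.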
 /* Output an ADD r, s, #n where n may be too big for one instruction.
    If adding zero to one register, output nothing.  */
 const char *
@@ -10310,6 +11882,31 @@
   return "";
 }
 
+/* APPLE LOCAL begin v7 support. Merge from mainline */
+/* Return the name of a shifter operation.  */
+static const char *
+arm_shift_nmem(enum rtx_code code)
+{
+  switch (code)
+    {
+    case ASHIFT:
+      return ARM_LSL_NAME;
+
+    case ASHIFTRT:
+      return "asr";
+
+    case LSHIFTRT:
+      return "lsr";
+
+    case ROTATERT:
+      return "ror";
+
+    default:
+      abort();
+    }
+}
+
+/* APPLE LOCAL end v7 support. Merge from mainline */
 /* Return the appropriate ARM instruction for the operation code.
    The returned result should not be overwritten.  OP is the rtx of the
    operation.  SHIFT_FIRST_ARG is TRUE if the first argument of the operator
@@ -10334,6 +11931,14 @@
     case AND:
       return "and";
 
+    /* APPLE LOCAL begin v7 support. Merge from mainline */
+    case ASHIFT:
+    case ASHIFTRT:
+    case LSHIFTRT:
+    case ROTATERT:
+      return arm_shift_nmem(GET_CODE(op));
+    /* APPLE LOCAL end v7 support. Merge from mainline */
+
     default:
       gcc_unreachable ();
     }
@@ -10365,28 +11970,21 @@
       gcc_unreachable ();
     }
 
+  /* APPLE LOCAL begin v7 support. Merge from mainline */
   switch (code)
     {
-    case ASHIFT:
-      mnem = "asl";
-      break;
-
-    case ASHIFTRT:
-      mnem = "asr";
-      break;
-
-    case LSHIFTRT:
-      mnem = "lsr";
-      break;
-
     case ROTATE:
       gcc_assert (*amountp != -1);
       *amountp = 32 - *amountp;
+      code = ROTATERT;
 
       /* Fall through.  */
 
+    case ASHIFT:
+    case ASHIFTRT:
+    case LSHIFTRT:
     case ROTATERT:
-      mnem = "ror";
+      mnem = arm_shift_nmem(code);
       break;
 
     case MULT:
@@ -10394,7 +11992,7 @@
 	 power of 2, since this case can never be reloaded from a reg.  */
       gcc_assert (*amountp != -1);
       *amountp = int_log2 (*amountp);
-      return "asl";
+      return ARM_LSL_NAME;
 
     default:
       gcc_unreachable ();
@@ -10404,7 +12002,7 @@
     {
       /* This is not 100% correct, but follows from the desire to merge
 	 multiplication by a power of 2 with the recognizer for a
-	 shift.  >=32 is not a valid shift for "asl", so we must try and
+	 shift.  >=32 is not a valid shift for "lsl", so we must try and
 	 output a shift that produces the correct arithmetical result.
 	 Using lsr #32 is identical except for the fact that the carry bit
 	 is not set correctly if we set the flags; but we never use the
@@ -10423,6 +12021,7 @@
       if (*amountp == 0)
 	return NULL;
     }
+  /* APPLE LOCAL end v7 support. Merge from mainline */
 
   return mnem;
 }
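The two rewrites in the shift-handling code above, a left ROTATE becoming a ROTATERT with the amount replaced by `32 - *amountp`, and a multiply by a power of two becoming an `lsl` by `int_log2` of the constant, rest on simple identities that can be sanity-checked on the host (an illustrative sketch; the helper names are made up, not GCC's):

```c
#include <stdint.h>
#include <assert.h>

/* rol by n on a 32-bit value equals ror by (32 - n), which is why a
   ROTATE is rewritten as ROTATERT with the amount replaced by 32-n.  */
static uint32_t ror32 (uint32_t x, unsigned n)
{
  n &= 31;
  return n ? (x >> n) | (x << (32 - n)) : x;
}

static uint32_t rol32 (uint32_t x, unsigned n)
{
  return ror32 (x, (32 - n) & 31);
}

/* Multiplying by a power of two is a left shift by its log2, which is
   why the MULT case returns ARM_LSL_NAME after taking int_log2.  */
static unsigned int_log2_sketch (uint32_t pow2)
{
  unsigned n = 0;
  while (pow2 > 1)
    {
      pow2 >>= 1;
      n++;
    }
  return n;
}
```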
@@ -10554,6 +12153,12 @@
 	  && (regs_ever_live[PIC_OFFSET_TABLE_REGNUM]
 	      || current_function_uses_pic_offset_table))
 	save_reg_mask |= 1 << PIC_OFFSET_TABLE_REGNUM;
+
+      /* APPLE LOCAL begin v7 support. Merge from mainline */
+      /* The prologue will copy SP into R0, so save it.  */
+      if (IS_STACKALIGN (func_type))
+	save_reg_mask |= 1;
+      /* APPLE LOCAL end v7 support. Merge from mainline */
     }
 
   /* Save registers so the exception handler can modify them.  */
@@ -10581,6 +12186,8 @@
 {
   unsigned int save_reg_mask = 0;
   unsigned long func_type = arm_current_func_type ();
+  /* APPLE LOCAL v7 support. Merge from mainline */
+  unsigned int reg;
 
   if (IS_NAKED (func_type))
     /* This should never really happen.  */
@@ -10623,8 +12230,8 @@
       && ((bit_count (save_reg_mask)
 	   + ARM_NUM_INTS (current_function_pretend_args_size)) % 2) != 0)
     {
-      unsigned int reg;
-
+      /* APPLE LOCAL v7 support. Merge from mainline */
+      /* moved definition of 'reg' to function level scope */
       /* The total number of registers that are going to be pushed
 	 onto the stack is odd.  We need to ensure that the stack
 	 is 64-bit aligned before we start to save iWMMXt registers,
@@ -10647,6 +12254,18 @@
 	}
     }
 
+  /* APPLE LOCAL begin v7 support. Merge from mainline */
+  /* We may need to push an additional register for use in initializing the
+     PIC base register.  */
+  if (TARGET_THUMB2 && IS_NESTED (func_type) && flag_pic
+      && (save_reg_mask & THUMB2_WORK_REGS) == 0)
+    {
+      reg = thumb_find_work_register (1 << 4);
+      if (!call_used_regs[reg])
+	save_reg_mask |= (1 << reg);
+    }
+
+  /* APPLE LOCAL end v7 support. Merge from mainline */
   return save_reg_mask;
 }
 
@@ -10654,7 +12273,8 @@
 /* Compute a bit mask of which registers need to be
    saved on the stack for the current function.  */
 static unsigned long
-thumb_compute_save_reg_mask (void)
+/* APPLE LOCAL v7 support. Merge from mainline */
+thumb1_compute_save_reg_mask (void)
 {
   unsigned long mask;
   unsigned reg;
@@ -10742,7 +12362,8 @@
 		  /* Workaround ARM10 VFPr1 bug.  */
 		  if (count == 2 && !arm_arch6)
 		    count++;
-		  saved += count * 8 + 4;
+                  /* APPLE LOCAL v7 support. Merge from mainline */
+		  saved += count * 8;
 		}
 	      count = 0;
 	    }
@@ -10753,7 +12374,8 @@
 	{
 	  if (count == 2 && !arm_arch6)
 	    count++;
-	  saved += count * 8 + 4;
+          /* APPLE LOCAL v7 support. Merge from mainline */
+	  saved += count * 8;
 	}
     }
   return saved;
@@ -10870,7 +12492,8 @@
 	      stack_adjust = offsets->outgoing_args - offsets->saved_regs;
 	      gcc_assert (stack_adjust == 0 || stack_adjust == 4);
 
-	      if (stack_adjust && arm_arch5)
+              /* APPLE LOCAL v7 support. Merge from mainline */
+	      if (stack_adjust && arm_arch5 && TARGET_ARM)
 		sprintf (instr, "ldm%sib\t%%|sp, {", conditional);
 	      else
 		{
@@ -10935,6 +12558,8 @@
 	{
 	case ARM_FT_ISR:
 	case ARM_FT_FIQ:
+          /* APPLE LOCAL v7 support. Merge from mainline */
+	  /* ??? This is wrong for unified assembly syntax.  */
 	  sprintf (instr, "sub%ss\t%%|pc, %%|lr, #4", conditional);
 	  break;
 
@@ -10943,6 +12568,8 @@
 	  break;
 
 	case ARM_FT_EXCEPTION:
+          /* APPLE LOCAL v7 support. Merge from mainline */
+   	  /* ??? This is wrong for unified assembly syntax.  */
 	  sprintf (instr, "mov%ss\t%%|pc, %%|lr", conditional);
 	  break;
 
@@ -11010,11 +12637,13 @@
 {
   unsigned long func_type;
 
-  if (!TARGET_ARM)
+  /* APPLE LOCAL begin v7 support. Merge from mainline */
+  if (TARGET_THUMB1)
     {
-      thumb_output_function_prologue (f, frame_size);
+      thumb1_output_function_prologue (f, frame_size);
       return;
     }
+  /* APPLE LOCAL end v7 support. Merge from mainline */
 
   /* Sanity check.  */
   gcc_assert (!arm_ccfsm_state && !arm_target_insn);
@@ -11048,6 +12677,10 @@
 
   if (IS_NESTED (func_type))
     asm_fprintf (f, "\t%@ Nested: function declared inside another function.\n");
+  /* APPLE LOCAL begin v7 support. Merge from mainline */
+  if (IS_STACKALIGN (func_type))
+    asm_fprintf (f, "\t%@ Stack Align: May be called with mis-aligned SP.\n");
+  /* APPLE LOCAL end v7 support. Merge from mainline */
 
   asm_fprintf (f, "\t%@ args = %d, pretend = %d, frame = %wd\n",
 	       current_function_args_size,
@@ -11071,6 +12704,32 @@
   return_used_this_function = 0;
 }
 
+/* APPLE LOCAL begin 6902937 out of order VFP restore */
+static void
+arm_output_epilogue_vfp_restore (void)
+{
+  int reg, start_reg;
+  FILE * f = asm_out_file;
+  start_reg = LAST_VFP_REGNUM - 1;
+  for (reg = LAST_VFP_REGNUM - 1 ; reg >= FIRST_VFP_REGNUM; reg -= 2)
+  {
+    if ((!regs_ever_live[reg] || call_used_regs[reg])
+        && (!regs_ever_live[reg + 1] || call_used_regs[reg + 1]))
+    {
+      if (start_reg != reg)
+        vfp_output_fldmd (f, SP_REGNUM,
+                          (reg - FIRST_VFP_REGNUM) / 2 + 1,
+                          (start_reg - reg) / 2);
+      start_reg = reg - 2;
+    }
+  }
+  if (start_reg != reg)
+    vfp_output_fldmd (f, SP_REGNUM,
+                      (reg - FIRST_VFP_REGNUM + 2) / 2 + 1,
+                      (start_reg - reg) / 2);
+}
+/* APPLE LOCAL end 6902937 out of order VFP restore */
+
 const char *
 arm_output_epilogue (rtx sibling)
 {
@@ -11086,6 +12745,8 @@
   int really_return = (sibling == NULL);
   int start_reg;
   arm_stack_offsets *offsets;
+  /* APPLE LOCAL 6196857 use pop for thumb-2 epilogue */
+  const char *pop_insn;
 
   /* If we have already generated the return instruction
      then it is futile to generate anything else.  */
@@ -11126,7 +12787,8 @@
     if (saved_regs_mask & (1 << reg))
       floats_offset += 4;
 
-  if (frame_pointer_needed)
+  /* APPLE LOCAL v7 support. Merge from mainline */
+  if (frame_pointer_needed && TARGET_32BIT)
     {
       /* This variable is for the Virtual Frame Pointer, not VFP regs.  */
       int vfp_offset = offsets->frame;
@@ -11184,33 +12846,23 @@
 	{
 	  int saved_size;
 
-	  /* The fldmx insn does not have base+offset addressing modes,
-	     so we use IP to hold the address.  */
+	  /* APPLE LOCAL begin 4809156 */
+          /* APPLE LOCAL begin v7 support. Merge from mainline */
+	  /* The fldmd insns do not have base+offset addressing modes,
+	     so we use SP to hold the address.  (IP might have a
+	     live value in it for indirect sibcalls, can't use that.)  */
+          /* APPLE LOCAL end v7 support. Merge from mainline */
 	  saved_size = arm_get_vfp_saved_size ();
 
 	  if (saved_size > 0)
 	    {
 	      floats_offset += saved_size;
-	      asm_fprintf (f, "\tsub\t%r, %r, #%d\n", IP_REGNUM,
+	      asm_fprintf (f, "\tsub\t%r, %r, #%d\n", SP_REGNUM,
 			   FP_REGNUM, floats_offset - vfp_offset);
 	    }
-	  start_reg = FIRST_VFP_REGNUM;
-	  for (reg = FIRST_VFP_REGNUM; reg < LAST_VFP_REGNUM; reg += 2)
-	    {
-	      if ((!regs_ever_live[reg] || call_used_regs[reg])
-		  && (!regs_ever_live[reg + 1] || call_used_regs[reg + 1]))
-		{
-		  if (start_reg != reg)
-		    arm_output_fldmx (f, IP_REGNUM,
-				      (start_reg - FIRST_VFP_REGNUM) / 2,
-				      (reg - start_reg) / 2);
-		  start_reg = reg + 2;
-		}
-	    }
-	  if (start_reg != reg)
-	    arm_output_fldmx (f, IP_REGNUM,
-			      (start_reg - FIRST_VFP_REGNUM) / 2,
-			      (reg - start_reg) / 2);
+          /* APPLE LOCAL 6902937 out of order VFP restore */
+          arm_output_epilogue_vfp_restore ();
+	  /* APPLE LOCAL end 4809156 */
 	}
 
       if (TARGET_IWMMXT)
@@ -11305,15 +12957,24 @@
       /* We mustn't be trying to restore SP from the stack.  */
       gcc_assert (! (saved_regs_mask & (1 << SP_REGNUM)));
 
+      /* APPLE LOCAL 6196857 begin use pop for thumb-2 epilogue */
+      if (TARGET_ARM)
+	pop_insn = "ldmfd\t%r!, ";
+      else /* (TARGET_THUMB2) */
+	pop_insn = "pop\t";
+
+      /* APPLE LOCAL begin v7 support. Merge from mainline */
       if (saved_regs_mask & regs_above_fp)
 	{
-	  print_multi_reg (f, "ldmfd\t%r!", SP_REGNUM,
+	  print_multi_reg (f, pop_insn, SP_REGNUM,
 			   saved_regs_mask & regs_above_fp);
-	  print_multi_reg (f, "ldmfd\t%r!", SP_REGNUM,
+	  print_multi_reg (f, pop_insn, SP_REGNUM,
 			   saved_regs_mask & ~regs_above_fp);
 	}
       else
-	print_multi_reg (f, "ldmfd\t%r!", SP_REGNUM, saved_regs_mask);
+	print_multi_reg (f, pop_insn, SP_REGNUM, saved_regs_mask);
+      /* APPLE LOCAL end v7 support. Merge from mainline */
+      /* APPLE LOCAL 6196857 end use pop for thumb-2 epilogue */
 
       if (current_function_pretend_args_size)
 	{
@@ -11327,7 +12988,8 @@
       if (IS_INTERRUPT (func_type))
 	/* Interrupt handlers will have pushed the
 	   IP onto the stack, so restore it now.  */
-	print_multi_reg (f, "ldmfd\t%r!", SP_REGNUM, 1 << IP_REGNUM);
+        /* APPLE LOCAL v7 support. Merge from mainline */
+	print_multi_reg (f, "ldmfd\t%r!, ", SP_REGNUM, 1 << IP_REGNUM);
     }
   else
     {
@@ -11465,23 +13127,8 @@
 
       if (TARGET_HARD_FLOAT && TARGET_VFP)
 	{
-	  start_reg = FIRST_VFP_REGNUM;
-	  for (reg = FIRST_VFP_REGNUM; reg < LAST_VFP_REGNUM; reg += 2)
-	    {
-	      if ((!regs_ever_live[reg] || call_used_regs[reg])
-		  && (!regs_ever_live[reg + 1] || call_used_regs[reg + 1]))
-		{
-		  if (start_reg != reg)
-		    arm_output_fldmx (f, SP_REGNUM,
-				      (start_reg - FIRST_VFP_REGNUM) / 2,
-				      (reg - start_reg) / 2);
-		  start_reg = reg + 2;
-		}
-	    }
-	  if (start_reg != reg)
-	    arm_output_fldmx (f, SP_REGNUM,
-			      (start_reg - FIRST_VFP_REGNUM) / 2,
-			      (reg - start_reg) / 2);
+          /* APPLE LOCAL 6902937 out of order VFP restore */
+          arm_output_epilogue_vfp_restore ();
 	}
       if (TARGET_IWMMXT)
 	for (reg = FIRST_IWMMXT_REGNUM; reg <= LAST_IWMMXT_REGNUM; reg++)
@@ -11490,6 +13137,8 @@
 
       /* If we can, restore the LR into the PC.  */
       if (ARM_FUNC_TYPE (func_type) == ARM_FT_NORMAL
+          /* APPLE LOCAL v7 support. Merge from mainline */
+	  && !IS_STACKALIGN (func_type)
 	  && really_return
 	  && current_function_pretend_args_size == 0
 	  && saved_regs_mask & (1 << LR_REGNUM)
@@ -11499,9 +13148,11 @@
 	  saved_regs_mask |=   (1 << PC_REGNUM);
 	}
 
+      /* APPLE LOCAL begin v7 support. Merge from mainline */
       /* Load the registers off the stack.  If we only have one register
-	 to load use the LDR instruction - it is faster.  */
-      if (saved_regs_mask == (1 << LR_REGNUM))
+	 to load use the LDR instruction - it is faster.  For Thumb-2
+	 always use pop and the assembler will pick the best instruction.*/
+      if (TARGET_ARM && saved_regs_mask == (1 << LR_REGNUM))
 	{
 	  asm_fprintf (f, "\tldr\t%r, [%r], #4\n", LR_REGNUM, SP_REGNUM);
 	}
@@ -11512,10 +13163,13 @@
 	       (i.e. "ldmfd sp!...").  We know that the stack pointer is
 	       in the list of registers and if we add writeback the
 	       instruction becomes UNPREDICTABLE.  */
-	    print_multi_reg (f, "ldmfd\t%r", SP_REGNUM, saved_regs_mask);
+	    print_multi_reg (f, "ldmfd\t%r, ", SP_REGNUM, saved_regs_mask);
+	  else if (TARGET_ARM)
+	    print_multi_reg (f, "ldmfd\t%r!, ", SP_REGNUM, saved_regs_mask);
 	  else
-	    print_multi_reg (f, "ldmfd\t%r!", SP_REGNUM, saved_regs_mask);
+	    print_multi_reg (f, "pop\t", SP_REGNUM, saved_regs_mask);
 	}
+      /* APPLE LOCAL end v7 support. Merge from mainline */
 
       if (current_function_pretend_args_size)
 	{
@@ -11552,6 +13206,13 @@
       break;
 
     default:
+      /* APPLE LOCAL begin v7 support. Merge from mainline */
+      if (IS_STACKALIGN (func_type))
+	{
+	  /* See comment in arm_expand_prologue.  */
+	  asm_fprintf (f, "\tmov\t%r, %r\n", SP_REGNUM, 0);
+	}
+      /* APPLE LOCAL end v7 support. Merge from mainline */
       if (arm_arch5 || arm_arch4t)
 	asm_fprintf (f, "\tbx\t%r\n", LR_REGNUM);
       else
@@ -11568,7 +13229,8 @@
 {
   arm_stack_offsets *offsets;
 
-  if (TARGET_THUMB)
+  /* APPLE LOCAL v7 support. Merge from mainline */
+  if (TARGET_THUMB1)
     {
       int regno;
 
@@ -11592,7 +13254,8 @@
 	 RTL for it.  This does not happen for inline functions.  */
       return_used_this_function = 0;
     }
-  else
+  /* APPLE LOCAL v7 support. Merge from mainline */
+  else /* TARGET_32BIT */
     {
       /* We need to take into account any stack-frame rounding.  */
       offsets = arm_get_frame_offsets ();
@@ -11922,7 +13585,8 @@
   /* APPLE LOCAL ARM custom frame layout */
   offsets->frame = offsets->saved_args + (frame_pointer_needed ? 8 : 0);
 
-  if (TARGET_ARM)
+  /* APPLE LOCAL v7 support. Merge from mainline */
+  if (TARGET_32BIT)
     {
       unsigned int regno;
 
@@ -11955,9 +13619,11 @@
 	    saved += arm_get_vfp_saved_size ();
 	}
     }
-  else /* TARGET_THUMB */
+  /* APPLE LOCAL v7 support. Merge from mainline */
+  else /* TARGET_THUMB1 */
     {
-      saved = bit_count (thumb_compute_save_reg_mask ()) * 4;
+      /* APPLE LOCAL v7 support. Merge from mainline */
+      saved = bit_count (thumb1_compute_save_reg_mask ()) * 4;
       if (TARGET_BACKTRACE)
 	saved += 16;
       /* APPLE LOCAL begin 6465387 exception handling interworking VFP save */
@@ -12059,27 +13725,152 @@
 	case STACK_POINTER_REGNUM:
 	  return offsets->outgoing_args - offsets->soft_frame;
 
-	default:
-	  gcc_unreachable ();
+	default:
+	  gcc_unreachable ();
+	}
+      gcc_unreachable ();
+
+    default:
+      /* You cannot eliminate from the stack pointer.
+	 In theory you could eliminate from the hard frame
+	 pointer to the stack pointer, but this will never
+	 happen, since if a stack frame is not needed the
+	 hard frame pointer will never be used.  */
+      gcc_unreachable ();
+    }
+}
+
+
+/* APPLE LOCAL begin v7 support. Merge from mainline */
+/* Emit RTL to save coprocessor registers on function entry.  Returns the
+   number of bytes pushed.  */
+
+static int
+arm_save_coproc_regs(void)
+{
+  int saved_size = 0;
+  unsigned reg;
+  unsigned start_reg;
+  rtx insn;
+
+  for (reg = LAST_IWMMXT_REGNUM; reg >= FIRST_IWMMXT_REGNUM; reg--)
+    if (regs_ever_live[reg] && ! call_used_regs [reg])
+      {
+	insn = gen_rtx_PRE_DEC (V2SImode, stack_pointer_rtx);
+	insn = gen_rtx_MEM (V2SImode, insn);
+	insn = emit_set_insn (insn, gen_rtx_REG (V2SImode, reg));
+	RTX_FRAME_RELATED_P (insn) = 1;
+	saved_size += 8;
+      }
+
+  /* Save any floating point call-saved registers used by this
+     function.  */
+  if (arm_fpu_arch == FPUTYPE_FPA_EMU2)
+    {
+      for (reg = LAST_FPA_REGNUM; reg >= FIRST_FPA_REGNUM; reg--)
+	if (regs_ever_live[reg] && !call_used_regs[reg])
+	  {
+	    insn = gen_rtx_PRE_DEC (XFmode, stack_pointer_rtx);
+	    insn = gen_rtx_MEM (XFmode, insn);
+	    insn = emit_set_insn (insn, gen_rtx_REG (XFmode, reg));
+	    RTX_FRAME_RELATED_P (insn) = 1;
+	    saved_size += 12;
+	  }
+    }
+  else
+    {
+      start_reg = LAST_FPA_REGNUM;
+
+      for (reg = LAST_FPA_REGNUM; reg >= FIRST_FPA_REGNUM; reg--)
+	{
+	  if (regs_ever_live[reg] && !call_used_regs[reg])
+	    {
+	      if (start_reg - reg == 3)
+		{
+		  insn = emit_sfm (reg, 4);
+		  RTX_FRAME_RELATED_P (insn) = 1;
+		  saved_size += 48;
+		  start_reg = reg - 1;
+		}
+	    }
+	  else
+	    {
+	      if (start_reg != reg)
+		{
+		  insn = emit_sfm (reg + 1, start_reg - reg);
+		  RTX_FRAME_RELATED_P (insn) = 1;
+		  saved_size += (start_reg - reg) * 12;
+		}
+	      start_reg = reg - 1;
+	    }
+	}
+
+      if (start_reg != reg)
+	{
+	  insn = emit_sfm (reg + 1, start_reg - reg);
+	  saved_size += (start_reg - reg) * 12;
+	  RTX_FRAME_RELATED_P (insn) = 1;
 	}
-      gcc_unreachable ();
+    }
+  if (TARGET_HARD_FLOAT && TARGET_VFP)
+    {
+      start_reg = FIRST_VFP_REGNUM;
 
-    default:
-      /* You cannot eliminate from the stack pointer.
-	 In theory you could eliminate from the hard frame
-	 pointer to the stack pointer, but this will never
-	 happen, since if a stack frame is not needed the
-	 hard frame pointer will never be used.  */
-      gcc_unreachable ();
+      for (reg = FIRST_VFP_REGNUM; reg < LAST_VFP_REGNUM; reg += 2)
+	{
+	  if ((!regs_ever_live[reg] || call_used_regs[reg])
+	      && (!regs_ever_live[reg + 1] || call_used_regs[reg + 1]))
+	    {
+	      if (start_reg != reg)
+		saved_size += vfp_emit_fstmd (start_reg,
+					      (reg - start_reg) / 2);
+	      start_reg = reg + 2;
+	    }
+	}
+      if (start_reg != reg)
+	saved_size += vfp_emit_fstmd (start_reg,
+				      (reg - start_reg) / 2);
     }
+  return saved_size;
 }
 
 
-/* Generate the prologue instructions for entry into an ARM function.  */
+/* Set the Thumb frame pointer from the stack pointer.  */
+
+static void
+thumb_set_frame_pointer (arm_stack_offsets *offsets)
+{
+  HOST_WIDE_INT amount;
+  rtx insn, dwarf;
+
+  amount = offsets->outgoing_args - offsets->locals_base;
+  if (amount < 1024)
+    insn = emit_insn (gen_addsi3 (hard_frame_pointer_rtx,
+				  stack_pointer_rtx, GEN_INT (amount)));
+  else
+    {
+      emit_insn (gen_movsi (hard_frame_pointer_rtx, GEN_INT (amount)));
+      insn = emit_insn (gen_addsi3 (hard_frame_pointer_rtx,
+				    hard_frame_pointer_rtx,
+				    stack_pointer_rtx));
+      dwarf = gen_rtx_SET (VOIDmode, hard_frame_pointer_rtx,
+			   plus_constant (stack_pointer_rtx, amount));
+      RTX_FRAME_RELATED_P (dwarf) = 1;
+      REG_NOTES (insn) = gen_rtx_EXPR_LIST (REG_FRAME_RELATED_EXPR, dwarf,
+					    REG_NOTES (insn));
+    }
+
+  RTX_FRAME_RELATED_P (insn) = 1;
+}
+
+/* Generate the prologue instructions for entry into an ARM or Thumb-2
+   function.  */
+/* APPLE LOCAL end v7 support. Merge from mainline */
 void
 arm_expand_prologue (void)
 {
-  int reg;
+  /* APPLE LOCAL v7 support. Merge from mainline */
+  /* Remove 'reg' variable */
   rtx amount;
   rtx insn;
   rtx ip_rtx;
@@ -12105,8 +13896,41 @@
 
   ip_rtx = gen_rtx_REG (SImode, IP_REGNUM);
 
-  if (frame_pointer_needed)
+  /* APPLE LOCAL begin v7 support. Merge from mainline */
+  if (IS_STACKALIGN (func_type))
+    {
+      rtx dwarf;
+      rtx r0;
+      rtx r1;
+      /* Handle a word-aligned stack pointer.  We generate the following:
+
+	  mov r0, sp
+	  bic r1, r0, #7
+	  mov sp, r1
+	  <save and restore r0 in normal prologue/epilogue>
+	  mov sp, r0
+	  bx lr
+
+	 The unwinder doesn't need to know about the stack realignment.
+	 Just tell it we saved SP in r0.  */
+      gcc_assert (TARGET_THUMB2 && !arm_arch_notm && args_to_push == 0);
+
+      r0 = gen_rtx_REG (SImode, 0);
+      r1 = gen_rtx_REG (SImode, 1);
+      dwarf = gen_rtx_UNSPEC (SImode, NULL_RTVEC, UNSPEC_STACK_ALIGN);
+      dwarf = gen_rtx_SET (VOIDmode, r0, dwarf);
+      insn = gen_movsi (r0, stack_pointer_rtx);
+      RTX_FRAME_RELATED_P (insn) = 1;
+      REG_NOTES (insn) = gen_rtx_EXPR_LIST (REG_FRAME_RELATED_EXPR,
+					    dwarf, REG_NOTES (insn));
+      emit_insn (insn);
+      emit_insn (gen_andsi3 (r1, r0, GEN_INT (~(HOST_WIDE_INT)7)));
+      emit_insn (gen_movsi (stack_pointer_rtx, r1));
+    }
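The realignment sequence in the comment above (`mov r0, sp; bic r1, r0, #7; mov sp, r1`) clears the low three bits of SP, rounding it down to an 8-byte boundary; the `gen_andsi3` call with `~7` emits exactly that BIC. The same mask arithmetic on the host looks like this (illustrative, not compiler code):

```c
#include <stdint.h>
#include <assert.h>

/* Round an address down to an 8-byte boundary, as BIC rd, rn, #7 does.
   The AAPCS requires 8-byte stack alignment at public interfaces,
   which is why a stack-align prologue applies this mask when SP may
   be only word-aligned on entry.  */
static uint32_t align_down8 (uint32_t sp)
{
  return sp & ~(uint32_t) 7;
}
```

Since the rounding only ever moves SP downward, the original value has to be preserved (in r0 here) and restored in the epilogue.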
+
+  if (frame_pointer_needed && TARGET_ARM)
     {
+      /* APPLE LOCAL end v7 support. Merge from mainline */
       if (IS_INTERRUPT (func_type))
 	{
 	  /* Interrupt functions must not corrupt any registers.
@@ -12244,90 +14068,10 @@
     }
   /* APPLE LOCAL end ARM peephole combine reg store and stack push */
 
-  if (TARGET_IWMMXT)
-    for (reg = LAST_IWMMXT_REGNUM; reg >= FIRST_IWMMXT_REGNUM; reg--)
-      if (regs_ever_live[reg] && ! call_used_regs [reg])
-	{
-	  insn = gen_rtx_PRE_DEC (V2SImode, stack_pointer_rtx);
-	  insn = gen_frame_mem (V2SImode, insn);
-	  insn = emit_set_insn (insn, gen_rtx_REG (V2SImode, reg));
-	  RTX_FRAME_RELATED_P (insn) = 1;
-	  saved_regs += 8;
-	}
-
+  /* APPLE LOCAL begin v7 support. Merge from mainline */
   if (! IS_VOLATILE (func_type))
-    {
-      int start_reg;
-
-      /* Save any floating point call-saved registers used by this
-	 function.  */
-      if (arm_fpu_arch == FPUTYPE_FPA_EMU2)
-	{
-	  for (reg = LAST_FPA_REGNUM; reg >= FIRST_FPA_REGNUM; reg--)
-	    if (regs_ever_live[reg] && !call_used_regs[reg])
-	      {
-		insn = gen_rtx_PRE_DEC (XFmode, stack_pointer_rtx);
-		insn = gen_frame_mem (XFmode, insn);
-		insn = emit_set_insn (insn, gen_rtx_REG (XFmode, reg));
-		RTX_FRAME_RELATED_P (insn) = 1;
-		saved_regs += 12;
-	      }
-	}
-      else
-	{
-	  start_reg = LAST_FPA_REGNUM;
-
-	  for (reg = LAST_FPA_REGNUM; reg >= FIRST_FPA_REGNUM; reg--)
-	    {
-	      if (regs_ever_live[reg] && !call_used_regs[reg])
-		{
-		  if (start_reg - reg == 3)
-		    {
-		      insn = emit_sfm (reg, 4);
-		      RTX_FRAME_RELATED_P (insn) = 1;
-		      saved_regs += 48;
-		      start_reg = reg - 1;
-		    }
-		}
-	      else
-		{
-		  if (start_reg != reg)
-		    {
-		      insn = emit_sfm (reg + 1, start_reg - reg);
-		      RTX_FRAME_RELATED_P (insn) = 1;
-		      saved_regs += (start_reg - reg) * 12;
-		    }
-		  start_reg = reg - 1;
-		}
-	    }
-
-	  if (start_reg != reg)
-	    {
-	      insn = emit_sfm (reg + 1, start_reg - reg);
-	      saved_regs += (start_reg - reg) * 12;
-	      RTX_FRAME_RELATED_P (insn) = 1;
-	    }
-	}
-      if (TARGET_HARD_FLOAT && TARGET_VFP)
-	{
-	  start_reg = FIRST_VFP_REGNUM;
-
- 	  for (reg = FIRST_VFP_REGNUM; reg < LAST_VFP_REGNUM; reg += 2)
-	    {
-	      if ((!regs_ever_live[reg] || call_used_regs[reg])
-		  && (!regs_ever_live[reg + 1] || call_used_regs[reg + 1]))
-		{
-		  if (start_reg != reg)
-		    saved_regs += vfp_emit_fstmx (start_reg,
-						  (reg - start_reg) / 2);
-		  start_reg = reg + 2;
-		}
-	    }
-	  if (start_reg != reg)
-	    saved_regs += vfp_emit_fstmx (start_reg,
-					  (reg - start_reg) / 2);
-	}
-    }
+    saved_regs += arm_save_coproc_regs ();
+  /* APPLE LOCAL end v7 support. Merge from mainline */
 
   /* APPLE LOCAL ARM custom frame layout */
   /* Removed lines.  */
@@ -12361,9 +14105,23 @@
     }
 
 
+  /* APPLE LOCAL begin v7 support. Merge from mainline */
+  /* APPLE LOCAL add !live_regs_mask. that's handled above by apple code */
+  if (frame_pointer_needed && TARGET_THUMB2 && !live_regs_mask)
+    thumb_set_frame_pointer (offsets);
+
   if (flag_pic && arm_pic_register != INVALID_REGNUM)
-    arm_load_pic_register (0UL);
+    {
+      unsigned long mask;
+
+      mask = live_regs_mask;
+      mask &= THUMB2_WORK_REGS;
+      if (!IS_NESTED (func_type))
+	mask |= (1 << IP_REGNUM);
+      arm_load_pic_register (mask);
+    }
 
+  /* APPLE LOCAL end v7 support. Merge from mainline */
   /* If we are profiling, make sure no instructions are scheduled before
      the call to mcount.  Similarly if the user has requested no
      scheduling in the prolog.  Similarly if we want non-call exceptions
@@ -12382,6 +14140,45 @@
     }
 }
 
+/* APPLE LOCAL begin v7 support. Merge from mainline */
+/* Print condition code to STREAM.  Helper function for arm_print_operand.  */
+static void
+arm_print_condition (FILE *stream)
+{
+  if (arm_ccfsm_state == 3 || arm_ccfsm_state == 4)
+    {
+      /* Branch conversion is not implemented for Thumb-2.  */
+      if (TARGET_THUMB)
+	{
+	  output_operand_lossage ("predicated Thumb instruction");
+	  return;
+	}
+      if (current_insn_predicate != NULL)
+	{
+	  output_operand_lossage
+	    ("predicated instruction in conditional sequence");
+	  return;
+	}
+
+      fputs (arm_condition_codes[arm_current_cc], stream);
+    }
+  else if (current_insn_predicate)
+    {
+      enum arm_cond_code code;
+
+      if (TARGET_THUMB1)
+	{
+	  output_operand_lossage ("predicated Thumb instruction");
+	  return;
+	}
+
+      code = get_arm_condition_code (current_insn_predicate);
+      fputs (arm_condition_codes[code], stream);
+    }
+}
+
+
+/* APPLE LOCAL end v7 support. Merge from mainline */
 /* If CODE is 'd', then the X is a condition operand and the instruction
    should only be executed if the condition is true.
    if CODE is 'D', then the X is a condition operand and the instruction
@@ -12401,7 +14198,8 @@
   switch (code)
     {
     /* APPLE LOCAL begin ARM MACH assembler */
-    case '.':
+    /* APPLE LOCAL change apple '.' code to '~' for v7 to avoid conflict */
+    case '~':
 #ifdef LOCAL_LABEL_PREFIX
       fputs (LOCAL_LABEL_PREFIX, stream);
 #endif
@@ -12420,37 +14218,57 @@
       fputs (REGISTER_PREFIX, stream);
       return;
 
+    /* APPLE LOCAL begin v7 support. Merge from mainline */
     case '?':
-      if (arm_ccfsm_state == 3 || arm_ccfsm_state == 4)
-	{
-	  if (TARGET_THUMB)
-	    {
-	      output_operand_lossage ("predicated Thumb instruction");
-	      break;
-	    }
-	  if (current_insn_predicate != NULL)
-	    {
-	      output_operand_lossage
-		("predicated instruction in conditional sequence");
-	      break;
-	    }
+      arm_print_condition (stream);
+      return;
 
-	  fputs (arm_condition_codes[arm_current_cc], stream);
+    case '(':
+      /* Nothing in unified syntax, otherwise the current condition code.  */
+      if (!TARGET_UNIFIED_ASM)
+	arm_print_condition (stream);
+      break;
+
+    case ')':
+      /* The current condition code in unified syntax, otherwise nothing.  */
+      if (TARGET_UNIFIED_ASM)
+	arm_print_condition (stream);
+      break;
+  
+    case '.':
+      /* The current condition code for a condition code setting instruction.
+	 Preceded by 's' in unified syntax, otherwise followed by 's'.  */
+      if (TARGET_UNIFIED_ASM)
+	{
+	  fputc ('s', stream);
+	  arm_print_condition (stream);
 	}
-      else if (current_insn_predicate)
+      else
 	{
-	  enum arm_cond_code code;
+	  arm_print_condition (stream);
+	  fputc ('s', stream);
+	}
+      return;
 
-	  if (TARGET_THUMB)
-	    {
-	      output_operand_lossage ("predicated Thumb instruction");
-	      break;
-	    }
+    case '!':
+      /* If the instruction is conditionally executed then print
+	 the current condition code, otherwise print 's'.  */
+      gcc_assert (TARGET_THUMB2 && TARGET_UNIFIED_ASM);
+      if (current_insn_predicate)
+	arm_print_condition (stream);
+      else
+	fputc ('s', stream);
+      break;
 
-	  code = get_arm_condition_code (current_insn_predicate);
-	  fputs (arm_condition_codes[code], stream);
-	}
+    /* APPLE LOCAL end v7 support. Merge from mainline */
+    /* APPLE LOCAL begin v7 support. Merge from Codesourcery */
+    /* %# is a "break" sequence. It doesn't output anything, but is used to
+       separate e.g. operand numbers from following text, if that text consists
+       of further digits which we don't want to be part of the operand
+       number.  */
+    case '#':
       return;
+    /* APPLE LOCAL end v7 support. Merge from Codesourcery */
 
     case 'N':
       {
@@ -12461,6 +14279,14 @@
       }
       return;
 
+    /* APPLE LOCAL begin v7 support. Merge from Codesourcery */
+    /* An integer without a preceding # sign.  */
+    case 'c':
+      gcc_assert (GET_CODE (x) == CONST_INT);
+      fprintf (stream, HOST_WIDE_INT_PRINT_DEC, INTVAL (x));
+      return;
+
+    /* APPLE LOCAL end v7 support. Merge from Codesourcery */
     case 'B':
       if (GET_CODE (x) == CONST_INT)
 	{
@@ -12475,6 +14301,13 @@
 	}
       return;
 
+    /* APPLE LOCAL begin v7 support. Merge from mainline */
+    case 'L':
+      /* The low 16 bits of an immediate constant.  */
+      fprintf (stream, HOST_WIDE_INT_PRINT_DEC, INTVAL (x) & 0xffff);
+      return;
+
+    /* APPLE LOCAL end v7 support. Merge from mainline */
     case 'i':
       fprintf (stream, "%s", arithmetic_instr (x, 1));
       return;
@@ -12574,6 +14407,28 @@
       asm_fprintf (stream, "%r", REGNO (x) + 1);
       return;
 
+    /* APPLE LOCAL begin v7 support. Merge from Codesourcery */
+    case 'J':
+      if (GET_CODE (x) != REG || REGNO (x) > LAST_ARM_REGNUM)
+	{
+	  output_operand_lossage ("invalid operand for code '%c'", code);
+	  return;
+	}
+
+      asm_fprintf (stream, "%r", REGNO (x) + (WORDS_BIG_ENDIAN ? 3 : 2));
+      return;
+
+    case 'K':
+      if (GET_CODE (x) != REG || REGNO (x) > LAST_ARM_REGNUM)
+	{
+	  output_operand_lossage ("invalid operand for code '%c'", code);
+	  return;
+	}
+
+      asm_fprintf (stream, "%r", REGNO (x) + (WORDS_BIG_ENDIAN ? 2 : 3));
+      return;
+
+    /* APPLE LOCAL end v7 support. Merge from Codesourcery */
     case 'm':
       asm_fprintf (stream, "%r",
 		   GET_CODE (XEXP (x, 0)) == REG
@@ -12586,6 +14441,21 @@
 		   REGNO (x) + ARM_NUM_REGS (GET_MODE (x)) - 1);
       return;
 
+    /* APPLE LOCAL begin v7 support. Merge from Codesourcery */
+    /* Like 'M', but writing doubleword vector registers, for use by Neon
+       insns.  */
+    case 'h':
+      {
+        int regno = (REGNO (x) - FIRST_VFP_REGNUM) / 2;
+        int numregs = ARM_NUM_REGS (GET_MODE (x)) / 2;
+        if (numregs == 1)
+          asm_fprintf (stream, "{d%d}", regno);
+        else
+          asm_fprintf (stream, "{d%d-d%d}", regno, regno + numregs - 1);
+      }
+      return;
+
+    /* APPLE LOCAL end v7 support. Merge from Codesourcery */
     case 'd':
       /* CONST_TRUE_RTX means always -- that's the default.  */
       if (x == const_true_rtx)
@@ -12699,13 +14569,16 @@
 	}
       return;
 
-      /* Print a VFP double precision register name.  */
+    /* APPLE LOCAL begin v7 support. Merge from Codesourcery */
+    /* Print a VFP/Neon double precision or quad precision register name.  */
     case 'P':
+    case 'q':
       {
 	int mode = GET_MODE (x);
-	int num;
+        int is_quad = (code == 'q');
+	int regno;
 
-	if (mode != DImode && mode != DFmode)
+	if (GET_MODE_SIZE (mode) != (is_quad ? 16 : 8))
 	  {
 	    output_operand_lossage ("invalid operand for code '%c'", code);
 	    return;
@@ -12718,17 +14591,143 @@
 	    return;
 	  }
 
-	num = REGNO(x) - FIRST_VFP_REGNUM;
-	if (num & 1)
+	regno = REGNO (x);
+	if ((is_quad && !NEON_REGNO_OK_FOR_QUAD (regno))
+            || (!is_quad && !VFP_REGNO_OK_FOR_DOUBLE (regno)))
+	  {
+	    output_operand_lossage ("invalid operand for code '%c'", code);
+	    return;
+	  }
+
+	fprintf (stream, "%c%d", is_quad ? 'q' : 'd',
+	  (regno - FIRST_VFP_REGNUM) >> (is_quad ? 2 : 1));
+      }
+      return;
+
+    /* APPLE LOCAL 6150859 begin use NEON instructions for SF math */
+    /* This code prints the double precision register name starting at
+       register number of the indicated single precision register.  */
+    case 'p':
+      {
+	int mode = GET_MODE (x);
+	int regno;
+
+	if (GET_CODE (x) != REG || !IS_VFP_REGNUM (REGNO (x))
+	    || GET_MODE_SIZE (mode) != 4)
+	  {
+	    output_operand_lossage ("invalid operand for code '%c'", code);
+	    return;
+	  }
+
+	regno = REGNO (x);
+	if (((regno - FIRST_VFP_REGNUM) & 0x1) != 0)
 	  {
 	    output_operand_lossage ("invalid operand for code '%c'", code);
 	    return;
 	  }
 
-	fprintf (stream, "d%d", num >> 1);
+	fprintf (stream, "d%d", (regno - FIRST_VFP_REGNUM) >> 1);
+      }
+      return;
+    /* APPLE LOCAL 6150859 end use NEON instructions for SF math */
+
+    /* These two codes print the low/high doubleword register of a Neon quad
+       register, respectively.  For pair-structure types, they can also print
+       the low/high quadword registers.  */
+    case 'e':
+    case 'f':
+      {
+        int mode = GET_MODE (x);
+        int regno;
+        
+        if ((GET_MODE_SIZE (mode) != 16
+	     && GET_MODE_SIZE (mode) != 32) || GET_CODE (x) != REG)
+          {
+	    output_operand_lossage ("invalid operand for code '%c'", code);
+	    return;
+          }
+        
+        regno = REGNO (x);
+        if (!NEON_REGNO_OK_FOR_QUAD (regno))
+          {
+	    output_operand_lossage ("invalid operand for code '%c'", code);
+	    return;
+          }
+        
+        if (GET_MODE_SIZE (mode) == 16)
+          fprintf (stream, "d%d", ((regno - FIRST_VFP_REGNUM) >> 1)
+				  + (code == 'f' ? 1 : 0));
+        else
+          fprintf (stream, "q%d", ((regno - FIRST_VFP_REGNUM) >> 2)
+				  + (code == 'f' ? 1 : 0));
+      }
+      return;
+
+    /* APPLE LOCAL end v7 support. Merge from Codesourcery */
+    /* APPLE LOCAL begin v7 support. Merge from mainline */
+    /* Print a VFPv3 floating-point constant, represented as an integer
+       index.  */
+    case 'G':
+      {
+        int index = vfp3_const_double_index (x);
+	gcc_assert (index != -1);
+	fprintf (stream, "%d", index);
+      }
+      return;
+
+    /* APPLE LOCAL end v7 support. Merge from mainline */
+    /* APPLE LOCAL begin v7 support. Merge from Codesourcery */
+    /* Print bits representing opcode features for Neon.
+
+       Bit 0 is 1 for signed, 0 for unsigned.  Floats count as signed
+       and polynomials as unsigned.
+
+       Bit 1 is 1 for rounding functions, 0 otherwise.
+
+       Bit 2 is 1 for floats and polynomials, 0 for ordinary integers.  */
+
+    /* Identify the type as 's', 'u', 'p' or 'f'.  */
+    case 'T':
+      {
+        HOST_WIDE_INT bits = INTVAL (x);
+        fputc ((bits & 1) != 0
+	       ? ((bits & 4) != 0 ? 'f' : 's')
+	       : ((bits & 4) != 0 ? 'p' : 'u'),
+	       stream);
+      }
+      return;
+
+    /* Likewise, but signed and unsigned integers are both 'i'.  */
+    case 'F':
+      {
+        HOST_WIDE_INT bits = INTVAL (x);
+        fputc ((bits & 4) != 0
+	       ? ((bits & 1) != 0 ? 'f' : 'p')
+	       : 'i',
+	       stream);
+      }
+      return;
+
+    /* As for 'T', but emit 'u' instead of 'p'.  */
+    case 't':
+      {
+        HOST_WIDE_INT bits = INTVAL (x);
+        fputc ((bits & 1) != 0
+	       ? ((bits & 4) != 0 ? 'f' : 's')
+	       : 'u',
+	       stream);
+      }
+      return;
+
+    /* Bit 1: rounding (vs none).  */
+    case 'O':
+      {
+        HOST_WIDE_INT bits = INTVAL (x);
+        fputs ((bits & 2) != 0 ? "r" : "", stream);
       }
       return;
 
+    /* APPLE LOCAL end v7 support. Merge from Codesourcery */
     default:
       if (x == 0)
 	{
@@ -12748,7 +14747,17 @@
 	  break;
 
 	case CONST_DOUBLE:
-	  fprintf (stream, "#%s", fp_immediate_constant (x));
+          /* APPLE LOCAL begin v7 support. Merge from Codesourcery */
+          if (TARGET_NEON)
+            {
+              char fpstr[20];
+              real_to_decimal (fpstr, CONST_DOUBLE_REAL_VALUE (x),
+			       sizeof (fpstr), 0, 1);
+              fprintf (stream, "#%s", fpstr);
+            }
+          else
+	    fprintf (stream, "#%s", fp_immediate_constant (x));
+          /* APPLE LOCAL end v7 support. Merge from Codesourcery */
 	  break;
 
 	default:
@@ -12766,6 +14775,10 @@
 static bool
 arm_assemble_integer (rtx x, unsigned int size, int aligned_p)
 {
+  /* APPLE LOCAL begin v7 support. Merge from Codesourcery */
+  enum machine_mode mode;
+
+  /* APPLE LOCAL end v7 support. Merge from Codesourcery */
   /* APPLE LOCAL begin ARM MACH assembler */
   /* We can always handle unaligned data with the normal pseudoops.  */
   if (TARGET_MACHO)
@@ -12796,34 +14809,53 @@
       return true;
     }
 
-  if (arm_vector_mode_supported_p (GET_MODE (x)))
+  /* APPLE LOCAL begin v7 support. Merge from Codesourcery */
+  mode = GET_MODE (x);
+
+  if (arm_vector_mode_supported_p (mode))
     {
       int i, units;
+      unsigned int invmask = 0, parts_per_word;
 
       gcc_assert (GET_CODE (x) == CONST_VECTOR);
 
       units = CONST_VECTOR_NUNITS (x);
-
-      switch (GET_MODE (x))
-	{
-	case V2SImode: size = 4; break;
-	case V4HImode: size = 2; break;
-	case V8QImode: size = 1; break;
-	default:
-	  gcc_unreachable ();
-	}
-
+      size = GET_MODE_SIZE (GET_MODE_INNER (mode));
+      
+      /* For big-endian Neon vectors, we must permute the vector to the form
+         which, when loaded by a VLDR or VLDM instruction, will give a vector
+         with the elements in the right order.  */
+      if (TARGET_NEON && WORDS_BIG_ENDIAN)
+        {
+          parts_per_word = UNITS_PER_WORD / size;
+          /* FIXME: This might be wrong for 64-bit vector elements, but we don't
+             support those anywhere yet.  */
+          invmask = (parts_per_word == 0) ? 0 : (1 << (parts_per_word - 1)) - 1;
+        }
+      
+      if (GET_MODE_CLASS (mode) == MODE_VECTOR_INT)
       for (i = 0; i < units; i++)
 	{
-	  rtx elt;
-
-	  elt = CONST_VECTOR_ELT (x, i);
+	    rtx elt = CONST_VECTOR_ELT (x, i ^ invmask);
 	  assemble_integer
 	    (elt, size, i == 0 ? BIGGEST_ALIGNMENT : size * BITS_PER_UNIT, 1);
 	}
+      else
+        for (i = 0; i < units; i++)
+          {
+            rtx elt = CONST_VECTOR_ELT (x, i);
+            REAL_VALUE_TYPE rval;
+            
+            REAL_VALUE_FROM_CONST_DOUBLE (rval, elt);
+            
+            assemble_real
+              (rval, GET_MODE_INNER (mode),
+              i == 0 ? BIGGEST_ALIGNMENT : size * BITS_PER_UNIT);
+          }
 
       return true;
     }
+  /* APPLE LOCAL end v7 support. Merge from Codesourcery */
 
   return default_assemble_integer (x, size, aligned_p);
 }
@@ -12884,6 +14916,15 @@
    time.  But then, I want to reduce the code size to somewhere near what
    /bin/cc produces.  */
 
+/* APPLE LOCAL begin v7 support. Merge from mainline */
+/* In addition to this, state is maintained for Thumb-2 COND_EXEC
+   instructions.  When a COND_EXEC instruction is seen the subsequent
+   instructions are scanned so that multiple conditional instructions can be
+   combined into a single IT block.  arm_condexec_count and arm_condexec_mask
+   specify the length and true/false mask for the IT block.  These will be
+   decremented/zeroed by arm_asm_output_opcode as the insns are output.  */
+
+/* APPLE LOCAL end v7 support. Merge from mainline */
 /* Returns the index of the ARM condition code string in
    `arm_condition_codes'.  COMPARISON should be an rtx like
    `(eq (...) (...))'.  */
@@ -13013,14 +15054,96 @@
     }
 }
 
+/* APPLE LOCAL begin v7 support. Merge from mainline */
+/* Tell arm_asm_output_opcode to output IT blocks for conditionally executed
+   instructions.  */
+void
+thumb2_final_prescan_insn (rtx insn)
+{
+  rtx first_insn = insn;
+  rtx body = PATTERN (insn);
+  rtx predicate;
+  enum arm_cond_code code;
+  int n;
+  int mask;
+
+  /* Remove the previous insn from the count of insns to be output.  */
+  if (arm_condexec_count)
+    arm_condexec_count--;
+
+  /* Nothing to do if we are already inside a conditional block.  */
+  if (arm_condexec_count)
+    return;
+
+  if (GET_CODE (body) != COND_EXEC)
+    return;
+
+  /* Conditional jumps are implemented directly.  */
+  if (GET_CODE (insn) == JUMP_INSN)
+    return;
+
+  predicate = COND_EXEC_TEST (body);
+  arm_current_cc = get_arm_condition_code (predicate);
+
+  n = get_attr_ce_count (insn);
+  arm_condexec_count = 1;
+  arm_condexec_mask = (1 << n) - 1;
+  arm_condexec_masklen = n;
+  /* See if subsequent instructions can be combined into the same block.  */
+  for (;;)
+    {
+      insn = next_nonnote_insn (insn);
+
+      /* Jumping into the middle of an IT block is illegal, so a label or
+         barrier terminates the block.  */
+      if (GET_CODE (insn) != INSN && GET_CODE (insn) != JUMP_INSN)
+	break;
+
+      body = PATTERN (insn);
+      /* USE and CLOBBER aren't really insns, so just skip them.  */
+      if (GET_CODE (body) == USE
+	  || GET_CODE (body) == CLOBBER)
+	continue;
+
+      /* ??? Recognise conditional jumps, and combine them with IT blocks.  */
+      if (GET_CODE (body) != COND_EXEC)
+	break;
+      /* Allow up to 4 conditionally executed instructions in a block.  */
+      n = get_attr_ce_count (insn);
+      if (arm_condexec_masklen + n > 4)
+	break;
+
+      predicate = COND_EXEC_TEST (body);
+      code = get_arm_condition_code (predicate);
+      mask = (1 << n) - 1;
+      if (arm_current_cc == code)
+	arm_condexec_mask |= (mask << arm_condexec_masklen);
+      else if (arm_current_cc != ARM_INVERSE_CONDITION_CODE (code))
+	break;
+
+      arm_condexec_count++;
+      arm_condexec_masklen += n;
+
+      /* A jump must be the last instruction in a conditional block.  */
+      if (GET_CODE (insn) == JUMP_INSN)
+	break;
+    }
+  /* Restore recog_data (getting the attributes of other insns can
+     destroy this array, but final.c assumes that it remains intact
+     across this call).  */
+  extract_constrain_insn_cached (first_insn);
+}
+
+/* APPLE LOCAL end v7 support. Merge from mainline */
 void
 arm_final_prescan_insn (rtx insn)
 {
-/* LLVM LOCAL begin */
+  /* LLVM LOCAL begin */
 #ifdef ENABLE_LLVM
   insn = insn;
+  return;
 #else
-/* LLVM LOCAL end */
+  /* LLVM LOCAL end */
   /* BODY will hold the body of INSN.  */
   rtx body = PATTERN (insn);
 
@@ -13298,6 +15421,15 @@
 		  && GET_CODE (scanbody) != CLOBBER
 		  && get_attr_cirrus (this_insn) != CIRRUS_NOT)
 		fail = TRUE;
+
+	      /* APPLE LOCAL begin 6280380 */
+	      /* While most ARM instructions are predicable, a few,
+		 like NEON instructions, aren't.  */
+	      if (GET_CODE (scanbody) != USE
+		  && GET_CODE (scanbody) != CLOBBER
+		  && get_attr_predicable (this_insn) == PREDICABLE_NO)
+		fail = TRUE;
+	      /* APPLE LOCAL end 6280380 */
 	      break;
 
 	    default:
@@ -13322,7 +15454,8 @@
 	      if (!this_insn)
 	        {
 		  /* Oh, dear! we ran off the end.. give up.  */
-		  recog (PATTERN (insn), insn, NULL);
+                  /* APPLE LOCAL v7 support. Merge from mainline */
+		  extract_constrain_insn_cached (insn);
 		  arm_ccfsm_state = 0;
 		  arm_target_insn = NULL;
 		  return;
@@ -13353,15 +15486,36 @@
 	    arm_current_cc = ARM_INVERSE_CONDITION_CODE (arm_current_cc);
 	}
 
+/* APPLE LOCAL begin v7 support. Merge from mainline */
       /* Restore recog_data (getting the attributes of other insns can
 	 destroy this array, but final.c assumes that it remains intact
-	 across this call; since the insn has been recognized already we
-	 call recog direct).  */
-      recog (PATTERN (insn), insn, NULL);
+	 across this call).  */
+      extract_constrain_insn_cached (insn);
     }
 /* LLVM LOCAL */
 #endif
 }
+/* APPLE LOCAL end v7 support. Merge from mainline */
+
+/* APPLE LOCAL begin v7 support. Merge from mainline */
+/* Output IT instructions.  */
+void
+thumb2_asm_output_opcode (FILE * stream)
+{
+  char buff[5];
+  int n;
+
+  if (arm_condexec_mask)
+    {
+      for (n = 0; n < arm_condexec_masklen; n++)
+	buff[n] = (arm_condexec_mask & (1 << n)) ? 't' : 'e';
+      buff[n] = 0;
+      asm_fprintf (stream, "i%s\t%s\n\t", buff,
+		  arm_condition_codes[arm_current_cc]);
+      arm_condexec_mask = 0;
+    }
+}
+/* APPLE LOCAL end v7 support. Merge from mainline */
 
 /* Returns true if REGNO is a valid register
    for holding a quantity of type MODE.  */
@@ -13373,7 +15527,8 @@
 	    || (TARGET_HARD_FLOAT && TARGET_VFP
 		&& regno == VFPCC_REGNUM));
 
-  if (TARGET_THUMB)
+  /* APPLE LOCAL v7 support. Merge from mainline */
+  if (TARGET_THUMB1)
     /* For the Thumb we only allow values bigger than SImode in
        registers 0 - 6, so that there is always a second low
        register available to hold the upper part of the value.
@@ -13393,12 +15548,26 @@
   if (TARGET_HARD_FLOAT && TARGET_VFP
       && IS_VFP_REGNUM (regno))
     {
+      /* APPLE LOCAL begin v7 support. Merge from mainline */
       if (mode == SFmode || mode == SImode)
-	return TRUE;
+	return VFP_REGNO_OK_FOR_SINGLE (regno);
 
-      /* DFmode values are only valid in even register pairs.  */
       if (mode == DFmode)
-	return ((regno - FIRST_VFP_REGNUM) & 1) == 0;
+	return VFP_REGNO_OK_FOR_DOUBLE (regno);
+      /* APPLE LOCAL end v7 support. Merge from mainline */
+      /* APPLE LOCAL begin v7 support. Merge from Codesourcery */
+      
+      if (TARGET_NEON)
+        return (VALID_NEON_DREG_MODE (mode) && VFP_REGNO_OK_FOR_DOUBLE (regno))
+               || (VALID_NEON_QREG_MODE (mode)
+                   && NEON_REGNO_OK_FOR_QUAD (regno))
+	       || (mode == TImode && NEON_REGNO_OK_FOR_NREGS (regno, 2))
+	       || (mode == EImode && NEON_REGNO_OK_FOR_NREGS (regno, 3))
+	       || (mode == OImode && NEON_REGNO_OK_FOR_NREGS (regno, 4))
+	       || (mode == CImode && NEON_REGNO_OK_FOR_NREGS (regno, 6))
+	       || (mode == XImode && NEON_REGNO_OK_FOR_NREGS (regno, 8));
+      
+      /* APPLE LOCAL end v7 support. Merge from Codesourcery */
       return FALSE;
     }
 
@@ -13411,11 +15580,15 @@
 	return VALID_IWMMXT_REG_MODE (mode);
     }
   
+  /* APPLE LOCAL begin v7 support. Merge from Codesourcery */
   /* We allow any value to be stored in the general registers.
      Restrict doubleword quantities to even register pairs so that we can
-     use ldrd.  */
+     use ldrd.  Do not allow Neon structure opaque modes in general registers;
+     they would use too many.  */
   if (regno <= LAST_ARM_REGNUM)
-    return !(TARGET_LDRD && GET_MODE_SIZE (mode) > 4 && (regno & 1) != 0);
+    return !(TARGET_LDRD && GET_MODE_SIZE (mode) > 4 && (regno & 1) != 0)
+      && !VALID_NEON_STRUCT_MODE (mode);
+  /* APPLE LOCAL end v7 support. Merge from Codesourcery */
 
   if (regno == FRAME_POINTER_REGNUM
       || regno == ARG_POINTER_REGNUM)
@@ -13430,10 +15603,13 @@
 	  && regno <= LAST_FPA_REGNUM);
 }
 
+/* APPLE LOCAL begin v7 support. Merge from mainline */
+/* For efficiency and historical reasons, LO_REGS, HI_REGS and CC_REGS are
+   not used in ARM mode.  */
 int
 arm_regno_class (int regno)
 {
-  if (TARGET_THUMB)
+  if (TARGET_THUMB1)
     {
       if (regno == STACK_POINTER_REGNUM)
 	return STACK_REG;
@@ -13444,19 +15620,29 @@
       return HI_REGS;
     }
 
+  if (TARGET_THUMB2 && regno < 8)
+    return LO_REGS;
+
   if (   regno <= LAST_ARM_REGNUM
       || regno == FRAME_POINTER_REGNUM
       || regno == ARG_POINTER_REGNUM)
-    return GENERAL_REGS;
+    return TARGET_THUMB2 ? HI_REGS : GENERAL_REGS;
 
   if (regno == CC_REGNUM || regno == VFPCC_REGNUM)
-    return NO_REGS;
+    return TARGET_THUMB2 ? CC_REG : NO_REGS;
 
   if (IS_CIRRUS_REGNUM (regno))
     return CIRRUS_REGS;
 
   if (IS_VFP_REGNUM (regno))
-    return VFP_REGS;
+    {
+      if (regno <= D7_VFP_REGNUM)
+	return VFP_D0_D7_REGS;
+      else if (regno <= LAST_LO_VFP_REGNUM)
+        return VFP_LO_REGS;
+      else
+        return VFP_HI_REGS;
+    }
 
   if (IS_IWMMXT_REGNUM (regno))
     return IWMMXT_REGS;
@@ -13466,6 +15652,7 @@
 
   return FPA_REGS;
 }
+/* APPLE LOCAL end v7 support. Merge from mainline */
 
 /* Handle a special case when computing the offset
    of an argument from the frame pointer.  */
@@ -13505,6 +15692,8 @@
 
   /* If we are using the stack pointer to point at the
      argument, then an offset of 0 is correct.  */
+  /* APPLE LOCAL v7 support. Merge from mainline */
+  /* ??? Check this is consistent with thumb2 frame layout.  */
   if ((TARGET_THUMB || !frame_pointer_needed)
       && REGNO (addr) == SP_REGNUM)
     return 0;
@@ -14048,6 +16237,2807 @@
 			       NULL, const_nothrow);
 }
 
+/* APPLE LOCAL begin v7 support. Merge from Codesourcery */
+typedef enum {
+  T_V8QI  = 0x0001,
+  T_V4HI  = 0x0002,
+  T_V2SI  = 0x0004,
+  T_V2SF  = 0x0008,
+  T_DI    = 0x0010,
+  T_V16QI = 0x0020,
+  T_V8HI  = 0x0040,
+  T_V4SI  = 0x0080,
+  T_V4SF  = 0x0100,
+  T_V2DI  = 0x0200,
+  T_TI	  = 0x0400,
+  T_EI	  = 0x0800,
+  T_OI	  = 0x1000
+} neon_builtin_type_bits;
+
+#define v8qi_UP  T_V8QI
+#define v4hi_UP  T_V4HI
+#define v2si_UP  T_V2SI
+#define v2sf_UP  T_V2SF
+#define di_UP    T_DI
+#define v16qi_UP T_V16QI
+#define v8hi_UP  T_V8HI
+#define v4si_UP  T_V4SI
+#define v4sf_UP  T_V4SF
+#define v2di_UP  T_V2DI
+#define ti_UP	 T_TI
+#define ei_UP	 T_EI
+#define oi_UP	 T_OI
+
+#define UP(X) X##_UP
+
+#define T_MAX 13
+
+/* FIXME: Add other types of insn (loads & stores, etc.).  */
+typedef enum {
+  NEON_BINOP,
+  NEON_TERNOP,
+  NEON_UNOP,
+  NEON_GETLANE,
+  NEON_SETLANE,
+  NEON_CREATE,
+  NEON_DUP,
+  NEON_DUPLANE,
+  NEON_COMBINE,
+  NEON_SPLIT,
+  NEON_LANEMUL,
+  NEON_LANEMULL,
+  NEON_LANEMULH,
+  NEON_LANEMAC,
+  NEON_SCALARMUL,
+  NEON_SCALARMULL,
+  NEON_SCALARMULH,
+  NEON_SCALARMAC,
+  NEON_CONVERT,
+  NEON_FIXCONV,
+  NEON_SELECT,
+  NEON_RESULTPAIR,
+  NEON_REINTERP,
+  NEON_VTBL,
+  NEON_VTBX,
+  NEON_LOAD1,
+  NEON_LOAD1LANE,
+  NEON_STORE1,
+  NEON_STORE1LANE,
+  NEON_LOADSTRUCT,
+  NEON_LOADSTRUCTLANE,
+  NEON_STORESTRUCT,
+  NEON_STORESTRUCTLANE,
+  NEON_LOGICBINOP,
+  NEON_SHIFTINSERT,
+  NEON_SHIFTIMM,
+  NEON_SHIFTACC
+} neon_itype;
+
+typedef struct {
+  const char *name;
+  const neon_itype itype;
+  const neon_builtin_type_bits bits;
+  const enum insn_code codes[T_MAX];
+  const unsigned int num_vars;
+  unsigned int base_fcode;
+} neon_builtin_datum;
+
+#define CF(N,X) CODE_FOR_neon_##N##X
+
+#define VAR1(T, N, A) \
+  #N, NEON_##T, UP (A), { CF (N, A) }, 1, 0
+#define VAR2(T, N, A, B) \
+  #N, NEON_##T, UP (A) | UP (B), { CF (N, A), CF (N, B) }, 2, 0
+#define VAR3(T, N, A, B, C) \
+  #N, NEON_##T, UP (A) | UP (B) | UP (C), \
+  { CF (N, A), CF (N, B), CF (N, C) }, 3, 0
+#define VAR4(T, N, A, B, C, D) \
+  #N, NEON_##T, UP (A) | UP (B) | UP (C) | UP (D), \
+  { CF (N, A), CF (N, B), CF (N, C), CF (N, D) }, 4, 0
+#define VAR5(T, N, A, B, C, D, E) \
+  #N, NEON_##T, UP (A) | UP (B) | UP (C) | UP (D) | UP (E), \
+  { CF (N, A), CF (N, B), CF (N, C), CF (N, D), CF (N, E) }, 5, 0
+#define VAR6(T, N, A, B, C, D, E, F) \
+  #N, NEON_##T, UP (A) | UP (B) | UP (C) | UP (D) | UP (E) | UP (F), \
+  { CF (N, A), CF (N, B), CF (N, C), CF (N, D), CF (N, E), CF (N, F) }, 6, 0
+#define VAR7(T, N, A, B, C, D, E, F, G) \
+  #N, NEON_##T, UP (A) | UP (B) | UP (C) | UP (D) | UP (E) | UP (F) | UP (G), \
+  { CF (N, A), CF (N, B), CF (N, C), CF (N, D), CF (N, E), CF (N, F), \
+    CF (N, G) }, 7, 0
+#define VAR8(T, N, A, B, C, D, E, F, G, H) \
+  #N, NEON_##T, UP (A) | UP (B) | UP (C) | UP (D) | UP (E) | UP (F) | UP (G) \
+                | UP (H), \
+  { CF (N, A), CF (N, B), CF (N, C), CF (N, D), CF (N, E), CF (N, F), \
+    CF (N, G), CF (N, H) }, 8, 0
+#define VAR9(T, N, A, B, C, D, E, F, G, H, I) \
+  #N, NEON_##T, UP (A) | UP (B) | UP (C) | UP (D) | UP (E) | UP (F) | UP (G) \
+                | UP (H) | UP (I), \
+  { CF (N, A), CF (N, B), CF (N, C), CF (N, D), CF (N, E), CF (N, F), \
+    CF (N, G), CF (N, H), CF (N, I) }, 9, 0
+#define VAR10(T, N, A, B, C, D, E, F, G, H, I, J) \
+  #N, NEON_##T, UP (A) | UP (B) | UP (C) | UP (D) | UP (E) | UP (F) | UP (G) \
+                | UP (H) | UP (I) | UP (J), \
+  { CF (N, A), CF (N, B), CF (N, C), CF (N, D), CF (N, E), CF (N, F), \
+    CF (N, G), CF (N, H), CF (N, I), CF (N, J) }, 10, 0
+
+/* The mode entries in the following table correspond to the "key" type of the
+   instruction variant, i.e. equivalent to that which would be specified after
+   the assembler mnemonic, which usually refers to the last vector operand.
+   (Signed, unsigned and polynomial types are not differentiated, though; they
+   are all mapped onto the same mode for a given element size.)  The modes
+   listed per instruction should be the same as those defined for that
+   instruction's pattern in neon.md.
+   WARNING: Variants should be listed in the same increasing order as
+   neon_builtin_type_bits.  */
+
+static neon_builtin_datum neon_builtin_data[] =
+{
+  { VAR10 (BINOP, vadd,
+	   v8qi, v4hi, v2si, v2sf, di, v16qi, v8hi, v4si, v4sf, v2di) },
+  { VAR3 (BINOP, vaddl, v8qi, v4hi, v2si) },
+  { VAR3 (BINOP, vaddw, v8qi, v4hi, v2si) },
+  { VAR6 (BINOP, vhadd, v8qi, v4hi, v2si, v16qi, v8hi, v4si) },
+  { VAR8 (BINOP, vqadd, v8qi, v4hi, v2si, di, v16qi, v8hi, v4si, v2di) },
+  { VAR3 (BINOP, vaddhn, v8hi, v4si, v2di) },
+  { VAR8 (BINOP, vmul, v8qi, v4hi, v2si, v2sf, v16qi, v8hi, v4si, v4sf) },
+  { VAR8 (TERNOP, vmla, v8qi, v4hi, v2si, v2sf, v16qi, v8hi, v4si, v4sf) },
+  { VAR3 (TERNOP, vmlal, v8qi, v4hi, v2si) },
+  { VAR8 (TERNOP, vmls, v8qi, v4hi, v2si, v2sf, v16qi, v8hi, v4si, v4sf) },
+  { VAR3 (TERNOP, vmlsl, v8qi, v4hi, v2si) },
+  { VAR4 (BINOP, vqdmulh, v4hi, v2si, v8hi, v4si) },
+  { VAR2 (TERNOP, vqdmlal, v4hi, v2si) },
+  { VAR2 (TERNOP, vqdmlsl, v4hi, v2si) },
+  { VAR3 (BINOP, vmull, v8qi, v4hi, v2si) },
+  { VAR2 (SCALARMULL, vmull_n, v4hi, v2si) },
+  { VAR2 (LANEMULL, vmull_lane, v4hi, v2si) },
+  { VAR2 (SCALARMULL, vqdmull_n, v4hi, v2si) },
+  { VAR2 (LANEMULL, vqdmull_lane, v4hi, v2si) },
+  { VAR4 (SCALARMULH, vqdmulh_n, v4hi, v2si, v8hi, v4si) },
+  { VAR4 (LANEMULH, vqdmulh_lane, v4hi, v2si, v8hi, v4si) },
+  { VAR2 (BINOP, vqdmull, v4hi, v2si) },
+  { VAR8 (BINOP, vshl, v8qi, v4hi, v2si, di, v16qi, v8hi, v4si, v2di) },
+  { VAR8 (BINOP, vqshl, v8qi, v4hi, v2si, di, v16qi, v8hi, v4si, v2di) },
+  { VAR8 (SHIFTIMM, vshr_n, v8qi, v4hi, v2si, di, v16qi, v8hi, v4si, v2di) },
+  { VAR3 (SHIFTIMM, vshrn_n, v8hi, v4si, v2di) },
+  { VAR3 (SHIFTIMM, vqshrn_n, v8hi, v4si, v2di) },
+  { VAR3 (SHIFTIMM, vqshrun_n, v8hi, v4si, v2di) },
+  { VAR8 (SHIFTIMM, vshl_n, v8qi, v4hi, v2si, di, v16qi, v8hi, v4si, v2di) },
+  { VAR8 (SHIFTIMM, vqshl_n, v8qi, v4hi, v2si, di, v16qi, v8hi, v4si, v2di) },
+  { VAR8 (SHIFTIMM, vqshlu_n, v8qi, v4hi, v2si, di, v16qi, v8hi, v4si, v2di) },
+  { VAR3 (SHIFTIMM, vshll_n, v8qi, v4hi, v2si) },
+  { VAR8 (SHIFTACC, vsra_n, v8qi, v4hi, v2si, di, v16qi, v8hi, v4si, v2di) },
+  { VAR10 (BINOP, vsub,
+	   v8qi, v4hi, v2si, v2sf, di, v16qi, v8hi, v4si, v4sf, v2di) },
+  { VAR3 (BINOP, vsubl, v8qi, v4hi, v2si) },
+  { VAR3 (BINOP, vsubw, v8qi, v4hi, v2si) },
+  { VAR8 (BINOP, vqsub, v8qi, v4hi, v2si, di, v16qi, v8hi, v4si, v2di) },
+  { VAR6 (BINOP, vhsub, v8qi, v4hi, v2si, v16qi, v8hi, v4si) },
+  { VAR3 (BINOP, vsubhn, v8hi, v4si, v2di) },
+  { VAR8 (BINOP, vceq, v8qi, v4hi, v2si, v2sf, v16qi, v8hi, v4si, v4sf) },
+  { VAR8 (BINOP, vcge, v8qi, v4hi, v2si, v2sf, v16qi, v8hi, v4si, v4sf) },
+  { VAR8 (BINOP, vcgt, v8qi, v4hi, v2si, v2sf, v16qi, v8hi, v4si, v4sf) },
+  { VAR2 (BINOP, vcage, v2sf, v4sf) },
+  { VAR2 (BINOP, vcagt, v2sf, v4sf) },
+  { VAR6 (BINOP, vtst, v8qi, v4hi, v2si, v16qi, v8hi, v4si) },
+  { VAR8 (BINOP, vabd, v8qi, v4hi, v2si, v2sf, v16qi, v8hi, v4si, v4sf) },
+  { VAR3 (BINOP, vabdl, v8qi, v4hi, v2si) },
+  { VAR6 (TERNOP, vaba, v8qi, v4hi, v2si, v16qi, v8hi, v4si) },
+  { VAR3 (TERNOP, vabal, v8qi, v4hi, v2si) },
+  { VAR8 (BINOP, vmax, v8qi, v4hi, v2si, v2sf, v16qi, v8hi, v4si, v4sf) },
+  { VAR8 (BINOP, vmin, v8qi, v4hi, v2si, v2sf, v16qi, v8hi, v4si, v4sf) },
+  { VAR4 (BINOP, vpadd, v8qi, v4hi, v2si, v2sf) },
+  { VAR6 (UNOP, vpaddl, v8qi, v4hi, v2si, v16qi, v8hi, v4si) },
+  { VAR6 (BINOP, vpadal, v8qi, v4hi, v2si, v16qi, v8hi, v4si) },
+  { VAR4 (BINOP, vpmax, v8qi, v4hi, v2si, v2sf) },
+  { VAR4 (BINOP, vpmin, v8qi, v4hi, v2si, v2sf) },
+  { VAR2 (BINOP, vrecps, v2sf, v4sf) },
+  { VAR2 (BINOP, vrsqrts, v2sf, v4sf) },
+  { VAR8 (SHIFTINSERT, vsri_n, v8qi, v4hi, v2si, di, v16qi, v8hi, v4si, v2di) },
+  { VAR8 (SHIFTINSERT, vsli_n, v8qi, v4hi, v2si, di, v16qi, v8hi, v4si, v2di) },
+  { VAR8 (UNOP, vabs, v8qi, v4hi, v2si, v2sf, v16qi, v8hi, v4si, v4sf) },
+  { VAR6 (UNOP, vqabs, v8qi, v4hi, v2si, v16qi, v8hi, v4si) },
+  { VAR8 (UNOP, vneg, v8qi, v4hi, v2si, v2sf, v16qi, v8hi, v4si, v4sf) },
+  { VAR6 (UNOP, vqneg, v8qi, v4hi, v2si, v16qi, v8hi, v4si) },
+  { VAR6 (UNOP, vcls, v8qi, v4hi, v2si, v16qi, v8hi, v4si) },
+  { VAR6 (UNOP, vclz, v8qi, v4hi, v2si, v16qi, v8hi, v4si) },
+  { VAR2 (UNOP, vcnt, v8qi, v16qi) },
+  { VAR4 (UNOP, vrecpe, v2si, v2sf, v4si, v4sf) },
+  { VAR4 (UNOP, vrsqrte, v2si, v2sf, v4si, v4sf) },
+  { VAR6 (UNOP, vmvn, v8qi, v4hi, v2si, v16qi, v8hi, v4si) },
+  /* FIXME: vget_lane supports more variants than this!  */
+  { VAR10 (GETLANE, vget_lane,
+	   v8qi, v4hi, v2si, v2sf, di, v16qi, v8hi, v4si, v4sf, v2di) },
+  { VAR10 (SETLANE, vset_lane,
+	   v8qi, v4hi, v2si, v2sf, di, v16qi, v8hi, v4si, v4sf, v2di) },
+  { VAR5 (CREATE, vcreate, v8qi, v4hi, v2si, v2sf, di) },
+  { VAR10 (DUP, vdup_n,
+	   v8qi, v4hi, v2si, v2sf, di, v16qi, v8hi, v4si, v4sf, v2di) },
+  { VAR10 (DUPLANE, vdup_lane,
+	   v8qi, v4hi, v2si, v2sf, di, v16qi, v8hi, v4si, v4sf, v2di) },
+  { VAR5 (COMBINE, vcombine, v8qi, v4hi, v2si, v2sf, di) },
+  { VAR5 (SPLIT, vget_high, v16qi, v8hi, v4si, v4sf, v2di) },
+  { VAR5 (SPLIT, vget_low, v16qi, v8hi, v4si, v4sf, v2di) },
+  { VAR3 (UNOP, vmovn, v8hi, v4si, v2di) },
+  { VAR3 (UNOP, vqmovn, v8hi, v4si, v2di) },
+  { VAR3 (UNOP, vqmovun, v8hi, v4si, v2di) },
+  { VAR3 (UNOP, vmovl, v8qi, v4hi, v2si) },
+  { VAR6 (LANEMUL, vmul_lane, v4hi, v2si, v2sf, v8hi, v4si, v4sf) },
+  { VAR6 (LANEMAC, vmla_lane, v4hi, v2si, v2sf, v8hi, v4si, v4sf) },
+  { VAR2 (LANEMAC, vmlal_lane, v4hi, v2si) },
+  { VAR2 (LANEMAC, vqdmlal_lane, v4hi, v2si) },
+  { VAR6 (LANEMAC, vmls_lane, v4hi, v2si, v2sf, v8hi, v4si, v4sf) },
+  { VAR2 (LANEMAC, vmlsl_lane, v4hi, v2si) },
+  { VAR2 (LANEMAC, vqdmlsl_lane, v4hi, v2si) },
+  { VAR6 (SCALARMUL, vmul_n, v4hi, v2si, v2sf, v8hi, v4si, v4sf) },
+  { VAR6 (SCALARMAC, vmla_n, v4hi, v2si, v2sf, v8hi, v4si, v4sf) },
+  { VAR2 (SCALARMAC, vmlal_n, v4hi, v2si) },
+  { VAR2 (SCALARMAC, vqdmlal_n, v4hi, v2si) },
+  { VAR6 (SCALARMAC, vmls_n, v4hi, v2si, v2sf, v8hi, v4si, v4sf) },
+  { VAR2 (SCALARMAC, vmlsl_n, v4hi, v2si) },
+  { VAR2 (SCALARMAC, vqdmlsl_n, v4hi, v2si) },
+  { VAR10 (BINOP, vext,
+	   v8qi, v4hi, v2si, v2sf, di, v16qi, v8hi, v4si, v4sf, v2di) },
+  { VAR8 (UNOP, vrev64, v8qi, v4hi, v2si, v2sf, v16qi, v8hi, v4si, v4sf) },
+  { VAR4 (UNOP, vrev32, v8qi, v4hi, v16qi, v8hi) },
+  { VAR2 (UNOP, vrev16, v8qi, v16qi) },
+  { VAR4 (CONVERT, vcvt, v2si, v2sf, v4si, v4sf) },
+  { VAR4 (FIXCONV, vcvt_n, v2si, v2sf, v4si, v4sf) },
+  { VAR10 (SELECT, vbsl,
+	   v8qi, v4hi, v2si, v2sf, di, v16qi, v8hi, v4si, v4sf, v2di) },
+  { VAR1 (VTBL, vtbl1, v8qi) },
+  { VAR1 (VTBL, vtbl2, v8qi) },
+  { VAR1 (VTBL, vtbl3, v8qi) },
+  { VAR1 (VTBL, vtbl4, v8qi) },
+  { VAR1 (VTBX, vtbx1, v8qi) },
+  { VAR1 (VTBX, vtbx2, v8qi) },
+  { VAR1 (VTBX, vtbx3, v8qi) },
+  { VAR1 (VTBX, vtbx4, v8qi) },
+  { VAR8 (RESULTPAIR, vtrn, v8qi, v4hi, v2si, v2sf, v16qi, v8hi, v4si, v4sf) },
+  { VAR8 (RESULTPAIR, vzip, v8qi, v4hi, v2si, v2sf, v16qi, v8hi, v4si, v4sf) },
+  { VAR8 (RESULTPAIR, vuzp, v8qi, v4hi, v2si, v2sf, v16qi, v8hi, v4si, v4sf) },
+  { VAR5 (REINTERP, vreinterpretv8qi, v8qi, v4hi, v2si, v2sf, di) },
+  { VAR5 (REINTERP, vreinterpretv4hi, v8qi, v4hi, v2si, v2sf, di) },
+  { VAR5 (REINTERP, vreinterpretv2si, v8qi, v4hi, v2si, v2sf, di) },
+  { VAR5 (REINTERP, vreinterpretv2sf, v8qi, v4hi, v2si, v2sf, di) },
+  { VAR5 (REINTERP, vreinterpretdi, v8qi, v4hi, v2si, v2sf, di) },
+  { VAR5 (REINTERP, vreinterpretv16qi, v16qi, v8hi, v4si, v4sf, v2di) },
+  { VAR5 (REINTERP, vreinterpretv8hi, v16qi, v8hi, v4si, v4sf, v2di) },
+  { VAR5 (REINTERP, vreinterpretv4si, v16qi, v8hi, v4si, v4sf, v2di) },
+  { VAR5 (REINTERP, vreinterpretv4sf, v16qi, v8hi, v4si, v4sf, v2di) },
+  { VAR5 (REINTERP, vreinterpretv2di, v16qi, v8hi, v4si, v4sf, v2di) },
+  { VAR10 (LOAD1, vld1,
+           v8qi, v4hi, v2si, v2sf, di, v16qi, v8hi, v4si, v4sf, v2di) },
+  { VAR10 (LOAD1LANE, vld1_lane,
+	   v8qi, v4hi, v2si, v2sf, di, v16qi, v8hi, v4si, v4sf, v2di) },
+  { VAR10 (LOAD1, vld1_dup,
+	   v8qi, v4hi, v2si, v2sf, di, v16qi, v8hi, v4si, v4sf, v2di) },
+  { VAR10 (STORE1, vst1,
+	   v8qi, v4hi, v2si, v2sf, di, v16qi, v8hi, v4si, v4sf, v2di) },
+  { VAR10 (STORE1LANE, vst1_lane,
+	   v8qi, v4hi, v2si, v2sf, di, v16qi, v8hi, v4si, v4sf, v2di) },
+  { VAR9 (LOADSTRUCT,
+	  vld2, v8qi, v4hi, v2si, v2sf, di, v16qi, v8hi, v4si, v4sf) },
+  { VAR7 (LOADSTRUCTLANE, vld2_lane,
+	  v8qi, v4hi, v2si, v2sf, v8hi, v4si, v4sf) },
+  { VAR5 (LOADSTRUCT, vld2_dup, v8qi, v4hi, v2si, v2sf, di) },
+  { VAR9 (STORESTRUCT, vst2,
+	  v8qi, v4hi, v2si, v2sf, di, v16qi, v8hi, v4si, v4sf) },
+  { VAR7 (STORESTRUCTLANE, vst2_lane,
+	  v8qi, v4hi, v2si, v2sf, v8hi, v4si, v4sf) },
+  { VAR9 (LOADSTRUCT,
+	  vld3, v8qi, v4hi, v2si, v2sf, di, v16qi, v8hi, v4si, v4sf) },
+  { VAR7 (LOADSTRUCTLANE, vld3_lane,
+	  v8qi, v4hi, v2si, v2sf, v8hi, v4si, v4sf) },
+  { VAR5 (LOADSTRUCT, vld3_dup, v8qi, v4hi, v2si, v2sf, di) },
+  { VAR9 (STORESTRUCT, vst3,
+	  v8qi, v4hi, v2si, v2sf, di, v16qi, v8hi, v4si, v4sf) },
+  { VAR7 (STORESTRUCTLANE, vst3_lane,
+	  v8qi, v4hi, v2si, v2sf, v8hi, v4si, v4sf) },
+  { VAR9 (LOADSTRUCT, vld4,
+	  v8qi, v4hi, v2si, v2sf, di, v16qi, v8hi, v4si, v4sf) },
+  { VAR7 (LOADSTRUCTLANE, vld4_lane,
+	  v8qi, v4hi, v2si, v2sf, v8hi, v4si, v4sf) },
+  { VAR5 (LOADSTRUCT, vld4_dup, v8qi, v4hi, v2si, v2sf, di) },
+  { VAR9 (STORESTRUCT, vst4,
+	  v8qi, v4hi, v2si, v2sf, di, v16qi, v8hi, v4si, v4sf) },
+  { VAR7 (STORESTRUCTLANE, vst4_lane,
+	  v8qi, v4hi, v2si, v2sf, v8hi, v4si, v4sf) },
+  { VAR10 (LOGICBINOP, vand,
+	   v8qi, v4hi, v2si, v2sf, di, v16qi, v8hi, v4si, v4sf, v2di) },
+  { VAR10 (LOGICBINOP, vorr,
+	   v8qi, v4hi, v2si, v2sf, di, v16qi, v8hi, v4si, v4sf, v2di) },
+  { VAR10 (BINOP, veor,
+	   v8qi, v4hi, v2si, v2sf, di, v16qi, v8hi, v4si, v4sf, v2di) },
+  { VAR10 (LOGICBINOP, vbic,
+	   v8qi, v4hi, v2si, v2sf, di, v16qi, v8hi, v4si, v4sf, v2di) },
+  { VAR10 (LOGICBINOP, vorn,
+	   v8qi, v4hi, v2si, v2sf, di, v16qi, v8hi, v4si, v4sf, v2di) }
+};
+
+#undef CF
+#undef VAR1
+#undef VAR2
+#undef VAR3
+#undef VAR4
+#undef VAR5
+#undef VAR6
+#undef VAR7
+#undef VAR8
+#undef VAR9
+#undef VAR10
+
+static int
+valid_neon_mode (enum machine_mode mode)
+{
+  return VALID_NEON_DREG_MODE (mode) || VALID_NEON_QREG_MODE (mode);
+}
+
+static void
+arm_init_neon_builtins (void)
+{
+#define qi_TN neon_intQI_type_node
+#define hi_TN neon_intHI_type_node
+#define pqi_TN neon_polyQI_type_node
+#define qhi_TN neon_polyHI_type_node
+#define si_TN neon_intSI_type_node
+#define di_TN neon_intDI_type_node
+#define ti_TN intTI_type_node
+#define ei_TN intEI_type_node
+#define oi_TN intOI_type_node
+#define ci_TN intCI_type_node
+#define xi_TN intXI_type_node
+
+#define sf_TN neon_float_type_node
+
+#define v8qi_TN V8QI_type_node
+#define v4hi_TN V4HI_type_node
+#define v2si_TN V2SI_type_node
+#define v2sf_TN V2SF_type_node
+
+#define v16qi_TN V16QI_type_node
+#define v8hi_TN V8HI_type_node
+#define v4si_TN V4SI_type_node
+#define v4sf_TN V4SF_type_node
+#define v2di_TN V2DI_type_node
+
+#define pv8qi_TN V8QI_pointer_node
+#define pv4hi_TN V4HI_pointer_node
+#define pv2si_TN V2SI_pointer_node
+#define pv2sf_TN V2SF_pointer_node
+#define pdi_TN intDI_pointer_node
+
+#define pv16qi_TN V16QI_pointer_node
+#define pv8hi_TN V8HI_pointer_node
+#define pv4si_TN V4SI_pointer_node
+#define pv4sf_TN V4SF_pointer_node
+#define pv2di_TN V2DI_pointer_node
+
+#define void_TN void_type_node
+
+#define TYPE2(A,B) \
+  tree A##_##ftype##_##B = build_function_type_list (A##_TN, B##_TN, NULL)
+#define TYPE3(A,B,C) \
+  tree A##_##ftype##_##B##_##C = build_function_type_list (A##_TN, B##_TN, \
+    C##_TN, NULL)
+#define TYPE4(A,B,C,D) \
+  tree A##_##ftype##_##B##_##C##_##D = build_function_type_list (A##_TN, \
+    B##_TN, C##_TN, D##_TN, NULL)
+#define TYPE5(A,B,C,D,E) \
+  tree A##_##ftype##_##B##_##C##_##D##_##E = build_function_type_list (A##_TN, \
+    B##_TN, C##_TN, D##_TN, E##_TN, NULL)
+#define TYPE6(A,B,C,D,E,F) \
+  tree A##_##ftype##_##B##_##C##_##D##_##E##_##F = build_function_type_list \
+    (A##_TN, B##_TN, C##_TN, D##_TN, E##_TN, F##_TN, NULL)
+
+  unsigned int i, fcode = ARM_BUILTIN_NEON_BASE;
+
+  /* Create distinguished type nodes for NEON vector element types,
+     and pointers to values of such types, so we can detect them later.  */
+  tree neon_intQI_type_node = make_signed_type (GET_MODE_PRECISION (QImode));
+  tree neon_intHI_type_node = make_signed_type (GET_MODE_PRECISION (HImode));
+  tree neon_polyQI_type_node = make_signed_type (GET_MODE_PRECISION (QImode));
+  tree neon_polyHI_type_node = make_signed_type (GET_MODE_PRECISION (HImode));
+  tree neon_intSI_type_node = make_signed_type (GET_MODE_PRECISION (SImode));
+  tree neon_intDI_type_node = make_signed_type (GET_MODE_PRECISION (DImode));
+  tree neon_float_type_node = make_node (REAL_TYPE);
+  TYPE_PRECISION (neon_float_type_node) = FLOAT_TYPE_SIZE;
+  layout_type (neon_float_type_node);
+
+  /* Define typedefs which exactly correspond to the modes we are basing vector
+     types on.  If you change these names you'll need to change
+     the table used by arm_mangle_vector_type too.  */
+  (*lang_hooks.types.register_builtin_type) (neon_intQI_type_node,
+					     "__builtin_neon_qi");
+  (*lang_hooks.types.register_builtin_type) (neon_intHI_type_node,
+					     "__builtin_neon_hi");
+  (*lang_hooks.types.register_builtin_type) (neon_intSI_type_node,
+					     "__builtin_neon_si");
+  (*lang_hooks.types.register_builtin_type) (neon_float_type_node,
+					     "__builtin_neon_sf");
+  (*lang_hooks.types.register_builtin_type) (neon_intDI_type_node,
+					     "__builtin_neon_di");
+
+  (*lang_hooks.types.register_builtin_type) (neon_polyQI_type_node,
+					     "__builtin_neon_poly8");
+  (*lang_hooks.types.register_builtin_type) (neon_polyHI_type_node,
+					     "__builtin_neon_poly16");
+
+  tree intQI_pointer_node = build_pointer_type (neon_intQI_type_node);
+  tree intHI_pointer_node = build_pointer_type (neon_intHI_type_node);
+  tree intSI_pointer_node = build_pointer_type (neon_intSI_type_node);
+  tree intDI_pointer_node = build_pointer_type (neon_intDI_type_node);
+  tree float_pointer_node = build_pointer_type (neon_float_type_node);
+
+  /* Next create constant-qualified versions of the above types.  */ 
+  tree const_intQI_node = build_qualified_type (neon_intQI_type_node,
+						TYPE_QUAL_CONST);
+  tree const_intHI_node = build_qualified_type (neon_intHI_type_node,
+						TYPE_QUAL_CONST);
+  tree const_intSI_node = build_qualified_type (neon_intSI_type_node,
+						TYPE_QUAL_CONST);
+  tree const_intDI_node = build_qualified_type (neon_intDI_type_node,
+						TYPE_QUAL_CONST);
+  tree const_float_node = build_qualified_type (neon_float_type_node,
+						TYPE_QUAL_CONST);
+
+  tree const_intQI_pointer_node = build_pointer_type (const_intQI_node);
+  tree const_intHI_pointer_node = build_pointer_type (const_intHI_node);
+  tree const_intSI_pointer_node = build_pointer_type (const_intSI_node);
+  tree const_intDI_pointer_node = build_pointer_type (const_intDI_node);
+  tree const_float_pointer_node = build_pointer_type (const_float_node);
+
+  /* Now create vector types based on our NEON element types.  */
+  /* 64-bit vectors.  */
+  tree V8QI_type_node =
+    build_vector_type_for_mode (neon_intQI_type_node, V8QImode);
+  tree V4HI_type_node =
+    build_vector_type_for_mode (neon_intHI_type_node, V4HImode);
+  tree V2SI_type_node =
+    build_vector_type_for_mode (neon_intSI_type_node, V2SImode);
+  tree V2SF_type_node =
+    build_vector_type_for_mode (neon_float_type_node, V2SFmode);
+  /* 128-bit vectors.  */
+  tree V16QI_type_node =
+    build_vector_type_for_mode (neon_intQI_type_node, V16QImode);
+  tree V8HI_type_node =
+    build_vector_type_for_mode (neon_intHI_type_node, V8HImode);
+  tree V4SI_type_node =
+    build_vector_type_for_mode (neon_intSI_type_node, V4SImode);
+  tree V4SF_type_node =
+    build_vector_type_for_mode (neon_float_type_node, V4SFmode);
+  tree V2DI_type_node =
+    build_vector_type_for_mode (neon_intDI_type_node, V2DImode);
+
+  /* Unsigned integer types for various mode sizes.  */
+  tree intUQI_type_node = make_unsigned_type (GET_MODE_PRECISION (QImode));
+  tree intUHI_type_node = make_unsigned_type (GET_MODE_PRECISION (HImode));
+  tree intUSI_type_node = make_unsigned_type (GET_MODE_PRECISION (SImode));
+  tree intUDI_type_node = make_unsigned_type (GET_MODE_PRECISION (DImode));
+
+  (*lang_hooks.types.register_builtin_type) (intUQI_type_node,
+					     "__builtin_neon_uqi");
+  (*lang_hooks.types.register_builtin_type) (intUHI_type_node,
+					     "__builtin_neon_uhi");
+  (*lang_hooks.types.register_builtin_type) (intUSI_type_node,
+					     "__builtin_neon_usi");
+  (*lang_hooks.types.register_builtin_type) (intUDI_type_node,
+					     "__builtin_neon_udi");
+
+  /* Opaque integer types for structures of vectors.  */
+  tree intEI_type_node = make_signed_type (GET_MODE_PRECISION (EImode));
+  tree intOI_type_node = make_signed_type (GET_MODE_PRECISION (OImode));
+  tree intCI_type_node = make_signed_type (GET_MODE_PRECISION (CImode));
+  tree intXI_type_node = make_signed_type (GET_MODE_PRECISION (XImode));
+
+  (*lang_hooks.types.register_builtin_type) (intTI_type_node,
+					     "__builtin_neon_ti");
+  (*lang_hooks.types.register_builtin_type) (intEI_type_node,
+					     "__builtin_neon_ei");
+  (*lang_hooks.types.register_builtin_type) (intOI_type_node,
+					     "__builtin_neon_oi");
+  (*lang_hooks.types.register_builtin_type) (intCI_type_node,
+					     "__builtin_neon_ci");
+  (*lang_hooks.types.register_builtin_type) (intXI_type_node,
+					     "__builtin_neon_xi");
+
+  /* Pointers to vector types.  */
+  tree V8QI_pointer_node = build_pointer_type (V8QI_type_node);
+  tree V4HI_pointer_node = build_pointer_type (V4HI_type_node);
+  tree V2SI_pointer_node = build_pointer_type (V2SI_type_node);
+  tree V2SF_pointer_node = build_pointer_type (V2SF_type_node);
+  tree V16QI_pointer_node = build_pointer_type (V16QI_type_node);
+  tree V8HI_pointer_node = build_pointer_type (V8HI_type_node);
+  tree V4SI_pointer_node = build_pointer_type (V4SI_type_node);
+  tree V4SF_pointer_node = build_pointer_type (V4SF_type_node);
+  tree V2DI_pointer_node = build_pointer_type (V2DI_type_node);
+
+  /* Binops, all-doubleword arithmetic.  */
+  TYPE4 (v8qi, v8qi, v8qi, si);
+  TYPE4 (v4hi, v4hi, v4hi, si);
+  TYPE4 (v2si, v2si, v2si, si);
+  TYPE4 (v2sf, v2sf, v2sf, si);
+  TYPE4 (di, di, di, si);
+
+  /* Binops, all-quadword arithmetic.  */
+  TYPE4 (v16qi, v16qi, v16qi, si);
+  TYPE4 (v8hi, v8hi, v8hi, si);
+  TYPE4 (v4si, v4si, v4si, si);
+  TYPE4 (v4sf, v4sf, v4sf, si);
+  TYPE4 (v2di, v2di, v2di, si);
+
+  /* Binops, "long" operations (dest wider than operands).  */
+  TYPE4 (v8hi, v8qi, v8qi, si);
+  TYPE4 (v4si, v4hi, v4hi, si);
+  TYPE4 (v2di, v2si, v2si, si);
+
+  /* Binops, "wide" operations (dest and first operand wider than second
+     operand).  */
+  TYPE4 (v8hi, v8hi, v8qi, si);
+  TYPE4 (v4si, v4si, v4hi, si);
+  TYPE4 (v2di, v2di, v2si, si);
+
+  /* Binops, "narrow" operations (dest narrower than operands).  */
+  TYPE4 (v8qi, v8hi, v8hi, si);
+  TYPE4 (v4hi, v4si, v4si, si);
+  TYPE4 (v2si, v2di, v2di, si);
+
+  /* Binops, comparisons (return type always an integer vector).  */
+  TYPE4 (v2si, v2sf, v2sf, si);
+  TYPE4 (v4si, v4sf, v4sf, si);
+
+  /* Binops, dest and first operand elements wider (vpadal).  */
+  TYPE4 (v4hi, v4hi, v8qi, si);
+  TYPE4 (v2si, v2si, v4hi, si);
+  TYPE4 (di, di, v2si, si);
+  TYPE4 (v8hi, v8hi, v16qi, si);
+  TYPE4 (v4si, v4si, v8hi, si);
+  TYPE4 (v2di, v2di, v4si, si);
+
+  /* Ternary operations, all-doubleword arithmetic.  */
+  TYPE5 (v8qi, v8qi, v8qi, v8qi, si);
+  TYPE5 (v4hi, v4hi, v4hi, v4hi, si);
+  TYPE5 (v2si, v2si, v2si, v2si, si);
+  TYPE5 (v2sf, v2sf, v2sf, v2sf, si);
+
+  /* Ternary operations, all-quadword arithmetic.  */
+  TYPE5 (v16qi, v16qi, v16qi, v16qi, si);
+  TYPE5 (v8hi, v8hi, v8hi, v8hi, si);
+  TYPE5 (v4si, v4si, v4si, v4si, si);
+  TYPE5 (v4sf, v4sf, v4sf, v4sf, si);
+
+  /* Ternary operations, "long" operations (dest and first operand
+     wider than second and third operands).  */
+  TYPE5 (v8hi, v8hi, v8qi, v8qi, si);
+  TYPE5 (v4si, v4si, v4hi, v4hi, si);
+  TYPE5 (v2di, v2di, v2si, v2si, si);
+
+  /* Unops, all-doubleword arithmetic.  */
+  TYPE3 (v8qi, v8qi, si);
+  TYPE3 (v4hi, v4hi, si);
+  TYPE3 (v2si, v2si, si);
+  TYPE3 (v2sf, v2sf, si);
+  TYPE3 (di, di, si);
+
+  /* Unops, all-quadword arithmetic.  */
+  TYPE3 (v16qi, v16qi, si);
+  TYPE3 (v8hi, v8hi, si);
+  TYPE3 (v4si, v4si, si);
+  TYPE3 (v4sf, v4sf, si);
+  TYPE3 (v2di, v2di, si);
+
+  /* Unops, narrowing.  */
+  TYPE3 (v8qi, v8hi, si);
+  TYPE3 (v4hi, v4si, si);
+  TYPE3 (v2si, v2di, si);
+
+  /* Unops, widening.  */
+  TYPE3 (v8hi, v8qi, si);
+  TYPE3 (v4si, v4hi, si);
+  TYPE3 (v2di, v2si, si);
+
+  /* Unops, dest elements wider (vpaddl).  */
+  TYPE3 (v4hi, v8qi, si);
+  TYPE3 (v2si, v4hi, si);
+  TYPE3 (di, v2si, si);
+  TYPE3 (v8hi, v16qi, si);
+  TYPE3 (v4si, v8hi, si);
+  TYPE3 (v2di, v4si, si);
+
+  /* Get-lane from doubleword insns (single-element result).  */
+  TYPE4 (qi, v8qi, si, si);
+  TYPE4 (hi, v4hi, si, si);
+  TYPE4 (si, v2si, si, si);
+  TYPE4 (sf, v2sf, si, si);
+  TYPE4 (di, di, si, si);
+
+  /* Get-lane from quadword insns.  */
+  TYPE4 (qi, v16qi, si, si);
+  TYPE4 (hi, v8hi, si, si);
+  TYPE4 (si, v4si, si, si);
+  TYPE4 (sf, v4sf, si, si);
+  TYPE4 (di, v2di, si, si);
+
+  /* Set-lane in doubleword insns (single-element argument).  */
+  TYPE4 (v8qi, qi, v8qi, si);
+  TYPE4 (v4hi, hi, v4hi, si);
+  TYPE4 (v2si, si, v2si, si);
+  TYPE4 (v2sf, sf, v2sf, si);
+
+  /* Set lane in quadword insns.  */
+  TYPE4 (v16qi, qi, v16qi, si);
+  TYPE4 (v8hi, hi, v8hi, si);
+  TYPE4 (v4si, si, v4si, si);
+  TYPE4 (v4sf, sf, v4sf, si);
+  TYPE4 (v2di, di, v2di, si);
+
+  /* Create vector from bit pattern.  */
+  TYPE2 (v8qi, di);
+  TYPE2 (v4hi, di);
+  TYPE2 (v2si, di);
+  TYPE2 (v2sf, di);
+  TYPE2 (di, di);
+
+  /* Duplicate an ARM register into lanes of a vector.  */
+  TYPE2 (v8qi, qi);
+  TYPE2 (v4hi, hi);
+  TYPE2 (v2si, si);
+  TYPE2 (v2sf, sf);
+
+  TYPE2 (v16qi, qi);
+  TYPE2 (v8hi, hi);
+  TYPE2 (v4si, si);
+  TYPE2 (v4sf, sf);
+  TYPE2 (v2di, di);
+
+  /* Duplicate a lane of a vector to all lanes of another vector.  */
+  TYPE3 (v16qi, v8qi, si);
+  TYPE3 (v8hi, v4hi, si);
+  TYPE3 (v4si, v2si, si);
+  TYPE3 (v4sf, v2sf, si);
+  TYPE3 (v2di, di, si);
+
+  /* Combine doubleword vectors into quadword vectors.  */
+  TYPE3 (v16qi, v8qi, v8qi);
+  TYPE3 (v8hi, v4hi, v4hi);
+  TYPE3 (v4si, v2si, v2si);
+  TYPE3 (v4sf, v2sf, v2sf);
+  TYPE3 (v2di, di, di);
+
+  /* Split quadword vectors into high or low parts.  */
+  TYPE2 (v8qi, v16qi);
+  TYPE2 (v4hi, v8hi);
+  TYPE2 (v2si, v4si);
+  TYPE2 (v2sf, v4sf);
+  TYPE2 (di, v2di);
+
+  /* Conversions, int<->float.  */
+  TYPE3 (v2si, v2sf, si);
+  TYPE3 (v2sf, v2si, si);
+  TYPE3 (v4si, v4sf, si);
+  TYPE3 (v4sf, v4si, si);
+
+  /* Conversions, fixed point<->float.  */
+  TYPE4 (v2si, v2sf, si, si);
+  TYPE4 (v2sf, v2si, si, si);
+  TYPE4 (v4si, v4sf, si, si);
+  TYPE4 (v4sf, v4si, si, si);
+
+  /* Multiply by scalar (lane).  */
+  TYPE5 (v4hi, v4hi, v4hi, si, si);
+  TYPE5 (v2si, v2si, v2si, si, si);
+  TYPE5 (v2sf, v2sf, v2sf, si, si);
+  TYPE5 (v8hi, v8hi, v4hi, si, si);
+  TYPE5 (v4si, v4si, v2si, si, si);
+  TYPE5 (v4sf, v4sf, v2sf, si, si);
+
+  /* Long multiply by scalar (lane).  */
+  TYPE5 (v4si, v4hi, v4hi, si, si);
+  TYPE5 (v2di, v2si, v2si, si, si);
+
+  /* Multiply-accumulate etc. by scalar (lane).  */
+  TYPE6 (v4hi, v4hi, v4hi, v4hi, si, si);
+  TYPE6 (v2si, v2si, v2si, v2si, si, si);
+  TYPE6 (v2sf, v2sf, v2sf, v2sf, si, si);
+  TYPE6 (v8hi, v8hi, v8hi, v4hi, si, si);
+  TYPE6 (v4si, v4si, v4si, v2si, si, si);
+  TYPE6 (v4sf, v4sf, v4sf, v2sf, si, si);
+
+  /* Multiply-accumulate, etc. by scalar (lane), widening.  */
+  TYPE6 (v4si, v4si, v4hi, v4hi, si, si);
+  TYPE6 (v2di, v2di, v2si, v2si, si, si);
+
+  /* Multiply by scalar.  */
+  TYPE4 (v4hi, v4hi, hi, si);
+  TYPE4 (v2si, v2si, si, si);
+  TYPE4 (v2sf, v2sf, sf, si);
+
+  TYPE4 (v8hi, v8hi, hi, si);
+  TYPE4 (v4si, v4si, si, si);
+  TYPE4 (v4sf, v4sf, sf, si);
+
+  /* Long multiply by scalar.  */
+  TYPE4 (v4si, v4hi, hi, si);
+
+  /* Multiply-accumulate etc. by scalar.  */
+  TYPE5 (v4hi, v4hi, v4hi, hi, si);
+  /* TYPE5 (v2si, v2si, v2si, si, si); */
+  TYPE5 (v2sf, v2sf, v2sf, sf, si);
+  TYPE5 (v8hi, v8hi, v8hi, hi, si);
+  TYPE5 (v4si, v4si, v4si, si, si);
+  TYPE5 (v4sf, v4sf, v4sf, sf, si);
+
+  /* Multiply-accumulate by scalar, widening.  */
+  TYPE5 (v4si, v4si, v4hi, hi, si);
+  TYPE5 (v2di, v2di, v2si, si, si);
+
+  /* Bit select operations.  */
+  TYPE4 (v8qi, v8qi, v8qi, v8qi);
+  TYPE4 (v4hi, v4hi, v4hi, v4hi);
+  TYPE4 (v2si, v2si, v2si, v2si);
+  TYPE4 (v2sf, v2si, v2sf, v2sf);
+  TYPE4 (di, di, di, di);
+
+  TYPE4 (v16qi, v16qi, v16qi, v16qi);
+  TYPE4 (v8hi, v8hi, v8hi, v8hi);
+  TYPE4 (v4si, v4si, v4si, v4si);
+  TYPE4 (v4sf, v4si, v4sf, v4sf);
+  TYPE4 (v2di, v2di, v2di, v2di);
+
+  /* Shift immediate operations.  */
+  TYPE4 (v8qi, v8qi, si, si);
+  TYPE4 (v4hi, v4hi, si, si);
+
+  TYPE4 (v16qi, v16qi, si, si);
+  TYPE4 (v8hi, v8hi, si, si);
+  TYPE4 (v2di, v2di, si, si);
+
+  /* Shift immediate, long operations.  */
+  TYPE4 (v8hi, v8qi, si, si);
+  TYPE4 (v4si, v4hi, si, si);
+  TYPE4 (v2di, v2si, si, si);
+
+  /* Shift immediate, narrowing operations.  */
+  TYPE4 (v8qi, v8hi, si, si);
+  TYPE4 (v4hi, v4si, si, si);
+  TYPE4 (v2si, v2di, si, si);
+
+  /* Shift + accumulate operations.  */
+  TYPE5 (v8qi, v8qi, v8qi, si, si);
+  TYPE5 (di, di, di, si, si);
+
+  TYPE5 (v16qi, v16qi, v16qi, si, si);
+  TYPE5 (v8hi, v8hi, v8hi, si, si);
+  TYPE5 (v4sf, v4sf, v4sf, si, si);
+  TYPE5 (v2di, v2di, v2di, si, si);
+
+  /* Operations which return results as pairs.  */
+  TYPE4 (void, pv8qi, v8qi, v8qi);
+  TYPE4 (void, pv4hi, v4hi, v4hi);
+  TYPE4 (void, pv2si, v2si, v2si);
+  TYPE4 (void, pv2sf, v2sf, v2sf);
+  TYPE4 (void, pdi, di, di);
+
+  TYPE4 (void, pv16qi, v16qi, v16qi);
+  TYPE4 (void, pv8hi, v8hi, v8hi);
+  TYPE4 (void, pv4si, v4si, v4si);
+  TYPE4 (void, pv4sf, v4sf, v4sf);
+  TYPE4 (void, pv2di, v2di, v2di);
+
+  /* Table look-up.  */
+  TYPE3 (v8qi, v8qi, v8qi);
+  TYPE3 (v8qi, ti, v8qi);
+  TYPE3 (v8qi, ei, v8qi);
+  TYPE3 (v8qi, oi, v8qi);
+
+  /* Extended table look-up.  */
+  /* TYPE4 (v8qi, v8qi, v8qi, v8qi); */
+  TYPE4 (v8qi, v8qi, ti, v8qi);
+  TYPE4 (v8qi, v8qi, ei, v8qi);
+  TYPE4 (v8qi, v8qi, oi, v8qi);
+
+  /* Load operations, double-word.  */
+  tree v8qi_ftype_const_qi_pointer =
+    build_function_type_list (V8QI_type_node, const_intQI_pointer_node, NULL);
+  tree v4hi_ftype_const_hi_pointer =
+    build_function_type_list (V4HI_type_node, const_intHI_pointer_node, NULL);
+  tree v2si_ftype_const_si_pointer =
+    build_function_type_list (V2SI_type_node, const_intSI_pointer_node, NULL);
+  tree di_ftype_const_di_pointer =
+    build_function_type_list (intDI_type_node, const_intDI_pointer_node, NULL);
+  tree v2sf_ftype_const_sf_pointer =
+    build_function_type_list (V2SF_type_node, const_float_pointer_node, NULL);
+
+  /* Load operations, quad-word.  */
+  tree v16qi_ftype_const_qi_pointer =
+    build_function_type_list (V16QI_type_node, const_intQI_pointer_node, NULL);
+  tree v8hi_ftype_const_hi_pointer =
+    build_function_type_list (V8HI_type_node, const_intHI_pointer_node, NULL);
+  tree v4si_ftype_const_si_pointer =
+    build_function_type_list (V4SI_type_node, const_intSI_pointer_node, NULL);
+  tree v2di_ftype_const_di_pointer =
+    build_function_type_list (V2DI_type_node, const_intDI_pointer_node, NULL);
+  tree v4sf_ftype_const_sf_pointer =
+    build_function_type_list (V4SF_type_node, const_float_pointer_node, NULL);
+
+  /* Load lane operations, double-word.  */
+  tree v8qi_ftype_const_qi_pointer_v8qi_si =
+    build_function_type_list (V8QI_type_node, const_intQI_pointer_node,
+			      V8QI_type_node, intSI_type_node, NULL);
+  tree v4hi_ftype_const_hi_pointer_v4hi_si =
+    build_function_type_list (V4HI_type_node, const_intHI_pointer_node,
+			      V4HI_type_node, intSI_type_node, NULL);
+  tree v2si_ftype_const_si_pointer_v2si_si =
+    build_function_type_list (V2SI_type_node, const_intSI_pointer_node,
+			      V2SI_type_node, intSI_type_node, NULL);
+  tree di_ftype_const_di_pointer_di_si =
+    build_function_type_list (intDI_type_node, const_intDI_pointer_node,
+			      intDI_type_node, intSI_type_node, NULL);
+  tree v2sf_ftype_const_sf_pointer_v2sf_si =
+    build_function_type_list (V2SF_type_node, const_float_pointer_node,
+			      V2SF_type_node, intSI_type_node, NULL);
+
+  /* Load lane operations, quad-word.  */
+  tree v16qi_ftype_const_qi_pointer_v16qi_si =
+    build_function_type_list (V16QI_type_node, const_intQI_pointer_node,
+			      V16QI_type_node, intSI_type_node, NULL);
+  tree v8hi_ftype_const_hi_pointer_v8hi_si =
+    build_function_type_list (V8HI_type_node, const_intHI_pointer_node,
+			      V8HI_type_node, intSI_type_node, NULL);
+  tree v4si_ftype_const_si_pointer_v4si_si =
+    build_function_type_list (V4SI_type_node, const_intSI_pointer_node,
+			      V4SI_type_node, intSI_type_node, NULL);
+  tree v2di_ftype_const_di_pointer_v2di_si =
+    build_function_type_list (V2DI_type_node, const_intDI_pointer_node,
+			      V2DI_type_node, intSI_type_node, NULL);
+  tree v4sf_ftype_const_sf_pointer_v4sf_si =
+    build_function_type_list (V4SF_type_node, const_float_pointer_node,
+			      V4SF_type_node, intSI_type_node, NULL);
+
+  /* Store operations, double-word.  */
+  tree void_ftype_qi_pointer_v8qi =
+    build_function_type_list (void_type_node, intQI_pointer_node,
+			      V8QI_type_node, NULL);
+  tree void_ftype_hi_pointer_v4hi =
+    build_function_type_list (void_type_node, intHI_pointer_node,
+			      V4HI_type_node, NULL);
+  tree void_ftype_si_pointer_v2si =
+    build_function_type_list (void_type_node, intSI_pointer_node,
+			      V2SI_type_node, NULL);
+  tree void_ftype_di_pointer_di =
+    build_function_type_list (void_type_node, intDI_pointer_node,
+			      intDI_type_node, NULL);
+  tree void_ftype_sf_pointer_v2sf =
+    build_function_type_list (void_type_node, float_pointer_node,
+			      V2SF_type_node, NULL);
+
+  /* Store operations, quad-word.  */
+  tree void_ftype_qi_pointer_v16qi =
+    build_function_type_list (void_type_node, intQI_pointer_node,
+			      V16QI_type_node, NULL);
+  tree void_ftype_hi_pointer_v8hi =
+    build_function_type_list (void_type_node, intHI_pointer_node,
+			      V8HI_type_node, NULL);
+  tree void_ftype_si_pointer_v4si =
+    build_function_type_list (void_type_node, intSI_pointer_node,
+			      V4SI_type_node, NULL);
+  tree void_ftype_di_pointer_v2di =
+    build_function_type_list (void_type_node, intDI_pointer_node,
+			      V2DI_type_node, NULL);
+  tree void_ftype_sf_pointer_v4sf =
+    build_function_type_list (void_type_node, float_pointer_node,
+			      V4SF_type_node, NULL);
+
+  /* Store lane operations, double-word.  */
+  tree void_ftype_qi_pointer_v8qi_si =
+    build_function_type_list (void_type_node, intQI_pointer_node,
+			      V8QI_type_node, intSI_type_node, NULL);
+  tree void_ftype_hi_pointer_v4hi_si =
+    build_function_type_list (void_type_node, intHI_pointer_node,
+			      V4HI_type_node, intSI_type_node, NULL);
+  tree void_ftype_si_pointer_v2si_si =
+    build_function_type_list (void_type_node, intSI_pointer_node,
+			      V2SI_type_node, intSI_type_node, NULL);
+  tree void_ftype_di_pointer_di_si =
+    build_function_type_list (void_type_node, intDI_pointer_node,
+			      intDI_type_node, intSI_type_node, NULL);
+  tree void_ftype_sf_pointer_v2sf_si =
+    build_function_type_list (void_type_node, float_pointer_node,
+			      V2SF_type_node, intSI_type_node, NULL);
+
+  /* Store lane operations, quad-word.  */
+  tree void_ftype_qi_pointer_v16qi_si =
+    build_function_type_list (void_type_node, intQI_pointer_node,
+			      V16QI_type_node, intSI_type_node, NULL);
+  tree void_ftype_hi_pointer_v8hi_si =
+    build_function_type_list (void_type_node, intHI_pointer_node,
+			      V8HI_type_node, intSI_type_node, NULL);
+  tree void_ftype_si_pointer_v4si_si =
+    build_function_type_list (void_type_node, intSI_pointer_node,
+			      V4SI_type_node, intSI_type_node, NULL);
+  tree void_ftype_di_pointer_v2di_si =
+    build_function_type_list (void_type_node, intDI_pointer_node,
+			      V2DI_type_node, intSI_type_node, NULL);
+  tree void_ftype_sf_pointer_v4sf_si =
+    build_function_type_list (void_type_node, float_pointer_node,
+			      V4SF_type_node, intSI_type_node, NULL);
+
+  /* Load size-2 structure operations, double-word.  */
+  tree ti_ftype_const_qi_pointer =
+    build_function_type_list (intTI_type_node, const_intQI_pointer_node, NULL);
+  tree ti_ftype_const_hi_pointer =
+    build_function_type_list (intTI_type_node, const_intHI_pointer_node, NULL);
+  tree ti_ftype_const_si_pointer =
+    build_function_type_list (intTI_type_node, const_intSI_pointer_node, NULL);
+  tree ti_ftype_const_di_pointer =
+    build_function_type_list (intTI_type_node, const_intDI_pointer_node, NULL);
+  tree ti_ftype_const_sf_pointer =
+    build_function_type_list (intTI_type_node, const_float_pointer_node, NULL);
+
+  /* Load size-2 structure operations, quad-word; also load size-4,
+     double-word.  */
+  tree oi_ftype_const_qi_pointer =
+    build_function_type_list (intOI_type_node, const_intQI_pointer_node, NULL);
+  tree oi_ftype_const_hi_pointer =
+    build_function_type_list (intOI_type_node, const_intHI_pointer_node, NULL);
+  tree oi_ftype_const_si_pointer =
+    build_function_type_list (intOI_type_node, const_intSI_pointer_node, NULL);
+  tree oi_ftype_const_sf_pointer =
+    build_function_type_list (intOI_type_node, const_float_pointer_node, NULL);
+
+  /* Load lane size-2 structure operations, double-word.  */
+  tree ti_ftype_const_qi_pointer_ti_si =
+    build_function_type_list (intTI_type_node, const_intQI_pointer_node,
+			      intTI_type_node, intSI_type_node, NULL);
+  tree ti_ftype_const_hi_pointer_ti_si =
+    build_function_type_list (intTI_type_node, const_intHI_pointer_node,
+			      intTI_type_node, intSI_type_node, NULL);
+  tree ti_ftype_const_si_pointer_ti_si =
+    build_function_type_list (intTI_type_node, const_intSI_pointer_node,
+			      intTI_type_node, intSI_type_node, NULL);
+  tree ti_ftype_const_sf_pointer_ti_si =
+    build_function_type_list (intTI_type_node, const_float_pointer_node,
+			      intTI_type_node, intSI_type_node, NULL);
+
+  /* Load lane size-2 structure operations, quad-word; also load lane size-4,
+     double-word.  */
+  tree oi_ftype_const_hi_pointer_oi_si =
+    build_function_type_list (intOI_type_node, const_intHI_pointer_node,
+			      intOI_type_node, intSI_type_node, NULL);
+  tree oi_ftype_const_si_pointer_oi_si =
+    build_function_type_list (intOI_type_node, const_intSI_pointer_node,
+			      intOI_type_node, intSI_type_node, NULL);
+  tree oi_ftype_const_sf_pointer_oi_si =
+    build_function_type_list (intOI_type_node, const_float_pointer_node,
+			      intOI_type_node, intSI_type_node, NULL);
+
+  /* Store size-2 structure operations, double-word.  */
+  tree void_ftype_qi_pointer_ti =
+    build_function_type_list (void_type_node, intQI_pointer_node,
+			      intTI_type_node, NULL);
+  tree void_ftype_hi_pointer_ti =
+    build_function_type_list (void_type_node, intHI_pointer_node,
+			      intTI_type_node, NULL);
+  tree void_ftype_si_pointer_ti =
+    build_function_type_list (void_type_node, intSI_pointer_node,
+			      intTI_type_node, NULL);
+  tree void_ftype_di_pointer_ti =
+    build_function_type_list (void_type_node, intDI_pointer_node,
+			      intTI_type_node, NULL);
+  tree void_ftype_sf_pointer_ti =
+    build_function_type_list (void_type_node, float_pointer_node,
+			      intTI_type_node, NULL);
+
+  /* Store size-2 structure operations, quad-word; also store size-4,
+     double-word.  */
+  tree void_ftype_qi_pointer_oi =
+    build_function_type_list (void_type_node, intQI_pointer_node,
+			      intOI_type_node, NULL);
+  tree void_ftype_hi_pointer_oi =
+    build_function_type_list (void_type_node, intHI_pointer_node,
+			      intOI_type_node, NULL);
+  tree void_ftype_si_pointer_oi =
+    build_function_type_list (void_type_node, intSI_pointer_node,
+			      intOI_type_node, NULL);
+  tree void_ftype_sf_pointer_oi =
+    build_function_type_list (void_type_node, float_pointer_node,
+			      intOI_type_node, NULL);
+
+  /* Store lane size-2 structure operations, double-word.  */
+  tree void_ftype_qi_pointer_ti_si =
+    build_function_type_list (void_type_node, intQI_pointer_node,
+			      intTI_type_node, intSI_type_node, NULL);
+  tree void_ftype_hi_pointer_ti_si =
+    build_function_type_list (void_type_node, intHI_pointer_node,
+			      intTI_type_node, intSI_type_node, NULL);
+  tree void_ftype_si_pointer_ti_si =
+    build_function_type_list (void_type_node, intSI_pointer_node,
+			      intTI_type_node, intSI_type_node, NULL);
+  tree void_ftype_sf_pointer_ti_si =
+    build_function_type_list (void_type_node, float_pointer_node,
+			      intTI_type_node, intSI_type_node, NULL);
+
+  /* Store lane size-2 structure operations, quad-word; also store
+     lane size-4, double-word.  */
+  tree void_ftype_hi_pointer_oi_si =
+    build_function_type_list (void_type_node, intHI_pointer_node,
+			      intOI_type_node, intSI_type_node, NULL);
+  tree void_ftype_si_pointer_oi_si =
+    build_function_type_list (void_type_node, intSI_pointer_node,
+			      intOI_type_node, intSI_type_node, NULL);
+  tree void_ftype_sf_pointer_oi_si =
+    build_function_type_list (void_type_node, float_pointer_node,
+			      intOI_type_node, intSI_type_node, NULL);
+
+  /* Load size-3 structure operations, double-word.  */
+  tree ei_ftype_const_qi_pointer =
+    build_function_type_list (intEI_type_node, const_intQI_pointer_node, NULL);
+  tree ei_ftype_const_hi_pointer =
+    build_function_type_list (intEI_type_node, const_intHI_pointer_node, NULL);
+  tree ei_ftype_const_si_pointer =
+    build_function_type_list (intEI_type_node, const_intSI_pointer_node, NULL);
+  tree ei_ftype_const_di_pointer =
+    build_function_type_list (intEI_type_node, const_intDI_pointer_node, NULL);
+  tree ei_ftype_const_sf_pointer =
+    build_function_type_list (intEI_type_node, const_float_pointer_node, NULL);
+
+  /* Load size-3 structure operations, quad-word.  */
+  tree ci_ftype_const_qi_pointer =
+    build_function_type_list (intCI_type_node, const_intQI_pointer_node, NULL);
+  tree ci_ftype_const_hi_pointer =
+    build_function_type_list (intCI_type_node, const_intHI_pointer_node, NULL);
+  tree ci_ftype_const_si_pointer =
+    build_function_type_list (intCI_type_node, const_intSI_pointer_node, NULL);
+  tree ci_ftype_const_sf_pointer =
+    build_function_type_list (intCI_type_node, const_float_pointer_node, NULL);
+
+  /* Load lane size-3 structure operations, double-word.  */
+  tree ei_ftype_const_qi_pointer_ei_si =
+    build_function_type_list (intEI_type_node, const_intQI_pointer_node,
+			      intEI_type_node, intSI_type_node, NULL);
+  tree ei_ftype_const_hi_pointer_ei_si =
+    build_function_type_list (intEI_type_node, const_intHI_pointer_node,
+			      intEI_type_node, intSI_type_node, NULL);
+  tree ei_ftype_const_si_pointer_ei_si =
+    build_function_type_list (intEI_type_node, const_intSI_pointer_node,
+			      intEI_type_node, intSI_type_node, NULL);
+  tree ei_ftype_const_sf_pointer_ei_si =
+    build_function_type_list (intEI_type_node, const_float_pointer_node,
+			      intEI_type_node, intSI_type_node, NULL);
+
+  /* Load lane size-3 structure operations, quad-word.  */
+  tree ci_ftype_const_hi_pointer_ci_si =
+    build_function_type_list (intCI_type_node, const_intHI_pointer_node,
+			      intCI_type_node, intSI_type_node, NULL);
+  tree ci_ftype_const_si_pointer_ci_si =
+    build_function_type_list (intCI_type_node, const_intSI_pointer_node,
+			      intCI_type_node, intSI_type_node, NULL);
+  tree ci_ftype_const_sf_pointer_ci_si =
+    build_function_type_list (intCI_type_node, const_float_pointer_node,
+			      intCI_type_node, intSI_type_node, NULL);
+
+  /* Store size-3 structure operations, double-word.  */
+  tree void_ftype_qi_pointer_ei =
+    build_function_type_list (void_type_node, intQI_pointer_node,
+			      intEI_type_node, NULL);
+  tree void_ftype_hi_pointer_ei =
+    build_function_type_list (void_type_node, intHI_pointer_node,
+			      intEI_type_node, NULL);
+  tree void_ftype_si_pointer_ei =
+    build_function_type_list (void_type_node, intSI_pointer_node,
+			      intEI_type_node, NULL);
+  tree void_ftype_di_pointer_ei =
+    build_function_type_list (void_type_node, intDI_pointer_node,
+			      intEI_type_node, NULL);
+  tree void_ftype_sf_pointer_ei =
+    build_function_type_list (void_type_node, float_pointer_node,
+			      intEI_type_node, NULL);
+
+  /* Store size-3 structure operations, quad-word.  */
+  tree void_ftype_qi_pointer_ci =
+    build_function_type_list (void_type_node, intQI_pointer_node,
+			      intCI_type_node, NULL);
+  tree void_ftype_hi_pointer_ci =
+    build_function_type_list (void_type_node, intHI_pointer_node,
+			      intCI_type_node, NULL);
+  tree void_ftype_si_pointer_ci =
+    build_function_type_list (void_type_node, intSI_pointer_node,
+			      intCI_type_node, NULL);
+  tree void_ftype_sf_pointer_ci =
+    build_function_type_list (void_type_node, float_pointer_node,
+			      intCI_type_node, NULL);
+
+  /* Store lane size-3 structure operations, double-word.  */
+  tree void_ftype_qi_pointer_ei_si =
+    build_function_type_list (void_type_node, intQI_pointer_node,
+			      intEI_type_node, intSI_type_node, NULL);
+  tree void_ftype_hi_pointer_ei_si =
+    build_function_type_list (void_type_node, intHI_pointer_node,
+			      intEI_type_node, intSI_type_node, NULL);
+  tree void_ftype_si_pointer_ei_si =
+    build_function_type_list (void_type_node, intSI_pointer_node,
+			      intEI_type_node, intSI_type_node, NULL);
+  tree void_ftype_sf_pointer_ei_si =
+    build_function_type_list (void_type_node, float_pointer_node,
+			      intEI_type_node, intSI_type_node, NULL);
+
+  /* Store lane size-3 structure operations, quad-word.  */
+  tree void_ftype_hi_pointer_ci_si =
+    build_function_type_list (void_type_node, intHI_pointer_node,
+			      intCI_type_node, intSI_type_node, NULL);
+  tree void_ftype_si_pointer_ci_si =
+    build_function_type_list (void_type_node, intSI_pointer_node,
+			      intCI_type_node, intSI_type_node, NULL);
+  tree void_ftype_sf_pointer_ci_si =
+    build_function_type_list (void_type_node, float_pointer_node,
+			      intCI_type_node, intSI_type_node, NULL);
+
+  /* Load size-4 structure operations, double-word.  */
+  tree oi_ftype_const_di_pointer =
+    build_function_type_list (intOI_type_node, const_intDI_pointer_node, NULL);
+
+  /* Load size-4 structure operations, quad-word.  */
+  tree xi_ftype_const_qi_pointer =
+    build_function_type_list (intXI_type_node, const_intQI_pointer_node, NULL);
+  tree xi_ftype_const_hi_pointer =
+    build_function_type_list (intXI_type_node, const_intHI_pointer_node, NULL);
+  tree xi_ftype_const_si_pointer =
+    build_function_type_list (intXI_type_node, const_intSI_pointer_node, NULL);
+  tree xi_ftype_const_sf_pointer =
+    build_function_type_list (intXI_type_node, const_float_pointer_node, NULL);
+
+  /* Load lane size-4 structure operations, double-word.  */
+  tree oi_ftype_const_qi_pointer_oi_si =
+    build_function_type_list (intOI_type_node, const_intQI_pointer_node,
+			      intOI_type_node, intSI_type_node, NULL);
+
+  /* Load lane size-4 structure operations, quad-word.  */
+  tree xi_ftype_const_hi_pointer_xi_si =
+    build_function_type_list (intXI_type_node, const_intHI_pointer_node,
+			      intXI_type_node, intSI_type_node, NULL);
+  tree xi_ftype_const_si_pointer_xi_si =
+    build_function_type_list (intXI_type_node, const_intSI_pointer_node,
+			      intXI_type_node, intSI_type_node, NULL);
+  tree xi_ftype_const_sf_pointer_xi_si =
+    build_function_type_list (intXI_type_node, const_float_pointer_node,
+			      intXI_type_node, intSI_type_node, NULL);
+
+  /* Store size-4 structure operations, double-word.  */
+  tree void_ftype_di_pointer_oi =
+    build_function_type_list (void_type_node, intDI_pointer_node,
+			      intOI_type_node, NULL);
+
+  /* Store size-4 structure operations, quad-word.  */
+  tree void_ftype_qi_pointer_xi =
+    build_function_type_list (void_type_node, intQI_pointer_node,
+			      intXI_type_node, NULL);
+  tree void_ftype_hi_pointer_xi =
+    build_function_type_list (void_type_node, intHI_pointer_node,
+			      intXI_type_node, NULL);
+  tree void_ftype_si_pointer_xi =
+    build_function_type_list (void_type_node, intSI_pointer_node,
+			      intXI_type_node, NULL);
+  tree void_ftype_sf_pointer_xi =
+    build_function_type_list (void_type_node, float_pointer_node,
+			      intXI_type_node, NULL);
+
+  /* Store lane size-4 structure operations, double-word.  */
+  tree void_ftype_qi_pointer_oi_si =
+    build_function_type_list (void_type_node, intQI_pointer_node,
+			      intOI_type_node, intSI_type_node, NULL);
+
+  /* Store lane size-4 structure operations, quad-word.  */
+  tree void_ftype_hi_pointer_xi_si =
+    build_function_type_list (void_type_node, intHI_pointer_node,
+			      intXI_type_node, intSI_type_node, NULL);
+  tree void_ftype_si_pointer_xi_si =
+    build_function_type_list (void_type_node, intSI_pointer_node,
+			      intXI_type_node, intSI_type_node, NULL);
+  tree void_ftype_sf_pointer_xi_si =
+    build_function_type_list (void_type_node, float_pointer_node,
+			      intXI_type_node, intSI_type_node, NULL);
+
+  tree reinterp_ftype_dreg[5][5];
+  tree reinterp_ftype_qreg[5][5];
+  tree dreg_types[5], qreg_types[5];
+
+  dreg_types[0] = V8QI_type_node;
+  dreg_types[1] = V4HI_type_node;
+  dreg_types[2] = V2SI_type_node;
+  dreg_types[3] = V2SF_type_node;
+  dreg_types[4] = neon_intDI_type_node;
+
+  qreg_types[0] = V16QI_type_node;
+  qreg_types[1] = V8HI_type_node;
+  qreg_types[2] = V4SI_type_node;
+  qreg_types[3] = V4SF_type_node;
+  qreg_types[4] = V2DI_type_node;
+  
+  for (i = 0; i < 5; i++)
+    {
+      int j;
+      for (j = 0; j < 5; j++)
+        {
+          reinterp_ftype_dreg[i][j]
+            = build_function_type_list (dreg_types[i], dreg_types[j], NULL);
+          reinterp_ftype_qreg[i][j]
+            = build_function_type_list (qreg_types[i], qreg_types[j], NULL);
+        }
+    }
+
+  for (i = 0; i < ARRAY_SIZE (neon_builtin_data); i++)
+    {
+      neon_builtin_datum *d = &neon_builtin_data[i];
+      unsigned int j, codeidx = 0;
+
+      d->base_fcode = fcode;
+
+      for (j = 0; j < T_MAX; j++)
+        {
+          const char* const modenames[] = {
+            "v8qi", "v4hi", "v2si", "v2sf", "di",
+            "v16qi", "v8hi", "v4si", "v4sf", "v2di"
+          };
+          char namebuf[60];
+          tree ftype = NULL;
+          enum insn_code icode;
+          enum machine_mode tmode, mode0, mode1, mode2, mode3;
+          
+          if ((d->bits & (1 << j)) == 0)
+            continue;
+          
+          icode = d->codes[codeidx++];
+          
+          tmode = insn_data[icode].operand[0].mode;
+          mode0 = insn_data[icode].operand[1].mode;
+          mode1 = insn_data[icode].operand[2].mode;
+          mode2 = insn_data[icode].operand[3].mode;
+          mode3 = insn_data[icode].operand[4].mode;
+          
+          switch (d->itype)
+            {
+            case NEON_UNOP:
+              /* A unary operation with one vector operand and a vector
+                 destination, plus an extra information word.  */
+              gcc_assert (valid_neon_mode (tmode) && valid_neon_mode (mode0)
+                          && mode1 == SImode);
+              switch (tmode)
+                {
+                case V8QImode:
+                  if (mode0 == V8QImode)
+                    ftype = v8qi_ftype_v8qi_si;
+                  else if (mode0 == V8HImode)
+                    ftype = v8qi_ftype_v8hi_si;
+                  break;
+
+                case V4HImode:
+                  if (mode0 == V4HImode)
+                    ftype = v4hi_ftype_v4hi_si;
+                  else if (mode0 == V4SImode)
+                    ftype = v4hi_ftype_v4si_si;
+                  else if (mode0 == V8QImode)
+                    ftype = v4hi_ftype_v8qi_si;
+                  break;
+
+                case V2SImode:
+                  if (mode0 == V2SImode)
+                    ftype = v2si_ftype_v2si_si;
+                  else if (mode0 == V2DImode)
+                    ftype = v2si_ftype_v2di_si;
+                  else if (mode0 == V4HImode)
+                    ftype = v2si_ftype_v4hi_si;
+                  break;
+
+                case V2SFmode:
+                  if (mode0 == V2SFmode)
+                    ftype = v2sf_ftype_v2sf_si;
+                  break;
+
+                case DImode:
+                  if (mode0 == DImode)
+                    ftype = di_ftype_di_si;
+		  else if (mode0 == V2SImode)
+		    ftype = di_ftype_v2si_si;
+                  break;
+
+                case V16QImode:
+                  if (mode0 == V16QImode)
+                    ftype = v16qi_ftype_v16qi_si;
+                  break;
+                
+                case V8HImode:
+                  if (mode0 == V8HImode)
+                    ftype = v8hi_ftype_v8hi_si;
+                  else if (mode0 == V8QImode)
+                    ftype = v8hi_ftype_v8qi_si;
+                  else if (mode0 == V16QImode)
+                    ftype = v8hi_ftype_v16qi_si;
+                  break;
+                
+                case V4SImode:
+                  if (mode0 == V4SImode)
+                    ftype = v4si_ftype_v4si_si;
+                  else if (mode0 == V4HImode)
+                    ftype = v4si_ftype_v4hi_si;
+                  else if (mode0 == V8HImode)
+                    ftype = v4si_ftype_v8hi_si;
+                  break;
+                
+                case V4SFmode:
+                  if (mode0 == V4SFmode)
+                    ftype = v4sf_ftype_v4sf_si;
+                  break;
+                
+                case V2DImode:
+                  if (mode0 == V2DImode)
+                    ftype = v2di_ftype_v2di_si;
+                  else if (mode0 == V2SImode)
+                    ftype = v2di_ftype_v2si_si;
+                  else if (mode0 == V4SImode)
+                    ftype = v2di_ftype_v4si_si;
+                  break;
+
+                default:
+                  gcc_unreachable ();
+                }
+              break;
+
+            case NEON_BINOP:
+            case NEON_LOGICBINOP:
+            case NEON_SHIFTINSERT:
+              /* A binary operation with two vector operands and a vector
+                 destination, plus an extra information word.  */
+              gcc_assert (valid_neon_mode (tmode) && valid_neon_mode (mode0)
+                          && valid_neon_mode (mode1) && mode2 == SImode);
+              switch (tmode)
+                {
+                case V8QImode:
+                  if (mode0 == V8QImode && mode1 == V8QImode)
+                    ftype = v8qi_ftype_v8qi_v8qi_si;
+                  else if (mode0 == V8HImode && mode1 == V8HImode)
+                    ftype = v8qi_ftype_v8hi_v8hi_si;
+                  break;
+
+                case V4HImode:
+                  if (mode0 == V4HImode && mode1 == V4HImode)
+                    ftype = v4hi_ftype_v4hi_v4hi_si;
+                  else if (mode0 == V4SImode && mode1 == V4SImode)
+                    ftype = v4hi_ftype_v4si_v4si_si;
+                  else if (mode0 == V4HImode && mode1 == V8QImode)
+                    ftype = v4hi_ftype_v4hi_v8qi_si;
+                  break;
+
+                case V2SImode:
+                  if (mode0 == V2SImode && mode1 == V2SImode)
+                    ftype = v2si_ftype_v2si_v2si_si;
+                  else if (mode0 == V2DImode && mode1 == V2DImode)
+                    ftype = v2si_ftype_v2di_v2di_si;
+                  else if (mode0 == V2SFmode && mode1 == V2SFmode)
+                    ftype = v2si_ftype_v2sf_v2sf_si;
+                  else if (mode0 == V2SImode && mode1 == V4HImode)
+                    ftype = v2si_ftype_v2si_v4hi_si;
+                  break;
+
+                case V2SFmode:
+                  if (mode0 == V2SFmode && mode1 == V2SFmode)
+                    ftype = v2sf_ftype_v2sf_v2sf_si;
+                  break;
+
+                case DImode:
+                  if (mode0 == DImode && mode1 == DImode)
+                    ftype = di_ftype_di_di_si;
+		  else if (mode0 == DImode && mode1 == V2SImode)
+		    ftype = di_ftype_di_v2si_si;
+                  break;
+
+                case V16QImode:
+                  if (mode0 == V16QImode && mode1 == V16QImode)
+                    ftype = v16qi_ftype_v16qi_v16qi_si;
+                  break;
+
+                case V8HImode:
+                  if (mode0 == V8HImode && mode1 == V8HImode)
+                    ftype = v8hi_ftype_v8hi_v8hi_si;
+                  else if (mode0 == V8QImode && mode1 == V8QImode)
+                    ftype = v8hi_ftype_v8qi_v8qi_si;
+                  else if (mode0 == V8HImode && mode1 == V8QImode)
+                    ftype = v8hi_ftype_v8hi_v8qi_si;
+                  else if (mode0 == V8HImode && mode1 == V16QImode)
+                    ftype = v8hi_ftype_v8hi_v16qi_si;
+                  break;
+
+                case V4SImode:
+                  if (mode0 == V4SImode && mode1 == V4SImode)
+                    ftype = v4si_ftype_v4si_v4si_si;
+                  else if (mode0 == V4HImode && mode1 == V4HImode)
+                    ftype = v4si_ftype_v4hi_v4hi_si;
+                  else if (mode0 == V4SImode && mode1 == V4HImode)
+                    ftype = v4si_ftype_v4si_v4hi_si;
+                  else if (mode0 == V4SFmode && mode1 == V4SFmode)
+                    ftype = v4si_ftype_v4sf_v4sf_si;
+                  else if (mode0 == V4SImode && mode1 == V8HImode)
+                    ftype = v4si_ftype_v4si_v8hi_si;
+                  break;
+
+                case V4SFmode:
+                  if (mode0 == V4SFmode && mode1 == V4SFmode)
+                    ftype = v4sf_ftype_v4sf_v4sf_si;
+                  break;
+
+                case V2DImode:
+                  if (mode0 == V2DImode && mode1 == V2DImode)
+                    ftype = v2di_ftype_v2di_v2di_si;
+                  else if (mode0 == V2SImode && mode1 == V2SImode)
+                    ftype = v2di_ftype_v2si_v2si_si;
+                  else if (mode0 == V2DImode && mode1 == V2SImode)
+                    ftype = v2di_ftype_v2di_v2si_si;
+                  else if (mode0 == V2DImode && mode1 == V4SImode)
+                    ftype = v2di_ftype_v2di_v4si_si;
+                  break;
+
+                default:
+                  gcc_unreachable ();
+                }
+              break;
+
+            case NEON_TERNOP:
+              /* A ternary operation with three vector operands and a
+                 vector destination, plus an extra information
+                 word.  */
+              gcc_assert (valid_neon_mode (tmode) && valid_neon_mode (mode0)
+                          && valid_neon_mode (mode1)
+			  && valid_neon_mode (mode2)
+			  && mode3 == SImode);
+              switch (tmode)
+                {
+                case V8QImode:
+                  if (mode0 == V8QImode && mode1 == V8QImode
+		      && mode2 == V8QImode)
+                    ftype = v8qi_ftype_v8qi_v8qi_v8qi_si;
+                  break;
+
+                case V4HImode:
+                  if (mode0 == V4HImode && mode1 == V4HImode
+		      && mode2 == V4HImode)
+                    ftype = v4hi_ftype_v4hi_v4hi_v4hi_si;
+                  break;
+
+                case V2SImode:
+                  if (mode0 == V2SImode && mode1 == V2SImode
+		      && mode2 == V2SImode)
+                    ftype = v2si_ftype_v2si_v2si_v2si_si;
+                  break;
+
+                case V2SFmode:
+                  if (mode0 == V2SFmode && mode1 == V2SFmode
+		      && mode2 == V2SFmode)
+                    ftype = v2sf_ftype_v2sf_v2sf_v2sf_si;
+                  break;
+
+                case V16QImode:
+                  if (mode0 == V16QImode && mode1 == V16QImode
+		      && mode2 == V16QImode)
+                    ftype = v16qi_ftype_v16qi_v16qi_v16qi_si;
+                  break;
+
+                case V8HImode:
+                  if (mode0 == V8HImode && mode1 == V8HImode
+		      && mode2 == V8HImode)
+                    ftype = v8hi_ftype_v8hi_v8hi_v8hi_si;
+                  else if (mode0 == V8HImode && mode1 == V8QImode
+			   && mode2 == V8QImode)
+                    ftype = v8hi_ftype_v8hi_v8qi_v8qi_si;
+                  break;
+
+                case V4SImode:
+                  if (mode0 == V4SImode && mode1 == V4SImode
+		      && mode2 == V4SImode)
+                    ftype = v4si_ftype_v4si_v4si_v4si_si;
+                  else if (mode0 == V4SImode && mode1 == V4HImode
+			   && mode2 == V4HImode)
+                    ftype = v4si_ftype_v4si_v4hi_v4hi_si;
+                  break;
+
+                case V4SFmode:
+                  if (mode0 == V4SFmode && mode1 == V4SFmode
+		      && mode2 == V4SFmode)
+                    ftype = v4sf_ftype_v4sf_v4sf_v4sf_si;
+                  break;
+
+                case V2DImode:
+                  if (mode0 == V2DImode && mode1 == V2SImode
+		      && mode2 == V2SImode)
+                    ftype = v2di_ftype_v2di_v2si_v2si_si;
+                  break;
+
+                default:
+                  gcc_unreachable ();
+                }
+              break;
+
+	    case NEON_GETLANE:
+              /* Vector lane extraction.  */
+              gcc_assert (valid_neon_mode (mode0) && mode1 == SImode
+			  && mode2 == SImode);
+              switch (tmode)
+		{
+                case QImode:
+                  if (mode0 == V8QImode)
+                    ftype = qi_ftype_v8qi_si_si;
+                  else if (mode0 == V16QImode)
+                    ftype = qi_ftype_v16qi_si_si;
+                  break;
+
+                case HImode:
+                  if (mode0 == V4HImode)
+                    ftype = hi_ftype_v4hi_si_si;
+                  else if (mode0 == V8HImode)
+                    ftype = hi_ftype_v8hi_si_si;
+                  break;
+
+                case SImode:
+                  if (mode0 == V2SImode)
+                    ftype = si_ftype_v2si_si_si;
+                  else if (mode0 == V4SImode)
+                    ftype = si_ftype_v4si_si_si;
+                  break;
+
+                case SFmode:
+                  if (mode0 == V2SFmode)
+                    ftype = sf_ftype_v2sf_si_si;
+                  else if (mode0 == V4SFmode)
+                    ftype = sf_ftype_v4sf_si_si;
+                  break;
+
+                case DImode:
+                  if (mode0 == DImode)
+                    ftype = di_ftype_di_si_si;
+                  else if (mode0 == V2DImode)
+                    ftype = di_ftype_v2di_si_si;
+                  break;
+
+                default:
+                  gcc_unreachable ();
+                }
+              break;
+
+	    case NEON_SETLANE:
+              {
+                /* Set lanes in vector.  */
+                gcc_assert (valid_neon_mode (mode1) && mode2 == SImode);
+                switch (tmode)
+                  {
+                  case V8QImode:
+                    if (mode0 == QImode && mode1 == V8QImode)
+                      ftype = v8qi_ftype_qi_v8qi_si;
+                    break;
+
+                  case V4HImode:
+                    if (mode0 == HImode && mode1 == V4HImode)
+                      ftype = v4hi_ftype_hi_v4hi_si;
+                    break;
+
+                  case V2SImode:
+                    if (mode0 == SImode && mode1 == V2SImode)
+                      ftype = v2si_ftype_si_v2si_si;
+                    break;
+
+                  case V2SFmode:
+                    if (mode0 == SFmode && mode1 == V2SFmode)
+                      ftype = v2sf_ftype_sf_v2sf_si;
+                    break;
+
+                  case DImode:
+                    if (mode0 == DImode && mode1 == DImode)
+                      ftype = di_ftype_di_di_si;
+                    break;
+
+                  case V16QImode:
+                    if (mode0 == QImode && mode1 == V16QImode)
+                      ftype = v16qi_ftype_qi_v16qi_si;
+                    break;
+
+                  case V8HImode:
+                    if (mode0 == HImode && mode1 == V8HImode)
+                      ftype = v8hi_ftype_hi_v8hi_si;
+                    break;
+
+                  case V4SImode:
+                    if (mode0 == SImode && mode1 == V4SImode)
+                      ftype = v4si_ftype_si_v4si_si;
+                    break;
+
+                  case V4SFmode:
+                    if (mode0 == SFmode && mode1 == V4SFmode)
+                      ftype = v4sf_ftype_sf_v4sf_si;
+                    break;
+
+                  case V2DImode:
+                    if (mode0 == DImode && mode1 == V2DImode)
+                      ftype = v2di_ftype_di_v2di_si;
+                    break;
+
+                  default:
+                    gcc_unreachable ();
+                  }
+              }
+              break;
+
+	    case NEON_CREATE:
+              gcc_assert (mode0 == DImode);
+              /* Create vector from bit pattern.  */
+              switch (tmode)
+                {
+                case V8QImode: ftype = v8qi_ftype_di; break;
+                case V4HImode: ftype = v4hi_ftype_di; break;
+                case V2SImode: ftype = v2si_ftype_di; break;
+                case V2SFmode: ftype = v2sf_ftype_di; break;
+                case DImode: ftype = di_ftype_di; break;
+                default: gcc_unreachable ();
+                }
+              break;
+
+	    case NEON_DUP:
+              gcc_assert ((mode0 == DImode && tmode == DImode)
+                          || mode0 == GET_MODE_INNER (tmode));
+              switch (tmode)
+                {
+                case V8QImode:  ftype = v8qi_ftype_qi; break;
+                case V4HImode:  ftype = v4hi_ftype_hi; break;
+                case V2SImode:  ftype = v2si_ftype_si; break;
+                case V2SFmode:  ftype = v2sf_ftype_sf; break;
+                case DImode:    ftype = di_ftype_di; break;
+                case V16QImode: ftype = v16qi_ftype_qi; break;
+                case V8HImode:  ftype = v8hi_ftype_hi; break;
+                case V4SImode:  ftype = v4si_ftype_si; break;
+                case V4SFmode:  ftype = v4sf_ftype_sf; break;
+                case V2DImode:  ftype = v2di_ftype_di; break;
+                default: gcc_unreachable ();
+                }
+              break;
+
+	    case NEON_DUPLANE:
+              gcc_assert (valid_neon_mode (mode0) && mode1 == SImode);
+              switch (tmode)
+                {
+                case V8QImode:  ftype = v8qi_ftype_v8qi_si; break;
+                case V4HImode:  ftype = v4hi_ftype_v4hi_si; break;
+                case V2SImode:  ftype = v2si_ftype_v2si_si; break;
+                case V2SFmode:  ftype = v2sf_ftype_v2sf_si; break;
+                case DImode:    ftype = di_ftype_di_si; break;
+                case V16QImode: ftype = v16qi_ftype_v8qi_si; break;
+                case V8HImode:  ftype = v8hi_ftype_v4hi_si; break;
+                case V4SImode:  ftype = v4si_ftype_v2si_si; break;
+                case V4SFmode:  ftype = v4sf_ftype_v2sf_si; break;
+                case V2DImode:  ftype = v2di_ftype_di_si; break;
+                default: gcc_unreachable ();
+                }
+              break;
+
+	    case NEON_SHIFTIMM:
+              gcc_assert (mode1 == SImode && mode2 == SImode);
+              switch (tmode)
+                {
+                case V8QImode:
+                  if (mode0 == V8QImode)
+                    ftype = v8qi_ftype_v8qi_si_si;
+                  else if (mode0 == V8HImode)
+                    ftype = v8qi_ftype_v8hi_si_si;
+                  break;
+
+                case V4HImode:
+                  if (mode0 == V4HImode)
+                    ftype = v4hi_ftype_v4hi_si_si;
+                  else if (mode0 == V4SImode)
+                    ftype = v4hi_ftype_v4si_si_si;
+                  break;
+                  
+                case V2SImode:
+                  if (mode0 == V2SImode)
+		    ftype = v2si_ftype_v2si_si_si;
+                  else if (mode0 == V2DImode)
+                    ftype = v2si_ftype_v2di_si_si;
+                  break;
+
+                case DImode:
+                  if (mode0 == DImode)
+		    ftype = di_ftype_di_si_si;
+                  break;
+
+                case V16QImode:
+                  if (mode0 == V16QImode)
+                    ftype = v16qi_ftype_v16qi_si_si;
+                  break;
+
+                case V8HImode:
+                  if (mode0 == V8HImode)
+                    ftype = v8hi_ftype_v8hi_si_si;
+                  else if (mode0 == V8QImode)
+                    ftype = v8hi_ftype_v8qi_si_si;
+                  break;
+
+                case V4SImode:
+                  if (mode0 == V4SImode)
+                    ftype = v4si_ftype_v4si_si_si;
+                  else if (mode0 == V4HImode)
+                    ftype = v4si_ftype_v4hi_si_si;
+                  break;
+
+                case V2DImode:
+                  if (mode0 == V2DImode)
+                    ftype = v2di_ftype_v2di_si_si;
+                  else if (mode0 == V2SImode)
+                    ftype = v2di_ftype_v2si_si_si;
+                  break;
+
+                default: gcc_unreachable ();
+                }
+              break;
+
+	    case NEON_SHIFTACC:
+              gcc_assert (tmode == mode0 && mode0 == mode1 && mode2 == SImode
+			  && mode3 == SImode);
+	      switch (tmode)
+                {
+                case V8QImode:  ftype = v8qi_ftype_v8qi_v8qi_si_si; break;
+                case V4HImode:  ftype = v4hi_ftype_v4hi_v4hi_si_si; break;
+                case V2SImode:  ftype = v2si_ftype_v2si_v2si_si_si; break;
+                case V2SFmode:  ftype = v2sf_ftype_v2sf_v2sf_si_si; break;
+                case DImode:    ftype = di_ftype_di_di_si_si; break;
+                case V16QImode: ftype = v16qi_ftype_v16qi_v16qi_si_si; break;
+                case V8HImode:  ftype = v8hi_ftype_v8hi_v8hi_si_si; break;
+                case V4SImode:  ftype = v4si_ftype_v4si_v4si_si_si; break;
+                case V4SFmode:  ftype = v4sf_ftype_v4sf_v4sf_si_si; break;
+                case V2DImode:  ftype = v2di_ftype_v2di_v2di_si_si; break;
+                default: gcc_unreachable ();
+                }
+              break;
+
+	    case NEON_COMBINE:
+              gcc_assert (valid_neon_mode (mode0) && valid_neon_mode (mode1));
+              switch (tmode)
+                {
+                case V16QImode:
+                  if (mode0 == V8QImode && mode1 == V8QImode)
+                    ftype = v16qi_ftype_v8qi_v8qi;
+                  break;
+                
+                case V8HImode:
+                  if (mode0 == V4HImode && mode1 == V4HImode)
+                    ftype = v8hi_ftype_v4hi_v4hi;
+                  break;
+                  
+                case V4SImode:
+                  if (mode0 == V2SImode && mode1 == V2SImode)
+                    ftype = v4si_ftype_v2si_v2si;
+                  break;
+                  
+                case V4SFmode:
+                  if (mode0 == V2SFmode && mode1 == V2SFmode)
+                    ftype = v4sf_ftype_v2sf_v2sf;
+                  break;
+                  
+                case V2DImode:
+                  if (mode0 == DImode && mode1 == DImode)
+                    ftype = v2di_ftype_di_di;
+                  break;
+
+                default:
+                  gcc_unreachable ();
+                }
+              break;
+
+	    case NEON_SPLIT:
+              gcc_assert (valid_neon_mode (mode0));
+              switch (tmode)
+                {
+                case V8QImode:
+                  if (mode0 == V16QImode)
+                    ftype = v8qi_ftype_v16qi;
+                  break;
+
+                case V4HImode:
+                  if (mode0 == V8HImode)
+                    ftype = v4hi_ftype_v8hi;
+                  break;
+
+                case V2SImode:
+                  if (mode0 == V4SImode)
+                    ftype = v2si_ftype_v4si;
+                  break;
+
+                case V2SFmode:
+                  if (mode0 == V4SFmode)
+                    ftype = v2sf_ftype_v4sf;
+                  break;
+
+                case DImode:
+                  if (mode0 == V2DImode)
+                    ftype = di_ftype_v2di;
+                  break;
+
+                default:
+                  gcc_unreachable ();
+		}
+              break;
+
+	    case NEON_CONVERT:
+              gcc_assert (valid_neon_mode (mode0) && mode1 == SImode);
+              switch (tmode)
+                {
+                case V2SImode:
+                  if (mode0 == V2SFmode)
+                    ftype = v2si_ftype_v2sf_si;
+                  break;
+
+                case V2SFmode:
+                  if (mode0 == V2SImode)
+                    ftype = v2sf_ftype_v2si_si;
+                  break;
+
+                case V4SImode:
+                  if (mode0 == V4SFmode)
+                    ftype = v4si_ftype_v4sf_si;
+                  break;
+
+                case V4SFmode:
+                  if (mode0 == V4SImode)
+                    ftype = v4sf_ftype_v4si_si;
+                  break;
+
+                default: gcc_unreachable ();
+                }
+              break;
+
+	    case NEON_FIXCONV:
+              gcc_assert (valid_neon_mode (mode0) && mode1 == SImode
+			  && mode2 == SImode);
+              switch (tmode)
+		{
+                case V2SImode:
+                  if (mode0 == V2SFmode)
+                    ftype = v2si_ftype_v2sf_si_si;
+                  break;
+
+                case V2SFmode:
+                  if (mode0 == V2SImode)
+                    ftype = v2sf_ftype_v2si_si_si;
+                  break;
+
+                case V4SImode:
+                  if (mode0 == V4SFmode)
+                    ftype = v4si_ftype_v4sf_si_si;
+                  break;
+
+                case V4SFmode:
+                  if (mode0 == V4SImode)
+                    ftype = v4sf_ftype_v4si_si_si;
+                  break;
+                
+                default:
+                  gcc_unreachable ();
+                }
+              break;
+
+	    case NEON_LANEMUL:
+              {
+                enum machine_mode mode3 = insn_data[icode].operand[4].mode;
+                gcc_assert (valid_neon_mode (mode0) && valid_neon_mode (mode1)
+			    && mode2 == SImode && mode3 == SImode);
+                switch (tmode)
+                  {
+                  case V4HImode:
+                    if (mode0 == V4HImode && mode1 == V4HImode)
+                      ftype = v4hi_ftype_v4hi_v4hi_si_si;
+                    break;
+                  
+                  case V2SImode:
+                    if (mode0 == V2SImode && mode1 == V2SImode)
+                      ftype = v2si_ftype_v2si_v2si_si_si;
+                    break;
+                  
+                  case V2SFmode:
+                    if (mode0 == V2SFmode && mode1 == V2SFmode)
+                      ftype = v2sf_ftype_v2sf_v2sf_si_si;
+                    break;
+                  
+                  case V8HImode:
+                    if (mode0 == V8HImode && mode1 == V4HImode)
+                      ftype = v8hi_ftype_v8hi_v4hi_si_si;
+                    break;
+                  
+                  case V4SImode:
+                    if (mode0 == V4SImode && mode1 == V2SImode)
+                      ftype = v4si_ftype_v4si_v2si_si_si;
+                    break;
+                  
+                  case V4SFmode:
+                    if (mode0 == V4SFmode && mode1 == V2SFmode)
+                      ftype = v4sf_ftype_v4sf_v2sf_si_si;
+                    break;
+                  
+                  default:
+                    gcc_unreachable ();
+                  }
+              }
+              break;
+
+	    case NEON_LANEMULL:
+              {
+                enum machine_mode mode3 = insn_data[icode].operand[4].mode;
+                gcc_assert (valid_neon_mode (mode0) && valid_neon_mode (mode1)
+			    && mode2 == SImode && mode3 == SImode);
+                switch (tmode)
+                  {
+                  case V4SImode:
+                    if (mode0 == V4HImode && mode1 == V4HImode)
+                      ftype = v4si_ftype_v4hi_v4hi_si_si;
+                    break;
+                  
+                  case V2DImode:
+                    if (mode0 == V2SImode && mode1 == V2SImode)
+                      ftype = v2di_ftype_v2si_v2si_si_si;
+                    break;
+                  
+                  default:
+                    gcc_unreachable ();
+                  }
+              }
+              break;
+
+	    case NEON_LANEMULH:
+              {
+                enum machine_mode mode3 = insn_data[icode].operand[4].mode;
+                gcc_assert (valid_neon_mode (mode0) && valid_neon_mode (mode1)
+			    && mode2 == SImode && mode3 == SImode);
+                switch (tmode)
+                  {
+                  case V4SImode:
+                    if (mode0 == V4SImode && mode1 == V2SImode)
+                      ftype = v4si_ftype_v4si_v2si_si_si;
+                    break;
+                  
+                  case V8HImode:
+                    if (mode0 == V8HImode && mode1 == V4HImode)
+                      ftype = v8hi_ftype_v8hi_v4hi_si_si;
+                    break;
+
+                  case V2SImode:
+                    if (mode0 == V2SImode && mode1 == V2SImode)
+                      ftype = v2si_ftype_v2si_v2si_si_si;
+                    break;
+                  
+                  case V4HImode:
+                    if (mode0 == V4HImode && mode1 == V4HImode)
+                      ftype = v4hi_ftype_v4hi_v4hi_si_si;
+                    break;
+                  
+                  default:
+                    gcc_unreachable ();
+                  }
+              }
+              break;
+
+	    case NEON_LANEMAC:
+              {
+                enum machine_mode mode3 = insn_data[icode].operand[4].mode;
+                enum machine_mode mode4 = insn_data[icode].operand[5].mode;
+                gcc_assert (valid_neon_mode (mode0) && valid_neon_mode (mode1)
+			    && valid_neon_mode (mode2) && mode3 == SImode
+                            && mode4 == SImode);
+                switch (tmode)
+                  {
+                  case V4HImode:
+                    if (mode0 == V4HImode && mode1 == V4HImode
+			&& mode2 == V4HImode)
+                      ftype = v4hi_ftype_v4hi_v4hi_v4hi_si_si;
+                    break;
+                  
+                  case V2SImode:
+                    if (mode0 == V2SImode && mode1 == V2SImode
+			&& mode2 == V2SImode)
+                      ftype = v2si_ftype_v2si_v2si_v2si_si_si;
+                    break;
+                  
+                  case V2SFmode:
+                    if (mode0 == V2SFmode && mode1 == V2SFmode
+			&& mode2 == V2SFmode)
+                      ftype = v2sf_ftype_v2sf_v2sf_v2sf_si_si;
+                    break;
+                  
+                  case V8HImode:
+                    if (mode0 == V8HImode && mode1 == V8HImode
+			&& mode2 == V4HImode)
+                      ftype = v8hi_ftype_v8hi_v8hi_v4hi_si_si;
+                    break;
+                  
+                  case V4SImode:
+                    if (mode0 == V4SImode && mode1 == V4SImode
+			&& mode2 == V2SImode)
+                      ftype = v4si_ftype_v4si_v4si_v2si_si_si;
+                    else if (mode0 == V4SImode && mode1 == V4HImode
+			&& mode2 == V4HImode)
+                      ftype = v4si_ftype_v4si_v4hi_v4hi_si_si;
+                    break;
+                  
+                  case V4SFmode:
+                    if (mode0 == V4SFmode && mode1 == V4SFmode
+			&& mode2 == V2SFmode)
+                      ftype = v4sf_ftype_v4sf_v4sf_v2sf_si_si;
+                    break;
+                  
+                  case V2DImode:
+                    if (mode0 == V2DImode && mode1 == V2SImode
+			&& mode2 == V2SImode)
+                      ftype = v2di_ftype_v2di_v2si_v2si_si_si;
+                    break;
+                  
+                  default:
+                    gcc_unreachable ();
+                  }
+              }
+              break;
+
+	    case NEON_SCALARMUL:
+              switch (tmode)
+                {
+                case V4HImode:
+                  if (mode0 == V4HImode && mode1 == HImode)
+                    ftype = v4hi_ftype_v4hi_hi_si;
+                  break;
+
+                case V2SImode:
+                  if (mode0 == V2SImode && mode1 == SImode)
+                    ftype = v2si_ftype_v2si_si_si;
+                  break;
+
+                case V2SFmode:
+                  if (mode0 == V2SFmode && mode1 == SFmode)
+                    ftype = v2sf_ftype_v2sf_sf_si;
+                  break;
+
+                case V8HImode:
+                  if (mode0 == V8HImode && mode1 == HImode)
+                    ftype = v8hi_ftype_v8hi_hi_si;
+                  break;
+
+                case V4SImode:
+                  if (mode0 == V4SImode && mode1 == SImode)
+                    ftype = v4si_ftype_v4si_si_si;
+                  break;
+
+                case V4SFmode:
+                  if (mode0 == V4SFmode && mode1 == SFmode)
+                    ftype = v4sf_ftype_v4sf_sf_si;
+                  break;
+
+                default:
+                  gcc_unreachable ();
+                }
+              break;
+
+	    case NEON_SCALARMULL:
+              switch (tmode)
+                {
+                case V4SImode:
+                  if (mode0 == V4HImode && mode1 == HImode)
+                    ftype = v4si_ftype_v4hi_hi_si;
+                  break;
+
+                case V2DImode:
+                  if (mode0 == V2SImode && mode1 == SImode)
+                    ftype = v2di_ftype_v2si_si_si;
+                  break;
+
+                default:
+                  gcc_unreachable ();
+                }
+              break;
+
+	    case NEON_SCALARMULH:
+              {
+                switch (tmode)
+                  {
+                  case V4SImode:
+                    if (mode0 == V4SImode && mode1 == SImode)
+                      ftype = v4si_ftype_v4si_si_si;
+                    break;
+                  
+                  case V8HImode:
+                    if (mode0 == V8HImode && mode1 == HImode)
+                      ftype = v8hi_ftype_v8hi_hi_si;
+                    break;
+
+                  case V2SImode:
+                    if (mode0 == V2SImode && mode1 == SImode)
+                      ftype = v2si_ftype_v2si_si_si;
+                    break;
+                  
+                  case V4HImode:
+                    if (mode0 == V4HImode && mode1 == HImode)
+                      ftype = v4hi_ftype_v4hi_hi_si;
+                    break;
+                  
+                  default:
+                    gcc_unreachable ();
+                  }
+              }
+              break;
+
+	    case NEON_SCALARMAC:
+              {
+                gcc_assert (mode2 == GET_MODE_INNER (mode1));
+                switch (tmode)
+                  {
+                  case V4HImode:
+                    if (mode0 == V4HImode && mode1 == V4HImode)
+                      ftype = v4hi_ftype_v4hi_v4hi_hi_si;
+                    break;
+
+                  case V2SImode:
+                    if (mode0 == V2SImode && mode1 == V2SImode)
+                      ftype = v2si_ftype_v2si_v2si_si_si;
+                    break;
+
+                  case V2SFmode:
+                    if (mode0 == V2SFmode && mode1 == V2SFmode)
+                      ftype = v2sf_ftype_v2sf_v2sf_sf_si;
+                    break;
+
+                  case V8HImode:
+                    if (mode0 == V8HImode && mode1 == V8HImode)
+                      ftype = v8hi_ftype_v8hi_v8hi_hi_si;
+                    break;
+
+                  case V4SImode:
+                    if (mode0 == V4SImode && mode1 == V4SImode)
+                      ftype = v4si_ftype_v4si_v4si_si_si;
+                    else if (mode0 == V4SImode && mode1 == V4HImode)
+                      ftype = v4si_ftype_v4si_v4hi_hi_si;
+                    break;
+
+                  case V4SFmode:
+                    if (mode0 == V4SFmode && mode1 == V4SFmode)
+                      ftype = v4sf_ftype_v4sf_v4sf_sf_si;
+                    break;
+
+		  case V2DImode:
+                    if (mode0 == V2DImode && mode1 == V2SImode)
+                      ftype = v2di_ftype_v2di_v2si_si_si;
+                    break;
+
+                  default:
+                    gcc_unreachable ();
+                  }
+              }
+              break;
+
+	    case NEON_SELECT:
+              gcc_assert (mode1 == mode2
+                          && (mode0 == mode1
+                              || (mode0 == V2SImode && mode1 == V2SFmode)
+                              || (mode0 == V4SImode && mode1 == V4SFmode)));
+              switch (tmode)
+                {
+                case V8QImode: ftype = v8qi_ftype_v8qi_v8qi_v8qi; break;
+                case V4HImode: ftype = v4hi_ftype_v4hi_v4hi_v4hi; break;
+                case V2SImode: ftype = v2si_ftype_v2si_v2si_v2si; break;
+                case V2SFmode: ftype = v2sf_ftype_v2si_v2sf_v2sf; break;
+                case DImode: ftype = di_ftype_di_di_di; break;
+                case V16QImode: ftype = v16qi_ftype_v16qi_v16qi_v16qi; break;
+                case V8HImode: ftype = v8hi_ftype_v8hi_v8hi_v8hi; break;
+                case V4SImode: ftype = v4si_ftype_v4si_v4si_v4si; break;
+                case V4SFmode: ftype = v4sf_ftype_v4si_v4sf_v4sf; break;
+                case V2DImode: ftype = v2di_ftype_v2di_v2di_v2di; break;
+                default: gcc_unreachable ();
+                }
+              break;
+
+	    case NEON_VTBL:
+              {
+                gcc_assert (tmode == V8QImode && mode1 == V8QImode);
+                switch (mode0)
+                  {
+                  case V8QImode: ftype = v8qi_ftype_v8qi_v8qi; break;
+                  case TImode: ftype = v8qi_ftype_ti_v8qi; break;
+                  case EImode: ftype = v8qi_ftype_ei_v8qi; break;
+                  case OImode: ftype = v8qi_ftype_oi_v8qi; break;
+                  default: gcc_unreachable ();
+                  }
+              }
+              break;
+
+	    case NEON_VTBX:
+              {
+                gcc_assert (tmode == V8QImode && mode0 == V8QImode
+			    && mode2 == V8QImode);
+                switch (mode1)
+                  {
+                  case V8QImode: ftype = v8qi_ftype_v8qi_v8qi_v8qi; break;
+                  case TImode: ftype = v8qi_ftype_v8qi_ti_v8qi; break;
+                  case EImode: ftype = v8qi_ftype_v8qi_ei_v8qi; break;
+                  case OImode: ftype = v8qi_ftype_v8qi_oi_v8qi; break;
+                  default: gcc_unreachable ();
+                  }
+              }
+              break;
+
+	    case NEON_RESULTPAIR:
+              {
+                switch (mode0)
+                  {
+		  case V8QImode: ftype = void_ftype_pv8qi_v8qi_v8qi; break;
+                  case V4HImode: ftype = void_ftype_pv4hi_v4hi_v4hi; break;
+                  case V2SImode: ftype = void_ftype_pv2si_v2si_v2si; break;
+                  case V2SFmode: ftype = void_ftype_pv2sf_v2sf_v2sf; break;
+                  case DImode: ftype = void_ftype_pdi_di_di; break;
+                  case V16QImode: ftype = void_ftype_pv16qi_v16qi_v16qi; break;
+                  case V8HImode: ftype = void_ftype_pv8hi_v8hi_v8hi; break;
+                  case V4SImode: ftype = void_ftype_pv4si_v4si_v4si; break;
+                  case V4SFmode: ftype = void_ftype_pv4sf_v4sf_v4sf; break;
+                  case V2DImode: ftype = void_ftype_pv2di_v2di_v2di; break;
+                  default: gcc_unreachable ();
+                  }
+              }
+              break;
+
+	    case NEON_REINTERP:
+              {
+                /* We iterate over 5 doubleword types, then 5 quadword
+                   types.  */
+                int rhs = j % 5;
+                switch (tmode)
+                  {
+                  case V8QImode: ftype = reinterp_ftype_dreg[0][rhs]; break;
+                  case V4HImode: ftype = reinterp_ftype_dreg[1][rhs]; break;
+                  case V2SImode: ftype = reinterp_ftype_dreg[2][rhs]; break;
+                  case V2SFmode: ftype = reinterp_ftype_dreg[3][rhs]; break;
+                  case DImode: ftype = reinterp_ftype_dreg[4][rhs]; break;
+                  case V16QImode: ftype = reinterp_ftype_qreg[0][rhs]; break;
+                  case V8HImode: ftype = reinterp_ftype_qreg[1][rhs]; break;
+                  case V4SImode: ftype = reinterp_ftype_qreg[2][rhs]; break;
+		  case V4SFmode: ftype = reinterp_ftype_qreg[3][rhs]; break;
+                  case V2DImode: ftype = reinterp_ftype_qreg[4][rhs]; break;
+                  default: gcc_unreachable ();
+                  }
+              }
+              break;
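The NEON_REINTERP case recovers the source type from the variant index: builtins are emitted for five doubleword source types followed by five quadword ones, so j % 5 selects the table column while the result mode selects the row. A small sketch of that indexing, with hypothetical type-name strings in place of the reinterp_ftype_dreg/qreg tree tables (the source-type order below is inferred from the surrounding switch):

```c
/* The five source types of each reinterpret group, in assumed table
   order.  */
static const char *const reinterp_src_names[5] =
  { "v8qi", "v4hi", "v2si", "v2sf", "di" };

/* Map the j-th variant to its source column, as `int rhs = j % 5;'
   does in the code above; the quadword group (j >= 5) wraps around
   to the same columns.  */
static const char *
reinterp_src (int j)
{
  return reinterp_src_names[j % 5];
}
```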
+
+	    case NEON_LOAD1:
+              switch (tmode)
+                {
+                case V8QImode: ftype = v8qi_ftype_const_qi_pointer; break;
+                case V4HImode: ftype = v4hi_ftype_const_hi_pointer; break;
+                case V2SImode: ftype = v2si_ftype_const_si_pointer; break;
+                case V2SFmode: ftype = v2sf_ftype_const_sf_pointer; break;
+                case DImode: ftype = di_ftype_const_di_pointer; break;
+                case V16QImode: ftype = v16qi_ftype_const_qi_pointer; break;
+                case V8HImode: ftype = v8hi_ftype_const_hi_pointer; break;
+                case V4SImode: ftype = v4si_ftype_const_si_pointer; break;
+                case V4SFmode: ftype = v4sf_ftype_const_sf_pointer; break;
+                case V2DImode: ftype = v2di_ftype_const_di_pointer; break;
+                default: gcc_unreachable ();
+                }
+              break;
+
+	    case NEON_LOAD1LANE:
+              switch (tmode)
+                {
+                case V8QImode:
+		  ftype = v8qi_ftype_const_qi_pointer_v8qi_si;
+		  break;
+                case V4HImode:
+		  ftype = v4hi_ftype_const_hi_pointer_v4hi_si;
+		  break;
+                case V2SImode:
+		  ftype = v2si_ftype_const_si_pointer_v2si_si;
+		  break;
+                case V2SFmode:
+		  ftype = v2sf_ftype_const_sf_pointer_v2sf_si;
+		  break;
+                case DImode:
+		  ftype = di_ftype_const_di_pointer_di_si;
+		  break;
+                case V16QImode:
+		  ftype = v16qi_ftype_const_qi_pointer_v16qi_si;
+		  break;
+                case V8HImode:
+		  ftype = v8hi_ftype_const_hi_pointer_v8hi_si;
+		  break;
+                case V4SImode:
+		  ftype = v4si_ftype_const_si_pointer_v4si_si;
+		  break;
+                case V4SFmode:
+		  ftype = v4sf_ftype_const_sf_pointer_v4sf_si;
+		  break;
+                case V2DImode:
+		  ftype = v2di_ftype_const_di_pointer_v2di_si;
+		  break;
+                default:
+		  gcc_unreachable ();
+                }
+              break;
+
+	    case NEON_STORE1:
+              switch (mode0)
+                {
+                case V8QImode: ftype = void_ftype_qi_pointer_v8qi; break;
+                case V4HImode: ftype = void_ftype_hi_pointer_v4hi; break;
+                case V2SImode: ftype = void_ftype_si_pointer_v2si; break;
+                case V2SFmode: ftype = void_ftype_sf_pointer_v2sf; break;
+                case DImode: ftype = void_ftype_di_pointer_di; break;
+                case V16QImode: ftype = void_ftype_qi_pointer_v16qi; break;
+                case V8HImode: ftype = void_ftype_hi_pointer_v8hi; break;
+                case V4SImode: ftype = void_ftype_si_pointer_v4si; break;
+                case V4SFmode: ftype = void_ftype_sf_pointer_v4sf; break;
+                case V2DImode: ftype = void_ftype_di_pointer_v2di; break;
+                default: gcc_unreachable ();
+                }
+              break;
+
+	    case NEON_STORE1LANE:
+              switch (mode0)
+                {
+                case V8QImode: ftype = void_ftype_qi_pointer_v8qi_si; break;
+                case V4HImode: ftype = void_ftype_hi_pointer_v4hi_si; break;
+                case V2SImode: ftype = void_ftype_si_pointer_v2si_si; break;
+                case V2SFmode: ftype = void_ftype_sf_pointer_v2sf_si; break;
+                case DImode: ftype = void_ftype_di_pointer_di_si; break;
+                case V16QImode: ftype = void_ftype_qi_pointer_v16qi_si; break;
+                case V8HImode: ftype = void_ftype_hi_pointer_v8hi_si; break;
+                case V4SImode: ftype = void_ftype_si_pointer_v4si_si; break;
+                case V4SFmode: ftype = void_ftype_sf_pointer_v4sf_si; break;
+                case V2DImode: ftype = void_ftype_di_pointer_v2di_si; break;
+                default: gcc_unreachable ();
+                }
+              break;
+
+	    case NEON_LOADSTRUCT:
+	      switch (tmode)
+		{
+		case TImode:
+		  /* vld2 cases.  */
+		  switch (1 << j)
+		    {
+		    case T_V8QI: ftype = ti_ftype_const_qi_pointer; break;
+		    case T_V4HI: ftype = ti_ftype_const_hi_pointer; break;
+		    case T_V2SI: ftype = ti_ftype_const_si_pointer; break;
+		    case T_V2SF: ftype = ti_ftype_const_sf_pointer; break;
+		    case T_DI: ftype = ti_ftype_const_di_pointer; break;
+		    default: gcc_unreachable ();
+		    }
+		  break;
+
+		case EImode:
+		  /* vld3 cases.  */
+		  switch (1 << j)
+		    {
+		    case T_V8QI: ftype = ei_ftype_const_qi_pointer; break;
+		    case T_V4HI: ftype = ei_ftype_const_hi_pointer; break;
+		    case T_V2SI: ftype = ei_ftype_const_si_pointer; break;
+		    case T_V2SF: ftype = ei_ftype_const_sf_pointer; break;
+		    case T_DI: ftype = ei_ftype_const_di_pointer; break;
+		    default: gcc_unreachable ();
+		    }
+		  break;
+
+		case OImode:
+		  /* vld2q and vld4 cases.  */
+		  switch (1 << j)
+		    {
+		      /* vld2q cases.  */
+		    case T_V16QI: ftype = oi_ftype_const_qi_pointer; break;
+		    case T_V8HI: ftype = oi_ftype_const_hi_pointer; break;
+		    case T_V4SI: ftype = oi_ftype_const_si_pointer; break;
+		    case T_V4SF: ftype = oi_ftype_const_sf_pointer; break;
+		      /* vld4 cases.  */
+		    case T_V8QI: ftype = oi_ftype_const_qi_pointer; break;
+		    case T_V4HI: ftype = oi_ftype_const_hi_pointer; break;
+		    case T_V2SI: ftype = oi_ftype_const_si_pointer; break;
+		    case T_V2SF: ftype = oi_ftype_const_sf_pointer; break;
+		    case T_DI: ftype = oi_ftype_const_di_pointer; break;
+		    default: gcc_unreachable ();
+		    }
+		  break;
+
+		case CImode:
+		  /* vld3q cases.  */
+		  switch (1 << j)
+		    {
+		    case T_V16QI: ftype = ci_ftype_const_qi_pointer; break;
+		    case T_V8HI: ftype = ci_ftype_const_hi_pointer; break;
+		    case T_V4SI: ftype = ci_ftype_const_si_pointer; break;
+		    case T_V4SF: ftype = ci_ftype_const_sf_pointer; break;
+		    default: gcc_unreachable ();
+		    }
+		  break;
+
+		case XImode:
+		  /* vld4q cases.  */
+		  switch (1 << j)
+		    {
+		    case T_V16QI: ftype = xi_ftype_const_qi_pointer; break;
+		    case T_V8HI: ftype = xi_ftype_const_hi_pointer; break;
+		    case T_V4SI: ftype = xi_ftype_const_si_pointer; break;
+		    case T_V4SF: ftype = xi_ftype_const_sf_pointer; break;
+		    default: gcc_unreachable ();
+		    }
+		  break;
+
+		default:
+		  gcc_unreachable ();
+		}
+              break;
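The `switch (1 << j)` pattern used by the structure load/store cases relies on the T_* qualifiers being single-bit flags, so shifting turns the variant index j back into its flag. A standalone sketch with hypothetical flag values laid out the same way:

```c
#include <stddef.h>

/* One-bit type flags; the values here are an assumed layout
   mirroring the T_* qualifiers used above.  */
enum type_flag
{
  T_V8QI = 1 << 0, T_V4HI = 1 << 1, T_V2SI = 1 << 2,
  T_V2SF = 1 << 3, T_DI   = 1 << 4
};

/* Select a vld2-style function-type name from the variant index j.  */
static const char *
vld2_ftype_name (int j)
{
  switch (1 << j)
    {
    case T_V8QI: return "ti_ftype_const_qi_pointer";
    case T_V4HI: return "ti_ftype_const_hi_pointer";
    case T_V2SI: return "ti_ftype_const_si_pointer";
    case T_V2SF: return "ti_ftype_const_sf_pointer";
    case T_DI:   return "ti_ftype_const_di_pointer";
    default:     return NULL;  /* gcc_unreachable () in the real code.  */
    }
}
```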
+
+	    case NEON_LOADSTRUCTLANE:
+	      switch (tmode)
+		{
+		case TImode:
+		  /* vld2_lane cases.  */
+		  switch (1 << j)
+		    {
+		    case T_V8QI:
+		      ftype = ti_ftype_const_qi_pointer_ti_si;
+		      break;
+		    case T_V4HI:
+		      ftype = ti_ftype_const_hi_pointer_ti_si;
+		      break;
+		    case T_V2SI:
+		      ftype = ti_ftype_const_si_pointer_ti_si;
+		      break;
+		    case T_V2SF:
+		      ftype = ti_ftype_const_sf_pointer_ti_si;
+		      break;
+		    default:
+		      gcc_unreachable ();
+		    }
+		  break;
+
+		case EImode:
+		  /* vld3_lane cases.  */
+		  switch (1 << j)
+		    {
+		    case T_V8QI:
+		      ftype = ei_ftype_const_qi_pointer_ei_si;
+		      break;
+		    case T_V4HI:
+		      ftype = ei_ftype_const_hi_pointer_ei_si;
+		      break;
+		    case T_V2SI:
+		      ftype = ei_ftype_const_si_pointer_ei_si;
+		      break;
+		    case T_V2SF:
+		      ftype = ei_ftype_const_sf_pointer_ei_si;
+		      break;
+		    default:
+		      gcc_unreachable ();
+		    }
+		  break;
+
+		case OImode:
+		  /* vld2q_lane and vld4_lane cases.  */
+		  switch (1 << j)
+		    {
+		      /* vld2q_lane cases.  */
+		    case T_V8HI:
+		      ftype = oi_ftype_const_hi_pointer_oi_si;
+		      break;
+		    case T_V4SI:
+		      ftype = oi_ftype_const_si_pointer_oi_si;
+		      break;
+		    case T_V4SF:
+		      ftype = oi_ftype_const_sf_pointer_oi_si;
+		      break;
+		      /* vld4_lane cases.  */
+		    case T_V8QI:
+		      ftype = oi_ftype_const_qi_pointer_oi_si;
+		      break;
+		    case T_V4HI:
+		      ftype = oi_ftype_const_hi_pointer_oi_si;
+		      break;
+		    case T_V2SI:
+		      ftype = oi_ftype_const_si_pointer_oi_si;
+		      break;
+		    case T_V2SF:
+		      ftype = oi_ftype_const_sf_pointer_oi_si;
+		      break;
+		    default:
+		      gcc_unreachable ();
+		    }
+		  break;
+
+		case CImode:
+		  /* vld3q_lane cases.  */
+		  switch (1 << j)
+		    {
+		    case T_V8HI:
+		      ftype = ci_ftype_const_hi_pointer_ci_si;
+		      break;
+		    case T_V4SI:
+		      ftype = ci_ftype_const_si_pointer_ci_si;
+		      break;
+		    case T_V4SF:
+		      ftype = ci_ftype_const_sf_pointer_ci_si;
+		      break;
+		    default:
+		      gcc_unreachable ();
+		    }
+		  break;
+
+		case XImode:
+		  /* vld4q_lane cases.  */
+		  switch (1 << j)
+		    {
+		    case T_V8HI:
+		      ftype = xi_ftype_const_hi_pointer_xi_si;
+		      break;
+		    case T_V4SI:
+		      ftype = xi_ftype_const_si_pointer_xi_si;
+		      break;
+		    case T_V4SF:
+		      ftype = xi_ftype_const_sf_pointer_xi_si;
+		      break;
+		    default:
+		      gcc_unreachable ();
+		    }
+		  break;
+
+		default:
+		  gcc_unreachable ();
+		}
+              break;
+
+	    case NEON_STORESTRUCT:
+	      switch (mode0)
+		{
+		case TImode:
+		  /* vst2 cases.  */
+		  switch (1 << j)
+		    {
+		    case T_V8QI: ftype = void_ftype_qi_pointer_ti; break;
+		    case T_V4HI: ftype = void_ftype_hi_pointer_ti; break;
+		    case T_V2SI: ftype = void_ftype_si_pointer_ti; break;
+		    case T_V2SF: ftype = void_ftype_sf_pointer_ti; break;
+		    case T_DI: ftype = void_ftype_di_pointer_ti; break;
+		    default: gcc_unreachable ();
+		    }
+		  break;
+
+		case EImode:
+		  /* vst3 cases.  */
+		  switch (1 << j)
+		    {
+		    case T_V8QI: ftype = void_ftype_qi_pointer_ei; break;
+		    case T_V4HI: ftype = void_ftype_hi_pointer_ei; break;
+		    case T_V2SI: ftype = void_ftype_si_pointer_ei; break;
+		    case T_V2SF: ftype = void_ftype_sf_pointer_ei; break;
+		    case T_DI: ftype = void_ftype_di_pointer_ei; break;
+		    default: gcc_unreachable ();
+		    }
+		  break;
+
+		case OImode:
+		  /* vst2q and vst4 cases.  */
+		  switch (1 << j)
+		    {
+		      /* vst2q cases.  */
+		    case T_V16QI: ftype = void_ftype_qi_pointer_oi; break;
+		    case T_V8HI: ftype = void_ftype_hi_pointer_oi; break;
+		    case T_V4SI: ftype = void_ftype_si_pointer_oi; break;
+		    case T_V4SF: ftype = void_ftype_sf_pointer_oi; break;
+		      /* vst4 cases.  */
+		    case T_V8QI: ftype = void_ftype_qi_pointer_oi; break;
+		    case T_V4HI: ftype = void_ftype_hi_pointer_oi; break;
+		    case T_V2SI: ftype = void_ftype_si_pointer_oi; break;
+		    case T_V2SF: ftype = void_ftype_sf_pointer_oi; break;
+		    case T_DI: ftype = void_ftype_di_pointer_oi; break;
+		    default: gcc_unreachable ();
+		    }
+		  break;
+
+		case CImode:
+		  /* vst3q cases.  */
+		  switch (1 << j)
+		    {
+		    case T_V16QI: ftype = void_ftype_qi_pointer_ci; break;
+		    case T_V8HI: ftype = void_ftype_hi_pointer_ci; break;
+		    case T_V4SI: ftype = void_ftype_si_pointer_ci; break;
+		    case T_V4SF: ftype = void_ftype_sf_pointer_ci; break;
+		    default: gcc_unreachable ();
+		    }
+		  break;
+
+		case XImode:
+		  /* vst4q cases.  */
+		  switch (1 << j)
+		    {
+		    case T_V16QI: ftype = void_ftype_qi_pointer_xi; break;
+		    case T_V8HI: ftype = void_ftype_hi_pointer_xi; break;
+		    case T_V4SI: ftype = void_ftype_si_pointer_xi; break;
+		    case T_V4SF: ftype = void_ftype_sf_pointer_xi; break;
+		    default: gcc_unreachable ();
+		    }
+		  break;
+
+		default:
+		  gcc_unreachable ();
+		}
+              break;
+
+	    case NEON_STORESTRUCTLANE:
+	      switch (mode0)
+		{
+		case TImode:
+		  /* vst2_lane cases.  */
+		  switch (1 << j)
+		    {
+		    case T_V8QI:
+		      ftype = void_ftype_qi_pointer_ti_si;
+		      break;
+		    case T_V4HI:
+		      ftype = void_ftype_hi_pointer_ti_si;
+		      break;
+		    case T_V2SI:
+		      ftype = void_ftype_si_pointer_ti_si;
+		      break;
+		    case T_V2SF:
+		      ftype = void_ftype_sf_pointer_ti_si;
+		      break;
+		    default:
+		      gcc_unreachable ();
+		    }
+		  break;
+
+		case EImode:
+		  /* vst3_lane cases.  */
+		  switch (1 << j)
+		    {
+		    case T_V8QI:
+		      ftype = void_ftype_qi_pointer_ei_si;
+		      break;
+		    case T_V4HI:
+		      ftype = void_ftype_hi_pointer_ei_si;
+		      break;
+		    case T_V2SI:
+		      ftype = void_ftype_si_pointer_ei_si;
+		      break;
+		    case T_V2SF:
+		      ftype = void_ftype_sf_pointer_ei_si;
+		      break;
+		    default:
+		      gcc_unreachable ();
+		    }
+		  break;
+
+		case OImode:
+		  /* vst2q_lane and vst4_lane cases.  */
+		  switch (1 << j)
+		    {
+		      /* vst2q_lane cases.  */
+		    case T_V8HI:
+		      ftype = void_ftype_hi_pointer_oi_si;
+		      break;
+		    case T_V4SI:
+		      ftype = void_ftype_si_pointer_oi_si;
+		      break;
+		    case T_V4SF:
+		      ftype = void_ftype_sf_pointer_oi_si;
+		      break;
+		      /* vst4_lane cases.  */
+		    case T_V8QI:
+		      ftype = void_ftype_qi_pointer_oi_si;
+		      break;
+		    case T_V4HI:
+		      ftype = void_ftype_hi_pointer_oi_si;
+		      break;
+		    case T_V2SI:
+		      ftype = void_ftype_si_pointer_oi_si;
+		      break;
+		    case T_V2SF:
+		      ftype = void_ftype_sf_pointer_oi_si;
+		      break;
+		    default:
+		      gcc_unreachable ();
+		    }
+		  break;
+
+		case CImode:
+		  /* vst3q_lane cases.  */
+		  switch (1 << j)
+		    {
+		    case T_V8HI:
+		      ftype = void_ftype_hi_pointer_ci_si;
+		      break;
+		    case T_V4SI:
+		      ftype = void_ftype_si_pointer_ci_si;
+		      break;
+		    case T_V4SF:
+		      ftype = void_ftype_sf_pointer_ci_si;
+		      break;
+		    default:
+		      gcc_unreachable ();
+		    }
+		  break;
+
+		case XImode:
+		  /* vst4q_lane cases.  */
+		  switch (1 << j)
+		    {
+		    case T_V8HI:
+		      ftype = void_ftype_hi_pointer_xi_si;
+		      break;
+		    case T_V4SI:
+		      ftype = void_ftype_si_pointer_xi_si;
+		      break;
+		    case T_V4SF:
+		      ftype = void_ftype_sf_pointer_xi_si;
+		      break;
+		    default:
+		      gcc_unreachable ();
+		    }
+		  break;
+
+		default:
+		  gcc_unreachable ();
+		}
+              break;
+
+            default:
+              gcc_unreachable ();
+            }
+            
+          gcc_assert (ftype != NULL);
+          
+          sprintf (namebuf, "__builtin_neon_%s%s", d->name, modenames[j]);
+          
+          lang_hooks.builtin_function (namebuf, ftype, fcode++, BUILT_IN_MD,
+				       NULL, NULL_TREE);
+        }
+    }
+#undef qi_TN
+#undef hi_TN
+#undef si_TN
+#undef di_TN
+#undef ti_TN
+#undef ei_TN
+#undef oi_TN
+#undef ci_TN
+#undef xi_TN
+
+#undef sf_TN
+
+#undef v8qi_TN
+#undef v4hi_TN
+#undef v2si_TN
+#undef v2sf_TN
+
+#undef v16qi_TN
+#undef v8hi_TN
+#undef v4si_TN
+#undef v4sf_TN
+#undef v2di_TN
+
+#undef pv8qi_TN
+#undef pv4hi_TN
+#undef pv2si_TN
+#undef pv2sf_TN
+#undef pdi_TN
+
+#undef pv16qi_TN
+#undef pv8hi_TN
+#undef pv4si_TN
+#undef pv4sf_TN
+#undef pv2di_TN
+
+#undef void_TN
+
+#undef TYPE2
+#undef TYPE3
+#undef TYPE4
+#undef TYPE5
+#undef TYPE6
+}
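The registration loop that all of the cases above feed ends by composing each builtin's public name from the base intrinsic name plus a per-mode suffix (the `sprintf ("__builtin_neon_%s%s", ...)` near the end). A trivial sketch of that naming step; the base name and suffix below are hypothetical stand-ins for d->name and modenames[j]:

```c
#include <stdio.h>

/* Compose a NEON builtin's name from its base name and a per-mode
   suffix, as the sprintf in the init loop does (but bounded).  */
static void
neon_builtin_name (char *buf, size_t buflen,
                   const char *base, const char *mode_suffix)
{
  snprintf (buf, buflen, "__builtin_neon_%s%s", base, mode_suffix);
}
```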
+
 static void
 arm_init_builtins (void)
 {
@@ -14055,13 +19045,16 @@
 
   if (TARGET_REALLY_IWMMXT)
     arm_init_iwmmxt_builtins ();
-
+  
+  if (TARGET_NEON)
+    arm_init_neon_builtins ();
 /* APPLE LOCAL begin ARM darwin builtins */
 #ifdef SUBTARGET_INIT_BUILTINS
   SUBTARGET_INIT_BUILTINS;
 #endif
 /* APPLE LOCAL end ARM darwin builtins */
 }
+/* APPLE LOCAL end v7 support. Merge from Codesourcery */
 
 /* Errors in the source file can cause expand_expr to return const0_rtx
    where we expect a vector.  To avoid crashing, use one of the vector
@@ -14152,6 +19145,347 @@
   return target;
 }
 
+/* APPLE LOCAL begin v7 support. Merge from Codesourcery */
+static int
+neon_builtin_compare (const void *a, const void *b)
+{
+  const neon_builtin_datum *key = a;
+  const neon_builtin_datum *memb = b;
+  unsigned int soughtcode = key->base_fcode;
+  
+  if (soughtcode >= memb->base_fcode
+      && soughtcode < memb->base_fcode + memb->num_vars)
+    return 0;
+  else if (soughtcode < memb->base_fcode)
+    return -1;
+  else
+    return 1;
+}
+
+static enum insn_code
+locate_neon_builtin_icode (int fcode, neon_itype *itype)
+{
+  neon_builtin_datum key, *found;
+  int idx;
+  
+  key.base_fcode = fcode;
+  found = bsearch (&key, &neon_builtin_data[0], ARRAY_SIZE (neon_builtin_data),
+		   sizeof (neon_builtin_data[0]), neon_builtin_compare);
+  gcc_assert (found);
+  idx = fcode - (int) found->base_fcode;
+  gcc_assert (idx >= 0 && idx < T_MAX && idx < (int)found->num_vars);
+
+  if (itype)
+    *itype = found->itype;
+
+  return found->codes[idx];
+}
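For readers unfamiliar with the idiom used by `neon_builtin_compare`/`locate_neon_builtin_icode` above: `bsearch` is given a comparator that reports "equal" for any code falling *inside* an entry's `[base_fcode, base_fcode + num_vars)` range, so the search finds the covering table entry rather than an exact key. A minimal standalone sketch of that technique, with a hypothetical simplified `datum` struct (not the real `neon_builtin_datum`):

```c
#include <stdlib.h>

/* Hypothetical, simplified stand-in for neon_builtin_datum: each table
   entry covers the contiguous range of builtin codes
   [base_fcode, base_fcode + num_vars). */
typedef struct
{
  unsigned int base_fcode;
  unsigned int num_vars;
} datum;

/* Range comparator in the style of neon_builtin_compare: it returns 0
   whenever the sought code falls anywhere inside an entry's range, so
   bsearch locates the covering entry instead of an exact key match.
   For this to be a valid bsearch ordering, the table must be sorted by
   base_fcode and the ranges must not overlap. */
static int
range_compare (const void *a, const void *b)
{
  const datum *key = a;
  const datum *memb = b;

  if (key->base_fcode >= memb->base_fcode
      && key->base_fcode < memb->base_fcode + memb->num_vars)
    return 0;
  return (key->base_fcode < memb->base_fcode) ? -1 : 1;
}

/* Find the entry covering FCODE, or NULL if no entry covers it. */
static const datum *
find_covering (const datum *table, size_t n, unsigned int fcode)
{
  datum key = { fcode, 0 };
  return bsearch (&key, table, n, sizeof *table, range_compare);
}
```

The real code then asserts the hit and derives the variant index as `fcode - found->base_fcode`, exactly as `locate_neon_builtin_icode` does.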
+
+typedef enum {
+  NEON_ARG_COPY_TO_REG,
+  NEON_ARG_CONSTANT,
+  NEON_ARG_STOP
+} builtin_arg;
+
+#define NEON_MAX_BUILTIN_ARGS 5
+
+/* Expand a Neon builtin.  */
+static rtx
+arm_expand_neon_args (rtx target, int icode, int have_retval,
+		      tree arglist, ...)
+{
+  va_list ap;
+  rtx pat;
+  tree arg[NEON_MAX_BUILTIN_ARGS];
+  rtx op[NEON_MAX_BUILTIN_ARGS];
+  enum machine_mode tmode = insn_data[icode].operand[0].mode;
+  enum machine_mode mode[NEON_MAX_BUILTIN_ARGS];
+  int argc = 0;
+  
+  if (have_retval
+      && (!target
+	  || GET_MODE (target) != tmode
+	  || !(*insn_data[icode].operand[0].predicate) (target, tmode)))
+    target = gen_reg_rtx (tmode);
+  
+  va_start (ap, arglist);
+  
+  for (;;)
+    {
+      builtin_arg thisarg = va_arg (ap, int);
+      
+      if (thisarg == NEON_ARG_STOP)
+        break;
+      else
+        {
+          arg[argc] = TREE_VALUE (arglist);
+          op[argc] = expand_expr (arg[argc], NULL_RTX, VOIDmode, 0);
+          mode[argc] = insn_data[icode].operand[argc + have_retval].mode;
+
+          arglist = TREE_CHAIN (arglist);
+
+          switch (thisarg)
+            {
+            case NEON_ARG_COPY_TO_REG:
+              /*gcc_assert (GET_MODE (op[argc]) == mode[argc]);*/
+              if (!(*insn_data[icode].operand[argc + have_retval].predicate)
+                     (op[argc], mode[argc]))
+                op[argc] = copy_to_mode_reg (mode[argc], op[argc]);
+              break;
+
+            case NEON_ARG_CONSTANT:
+              /* FIXME: This error message is somewhat unhelpful.  */
+              if (!(*insn_data[icode].operand[argc + have_retval].predicate)
+                    (op[argc], mode[argc]))
+		error ("argument must be a constant");
+              break;
+
+            case NEON_ARG_STOP:
+              gcc_unreachable ();
+            }
+          
+          argc++;
+        }
+    }
+
+  va_end (ap);
+
+  if (have_retval)
+    switch (argc)
+      {
+      case 1:
+	pat = GEN_FCN (icode) (target, op[0]);
+	break;
+
+      case 2:
+	pat = GEN_FCN (icode) (target, op[0], op[1]);
+	break;
+
+      case 3:
+	pat = GEN_FCN (icode) (target, op[0], op[1], op[2]);
+	break;
+
+      case 4:
+	pat = GEN_FCN (icode) (target, op[0], op[1], op[2], op[3]);
+	break;
+
+      case 5:
+	pat = GEN_FCN (icode) (target, op[0], op[1], op[2], op[3], op[4]);
+	break;
+    
+      default:
+	gcc_unreachable ();
+      }
+  else
+    switch (argc)
+      {
+      case 1:
+	pat = GEN_FCN (icode) (op[0]);
+	break;
+
+      case 2:
+	pat = GEN_FCN (icode) (op[0], op[1]);
+	break;
+
+      case 3:
+	pat = GEN_FCN (icode) (op[0], op[1], op[2]);
+	break;
+
+      case 4:
+	pat = GEN_FCN (icode) (op[0], op[1], op[2], op[3]);
+	break;
+
+      case 5:
+	pat = GEN_FCN (icode) (op[0], op[1], op[2], op[3], op[4]);
+        break;
+
+      default:
+	gcc_unreachable ();
+      }
+
+  if (!pat)
+    return 0;
+
+  emit_insn (pat);
+
+  return target;
+}
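`arm_expand_neon_args` is driven by a sentinel-terminated varargs list: each caller spells out how every operand should be handled (`NEON_ARG_COPY_TO_REG`, `NEON_ARG_CONSTANT`, ...) and ends the list with `NEON_ARG_STOP`. A small standalone sketch of that calling convention, with hypothetical names (this is the idiom, not the compiler's actual expansion logic):

```c
#include <stdarg.h>

/* Argument-handling kinds, modeled on the builtin_arg enum above. */
enum arg_kind { ARG_REG, ARG_CONST, ARG_STOP };

/* Walk a sentinel-terminated varargs list of arg kinds, in the style
   of arm_expand_neon_args.  Here we just count the entries of each
   kind instead of expanding operands. */
static void
count_kinds (int *nregs, int *nconsts, ...)
{
  va_list ap;

  *nregs = 0;
  *nconsts = 0;
  va_start (ap, nconsts);
  for (;;)
    {
      /* Enum values are promoted to int when passed through "...",
         so they must be read back with va_arg (ap, int) -- the real
         code does the same with builtin_arg.  */
      int kind = va_arg (ap, int);

      if (kind == ARG_STOP)
        break;
      if (kind == ARG_REG)
        (*nregs)++;
      else
        (*nconsts)++;
    }
  va_end (ap);
}
```

This is why the expander can share one body across every builtin shape: the per-shape `switch` in `arm_expand_neon_builtin` only differs in the argument-kind lists it passes.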
+
+/* Expand a Neon builtin. These are "special" because they don't have symbolic
+   constants defined per-instruction or per instruction-variant. Instead, the
+   required info is looked up in the table neon_builtin_data.  */
+static rtx
+arm_expand_neon_builtin (rtx target, int fcode, tree arglist)
+{
+  neon_itype itype;
+  enum insn_code icode = locate_neon_builtin_icode (fcode, &itype);
+  
+  switch (itype)
+    {
+    case NEON_UNOP:
+    case NEON_CONVERT:
+    case NEON_DUPLANE:
+      return arm_expand_neon_args (target, icode, 1, arglist,
+        NEON_ARG_COPY_TO_REG, NEON_ARG_CONSTANT, NEON_ARG_STOP);
+    
+    case NEON_BINOP:
+    case NEON_SETLANE:
+    case NEON_SCALARMUL:
+    case NEON_SCALARMULL:
+    case NEON_SCALARMULH:
+    case NEON_SHIFTINSERT:
+    case NEON_LOGICBINOP:
+      return arm_expand_neon_args (target, icode, 1, arglist,
+        NEON_ARG_COPY_TO_REG, NEON_ARG_COPY_TO_REG, NEON_ARG_CONSTANT,
+        NEON_ARG_STOP);
+        
+    case NEON_TERNOP:
+      return arm_expand_neon_args (target, icode, 1, arglist,
+        NEON_ARG_COPY_TO_REG, NEON_ARG_COPY_TO_REG, NEON_ARG_COPY_TO_REG,
+        NEON_ARG_CONSTANT, NEON_ARG_STOP);
+    
+    case NEON_GETLANE:
+    case NEON_FIXCONV:
+    case NEON_SHIFTIMM:
+      return arm_expand_neon_args (target, icode, 1, arglist,
+        NEON_ARG_COPY_TO_REG, NEON_ARG_CONSTANT, NEON_ARG_CONSTANT,
+        NEON_ARG_STOP);
+        
+    case NEON_CREATE:
+      return arm_expand_neon_args (target, icode, 1, arglist,
+        NEON_ARG_COPY_TO_REG, NEON_ARG_STOP);
+
+    case NEON_DUP:
+    case NEON_SPLIT:
+    case NEON_REINTERP:
+      return arm_expand_neon_args (target, icode, 1, arglist,
+        NEON_ARG_COPY_TO_REG, NEON_ARG_STOP);
+    
+    case NEON_COMBINE:
+    case NEON_VTBL:
+      return arm_expand_neon_args (target, icode, 1, arglist,
+        NEON_ARG_COPY_TO_REG, NEON_ARG_COPY_TO_REG, NEON_ARG_STOP);
+
+    case NEON_RESULTPAIR:
+      return arm_expand_neon_args (target, icode, 0, arglist,
+        NEON_ARG_COPY_TO_REG, NEON_ARG_COPY_TO_REG, NEON_ARG_COPY_TO_REG,
+        NEON_ARG_STOP);
+    
+    case NEON_LANEMUL:
+    case NEON_LANEMULL:
+    case NEON_LANEMULH:
+      return arm_expand_neon_args (target, icode, 1, arglist,
+        NEON_ARG_COPY_TO_REG, NEON_ARG_COPY_TO_REG, NEON_ARG_CONSTANT,
+        NEON_ARG_CONSTANT, NEON_ARG_STOP);
+    
+    case NEON_LANEMAC:
+      return arm_expand_neon_args (target, icode, 1, arglist,
+        NEON_ARG_COPY_TO_REG, NEON_ARG_COPY_TO_REG, NEON_ARG_COPY_TO_REG,
+        NEON_ARG_CONSTANT, NEON_ARG_CONSTANT, NEON_ARG_STOP);
+
+    case NEON_SHIFTACC:
+      return arm_expand_neon_args (target, icode, 1, arglist,
+        NEON_ARG_COPY_TO_REG, NEON_ARG_COPY_TO_REG, NEON_ARG_CONSTANT,
+        NEON_ARG_CONSTANT, NEON_ARG_STOP);
+
+    case NEON_SCALARMAC:
+      return arm_expand_neon_args (target, icode, 1, arglist,
+	NEON_ARG_COPY_TO_REG, NEON_ARG_COPY_TO_REG, NEON_ARG_COPY_TO_REG,
+        NEON_ARG_CONSTANT, NEON_ARG_STOP);
+
+    case NEON_SELECT:
+    case NEON_VTBX:
+      return arm_expand_neon_args (target, icode, 1, arglist,
+	NEON_ARG_COPY_TO_REG, NEON_ARG_COPY_TO_REG, NEON_ARG_COPY_TO_REG,
+        NEON_ARG_STOP);
+
+    case NEON_LOAD1:
+    case NEON_LOADSTRUCT:
+      return arm_expand_neon_args (target, icode, 1, arglist,
+	NEON_ARG_COPY_TO_REG, NEON_ARG_STOP);
+
+    case NEON_LOAD1LANE:
+    case NEON_LOADSTRUCTLANE:
+      return arm_expand_neon_args (target, icode, 1, arglist,
+	NEON_ARG_COPY_TO_REG, NEON_ARG_COPY_TO_REG, NEON_ARG_CONSTANT,
+	NEON_ARG_STOP);
+
+    case NEON_STORE1:
+    case NEON_STORESTRUCT:
+      return arm_expand_neon_args (target, icode, 0, arglist,
+	NEON_ARG_COPY_TO_REG, NEON_ARG_COPY_TO_REG, NEON_ARG_STOP);
+
+    case NEON_STORE1LANE:
+    case NEON_STORESTRUCTLANE:
+      return arm_expand_neon_args (target, icode, 0, arglist,
+	NEON_ARG_COPY_TO_REG, NEON_ARG_COPY_TO_REG, NEON_ARG_CONSTANT,
+	NEON_ARG_STOP);
+    }
+  
+  gcc_unreachable ();
+}
+
+/* Emit code to reinterpret one Neon type as another, without altering bits.  */
+void
+neon_reinterpret (rtx dest, rtx src)
+{
+  emit_move_insn (dest, gen_lowpart (GET_MODE (dest), src));
+}
+
+/* Emit code to place a Neon pair result in memory locations (with equal
+   registers).  */
+void
+neon_emit_pair_result_insn (enum machine_mode mode,
+			    rtx (*intfn) (rtx, rtx, rtx, rtx), rtx destaddr,
+                            rtx op1, rtx op2)
+{
+  rtx mem = gen_rtx_MEM (mode, destaddr);
+  rtx tmp1 = gen_reg_rtx (mode);
+  rtx tmp2 = gen_reg_rtx (mode);
+  
+  emit_insn (intfn (tmp1, op1, tmp2, op2));
+  
+  emit_move_insn (mem, tmp1);
+  mem = adjust_address (mem, mode, GET_MODE_SIZE (mode));
+  emit_move_insn (mem, tmp2);
+}
+
+/* Set up operands for a register copy from src to dest, taking care not to
+   clobber registers in the process.
+   FIXME: This has rather high polynomial complexity (O(n^3)?) but shouldn't
+   be called with a large N, so that should be OK.  */
+
+void
+neon_disambiguate_copy (rtx *operands, rtx *dest, rtx *src, unsigned int count)
+{
+  unsigned int copied = 0, opctr = 0;
+  unsigned int done = (1 << count) - 1;
+  unsigned int i, j;
+  
+  while (copied != done)
+    {
+      for (i = 0; i < count; i++)
+        {
+          int good = 1;
+
+          for (j = 0; good && j < count; j++)
+            if (i != j && (copied & (1 << j)) == 0
+                && reg_overlap_mentioned_p (src[j], dest[i]))
+              good = 0;
+
+          if (good)
+            {
+              operands[opctr++] = dest[i];
+              operands[opctr++] = src[i];
+              copied |= 1 << i;
+            }
+        }
+    }
+
+  gcc_assert (opctr == count * 2);
+}
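The scheduling idea in `neon_disambiguate_copy` can be shown in isolation: repeatedly emit any copy whose destination is not still needed as the source of a pending copy. A toy model, with ints standing in for registers and equality standing in for `reg_overlap_mentioned_p`; note this sketch also skips already-emitted copies explicitly, and (like the original) it assumes the dependencies are acyclic, since a swap such as r0<->r1 has no valid ordering and would loop forever:

```c
/* Toy model of neon_disambiguate_copy: order COUNT register-to-register
   copies so that no copy overwrites a source still needed by a
   not-yet-emitted copy. */
static void
order_copies (const int *dest, const int *src, int count,
              int *out_dest, int *out_src)
{
  unsigned int copied = 0, emitted = 0;
  unsigned int done = (1u << count) - 1;
  int i, j;

  while (copied != done)
    for (i = 0; i < count; i++)
      {
        int good = (copied & (1u << i)) == 0;

        /* Copy i is safe only if dest[i] is not the source of any
           copy that has not been emitted yet. */
        for (j = 0; good && j < count; j++)
          if (i != j && (copied & (1u << j)) == 0 && src[j] == dest[i])
            good = 0;

        if (good)
          {
            out_dest[emitted] = dest[i];
            out_src[emitted] = src[i];
            emitted++;
            copied |= 1u << i;
          }
      }
}
```

For example, given "r2 := r3; r1 := r2", the first copy would clobber the second copy's source, so the algorithm emits "r1 := r2" first.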
+
+/* APPLE LOCAL end v7 support. Merge from Codesourcery */
 /* Expand an expression EXP that calls a built-in function,
    with result going to TARGET if that's convenient
    (and in mode MODE if that's convenient).
@@ -14183,6 +19517,11 @@
   enum machine_mode mode1;
   enum machine_mode mode2;
 
+  /* APPLE LOCAL begin v7 support. Merge from Codesourcery */
+  if (fcode >= ARM_BUILTIN_NEON_BASE)
+    return arm_expand_neon_builtin (target, fcode, arglist);
+
+  /* APPLE LOCAL end v7 support. Merge from Codesourcery */
   switch (fcode)
     {
     case ARM_BUILTIN_TEXTRMSB:
@@ -14850,7 +20189,8 @@
 
 
 void
-thumb_final_prescan_insn (rtx insn)
+/* APPLE LOCAL v7 support. Merge from mainline */
+thumb1_final_prescan_insn (rtx insn)
 {
   if (flag_print_asm_name)
     asm_fprintf (asm_out_file, "%@ 0x%04x\n",
@@ -15000,7 +20340,8 @@
     }
   /* APPLE LOCAL end 6465387 exception handling interworking VFP save */
 
-  live_regs_mask = thumb_compute_save_reg_mask ();
+  /* APPLE LOCAL v7 support. Merge from mainline */
+  live_regs_mask = thumb1_compute_save_reg_mask ();
   high_regs_pushed = bit_count (live_regs_mask & 0x0f00);
 
   /* If we can deduce the registers used from the function's return value.
@@ -15262,7 +20603,8 @@
 
 /* Generate the rest of a function's prologue.  */
 void
-thumb_expand_prologue (void)
+/* APPLE LOCAL v7 support. Merge from mainline */
+thumb1_expand_prologue (void)
 {
   rtx insn, dwarf;
 
@@ -15284,7 +20626,8 @@
       return;
     }
 
-  live_regs_mask = thumb_compute_save_reg_mask ();
+  /* APPLE LOCAL v7 support. Merge from mainline */
+  live_regs_mask = thumb1_compute_save_reg_mask ();
   /* Load the pic register before setting the frame pointer,
      so we can use r7 as a temporary work register.  */
   if (flag_pic && arm_pic_register != INVALID_REGNUM)
@@ -15421,7 +20764,8 @@
 
 
 void
-thumb_expand_epilogue (void)
+/* APPLE LOCAL v7 support. Merge from mainline */
+thumb1_expand_epilogue (void)
 {
   HOST_WIDE_INT amount;
   arm_stack_offsets *offsets;
@@ -15601,7 +20945,8 @@
     }
 
   /* Get the registers we are going to push.  */
-  live_regs_mask = thumb_compute_save_reg_mask ();
+  /* APPLE LOCAL v7 support. Merge from mainline */
+  live_regs_mask = thumb1_compute_save_reg_mask ();
   /* Extract a mask of the ones we can give to the Thumb's push instruction.  */
   l_mask = live_regs_mask & 0x40ff;
   /* Then count how many other high registers will need to be pushed.  */
@@ -15785,7 +21130,8 @@
 }
 
 static void
-thumb_output_function_prologue (FILE *f, HOST_WIDE_INT size ATTRIBUTE_UNUSED)
+/* APPLE LOCAL v7 support. Merge from mainline */
+thumb1_output_function_prologue (FILE *f, HOST_WIDE_INT size ATTRIBUTE_UNUSED)
 {
   (void) handle_thumb_unexpanded_prologue (f, true);
 }
@@ -16089,6 +21435,142 @@
     asm_fprintf (stream, "%U%s", name);
 }
 
+/* APPLE LOCAL begin v7 support. Merge from mainline */
+static void
+arm_file_start (void)
+{
+  int val;
+
+  if (TARGET_UNIFIED_ASM)
+    asm_fprintf (asm_out_file, "\t.syntax unified\n");
+
+  if (TARGET_BPABI)
+    {
+      const char *fpu_name;
+      if (arm_select[0].string)
+	asm_fprintf (asm_out_file, "\t.cpu %s\n", arm_select[0].string);
+      else if (arm_select[1].string)
+	asm_fprintf (asm_out_file, "\t.arch %s\n", arm_select[1].string);
+      else
+	asm_fprintf (asm_out_file, "\t.cpu %s\n",
+		     all_cores[arm_default_cpu].name);
+
+      if (TARGET_SOFT_FLOAT)
+	{
+	  if (TARGET_VFP)
+	    fpu_name = "softvfp";
+	  else
+	    fpu_name = "softfpa";
+	}
+      else
+	{
+	  int set_float_abi_attributes = 0;
+	  switch (arm_fpu_arch)
+	    {
+	    case FPUTYPE_FPA:
+	      fpu_name = "fpa";
+	      break;
+	    case FPUTYPE_FPA_EMU2:
+	      fpu_name = "fpe2";
+	      break;
+	    case FPUTYPE_FPA_EMU3:
+	      fpu_name = "fpe3";
+	      break;
+	    case FPUTYPE_MAVERICK:
+	      fpu_name = "maverick";
+	      break;
+	    case FPUTYPE_VFP:
+	      fpu_name = "vfp";
+	      set_float_abi_attributes = 1;
+	      break;
+	    case FPUTYPE_VFP3:
+	      fpu_name = "vfp3";
+	      set_float_abi_attributes = 1;
+	      break;
+	    case FPUTYPE_NEON:
+	      fpu_name = "neon";
+	      set_float_abi_attributes = 1;
+	      break;
+	    default:
+	      abort();
+	    }
+	  if (set_float_abi_attributes)
+	    {
+	      if (TARGET_HARD_FLOAT)
+		asm_fprintf (asm_out_file, "\t.eabi_attribute 27, 3\n");
+	      if (TARGET_HARD_FLOAT_ABI)
+		asm_fprintf (asm_out_file, "\t.eabi_attribute 28, 1\n");
+	    }
+	}
+      asm_fprintf (asm_out_file, "\t.fpu %s\n", fpu_name);
+
+      /* Some of these attributes only apply when the corresponding features
+         are used.  However we don't have any easy way of figuring this out.
+	 Conservatively record the setting that would have been used.  */
+
+      /* Tag_ABI_PCS_wchar_t.  */
+      asm_fprintf (asm_out_file, "\t.eabi_attribute 18, %d\n",
+		   (int)WCHAR_TYPE_SIZE / BITS_PER_UNIT);
+
+      /* Tag_ABI_FP_rounding.  */
+      if (flag_rounding_math)
+	asm_fprintf (asm_out_file, "\t.eabi_attribute 19, 1\n");
+      if (!flag_unsafe_math_optimizations)
+	{
+	  /* Tag_ABI_FP_denormal.  */
+	  asm_fprintf (asm_out_file, "\t.eabi_attribute 20, 1\n");
+	  /* Tag_ABI_FP_exceptions.  */
+	  asm_fprintf (asm_out_file, "\t.eabi_attribute 21, 1\n");
+	}
+      /* Tag_ABI_FP_user_exceptions.  */
+      if (flag_signaling_nans)
+	asm_fprintf (asm_out_file, "\t.eabi_attribute 22, 1\n");
+      /* Tag_ABI_FP_number_model.  */
+      asm_fprintf (asm_out_file, "\t.eabi_attribute 23, %d\n", 
+		   flag_finite_math_only ? 1 : 3);
+
+      /* Tag_ABI_align8_needed.  */
+      asm_fprintf (asm_out_file, "\t.eabi_attribute 24, 1\n");
+      /* Tag_ABI_align8_preserved.  */
+      asm_fprintf (asm_out_file, "\t.eabi_attribute 25, 1\n");
+      /* Tag_ABI_enum_size.  */
+      asm_fprintf (asm_out_file, "\t.eabi_attribute 26, %d\n",
+		   flag_short_enums ? 1 : 2);
+
+      /* Tag_ABI_optimization_goals.  */
+      if (optimize_size)
+	val = 4;
+      else if (optimize >= 2)
+	val = 2;
+      else if (optimize)
+	val = 1;
+      else
+	val = 6;
+      asm_fprintf (asm_out_file, "\t.eabi_attribute 30, %d\n", val);
+    }
+  /* APPLE LOCAL 6345234 begin place text sections together */
+#if TARGET_MACHO
+  /* Emit declarations for all code sections at the beginning of the file; 
+     this keeps them from being separated by data sections, which can 
+     lead to out-of-range branches. */
+  if (flag_pic || MACHO_DYNAMIC_NO_PIC_P)
+    {
+      fprintf (asm_out_file, "\t.section __TEXT,__text,regular\n");
+      fprintf (asm_out_file, "\t.section __TEXT,__textcoal_nt,coalesced\n");
+      fprintf (asm_out_file, "\t.section __TEXT,__const_coal,coalesced\n");
+      if (MACHO_DYNAMIC_NO_PIC_P )
+        fprintf (asm_out_file, 
+                 "\t.section __TEXT,__symbol_stub4,symbol_stubs,none,12\n");
+      else
+        fprintf (asm_out_file, 
+                 "\t.section __TEXT,__picsymbolstub4,symbol_stubs,none,16\n");
+    }
+#endif
+  /* APPLE LOCAL 6345234 end place text sections together */
+  default_file_start();
+}
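The `Tag_ABI_optimization_goals` selection near the end of `arm_file_start` maps GCC's optimization flags onto EABI build-attribute values. A small sketch of just that mapping, pulled out as a hypothetical helper (the value meanings -- 4 for size, 2 and 1 for speed, 6 for best debugging -- follow the ARM EABI build-attributes document):

```c
/* Mirrors the Tag_ABI_optimization_goals (.eabi_attribute 30)
   selection in arm_file_start: size optimization wins, then -O2 and
   above, then -O1, and -O0 records "optimize for debugging". */
static int
optimization_goals_tag (int optimize, int optimize_size)
{
  if (optimize_size)
    return 4;
  else if (optimize >= 2)
    return 2;
  else if (optimize)
    return 1;
  else
    return 6;
}
```

As the comment in the patch notes, these attributes are recorded conservatively: they describe the setting that *would* apply, whether or not the corresponding feature is actually used in the translation unit.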
+
+/* APPLE LOCAL end v7 support. Merge from mainline */
 static void
 arm_file_end (void)
 {
@@ -16119,7 +21601,8 @@
 static void
 arm_darwin_file_start (void)
 {
-  default_file_start();
+  /* APPLE LOCAL v7 support. Merge from mainline */
+  arm_file_start();
   darwin_file_start();
 }
 
@@ -16463,6 +21946,7 @@
 					SYMBOL_REF_FLAGS (function_rtx),
 					1);
   bool is_indirected = false;
+    
 
   /* Darwin/mach-o: use a stub for dynamic references.  */
 #if TARGET_MACHO
@@ -16471,7 +21955,8 @@
       && (! machopic_data_defined_p (function_rtx)))
     {
       function_name = machopic_indirection_name (function_rtx, !is_longcall);
-      is_indirected = true;
+      /* APPLE LOCAL 6858124 don't indirect if it's just a stub */
+      is_indirected = is_longcall;
     }
   else
 #endif
@@ -16479,7 +21964,13 @@
 
   if (mi_delta < 0)
     mi_delta = - mi_delta;
-  if (TARGET_THUMB || is_longcall)
+  /* APPLE LOCAL begin v7 support. Merge from mainline */
+  /* APPLE LOCAL 6361608 begin Thumb-2 longcall thunks */
+  /* When generating 16-bit thumb-1 code, thunks are entered in arm mode.
+     In thumb-2, thunks can be in thumb mode.  */
+  /* APPLE LOCAL 6361608 end Thumb-2 longcall thunks */
+  if (TARGET_THUMB1 || is_longcall)
+  /* APPLE LOCAL end v7 support. Merge from mainline */
     {
       int labelno = thunk_label++;
       ASM_GENERATE_INTERNAL_LABEL (label, "LTHUMBFUNC", labelno);
@@ -16506,6 +21997,8 @@
       if (is_indirected)
 	fputs ("\tldr\tr12, [r12]\n", file);
     }
+  /* APPLE LOCAL v7 support. Merge from mainline */
+  /* TODO: Use movw/movt for large constants when available.  */
   while (mi_delta != 0)
     {
       if ((mi_delta & (3 << shift)) == 0)
@@ -16519,7 +22012,8 @@
           shift += 8;
         }
     }
-  if (TARGET_THUMB || is_longcall)
+  /* APPLE LOCAL v7 support. Merge from mainline */
+  if (TARGET_THUMB1 || is_longcall)
     {
       fprintf (file, "\tbx\tr12\n");
       ASM_OUTPUT_ALIGN (file, 2);
@@ -16527,17 +22021,24 @@
       fputs (":\n", file);
       if (flag_pic)
 	{
+          /* APPLE LOCAL 6361608 begin Thumb-2 longcall thunks */
+          int pc_offset;
 	  /* If we're branching to a local Thumb routine, output:
 	       ".word .LTHUNKn-7-.LTHUNKPCn".
 	     Otherwise, output:
 	       ".word .LTHUNKn-8-.LTHUNKPCn".
-	     (inter-module thumbness is fixed up by the linker).  */
+	     (inter-module thumbness is fixed up by the linker).  
+             If we're in a Thumb2 thunk, it's -4 and -3, respectively.  */
 	  rtx tem = gen_rtx_SYMBOL_REF (Pmode, function_name);
 
+          /* Thumb2 add instructions w/ PC source have a +4 bias. ARM
+             mode has +8. */
+          pc_offset = TARGET_THUMB2 ? -4 : -8;
 	  if (TARGET_MACHO && (TARGET_ARM || is_indirected))
-	    tem = gen_rtx_PLUS (GET_MODE (tem), tem, GEN_INT (-8));
+	    tem = gen_rtx_PLUS (GET_MODE (tem), tem, GEN_INT (pc_offset));
 	  else
-	    tem = gen_rtx_PLUS (GET_MODE (tem), tem, GEN_INT (-7));
+	    tem = gen_rtx_PLUS (GET_MODE (tem), tem, GEN_INT (pc_offset+1));
+          /* APPLE LOCAL 6361608 end Thumb-2 longcall thunks */
 
 	  tem = gen_rtx_MINUS (GET_MODE (tem),
 			       tem,
@@ -16552,7 +22053,13 @@
     }
   else
     {
-      fputs ("\tb\t", file);
+      /* APPLE LOCAL begin 6297258 */
+      if (TARGET_THUMB2)
+	fputs ("\tb.w\t", file);
+      else
+        fputs ("\tb\t", file);
+      /* APPLE LOCAL end 6297258 */
+
       assemble_name (file, function_name);
       if (NEED_PLT_RELOC)
         fputs ("(PLT)", file);
@@ -16750,7 +22257,42 @@
 	  && !reg_overlap_mentioned_p (value, XEXP (op, 0)));
 }
 
+/* APPLE LOCAL begin v7 support. Merge from Codesourcery */
+/* Return non-zero if the consumer (a multiply-accumulate instruction)
+   has an accumulator dependency on the result of the producer (a
+   multiplication instruction) and no other dependency on that result.  */
+int
+arm_mac_accumulator_is_mul_result (rtx producer, rtx consumer)
+{
+  rtx mul = PATTERN (producer);
+  rtx mac = PATTERN (consumer);
+  rtx mul_result;
+  rtx mac_op0, mac_op1, mac_acc;
+
+  if (GET_CODE (mul) == COND_EXEC)
+    mul = COND_EXEC_CODE (mul);
+  if (GET_CODE (mac) == COND_EXEC)
+    mac = COND_EXEC_CODE (mac);
+
+  /* Check that mul is of the form (set (...) (mult ...))
+     and mla is of the form (set (...) (plus (mult ...) (...))).  */
+  if ((GET_CODE (mul) != SET || GET_CODE (XEXP (mul, 1)) != MULT)
+      || (GET_CODE (mac) != SET || GET_CODE (XEXP (mac, 1)) != PLUS
+          || GET_CODE (XEXP (XEXP (mac, 1), 0)) != MULT))
+    return 0;
+
+  mul_result = XEXP (mul, 0);
+  mac_op0 = XEXP (XEXP (XEXP (mac, 1), 0), 0);
+  mac_op1 = XEXP (XEXP (XEXP (mac, 1), 0), 1);
+  mac_acc = XEXP (XEXP (mac, 1), 1);
+
+  return (reg_overlap_mentioned_p (mul_result, mac_acc)
+          && !reg_overlap_mentioned_p (mul_result, mac_op0)
+          && !reg_overlap_mentioned_p (mul_result, mac_op1));
+}
+
 
+/* APPLE LOCAL end v7 support. Merge from Codesourcery */
 /* We can't rely on the caller doing the proper promotion when
    using APCS or ATPCS.  */
 
@@ -16933,7 +22475,8 @@
 
   emit_insn (gen_rtx_USE (VOIDmode, source));
 
-  mask = thumb_compute_save_reg_mask ();
+  /* APPLE LOCAL v7 support. Merge from mainline */
+  mask = thumb1_compute_save_reg_mask ();
   if (mask & (1 << LR_REGNUM))
     {
       offsets = arm_get_frame_offsets ();
@@ -16952,7 +22495,8 @@
 	  reg = SP_REGNUM;
 	}
       /* Allow for the stack frame.  */
-      if (TARGET_BACKTRACE)
+      /* APPLE LOCAL v7 support. Merge from mainline */
+      if (TARGET_THUMB1 && TARGET_BACKTRACE)
 	delta -= 16;
       /* APPLE LOCAL ARM custom frame layout */
       /* Removed lines.  */
@@ -16979,6 +22523,13 @@
 bool
 arm_vector_mode_supported_p (enum machine_mode mode)
 {
+  /* APPLE LOCAL begin v7 support. Merge from Codesourcery */
+  /* Neon also supports V2SImode, etc. listed in the clause below.  */
+  if (TARGET_NEON && (mode == V2SFmode || mode == V4SImode || mode == V8HImode
+      || mode == V16QImode || mode == V4SFmode || mode == V2DImode))
+    return true;
+
+  /* APPLE LOCAL end v7 support. Merge from Codesourcery */
   if ((mode == V2SImode)
       || (mode == V4HImode)
       || (mode == V8QImode))
@@ -17012,9 +22563,17 @@
   if (IS_FPA_REGNUM (regno))
     return (TARGET_AAPCS_BASED ? 96 : 16) + regno - FIRST_FPA_REGNUM;
 
+  /* APPLE LOCAL begin v7 support. Merge from Codesourcery */
   if (IS_VFP_REGNUM (regno))
-    /* APPLE LOCAL ARM 5757769 */
-    return 256 + regno - FIRST_VFP_REGNUM;
+    {
+      /* See comment in arm_dwarf_register_span.  */
+      if (VFP_REGNO_OK_FOR_SINGLE (regno))
+        /* APPLE LOCAL ARM 5757769 */
+	return 256 + regno - FIRST_VFP_REGNUM;
+      else
+	  return 256 + (regno - FIRST_VFP_REGNUM) / 2;
+    }
+  /* APPLE LOCAL end v7 support. Merge from Codesourcery */
 
   if (IS_IWMMXT_GR_REGNUM (regno))
     return 104 + regno - FIRST_IWMMXT_GR_REGNUM;
@@ -17025,14 +22584,52 @@
   gcc_unreachable ();
 }
 
+/* APPLE LOCAL begin v7 support. Merge from Codesourcery */
+/* Dwarf models VFPv3 registers as 32 64-bit registers.
+   GCC models them as 64 32-bit registers, so we need to describe this to
+   the DWARF generation code.  Other registers can use the default.  */
+static rtx
+arm_dwarf_register_span(rtx rtl)
+{
+    unsigned regno;
+    int nregs;
+    int i;
+    rtx p;
+
+    regno = REGNO (rtl);
+    if (!IS_VFP_REGNUM (regno))
+	return NULL_RTX;
+
+    /* The EABI defines two VFP register ranges:
+	  64-95: Legacy VFPv2 numbering for S0-S31 (obsolescent)
+	  256-287: D0-D31
+       The recommended encoding for s0-s31 is a DW_OP_bit_piece of the
+       corresponding D register.  However, gdb 6.6 does not support this, so
+       we use the legacy encodings.  We also use these encodings for D0-D15
+       for compatibility with older debuggers.  */
+    if (VFP_REGNO_OK_FOR_SINGLE (regno))
+	return NULL_RTX;
+
+    nregs = GET_MODE_SIZE (GET_MODE (rtl)) / 8;
+    p = gen_rtx_PARALLEL (VOIDmode, rtvec_alloc(nregs));
+    regno = (regno - FIRST_VFP_REGNUM) / 2;
+    for (i = 0; i < nregs; i++)
+      XVECEXP (p, 0, i) = gen_rtx_REG (DImode, 256 + regno + i);
+
+    return p;
+}
+/* APPLE LOCAL end v7 support. Merge from Codesourcery */
 
 #ifdef TARGET_UNWIND_INFO
-/* Emit unwind directives for a store-multiple instruction.  This should
-   only ever be generated by the function prologue code, so we expect it
-   to have a particular form.  */
+/* APPLE LOCAL begin v7 support. Merge from mainline */
+/* Emit unwind directives for a store-multiple instruction or stack pointer
+   push during alignment.
+   These should only ever be generated by the function prologue code, so
+   expect them to have a particular form.  */
 
 static void
-arm_unwind_emit_stm (FILE * asm_out_file, rtx p)
+arm_unwind_emit_sequence (FILE * asm_out_file, rtx p)
+/* APPLE LOCAL end v7 support. Merge from mainline */
 {
   int i;
   HOST_WIDE_INT offset;
@@ -17042,8 +22639,13 @@
   unsigned lastreg;
   rtx e;
 
-  /* First insn will adjust the stack pointer.  */
+  /* APPLE LOCAL begin v7 support. Merge from mainline */
   e = XVECEXP (p, 0, 0);
+  if (GET_CODE (e) != SET)
+    abort ();
+
+  /* First insn will adjust the stack pointer.  */
+  /* APPLE LOCAL end v7 support. Merge from mainline */
   if (GET_CODE (e) != SET
       || GET_CODE (XEXP (e, 0)) != REG
       || REGNO (XEXP (e, 0)) != SP_REGNUM
@@ -17053,6 +22655,7 @@
   offset = -INTVAL (XEXP (XEXP (e, 1), 1));
   nregs = XVECLEN (p, 0) - 1;
 
+  /* APPLE LOCAL begin v7 support. Merge from mainline */
   reg = REGNO (XEXP (XVECEXP (p, 0, 1), 1));
   if (reg < 16)
     {
@@ -17064,13 +22667,14 @@
 	  offset -= 4;
 	}
       reg_size = 4;
+      fprintf (asm_out_file, "\t.save {");
     }
   else if (IS_VFP_REGNUM (reg))
     {
-      /* FPA register saves use an additional word.  */
-      offset -= 4;
       reg_size = 8;
+      fprintf (asm_out_file, "\t.vsave {");
     }
+  /* APPLE LOCAL end v7 support. Merge from mainline */
   else if (reg >= FIRST_FPA_REGNUM && reg <= LAST_FPA_REGNUM)
     {
       /* FPA registers are done differently.  */
@@ -17086,8 +22690,8 @@
   if (offset != nregs * reg_size)
     abort ();
 
-  fprintf (asm_out_file, "\t.save {");
-
+  /* APPLE LOCAL v7 support. Merge from mainline */
+  /* Removed lines */
   offset = 0;
   lastreg = 0;
   /* The remaining insns will describe the stores.  */
@@ -17142,6 +22746,8 @@
 {
   rtx e0;
   rtx e1;
+  /* APPLE LOCAL v7 support. Merge from mainline */
+  unsigned reg;
 
   e0 = XEXP (p, 0);
   e1 = XEXP (p, 1);
@@ -17178,7 +22784,8 @@
       else if (REGNO (e0) == HARD_FRAME_POINTER_REGNUM)
 	{
 	  HOST_WIDE_INT offset;
-	  unsigned reg;
+          /* APPLE LOCAL v7 support. Merge from mainline */
+          /* moved 'reg' to function level scope */
 
 	  if (GET_CODE (e1) == PLUS)
 	    {
@@ -17214,6 +22821,15 @@
 	  asm_fprintf (asm_out_file, "\t.movsp %r, #%d\n",
 		       REGNO (e0), (int)INTVAL(XEXP (e1, 1)));
 	}
+      /* APPLE LOCAL begin v7 support. Merge from mainline */
+      else if (GET_CODE (e1) == UNSPEC && XINT (e1, 1) == UNSPEC_STACK_ALIGN)
+	{
+	  /* Stack pointer save before alignment.  */
+	  reg = REGNO (e0);
+	  asm_fprintf (asm_out_file, "\t.unwind_raw 0, 0x%x @ vsp = r%d\n",
+		       reg + 0x90, reg);
+	}
+      /* APPLE LOCAL end v7 support. Merge from mainline */
       else
 	abort ();
       break;
@@ -17251,7 +22867,8 @@
 
     case SEQUENCE:
       /* Store multiple.  */
-      arm_unwind_emit_stm (asm_out_file, pat);
+      /* APPLE LOCAL v7 support. Merge from mainline */
+      arm_unwind_emit_sequence (asm_out_file, pat);
       break;
 
     default:
@@ -17279,6 +22896,32 @@
 #endif /* TARGET_UNWIND_INFO */
 
 
+/* APPLE LOCAL begin v7 support. Merge from mainline */
+/* Handle UNSPEC DWARF call frame instructions.  These are needed for dynamic
+   stack alignment.  */
+
+static void
+arm_dwarf_handle_frame_unspec (const char *label, rtx pattern, int index)
+{
+  rtx unspec = SET_SRC (pattern);
+  gcc_assert (GET_CODE (unspec) == UNSPEC);
+
+  switch (index)
+    {
+    case UNSPEC_STACK_ALIGN:
+      /* ??? We should set the CFA = (SP & ~7).  At this point we haven't
+         put anything on the stack, so hopefully it won't matter.
+         CFA = SP will be correct after alignment.  */
+      dwarf2out_reg_save_reg (label, stack_pointer_rtx,
+                              SET_DEST (pattern));
+      break;
+    default:
+      gcc_unreachable ();
+    }
+}
+
+
+/* APPLE LOCAL end v7 support. Merge from mainline */
 /* Output unwind directives for the start/end of a function.  */
 
 void
@@ -17566,6 +23209,202 @@
 }
 /* APPLE LOCAL end ARM compact switch tables */
 
+/* APPLE LOCAL begin v7 support. Merge from mainline */
+/* Output assembly for a shift instruction.
+   SET_FLAGS determines how the instruction modifies the condition codes.
+   0 - Do not set conditiona codes.
+   1 - Set condition codes.
+   2 - Use smallest instruction.  */
+const char *
+arm_output_shift(rtx * operands, int set_flags)
+{
+  char pattern[100];
+  static const char flag_chars[3] = {'?', '.', '!'};
+  const char *shift;
+  HOST_WIDE_INT val;
+  char c;
+  
+  c = flag_chars[set_flags];
+  if (TARGET_UNIFIED_ASM)
+    {
+      shift = shift_op(operands[3], &val);
+      if (shift)
+	{
+	  if (val != -1)
+	    operands[2] = GEN_INT(val);
+	  sprintf (pattern, "%s%%%c\t%%0, %%1, %%2", shift, c);
+	}
+      else
+	sprintf (pattern, "mov%%%c\t%%0, %%1", c);
+    }
+  else
+    sprintf (pattern, "mov%%%c\t%%0, %%1%%S3", c);
+  output_asm_insn (pattern, operands);
+  return "";
+}
+
+/* Output a Thumb-2 casesi instruction.  */
+const char *
+thumb2_output_casesi (rtx *operands)
+{
+  rtx diff_vec = PATTERN (next_real_insn (operands[2]));
+
+  gcc_assert (GET_CODE (diff_vec) == ADDR_DIFF_VEC);
+
+  output_asm_insn ("cmp\t%0, %1", operands);
+  output_asm_insn ("bhi\t%l3", operands);
+  switch (GET_MODE(diff_vec))
+    {
+    case QImode:
+      return "tbb\t[%|pc, %0]";
+    case HImode:
+      return "tbh\t[%|pc, %0, lsl #1]";
+    case SImode:
+      /* APPLE LOCAL begin 6152801 SImode thumb2 switch table dispatch */
+      output_asm_insn ("adr\t%4, %l2", operands);
+      output_asm_insn ("add\t%4, %4, %0, lsl #2", operands);
+      return "mov\t%|pc, %4";
+      /* APPLE LOCAL end 6152801 SImode thumb2 switch table dispatch */
+    default:
+      gcc_unreachable ();
+    }
+}
+/* APPLE LOCAL end v7 support. Merge from mainline */
+/* APPLE LOCAL begin v7 support. Merge from Codesourcery */
+ 
+/* A table and a function to perform ARM-specific name mangling for
+   NEON vector types in order to conform to the AAPCS (see "Procedure
+   Call Standard for the ARM Architecture", Appendix A).  To qualify
+   for emission with the mangled names defined in that document, a
+   vector type must not only be of the correct mode but also be
+   composed of NEON vector element types (e.g. __builtin_neon_qi).  */
+typedef struct
+{
+  enum machine_mode mode;
+  const char *element_type_name;
+  const char *aapcs_name;
+} arm_mangle_map_entry;
+
+static arm_mangle_map_entry arm_mangle_map[] = {
+  /* 64-bit containerized types.  */
+  { V8QImode,  "__builtin_neon_qi",     "15__simd64_int8_t" },
+  { V8QImode,  "__builtin_neon_uqi",    "16__simd64_uint8_t" },
+  { V4HImode,  "__builtin_neon_hi",     "16__simd64_int16_t" },
+  { V4HImode,  "__builtin_neon_uhi",    "17__simd64_uint16_t" },
+  { V2SImode,  "__builtin_neon_si",     "16__simd64_int32_t" },
+  { V2SImode,  "__builtin_neon_usi",    "17__simd64_uint32_t" },
+  { V2SFmode,  "__builtin_neon_sf",     "18__simd64_float32_t" },
+  { V8QImode,  "__builtin_neon_poly8",  "16__simd64_poly8_t" },
+  { V4HImode,  "__builtin_neon_poly16", "17__simd64_poly16_t" },
+  /* 128-bit containerized types.  */
+  { V16QImode, "__builtin_neon_qi",     "16__simd128_int8_t" },
+  { V16QImode, "__builtin_neon_uqi",    "17__simd128_uint8_t" },
+  { V8HImode,  "__builtin_neon_hi",     "17__simd128_int16_t" },
+  { V8HImode,  "__builtin_neon_uhi",    "18__simd128_uint16_t" },
+  { V4SImode,  "__builtin_neon_si",     "17__simd128_int32_t" },
+  { V4SImode,  "__builtin_neon_usi",    "18__simd128_uint32_t" },
+  { V4SFmode,  "__builtin_neon_sf",     "19__simd128_float32_t" },
+  { V16QImode, "__builtin_neon_poly8",  "17__simd128_poly8_t" },
+  { V8HImode,  "__builtin_neon_poly16", "18__simd128_poly16_t" },
+  { VOIDmode, NULL, NULL }
+};
+
+const char *
+arm_mangle_vector_type (tree type)
+{
+  arm_mangle_map_entry *pos = arm_mangle_map;
+
+  gcc_assert (TREE_CODE (type) == VECTOR_TYPE);
+
+  /* Check the mode of the vector type, and the name of the vector
+     element type, against the table.  */
+  while (pos->mode != VOIDmode)
+    {
+      tree elt_type = TREE_TYPE (type);
+
+      if (pos->mode == TYPE_MODE (type)
+          && TREE_CODE (TYPE_NAME (elt_type)) == TYPE_DECL
+          && !strcmp (IDENTIFIER_POINTER (DECL_NAME (TYPE_NAME (elt_type))),
+                      pos->element_type_name))
+	return pos->aapcs_name;
+
+      pos++;
+    }
+
+  /* Use the default mangling for unrecognized (possibly user-defined)
+     vector types.  */
+  return NULL;
+}
+
+void
+arm_asm_output_addr_diff_vec (FILE *file, rtx label, rtx body)
+{
+  int idx, size = GET_MODE_SIZE (GET_MODE (body));
+  int pack = (TARGET_THUMB) ? 2 : 4;
+  /* APPLE LOCAL 5837498 assembler expr for (L1-L2)/2 */
+  /* removed unused variable "base_addr" */
+  int base_label_no = CODE_LABEL_NUMBER (label);
+  int vlen = XVECLEN (body, 1); /* includes trailing default */
+  const char* directive;
+  if (GET_MODE (body) == QImode)
+      directive = ".byte";
+  else if (GET_MODE (body) == HImode)
+      directive = ".short";
+  else
+    {
+      pack = 1;		    
+      directive = ".long";
+    }
+  /* Alignment of table was handled by aligning its label,
+     in final_scan_insn. */
+  targetm.asm_out.internal_label (file, "L", base_label_no);
+  /* Default is not included in output count */
+  if (TARGET_COMPACT_SWITCH_TABLES)
+    asm_fprintf (file, "\t%s\t%d @ size\n", directive, vlen - 1);
+  for (idx = 0; idx < vlen; idx++)
+    {
+      rtx target_label = XEXP (XVECEXP (body, 1, idx), 0);
+      /* APPLE LOCAL begin 5837498 assembler expr for (L1-L2)/2 */
+      if (GET_MODE (body) != SImode)
+        {
+	  /* ARM mode bodies are always SImode.  */
+	  asm_fprintf (file, "\t%s\t(L%d-L%d)/%d\n",
+	    directive,
+	  CODE_LABEL_NUMBER (target_label), base_label_no, pack);
+        }    
+      /* APPLE LOCAL end 5837498 assembler expr for (L1-L2)/2 */
+      /* APPLE LOCAL begin 6152801 SImode thumb2 switch table dispatch */
+      else if (TARGET_ARM)
+	asm_fprintf (file, "\tb\tL%d\n",
+			CODE_LABEL_NUMBER (target_label));
+      else if (TARGET_THUMB2)
+	asm_fprintf (file, "\tb.w\tL%d\n",
+			CODE_LABEL_NUMBER (target_label));
+      /* APPLE LOCAL end 6152801 SImode thumb2 switch table dispatch */
+      else if (TARGET_COMPACT_SWITCH_TABLES || flag_pic)
+	/* Let the assembler do the computation here; one case where
+	   this is needed is when there are asms, which make
+	   compile-time computations unreliable. */
+	asm_fprintf (file, "\t%s\tL%d-L%d\n",
+	  directive,
+	  CODE_LABEL_NUMBER (target_label), base_label_no);
+      else
+	asm_fprintf (file, "\t%s\tL%d\n", directive,
+		     CODE_LABEL_NUMBER (target_label));
+    }
+  /* Pad to instruction boundary. */
+  if (TARGET_COMPACT_SWITCH_TABLES)
+    vlen = (vlen + 1/*count*/) * size;
+  else
+    vlen = vlen * size;
+  while (vlen % pack != 0)
+    {
+      asm_fprintf (file, "\t%s\t0 @ pad\n", directive);
+      vlen += size;
+    }
+}
+/* APPLE LOCAL end v7 support. Merge from Codesourcery */
+
 /* APPLE LOCAL begin ARM enhance conditional insn generation */
 /* A C expression to modify the code described by the conditional if
    information CE_INFO, for the basic block BB, possibly updating the tests in
@@ -17795,7 +23634,8 @@
 int
 arm_function_boundary (void)
 {
-  int min_align = TARGET_ARM ? 32 : 16;
+  /* APPLE LOCAL 6357106 thumb2 functions should be 4-byte aligned */
+  int min_align = TARGET_32BIT ? 32 : 16;
 
   /* Even in Thumb mode, thunks are output as ARM functions.  */
   if (cfun && current_function_is_thunk)
@@ -17817,4 +23657,120 @@
 }
 /* APPLE LOCAL end ARM 6008578 */
 
+/* APPLE LOCAL begin 6160917 */
+/* Handle the cases where SECONDARY_INPUT_RELOAD_CLASS said that we
+   needed a scratch register.  Currently, we only handle the case
+   where there was indexed literal addressing with an out-of-range
+   offset.  */
+void
+neon_reload_in (rtx *operands, enum machine_mode mode)
+{
+  if (GET_CODE (operands[1]) == MEM)
+    {
+      rtx mem_addr = XEXP (operands[1], 0);
+      if (GET_CODE (mem_addr) == PLUS
+	  && GET_CODE (XEXP (mem_addr, 0)) == REG
+	  && REG_MODE_OK_FOR_BASE_P (XEXP (mem_addr, 0), VOIDmode)
+	  && ! arm_legitimate_index_p (mode, XEXP (mem_addr, 1), SET, 0))
+	{
+	  rtx scratch;
+
+	  /* Load the address into the scratch register provided,
+	     and then indirect it.  */
+	  emit_move_insn (operands[2], mem_addr);
+	  scratch = gen_rtx_MEM (mode, operands[2]);
+	  emit_move_insn (operands[0], scratch);
+	  return;
+	}
+    }
+  /* If you reach here, SECONDARY_INPUT_RELOAD_CLASS is indicating that
+     a scratch register is needed, but we don't have any code to
+     handle it.  Add that code here.  */
+  gcc_unreachable ();
+}
+  
+/* Handle the cases where SECONDARY_OUTPUT_RELOAD_CLASS said that we
+   needed a scratch register.  Currently, we only handle the case
+   where there was indexed literal addressing with an out-of-range
+   offset.  */
+void
+neon_reload_out (rtx *operands, enum machine_mode mode)
+{
+  if (GET_CODE (operands[0]) == MEM)
+    {
+      rtx mem_addr = XEXP (operands[0], 0);
+      if (GET_CODE (mem_addr) == PLUS
+	  && GET_CODE (XEXP (mem_addr, 0)) == REG
+	  && REG_MODE_OK_FOR_BASE_P (XEXP (mem_addr, 0), VOIDmode)
+	  && ! arm_legitimate_index_p (mode, XEXP (mem_addr, 1), SET, 0))
+	{
+	  rtx scratch;
+
+	  /* Load the address into the scratch register provided,
+	     and then indirect it.  */
+	  emit_move_insn (operands[2], mem_addr);
+	  scratch = gen_rtx_MEM (mode, operands[2]);
+	  emit_move_insn (scratch, operands[1]);
+	  return;
+	}
+    }
+  /* If you reach here, SECONDARY_OUTPUT_RELOAD_CLASS is indicating that
+     a scratch register is needed, but we don't have any code to
+     handle it.  Add that code here.  */
+  gcc_unreachable ();
+}
+/* APPLE LOCAL end 6160917 */
+
+/* APPLE LOCAL begin 5571707 Allow R9 as caller-saved register */
+/* For v4 and v5, we always reserve R9 for thread local data. For v6 and
+   v7, we can make it available when the target is iPhoneOS v3.0 or later. */
+void
+arm_darwin_subtarget_conditional_register_usage (void)
+{
+  if (!(arm_arch6 && !darwin_reserve_r9_on_v6) && !arm_arch7a)
+    fixed_regs[9]   = 1;
+  call_used_regs[9] = 1;
+
+  if (TARGET_THUMB)
+    {		
+      fixed_regs[THUMB_HARD_FRAME_POINTER_REGNUM] = 1;
+      call_used_regs[THUMB_HARD_FRAME_POINTER_REGNUM] = 1;
+      global_regs[THUMB_HARD_FRAME_POINTER_REGNUM] = 1;	
+    }
+}
+/* APPLE LOCAL end 5571707 Allow R9 as caller-saved register */
+
+/* APPLE LOCAL begin 6902792 Q register clobbers in inline asm */
+/* Worker function for TARGET_MD_ASM_CLOBBERS.
+   We do this to translate references to Qn registers into the equivalent
+   D(2n)/D(2n+1) register pairs. */
+static tree
+arm_md_asm_clobbers (tree outputs ATTRIBUTE_UNUSED,
+		      tree inputs ATTRIBUTE_UNUSED,
+		      tree clobbers)
+{
+  tree tail;
+
+  for (tail = clobbers; tail; tail = TREE_CHAIN (tail))
+    {
+      const char *clobber_name;
+      clobber_name = TREE_STRING_POINTER (TREE_VALUE (tail));
+      if (tolower (clobber_name[0]) == 'q' && isdigit (clobber_name[1])
+          && (isdigit (clobber_name[2]) || clobber_name[2] == '\0'))
+        {
+          char regname[4] = "dXX";
+          /* Found a Q register in the clobber list, so add a clobber
+             for the D register covering its upper dword.  The existing
+             clobber for the Q register covers the low dword. */
+          int regno = atoi (clobber_name + 1) * 2 + 1;
+          snprintf (regname + 1, 3, "%d", regno);
+          clobbers =
+            tree_cons (NULL_TREE, build_string (strlen(regname), regname),
+                       clobbers);
+        }
+    }
+  return clobbers;
+}
+/* APPLE LOCAL end 6902792 Q register clobbers in inline asm */
+
 #include "gt-arm.h"

Modified: llvm-gcc-4.2/trunk/gcc/config/arm/arm.h
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/gcc/config/arm/arm.h?rev=76781&r1=76780&r2=76781&view=diff

==============================================================================
--- llvm-gcc-4.2/trunk/gcc/config/arm/arm.h (original)
+++ llvm-gcc-4.2/trunk/gcc/config/arm/arm.h Wed Jul 22 15:36:27 2009
@@ -47,6 +47,13 @@
 
 /* APPLE LOCAL end ARM darwin target */
 
+/* APPLE LOCAL begin 6150882 use thumb2 by default for v7 */
+/* thumb_option is initialized to -1, so we can tell whether the user
+   explicitly passed -mthumb or -mno-thumb. override_options will
+   set thumb_option = 1 if -mno-thumb was not seen. */
+#define TARGET_THUMB (thumb_option == 1)
+/* APPLE LOCAL end 6150882 use thumb2 by default for v7 */
+
 /* APPLE LOCAL ARM interworking */
 #define TARGET_INTERWORK (interwork_option == 1)
 
@@ -63,10 +70,10 @@
 	builtin_define ("__APCS_32__");			\
 	if (TARGET_THUMB)				\
 	  builtin_define ("__thumb__");			\
-	/* LLVM LOCAL begin */				\
+/* APPLE LOCAL begin v7 support. Merge from mainline */ \
 	if (TARGET_THUMB2)				\
 	  builtin_define ("__thumb2__");		\
-	/* LLVM LOCAL end */				\
+/* APPLE LOCAL end v7 support. Merge from mainline */   \
 							\
 	if (TARGET_BIG_END)				\
 	  {						\
@@ -89,6 +96,10 @@
 	if (TARGET_VFP)					\
 	  builtin_define ("__VFP_FP__");		\
 							\
+/* APPLE LOCAL begin v7 support. Merge from Codesourcery */ \
+	if (TARGET_NEON)				\
+	  builtin_define ("__ARM_NEON__");		\
+/* APPLE LOCAL end v7 support. Merge from Codesourcery */   \
 	/* Add a define for interworking.		\
 	   Needed when building libgcc.a.  */		\
 	if (arm_cpp_interwork)				\
@@ -208,9 +219,11 @@
 #define TARGET_FPA			(arm_fp_model == ARM_FP_MODEL_FPA)
 #define TARGET_MAVERICK			(arm_fp_model == ARM_FP_MODEL_MAVERICK)
 #define TARGET_VFP			(arm_fp_model == ARM_FP_MODEL_VFP)
+/* APPLE LOCAL begin v7 support. Merge from mainline */
 #define TARGET_IWMMXT			(arm_arch_iwmmxt)
-#define TARGET_REALLY_IWMMXT		(TARGET_IWMMXT && TARGET_ARM)
-#define TARGET_IWMMXT_ABI (TARGET_ARM && arm_abi == ARM_ABI_IWMMXT)
+#define TARGET_REALLY_IWMMXT		(TARGET_IWMMXT && TARGET_32BIT)
+#define TARGET_IWMMXT_ABI (TARGET_32BIT && arm_abi == ARM_ABI_IWMMXT)
+/* APPLE LOCAL end v7 support. Merge from mainline */
 #define TARGET_ARM                      (! TARGET_THUMB)
 #define TARGET_EITHER			1 /* (TARGET_ARM | TARGET_THUMB) */
 #define TARGET_BACKTRACE	        (leaf_function_p () \
@@ -220,23 +233,63 @@
 #define TARGET_AAPCS_BASED \
     (arm_abi != ARM_ABI_APCS && arm_abi != ARM_ABI_ATPCS)
 
+/* APPLE LOCAL begin v7 support. Merge from Codesourcery */
+/* True if we should avoid generating conditional execution instructions.  */
+#define TARGET_NO_COND_EXEC		(arm_tune_marvell_f && !optimize_size)
+
+/* APPLE LOCAL end v7 support. Merge from Codesourcery */
 #define TARGET_HARD_TP			(target_thread_pointer == TP_CP15)
 #define TARGET_SOFT_TP			(target_thread_pointer == TP_SOFT)
 
-/* LLVM LOCAL begin */
+/* APPLE LOCAL begin v7 support. Merge from mainline */
 /* Only 16-bit thumb code.  */
 #define TARGET_THUMB1			(TARGET_THUMB && !arm_arch_thumb2)
 /* Arm or Thumb-2 32-bit code.  */
 #define TARGET_32BIT			(TARGET_ARM || arm_arch_thumb2)
 /* 32-bit Thumb-2 code.  */
 #define TARGET_THUMB2			(TARGET_THUMB && arm_arch_thumb2)
-/* LLVM LOCAL end */
+/* APPLE LOCAL end v7 support. Merge from mainline */
+/* APPLE LOCAL begin v7 support. Merge from Codesourcery */
+/* Thumb-1 only.  */
+#define TARGET_THUMB1_ONLY		(TARGET_THUMB1 && !arm_arch_notm)
+
+/* The following two macros concern the ability to execute coprocessor
+   instructions for VFPv3 or NEON.  TARGET_VFP3 is currently only ever
+   tested when we know we are generating for VFP hardware; we need to
+   be more careful with TARGET_NEON as noted below.  */
+
+/* FPU is VFPv3 (with twice the number of D registers).  Setting the FPU to
+   Neon automatically enables VFPv3 too.  */
+#define TARGET_VFP3 (arm_fp_model == ARM_FP_MODEL_VFP \
+		     && (arm_fpu_arch == FPUTYPE_VFP3 \
+			 || arm_fpu_arch == FPUTYPE_NEON))
+/* FPU supports Neon instructions.  The setting of this macro gets
+   revealed via __ARM_NEON__ so we add extra guards upon TARGET_32BIT
+   and TARGET_HARD_FLOAT to ensure that NEON instructions are
+   available.  */
+#define TARGET_NEON (TARGET_32BIT && TARGET_HARD_FLOAT \
+                     && arm_fp_model == ARM_FP_MODEL_VFP \
+		     && arm_fpu_arch == FPUTYPE_NEON)
+/* APPLE LOCAL end v7 support. Merge from Codesourcery */
+/* APPLE LOCAL begin v7 support. Merge from mainline */
+
+/* "DSP" multiply instructions, eg. SMULxy.  */
+#define TARGET_DSP_MULTIPLY \
+  (TARGET_32BIT && arm_arch5e && arm_arch_notm)
+/* Integer SIMD instructions, and extend-accumulate instructions.  */
+#define TARGET_INT_SIMD \
+  (TARGET_32BIT && arm_arch6 && arm_arch_notm)
+
+/* We could use unified syntax for arm mode, but for now we just use it
+   for Thumb-2.  */
+#define TARGET_UNIFIED_ASM TARGET_THUMB2
 
 /* APPLE LOCAL begin ARM compact switch tables */
 /* Use compact switch tables with libgcc handlers.  */
 #define TARGET_COMPACT_SWITCH_TABLES \
-  (TARGET_THUMB && !TARGET_LONG_CALLS)
+  (TARGET_THUMB1 && !TARGET_LONG_CALLS)
 /* APPLE LOCAL end ARM compact switch tables */
+/* APPLE LOCAL end v7 support. Merge from mainline */
 
 /* True iff the full BPABI is being used.  If TARGET_BPABI is true,
    then TARGET_AAPCS_BASED must be true -- but the converse does not
@@ -295,7 +348,13 @@
   /* Cirrus Maverick floating point co-processor.  */
   FPUTYPE_MAVERICK,
   /* VFP.  */
-  FPUTYPE_VFP
+/* APPLE LOCAL begin v7 support. Merge from Codesourcery */
+  FPUTYPE_VFP,
+  /* VFPv3.  */
+  FPUTYPE_VFP3,
+  /* Neon.  */
+  FPUTYPE_NEON
+/* APPLE LOCAL end v7 support. Merge from Codesourcery */
 };
 
 /* Recast the floating point class to be the floating point attribute.  */
@@ -366,6 +425,11 @@
 /* LLVM LOCAL Declare arm_arch7a for use when setting the target triple.  */
 extern int arm_arch7a;
 
+/* APPLE LOCAL begin v7 support. Merge from mainline */
+/* Nonzero if instructions not present in the 'M' profile can be used.  */
+extern int arm_arch_notm;
+
+/* APPLE LOCAL end v7 support. Merge from mainline */
 /* Nonzero if this chip can benefit from load scheduling.  */
 extern int arm_ld_sched;
 
@@ -397,11 +461,14 @@
    interworking clean.  */
 extern int arm_cpp_interwork;
 
-/* LLVM LOCAL begin */
+/* APPLE LOCAL begin v7 support. Merge from mainline */
 /* Nonzero if chip supports Thumb 2.  */
 extern int arm_arch_thumb2;
-/* LLVM LOCAL end */
 
+/* Nonzero if chip supports integer division instruction.  */
+extern int arm_arch_hwdiv;
+
+/* APPLE LOCAL end v7 support. Merge from mainline */
 #ifndef TARGET_DEFAULT
 #define TARGET_DEFAULT  (MASK_APCS_FRAME)
 #endif
@@ -495,6 +562,14 @@
 
 #define UNITS_PER_WORD	4
 
+/* APPLE LOCAL begin v7 support. Merge from Codesourcery */
+/* Use the option -mvectorize-with-neon-quad to override the use of doubleword
+   registers when autovectorizing for Neon, at least until multiple vector
+   widths are supported properly by the middle-end.  */
+#define UNITS_PER_SIMD_WORD \
+  (TARGET_NEON ? (TARGET_NEON_VECTORIZE_QUAD ? 16 : 8) : UNITS_PER_WORD)
+
+/* APPLE LOCAL end v7 support. Merge from Codesourcery */
 /* True if natural alignment is used for doubleword types.  */
 #define ARM_DOUBLEWORD_ALIGN	TARGET_AAPCS_BASED
 
@@ -669,6 +744,12 @@
   1,1,1,1,1,1,1,1,	\
   1,1,1,1,1,1,1,1,	\
   1,1,1,1,1,1,1,1,	\
+/* APPLE LOCAL begin v7 support. Merge from mainline */ \
+  1,1,1,1,1,1,1,1,	\
+  1,1,1,1,1,1,1,1,	\
+  1,1,1,1,1,1,1,1,	\
+  1,1,1,1,1,1,1,1,	\
+/* APPLE LOCAL end v7 support. Merge from mainline */ \
   1			\
 }
 
@@ -695,6 +776,12 @@
   1,1,1,1,1,1,1,1,	     \
   1,1,1,1,1,1,1,1,	     \
   1,1,1,1,1,1,1,1,	     \
+/* APPLE LOCAL begin v7 support. Merge from mainline */ \
+  1,1,1,1,1,1,1,1,	     \
+  1,1,1,1,1,1,1,1,	     \
+  1,1,1,1,1,1,1,1,	     \
+  1,1,1,1,1,1,1,1,	     \
+/* APPLE LOCAL end v7 support. Merge from mainline */ \
   1			     \
 }
 
@@ -706,7 +793,8 @@
 {								\
   int regno;							\
 								\
-  if (TARGET_SOFT_FLOAT || TARGET_THUMB || !TARGET_FPA)		\
+  /* APPLE LOCAL v7 support. Merge from mainline */         \
+  if (TARGET_SOFT_FLOAT || TARGET_THUMB1 || !TARGET_FPA)	\
     {								\
       for (regno = FIRST_FPA_REGNUM;				\
 	   regno <= LAST_FPA_REGNUM; ++regno)			\
@@ -718,6 +806,8 @@
       /* When optimizing for size, it's better not to use	\
 	 the HI regs, because of the overhead of stacking 	\
 	 them.  */						\
+      /* APPLE LOCAL v7 support. Merge from mainline */     \
+      /* ??? Is this still true for thumb2?  */			\
       for (regno = FIRST_HI_REGNUM;				\
 	   regno <= LAST_HI_REGNUM; ++regno)			\
 	fixed_regs[regno] = call_used_regs[regno] = 1;		\
@@ -726,10 +816,12 @@
   /* The link register can be clobbered by any branch insn,	\
      but we have no way to track that at present, so mark	\
      it as unavailable.  */					\
-  if (TARGET_THUMB)						\
+  /* APPLE LOCAL v7 support. Merge from mainline */         \
+  if (TARGET_THUMB1)						\
     fixed_regs[LR_REGNUM] = call_used_regs[LR_REGNUM] = 1;	\
 								\
-  if (TARGET_ARM && TARGET_HARD_FLOAT)				\
+  /* APPLE LOCAL v7 support. Merge from mainline */         \
+  if (TARGET_32BIT && TARGET_HARD_FLOAT)			\
     {								\
       if (TARGET_MAVERICK)					\
 	{							\
@@ -743,15 +835,21 @@
 	      call_used_regs[regno] = regno < FIRST_CIRRUS_FP_REGNUM + 4; \
 	    }							\
 	}							\
+      /* APPLE LOCAL begin v7 support. Merge from mainline */ \
       if (TARGET_VFP)						\
 	{							\
+	  /* VFPv3 registers are disabled when earlier VFP	\
+	     versions are selected due to the definition of	\
+	     LAST_VFP_REGNUM.  */				\
 	  for (regno = FIRST_VFP_REGNUM;			\
 	       regno <= LAST_VFP_REGNUM; ++ regno)		\
 	    {							\
 	      fixed_regs[regno] = 0;				\
-	      call_used_regs[regno] = regno < FIRST_VFP_REGNUM + 16; \
+	      call_used_regs[regno] = regno < FIRST_VFP_REGNUM + 16 \
+	      	|| regno >= FIRST_VFP_REGNUM + 32;		\
 	    }							\
 	}							\
+      /* APPLE LOCAL end v7 support. Merge from mainline */ \
     }								\
 								\
   if (TARGET_REALLY_IWMMXT)					\
@@ -865,7 +963,8 @@
 /* The native (Norcroft) Pascal compiler for the ARM passes the static chain
    as an invisible last argument (possible since varargs don't exist in
    Pascal), so the following is not true.  */
-#define STATIC_CHAIN_REGNUM	(TARGET_ARM ? 12 : 9)
+/* APPLE LOCAL v7 support. Merge from mainline */
+#define STATIC_CHAIN_REGNUM	12
 
 /* Define this to be where the real frame pointer is if it is not possible to
    work out the offset between the frame pointer and the automatic variables
@@ -924,15 +1023,57 @@
   (((REGNUM) >= FIRST_CIRRUS_FP_REGNUM) && ((REGNUM) <= LAST_CIRRUS_FP_REGNUM))
 
 #define FIRST_VFP_REGNUM	63
-#define LAST_VFP_REGNUM		94
+/* APPLE LOCAL begin v7 support. Merge from mainline */
+#define D7_VFP_REGNUM		78  /* Registers 77 and 78 == VFP reg D7.  */
+#define LAST_VFP_REGNUM	\
+  (TARGET_VFP3 ? LAST_HI_VFP_REGNUM : LAST_LO_VFP_REGNUM)
+
 #define IS_VFP_REGNUM(REGNUM) \
   (((REGNUM) >= FIRST_VFP_REGNUM) && ((REGNUM) <= LAST_VFP_REGNUM))
 
+/* VFP registers are split into two types: those defined by VFP versions < 3
+   have D registers overlaid on consecutive pairs of S registers. VFP version 3
+   defines 16 new D registers (d16-d31) which, for simplicity and correctness
+   in various parts of the backend, we implement as "fake" single-precision
+   registers (which would be S32-S63, but cannot be used in that way).  The
+   following macros define these ranges of registers.  */
+#define LAST_LO_VFP_REGNUM	94
+#define FIRST_HI_VFP_REGNUM	95
+#define LAST_HI_VFP_REGNUM	126
+
+/* APPLE LOCAL 6150859 begin use NEON instructions for SF math */
+/* For NEON, SFmode values are only valid in even registers.  */
+#define VFP_REGNO_OK_FOR_SINGLE(REGNUM) \
+  (((REGNUM) <= LAST_LO_VFP_REGNUM) \
+   && (TARGET_NEON ? ((((REGNUM) - FIRST_VFP_REGNUM) & 1) == 0): 1))
+/* APPLE LOCAL 6150859 end use NEON instructions for SF math */
+
+/* DFmode values are only valid in even register pairs.  */
+#define VFP_REGNO_OK_FOR_DOUBLE(REGNUM) \
+  ((((REGNUM) - FIRST_VFP_REGNUM) & 1) == 0)
+
+/* APPLE LOCAL end v7 support. Merge from mainline */
+/* APPLE LOCAL begin v7 support. Merge from Codesourcery */
+/* Neon Quad values must start at a multiple of four registers.  */
+#define NEON_REGNO_OK_FOR_QUAD(REGNUM) \
+  ((((REGNUM) - FIRST_VFP_REGNUM) & 3) == 0)
+
+/* Neon structures of vectors must be in even register pairs and there
+   must be enough registers available.  Because of various patterns
+   requiring quad registers, we require them to start at a multiple of
+   four.  */
+#define NEON_REGNO_OK_FOR_NREGS(REGNUM, N) \
+  ((((REGNUM) - FIRST_VFP_REGNUM) & 3) == 0 \
+   && (LAST_VFP_REGNUM - (REGNUM) >= 2 * (N) - 1))
+
+/* APPLE LOCAL end v7 support. Merge from Codesourcery */
 /* The number of hard registers is 16 ARM + 8 FPA + 1 CC + 1 SFP + 1 AFP.  */
 /* + 16 Cirrus registers take us up to 43.  */
 /* Intel Wireless MMX Technology registers add 16 + 4 more.  */
-/* VFP adds 32 + 1 more.  */
-#define FIRST_PSEUDO_REGISTER   96
+/* APPLE LOCAL begin v7 support. Merge from mainline */
+/* VFP (VFP3) adds 32 (64) + 1 more.  */
+#define FIRST_PSEUDO_REGISTER   128
+/* APPLE LOCAL end v7 support. Merge from mainline */
 
 #define DBX_REGISTER_NUMBER(REGNO) arm_dbx_register_number (REGNO)
 
@@ -967,7 +1108,8 @@
    On the ARM regs are UNITS_PER_WORD bits wide; FPA regs can hold any FP
    mode.  */
 #define HARD_REGNO_NREGS(REGNO, MODE)  	\
-  ((TARGET_ARM 				\
+/* APPLE LOCAL v7 support. Merge from mainline */ \
+  ((TARGET_32BIT			\
     && REGNO >= FIRST_FPA_REGNUM	\
     && REGNO != FRAME_POINTER_REGNUM	\
     && REGNO != ARG_POINTER_REGNUM)	\
@@ -988,30 +1130,56 @@
 #define VALID_IWMMXT_REG_MODE(MODE) \
  (arm_vector_mode_supported_p (MODE) || (MODE) == DImode)
 
+/* APPLE LOCAL begin v7 support. Merge from Codesourcery */
+/* Modes valid for Neon D registers.  */
+#define VALID_NEON_DREG_MODE(MODE) \
+  ((MODE) == V2SImode || (MODE) == V4HImode || (MODE) == V8QImode \
+   || (MODE) == V2SFmode || (MODE) == DImode)
+
+/* Modes valid for Neon Q registers.  */
+#define VALID_NEON_QREG_MODE(MODE) \
+  ((MODE) == V4SImode || (MODE) == V8HImode || (MODE) == V16QImode \
+   || (MODE) == V4SFmode || (MODE) == V2DImode)
+
+/* Structure modes valid for Neon registers.  */
+#define VALID_NEON_STRUCT_MODE(MODE) \
+  ((MODE) == TImode || (MODE) == EImode || (MODE) == OImode \
+   || (MODE) == CImode || (MODE) == XImode)
+
 /* The order in which register should be allocated.  It is good to use ip
    since no saving is required (though calls clobber it) and it never contains
    function parameters.  It is quite good to use lr since other calls may
    clobber it anyway.  Allocate r0 through r3 in reverse order since r3 is
    least likely to contain a function parameter; in addition results are
-   returned in r0.  */
+   returned in r0.
+   For VFP/VFPv3, allocate caller-saved registers first (D0-D7), then D16-D31,
+   then D8-D15.  The reason for doing this is to attempt to reduce register
+   pressure when both single- and double-precision registers are used in a
+   function, but hopefully not force double-precision registers to be
+   callee-saved when it's not necessary. */
 
-#define REG_ALLOC_ORDER  	    \
-{                                   \
-     3,  2,  1,  0, 12, 14,  4,  5, \
-     6,  7,  8, 10,  9, 11, 13, 15, \
-    16, 17, 18, 19, 20, 21, 22, 23, \
-    27, 28, 29, 30, 31, 32, 33, 34, \
-    35, 36, 37, 38, 39, 40, 41, 42, \
-    43, 44, 45, 46, 47, 48, 49, 50, \
-    51, 52, 53, 54, 55, 56, 57, 58, \
-    59, 60, 61, 62,		    \
-    24, 25, 26,			    \
-    78, 77, 76, 75, 74, 73, 72, 71, \
-    70, 69, 68, 67, 66, 65, 64, 63, \
-    79, 80, 81, 82, 83, 84, 85, 86, \
-    87, 88, 89, 90, 91, 92, 93, 94, \
-    95				    \
+#define REG_ALLOC_ORDER				\
+{						\
+     3,  2,  1,  0, 12, 14,  4,  5,		\
+     6,  7,  8, 10,  9, 11, 13, 15,		\
+    16, 17, 18, 19, 20, 21, 22, 23,		\
+    27, 28, 29, 30, 31, 32, 33, 34,		\
+    35, 36, 37, 38, 39, 40, 41, 42,		\
+    43, 44, 45, 46, 47, 48, 49, 50,		\
+    51, 52, 53, 54, 55, 56, 57, 58,		\
+    59, 60, 61, 62,				\
+    24, 25, 26,					\
+    78,  77,  76,  75,  74,  73,  72,  71,	\
+    70,  69,  68,  67,  66,  65,  64,  63,	\
+    95,  96,  97,  98,  99, 100, 101, 102,	\
+   103, 104, 105, 106, 107, 108, 109, 110,	\
+   111, 112, 113, 114, 115, 116, 117, 118,	\
+   119, 120, 121, 122, 123, 124, 125, 126,	\
+    79,  80,  81,  82,  83,  84,  85,  86,	\
+    87,  88,  89,  90,  91,  92,  93,  94,	\
+   127						\
 }
+/* APPLE LOCAL end v7 support. Merge from Codesourcery */
 
 /* APPLE LOCAL begin 5831562 add DIMODE_REG_ALLOC_ORDER */
 #define DIMODE_REG_ALLOC_ORDER  	    \
@@ -1044,11 +1212,15 @@
 
 /* Register classes: used to be simple, just all ARM regs or all FPA regs
    Now that the Thumb is involved it has become more complicated.  */
+/* APPLE LOCAL begin v7 support. Merge from mainline */
 enum reg_class
 {
   NO_REGS,
   FPA_REGS,
   CIRRUS_REGS,
+  VFP_D0_D7_REGS,
+  VFP_LO_REGS,
+  VFP_HI_REGS,
   VFP_REGS,
   IWMMXT_GR_REGS,
   IWMMXT_REGS,
@@ -1071,6 +1243,9 @@
   "NO_REGS",		\
   "FPA_REGS",		\
   "CIRRUS_REGS",	\
+  "VFP_D0_D7_REGS",	\
+  "VFP_LO_REGS",	\
+  "VFP_HI_REGS",	\
   "VFP_REGS",		\
   "IWMMXT_GR_REGS",	\
   "IWMMXT_REGS",	\
@@ -1087,24 +1262,33 @@
 /* Define which registers fit in which classes.
    This is an initializer for a vector of HARD_REG_SET
    of length N_REG_CLASSES.  */
-#define REG_CLASS_CONTENTS					\
-{								\
-  { 0x00000000, 0x00000000, 0x00000000 }, /* NO_REGS  */	\
-  { 0x00FF0000, 0x00000000, 0x00000000 }, /* FPA_REGS */	\
-  { 0xF8000000, 0x000007FF, 0x00000000 }, /* CIRRUS_REGS */	\
-  { 0x00000000, 0x80000000, 0x7FFFFFFF }, /* VFP_REGS  */	\
-  { 0x00000000, 0x00007800, 0x00000000 }, /* IWMMXT_GR_REGS */	\
-  { 0x00000000, 0x7FFF8000, 0x00000000 }, /* IWMMXT_REGS */	\
-  { 0x000000FF, 0x00000000, 0x00000000 }, /* LO_REGS */		\
-  { 0x00002000, 0x00000000, 0x00000000 }, /* STACK_REG */	\
-  { 0x000020FF, 0x00000000, 0x00000000 }, /* BASE_REGS */	\
-  { 0x0000FF00, 0x00000000, 0x00000000 }, /* HI_REGS */		\
-  { 0x01000000, 0x00000000, 0x00000000 }, /* CC_REG */		\
-  { 0x00000000, 0x00000000, 0x80000000 }, /* VFPCC_REG */	\
-  { 0x0200FFFF, 0x00000000, 0x00000000 }, /* GENERAL_REGS */	\
-  { 0xFAFFFFFF, 0xFFFFFFFF, 0x7FFFFFFF }  /* ALL_REGS */	\
+#define REG_CLASS_CONTENTS						\
+{									\
+  { 0x00000000, 0x00000000, 0x00000000, 0x00000000 }, /* NO_REGS  */	\
+  { 0x00FF0000, 0x00000000, 0x00000000, 0x00000000 }, /* FPA_REGS */	\
+  { 0xF8000000, 0x000007FF, 0x00000000, 0x00000000 }, /* CIRRUS_REGS */	\
+  { 0x00000000, 0x80000000, 0x00007FFF, 0x00000000 }, /* VFP_D0_D7_REGS  */ \
+  { 0x00000000, 0x80000000, 0x7FFFFFFF, 0x00000000 }, /* VFP_LO_REGS  */ \
+  { 0x00000000, 0x00000000, 0x80000000, 0x7FFFFFFF }, /* VFP_HI_REGS  */ \
+  { 0x00000000, 0x80000000, 0xFFFFFFFF, 0x7FFFFFFF }, /* VFP_REGS  */	\
+  { 0x00000000, 0x00007800, 0x00000000, 0x00000000 }, /* IWMMXT_GR_REGS */ \
+  { 0x00000000, 0x7FFF8000, 0x00000000, 0x00000000 }, /* IWMMXT_REGS */	\
+  { 0x000000FF, 0x00000000, 0x00000000, 0x00000000 }, /* LO_REGS */	\
+  { 0x00002000, 0x00000000, 0x00000000, 0x00000000 }, /* STACK_REG */	\
+  { 0x000020FF, 0x00000000, 0x00000000, 0x00000000 }, /* BASE_REGS */	\
+  { 0x0000FF00, 0x00000000, 0x00000000, 0x00000000 }, /* HI_REGS */	\
+  { 0x01000000, 0x00000000, 0x00000000, 0x00000000 }, /* CC_REG */	\
+  { 0x00000000, 0x00000000, 0x00000000, 0x80000000 }, /* VFPCC_REG */	\
+  { 0x0200FFFF, 0x00000000, 0x00000000, 0x00000000 }, /* GENERAL_REGS */ \
+  { 0xFAFFFFFF, 0xFFFFFFFF, 0xFFFFFFFF, 0x7FFFFFFF }  /* ALL_REGS */	\
 }
 
+/* Any of the VFP register classes.  */
+#define IS_VFP_CLASS(X) \
+  ((X) == VFP_D0_D7_REGS || (X) == VFP_LO_REGS \
+   || (X) == VFP_HI_REGS || (X) == VFP_REGS)
+/* APPLE LOCAL end v7 support. Merge from mainline */
+
 /* The same information, inverted:
    Return the class number of the smallest class containing
    reg number REGNO.  This could be a conditional expression
@@ -1127,34 +1311,39 @@
     ((TARGET_THUMB && (CLASS) == LO_REGS)	\
      || (CLASS) == CC_REG)
 
+/* APPLE LOCAL begin v7 support. Merge from mainline */
 /* The class value for index registers, and the one for base regs.  */
-#define INDEX_REG_CLASS  (TARGET_THUMB ? LO_REGS : GENERAL_REGS)
-#define BASE_REG_CLASS   (TARGET_THUMB ? LO_REGS : GENERAL_REGS)
+#define INDEX_REG_CLASS  (TARGET_THUMB1 ? LO_REGS : GENERAL_REGS)
+#define BASE_REG_CLASS   (TARGET_THUMB1 ? LO_REGS : GENERAL_REGS)
 
 /* For the Thumb the high registers cannot be used as base registers
    when addressing quantities in QI or HI mode; if we don't know the
    mode, then we must be conservative.  */
 #define MODE_BASE_REG_CLASS(MODE)					\
-    (TARGET_ARM ? GENERAL_REGS :					\
+    (TARGET_32BIT ? GENERAL_REGS :					\
      (((MODE) == SImode) ? BASE_REGS : LO_REGS))
+/* APPLE LOCAL end v7 support. Merge from mainline */
 
 /* For Thumb we can not support SP+reg addressing, so we return LO_REGS
    instead of BASE_REGS.  */
 #define MODE_BASE_REG_REG_CLASS(MODE) BASE_REG_CLASS
 
+/* APPLE LOCAL begin v7 support. Merge from mainline */
 /* When SMALL_REGISTER_CLASSES is nonzero, the compiler allows
    registers explicitly used in the rtl to be used as spill registers
    but prevents the compiler from extending the lifetime of these
    registers.  */
-#define SMALL_REGISTER_CLASSES   TARGET_THUMB
+#define SMALL_REGISTER_CLASSES   TARGET_THUMB1
 
 /* Given an rtx X being reloaded into a reg required to be
    in class CLASS, return the class of reg to actually use.
-   In general this is just CLASS, but for the Thumb we prefer
-   a LO_REGS class or a subset.  */
-#define PREFERRED_RELOAD_CLASS(X, CLASS)	\
-  (TARGET_ARM ? (CLASS) :			\
-   ((CLASS) == BASE_REGS ? (CLASS) : LO_REGS))
+   In general this is just CLASS, but for the Thumb core registers and
+   immediate constants we prefer a LO_REGS class or a subset.  */
+#define PREFERRED_RELOAD_CLASS(X, CLASS)		\
+  (TARGET_ARM ? (CLASS) :				\
+   ((CLASS) == GENERAL_REGS || (CLASS) == HI_REGS	\
+    || (CLASS) == NO_REGS ? LO_REGS : (CLASS)))
+/* APPLE LOCAL end v7 support. Merge from mainline */
 
 /* Must leave BASE_REGS reloads alone */
 #define THUMB_SECONDARY_INPUT_RELOAD_CLASS(CLASS, MODE, X)		\
@@ -1174,14 +1363,15 @@
 /* Return the register class of a scratch register needed to copy IN into
    or out of a register in CLASS in MODE.  If it can be done directly,
    NO_REGS is returned.  */
+/* APPLE LOCAL begin v7 support. Merge from mainline */
 #define SECONDARY_OUTPUT_RELOAD_CLASS(CLASS, MODE, X)		\
   /* Restrict which direct reloads are allowed for VFP/iWMMXt regs.  */ \
   ((TARGET_VFP && TARGET_HARD_FLOAT				\
-    && (CLASS) == VFP_REGS)					\
+    && IS_VFP_CLASS (CLASS))					\
    ? coproc_secondary_reload_class (MODE, X, FALSE)		\
    : (TARGET_IWMMXT && (CLASS) == IWMMXT_REGS)			\
    ? coproc_secondary_reload_class (MODE, X, TRUE)		\
-   : TARGET_ARM							\
+   : TARGET_32BIT					\
    ? (((MODE) == HImode && ! arm_arch4 && true_regnum (X) == -1) \
     ? GENERAL_REGS : NO_REGS)					\
    : THUMB_SECONDARY_OUTPUT_RELOAD_CLASS (CLASS, MODE, X))
@@ -1190,7 +1380,7 @@
 #define SECONDARY_INPUT_RELOAD_CLASS(CLASS, MODE, X)		\
   /* Restrict which direct reloads are allowed for VFP/iWMMXt regs.  */ \
   ((TARGET_VFP && TARGET_HARD_FLOAT				\
-    && (CLASS) == VFP_REGS)					\
+    && IS_VFP_CLASS (CLASS))					\
     ? coproc_secondary_reload_class (MODE, X, FALSE) :		\
     (TARGET_IWMMXT && (CLASS) == IWMMXT_REGS) ?			\
     coproc_secondary_reload_class (MODE, X, TRUE) :		\
@@ -1199,7 +1389,7 @@
      && (CLASS) == CIRRUS_REGS					\
      && (CONSTANT_P (X) || GET_CODE (X) == SYMBOL_REF))		\
     ? GENERAL_REGS :						\
-  (TARGET_ARM ?							\
+  (TARGET_32BIT ?						\
    (((CLASS) == IWMMXT_REGS || (CLASS) == IWMMXT_GR_REGS)	\
       && CONSTANT_P (X))					\
    ? GENERAL_REGS :						\
@@ -1209,6 +1399,7 @@
 	     && true_regnum (X) == -1)))			\
     ? GENERAL_REGS : NO_REGS)					\
    : THUMB_SECONDARY_INPUT_RELOAD_CLASS (CLASS, MODE, X)))
+/* APPLE LOCAL end v7 support. Merge from mainline */
 
 /* Try a machine-dependent way of reloading an illegitimate address
    operand.  If we find one, push the reload and jump to WIN.  This
@@ -1278,6 +1469,8 @@
 /* We could probably achieve better results by defining PROMOTE_MODE to help
    cope with the variances between the Thumb's signed and unsigned byte and
    halfword load instructions.  */
+/* APPLE LOCAL v7 support. Merge from mainline */
+/* ??? This should be safe for thumb2, but we may be able to do better.  */
 #define THUMB_LEGITIMIZE_RELOAD_ADDRESS(X, MODE, OPNUM, TYPE, IND_L, WIN)     \
 do {									      \
   rtx new_x = thumb_legitimize_reload_address (&X, MODE, OPNUM, TYPE, IND_L); \
@@ -1303,13 +1496,14 @@
 /* If defined, gives a class of registers that cannot be used as the
    operand of a SUBREG that changes the mode of the object illegally.  */
 
+/* APPLE LOCAL begin v7 support. Merge from mainline */
 /* Moves between FPA_REGS and GENERAL_REGS are two memory insns.  */
 #define REGISTER_MOVE_COST(MODE, FROM, TO)		\
-  (TARGET_ARM ?						\
+  (TARGET_32BIT ?						\
    ((FROM) == FPA_REGS && (TO) != FPA_REGS ? 20 :	\
     (FROM) != FPA_REGS && (TO) == FPA_REGS ? 20 :	\
-    (FROM) == VFP_REGS && (TO) != VFP_REGS ? 10 :  \
-    (FROM) != VFP_REGS && (TO) == VFP_REGS ? 10 :  \
+    IS_VFP_CLASS (FROM) && !IS_VFP_CLASS (TO) ? 10 :	\
+    !IS_VFP_CLASS (FROM) && IS_VFP_CLASS (TO) ? 10 :	\
     (FROM) == IWMMXT_REGS && (TO) != IWMMXT_REGS ? 4 :  \
     (FROM) != IWMMXT_REGS && (TO) == IWMMXT_REGS ? 4 :  \
     (FROM) == IWMMXT_GR_REGS || (TO) == IWMMXT_GR_REGS ? 20 :  \
@@ -1318,6 +1512,7 @@
    2)							\
    :							\
    ((FROM) == HI_REGS || (TO) == HI_REGS) ? 4 : 2)
+/* APPLE LOCAL end v7 support. Merge from mainline */
 
 /* Stack layout; function entry, exit and calling.  */
 
@@ -1376,13 +1571,14 @@
    on the stack.  */
 #define RETURN_POPS_ARGS(FUNDECL, FUNTYPE, SIZE)  0
 
+/* APPLE LOCAL begin v7 support. Merge from mainline */
 /* Define how to find the value returned by a library function
    assuming the value has mode MODE.  */
 #define LIBCALL_VALUE(MODE)  \
-  (TARGET_ARM && TARGET_HARD_FLOAT_ABI && TARGET_FPA			\
+  (TARGET_32BIT && TARGET_HARD_FLOAT_ABI && TARGET_FPA			\
    && GET_MODE_CLASS (MODE) == MODE_FLOAT				\
    ? gen_rtx_REG (MODE, FIRST_FPA_REGNUM)				\
-   : TARGET_ARM && TARGET_HARD_FLOAT_ABI && TARGET_MAVERICK		\
+   : TARGET_32BIT && TARGET_HARD_FLOAT_ABI && TARGET_MAVERICK		\
      && GET_MODE_CLASS (MODE) == MODE_FLOAT				\
    ? gen_rtx_REG (MODE, FIRST_CIRRUS_FP_REGNUM) 			\
    : TARGET_IWMMXT_ABI && arm_vector_mode_supported_p (MODE)    	\
@@ -1401,11 +1597,12 @@
 /* On a Cirrus chip, mvf0 can return results.  */
 #define FUNCTION_VALUE_REGNO_P(REGNO)  \
   ((REGNO) == ARG_REGISTER (1) \
-   || (TARGET_ARM && ((REGNO) == FIRST_CIRRUS_FP_REGNUM)		\
+   || (TARGET_32BIT && ((REGNO) == FIRST_CIRRUS_FP_REGNUM)		\
        && TARGET_HARD_FLOAT_ABI && TARGET_MAVERICK)			\
    || ((REGNO) == FIRST_IWMMXT_REGNUM && TARGET_IWMMXT_ABI) \
-   || (TARGET_ARM && ((REGNO) == FIRST_FPA_REGNUM)			\
+   || (TARGET_32BIT && ((REGNO) == FIRST_FPA_REGNUM)			\
        && TARGET_HARD_FLOAT_ABI && TARGET_FPA))
+/* APPLE LOCAL end v7 support. Merge from mainline */
 
 /* Amount of memory needed for an untyped call to save all possible return
    registers.  */
@@ -1452,6 +1649,8 @@
 #define ARM_FT_NAKED		(1 << 3) /* No prologue or epilogue.  */
 #define ARM_FT_VOLATILE		(1 << 4) /* Does not return.  */
 #define ARM_FT_NESTED		(1 << 5) /* Embedded inside another func.  */
+/* APPLE LOCAL v7 support. Merge from mainline */
+#define ARM_FT_STACKALIGN	(1 << 6) /* Called with misaligned stack.  */
 
 /* Some macros to test these flags.  */
 #define ARM_FUNC_TYPE(t)	(t & ARM_FT_TYPE_MASK)
@@ -1459,6 +1658,8 @@
 #define IS_VOLATILE(t)     	(t & ARM_FT_VOLATILE)
 #define IS_NAKED(t)        	(t & ARM_FT_NAKED)
 #define IS_NESTED(t)       	(t & ARM_FT_NESTED)
+/* APPLE LOCAL v7 support. Merge from mainline */
+#define IS_STACKALIGN(t)       	(t & ARM_FT_STACKALIGN)
 
 
 /* Structure used to hold the function stack frame layout.  Offsets are
@@ -1569,13 +1770,16 @@
 /* Update the data in CUM to advance over an argument
    of mode MODE and data type TYPE.
    (TYPE is null for libcalls where that information may not be available.)  */
+/* APPLE LOCAL begin v7 support. Merge from Codesourcery */
 #define FUNCTION_ARG_ADVANCE(CUM, MODE, TYPE, NAMED)	\
   (CUM).nargs += 1;					\
-  if (arm_vector_mode_supported_p (MODE)	       	\
-      && (CUM).named_count > (CUM).nargs)		\
+  if (arm_vector_mode_supported_p (MODE)		\
+      && (CUM).named_count > (CUM).nargs		\
+      && TARGET_IWMMXT_ABI)				\
     (CUM).iwmmxt_nregs += 1;				\
   else							\
     (CUM).nregs += ARM_NUM_REGS2 (MODE, TYPE)
+/* APPLE LOCAL end v7 support. Merge from Codesourcery */
 
 /* If defined, a C expression that gives the alignment boundary, in bits, of an
    argument with the specified mode and type.  If it is not defined,
@@ -1659,6 +1863,10 @@
 
 /* Determine if the epilogue should be output as RTL.
    You should override this if you define FUNCTION_EXTRA_EPILOGUE.  */
+/* APPLE LOCAL begin v7 support. Merge from mainline */
+/* This is disabled for Thumb-2 because it will confuse the
+   conditional insn counter.  */
+/* APPLE LOCAL end v7 support. Merge from mainline */
 #define USE_RETURN_INSN(ISCOND)				\
   (TARGET_ARM ? use_return_insn (ISCOND, NULL) : 0)
 
@@ -1742,39 +1950,61 @@
 #define DOT_WORD ".word"
 /* APPLE LOCAL end ARM MACH assembler */
 
-/* On the Thumb we always switch into ARM mode to execute the trampoline.
-   Why - because it is easier.  This code will always be branched to via
-   a BX instruction and since the compiler magically generates the address
-   of the function the linker has no opportunity to ensure that the
-   bottom bit is set.  Thus the processor will be in ARM mode when it
-   reaches this code.  So we duplicate the ARM trampoline code and add
-   a switch into Thumb mode as well.  */
-#define THUMB_TRAMPOLINE_TEMPLATE(FILE)		\
+/* APPLE LOCAL begin v7 support. Merge from mainline */
+/* The Thumb-2 trampoline is similar to the ARM implementation.
+   Unlike 16-bit Thumb, we enter the stub in Thumb mode.  */
+#define THUMB2_TRAMPOLINE_TEMPLATE(FILE)      \
+{               \
+  asm_fprintf (FILE, "\tldr.w\t%r, [%r, #4]\n",     \
+         STATIC_CHAIN_REGNUM, PC_REGNUM);     \
+  asm_fprintf (FILE, "\tldr.w\t%r, [%r, #4]\n",     \
+         PC_REGNUM, PC_REGNUM);       \
+  assemble_aligned_integer (UNITS_PER_WORD, const0_rtx);  \
+  assemble_aligned_integer (UNITS_PER_WORD, const0_rtx);  \
+}
+ 
+#define THUMB1_TRAMPOLINE_TEMPLATE(FILE)  \
 {						\
-  fprintf (FILE, "\t.code 32\n");		\
+  ASM_OUTPUT_ALIGN(FILE, 2);      \
+  fprintf (FILE, "\t.code\t16\n");    \
   fprintf (FILE, ".Ltrampoline_start:\n");	\
-  asm_fprintf (FILE, "\tldr\t%r, [%r, #8]\n",	\
-	       STATIC_CHAIN_REGNUM, PC_REGNUM);	\
-  asm_fprintf (FILE, "\tldr\t%r, [%r, #8]\n",	\
-	       IP_REGNUM, PC_REGNUM);		\
-  asm_fprintf (FILE, "\torr\t%r, %r, #1\n",     \
-	       IP_REGNUM, IP_REGNUM);     	\
-  asm_fprintf (FILE, "\tbx\t%r\n", IP_REGNUM);	\
-  /* APPLE LOCAL begin ARM MACH assembler */	\
-  fprintf (FILE, "\t" DOT_WORD "\t0\n");	\
-  fprintf (FILE, "\t" DOT_WORD "\t0\n");	\
-  /* APPLE LOCAL end ARM MACH assembler */	\
-  fprintf (FILE, "\t.code 16\n");		\
+  asm_fprintf (FILE, "\tpush\t{r0, r1}\n"); \
+  asm_fprintf (FILE, "\tldr\tr0, [%r, #8]\n", \
+         PC_REGNUM);      \
+  asm_fprintf (FILE, "\tmov\t%r, r0\n",   \
+         STATIC_CHAIN_REGNUM);    \
+  asm_fprintf (FILE, "\tldr\tr0, [%r, #8]\n", \
+         PC_REGNUM);      \
+  asm_fprintf (FILE, "\tstr\tr0, [%r, #4]\n", \
+         SP_REGNUM);      \
+  asm_fprintf (FILE, "\tpop\t{r0, %r}\n", \
+         PC_REGNUM);      \
+  assemble_aligned_integer (UNITS_PER_WORD, const0_rtx);  \
+  assemble_aligned_integer (UNITS_PER_WORD, const0_rtx);  \
 }
 
 #define TRAMPOLINE_TEMPLATE(FILE)		\
   if (TARGET_ARM)				\
     ARM_TRAMPOLINE_TEMPLATE (FILE)		\
+  else if (TARGET_THUMB2)     \
+    THUMB2_TRAMPOLINE_TEMPLATE (FILE)   \
   else						\
-    THUMB_TRAMPOLINE_TEMPLATE (FILE)
+    THUMB1_TRAMPOLINE_TEMPLATE (FILE)
+
+/* Thumb trampolines should be entered in thumb mode, so set the bottom bit
+   of the address.  */
+#define TRAMPOLINE_ADJUST_ADDRESS(ADDR) do            \
+{                     \
+  if (TARGET_THUMB)                 \
+    (ADDR) = expand_simple_binop (Pmode, IOR, (ADDR), GEN_INT(1),     \
+          gen_reg_rtx (Pmode), 0, OPTAB_LIB_WIDEN); \
+} while(0)
 
+
+/* APPLE LOCAL end v7 support. Merge from mainline */
 /* Length in units of the trampoline for entering a nested function.  */
-#define TRAMPOLINE_SIZE  (TARGET_ARM ? 16 : 24)
+/* APPLE LOCAL v7 support. Merge from Codesourcery */
+#define TRAMPOLINE_SIZE  (TARGET_32BIT ? 16 : 20)
 
 /* Alignment required for a trampoline in bits.  */
 #define TRAMPOLINE_ALIGNMENT  32
@@ -1784,32 +2014,36 @@
    FNADDR is an RTX for the address of the function's pure code.
    CXT is an RTX for the static chain value for the function.  */
 #ifndef INITIALIZE_TRAMPOLINE
+/* APPLE LOCAL begin v7 support. Merge from mainline */
 #define INITIALIZE_TRAMPOLINE(TRAMP, FNADDR, CXT)			\
 {									\
   emit_move_insn (gen_rtx_MEM (SImode,					\
 			       plus_constant (TRAMP,			\
-					      TARGET_ARM ? 8 : 16)),	\
+					      TARGET_32BIT ? 8 : 12)),	\
 		  CXT);							\
   emit_move_insn (gen_rtx_MEM (SImode,					\
 			       plus_constant (TRAMP,			\
-					      TARGET_ARM ? 12 : 20)),	\
+					      TARGET_32BIT ? 12 : 16)),	\
 		  FNADDR);						\
   emit_library_call (gen_rtx_SYMBOL_REF (Pmode, "__clear_cache"),	\
 		     0, VOIDmode, 2, TRAMP, Pmode,			\
 		     plus_constant (TRAMP, TRAMPOLINE_SIZE), Pmode);	\
 }
+/* APPLE LOCAL end v7 support. Merge from mainline */
 #endif
 
 
 /* Addressing modes, and classification of registers for them.  */
 #define HAVE_POST_INCREMENT   1
-#define HAVE_PRE_INCREMENT    TARGET_ARM
-#define HAVE_POST_DECREMENT   TARGET_ARM
-#define HAVE_PRE_DECREMENT    TARGET_ARM
-#define HAVE_PRE_MODIFY_DISP  TARGET_ARM
-#define HAVE_POST_MODIFY_DISP TARGET_ARM
-#define HAVE_PRE_MODIFY_REG   TARGET_ARM
-#define HAVE_POST_MODIFY_REG  TARGET_ARM
+/* APPLE LOCAL begin v7 support. Merge from mainline */
+#define HAVE_PRE_INCREMENT    TARGET_32BIT
+#define HAVE_POST_DECREMENT   TARGET_32BIT
+#define HAVE_PRE_DECREMENT    TARGET_32BIT
+#define HAVE_PRE_MODIFY_DISP  TARGET_32BIT
+#define HAVE_POST_MODIFY_DISP TARGET_32BIT
+#define HAVE_PRE_MODIFY_REG   TARGET_32BIT
+#define HAVE_POST_MODIFY_REG  TARGET_32BIT
+/* APPLE LOCAL end v7 support. Merge from mainline */
 
 /* Macros to check register numbers against specific register classes.  */
 
@@ -1821,21 +2055,23 @@
 #define TEST_REGNO(R, TEST, VALUE) \
   ((R TEST VALUE) || ((unsigned) reg_renumber[R] TEST VALUE))
 
-/*   On the ARM, don't allow the pc to be used.  */
+/* APPLE LOCAL begin v7 support. Merge from mainline */
+/* Don't allow the pc to be used.  */
 #define ARM_REGNO_OK_FOR_BASE_P(REGNO)			\
   (TEST_REGNO (REGNO, <, PC_REGNUM)			\
    || TEST_REGNO (REGNO, ==, FRAME_POINTER_REGNUM)	\
    || TEST_REGNO (REGNO, ==, ARG_POINTER_REGNUM))
 
-#define THUMB_REGNO_MODE_OK_FOR_BASE_P(REGNO, MODE)		\
+#define THUMB1_REGNO_MODE_OK_FOR_BASE_P(REGNO, MODE)		\
   (TEST_REGNO (REGNO, <=, LAST_LO_REGNUM)			\
    || (GET_MODE_SIZE (MODE) >= 4				\
        && TEST_REGNO (REGNO, ==, STACK_POINTER_REGNUM)))
 
 #define REGNO_MODE_OK_FOR_BASE_P(REGNO, MODE)		\
-  (TARGET_THUMB						\
-   ? THUMB_REGNO_MODE_OK_FOR_BASE_P (REGNO, MODE)	\
+  (TARGET_THUMB1					\
+   ? THUMB1_REGNO_MODE_OK_FOR_BASE_P (REGNO, MODE)	\
    : ARM_REGNO_OK_FOR_BASE_P (REGNO))
+/* APPLE LOCAL end v7 support. Merge from mainline */
 
 /* Nonzero if X can be the base register in a reg+reg addressing mode.
    For Thumb, we can not use SP + reg, so reject SP.  */
@@ -1861,6 +2097,8 @@
 
 #else
 
+/* APPLE LOCAL v7 support. Merge from mainline */
+/* ??? Should the TARGET_ARM here also apply to thumb2?  */
 #define CONSTANT_ADDRESS_P(X)  			\
   (GET_CODE (X) == SYMBOL_REF 			\
    && (CONSTANT_POOL_ADDRESS_P (X)		\
@@ -1884,10 +2122,12 @@
   || CONSTANT_ADDRESS_P (X)		\
   || flag_pic)
 
+/* APPLE LOCAL begin v7 support. Merge from mainline */
 #define LEGITIMATE_CONSTANT_P(X)			\
   (!arm_tls_referenced_p (X)				\
-   && (TARGET_ARM ? ARM_LEGITIMATE_CONSTANT_P (X)	\
-		  : THUMB_LEGITIMATE_CONSTANT_P (X)))
+   && (TARGET_32BIT ? ARM_LEGITIMATE_CONSTANT_P (X)	\
+		    : THUMB_LEGITIMATE_CONSTANT_P (X)))
+/* APPLE LOCAL end v7 support. Merge from mainline */
 
 /* APPLE LOCAL begin ARM longcall */
 #define SYMBOL_SHORT_CALL ((SYMBOL_FLAG_MACH_DEP) << 3)
@@ -1943,6 +2183,13 @@
 #define ASM_OUTPUT_LABELREF(FILE, NAME)		\
    arm_asm_output_labelref (FILE, NAME)
 
+/* APPLE LOCAL begin v7 support. Merge from mainline */
+/* Output IT instructions for conditionally executed Thumb-2 instructions.  */
+#define ASM_OUTPUT_OPCODE(STREAM, PTR)	\
+  if (TARGET_THUMB2)			\
+    thumb2_asm_output_opcode (STREAM);
+
+/* APPLE LOCAL end v7 support. Merge from mainline */
 /* The EABI specifies that constructors should go in .init_array.
    Other targets use .ctors for compatibility.  */
 #ifndef ARM_EABI_CTORS_SECTION_OP
@@ -2015,12 +2262,15 @@
 #define ARM_EABI_UNWIND_TABLES 0
 #endif
 
+/* APPLE LOCAL begin v7 support. Merge from mainline */
 /* The macros REG_OK_FOR..._P assume that the arg is a REG rtx
    and check its validity for a certain class.
    We have two alternate definitions for each of them.
    The usual definition accepts all pseudo regs; the other rejects
    them unless they have been allocated suitable hard regs.
-   The symbol REG_OK_STRICT causes the latter definition to be used.  */
+   The symbol REG_OK_STRICT causes the latter definition to be used.
+   Thumb-2 has the same restrictions as ARM.  */
+/* APPLE LOCAL end v7 support. Merge from mainline */
 #ifndef REG_OK_STRICT
 
 #define ARM_REG_OK_FOR_BASE_P(X)		\
@@ -2029,7 +2279,8 @@
    || REGNO (X) == FRAME_POINTER_REGNUM		\
    || REGNO (X) == ARG_POINTER_REGNUM)
 
-#define THUMB_REG_MODE_OK_FOR_BASE_P(X, MODE)	\
+/* APPLE LOCAL v7 support. Merge from mainline */
+#define THUMB1_REG_MODE_OK_FOR_BASE_P(X, MODE)	\
   (REGNO (X) <= LAST_LO_REGNUM			\
    || REGNO (X) >= FIRST_PSEUDO_REGISTER	\
    || (GET_MODE_SIZE (MODE) >= 4		\
@@ -2044,8 +2295,10 @@
 #define ARM_REG_OK_FOR_BASE_P(X) 		\
   ARM_REGNO_OK_FOR_BASE_P (REGNO (X))
 
-#define THUMB_REG_MODE_OK_FOR_BASE_P(X, MODE)	\
-  THUMB_REGNO_MODE_OK_FOR_BASE_P (REGNO (X), MODE)
+/* APPLE LOCAL begin v7 support. Merge from mainline */
+#define THUMB1_REG_MODE_OK_FOR_BASE_P(X, MODE)	\
+  THUMB1_REGNO_MODE_OK_FOR_BASE_P (REGNO (X), MODE)
+/* APPLE LOCAL end v7 support. Merge from mainline */
 
 #define REG_STRICT_P 1
 
@@ -2053,24 +2306,27 @@
 
 /* Now define some helpers in terms of the above.  */
 
+/* APPLE LOCAL begin v7 support. Merge from mainline */
 #define REG_MODE_OK_FOR_BASE_P(X, MODE)		\
-  (TARGET_THUMB					\
-   ? THUMB_REG_MODE_OK_FOR_BASE_P (X, MODE)	\
+  (TARGET_THUMB1				\
+   ? THUMB1_REG_MODE_OK_FOR_BASE_P (X, MODE)	\
    : ARM_REG_OK_FOR_BASE_P (X))
 
 #define ARM_REG_OK_FOR_INDEX_P(X) ARM_REG_OK_FOR_BASE_P (X)
 
-/* For Thumb, a valid index register is anything that can be used in
+/* For 16-bit Thumb, a valid index register is anything that can be used in
    a byte load instruction.  */
-#define THUMB_REG_OK_FOR_INDEX_P(X) THUMB_REG_MODE_OK_FOR_BASE_P (X, QImode)
+#define THUMB1_REG_OK_FOR_INDEX_P(X) \
+  THUMB1_REG_MODE_OK_FOR_BASE_P (X, QImode)
 
 /* Nonzero if X is a hard reg that can be used as an index
    or if it is a pseudo reg.  On the Thumb, the stack pointer
    is not suitable.  */
 #define REG_OK_FOR_INDEX_P(X)			\
-  (TARGET_THUMB					\
-   ? THUMB_REG_OK_FOR_INDEX_P (X)		\
+  (TARGET_THUMB1				\
+   ? THUMB1_REG_OK_FOR_INDEX_P (X)		\
    : ARM_REG_OK_FOR_INDEX_P (X))
+/* APPLE LOCAL end v7 support. Merge from mainline */
 
 /* Nonzero if X can be the base register in a reg+reg addressing mode.
    For Thumb, we can not use SP + reg, so reject SP.  */
@@ -2094,17 +2350,27 @@
       goto WIN;							\
   }
 
-#define THUMB_GO_IF_LEGITIMATE_ADDRESS(MODE,X,WIN)		\
+/* APPLE LOCAL begin v7 support. Merge from mainline */
+#define THUMB2_GO_IF_LEGITIMATE_ADDRESS(MODE,X,WIN)		\
+  {								\
+    if (thumb2_legitimate_address_p (MODE, X, REG_STRICT_P))	\
+      goto WIN;							\
+  }
+
+#define THUMB1_GO_IF_LEGITIMATE_ADDRESS(MODE,X,WIN)		\
   {								\
-    if (thumb_legitimate_address_p (MODE, X, REG_STRICT_P))	\
+    if (thumb1_legitimate_address_p (MODE, X, REG_STRICT_P))	\
       goto WIN;							\
   }
 
 #define GO_IF_LEGITIMATE_ADDRESS(MODE, X, WIN)				\
   if (TARGET_ARM)							\
     ARM_GO_IF_LEGITIMATE_ADDRESS (MODE, X, WIN)  			\
-  else /* if (TARGET_THUMB) */						\
-    THUMB_GO_IF_LEGITIMATE_ADDRESS (MODE, X, WIN)
+  else if (TARGET_THUMB2)						\
+    THUMB2_GO_IF_LEGITIMATE_ADDRESS (MODE, X, WIN)  			\
+  else /* if (TARGET_THUMB1) */						\
+    THUMB1_GO_IF_LEGITIMATE_ADDRESS (MODE, X, WIN)
+/* APPLE LOCAL end v7 support. Merge from mainline */
 
 
 /* Try machine-dependent ways of modifying an illegitimate address
@@ -2114,7 +2380,13 @@
   X = arm_legitimize_address (X, OLDX, MODE);		\
 } while (0)
 
-#define THUMB_LEGITIMIZE_ADDRESS(X, OLDX, MODE, WIN)	\
+/* APPLE LOCAL begin v7 support. Merge from mainline */
+/* ??? Implement LEGITIMIZE_ADDRESS for thumb2.  */
+#define THUMB2_LEGITIMIZE_ADDRESS(X, OLDX, MODE, WIN)	\
+do {							\
+} while (0)
+
+#define THUMB1_LEGITIMIZE_ADDRESS(X, OLDX, MODE, WIN)	\
 do {							\
   X = thumb_legitimize_address (X, OLDX, MODE);		\
 } while (0)
@@ -2123,12 +2395,15 @@
 do {							\
   if (TARGET_ARM)					\
     ARM_LEGITIMIZE_ADDRESS (X, OLDX, MODE, WIN);	\
+  else if (TARGET_THUMB2)				\
+    THUMB2_LEGITIMIZE_ADDRESS (X, OLDX, MODE, WIN);	\
   else							\
-    THUMB_LEGITIMIZE_ADDRESS (X, OLDX, MODE, WIN);	\
+    THUMB1_LEGITIMIZE_ADDRESS (X, OLDX, MODE, WIN);	\
 							\
   if (memory_address_p (MODE, X))			\
     goto WIN;						\
 } while (0)
+/* APPLE LOCAL end v7 support. Merge from mainline */
 
 /* Go to LABEL if ADDR (a legitimate address expression)
    has an effect that depends on the machine mode it is used for.  */
@@ -2141,7 +2416,8 @@
 
 /* Nothing helpful to do for the Thumb */
 #define GO_IF_MODE_DEPENDENT_ADDRESS(ADDR, LABEL)	\
-  if (TARGET_ARM)					\
+/* APPLE LOCAL v7 support. Merge from mainline */   \
+  if (TARGET_32BIT)					\
     ARM_GO_IF_MODE_DEPENDENT_ADDRESS (ADDR, LABEL)
 
 
@@ -2150,11 +2426,15 @@
 #define CASE_VECTOR_MODE Pmode
 
 /* APPLE LOCAL begin ARM compact switch tables */
-#define CASE_VECTOR_PC_RELATIVE (TARGET_THUMB)
+/* APPLE LOCAL begin v7 support. Merge from mainline */
+#define CASE_VECTOR_PC_RELATIVE (TARGET_THUMB || TARGET_THUMB2)
 
 #define CASE_VECTOR_SHORTEN_MODE(MIN_OFFSET, MAX_OFFSET, BODY)	\
-(TARGET_ARM ? SImode						\
+((TARGET_ARM ||                                                 \
+ (TARGET_THUMB2 && (MIN_OFFSET < 0 || MAX_OFFSET >= 0x2000))) ? SImode \
+ : TARGET_THUMB2 ? ((MAX_OFFSET >= 0x200) ? HImode : QImode)    \
  : !TARGET_COMPACT_SWITCH_TABLES ? SImode			\
+ /* TARGET_THUMB1 */                                            \
  : (MIN_OFFSET) >= -256 && (MAX_OFFSET) <= 254			\
  ? (ADDR_DIFF_VEC_FLAGS (BODY).offset_unsigned = 0, QImode)	\
  : (MIN_OFFSET) >= 0 && (MAX_OFFSET) <= 510			\
@@ -2162,6 +2442,8 @@
  : (MIN_OFFSET) >= -65536 && (MAX_OFFSET) <= 65534		\
  ? (ADDR_DIFF_VEC_FLAGS (BODY).offset_unsigned = 0, HImode)	\
  : SImode)
+/* APPLE LOCAL end v7 support. Merge from mainline */
+
 
 /* This macro uses variable "file" that exists at 
    the single place it is invoked, in final.c.  INSN_ADDRESSES 
@@ -2172,67 +2454,8 @@
    count as first element, count does not include the last element.  
    All that is dealt with here. */
    
-
 #define ASM_OUTPUT_ADDR_DIFF_VEC(LABEL, BODY)				\
-do {									\
-  int idx, size = GET_MODE_SIZE (GET_MODE (BODY));			\
-  int pack = (TARGET_THUMB) ? 2 : 4;					\
-  /* APPLE LOCAL 5837498 assembler expr for (L1-L2)/2 */		\
-  /* removed unused variable "base_addr" */				\
-  int base_label_no = CODE_LABEL_NUMBER (LABEL);			\
-  int vlen = XVECLEN (BODY, 1); /*includes trailing default */		\
-  const char* directive;						\
-  if (GET_MODE (BODY) == QImode)					\
-      directive = ".byte";						\
-  else if (GET_MODE (BODY) == HImode)					\
-      directive = ".short";						\
-  else									\
-    {									\
-      pack = 1;		    						\
-      directive = ".long";						\
-    }									\
-  /* Alignment of table was handled by aligning its label,		\
-     in final_scan_insn. */						\
-  targetm.asm_out.internal_label (file, "L", base_label_no);		\
-  /* Default is not included in output count */				\
-  if (TARGET_COMPACT_SWITCH_TABLES)					\
-    asm_fprintf (file, "\t%s\t%d @ size\n", directive, vlen - 1);	\
-  for (idx = 0; idx < vlen; idx++)					\
-    {									\
-      rtx target_label = XEXP (XVECEXP (BODY, 1, idx), 0);		\
-      /* APPLE LOCAL begin 5837498 assembler expr for (L1-L2)/2 */	\
-      if (GET_MODE (BODY) != SImode)					\
-        {								\
-	  /* ARM mode is always SImode bodies */			\
-	  gcc_assert (!TARGET_ARM);					\
-	  /* APPLE LOCAL 5903944 */					\
-	  asm_fprintf (file, "\t%s\t(L%d-L%d)/%d\n",			\
-	    directive,							\
-	    CODE_LABEL_NUMBER (target_label), base_label_no, pack);	\
-        }								\
-      /* APPLE LOCAL end 5837498 assembler expr for (L1-L2)/2 */	\
-      else if (!TARGET_THUMB)						\
-	asm_fprintf (file, "\tb\tL%d\n",				\
-			CODE_LABEL_NUMBER (target_label));		\
-      else if (TARGET_COMPACT_SWITCH_TABLES || flag_pic)		\
-	/* Let the assembler do the computation here; one case that	\
-	   uses is this is when there are asm's, which makes		\
-	   compile time computations unreliable. */			\
-	asm_fprintf (file, "\t%s\tL%d-L%d\n",				\
-	  directive,							\
-	  CODE_LABEL_NUMBER (target_label), base_label_no);		\
-      else								\
-	asm_fprintf (file, "\t%s\tL%d\n", directive,			\
-		     CODE_LABEL_NUMBER (target_label));			\
-    }									\
-  /* Pad to instruction boundary. */					\
-  vlen = (vlen + 1/*count*/) * size;					\
-  while (vlen % pack != 0)						\
-    {									\
-      asm_fprintf (file, "\t%s\t0 @ pad\n", directive);			\
-      vlen += size;							\
-    }									\
-} while (0)
+  arm_asm_output_addr_diff_vec (file, LABEL, BODY)
 
 /* This is identical to the default code when ASM_OUTPUT_ADDR_VEC is
    not defined; however, final_scan_insn() will not invoke that
@@ -2331,15 +2554,17 @@
    || (X) == arg_pointer_rtx)
 
 /* Moves to and from memory are quite expensive */
+/* APPLE LOCAL begin v7 support. Merge from mainline */
 #define MEMORY_MOVE_COST(M, CLASS, IN)			\
-  (TARGET_ARM ? 10 :					\
+  (TARGET_32BIT ? 10 :					\
    ((GET_MODE_SIZE (M) < 4 ? 8 : 2 * GET_MODE_SIZE (M))	\
     * (CLASS == LO_REGS ? 1 : 2)))
 
 /* Try to generate sequences that don't involve branches, we can then use
    conditional instructions */
 #define BRANCH_COST \
-  (TARGET_ARM ? 4 : (optimize > 1 ? 1 : 0))
+  (TARGET_32BIT ? 4 : (optimize > 0 ? 2 : 0))
+/* APPLE LOCAL end v7 support. Merge from mainline */
 
 /* Position Independent Code.  */
 /* We decide which register to use based on the compilation options and
@@ -2430,7 +2655,10 @@
 #define CLZ_DEFINED_VALUE_AT_ZERO(MODE, VALUE)  ((VALUE) = 32, 1)
 
 #undef  ASM_APP_OFF
-#define ASM_APP_OFF (TARGET_THUMB ? "\t.code\t16\n" : "")
+/* APPLE LOCAL begin v7 support. Merge from mainline */
+#define ASM_APP_OFF (TARGET_THUMB1 ? "\t.code\t16\n" : \
+		     TARGET_THUMB2 ? "\t.thumb\n" : "")
+/* APPLE LOCAL end v7 support. Merge from mainline */
 
 /* Output a push or a pop instruction (only used when profiling).  */
 #define ASM_OUTPUT_REG_PUSH(STREAM, REGNO)		\
@@ -2454,16 +2682,29 @@
 	asm_fprintf (STREAM, "\tpop {%r}\n", REGNO);	\
     } while (0)
 
+/* APPLE LOCAL begin v7 support. Merge from mainline */
+/* Jump table alignment is explicit in ASM_OUTPUT_CASE_LABEL.  */
+#define ADDR_VEC_ALIGN(JUMPTABLE) 0
+
 /* This is how to output a label which precedes a jumptable.  Since
    Thumb instructions are 2 bytes, we may need explicit alignment here.  */
 #undef  ASM_OUTPUT_CASE_LABEL
-#define ASM_OUTPUT_CASE_LABEL(FILE, PREFIX, NUM, JUMPTABLE)	\
-  do								\
-    {								\
-      if (TARGET_THUMB)						\
-        ASM_OUTPUT_ALIGN (FILE, 2);				\
-      (*targetm.asm_out.internal_label) (FILE, PREFIX, NUM);	\
-    }								\
+#define ASM_OUTPUT_CASE_LABEL(FILE, PREFIX, NUM, JUMPTABLE)   \
+  do                  \
+    {                 \
+      if (TARGET_THUMB && GET_MODE (PATTERN (JUMPTABLE)) == SImode) \
+        ASM_OUTPUT_ALIGN (FILE, 2);         \
+      (*targetm.asm_out.internal_label) (FILE, PREFIX, NUM);    \
+    }                 \
+  while (0)
+
+/* Make sure subsequent insns are aligned after a TBB.  */
+#define ASM_OUTPUT_CASE_END(FILE, NUM, JUMPTABLE) \
+  do              \
+    {             \
+      if (GET_MODE (PATTERN (JUMPTABLE)) == QImode) \
+        ASM_OUTPUT_ALIGN (FILE, 1);     \
+    }             \
   while (0)
 
 #define ARM_DECLARE_FUNCTION_NAME(STREAM, NAME, DECL) 	\
@@ -2471,13 +2712,17 @@
     {							\
       if (TARGET_THUMB) 				\
         {						\
-          if (is_called_in_ARM_mode (DECL)      \
-			  || current_function_is_thunk)		\
+          if (is_called_in_ARM_mode (DECL)		\
+	      || (TARGET_THUMB1 && !TARGET_THUMB1_ONLY	\
+		  && current_function_is_thunk))	\
             fprintf (STREAM, "\t.code 32\n") ;		\
           else						\
 /* APPLE LOCAL begin ARM thumb_func <symbol_name> */	\
 	    {						\
-	      fputs ("\t.code 16\n", STREAM);		\
+        if (TARGET_THUMB1) \
+  	      fputs ("\t.code 16\n", STREAM);		\
+        else                                            \
+          fputs ("\t.thumb\n", STREAM);     \
 	      fputs ("\t.thumb_func ", STREAM);		\
 	      if (TARGET_MACHO)				\
 		assemble_name (STREAM, (char *) NAME);	\
@@ -2489,6 +2734,7 @@
         arm_poke_function_name (STREAM, (char *) NAME);	\
     }							\
   while (0)
+/* APPLE LOCAL end v7 support. Merge from mainline */
 
 /* For aliases of functions we use .thumb_set instead.  */
 #define ASM_OUTPUT_DEF_FROM_DECLS(FILE, DECL1, DECL2)		\
@@ -2525,20 +2771,26 @@
     }
 #endif
 
+/* APPLE LOCAL begin v7 support. Merge from mainline */
 /* Only perform branch elimination (by making instructions conditional) if
-   we're optimizing.  Otherwise it's of no use anyway.  */
+   we're optimizing.  For Thumb-2, check if any IT instructions need
+   outputting.  */
 #define FINAL_PRESCAN_INSN(INSN, OPVEC, NOPERANDS)	\
   if (TARGET_ARM && optimize)				\
     arm_final_prescan_insn (INSN);			\
-  else if (TARGET_THUMB)				\
-    thumb_final_prescan_insn (INSN)
+  else if (TARGET_THUMB2)       \
+    thumb2_final_prescan_insn (INSN);     \
+  else if (TARGET_THUMB1)       \
+    thumb1_final_prescan_insn (INSN)
 
 #define PRINT_OPERAND_PUNCT_VALID_P(CODE)	\
-  (CODE == '@' || CODE == '|'			\
-   /* APPLE LOCAL ARM MACH assembler */		\
-   || CODE == '.'				\
-   || (TARGET_ARM   && (CODE == '?'))		\
+  (CODE == '@' || CODE == '|'	|| CODE == '.'  \
+   || CODE == '~' || CODE == '#' \
+   || CODE == '(' || CODE == ')'    \
+   || (TARGET_32BIT  && (CODE == '?'))		\
+   || (TARGET_THUMB2 && (CODE == '!'))    \
    || (TARGET_THUMB && (CODE == '_')))
+/* APPLE LOCAL end v7 support. Merge from mainline */
 
 /* Output an operand of an instruction.  */
 #define PRINT_OPERAND(STREAM, X, CODE)  \
@@ -2669,11 +2921,13 @@
     output_addr_const (STREAM, X);			\
 }
 
+/* APPLE LOCAL begin v7 support. Merge from mainline */
 #define PRINT_OPERAND_ADDRESS(STREAM, X)	\
-  if (TARGET_ARM)				\
+  if (TARGET_32BIT)				\
     ARM_PRINT_OPERAND_ADDRESS (STREAM, X)	\
   else						\
     THUMB_PRINT_OPERAND_ADDRESS (STREAM, X)
+/* APPLE LOCAL end v7 support. Merge from mainline */
 
 #define OUTPUT_ADDR_CONST_EXTRA(file, x, fail)		\
   if (arm_output_addr_const_extra (file, x) == FALSE)	\
@@ -2794,6 +3048,11 @@
   } while (0)
 /* APPLE LOCAL end 6186914 */
 
+/* APPLE LOCAL begin v7 support. Merge from Codesourcery */
+/* Neon defines builtins from ARM_BUILTIN_MAX upwards, though they don't have
+   symbolic names defined here (which would require too much duplication).
+   FIXME?  */
+/* APPLE LOCAL end v7 support. Merge from Codesourcery */
 enum arm_builtins
 {
   ARM_BUILTIN_GETWCX,
@@ -2958,7 +3217,11 @@
 
   ARM_BUILTIN_THREAD_POINTER,
 
-  ARM_BUILTIN_MAX
+/* APPLE LOCAL begin v7 support. Merge from Codesourcery */
+  ARM_BUILTIN_NEON_BASE,
+
+  ARM_BUILTIN_MAX = ARM_BUILTIN_NEON_BASE  /* FIXME: Wrong!  */
+/* APPLE LOCAL end v7 support. Merge from Codesourcery */
 };
 
 /* LLVM LOCAL begin */
@@ -3016,6 +3279,8 @@
     case arm1156t2fs:   F.setCPU("arm1156t2f-s"); break; \
     case cortexa8:      F.setCPU("cortex-a8"); break; \
     case cortexa9:      F.setCPU("cortex-a9"); break; \
+    case cortexr4:      F.setCPU("cortex-r4"); break; \
+    case cortexm3:      F.setCPU("cortex-m3"); break; \
     default:						\
       F.setCPU("arm7tdmi"); \
       break; \

Modified: llvm-gcc-4.2/trunk/gcc/config/arm/arm.md
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/gcc/config/arm/arm.md?rev=76781&r1=76780&r2=76781&view=diff

==============================================================================
--- llvm-gcc-4.2/trunk/gcc/config/arm/arm.md (original)
+++ llvm-gcc-4.2/trunk/gcc/config/arm/arm.md Wed Jul 22 15:36:27 2009
@@ -51,6 +51,8 @@
 
 ;; UNSPEC Usage:
 ;; Note: sin and cos are no-longer used.
+;; APPLE LOCAL v7 support. Merge from Codesourcery
+;; Unspec constants for Neon are defined in neon.md.
 
 (define_constants
   [(UNSPEC_SIN       0)	; `sin' operation (MODE_FLOAT):
@@ -93,8 +95,12 @@
    (UNSPEC_TLS      20) ; A symbol that has been treated properly for TLS usage.
    (UNSPEC_PIC_LABEL 21) ; A label used for PIC access that does not appear in the
                          ; instruction stream.
+;; APPLE LOCAL begin v7 support. Merge from Codesourcery
+   (UNSPEC_STACK_ALIGN 22) ; Doubleword aligned stack pointer.  Used to
+         ; generate correct unwind information.
    ; APPLE LOCAL ARM setjmp/longjmp interworking
-   (UNSPEC_JMP_XCHG 22) ; Indirect jump with possible change in ARM/Thumb state.
+   (UNSPEC_JMP_XCHG 23) ; Indirect jump with possible change in ARM/Thumb state.
+;; APPLE LOCAL end v7 support. Merge from Codesourcery
    ; APPLE LOCAL ARM UXTB support
    (UNSPEC_UXTB16   27) ; The UXTB16 instruction (ARM only)
   ]
@@ -121,12 +127,16 @@
 			;   a 32-bit object.
    (VUNSPEC_POOL_8   7) ; `pool-entry(8)'.  An entry in the constant pool for
 			;   a 64-bit object.
-   (VUNSPEC_TMRC     8) ; Used by the iWMMXt TMRC instruction.
-   (VUNSPEC_TMCR     9) ; Used by the iWMMXt TMCR instruction.
-   (VUNSPEC_ALIGN8   10) ; 8-byte alignment version of VUNSPEC_ALIGN
-   (VUNSPEC_WCMP_EQ  11) ; Used by the iWMMXt WCMPEQ instructions
-   (VUNSPEC_WCMP_GTU 12) ; Used by the iWMMXt WCMPGTU instructions
-   (VUNSPEC_WCMP_GT  13) ; Used by the iwMMXT WCMPGT instructions
+;; APPLE LOCAL begin v7 support. Merge from Codesourcery
+   (VUNSPEC_POOL_16  8) ; `pool-entry(16)'.  An entry in the constant pool for
+			;   a 128-bit object.
+   (VUNSPEC_TMRC     9) ; Used by the iWMMXt TMRC instruction.
+   (VUNSPEC_TMCR     10) ; Used by the iWMMXt TMCR instruction.
+   (VUNSPEC_ALIGN8   11) ; 8-byte alignment version of VUNSPEC_ALIGN
+   (VUNSPEC_WCMP_EQ  12) ; Used by the iWMMXt WCMPEQ instructions
+   (VUNSPEC_WCMP_GTU 13) ; Used by the iWMMXt WCMPGTU instructions
+   (VUNSPEC_WCMP_GT  14) ; Used by the iwMMXT WCMPGT instructions
+;; APPLE LOCAL end v7 support. Merge from Codesourcery
    (VUNSPEC_EH_RETURN 20); Use to override the return address for exception
 			 ; handling.
 			    ; APPLE LOCAL begin ARM strings in code
@@ -185,7 +195,8 @@
 ;; scheduling information.
 
 (define_attr "insn"
-        "smulxy,smlaxy,smlalxy,smulwy,smlawx,mul,muls,mla,mlas,umull,umulls,umlal,umlals,smull,smulls,smlal,smlals,smlawy,smuad,smuadx,smlad,smladx,smusd,smusdx,smlsd,smlsdx,smmul,smmulr,other"
+;; APPLE LOCAL v7 support. Merge from Codesourcery
+        "mov,mvn,smulxy,smlaxy,smlalxy,smulwy,smlawx,mul,muls,mla,mlas,umull,umulls,umlal,umlals,smull,smulls,smlal,smlals,smlawy,smuad,smuadx,smlad,smladx,smusd,smusdx,smlsd,smlsdx,smmul,smmulr,smmla,smmls,umaal,smlald,smlsld,clz,mrs,msr,xtab,sdiv,udiv,other"
         (const_string "other"))
 
 ; TYPE attribute is used to detect floating point instructions which, if
@@ -194,6 +205,10 @@
 ; scheduling of writes.
 
 ; Classification of each insn
+;; APPLE LOCAL begin v7 support. Merge from Codesourcery
+; Note: vfp.md has different meanings for some of these, and some further
+; types as well.  See that file for details.
+;; APPLE LOCAL end v7 support. Merge from Codesourcery
 ; alu		any alu  instruction that doesn't hit memory or fp
 ;		regs or have a shifted source operand
 ; alu_shift	any data instruction that doesn't hit memory or fp
@@ -237,7 +252,8 @@
 ; mav_dmult	Double multiplies (7 cycle)
 ;
 (define_attr "type"
-	"alu,alu_shift,alu_shift_reg,mult,block,float,fdivx,fdivd,fdivs,fmul,ffmul,farith,ffarith,f_flag,float_em,f_load,f_store,f_loads,f_loadd,f_stores,f_stored,f_mem_r,r_mem_f,f_2_r,r_2_f,f_cvt,branch,call,load_byte,load1,load2,load3,load4,store1,store2,store3,store4,mav_farith,mav_dmult" 
+;; APPLE LOCAL v7 support. Merge from Codesourcery
+	"alu,alu_shift,alu_shift_reg,mult,block,float,fdivx,fdivd,fdivs,fmul,ffmul,farith,ffarith,f_flag,float_em,f_load,f_store,f_loads,f_loadd,f_stores,f_stored,f_mem_r,r_mem_f,f_2_r,r_2_f,f_cvt,branch,call,load_byte,load1,load2,load3,load4,store1,store2,store3,store4,mav_farith,mav_dmult,fmuls,fmuld,fmacs,fmacd"
 	(if_then_else 
 	 (eq_attr "insn" "smulxy,smlaxy,smlalxy,smulwy,smlawx,mul,muls,mla,mlas,umull,umulls,umlal,umlals,smull,smulls,smlal,smlals")
 	 (const_string "mult")
@@ -305,6 +321,12 @@
 (define_attr "far_jump" "yes,no" (const_string "no"))
 
 
+;; APPLE LOCAL begin v7 support. Merge from mainline
+;; The number of machine instructions this pattern expands to.
+;; Used for Thumb-2 conditional execution.
+(define_attr "ce_count" "" (const_int 1))
+
+;; APPLE LOCAL end v7 support. Merge from mainline
 ;;---------------------------------------------------------------------------
 ;; Mode macros
 
@@ -329,14 +351,16 @@
 
 (define_attr "generic_sched" "yes,no"
   (const (if_then_else 
-          (eq_attr "tune" "arm926ejs,arm1020e,arm1026ejs,arm1136js,arm1136jfs") 
+;; APPLE LOCAL v7 support. Merge from Codesourcery
+          (eq_attr "tune" "arm926ejs,arm1020e,arm1026ejs,arm1136js,arm1136jfs,cortexa8,cortexr4") 
           (const_string "no")
           (const_string "yes"))))
 
 (define_attr "generic_vfp" "yes,no"
   (const (if_then_else
 	  (and (eq_attr "fpu" "vfp")
-	       (eq_attr "tune" "!arm1020e,arm1022e"))
+;; APPLE LOCAL v7 support. Merge from Codesourcery
+	       (eq_attr "tune" "!arm1020e,arm1022e,cortexa8"))
 	  (const_string "yes")
 	  (const_string "no"))))
 
@@ -345,6 +369,11 @@
 (include "arm1020e.md")
 (include "arm1026ejs.md")
 (include "arm1136jfs.md")
+;; APPLE LOCAL begin v7 support. Merge from Codesourcery
+(include "cortex-a8.md")
+(include "cortex-r4.md")
+(include "vfp11.md")
+;; APPLE LOCAL end v7 support. Merge from Codesourcery
 
 
 ;;---------------------------------------------------------------------------
@@ -377,7 +406,8 @@
       DONE;
     }
 
-  if (TARGET_THUMB)
+  /* APPLE LOCAL v7 support. Merge from mainline */
+  if (TARGET_THUMB1)
     {
       if (GET_CODE (operands[1]) != REG)
         operands[1] = force_reg (SImode, operands[1]);
@@ -398,13 +428,15 @@
 )
 ;; APPLE LOCAL end 5831562 long long constants
 
-(define_insn "*thumb_adddi3"
+;; APPLE LOCAL v7 support. Merge from mainline
+(define_insn "*thumb1_adddi3"
   [(set (match_operand:DI          0 "register_operand" "=l")
 	(plus:DI (match_operand:DI 1 "register_operand" "%0")
 		 (match_operand:DI 2 "register_operand" "l")))
    (clobber (reg:CC CC_REGNUM))
   ]
-  "TARGET_THUMB"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_THUMB1"
   "add\\t%Q0, %Q0, %Q2\;adc\\t%R0, %R0, %R2"
   [(set_attr "length" "4")]
 )
@@ -415,9 +447,11 @@
 	(plus:DI (match_operand:DI 1 "s_register_operand" "%0, 0, r, 0")
 		 (match_operand:DI 2 "arm_rhs64_operand" "r,  0, Dd,Dd")))
    (clobber (reg:CC CC_REGNUM))]
-  "TARGET_ARM && !(TARGET_HARD_FLOAT && TARGET_MAVERICK)"
+;; APPLE LOCAL begin v7 support. Merge from mainline
+  "TARGET_32BIT && !(TARGET_HARD_FLOAT && TARGET_MAVERICK)"
   "#"
-  "TARGET_ARM && reload_completed"
+  "TARGET_32BIT && reload_completed"
+;; APPLE LOCAL end v7 support. Merge from mainline
   [(parallel [(set (reg:CC_C CC_REGNUM)
 		   (compare:CC_C (plus:SI (match_dup 1) (match_dup 2))
 				 (match_dup 1)))
@@ -444,9 +478,11 @@
 		  (match_operand:SI 2 "s_register_operand" "r,r"))
 		 (match_operand:DI 1 "s_register_operand" "r,0")))
    (clobber (reg:CC CC_REGNUM))]
-  "TARGET_ARM && !(TARGET_HARD_FLOAT && TARGET_MAVERICK)"
+;; APPLE LOCAL begin v7 support. Merge from mainline
+  "TARGET_32BIT && !(TARGET_HARD_FLOAT && TARGET_MAVERICK)"
   "#"
-  "TARGET_ARM && reload_completed"
+  "TARGET_32BIT && reload_completed"
+;; APPLE LOCAL end v7 support. Merge from mainline
   [(parallel [(set (reg:CC_C CC_REGNUM)
 		   (compare:CC_C (plus:SI (match_dup 1) (match_dup 2))
 				 (match_dup 1)))
@@ -473,9 +509,11 @@
 		  (match_operand:SI 2 "s_register_operand" "r,r"))
 		 (match_operand:DI 1 "s_register_operand" "r,0")))
    (clobber (reg:CC CC_REGNUM))]
-  "TARGET_ARM && !(TARGET_HARD_FLOAT && TARGET_MAVERICK)"
+;; APPLE LOCAL begin v7 support. Merge from mainline
+  "TARGET_32BIT && !(TARGET_HARD_FLOAT && TARGET_MAVERICK)"
   "#"
-  "TARGET_ARM && reload_completed"
+  "TARGET_32BIT && reload_completed"
+;; APPLE LOCAL end v7 support. Merge from mainline
   [(parallel [(set (reg:CC_C CC_REGNUM)
 		   (compare:CC_C (plus:SI (match_dup 1) (match_dup 2))
 				 (match_dup 1)))
@@ -500,7 +538,8 @@
 		 (match_operand:SI 2 "reg_or_int_operand" "")))]
   "TARGET_EITHER"
   "
-  if (TARGET_ARM && GET_CODE (operands[2]) == CONST_INT)
+  /* APPLE LOCAL v7 support. Merge from mainline */
+  if (TARGET_32BIT && GET_CODE (operands[2]) == CONST_INT)
     {
       arm_split_constant (PLUS, SImode, NULL_RTX,
 	                  INTVAL (operands[2]), operands[0], operands[1],
@@ -517,7 +556,8 @@
    (set (match_operand:SI          0 "arm_general_register_operand" "")
 	(plus:SI (match_operand:SI 1 "arm_general_register_operand" "")
 		 (match_operand:SI 2 "const_int_operand"  "")))]
-  "TARGET_ARM &&
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT &&
    !(const_ok_for_arm (INTVAL (operands[2]))
      || const_ok_for_arm (-INTVAL (operands[2])))
     && const_ok_for_arm (~INTVAL (operands[2]))"
@@ -530,15 +570,17 @@
   [(set (match_operand:SI          0 "s_register_operand" "=r,r,r")
 	(plus:SI (match_operand:SI 1 "s_register_operand" "%r,r,r")
 		 (match_operand:SI 2 "reg_or_int_operand" "rI,L,?n")))]
-  "TARGET_ARM"
+;; APPLE LOCAL begin v7 support. Merge from mainline
+  "TARGET_32BIT"
   "@
    add%?\\t%0, %1, %2
    sub%?\\t%0, %1, #%n2
    #"
-  "TARGET_ARM &&
+  "TARGET_32BIT &&
    GET_CODE (operands[2]) == CONST_INT
    && !(const_ok_for_arm (INTVAL (operands[2]))
         || const_ok_for_arm (-INTVAL (operands[2])))"
+;; APPLE LOCAL end v7 support. Merge from mainline
   [(clobber (const_int 0))]
   "
   arm_split_constant (PLUS, SImode, curr_insn,
@@ -554,11 +596,13 @@
 ;; register.  Trying to reload it will always fail catastrophically,
 ;; so never allow those alternatives to match if reloading is needed.
 
-(define_insn "*thumb_addsi3"
+;; APPLE LOCAL v7 support. Merge from mainline
+(define_insn "*thumb1_addsi3"
   [(set (match_operand:SI          0 "register_operand" "=l,l,l,*r,*h,l,!k")
 	(plus:SI (match_operand:SI 1 "register_operand" "%0,0,l,*0,*0,!k,!k")
 		 (match_operand:SI 2 "nonmemory_operand" "I,J,lL,*h,*r,!M,!O")))]
-  "TARGET_THUMB"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_THUMB1"
   "*
    static const char * const asms[] = 
    {
@@ -579,6 +623,7 @@
   [(set_attr "length" "2")]
 )
 
+;; APPLE LOCAL begin v7 support. Merge from mainline
 ;; Reloading and elimination of the frame pointer can
 ;; sometimes cause this optimization to be missed.
 (define_peephole2
@@ -586,12 +631,13 @@
 	(match_operand:SI 1 "const_int_operand" ""))
    (set (match_dup 0)
 	(plus:SI (match_dup 0) (reg:SI SP_REGNUM)))]
-  "TARGET_THUMB
+  "TARGET_THUMB1
    && (unsigned HOST_WIDE_INT) (INTVAL (operands[1])) < 1024
    && (INTVAL (operands[1]) & 3) == 0"
   [(set (match_dup 0) (plus:SI (reg:SI SP_REGNUM) (match_dup 1)))]
   ""
 )
+;; APPLE LOCAL end v7 support. Merge from mainline
 
 ;; APPLE LOCAL begin ARM peephole
 ;; And sometimes greg will generate the same thing this way...
@@ -609,6 +655,8 @@
 )
 ;; APPLE LOCAL end ARM peephole
 
+;; APPLE LOCAL v7 support. Merge from mainline
+;; ??? Make Thumb-2 variants which prefer low regs
 (define_insn "*addsi3_compare0"
   [(set (reg:CC_NOOV CC_REGNUM)
 	(compare:CC_NOOV
@@ -617,10 +665,12 @@
 	 (const_int 0)))
    (set (match_operand:SI 0 "s_register_operand" "=r,r")
 	(plus:SI (match_dup 1) (match_dup 2)))]
-  "TARGET_ARM"
+;; APPLE LOCAL begin v7 support. Merge from mainline
+  "TARGET_32BIT"
   "@
-   add%?s\\t%0, %1, %2
-   sub%?s\\t%0, %1, #%n2"
+   add%.\\t%0, %1, %2
+   sub%.\\t%0, %1, #%n2"
+;; APPLE LOCAL end v7 support. Merge from mainline
   [(set_attr "conds" "set")]
 )
 
@@ -630,7 +680,8 @@
 	 (plus:SI (match_operand:SI 0 "s_register_operand" "r, r")
 		  (match_operand:SI 1 "arm_add_operand"    "rI,L"))
 	 (const_int 0)))]
-  "TARGET_ARM"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT"
   "@
    cmn%?\\t%0, %1
    cmp%?\\t%0, #%n1"
@@ -642,7 +693,8 @@
 	(compare:CC_Z
 	 (neg:SI (match_operand:SI 0 "s_register_operand" "r"))
 	 (match_operand:SI 1 "s_register_operand" "r")))]
-  "TARGET_ARM"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT"
   "cmn%?\\t%1, %0"
   [(set_attr "conds" "set")]
 )
@@ -657,10 +709,12 @@
    (set (match_operand:SI 0 "s_register_operand" "=r,r")
 	(plus:SI (match_dup 1)
 		 (match_operand:SI 3 "arm_addimm_operand" "L,I")))]
-  "TARGET_ARM && INTVAL (operands[2]) == -INTVAL (operands[3])"
+;; APPLE LOCAL begin v7 support. Merge from mainline
+  "TARGET_32BIT && INTVAL (operands[2]) == -INTVAL (operands[3])"
   "@
-   sub%?s\\t%0, %1, %2
-   add%?s\\t%0, %1, #%n2"
+   sub%.\\t%0, %1, %2
+   add%.\\t%0, %1, #%n2"
+;; APPLE LOCAL end v7 support. Merge from mainline
   [(set_attr "conds" "set")]
 )
 
@@ -684,7 +738,8 @@
 		       [(match_dup 2) (const_int 0)])
 		      (match_operand 4 "" "")
 		      (match_operand 5 "" "")))]
-  "TARGET_ARM && peep2_reg_dead_p (3, operands[2])"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT && peep2_reg_dead_p (3, operands[2])"
   [(parallel[
     (set (match_dup 2)
 	 (compare:CC
@@ -713,10 +768,12 @@
 	 (match_dup 1)))
    (set (match_operand:SI 0 "s_register_operand" "=r,r")
 	(plus:SI (match_dup 1) (match_dup 2)))]
-  "TARGET_ARM"
+;; APPLE LOCAL begin v7 support. Merge from mainline
+  "TARGET_32BIT"
   "@
-   add%?s\\t%0, %1, %2
-   sub%?s\\t%0, %1, #%n2"
+   add%.\\t%0, %1, %2
+   sub%.\\t%0, %1, #%n2"
+;; APPLE LOCAL end v7 support. Merge from mainline
   [(set_attr "conds" "set")]
 )
 
@@ -728,10 +785,12 @@
 	 (match_dup 2)))
    (set (match_operand:SI 0 "s_register_operand" "=r,r")
 	(plus:SI (match_dup 1) (match_dup 2)))]
-  "TARGET_ARM"
+;; APPLE LOCAL begin v7 support. Merge from mainline
+  "TARGET_32BIT"
   "@
-   add%?s\\t%0, %1, %2
-   sub%?s\\t%0, %1, #%n2"
+   add%.\\t%0, %1, %2
+   sub%.\\t%0, %1, #%n2"
+;; APPLE LOCAL end v7 support. Merge from mainline
   [(set_attr "conds" "set")]
 )
 
@@ -741,7 +800,8 @@
 	 (plus:SI (match_operand:SI 0 "s_register_operand" "r,r")
 		  (match_operand:SI 1 "arm_add_operand" "rI,L"))
 	 (match_dup 0)))]
-  "TARGET_ARM"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT"
   "@
    cmn%?\\t%0, %1
    cmp%?\\t%0, #%n1"
@@ -754,7 +814,8 @@
 	 (plus:SI (match_operand:SI 0 "s_register_operand" "r,r")
 		  (match_operand:SI 1 "arm_add_operand" "rI,L"))
 	 (match_dup 1)))]
-  "TARGET_ARM"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT"
   "@
    cmn%?\\t%0, %1
    cmp%?\\t%0, #%n1"
@@ -766,7 +827,8 @@
 	(plus:SI (ltu:SI (reg:CC_C CC_REGNUM) (const_int 0))
 		 (plus:SI (match_operand:SI 1 "s_register_operand" "r")
 			  (match_operand:SI 2 "arm_rhs_operand" "rI"))))]
-  "TARGET_ARM"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT"
   "adc%?\\t%0, %1, %2"
   [(set_attr "conds" "use")]
 )
@@ -779,7 +841,8 @@
 		      [(match_operand:SI 3 "s_register_operand" "r")
 		       (match_operand:SI 4 "reg_or_int_operand" "rM")])
 		    (match_operand:SI 1 "s_register_operand" "r"))))]
-  "TARGET_ARM"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT"
   "adc%?\\t%0, %1, %3%S2"
   [(set_attr "conds" "use")
    (set (attr "type") (if_then_else (match_operand 4 "const_int_operand" "")
@@ -792,7 +855,8 @@
 	(plus:SI (plus:SI (match_operand:SI 1 "s_register_operand" "r")
 			  (match_operand:SI 2 "arm_rhs_operand" "rI"))
 		 (ltu:SI (reg:CC_C CC_REGNUM) (const_int 0))))]
-  "TARGET_ARM"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT"
   "adc%?\\t%0, %1, %2"
   [(set_attr "conds" "use")]
 )
@@ -802,7 +866,8 @@
 	(plus:SI (plus:SI (ltu:SI (reg:CC_C CC_REGNUM) (const_int 0))
 			  (match_operand:SI 1 "s_register_operand" "r"))
 		 (match_operand:SI 2 "arm_rhs_operand" "rI")))]
-  "TARGET_ARM"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT"
   "adc%?\\t%0, %1, %2"
   [(set_attr "conds" "use")]
 )
@@ -812,12 +877,23 @@
 	(plus:SI (plus:SI (ltu:SI (reg:CC_C CC_REGNUM) (const_int 0))
 			  (match_operand:SI 2 "arm_rhs_operand" "rI"))
 		 (match_operand:SI 1 "s_register_operand" "r")))]
-  "TARGET_ARM"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT"
   "adc%?\\t%0, %1, %2"
   [(set_attr "conds" "use")]
 )
 
-(define_insn "incscc"
+;; APPLE LOCAL begin v7 support. Merge from mainline
+(define_expand "incscc"
+  [(set (match_operand:SI 0 "s_register_operand" "=r,r")
+        (plus:SI (match_operator:SI 2 "arm_comparison_operator"
+                    [(match_operand:CC 3 "cc_register" "") (const_int 0)])
+                 (match_operand:SI 1 "s_register_operand" "0,?r")))]
+  "TARGET_32BIT"
+  ""
+)
+
+(define_insn "*arm_incscc"
   [(set (match_operand:SI 0 "s_register_operand" "=r,r")
         (plus:SI (match_operator:SI 2 "arm_comparison_operator"
                     [(match_operand:CC 3 "cc_register" "") (const_int 0)])
@@ -829,6 +905,7 @@
   [(set_attr "conds" "use")
    (set_attr "length" "4,8")]
 )
+;; APPLE LOCAL end v7 support. Merge from mainline
 
 ; transform ((x << y) - 1) to ~(~(x-1) << y)  Where X is a constant.
 (define_split
@@ -837,7 +914,8 @@
 			    (match_operand:SI 2 "s_register_operand" ""))
 		 (const_int -1)))
    (clobber (match_operand:SI 3 "s_register_operand" ""))]
-  "TARGET_ARM"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT"
   [(set (match_dup 3) (match_dup 1))
    (set (match_dup 0) (not:SI (ashift:SI (match_dup 3) (match_dup 2))))]
   "
@@ -848,7 +926,8 @@
   [(set (match_operand:SF          0 "s_register_operand" "")
 	(plus:SF (match_operand:SF 1 "s_register_operand" "")
 		 (match_operand:SF 2 "arm_float_add_operand" "")))]
-  "TARGET_ARM && TARGET_HARD_FLOAT"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT && TARGET_HARD_FLOAT"
   "
   if (TARGET_MAVERICK
       && !cirrus_fp_register (operands[2], SFmode))
@@ -859,7 +938,8 @@
   [(set (match_operand:DF          0 "s_register_operand" "")
 	(plus:DF (match_operand:DF 1 "s_register_operand" "")
 		 (match_operand:DF 2 "arm_float_add_operand" "")))]
-  "TARGET_ARM && TARGET_HARD_FLOAT"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT && TARGET_HARD_FLOAT"
   "
   if (TARGET_MAVERICK
       && !cirrus_fp_register (operands[2], DFmode))
@@ -875,8 +955,9 @@
     (clobber (reg:CC CC_REGNUM))])]
   "TARGET_EITHER"
   "
+  /* APPLE LOCAL begin v7 support. Merge from mainline */
   if (TARGET_HARD_FLOAT && TARGET_MAVERICK
-      && TARGET_ARM
+      && TARGET_32BIT
       && cirrus_fp_register (operands[0], DImode)
       && cirrus_fp_register (operands[1], DImode))
     {
@@ -884,7 +965,7 @@
       DONE;
     }
 
-  if (TARGET_THUMB)
+  if (TARGET_THUMB1)
     {
       if (GET_CODE (operands[1]) != REG)
         operands[1] = force_reg (SImode, operands[1]);
@@ -892,24 +973,26 @@
         operands[2] = force_reg (SImode, operands[2]);
      }	
 
-  if (TARGET_ARM 
+  if (TARGET_32BIT
       && (GET_CODE (operands[2]) == CONST_INT
           || GET_CODE (operands[2]) == CONST_DOUBLE)
       && !const64_ok_for_arm_immediate (operands[2]))
     {
       emit_insn (gen_adddi3 (operands[0], operands[1],
-			    negate_rtx (DImode, operands[2])));
+                          negate_rtx (DImode, operands[2])));
       DONE;
     }
+  /* APPLE LOCAL end v7 support. Merge from mainline */
    "
 )
 
 (define_insn "*arm_subdi3"
   [(set (match_operand:DI           0 "s_register_operand" "=&r,&r,&r,&r,&r")
-	(minus:DI (match_operand:DI 1 "s_register_operand"   "0, r, 0, r ,0")
-		  (match_operand:DI 2 "arm_rhs64_operand"   "r, 0, 0,Dd,Dd")))
+	(minus:DI (match_operand:DI 1 "s_register_operand" "0,r,0,r,0")
+		  (match_operand:DI 2 "arm_rhs64_operand" "r,0,0,Dd,Dd")))
    (clobber (reg:CC CC_REGNUM))]
-  "TARGET_ARM"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT"
   "*
    if (which_alternative <= 2)
      return \"subs\\t%Q0, %Q1, %Q2\;sbc\\t%R0, %R1, %R2\";
@@ -929,7 +1012,8 @@
 	(minus:DI (match_operand:DI 1 "register_operand"  "0")
 		  (match_operand:DI 2 "register_operand"  "l")))
    (clobber (reg:CC CC_REGNUM))]
-  "TARGET_THUMB"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_THUMB1"
   "sub\\t%Q0, %Q0, %Q2\;sbc\\t%R0, %R0, %R2"
   [(set_attr "length" "4")]
 )
@@ -940,7 +1024,8 @@
 		  (zero_extend:DI
 		   (match_operand:SI 2 "s_register_operand"  "r,r"))))
    (clobber (reg:CC CC_REGNUM))]
-  "TARGET_ARM"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT"
   "subs\\t%Q0, %Q1, %2\;sbc\\t%R0, %R1, #0"
   [(set_attr "conds" "clob")
    (set_attr "length" "8")]
@@ -952,7 +1037,8 @@
 		  (sign_extend:DI
 		   (match_operand:SI 2 "s_register_operand"  "r,r"))))
    (clobber (reg:CC CC_REGNUM))]
-  "TARGET_ARM"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT"
   "subs\\t%Q0, %Q1, %2\;sbc\\t%R0, %R1, %2, asr #31"
   [(set_attr "conds" "clob")
    (set_attr "length" "8")]
@@ -989,8 +1075,10 @@
 		  (zero_extend:DI
 		   (match_operand:SI 2 "s_register_operand"  "r"))))
    (clobber (reg:CC CC_REGNUM))]
-  "TARGET_ARM"
-  "subs\\t%Q0, %1, %2\;rsc\\t%R0, %1, %1"
+;; APPLE LOCAL begin v7 support. Merge from mainline
+  "TARGET_32BIT"
+  "subs\\t%Q0, %1, %2\;sbc\\t%R0, %1, %1"
+;; APPLE LOCAL end v7 support. Merge from mainline
   [(set_attr "conds" "clob")
    (set_attr "length" "8")]
 )
@@ -1003,37 +1091,43 @@
   "
   if (GET_CODE (operands[1]) == CONST_INT)
     {
-      if (TARGET_ARM)
+      /* APPLE LOCAL v7 support. Merge from mainline */
+      if (TARGET_32BIT)
         {
           arm_split_constant (MINUS, SImode, NULL_RTX,
 	                      INTVAL (operands[1]), operands[0],
 	  		      operands[2], optimize && !no_new_pseudos);
           DONE;
 	}
-      else /* TARGET_THUMB */
+      /* APPLE LOCAL v7 support. Merge from mainline */
+      else /* TARGET_THUMB1 */
         operands[1] = force_reg (SImode, operands[1]);
     }
   "
 )
 
-(define_insn "*thumb_subsi3_insn"
+;; APPLE LOCAL v7 support. Merge from mainline
+(define_insn "*thumb1_subsi3_insn"
   [(set (match_operand:SI           0 "register_operand" "=l")
 	(minus:SI (match_operand:SI 1 "register_operand" "l")
 		  (match_operand:SI 2 "register_operand" "l")))]
-  "TARGET_THUMB"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_THUMB1"
   "sub\\t%0, %1, %2"
   [(set_attr "length" "2")]
 )
 
+;; APPLE LOCAL begin v7 support. Merge from mainline
+; ??? Check Thumb-2 split length
 (define_insn_and_split "*arm_subsi3_insn"
   [(set (match_operand:SI           0 "s_register_operand" "=r,r")
 	(minus:SI (match_operand:SI 1 "reg_or_int_operand" "rI,?n")
 		  (match_operand:SI 2 "s_register_operand" "r,r")))]
-  "TARGET_ARM"
+  "TARGET_32BIT"
   "@
    rsb%?\\t%0, %2, %1
    #"
-  "TARGET_ARM
+  "TARGET_32BIT
    && GET_CODE (operands[1]) == CONST_INT
    && !const_ok_for_arm (INTVAL (operands[1]))"
   [(clobber (const_int 0))]
@@ -1045,13 +1139,15 @@
   [(set_attr "length" "4,16")
    (set_attr "predicable" "yes")]
 )
+;; APPLE LOCAL end v7 support. Merge from mainline
 
 (define_peephole2
   [(match_scratch:SI 3 "r")
    (set (match_operand:SI 0 "arm_general_register_operand" "")
 	(minus:SI (match_operand:SI 1 "const_int_operand" "")
 		  (match_operand:SI 2 "arm_general_register_operand" "")))]
-  "TARGET_ARM
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT
    && !const_ok_for_arm (INTVAL (operands[1]))
    && const_ok_for_arm (~INTVAL (operands[1]))"
   [(set (match_dup 3) (match_dup 1))
@@ -1067,14 +1163,26 @@
 	 (const_int 0)))
    (set (match_operand:SI 0 "s_register_operand" "=r,r")
 	(minus:SI (match_dup 1) (match_dup 2)))]
-  "TARGET_ARM"
+;; APPLE LOCAL begin v7 support. Merge from mainline
+  "TARGET_32BIT"
   "@
-   sub%?s\\t%0, %1, %2
-   rsb%?s\\t%0, %2, %1"
+   sub%.\\t%0, %1, %2
+   rsb%.\\t%0, %2, %1"
+;; APPLE LOCAL end v7 support. Merge from mainline
   [(set_attr "conds" "set")]
 )
 
-(define_insn "decscc"
+;; APPLE LOCAL begin v7 support. Merge from mainline
+(define_expand "decscc"
+  [(set (match_operand:SI            0 "s_register_operand" "=r,r")
+        (minus:SI (match_operand:SI  1 "s_register_operand" "0,?r")
+		  (match_operator:SI 2 "arm_comparison_operator"
+                   [(match_operand   3 "cc_register" "") (const_int 0)])))]
+  "TARGET_32BIT"
+  ""
+)
+
+(define_insn "*arm_decscc"
   [(set (match_operand:SI            0 "s_register_operand" "=r,r")
         (minus:SI (match_operand:SI  1 "s_register_operand" "0,?r")
 		  (match_operator:SI 2 "arm_comparison_operator"
@@ -1086,12 +1194,14 @@
   [(set_attr "conds" "use")
    (set_attr "length" "*,8")]
 )
+;; APPLE LOCAL end v7 support. Merge from mainline
 
 (define_expand "subsf3"
   [(set (match_operand:SF           0 "s_register_operand" "")
 	(minus:SF (match_operand:SF 1 "arm_float_rhs_operand" "")
 		  (match_operand:SF 2 "arm_float_rhs_operand" "")))]
-  "TARGET_ARM && TARGET_HARD_FLOAT"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT && TARGET_HARD_FLOAT"
   "
   if (TARGET_MAVERICK)
     {
@@ -1106,7 +1216,8 @@
   [(set (match_operand:DF           0 "s_register_operand" "")
 	(minus:DF (match_operand:DF 1 "arm_float_rhs_operand" "")
 		  (match_operand:DF 2 "arm_float_rhs_operand" "")))]
-  "TARGET_ARM && TARGET_HARD_FLOAT"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT && TARGET_HARD_FLOAT"
   "
   if (TARGET_MAVERICK)
     {
@@ -1133,12 +1244,26 @@
   [(set (match_operand:SI          0 "s_register_operand" "=&r,&r")
 	(mult:SI (match_operand:SI 2 "s_register_operand" "r,r")
 		 (match_operand:SI 1 "s_register_operand" "%?r,0")))]
-  "TARGET_ARM"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT && !arm_arch6"
   "mul%?\\t%0, %2, %1"
   [(set_attr "insn" "mul")
    (set_attr "predicable" "yes")]
 )
 
+;; APPLE LOCAL begin v7 support. Merge from mainline
+(define_insn "*arm_mulsi3_v6"
+  [(set (match_operand:SI          0 "s_register_operand" "=r")
+	(mult:SI (match_operand:SI 1 "s_register_operand" "r")
+		 (match_operand:SI 2 "s_register_operand" "r")))]
+  "TARGET_32BIT && arm_arch6"
+  "mul%?\\t%0, %1, %2"
+  [(set_attr "insn" "mul")
+   (set_attr "predicable" "yes")]
+)
+;; APPLE LOCAL end v7 support. Merge from mainline
+
+;; APPLE LOCAL begin v7 support. Merge from Codesourcery
 ; Unfortunately with the Thumb the '&'/'0' trick can fails when operands 
 ; 1 and 2; are the same, because reload will make operand 0 match 
 ; operand 1 without realizing that this conflicts with operand 2.  We fix 
@@ -1148,7 +1273,7 @@
   [(set (match_operand:SI          0 "register_operand" "=&l,&l,&l")
 	(mult:SI (match_operand:SI 1 "register_operand" "%l,*h,0")
 		 (match_operand:SI 2 "register_operand" "l,l,l")))]
-  "TARGET_THUMB"
+  "TARGET_THUMB1 && !arm_arch6"
   "*
   if (which_alternative < 2)
     return \"mov\\t%0, %1\;mul\\t%0, %2\";
@@ -1158,7 +1283,24 @@
   [(set_attr "length" "4,4,2")
    (set_attr "insn" "mul")]
 )
+;; APPLE LOCAL end v7 support. Merge from Codesourcery
+
+;; APPLE LOCAL begin v7 support. Merge from mainline
+(define_insn "*thumb_mulsi3_v6"
+  [(set (match_operand:SI          0 "register_operand" "=l,l,l")
+	(mult:SI (match_operand:SI 1 "register_operand" "0,l,0")
+		 (match_operand:SI 2 "register_operand" "l,0,0")))]
+  "TARGET_THUMB1 && arm_arch6"
+  "@
+   mul\\t%0, %2
+   mul\\t%0, %1
+   mul\\t%0, %1"
+  [(set_attr "length" "2")
+   (set_attr "insn" "mul")]
+)
+;; APPLE LOCAL end v7 support. Merge from mainline
 
+;; APPLE LOCAL begin v7 support. Merge from mainline
 (define_insn "*mulsi3_compare0"
   [(set (reg:CC_NOOV CC_REGNUM)
 	(compare:CC_NOOV (mult:SI
@@ -1167,8 +1309,25 @@
 			 (const_int 0)))
    (set (match_operand:SI 0 "s_register_operand" "=&r,&r")
 	(mult:SI (match_dup 2) (match_dup 1)))]
-  "TARGET_ARM"
-  "mul%?s\\t%0, %2, %1"
+  "TARGET_ARM && !arm_arch6"
+  "mul%.\\t%0, %2, %1"
+  [(set_attr "conds" "set")
+   (set_attr "insn" "muls")]
+)
+;; APPLE LOCAL end v7 support. Merge from mainline
+
+;; APPLE LOCAL begin v7 support. Merge from mainline
+(define_insn "*mulsi3_compare0_v6"
+  [(set (reg:CC_NOOV CC_REGNUM)
+	(compare:CC_NOOV (mult:SI
+			  (match_operand:SI 2 "s_register_operand" "r")
+			  (match_operand:SI 1 "s_register_operand" "r"))
+			 (const_int 0)))
+   (set (match_operand:SI 0 "s_register_operand" "=r")
+	(mult:SI (match_dup 2) (match_dup 1)))]
+;; APPLE LOCAL 6040923 unrecognizable insn ICE
+  "TARGET_ARM && arm_arch6"
+  "mul%.\\t%0, %2, %1"
   [(set_attr "conds" "set")
    (set_attr "insn" "muls")]
 )
@@ -1180,26 +1339,55 @@
 			  (match_operand:SI 1 "s_register_operand" "%?r,0"))
 			 (const_int 0)))
    (clobber (match_scratch:SI 0 "=&r,&r"))]
-  "TARGET_ARM"
-  "mul%?s\\t%0, %2, %1"
+  "TARGET_ARM && !arm_arch6"
+  "mul%.\\t%0, %2, %1"
   [(set_attr "conds" "set")
    (set_attr "insn" "muls")]
 )
 
+(define_insn "*mulsi_compare0_scratch_v6"
+  [(set (reg:CC_NOOV CC_REGNUM)
+	(compare:CC_NOOV (mult:SI
+			  (match_operand:SI 2 "s_register_operand" "r")
+			  (match_operand:SI 1 "s_register_operand" "r"))
+			 (const_int 0)))
+   (clobber (match_scratch:SI 0 "=r"))]
+;; APPLE LOCAL 6040923 unrecognizable insn ICE
+  "TARGET_ARM && arm_arch6"
+  "mul%.\\t%0, %2, %1"
+  [(set_attr "conds" "set")
+   (set_attr "insn" "muls")]
+)
+;; APPLE LOCAL end v7 support. Merge from mainline
+
 ;; Unnamed templates to match MLA instruction.
 
+;; APPLE LOCAL begin v7 support. Merge from mainline
 (define_insn "*mulsi3addsi"
   [(set (match_operand:SI 0 "s_register_operand" "=&r,&r,&r,&r")
 	(plus:SI
 	  (mult:SI (match_operand:SI 2 "s_register_operand" "r,r,r,r")
 		   (match_operand:SI 1 "s_register_operand" "%r,0,r,0"))
 	  (match_operand:SI 3 "s_register_operand" "?r,r,0,0")))]
-  "TARGET_ARM"
+  "TARGET_32BIT && !arm_arch6"
   "mla%?\\t%0, %2, %1, %3"
   [(set_attr "insn" "mla")
    (set_attr "predicable" "yes")]
 )
 
+(define_insn "*mulsi3addsi_v6"
+  [(set (match_operand:SI 0 "s_register_operand" "=r")
+	(plus:SI
+	  (mult:SI (match_operand:SI 2 "s_register_operand" "r")
+		   (match_operand:SI 1 "s_register_operand" "r"))
+	  (match_operand:SI 3 "s_register_operand" "r")))]
+  "TARGET_32BIT && arm_arch6"
+  "mla%?\\t%0, %2, %1, %3"
+  [(set_attr "insn" "mla")
+   (set_attr "predicable" "yes")]
+)
+;; APPLE LOCAL end v7 support. Merge from mainline
+
 (define_insn "*mulsi3addsi_compare0"
   [(set (reg:CC_NOOV CC_REGNUM)
 	(compare:CC_NOOV
@@ -1211,11 +1399,30 @@
    (set (match_operand:SI 0 "s_register_operand" "=&r,&r,&r,&r")
 	(plus:SI (mult:SI (match_dup 2) (match_dup 1))
 		 (match_dup 3)))]
-  "TARGET_ARM"
-  "mla%?s\\t%0, %2, %1, %3"
+;; APPLE LOCAL begin v7 support. Merge from mainline
+  "TARGET_ARM && arm_arch6"
+  "mla%.\\t%0, %2, %1, %3"
+  [(set_attr "conds" "set")
+   (set_attr "insn" "mlas")]
+)
+
+(define_insn "*mulsi3addsi_compare0_v6"
+  [(set (reg:CC_NOOV CC_REGNUM)
+	(compare:CC_NOOV
+	 (plus:SI (mult:SI
+		   (match_operand:SI 2 "s_register_operand" "r")
+		   (match_operand:SI 1 "s_register_operand" "r"))
+		  (match_operand:SI 3 "s_register_operand" "r"))
+	 (const_int 0)))
+   (set (match_operand:SI 0 "s_register_operand" "=r")
+	(plus:SI (mult:SI (match_dup 2) (match_dup 1))
+		 (match_dup 3)))]
+  "TARGET_ARM && arm_arch6 && optimize_size"
+  "mla%.\\t%0, %2, %1, %3"
   [(set_attr "conds" "set")
    (set_attr "insn" "mlas")]
 )
+;; APPLE LOCAL end v7 support. Merge from mainline
 
 (define_insn "*mulsi3addsi_compare0_scratch"
   [(set (reg:CC_NOOV CC_REGNUM)
@@ -1226,12 +1433,43 @@
 		  (match_operand:SI 3 "s_register_operand" "?r,r,0,0"))
 	 (const_int 0)))
    (clobber (match_scratch:SI 0 "=&r,&r,&r,&r"))]
-  "TARGET_ARM"
-  "mla%?s\\t%0, %2, %1, %3"
+;; APPLE LOCAL begin v7 support. Merge from mainline
+  "TARGET_ARM && !arm_arch6"
+  "mla%.\\t%0, %2, %1, %3"
+;; APPLE LOCAL end v7 support. Merge from mainline
   [(set_attr "conds" "set")
    (set_attr "insn" "mlas")]
 )
 
+;; APPLE LOCAL begin v7 support. Merge from mainline
+(define_insn "*mulsi3addsi_compare0_scratch_v6"
+  [(set (reg:CC_NOOV CC_REGNUM)
+	(compare:CC_NOOV
+	 (plus:SI (mult:SI
+		   (match_operand:SI 2 "s_register_operand" "r")
+		   (match_operand:SI 1 "s_register_operand" "r"))
+		  (match_operand:SI 3 "s_register_operand" "r"))
+	 (const_int 0)))
+   (clobber (match_scratch:SI 0 "=r"))]
+  "TARGET_ARM && arm_arch6 && optimize_size"
+  "mla%.\\t%0, %2, %1, %3"
+  [(set_attr "conds" "set")
+   (set_attr "insn" "mlas")]
+)
+
+(define_insn "*mulsi3subsi"
+  [(set (match_operand:SI 0 "s_register_operand" "=r")
+	(minus:SI
+	  (match_operand:SI 3 "s_register_operand" "r")
+	  (mult:SI (match_operand:SI 2 "s_register_operand" "r")
+		   (match_operand:SI 1 "s_register_operand" "r"))))]
+  "TARGET_32BIT && arm_arch_thumb2"
+  "mls%?\\t%0, %2, %1, %3"
+  [(set_attr "insn" "mla")
+   (set_attr "predicable" "yes")]
+)
+
+;; APPLE LOCAL end v7 support. Merge from mainline
 ;; Unnamed template to match long long multiply-accumulate (smlal)
 
 (define_insn "*mulsidi3adddi"
@@ -1241,18 +1479,34 @@
 	  (sign_extend:DI (match_operand:SI 2 "s_register_operand" "%r"))
 	  (sign_extend:DI (match_operand:SI 3 "s_register_operand" "r")))
 	 (match_operand:DI 1 "s_register_operand" "0")))]
-  "TARGET_ARM && arm_arch3m"
+;; APPLE LOCAL begin v7 support. Merge from mainline
+  "TARGET_32BIT && arm_arch3m && !arm_arch6"
+  "smlal%?\\t%Q0, %R0, %3, %2"
+  [(set_attr "insn" "smlal")
+   (set_attr "predicable" "yes")]
+)
+
+(define_insn "*mulsidi3adddi_v6"
+  [(set (match_operand:DI 0 "s_register_operand" "=r")
+	(plus:DI
+	 (mult:DI
+	  (sign_extend:DI (match_operand:SI 2 "s_register_operand" "r"))
+	  (sign_extend:DI (match_operand:SI 3 "s_register_operand" "r")))
+	 (match_operand:DI 1 "s_register_operand" "0")))]
+  "TARGET_32BIT && arm_arch6"
   "smlal%?\\t%Q0, %R0, %3, %2"
   [(set_attr "insn" "smlal")
    (set_attr "predicable" "yes")]
 )
+;; APPLE LOCAL end v7 support. Merge from mainline
 
 (define_insn "mulsidi3"
   [(set (match_operand:DI 0 "s_register_operand" "=&r")
 	(mult:DI
 	 (sign_extend:DI (match_operand:SI 1 "s_register_operand" "%r"))
 	 (sign_extend:DI (match_operand:SI 2 "s_register_operand" "r"))))]
-  "TARGET_ARM && arm_arch3m"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT && arm_arch3m"
   "smull%?\\t%Q0, %R0, %1, %2"
   [(set_attr "insn" "smull")
    (set_attr "predicable" "yes")]
@@ -1263,7 +1517,8 @@
 	(mult:DI
 	 (zero_extend:DI (match_operand:SI 1 "s_register_operand" "%r"))
 	 (zero_extend:DI (match_operand:SI 2 "s_register_operand" "r"))))]
-  "TARGET_ARM && arm_arch3m"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT && arm_arch3m"
   "umull%?\\t%Q0, %R0, %1, %2"
   [(set_attr "insn" "umull")
    (set_attr "predicable" "yes")]
@@ -1278,12 +1533,29 @@
 	  (zero_extend:DI (match_operand:SI 2 "s_register_operand" "%r"))
 	  (zero_extend:DI (match_operand:SI 3 "s_register_operand" "r")))
 	 (match_operand:DI 1 "s_register_operand" "0")))]
-  "TARGET_ARM && arm_arch3m"
+;; APPLE LOCAL begin v7 support. Merge from mainline
+  "TARGET_32BIT && arm_arch3m && !arm_arch6"
   "umlal%?\\t%Q0, %R0, %3, %2"
+;; APPLE LOCAL end v7 support. Merge from mainline
   [(set_attr "insn" "umlal")
    (set_attr "predicable" "yes")]
 )
 
+;; APPLE LOCAL begin v7 support. Merge from mainline
+(define_insn "*umulsidi3adddi_v6"
+  [(set (match_operand:DI 0 "s_register_operand" "=r")
+	(plus:DI
+	 (mult:DI
+	  (zero_extend:DI (match_operand:SI 2 "s_register_operand" "r"))
+	  (zero_extend:DI (match_operand:SI 3 "s_register_operand" "r")))
+	 (match_operand:DI 1 "s_register_operand" "0")))]
+  "TARGET_32BIT && arm_arch6"
+  "umlal%?\\t%Q0, %R0, %3, %2"
+  [(set_attr "insn" "umlal")
+   (set_attr "predicable" "yes")]
+)
+;; APPLE LOCAL end v7 support. Merge from mainline
+
 (define_insn "smulsi3_highpart"
   [(set (match_operand:SI 0 "s_register_operand" "=&r,&r")
 	(truncate:SI
@@ -1293,7 +1565,8 @@
 	   (sign_extend:DI (match_operand:SI 2 "s_register_operand" "r,r")))
 	  (const_int 32))))
    (clobber (match_scratch:SI 3 "=&r,&r"))]
-  "TARGET_ARM && arm_arch3m"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT && arm_arch3m"
   "smull%?\\t%3, %0, %2, %1"
   [(set_attr "insn" "smull")
    (set_attr "predicable" "yes")]
@@ -1308,7 +1581,8 @@
 	   (zero_extend:DI (match_operand:SI 2 "s_register_operand" "r,r")))
 	  (const_int 32))))
    (clobber (match_scratch:SI 3 "=&r,&r"))]
-  "TARGET_ARM && arm_arch3m"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT && arm_arch3m"
   "umull%?\\t%3, %0, %2, %1"
   [(set_attr "insn" "umull")
    (set_attr "predicable" "yes")]
@@ -1320,7 +1594,8 @@
 		  (match_operand:HI 1 "s_register_operand" "%r"))
 		 (sign_extend:SI
 		  (match_operand:HI 2 "s_register_operand" "r"))))]
-  "TARGET_ARM && arm_arch5e"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_DSP_MULTIPLY"
   "smulbb%?\\t%0, %1, %2"
   [(set_attr "insn" "smulxy")
    (set_attr "predicable" "yes")]
@@ -1333,7 +1608,8 @@
 		  (const_int 16))
 		 (sign_extend:SI
 		  (match_operand:HI 2 "s_register_operand" "r"))))]
-  "TARGET_ARM && arm_arch5e"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_DSP_MULTIPLY"
   "smultb%?\\t%0, %1, %2"
   [(set_attr "insn" "smulxy")
    (set_attr "predicable" "yes")]
@@ -1346,7 +1622,8 @@
 		 (ashiftrt:SI
 		  (match_operand:SI 2 "s_register_operand" "r")
 		  (const_int 16))))]
-  "TARGET_ARM && arm_arch5e"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_DSP_MULTIPLY"
   "smulbt%?\\t%0, %1, %2"
   [(set_attr "insn" "smulxy")
    (set_attr "predicable" "yes")]
@@ -1360,7 +1637,8 @@
 		 (ashiftrt:SI
 		  (match_operand:SI 2 "s_register_operand" "r")
 		  (const_int 16))))]
-  "TARGET_ARM && arm_arch5e"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_DSP_MULTIPLY"
   "smultt%?\\t%0, %1, %2"
   [(set_attr "insn" "smulxy")
    (set_attr "predicable" "yes")]
@@ -1373,7 +1651,8 @@
 			   (match_operand:HI 2 "s_register_operand" "%r"))
 			  (sign_extend:SI
 			   (match_operand:HI 3 "s_register_operand" "r")))))]
-  "TARGET_ARM && arm_arch5e"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_DSP_MULTIPLY"
   "smlabb%?\\t%0, %2, %3, %1"
   [(set_attr "insn" "smlaxy")
    (set_attr "predicable" "yes")]
@@ -1387,7 +1666,8 @@
 	 	    (match_operand:HI 2 "s_register_operand" "%r"))
 		   (sign_extend:DI
 		    (match_operand:HI 3 "s_register_operand" "r")))))]
-  "TARGET_ARM && arm_arch5e"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_DSP_MULTIPLY"
   "smlalbb%?\\t%Q0, %R0, %2, %3"
   [(set_attr "insn" "smlalxy")
    (set_attr "predicable" "yes")])
@@ -1449,7 +1729,8 @@
   [(set (match_operand:SF          0 "s_register_operand" "")
 	(mult:SF (match_operand:SF 1 "s_register_operand" "")
 		 (match_operand:SF 2 "arm_float_rhs_operand" "")))]
-  "TARGET_ARM && TARGET_HARD_FLOAT"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT && TARGET_HARD_FLOAT"
   "
   if (TARGET_MAVERICK
       && !cirrus_fp_register (operands[2], SFmode))
@@ -1460,7 +1741,8 @@
   [(set (match_operand:DF          0 "s_register_operand" "")
 	(mult:DF (match_operand:DF 1 "s_register_operand" "")
 		 (match_operand:DF 2 "arm_float_rhs_operand" "")))]
-  "TARGET_ARM && TARGET_HARD_FLOAT"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT && TARGET_HARD_FLOAT"
   "
   if (TARGET_MAVERICK
       && !cirrus_fp_register (operands[2], DFmode))
@@ -1473,14 +1755,16 @@
   [(set (match_operand:SF 0 "s_register_operand" "")
 	(div:SF (match_operand:SF 1 "arm_float_rhs_operand" "")
 		(match_operand:SF 2 "arm_float_rhs_operand" "")))]
-  "TARGET_ARM && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
   "")
 
 (define_expand "divdf3"
   [(set (match_operand:DF 0 "s_register_operand" "")
 	(div:DF (match_operand:DF 1 "arm_float_rhs_operand" "")
 		(match_operand:DF 2 "arm_float_rhs_operand" "")))]
-  "TARGET_ARM && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
   "")
 
 ;; Modulo insns
@@ -1489,14 +1773,16 @@
   [(set (match_operand:SF 0 "s_register_operand" "")
 	(mod:SF (match_operand:SF 1 "s_register_operand" "")
 		(match_operand:SF 2 "arm_float_rhs_operand" "")))]
-  "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
   "")
 
 (define_expand "moddf3"
   [(set (match_operand:DF 0 "s_register_operand" "")
 	(mod:DF (match_operand:DF 1 "s_register_operand" "")
 		(match_operand:DF 2 "arm_float_rhs_operand" "")))]
-  "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
   "")
 
 ;; Boolean and,ior,xor insns
@@ -1511,7 +1797,10 @@
 	(match_operator:DI 6 "logical_binary_operator"
 	  [(match_operand:DI 1 "s_register_operand" "")
 	   (match_operand:DI 2 "arm_rhs64_operand" "")]))]
-  "TARGET_ARM && reload_completed && ! IS_IWMMXT_REGNUM (REGNO (operands[0]))"
+;; APPLE LOCAL begin v7 support. Merge from mainline
+  "TARGET_32BIT && reload_completed
+   && ! IS_IWMMXT_REGNUM (REGNO (operands[0]))"
+;; APPLE LOCAL end v7 support. Merge from mainline
   [(set (match_dup 0) (match_op_dup:SI 6 [(match_dup 1) (match_dup 2)]))
    (set (match_dup 3) (match_op_dup:SI 6 [(match_dup 4) (match_dup 5)]))]
   "
@@ -1531,7 +1820,8 @@
 	(match_operator:DI 6 "logical_binary_operator"
 	  [(sign_extend:DI (match_operand:SI 2 "s_register_operand" ""))
 	   (match_operand:DI 1 "s_register_operand" "")]))]
-  "TARGET_ARM && reload_completed"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT && reload_completed"
   [(set (match_dup 0) (match_op_dup:SI 6 [(match_dup 1) (match_dup 2)]))
    (set (match_dup 3) (match_op_dup:SI 6
 			[(ashiftrt:SI (match_dup 2) (const_int 31))
@@ -1554,7 +1844,8 @@
 	(ior:DI
 	  (zero_extend:DI (match_operand:SI 2 "s_register_operand" ""))
 	  (match_operand:DI 1 "s_register_operand" "")))]
-  "TARGET_ARM && operands[0] != operands[1] && reload_completed"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT && operands[0] != operands[1] && reload_completed"
   [(set (match_dup 0) (ior:SI (match_dup 1) (match_dup 2)))
    (set (match_dup 3) (match_dup 4))]
   "
@@ -1573,7 +1864,8 @@
 	(xor:DI
 	  (zero_extend:DI (match_operand:SI 2 "s_register_operand" ""))
 	  (match_operand:DI 1 "s_register_operand" "")))]
-  "TARGET_ARM && operands[0] != operands[1] && reload_completed"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT && operands[0] != operands[1] && reload_completed"
   [(set (match_dup 0) (xor:SI (match_dup 1) (match_dup 2)))
    (set (match_dup 3) (match_dup 4))]
   "
@@ -1588,9 +1880,10 @@
 ;; APPLE LOCAL begin 5831562 long long constants
 (define_insn "anddi3"
   [(set (match_operand:DI         0 "s_register_operand" "=&r,&r,&r,&r")
-	(and:DI (match_operand:DI 1 "s_register_operand"  "%0,r, 0, r")
-		(match_operand:DI 2 "arm_rhs64_operand"    "r,r,Dd,Dd")))]
-  "TARGET_ARM && ! TARGET_IWMMXT"
+	(and:DI (match_operand:DI 1 "s_register_operand"  "%0,r,0,r")
+		(match_operand:DI 2 "s_register_operand"   "r,r,Dd,Dd")))]
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT && ! TARGET_IWMMXT"
   "#"
   [(set_attr "length" "8")
    (set_attr "predicable" "yes")]
@@ -1602,9 +1895,11 @@
 	(and:DI (zero_extend:DI
 		 (match_operand:SI 2 "s_register_operand" "r,r"))
 		(match_operand:DI 1 "s_register_operand" "?r,0")))]
-  "TARGET_ARM"
+;; APPLE LOCAL begin v7 support. Merge from mainline
+  "TARGET_32BIT"
   "#"
-  "TARGET_ARM && reload_completed"
+  "TARGET_32BIT && reload_completed"
+;; APPLE LOCAL end v7 support. Merge from mainline
   ; The zero extend of operand 2 clears the high word of the output
   ; operand.
   [(set (match_dup 0) (and:SI (match_dup 1) (match_dup 2)))
@@ -1623,7 +1918,8 @@
 	(and:DI (sign_extend:DI
 		 (match_operand:SI 2 "s_register_operand" "r,r"))
 		(match_operand:DI  1 "s_register_operand" "?r,0")))]
-  "TARGET_ARM"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT"
   "#"
   [(set_attr "length" "8")]
 )
@@ -1634,7 +1930,8 @@
 		(match_operand:SI 2 "reg_or_int_operand" "")))]
   "TARGET_EITHER"
   "
-  if (TARGET_ARM)
+  /* APPLE LOCAL v7 support. Merge from mainline */
+  if (TARGET_32BIT)
     {
       if (GET_CODE (operands[2]) == CONST_INT)
         {
@@ -1645,7 +1942,8 @@
           DONE;
         }
     }
-  else /* TARGET_THUMB */
+  /* APPLE LOCAL v7 support. Merge from mainline */
+  else /* TARGET_THUMB1 */
     {
       if (GET_CODE (operands[2]) != CONST_INT)
         operands[2] = force_reg (SImode, operands[2]);
@@ -1690,12 +1988,14 @@
   "
 )
 
+;; APPLE LOCAL begin v7 support. Merge from mainline
+; ??? Check split length for Thumb-2
 ;; APPLE LOCAL begin ARM 4673027 suboptimal loop codegen
 (define_insn "*arm_andsi3_insn"
   [(set (match_operand:SI         0 "s_register_operand" "=r,r")
 	(and:SI (match_operand:SI 1 "s_register_operand" "r,r")
 		(match_operand:SI 2 "arm_not_operand" "rI,K")))]
-  "TARGET_ARM"
+  "TARGET_32BIT"
   "@
    and%?\\t%0, %1, %2
    bic%?\\t%0, %1, #%B2"
@@ -1703,12 +2003,15 @@
    (set_attr "predicable" "yes")]
 )
 ;; APPLE LOCAL end ARM 4673027 suboptimal loop codegen
+;; APPLE LOCAL end v7 support. Merge from mainline
 
-(define_insn "*thumb_andsi3_insn"
+;; APPLE LOCAL v7 support. Merge from mainline
+(define_insn "*thumb1_andsi3_insn"
   [(set (match_operand:SI         0 "register_operand" "=l")
 	(and:SI (match_operand:SI 1 "register_operand" "%0")
 		(match_operand:SI 2 "register_operand" "l")))]
-  "TARGET_THUMB"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_THUMB1"
   "and\\t%0, %0, %2"
   [(set_attr "length" "2")]
 )
@@ -1721,10 +2024,12 @@
 	 (const_int 0)))
    (set (match_operand:SI          0 "s_register_operand" "=r,r")
 	(and:SI (match_dup 1) (match_dup 2)))]
-  "TARGET_ARM"
+;; APPLE LOCAL begin v7 support. Merge from mainline
+  "TARGET_32BIT"
   "@
-   and%?s\\t%0, %1, %2
-   bic%?s\\t%0, %1, #%B2"
+   and%.\\t%0, %1, %2
+   bic%.\\t%0, %1, #%B2"
+;; APPLE LOCAL end v7 support. Merge from mainline
   [(set_attr "conds" "set")]
 )
 
@@ -1735,10 +2040,12 @@
 		 (match_operand:SI 1 "arm_not_operand" "rI,K"))
 	 (const_int 0)))
    (clobber (match_scratch:SI 2 "=X,r"))]
-  "TARGET_ARM"
+;; APPLE LOCAL begin v7 support. Merge from mainline
+  "TARGET_32BIT"
   "@
    tst%?\\t%0, %1
-   bic%?s\\t%2, %0, #%B1"
+   bic%.\\t%2, %0, #%B1"
+;; APPLE LOCAL end v7 support. Merge from mainline
   [(set_attr "conds" "set")]
 )
 
@@ -1749,7 +2056,8 @@
 		 	  (match_operand 1 "const_int_operand" "n")
 			  (match_operand 2 "const_int_operand" "n"))
 			 (const_int 0)))]
-  "TARGET_ARM
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT
   && (INTVAL (operands[2]) >= 0 && INTVAL (operands[2]) < 32
       && INTVAL (operands[1]) > 0 
       && INTVAL (operands[1]) + (INTVAL (operands[2]) & 1) <= 8
@@ -1771,17 +2079,19 @@
 		(match_operand:SI 3 "const_int_operand" "n"))
 	       (const_int 0)))
    (clobber (reg:CC CC_REGNUM))]
-  "TARGET_ARM
+;; APPLE LOCAL begin v7 support. Merge from mainline
+  "TARGET_32BIT
    && (INTVAL (operands[3]) >= 0 && INTVAL (operands[3]) < 32
        && INTVAL (operands[2]) > 0 
        && INTVAL (operands[2]) + (INTVAL (operands[3]) & 1) <= 8
        && INTVAL (operands[2]) + INTVAL (operands[3]) <= 32)"
   "#"
-  "TARGET_ARM
+  "TARGET_32BIT
    && (INTVAL (operands[3]) >= 0 && INTVAL (operands[3]) < 32
        && INTVAL (operands[2]) > 0 
        && INTVAL (operands[2]) + (INTVAL (operands[3]) & 1) <= 8
        && INTVAL (operands[2]) + INTVAL (operands[3]) <= 32)"
+;; APPLE LOCAL end v7 support. Merge from mainline
   [(parallel [(set (reg:CC_NOOV CC_REGNUM)
 		   (compare:CC_NOOV (and:SI (match_dup 1) (match_dup 2))
 				    (const_int 0)))
@@ -1794,7 +2104,12 @@
 			 << INTVAL (operands[3])); 
   "
   [(set_attr "conds" "clob")
-   (set_attr "length" "8")]
+;; APPLE LOCAL begin v7 support. Merge from mainline
+   (set (attr "length")
+	(if_then_else (eq_attr "is_thumb" "yes")
+		      (const_int 12)
+		      (const_int 8)))]
+;; APPLE LOCAL end v7 support. Merge from mainline
 )
 
 (define_insn_and_split "*ne_zeroextractsi_shifted"
@@ -1893,7 +2208,8 @@
 			 (match_operand:SI 2 "const_int_operand" "")
 			 (match_operand:SI 3 "const_int_operand" "")))
    (clobber (match_operand:SI 4 "s_register_operand" ""))]
-  "TARGET_THUMB"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_THUMB1"
   [(set (match_dup 4) (ashift:SI (match_dup 1) (match_dup 2)))
    (set (match_dup 0) (lshiftrt:SI (match_dup 4) (match_dup 3)))]
   "{
@@ -1904,6 +2220,8 @@
    }"
 )
 
+;; APPLE LOCAL v7 support. Merge from mainline
+;; ??? Use Thumb-2 bitfield insert/extract instructions.
 (define_split
   [(set (match_operand:SI 0 "s_register_operand" "")
 	(match_operator:SI 1 "shiftable_operator"
@@ -1931,7 +2249,8 @@
 	(sign_extract:SI (match_operand:SI 1 "s_register_operand" "")
 			 (match_operand:SI 2 "const_int_operand" "")
 			 (match_operand:SI 3 "const_int_operand" "")))]
-  "TARGET_THUMB"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_THUMB1"
   [(set (match_dup 0) (ashift:SI (match_dup 1) (match_dup 2)))
    (set (match_dup 0) (ashiftrt:SI (match_dup 0) (match_dup 3)))]
   "{
@@ -1974,6 +2293,8 @@
;;; this insv pattern, so this pattern needs to be reevaluated.
 ;;; APPLE LOCAL begin ARM insv for Thumb
 
+;; APPLE LOCAL v7 support. Merge from mainline
+; ??? Use Thumb-2 bitfield insert/extract instructions
 (define_expand "insv"
   [(set (zero_extract:SI (match_operand:SI 0 "s_register_operand" "")
                          (match_operand:SI 1 "general_operand" "")
@@ -2146,9 +2467,10 @@
   [(set (match_operand:DI 0 "s_register_operand" "=&r,&r")
 	(and:DI (not:DI (match_operand:DI 1 "s_register_operand" "r,0"))
 		(match_operand:DI 2 "s_register_operand" "0,r")))]
-  "TARGET_ARM"
+;; APPLE LOCAL begin v7 support. Merge from mainline
+  "TARGET_32BIT"
   "#"
-  "TARGET_ARM && reload_completed && ! IS_IWMMXT_REGNUM (REGNO (operands[0]))"
+  "TARGET_32BIT && reload_completed && ! IS_IWMMXT_REGNUM (REGNO (operands[0]))"
   [(set (match_dup 0) (and:SI (not:SI (match_dup 1)) (match_dup 2)))
    (set (match_dup 3) (and:SI (not:SI (match_dup 4)) (match_dup 5)))]
   "
@@ -2160,6 +2482,7 @@
     operands[5] = gen_highpart (SImode, operands[2]);
     operands[2] = gen_lowpart (SImode, operands[2]);
   }"
+;; APPLE LOCAL end v7 support. Merge from mainline
   [(set_attr "length" "8")
    (set_attr "predicable" "yes")]
 )
@@ -2169,13 +2492,15 @@
 	(and:DI (not:DI (zero_extend:DI
 			 (match_operand:SI 2 "s_register_operand" "r,r")))
 		(match_operand:DI 1 "s_register_operand" "0,?r")))]
-  "TARGET_ARM"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT"
   "@
    bic%?\\t%Q0, %Q1, %2
    #"
   ; (not (zero_extend ...)) allows us to just copy the high word from
   ; operand1 to operand0.
-  "TARGET_ARM
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT
    && reload_completed
    && operands[0] != operands[1]"
   [(set (match_dup 0) (and:SI (not:SI (match_dup 2)) (match_dup 1)))
@@ -2196,9 +2521,11 @@
 	(and:DI (not:DI (sign_extend:DI
 			 (match_operand:SI 2 "s_register_operand" "r,r")))
 		(match_operand:DI 1 "s_register_operand" "0,r")))]
-  "TARGET_ARM"
+;; APPLE LOCAL begin v7 support. Merge from mainline
+  "TARGET_32BIT"
   "#"
-  "TARGET_ARM && reload_completed"
+  "TARGET_32BIT && reload_completed"
+;; APPLE LOCAL end v7 support. Merge from mainline
   [(set (match_dup 0) (and:SI (not:SI (match_dup 2)) (match_dup 1)))
    (set (match_dup 3) (and:SI (not:SI
 				(ashiftrt:SI (match_dup 2) (const_int 31)))
@@ -2218,7 +2545,8 @@
   [(set (match_operand:SI 0 "s_register_operand" "=r")
 	(and:SI (not:SI (match_operand:SI 2 "s_register_operand" "r"))
 		(match_operand:SI 1 "s_register_operand" "r")))]
-  "TARGET_ARM"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT"
   "bic%?\\t%0, %1, %2"
   [(set_attr "predicable" "yes")]
 )
@@ -2227,7 +2555,8 @@
   [(set (match_operand:SI                 0 "register_operand" "=l")
 	(and:SI (not:SI (match_operand:SI 1 "register_operand" "l"))
 		(match_operand:SI         2 "register_operand" "0")))]
-  "TARGET_THUMB"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_THUMB1"
   "bic\\t%0, %0, %1"
   [(set_attr "length" "2")]
 )
@@ -2255,8 +2584,10 @@
 	 (const_int 0)))
    (set (match_operand:SI 0 "s_register_operand" "=r")
 	(and:SI (not:SI (match_dup 2)) (match_dup 1)))]
-  "TARGET_ARM"
-  "bic%?s\\t%0, %1, %2"
+;; APPLE LOCAL begin v7 support. Merge from mainline
+  "TARGET_32BIT"
+  "bic%.\\t%0, %1, %2"
+;; APPLE LOCAL end v7 support. Merge from mainline
   [(set_attr "conds" "set")]
 )
 
@@ -2267,17 +2598,20 @@
 		 (match_operand:SI 1 "s_register_operand" "r"))
 	 (const_int 0)))
    (clobber (match_scratch:SI 0 "=r"))]
-  "TARGET_ARM"
-  "bic%?s\\t%0, %1, %2"
+;; APPLE LOCAL begin v7 support. Merge from mainline
+  "TARGET_32BIT"
+  "bic%.\\t%0, %1, %2"
+;; APPLE LOCAL end v7 support. Merge from mainline
   [(set_attr "conds" "set")]
 )
 
 ;; APPLE LOCAL begin 5831562 long long constants
 (define_insn "iordi3"
   [(set (match_operand:DI         0 "s_register_operand" "=&r,&r,&r,&r")
-	(ior:DI (match_operand:DI 1 "s_register_operand"  "%0,r, 0, r")
+	(ior:DI (match_operand:DI 1 "s_register_operand"  "%0,r,0,r")
 		(match_operand:DI 2 "arm_rhs64_operand"   "r,r,Dd,Dd")))]
-  "TARGET_ARM && ! TARGET_IWMMXT"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT && ! TARGET_IWMMXT"
   "#"
   [(set_attr "length" "8")
    (set_attr "predicable" "yes")]
@@ -2289,7 +2623,8 @@
 	(ior:DI (zero_extend:DI
 		 (match_operand:SI 2 "s_register_operand" "r,r"))
 		(match_operand:DI 1 "s_register_operand" "0,?r")))]
-  "TARGET_ARM"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT"
   "@
    orr%?\\t%Q0, %Q1, %2
    #"
@@ -2302,7 +2637,8 @@
 	(ior:DI (sign_extend:DI
 		 (match_operand:SI 2 "s_register_operand" "r,r"))
 		(match_operand:DI 1 "s_register_operand" "?r,0")))]
-  "TARGET_ARM"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT"
   "#"
   [(set_attr "length" "8")
    (set_attr "predicable" "yes")]
@@ -2316,36 +2652,41 @@
   "
   if (GET_CODE (operands[2]) == CONST_INT)
     {
-      if (TARGET_ARM)
+      /* APPLE LOCAL v7 support. Merge from mainline */
+      if (TARGET_32BIT)
         {
           arm_split_constant (IOR, SImode, NULL_RTX,
 	                      INTVAL (operands[2]), operands[0], operands[1],
 			      optimize && !no_new_pseudos);
           DONE;
 	}
-      else /* TARGET_THUMB */
+      /* APPLE LOCAL v7 support. Merge from mainline */
+      else /* TARGET_THUMB1 */
 	operands [2] = force_reg (SImode, operands [2]);
     }
   "
 )
 
 ;; APPLE LOCAL begin ARM 4673027 suboptimal loop codegen
-(define_insn "*arm_iorsi3"
+(define_insn "*arm_iorsi3"
   [(set (match_operand:SI         0 "s_register_operand" "=r")
 	(ior:SI (match_operand:SI 1 "s_register_operand" "r")
-		(match_operand:SI 2 "arm_rhs_operand" "rI")))]
-  "TARGET_ARM"
+		(match_operand:SI 2 "reg_or_int_operand" "rI")))]
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT"
   "orr%?\\t%0, %1, %2"
   [(set_attr "length" "4")
    (set_attr "predicable" "yes")]
 )
 ;; APPLE LOCAL end ARM 4673027 suboptimal loop codegen
 
-(define_insn "*thumb_iorsi3"
+;; APPLE LOCAL v7 support. Merge from mainline
+(define_insn "*thumb1_iorsi3"
   [(set (match_operand:SI         0 "register_operand" "=l")
 	(ior:SI (match_operand:SI 1 "register_operand" "%0")
 		(match_operand:SI 2 "register_operand" "l")))]
-  "TARGET_THUMB"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_THUMB1"
   "orr\\t%0, %0, %2"
   [(set_attr "length" "2")]
 )
@@ -2355,7 +2696,8 @@
    (set (match_operand:SI 0 "arm_general_register_operand" "")
 	(ior:SI (match_operand:SI 1 "arm_general_register_operand" "")
 		(match_operand:SI 2 "const_int_operand" "")))]
-  "TARGET_ARM
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT
    && !const_ok_for_arm (INTVAL (operands[2]))
    && const_ok_for_arm (~INTVAL (operands[2]))"
   [(set (match_dup 3) (match_dup 2))
@@ -2370,8 +2712,10 @@
 			 (const_int 0)))
    (set (match_operand:SI 0 "s_register_operand" "=r")
 	(ior:SI (match_dup 1) (match_dup 2)))]
-  "TARGET_ARM"
-  "orr%?s\\t%0, %1, %2"
+;; APPLE LOCAL begin v7 support. Merge from mainline
+  "TARGET_32BIT"
+  "orr%.\\t%0, %1, %2"
+;; APPLE LOCAL end v7 support. Merge from mainline
   [(set_attr "conds" "set")]
 )
 
@@ -2381,17 +2725,20 @@
 				 (match_operand:SI 2 "arm_rhs_operand" "rI"))
 			 (const_int 0)))
    (clobber (match_scratch:SI 0 "=r"))]
-  "TARGET_ARM"
-  "orr%?s\\t%0, %1, %2"
+;; APPLE LOCAL begin v7 support. Merge from mainline
+  "TARGET_32BIT"
+  "orr%.\\t%0, %1, %2"
+;; APPLE LOCAL end v7 support. Merge from mainline
   [(set_attr "conds" "set")]
 )
 
 ;; APPLE LOCAL begin 5831562 long long constants
 (define_insn "xordi3"
   [(set (match_operand:DI         0 "s_register_operand" "=&r,&r,&r,&r")
-	(xor:DI (match_operand:DI 1 "s_register_operand"  "%0,r, 0, r")
-		(match_operand:DI 2 "arm_rhs64_operand"   "r,r,Dd,Dd")))]
-  "TARGET_ARM && !TARGET_IWMMXT"
+	(xor:DI (match_operand:DI 1 "s_register_operand"  "%0,r,0,r")
+		(match_operand:DI 2 "s_register_operand"   "r,r,Dd,Dd")))]
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT && !TARGET_IWMMXT"
   "#"
   [(set_attr "length" "8")
    (set_attr "predicable" "yes")]
@@ -2403,7 +2750,8 @@
 	(xor:DI (zero_extend:DI
 		 (match_operand:SI 2 "s_register_operand" "r,r"))
 		(match_operand:DI 1 "s_register_operand" "0,?r")))]
-  "TARGET_ARM"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT"
   "@
    eor%?\\t%Q0, %Q1, %2
    #"
@@ -2416,7 +2764,8 @@
 	(xor:DI (sign_extend:DI
 		 (match_operand:SI 2 "s_register_operand" "r,r"))
 		(match_operand:DI 1 "s_register_operand" "?r,0")))]
-  "TARGET_ARM"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT"
   "#"
   [(set_attr "length" "8")
    (set_attr "predicable" "yes")]
@@ -2427,7 +2776,8 @@
 	(xor:SI (match_operand:SI 1 "s_register_operand" "")
 		(match_operand:SI 2 "arm_rhs_operand"  "")))]
   "TARGET_EITHER"
-  "if (TARGET_THUMB)
+;; APPLE LOCAL v7 support. Merge from mainline
+  "if (TARGET_THUMB1)
      if (GET_CODE (operands[2]) == CONST_INT)
        operands[2] = force_reg (SImode, operands[2]);
   "
@@ -2437,16 +2787,19 @@
   [(set (match_operand:SI         0 "s_register_operand" "=r")
 	(xor:SI (match_operand:SI 1 "s_register_operand" "r")
 		(match_operand:SI 2 "arm_rhs_operand" "rI")))]
-  "TARGET_ARM"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT"
   "eor%?\\t%0, %1, %2"
   [(set_attr "predicable" "yes")]
 )
 
-(define_insn "*thumb_xorsi3"
+;; APPLE LOCAL v7 support. Merge from mainline
+(define_insn "*thumb1_xorsi3"
   [(set (match_operand:SI         0 "register_operand" "=l")
 	(xor:SI (match_operand:SI 1 "register_operand" "%0")
 		(match_operand:SI 2 "register_operand" "l")))]
-  "TARGET_THUMB"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_THUMB1"
   "eor\\t%0, %0, %2"
   [(set_attr "length" "2")]
 )
@@ -2458,8 +2811,10 @@
 			 (const_int 0)))
    (set (match_operand:SI 0 "s_register_operand" "=r")
 	(xor:SI (match_dup 1) (match_dup 2)))]
-  "TARGET_ARM"
-  "eor%?s\\t%0, %1, %2"
+;; APPLE LOCAL begin v7 support. Merge from mainline
+  "TARGET_32BIT"
+  "eor%.\\t%0, %1, %2"
+;; APPLE LOCAL end v7 support. Merge from mainline
   [(set_attr "conds" "set")]
 )
 
@@ -2468,7 +2823,8 @@
 	(compare:CC_NOOV (xor:SI (match_operand:SI 0 "s_register_operand" "r")
 				 (match_operand:SI 1 "arm_rhs_operand" "rI"))
 			 (const_int 0)))]
-  "TARGET_ARM"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT"
   "teq%?\\t%0, %1"
   [(set_attr "conds" "set")]
 )
@@ -2483,7 +2839,8 @@
 			(not:SI (match_operand:SI 2 "arm_rhs_operand" "")))
 		(match_operand:SI 3 "arm_rhs_operand" "")))
    (clobber (match_operand:SI 4 "s_register_operand" ""))]
-  "TARGET_ARM"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT"
   [(set (match_dup 4) (and:SI (ior:SI (match_dup 1) (match_dup 2))
 			      (not:SI (match_dup 3))))
    (set (match_dup 0) (not:SI (match_dup 4)))]
@@ -2495,12 +2852,19 @@
 	(and:SI (ior:SI (match_operand:SI 1 "s_register_operand" "r,r,0")
 			(match_operand:SI 2 "arm_rhs_operand" "rI,0,rI"))
 		(not:SI (match_operand:SI 3 "arm_rhs_operand" "rI,rI,rI"))))]
-  "TARGET_ARM"
+;; APPLE LOCAL begin v7 support. Merge from mainline
+  "TARGET_32BIT"
   "orr%?\\t%0, %1, %2\;bic%?\\t%0, %0, %3"
   [(set_attr "length" "8")
+   (set_attr "ce_count" "2")
    (set_attr "predicable" "yes")]
+;; APPLE LOCAL end v7 support. Merge from mainline
 )
 
+;; APPLE LOCAL begin v7 support. Merge from mainline
+; ??? Are these four splitters still beneficial when the Thumb-2 bitfield
+; insns are available?
+;; APPLE LOCAL end v7 support. Merge from mainline
 (define_split
   [(set (match_operand:SI 0 "s_register_operand" "")
 	(match_operator:SI 1 "logical_binary_operator"
@@ -2512,7 +2876,8 @@
 			 (match_operand:SI 6 "const_int_operand" ""))
 	    (match_operand:SI 7 "s_register_operand" "")])]))
    (clobber (match_operand:SI 8 "s_register_operand" ""))]
-  "TARGET_ARM
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT
    && GET_CODE (operands[1]) == GET_CODE (operands[9])
    && INTVAL (operands[3]) == 32 - INTVAL (operands[6])"
   [(set (match_dup 8)
@@ -2538,7 +2903,8 @@
 			   (match_operand:SI 3 "const_int_operand" "")
 			   (match_operand:SI 4 "const_int_operand" ""))]))
    (clobber (match_operand:SI 8 "s_register_operand" ""))]
-  "TARGET_ARM
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT
    && GET_CODE (operands[1]) == GET_CODE (operands[9])
    && INTVAL (operands[3]) == 32 - INTVAL (operands[6])"
   [(set (match_dup 8)
@@ -2564,7 +2930,8 @@
 			 (match_operand:SI 6 "const_int_operand" ""))
 	    (match_operand:SI 7 "s_register_operand" "")])]))
    (clobber (match_operand:SI 8 "s_register_operand" ""))]
-  "TARGET_ARM
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT
    && GET_CODE (operands[1]) == GET_CODE (operands[9])
    && INTVAL (operands[3]) == 32 - INTVAL (operands[6])"
   [(set (match_dup 8)
@@ -2590,7 +2957,8 @@
 			   (match_operand:SI 3 "const_int_operand" "")
 			   (match_operand:SI 4 "const_int_operand" ""))]))
    (clobber (match_operand:SI 8 "s_register_operand" ""))]
-  "TARGET_ARM
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT
    && GET_CODE (operands[1]) == GET_CODE (operands[9])
    && INTVAL (operands[3]) == 32 - INTVAL (operands[6])"
   [(set (match_dup 8)
@@ -2614,7 +2982,8 @@
 	 (smax:SI (match_operand:SI 1 "s_register_operand" "")
 		  (match_operand:SI 2 "arm_rhs_operand" "")))
     (clobber (reg:CC CC_REGNUM))])]
-  "TARGET_ARM"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT"
   "
   if (operands[2] == const0_rtx || operands[2] == constm1_rtx)
     {
@@ -2630,7 +2999,8 @@
   [(set (match_operand:SI 0 "s_register_operand" "=r")
 	(smax:SI (match_operand:SI 1 "s_register_operand" "r")
 		 (const_int 0)))]
-  "TARGET_ARM"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT"
   "bic%?\\t%0, %1, %1, asr #31"
   [(set_attr "predicable" "yes")]
 )
@@ -2639,12 +3009,14 @@
   [(set (match_operand:SI 0 "s_register_operand" "=r")
 	(smax:SI (match_operand:SI 1 "s_register_operand" "r")
 		 (const_int -1)))]
-  "TARGET_ARM"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT"
   "orr%?\\t%0, %1, %1, asr #31"
   [(set_attr "predicable" "yes")]
 )
 
-(define_insn "*smax_insn"
+;; APPLE LOCAL v7 support. Merge from mainline
+(define_insn "*arm_smax_insn"
   [(set (match_operand:SI          0 "s_register_operand" "=r,r")
 	(smax:SI (match_operand:SI 1 "s_register_operand"  "%0,?r")
 		 (match_operand:SI 2 "arm_rhs_operand"    "rI,rI")))
@@ -2663,7 +3035,8 @@
 	 (smin:SI (match_operand:SI 1 "s_register_operand" "")
 		  (match_operand:SI 2 "arm_rhs_operand" "")))
     (clobber (reg:CC CC_REGNUM))])]
-  "TARGET_ARM"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT"
   "
   if (operands[2] == const0_rtx)
     {
@@ -2679,12 +3052,14 @@
   [(set (match_operand:SI 0 "s_register_operand" "=r")
 	(smin:SI (match_operand:SI 1 "s_register_operand" "r")
 		 (const_int 0)))]
-  "TARGET_ARM"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT"
   "and%?\\t%0, %1, %1, asr #31"
   [(set_attr "predicable" "yes")]
 )
 
-(define_insn "*smin_insn"
+;; APPLE LOCAL v7 support. Merge from mainline
+(define_insn "*arm_smin_insn"
   [(set (match_operand:SI 0 "s_register_operand" "=r,r")
 	(smin:SI (match_operand:SI 1 "s_register_operand" "%0,?r")
 		 (match_operand:SI 2 "arm_rhs_operand" "rI,rI")))
@@ -2697,7 +3072,18 @@
    (set_attr "length" "8,12")]
 )
 
-(define_insn "umaxsi3"
+;; APPLE LOCAL begin v7 support. Merge from mainline
+(define_expand "umaxsi3"
+  [(parallel [
+    (set (match_operand:SI 0 "s_register_operand" "")
+	 (umax:SI (match_operand:SI 1 "s_register_operand" "")
+		  (match_operand:SI 2 "arm_rhs_operand" "")))
+    (clobber (reg:CC CC_REGNUM))])]
+  "TARGET_32BIT"
+  ""
+)
+
+(define_insn "*arm_umaxsi3"
   [(set (match_operand:SI 0 "s_register_operand" "=r,r,r")
 	(umax:SI (match_operand:SI 1 "s_register_operand" "0,r,?r")
 		 (match_operand:SI 2 "arm_rhs_operand" "rI,0,rI")))
@@ -2711,7 +3097,17 @@
    (set_attr "length" "8,8,12")]
 )
 
-(define_insn "uminsi3"
+(define_expand "uminsi3"
+  [(parallel [
+    (set (match_operand:SI 0 "s_register_operand" "")
+	 (umin:SI (match_operand:SI 1 "s_register_operand" "")
+		  (match_operand:SI 2 "arm_rhs_operand" "")))
+    (clobber (reg:CC CC_REGNUM))])]
+  "TARGET_32BIT"
+  ""
+)
+
+(define_insn "*arm_uminsi3"
   [(set (match_operand:SI 0 "s_register_operand" "=r,r,r")
 	(umin:SI (match_operand:SI 1 "s_register_operand" "0,r,?r")
 		 (match_operand:SI 2 "arm_rhs_operand" "rI,0,rI")))
@@ -2724,6 +3120,7 @@
   [(set_attr "conds" "clob")
    (set_attr "length" "8,8,12")]
 )
+;; APPLE LOCAL end v7 support. Merge from mainline
 
 (define_insn "*store_minmaxsi"
   [(set (match_operand:SI 0 "memory_operand" "=m")
@@ -2731,17 +3128,24 @@
 	 [(match_operand:SI 1 "s_register_operand" "r")
 	  (match_operand:SI 2 "s_register_operand" "r")]))
    (clobber (reg:CC CC_REGNUM))]
-  "TARGET_ARM"
+;; APPLE LOCAL begin v7 support. Merge from mainline
+  "TARGET_32BIT"
   "*
   operands[3] = gen_rtx_fmt_ee (minmax_code (operands[3]), SImode,
 				operands[1], operands[2]);
   output_asm_insn (\"cmp\\t%1, %2\", operands);
+  if (TARGET_THUMB2)
+    output_asm_insn (\"ite\t%d3\", operands);
   output_asm_insn (\"str%d3\\t%1, %0\", operands);
   output_asm_insn (\"str%D3\\t%2, %0\", operands);
   return \"\";
   "
   [(set_attr "conds" "clob")
-   (set_attr "length" "12")
+   (set (attr "length")
+	(if_then_else (eq_attr "is_thumb" "yes")
+		      (const_int 14)
+		      (const_int 12)))
+;; APPLE LOCAL end v7 support. Merge from mainline
    (set_attr "type" "store1")]
 )
 
@@ -2755,22 +3159,40 @@
 	    (match_operand:SI 3 "arm_rhs_operand" "rI,rI")])
 	  (match_operand:SI 1 "s_register_operand" "0,?r")]))
    (clobber (reg:CC CC_REGNUM))]
-  "TARGET_ARM && !arm_eliminable_register (operands[1])"
+;; APPLE LOCAL begin v7 support. Merge from mainline
+  "TARGET_32BIT && !arm_eliminable_register (operands[1])"
   "*
   {
     enum rtx_code code = GET_CODE (operands[4]);
+    bool need_else;
+
+    if (which_alternative != 0 || operands[3] != const0_rtx
+        || (code != PLUS && code != MINUS && code != IOR && code != XOR))
+      need_else = true;
+    else
+      need_else = false;
 
     operands[5] = gen_rtx_fmt_ee (minmax_code (operands[5]), SImode,
 				  operands[2], operands[3]);
     output_asm_insn (\"cmp\\t%2, %3\", operands);
+    if (TARGET_THUMB2)
+      {
+	if (need_else)
+	  output_asm_insn (\"ite\\t%d5\", operands);
+	else
+	  output_asm_insn (\"it\\t%d5\", operands);
+      }
     output_asm_insn (\"%i4%d5\\t%0, %1, %2\", operands);
-    if (which_alternative != 0 || operands[3] != const0_rtx
-        || (code != PLUS && code != MINUS && code != IOR && code != XOR))
+    if (need_else)
       output_asm_insn (\"%i4%D5\\t%0, %1, %3\", operands);
     return \"\";
   }"
   [(set_attr "conds" "clob")
-   (set_attr "length" "12")]
+   (set (attr "length")
+	(if_then_else (eq_attr "is_thumb" "yes")
+		      (const_int 14)
+		      (const_int 12)))]
+;; APPLE LOCAL end v7 support. Merge from mainline
 )
 
 
@@ -2780,7 +3202,8 @@
   [(set (match_operand:DI            0 "s_register_operand" "")
         (ashift:DI (match_operand:DI 1 "s_register_operand" "")
                    (match_operand:SI 2 "reg_or_int_operand" "")))]
-  "TARGET_ARM"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT"
   "
   if (GET_CODE (operands[2]) == CONST_INT)
     {
@@ -2805,7 +3228,8 @@
         (ashift:DI (match_operand:DI 1 "s_register_operand" "?r,0")
                    (const_int 1)))
    (clobber (reg:CC CC_REGNUM))]
-  "TARGET_ARM"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT"
   "movs\\t%Q0, %Q1, asl #1\;adc\\t%R0, %R1, %R1"
   [(set_attr "conds" "clob")
    (set_attr "length" "8")]
@@ -2826,11 +3250,13 @@
   "
 )
 
-(define_insn "*thumb_ashlsi3"
+;; APPLE LOCAL v7 support. Merge from mainline
+(define_insn "*thumb1_ashlsi3"
   [(set (match_operand:SI            0 "register_operand" "=l,l")
 	(ashift:SI (match_operand:SI 1 "register_operand" "l,0")
 		   (match_operand:SI 2 "nonmemory_operand" "N,l")))]
-  "TARGET_THUMB"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_THUMB1"
   "lsl\\t%0, %1, %2"
   [(set_attr "length" "2")]
 )
@@ -2839,7 +3265,8 @@
   [(set (match_operand:DI              0 "s_register_operand" "")
         (ashiftrt:DI (match_operand:DI 1 "s_register_operand" "")
                      (match_operand:SI 2 "reg_or_int_operand" "")))]
-  "TARGET_ARM"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT"
   "
   if (GET_CODE (operands[2]) == CONST_INT)
     {
@@ -2864,9 +3291,12 @@
         (ashiftrt:DI (match_operand:DI 1 "s_register_operand" "?r,0")
                      (const_int 1)))
    (clobber (reg:CC CC_REGNUM))]
-  "TARGET_ARM"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT"
   "movs\\t%R0, %R1, asr #1\;mov\\t%Q0, %Q1, rrx"
   [(set_attr "conds" "clob")
+;; APPLE LOCAL v7 support. Merge from Codesourcery
+   (set_attr "insn" "mov")
    (set_attr "length" "8")]
 )
 
@@ -2882,11 +3312,13 @@
   "
 )
 
-(define_insn "*thumb_ashrsi3"
+;; APPLE LOCAL v7 support. Merge from mainline
+(define_insn "*thumb1_ashrsi3"
   [(set (match_operand:SI              0 "register_operand" "=l,l")
 	(ashiftrt:SI (match_operand:SI 1 "register_operand" "l,0")
 		     (match_operand:SI 2 "nonmemory_operand" "N,l")))]
-  "TARGET_THUMB"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_THUMB1"
   "asr\\t%0, %1, %2"
   [(set_attr "length" "2")]
 )
@@ -2895,7 +3327,8 @@
   [(set (match_operand:DI              0 "s_register_operand" "")
         (lshiftrt:DI (match_operand:DI 1 "s_register_operand" "")
                      (match_operand:SI 2 "reg_or_int_operand" "")))]
-  "TARGET_ARM"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT"
   "
   if (GET_CODE (operands[2]) == CONST_INT)
     {
@@ -2920,9 +3353,12 @@
         (lshiftrt:DI (match_operand:DI 1 "s_register_operand" "?r,0")
                      (const_int 1)))
    (clobber (reg:CC CC_REGNUM))]
-  "TARGET_ARM"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT"
   "movs\\t%R0, %R1, lsr #1\;mov\\t%Q0, %Q1, rrx"
   [(set_attr "conds" "clob")
+;; APPLE LOCAL v7 support. Merge from Codesourcery
+   (set_attr "insn" "mov")
    (set_attr "length" "8")]
 )
 
@@ -2941,11 +3377,13 @@
   "
 )
 
-(define_insn "*thumb_lshrsi3"
+;; APPLE LOCAL v7 support. Merge from mainline
+(define_insn "*thumb1_lshrsi3"
   [(set (match_operand:SI              0 "register_operand" "=l,l")
 	(lshiftrt:SI (match_operand:SI 1 "register_operand" "l,0")
 		     (match_operand:SI 2 "nonmemory_operand" "N,l")))]
-  "TARGET_THUMB"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_THUMB1"
   "lsr\\t%0, %1, %2"
   [(set_attr "length" "2")]
 )
@@ -2954,7 +3392,8 @@
   [(set (match_operand:SI              0 "s_register_operand" "")
 	(rotatert:SI (match_operand:SI 1 "s_register_operand" "")
 		     (match_operand:SI 2 "reg_or_int_operand" "")))]
-  "TARGET_ARM"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT"
   "
   if (GET_CODE (operands[2]) == CONST_INT)
     operands[2] = GEN_INT ((32 - INTVAL (operands[2])) % 32);
@@ -2973,13 +3412,15 @@
 		     (match_operand:SI 2 "arm_rhs_operand" "")))]
   "TARGET_EITHER"
   "
-  if (TARGET_ARM)
+  /* APPLE LOCAL v7 support. Merge from mainline */
+  if (TARGET_32BIT)
     {
       if (GET_CODE (operands[2]) == CONST_INT
           && ((unsigned HOST_WIDE_INT) INTVAL (operands[2])) > 31)
         operands[2] = GEN_INT (INTVAL (operands[2]) % 32);
     }
-  else /* TARGET_THUMB */
+  /* APPLE LOCAL v7 support. Merge from mainline */
+  else /* TARGET_THUMB1 */
     {
       if (GET_CODE (operands [2]) == CONST_INT)
         operands [2] = force_reg (SImode, operands[2]);
@@ -2987,11 +3428,13 @@
   "
 )
 
-(define_insn "*thumb_rotrsi3"
+;; APPLE LOCAL v7 support. Merge from mainline
+(define_insn "*thumb1_rotrsi3"
   [(set (match_operand:SI              0 "register_operand" "=l")
 	(rotatert:SI (match_operand:SI 1 "register_operand" "0")
 		     (match_operand:SI 2 "register_operand" "l")))]
-  "TARGET_THUMB"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_THUMB1"
   "ror\\t%0, %0, %2"
   [(set_attr "length" "2")]
 )
@@ -3001,8 +3444,10 @@
 	(match_operator:SI  3 "shift_operator"
 	 [(match_operand:SI 1 "s_register_operand"  "r")
 	  (match_operand:SI 2 "reg_or_int_operand" "rM")]))]
-  "TARGET_ARM"
-  "mov%?\\t%0, %1%S3"
+;; APPLE LOCAL begin v7 support. Merge from mainline
+  "TARGET_32BIT"
+  "* return arm_output_shift(operands, 0);"
+;; APPLE LOCAL end v7 support. Merge from mainline
   [(set_attr "predicable" "yes")
    (set_attr "shift" "1")
    (set (attr "type") (if_then_else (match_operand 2 "const_int_operand" "")
@@ -3018,8 +3463,10 @@
 			 (const_int 0)))
    (set (match_operand:SI 0 "s_register_operand" "=r")
 	(match_op_dup 3 [(match_dup 1) (match_dup 2)]))]
-  "TARGET_ARM"
-  "mov%?s\\t%0, %1%S3"
+;; APPLE LOCAL begin v7 support. Merge from mainline
+  "TARGET_32BIT"
+  "* return arm_output_shift(operands, 1);"
+;; APPLE LOCAL end v7 support. Merge from mainline
   [(set_attr "conds" "set")
    (set_attr "shift" "1")
    (set (attr "type") (if_then_else (match_operand 2 "const_int_operand" "")
@@ -3034,13 +3481,16 @@
 			   (match_operand:SI 2 "arm_rhs_operand" "rM")])
 			 (const_int 0)))
    (clobber (match_scratch:SI 0 "=r"))]
-  "TARGET_ARM"
-  "mov%?s\\t%0, %1%S3"
+;; APPLE LOCAL begin v7 support. Merge from mainline
+  "TARGET_32BIT"
+  "* return arm_output_shift(operands, 1);"
+;; APPLE LOCAL end v7 support. Merge from mainline
   [(set_attr "conds" "set")
    (set_attr "shift" "1")]
 )
 
-(define_insn "*notsi_shiftsi"
+;; APPLE LOCAL v7 support. Merge from mainline
+(define_insn "*arm_notsi_shiftsi"
   [(set (match_operand:SI 0 "s_register_operand" "=r")
 	(not:SI (match_operator:SI 3 "shift_operator"
 		 [(match_operand:SI 1 "s_register_operand" "r")
@@ -3049,12 +3499,15 @@
   "mvn%?\\t%0, %1%S3"
   [(set_attr "predicable" "yes")
    (set_attr "shift" "1")
+;; APPLE LOCAL v7 support. Merge from Codesourcery
+   (set_attr "insn" "mvn")
    (set (attr "type") (if_then_else (match_operand 2 "const_int_operand" "")
 		      (const_string "alu_shift")
 		      (const_string "alu_shift_reg")))]
 )
 
-(define_insn "*notsi_shiftsi_compare0"
+;; APPLE LOCAL v7 support. Merge from mainline
+(define_insn "*arm_notsi_shiftsi_compare0"
   [(set (reg:CC_NOOV CC_REGNUM)
 	(compare:CC_NOOV (not:SI (match_operator:SI 3 "shift_operator"
 			  [(match_operand:SI 1 "s_register_operand" "r")
@@ -3063,15 +3516,19 @@
    (set (match_operand:SI 0 "s_register_operand" "=r")
 	(not:SI (match_op_dup 3 [(match_dup 1) (match_dup 2)])))]
   "TARGET_ARM"
-  "mvn%?s\\t%0, %1%S3"
+;; APPLE LOCAL begin v7 support. Merge from Codesourcery
+  "mvn%.\\t%0, %1%S3"
   [(set_attr "conds" "set")
    (set_attr "shift" "1")
+   (set_attr "insn" "mvn")
+;; APPLE LOCAL end v7 support. Merge from Codesourcery
    (set (attr "type") (if_then_else (match_operand 2 "const_int_operand" "")
 		      (const_string "alu_shift")
 		      (const_string "alu_shift_reg")))]
 )
 
-(define_insn "*not_shiftsi_compare0_scratch"
+;; APPLE LOCAL v7 support. Merge from mainline
+(define_insn "*arm_not_shiftsi_compare0_scratch"
   [(set (reg:CC_NOOV CC_REGNUM)
 	(compare:CC_NOOV (not:SI (match_operator:SI 3 "shift_operator"
 			  [(match_operand:SI 1 "s_register_operand" "r")
@@ -3079,9 +3536,12 @@
 			 (const_int 0)))
    (clobber (match_scratch:SI 0 "=r"))]
   "TARGET_ARM"
-  "mvn%?s\\t%0, %1%S3"
+;; APPLE LOCAL begin v7 support. Merge from Codesourcery
+  "mvn%.\\t%0, %1%S3"
   [(set_attr "conds" "set")
    (set_attr "shift" "1")
+   (set_attr "insn" "mvn")
+;; APPLE LOCAL end v7 support. Merge from Codesourcery
    (set (attr "type") (if_then_else (match_operand 2 "const_int_operand" "")
 		      (const_string "alu_shift")
 		      (const_string "alu_shift_reg")))]
@@ -3097,7 +3557,8 @@
    (set (match_operand:SI              0 "register_operand" "")
 	(lshiftrt:SI (match_dup 4)
 		     (match_operand:SI 3 "const_int_operand" "")))]
-  "TARGET_THUMB"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_THUMB1"
   "
   {
     HOST_WIDE_INT lshift = 32 - INTVAL (operands[2]) - INTVAL (operands[3]);
@@ -3126,7 +3587,8 @@
     (clobber (reg:CC CC_REGNUM))])]
   "TARGET_EITHER"
   "
-  if (TARGET_THUMB)
+  /* APPLE LOCAL v7 support. Merge from mainline */
+  if (TARGET_THUMB1)
     {
       if (GET_CODE (operands[1]) != REG)
         operands[1] = force_reg (SImode, operands[1]);
@@ -3146,11 +3608,13 @@
    (set_attr "length" "8")]
 )
 
-(define_insn "*thumb_negdi2"
+;; APPLE LOCAL v7 support. Merge from mainline
+(define_insn "*thumb1_negdi2"
   [(set (match_operand:DI         0 "register_operand" "=&l")
 	(neg:DI (match_operand:DI 1 "register_operand"   "l")))
    (clobber (reg:CC CC_REGNUM))]
-  "TARGET_THUMB"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_THUMB1"
   "mov\\t%R0, #0\;neg\\t%Q0, %Q1\;sbc\\t%R0, %R1"
   [(set_attr "length" "6")]
 )
@@ -3165,30 +3629,35 @@
 (define_insn "*arm_negsi2"
   [(set (match_operand:SI         0 "s_register_operand" "=r")
 	(neg:SI (match_operand:SI 1 "s_register_operand" "r")))]
-  "TARGET_ARM"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT"
   "rsb%?\\t%0, %1, #0"
   [(set_attr "predicable" "yes")]
 )
 
-(define_insn "*thumb_negsi2"
+;; APPLE LOCAL begin v7 support. Merge from mainline
+(define_insn "*thumb1_negsi2"
   [(set (match_operand:SI         0 "register_operand" "=l")
 	(neg:SI (match_operand:SI 1 "register_operand" "l")))]
-  "TARGET_THUMB"
+  "TARGET_THUMB1"
   "neg\\t%0, %1"
   [(set_attr "length" "2")]
 )
+;; APPLE LOCAL end v7 support. Merge from mainline
 
 (define_expand "negsf2"
   [(set (match_operand:SF         0 "s_register_operand" "")
 	(neg:SF (match_operand:SF 1 "s_register_operand" "")))]
-  "TARGET_ARM && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
   ""
 )
 
 (define_expand "negdf2"
   [(set (match_operand:DF         0 "s_register_operand" "")
 	(neg:DF (match_operand:DF 1 "s_register_operand" "")))]
-  "TARGET_ARM && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
   "")
 
 ;; abssi2 doesn't really clobber the condition codes if a different register
@@ -3201,11 +3670,13 @@
     [(set (match_operand:SI         0 "s_register_operand" "")
 	  (abs:SI (match_operand:SI 1 "s_register_operand" "")))
      (clobber (reg:CC CC_REGNUM))])]
-  "TARGET_ARM"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT"
   "")
 
 (define_insn "*arm_abssi2"
-  [(set (match_operand:SI         0 "s_register_operand" "=r,&r")
+;; APPLE LOCAL v7 support. Merge from mainline
+  [(set (match_operand:SI 0 "s_register_operand" "=r,&r")
 	(abs:SI (match_operand:SI 1 "s_register_operand" "0,r")))
    (clobber (reg:CC CC_REGNUM))]
   "TARGET_ARM"
@@ -3218,7 +3689,8 @@
    (set_attr "length" "8")]
 )
 
-(define_insn "*neg_abssi2"
+;; APPLE LOCAL v7 support. Merge from mainline
+(define_insn "*arm_neg_abssi2"
   [(set (match_operand:SI 0 "s_register_operand" "=r,&r")
 	(neg:SI (abs:SI (match_operand:SI 1 "s_register_operand" "0,r"))))
    (clobber (reg:CC CC_REGNUM))]
@@ -3235,33 +3707,39 @@
 (define_expand "abssf2"
   [(set (match_operand:SF         0 "s_register_operand" "")
 	(abs:SF (match_operand:SF 1 "s_register_operand" "")))]
-  "TARGET_ARM && TARGET_HARD_FLOAT"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT && TARGET_HARD_FLOAT"
   "")
 
 (define_expand "absdf2"
   [(set (match_operand:DF         0 "s_register_operand" "")
 	(abs:DF (match_operand:DF 1 "s_register_operand" "")))]
-  "TARGET_ARM && TARGET_HARD_FLOAT"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT && TARGET_HARD_FLOAT"
   "")
 
 (define_expand "sqrtsf2"
   [(set (match_operand:SF 0 "s_register_operand" "")
 	(sqrt:SF (match_operand:SF 1 "s_register_operand" "")))]
-  "TARGET_ARM && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
   "")
 
 (define_expand "sqrtdf2"
   [(set (match_operand:DF 0 "s_register_operand" "")
 	(sqrt:DF (match_operand:DF 1 "s_register_operand" "")))]
-  "TARGET_ARM && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
   "")
 
 (define_insn_and_split "one_cmpldi2"
   [(set (match_operand:DI 0 "s_register_operand" "=&r,&r")
 	(not:DI (match_operand:DI 1 "s_register_operand" "?r,0")))]
-  "TARGET_ARM"
+;; APPLE LOCAL begin v7 support. Merge from mainline
+  "TARGET_32BIT"
   "#"
-  "TARGET_ARM && reload_completed"
+  "TARGET_32BIT && reload_completed"
+;; APPLE LOCAL end v7 support. Merge from mainline
   [(set (match_dup 0) (not:SI (match_dup 1)))
    (set (match_dup 2) (not:SI (match_dup 3)))]
   "
@@ -3285,17 +3763,26 @@
 (define_insn "*arm_one_cmplsi2"
   [(set (match_operand:SI         0 "s_register_operand" "=r")
 	(not:SI (match_operand:SI 1 "s_register_operand"  "r")))]
-  "TARGET_ARM"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT"
   "mvn%?\\t%0, %1"
-  [(set_attr "predicable" "yes")]
+;; APPLE LOCAL begin v7 support. Merge from Codesourcery
+  [(set_attr "predicable" "yes")
+   (set_attr "insn" "mvn")]
+;; APPLE LOCAL end v7 support. Merge from Codesourcery
 )
 
-(define_insn "*thumb_one_cmplsi2"
+;; APPLE LOCAL v7 support. Merge from mainline
+(define_insn "*thumb1_one_cmplsi2"
   [(set (match_operand:SI         0 "register_operand" "=l")
 	(not:SI (match_operand:SI 1 "register_operand"  "l")))]
-  "TARGET_THUMB"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_THUMB1"
   "mvn\\t%0, %1"
-  [(set_attr "length" "2")]
+;; APPLE LOCAL begin v7 support. Merge from Codesourcery
+  [(set_attr "length" "2")
+   (set_attr "insn" "mvn")]
+;; APPLE LOCAL end v7 support. Merge from Codesourcery
 )
 
 (define_insn "*notsi_compare0"
@@ -3304,9 +3791,14 @@
 			 (const_int 0)))
    (set (match_operand:SI 0 "s_register_operand" "=r")
 	(not:SI (match_dup 1)))]
-  "TARGET_ARM"
-  "mvn%?s\\t%0, %1"
-  [(set_attr "conds" "set")]
+;; APPLE LOCAL begin v7 support. Merge from mainline
+  "TARGET_32BIT"
+  "mvn%.\\t%0, %1"
+;; APPLE LOCAL end v7 support. Merge from mainline
+;; APPLE LOCAL begin v7 support. Merge from Codesourcery
+  [(set_attr "conds" "set")
+   (set_attr "insn" "mvn")]
+;; APPLE LOCAL end v7 support. Merge from Codesourcery
 )
 
 (define_insn "*notsi_compare0_scratch"
@@ -3314,9 +3806,14 @@
 	(compare:CC_NOOV (not:SI (match_operand:SI 1 "s_register_operand" "r"))
 			 (const_int 0)))
    (clobber (match_scratch:SI 0 "=r"))]
-  "TARGET_ARM"
-  "mvn%?s\\t%0, %1"
-  [(set_attr "conds" "set")]
+;; APPLE LOCAL begin v7 support. Merge from mainline
+  "TARGET_32BIT"
+  "mvn%.\\t%0, %1"
+;; APPLE LOCAL end v7 support. Merge from mainline
+;; APPLE LOCAL begin v7 support. Merge from Codesourcery
+  [(set_attr "conds" "set")
+   (set_attr "insn" "mvn")]
+;; APPLE LOCAL end v7 support. Merge from Codesourcery
 )
 
 ;; Fixed <--> Floating conversion insns
@@ -3324,7 +3821,8 @@
 (define_expand "floatsisf2"
   [(set (match_operand:SF           0 "s_register_operand" "")
 	(float:SF (match_operand:SI 1 "s_register_operand" "")))]
-  "TARGET_ARM && TARGET_HARD_FLOAT"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT && TARGET_HARD_FLOAT"
   "
   if (TARGET_MAVERICK)
     {
@@ -3336,7 +3834,8 @@
 (define_expand "floatsidf2"
   [(set (match_operand:DF           0 "s_register_operand" "")
 	(float:DF (match_operand:SI 1 "s_register_operand" "")))]
-  "TARGET_ARM && TARGET_HARD_FLOAT"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT && TARGET_HARD_FLOAT"
   "
   if (TARGET_MAVERICK)
     {
@@ -3348,7 +3847,8 @@
 (define_expand "fix_truncsfsi2"
   [(set (match_operand:SI         0 "s_register_operand" "")
 	(fix:SI (fix:SF (match_operand:SF 1 "s_register_operand"  ""))))]
-  "TARGET_ARM && TARGET_HARD_FLOAT"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT && TARGET_HARD_FLOAT"
   "
   if (TARGET_MAVERICK)
     {
@@ -3364,7 +3864,8 @@
 (define_expand "fix_truncdfsi2"
   [(set (match_operand:SI         0 "s_register_operand" "")
 	(fix:SI (fix:DF (match_operand:DF 1 "s_register_operand"  ""))))]
-  "TARGET_ARM && TARGET_HARD_FLOAT"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT && TARGET_HARD_FLOAT"
   "
   if (TARGET_MAVERICK)
     {
@@ -3381,13 +3882,23 @@
   [(set (match_operand:SF  0 "s_register_operand" "")
 	(float_truncate:SF
  	 (match_operand:DF 1 "s_register_operand" "")))]
-  "TARGET_ARM && TARGET_HARD_FLOAT"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT && TARGET_HARD_FLOAT"
   ""
 )
 
 ;; Zero and sign extension instructions.
 
-(define_insn "zero_extendsidi2"
+;; APPLE LOCAL begin v7 support. Merge from mainline
+(define_expand "zero_extendsidi2"
+  [(set (match_operand:DI 0 "s_register_operand" "")
+        (zero_extend:DI (match_operand:SI 1 "s_register_operand" "")))]
+  "TARGET_32BIT"
+  ""
+)
+
+(define_insn "*arm_zero_extendsidi2"
+;; APPLE LOCAL end v7 support. Merge from mainline
   [(set (match_operand:DI 0 "s_register_operand" "=r")
         (zero_extend:DI (match_operand:SI 1 "s_register_operand" "r")))]
   "TARGET_ARM"
@@ -3398,16 +3909,27 @@
     return \"mov%?\\t%R0, #0\";
   "
   [(set_attr "length" "8")
+;; APPLE LOCAL v7 support. Merge from Codesourcery
+   (set_attr "insn" "mov")
    (set_attr "predicable" "yes")]
 )
 
-(define_insn "zero_extendqidi2"
+;; APPLE LOCAL begin v7 support. Merge from mainline
+(define_expand "zero_extendqidi2"
+  [(set (match_operand:DI                 0 "s_register_operand"  "")
+	(zero_extend:DI (match_operand:QI 1 "nonimmediate_operand" "")))]
+  "TARGET_32BIT"
+  ""
+)
+
+(define_insn "*arm_zero_extendqidi2"
   [(set (match_operand:DI                 0 "s_register_operand"  "=r,r")
 	(zero_extend:DI (match_operand:QI 1 "nonimmediate_operand" "r,m")))]
   "TARGET_ARM"
   "@
    and%?\\t%Q0, %1, #255\;mov%?\\t%R0, #0
-   ldr%?b\\t%Q0, %1\;mov%?\\t%R0, #0"
+   ldr%(b%)\\t%Q0, %1\;mov%?\\t%R0, #0"
+;; APPLE LOCAL end v7 support. Merge from mainline
   [(set_attr "length" "8")
    (set_attr "predicable" "yes")
    (set_attr "type" "*,load_byte")
@@ -3415,7 +3937,16 @@
    (set_attr "neg_pool_range" "*,4084")]
 )
 
-(define_insn "extendsidi2"
+;; APPLE LOCAL begin v7 support. Merge from mainline
+(define_expand "extendsidi2"
+  [(set (match_operand:DI 0 "s_register_operand" "")
+        (sign_extend:DI (match_operand:SI 1 "s_register_operand" "")))]
+  "TARGET_32BIT"
+  ""
+)
+
+(define_insn "*arm_extendsidi2"
+;; APPLE LOCAL end v7 support. Merge from mainline
   [(set (match_operand:DI 0 "s_register_operand" "=r")
         (sign_extend:DI (match_operand:SI 1 "s_register_operand" "r")))]
   "TARGET_ARM"
@@ -3427,6 +3958,8 @@
   "
   [(set_attr "length" "8")
    (set_attr "shift" "1")
+;; APPLE LOCAL v7 support. Merge from Codesourcery
+   (set_attr "insn" "mov")
    (set_attr "predicable" "yes")]
 )
 
@@ -3439,7 +3972,8 @@
   "TARGET_EITHER"
   "
   {
-    if ((TARGET_THUMB || arm_arch4) && GET_CODE (operands[1]) == MEM)
+    /* APPLE LOCAL v7 support. Merge from mainline */
+    if ((TARGET_THUMB1 || arm_arch4) && GET_CODE (operands[1]) == MEM)
       {
 	emit_insn (gen_rtx_SET (VOIDmode, operands[0],
 				gen_rtx_ZERO_EXTEND (SImode, operands[1])));
@@ -3468,10 +4002,12 @@
 )
 
 ;; APPLE LOCAL ARM compact switch tables
-(define_insn "adjustable_thumb_zero_extendhisi2"
+;; APPLE LOCAL begin v7 support. Merge from mainline
+(define_insn "adjustable_thumb1_zero_extendhisi2"
   [(set (match_operand:SI 0 "register_operand" "=l")
 	(zero_extend:SI (match_operand:HI 1 "memory_operand" "m")))]
-  "TARGET_THUMB && !arm_arch6"
+  "TARGET_THUMB1 && !arm_arch6"
+;; APPLE LOCAL end v7 support. Merge from mainline
   "*
   rtx mem = XEXP (operands[1], 0);
 
@@ -3511,10 +4047,12 @@
 )
 
 ;; APPLE LOCAL ARM compact switch tables
-(define_insn "adjustable_thumb_zero_extendhisi2_v6"
+;; APPLE LOCAL v7 support. Merge from mainline
+(define_insn "adjustable_thumb1_zero_extendhisi2_v6"
   [(set (match_operand:SI 0 "register_operand" "=l,l")
 	(zero_extend:SI (match_operand:HI 1 "nonimmediate_operand" "l,m")))]
-  "TARGET_THUMB && arm_arch6"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_THUMB1 && arm_arch6"
   "*
   rtx mem;
 
@@ -3562,7 +4100,8 @@
   [(set (match_operand:SI 0 "s_register_operand" "=r")
 	(zero_extend:SI (match_operand:HI 1 "memory_operand" "m")))]
   "TARGET_ARM && arm_arch4 && !arm_arch6"
-  "ldr%?h\\t%0, %1"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "ldr%(h%)\\t%0, %1"
   [(set_attr "type" "load_byte")
    (set_attr "predicable" "yes")
    (set_attr "pool_range" "256")
@@ -3573,9 +4112,11 @@
   [(set (match_operand:SI 0 "s_register_operand" "=r,r")
 	(zero_extend:SI (match_operand:HI 1 "nonimmediate_operand" "r,m")))]
   "TARGET_ARM && arm_arch6"
+;; APPLE LOCAL begin v7 support. Merge from mainline
   "@
    uxth%?\\t%0, %1
-   ldr%?h\\t%0, %1"
+   ldr%(h%)\\t%0, %1"
+;; APPLE LOCAL end v7 support. Merge from mainline
   [(set_attr "type" "alu_shift,load_byte")
    (set_attr "predicable" "yes")
    (set_attr "pool_range" "*,256")
@@ -3586,7 +4127,8 @@
   [(set (match_operand:SI 0 "s_register_operand" "=r")
 	(plus:SI (zero_extend:SI (match_operand:HI 1 "s_register_operand" "r"))
 		 (match_operand:SI 2 "s_register_operand" "r")))]
-  "TARGET_ARM && arm_arch6"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_INT_SIMD"
   "uxtah%?\\t%0, %2, %1"
   [(set_attr "type" "alu_shift")
    (set_attr "predicable" "yes")]
@@ -3632,20 +4174,24 @@
   "
 )
 
-(define_insn "*thumb_zero_extendqisi2"
+;; APPLE LOCAL v7 support. Merge from mainline
+(define_insn "*thumb1_zero_extendqisi2"
   [(set (match_operand:SI 0 "register_operand" "=l")
 	(zero_extend:SI (match_operand:QI 1 "memory_operand" "m")))]
-  "TARGET_THUMB && !arm_arch6"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_THUMB1 && !arm_arch6"
   "ldrb\\t%0, %1"
   [(set_attr "length" "2")
    (set_attr "type" "load_byte")
    (set_attr "pool_range" "32")]
 )
 
-(define_insn "*thumb_zero_extendqisi2_v6"
+;; APPLE LOCAL v7 support. Merge from mainline
+(define_insn "*thumb1_zero_extendqisi2_v6"
   [(set (match_operand:SI 0 "register_operand" "=l,l")
 	(zero_extend:SI (match_operand:QI 1 "nonimmediate_operand" "l,m")))]
-  "TARGET_THUMB && arm_arch6"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_THUMB1 && arm_arch6"
   "@
    uxtb\\t%0, %1
    ldrb\\t%0, %1"
@@ -3658,7 +4204,8 @@
   [(set (match_operand:SI 0 "s_register_operand" "=r")
 	(zero_extend:SI (match_operand:QI 1 "memory_operand" "m")))]
   "TARGET_ARM && !arm_arch6"
-  "ldr%?b\\t%0, %1\\t%@ zero_extendqisi2"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "ldr%(b%)\\t%0, %1\\t%@ zero_extendqisi2"
   [(set_attr "type" "load_byte")
    (set_attr "predicable" "yes")
    (set_attr "pool_range" "4096")
@@ -3669,9 +4216,11 @@
   [(set (match_operand:SI 0 "s_register_operand" "=r,r")
 	(zero_extend:SI (match_operand:QI 1 "nonimmediate_operand" "r,m")))]
   "TARGET_ARM && arm_arch6"
+;; APPLE LOCAL begin v7 support. Merge from mainline
   "@
-   uxtb%?\\t%0, %1
-   ldr%?b\\t%0, %1\\t%@ zero_extendqisi2"
+   uxtb%(%)\\t%0, %1
+   ldr%(b%)\\t%0, %1\\t%@ zero_extendqisi2"
+;; APPLE LOCAL end v7 support. Merge from mainline
   [(set_attr "type" "alu_shift,load_byte")
    (set_attr "predicable" "yes")
    (set_attr "pool_range" "*,4096")
@@ -3682,7 +4231,8 @@
   [(set (match_operand:SI 0 "s_register_operand" "=r")
 	(plus:SI (zero_extend:SI (match_operand:QI 1 "s_register_operand" "r"))
 		 (match_operand:SI 2 "s_register_operand" "r")))]
-  "TARGET_ARM && arm_arch6"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_INT_SIMD"
   "uxtab%?\\t%0, %2, %1"
   [(set_attr "predicable" "yes")
    (set_attr "type" "alu_shift")]
@@ -3692,7 +4242,8 @@
   [(set (match_operand:SI 0 "s_register_operand" "")
 	(zero_extend:SI (subreg:QI (match_operand:SI 1 "" "") 0)))
    (clobber (match_operand:SI 2 "s_register_operand" ""))]
-  "TARGET_ARM && (GET_CODE (operands[1]) != MEM) && ! BYTES_BIG_ENDIAN"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT && (GET_CODE (operands[1]) != MEM) && ! BYTES_BIG_ENDIAN"
   [(set (match_dup 2) (match_dup 1))
    (set (match_dup 0) (and:SI (match_dup 2) (const_int 255)))]
   ""
@@ -3702,7 +4253,8 @@
   [(set (match_operand:SI 0 "s_register_operand" "")
 	(zero_extend:SI (subreg:QI (match_operand:SI 1 "" "") 3)))
    (clobber (match_operand:SI 2 "s_register_operand" ""))]
-  "TARGET_ARM && (GET_CODE (operands[1]) != MEM) && BYTES_BIG_ENDIAN"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT && (GET_CODE (operands[1]) != MEM) && BYTES_BIG_ENDIAN"
   [(set (match_dup 2) (match_dup 1))
    (set (match_dup 0) (and:SI (match_dup 2) (const_int 255)))]
   ""
@@ -3712,7 +4264,8 @@
   [(set (reg:CC_Z CC_REGNUM)
 	(compare:CC_Z (match_operand:QI 0 "s_register_operand" "r")
 			 (const_int 0)))]
-  "TARGET_ARM"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT"
   "tst\\t%0, #255"
   [(set_attr "conds" "set")]
 )
@@ -3729,9 +4282,11 @@
   {
     if (GET_CODE (operands[1]) == MEM)
       {
-	if (TARGET_THUMB)
+        /* APPLE LOCAL v7 support. Merge from mainline */
+	if (TARGET_THUMB1)
 	  {
-	    emit_insn (gen_thumb_extendhisi2 (operands[0], operands[1]));
+            /* APPLE LOCAL v7 support. Merge from mainline */
+	    emit_insn (gen_thumb1_extendhisi2 (operands[0], operands[1]));
 	    DONE;
           }
 	else if (arm_arch4)
@@ -3753,8 +4308,10 @@
 
     if (arm_arch6)
       {
-	if (TARGET_THUMB)
-	  emit_insn (gen_thumb_extendhisi2 (operands[0], operands[1]));
+        /* APPLE LOCAL begin v7 support. Merge from mainline */
+	if (TARGET_THUMB1)
+	  emit_insn (gen_thumb1_extendhisi2 (operands[0], operands[1]));
+        /* APPLE LOCAL end v7 support. Merge from mainline */
 	else
 	  emit_insn (gen_rtx_SET (VOIDmode, operands[0],
 		     gen_rtx_SIGN_EXTEND (SImode, operands[1])));
@@ -3767,11 +4324,13 @@
   }"
 )
 
-(define_insn "thumb_extendhisi2"
+;; APPLE LOCAL v7 support. Merge from mainline
+(define_insn "thumb1_extendhisi2"
   [(set (match_operand:SI 0 "register_operand" "=l")
 	(sign_extend:SI (match_operand:HI 1 "memory_operand" "m")))
    (clobber (match_scratch:SI 2 "=&l"))]
-  "TARGET_THUMB && !arm_arch6"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_THUMB1 && !arm_arch6"
   "*
   {
     rtx ops[4];
@@ -3829,11 +4388,13 @@
 ;; the early-clobber: we can always use operand 0 if operand 2
 ;; overlaps the address.
 ;; APPLE LOCAL ARM compact switch tables
-(define_insn "adjustable_thumb_extendhisi2_insn_v6"
+;; APPLE LOCAL v7 support. Merge from mainline
+(define_insn "adjustable_thumb1_extendhisi2_insn_v6"
   [(set (match_operand:SI 0 "register_operand" "=l,l")
 	(sign_extend:SI (match_operand:HI 1 "nonimmediate_operand" "l,m")))
    (clobber (match_scratch:SI 2 "=X,l"))]
-  "TARGET_THUMB && arm_arch6"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_THUMB1 && arm_arch6"
   "*
   {
     rtx ops[4];
@@ -3891,6 +4452,8 @@
    (set_attr "pool_range" "*,1020")]
 )
 
+;; APPLE LOCAL v7 support. Merge from mainline
+;; This pattern will only be used when ldsh is not available
 (define_expand "extendhisi2_mem"
   [(set (match_dup 2) (zero_extend:SI (match_operand:HI 1 "" "")))
    (set (match_dup 3)
@@ -3930,20 +4493,25 @@
   [(set (match_operand:SI 0 "s_register_operand" "=r")
 	(sign_extend:SI (match_operand:HI 1 "memory_operand" "m")))]
   "TARGET_ARM && arm_arch4 && !arm_arch6"
-  "ldr%?sh\\t%0, %1"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "ldr%(sh%)\\t%0, %1"
   [(set_attr "type" "load_byte")
    (set_attr "predicable" "yes")
    (set_attr "pool_range" "256")
    (set_attr "neg_pool_range" "244")]
 )
 
+;; APPLE LOCAL v7 support. Merge from mainline
+;; ??? Check Thumb-2 pool range
 (define_insn "*arm_extendhisi2_v6"
   [(set (match_operand:SI 0 "s_register_operand" "=r,r")
 	(sign_extend:SI (match_operand:HI 1 "nonimmediate_operand" "r,m")))]
-  "TARGET_ARM && arm_arch6"
+;; APPLE LOCAL begin v7 support. Merge from mainline
+  "TARGET_32BIT && arm_arch6"
   "@
    sxth%?\\t%0, %1
-   ldr%?sh\\t%0, %1"
+   ldr%(sh%)\\t%0, %1"
+;; APPLE LOCAL end v7 support. Merge from mainline
   [(set_attr "type" "alu_shift,load_byte")
    (set_attr "predicable" "yes")
    (set_attr "pool_range" "*,256")
@@ -3954,7 +4522,8 @@
   [(set (match_operand:SI 0 "s_register_operand" "=r")
 	(plus:SI (sign_extend:SI (match_operand:HI 1 "s_register_operand" "r"))
 		 (match_operand:SI 2 "s_register_operand" "r")))]
-  "TARGET_ARM && arm_arch6"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_INT_SIMD"
   "sxtah%?\\t%0, %2, %1"
 )
 
@@ -3983,11 +4552,13 @@
   }"
 )
 
-(define_insn "*extendqihi_insn"
+;; APPLE LOCAL v7 support. Merge from mainline
+(define_insn "*arm_extendqihi_insn"
   [(set (match_operand:HI 0 "s_register_operand" "=r")
 	(sign_extend:HI (match_operand:QI 1 "memory_operand" "Uq")))]
   "TARGET_ARM && arm_arch4"
-  "ldr%?sb\\t%0, %1"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "ldr%(sb%)\\t%0, %1"
   [(set_attr "type" "load_byte")
    (set_attr "predicable" "yes")
    (set_attr "pool_range" "256")
@@ -4030,7 +4601,8 @@
   [(set (match_operand:SI 0 "s_register_operand" "=r")
 	(sign_extend:SI (match_operand:QI 1 "memory_operand" "Uq")))]
   "TARGET_ARM && arm_arch4 && !arm_arch6"
-  "ldr%?sb\\t%0, %1"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "ldr%(sb%)\\t%0, %1"
   [(set_attr "type" "load_byte")
    (set_attr "predicable" "yes")
    (set_attr "pool_range" "256")
@@ -4041,9 +4613,11 @@
   [(set (match_operand:SI 0 "s_register_operand" "=r,r")
 	(sign_extend:SI (match_operand:QI 1 "nonimmediate_operand" "r,Uq")))]
   "TARGET_ARM && arm_arch6"
+;; APPLE LOCAL begin v7 support. Merge from mainline
   "@
    sxtb%?\\t%0, %1
-   ldr%?sb\\t%0, %1"
+   ldr%(sb%)\\t%0, %1"
+;; APPLE LOCAL end v7 support. Merge from mainline
   [(set_attr "type" "alu_shift,load_byte")
    (set_attr "predicable" "yes")
    (set_attr "pool_range" "*,256")
@@ -4054,17 +4628,20 @@
   [(set (match_operand:SI 0 "s_register_operand" "=r")
 	(plus:SI (sign_extend:SI (match_operand:QI 1 "s_register_operand" "r"))
 		 (match_operand:SI 2 "s_register_operand" "r")))]
-  "TARGET_ARM && arm_arch6"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_INT_SIMD"
   "sxtab%?\\t%0, %2, %1"
   [(set_attr "type" "alu_shift")
    (set_attr "predicable" "yes")]
 )
 
 ;; APPLE LOCAL ARM compact switch tables
-(define_insn "adjustable_thumb_extendqisi2"
+;; APPLE LOCAL v7 support. Merge from mainline
+(define_insn "adjustable_thumb1_extendqisi2"
   [(set (match_operand:SI 0 "register_operand" "=l,l")
 	(sign_extend:SI (match_operand:QI 1 "memory_operand" "V,m")))]
-  "TARGET_THUMB && !arm_arch6"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_THUMB1 && !arm_arch6"
   "*
   {
     rtx ops[3];
@@ -4140,10 +4717,12 @@
 )
 
 ;; APPLE LOCAL ARM compact switch tables
-(define_insn "adjustable_thumb_extendqisi2_v6"
+;; APPLE LOCAL v7 support. Merge from mainline
+(define_insn "adjustable_thumb1_extendqisi2_v6"
   [(set (match_operand:SI 0 "register_operand" "=l,l,l")
 	(sign_extend:SI (match_operand:QI 1 "nonimmediate_operand" "l,V,m")))]
-  "TARGET_THUMB && arm_arch6"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_THUMB1 && arm_arch6"
   "*
   {
     rtx ops[3];
@@ -4224,7 +4803,8 @@
 (define_expand "extendsfdf2"
   [(set (match_operand:DF                  0 "s_register_operand" "")
 	(float_extend:DF (match_operand:SF 1 "s_register_operand"  "")))]
-  "TARGET_ARM && TARGET_HARD_FLOAT"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT && TARGET_HARD_FLOAT"
   ""
 )
 
@@ -4330,7 +4910,8 @@
 (define_split
   [(set (match_operand:ANY64 0 "arm_general_register_operand" "")
 	(match_operand:ANY64 1 "const_double_operand" ""))]
-  "TARGET_ARM
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT
    && reload_completed
    && (arm_const_double_inline_cost (operands[1])
        <= ((optimize_size || arm_ld_sched) ? 3 : 4))"
@@ -4426,10 +5007,12 @@
 ;;; ??? This was originally identical to the movdf_insn pattern.
 ;;; ??? The 'i' constraint looks funny, but it should always be replaced by
 ;;; thumb_reorg with a memory reference.
-(define_insn "adjustable_thumb_movdi_insn"
+;; APPLE LOCAL v7 support. Merge from mainline
+(define_insn "adjustable_thumb1_movdi_insn"
   [(set (match_operand:DI 0 "nonimmediate_operand" "=l,l,l,l,>,l, m,*r")
 	(match_operand:DI 1 "general_operand"      "l, I,J,>,l,mi,l,*r"))]
-  "TARGET_THUMB
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_THUMB1
    && !(TARGET_HARD_FLOAT && TARGET_MAVERICK)
    && (   register_operand (operands[0], DImode)
        || register_operand (operands[1], DImode))"
@@ -4466,6 +5049,8 @@
   }"
   [(set_attr "length" "4,4,6,2,2,4,4,4")
    (set_attr "type" "*,*,*,load2,store2,load2,store2,*")
+;; APPLE LOCAL v7 support. Merge from Codesourcery
+   (set_attr "insn" "*,mov,*,*,*,*,*,mov")
    (set_attr "pool_range" "*,*,*,*,*,1018,*,*")]
 )
 ;; APPLE LOCAL end compact switch tables
@@ -4475,7 +5060,8 @@
         (match_operand:SI 1 "general_operand" ""))]
   "TARGET_EITHER"
   "
-  if (TARGET_ARM)
+  /* APPLE LOCAL v7 support. Merge from mainline */
+  if (TARGET_32BIT)
     {
       /* Everything except mem = const or mem = mem can be done easily.  */
       if (GET_CODE (operands[0]) == MEM)
@@ -4491,7 +5077,8 @@
           DONE;
         }
     }
-  else /* TARGET_THUMB....  */
+  /* APPLE LOCAL v7 support. Merge from mainline */
+  else /* TARGET_THUMB1...  */
     {
       if (!no_new_pseudos)
         {
@@ -4531,9 +5118,10 @@
   "
 )
 
+;; APPLE LOCAL begin v7 support. Merge from mainline
 (define_insn "*arm_movsi_insn"
-  [(set (match_operand:SI 0 "nonimmediate_operand" "=r,r,r, m")
-	(match_operand:SI 1 "general_operand"      "rI,K,mi,r"))]
+  [(set (match_operand:SI 0 "nonimmediate_operand" "=r,r,r,r, m")
+	(match_operand:SI 1 "general_operand"      "rI,K,N,mi,r"))]
   "TARGET_ARM && ! TARGET_IWMMXT
    && !(TARGET_HARD_FLOAT && TARGET_VFP)
    && (   register_operand (operands[0], SImode)
@@ -4541,18 +5129,23 @@
   "@
    mov%?\\t%0, %1
    mvn%?\\t%0, #%B1
+   movw%?\\t%0, %1
    ldr%?\\t%0, %1
    str%?\\t%1, %0"
-  [(set_attr "type" "*,*,load1,store1")
+  [(set_attr "type" "*,*,*,load1,store1")
+;; APPLE LOCAL v7 support. Merge from Codesourcery
+   (set_attr "insn" "mov,mvn,mov,*,*")
    (set_attr "predicable" "yes")
-   (set_attr "pool_range" "*,*,4096,*")
-   (set_attr "neg_pool_range" "*,*,4084,*")]
+   (set_attr "pool_range" "*,*,*,4096,*")
+   (set_attr "neg_pool_range" "*,*,*,4084,*")]
 )
+;; APPLE LOCAL end v7 support. Merge from mainline
 
 (define_split
   [(set (match_operand:SI 0 "arm_general_register_operand" "")
 	(match_operand:SI 1 "const_int_operand" ""))]
-  "TARGET_ARM
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT
   && (!(const_ok_for_arm (INTVAL (operands[1]))
         || const_ok_for_arm (~INTVAL (operands[1]))))"
   [(clobber (const_int 0))]
@@ -4563,10 +5156,12 @@
   "
 )
 
-(define_insn "*thumb_movsi_insn"
+;; APPLE LOCAL v7 support. Merge from mainline
+(define_insn "*thumb1_movsi_insn"
   [(set (match_operand:SI 0 "nonimmediate_operand" "=l,l,l,l,l,>,l, m,*lh")
 	(match_operand:SI 1 "general_operand"      "l, I,J,K,>,l,mi,l,*lh"))]
-  "TARGET_THUMB
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_THUMB1
    && (   register_operand (operands[0], SImode) 
        || register_operand (operands[1], SImode))"
   "@
@@ -4587,7 +5182,8 @@
 (define_split 
   [(set (match_operand:SI 0 "register_operand" "")
 	(match_operand:SI 1 "const_int_operand" ""))]
-  "TARGET_THUMB && satisfies_constraint_J (operands[1])"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_THUMB1 && satisfies_constraint_J (operands[1])"
   [(set (match_dup 0) (match_dup 1))
    (set (match_dup 0) (neg:SI (match_dup 0)))]
   "operands[1] = GEN_INT (- INTVAL (operands[1]));"
@@ -4596,7 +5192,8 @@
 (define_split 
   [(set (match_operand:SI 0 "register_operand" "")
 	(match_operand:SI 1 "const_int_operand" ""))]
-  "TARGET_THUMB && satisfies_constraint_K (operands[1])"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_THUMB1 && satisfies_constraint_K (operands[1])"
   [(set (match_dup 0) (match_dup 1))
    (set (match_dup 0) (ashift:SI (match_dup 0) (match_dup 2)))]
   "
@@ -4640,12 +5237,14 @@
    (set (attr "neg_pool_range") (const_int 4084))]
 )
 
-(define_insn "pic_load_addr_thumb"
+;; APPLE LOCAL v7 support. Merge from mainline
+(define_insn "pic_load_addr_thumb1"
   [(set (match_operand:SI 0 "s_register_operand" "=l")
 	(unspec:SI [(match_operand:SI 1 "" "mX")
 		    (label_ref (match_operand 2 "" ""))] UNSPEC_PIC_SYM))
    (use (label_ref (match_dup 2)))]
-  "TARGET_THUMB && (flag_pic || (TARGET_MACHO && MACHO_DYNAMIC_NO_PIC_P))"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_THUMB1 && (flag_pic || (TARGET_MACHO && MACHO_DYNAMIC_NO_PIC_P))"
   "ldr\\t%0, %1"
   [(set_attr "type" "load1")
    (set (attr "pool_range") (const_int 1022))
@@ -4866,10 +5465,12 @@
 		    (const_int 0)))
    (set (match_operand:SI 0 "s_register_operand" "=r,r")
 	(match_dup 1))]
-  "TARGET_ARM"
+;; APPLE LOCAL begin v7 support. Merge from mainline
+  "TARGET_32BIT"
   "@
    cmp%?\\t%0, #0
-   sub%?s\\t%0, %1, #0"
+   sub%.\\t%0, %1, #0"
+;; APPLE LOCAL end v7 support. Merge from mainline
   [(set_attr "conds" "set")]
 )
 
@@ -4985,7 +5586,8 @@
 (define_expand "storehi_single_op"
   [(set (match_operand:HI 0 "memory_operand" "")
 	(match_operand:HI 1 "general_operand" ""))]
-  "TARGET_ARM && arm_arch4"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT && arm_arch4"
   "
   if (!s_register_operand (operands[1], HImode))
     operands[1] = copy_to_mode_reg (HImode, operands[1]);
@@ -5103,9 +5705,29 @@
           DONE;
        }
     }
-  else /* TARGET_THUMB */
+  /* APPLE LOCAL begin v7 support. Merge from mainline */
+  else if (TARGET_THUMB2)
+    {
+      /* Thumb-2 can do everything except mem=mem and mem=const easily.  */
+      if (!no_new_pseudos)
+	{
+	  if (GET_CODE (operands[0]) != REG)
+	    operands[1] = force_reg (HImode, operands[1]);
+          /* Zero extend a constant, and keep it in an SImode reg.  */
+          else if (GET_CODE (operands[1]) == CONST_INT)
+	    {
+	      rtx reg = gen_reg_rtx (SImode);
+	      HOST_WIDE_INT val = INTVAL (operands[1]) & 0xffff;
+
+	      emit_insn (gen_movsi (reg, GEN_INT (val)));
+	      operands[1] = gen_lowpart (HImode, reg);
+	    }
+	}
+    }
+  else /* TARGET_THUMB1 */
     {
       if (!no_new_pseudos)
+  /* APPLE LOCAL end v7 support. Merge from mainline */
         {
 	  if (GET_CODE (operands[1]) == CONST_INT)
 	    {
@@ -5166,10 +5788,12 @@
 )
 
 ;; APPLE LOCAL ARM compact switch tables
-(define_insn "adjustable_thumb_movhi_insn"
+;; APPLE LOCAL v7 support. Merge from mainline
+(define_insn "adjustable_thumb1_movhi_insn"
   [(set (match_operand:HI 0 "nonimmediate_operand" "=l,l,m,*r,*h,l")
 	(match_operand:HI 1 "general_operand"       "l,m,l,*h,*r,I"))]
-  "TARGET_THUMB
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_THUMB1
    && (   register_operand (operands[0], HImode)
        || register_operand (operands[1], HImode))"
   "*
@@ -5262,13 +5886,17 @@
    && (GET_CODE (operands[1]) != CONST_INT
        || const_ok_for_arm (INTVAL (operands[1]))
        || const_ok_for_arm (~INTVAL (operands[1])))"
+;; APPLE LOCAL begin v7 support. Merge from mainline
   "@
    mov%?\\t%0, %1\\t%@ movhi
    mvn%?\\t%0, #%B1\\t%@ movhi
-   str%?h\\t%1, %0\\t%@ movhi
-   ldr%?h\\t%0, %1\\t%@ movhi"
+   str%(h%)\\t%1, %0\\t%@ movhi
+   ldr%(h%)\\t%0, %1\\t%@ movhi"
+;; APPLE LOCAL end v7 support. Merge from mainline
   [(set_attr "type" "*,*,store1,load1")
    (set_attr "predicable" "yes")
+;; APPLE LOCAL v7 support. Merge from Codesourcery
+   (set_attr "insn" "mov,mvn,*,*")
    (set_attr "pool_range" "*,*,*,256")
    (set_attr "neg_pool_range" "*,*,*,244")]
 )
@@ -5280,14 +5908,18 @@
   "@
    mov%?\\t%0, %1\\t%@ movhi
    mvn%?\\t%0, #%B1\\t%@ movhi"
-  [(set_attr "predicable" "yes")]
+;; APPLE LOCAL begin v7 support. Merge from Codesourcery
+  [(set_attr "predicable" "yes")
+   (set_attr "insn" "mov,mvn")]
+;; APPLE LOCAL end v7 support. Merge from Codesourcery
 )
 
 (define_expand "thumb_movhi_clobber"
   [(set (match_operand:HI     0 "memory_operand"   "")
 	(match_operand:HI     1 "register_operand" ""))
    (clobber (match_operand:DI 2 "register_operand" ""))]
-  "TARGET_THUMB"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_THUMB1"
   "
   if (strict_memory_address_p (HImode, XEXP (operands[0], 0))
       && REGNO (operands[1]) <= LAST_LO_REGNUM)
@@ -5402,22 +6034,29 @@
 (define_insn "*arm_movqi_insn"
   [(set (match_operand:QI 0 "nonimmediate_operand" "=r,r,r,m")
 	(match_operand:QI 1 "general_operand" "rI,K,m,r"))]
-  "TARGET_ARM
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT
    && (   register_operand (operands[0], QImode)
        || register_operand (operands[1], QImode))"
+;; APPLE LOCAL begin v7 support. Merge from mainline
   "@
    mov%?\\t%0, %1
    mvn%?\\t%0, #%B1
-   ldr%?b\\t%0, %1
-   str%?b\\t%1, %0"
+   ldr%(b%)\\t%0, %1
+   str%(b%)\\t%1, %0"
+;; APPLE LOCAL end v7 support. Merge from mainline
   [(set_attr "type" "*,*,load1,store1")
+;; APPLE LOCAL v7 support. Merge from Codesourcery
+   (set_attr "insn" "mov,mvn,*,*")
    (set_attr "predicable" "yes")]
 )
 
-(define_insn "*thumb_movqi_insn"
+;; APPLE LOCAL v7 support. Merge from mainline
+(define_insn "*thumb1_movqi_insn"
   [(set (match_operand:QI 0 "nonimmediate_operand" "=l,l,m,*r,*h,l")
 	(match_operand:QI 1 "general_operand"      "l, m,l,*h,*r,I"))]
-  "TARGET_THUMB
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_THUMB1
    && (   register_operand (operands[0], QImode)
        || register_operand (operands[1], QImode))"
   "@
@@ -5429,6 +6068,8 @@
    mov\\t%0, %1"
   [(set_attr "length" "2")
    (set_attr "type" "*,load1,store1,*,*,*")
+;; APPLE LOCAL v7 support. Merge from Codesourcery
+   (set_attr "insn" "*,*,*,mov,mov,mov")
    (set_attr "pool_range" "*,32,*,*,*,*")]
 )
 
@@ -5437,12 +6078,14 @@
 	(match_operand:SF 1 "general_operand" ""))]
   "TARGET_EITHER"
   "
-  if (TARGET_ARM)
+  /* APPLE LOCAL v7 support. Merge from mainline */
+  if (TARGET_32BIT)
     {
       if (GET_CODE (operands[0]) == MEM)
         operands[1] = force_reg (SFmode, operands[1]);
     }
-  else /* TARGET_THUMB */
+  /* APPLE LOCAL v7 support. Merge from mainline */
+  else /* TARGET_THUMB1 */
     {
       if (!no_new_pseudos)
         {
@@ -5458,7 +6101,8 @@
 (define_split
   [(set (match_operand:SF 0 "arm_general_register_operand" "")
 	(match_operand:SF 1 "immediate_operand" ""))]
-  "TARGET_ARM
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT
    && reload_completed
    && GET_CODE (operands[1]) == CONST_DOUBLE"
   [(set (match_dup 2) (match_dup 3))]
@@ -5484,15 +6128,19 @@
   [(set_attr "length" "4,4,4")
    (set_attr "predicable" "yes")
    (set_attr "type" "*,load1,store1")
+;; APPLE LOCAL v7 support. Merge from Codesourcery
+   (set_attr "insn" "mov,*,*")
    (set_attr "pool_range" "*,4096,*")
    (set_attr "neg_pool_range" "*,4084,*")]
 )
 
 ;;; ??? This should have alternatives for constants.
-(define_insn "*thumb_movsf_insn"
+;; APPLE LOCAL v7 support. Merge from mainline
+(define_insn "*thumb1_movsf_insn"
   [(set (match_operand:SF     0 "nonimmediate_operand" "=l,l,>,l, m,*r,*h")
 	(match_operand:SF     1 "general_operand"      "l, >,l,mF,l,*h,*r"))]
-  "TARGET_THUMB
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_THUMB1
    && (   register_operand (operands[0], SFmode) 
        || register_operand (operands[1], SFmode))"
   "@
@@ -5513,7 +6161,8 @@
 	(match_operand:DF 1 "general_operand" ""))]
   "TARGET_EITHER"
   "
-  if (TARGET_ARM)
+  /* APPLE LOCAL v7 support. Merge from mainline */
+  if (TARGET_32BIT)
     {
       if (GET_CODE (operands[0]) == MEM)
         operands[1] = force_reg (DFmode, operands[1]);
@@ -5535,7 +6184,8 @@
   [(match_operand:DF 0 "arm_reload_memory_operand" "=o")
    (match_operand:DF 1 "s_register_operand" "r")
    (match_operand:SI 2 "s_register_operand" "=&r")]
-  "TARGET_ARM"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT"
   "
   {
     enum rtx_code code = GET_CODE (XEXP (operands[0], 0));
@@ -5604,7 +6254,8 @@
 (define_insn "*thumb_movdf_insn"
   [(set (match_operand:DF 0 "nonimmediate_operand" "=l,l,>,l, m,*r")
 	(match_operand:DF 1 "general_operand"      "l, >,l,mF,l,*r"))]
-  "TARGET_THUMB
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_THUMB1
    && (   register_operand (operands[0], DFmode)
        || register_operand (operands[1], DFmode))"
   "*
@@ -5641,34 +6292,16 @@
 (define_expand "movxf"
   [(set (match_operand:XF 0 "general_operand" "")
 	(match_operand:XF 1 "general_operand" ""))]
-  "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
   "
   if (GET_CODE (operands[0]) == MEM)
     operands[1] = force_reg (XFmode, operands[1]);
   "
 )
 
-;; Vector Moves
-(define_expand "movv2si"
-  [(set (match_operand:V2SI 0 "nonimmediate_operand" "")
-	(match_operand:V2SI 1 "general_operand" ""))]
-  "TARGET_REALLY_IWMMXT"
-{
-})
-
-(define_expand "movv4hi"
-  [(set (match_operand:V4HI 0 "nonimmediate_operand" "")
-	(match_operand:V4HI 1 "general_operand" ""))]
-  "TARGET_REALLY_IWMMXT"
-{
-})
-
-(define_expand "movv8qi"
-  [(set (match_operand:V8QI 0 "nonimmediate_operand" "")
-	(match_operand:V8QI 1 "general_operand" ""))]
-  "TARGET_REALLY_IWMMXT"
-{
-})
+;; APPLE LOCAL v7 support. Merge from mainline
+;; Removed lines
 
 
 ;; load- and store-multiple insns
@@ -5679,7 +6312,8 @@
   [(match_par_dup 3 [(set (match_operand:SI 0 "" "")
                           (match_operand:SI 1 "" ""))
                      (use (match_operand:SI 2 "" ""))])]
-  "TARGET_ARM"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT"
 {
   HOST_WIDE_INT offset = 0;
 
@@ -5714,14 +6348,17 @@
 	  (mem:SI (plus:SI (match_dup 2) (const_int 8))))
      (set (match_operand:SI 6 "arm_hard_register_operand" "")
 	  (mem:SI (plus:SI (match_dup 2) (const_int 12))))])]
-  "TARGET_ARM && XVECLEN (operands[0], 0) == 5"
-  "ldm%?ia\\t%1!, {%3, %4, %5, %6}"
+;; APPLE LOCAL begin v7 support. Merge from mainline
+  "TARGET_32BIT && XVECLEN (operands[0], 0) == 5"
+  "ldm%(ia%)\\t%1!, {%3, %4, %5, %6}"
+;; APPLE LOCAL end v7 support. Merge from mainline
   [(set_attr "type" "load4")
    (set_attr "predicable" "yes")]
 )
 
 ;; APPLE LOCAL begin ARM compact switch tables
-(define_insn "*ldmsi_postinc4_thumb"
+;; APPLE LOCAL v7 support. Merge from mainline
+(define_insn "*ldmsi_postinc4_thumb1"
   [(match_parallel 0 "load_multiple_operation"
     [(set (match_operand:SI 1 "s_register_operand" "=l")
 	  (plus:SI (match_operand:SI 2 "s_register_operand" "1")
@@ -5734,7 +6371,8 @@
 	  (mem:SI (plus:SI (match_dup 2) (const_int 8))))
      (set (match_operand:SI 6 "arm_hard_register_operand" "")
 	  (mem:SI (plus:SI (match_dup 2) (const_int 12))))])]
-  "TARGET_THUMB && XVECLEN (operands[0], 0) == 5"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_THUMB1 && XVECLEN (operands[0], 0) == 5"
   "ldmia\\t%1!, {%3, %4, %5, %6}"
   [(set_attr "type" "load4")
    (set_attr "length" "2")]
@@ -5752,8 +6390,10 @@
 	  (mem:SI (plus:SI (match_dup 2) (const_int 4))))
      (set (match_operand:SI 5 "arm_hard_register_operand" "")
 	  (mem:SI (plus:SI (match_dup 2) (const_int 8))))])]
-  "TARGET_ARM && XVECLEN (operands[0], 0) == 4"
-  "ldm%?ia\\t%1!, {%3, %4, %5}"
+;; APPLE LOCAL begin v7 support. Merge from mainline
+  "TARGET_32BIT && XVECLEN (operands[0], 0) == 4"
+  "ldm%(ia%)\\t%1!, {%3, %4, %5}"
+;; APPLE LOCAL end v7 support. Merge from mainline
   [(set_attr "type" "load3")
    (set_attr "predicable" "yes")]
 )
@@ -5767,8 +6407,10 @@
 	  (mem:SI (match_dup 2)))
      (set (match_operand:SI 4 "arm_hard_register_operand" "")
 	  (mem:SI (plus:SI (match_dup 2) (const_int 4))))])]
-  "TARGET_ARM && XVECLEN (operands[0], 0) == 3"
-  "ldm%?ia\\t%1!, {%3, %4}"
+;; APPLE LOCAL begin v7 support. Merge from mainline
+  "TARGET_32BIT && XVECLEN (operands[0], 0) == 3"
+  "ldm%(ia%)\\t%1!, {%3, %4}"
+;; APPLE LOCAL end v7 support. Merge from mainline
   [(set_attr "type" "load2")
    (set_attr "predicable" "yes")]
 )
@@ -5785,8 +6427,10 @@
 	  (mem:SI (plus:SI (match_dup 1) (const_int 8))))
      (set (match_operand:SI 5 "arm_hard_register_operand" "")
 	  (mem:SI (plus:SI (match_dup 1) (const_int 12))))])]
-  "TARGET_ARM && XVECLEN (operands[0], 0) == 4"
-  "ldm%?ia\\t%1, {%2, %3, %4, %5}"
+;; APPLE LOCAL begin v7 support. Merge from mainline
+  "TARGET_32BIT && XVECLEN (operands[0], 0) == 4"
+  "ldm%(ia%)\\t%1, {%2, %3, %4, %5}"
+;; APPLE LOCAL end v7 support. Merge from mainline
   [(set_attr "type" "load4")
    (set_attr "predicable" "yes")]
 )
@@ -5799,8 +6443,10 @@
 	  (mem:SI (plus:SI (match_dup 1) (const_int 4))))
      (set (match_operand:SI 4 "arm_hard_register_operand" "")
 	  (mem:SI (plus:SI (match_dup 1) (const_int 8))))])]
-  "TARGET_ARM && XVECLEN (operands[0], 0) == 3"
-  "ldm%?ia\\t%1, {%2, %3, %4}"
+;; APPLE LOCAL begin v7 support. Merge from mainline
+  "TARGET_32BIT && XVECLEN (operands[0], 0) == 3"
+  "ldm%(ia%)\\t%1, {%2, %3, %4}"
+;; APPLE LOCAL end v7 support. Merge from mainline
   [(set_attr "type" "load3")
    (set_attr "predicable" "yes")]
 )
@@ -5811,8 +6457,10 @@
 	  (mem:SI (match_operand:SI 1 "s_register_operand" "r")))
      (set (match_operand:SI 3 "arm_hard_register_operand" "")
 	  (mem:SI (plus:SI (match_dup 1) (const_int 4))))])]
-  "TARGET_ARM && XVECLEN (operands[0], 0) == 2"
-  "ldm%?ia\\t%1, {%2, %3}"
+;; APPLE LOCAL begin v7 support. Merge from mainline
+  "TARGET_32BIT && XVECLEN (operands[0], 0) == 2"
+  "ldm%(ia%)\\t%1, {%2, %3}"
+;; APPLE LOCAL end v7 support. Merge from mainline
   [(set_attr "type" "load2")
    (set_attr "predicable" "yes")]
 )
@@ -5821,7 +6469,8 @@
   [(match_par_dup 3 [(set (match_operand:SI 0 "" "")
                           (match_operand:SI 1 "" ""))
                      (use (match_operand:SI 2 "" ""))])]
-  "TARGET_ARM"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT"
 {
   HOST_WIDE_INT offset = 0;
 
@@ -5856,14 +6505,17 @@
 	  (match_operand:SI 5 "arm_hard_register_operand" ""))
      (set (mem:SI (plus:SI (match_dup 2) (const_int 12)))
 	  (match_operand:SI 6 "arm_hard_register_operand" ""))])]
-  "TARGET_ARM && XVECLEN (operands[0], 0) == 5"
-  "stm%?ia\\t%1!, {%3, %4, %5, %6}"
+;; APPLE LOCAL begin v7 support. Merge from mainline
+  "TARGET_32BIT && XVECLEN (operands[0], 0) == 5"
+  "stm%(ia%)\\t%1!, {%3, %4, %5, %6}"
+;; APPLE LOCAL end v7 support. Merge from mainline
   [(set_attr "predicable" "yes")
    (set_attr "type" "store4")]
 )
 
 ;; APPLE LOCAL begin ARM compact switch tables
-(define_insn "*stmsi_postinc4_thumb"
+;; APPLE LOCAL v7 support. Merge from mainline
+(define_insn "*stmsi_postinc4_thumb1"
   [(match_parallel 0 "store_multiple_operation"
     [(set (match_operand:SI 1 "s_register_operand" "=l")
 	  (plus:SI (match_operand:SI 2 "s_register_operand" "1")
@@ -5876,8 +6528,10 @@
 	  (match_operand:SI 5 "arm_hard_register_operand" ""))
      (set (mem:SI (plus:SI (match_dup 2) (const_int 12)))
 	  (match_operand:SI 6 "arm_hard_register_operand" ""))])]
-  "TARGET_THUMB && XVECLEN (operands[0], 0) == 5"
+;; APPLE LOCAL begin v7 support. Merge from mainline
+  "TARGET_THUMB1 && XVECLEN (operands[0], 0) == 5"
   "stmia\\t%1!, {%3, %4, %5, %6}"
+;; APPLE LOCAL end v7 support. Merge from mainline
   [(set_attr "type" "store4")
    (set_attr "length" "2")]
 )
@@ -5894,8 +6548,10 @@
 	  (match_operand:SI 4 "arm_hard_register_operand" ""))
      (set (mem:SI (plus:SI (match_dup 2) (const_int 8)))
 	  (match_operand:SI 5 "arm_hard_register_operand" ""))])]
-  "TARGET_ARM && XVECLEN (operands[0], 0) == 4"
-  "stm%?ia\\t%1!, {%3, %4, %5}"
+;; APPLE LOCAL begin v7 support. Merge from mainline
+  "TARGET_32BIT && XVECLEN (operands[0], 0) == 4"
+  "stm%(ia%)\\t%1!, {%3, %4, %5}"
+;; APPLE LOCAL end v7 support. Merge from mainline
   [(set_attr "predicable" "yes")
    (set_attr "type" "store3")]
 )
@@ -5909,8 +6565,10 @@
 	  (match_operand:SI 3 "arm_hard_register_operand" ""))
      (set (mem:SI (plus:SI (match_dup 2) (const_int 4)))
 	  (match_operand:SI 4 "arm_hard_register_operand" ""))])]
-  "TARGET_ARM && XVECLEN (operands[0], 0) == 3"
-  "stm%?ia\\t%1!, {%3, %4}"
+;; APPLE LOCAL begin v7 support. Merge from mainline
+  "TARGET_32BIT && XVECLEN (operands[0], 0) == 3"
+  "stm%(ia%)\\t%1!, {%3, %4}"
+;; APPLE LOCAL end v7 support. Merge from mainline
   [(set_attr "predicable" "yes")
    (set_attr "type" "store2")]
 )
@@ -5927,8 +6585,10 @@
 	  (match_operand:SI 4 "arm_hard_register_operand" ""))
      (set (mem:SI (plus:SI (match_dup 1) (const_int 12)))
 	  (match_operand:SI 5 "arm_hard_register_operand" ""))])]
-  "TARGET_ARM && XVECLEN (operands[0], 0) == 4"
-  "stm%?ia\\t%1, {%2, %3, %4, %5}"
+;; APPLE LOCAL begin v7 support. Merge from mainline
+  "TARGET_32BIT && XVECLEN (operands[0], 0) == 4"
+  "stm%(ia%)\\t%1, {%2, %3, %4, %5}"
+;; APPLE LOCAL end v7 support. Merge from mainline
   [(set_attr "predicable" "yes")
    (set_attr "type" "store4")]
 )
@@ -5941,8 +6601,10 @@
 	  (match_operand:SI 3 "arm_hard_register_operand" ""))
      (set (mem:SI (plus:SI (match_dup 1) (const_int 8)))
 	  (match_operand:SI 4 "arm_hard_register_operand" ""))])]
-  "TARGET_ARM && XVECLEN (operands[0], 0) == 3"
-  "stm%?ia\\t%1, {%2, %3, %4}"
+;; APPLE LOCAL begin v7 support. Merge from mainline
+  "TARGET_32BIT && XVECLEN (operands[0], 0) == 3"
+  "stm%(ia%)\\t%1, {%2, %3, %4}"
+;; APPLE LOCAL end v7 support. Merge from mainline
   [(set_attr "predicable" "yes")
    (set_attr "type" "store3")]
 )
@@ -5953,8 +6615,10 @@
 	  (match_operand:SI 2 "arm_hard_register_operand" ""))
      (set (mem:SI (plus:SI (match_dup 1) (const_int 4)))
 	  (match_operand:SI 3 "arm_hard_register_operand" ""))])]
-  "TARGET_ARM && XVECLEN (operands[0], 0) == 2"
-  "stm%?ia\\t%1, {%2, %3}"
+;; APPLE LOCAL begin v7 support. Merge from mainline
+  "TARGET_32BIT && XVECLEN (operands[0], 0) == 2"
+  "stm%(ia%)\\t%1, {%2, %3}"
+;; APPLE LOCAL end v7 support. Merge from mainline
   [(set_attr "predicable" "yes")
    (set_attr "type" "store2")]
 )
@@ -5970,7 +6634,8 @@
    (match_operand:SI 3 "const_int_operand" "")]
   "TARGET_EITHER"
   "
-  if (TARGET_ARM)
+  /* APPLE LOCAL v7 support. Merge from mainline */
+  if (TARGET_32BIT)
     {
       if (arm_gen_movmemqi (operands))
         DONE;
@@ -6015,7 +6680,8 @@
    (clobber (match_scratch:SI 4 "=&l"))
    (clobber (match_scratch:SI 5 "=&l"))
    (clobber (match_scratch:SI 6 "=&l"))]
-  "TARGET_THUMB"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_THUMB1"
   "* return thumb_output_move_mem_multiple (3, operands);"
   [(set_attr "length" "4")
    ; This isn't entirely accurate...  It loads as well, but in terms of
@@ -6034,7 +6700,8 @@
 	(plus:SI (match_dup 3) (const_int 8)))
    (clobber (match_scratch:SI 4 "=&l"))
    (clobber (match_scratch:SI 5 "=&l"))]
-  "TARGET_THUMB"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_THUMB1"
   "* return thumb_output_move_mem_multiple (2, operands);"
   [(set_attr "length" "4")
    ; This isn't entirely accurate...  It loads as well, but in terms of
@@ -6068,26 +6735,30 @@
 	        (match_operand:SI 2 "nonmemory_operand" "")])
 	      (label_ref (match_operand 3 "" ""))
 	      (pc)))]
-  "TARGET_THUMB"
+;; APPLE LOCAL begin v7 support. Merge from mainline
+  "TARGET_THUMB1"
   "
-  if (thumb_cmpneg_operand (operands[2], SImode))
+  if (thumb1_cmpneg_operand (operands[2], SImode))
     {
       emit_jump_insn (gen_cbranchsi4_scratch (NULL, operands[1], operands[2],
 					      operands[3], operands[0]));
       DONE;
     }
-  if (!thumb_cmp_operand (operands[2], SImode))
+  if (!thumb1_cmp_operand (operands[2], SImode))
     operands[2] = force_reg (SImode, operands[2]);
   ")
+;; APPLE LOCAL end v7 support. Merge from mainline
 
 (define_insn "*cbranchsi4_insn"
   [(set (pc) (if_then_else
 	      (match_operator 0 "arm_comparison_operator"
 	       [(match_operand:SI 1 "s_register_operand" "l,*h")
-	        (match_operand:SI 2 "thumb_cmp_operand" "lI*h,*r")])
+;; APPLE LOCAL begin v7 support. Merge from mainline
+	        (match_operand:SI 2 "thumb1_cmp_operand" "lI*h,*r")])
 	      (label_ref (match_operand 3 "" ""))
 	      (pc)))]
-  "TARGET_THUMB"
+  "TARGET_THUMB1"
+;; APPLE LOCAL end v7 support. Merge from mainline
   "*
   output_asm_insn (\"cmp\\t%1, %2\", operands);
 
@@ -6095,8 +6766,8 @@
     {
     case 4:  return \"b%d0\\t%l3\";
     /* APPLE LOCAL begin ARM MACH assembler */
-    case 6:  return \"b%D0\\t%.LCB%=\;b\\t%l3\\t%@long jump\\n%.LCB%=:\";
-    default: return \"b%D0\\t%.LCB%=\;bl\\t%l3\\t%@far jump\\n%.LCB%=:\";
+    case 6:  return \"b%D0\\t%~LCB%=\;b\\t%l3\\t%@long jump\\n%~LCB%=:\";
+    default: return \"b%D0\\t%~LCB%=\;bl\\t%l3\\t%@far jump\\n%~LCB%=:\";
     /* APPLE LOCAL end ARM MACH assembler */
     }
   "
@@ -6121,11 +6792,13 @@
   [(set (pc) (if_then_else
 	      (match_operator 4 "arm_comparison_operator"
 	       [(match_operand:SI 1 "s_register_operand" "l,0")
-	        (match_operand:SI 2 "thumb_cmpneg_operand" "L,J")])
+;; APPLE LOCAL v7 support. Merge from mainline
+	        (match_operand:SI 2 "thumb1_cmpneg_operand" "L,J")])
 	      (label_ref (match_operand 3 "" ""))
 	      (pc)))
    (clobber (match_scratch:SI 0 "=l,l"))]
-  "TARGET_THUMB"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_THUMB1"
   "*
   output_asm_insn (\"add\\t%0, %1, #%n2\", operands);
 
@@ -6133,8 +6806,8 @@
     {
     case 4:  return \"b%d4\\t%l3\";
     /* APPLE LOCAL begin ARM MACH assembler */
-    case 6:  return \"b%D4\\t%.LCB%=\;b\\t%l3\\t%@long jump\\n%.LCB%=:\";
-    default: return \"b%D4\\t%.LCB%=\;bl\\t%l3\\t%@far jump\\n%.LCB%=:\";
+    case 6:  return \"b%D4\\t%~LCB%=\;b\\t%l3\\t%@long jump\\n%~LCB%=:\";
+    default: return \"b%D4\\t%~LCB%=\;bl\\t%l3\\t%@far jump\\n%~LCB%=:\";
     /* APPLE LOCAL end ARM MACH assembler */
     }
   "
@@ -6164,7 +6837,8 @@
 	 (pc)))
    (set (match_operand:SI 0 "thumb_cbrch_target_operand" "=l,l,*h,*m")
 	(match_dup 1))]
-  "TARGET_THUMB"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_THUMB1"
   "*{
   if (which_alternative == 0)
     output_asm_insn (\"cmp\t%0, #0\", operands);
@@ -6182,8 +6856,8 @@
     {
     case 4:  return \"b%d3\\t%l2\";
     /* APPLE LOCAL begin ARM MACH assembler */
-    case 6:  return \"b%D3\\t%.LCB%=\;b\\t%l2\\t%@long jump\\n%.LCB%=:\";
-    default: return \"b%D3\\t%.LCB%=\;bl\\t%l2\\t%@far jump\\n%.LCB%=:\";
+    case 6:  return \"b%D3\\t%~LCB%=\;b\\t%l2\\t%@long jump\\n%~LCB%=:\";
+    default: return \"b%D3\\t%~LCB%=\;bl\\t%l2\\t%@far jump\\n%~LCB%=:\";
     /* APPLE LOCAL end ARM MACH assembler */
     }
   }"
@@ -6253,15 +6927,16 @@
 	   (neg:SI (match_operand:SI 2 "s_register_operand" "l"))])
 	 (label_ref (match_operand 3 "" ""))
 	 (pc)))]
-  "TARGET_THUMB"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_THUMB1"
   "*
   output_asm_insn (\"cmn\\t%1, %2\", operands);
   switch (get_attr_length (insn))
     {
     case 4:  return \"b%d0\\t%l3\";
     /* APPLE LOCAL begin ARM MACH assembler */
-    case 6:  return \"b%D0\\t%.LCB%=\;b\\t%l3\\t%@long jump\\n%.LCB%=:\";
-    default: return \"b%D0\\t%.LCB%=\;bl\\t%l3\\t%@far jump\\n%.LCB%=:\";
+    case 6:  return \"b%D0\\t%~LCB%=\;b\\t%l3\\t%@long jump\\n%~LCB%=:\";
+    default: return \"b%D0\\t%~LCB%=\;bl\\t%l3\\t%@far jump\\n%~LCB%=:\";
     /* APPLE LOCAL end ARM MACH assembler */
     }
   "
@@ -6293,7 +6968,8 @@
 	 (label_ref (match_operand 3 "" ""))
 	 (pc)))
    (clobber (match_scratch:SI 4 "=l"))]
-  "TARGET_THUMB"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_THUMB1"
   "*
   {
   rtx op[3];
@@ -6306,8 +6982,8 @@
     {
     case 4:  return \"b%d0\\t%l3\";
     /* APPLE LOCAL begin ARM MACH assembler */
-    case 6:  return \"b%D0\\t%.LCB%=\;b\\t%l3\\t%@long jump\\n%.LCB%=:\";
-    default: return \"b%D0\\t%.LCB%=\;bl\\t%l3\\t%@far jump\\n%.LCB%=:\";
+    case 6:  return \"b%D0\\t%~LCB%=\;b\\t%l3\\t%@long jump\\n%~LCB%=:\";
+    default: return \"b%D0\\t%~LCB%=\;bl\\t%l3\\t%@far jump\\n%~LCB%=:\";
     /* APPLE LOCAL end ARM MACH assembler */
     }
   }"
@@ -6339,7 +7015,8 @@
 	 (label_ref (match_operand 3 "" ""))
 	 (pc)))
    (clobber (match_scratch:SI 4 "=l"))]
-  "TARGET_THUMB"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_THUMB1"
   "*
   {
   rtx op[3];
@@ -6352,8 +7029,8 @@
     {
     case 4:  return \"b%d0\\t%l3\";
     /* APPLE LOCAL begin ARM MACH assembler */
-    case 6:  return \"b%D0\\t%.LCB%=\;b\\t%l3\\t%@long jump\\n%.LCB%=:\";
-    default: return \"b%D0\\t%.LCB%=\;bl\\t%l3\\t%@far jump\\n%.LCB%=:\";
+    case 6:  return \"b%D0\\t%~LCB%=\;b\\t%l3\\t%@long jump\\n%~LCB%=:\";
+    default: return \"b%D0\\t%~LCB%=\;bl\\t%l3\\t%@far jump\\n%~LCB%=:\";
     /* APPLE LOCAL end ARM MACH assembler */
     }
   }"
@@ -6383,7 +7060,8 @@
 	   (const_int 0)])
 	 (label_ref (match_operand 2 "" ""))
 	 (pc)))]
-  "TARGET_THUMB"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_THUMB1"
   "*
   {
   output_asm_insn (\"tst\\t%0, %1\", operands);
@@ -6391,8 +7069,8 @@
     {
     case 4:  return \"b%d3\\t%l2\";
     /* APPLE LOCAL begin ARM MACH assembler */
-    case 6:  return \"b%D3\\t%.LCB%=\;b\\t%l2\\t%@long jump\\n%.LCB%=:\";
-    default: return \"b%D3\\t%.LCB%=\;bl\\t%l2\\t%@far jump\\n%.LCB%=:\";
+    case 6:  return \"b%D3\\t%~LCB%=\;b\\t%l2\\t%@long jump\\n%~LCB%=:\";
+    default: return \"b%D3\\t%~LCB%=\;bl\\t%l2\\t%@far jump\\n%~LCB%=:\";
     /* APPLE LOCAL end ARM MACH assembler */
     }
   }"
@@ -6425,7 +7103,8 @@
    (set (match_operand:SI 0 "thumb_cbrch_target_operand" "=l,*?h,*?m,*?m")
 	(and:SI (match_dup 2) (match_dup 3)))
    (clobber (match_scratch:SI 1 "=X,l,&l,&l"))]
-  "TARGET_THUMB"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_THUMB1"
   "*
   {
   if (which_alternative == 0)
@@ -6445,8 +7124,8 @@
     {
     case 4:  return \"b%d5\\t%l4\";
     /* APPLE LOCAL begin ARM MACH assembler */
-    case 6:  return \"b%D5\\t%.LCB%=\;b\\t%l4\\t%@long jump\\n%.LCB%=:\";
-    default: return \"b%D5\\t%.LCB%=\;bl\\t%l4\\t%@far jump\\n%.LCB%=:\";
+    case 6:  return \"b%D5\\t%~LCB%=\;b\\t%l4\\t%@long jump\\n%~LCB%=:\";
+    default: return \"b%D5\\t%~LCB%=\;bl\\t%l4\\t%@far jump\\n%~LCB%=:\";
     /* APPLE LOCAL end ARM MACH assembler */
     }
   }"
@@ -6492,7 +7171,8 @@
 	 (label_ref (match_operand 3 "" ""))
 	 (pc)))
    (clobber (match_scratch:SI 0 "=l"))]
-  "TARGET_THUMB"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_THUMB1"
   "*
   {
   output_asm_insn (\"orr\\t%0, %2\", operands);
@@ -6500,8 +7180,8 @@
     {
     case 4:  return \"b%d4\\t%l3\";
     /* APPLE LOCAL begin ARM MACH assembler */
-    case 6:  return \"b%D4\\t%.LCB%=\;b\\t%l3\\t%@long jump\\n%.LCB%=:\";
-    default: return \"b%D4\\t%.LCB%=\;bl\\t%l3\\t%@far jump\\n%.LCB%=:\";
+    case 6:  return \"b%D4\\t%~LCB%=\;b\\t%l3\\t%@long jump\\n%~LCB%=:\";
+    default: return \"b%D4\\t%~LCB%=\;bl\\t%l3\\t%@far jump\\n%~LCB%=:\";
     /* APPLE LOCAL end ARM MACH assembler */
     }
   }"
@@ -6534,7 +7214,8 @@
    (set (match_operand:SI 0 "thumb_cbrch_target_operand" "=l,*?h,*?m,*?m")
 	(ior:SI (match_dup 2) (match_dup 3)))
    (clobber (match_scratch:SI 1 "=X,l,&l,&l"))]
-  "TARGET_THUMB"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_THUMB1"
   "*
   {
   if (which_alternative == 0)
@@ -6554,8 +7235,8 @@
     {
     case 4:  return \"b%d5\\t%l4\";
     /* APPLE LOCAL begin ARM MACH assembler */
-    case 6:  return \"b%D5\\t%.LCB%=\;b\\t%l4\\t%@long jump\\n%.LCB%=:\";
-    default: return \"b%D5\\t%.LCB%=\;bl\\t%l4\\t%@far jump\\n%.LCB%=:\";
+    case 6:  return \"b%D5\\t%~LCB%=\;b\\t%l4\\t%@long jump\\n%~LCB%=:\";
+    default: return \"b%D5\\t%~LCB%=\;bl\\t%l4\\t%@far jump\\n%~LCB%=:\";
     /* APPLE LOCAL end ARM MACH assembler */
     }
   }"
@@ -6601,7 +7282,8 @@
 	 (label_ref (match_operand 3 "" ""))
 	 (pc)))
    (clobber (match_scratch:SI 0 "=l"))]
-  "TARGET_THUMB"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_THUMB1"
   "*
   {
   output_asm_insn (\"eor\\t%0, %2\", operands);
@@ -6609,8 +7291,8 @@
     {
     case 4:  return \"b%d4\\t%l3\";
     /* APPLE LOCAL begin ARM MACH assembler */
-    case 6:  return \"b%D4\\t%.LCB%=\;b\\t%l3\\t%@long jump\\n%.LCB%=:\";
-    default: return \"b%D4\\t%.LCB%=\;bl\\t%l3\\t%@far jump\\n%.LCB%=:\";
+    case 6:  return \"b%D4\\t%~LCB%=\;b\\t%l3\\t%@long jump\\n%~LCB%=:\";
+    default: return \"b%D4\\t%~LCB%=\;bl\\t%l3\\t%@far jump\\n%~LCB%=:\";
     /* APPLE LOCAL end ARM MACH assembler */
     }
   }"
@@ -6643,7 +7325,8 @@
    (set (match_operand:SI 0 "thumb_cbrch_target_operand" "=l,*?h,*?m,*?m")
 	(xor:SI (match_dup 2) (match_dup 3)))
    (clobber (match_scratch:SI 1 "=X,l,&l,&l"))]
-  "TARGET_THUMB"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_THUMB1"
   "*
   {
   if (which_alternative == 0)
@@ -6663,8 +7346,8 @@
     {
     case 4:  return \"b%d5\\t%l4\";
     /* APPLE LOCAL begin ARM MACH assembler */
-    case 6:  return \"b%D5\\t%.LCB%=\;b\\t%l4\\t%@long jump\\n%.LCB%=:\";
-    default: return \"b%D5\\t%.LCB%=\;bl\\t%l4\\t%@far jump\\n%.LCB%=:\";
+    case 6:  return \"b%D5\\t%~LCB%=\;b\\t%l4\\t%@long jump\\n%~LCB%=:\";
+    default: return \"b%D5\\t%~LCB%=\;bl\\t%l4\\t%@far jump\\n%~LCB%=:\";
     /* APPLE LOCAL end ARM MACH assembler */
     }
   }"
@@ -6710,7 +7393,8 @@
 	 (label_ref (match_operand 3 "" ""))
 	 (pc)))
    (clobber (match_scratch:SI 0 "=l"))]
-  "TARGET_THUMB"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_THUMB1"
   "*
   {
   output_asm_insn (\"bic\\t%0, %2\", operands);
@@ -6718,8 +7402,8 @@
     {
     case 4:  return \"b%d4\\t%l3\";
     /* APPLE LOCAL begin ARM MACH assembler */
-    case 6:  return \"b%D4\\t%.LCB%=\;b\\t%l3\\t%@long jump\\n%.LCB%=:\";
-    default: return \"b%D4\\t%.LCB%=\;bl\\t%l3\\t%@far jump\\n%.LCB%=:\";
+    case 6:  return \"b%D4\\t%~LCB%=\;b\\t%l3\\t%@long jump\\n%~LCB%=:\";
+    default: return \"b%D4\\t%~LCB%=\;bl\\t%l3\\t%@far jump\\n%~LCB%=:\";
     /* APPLE LOCAL end ARM MACH assembler */
     }
   }"
@@ -6752,7 +7436,8 @@
    (set (match_operand:SI 0 "thumb_cbrch_target_operand" "=!l,l,*?h,*?m,*?m")
 	(and:SI (not:SI (match_dup 3)) (match_dup 2)))
    (clobber (match_scratch:SI 1 "=X,l,l,&l,&l"))]
-  "TARGET_THUMB"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_THUMB1"
   "*
   {
   if (which_alternative == 0)
@@ -6774,8 +7459,8 @@
     {
     case 4:  return \"b%d5\\t%l4\";
     /* APPLE LOCAL begin ARM MACH assembler */
-    case 6:  return \"b%D5\\t%.LCB%=\;b\\t%l4\\t%@long jump\\n%.LCB%=:\";
-    default: return \"b%D5\\t%.LCB%=\;bl\\t%l4\\t%@far jump\\n%.LCB%=:\";
+    case 6:  return \"b%D5\\t%~LCB%=\;b\\t%l4\\t%@long jump\\n%~LCB%=:\";
+    default: return \"b%D5\\t%~LCB%=\;bl\\t%l4\\t%@far jump\\n%~LCB%=:\";
     /* APPLE LOCAL end ARM MACH assembler */
     }
   }"
@@ -6821,7 +7506,8 @@
    (set (match_operand:SI 0 "thumb_cbrch_target_operand" "=l,*?h,*?m,*?m")
 	(plus:SI (match_dup 2) (const_int -1)))
    (clobber (match_scratch:SI 1 "=X,l,&l,&l"))]
-  "TARGET_THUMB"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_THUMB1"
   "*
    {
      rtx cond[2];
@@ -6855,11 +7541,11 @@
 	   output_asm_insn (\"b%d0\\t%l1\", cond);
 	   return \"\";
 	 case 6:
-	   output_asm_insn (\"b%D0\\t%.LCB%=\", cond);
-	   return \"b\\t%l4\\t%@long jump\\n%.LCB%=:\";
+	   output_asm_insn (\"b%D0\\t%~LCB%=\", cond);
+	   return \"b\\t%l4\\t%@long jump\\n%~LCB%=:\";
 	 default:
-	   output_asm_insn (\"b%D0\\t%.LCB%=\", cond);
-	   return \"bl\\t%l4\\t%@far jump\\n%.LCB%=:\";
+	   output_asm_insn (\"b%D0\\t%~LCB%=\", cond);
+	   return \"bl\\t%l4\\t%@far jump\\n%~LCB%=:\";
 	 /* APPLE LOCAL end ARM MACH assembler */
        }
    }
@@ -6930,7 +7616,8 @@
     (match_operand:SI 0 "thumb_cbrch_target_operand" "=l,l,*!h,*?h,*?m,*?m")
     (plus:SI (match_dup 2) (match_dup 3)))
    (clobber (match_scratch:SI 1 "=X,X,X,l,&l,&l"))]
-  "TARGET_THUMB
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_THUMB1
    && (GET_CODE (operands[4]) == EQ
        || GET_CODE (operands[4]) == NE
        || GET_CODE (operands[4]) == GE
@@ -6961,9 +7648,9 @@
 	   return \"b%d4\\t%l5\";
 	 /* APPLE LOCAL begin ARM MACH assembler */
 	 case 6:
-	   return \"b%D4\\t%.LCB%=\;b\\t%l5\\t%@long jump\\n%.LCB%=:\";
+	   return \"b%D4\\t%~LCB%=\;b\\t%l5\\t%@long jump\\n%~LCB%=:\";
 	 default:
-	   return \"b%D4\\t%.LCB%=\;bl\\t%l5\\t%@far jump\\n%.LCB%=:\";
+	   return \"b%D4\\t%~LCB%=\;bl\\t%l5\\t%@far jump\\n%~LCB%=:\";
 	 /* APPLE LOCAL end ARM MACH assembler */
        }
    }
@@ -7011,7 +7698,8 @@
 	 (label_ref (match_operand 4 "" ""))
 	 (pc)))
    (clobber (match_scratch:SI 0 "=X,X,l,l"))]
-  "TARGET_THUMB
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_THUMB1
    && (GET_CODE (operands[3]) == EQ
        || GET_CODE (operands[3]) == NE
        || GET_CODE (operands[3]) == GE
@@ -7046,9 +7734,9 @@
 	   return \"b%d3\\t%l4\";
 	 /* APPLE LOCAL begin ARM MACH assembler */
 	 case 6:
-	   return \"b%D3\\t%.LCB%=\;b\\t%l4\\t%@long jump\\n%.LCB%=:\";
+	   return \"b%D3\\t%~LCB%=\;b\\t%l4\\t%@long jump\\n%~LCB%=:\";
 	 default:
-	   return \"b%D3\\t%.LCB%=\;bl\\t%l4\\t%@far jump\\n%.LCB%=:\";
+	   return \"b%D3\\t%~LCB%=\;bl\\t%l4\\t%@far jump\\n%~LCB%=:\";
 	 /* APPLE LOCAL end ARM MACH assembler */
        }
    }
@@ -7083,7 +7771,8 @@
    (set (match_operand:SI 0 "thumb_cbrch_target_operand" "=l,*?h,*?m,*?m")
 	(minus:SI (match_dup 2) (match_dup 3)))
    (clobber (match_scratch:SI 1 "=X,l,&l,&l"))]
-  "TARGET_THUMB
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_THUMB1
    && (GET_CODE (operands[4]) == EQ
        || GET_CODE (operands[4]) == NE
        || GET_CODE (operands[4]) == GE
@@ -7114,9 +7803,9 @@
 	   return \"b%d4\\t%l5\";
 	 /* APPLE LOCAL begin ARM MACH assembler */
 	 case 6:
-	   return \"b%D4\\t%.LCB%=\;b\\t%l5\\t%@long jump\\n%.LCB%=:\";
+	   return \"b%D4\\t%~LCB%=\;b\\t%l5\\t%@long jump\\n%~LCB%=:\";
 	 default:
-	   return \"b%D4\\t%.LCB%=\;bl\\t%l5\\t%@far jump\\n%.LCB%=:\";
+	   return \"b%D4\\t%~LCB%=\;bl\\t%l5\\t%@far jump\\n%~LCB%=:\";
 	 /* APPLE LOCAL end ARM MACH assembler */
        }
    }
@@ -7162,7 +7851,8 @@
 	   (const_int 0)])
 	 (label_ref (match_operand 3 "" ""))
 	 (pc)))]
-  "TARGET_THUMB
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_THUMB1
    && (GET_CODE (operands[0]) == EQ
        || GET_CODE (operands[0]) == NE
        || GET_CODE (operands[0]) == GE
@@ -7173,8 +7863,8 @@
     {
     case 4:  return \"b%d0\\t%l3\";
     /* APPLE LOCAL begin ARM MACH assembler */
-    case 6:  return \"b%D0\\t%.LCB%=\;b\\t%l3\\t%@long jump\\n%.LCB%=:\";
-    default: return \"b%D0\\t%.LCB%=\;bl\\t%l3\\t%@far jump\\n%.LCB%=:\";
+    case 6:  return \"b%D0\\t%~LCB%=\;b\\t%l3\\t%@long jump\\n%~LCB%=:\";
+    default: return \"b%D0\\t%~LCB%=\;bl\\t%l3\\t%@far jump\\n%~LCB%=:\";
     /* APPLE LOCAL end ARM MACH assembler */
     }
   "
@@ -7200,7 +7890,8 @@
 (define_expand "cmpsi"
   [(match_operand:SI 0 "s_register_operand" "")
    (match_operand:SI 1 "arm_add_operand" "")]
-  "TARGET_ARM"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT"
   "{
     arm_compare_op0 = operands[0];
     arm_compare_op1 = operands[1];
@@ -7211,7 +7902,8 @@
 (define_expand "cmpsf"
   [(match_operand:SF 0 "s_register_operand" "")
    (match_operand:SF 1 "arm_float_compare_operand" "")]
-  "TARGET_ARM && TARGET_HARD_FLOAT"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT && TARGET_HARD_FLOAT"
   "
   arm_compare_op0 = operands[0];
   arm_compare_op1 = operands[1];
@@ -7222,7 +7914,8 @@
 (define_expand "cmpdf"
   [(match_operand:DF 0 "s_register_operand" "")
    (match_operand:DF 1 "arm_float_compare_operand" "")]
-  "TARGET_ARM && TARGET_HARD_FLOAT"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT && TARGET_HARD_FLOAT"
   "
   arm_compare_op0 = operands[0];
   arm_compare_op1 = operands[1];
@@ -7235,7 +7928,8 @@
   [(set (reg:CC CC_REGNUM)
 	(compare:CC (match_operand:SI 0 "s_register_operand" "r,r")
 		    (match_operand:SI 1 "arm_add_operand"    "rI,L")))]
-  "TARGET_ARM"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT"
   "@
    cmp%?\\t%0, %1
    cmn%?\\t%0, #%n1"
@@ -7244,7 +7938,8 @@
 )
 ;; APPLE LOCAL end ARM enhance conditional insn generation
 
-(define_insn "*cmpsi_shiftsi"
+;; APPLE LOCAL v7 support. Merge from mainline
+(define_insn "*arm_cmpsi_shiftsi"
   [(set (reg:CC CC_REGNUM)
 	(compare:CC (match_operand:SI   0 "s_register_operand" "r")
 		    (match_operator:SI  3 "shift_operator"
@@ -7259,7 +7954,8 @@
 		      (const_string "alu_shift_reg")))]
 )
 
-(define_insn "*cmpsi_shiftsi_swp"
+;; APPLE LOCAL v7 support. Merge from mainline
+(define_insn "*arm_cmpsi_shiftsi_swp"
   [(set (reg:CC_SWP CC_REGNUM)
 	(compare:CC_SWP (match_operator:SI 3 "shift_operator"
 			 [(match_operand:SI 1 "s_register_operand" "r")
@@ -7274,7 +7970,8 @@
 		      (const_string "alu_shift_reg")))]
 )
 
-(define_insn "*cmpsi_negshiftsi_si"
+;; APPLE LOCAL v7 support. Merge from mainline
+(define_insn "*arm_cmpsi_negshiftsi_si"
   [(set (reg:CC_Z CC_REGNUM)
 	(compare:CC_Z
 	 (neg:SI (match_operator:SI 1 "shift_operator"
@@ -7340,7 +8037,8 @@
 
 (define_insn "*deleted_compare"
   [(set (match_operand 0 "cc_register" "") (match_dup 0))]
-  "TARGET_ARM"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT"
   "\\t%@ deleted compare"
   [(set_attr "conds" "set")
    (set_attr "length" "0")]
@@ -7354,7 +8052,8 @@
 	(if_then_else (eq (match_dup 1) (const_int 0))
 		      (label_ref (match_operand 0 "" ""))
 		      (pc)))]
-  "TARGET_ARM"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT"
   "operands[1] = arm_gen_compare_reg (EQ, arm_compare_op0, arm_compare_op1);"
 )
 
@@ -7363,7 +8062,8 @@
 	(if_then_else (ne (match_dup 1) (const_int 0))
 		      (label_ref (match_operand 0 "" ""))
 		      (pc)))]
-  "TARGET_ARM"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT"
   "operands[1] = arm_gen_compare_reg (NE, arm_compare_op0, arm_compare_op1);"
 )
 
@@ -7372,7 +8072,8 @@
 	(if_then_else (gt (match_dup 1) (const_int 0))
 		      (label_ref (match_operand 0 "" ""))
 		      (pc)))]
-  "TARGET_ARM"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT"
   "operands[1] = arm_gen_compare_reg (GT, arm_compare_op0, arm_compare_op1);"
 )
 
@@ -7381,7 +8082,8 @@
 	(if_then_else (le (match_dup 1) (const_int 0))
 		      (label_ref (match_operand 0 "" ""))
 		      (pc)))]
-  "TARGET_ARM"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT"
   "operands[1] = arm_gen_compare_reg (LE, arm_compare_op0, arm_compare_op1);"
 )
 
@@ -7390,7 +8092,8 @@
 	(if_then_else (ge (match_dup 1) (const_int 0))
 		      (label_ref (match_operand 0 "" ""))
 		      (pc)))]
-  "TARGET_ARM"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT"
   "operands[1] = arm_gen_compare_reg (GE, arm_compare_op0, arm_compare_op1);"
 )
 
@@ -7399,7 +8102,8 @@
 	(if_then_else (lt (match_dup 1) (const_int 0))
 		      (label_ref (match_operand 0 "" ""))
 		      (pc)))]
-  "TARGET_ARM"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT"
   "operands[1] = arm_gen_compare_reg (LT, arm_compare_op0, arm_compare_op1);"
 )
 
@@ -7408,7 +8112,8 @@
 	(if_then_else (gtu (match_dup 1) (const_int 0))
 		      (label_ref (match_operand 0 "" ""))
 		      (pc)))]
-  "TARGET_ARM"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT"
   "operands[1] = arm_gen_compare_reg (GTU, arm_compare_op0, arm_compare_op1);"
 )
 
@@ -7417,7 +8122,8 @@
 	(if_then_else (leu (match_dup 1) (const_int 0))
 		      (label_ref (match_operand 0 "" ""))
 		      (pc)))]
-  "TARGET_ARM"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT"
   "operands[1] = arm_gen_compare_reg (LEU, arm_compare_op0, arm_compare_op1);"
 )
 
@@ -7426,7 +8132,8 @@
 	(if_then_else (geu (match_dup 1) (const_int 0))
 		      (label_ref (match_operand 0 "" ""))
 		      (pc)))]
-  "TARGET_ARM"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT"
   "operands[1] = arm_gen_compare_reg (GEU, arm_compare_op0, arm_compare_op1);"
 )
 
@@ -7435,7 +8142,8 @@
 	(if_then_else (ltu (match_dup 1) (const_int 0))
 		      (label_ref (match_operand 0 "" ""))
 		      (pc)))]
-  "TARGET_ARM"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT"
   "operands[1] = arm_gen_compare_reg (LTU, arm_compare_op0, arm_compare_op1);"
 )
 
@@ -7444,7 +8152,8 @@
 	(if_then_else (unordered (match_dup 1) (const_int 0))
 		      (label_ref (match_operand 0 "" ""))
 		      (pc)))]
-  "TARGET_ARM && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
   "operands[1] = arm_gen_compare_reg (UNORDERED, arm_compare_op0,
 				      arm_compare_op1);"
 )
@@ -7454,7 +8163,8 @@
 	(if_then_else (ordered (match_dup 1) (const_int 0))
 		      (label_ref (match_operand 0 "" ""))
 		      (pc)))]
-  "TARGET_ARM && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
   "operands[1] = arm_gen_compare_reg (ORDERED, arm_compare_op0,
 				      arm_compare_op1);"
 )
@@ -7464,7 +8174,8 @@
 	(if_then_else (ungt (match_dup 1) (const_int 0))
 		      (label_ref (match_operand 0 "" ""))
 		      (pc)))]
-  "TARGET_ARM && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
   "operands[1] = arm_gen_compare_reg (UNGT, arm_compare_op0, arm_compare_op1);"
 )
 
@@ -7473,7 +8184,8 @@
 	(if_then_else (unlt (match_dup 1) (const_int 0))
 		      (label_ref (match_operand 0 "" ""))
 		      (pc)))]
-  "TARGET_ARM && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
   "operands[1] = arm_gen_compare_reg (UNLT, arm_compare_op0, arm_compare_op1);"
 )
 
@@ -7482,7 +8194,8 @@
 	(if_then_else (unge (match_dup 1) (const_int 0))
 		      (label_ref (match_operand 0 "" ""))
 		      (pc)))]
-  "TARGET_ARM && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
   "operands[1] = arm_gen_compare_reg (UNGE, arm_compare_op0, arm_compare_op1);"
 )
 
@@ -7491,7 +8204,8 @@
 	(if_then_else (unle (match_dup 1) (const_int 0))
 		      (label_ref (match_operand 0 "" ""))
 		      (pc)))]
-  "TARGET_ARM && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
   "operands[1] = arm_gen_compare_reg (UNLE, arm_compare_op0, arm_compare_op1);"
 )
 
@@ -7502,7 +8216,8 @@
 	(if_then_else (uneq (match_dup 1) (const_int 0))
 		      (label_ref (match_operand 0 "" ""))
 		      (pc)))]
-  "TARGET_ARM && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
   "operands[1] = arm_gen_compare_reg (UNEQ, arm_compare_op0, arm_compare_op1);"
 )
 
@@ -7511,7 +8226,8 @@
 	(if_then_else (ltgt (match_dup 1) (const_int 0))
 		      (label_ref (match_operand 0 "" ""))
 		      (pc)))]
-  "TARGET_ARM && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
   "operands[1] = arm_gen_compare_reg (LTGT, arm_compare_op0, arm_compare_op1);"
 )
 
@@ -7525,7 +8241,8 @@
 	(if_then_else (uneq (match_operand 1 "cc_register" "") (const_int 0))
 		      (label_ref (match_operand 0 "" ""))
 		      (pc)))]
-  "TARGET_ARM && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
   "*
   gcc_assert (!arm_ccfsm_state);
 
@@ -7541,7 +8258,8 @@
 	(if_then_else (ltgt (match_operand 1 "cc_register" "") (const_int 0))
 		      (label_ref (match_operand 0 "" ""))
 		      (pc)))]
-  "TARGET_ARM && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
   "*
   gcc_assert (!arm_ccfsm_state);
 
@@ -7557,7 +8275,8 @@
 		       [(match_operand 2 "cc_register" "") (const_int 0)])
 		      (label_ref (match_operand 0 "" ""))
 		      (pc)))]
-  "TARGET_ARM"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT"
   "*
   if (arm_ccfsm_state == 1 || arm_ccfsm_state == 2)
     {
@@ -7608,7 +8327,8 @@
 		       [(match_operand 2 "cc_register" "") (const_int 0)])
 		      (pc)
 		      (label_ref (match_operand 0 "" ""))))]
-  "TARGET_ARM"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT"
   "*
   if (arm_ccfsm_state == 1 || arm_ccfsm_state == 2)
     {
@@ -7628,77 +8348,88 @@
 (define_expand "seq"
   [(set (match_operand:SI 0 "s_register_operand" "")
 	(eq:SI (match_dup 1) (const_int 0)))]
-  "TARGET_ARM"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT"
   "operands[1] = arm_gen_compare_reg (EQ, arm_compare_op0, arm_compare_op1);"
 )
 
 (define_expand "sne"
   [(set (match_operand:SI 0 "s_register_operand" "")
 	(ne:SI (match_dup 1) (const_int 0)))]
-  "TARGET_ARM"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT"
   "operands[1] = arm_gen_compare_reg (NE, arm_compare_op0, arm_compare_op1);"
 )
 
 (define_expand "sgt"
   [(set (match_operand:SI 0 "s_register_operand" "")
 	(gt:SI (match_dup 1) (const_int 0)))]
-  "TARGET_ARM"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT"
   "operands[1] = arm_gen_compare_reg (GT, arm_compare_op0, arm_compare_op1);"
 )
 
 (define_expand "sle"
   [(set (match_operand:SI 0 "s_register_operand" "")
 	(le:SI (match_dup 1) (const_int 0)))]
-  "TARGET_ARM"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT"
   "operands[1] = arm_gen_compare_reg (LE, arm_compare_op0, arm_compare_op1);"
 )
 
 (define_expand "sge"
   [(set (match_operand:SI 0 "s_register_operand" "")
 	(ge:SI (match_dup 1) (const_int 0)))]
-  "TARGET_ARM"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT"
   "operands[1] = arm_gen_compare_reg (GE, arm_compare_op0, arm_compare_op1);"
 )
 
 (define_expand "slt"
   [(set (match_operand:SI 0 "s_register_operand" "")
 	(lt:SI (match_dup 1) (const_int 0)))]
-  "TARGET_ARM"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT"
   "operands[1] = arm_gen_compare_reg (LT, arm_compare_op0, arm_compare_op1);"
 )
 
 (define_expand "sgtu"
   [(set (match_operand:SI 0 "s_register_operand" "")
 	(gtu:SI (match_dup 1) (const_int 0)))]
-  "TARGET_ARM"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT"
   "operands[1] = arm_gen_compare_reg (GTU, arm_compare_op0, arm_compare_op1);"
 )
 
 (define_expand "sleu"
   [(set (match_operand:SI 0 "s_register_operand" "")
 	(leu:SI (match_dup 1) (const_int 0)))]
-  "TARGET_ARM"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT"
   "operands[1] = arm_gen_compare_reg (LEU, arm_compare_op0, arm_compare_op1);"
 )
 
 (define_expand "sgeu"
   [(set (match_operand:SI 0 "s_register_operand" "")
 	(geu:SI (match_dup 1) (const_int 0)))]
-  "TARGET_ARM"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT"
   "operands[1] = arm_gen_compare_reg (GEU, arm_compare_op0, arm_compare_op1);"
 )
 
 (define_expand "sltu"
   [(set (match_operand:SI 0 "s_register_operand" "")
 	(ltu:SI (match_dup 1) (const_int 0)))]
-  "TARGET_ARM"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT"
   "operands[1] = arm_gen_compare_reg (LTU, arm_compare_op0, arm_compare_op1);"
 )
 
 (define_expand "sunordered"
   [(set (match_operand:SI 0 "s_register_operand" "")
 	(unordered:SI (match_dup 1) (const_int 0)))]
-  "TARGET_ARM && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
   "operands[1] = arm_gen_compare_reg (UNORDERED, arm_compare_op0,
 				      arm_compare_op1);"
 )
@@ -7706,7 +8437,8 @@
 (define_expand "sordered"
   [(set (match_operand:SI 0 "s_register_operand" "")
 	(ordered:SI (match_dup 1) (const_int 0)))]
-  "TARGET_ARM && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
   "operands[1] = arm_gen_compare_reg (ORDERED, arm_compare_op0,
 				      arm_compare_op1);"
 )
@@ -7714,7 +8446,8 @@
 (define_expand "sungt"
   [(set (match_operand:SI 0 "s_register_operand" "")
 	(ungt:SI (match_dup 1) (const_int 0)))]
-  "TARGET_ARM && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
   "operands[1] = arm_gen_compare_reg (UNGT, arm_compare_op0,
 				      arm_compare_op1);"
 )
@@ -7722,7 +8455,8 @@
 (define_expand "sunge"
   [(set (match_operand:SI 0 "s_register_operand" "")
 	(unge:SI (match_dup 1) (const_int 0)))]
-  "TARGET_ARM && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
   "operands[1] = arm_gen_compare_reg (UNGE, arm_compare_op0,
 				      arm_compare_op1);"
 )
@@ -7730,7 +8464,8 @@
 (define_expand "sunlt"
   [(set (match_operand:SI 0 "s_register_operand" "")
 	(unlt:SI (match_dup 1) (const_int 0)))]
-  "TARGET_ARM && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
   "operands[1] = arm_gen_compare_reg (UNLT, arm_compare_op0,
 				      arm_compare_op1);"
 )
@@ -7738,7 +8473,8 @@
 (define_expand "sunle"
   [(set (match_operand:SI 0 "s_register_operand" "")
 	(unle:SI (match_dup 1) (const_int 0)))]
-  "TARGET_ARM && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
   "operands[1] = arm_gen_compare_reg (UNLE, arm_compare_op0,
 				      arm_compare_op1);"
 )
@@ -7749,14 +8485,16 @@
 ; (define_expand "suneq"
 ;   [(set (match_operand:SI 0 "s_register_operand" "")
 ; 	(uneq:SI (match_dup 1) (const_int 0)))]
-;   "TARGET_ARM && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
+;; APPLE LOCAL v7 support. Merge from mainline
+;   "TARGET_32BIT && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
 ;   "gcc_unreachable ();"
 ; )
 ;
 ; (define_expand "sltgt"
 ;   [(set (match_operand:SI 0 "s_register_operand" "")
 ; 	(ltgt:SI (match_dup 1) (const_int 0)))]
-;   "TARGET_ARM && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
+;; APPLE LOCAL v7 support. Merge from mainline
+;   "TARGET_32BIT && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
 ;   "gcc_unreachable ();"
 ; )
 
@@ -7767,6 +8505,8 @@
   "TARGET_ARM"
   "mov%D1\\t%0, #0\;mov%d1\\t%0, #1"
   [(set_attr "conds" "use")
+;; APPLE LOCAL v7 support. Merge from Codesourcery
+   (set_attr "insn" "mov")
    (set_attr "length" "8")]
 )
 
@@ -7777,6 +8517,8 @@
   "TARGET_ARM"
   "mov%D1\\t%0, #0\;mvn%d1\\t%0, #0"
   [(set_attr "conds" "use")
+;; APPLE LOCAL v7 support. Merge from Codesourcery
+   (set_attr "insn" "mov")
    (set_attr "length" "8")]
 )
 
@@ -7787,6 +8529,8 @@
   "TARGET_ARM"
   "mov%D1\\t%0, #0\;mvn%d1\\t%0, #1"
   [(set_attr "conds" "use")
+;; APPLE LOCAL v7 support. Merge from Codesourcery
+   (set_attr "insn" "mov")
    (set_attr "length" "8")]
 )
 
@@ -7798,7 +8542,8 @@
 	(if_then_else:SI (match_operand 1 "arm_comparison_operator" "")
 			 (match_operand:SI 2 "arm_not_operand" "")
 			 (match_operand:SI 3 "arm_not_operand" "")))]
-  "TARGET_ARM"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT"
   "
   {
     enum rtx_code code = GET_CODE (operands[1]);
@@ -7817,7 +8562,8 @@
 	(if_then_else:SF (match_operand 1 "arm_comparison_operator" "")
 			 (match_operand:SF 2 "s_register_operand" "")
 			 (match_operand:SF 3 "nonmemory_operand" "")))]
-  "TARGET_ARM"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT"
   "
   {
     enum rtx_code code = GET_CODE (operands[1]);
@@ -7842,7 +8588,8 @@
 	(if_then_else:DF (match_operand 1 "arm_comparison_operator" "")
 			 (match_operand:DF 2 "s_register_operand" "")
 			 (match_operand:DF 3 "arm_float_add_operand" "")))]
-  "TARGET_ARM && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
   "
   {
     enum rtx_code code = GET_CODE (operands[1]);
@@ -7874,7 +8621,10 @@
    mvn%d3\\t%0, #%B1\;mov%D3\\t%0, %2
    mvn%d3\\t%0, #%B1\;mvn%D3\\t%0, #%B2"
   [(set_attr "length" "4,4,4,4,8,8,8,8")
-   (set_attr "conds" "use")]
+;; APPLE LOCAL begin v7 support. Merge from Codesourcery
+   (set_attr "conds" "use")
+   (set_attr "insn" "mov,mvn,mov,mvn,mov,mov,mvn,mvn")]
+;; APPLE LOCAL end v7 support. Merge from Codesourcery
 )
 
 (define_insn "*movsfcc_soft_insn"
@@ -7887,7 +8637,10 @@
   "@
    mov%D3\\t%0, %2
    mov%d3\\t%0, %1"
-  [(set_attr "conds" "use")]
+;; APPLE LOCAL begin v7 support. Merge from Codesourcery
+  [(set_attr "conds" "use")
+   (set_attr "insn" "mov")]
+;; APPLE LOCAL end v7 support. Merge from Codesourcery
 )
 
 
@@ -7917,10 +8670,38 @@
   [(set_attr "predicable" "yes")]
 )
 
+;; APPLE LOCAL begin v7 support. Merge from mainline
+(define_insn "*thumb2_jump"
+  [(set (pc)
+	(label_ref (match_operand 0 "" "")))]
+  "TARGET_THUMB2"
+  "*
+    if (arm_ccfsm_state == 1 || arm_ccfsm_state == 2)
+      {
+        arm_ccfsm_state += 2;
+        return \"\";
+      }
+    return \"b\\t%l0\";
+  "
+  [(set (attr "far_jump")
+        (if_then_else
+	    (eq_attr "length" "4")
+	    (const_string "yes")
+	    (const_string "no")))
+   (set (attr "length") 
+        (if_then_else
+	    (and (ge (minus (match_dup 0) (pc)) (const_int -2044))
+		 (le (minus (match_dup 0) (pc)) (const_int 2048)))
+  	    (const_int 2)
+	    (const_int 4)))]
+)
+;; APPLE LOCAL end v7 support. Merge from mainline
+
 (define_insn "*thumb_jump"
   [(set (pc)
 	(label_ref (match_operand 0 "" "")))]
-  "TARGET_THUMB"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_THUMB1"
   "*
   if (get_attr_length (insn) == 2)
     return \"b\\t%l0\";
@@ -8022,23 +8803,27 @@
    (set_attr "predicable" "yes")]
 )
 
-(define_insn "*call_reg_thumb_v5"
+;; APPLE LOCAL v7 support. Merge from mainline
+(define_insn "*call_reg_thumb1_v5"
   [(call (mem:SI (match_operand:SI 0 "register_operand" "l*r"))
 	 (match_operand 1 "" ""))
    (use (match_operand 2 "" ""))
    (clobber (reg:SI LR_REGNUM))]
-  "TARGET_THUMB && arm_arch5"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_THUMB1 && arm_arch5"
   "blx\\t%0"
   [(set_attr "length" "2")
    (set_attr "type" "call")]
 )
 
-(define_insn "*call_reg_thumb"
+;; APPLE LOCAL v7 support. Merge from mainline
+(define_insn "*call_reg_thumb1"
   [(call (mem:SI (match_operand:SI 0 "register_operand" "l*r"))
 	 (match_operand 1 "" ""))
    (use (match_operand 2 "" ""))
    (clobber (reg:SI LR_REGNUM))]
-  "TARGET_THUMB && !arm_arch5"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_THUMB1 && !arm_arch5"
   "*
   {
     if (!TARGET_CALLER_INTERWORKING)
@@ -8129,25 +8914,29 @@
    (set_attr "predicable" "yes")]
 )
 
-(define_insn "*call_value_reg_thumb_v5"
+;; APPLE LOCAL v7 support. Merge from mainline
+(define_insn "*call_value_reg_thumb1_v5"
   [(set (match_operand 0 "" "")
 	(call (mem:SI (match_operand:SI 1 "register_operand" "l*r"))
 	      (match_operand 2 "" "")))
    (use (match_operand 3 "" ""))
    (clobber (reg:SI LR_REGNUM))]
-  "TARGET_THUMB && arm_arch5"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_THUMB1 && arm_arch5"
   "blx\\t%1"
   [(set_attr "length" "2")
    (set_attr "type" "call")]
 )
 
-(define_insn "*call_value_reg_thumb"
+;; APPLE LOCAL v7 support. Merge from mainline
+(define_insn "*call_value_reg_thumb1"
   [(set (match_operand 0 "" "")
 	(call (mem:SI (match_operand:SI 1 "register_operand" "l*r"))
 	      (match_operand 2 "" "")))
    (use (match_operand 3 "" ""))
    (clobber (reg:SI LR_REGNUM))]
-  "TARGET_THUMB && !arm_arch5"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_THUMB1 && !arm_arch5"
   "*
   {
     if (!TARGET_CALLER_INTERWORKING)
@@ -8587,7 +9376,7 @@
    (match_operand:SI 3 "" "")			; table label
    (match_operand:SI 4 "" "")]			; Out of range label
 ;; APPLE LOCAL compact switch tables
-  "TARGET_ARM || TARGET_COMPACT_SWITCH_TABLES"
+  "TARGET_32BIT || TARGET_COMPACT_SWITCH_TABLES"
   "
   {
     rtx reg;
@@ -8600,16 +9389,29 @@
 	operands[0] = reg;
       }
 
+    /* APPLE LOCAL begin v7 support. Merge from mainline */
     /* APPLE LOCAL begin compact switch tables */
-    if (TARGET_ARM)
+    if (TARGET_32BIT)
       {
     /* APPLE LOCAL end compact switch tables */
-    if (!const_ok_for_arm (INTVAL (operands[2])))
-      operands[2] = force_reg (SImode, operands[2]);
+        if (!const_ok_for_arm (INTVAL (operands[2])))
+          operands[2] = force_reg (SImode, operands[2]);
 
-    emit_jump_insn (gen_casesi_internal (operands[0], operands[2], operands[3],
-					 operands[4]));
-    DONE;
+        if (TARGET_ARM)
+          {
+	    emit_jump_insn (gen_arm_casesi_internal (operands[0], operands[2], 
+                                                     operands[3], operands[4]));
+          }
+	/* APPLE LOCAL begin 6152801 SImode thumb2 switch table dispatch */
+	/* Removed specialized PIC handling */
+	/* APPLE LOCAL end 6152801 SImode thumb2 switch table dispatch */
+        else
+          {
+            emit_jump_insn (gen_thumb2_casesi_internal (operands[0], 
+                operands[2], operands[3], operands[4]));
+          }
+	DONE;
+    /* APPLE LOCAL end v7 support. Merge from mainline */
     /* APPLE LOCAL begin compact switch tables */
       }
     else
@@ -8632,7 +9434,8 @@
 
 ;; The USE in this pattern is needed to tell flow analysis that this is
 ;; a CASESI insn.  It has no other purpose.
-(define_insn "casesi_internal"
+;; APPLE LOCAL v7 support. Merge from mainline
+(define_insn "arm_casesi_internal"
   [(parallel [(set (pc)
 	       (if_then_else
 		(leu (match_operand:SI 0 "s_register_operand" "r")
@@ -8763,7 +9566,19 @@
   [(set (pc)
 	(match_operand:SI 0 "s_register_operand" ""))]
   "TARGET_EITHER"
-  ""
+;; APPLE LOCAL begin v7 support. Merge from mainline
+  "
+  /* Thumb-2 doesn't have mov pc, reg.  Explicitly set the low bit of the
+     address and use bx.  */
+  if (TARGET_THUMB2)
+    {
+      rtx tmp;
+      tmp = gen_reg_rtx (SImode);
+      emit_insn (gen_iorsi3 (tmp, operands[0], GEN_INT(1)));
+      operands[0] = tmp;
+    }
+  "
+;; APPLE LOCAL end v7 support. Merge from mainline
 )
 
 ;; NB Never uses BX.
@@ -8787,10 +9602,12 @@
 )
 
 ;; NB Never uses BX.
-(define_insn "*thumb_indirect_jump"
+;; APPLE LOCAL v7 support. Merge from mainline
+(define_insn "*thumb1_indirect_jump"
   [(set (pc)
 	(match_operand:SI 0 "register_operand" "l*r"))]
-  "TARGET_THUMB"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_THUMB1"
   "mov\\tpc, %0"
   [(set_attr "conds" "clob")
    (set_attr "length" "2")]
@@ -8802,11 +9619,15 @@
 (define_insn "nop"
   [(const_int 0)]
   "TARGET_EITHER"
+;; APPLE LOCAL begin v7 support. Merge from mainline
   "*
+  if (TARGET_UNIFIED_ASM)
+    return \"nop\";
   if (TARGET_ARM)
     return \"mov%?\\t%|r0, %|r0\\t%@ nop\";
   return  \"mov\\tr8, r8\";
   "
+;; APPLE LOCAL end v7 support. Merge from mainline
   [(set (attr "length")
 	(if_then_else (eq_attr "is_thumb" "yes")
 		      (const_int 2)
@@ -8862,7 +9683,8 @@
 	(match_op_dup 1 [(match_op_dup 3 [(match_dup 4) (match_dup 5)])
 			 (match_dup 2)]))]
   "TARGET_ARM"
-  "%i1%?s\\t%0, %2, %4%S3"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "%i1%.\\t%0, %2, %4%S3"
   [(set_attr "conds" "set")
    (set_attr "shift" "4")
    (set (attr "type") (if_then_else (match_operand 5 "const_int_operand" "")
@@ -8880,7 +9702,8 @@
 			 (const_int 0)))
    (clobber (match_scratch:SI 0 "=r"))]
   "TARGET_ARM"
-  "%i1%?s\\t%0, %2, %4%S3"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "%i1%.\\t%0, %2, %4%S3"
   [(set_attr "conds" "set")
    (set_attr "shift" "4")
    (set (attr "type") (if_then_else (match_operand 5 "const_int_operand" "")
@@ -8915,7 +9738,8 @@
 	(minus:SI (match_dup 1) (match_op_dup 2 [(match_dup 3)
 						 (match_dup 4)])))]
   "TARGET_ARM"
-  "sub%?s\\t%0, %1, %3%S2"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "sub%.\\t%0, %1, %3%S2"
   [(set_attr "conds" "set")
    (set_attr "shift" "3")
    (set (attr "type") (if_then_else (match_operand 4 "const_int_operand" "")
@@ -8933,7 +9757,8 @@
 	 (const_int 0)))
    (clobber (match_scratch:SI 0 "=r"))]
   "TARGET_ARM"
-  "sub%?s\\t%0, %1, %3%S2"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "sub%.\\t%0, %1, %3%S2"
   [(set_attr "conds" "set")
    (set_attr "shift" "3")
    (set (attr "type") (if_then_else (match_operand 4 "const_int_operand" "")
@@ -8951,6 +9776,8 @@
   "TARGET_ARM"
   "mov%D1\\t%0, #0\;and%d1\\t%0, %2, #1"
   [(set_attr "conds" "use")
+;; APPLE LOCAL v7 support. Merge from Codesourcery
+   (set_attr "insn" "mov")
    (set_attr "length" "8")]
 )
 
@@ -9028,6 +9855,8 @@
     return \"\";
   "
   [(set_attr "conds" "use")
+;; APPLE LOCAL v7 support. Merge from Codesourcery
+   (set_attr "insn" "mov")
    (set_attr "length" "4,4,8")]
 )
 
@@ -9075,6 +9904,8 @@
    (set_attr "length" "8,12")]
 )
 
+;; APPLE LOCAL v7 support. Merge from mainline
+;; ??? Is it worth using these conditional patterns in Thumb-2 mode?
 (define_insn "*cmp_ite0"
   [(set (match_operand 6 "dominant_cc_register" "")
 	(compare
@@ -9401,6 +10232,8 @@
 	(compare:CC_NOOV (and:SI (match_dup 4) (const_int 1))
 			 (const_int 0)))]
   "")
+;; APPLE LOCAL v7 support. Merge from mainline
+;; ??? The conditional patterns above need checking for Thumb-2 usefulness
 
 (define_insn "*negscc"
   [(set (match_operand:SI 0 "s_register_operand" "=r")
@@ -9490,6 +10323,8 @@
    (set_attr "length" "8,8,12")]
 )
 
+;; APPLE LOCAL v7 support. Merge from mainline
+;; ??? The patterns below need checking for Thumb-2 usefulness.
 (define_insn "*ifcompare_plus_move"
   [(set (match_operand:SI 0 "s_register_operand" "=r,r")
 	(if_then_else:SI (match_operator 6 "arm_comparison_operator"
@@ -9743,6 +10578,8 @@
    mov%d4\\t%0, %1\;mvn%D4\\t%0, %2
    mvn%d4\\t%0, #%B1\;mvn%D4\\t%0, %2"
   [(set_attr "conds" "use")
+;; APPLE LOCAL v7 support. Merge from Codesourcery
+   (set_attr "insn" "mvn")
    (set_attr "length" "4,8,8")]
 )
 
@@ -9775,6 +10612,8 @@
    mov%D4\\t%0, %1\;mvn%d4\\t%0, %2
    mvn%D4\\t%0, #%B1\;mvn%d4\\t%0, %2"
   [(set_attr "conds" "use")
+;; APPLE LOCAL v7 support. Merge from Codesourcery
+   (set_attr "insn" "mvn")
    (set_attr "length" "4,8,8")]
 )
 
@@ -9812,6 +10651,8 @@
   [(set_attr "conds" "use")
    (set_attr "shift" "2")
    (set_attr "length" "4,8,8")
+;; APPLE LOCAL v7 support. Merge from Codesourcery
+   (set_attr "insn" "mov")
    (set (attr "type") (if_then_else (match_operand 3 "const_int_operand" "")
 		      (const_string "alu_shift")
 		      (const_string "alu_shift_reg")))]
@@ -9851,6 +10692,8 @@
   [(set_attr "conds" "use")
    (set_attr "shift" "2")
    (set_attr "length" "4,8,8")
+;; APPLE LOCAL v7 support. Merge from Codesourcery
+   (set_attr "insn" "mov")
    (set (attr "type") (if_then_else (match_operand 3 "const_int_operand" "")
 		      (const_string "alu_shift")
 		      (const_string "alu_shift_reg")))]
@@ -9891,6 +10734,8 @@
   [(set_attr "conds" "use")
    (set_attr "shift" "1")
    (set_attr "length" "8")
+;; APPLE LOCAL v7 support. Merge from Codesourcery
+   (set_attr "insn" "mov")
    (set (attr "type") (if_then_else
 		        (and (match_operand 2 "const_int_operand" "")
                              (match_operand 4 "const_int_operand" ""))
@@ -9927,6 +10772,8 @@
   "TARGET_ARM"
   "mvn%d5\\t%0, %1\;%I6%D5\\t%0, %2, %3"
   [(set_attr "conds" "use")
+;; APPLE LOCAL v7 support. Merge from Codesourcery
+   (set_attr "insn" "mvn")
    (set_attr "length" "8")]
 )
 
@@ -9959,6 +10806,8 @@
   "TARGET_ARM"
   "mvn%D5\\t%0, %1\;%I6%d5\\t%0, %2, %3"
   [(set_attr "conds" "use")
+;; APPLE LOCAL v7 support. Merge from Codesourcery
+   (set_attr "insn" "mvn")
    (set_attr "length" "8")]
 )
 
@@ -10079,10 +10928,11 @@
       {
 	rtx ops[3];
 
+        /* APPLE LOCAL begin v7 support. Merge from mainline */
 	if (val1 == 4 || val2 == 4)
 	  /* Other val must be 8, since we know they are adjacent and neither
 	     is zero.  */
-	  output_asm_insn (\"ldm%?ib\\t%0, {%1, %2}\", ldm);
+	  output_asm_insn (\"ldm%(ib%)\\t%0, {%1, %2}\", ldm);
 	else if (const_ok_for_arm (val1) || const_ok_for_arm (-val1))
 	  {
 	    ldm[0] = ops[0] = operands[4];
@@ -10090,10 +10940,11 @@
 	    ops[2] = GEN_INT (val1);
 	    output_add_immediate (ops);
 	    if (val1 < val2)
-	      output_asm_insn (\"ldm%?ia\\t%0, {%1, %2}\", ldm);
+	      output_asm_insn (\"ldm%(ia%)\\t%0, {%1, %2}\", ldm);
 	    else
-	      output_asm_insn (\"ldm%?da\\t%0, {%1, %2}\", ldm);
+	      output_asm_insn (\"ldm%(da%)\\t%0, {%1, %2}\", ldm);
 	  }
+        /* APPLE LOCAL end v7 support. Merge from mainline */
 	else
 	  {
 	    /* Offset is out of range for a single add, so use two ldr.  */
@@ -10106,20 +10957,22 @@
 	    output_asm_insn (\"ldr%?\\t%0, [%1, %2]\", ops);
 	  }
       }
+    /* APPLE LOCAL begin v7 support. Merge from mainline */
     else if (val1 != 0)
       {
 	if (val1 < val2)
-	  output_asm_insn (\"ldm%?da\\t%0, {%1, %2}\", ldm);
+	  output_asm_insn (\"ldm%(da%)\\t%0, {%1, %2}\", ldm);
 	else
-	  output_asm_insn (\"ldm%?ia\\t%0, {%1, %2}\", ldm);
+	  output_asm_insn (\"ldm%(ia%)\\t%0, {%1, %2}\", ldm);
       }
     else
       {
 	if (val1 < val2)
-	  output_asm_insn (\"ldm%?ia\\t%0, {%1, %2}\", ldm);
+	  output_asm_insn (\"ldm%(ia%)\\t%0, {%1, %2}\", ldm);
 	else
-	  output_asm_insn (\"ldm%?da\\t%0, {%1, %2}\", ldm);
+	  output_asm_insn (\"ldm%(da%)\\t%0, {%1, %2}\", ldm);
       }
+    /* APPLE LOCAL end v7 support. Merge from mainline */
     output_asm_insn (\"%I3%?\\t%0, %1, %2\", arith);
     return \"\";
   }"
@@ -10257,16 +11110,20 @@
   operands[1] = GEN_INT (((unsigned long) INTVAL (operands[1])) >> 24);
   "
 )
+;; APPLE LOCAL v7 support. Merge from mainline
+;; ??? Check the patterns above for Thumb-2 usefulness
 
 (define_expand "prologue"
   [(clobber (const_int 0))]
   "TARGET_EITHER"
-  "if (TARGET_ARM)
+;; APPLE LOCAL begin v7 support. Merge from mainline
+  "if (TARGET_32BIT)
      arm_expand_prologue ();
    else
-     thumb_expand_prologue ();
+     thumb1_expand_prologue ();
   DONE;
   "
+;; APPLE LOCAL end v7 support. Merge from mainline
 )
 
 (define_expand "epilogue"
@@ -10275,8 +11132,10 @@
   "
   if (current_function_calls_eh_return)
     emit_insn (gen_prologue_use (gen_rtx_REG (Pmode, 2)));
-  if (TARGET_THUMB)
-    thumb_expand_epilogue ();
+  /* APPLE LOCAL begin v7 support. Merge from mainline */
+  if (TARGET_THUMB1)
+    thumb1_expand_epilogue ();
+  /* APPLE LOCAL end v7 support. Merge from mainline */
   else if (USE_RETURN_INSN (FALSE))
     {
       emit_jump_insn (gen_return ());
@@ -10298,7 +11157,8 @@
 (define_insn "sibcall_epilogue"
   [(parallel [(unspec:SI [(reg:SI LR_REGNUM)] UNSPEC_PROLOGUE_USE)
               (unspec_volatile [(return)] VUNSPEC_EPILOGUE)])]
-  "TARGET_ARM"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT"
   "*
   if (use_return_insn (FALSE, next_nonnote_insn (insn)))
     return output_return_instruction (const_true_rtx, FALSE, FALSE);
@@ -10316,12 +11176,14 @@
 (define_insn "*epilogue_insns"
   [(unspec_volatile [(return)] VUNSPEC_EPILOGUE)]
   "TARGET_EITHER"
+;; APPLE LOCAL begin v7 support. Merge from mainline
   "*
-  if (TARGET_ARM)
+  if (TARGET_32BIT)
     return arm_output_epilogue (NULL);
-  else /* TARGET_THUMB */
+  else /* TARGET_THUMB1 */
     return thumb_unexpanded_epilogue ();
   "
+;; APPLE LOCAL end v7 support. Merge from mainline
   ; Length is absolute worst case
   [(set_attr "length" "44")
    (set_attr "type" "block")
@@ -10359,6 +11221,11 @@
 ;; some extent with the conditional data operations, so we have to split them
 ;; up again here.
 
+;; APPLE LOCAL begin v7 support. Merge from mainline
+;; ??? Need to audit these splitters for Thumb-2.  Why isn't normal
+;; conditional execution sufficient?
+;; APPLE LOCAL end v7 support. Merge from mainline
+
 (define_split
   [(set (match_operand:SI 0 "s_register_operand" "")
 	(if_then_else:SI (match_operator 1 "arm_comparison_operator"
@@ -10482,6 +11349,8 @@
    mvn%D4\\t%0, %2
    mov%d4\\t%0, %1\;mvn%D4\\t%0, %2"
   [(set_attr "conds" "use")
+;; APPLE LOCAL v7 support. Merge from Codesourcery
+   (set_attr "insn" "mvn")
    (set_attr "length" "4,8")]
 )
 
@@ -10521,6 +11390,8 @@
   [(set_attr "conds" "clob")
    (set_attr "length" "12")]
 )
+;; APPLE LOCAL v7 support. Merge from mainline
+;; ??? The above patterns need auditing for Thumb-2
 
 ;; Push multiple registers to the stack.  Registers are in parallel (use ...)
 ;; expressions.  For simplicity, the first register is also in the unspec
@@ -10530,21 +11401,27 @@
     [(set (match_operand:BLK 0 "memory_operand" "=m")
 	  (unspec:BLK [(match_operand:SI 1 "s_register_operand" "r")]
 		      UNSPEC_PUSH_MULT))])]
-  "TARGET_ARM"
+;; APPLE LOCAL begin v7 support. Merge from mainline
+  "TARGET_32BIT"
   "*
   {
     int num_saves = XVECLEN (operands[2], 0);
      
     /* For the StrongARM at least it is faster to
-       use STR to store only a single register.  */
-    if (num_saves == 1)
+       use STR to store only a single register.
+       In Thumb mode always use push, and the assmebler will pick
+       something approporiate.  */
+    if (num_saves == 1 && TARGET_ARM)
       output_asm_insn (\"str\\t%1, [%m0, #-4]!\", operands);
     else
       {
 	int i;
 	char pattern[100];
 
-	strcpy (pattern, \"stmfd\\t%m0!, {%1\");
+	if (TARGET_ARM)
+	    strcpy (pattern, \"stmfd\\t%m0!, {%1\");
+	else
+	    strcpy (pattern, \"push\\t{%1\");
 
 	for (i = 1; i < num_saves; i++)
 	  {
@@ -10559,6 +11436,7 @@
 
     return \"\";
   }"
+;; APPLE LOCAL end v7 support. Merge from mainline
   [(set_attr "type" "store4")]
 )
 
@@ -10578,7 +11456,8 @@
     [(set (match_operand:BLK 0 "memory_operand" "=m")
 	  (unspec:BLK [(match_operand:XF 1 "f_register_operand" "f")]
 		      UNSPEC_PUSH_MULT))])]
-  "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
   "*
   {
     char pattern[100];
@@ -10626,7 +11505,8 @@
 
 (define_insn "consttable_1"
   [(unspec_volatile [(match_operand 0 "" "")] VUNSPEC_POOL_1)]
-  "TARGET_THUMB"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_THUMB1"
   "*
   making_const_table = TRUE;
   assemble_integer (operands[0], 1, BITS_PER_WORD, 1);
@@ -10638,7 +11518,8 @@
 
 (define_insn "consttable_2"
   [(unspec_volatile [(match_operand 0 "" "")] VUNSPEC_POOL_2)]
-  "TARGET_THUMB"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_THUMB1"
   "*
   making_const_table = TRUE;
   assemble_integer (operands[0], 2, BITS_PER_WORD, 1);
@@ -10696,12 +11577,39 @@
   [(set_attr "length" "8")]
 )
 
+;; APPLE LOCAL begin v7 support. Merge from Codesourcery
+(define_insn "consttable_16"
+  [(unspec_volatile [(match_operand 0 "" "")] VUNSPEC_POOL_16)]
+  "TARGET_EITHER"
+  "*
+  {
+    making_const_table = TRUE;
+    switch (GET_MODE_CLASS (GET_MODE (operands[0])))
+      {
+       case MODE_FLOAT:
+        {
+          REAL_VALUE_TYPE r;
+          REAL_VALUE_FROM_CONST_DOUBLE (r, operands[0]);
+          assemble_real (r, GET_MODE (operands[0]), BITS_PER_WORD);
+          break;
+        }
+      default:
+        assemble_integer (operands[0], 16, BITS_PER_WORD, 1);
+        break;
+      }
+    return \"\";
+  }"
+  [(set_attr "length" "16")]
+)
+
+;; APPLE LOCAL end v7 support. Merge from Codesourcery
 ;; Miscellaneous Thumb patterns
 
 (define_expand "tablejump"
   [(parallel [(set (pc) (match_operand:SI 0 "register_operand" ""))
 	      (use (label_ref (match_operand 1 "" "")))])]
-  "TARGET_THUMB"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_THUMB1"
   "
   if (flag_pic)
     {
@@ -10716,10 +11624,12 @@
 )
 
 ;; NB never uses BX.
-(define_insn "*thumb_tablejump"
+;; APPLE LOCAL v7 support. Merge from mainline
+(define_insn "*thumb1_tablejump"
   [(set (pc) (match_operand:SI 0 "register_operand" "l*r"))
    (use (label_ref (match_operand 1 "" "")))]
-  "TARGET_THUMB"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_THUMB1"
   "mov\\t%|pc, %0"
   [(set_attr "length" "2")]
 )
@@ -10729,14 +11639,18 @@
 (define_insn "clzsi2"
   [(set (match_operand:SI 0 "s_register_operand" "=r")
 	(clz:SI (match_operand:SI 1 "s_register_operand" "r")))]
-  "TARGET_ARM && arm_arch5"
+;; APPLE LOCAL begin v7 support. Merge from mainline
+  "TARGET_32BIT && arm_arch5"
   "clz%?\\t%0, %1"
-  [(set_attr "predicable" "yes")])
+  [(set_attr "predicable" "yes")
+   (set_attr "insn" "clz")])
+;; APPLE LOCAL end v7 support. Merge from mainline
 
 (define_expand "ffssi2"
   [(set (match_operand:SI 0 "s_register_operand" "")
 	(ffs:SI (match_operand:SI 1 "s_register_operand" "")))]
-  "TARGET_ARM && arm_arch5"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT && arm_arch5"
   "
   {
     rtx t1, t2, t3;
@@ -10756,7 +11670,8 @@
 (define_expand "ctzsi2"
   [(set (match_operand:SI 0 "s_register_operand" "")
 	(ctz:SI (match_operand:SI 1 "s_register_operand" "")))]
-  "TARGET_ARM && arm_arch5"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT && arm_arch5"
   "
   {
     rtx t1, t2, t3;
@@ -10779,7 +11694,8 @@
   [(prefetch (match_operand:SI 0 "address_operand" "p")
 	     (match_operand:SI 1 "" "")
 	     (match_operand:SI 2 "" ""))]
-  "TARGET_ARM && arm_arch5e"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT && arm_arch5e"
   "pld\\t%a0")
 
 ;; General predication pattern
@@ -10788,7 +11704,8 @@
   [(match_operator 0 "arm_comparison_operator"
     [(match_operand 1 "cc_register" "")
      (const_int 0)])]
-  "TARGET_ARM"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_32BIT"
   ""
 )
 
@@ -10809,7 +11726,8 @@
   "TARGET_EITHER"
   "
   {
-    if (TARGET_ARM)
+    /* APPLE LOCAL v7 support. Merge from mainline */
+    if (TARGET_32BIT)
       emit_insn (gen_arm_eh_return (operands[0]));
     else
       emit_insn (gen_thumb_eh_return (operands[0]));
@@ -10837,7 +11755,8 @@
   [(unspec_volatile [(match_operand:SI 0 "s_register_operand" "l")]
 		    VUNSPEC_EH_RETURN)
    (clobber (match_scratch:SI 1 "=&l"))]
-  "TARGET_THUMB"
+;; APPLE LOCAL v7 support. Merge from mainline
+  "TARGET_THUMB1"
   "#"
   "&& reload_completed"
   [(const_int 0)]
@@ -11022,8 +11941,20 @@
 (include "fpa.md")
 ;; Load the Maverick co-processor patterns
 (include "cirrus.md")
+;; APPLE LOCAL begin v7 support. Merge from Codesourcery
+;; Vector bits common to IWMMXT and Neon
+(include "vec-common.md")
+;; APPLE LOCAL end v7 support. Merge from Codesourcery
 ;; Load the Intel Wireless Multimedia Extension patterns
 (include "iwmmxt.md")
 ;; Load the VFP co-processor patterns
 (include "vfp.md")
+;; APPLE LOCAL begin v7 support. Merge from mainline
+;; Thumb-2 patterns
+(include "thumb2.md")
+;; APPLE LOCAL end v7 support. Merge from mainline
+;; APPLE LOCAL begin v7 support. Merge from Codesourcery
+;; Neon patterns
+(include "neon.md")
+;; APPLE LOCAL end v7 support. Merge from Codesourcery
 

Modified: llvm-gcc-4.2/trunk/gcc/config/arm/arm.opt
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/gcc/config/arm/arm.opt?rev=76781&r1=76780&r2=76781&view=diff

==============================================================================
--- llvm-gcc-4.2/trunk/gcc/config/arm/arm.opt (original)
+++ llvm-gcc-4.2/trunk/gcc/config/arm/arm.opt Wed Jul 22 15:36:27 2009
@@ -49,8 +49,10 @@
 Target RejectNegative Joined
 Specify the name of the target architecture
 
+; APPLE LOCAL begin 6150882 use thumb2 by default for v7
 marm
-Target RejectNegative InverseMask(THUMB) Undocumented
+Target RejectNegative VarExists Var(thumb_option, 0) Undocumented
+; APPLE LOCAL end 6150882 use thumb2 by default for v7
 
 mbig-endian
 Target Report RejectNegative Mask(BIG_END)
@@ -140,9 +142,11 @@
 Target RejectNegative Joined Var(structure_size_string)
 Specify the minimum bit alignment of structures
 
+; APPLE LOCAL begin 6150882 use thumb2 by default for v7
 mthumb
-Target Report Mask(THUMB)
+Target Report Var(thumb_option) Init(-1)
 Compile for the Thumb not the ARM
+; APPLE LOCAL end 6150882 use thumb2 by default for v7
 
 ; APPLE LOCAL begin ARM interworking
 mthumb-interwork
@@ -175,3 +179,9 @@
 Target Report Mask(MS_BITFIELD_LAYOUT)
 Use Microsoft structure layout
 ; APPLE LOCAL end 5946347 ms_struct support
+; APPLE LOCAL begin v7 support. Merge from Codesourcery
+
+mvectorize-with-neon-quad
+Target Report Mask(NEON_VECTORIZE_QUAD)
+Use Neon quad-word (rather than double-word) registers for vectorization
+; APPLE LOCAL end v7 support. Merge from Codesourcery





More information about the llvm-commits mailing list