[llvm-commits] [llvm-gcc-4.2] r103811 - in /llvm-gcc-4.2/trunk/gcc: config.gcc config/arm/arm-protos.h config/arm/arm.c config/arm/arm_neon.h config/arm/arm_neon_gcc.h config/arm/arm_neon_std.h config/arm/neon-gen-std.ml config/arm/neon-gen.ml

Bob Wilson bob.wilson at apple.com
Fri May 14 14:31:28 PDT 2010


Author: bwilson
Date: Fri May 14 16:31:28 2010
New Revision: 103811

URL: http://llvm.org/viewvc/llvm-project?rev=103811&view=rev
Log:
llvm-gcc's implementation of Neon types and intrinsics, which follows ARM's
specifications, has been causing problems when porting code from gcc.  GCC's
implementation defines the Neon types as plain vector types so that you can
freely intermix builtin vector operators with Neon intrinsics.  The ARM
standard Neon types are "containerized vectors", i.e., structs, so you cannot
use them with the usual vector operators (+, -, *, etc.).  This change adds
an optional gcc-compatibility mode for the Neon types and intrinsics to
support code written in that style of intermixing intrinsics and operators.
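
To make the difference concrete, here is an illustrative sketch of the two
styles.  The type names and shapes below are simplified stand-ins, not the
literal definitions from the generated headers:

  /* Illustrative only; the real headers spell these types differently.  */
  typedef int plain_int32x2 __attribute__ ((vector_size (8))); /* gcc style */
  typedef struct { plain_int32x2 val; } boxed_int32x2;         /* ARM style */

  plain_int32x2 add_plain (plain_int32x2 a, plain_int32x2 b)
  {
    return a + b;             /* OK: builtin vector operator */
  }

  boxed_int32x2 add_boxed (boxed_int32x2 a, boxed_int32x2 b)
  {
    boxed_int32x2 r;
    /* return a + b;  <-- error: no '+' for the struct type.  You have to
       go through an intrinsic or the contained vector member instead.  */
    r.val = a.val + b.val;
    return r;
  }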

Before going into details, I also want to mention that the plan is for Clang
to define overloaded intrinsics that will accept either ARM's containerized
types or plain vector types.  That is not yet implemented, but once it is
available, Clang users can make a choice of declaring Neon vectors with either
kind of type.  If they go with the standard containerized vector types,
they'll get strict conformance to ARM's specifications but will not be able
to use vector operators with those values.  If they go with plain vector
types, they'll lose the standard conformance but be able to mix-and-match
intrinsics with operators.

This patch is intended as a step in the direction of the Clang solution, but
since llvm-gcc does not support function overloading in C (and in any case
it implements the Neon intrinsics as preprocessor macros for other reasons),
llvm-gcc users will have to consistently use one style or the other.
If they define ARM_NEON_GCC_COMPATIBILITY before including <arm_neon.h>,
then the Neon intrinsics will be defined to work on plain vector types.
Otherwise, they'll continue to get the standard versions.
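
For example, a minimal use of the new mode (vadd_s32 and int32x2_t are
defined by the header included below; build with NEON enabled, e.g.
-mfloat-abi=softfp -mfpu=neon):

  #define ARM_NEON_GCC_COMPATIBILITY
  #include <arm_neon.h>

  int32x2_t mixed (int32x2_t a, int32x2_t b)
  {
    int32x2_t sum = vadd_s32 (a, b);  /* Neon intrinsic */
    return sum + a;                   /* builtin vector operator; legal
                                         here because int32x2_t is a plain
                                         vector type in this mode */
  }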

The patch brings back some of the awful arm_mangle_type code that fakes
the C++ mangling for Neon types to make them be mangled as if they were
the containerized vector structs.  I've addressed the previous problems
with that code by defining the Neon vector types as built-in types so that
the mangler can recognize them by their unique type nodes.  The motivation
for resurrecting that code is that we don't want binary compatibility problems
between llvm-gcc with ARM_NEON_GCC_COMPATIBILITY and Clang.  However,
the expected use of the ARM_NEON_GCC_COMPATIBILITY mode is that variables
will be declared with plain vector types.  That usage should continue to
work without changes in Clang (where ARM_NEON_GCC_COMPATIBILITY will not
be needed).  Likewise, code that does not use that mode will continue to
work unmodified in Clang.  Code using the standard Neon type names when
ARM_NEON_GCC_COMPATIBILITY is defined may require changes when moving to
Clang, but those are basically the same changes that would otherwise be
required when porting from gcc to llvm-gcc.
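
As a quick sanity check on the binary-compatibility point, consider the C++
mangling (assuming the usual Itanium scheme; the AAPCS names come from the
arm_mangle_map table added below):

  #define ARM_NEON_GCC_COMPATIBILITY
  #include <arm_neon.h>

  void foo (int32x2_t) {}
  /* int32x2_t is the builtin __neon_int32x2_t, which arm_mangle_type maps
     to "16__simd64_int32_t", so the symbol should come out as
         _Z3foo16__simd64_int32_t
     the same name a standard containerized __simd64_int32_t parameter
     produces, keeping the two modes (and Clang) link-compatible.  */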

Down to the details.... There are now two versions of the neon-gen.ml
generator for arm_neon.h, and two versions of the arm_neon.h output.  The
neon-gen-std.ml version generates arm_neon_std.h, which is the version using
the standard containerized vector types.  The neon-gen.ml version generates
arm_neon_gcc.h, which is the gcc-compatible version.  Neither of those headers
should ever be used directly, since they will not be available with Clang.
The arm_neon.h header is now a simple wrapper that selects between the two
versions based on whether ARM_NEON_GCC_COMPATIBILITY is defined.

Added:
    llvm-gcc-4.2/trunk/gcc/config/arm/arm_neon.h
    llvm-gcc-4.2/trunk/gcc/config/arm/arm_neon_gcc.h
    llvm-gcc-4.2/trunk/gcc/config/arm/arm_neon_std.h
      - copied, changed from r103724, llvm-gcc-4.2/trunk/gcc/config/arm/arm_neon.h
    llvm-gcc-4.2/trunk/gcc/config/arm/neon-gen-std.ml
Modified:
    llvm-gcc-4.2/trunk/gcc/config.gcc
    llvm-gcc-4.2/trunk/gcc/config/arm/arm-protos.h
    llvm-gcc-4.2/trunk/gcc/config/arm/arm.c
    llvm-gcc-4.2/trunk/gcc/config/arm/neon-gen.ml

Modified: llvm-gcc-4.2/trunk/gcc/config.gcc
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/gcc/config.gcc?rev=103811&r1=103810&r2=103811&view=diff
==============================================================================
--- llvm-gcc-4.2/trunk/gcc/config.gcc (original)
+++ llvm-gcc-4.2/trunk/gcc/config.gcc Fri May 14 16:31:28 2010
@@ -264,7 +264,7 @@
 arm*-*-*)
 	cpu_type=arm
         # APPLE LOCAL ARM v7 support, merge from Codesourcery.
-	extra_headers="mmintrin.h arm_neon.h"
+	extra_headers="mmintrin.h arm_neon.h arm_neon_std.h arm_neon_gcc.h"
 # LLVM LOCAL begin
 	out_cxx_file=arm/llvm-arm.cpp
 # LLVM LOCAL end                               
@@ -822,7 +822,7 @@
 	extra_options="${extra_options} arm/darwin.opt"
         tm_file="${tm_file} arm/darwin.h"
         tmake_file="${tmake_file} arm/t-slibgcc-iphoneos"
-	extra_headers="arm_neon.h"
+	extra_headers="arm_neon.h arm_neon_std.h arm_neon_gcc.h"
         ;;
 # APPLE LOCAL end ARM darwin target
 arm*-wince-pe*)

Modified: llvm-gcc-4.2/trunk/gcc/config/arm/arm-protos.h
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/gcc/config/arm/arm-protos.h?rev=103811&r1=103810&r2=103811&view=diff
==============================================================================
--- llvm-gcc-4.2/trunk/gcc/config/arm/arm-protos.h (original)
+++ llvm-gcc-4.2/trunk/gcc/config/arm/arm-protos.h Fri May 14 16:31:28 2010
@@ -258,8 +258,10 @@
 /* APPLE LOCAL 5946347 ms_struct support */
 extern int arm_field_ms_struct_align (tree);
 
-/* LLVM LOCAL pr5037 removed arm_mangle_type */
+/* APPLE LOCAL begin v7 support. Merge from Codesourcery */
+extern const char *arm_mangle_type (tree);
 
+/* APPLE LOCAL end v7 support. Merge from Codesourcery */
 /* APPLE LOCAL v7 support. Fix compact switch tables */
 extern void arm_asm_output_addr_diff_vec (FILE *file, rtx LABEL, rtx BODY);
 

Modified: llvm-gcc-4.2/trunk/gcc/config/arm/arm.c
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/gcc/config/arm/arm.c?rev=103811&r1=103810&r2=103811&view=diff
==============================================================================
--- llvm-gcc-4.2/trunk/gcc/config/arm/arm.c (original)
+++ llvm-gcc-4.2/trunk/gcc/config/arm/arm.c Fri May 14 16:31:28 2010
@@ -146,7 +146,6 @@
 /* LLVM LOCAL begin */
 static tree arm_type_promotes_to(tree);
 static bool arm_is_fp16(tree);
-static const char * arm_mangle_type (tree type);
 /* LLVM LOCAL end */
 static int arm_comp_type_attributes (tree, tree);
 static void arm_set_default_type_attributes (tree);
@@ -16775,8 +16774,48 @@
 
 /* LLVM LOCAL pr5037 removed make_neon_float_type */
 
-/* LLVM LOCAL begin multi-vector types */
 #ifdef ENABLE_LLVM
+/* LLVM LOCAL begin use builtin vector types for easier mangling */
+/* Create a new vector type node for a Neon vector.  This is just like
+   make_vector_type() but it does not enter the new type in the hash table.
+   The whole point of having these types built-in is to make them unique so
+   that the mangling function can identify them.  */
+
+static tree
+build_neonvec_type (tree innertype, int nunits)
+{
+  tree t;
+
+  t = make_node (VECTOR_TYPE);
+  TREE_TYPE (t) = TYPE_MAIN_VARIANT (innertype);
+  SET_TYPE_VECTOR_SUBPARTS (t, nunits);
+  TYPE_MODE (t) = VOIDmode;
+  TYPE_READONLY (t) = TYPE_READONLY (innertype);
+  TYPE_VOLATILE (t) = TYPE_VOLATILE (innertype);
+
+  layout_type (t);
+
+  {
+    tree index = build_int_cst (NULL_TREE, nunits - 1);
+    tree array = build_array_type (innertype, build_index_type (index));
+    tree rt = make_node (RECORD_TYPE);
+
+    TYPE_FIELDS (rt) = build_decl (FIELD_DECL, get_identifier ("f"), array);
+    DECL_CONTEXT (TYPE_FIELDS (rt)) = rt;
+    layout_type (rt);
+    TYPE_DEBUG_REPRESENTATION_TYPE (t) = rt;
+    /* In dwarfout.c, type lookup uses TYPE_UID numbers.  We want to output
+       the representation type, and we want to find that die when looking up
+       the vector type.  This is most easily achieved by making the TYPE_UID
+       numbers equal.  */
+    TYPE_UID (rt) = TYPE_UID (t);
+  }
+
+  return t;
+}
+/* LLVM LOCAL end use builtin vector types for easier mangling */
+
+/* LLVM LOCAL begin multi-vector types */
 /* Create a new builtin struct type containing NUMVECS fields (where NUMVECS
    is in the range from 1 to 4) of type VECTYPE.  */
 static tree
@@ -16810,6 +16849,57 @@
 #endif /* ENABLE_LLVM */
 /* LLVM LOCAL end multi-vector types */
 
+/* LLVM LOCAL begin use builtin vector types for easier mangling */
+typedef struct
+{
+  tree neonvec_type;
+  const char *aapcs_name;
+} arm_mangle_map_entry;
+
+enum neonvec_types {
+  neon_int8x8_type,
+  neon_int16x4_type,
+  neon_int32x2_type,
+  neon_int64x1_type,
+  neon_float32x2_type,
+  neon_poly8x8_type,
+  neon_poly16x4_type,
+  neon_uint8x8_type,
+  neon_uint16x4_type,
+  neon_uint32x2_type,
+  neon_uint64x1_type,
+  neon_int8x16_type,
+  neon_int16x8_type,
+  neon_int32x4_type,
+  neon_int64x2_type,
+  neon_float32x4_type,
+  neon_poly8x16_type,
+  neon_poly16x8_type,
+  neon_uint8x16_type,
+  neon_uint16x8_type,
+  neon_uint32x4_type,
+  neon_uint64x2_type,
+  neon_LAST_type
+};
+
+static arm_mangle_map_entry arm_mangle_map[neon_LAST_type];
+
+/* Create a unique type node for a Neon vector type and enter it in the
+   arm_mangle_map along with the corresponding mangled name.  */
+static void
+define_neonvec_type (tree elt_type, unsigned num_elts,
+                     const char *type_name, const char *mangling,
+                     enum neonvec_types neonvec)
+{
+  tree neon_type_node = build_neonvec_type(elt_type, num_elts);
+  (*lang_hooks.types.register_builtin_type) (neon_type_node, type_name);
+
+  arm_mangle_map[neonvec].neonvec_type = neon_type_node;
+  arm_mangle_map[neonvec].aapcs_name = mangling;
+}
+
+/* LLVM LOCAL end use builtin vector types for easier mangling */
+
 static void
 arm_init_neon_builtins (void)
 {
@@ -17880,6 +17970,36 @@
   (*lang_hooks.types.register_builtin_type) (intUDI_type_node,
 					     "__builtin_neon_udi");
 
+  /* LLVM LOCAL begin use builtin vector types for easier mangling */
+#define DEFINE_NEONVEC_TYPE(VECT, ELTT, NUMELTS, MANGLING) \
+  define_neonvec_type (ELTT, NUMELTS, "__neon_" #VECT "_t", \
+                       MANGLING, neon_##VECT##_type)
+
+  DEFINE_NEONVEC_TYPE(int8x8,    intQI_type_node,  8, "15__simd64_int8_t");
+  DEFINE_NEONVEC_TYPE(int16x4,   intHI_type_node,  4, "16__simd64_int16_t");
+  DEFINE_NEONVEC_TYPE(int32x2,   intSI_type_node,  2, "16__simd64_int32_t");
+  DEFINE_NEONVEC_TYPE(int64x1,   intDI_type_node,  1, "16__simd64_int64_t");
+  DEFINE_NEONVEC_TYPE(float32x2, float_type_node,  2, "18__simd64_float32_t");
+  DEFINE_NEONVEC_TYPE(poly8x8,   intQI_type_node,  8, "16__simd64_poly8_t");
+  DEFINE_NEONVEC_TYPE(poly16x4,  intHI_type_node,  4, "17__simd64_poly16_t");
+  DEFINE_NEONVEC_TYPE(uint8x8,  intUQI_type_node,  8, "16__simd64_uint8_t");
+  DEFINE_NEONVEC_TYPE(uint16x4, intUHI_type_node,  4, "17__simd64_uint16_t");
+  DEFINE_NEONVEC_TYPE(uint32x2, intUSI_type_node,  2, "17__simd64_uint32_t");
+  DEFINE_NEONVEC_TYPE(uint64x1, intUDI_type_node,  1, "17__simd64_uint64_t");
+
+  DEFINE_NEONVEC_TYPE(int8x16,   intQI_type_node, 16, "16__simd128_int8_t");
+  DEFINE_NEONVEC_TYPE(int16x8,   intHI_type_node,  8, "17__simd128_int16_t");
+  DEFINE_NEONVEC_TYPE(int32x4,   intSI_type_node,  4, "17__simd128_int32_t");
+  DEFINE_NEONVEC_TYPE(int64x2,   intDI_type_node,  2, "17__simd128_int64_t");
+  DEFINE_NEONVEC_TYPE(float32x4, float_type_node,  4, "19__simd128_float32_t");
+  DEFINE_NEONVEC_TYPE(poly8x16,  intQI_type_node, 16, "17__simd128_poly8_t");
+  DEFINE_NEONVEC_TYPE(poly16x8,  intHI_type_node,  8, "18__simd128_poly16_t");
+  DEFINE_NEONVEC_TYPE(uint8x16, intUQI_type_node, 16, "17__simd128_uint8_t");
+  DEFINE_NEONVEC_TYPE(uint16x8, intUHI_type_node,  8, "18__simd128_uint16_t");
+  DEFINE_NEONVEC_TYPE(uint32x4, intUSI_type_node,  4, "18__simd128_uint32_t");
+  DEFINE_NEONVEC_TYPE(uint64x2, intUDI_type_node,  2, "18__simd128_uint64_t");
+  /* LLVM LOCAL end use builtin vector types for easier mangling */
+
   /* LLVM LOCAL begin multi-vector types */
   (*lang_hooks.types.register_builtin_type) (V8QI2_type_node,
                                              "__neon_int8x8x2_t");
@@ -23904,18 +24024,41 @@
 }
 /* APPLE LOCAL end v7 support. Merge from mainline */
 
-/* LLVM LOCAL begin */
-static const char *
+/* A table and a function to perform ARM-specific name mangling for
+   NEON vector types in order to conform to the AAPCS (see "Procedure
+   Call Standard for the ARM Architecture", Appendix A).  To qualify
+   for emission with the mangled names defined in that document, a
+   vector type must not only be of the correct mode but also be
+   composed of NEON vector element types (e.g. __builtin_neon_qi).  */
+/* LLVM LOCAL moved arm_mangle_map declarations earlier in this file */
+const char *
 arm_mangle_type (tree type)
 {
+  /* LLVM LOCAL */
+  unsigned pos;
+
+  /* LLVM LOCAL begin half-float */
   if (arm_is_fp16(type))
     return "Dh";
+  /* LLVM LOCAL end half-float */
+
+  if (TREE_CODE (type) != VECTOR_TYPE)
+    return NULL;
+
+  /* LLVM LOCAL begin use builtin vector types for easier mangling */
+  /* Check if this type matches any of the unique vector type nodes in the
+     arm_mangle_map table.  */
+  for (pos = 0; pos < neon_LAST_type; ++pos)
+    {
+      if (type == arm_mangle_map[pos].neonvec_type)
+        return arm_mangle_map[pos].aapcs_name;
+    }
+  /* LLVM LOCAL end use builtin vector types for easier mangling */
 
   /* Use the default mangling for unrecognized (possibly user-defined)
      vector types.  */
   return NULL;
 }
-/* LLVM LOCAL end */
 
 void
 arm_asm_output_addr_diff_vec (FILE *file, rtx label, rtx body)

Added: llvm-gcc-4.2/trunk/gcc/config/arm/arm_neon.h
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/gcc/config/arm/arm_neon.h?rev=103811&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/gcc/config/arm/arm_neon.h (added)
+++ llvm-gcc-4.2/trunk/gcc/config/arm/arm_neon.h Fri May 14 16:31:28 2010
@@ -0,0 +1,38 @@
+/* ARM NEON intrinsics include file.
+   This file is part of GCC.
+
+   GCC is free software; you can redistribute it and/or modify it
+   under the terms of the GNU General Public License as published
+   by the Free Software Foundation; either version 2, or (at your
+   option) any later version.
+
+   GCC is distributed in the hope that it will be useful, but WITHOUT
+   ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+   or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public
+   License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with GCC; see the file COPYING.  If not, write to the
+   Free Software Foundation, 51 Franklin Street, Fifth Floor, Boston,
+   MA 02110-1301, USA.  */
+
+/* As a special exception, if you include this header file into source
+   files compiled by GCC, this header file does not by itself cause
+   the resulting executable to be covered by the GNU General Public
+   License.  This exception does not however invalidate any other
+   reasons why the executable file might be covered by the GNU General
+   Public License.  */
+
+/* llvm-gcc provides two different versions of the NEON types and
+   intrinsics.  The default versions follow the standard definitions
+   specified by ARM.  For backward compatibility with GCC, alternate
+   versions are provided where the intrinsics will accept arguments with
+   GCC's vector types instead of the "containerized vector" types
+   specified by ARM.  Define the ARM_NEON_GCC_COMPATIBILITY macro to
+   select these alternate versions of the NEON types and intrinsics.  */
+
+#ifdef ARM_NEON_GCC_COMPATIBILITY
+#include <arm_neon_gcc.h>
+#else
+#include <arm_neon_std.h>
+#endif

Added: llvm-gcc-4.2/trunk/gcc/config/arm/arm_neon_gcc.h
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/gcc/config/arm/arm_neon_gcc.h?rev=103811&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/gcc/config/arm/arm_neon_gcc.h (added)
+++ llvm-gcc-4.2/trunk/gcc/config/arm/arm_neon_gcc.h Fri May 14 16:31:28 2010
@@ -0,0 +1,7206 @@
+/* Internal definitions for GCC-compatible NEON types and intrinsics.
+   Do not include this file directly; please use <arm_neon.h> and define
+   the ARM_NEON_GCC_COMPATIBILITY macro.
+
+   This file is generated automatically using neon-gen.ml.
+   Please do not edit manually.
+
+   Copyright (C) 2006, 2007 Free Software Foundation, Inc.
+   Contributed by CodeSourcery.
+
+   This file is part of GCC.
+
+   GCC is free software; you can redistribute it and/or modify it
+   under the terms of the GNU General Public License as published
+   by the Free Software Foundation; either version 2, or (at your
+   option) any later version.
+
+   GCC is distributed in the hope that it will be useful, but WITHOUT
+   ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+   or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public
+   License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with GCC; see the file COPYING.  If not, write to the
+   Free Software Foundation, 51 Franklin Street, Fifth Floor, Boston,
+   MA 02110-1301, USA.  */
+
+/* As a special exception, if you include this header file into source
+   files compiled by GCC, this header file does not by itself cause
+   the resulting executable to be covered by the GNU General Public
+   License.  This exception does not however invalidate any other
+   reasons why the executable file might be covered by the GNU General
+   Public License.  */
+
+#ifndef _GCC_ARM_NEON_H
+#define _GCC_ARM_NEON_H 1
+
+#ifndef __ARM_NEON__
+#error You must enable NEON instructions (e.g. -mfloat-abi=softfp -mfpu=neon) to use arm_neon.h
+#else
+
+#ifdef __cplusplus
+extern "C" {
+#define __neon_ptr_cast(ty, ptr) reinterpret_cast<ty>(ptr)
+#else
+#define __neon_ptr_cast(ty, ptr) (ty)(ptr)
+#endif
+
+#include <stdint.h>
+
+typedef __builtin_neon_sf float32_t;
+typedef __builtin_neon_poly8 poly8_t;
+typedef __builtin_neon_poly16 poly16_t;
+
+typedef __neon_int8x8_t int8x8_t;
+typedef __neon_int16x4_t int16x4_t;
+typedef __neon_int32x2_t int32x2_t;
+typedef __neon_int64x1_t int64x1_t;
+typedef __neon_float32x2_t float32x2_t;
+typedef __neon_poly8x8_t poly8x8_t;
+typedef __neon_poly16x4_t poly16x4_t;
+typedef __neon_uint8x8_t uint8x8_t;
+typedef __neon_uint16x4_t uint16x4_t;
+typedef __neon_uint32x2_t uint32x2_t;
+typedef __neon_uint64x1_t uint64x1_t;
+typedef __neon_int8x16_t int8x16_t;
+typedef __neon_int16x8_t int16x8_t;
+typedef __neon_int32x4_t int32x4_t;
+typedef __neon_int64x2_t int64x2_t;
+typedef __neon_float32x4_t float32x4_t;
+typedef __neon_poly8x16_t poly8x16_t;
+typedef __neon_poly16x8_t poly16x8_t;
+typedef __neon_uint8x16_t uint8x16_t;
+typedef __neon_uint16x8_t uint16x8_t;
+typedef __neon_uint32x4_t uint32x4_t;
+typedef __neon_uint64x2_t uint64x2_t;
+
+typedef struct int8x8x2_t
+{
+  int8x8_t val[2];
+} int8x8x2_t;
+
+typedef struct int8x16x2_t
+{
+  int8x16_t val[2];
+} int8x16x2_t;
+
+typedef struct int16x4x2_t
+{
+  int16x4_t val[2];
+} int16x4x2_t;
+
+typedef struct int16x8x2_t
+{
+  int16x8_t val[2];
+} int16x8x2_t;
+
+typedef struct int32x2x2_t
+{
+  int32x2_t val[2];
+} int32x2x2_t;
+
+typedef struct int32x4x2_t
+{
+  int32x4_t val[2];
+} int32x4x2_t;
+
+typedef struct int64x1x2_t
+{
+  int64x1_t val[2];
+} int64x1x2_t;
+
+typedef struct int64x2x2_t
+{
+  int64x2_t val[2];
+} int64x2x2_t;
+
+typedef struct uint8x8x2_t
+{
+  uint8x8_t val[2];
+} uint8x8x2_t;
+
+typedef struct uint8x16x2_t
+{
+  uint8x16_t val[2];
+} uint8x16x2_t;
+
+typedef struct uint16x4x2_t
+{
+  uint16x4_t val[2];
+} uint16x4x2_t;
+
+typedef struct uint16x8x2_t
+{
+  uint16x8_t val[2];
+} uint16x8x2_t;
+
+typedef struct uint32x2x2_t
+{
+  uint32x2_t val[2];
+} uint32x2x2_t;
+
+typedef struct uint32x4x2_t
+{
+  uint32x4_t val[2];
+} uint32x4x2_t;
+
+typedef struct uint64x1x2_t
+{
+  uint64x1_t val[2];
+} uint64x1x2_t;
+
+typedef struct uint64x2x2_t
+{
+  uint64x2_t val[2];
+} uint64x2x2_t;
+
+typedef struct float32x2x2_t
+{
+  float32x2_t val[2];
+} float32x2x2_t;
+
+typedef struct float32x4x2_t
+{
+  float32x4_t val[2];
+} float32x4x2_t;
+
+typedef struct poly8x8x2_t
+{
+  poly8x8_t val[2];
+} poly8x8x2_t;
+
+typedef struct poly8x16x2_t
+{
+  poly8x16_t val[2];
+} poly8x16x2_t;
+
+typedef struct poly16x4x2_t
+{
+  poly16x4_t val[2];
+} poly16x4x2_t;
+
+typedef struct poly16x8x2_t
+{
+  poly16x8_t val[2];
+} poly16x8x2_t;
+
+typedef struct int8x8x3_t
+{
+  int8x8_t val[3];
+} int8x8x3_t;
+
+typedef struct int8x16x3_t
+{
+  int8x16_t val[3];
+} int8x16x3_t;
+
+typedef struct int16x4x3_t
+{
+  int16x4_t val[3];
+} int16x4x3_t;
+
+typedef struct int16x8x3_t
+{
+  int16x8_t val[3];
+} int16x8x3_t;
+
+typedef struct int32x2x3_t
+{
+  int32x2_t val[3];
+} int32x2x3_t;
+
+typedef struct int32x4x3_t
+{
+  int32x4_t val[3];
+} int32x4x3_t;
+
+typedef struct int64x1x3_t
+{
+  int64x1_t val[3];
+} int64x1x3_t;
+
+typedef struct int64x2x3_t
+{
+  int64x2_t val[3];
+} int64x2x3_t;
+
+typedef struct uint8x8x3_t
+{
+  uint8x8_t val[3];
+} uint8x8x3_t;
+
+typedef struct uint8x16x3_t
+{
+  uint8x16_t val[3];
+} uint8x16x3_t;
+
+typedef struct uint16x4x3_t
+{
+  uint16x4_t val[3];
+} uint16x4x3_t;
+
+typedef struct uint16x8x3_t
+{
+  uint16x8_t val[3];
+} uint16x8x3_t;
+
+typedef struct uint32x2x3_t
+{
+  uint32x2_t val[3];
+} uint32x2x3_t;
+
+typedef struct uint32x4x3_t
+{
+  uint32x4_t val[3];
+} uint32x4x3_t;
+
+typedef struct uint64x1x3_t
+{
+  uint64x1_t val[3];
+} uint64x1x3_t;
+
+typedef struct uint64x2x3_t
+{
+  uint64x2_t val[3];
+} uint64x2x3_t;
+
+typedef struct float32x2x3_t
+{
+  float32x2_t val[3];
+} float32x2x3_t;
+
+typedef struct float32x4x3_t
+{
+  float32x4_t val[3];
+} float32x4x3_t;
+
+typedef struct poly8x8x3_t
+{
+  poly8x8_t val[3];
+} poly8x8x3_t;
+
+typedef struct poly8x16x3_t
+{
+  poly8x16_t val[3];
+} poly8x16x3_t;
+
+typedef struct poly16x4x3_t
+{
+  poly16x4_t val[3];
+} poly16x4x3_t;
+
+typedef struct poly16x8x3_t
+{
+  poly16x8_t val[3];
+} poly16x8x3_t;
+
+typedef struct int8x8x4_t
+{
+  int8x8_t val[4];
+} int8x8x4_t;
+
+typedef struct int8x16x4_t
+{
+  int8x16_t val[4];
+} int8x16x4_t;
+
+typedef struct int16x4x4_t
+{
+  int16x4_t val[4];
+} int16x4x4_t;
+
+typedef struct int16x8x4_t
+{
+  int16x8_t val[4];
+} int16x8x4_t;
+
+typedef struct int32x2x4_t
+{
+  int32x2_t val[4];
+} int32x2x4_t;
+
+typedef struct int32x4x4_t
+{
+  int32x4_t val[4];
+} int32x4x4_t;
+
+typedef struct int64x1x4_t
+{
+  int64x1_t val[4];
+} int64x1x4_t;
+
+typedef struct int64x2x4_t
+{
+  int64x2_t val[4];
+} int64x2x4_t;
+
+typedef struct uint8x8x4_t
+{
+  uint8x8_t val[4];
+} uint8x8x4_t;
+
+typedef struct uint8x16x4_t
+{
+  uint8x16_t val[4];
+} uint8x16x4_t;
+
+typedef struct uint16x4x4_t
+{
+  uint16x4_t val[4];
+} uint16x4x4_t;
+
+typedef struct uint16x8x4_t
+{
+  uint16x8_t val[4];
+} uint16x8x4_t;
+
+typedef struct uint32x2x4_t
+{
+  uint32x2_t val[4];
+} uint32x2x4_t;
+
+typedef struct uint32x4x4_t
+{
+  uint32x4_t val[4];
+} uint32x4x4_t;
+
+typedef struct uint64x1x4_t
+{
+  uint64x1_t val[4];
+} uint64x1x4_t;
+
+typedef struct uint64x2x4_t
+{
+  uint64x2_t val[4];
+} uint64x2x4_t;
+
+typedef struct float32x2x4_t
+{
+  float32x2_t val[4];
+} float32x2x4_t;
+
+typedef struct float32x4x4_t
+{
+  float32x4_t val[4];
+} float32x4x4_t;
+
+typedef struct poly8x8x4_t
+{
+  poly8x8_t val[4];
+} poly8x8x4_t;
+
+typedef struct poly8x16x4_t
+{
+  poly8x16_t val[4];
+} poly8x16x4_t;
+
+typedef struct poly16x4x4_t
+{
+  poly16x4_t val[4];
+} poly16x4x4_t;
+
+typedef struct poly16x8x4_t
+{
+  poly16x8_t val[4];
+} poly16x8x4_t;
+
+
+#define vadd_s8(__a, __b) \
+  (int8x8_t)__builtin_neon_vaddv8qi (__a, __b, 1)
+
+#define vadd_s16(__a, __b) \
+  (int16x4_t)__builtin_neon_vaddv4hi (__a, __b, 1)
+
+#define vadd_s32(__a, __b) \
+  (int32x2_t)__builtin_neon_vaddv2si (__a, __b, 1)
+
+#define vadd_s64(__a, __b) \
+  (int64x1_t)__builtin_neon_vaddv1di (__a, __b, 1)
+
+#define vadd_f32(__a, __b) \
+  (float32x2_t)__builtin_neon_vaddv2sf (__a, __b, 5)
+
+#define vadd_u8(__a, __b) \
+  (uint8x8_t)__builtin_neon_vaddv8qi ((int8x8_t) __a, (int8x8_t) __b, 0)
+
+#define vadd_u16(__a, __b) \
+  (uint16x4_t)__builtin_neon_vaddv4hi ((int16x4_t) __a, (int16x4_t) __b, 0)
+
+#define vadd_u32(__a, __b) \
+  (uint32x2_t)__builtin_neon_vaddv2si ((int32x2_t) __a, (int32x2_t) __b, 0)
+
+#define vadd_u64(__a, __b) \
+  (uint64x1_t)__builtin_neon_vaddv1di ((int64x1_t) __a, (int64x1_t) __b, 0)
+
+#define vaddq_s8(__a, __b) \
+  (int8x16_t)__builtin_neon_vaddv16qi (__a, __b, 1)
+
+#define vaddq_s16(__a, __b) \
+  (int16x8_t)__builtin_neon_vaddv8hi (__a, __b, 1)
+
+#define vaddq_s32(__a, __b) \
+  (int32x4_t)__builtin_neon_vaddv4si (__a, __b, 1)
+
+#define vaddq_s64(__a, __b) \
+  (int64x2_t)__builtin_neon_vaddv2di (__a, __b, 1)
+
+#define vaddq_f32(__a, __b) \
+  (float32x4_t)__builtin_neon_vaddv4sf (__a, __b, 5)
+
+#define vaddq_u8(__a, __b) \
+  (uint8x16_t)__builtin_neon_vaddv16qi ((int8x16_t) __a, (int8x16_t) __b, 0)
+
+#define vaddq_u16(__a, __b) \
+  (uint16x8_t)__builtin_neon_vaddv8hi ((int16x8_t) __a, (int16x8_t) __b, 0)
+
+#define vaddq_u32(__a, __b) \
+  (uint32x4_t)__builtin_neon_vaddv4si ((int32x4_t) __a, (int32x4_t) __b, 0)
+
+#define vaddq_u64(__a, __b) \
+  (uint64x2_t)__builtin_neon_vaddv2di ((int64x2_t) __a, (int64x2_t) __b, 0)
+
+#define vaddl_s8(__a, __b) \
+  (int16x8_t)__builtin_neon_vaddlv8qi (__a, __b, 1)
+
+#define vaddl_s16(__a, __b) \
+  (int32x4_t)__builtin_neon_vaddlv4hi (__a, __b, 1)
+
+#define vaddl_s32(__a, __b) \
+  (int64x2_t)__builtin_neon_vaddlv2si (__a, __b, 1)
+
+#define vaddl_u8(__a, __b) \
+  (uint16x8_t)__builtin_neon_vaddlv8qi ((int8x8_t) __a, (int8x8_t) __b, 0)
+
+#define vaddl_u16(__a, __b) \
+  (uint32x4_t)__builtin_neon_vaddlv4hi ((int16x4_t) __a, (int16x4_t) __b, 0)
+
+#define vaddl_u32(__a, __b) \
+  (uint64x2_t)__builtin_neon_vaddlv2si ((int32x2_t) __a, (int32x2_t) __b, 0)
+
+#define vaddw_s8(__a, __b) \
+  (int16x8_t)__builtin_neon_vaddwv8qi (__a, __b, 1)
+
+#define vaddw_s16(__a, __b) \
+  (int32x4_t)__builtin_neon_vaddwv4hi (__a, __b, 1)
+
+#define vaddw_s32(__a, __b) \
+  (int64x2_t)__builtin_neon_vaddwv2si (__a, __b, 1)
+
+#define vaddw_u8(__a, __b) \
+  (uint16x8_t)__builtin_neon_vaddwv8qi ((int16x8_t) __a, (int8x8_t) __b, 0)
+
+#define vaddw_u16(__a, __b) \
+  (uint32x4_t)__builtin_neon_vaddwv4hi ((int32x4_t) __a, (int16x4_t) __b, 0)
+
+#define vaddw_u32(__a, __b) \
+  (uint64x2_t)__builtin_neon_vaddwv2si ((int64x2_t) __a, (int32x2_t) __b, 0)
+
+#define vhadd_s8(__a, __b) \
+  (int8x8_t)__builtin_neon_vhaddv8qi (__a, __b, 1)
+
+#define vhadd_s16(__a, __b) \
+  (int16x4_t)__builtin_neon_vhaddv4hi (__a, __b, 1)
+
+#define vhadd_s32(__a, __b) \
+  (int32x2_t)__builtin_neon_vhaddv2si (__a, __b, 1)
+
+#define vhadd_u8(__a, __b) \
+  (uint8x8_t)__builtin_neon_vhaddv8qi ((int8x8_t) __a, (int8x8_t) __b, 0)
+
+#define vhadd_u16(__a, __b) \
+  (uint16x4_t)__builtin_neon_vhaddv4hi ((int16x4_t) __a, (int16x4_t) __b, 0)
+
+#define vhadd_u32(__a, __b) \
+  (uint32x2_t)__builtin_neon_vhaddv2si ((int32x2_t) __a, (int32x2_t) __b, 0)
+
+#define vhaddq_s8(__a, __b) \
+  (int8x16_t)__builtin_neon_vhaddv16qi (__a, __b, 1)
+
+#define vhaddq_s16(__a, __b) \
+  (int16x8_t)__builtin_neon_vhaddv8hi (__a, __b, 1)
+
+#define vhaddq_s32(__a, __b) \
+  (int32x4_t)__builtin_neon_vhaddv4si (__a, __b, 1)
+
+#define vhaddq_u8(__a, __b) \
+  (uint8x16_t)__builtin_neon_vhaddv16qi ((int8x16_t) __a, (int8x16_t) __b, 0)
+
+#define vhaddq_u16(__a, __b) \
+  (uint16x8_t)__builtin_neon_vhaddv8hi ((int16x8_t) __a, (int16x8_t) __b, 0)
+
+#define vhaddq_u32(__a, __b) \
+  (uint32x4_t)__builtin_neon_vhaddv4si ((int32x4_t) __a, (int32x4_t) __b, 0)
+
+#define vrhadd_s8(__a, __b) \
+  (int8x8_t)__builtin_neon_vhaddv8qi (__a, __b, 3)
+
+#define vrhadd_s16(__a, __b) \
+  (int16x4_t)__builtin_neon_vhaddv4hi (__a, __b, 3)
+
+#define vrhadd_s32(__a, __b) \
+  (int32x2_t)__builtin_neon_vhaddv2si (__a, __b, 3)
+
+#define vrhadd_u8(__a, __b) \
+  (uint8x8_t)__builtin_neon_vhaddv8qi ((int8x8_t) __a, (int8x8_t) __b, 2)
+
+#define vrhadd_u16(__a, __b) \
+  (uint16x4_t)__builtin_neon_vhaddv4hi ((int16x4_t) __a, (int16x4_t) __b, 2)
+
+#define vrhadd_u32(__a, __b) \
+  (uint32x2_t)__builtin_neon_vhaddv2si ((int32x2_t) __a, (int32x2_t) __b, 2)
+
+#define vrhaddq_s8(__a, __b) \
+  (int8x16_t)__builtin_neon_vhaddv16qi (__a, __b, 3)
+
+#define vrhaddq_s16(__a, __b) \
+  (int16x8_t)__builtin_neon_vhaddv8hi (__a, __b, 3)
+
+#define vrhaddq_s32(__a, __b) \
+  (int32x4_t)__builtin_neon_vhaddv4si (__a, __b, 3)
+
+#define vrhaddq_u8(__a, __b) \
+  (uint8x16_t)__builtin_neon_vhaddv16qi ((int8x16_t) __a, (int8x16_t) __b, 2)
+
+#define vrhaddq_u16(__a, __b) \
+  (uint16x8_t)__builtin_neon_vhaddv8hi ((int16x8_t) __a, (int16x8_t) __b, 2)
+
+#define vrhaddq_u32(__a, __b) \
+  (uint32x4_t)__builtin_neon_vhaddv4si ((int32x4_t) __a, (int32x4_t) __b, 2)
+
+#define vqadd_s8(__a, __b) \
+  (int8x8_t)__builtin_neon_vqaddv8qi (__a, __b, 1)
+
+#define vqadd_s16(__a, __b) \
+  (int16x4_t)__builtin_neon_vqaddv4hi (__a, __b, 1)
+
+#define vqadd_s32(__a, __b) \
+  (int32x2_t)__builtin_neon_vqaddv2si (__a, __b, 1)
+
+#define vqadd_s64(__a, __b) \
+  (int64x1_t)__builtin_neon_vqaddv1di (__a, __b, 1)
+
+#define vqadd_u8(__a, __b) \
+  (uint8x8_t)__builtin_neon_vqaddv8qi ((int8x8_t) __a, (int8x8_t) __b, 0)
+
+#define vqadd_u16(__a, __b) \
+  (uint16x4_t)__builtin_neon_vqaddv4hi ((int16x4_t) __a, (int16x4_t) __b, 0)
+
+#define vqadd_u32(__a, __b) \
+  (uint32x2_t)__builtin_neon_vqaddv2si ((int32x2_t) __a, (int32x2_t) __b, 0)
+
+#define vqadd_u64(__a, __b) \
+  (uint64x1_t)__builtin_neon_vqaddv1di ((int64x1_t) __a, (int64x1_t) __b, 0)
+
+#define vqaddq_s8(__a, __b) \
+  (int8x16_t)__builtin_neon_vqaddv16qi (__a, __b, 1)
+
+#define vqaddq_s16(__a, __b) \
+  (int16x8_t)__builtin_neon_vqaddv8hi (__a, __b, 1)
+
+#define vqaddq_s32(__a, __b) \
+  (int32x4_t)__builtin_neon_vqaddv4si (__a, __b, 1)
+
+#define vqaddq_s64(__a, __b) \
+  (int64x2_t)__builtin_neon_vqaddv2di (__a, __b, 1)
+
+#define vqaddq_u8(__a, __b) \
+  (uint8x16_t)__builtin_neon_vqaddv16qi ((int8x16_t) __a, (int8x16_t) __b, 0)
+
+#define vqaddq_u16(__a, __b) \
+  (uint16x8_t)__builtin_neon_vqaddv8hi ((int16x8_t) __a, (int16x8_t) __b, 0)
+
+#define vqaddq_u32(__a, __b) \
+  (uint32x4_t)__builtin_neon_vqaddv4si ((int32x4_t) __a, (int32x4_t) __b, 0)
+
+#define vqaddq_u64(__a, __b) \
+  (uint64x2_t)__builtin_neon_vqaddv2di ((int64x2_t) __a, (int64x2_t) __b, 0)
+
+#define vaddhn_s16(__a, __b) \
+  (int8x8_t)__builtin_neon_vaddhnv8hi (__a, __b, 1)
+
+#define vaddhn_s32(__a, __b) \
+  (int16x4_t)__builtin_neon_vaddhnv4si (__a, __b, 1)
+
+#define vaddhn_s64(__a, __b) \
+  (int32x2_t)__builtin_neon_vaddhnv2di (__a, __b, 1)
+
+#define vaddhn_u16(__a, __b) \
+  (uint8x8_t)__builtin_neon_vaddhnv8hi ((int16x8_t) __a, (int16x8_t) __b, 0)
+
+#define vaddhn_u32(__a, __b) \
+  (uint16x4_t)__builtin_neon_vaddhnv4si ((int32x4_t) __a, (int32x4_t) __b, 0)
+
+#define vaddhn_u64(__a, __b) \
+  (uint32x2_t)__builtin_neon_vaddhnv2di ((int64x2_t) __a, (int64x2_t) __b, 0)
+
+#define vraddhn_s16(__a, __b) \
+  (int8x8_t)__builtin_neon_vaddhnv8hi (__a, __b, 3)
+
+#define vraddhn_s32(__a, __b) \
+  (int16x4_t)__builtin_neon_vaddhnv4si (__a, __b, 3)
+
+#define vraddhn_s64(__a, __b) \
+  (int32x2_t)__builtin_neon_vaddhnv2di (__a, __b, 3)
+
+#define vraddhn_u16(__a, __b) \
+  (uint8x8_t)__builtin_neon_vaddhnv8hi ((int16x8_t) __a, (int16x8_t) __b, 2)
+
+#define vraddhn_u32(__a, __b) \
+  (uint16x4_t)__builtin_neon_vaddhnv4si ((int32x4_t) __a, (int32x4_t) __b, 2)
+
+#define vraddhn_u64(__a, __b) \
+  (uint32x2_t)__builtin_neon_vaddhnv2di ((int64x2_t) __a, (int64x2_t) __b, 2)
+
+#define vmul_s8(__a, __b) \
+  (int8x8_t)__builtin_neon_vmulv8qi (__a, __b, 1)
+
+#define vmul_s16(__a, __b) \
+  (int16x4_t)__builtin_neon_vmulv4hi (__a, __b, 1)
+
+#define vmul_s32(__a, __b) \
+  (int32x2_t)__builtin_neon_vmulv2si (__a, __b, 1)
+
+#define vmul_f32(__a, __b) \
+  (float32x2_t)__builtin_neon_vmulv2sf (__a, __b, 5)
+
+#define vmul_u8(__a, __b) \
+  (uint8x8_t)__builtin_neon_vmulv8qi ((int8x8_t) __a, (int8x8_t) __b, 0)
+
+#define vmul_u16(__a, __b) \
+  (uint16x4_t)__builtin_neon_vmulv4hi ((int16x4_t) __a, (int16x4_t) __b, 0)
+
+#define vmul_u32(__a, __b) \
+  (uint32x2_t)__builtin_neon_vmulv2si ((int32x2_t) __a, (int32x2_t) __b, 0)
+
+#define vmul_p8(__a, __b) \
+  (poly8x8_t)__builtin_neon_vmulv8qi ((int8x8_t) __a, (int8x8_t) __b, 4)
+
+#define vmulq_s8(__a, __b) \
+  (int8x16_t)__builtin_neon_vmulv16qi (__a, __b, 1)
+
+#define vmulq_s16(__a, __b) \
+  (int16x8_t)__builtin_neon_vmulv8hi (__a, __b, 1)
+
+#define vmulq_s32(__a, __b) \
+  (int32x4_t)__builtin_neon_vmulv4si (__a, __b, 1)
+
+#define vmulq_f32(__a, __b) \
+  (float32x4_t)__builtin_neon_vmulv4sf (__a, __b, 5)
+
+#define vmulq_u8(__a, __b) \
+  (uint8x16_t)__builtin_neon_vmulv16qi ((int8x16_t) __a, (int8x16_t) __b, 0)
+
+#define vmulq_u16(__a, __b) \
+  (uint16x8_t)__builtin_neon_vmulv8hi ((int16x8_t) __a, (int16x8_t) __b, 0)
+
+#define vmulq_u32(__a, __b) \
+  (uint32x4_t)__builtin_neon_vmulv4si ((int32x4_t) __a, (int32x4_t) __b, 0)
+
+#define vmulq_p8(__a, __b) \
+  (poly8x16_t)__builtin_neon_vmulv16qi ((int8x16_t) __a, (int8x16_t) __b, 4)
+
+#define vqdmulh_s16(__a, __b) \
+  (int16x4_t)__builtin_neon_vqdmulhv4hi (__a, __b, 1)
+
+#define vqdmulh_s32(__a, __b) \
+  (int32x2_t)__builtin_neon_vqdmulhv2si (__a, __b, 1)
+
+#define vqdmulhq_s16(__a, __b) \
+  (int16x8_t)__builtin_neon_vqdmulhv8hi (__a, __b, 1)
+
+#define vqdmulhq_s32(__a, __b) \
+  (int32x4_t)__builtin_neon_vqdmulhv4si (__a, __b, 1)
+
+#define vqrdmulh_s16(__a, __b) \
+  (int16x4_t)__builtin_neon_vqdmulhv4hi (__a, __b, 3)
+
+#define vqrdmulh_s32(__a, __b) \
+  (int32x2_t)__builtin_neon_vqdmulhv2si (__a, __b, 3)
+
+#define vqrdmulhq_s16(__a, __b) \
+  (int16x8_t)__builtin_neon_vqdmulhv8hi (__a, __b, 3)
+
+#define vqrdmulhq_s32(__a, __b) \
+  (int32x4_t)__builtin_neon_vqdmulhv4si (__a, __b, 3)
+
+#define vmull_s8(__a, __b) \
+  (int16x8_t)__builtin_neon_vmullv8qi (__a, __b, 1)
+
+#define vmull_s16(__a, __b) \
+  (int32x4_t)__builtin_neon_vmullv4hi (__a, __b, 1)
+
+#define vmull_s32(__a, __b) \
+  (int64x2_t)__builtin_neon_vmullv2si (__a, __b, 1)
+
+#define vmull_u8(__a, __b) \
+  (uint16x8_t)__builtin_neon_vmullv8qi ((int8x8_t) __a, (int8x8_t) __b, 0)
+
+#define vmull_u16(__a, __b) \
+  (uint32x4_t)__builtin_neon_vmullv4hi ((int16x4_t) __a, (int16x4_t) __b, 0)
+
+#define vmull_u32(__a, __b) \
+  (uint64x2_t)__builtin_neon_vmullv2si ((int32x2_t) __a, (int32x2_t) __b, 0)
+
+#define vmull_p8(__a, __b) \
+  (poly16x8_t)__builtin_neon_vmullv8qi ((int8x8_t) __a, (int8x8_t) __b, 4)
+
+#define vqdmull_s16(__a, __b) \
+  (int32x4_t)__builtin_neon_vqdmullv4hi (__a, __b, 1)
+
+#define vqdmull_s32(__a, __b) \
+  (int64x2_t)__builtin_neon_vqdmullv2si (__a, __b, 1)
+
+#define vmla_s8(__a, __b, __c) \
+  (int8x8_t)__builtin_neon_vmlav8qi (__a, __b, __c, 1)
+
+#define vmla_s16(__a, __b, __c) \
+  (int16x4_t)__builtin_neon_vmlav4hi (__a, __b, __c, 1)
+
+#define vmla_s32(__a, __b, __c) \
+  (int32x2_t)__builtin_neon_vmlav2si (__a, __b, __c, 1)
+
+#define vmla_f32(__a, __b, __c) \
+  (float32x2_t)__builtin_neon_vmlav2sf (__a, __b, __c, 5)
+
+#define vmla_u8(__a, __b, __c) \
+  (uint8x8_t)__builtin_neon_vmlav8qi ((int8x8_t) __a, (int8x8_t) __b, (int8x8_t) __c, 0)
+
+#define vmla_u16(__a, __b, __c) \
+  (uint16x4_t)__builtin_neon_vmlav4hi ((int16x4_t) __a, (int16x4_t) __b, (int16x4_t) __c, 0)
+
+#define vmla_u32(__a, __b, __c) \
+  (uint32x2_t)__builtin_neon_vmlav2si ((int32x2_t) __a, (int32x2_t) __b, (int32x2_t) __c, 0)
+
+#define vmlaq_s8(__a, __b, __c) \
+  (int8x16_t)__builtin_neon_vmlav16qi (__a, __b, __c, 1)
+
+#define vmlaq_s16(__a, __b, __c) \
+  (int16x8_t)__builtin_neon_vmlav8hi (__a, __b, __c, 1)
+
+#define vmlaq_s32(__a, __b, __c) \
+  (int32x4_t)__builtin_neon_vmlav4si (__a, __b, __c, 1)
+
+#define vmlaq_f32(__a, __b, __c) \
+  (float32x4_t)__builtin_neon_vmlav4sf (__a, __b, __c, 5)
+
+#define vmlaq_u8(__a, __b, __c) \
+  (uint8x16_t)__builtin_neon_vmlav16qi ((int8x16_t) __a, (int8x16_t) __b, (int8x16_t) __c, 0)
+
+#define vmlaq_u16(__a, __b, __c) \
+  (uint16x8_t)__builtin_neon_vmlav8hi ((int16x8_t) __a, (int16x8_t) __b, (int16x8_t) __c, 0)
+
+#define vmlaq_u32(__a, __b, __c) \
+  (uint32x4_t)__builtin_neon_vmlav4si ((int32x4_t) __a, (int32x4_t) __b, (int32x4_t) __c, 0)
+
+#define vmlal_s8(__a, __b, __c) \
+  (int16x8_t)__builtin_neon_vmlalv8qi (__a, __b, __c, 1)
+
+#define vmlal_s16(__a, __b, __c) \
+  (int32x4_t)__builtin_neon_vmlalv4hi (__a, __b, __c, 1)
+
+#define vmlal_s32(__a, __b, __c) \
+  (int64x2_t)__builtin_neon_vmlalv2si (__a, __b, __c, 1)
+
+#define vmlal_u8(__a, __b, __c) \
+  (uint16x8_t)__builtin_neon_vmlalv8qi ((int16x8_t) __a, (int8x8_t) __b, (int8x8_t) __c, 0)
+
+#define vmlal_u16(__a, __b, __c) \
+  (uint32x4_t)__builtin_neon_vmlalv4hi ((int32x4_t) __a, (int16x4_t) __b, (int16x4_t) __c, 0)
+
+#define vmlal_u32(__a, __b, __c) \
+  (uint64x2_t)__builtin_neon_vmlalv2si ((int64x2_t) __a, (int32x2_t) __b, (int32x2_t) __c, 0)
+
+#define vqdmlal_s16(__a, __b, __c) \
+  (int32x4_t)__builtin_neon_vqdmlalv4hi (__a, __b, __c, 1)
+
+#define vqdmlal_s32(__a, __b, __c) \
+  (int64x2_t)__builtin_neon_vqdmlalv2si (__a, __b, __c, 1)
+
+#define vmls_s8(__a, __b, __c) \
+  (int8x8_t)__builtin_neon_vmlsv8qi (__a, __b, __c, 1)
+
+#define vmls_s16(__a, __b, __c) \
+  (int16x4_t)__builtin_neon_vmlsv4hi (__a, __b, __c, 1)
+
+#define vmls_s32(__a, __b, __c) \
+  (int32x2_t)__builtin_neon_vmlsv2si (__a, __b, __c, 1)
+
+#define vmls_f32(__a, __b, __c) \
+  (float32x2_t)__builtin_neon_vmlsv2sf (__a, __b, __c, 5)
+
+#define vmls_u8(__a, __b, __c) \
+  (uint8x8_t)__builtin_neon_vmlsv8qi ((int8x8_t) __a, (int8x8_t) __b, (int8x8_t) __c, 0)
+
+#define vmls_u16(__a, __b, __c) \
+  (uint16x4_t)__builtin_neon_vmlsv4hi ((int16x4_t) __a, (int16x4_t) __b, (int16x4_t) __c, 0)
+
+#define vmls_u32(__a, __b, __c) \
+  (uint32x2_t)__builtin_neon_vmlsv2si ((int32x2_t) __a, (int32x2_t) __b, (int32x2_t) __c, 0)
+
+#define vmlsq_s8(__a, __b, __c) \
+  (int8x16_t)__builtin_neon_vmlsv16qi (__a, __b, __c, 1)
+
+#define vmlsq_s16(__a, __b, __c) \
+  (int16x8_t)__builtin_neon_vmlsv8hi (__a, __b, __c, 1)
+
+#define vmlsq_s32(__a, __b, __c) \
+  (int32x4_t)__builtin_neon_vmlsv4si (__a, __b, __c, 1)
+
+#define vmlsq_f32(__a, __b, __c) \
+  (float32x4_t)__builtin_neon_vmlsv4sf (__a, __b, __c, 5)
+
+#define vmlsq_u8(__a, __b, __c) \
+  (uint8x16_t)__builtin_neon_vmlsv16qi ((int8x16_t) __a, (int8x16_t) __b, (int8x16_t) __c, 0)
+
+#define vmlsq_u16(__a, __b, __c) \
+  (uint16x8_t)__builtin_neon_vmlsv8hi ((int16x8_t) __a, (int16x8_t) __b, (int16x8_t) __c, 0)
+
+#define vmlsq_u32(__a, __b, __c) \
+  (uint32x4_t)__builtin_neon_vmlsv4si ((int32x4_t) __a, (int32x4_t) __b, (int32x4_t) __c, 0)
+
+#define vmlsl_s8(__a, __b, __c) \
+  (int16x8_t)__builtin_neon_vmlslv8qi (__a, __b, __c, 1)
+
+#define vmlsl_s16(__a, __b, __c) \
+  (int32x4_t)__builtin_neon_vmlslv4hi (__a, __b, __c, 1)
+
+#define vmlsl_s32(__a, __b, __c) \
+  (int64x2_t)__builtin_neon_vmlslv2si (__a, __b, __c, 1)
+
+#define vmlsl_u8(__a, __b, __c) \
+  (uint16x8_t)__builtin_neon_vmlslv8qi ((int16x8_t) __a, (int8x8_t) __b, (int8x8_t) __c, 0)
+
+#define vmlsl_u16(__a, __b, __c) \
+  (uint32x4_t)__builtin_neon_vmlslv4hi ((int32x4_t) __a, (int16x4_t) __b, (int16x4_t) __c, 0)
+
+#define vmlsl_u32(__a, __b, __c) \
+  (uint64x2_t)__builtin_neon_vmlslv2si ((int64x2_t) __a, (int32x2_t) __b, (int32x2_t) __c, 0)
+
+#define vqdmlsl_s16(__a, __b, __c) \
+  (int32x4_t)__builtin_neon_vqdmlslv4hi (__a, __b, __c, 1)
+
+#define vqdmlsl_s32(__a, __b, __c) \
+  (int64x2_t)__builtin_neon_vqdmlslv2si (__a, __b, __c, 1)
+
+#define vsub_s8(__a, __b) \
+  (int8x8_t)__builtin_neon_vsubv8qi (__a, __b, 1)
+
+#define vsub_s16(__a, __b) \
+  (int16x4_t)__builtin_neon_vsubv4hi (__a, __b, 1)
+
+#define vsub_s32(__a, __b) \
+  (int32x2_t)__builtin_neon_vsubv2si (__a, __b, 1)
+
+#define vsub_s64(__a, __b) \
+  (int64x1_t)__builtin_neon_vsubv1di (__a, __b, 1)
+
+#define vsub_f32(__a, __b) \
+  (float32x2_t)__builtin_neon_vsubv2sf (__a, __b, 5)
+
+#define vsub_u8(__a, __b) \
+  (uint8x8_t)__builtin_neon_vsubv8qi ((int8x8_t) __a, (int8x8_t) __b, 0)
+
+#define vsub_u16(__a, __b) \
+  (uint16x4_t)__builtin_neon_vsubv4hi ((int16x4_t) __a, (int16x4_t) __b, 0)
+
+#define vsub_u32(__a, __b) \
+  (uint32x2_t)__builtin_neon_vsubv2si ((int32x2_t) __a, (int32x2_t) __b, 0)
+
+#define vsub_u64(__a, __b) \
+  (uint64x1_t)__builtin_neon_vsubv1di ((int64x1_t) __a, (int64x1_t) __b, 0)
+
+#define vsubq_s8(__a, __b) \
+  (int8x16_t)__builtin_neon_vsubv16qi (__a, __b, 1)
+
+#define vsubq_s16(__a, __b) \
+  (int16x8_t)__builtin_neon_vsubv8hi (__a, __b, 1)
+
+#define vsubq_s32(__a, __b) \
+  (int32x4_t)__builtin_neon_vsubv4si (__a, __b, 1)
+
+#define vsubq_s64(__a, __b) \
+  (int64x2_t)__builtin_neon_vsubv2di (__a, __b, 1)
+
+#define vsubq_f32(__a, __b) \
+  (float32x4_t)__builtin_neon_vsubv4sf (__a, __b, 5)
+
+#define vsubq_u8(__a, __b) \
+  (uint8x16_t)__builtin_neon_vsubv16qi ((int8x16_t) __a, (int8x16_t) __b, 0)
+
+#define vsubq_u16(__a, __b) \
+  (uint16x8_t)__builtin_neon_vsubv8hi ((int16x8_t) __a, (int16x8_t) __b, 0)
+
+#define vsubq_u32(__a, __b) \
+  (uint32x4_t)__builtin_neon_vsubv4si ((int32x4_t) __a, (int32x4_t) __b, 0)
+
+#define vsubq_u64(__a, __b) \
+  (uint64x2_t)__builtin_neon_vsubv2di ((int64x2_t) __a, (int64x2_t) __b, 0)
+
+#define vsubl_s8(__a, __b) \
+  (int16x8_t)__builtin_neon_vsublv8qi (__a, __b, 1)
+
+#define vsubl_s16(__a, __b) \
+  (int32x4_t)__builtin_neon_vsublv4hi (__a, __b, 1)
+
+#define vsubl_s32(__a, __b) \
+  (int64x2_t)__builtin_neon_vsublv2si (__a, __b, 1)
+
+#define vsubl_u8(__a, __b) \
+  (uint16x8_t)__builtin_neon_vsublv8qi ((int8x8_t) __a, (int8x8_t) __b, 0)
+
+#define vsubl_u16(__a, __b) \
+  (uint32x4_t)__builtin_neon_vsublv4hi ((int16x4_t) __a, (int16x4_t) __b, 0)
+
+#define vsubl_u32(__a, __b) \
+  (uint64x2_t)__builtin_neon_vsublv2si ((int32x2_t) __a, (int32x2_t) __b, 0)
+
+#define vsubw_s8(__a, __b) \
+  (int16x8_t)__builtin_neon_vsubwv8qi (__a, __b, 1)
+
+#define vsubw_s16(__a, __b) \
+  (int32x4_t)__builtin_neon_vsubwv4hi (__a, __b, 1)
+
+#define vsubw_s32(__a, __b) \
+  (int64x2_t)__builtin_neon_vsubwv2si (__a, __b, 1)
+
+#define vsubw_u8(__a, __b) \
+  (uint16x8_t)__builtin_neon_vsubwv8qi ((int16x8_t) __a, (int8x8_t) __b, 0)
+
+#define vsubw_u16(__a, __b) \
+  (uint32x4_t)__builtin_neon_vsubwv4hi ((int32x4_t) __a, (int16x4_t) __b, 0)
+
+#define vsubw_u32(__a, __b) \
+  (uint64x2_t)__builtin_neon_vsubwv2si ((int64x2_t) __a, (int32x2_t) __b, 0)
+
+#define vhsub_s8(__a, __b) \
+  (int8x8_t)__builtin_neon_vhsubv8qi (__a, __b, 1)
+
+#define vhsub_s16(__a, __b) \
+  (int16x4_t)__builtin_neon_vhsubv4hi (__a, __b, 1)
+
+#define vhsub_s32(__a, __b) \
+  (int32x2_t)__builtin_neon_vhsubv2si (__a, __b, 1)
+
+#define vhsub_u8(__a, __b) \
+  (uint8x8_t)__builtin_neon_vhsubv8qi ((int8x8_t) __a, (int8x8_t) __b, 0)
+
+#define vhsub_u16(__a, __b) \
+  (uint16x4_t)__builtin_neon_vhsubv4hi ((int16x4_t) __a, (int16x4_t) __b, 0)
+
+#define vhsub_u32(__a, __b) \
+  (uint32x2_t)__builtin_neon_vhsubv2si ((int32x2_t) __a, (int32x2_t) __b, 0)
+
+#define vhsubq_s8(__a, __b) \
+  (int8x16_t)__builtin_neon_vhsubv16qi (__a, __b, 1)
+
+#define vhsubq_s16(__a, __b) \
+  (int16x8_t)__builtin_neon_vhsubv8hi (__a, __b, 1)
+
+#define vhsubq_s32(__a, __b) \
+  (int32x4_t)__builtin_neon_vhsubv4si (__a, __b, 1)
+
+#define vhsubq_u8(__a, __b) \
+  (uint8x16_t)__builtin_neon_vhsubv16qi ((int8x16_t) __a, (int8x16_t) __b, 0)
+
+#define vhsubq_u16(__a, __b) \
+  (uint16x8_t)__builtin_neon_vhsubv8hi ((int16x8_t) __a, (int16x8_t) __b, 0)
+
+#define vhsubq_u32(__a, __b) \
+  (uint32x4_t)__builtin_neon_vhsubv4si ((int32x4_t) __a, (int32x4_t) __b, 0)
+
+#define vqsub_s8(__a, __b) \
+  (int8x8_t)__builtin_neon_vqsubv8qi (__a, __b, 1)
+
+#define vqsub_s16(__a, __b) \
+  (int16x4_t)__builtin_neon_vqsubv4hi (__a, __b, 1)
+
+#define vqsub_s32(__a, __b) \
+  (int32x2_t)__builtin_neon_vqsubv2si (__a, __b, 1)
+
+#define vqsub_s64(__a, __b) \
+  (int64x1_t)__builtin_neon_vqsubv1di (__a, __b, 1)
+
+#define vqsub_u8(__a, __b) \
+  (uint8x8_t)__builtin_neon_vqsubv8qi ((int8x8_t) __a, (int8x8_t) __b, 0)
+
+#define vqsub_u16(__a, __b) \
+  (uint16x4_t)__builtin_neon_vqsubv4hi ((int16x4_t) __a, (int16x4_t) __b, 0)
+
+#define vqsub_u32(__a, __b) \
+  (uint32x2_t)__builtin_neon_vqsubv2si ((int32x2_t) __a, (int32x2_t) __b, 0)
+
+#define vqsub_u64(__a, __b) \
+  (uint64x1_t)__builtin_neon_vqsubv1di ((int64x1_t) __a, (int64x1_t) __b, 0)
+
+#define vqsubq_s8(__a, __b) \
+  (int8x16_t)__builtin_neon_vqsubv16qi (__a, __b, 1)
+
+#define vqsubq_s16(__a, __b) \
+  (int16x8_t)__builtin_neon_vqsubv8hi (__a, __b, 1)
+
+#define vqsubq_s32(__a, __b) \
+  (int32x4_t)__builtin_neon_vqsubv4si (__a, __b, 1)
+
+#define vqsubq_s64(__a, __b) \
+  (int64x2_t)__builtin_neon_vqsubv2di (__a, __b, 1)
+
+#define vqsubq_u8(__a, __b) \
+  (uint8x16_t)__builtin_neon_vqsubv16qi ((int8x16_t) __a, (int8x16_t) __b, 0)
+
+#define vqsubq_u16(__a, __b) \
+  (uint16x8_t)__builtin_neon_vqsubv8hi ((int16x8_t) __a, (int16x8_t) __b, 0)
+
+#define vqsubq_u32(__a, __b) \
+  (uint32x4_t)__builtin_neon_vqsubv4si ((int32x4_t) __a, (int32x4_t) __b, 0)
+
+#define vqsubq_u64(__a, __b) \
+  (uint64x2_t)__builtin_neon_vqsubv2di ((int64x2_t) __a, (int64x2_t) __b, 0)
+
+#define vsubhn_s16(__a, __b) \
+  (int8x8_t)__builtin_neon_vsubhnv8hi (__a, __b, 1)
+
+#define vsubhn_s32(__a, __b) \
+  (int16x4_t)__builtin_neon_vsubhnv4si (__a, __b, 1)
+
+#define vsubhn_s64(__a, __b) \
+  (int32x2_t)__builtin_neon_vsubhnv2di (__a, __b, 1)
+
+#define vsubhn_u16(__a, __b) \
+  (uint8x8_t)__builtin_neon_vsubhnv8hi ((int16x8_t) __a, (int16x8_t) __b, 0)
+
+#define vsubhn_u32(__a, __b) \
+  (uint16x4_t)__builtin_neon_vsubhnv4si ((int32x4_t) __a, (int32x4_t) __b, 0)
+
+#define vsubhn_u64(__a, __b) \
+  (uint32x2_t)__builtin_neon_vsubhnv2di ((int64x2_t) __a, (int64x2_t) __b, 0)
+
+#define vrsubhn_s16(__a, __b) \
+  (int8x8_t)__builtin_neon_vsubhnv8hi (__a, __b, 3)
+
+#define vrsubhn_s32(__a, __b) \
+  (int16x4_t)__builtin_neon_vsubhnv4si (__a, __b, 3)
+
+#define vrsubhn_s64(__a, __b) \
+  (int32x2_t)__builtin_neon_vsubhnv2di (__a, __b, 3)
+
+#define vrsubhn_u16(__a, __b) \
+  (uint8x8_t)__builtin_neon_vsubhnv8hi ((int16x8_t) __a, (int16x8_t) __b, 2)
+
+#define vrsubhn_u32(__a, __b) \
+  (uint16x4_t)__builtin_neon_vsubhnv4si ((int32x4_t) __a, (int32x4_t) __b, 2)
+
+#define vrsubhn_u64(__a, __b) \
+  (uint32x2_t)__builtin_neon_vsubhnv2di ((int64x2_t) __a, (int64x2_t) __b, 2)
+
+#define vceq_s8(__a, __b) \
+  (uint8x8_t)__builtin_neon_vceqv8qi (__a, __b, 1)
+
+#define vceq_s16(__a, __b) \
+  (uint16x4_t)__builtin_neon_vceqv4hi (__a, __b, 1)
+
+#define vceq_s32(__a, __b) \
+  (uint32x2_t)__builtin_neon_vceqv2si (__a, __b, 1)
+
+#define vceq_f32(__a, __b) \
+  (uint32x2_t)__builtin_neon_vceqv2sf (__a, __b, 5)
+
+#define vceq_u8(__a, __b) \
+  (uint8x8_t)__builtin_neon_vceqv8qi ((int8x8_t) __a, (int8x8_t) __b, 0)
+
+#define vceq_u16(__a, __b) \
+  (uint16x4_t)__builtin_neon_vceqv4hi ((int16x4_t) __a, (int16x4_t) __b, 0)
+
+#define vceq_u32(__a, __b) \
+  (uint32x2_t)__builtin_neon_vceqv2si ((int32x2_t) __a, (int32x2_t) __b, 0)
+
+#define vceq_p8(__a, __b) \
+  (uint8x8_t)__builtin_neon_vceqv8qi ((int8x8_t) __a, (int8x8_t) __b, 4)
+
+#define vceqq_s8(__a, __b) \
+  (uint8x16_t)__builtin_neon_vceqv16qi (__a, __b, 1)
+
+#define vceqq_s16(__a, __b) \
+  (uint16x8_t)__builtin_neon_vceqv8hi (__a, __b, 1)
+
+#define vceqq_s32(__a, __b) \
+  (uint32x4_t)__builtin_neon_vceqv4si (__a, __b, 1)
+
+#define vceqq_f32(__a, __b) \
+  (uint32x4_t)__builtin_neon_vceqv4sf (__a, __b, 5)
+
+#define vceqq_u8(__a, __b) \
+  (uint8x16_t)__builtin_neon_vceqv16qi ((int8x16_t) __a, (int8x16_t) __b, 0)
+
+#define vceqq_u16(__a, __b) \
+  (uint16x8_t)__builtin_neon_vceqv8hi ((int16x8_t) __a, (int16x8_t) __b, 0)
+
+#define vceqq_u32(__a, __b) \
+  (uint32x4_t)__builtin_neon_vceqv4si ((int32x4_t) __a, (int32x4_t) __b, 0)
+
+#define vceqq_p8(__a, __b) \
+  (uint8x16_t)__builtin_neon_vceqv16qi ((int8x16_t) __a, (int8x16_t) __b, 4)
+
+#define vcge_s8(__a, __b) \
+  (uint8x8_t)__builtin_neon_vcgev8qi (__a, __b, 1)
+
+#define vcge_s16(__a, __b) \
+  (uint16x4_t)__builtin_neon_vcgev4hi (__a, __b, 1)
+
+#define vcge_s32(__a, __b) \
+  (uint32x2_t)__builtin_neon_vcgev2si (__a, __b, 1)
+
+#define vcge_f32(__a, __b) \
+  (uint32x2_t)__builtin_neon_vcgev2sf (__a, __b, 5)
+
+#define vcge_u8(__a, __b) \
+  (uint8x8_t)__builtin_neon_vcgev8qi ((int8x8_t) __a, (int8x8_t) __b, 0)
+
+#define vcge_u16(__a, __b) \
+  (uint16x4_t)__builtin_neon_vcgev4hi ((int16x4_t) __a, (int16x4_t) __b, 0)
+
+#define vcge_u32(__a, __b) \
+  (uint32x2_t)__builtin_neon_vcgev2si ((int32x2_t) __a, (int32x2_t) __b, 0)
+
+#define vcgeq_s8(__a, __b) \
+  (uint8x16_t)__builtin_neon_vcgev16qi (__a, __b, 1)
+
+#define vcgeq_s16(__a, __b) \
+  (uint16x8_t)__builtin_neon_vcgev8hi (__a, __b, 1)
+
+#define vcgeq_s32(__a, __b) \
+  (uint32x4_t)__builtin_neon_vcgev4si (__a, __b, 1)
+
+#define vcgeq_f32(__a, __b) \
+  (uint32x4_t)__builtin_neon_vcgev4sf (__a, __b, 5)
+
+#define vcgeq_u8(__a, __b) \
+  (uint8x16_t)__builtin_neon_vcgev16qi ((int8x16_t) __a, (int8x16_t) __b, 0)
+
+#define vcgeq_u16(__a, __b) \
+  (uint16x8_t)__builtin_neon_vcgev8hi ((int16x8_t) __a, (int16x8_t) __b, 0)
+
+#define vcgeq_u32(__a, __b) \
+  (uint32x4_t)__builtin_neon_vcgev4si ((int32x4_t) __a, (int32x4_t) __b, 0)
+
+#define vcle_s8(__a, __b) \
+  (uint8x8_t)__builtin_neon_vcgev8qi (__b, __a, 1)
+
+#define vcle_s16(__a, __b) \
+  (uint16x4_t)__builtin_neon_vcgev4hi (__b, __a, 1)
+
+#define vcle_s32(__a, __b) \
+  (uint32x2_t)__builtin_neon_vcgev2si (__b, __a, 1)
+
+#define vcle_f32(__a, __b) \
+  (uint32x2_t)__builtin_neon_vcgev2sf (__b, __a, 5)
+
+#define vcle_u8(__a, __b) \
+  (uint8x8_t)__builtin_neon_vcgev8qi ((int8x8_t) __b, (int8x8_t) __a, 0)
+
+#define vcle_u16(__a, __b) \
+  (uint16x4_t)__builtin_neon_vcgev4hi ((int16x4_t) __b, (int16x4_t) __a, 0)
+
+#define vcle_u32(__a, __b) \
+  (uint32x2_t)__builtin_neon_vcgev2si ((int32x2_t) __b, (int32x2_t) __a, 0)
+
+#define vcleq_s8(__a, __b) \
+  (uint8x16_t)__builtin_neon_vcgev16qi (__b, __a, 1)
+
+#define vcleq_s16(__a, __b) \
+  (uint16x8_t)__builtin_neon_vcgev8hi (__b, __a, 1)
+
+#define vcleq_s32(__a, __b) \
+  (uint32x4_t)__builtin_neon_vcgev4si (__b, __a, 1)
+
+#define vcleq_f32(__a, __b) \
+  (uint32x4_t)__builtin_neon_vcgev4sf (__b, __a, 5)
+
+#define vcleq_u8(__a, __b) \
+  (uint8x16_t)__builtin_neon_vcgev16qi ((int8x16_t) __b, (int8x16_t) __a, 0)
+
+#define vcleq_u16(__a, __b) \
+  (uint16x8_t)__builtin_neon_vcgev8hi ((int16x8_t) __b, (int16x8_t) __a, 0)
+
+#define vcleq_u32(__a, __b) \
+  (uint32x4_t)__builtin_neon_vcgev4si ((int32x4_t) __b, (int32x4_t) __a, 0)
+
+#define vcgt_s8(__a, __b) \
+  (uint8x8_t)__builtin_neon_vcgtv8qi (__a, __b, 1)
+
+#define vcgt_s16(__a, __b) \
+  (uint16x4_t)__builtin_neon_vcgtv4hi (__a, __b, 1)
+
+#define vcgt_s32(__a, __b) \
+  (uint32x2_t)__builtin_neon_vcgtv2si (__a, __b, 1)
+
+#define vcgt_f32(__a, __b) \
+  (uint32x2_t)__builtin_neon_vcgtv2sf (__a, __b, 5)
+
+#define vcgt_u8(__a, __b) \
+  (uint8x8_t)__builtin_neon_vcgtv8qi ((int8x8_t) __a, (int8x8_t) __b, 0)
+
+#define vcgt_u16(__a, __b) \
+  (uint16x4_t)__builtin_neon_vcgtv4hi ((int16x4_t) __a, (int16x4_t) __b, 0)
+
+#define vcgt_u32(__a, __b) \
+  (uint32x2_t)__builtin_neon_vcgtv2si ((int32x2_t) __a, (int32x2_t) __b, 0)
+
+#define vcgtq_s8(__a, __b) \
+  (uint8x16_t)__builtin_neon_vcgtv16qi (__a, __b, 1)
+
+#define vcgtq_s16(__a, __b) \
+  (uint16x8_t)__builtin_neon_vcgtv8hi (__a, __b, 1)
+
+#define vcgtq_s32(__a, __b) \
+  (uint32x4_t)__builtin_neon_vcgtv4si (__a, __b, 1)
+
+#define vcgtq_f32(__a, __b) \
+  (uint32x4_t)__builtin_neon_vcgtv4sf (__a, __b, 5)
+
+#define vcgtq_u8(__a, __b) \
+  (uint8x16_t)__builtin_neon_vcgtv16qi ((int8x16_t) __a, (int8x16_t) __b, 0)
+
+#define vcgtq_u16(__a, __b) \
+  (uint16x8_t)__builtin_neon_vcgtv8hi ((int16x8_t) __a, (int16x8_t) __b, 0)
+
+#define vcgtq_u32(__a, __b) \
+  (uint32x4_t)__builtin_neon_vcgtv4si ((int32x4_t) __a, (int32x4_t) __b, 0)
+
+#define vclt_s8(__a, __b) \
+  (uint8x8_t)__builtin_neon_vcgtv8qi (__b, __a, 1)
+
+#define vclt_s16(__a, __b) \
+  (uint16x4_t)__builtin_neon_vcgtv4hi (__b, __a, 1)
+
+#define vclt_s32(__a, __b) \
+  (uint32x2_t)__builtin_neon_vcgtv2si (__b, __a, 1)
+
+#define vclt_f32(__a, __b) \
+  (uint32x2_t)__builtin_neon_vcgtv2sf (__b, __a, 5)
+
+#define vclt_u8(__a, __b) \
+  (uint8x8_t)__builtin_neon_vcgtv8qi ((int8x8_t) __b, (int8x8_t) __a, 0)
+
+#define vclt_u16(__a, __b) \
+  (uint16x4_t)__builtin_neon_vcgtv4hi ((int16x4_t) __b, (int16x4_t) __a, 0)
+
+#define vclt_u32(__a, __b) \
+  (uint32x2_t)__builtin_neon_vcgtv2si ((int32x2_t) __b, (int32x2_t) __a, 0)
+
+#define vcltq_s8(__a, __b) \
+  (uint8x16_t)__builtin_neon_vcgtv16qi (__b, __a, 1)
+
+#define vcltq_s16(__a, __b) \
+  (uint16x8_t)__builtin_neon_vcgtv8hi (__b, __a, 1)
+
+#define vcltq_s32(__a, __b) \
+  (uint32x4_t)__builtin_neon_vcgtv4si (__b, __a, 1)
+
+#define vcltq_f32(__a, __b) \
+  (uint32x4_t)__builtin_neon_vcgtv4sf (__b, __a, 5)
+
+#define vcltq_u8(__a, __b) \
+  (uint8x16_t)__builtin_neon_vcgtv16qi ((int8x16_t) __b, (int8x16_t) __a, 0)
+
+#define vcltq_u16(__a, __b) \
+  (uint16x8_t)__builtin_neon_vcgtv8hi ((int16x8_t) __b, (int16x8_t) __a, 0)
+
+#define vcltq_u32(__a, __b) \
+  (uint32x4_t)__builtin_neon_vcgtv4si ((int32x4_t) __b, (int32x4_t) __a, 0)
+
+#define vcage_f32(__a, __b) \
+  (uint32x2_t)__builtin_neon_vcagev2sf (__a, __b, 5)
+
+#define vcageq_f32(__a, __b) \
+  (uint32x4_t)__builtin_neon_vcagev4sf (__a, __b, 5)
+
+#define vcale_f32(__a, __b) \
+  (uint32x2_t)__builtin_neon_vcagev2sf (__b, __a, 5)
+
+#define vcaleq_f32(__a, __b) \
+  (uint32x4_t)__builtin_neon_vcagev4sf (__b, __a, 5)
+
+#define vcagt_f32(__a, __b) \
+  (uint32x2_t)__builtin_neon_vcagtv2sf (__a, __b, 5)
+
+#define vcagtq_f32(__a, __b) \
+  (uint32x4_t)__builtin_neon_vcagtv4sf (__a, __b, 5)
+
+#define vcalt_f32(__a, __b) \
+  (uint32x2_t)__builtin_neon_vcagtv2sf (__b, __a, 5)
+
+#define vcaltq_f32(__a, __b) \
+  (uint32x4_t)__builtin_neon_vcagtv4sf (__b, __a, 5)
+
+#define vtst_s8(__a, __b) \
+  (uint8x8_t)__builtin_neon_vtstv8qi (__a, __b, 1)
+
+#define vtst_s16(__a, __b) \
+  (uint16x4_t)__builtin_neon_vtstv4hi (__a, __b, 1)
+
+#define vtst_s32(__a, __b) \
+  (uint32x2_t)__builtin_neon_vtstv2si (__a, __b, 1)
+
+#define vtst_u8(__a, __b) \
+  (uint8x8_t)__builtin_neon_vtstv8qi ((int8x8_t) __a, (int8x8_t) __b, 0)
+
+#define vtst_u16(__a, __b) \
+  (uint16x4_t)__builtin_neon_vtstv4hi ((int16x4_t) __a, (int16x4_t) __b, 0)
+
+#define vtst_u32(__a, __b) \
+  (uint32x2_t)__builtin_neon_vtstv2si ((int32x2_t) __a, (int32x2_t) __b, 0)
+
+#define vtst_p8(__a, __b) \
+  (uint8x8_t)__builtin_neon_vtstv8qi ((int8x8_t) __a, (int8x8_t) __b, 4)
+
+#define vtstq_s8(__a, __b) \
+  (uint8x16_t)__builtin_neon_vtstv16qi (__a, __b, 1)
+
+#define vtstq_s16(__a, __b) \
+  (uint16x8_t)__builtin_neon_vtstv8hi (__a, __b, 1)
+
+#define vtstq_s32(__a, __b) \
+  (uint32x4_t)__builtin_neon_vtstv4si (__a, __b, 1)
+
+#define vtstq_u8(__a, __b) \
+  (uint8x16_t)__builtin_neon_vtstv16qi ((int8x16_t) __a, (int8x16_t) __b, 0)
+
+#define vtstq_u16(__a, __b) \
+  (uint16x8_t)__builtin_neon_vtstv8hi ((int16x8_t) __a, (int16x8_t) __b, 0)
+
+#define vtstq_u32(__a, __b) \
+  (uint32x4_t)__builtin_neon_vtstv4si ((int32x4_t) __a, (int32x4_t) __b, 0)
+
+#define vtstq_p8(__a, __b) \
+  (uint8x16_t)__builtin_neon_vtstv16qi ((int8x16_t) __a, (int8x16_t) __b, 4)
+
+#define vabd_s8(__a, __b) \
+  (int8x8_t)__builtin_neon_vabdv8qi (__a, __b, 1)
+
+#define vabd_s16(__a, __b) \
+  (int16x4_t)__builtin_neon_vabdv4hi (__a, __b, 1)
+
+#define vabd_s32(__a, __b) \
+  (int32x2_t)__builtin_neon_vabdv2si (__a, __b, 1)
+
+#define vabd_f32(__a, __b) \
+  (float32x2_t)__builtin_neon_vabdv2sf (__a, __b, 5)
+
+#define vabd_u8(__a, __b) \
+  (uint8x8_t)__builtin_neon_vabdv8qi ((int8x8_t) __a, (int8x8_t) __b, 0)
+
+#define vabd_u16(__a, __b) \
+  (uint16x4_t)__builtin_neon_vabdv4hi ((int16x4_t) __a, (int16x4_t) __b, 0)
+
+#define vabd_u32(__a, __b) \
+  (uint32x2_t)__builtin_neon_vabdv2si ((int32x2_t) __a, (int32x2_t) __b, 0)
+
+#define vabdq_s8(__a, __b) \
+  (int8x16_t)__builtin_neon_vabdv16qi (__a, __b, 1)
+
+#define vabdq_s16(__a, __b) \
+  (int16x8_t)__builtin_neon_vabdv8hi (__a, __b, 1)
+
+#define vabdq_s32(__a, __b) \
+  (int32x4_t)__builtin_neon_vabdv4si (__a, __b, 1)
+
+#define vabdq_f32(__a, __b) \
+  (float32x4_t)__builtin_neon_vabdv4sf (__a, __b, 5)
+
+#define vabdq_u8(__a, __b) \
+  (uint8x16_t)__builtin_neon_vabdv16qi ((int8x16_t) __a, (int8x16_t) __b, 0)
+
+#define vabdq_u16(__a, __b) \
+  (uint16x8_t)__builtin_neon_vabdv8hi ((int16x8_t) __a, (int16x8_t) __b, 0)
+
+#define vabdq_u32(__a, __b) \
+  (uint32x4_t)__builtin_neon_vabdv4si ((int32x4_t) __a, (int32x4_t) __b, 0)
+
+#define vabdl_s8(__a, __b) \
+  (int16x8_t)__builtin_neon_vabdlv8qi (__a, __b, 1)
+
+#define vabdl_s16(__a, __b) \
+  (int32x4_t)__builtin_neon_vabdlv4hi (__a, __b, 1)
+
+#define vabdl_s32(__a, __b) \
+  (int64x2_t)__builtin_neon_vabdlv2si (__a, __b, 1)
+
+#define vabdl_u8(__a, __b) \
+  (uint16x8_t)__builtin_neon_vabdlv8qi ((int8x8_t) __a, (int8x8_t) __b, 0)
+
+#define vabdl_u16(__a, __b) \
+  (uint32x4_t)__builtin_neon_vabdlv4hi ((int16x4_t) __a, (int16x4_t) __b, 0)
+
+#define vabdl_u32(__a, __b) \
+  (uint64x2_t)__builtin_neon_vabdlv2si ((int32x2_t) __a, (int32x2_t) __b, 0)
+
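+/* vabd computes the per-lane absolute difference |__a - __b|.  The
+   long forms (vabdl) return elements twice as wide as the inputs, so
+   the difference cannot overflow; e.g. vabdl_u8 maps two uint8x8_t
+   vectors to a uint16x8_t result.  */
+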
+#define vaba_s8(__a, __b, __c) \
+  (int8x8_t)__builtin_neon_vabav8qi (__a, __b, __c, 1)
+
+#define vaba_s16(__a, __b, __c) \
+  (int16x4_t)__builtin_neon_vabav4hi (__a, __b, __c, 1)
+
+#define vaba_s32(__a, __b, __c) \
+  (int32x2_t)__builtin_neon_vabav2si (__a, __b, __c, 1)
+
+#define vaba_u8(__a, __b, __c) \
+  (uint8x8_t)__builtin_neon_vabav8qi ((int8x8_t) __a, (int8x8_t) __b, (int8x8_t) __c, 0)
+
+#define vaba_u16(__a, __b, __c) \
+  (uint16x4_t)__builtin_neon_vabav4hi ((int16x4_t) __a, (int16x4_t) __b, (int16x4_t) __c, 0)
+
+#define vaba_u32(__a, __b, __c) \
+  (uint32x2_t)__builtin_neon_vabav2si ((int32x2_t) __a, (int32x2_t) __b, (int32x2_t) __c, 0)
+
+#define vabaq_s8(__a, __b, __c) \
+  (int8x16_t)__builtin_neon_vabav16qi (__a, __b, __c, 1)
+
+#define vabaq_s16(__a, __b, __c) \
+  (int16x8_t)__builtin_neon_vabav8hi (__a, __b, __c, 1)
+
+#define vabaq_s32(__a, __b, __c) \
+  (int32x4_t)__builtin_neon_vabav4si (__a, __b, __c, 1)
+
+#define vabaq_u8(__a, __b, __c) \
+  (uint8x16_t)__builtin_neon_vabav16qi ((int8x16_t) __a, (int8x16_t) __b, (int8x16_t) __c, 0)
+
+#define vabaq_u16(__a, __b, __c) \
+  (uint16x8_t)__builtin_neon_vabav8hi ((int16x8_t) __a, (int16x8_t) __b, (int16x8_t) __c, 0)
+
+#define vabaq_u32(__a, __b, __c) \
+  (uint32x4_t)__builtin_neon_vabav4si ((int32x4_t) __a, (int32x4_t) __b, (int32x4_t) __c, 0)
+
+#define vabal_s8(__a, __b, __c) \
+  (int16x8_t)__builtin_neon_vabalv8qi (__a, __b, __c, 1)
+
+#define vabal_s16(__a, __b, __c) \
+  (int32x4_t)__builtin_neon_vabalv4hi (__a, __b, __c, 1)
+
+#define vabal_s32(__a, __b, __c) \
+  (int64x2_t)__builtin_neon_vabalv2si (__a, __b, __c, 1)
+
+#define vabal_u8(__a, __b, __c) \
+  (uint16x8_t)__builtin_neon_vabalv8qi ((int16x8_t) __a, (int8x8_t) __b, (int8x8_t) __c, 0)
+
+#define vabal_u16(__a, __b, __c) \
+  (uint32x4_t)__builtin_neon_vabalv4hi ((int32x4_t) __a, (int16x4_t) __b, (int16x4_t) __c, 0)
+
+#define vabal_u32(__a, __b, __c) \
+  (uint64x2_t)__builtin_neon_vabalv2si ((int64x2_t) __a, (int32x2_t) __b, (int32x2_t) __c, 0)
+
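+/* vaba(__a, __b, __c) accumulates an absolute difference:
+   __a + |__b - __c| per lane.  In the long forms (vabal) the
+   accumulator __a is twice as wide as __b and __c, which is why the
+   unsigned variants cast it to a different internal type than the
+   other two operands.  */
+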
+#define vmax_s8(__a, __b) \
+  (int8x8_t)__builtin_neon_vmaxv8qi (__a, __b, 1)
+
+#define vmax_s16(__a, __b) \
+  (int16x4_t)__builtin_neon_vmaxv4hi (__a, __b, 1)
+
+#define vmax_s32(__a, __b) \
+  (int32x2_t)__builtin_neon_vmaxv2si (__a, __b, 1)
+
+#define vmax_f32(__a, __b) \
+  (float32x2_t)__builtin_neon_vmaxv2sf (__a, __b, 5)
+
+#define vmax_u8(__a, __b) \
+  (uint8x8_t)__builtin_neon_vmaxv8qi ((int8x8_t) __a, (int8x8_t) __b, 0)
+
+#define vmax_u16(__a, __b) \
+  (uint16x4_t)__builtin_neon_vmaxv4hi ((int16x4_t) __a, (int16x4_t) __b, 0)
+
+#define vmax_u32(__a, __b) \
+  (uint32x2_t)__builtin_neon_vmaxv2si ((int32x2_t) __a, (int32x2_t) __b, 0)
+
+#define vmaxq_s8(__a, __b) \
+  (int8x16_t)__builtin_neon_vmaxv16qi (__a, __b, 1)
+
+#define vmaxq_s16(__a, __b) \
+  (int16x8_t)__builtin_neon_vmaxv8hi (__a, __b, 1)
+
+#define vmaxq_s32(__a, __b) \
+  (int32x4_t)__builtin_neon_vmaxv4si (__a, __b, 1)
+
+#define vmaxq_f32(__a, __b) \
+  (float32x4_t)__builtin_neon_vmaxv4sf (__a, __b, 5)
+
+#define vmaxq_u8(__a, __b) \
+  (uint8x16_t)__builtin_neon_vmaxv16qi ((int8x16_t) __a, (int8x16_t) __b, 0)
+
+#define vmaxq_u16(__a, __b) \
+  (uint16x8_t)__builtin_neon_vmaxv8hi ((int16x8_t) __a, (int16x8_t) __b, 0)
+
+#define vmaxq_u32(__a, __b) \
+  (uint32x4_t)__builtin_neon_vmaxv4si ((int32x4_t) __a, (int32x4_t) __b, 0)
+
+#define vmin_s8(__a, __b) \
+  (int8x8_t)__builtin_neon_vminv8qi (__a, __b, 1)
+
+#define vmin_s16(__a, __b) \
+  (int16x4_t)__builtin_neon_vminv4hi (__a, __b, 1)
+
+#define vmin_s32(__a, __b) \
+  (int32x2_t)__builtin_neon_vminv2si (__a, __b, 1)
+
+#define vmin_f32(__a, __b) \
+  (float32x2_t)__builtin_neon_vminv2sf (__a, __b, 5)
+
+#define vmin_u8(__a, __b) \
+  (uint8x8_t)__builtin_neon_vminv8qi ((int8x8_t) __a, (int8x8_t) __b, 0)
+
+#define vmin_u16(__a, __b) \
+  (uint16x4_t)__builtin_neon_vminv4hi ((int16x4_t) __a, (int16x4_t) __b, 0)
+
+#define vmin_u32(__a, __b) \
+  (uint32x2_t)__builtin_neon_vminv2si ((int32x2_t) __a, (int32x2_t) __b, 0)
+
+#define vminq_s8(__a, __b) \
+  (int8x16_t)__builtin_neon_vminv16qi (__a, __b, 1)
+
+#define vminq_s16(__a, __b) \
+  (int16x8_t)__builtin_neon_vminv8hi (__a, __b, 1)
+
+#define vminq_s32(__a, __b) \
+  (int32x4_t)__builtin_neon_vminv4si (__a, __b, 1)
+
+#define vminq_f32(__a, __b) \
+  (float32x4_t)__builtin_neon_vminv4sf (__a, __b, 5)
+
+#define vminq_u8(__a, __b) \
+  (uint8x16_t)__builtin_neon_vminv16qi ((int8x16_t) __a, (int8x16_t) __b, 0)
+
+#define vminq_u16(__a, __b) \
+  (uint16x8_t)__builtin_neon_vminv8hi ((int16x8_t) __a, (int16x8_t) __b, 0)
+
+#define vminq_u32(__a, __b) \
+  (uint32x4_t)__builtin_neon_vminv4si ((int32x4_t) __a, (int32x4_t) __b, 0)
+
+#define vpadd_s8(__a, __b) \
+  (int8x8_t)__builtin_neon_vpaddv8qi (__a, __b, 1)
+
+#define vpadd_s16(__a, __b) \
+  (int16x4_t)__builtin_neon_vpaddv4hi (__a, __b, 1)
+
+#define vpadd_s32(__a, __b) \
+  (int32x2_t)__builtin_neon_vpaddv2si (__a, __b, 1)
+
+#define vpadd_f32(__a, __b) \
+  (float32x2_t)__builtin_neon_vpaddv2sf (__a, __b, 5)
+
+#define vpadd_u8(__a, __b) \
+  (uint8x8_t)__builtin_neon_vpaddv8qi ((int8x8_t) __a, (int8x8_t) __b, 0)
+
+#define vpadd_u16(__a, __b) \
+  (uint16x4_t)__builtin_neon_vpaddv4hi ((int16x4_t) __a, (int16x4_t) __b, 0)
+
+#define vpadd_u32(__a, __b) \
+  (uint32x2_t)__builtin_neon_vpaddv2si ((int32x2_t) __a, (int32x2_t) __b, 0)
+
+#define vpaddl_s8(__a) \
+  (int16x4_t)__builtin_neon_vpaddlv8qi (__a, 1)
+
+#define vpaddl_s16(__a) \
+  (int32x2_t)__builtin_neon_vpaddlv4hi (__a, 1)
+
+#define vpaddl_s32(__a) \
+  (int64x1_t)__builtin_neon_vpaddlv2si (__a, 1)
+
+#define vpaddl_u8(__a) \
+  (uint16x4_t)__builtin_neon_vpaddlv8qi ((int8x8_t) __a, 0)
+
+#define vpaddl_u16(__a) \
+  (uint32x2_t)__builtin_neon_vpaddlv4hi ((int16x4_t) __a, 0)
+
+#define vpaddl_u32(__a) \
+  (uint64x1_t)__builtin_neon_vpaddlv2si ((int32x2_t) __a, 0)
+
+#define vpaddlq_s8(__a) \
+  (int16x8_t)__builtin_neon_vpaddlv16qi (__a, 1)
+
+#define vpaddlq_s16(__a) \
+  (int32x4_t)__builtin_neon_vpaddlv8hi (__a, 1)
+
+#define vpaddlq_s32(__a) \
+  (int64x2_t)__builtin_neon_vpaddlv4si (__a, 1)
+
+#define vpaddlq_u8(__a) \
+  (uint16x8_t)__builtin_neon_vpaddlv16qi ((int8x16_t) __a, 0)
+
+#define vpaddlq_u16(__a) \
+  (uint32x4_t)__builtin_neon_vpaddlv8hi ((int16x8_t) __a, 0)
+
+#define vpaddlq_u32(__a) \
+  (uint64x2_t)__builtin_neon_vpaddlv4si ((int32x4_t) __a, 0)
+
+#define vpadal_s8(__a, __b) \
+  (int16x4_t)__builtin_neon_vpadalv8qi (__a, __b, 1)
+
+#define vpadal_s16(__a, __b) \
+  (int32x2_t)__builtin_neon_vpadalv4hi (__a, __b, 1)
+
+#define vpadal_s32(__a, __b) \
+  (int64x1_t)__builtin_neon_vpadalv2si (__a, __b, 1)
+
+#define vpadal_u8(__a, __b) \
+  (uint16x4_t)__builtin_neon_vpadalv8qi ((int16x4_t) __a, (int8x8_t) __b, 0)
+
+#define vpadal_u16(__a, __b) \
+  (uint32x2_t)__builtin_neon_vpadalv4hi ((int32x2_t) __a, (int16x4_t) __b, 0)
+
+#define vpadal_u32(__a, __b) \
+  (uint64x1_t)__builtin_neon_vpadalv2si ((int64x1_t) __a, (int32x2_t) __b, 0)
+
+#define vpadalq_s8(__a, __b) \
+  (int16x8_t)__builtin_neon_vpadalv16qi (__a, __b, 1)
+
+#define vpadalq_s16(__a, __b) \
+  (int32x4_t)__builtin_neon_vpadalv8hi (__a, __b, 1)
+
+#define vpadalq_s32(__a, __b) \
+  (int64x2_t)__builtin_neon_vpadalv4si (__a, __b, 1)
+
+#define vpadalq_u8(__a, __b) \
+  (uint16x8_t)__builtin_neon_vpadalv16qi ((int16x8_t) __a, (int8x16_t) __b, 0)
+
+#define vpadalq_u16(__a, __b) \
+  (uint32x4_t)__builtin_neon_vpadalv8hi ((int32x4_t) __a, (int16x8_t) __b, 0)
+
+#define vpadalq_u32(__a, __b) \
+  (uint64x2_t)__builtin_neon_vpadalv4si ((int64x2_t) __a, (int32x4_t) __b, 0)
+
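+/* The pairwise adds: vpadd sums adjacent lane pairs across its two
+   inputs, vpaddl sums adjacent pairs within one vector into
+   double-width lanes, and vpadal does the same while accumulating.
+   E.g., summing bytes into halfword accumulators without overflow
+   (hypothetical names):
+
+     uint16x4_t acc = vdup_n_u16 (0);
+     acc = vpadal_u8 (acc, bytes);   // acc[i] += bytes[2i] + bytes[2i+1]
+*/
+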
+#define vpmax_s8(__a, __b) \
+  (int8x8_t)__builtin_neon_vpmaxv8qi (__a, __b, 1)
+
+#define vpmax_s16(__a, __b) \
+  (int16x4_t)__builtin_neon_vpmaxv4hi (__a, __b, 1)
+
+#define vpmax_s32(__a, __b) \
+  (int32x2_t)__builtin_neon_vpmaxv2si (__a, __b, 1)
+
+#define vpmax_f32(__a, __b) \
+  (float32x2_t)__builtin_neon_vpmaxv2sf (__a, __b, 5)
+
+#define vpmax_u8(__a, __b) \
+  (uint8x8_t)__builtin_neon_vpmaxv8qi ((int8x8_t) __a, (int8x8_t) __b, 0)
+
+#define vpmax_u16(__a, __b) \
+  (uint16x4_t)__builtin_neon_vpmaxv4hi ((int16x4_t) __a, (int16x4_t) __b, 0)
+
+#define vpmax_u32(__a, __b) \
+  (uint32x2_t)__builtin_neon_vpmaxv2si ((int32x2_t) __a, (int32x2_t) __b, 0)
+
+#define vpmin_s8(__a, __b) \
+  (int8x8_t)__builtin_neon_vpminv8qi (__a, __b, 1)
+
+#define vpmin_s16(__a, __b) \
+  (int16x4_t)__builtin_neon_vpminv4hi (__a, __b, 1)
+
+#define vpmin_s32(__a, __b) \
+  (int32x2_t)__builtin_neon_vpminv2si (__a, __b, 1)
+
+#define vpmin_f32(__a, __b) \
+  (float32x2_t)__builtin_neon_vpminv2sf (__a, __b, 5)
+
+#define vpmin_u8(__a, __b) \
+  (uint8x8_t)__builtin_neon_vpminv8qi ((int8x8_t) __a, (int8x8_t) __b, 0)
+
+#define vpmin_u16(__a, __b) \
+  (uint16x4_t)__builtin_neon_vpminv4hi ((int16x4_t) __a, (int16x4_t) __b, 0)
+
+#define vpmin_u32(__a, __b) \
+  (uint32x2_t)__builtin_neon_vpminv2si ((int32x2_t) __a, (int32x2_t) __b, 0)
+
+#define vrecps_f32(__a, __b) \
+  (float32x2_t)__builtin_neon_vrecpsv2sf (__a, __b, 5)
+
+#define vrecpsq_f32(__a, __b) \
+  (float32x4_t)__builtin_neon_vrecpsv4sf (__a, __b, 5)
+
+#define vrsqrts_f32(__a, __b) \
+  (float32x2_t)__builtin_neon_vrsqrtsv2sf (__a, __b, 5)
+
+#define vrsqrtsq_f32(__a, __b) \
+  (float32x4_t)__builtin_neon_vrsqrtsv4sf (__a, __b, 5)
+
+#define vshl_s8(__a, __b) \
+  (int8x8_t)__builtin_neon_vshlv8qi (__a, __b, 1)
+
+#define vshl_s16(__a, __b) \
+  (int16x4_t)__builtin_neon_vshlv4hi (__a, __b, 1)
+
+#define vshl_s32(__a, __b) \
+  (int32x2_t)__builtin_neon_vshlv2si (__a, __b, 1)
+
+#define vshl_s64(__a, __b) \
+  (int64x1_t)__builtin_neon_vshlv1di (__a, __b, 1)
+
+#define vshl_u8(__a, __b) \
+  (uint8x8_t)__builtin_neon_vshlv8qi ((int8x8_t) __a, __b, 0)
+
+#define vshl_u16(__a, __b) \
+  (uint16x4_t)__builtin_neon_vshlv4hi ((int16x4_t) __a, __b, 0)
+
+#define vshl_u32(__a, __b) \
+  (uint32x2_t)__builtin_neon_vshlv2si ((int32x2_t) __a, __b, 0)
+
+#define vshl_u64(__a, __b) \
+  (uint64x1_t)__builtin_neon_vshlv1di ((int64x1_t) __a, __b, 0)
+
+#define vshlq_s8(__a, __b) \
+  (int8x16_t)__builtin_neon_vshlv16qi (__a, __b, 1)
+
+#define vshlq_s16(__a, __b) \
+  (int16x8_t)__builtin_neon_vshlv8hi (__a, __b, 1)
+
+#define vshlq_s32(__a, __b) \
+  (int32x4_t)__builtin_neon_vshlv4si (__a, __b, 1)
+
+#define vshlq_s64(__a, __b) \
+  (int64x2_t)__builtin_neon_vshlv2di (__a, __b, 1)
+
+#define vshlq_u8(__a, __b) \
+  (uint8x16_t)__builtin_neon_vshlv16qi ((int8x16_t) __a, __b, 0)
+
+#define vshlq_u16(__a, __b) \
+  (uint16x8_t)__builtin_neon_vshlv8hi ((int16x8_t) __a, __b, 0)
+
+#define vshlq_u32(__a, __b) \
+  (uint32x4_t)__builtin_neon_vshlv4si ((int32x4_t) __a, __b, 0)
+
+#define vshlq_u64(__a, __b) \
+  (uint64x2_t)__builtin_neon_vshlv2di ((int64x2_t) __a, __b, 0)
+
+#define vrshl_s8(__a, __b) \
+  (int8x8_t)__builtin_neon_vshlv8qi (__a, __b, 3)
+
+#define vrshl_s16(__a, __b) \
+  (int16x4_t)__builtin_neon_vshlv4hi (__a, __b, 3)
+
+#define vrshl_s32(__a, __b) \
+  (int32x2_t)__builtin_neon_vshlv2si (__a, __b, 3)
+
+#define vrshl_s64(__a, __b) \
+  (int64x1_t)__builtin_neon_vshlv1di (__a, __b, 3)
+
+#define vrshl_u8(__a, __b) \
+  (uint8x8_t)__builtin_neon_vshlv8qi ((int8x8_t) __a, __b, 2)
+
+#define vrshl_u16(__a, __b) \
+  (uint16x4_t)__builtin_neon_vshlv4hi ((int16x4_t) __a, __b, 2)
+
+#define vrshl_u32(__a, __b) \
+  (uint32x2_t)__builtin_neon_vshlv2si ((int32x2_t) __a, __b, 2)
+
+#define vrshl_u64(__a, __b) \
+  (uint64x1_t)__builtin_neon_vshlv1di ((int64x1_t) __a, __b, 2)
+
+#define vrshlq_s8(__a, __b) \
+  (int8x16_t)__builtin_neon_vshlv16qi (__a, __b, 3)
+
+#define vrshlq_s16(__a, __b) \
+  (int16x8_t)__builtin_neon_vshlv8hi (__a, __b, 3)
+
+#define vrshlq_s32(__a, __b) \
+  (int32x4_t)__builtin_neon_vshlv4si (__a, __b, 3)
+
+#define vrshlq_s64(__a, __b) \
+  (int64x2_t)__builtin_neon_vshlv2di (__a, __b, 3)
+
+#define vrshlq_u8(__a, __b) \
+  (uint8x16_t)__builtin_neon_vshlv16qi ((int8x16_t) __a, __b, 2)
+
+#define vrshlq_u16(__a, __b) \
+  (uint16x8_t)__builtin_neon_vshlv8hi ((int16x8_t) __a, __b, 2)
+
+#define vrshlq_u32(__a, __b) \
+  (uint32x4_t)__builtin_neon_vshlv4si ((int32x4_t) __a, __b, 2)
+
+#define vrshlq_u64(__a, __b) \
+  (uint64x2_t)__builtin_neon_vshlv2di ((int64x2_t) __a, __b, 2)
+
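+/* vshl shifts each lane of __a left by the corresponding signed count
+   in __b; per the VSHL instruction, a negative count shifts right.
+   The rounding forms (vrshl) reuse the same builtin, with the
+   generator adding 2 to the type code (3 = signed, 2 = unsigned).
+   A sketch with hypothetical values:
+
+     int16x4_t counts = vdup_n_s16 (-2);
+     uint16x4_t scaled = vshl_u16 (v, counts);   // v >> 2 per lane
+*/
+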
+#define vqshl_s8(__a, __b) \
+  (int8x8_t)__builtin_neon_vqshlv8qi (__a, __b, 1)
+
+#define vqshl_s16(__a, __b) \
+  (int16x4_t)__builtin_neon_vqshlv4hi (__a, __b, 1)
+
+#define vqshl_s32(__a, __b) \
+  (int32x2_t)__builtin_neon_vqshlv2si (__a, __b, 1)
+
+#define vqshl_s64(__a, __b) \
+  (int64x1_t)__builtin_neon_vqshlv1di (__a, __b, 1)
+
+#define vqshl_u8(__a, __b) \
+  (uint8x8_t)__builtin_neon_vqshlv8qi ((int8x8_t) __a, __b, 0)
+
+#define vqshl_u16(__a, __b) \
+  (uint16x4_t)__builtin_neon_vqshlv4hi ((int16x4_t) __a, __b, 0)
+
+#define vqshl_u32(__a, __b) \
+  (uint32x2_t)__builtin_neon_vqshlv2si ((int32x2_t) __a, __b, 0)
+
+#define vqshl_u64(__a, __b) \
+  (uint64x1_t)__builtin_neon_vqshlv1di ((int64x1_t) __a, __b, 0)
+
+#define vqshlq_s8(__a, __b) \
+  (int8x16_t)__builtin_neon_vqshlv16qi (__a, __b, 1)
+
+#define vqshlq_s16(__a, __b) \
+  (int16x8_t)__builtin_neon_vqshlv8hi (__a, __b, 1)
+
+#define vqshlq_s32(__a, __b) \
+  (int32x4_t)__builtin_neon_vqshlv4si (__a, __b, 1)
+
+#define vqshlq_s64(__a, __b) \
+  (int64x2_t)__builtin_neon_vqshlv2di (__a, __b, 1)
+
+#define vqshlq_u8(__a, __b) \
+  (uint8x16_t)__builtin_neon_vqshlv16qi ((int8x16_t) __a, __b, 0)
+
+#define vqshlq_u16(__a, __b) \
+  (uint16x8_t)__builtin_neon_vqshlv8hi ((int16x8_t) __a, __b, 0)
+
+#define vqshlq_u32(__a, __b) \
+  (uint32x4_t)__builtin_neon_vqshlv4si ((int32x4_t) __a, __b, 0)
+
+#define vqshlq_u64(__a, __b) \
+  (uint64x2_t)__builtin_neon_vqshlv2di ((int64x2_t) __a, __b, 0)
+
+#define vqrshl_s8(__a, __b) \
+  (int8x8_t)__builtin_neon_vqshlv8qi (__a, __b, 3)
+
+#define vqrshl_s16(__a, __b) \
+  (int16x4_t)__builtin_neon_vqshlv4hi (__a, __b, 3)
+
+#define vqrshl_s32(__a, __b) \
+  (int32x2_t)__builtin_neon_vqshlv2si (__a, __b, 3)
+
+#define vqrshl_s64(__a, __b) \
+  (int64x1_t)__builtin_neon_vqshlv1di (__a, __b, 3)
+
+#define vqrshl_u8(__a, __b) \
+  (uint8x8_t)__builtin_neon_vqshlv8qi ((int8x8_t) __a, __b, 2)
+
+#define vqrshl_u16(__a, __b) \
+  (uint16x4_t)__builtin_neon_vqshlv4hi ((int16x4_t) __a, __b, 2)
+
+#define vqrshl_u32(__a, __b) \
+  (uint32x2_t)__builtin_neon_vqshlv2si ((int32x2_t) __a, __b, 2)
+
+#define vqrshl_u64(__a, __b) \
+  (uint64x1_t)__builtin_neon_vqshlv1di ((int64x1_t) __a, __b, 2)
+
+#define vqrshlq_s8(__a, __b) \
+  (int8x16_t)__builtin_neon_vqshlv16qi (__a, __b, 3)
+
+#define vqrshlq_s16(__a, __b) \
+  (int16x8_t)__builtin_neon_vqshlv8hi (__a, __b, 3)
+
+#define vqrshlq_s32(__a, __b) \
+  (int32x4_t)__builtin_neon_vqshlv4si (__a, __b, 3)
+
+#define vqrshlq_s64(__a, __b) \
+  (int64x2_t)__builtin_neon_vqshlv2di (__a, __b, 3)
+
+#define vqrshlq_u8(__a, __b) \
+  (uint8x16_t)__builtin_neon_vqshlv16qi ((int8x16_t) __a, __b, 2)
+
+#define vqrshlq_u16(__a, __b) \
+  (uint16x8_t)__builtin_neon_vqshlv8hi ((int16x8_t) __a, __b, 2)
+
+#define vqrshlq_u32(__a, __b) \
+  (uint32x4_t)__builtin_neon_vqshlv4si ((int32x4_t) __a, __b, 2)
+
+#define vqrshlq_u64(__a, __b) \
+  (uint64x2_t)__builtin_neon_vqshlv2di ((int64x2_t) __a, __b, 2)
+
+#define vshr_n_s8(__a, __b) \
+  (int8x8_t)__builtin_neon_vshr_nv8qi (__a, __b, 1)
+
+#define vshr_n_s16(__a, __b) \
+  (int16x4_t)__builtin_neon_vshr_nv4hi (__a, __b, 1)
+
+#define vshr_n_s32(__a, __b) \
+  (int32x2_t)__builtin_neon_vshr_nv2si (__a, __b, 1)
+
+#define vshr_n_s64(__a, __b) \
+  (int64x1_t)__builtin_neon_vshr_nv1di (__a, __b, 1)
+
+#define vshr_n_u8(__a, __b) \
+  (uint8x8_t)__builtin_neon_vshr_nv8qi ((int8x8_t) __a, __b, 0)
+
+#define vshr_n_u16(__a, __b) \
+  (uint16x4_t)__builtin_neon_vshr_nv4hi ((int16x4_t) __a, __b, 0)
+
+#define vshr_n_u32(__a, __b) \
+  (uint32x2_t)__builtin_neon_vshr_nv2si ((int32x2_t) __a, __b, 0)
+
+#define vshr_n_u64(__a, __b) \
+  (uint64x1_t)__builtin_neon_vshr_nv1di ((int64x1_t) __a, __b, 0)
+
+#define vshrq_n_s8(__a, __b) \
+  (int8x16_t)__builtin_neon_vshr_nv16qi (__a, __b, 1)
+
+#define vshrq_n_s16(__a, __b) \
+  (int16x8_t)__builtin_neon_vshr_nv8hi (__a, __b, 1)
+
+#define vshrq_n_s32(__a, __b) \
+  (int32x4_t)__builtin_neon_vshr_nv4si (__a, __b, 1)
+
+#define vshrq_n_s64(__a, __b) \
+  (int64x2_t)__builtin_neon_vshr_nv2di (__a, __b, 1)
+
+#define vshrq_n_u8(__a, __b) \
+  (uint8x16_t)__builtin_neon_vshr_nv16qi ((int8x16_t) __a, __b, 0)
+
+#define vshrq_n_u16(__a, __b) \
+  (uint16x8_t)__builtin_neon_vshr_nv8hi ((int16x8_t) __a, __b, 0)
+
+#define vshrq_n_u32(__a, __b) \
+  (uint32x4_t)__builtin_neon_vshr_nv4si ((int32x4_t) __a, __b, 0)
+
+#define vshrq_n_u64(__a, __b) \
+  (uint64x2_t)__builtin_neon_vshr_nv2di ((int64x2_t) __a, __b, 0)
+
+#define vrshr_n_s8(__a, __b) \
+  (int8x8_t)__builtin_neon_vshr_nv8qi (__a, __b, 3)
+
+#define vrshr_n_s16(__a, __b) \
+  (int16x4_t)__builtin_neon_vshr_nv4hi (__a, __b, 3)
+
+#define vrshr_n_s32(__a, __b) \
+  (int32x2_t)__builtin_neon_vshr_nv2si (__a, __b, 3)
+
+#define vrshr_n_s64(__a, __b) \
+  (int64x1_t)__builtin_neon_vshr_nv1di (__a, __b, 3)
+
+#define vrshr_n_u8(__a, __b) \
+  (uint8x8_t)__builtin_neon_vshr_nv8qi ((int8x8_t) __a, __b, 2)
+
+#define vrshr_n_u16(__a, __b) \
+  (uint16x4_t)__builtin_neon_vshr_nv4hi ((int16x4_t) __a, __b, 2)
+
+#define vrshr_n_u32(__a, __b) \
+  (uint32x2_t)__builtin_neon_vshr_nv2si ((int32x2_t) __a, __b, 2)
+
+#define vrshr_n_u64(__a, __b) \
+  (uint64x1_t)__builtin_neon_vshr_nv1di ((int64x1_t) __a, __b, 2)
+
+#define vrshrq_n_s8(__a, __b) \
+  (int8x16_t)__builtin_neon_vshr_nv16qi (__a, __b, 3)
+
+#define vrshrq_n_s16(__a, __b) \
+  (int16x8_t)__builtin_neon_vshr_nv8hi (__a, __b, 3)
+
+#define vrshrq_n_s32(__a, __b) \
+  (int32x4_t)__builtin_neon_vshr_nv4si (__a, __b, 3)
+
+#define vrshrq_n_s64(__a, __b) \
+  (int64x2_t)__builtin_neon_vshr_nv2di (__a, __b, 3)
+
+#define vrshrq_n_u8(__a, __b) \
+  (uint8x16_t)__builtin_neon_vshr_nv16qi ((int8x16_t) __a, __b, 2)
+
+#define vrshrq_n_u16(__a, __b) \
+  (uint16x8_t)__builtin_neon_vshr_nv8hi ((int16x8_t) __a, __b, 2)
+
+#define vrshrq_n_u32(__a, __b) \
+  (uint32x4_t)__builtin_neon_vshr_nv4si ((int32x4_t) __a, __b, 2)
+
+#define vrshrq_n_u64(__a, __b) \
+  (uint64x2_t)__builtin_neon_vshr_nv2di ((int64x2_t) __a, __b, 2)
+
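+/* vshr_n shifts right by an immediate count of 1 up to the element
+   width.  The rounding forms (vrshr_n) add 1 << (__b - 1) before
+   shifting, i.e. they round the result to nearest.  */
+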
+#define vshrn_n_s16(__a, __b) \
+  (int8x8_t)__builtin_neon_vshrn_nv8hi (__a, __b, 1)
+
+#define vshrn_n_s32(__a, __b) \
+  (int16x4_t)__builtin_neon_vshrn_nv4si (__a, __b, 1)
+
+#define vshrn_n_s64(__a, __b) \
+  (int32x2_t)__builtin_neon_vshrn_nv2di (__a, __b, 1)
+
+#define vshrn_n_u16(__a, __b) \
+  (uint8x8_t)__builtin_neon_vshrn_nv8hi ((int16x8_t) __a, __b, 0)
+
+#define vshrn_n_u32(__a, __b) \
+  (uint16x4_t)__builtin_neon_vshrn_nv4si ((int32x4_t) __a, __b, 0)
+
+#define vshrn_n_u64(__a, __b) \
+  (uint32x2_t)__builtin_neon_vshrn_nv2di ((int64x2_t) __a, __b, 0)
+
+#define vrshrn_n_s16(__a, __b) \
+  (int8x8_t)__builtin_neon_vshrn_nv8hi (__a, __b, 3)
+
+#define vrshrn_n_s32(__a, __b) \
+  (int16x4_t)__builtin_neon_vshrn_nv4si (__a, __b, 3)
+
+#define vrshrn_n_s64(__a, __b) \
+  (int32x2_t)__builtin_neon_vshrn_nv2di (__a, __b, 3)
+
+#define vrshrn_n_u16(__a, __b) \
+  (uint8x8_t)__builtin_neon_vshrn_nv8hi ((int16x8_t) __a, __b, 2)
+
+#define vrshrn_n_u32(__a, __b) \
+  (uint16x4_t)__builtin_neon_vshrn_nv4si ((int32x4_t) __a, __b, 2)
+
+#define vrshrn_n_u64(__a, __b) \
+  (uint32x2_t)__builtin_neon_vshrn_nv2di ((int64x2_t) __a, __b, 2)
+
+#define vqshrn_n_s16(__a, __b) \
+  (int8x8_t)__builtin_neon_vqshrn_nv8hi (__a, __b, 1)
+
+#define vqshrn_n_s32(__a, __b) \
+  (int16x4_t)__builtin_neon_vqshrn_nv4si (__a, __b, 1)
+
+#define vqshrn_n_s64(__a, __b) \
+  (int32x2_t)__builtin_neon_vqshrn_nv2di (__a, __b, 1)
+
+#define vqshrn_n_u16(__a, __b) \
+  (uint8x8_t)__builtin_neon_vqshrn_nv8hi ((int16x8_t) __a, __b, 0)
+
+#define vqshrn_n_u32(__a, __b) \
+  (uint16x4_t)__builtin_neon_vqshrn_nv4si ((int32x4_t) __a, __b, 0)
+
+#define vqshrn_n_u64(__a, __b) \
+  (uint32x2_t)__builtin_neon_vqshrn_nv2di ((int64x2_t) __a, __b, 0)
+
+#define vqrshrn_n_s16(__a, __b) \
+  (int8x8_t)__builtin_neon_vqshrn_nv8hi (__a, __b, 3)
+
+#define vqrshrn_n_s32(__a, __b) \
+  (int16x4_t)__builtin_neon_vqshrn_nv4si (__a, __b, 3)
+
+#define vqrshrn_n_s64(__a, __b) \
+  (int32x2_t)__builtin_neon_vqshrn_nv2di (__a, __b, 3)
+
+#define vqrshrn_n_u16(__a, __b) \
+  (uint8x8_t)__builtin_neon_vqshrn_nv8hi ((int16x8_t) __a, __b, 2)
+
+#define vqrshrn_n_u32(__a, __b) \
+  (uint16x4_t)__builtin_neon_vqshrn_nv4si ((int32x4_t) __a, __b, 2)
+
+#define vqrshrn_n_u64(__a, __b) \
+  (uint32x2_t)__builtin_neon_vqshrn_nv2di ((int64x2_t) __a, __b, 2)
+
+#define vqshrun_n_s16(__a, __b) \
+  (uint8x8_t)__builtin_neon_vqshrun_nv8hi (__a, __b, 1)
+
+#define vqshrun_n_s32(__a, __b) \
+  (uint16x4_t)__builtin_neon_vqshrun_nv4si (__a, __b, 1)
+
+#define vqshrun_n_s64(__a, __b) \
+  (uint32x2_t)__builtin_neon_vqshrun_nv2di (__a, __b, 1)
+
+#define vqrshrun_n_s16(__a, __b) \
+  (uint8x8_t)__builtin_neon_vqshrun_nv8hi (__a, __b, 3)
+
+#define vqrshrun_n_s32(__a, __b) \
+  (uint16x4_t)__builtin_neon_vqshrun_nv4si (__a, __b, 3)
+
+#define vqrshrun_n_s64(__a, __b) \
+  (uint32x2_t)__builtin_neon_vqshrun_nv2di (__a, __b, 3)
+
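+/* The narrowing shifts return elements half as wide as the input:
+   vshrn_n truncates, vqshrn_n saturates, vqshrun_n narrows a signed
+   input to an unsigned result with saturation, and the vqr* forms
+   round as well.  E.g., scaling 16-bit intermediates back down to
+   bytes (hypothetical name):
+
+     uint8x8_t out = vqrshrn_n_u16 (wide, 8);
+*/
+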
+#define vshl_n_s8(__a, __b) \
+  (int8x8_t)__builtin_neon_vshl_nv8qi (__a, __b, 1)
+
+#define vshl_n_s16(__a, __b) \
+  (int16x4_t)__builtin_neon_vshl_nv4hi (__a, __b, 1)
+
+#define vshl_n_s32(__a, __b) \
+  (int32x2_t)__builtin_neon_vshl_nv2si (__a, __b, 1)
+
+#define vshl_n_s64(__a, __b) \
+  (int64x1_t)__builtin_neon_vshl_nv1di (__a, __b, 1)
+
+#define vshl_n_u8(__a, __b) \
+  (uint8x8_t)__builtin_neon_vshl_nv8qi ((int8x8_t) __a, __b, 0)
+
+#define vshl_n_u16(__a, __b) \
+  (uint16x4_t)__builtin_neon_vshl_nv4hi ((int16x4_t) __a, __b, 0)
+
+#define vshl_n_u32(__a, __b) \
+  (uint32x2_t)__builtin_neon_vshl_nv2si ((int32x2_t) __a, __b, 0)
+
+#define vshl_n_u64(__a, __b) \
+  (uint64x1_t)__builtin_neon_vshl_nv1di ((int64x1_t) __a, __b, 0)
+
+#define vshlq_n_s8(__a, __b) \
+  (int8x16_t)__builtin_neon_vshl_nv16qi (__a, __b, 1)
+
+#define vshlq_n_s16(__a, __b) \
+  (int16x8_t)__builtin_neon_vshl_nv8hi (__a, __b, 1)
+
+#define vshlq_n_s32(__a, __b) \
+  (int32x4_t)__builtin_neon_vshl_nv4si (__a, __b, 1)
+
+#define vshlq_n_s64(__a, __b) \
+  (int64x2_t)__builtin_neon_vshl_nv2di (__a, __b, 1)
+
+#define vshlq_n_u8(__a, __b) \
+  (uint8x16_t)__builtin_neon_vshl_nv16qi ((int8x16_t) __a, __b, 0)
+
+#define vshlq_n_u16(__a, __b) \
+  (uint16x8_t)__builtin_neon_vshl_nv8hi ((int16x8_t) __a, __b, 0)
+
+#define vshlq_n_u32(__a, __b) \
+  (uint32x4_t)__builtin_neon_vshl_nv4si ((int32x4_t) __a, __b, 0)
+
+#define vshlq_n_u64(__a, __b) \
+  (uint64x2_t)__builtin_neon_vshl_nv2di ((int64x2_t) __a, __b, 0)
+
+#define vqshl_n_s8(__a, __b) \
+  (int8x8_t)__builtin_neon_vqshl_nv8qi (__a, __b, 1)
+
+#define vqshl_n_s16(__a, __b) \
+  (int16x4_t)__builtin_neon_vqshl_nv4hi (__a, __b, 1)
+
+#define vqshl_n_s32(__a, __b) \
+  (int32x2_t)__builtin_neon_vqshl_nv2si (__a, __b, 1)
+
+#define vqshl_n_s64(__a, __b) \
+  (int64x1_t)__builtin_neon_vqshl_nv1di (__a, __b, 1)
+
+#define vqshl_n_u8(__a, __b) \
+  (uint8x8_t)__builtin_neon_vqshl_nv8qi ((int8x8_t) __a, __b, 0)
+
+#define vqshl_n_u16(__a, __b) \
+  (uint16x4_t)__builtin_neon_vqshl_nv4hi ((int16x4_t) __a, __b, 0)
+
+#define vqshl_n_u32(__a, __b) \
+  (uint32x2_t)__builtin_neon_vqshl_nv2si ((int32x2_t) __a, __b, 0)
+
+#define vqshl_n_u64(__a, __b) \
+  (uint64x1_t)__builtin_neon_vqshl_nv1di ((int64x1_t) __a, __b, 0)
+
+#define vqshlq_n_s8(__a, __b) \
+  (int8x16_t)__builtin_neon_vqshl_nv16qi (__a, __b, 1)
+
+#define vqshlq_n_s16(__a, __b) \
+  (int16x8_t)__builtin_neon_vqshl_nv8hi (__a, __b, 1)
+
+#define vqshlq_n_s32(__a, __b) \
+  (int32x4_t)__builtin_neon_vqshl_nv4si (__a, __b, 1)
+
+#define vqshlq_n_s64(__a, __b) \
+  (int64x2_t)__builtin_neon_vqshl_nv2di (__a, __b, 1)
+
+#define vqshlq_n_u8(__a, __b) \
+  (uint8x16_t)__builtin_neon_vqshl_nv16qi ((int8x16_t) __a, __b, 0)
+
+#define vqshlq_n_u16(__a, __b) \
+  (uint16x8_t)__builtin_neon_vqshl_nv8hi ((int16x8_t) __a, __b, 0)
+
+#define vqshlq_n_u32(__a, __b) \
+  (uint32x4_t)__builtin_neon_vqshl_nv4si ((int32x4_t) __a, __b, 0)
+
+#define vqshlq_n_u64(__a, __b) \
+  (uint64x2_t)__builtin_neon_vqshl_nv2di ((int64x2_t) __a, __b, 0)
+
+#define vqshlu_n_s8(__a, __b) \
+  (uint8x8_t)__builtin_neon_vqshlu_nv8qi (__a, __b, 1)
+
+#define vqshlu_n_s16(__a, __b) \
+  (uint16x4_t)__builtin_neon_vqshlu_nv4hi (__a, __b, 1)
+
+#define vqshlu_n_s32(__a, __b) \
+  (uint32x2_t)__builtin_neon_vqshlu_nv2si (__a, __b, 1)
+
+#define vqshlu_n_s64(__a, __b) \
+  (uint64x1_t)__builtin_neon_vqshlu_nv1di (__a, __b, 1)
+
+#define vqshluq_n_s8(__a, __b) \
+  (uint8x16_t)__builtin_neon_vqshlu_nv16qi (__a, __b, 1)
+
+#define vqshluq_n_s16(__a, __b) \
+  (uint16x8_t)__builtin_neon_vqshlu_nv8hi (__a, __b, 1)
+
+#define vqshluq_n_s32(__a, __b) \
+  (uint32x4_t)__builtin_neon_vqshlu_nv4si (__a, __b, 1)
+
+#define vqshluq_n_s64(__a, __b) \
+  (uint64x2_t)__builtin_neon_vqshlu_nv2di (__a, __b, 1)
+
+#define vshll_n_s8(__a, __b) \
+  (int16x8_t)__builtin_neon_vshll_nv8qi (__a, __b, 1)
+
+#define vshll_n_s16(__a, __b) \
+  (int32x4_t)__builtin_neon_vshll_nv4hi (__a, __b, 1)
+
+#define vshll_n_s32(__a, __b) \
+  (int64x2_t)__builtin_neon_vshll_nv2si (__a, __b, 1)
+
+#define vshll_n_u8(__a, __b) \
+  (uint16x8_t)__builtin_neon_vshll_nv8qi ((int8x8_t) __a, __b, 0)
+
+#define vshll_n_u16(__a, __b) \
+  (uint32x4_t)__builtin_neon_vshll_nv4hi ((int16x4_t) __a, __b, 0)
+
+#define vshll_n_u32(__a, __b) \
+  (uint64x2_t)__builtin_neon_vshll_nv2si ((int32x2_t) __a, __b, 0)
+
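+/* Immediate left shifts: vshl_n takes a count from 0 up to the element
+   width minus one; vqshl_n saturates on overflow; vqshlu_n shifts a
+   signed input into an unsigned, saturated result; vshll_n shifts
+   left while widening to double-width lanes.  */
+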
+#define vsra_n_s8(__a, __b, __c) \
+  (int8x8_t)__builtin_neon_vsra_nv8qi (__a, __b, __c, 1)
+
+#define vsra_n_s16(__a, __b, __c) \
+  (int16x4_t)__builtin_neon_vsra_nv4hi (__a, __b, __c, 1)
+
+#define vsra_n_s32(__a, __b, __c) \
+  (int32x2_t)__builtin_neon_vsra_nv2si (__a, __b, __c, 1)
+
+#define vsra_n_s64(__a, __b, __c) \
+  (int64x1_t)__builtin_neon_vsra_nv1di (__a, __b, __c, 1)
+
+#define vsra_n_u8(__a, __b, __c) \
+  (uint8x8_t)__builtin_neon_vsra_nv8qi ((int8x8_t) __a, (int8x8_t) __b, __c, 0)
+
+#define vsra_n_u16(__a, __b, __c) \
+  (uint16x4_t)__builtin_neon_vsra_nv4hi ((int16x4_t) __a, (int16x4_t) __b, __c, 0)
+
+#define vsra_n_u32(__a, __b, __c) \
+  (uint32x2_t)__builtin_neon_vsra_nv2si ((int32x2_t) __a, (int32x2_t) __b, __c, 0)
+
+#define vsra_n_u64(__a, __b, __c) \
+  (uint64x1_t)__builtin_neon_vsra_nv1di ((int64x1_t) __a, (int64x1_t) __b, __c, 0)
+
+#define vsraq_n_s8(__a, __b, __c) \
+  (int8x16_t)__builtin_neon_vsra_nv16qi (__a, __b, __c, 1)
+
+#define vsraq_n_s16(__a, __b, __c) \
+  (int16x8_t)__builtin_neon_vsra_nv8hi (__a, __b, __c, 1)
+
+#define vsraq_n_s32(__a, __b, __c) \
+  (int32x4_t)__builtin_neon_vsra_nv4si (__a, __b, __c, 1)
+
+#define vsraq_n_s64(__a, __b, __c) \
+  (int64x2_t)__builtin_neon_vsra_nv2di (__a, __b, __c, 1)
+
+#define vsraq_n_u8(__a, __b, __c) \
+  (uint8x16_t)__builtin_neon_vsra_nv16qi ((int8x16_t) __a, (int8x16_t) __b, __c, 0)
+
+#define vsraq_n_u16(__a, __b, __c) \
+  (uint16x8_t)__builtin_neon_vsra_nv8hi ((int16x8_t) __a, (int16x8_t) __b, __c, 0)
+
+#define vsraq_n_u32(__a, __b, __c) \
+  (uint32x4_t)__builtin_neon_vsra_nv4si ((int32x4_t) __a, (int32x4_t) __b, __c, 0)
+
+#define vsraq_n_u64(__a, __b, __c) \
+  (uint64x2_t)__builtin_neon_vsra_nv2di ((int64x2_t) __a, (int64x2_t) __b, __c, 0)
+
+#define vrsra_n_s8(__a, __b, __c) \
+  (int8x8_t)__builtin_neon_vsra_nv8qi (__a, __b, __c, 3)
+
+#define vrsra_n_s16(__a, __b, __c) \
+  (int16x4_t)__builtin_neon_vsra_nv4hi (__a, __b, __c, 3)
+
+#define vrsra_n_s32(__a, __b, __c) \
+  (int32x2_t)__builtin_neon_vsra_nv2si (__a, __b, __c, 3)
+
+#define vrsra_n_s64(__a, __b, __c) \
+  (int64x1_t)__builtin_neon_vsra_nv1di (__a, __b, __c, 3)
+
+#define vrsra_n_u8(__a, __b, __c) \
+  (uint8x8_t)__builtin_neon_vsra_nv8qi ((int8x8_t) __a, (int8x8_t) __b, __c, 2)
+
+#define vrsra_n_u16(__a, __b, __c) \
+  (uint16x4_t)__builtin_neon_vsra_nv4hi ((int16x4_t) __a, (int16x4_t) __b, __c, 2)
+
+#define vrsra_n_u32(__a, __b, __c) \
+  (uint32x2_t)__builtin_neon_vsra_nv2si ((int32x2_t) __a, (int32x2_t) __b, __c, 2)
+
+#define vrsra_n_u64(__a, __b, __c) \
+  (uint64x1_t)__builtin_neon_vsra_nv1di ((int64x1_t) __a, (int64x1_t) __b, __c, 2)
+
+#define vrsraq_n_s8(__a, __b, __c) \
+  (int8x16_t)__builtin_neon_vsra_nv16qi (__a, __b, __c, 3)
+
+#define vrsraq_n_s16(__a, __b, __c) \
+  (int16x8_t)__builtin_neon_vsra_nv8hi (__a, __b, __c, 3)
+
+#define vrsraq_n_s32(__a, __b, __c) \
+  (int32x4_t)__builtin_neon_vsra_nv4si (__a, __b, __c, 3)
+
+#define vrsraq_n_s64(__a, __b, __c) \
+  (int64x2_t)__builtin_neon_vsra_nv2di (__a, __b, __c, 3)
+
+#define vrsraq_n_u8(__a, __b, __c) \
+  (uint8x16_t)__builtin_neon_vsra_nv16qi ((int8x16_t) __a, (int8x16_t) __b, __c, 2)
+
+#define vrsraq_n_u16(__a, __b, __c) \
+  (uint16x8_t)__builtin_neon_vsra_nv8hi ((int16x8_t) __a, (int16x8_t) __b, __c, 2)
+
+#define vrsraq_n_u32(__a, __b, __c) \
+  (uint32x4_t)__builtin_neon_vsra_nv4si ((int32x4_t) __a, (int32x4_t) __b, __c, 2)
+
+#define vrsraq_n_u64(__a, __b, __c) \
+  (uint64x2_t)__builtin_neon_vsra_nv2di ((int64x2_t) __a, (int64x2_t) __b, __c, 2)
+
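+/* vsra_n(__a, __b, __c) is a shift-right-and-accumulate:
+   __a + (__b >> __c) per lane, with vrsra_n rounding the shifted
+   value first.  A common use is accumulating scaled-down terms
+   (hypothetical names):
+
+     acc = vsra_n_u16 (acc, term, 4);   // acc += term >> 4
+*/
+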
+#define vsri_n_s8(__a, __b, __c) \
+  (int8x8_t)__builtin_neon_vsri_nv8qi (__a, __b, __c)
+
+#define vsri_n_s16(__a, __b, __c) \
+  (int16x4_t)__builtin_neon_vsri_nv4hi (__a, __b, __c)
+
+#define vsri_n_s32(__a, __b, __c) \
+  (int32x2_t)__builtin_neon_vsri_nv2si (__a, __b, __c)
+
+#define vsri_n_s64(__a, __b, __c) \
+  (int64x1_t)__builtin_neon_vsri_nv1di (__a, __b, __c)
+
+#define vsri_n_u8(__a, __b, __c) \
+  (uint8x8_t)__builtin_neon_vsri_nv8qi ((int8x8_t) __a, (int8x8_t) __b, __c)
+
+#define vsri_n_u16(__a, __b, __c) \
+  (uint16x4_t)__builtin_neon_vsri_nv4hi ((int16x4_t) __a, (int16x4_t) __b, __c)
+
+#define vsri_n_u32(__a, __b, __c) \
+  (uint32x2_t)__builtin_neon_vsri_nv2si ((int32x2_t) __a, (int32x2_t) __b, __c)
+
+#define vsri_n_u64(__a, __b, __c) \
+  (uint64x1_t)__builtin_neon_vsri_nv1di ((int64x1_t) __a, (int64x1_t) __b, __c)
+
+#define vsri_n_p8(__a, __b, __c) \
+  (poly8x8_t)__builtin_neon_vsri_nv8qi ((int8x8_t) __a, (int8x8_t) __b, __c)
+
+#define vsri_n_p16(__a, __b, __c) \
+  (poly16x4_t)__builtin_neon_vsri_nv4hi ((int16x4_t) __a, (int16x4_t) __b, __c)
+
+#define vsriq_n_s8(__a, __b, __c) \
+  (int8x16_t)__builtin_neon_vsri_nv16qi (__a, __b, __c)
+
+#define vsriq_n_s16(__a, __b, __c) \
+  (int16x8_t)__builtin_neon_vsri_nv8hi (__a, __b, __c)
+
+#define vsriq_n_s32(__a, __b, __c) \
+  (int32x4_t)__builtin_neon_vsri_nv4si (__a, __b, __c)
+
+#define vsriq_n_s64(__a, __b, __c) \
+  (int64x2_t)__builtin_neon_vsri_nv2di (__a, __b, __c)
+
+#define vsriq_n_u8(__a, __b, __c) \
+  (uint8x16_t)__builtin_neon_vsri_nv16qi ((int8x16_t) __a, (int8x16_t) __b, __c)
+
+#define vsriq_n_u16(__a, __b, __c) \
+  (uint16x8_t)__builtin_neon_vsri_nv8hi ((int16x8_t) __a, (int16x8_t) __b, __c)
+
+#define vsriq_n_u32(__a, __b, __c) \
+  (uint32x4_t)__builtin_neon_vsri_nv4si ((int32x4_t) __a, (int32x4_t) __b, __c)
+
+#define vsriq_n_u64(__a, __b, __c) \
+  (uint64x2_t)__builtin_neon_vsri_nv2di ((int64x2_t) __a, (int64x2_t) __b, __c)
+
+#define vsriq_n_p8(__a, __b, __c) \
+  (poly8x16_t)__builtin_neon_vsri_nv16qi ((int8x16_t) __a, (int8x16_t) __b, __c)
+
+#define vsriq_n_p16(__a, __b, __c) \
+  (poly16x8_t)__builtin_neon_vsri_nv8hi ((int16x8_t) __a, (int16x8_t) __b, __c)
+
+#define vsli_n_s8(__a, __b, __c) \
+  (int8x8_t)__builtin_neon_vsli_nv8qi (__a, __b, __c)
+
+#define vsli_n_s16(__a, __b, __c) \
+  (int16x4_t)__builtin_neon_vsli_nv4hi (__a, __b, __c)
+
+#define vsli_n_s32(__a, __b, __c) \
+  (int32x2_t)__builtin_neon_vsli_nv2si (__a, __b, __c)
+
+#define vsli_n_s64(__a, __b, __c) \
+  (int64x1_t)__builtin_neon_vsli_nv1di (__a, __b, __c)
+
+#define vsli_n_u8(__a, __b, __c) \
+  (uint8x8_t)__builtin_neon_vsli_nv8qi ((int8x8_t) __a, (int8x8_t) __b, __c)
+
+#define vsli_n_u16(__a, __b, __c) \
+  (uint16x4_t)__builtin_neon_vsli_nv4hi ((int16x4_t) __a, (int16x4_t) __b, __c)
+
+#define vsli_n_u32(__a, __b, __c) \
+  (uint32x2_t)__builtin_neon_vsli_nv2si ((int32x2_t) __a, (int32x2_t) __b, __c)
+
+#define vsli_n_u64(__a, __b, __c) \
+  (uint64x1_t)__builtin_neon_vsli_nv1di ((int64x1_t) __a, (int64x1_t) __b, __c)
+
+#define vsli_n_p8(__a, __b, __c) \
+  (poly8x8_t)__builtin_neon_vsli_nv8qi ((int8x8_t) __a, (int8x8_t) __b, __c)
+
+#define vsli_n_p16(__a, __b, __c) \
+  (poly16x4_t)__builtin_neon_vsli_nv4hi ((int16x4_t) __a, (int16x4_t) __b, __c)
+
+#define vsliq_n_s8(__a, __b, __c) \
+  (int8x16_t)__builtin_neon_vsli_nv16qi (__a, __b, __c)
+
+#define vsliq_n_s16(__a, __b, __c) \
+  (int16x8_t)__builtin_neon_vsli_nv8hi (__a, __b, __c)
+
+#define vsliq_n_s32(__a, __b, __c) \
+  (int32x4_t)__builtin_neon_vsli_nv4si (__a, __b, __c)
+
+#define vsliq_n_s64(__a, __b, __c) \
+  (int64x2_t)__builtin_neon_vsli_nv2di (__a, __b, __c)
+
+#define vsliq_n_u8(__a, __b, __c) \
+  (uint8x16_t)__builtin_neon_vsli_nv16qi ((int8x16_t) __a, (int8x16_t) __b, __c)
+
+#define vsliq_n_u16(__a, __b, __c) \
+  (uint16x8_t)__builtin_neon_vsli_nv8hi ((int16x8_t) __a, (int16x8_t) __b, __c)
+
+#define vsliq_n_u32(__a, __b, __c) \
+  (uint32x4_t)__builtin_neon_vsli_nv4si ((int32x4_t) __a, (int32x4_t) __b, __c)
+
+#define vsliq_n_u64(__a, __b, __c) \
+  (uint64x2_t)__builtin_neon_vsli_nv2di ((int64x2_t) __a, (int64x2_t) __b, __c)
+
+#define vsliq_n_p8(__a, __b, __c) \
+  (poly8x16_t)__builtin_neon_vsli_nv16qi ((int8x16_t) __a, (int8x16_t) __b, __c)
+
+#define vsliq_n_p16(__a, __b, __c) \
+  (poly16x8_t)__builtin_neon_vsli_nv8hi ((int16x8_t) __a, (int16x8_t) __b, __c)
+
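+/* vsri_n shifts __b right by __c and inserts it into __a, leaving the
+   top __c bits of each __a lane unchanged; vsli_n shifts left and
+   leaves the bottom __c bits unchanged.  Note these builtins take no
+   type code: bit insertion is the same for every element type, so
+   only the casts differ between variants.  */
+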
+#define vabs_s8(__a) \
+  (int8x8_t)__builtin_neon_vabsv8qi (__a, 1)
+
+#define vabs_s16(__a) \
+  (int16x4_t)__builtin_neon_vabsv4hi (__a, 1)
+
+#define vabs_s32(__a) \
+  (int32x2_t)__builtin_neon_vabsv2si (__a, 1)
+
+#define vabs_f32(__a) \
+  (float32x2_t)__builtin_neon_vabsv2sf (__a, 5)
+
+#define vabsq_s8(__a) \
+  (int8x16_t)__builtin_neon_vabsv16qi (__a, 1)
+
+#define vabsq_s16(__a) \
+  (int16x8_t)__builtin_neon_vabsv8hi (__a, 1)
+
+#define vabsq_s32(__a) \
+  (int32x4_t)__builtin_neon_vabsv4si (__a, 1)
+
+#define vabsq_f32(__a) \
+  (float32x4_t)__builtin_neon_vabsv4sf (__a, 5)
+
+#define vqabs_s8(__a) \
+  (int8x8_t)__builtin_neon_vqabsv8qi (__a, 1)
+
+#define vqabs_s16(__a) \
+  (int16x4_t)__builtin_neon_vqabsv4hi (__a, 1)
+
+#define vqabs_s32(__a) \
+  (int32x2_t)__builtin_neon_vqabsv2si (__a, 1)
+
+#define vqabsq_s8(__a) \
+  (int8x16_t)__builtin_neon_vqabsv16qi (__a, 1)
+
+#define vqabsq_s16(__a) \
+  (int16x8_t)__builtin_neon_vqabsv8hi (__a, 1)
+
+#define vqabsq_s32(__a) \
+  (int32x4_t)__builtin_neon_vqabsv4si (__a, 1)
+
+#define vneg_s8(__a) \
+  (int8x8_t)__builtin_neon_vnegv8qi (__a, 1)
+
+#define vneg_s16(__a) \
+  (int16x4_t)__builtin_neon_vnegv4hi (__a, 1)
+
+#define vneg_s32(__a) \
+  (int32x2_t)__builtin_neon_vnegv2si (__a, 1)
+
+#define vneg_f32(__a) \
+  (float32x2_t)__builtin_neon_vnegv2sf (__a, 5)
+
+#define vnegq_s8(__a) \
+  (int8x16_t)__builtin_neon_vnegv16qi (__a, 1)
+
+#define vnegq_s16(__a) \
+  (int16x8_t)__builtin_neon_vnegv8hi (__a, 1)
+
+#define vnegq_s32(__a) \
+  (int32x4_t)__builtin_neon_vnegv4si (__a, 1)
+
+#define vnegq_f32(__a) \
+  (float32x4_t)__builtin_neon_vnegv4sf (__a, 5)
+
+#define vqneg_s8(__a) \
+  (int8x8_t)__builtin_neon_vqnegv8qi (__a, 1)
+
+#define vqneg_s16(__a) \
+  (int16x4_t)__builtin_neon_vqnegv4hi (__a, 1)
+
+#define vqneg_s32(__a) \
+  (int32x2_t)__builtin_neon_vqnegv2si (__a, 1)
+
+#define vqnegq_s8(__a) \
+  (int8x16_t)__builtin_neon_vqnegv16qi (__a, 1)
+
+#define vqnegq_s16(__a) \
+  (int16x8_t)__builtin_neon_vqnegv8hi (__a, 1)
+
+#define vqnegq_s32(__a) \
+  (int32x4_t)__builtin_neon_vqnegv4si (__a, 1)
+
+#define vmvn_s8(__a) \
+  (int8x8_t)__builtin_neon_vmvnv8qi (__a, 1)
+
+#define vmvn_s16(__a) \
+  (int16x4_t)__builtin_neon_vmvnv4hi (__a, 1)
+
+#define vmvn_s32(__a) \
+  (int32x2_t)__builtin_neon_vmvnv2si (__a, 1)
+
+#define vmvn_u8(__a) \
+  (uint8x8_t)__builtin_neon_vmvnv8qi ((int8x8_t) __a, 0)
+
+#define vmvn_u16(__a) \
+  (uint16x4_t)__builtin_neon_vmvnv4hi ((int16x4_t) __a, 0)
+
+#define vmvn_u32(__a) \
+  (uint32x2_t)__builtin_neon_vmvnv2si ((int32x2_t) __a, 0)
+
+#define vmvn_p8(__a) \
+  (poly8x8_t)__builtin_neon_vmvnv8qi ((int8x8_t) __a, 4)
+
+#define vmvnq_s8(__a) \
+  (int8x16_t)__builtin_neon_vmvnv16qi (__a, 1)
+
+#define vmvnq_s16(__a) \
+  (int16x8_t)__builtin_neon_vmvnv8hi (__a, 1)
+
+#define vmvnq_s32(__a) \
+  (int32x4_t)__builtin_neon_vmvnv4si (__a, 1)
+
+#define vmvnq_u8(__a) \
+  (uint8x16_t)__builtin_neon_vmvnv16qi ((int8x16_t) __a, 0)
+
+#define vmvnq_u16(__a) \
+  (uint16x8_t)__builtin_neon_vmvnv8hi ((int16x8_t) __a, 0)
+
+#define vmvnq_u32(__a) \
+  (uint32x4_t)__builtin_neon_vmvnv4si ((int32x4_t) __a, 0)
+
+#define vmvnq_p8(__a) \
+  (poly8x16_t)__builtin_neon_vmvnv16qi ((int8x16_t) __a, 4)
+
+#define vcls_s8(__a) \
+  (int8x8_t)__builtin_neon_vclsv8qi (__a, 1)
+
+#define vcls_s16(__a) \
+  (int16x4_t)__builtin_neon_vclsv4hi (__a, 1)
+
+#define vcls_s32(__a) \
+  (int32x2_t)__builtin_neon_vclsv2si (__a, 1)
+
+#define vclsq_s8(__a) \
+  (int8x16_t)__builtin_neon_vclsv16qi (__a, 1)
+
+#define vclsq_s16(__a) \
+  (int16x8_t)__builtin_neon_vclsv8hi (__a, 1)
+
+#define vclsq_s32(__a) \
+  (int32x4_t)__builtin_neon_vclsv4si (__a, 1)
+
+#define vclz_s8(__a) \
+  (int8x8_t)__builtin_neon_vclzv8qi (__a, 1)
+
+#define vclz_s16(__a) \
+  (int16x4_t)__builtin_neon_vclzv4hi (__a, 1)
+
+#define vclz_s32(__a) \
+  (int32x2_t)__builtin_neon_vclzv2si (__a, 1)
+
+#define vclz_u8(__a) \
+  (uint8x8_t)__builtin_neon_vclzv8qi ((int8x8_t) __a, 0)
+
+#define vclz_u16(__a) \
+  (uint16x4_t)__builtin_neon_vclzv4hi ((int16x4_t) __a, 0)
+
+#define vclz_u32(__a) \
+  (uint32x2_t)__builtin_neon_vclzv2si ((int32x2_t) __a, 0)
+
+#define vclzq_s8(__a) \
+  (int8x16_t)__builtin_neon_vclzv16qi (__a, 1)
+
+#define vclzq_s16(__a) \
+  (int16x8_t)__builtin_neon_vclzv8hi (__a, 1)
+
+#define vclzq_s32(__a) \
+  (int32x4_t)__builtin_neon_vclzv4si (__a, 1)
+
+#define vclzq_u8(__a) \
+  (uint8x16_t)__builtin_neon_vclzv16qi ((int8x16_t) __a, 0)
+
+#define vclzq_u16(__a) \
+  (uint16x8_t)__builtin_neon_vclzv8hi ((int16x8_t) __a, 0)
+
+#define vclzq_u32(__a) \
+  (uint32x4_t)__builtin_neon_vclzv4si ((int32x4_t) __a, 0)
+
+#define vcnt_s8(__a) \
+  (int8x8_t)__builtin_neon_vcntv8qi (__a, 1)
+
+#define vcnt_u8(__a) \
+  (uint8x8_t)__builtin_neon_vcntv8qi ((int8x8_t) __a, 0)
+
+#define vcnt_p8(__a) \
+  (poly8x8_t)__builtin_neon_vcntv8qi ((int8x8_t) __a, 4)
+
+#define vcntq_s8(__a) \
+  (int8x16_t)__builtin_neon_vcntv16qi (__a, 1)
+
+#define vcntq_u8(__a) \
+  (uint8x16_t)__builtin_neon_vcntv16qi ((int8x16_t) __a, 0)
+
+#define vcntq_p8(__a) \
+  (poly8x16_t)__builtin_neon_vcntv16qi ((int8x16_t) __a, 4)
+
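+/* vcnt is a per-byte population count.  Wider popcounts are commonly
+   built by chaining it with the pairwise long adds, e.g. (a sketch,
+   assuming the usual vreinterpret macros defined elsewhere in this
+   header):
+
+     uint8x8_t  c8  = vcnt_u8 (vreinterpret_u8_u32 (v));
+     uint16x4_t c16 = vpaddl_u8 (c8);
+     uint32x2_t c32 = vpaddl_u16 (c16);   // per-lane popcount of v
+*/
+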
+#define vrecpe_f32(__a) \
+  (float32x2_t)__builtin_neon_vrecpev2sf (__a, 5)
+
+#define vrecpe_u32(__a) \
+  (uint32x2_t)__builtin_neon_vrecpev2si ((int32x2_t) __a, 0)
+
+#define vrecpeq_f32(__a) \
+  (float32x4_t)__builtin_neon_vrecpev4sf (__a, 5)
+
+#define vrecpeq_u32(__a) \
+  (uint32x4_t)__builtin_neon_vrecpev4si ((int32x4_t) __a, 0)
+
+#define vrsqrte_f32(__a) \
+  (float32x2_t)__builtin_neon_vrsqrtev2sf (__a, 5)
+
+#define vrsqrte_u32(__a) \
+  (uint32x2_t)__builtin_neon_vrsqrtev2si ((int32x2_t) __a, 0)
+
+#define vrsqrteq_f32(__a) \
+  (float32x4_t)__builtin_neon_vrsqrtev4sf (__a, 5)
+
+#define vrsqrteq_u32(__a) \
+  (uint32x4_t)__builtin_neon_vrsqrtev4si ((int32x4_t) __a, 0)
+
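+/* vrecpe and vrsqrte produce low-precision estimates of 1/x and
+   1/sqrt(x).  Each Newton-Raphson step with vrecps/vrsqrts roughly
+   doubles the precision; e.g. one refinement of a reciprocal
+   (hypothetical name d):
+
+     float32x2_t r = vrecpe_f32 (d);
+     r = vmul_f32 (r, vrecps_f32 (d, r));   // r *= (2 - d*r)
+*/
+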
+#define vget_lane_s8(__a, __b) \
+  (int8_t)__builtin_neon_vget_lanev8qi (__a, __b, 1)
+
+#define vget_lane_s16(__a, __b) \
+  (int16_t)__builtin_neon_vget_lanev4hi (__a, __b, 1)
+
+#define vget_lane_s32(__a, __b) \
+  (int32_t)__builtin_neon_vget_lanev2si (__a, __b, 1)
+
+#define vget_lane_f32(__a, __b) \
+  (float32_t)__builtin_neon_vget_lanev2sf (__a, __b, 5)
+
+#define vget_lane_u8(__a, __b) \
+  (uint8_t)__builtin_neon_vget_lanev8qi ((int8x8_t) __a, __b, 0)
+
+#define vget_lane_u16(__a, __b) \
+  (uint16_t)__builtin_neon_vget_lanev4hi ((int16x4_t) __a, __b, 0)
+
+#define vget_lane_u32(__a, __b) \
+  (uint32_t)__builtin_neon_vget_lanev2si ((int32x2_t) __a, __b, 0)
+
+#define vget_lane_p8(__a, __b) \
+  (poly8_t)__builtin_neon_vget_lanev8qi ((int8x8_t) __a, __b, 4)
+
+#define vget_lane_p16(__a, __b) \
+  (poly16_t)__builtin_neon_vget_lanev4hi ((int16x4_t) __a, __b, 4)
+
+#define vget_lane_s64(__a, __b) \
+  (int64_t)__builtin_neon_vget_lanev1di (__a, __b, 1)
+
+#define vget_lane_u64(__a, __b) \
+  (uint64_t)__builtin_neon_vget_lanev1di ((int64x1_t) __a, __b, 0)
+
+#define vgetq_lane_s8(__a, __b) \
+  (int8_t)__builtin_neon_vget_lanev16qi (__a, __b, 1)
+
+#define vgetq_lane_s16(__a, __b) \
+  (int16_t)__builtin_neon_vget_lanev8hi (__a, __b, 1)
+
+#define vgetq_lane_s32(__a, __b) \
+  (int32_t)__builtin_neon_vget_lanev4si (__a, __b, 1)
+
+#define vgetq_lane_f32(__a, __b) \
+  (float32_t)__builtin_neon_vget_lanev4sf (__a, __b, 5)
+
+#define vgetq_lane_u8(__a, __b) \
+  (uint8_t)__builtin_neon_vget_lanev16qi ((int8x16_t) __a, __b, 0)
+
+#define vgetq_lane_u16(__a, __b) \
+  (uint16_t)__builtin_neon_vget_lanev8hi ((int16x8_t) __a, __b, 0)
+
+#define vgetq_lane_u32(__a, __b) \
+  (uint32_t)__builtin_neon_vget_lanev4si ((int32x4_t) __a, __b, 0)
+
+#define vgetq_lane_p8(__a, __b) \
+  (poly8_t)__builtin_neon_vget_lanev16qi ((int8x16_t) __a, __b, 4)
+
+#define vgetq_lane_p16(__a, __b) \
+  (poly16_t)__builtin_neon_vget_lanev8hi ((int16x8_t) __a, __b, 4)
+
+#define vgetq_lane_s64(__a, __b) \
+  (int64_t)__builtin_neon_vget_lanev2di (__a, __b, 1)
+
+#define vgetq_lane_u64(__a, __b) \
+  (uint64_t)__builtin_neon_vget_lanev2di ((int64x2_t) __a, __b, 0)
+
+#define vset_lane_s8(__a, __b, __c) \
+  (int8x8_t)__builtin_neon_vset_lanev8qi ((__builtin_neon_qi) __a, __b, __c)
+
+#define vset_lane_s16(__a, __b, __c) \
+  (int16x4_t)__builtin_neon_vset_lanev4hi ((__builtin_neon_hi) __a, __b, __c)
+
+#define vset_lane_s32(__a, __b, __c) \
+  (int32x2_t)__builtin_neon_vset_lanev2si ((__builtin_neon_si) __a, __b, __c)
+
+#define vset_lane_f32(__a, __b, __c) \
+  (float32x2_t)__builtin_neon_vset_lanev2sf (__a, __b, __c)
+
+#define vset_lane_u8(__a, __b, __c) \
+  (uint8x8_t)__builtin_neon_vset_lanev8qi ((__builtin_neon_qi) __a, (int8x8_t) __b, __c)
+
+#define vset_lane_u16(__a, __b, __c) \
+  (uint16x4_t)__builtin_neon_vset_lanev4hi ((__builtin_neon_hi) __a, (int16x4_t) __b, __c)
+
+#define vset_lane_u32(__a, __b, __c) \
+  (uint32x2_t)__builtin_neon_vset_lanev2si ((__builtin_neon_si) __a, (int32x2_t) __b, __c)
+
+#define vset_lane_p8(__a, __b, __c) \
+  (poly8x8_t)__builtin_neon_vset_lanev8qi ((__builtin_neon_qi) __a, (int8x8_t) __b, __c)
+
+#define vset_lane_p16(__a, __b, __c) \
+  (poly16x4_t)__builtin_neon_vset_lanev4hi ((__builtin_neon_hi) __a, (int16x4_t) __b, __c)
+
+#define vset_lane_s64(__a, __b, __c) \
+  (int64x1_t)__builtin_neon_vset_lanev1di ((__builtin_neon_di) __a, __b, __c)
+
+#define vset_lane_u64(__a, __b, __c) \
+  (uint64x1_t)__builtin_neon_vset_lanev1di ((__builtin_neon_di) __a, (int64x1_t) __b, __c)
+
+#define vsetq_lane_s8(__a, __b, __c) \
+  (int8x16_t)__builtin_neon_vset_lanev16qi ((__builtin_neon_qi) __a, __b, __c)
+
+#define vsetq_lane_s16(__a, __b, __c) \
+  (int16x8_t)__builtin_neon_vset_lanev8hi ((__builtin_neon_hi) __a, __b, __c)
+
+#define vsetq_lane_s32(__a, __b, __c) \
+  (int32x4_t)__builtin_neon_vset_lanev4si ((__builtin_neon_si) __a, __b, __c)
+
+#define vsetq_lane_f32(__a, __b, __c) \
+  (float32x4_t)__builtin_neon_vset_lanev4sf (__a, __b, __c)
+
+#define vsetq_lane_u8(__a, __b, __c) \
+  (uint8x16_t)__builtin_neon_vset_lanev16qi ((__builtin_neon_qi) __a, (int8x16_t) __b, __c)
+
+#define vsetq_lane_u16(__a, __b, __c) \
+  (uint16x8_t)__builtin_neon_vset_lanev8hi ((__builtin_neon_hi) __a, (int16x8_t) __b, __c)
+
+#define vsetq_lane_u32(__a, __b, __c) \
+  (uint32x4_t)__builtin_neon_vset_lanev4si ((__builtin_neon_si) __a, (int32x4_t) __b, __c)
+
+#define vsetq_lane_p8(__a, __b, __c) \
+  (poly8x16_t)__builtin_neon_vset_lanev16qi ((__builtin_neon_qi) __a, (int8x16_t) __b, __c)
+
+#define vsetq_lane_p16(__a, __b, __c) \
+  (poly16x8_t)__builtin_neon_vset_lanev8hi ((__builtin_neon_hi) __a, (int16x8_t) __b, __c)
+
+#define vsetq_lane_s64(__a, __b, __c) \
+  (int64x2_t)__builtin_neon_vset_lanev2di ((__builtin_neon_di) __a, __b, __c)
+
+#define vsetq_lane_u64(__a, __b, __c) \
+  (uint64x2_t)__builtin_neon_vset_lanev2di ((__builtin_neon_di) __a, (int64x2_t) __b, __c)
+
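+/* Lane access: the lane index must be a constant expression, and the
+   (__builtin_neon_qi/hi/si/di) casts map the public element types
+   onto the scalar types the builtins expect.  E.g. (hypothetical
+   values):
+
+     uint16_t x = vgetq_lane_u16 (v, 3);
+     v = vsetq_lane_u16 (x + 1, v, 3);
+*/
+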
+#define vcreate_s8(__a) \
+  (int8x8_t)__builtin_neon_vcreatev8qi ((__builtin_neon_di) __a)
+
+#define vcreate_s16(__a) \
+  (int16x4_t)__builtin_neon_vcreatev4hi ((__builtin_neon_di) __a)
+
+#define vcreate_s32(__a) \
+  (int32x2_t)__builtin_neon_vcreatev2si ((__builtin_neon_di) __a)
+
+#define vcreate_s64(__a) \
+  (int64x1_t)__builtin_neon_vcreatev1di ((__builtin_neon_di) __a)
+
+#define vcreate_f32(__a) \
+  (float32x2_t)__builtin_neon_vcreatev2sf ((__builtin_neon_di) __a)
+
+#define vcreate_u8(__a) \
+  (uint8x8_t)__builtin_neon_vcreatev8qi ((__builtin_neon_di) __a)
+
+#define vcreate_u16(__a) \
+  (uint16x4_t)__builtin_neon_vcreatev4hi ((__builtin_neon_di) __a)
+
+#define vcreate_u32(__a) \
+  (uint32x2_t)__builtin_neon_vcreatev2si ((__builtin_neon_di) __a)
+
+#define vcreate_u64(__a) \
+  (uint64x1_t)__builtin_neon_vcreatev1di ((__builtin_neon_di) __a)
+
+#define vcreate_p8(__a) \
+  (poly8x8_t)__builtin_neon_vcreatev8qi ((__builtin_neon_di) __a)
+
+#define vcreate_p16(__a) \
+  (poly16x4_t)__builtin_neon_vcreatev4hi ((__builtin_neon_di) __a)
+
+#define vdup_n_s8(__a) \
+  (int8x8_t)__builtin_neon_vdup_nv8qi ((__builtin_neon_qi) __a)
+
+#define vdup_n_s16(__a) \
+  (int16x4_t)__builtin_neon_vdup_nv4hi ((__builtin_neon_hi) __a)
+
+#define vdup_n_s32(__a) \
+  (int32x2_t)__builtin_neon_vdup_nv2si ((__builtin_neon_si) __a)
+
+#define vdup_n_f32(__a) \
+  (float32x2_t)__builtin_neon_vdup_nv2sf (__a)
+
+#define vdup_n_u8(__a) \
+  (uint8x8_t)__builtin_neon_vdup_nv8qi ((__builtin_neon_qi) __a)
+
+#define vdup_n_u16(__a) \
+  (uint16x4_t)__builtin_neon_vdup_nv4hi ((__builtin_neon_hi) __a)
+
+#define vdup_n_u32(__a) \
+  (uint32x2_t)__builtin_neon_vdup_nv2si ((__builtin_neon_si) __a)
+
+#define vdup_n_p8(__a) \
+  (poly8x8_t)__builtin_neon_vdup_nv8qi ((__builtin_neon_qi) __a)
+
+#define vdup_n_p16(__a) \
+  (poly16x4_t)__builtin_neon_vdup_nv4hi ((__builtin_neon_hi) __a)
+
+#define vdup_n_s64(__a) \
+  (int64x1_t)__builtin_neon_vdup_nv1di ((__builtin_neon_di) __a)
+
+#define vdup_n_u64(__a) \
+  (uint64x1_t)__builtin_neon_vdup_nv1di ((__builtin_neon_di) __a)
+
+#define vdupq_n_s8(__a) \
+  (int8x16_t)__builtin_neon_vdup_nv16qi ((__builtin_neon_qi) __a)
+
+#define vdupq_n_s16(__a) \
+  (int16x8_t)__builtin_neon_vdup_nv8hi ((__builtin_neon_hi) __a)
+
+#define vdupq_n_s32(__a) \
+  (int32x4_t)__builtin_neon_vdup_nv4si ((__builtin_neon_si) __a)
+
+#define vdupq_n_f32(__a) \
+  (float32x4_t)__builtin_neon_vdup_nv4sf (__a)
+
+#define vdupq_n_u8(__a) \
+  (uint8x16_t)__builtin_neon_vdup_nv16qi ((__builtin_neon_qi) __a)
+
+#define vdupq_n_u16(__a) \
+  (uint16x8_t)__builtin_neon_vdup_nv8hi ((__builtin_neon_hi) __a)
+
+#define vdupq_n_u32(__a) \
+  (uint32x4_t)__builtin_neon_vdup_nv4si ((__builtin_neon_si) __a)
+
+#define vdupq_n_p8(__a) \
+  (poly8x16_t)__builtin_neon_vdup_nv16qi ((__builtin_neon_qi) __a)
+
+#define vdupq_n_p16(__a) \
+  (poly16x8_t)__builtin_neon_vdup_nv8hi ((__builtin_neon_hi) __a)
+
+#define vdupq_n_s64(__a) \
+  (int64x2_t)__builtin_neon_vdup_nv2di ((__builtin_neon_di) __a)
+
+#define vdupq_n_u64(__a) \
+  (uint64x2_t)__builtin_neon_vdup_nv2di ((__builtin_neon_di) __a)
+
+#define vmov_n_s8(__a) \
+  (int8x8_t)__builtin_neon_vdup_nv8qi ((__builtin_neon_qi) __a)
+
+#define vmov_n_s16(__a) \
+  (int16x4_t)__builtin_neon_vdup_nv4hi ((__builtin_neon_hi) __a)
+
+#define vmov_n_s32(__a) \
+  (int32x2_t)__builtin_neon_vdup_nv2si ((__builtin_neon_si) __a)
+
+#define vmov_n_f32(__a) \
+  (float32x2_t)__builtin_neon_vdup_nv2sf (__a)
+
+#define vmov_n_u8(__a) \
+  (uint8x8_t)__builtin_neon_vdup_nv8qi ((__builtin_neon_qi) __a)
+
+#define vmov_n_u16(__a) \
+  (uint16x4_t)__builtin_neon_vdup_nv4hi ((__builtin_neon_hi) __a)
+
+#define vmov_n_u32(__a) \
+  (uint32x2_t)__builtin_neon_vdup_nv2si ((__builtin_neon_si) __a)
+
+#define vmov_n_p8(__a) \
+  (poly8x8_t)__builtin_neon_vdup_nv8qi ((__builtin_neon_qi) __a)
+
+#define vmov_n_p16(__a) \
+  (poly16x4_t)__builtin_neon_vdup_nv4hi ((__builtin_neon_hi) __a)
+
+#define vmov_n_s64(__a) \
+  (int64x1_t)__builtin_neon_vdup_nv1di ((__builtin_neon_di) __a)
+
+#define vmov_n_u64(__a) \
+  (uint64x1_t)__builtin_neon_vdup_nv1di ((__builtin_neon_di) __a)
+
+#define vmovq_n_s8(__a) \
+  (int8x16_t)__builtin_neon_vdup_nv16qi ((__builtin_neon_qi) __a)
+
+#define vmovq_n_s16(__a) \
+  (int16x8_t)__builtin_neon_vdup_nv8hi ((__builtin_neon_hi) __a)
+
+#define vmovq_n_s32(__a) \
+  (int32x4_t)__builtin_neon_vdup_nv4si ((__builtin_neon_si) __a)
+
+#define vmovq_n_f32(__a) \
+  (float32x4_t)__builtin_neon_vdup_nv4sf (__a)
+
+#define vmovq_n_u8(__a) \
+  (uint8x16_t)__builtin_neon_vdup_nv16qi ((__builtin_neon_qi) __a)
+
+#define vmovq_n_u16(__a) \
+  (uint16x8_t)__builtin_neon_vdup_nv8hi ((__builtin_neon_hi) __a)
+
+#define vmovq_n_u32(__a) \
+  (uint32x4_t)__builtin_neon_vdup_nv4si ((__builtin_neon_si) __a)
+
+#define vmovq_n_p8(__a) \
+  (poly8x16_t)__builtin_neon_vdup_nv16qi ((__builtin_neon_qi) __a)
+
+#define vmovq_n_p16(__a) \
+  (poly16x8_t)__builtin_neon_vdup_nv8hi ((__builtin_neon_hi) __a)
+
+#define vmovq_n_s64(__a) \
+  (int64x2_t)__builtin_neon_vdup_nv2di ((__builtin_neon_di) __a)
+
+#define vmovq_n_u64(__a) \
+  (uint64x2_t)__builtin_neon_vdup_nv2di ((__builtin_neon_di) __a)
+
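+/* vcreate reinterprets a 64-bit integer as a D-register vector (on a
+   little-endian target the low byte becomes lane 0), while vdup_n
+   splats a scalar into every lane.  The vmov_n macros are exact
+   aliases of the vdup_n ones.  E.g.:
+
+     uint8x8_t idx = vcreate_u8 (0x0706050403020100ULL);  // lanes 0..7
+     uint8x16_t fives = vdupq_n_u8 (5);
+*/
+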
+#define vdup_lane_s8(__a, __b) \
+  (int8x8_t)__builtin_neon_vdup_lanev8qi (__a, __b)
+
+#define vdup_lane_s16(__a, __b) \
+  (int16x4_t)__builtin_neon_vdup_lanev4hi (__a, __b)
+
+#define vdup_lane_s32(__a, __b) \
+  (int32x2_t)__builtin_neon_vdup_lanev2si (__a, __b)
+
+#define vdup_lane_f32(__a, __b) \
+  (float32x2_t)__builtin_neon_vdup_lanev2sf (__a, __b)
+
+#define vdup_lane_u8(__a, __b) \
+  (uint8x8_t)__builtin_neon_vdup_lanev8qi ((int8x8_t) __a, __b)
+
+#define vdup_lane_u16(__a, __b) \
+  (uint16x4_t)__builtin_neon_vdup_lanev4hi ((int16x4_t) __a, __b)
+
+#define vdup_lane_u32(__a, __b) \
+  (uint32x2_t)__builtin_neon_vdup_lanev2si ((int32x2_t) __a, __b)
+
+#define vdup_lane_p8(__a, __b) \
+  (poly8x8_t)__builtin_neon_vdup_lanev8qi ((int8x8_t) __a, __b)
+
+#define vdup_lane_p16(__a, __b) \
+  (poly16x4_t)__builtin_neon_vdup_lanev4hi ((int16x4_t) __a, __b)
+
+#define vdup_lane_s64(__a, __b) \
+  (int64x1_t)__builtin_neon_vdup_lanev1di (__a, __b)
+
+#define vdup_lane_u64(__a, __b) \
+  (uint64x1_t)__builtin_neon_vdup_lanev1di ((int64x1_t) __a, __b)
+
+#define vdupq_lane_s8(__a, __b) \
+  (int8x16_t)__builtin_neon_vdup_lanev16qi (__a, __b)
+
+#define vdupq_lane_s16(__a, __b) \
+  (int16x8_t)__builtin_neon_vdup_lanev8hi (__a, __b)
+
+#define vdupq_lane_s32(__a, __b) \
+  (int32x4_t)__builtin_neon_vdup_lanev4si (__a, __b)
+
+#define vdupq_lane_f32(__a, __b) \
+  (float32x4_t)__builtin_neon_vdup_lanev4sf (__a, __b)
+
+#define vdupq_lane_u8(__a, __b) \
+  (uint8x16_t)__builtin_neon_vdup_lanev16qi ((int8x8_t) __a, __b)
+
+#define vdupq_lane_u16(__a, __b) \
+  (uint16x8_t)__builtin_neon_vdup_lanev8hi ((int16x4_t) __a, __b)
+
+#define vdupq_lane_u32(__a, __b) \
+  (uint32x4_t)__builtin_neon_vdup_lanev4si ((int32x2_t) __a, __b)
+
+#define vdupq_lane_p8(__a, __b) \
+  (poly8x16_t)__builtin_neon_vdup_lanev16qi ((int8x8_t) __a, __b)
+
+#define vdupq_lane_p16(__a, __b) \
+  (poly16x8_t)__builtin_neon_vdup_lanev8hi ((int16x4_t) __a, __b)
+
+#define vdupq_lane_s64(__a, __b) \
+  (int64x2_t)__builtin_neon_vdup_lanev2di (__a, __b)
+
+#define vdupq_lane_u64(__a, __b) \
+  (uint64x2_t)__builtin_neon_vdup_lanev2di ((int64x1_t) __a, __b)
+
+#define vcombine_s8(__a, __b) \
+  (int8x16_t)__builtin_neon_vcombinev8qi (__a, __b)
+
+#define vcombine_s16(__a, __b) \
+  (int16x8_t)__builtin_neon_vcombinev4hi (__a, __b)
+
+#define vcombine_s32(__a, __b) \
+  (int32x4_t)__builtin_neon_vcombinev2si (__a, __b)
+
+#define vcombine_s64(__a, __b) \
+  (int64x2_t)__builtin_neon_vcombinev1di (__a, __b)
+
+#define vcombine_f32(__a, __b) \
+  (float32x4_t)__builtin_neon_vcombinev2sf (__a, __b)
+
+#define vcombine_u8(__a, __b) \
+  (uint8x16_t)__builtin_neon_vcombinev8qi ((int8x8_t) __a, (int8x8_t) __b)
+
+#define vcombine_u16(__a, __b) \
+  (uint16x8_t)__builtin_neon_vcombinev4hi ((int16x4_t) __a, (int16x4_t) __b)
+
+#define vcombine_u32(__a, __b) \
+  (uint32x4_t)__builtin_neon_vcombinev2si ((int32x2_t) __a, (int32x2_t) __b)
+
+#define vcombine_u64(__a, __b) \
+  (uint64x2_t)__builtin_neon_vcombinev1di ((int64x1_t) __a, (int64x1_t) __b)
+
+#define vcombine_p8(__a, __b) \
+  (poly8x16_t)__builtin_neon_vcombinev8qi ((int8x8_t) __a, (int8x8_t) __b)
+
+#define vcombine_p16(__a, __b) \
+  (poly16x8_t)__builtin_neon_vcombinev4hi ((int16x4_t) __a, (int16x4_t) __b)
+
+#define vget_high_s8(__a) \
+  (int8x8_t)__builtin_neon_vget_highv16qi (__a)
+
+#define vget_high_s16(__a) \
+  (int16x4_t)__builtin_neon_vget_highv8hi (__a)
+
+#define vget_high_s32(__a) \
+  (int32x2_t)__builtin_neon_vget_highv4si (__a)
+
+#define vget_high_s64(__a) \
+  (int64x1_t)__builtin_neon_vget_highv2di (__a)
+
+#define vget_high_f32(__a) \
+  (float32x2_t)__builtin_neon_vget_highv4sf (__a)
+
+#define vget_high_u8(__a) \
+  (uint8x8_t)__builtin_neon_vget_highv16qi ((int8x16_t) __a)
+
+#define vget_high_u16(__a) \
+  (uint16x4_t)__builtin_neon_vget_highv8hi ((int16x8_t) __a)
+
+#define vget_high_u32(__a) \
+  (uint32x2_t)__builtin_neon_vget_highv4si ((int32x4_t) __a)
+
+#define vget_high_u64(__a) \
+  (uint64x1_t)__builtin_neon_vget_highv2di ((int64x2_t) __a)
+
+#define vget_high_p8(__a) \
+  (poly8x8_t)__builtin_neon_vget_highv16qi ((int8x16_t) __a)
+
+#define vget_high_p16(__a) \
+  (poly16x4_t)__builtin_neon_vget_highv8hi ((int16x8_t) __a)
+
+#define vget_low_s8(__a) \
+  (int8x8_t)__builtin_neon_vget_lowv16qi (__a)
+
+#define vget_low_s16(__a) \
+  (int16x4_t)__builtin_neon_vget_lowv8hi (__a)
+
+#define vget_low_s32(__a) \
+  (int32x2_t)__builtin_neon_vget_lowv4si (__a)
+
+#define vget_low_s64(__a) \
+  (int64x1_t)__builtin_neon_vget_lowv2di (__a)
+
+#define vget_low_f32(__a) \
+  (float32x2_t)__builtin_neon_vget_lowv4sf (__a)
+
+#define vget_low_u8(__a) \
+  (uint8x8_t)__builtin_neon_vget_lowv16qi ((int8x16_t) __a)
+
+#define vget_low_u16(__a) \
+  (uint16x4_t)__builtin_neon_vget_lowv8hi ((int16x8_t) __a)
+
+#define vget_low_u32(__a) \
+  (uint32x2_t)__builtin_neon_vget_lowv4si ((int32x4_t) __a)
+
+#define vget_low_u64(__a) \
+  (uint64x1_t)__builtin_neon_vget_lowv2di ((int64x2_t) __a)
+
+#define vget_low_p8(__a) \
+  (poly8x8_t)__builtin_neon_vget_lowv16qi ((int8x16_t) __a)
+
+#define vget_low_p16(__a) \
+  (poly16x4_t)__builtin_neon_vget_lowv8hi ((int16x8_t) __a)
+
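+/* vcombine joins two 64-bit vectors into one 128-bit vector, and
+   vget_low/vget_high extract the halves, so
+   vcombine_s16 (vget_low_s16 (x), vget_high_s16 (x)) reproduces x.
+   This is the usual way to run a 64-bit-only operation over a
+   quadword value half at a time.  */
+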
+#define vcvt_s32_f32(__a) \
+  (int32x2_t)__builtin_neon_vcvtv2sf (__a, 1)
+
+#define vcvt_f32_s32(__a) \
+  (float32x2_t)__builtin_neon_vcvtv2si (__a, 1)
+
+#define vcvt_f32_u32(__a) \
+  (float32x2_t)__builtin_neon_vcvtv2si ((int32x2_t) __a, 0)
+
+#define vcvt_u32_f32(__a) \
+  (uint32x2_t)__builtin_neon_vcvtv2sf (__a, 0)
+
+#define vcvtq_s32_f32(__a) \
+  (int32x4_t)__builtin_neon_vcvtv4sf (__a, 1)
+
+#define vcvtq_f32_s32(__a) \
+  (float32x4_t)__builtin_neon_vcvtv4si (__a, 1)
+
+#define vcvtq_f32_u32(__a) \
+  (float32x4_t)__builtin_neon_vcvtv4si ((int32x4_t) __a, 0)
+
+#define vcvtq_u32_f32(__a) \
+  (uint32x4_t)__builtin_neon_vcvtv4sf (__a, 0)
+
+#define vcvt_n_s32_f32(__a, __b) \
+  (int32x2_t)__builtin_neon_vcvt_nv2sf (__a, __b, 1)
+
+#define vcvt_n_f32_s32(__a, __b) \
+  (float32x2_t)__builtin_neon_vcvt_nv2si (__a, __b, 1)
+
+#define vcvt_n_f32_u32(__a, __b) \
+  (float32x2_t)__builtin_neon_vcvt_nv2si ((int32x2_t) __a, __b, 0)
+
+#define vcvt_n_u32_f32(__a, __b) \
+  (uint32x2_t)__builtin_neon_vcvt_nv2sf (__a, __b, 0)
+
+#define vcvtq_n_s32_f32(__a, __b) \
+  (int32x4_t)__builtin_neon_vcvt_nv4sf (__a, __b, 1)
+
+#define vcvtq_n_f32_s32(__a, __b) \
+  (float32x4_t)__builtin_neon_vcvt_nv4si (__a, __b, 1)
+
+#define vcvtq_n_f32_u32(__a, __b) \
+  (float32x4_t)__builtin_neon_vcvt_nv4si ((int32x4_t) __a, __b, 0)
+
+#define vcvtq_n_u32_f32(__a, __b) \
+  (uint32x4_t)__builtin_neon_vcvt_nv4sf (__a, __b, 0)
+
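+/* Narrowing and widening moves: vmovn truncates each element to half
+   its width, vqmovn narrows with saturation, vqmovun narrows signed
+   input to unsigned output with saturation, and vmovl widens each
+   element to twice its width.  */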
+#define vmovn_s16(__a) \
+  (int8x8_t)__builtin_neon_vmovnv8hi (__a, 1)
+
+#define vmovn_s32(__a) \
+  (int16x4_t)__builtin_neon_vmovnv4si (__a, 1)
+
+#define vmovn_s64(__a) \
+  (int32x2_t)__builtin_neon_vmovnv2di (__a, 1)
+
+#define vmovn_u16(__a) \
+  (uint8x8_t)__builtin_neon_vmovnv8hi ((int16x8_t) __a, 0)
+
+#define vmovn_u32(__a) \
+  (uint16x4_t)__builtin_neon_vmovnv4si ((int32x4_t) __a, 0)
+
+#define vmovn_u64(__a) \
+  (uint32x2_t)__builtin_neon_vmovnv2di ((int64x2_t) __a, 0)
+
+#define vqmovn_s16(__a) \
+  (int8x8_t)__builtin_neon_vqmovnv8hi (__a, 1)
+
+#define vqmovn_s32(__a) \
+  (int16x4_t)__builtin_neon_vqmovnv4si (__a, 1)
+
+#define vqmovn_s64(__a) \
+  (int32x2_t)__builtin_neon_vqmovnv2di (__a, 1)
+
+#define vqmovn_u16(__a) \
+  (uint8x8_t)__builtin_neon_vqmovnv8hi ((int16x8_t) __a, 0)
+
+#define vqmovn_u32(__a) \
+  (uint16x4_t)__builtin_neon_vqmovnv4si ((int32x4_t) __a, 0)
+
+#define vqmovn_u64(__a) \
+  (uint32x2_t)__builtin_neon_vqmovnv2di ((int64x2_t) __a, 0)
+
+#define vqmovun_s16(__a) \
+  (uint8x8_t)__builtin_neon_vqmovunv8hi (__a, 1)
+
+#define vqmovun_s32(__a) \
+  (uint16x4_t)__builtin_neon_vqmovunv4si (__a, 1)
+
+#define vqmovun_s64(__a) \
+  (uint32x2_t)__builtin_neon_vqmovunv2di (__a, 1)
+
+#define vmovl_s8(__a) \
+  (int16x8_t)__builtin_neon_vmovlv8qi (__a, 1)
+
+#define vmovl_s16(__a) \
+  (int32x4_t)__builtin_neon_vmovlv4hi (__a, 1)
+
+#define vmovl_s32(__a) \
+  (int64x2_t)__builtin_neon_vmovlv2si (__a, 1)
+
+#define vmovl_u8(__a) \
+  (uint16x8_t)__builtin_neon_vmovlv8qi ((int8x8_t) __a, 0)
+
+#define vmovl_u16(__a) \
+  (uint32x4_t)__builtin_neon_vmovlv4hi ((int16x4_t) __a, 0)
+
+#define vmovl_u32(__a) \
+  (uint64x2_t)__builtin_neon_vmovlv2si ((int32x2_t) __a, 0)
+
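+/* Table lookups.  vtbl selects bytes from the table __a using the
+   indices in __b (out-of-range indices produce 0); vtbx is the same
+   except that out-of-range indices leave the corresponding byte of the
+   destination __a unchanged.  The multi-vector forms pass the public
+   int8x8xN_t structs to the builtins by punning them through a union
+   with the internal __neon_int8x8xN_t aggregate types, inside a GNU
+   statement expression.  */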
+#define vtbl1_s8(__a, __b) \
+  (int8x8_t)__builtin_neon_vtbl1v8qi (__a, __b)
+
+#define vtbl1_u8(__a, __b) \
+  (uint8x8_t)__builtin_neon_vtbl1v8qi ((int8x8_t) __a, (int8x8_t) __b)
+
+#define vtbl1_p8(__a, __b) \
+  (poly8x8_t)__builtin_neon_vtbl1v8qi ((int8x8_t) __a, (int8x8_t) __b)
+
+#define vtbl2_s8(__a, __b) __extension__ \
+  ({ \
+     union { int8x8x2_t __i; __neon_int8x8x2_t __o; } __au = { __a }; \
+     (int8x8_t)__builtin_neon_vtbl2v8qi (__au.__o, __b); \
+   })
+
+#define vtbl2_u8(__a, __b) __extension__ \
+  ({ \
+     union { uint8x8x2_t __i; __neon_int8x8x2_t __o; } __au = { __a }; \
+     (uint8x8_t)__builtin_neon_vtbl2v8qi (__au.__o, (int8x8_t) __b); \
+   })
+
+#define vtbl2_p8(__a, __b) __extension__ \
+  ({ \
+     union { poly8x8x2_t __i; __neon_int8x8x2_t __o; } __au = { __a }; \
+     (poly8x8_t)__builtin_neon_vtbl2v8qi (__au.__o, (int8x8_t) __b); \
+   })
+
+#define vtbl3_s8(__a, __b) __extension__ \
+  ({ \
+     union { int8x8x3_t __i; __neon_int8x8x3_t __o; } __au = { __a }; \
+     (int8x8_t)__builtin_neon_vtbl3v8qi (__au.__o, __b); \
+   })
+
+#define vtbl3_u8(__a, __b) __extension__ \
+  ({ \
+     union { uint8x8x3_t __i; __neon_int8x8x3_t __o; } __au = { __a }; \
+     (uint8x8_t)__builtin_neon_vtbl3v8qi (__au.__o, (int8x8_t) __b); \
+   })
+
+#define vtbl3_p8(__a, __b) __extension__ \
+  ({ \
+     union { poly8x8x3_t __i; __neon_int8x8x3_t __o; } __au = { __a }; \
+     (poly8x8_t)__builtin_neon_vtbl3v8qi (__au.__o, (int8x8_t) __b); \
+   })
+
+#define vtbl4_s8(__a, __b) __extension__ \
+  ({ \
+     union { int8x8x4_t __i; __neon_int8x8x4_t __o; } __au = { __a }; \
+     (int8x8_t)__builtin_neon_vtbl4v8qi (__au.__o, __b); \
+   })
+
+#define vtbl4_u8(__a, __b) __extension__ \
+  ({ \
+     union { uint8x8x4_t __i; __neon_int8x8x4_t __o; } __au = { __a }; \
+     (uint8x8_t)__builtin_neon_vtbl4v8qi (__au.__o, (int8x8_t) __b); \
+   })
+
+#define vtbl4_p8(__a, __b) __extension__ \
+  ({ \
+     union { poly8x8x4_t __i; __neon_int8x8x4_t __o; } __au = { __a }; \
+     (poly8x8_t)__builtin_neon_vtbl4v8qi (__au.__o, (int8x8_t) __b); \
+   })
+
+#define vtbx1_s8(__a, __b, __c) \
+  (int8x8_t)__builtin_neon_vtbx1v8qi (__a, __b, __c)
+
+#define vtbx1_u8(__a, __b, __c) \
+  (uint8x8_t)__builtin_neon_vtbx1v8qi ((int8x8_t) __a, (int8x8_t) __b, (int8x8_t) __c)
+
+#define vtbx1_p8(__a, __b, __c) \
+  (poly8x8_t)__builtin_neon_vtbx1v8qi ((int8x8_t) __a, (int8x8_t) __b, (int8x8_t) __c)
+
+#define vtbx2_s8(__a, __b, __c) __extension__ \
+  ({ \
+     union { int8x8x2_t __i; __neon_int8x8x2_t __o; } __bu = { __b }; \
+     (int8x8_t)__builtin_neon_vtbx2v8qi (__a, __bu.__o, __c); \
+   })
+
+#define vtbx2_u8(__a, __b, __c) __extension__ \
+  ({ \
+     union { uint8x8x2_t __i; __neon_int8x8x2_t __o; } __bu = { __b }; \
+     (uint8x8_t)__builtin_neon_vtbx2v8qi ((int8x8_t) __a, __bu.__o, (int8x8_t) __c); \
+   })
+
+#define vtbx2_p8(__a, __b, __c) __extension__ \
+  ({ \
+     union { poly8x8x2_t __i; __neon_int8x8x2_t __o; } __bu = { __b }; \
+     (poly8x8_t)__builtin_neon_vtbx2v8qi ((int8x8_t) __a, __bu.__o, (int8x8_t) __c); \
+   })
+
+#define vtbx3_s8(__a, __b, __c) __extension__ \
+  ({ \
+     union { int8x8x3_t __i; __neon_int8x8x3_t __o; } __bu = { __b }; \
+     (int8x8_t)__builtin_neon_vtbx3v8qi (__a, __bu.__o, __c); \
+   })
+
+#define vtbx3_u8(__a, __b, __c) __extension__ \
+  ({ \
+     union { uint8x8x3_t __i; __neon_int8x8x3_t __o; } __bu = { __b }; \
+     (uint8x8_t)__builtin_neon_vtbx3v8qi ((int8x8_t) __a, __bu.__o, (int8x8_t) __c); \
+   })
+
+#define vtbx3_p8(__a, __b, __c) __extension__ \
+  ({ \
+     union { poly8x8x3_t __i; __neon_int8x8x3_t __o; } __bu = { __b }; \
+     (poly8x8_t)__builtin_neon_vtbx3v8qi ((int8x8_t) __a, __bu.__o, (int8x8_t) __c); \
+   })
+
+#define vtbx4_s8(__a, __b, __c) __extension__ \
+  ({ \
+     union { int8x8x4_t __i; __neon_int8x8x4_t __o; } __bu = { __b }; \
+     (int8x8_t)__builtin_neon_vtbx4v8qi (__a, __bu.__o, __c); \
+   })
+
+#define vtbx4_u8(__a, __b, __c) __extension__ \
+  ({ \
+     union { uint8x8x4_t __i; __neon_int8x8x4_t __o; } __bu = { __b }; \
+     (uint8x8_t)__builtin_neon_vtbx4v8qi ((int8x8_t) __a, __bu.__o, (int8x8_t) __c); \
+   })
+
+#define vtbx4_p8(__a, __b, __c) __extension__ \
+  ({ \
+     union { poly8x8x4_t __i; __neon_int8x8x4_t __o; } __bu = { __b }; \
+     (poly8x8_t)__builtin_neon_vtbx4v8qi ((int8x8_t) __a, __bu.__o, (int8x8_t) __c); \
+   })
+
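+/* Lane-wise multiply and multiply-accumulate: the immediate lane index
+   immediately follows the vector it selects from.  Judging from the
+   definitions in this file, the final magic constant passed to the
+   builtins encodes the element flavor: 0 unsigned, 1 signed, 3 signed
+   with rounding (the vqrdmulh forms), 4 polynomial, 5 float.  */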
+#define vmul_lane_s16(__a, __b, __c) \
+  (int16x4_t)__builtin_neon_vmul_lanev4hi (__a, __b, __c, 1)
+
+#define vmul_lane_s32(__a, __b, __c) \
+  (int32x2_t)__builtin_neon_vmul_lanev2si (__a, __b, __c, 1)
+
+#define vmul_lane_f32(__a, __b, __c) \
+  (float32x2_t)__builtin_neon_vmul_lanev2sf (__a, __b, __c, 5)
+
+#define vmul_lane_u16(__a, __b, __c) \
+  (uint16x4_t)__builtin_neon_vmul_lanev4hi ((int16x4_t) __a, (int16x4_t) __b, __c, 0)
+
+#define vmul_lane_u32(__a, __b, __c) \
+  (uint32x2_t)__builtin_neon_vmul_lanev2si ((int32x2_t) __a, (int32x2_t) __b, __c, 0)
+
+#define vmulq_lane_s16(__a, __b, __c) \
+  (int16x8_t)__builtin_neon_vmul_lanev8hi (__a, __b, __c, 1)
+
+#define vmulq_lane_s32(__a, __b, __c) \
+  (int32x4_t)__builtin_neon_vmul_lanev4si (__a, __b, __c, 1)
+
+#define vmulq_lane_f32(__a, __b, __c) \
+  (float32x4_t)__builtin_neon_vmul_lanev4sf (__a, __b, __c, 5)
+
+#define vmulq_lane_u16(__a, __b, __c) \
+  (uint16x8_t)__builtin_neon_vmul_lanev8hi ((int16x8_t) __a, (int16x4_t) __b, __c, 0)
+
+#define vmulq_lane_u32(__a, __b, __c) \
+  (uint32x4_t)__builtin_neon_vmul_lanev4si ((int32x4_t) __a, (int32x2_t) __b, __c, 0)
+
+#define vmla_lane_s16(__a, __b, __c, __d) \
+  (int16x4_t)__builtin_neon_vmla_lanev4hi (__a, __b, __c, __d, 1)
+
+#define vmla_lane_s32(__a, __b, __c, __d) \
+  (int32x2_t)__builtin_neon_vmla_lanev2si (__a, __b, __c, __d, 1)
+
+#define vmla_lane_f32(__a, __b, __c, __d) \
+  (float32x2_t)__builtin_neon_vmla_lanev2sf (__a, __b, __c, __d, 5)
+
+#define vmla_lane_u16(__a, __b, __c, __d) \
+  (uint16x4_t)__builtin_neon_vmla_lanev4hi ((int16x4_t) __a, (int16x4_t) __b, (int16x4_t) __c, __d, 0)
+
+#define vmla_lane_u32(__a, __b, __c, __d) \
+  (uint32x2_t)__builtin_neon_vmla_lanev2si ((int32x2_t) __a, (int32x2_t) __b, (int32x2_t) __c, __d, 0)
+
+#define vmlaq_lane_s16(__a, __b, __c, __d) \
+  (int16x8_t)__builtin_neon_vmla_lanev8hi (__a, __b, __c, __d, 1)
+
+#define vmlaq_lane_s32(__a, __b, __c, __d) \
+  (int32x4_t)__builtin_neon_vmla_lanev4si (__a, __b, __c, __d, 1)
+
+#define vmlaq_lane_f32(__a, __b, __c, __d) \
+  (float32x4_t)__builtin_neon_vmla_lanev4sf (__a, __b, __c, __d, 5)
+
+#define vmlaq_lane_u16(__a, __b, __c, __d) \
+  (uint16x8_t)__builtin_neon_vmla_lanev8hi ((int16x8_t) __a, (int16x8_t) __b, (int16x4_t) __c, __d, 0)
+
+#define vmlaq_lane_u32(__a, __b, __c, __d) \
+  (uint32x4_t)__builtin_neon_vmla_lanev4si ((int32x4_t) __a, (int32x4_t) __b, (int32x2_t) __c, __d, 0)
+
+#define vmlal_lane_s16(__a, __b, __c, __d) \
+  (int32x4_t)__builtin_neon_vmlal_lanev4hi (__a, __b, __c, __d, 1)
+
+#define vmlal_lane_s32(__a, __b, __c, __d) \
+  (int64x2_t)__builtin_neon_vmlal_lanev2si (__a, __b, __c, __d, 1)
+
+#define vmlal_lane_u16(__a, __b, __c, __d) \
+  (uint32x4_t)__builtin_neon_vmlal_lanev4hi ((int32x4_t) __a, (int16x4_t) __b, (int16x4_t) __c, __d, 0)
+
+#define vmlal_lane_u32(__a, __b, __c, __d) \
+  (uint64x2_t)__builtin_neon_vmlal_lanev2si ((int64x2_t) __a, (int32x2_t) __b, (int32x2_t) __c, __d, 0)
+
+#define vqdmlal_lane_s16(__a, __b, __c, __d) \
+  (int32x4_t)__builtin_neon_vqdmlal_lanev4hi (__a, __b, __c, __d, 1)
+
+#define vqdmlal_lane_s32(__a, __b, __c, __d) \
+  (int64x2_t)__builtin_neon_vqdmlal_lanev2si (__a, __b, __c, __d, 1)
+
+#define vmls_lane_s16(__a, __b, __c, __d) \
+  (int16x4_t)__builtin_neon_vmls_lanev4hi (__a, __b, __c, __d, 1)
+
+#define vmls_lane_s32(__a, __b, __c, __d) \
+  (int32x2_t)__builtin_neon_vmls_lanev2si (__a, __b, __c, __d, 1)
+
+#define vmls_lane_f32(__a, __b, __c, __d) \
+  (float32x2_t)__builtin_neon_vmls_lanev2sf (__a, __b, __c, __d, 5)
+
+#define vmls_lane_u16(__a, __b, __c, __d) \
+  (uint16x4_t)__builtin_neon_vmls_lanev4hi ((int16x4_t) __a, (int16x4_t) __b, (int16x4_t) __c, __d, 0)
+
+#define vmls_lane_u32(__a, __b, __c, __d) \
+  (uint32x2_t)__builtin_neon_vmls_lanev2si ((int32x2_t) __a, (int32x2_t) __b, (int32x2_t) __c, __d, 0)
+
+#define vmlsq_lane_s16(__a, __b, __c, __d) \
+  (int16x8_t)__builtin_neon_vmls_lanev8hi (__a, __b, __c, __d, 1)
+
+#define vmlsq_lane_s32(__a, __b, __c, __d) \
+  (int32x4_t)__builtin_neon_vmls_lanev4si (__a, __b, __c, __d, 1)
+
+#define vmlsq_lane_f32(__a, __b, __c, __d) \
+  (float32x4_t)__builtin_neon_vmls_lanev4sf (__a, __b, __c, __d, 5)
+
+#define vmlsq_lane_u16(__a, __b, __c, __d) \
+  (uint16x8_t)__builtin_neon_vmls_lanev8hi ((int16x8_t) __a, (int16x8_t) __b, (int16x4_t) __c, __d, 0)
+
+#define vmlsq_lane_u32(__a, __b, __c, __d) \
+  (uint32x4_t)__builtin_neon_vmls_lanev4si ((int32x4_t) __a, (int32x4_t) __b, (int32x2_t) __c, __d, 0)
+
+#define vmlsl_lane_s16(__a, __b, __c, __d) \
+  (int32x4_t)__builtin_neon_vmlsl_lanev4hi (__a, __b, __c, __d, 1)
+
+#define vmlsl_lane_s32(__a, __b, __c, __d) \
+  (int64x2_t)__builtin_neon_vmlsl_lanev2si (__a, __b, __c, __d, 1)
+
+#define vmlsl_lane_u16(__a, __b, __c, __d) \
+  (uint32x4_t)__builtin_neon_vmlsl_lanev4hi ((int32x4_t) __a, (int16x4_t) __b, (int16x4_t) __c, __d, 0)
+
+#define vmlsl_lane_u32(__a, __b, __c, __d) \
+  (uint64x2_t)__builtin_neon_vmlsl_lanev2si ((int64x2_t) __a, (int32x2_t) __b, (int32x2_t) __c, __d, 0)
+
+#define vqdmlsl_lane_s16(__a, __b, __c, __d) \
+  (int32x4_t)__builtin_neon_vqdmlsl_lanev4hi (__a, __b, __c, __d, 1)
+
+#define vqdmlsl_lane_s32(__a, __b, __c, __d) \
+  (int64x2_t)__builtin_neon_vqdmlsl_lanev2si (__a, __b, __c, __d, 1)
+
+#define vmull_lane_s16(__a, __b, __c) \
+  (int32x4_t)__builtin_neon_vmull_lanev4hi (__a, __b, __c, 1)
+
+#define vmull_lane_s32(__a, __b, __c) \
+  (int64x2_t)__builtin_neon_vmull_lanev2si (__a, __b, __c, 1)
+
+#define vmull_lane_u16(__a, __b, __c) \
+  (uint32x4_t)__builtin_neon_vmull_lanev4hi ((int16x4_t) __a, (int16x4_t) __b, __c, 0)
+
+#define vmull_lane_u32(__a, __b, __c) \
+  (uint64x2_t)__builtin_neon_vmull_lanev2si ((int32x2_t) __a, (int32x2_t) __b, __c, 0)
+
+#define vqdmull_lane_s16(__a, __b, __c) \
+  (int32x4_t)__builtin_neon_vqdmull_lanev4hi (__a, __b, __c, 1)
+
+#define vqdmull_lane_s32(__a, __b, __c) \
+  (int64x2_t)__builtin_neon_vqdmull_lanev2si (__a, __b, __c, 1)
+
+#define vqdmulhq_lane_s16(__a, __b, __c) \
+  (int16x8_t)__builtin_neon_vqdmulh_lanev8hi (__a, __b, __c, 1)
+
+#define vqdmulhq_lane_s32(__a, __b, __c) \
+  (int32x4_t)__builtin_neon_vqdmulh_lanev4si (__a, __b, __c, 1)
+
+#define vqdmulh_lane_s16(__a, __b, __c) \
+  (int16x4_t)__builtin_neon_vqdmulh_lanev4hi (__a, __b, __c, 1)
+
+#define vqdmulh_lane_s32(__a, __b, __c) \
+  (int32x2_t)__builtin_neon_vqdmulh_lanev2si (__a, __b, __c, 1)
+
+#define vqrdmulhq_lane_s16(__a, __b, __c) \
+  (int16x8_t)__builtin_neon_vqdmulh_lanev8hi (__a, __b, __c, 3)
+
+#define vqrdmulhq_lane_s32(__a, __b, __c) \
+  (int32x4_t)__builtin_neon_vqdmulh_lanev4si (__a, __b, __c, 3)
+
+#define vqrdmulh_lane_s16(__a, __b, __c) \
+  (int16x4_t)__builtin_neon_vqdmulh_lanev4hi (__a, __b, __c, 3)
+
+#define vqrdmulh_lane_s32(__a, __b, __c) \
+  (int32x2_t)__builtin_neon_vqdmulh_lanev2si (__a, __b, __c, 3)
+
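+/* The _n forms multiply (or multiply-accumulate) by a scalar, which is
+   cast to the builtin element type (__builtin_neon_hi or
+   __builtin_neon_si) before being handed to the builtin.  */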
+#define vmul_n_s16(__a, __b) \
+  (int16x4_t)__builtin_neon_vmul_nv4hi (__a, (__builtin_neon_hi) __b, 1)
+
+#define vmul_n_s32(__a, __b) \
+  (int32x2_t)__builtin_neon_vmul_nv2si (__a, (__builtin_neon_si) __b, 1)
+
+#define vmul_n_f32(__a, __b) \
+  (float32x2_t)__builtin_neon_vmul_nv2sf (__a, __b, 5)
+
+#define vmul_n_u16(__a, __b) \
+  (uint16x4_t)__builtin_neon_vmul_nv4hi ((int16x4_t) __a, (__builtin_neon_hi) __b, 0)
+
+#define vmul_n_u32(__a, __b) \
+  (uint32x2_t)__builtin_neon_vmul_nv2si ((int32x2_t) __a, (__builtin_neon_si) __b, 0)
+
+#define vmulq_n_s16(__a, __b) \
+  (int16x8_t)__builtin_neon_vmul_nv8hi (__a, (__builtin_neon_hi) __b, 1)
+
+#define vmulq_n_s32(__a, __b) \
+  (int32x4_t)__builtin_neon_vmul_nv4si (__a, (__builtin_neon_si) __b, 1)
+
+#define vmulq_n_f32(__a, __b) \
+  (float32x4_t)__builtin_neon_vmul_nv4sf (__a, __b, 5)
+
+#define vmulq_n_u16(__a, __b) \
+  (uint16x8_t)__builtin_neon_vmul_nv8hi ((int16x8_t) __a, (__builtin_neon_hi) __b, 0)
+
+#define vmulq_n_u32(__a, __b) \
+  (uint32x4_t)__builtin_neon_vmul_nv4si ((int32x4_t) __a, (__builtin_neon_si) __b, 0)
+
+#define vmull_n_s16(__a, __b) \
+  (int32x4_t)__builtin_neon_vmull_nv4hi (__a, (__builtin_neon_hi) __b, 1)
+
+#define vmull_n_s32(__a, __b) \
+  (int64x2_t)__builtin_neon_vmull_nv2si (__a, (__builtin_neon_si) __b, 1)
+
+#define vmull_n_u16(__a, __b) \
+  (uint32x4_t)__builtin_neon_vmull_nv4hi ((int16x4_t) __a, (__builtin_neon_hi) __b, 0)
+
+#define vmull_n_u32(__a, __b) \
+  (uint64x2_t)__builtin_neon_vmull_nv2si ((int32x2_t) __a, (__builtin_neon_si) __b, 0)
+
+#define vqdmull_n_s16(__a, __b) \
+  (int32x4_t)__builtin_neon_vqdmull_nv4hi (__a, (__builtin_neon_hi) __b, 1)
+
+#define vqdmull_n_s32(__a, __b) \
+  (int64x2_t)__builtin_neon_vqdmull_nv2si (__a, (__builtin_neon_si) __b, 1)
+
+#define vqdmulhq_n_s16(__a, __b) \
+  (int16x8_t)__builtin_neon_vqdmulh_nv8hi (__a, (__builtin_neon_hi) __b, 1)
+
+#define vqdmulhq_n_s32(__a, __b) \
+  (int32x4_t)__builtin_neon_vqdmulh_nv4si (__a, (__builtin_neon_si) __b, 1)
+
+#define vqdmulh_n_s16(__a, __b) \
+  (int16x4_t)__builtin_neon_vqdmulh_nv4hi (__a, (__builtin_neon_hi) __b, 1)
+
+#define vqdmulh_n_s32(__a, __b) \
+  (int32x2_t)__builtin_neon_vqdmulh_nv2si (__a, (__builtin_neon_si) __b, 1)
+
+#define vqrdmulhq_n_s16(__a, __b) \
+  (int16x8_t)__builtin_neon_vqdmulh_nv8hi (__a, (__builtin_neon_hi) __b, 3)
+
+#define vqrdmulhq_n_s32(__a, __b) \
+  (int32x4_t)__builtin_neon_vqdmulh_nv4si (__a, (__builtin_neon_si) __b, 3)
+
+#define vqrdmulh_n_s16(__a, __b) \
+  (int16x4_t)__builtin_neon_vqdmulh_nv4hi (__a, (__builtin_neon_hi) __b, 3)
+
+#define vqrdmulh_n_s32(__a, __b) \
+  (int32x2_t)__builtin_neon_vqdmulh_nv2si (__a, (__builtin_neon_si) __b, 3)
+
+#define vmla_n_s16(__a, __b, __c) \
+  (int16x4_t)__builtin_neon_vmla_nv4hi (__a, __b, (__builtin_neon_hi) __c, 1)
+
+#define vmla_n_s32(__a, __b, __c) \
+  (int32x2_t)__builtin_neon_vmla_nv2si (__a, __b, (__builtin_neon_si) __c, 1)
+
+#define vmla_n_f32(__a, __b, __c) \
+  (float32x2_t)__builtin_neon_vmla_nv2sf (__a, __b, __c, 5)
+
+#define vmla_n_u16(__a, __b, __c) \
+  (uint16x4_t)__builtin_neon_vmla_nv4hi ((int16x4_t) __a, (int16x4_t) __b, (__builtin_neon_hi) __c, 0)
+
+#define vmla_n_u32(__a, __b, __c) \
+  (uint32x2_t)__builtin_neon_vmla_nv2si ((int32x2_t) __a, (int32x2_t) __b, (__builtin_neon_si) __c, 0)
+
+#define vmlaq_n_s16(__a, __b, __c) \
+  (int16x8_t)__builtin_neon_vmla_nv8hi (__a, __b, (__builtin_neon_hi) __c, 1)
+
+#define vmlaq_n_s32(__a, __b, __c) \
+  (int32x4_t)__builtin_neon_vmla_nv4si (__a, __b, (__builtin_neon_si) __c, 1)
+
+#define vmlaq_n_f32(__a, __b, __c) \
+  (float32x4_t)__builtin_neon_vmla_nv4sf (__a, __b, __c, 5)
+
+#define vmlaq_n_u16(__a, __b, __c) \
+  (uint16x8_t)__builtin_neon_vmla_nv8hi ((int16x8_t) __a, (int16x8_t) __b, (__builtin_neon_hi) __c, 0)
+
+#define vmlaq_n_u32(__a, __b, __c) \
+  (uint32x4_t)__builtin_neon_vmla_nv4si ((int32x4_t) __a, (int32x4_t) __b, (__builtin_neon_si) __c, 0)
+
+#define vmlal_n_s16(__a, __b, __c) \
+  (int32x4_t)__builtin_neon_vmlal_nv4hi (__a, __b, (__builtin_neon_hi) __c, 1)
+
+#define vmlal_n_s32(__a, __b, __c) \
+  (int64x2_t)__builtin_neon_vmlal_nv2si (__a, __b, (__builtin_neon_si) __c, 1)
+
+#define vmlal_n_u16(__a, __b, __c) \
+  (uint32x4_t)__builtin_neon_vmlal_nv4hi ((int32x4_t) __a, (int16x4_t) __b, (__builtin_neon_hi) __c, 0)
+
+#define vmlal_n_u32(__a, __b, __c) \
+  (uint64x2_t)__builtin_neon_vmlal_nv2si ((int64x2_t) __a, (int32x2_t) __b, (__builtin_neon_si) __c, 0)
+
+#define vqdmlal_n_s16(__a, __b, __c) \
+  (int32x4_t)__builtin_neon_vqdmlal_nv4hi (__a, __b, (__builtin_neon_hi) __c, 1)
+
+#define vqdmlal_n_s32(__a, __b, __c) \
+  (int64x2_t)__builtin_neon_vqdmlal_nv2si (__a, __b, (__builtin_neon_si) __c, 1)
+
+#define vmls_n_s16(__a, __b, __c) \
+  (int16x4_t)__builtin_neon_vmls_nv4hi (__a, __b, (__builtin_neon_hi) __c, 1)
+
+#define vmls_n_s32(__a, __b, __c) \
+  (int32x2_t)__builtin_neon_vmls_nv2si (__a, __b, (__builtin_neon_si) __c, 1)
+
+#define vmls_n_f32(__a, __b, __c) \
+  (float32x2_t)__builtin_neon_vmls_nv2sf (__a, __b, __c, 5)
+
+#define vmls_n_u16(__a, __b, __c) \
+  (uint16x4_t)__builtin_neon_vmls_nv4hi ((int16x4_t) __a, (int16x4_t) __b, (__builtin_neon_hi) __c, 0)
+
+#define vmls_n_u32(__a, __b, __c) \
+  (uint32x2_t)__builtin_neon_vmls_nv2si ((int32x2_t) __a, (int32x2_t) __b, (__builtin_neon_si) __c, 0)
+
+#define vmlsq_n_s16(__a, __b, __c) \
+  (int16x8_t)__builtin_neon_vmls_nv8hi (__a, __b, (__builtin_neon_hi) __c, 1)
+
+#define vmlsq_n_s32(__a, __b, __c) \
+  (int32x4_t)__builtin_neon_vmls_nv4si (__a, __b, (__builtin_neon_si) __c, 1)
+
+#define vmlsq_n_f32(__a, __b, __c) \
+  (float32x4_t)__builtin_neon_vmls_nv4sf (__a, __b, __c, 5)
+
+#define vmlsq_n_u16(__a, __b, __c) \
+  (uint16x8_t)__builtin_neon_vmls_nv8hi ((int16x8_t) __a, (int16x8_t) __b, (__builtin_neon_hi) __c, 0)
+
+#define vmlsq_n_u32(__a, __b, __c) \
+  (uint32x4_t)__builtin_neon_vmls_nv4si ((int32x4_t) __a, (int32x4_t) __b, (__builtin_neon_si) __c, 0)
+
+#define vmlsl_n_s16(__a, __b, __c) \
+  (int32x4_t)__builtin_neon_vmlsl_nv4hi (__a, __b, (__builtin_neon_hi) __c, 1)
+
+#define vmlsl_n_s32(__a, __b, __c) \
+  (int64x2_t)__builtin_neon_vmlsl_nv2si (__a, __b, (__builtin_neon_si) __c, 1)
+
+#define vmlsl_n_u16(__a, __b, __c) \
+  (uint32x4_t)__builtin_neon_vmlsl_nv4hi ((int32x4_t) __a, (int16x4_t) __b, (__builtin_neon_hi) __c, 0)
+
+#define vmlsl_n_u32(__a, __b, __c) \
+  (uint64x2_t)__builtin_neon_vmlsl_nv2si ((int64x2_t) __a, (int32x2_t) __b, (__builtin_neon_si) __c, 0)
+
+#define vqdmlsl_n_s16(__a, __b, __c) \
+  (int32x4_t)__builtin_neon_vqdmlsl_nv4hi (__a, __b, (__builtin_neon_hi) __c, 1)
+
+#define vqdmlsl_n_s32(__a, __b, __c) \
+  (int64x2_t)__builtin_neon_vqdmlsl_nv2si (__a, __b, (__builtin_neon_si) __c, 1)
+
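+/* vext concatenates __a and __b and extracts a vector of consecutive
+   elements starting at immediate index __c, e.g.
+     int8x8_t r = vext_s8 (a, b, 3);
+   yields elements 3..7 of a followed by elements 0..2 of b.  */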
+#define vext_s8(__a, __b, __c) \
+  (int8x8_t)__builtin_neon_vextv8qi (__a, __b, __c)
+
+#define vext_s16(__a, __b, __c) \
+  (int16x4_t)__builtin_neon_vextv4hi (__a, __b, __c)
+
+#define vext_s32(__a, __b, __c) \
+  (int32x2_t)__builtin_neon_vextv2si (__a, __b, __c)
+
+#define vext_s64(__a, __b, __c) \
+  (int64x1_t)__builtin_neon_vextv1di (__a, __b, __c)
+
+#define vext_f32(__a, __b, __c) \
+  (float32x2_t)__builtin_neon_vextv2sf (__a, __b, __c)
+
+#define vext_u8(__a, __b, __c) \
+  (uint8x8_t)__builtin_neon_vextv8qi ((int8x8_t) __a, (int8x8_t) __b, __c)
+
+#define vext_u16(__a, __b, __c) \
+  (uint16x4_t)__builtin_neon_vextv4hi ((int16x4_t) __a, (int16x4_t) __b, __c)
+
+#define vext_u32(__a, __b, __c) \
+  (uint32x2_t)__builtin_neon_vextv2si ((int32x2_t) __a, (int32x2_t) __b, __c)
+
+#define vext_u64(__a, __b, __c) \
+  (uint64x1_t)__builtin_neon_vextv1di ((int64x1_t) __a, (int64x1_t) __b, __c)
+
+#define vext_p8(__a, __b, __c) \
+  (poly8x8_t)__builtin_neon_vextv8qi ((int8x8_t) __a, (int8x8_t) __b, __c)
+
+#define vext_p16(__a, __b, __c) \
+  (poly16x4_t)__builtin_neon_vextv4hi ((int16x4_t) __a, (int16x4_t) __b, __c)
+
+#define vextq_s8(__a, __b, __c) \
+  (int8x16_t)__builtin_neon_vextv16qi (__a, __b, __c)
+
+#define vextq_s16(__a, __b, __c) \
+  (int16x8_t)__builtin_neon_vextv8hi (__a, __b, __c)
+
+#define vextq_s32(__a, __b, __c) \
+  (int32x4_t)__builtin_neon_vextv4si (__a, __b, __c)
+
+#define vextq_s64(__a, __b, __c) \
+  (int64x2_t)__builtin_neon_vextv2di (__a, __b, __c)
+
+#define vextq_f32(__a, __b, __c) \
+  (float32x4_t)__builtin_neon_vextv4sf (__a, __b, __c)
+
+#define vextq_u8(__a, __b, __c) \
+  (uint8x16_t)__builtin_neon_vextv16qi ((int8x16_t) __a, (int8x16_t) __b, __c)
+
+#define vextq_u16(__a, __b, __c) \
+  (uint16x8_t)__builtin_neon_vextv8hi ((int16x8_t) __a, (int16x8_t) __b, __c)
+
+#define vextq_u32(__a, __b, __c) \
+  (uint32x4_t)__builtin_neon_vextv4si ((int32x4_t) __a, (int32x4_t) __b, __c)
+
+#define vextq_u64(__a, __b, __c) \
+  (uint64x2_t)__builtin_neon_vextv2di ((int64x2_t) __a, (int64x2_t) __b, __c)
+
+#define vextq_p8(__a, __b, __c) \
+  (poly8x16_t)__builtin_neon_vextv16qi ((int8x16_t) __a, (int8x16_t) __b, __c)
+
+#define vextq_p16(__a, __b, __c) \
+  (poly16x8_t)__builtin_neon_vextv8hi ((int16x8_t) __a, (int16x8_t) __b, __c)
+
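+/* vrev64, vrev32 and vrev16 reverse the order of the elements within
+   each 64-, 32- or 16-bit group of the vector.  */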
+#define vrev64_s8(__a) \
+  (int8x8_t)__builtin_neon_vrev64v8qi (__a, 1)
+
+#define vrev64_s16(__a) \
+  (int16x4_t)__builtin_neon_vrev64v4hi (__a, 1)
+
+#define vrev64_s32(__a) \
+  (int32x2_t)__builtin_neon_vrev64v2si (__a, 1)
+
+#define vrev64_f32(__a) \
+  (float32x2_t)__builtin_neon_vrev64v2sf (__a, 5)
+
+#define vrev64_u8(__a) \
+  (uint8x8_t)__builtin_neon_vrev64v8qi ((int8x8_t) __a, 0)
+
+#define vrev64_u16(__a) \
+  (uint16x4_t)__builtin_neon_vrev64v4hi ((int16x4_t) __a, 0)
+
+#define vrev64_u32(__a) \
+  (uint32x2_t)__builtin_neon_vrev64v2si ((int32x2_t) __a, 0)
+
+#define vrev64_p8(__a) \
+  (poly8x8_t)__builtin_neon_vrev64v8qi ((int8x8_t) __a, 4)
+
+#define vrev64_p16(__a) \
+  (poly16x4_t)__builtin_neon_vrev64v4hi ((int16x4_t) __a, 4)
+
+#define vrev64q_s8(__a) \
+  (int8x16_t)__builtin_neon_vrev64v16qi (__a, 1)
+
+#define vrev64q_s16(__a) \
+  (int16x8_t)__builtin_neon_vrev64v8hi (__a, 1)
+
+#define vrev64q_s32(__a) \
+  (int32x4_t)__builtin_neon_vrev64v4si (__a, 1)
+
+#define vrev64q_f32(__a) \
+  (float32x4_t)__builtin_neon_vrev64v4sf (__a, 5)
+
+#define vrev64q_u8(__a) \
+  (uint8x16_t)__builtin_neon_vrev64v16qi ((int8x16_t) __a, 0)
+
+#define vrev64q_u16(__a) \
+  (uint16x8_t)__builtin_neon_vrev64v8hi ((int16x8_t) __a, 0)
+
+#define vrev64q_u32(__a) \
+  (uint32x4_t)__builtin_neon_vrev64v4si ((int32x4_t) __a, 0)
+
+#define vrev64q_p8(__a) \
+  (poly8x16_t)__builtin_neon_vrev64v16qi ((int8x16_t) __a, 4)
+
+#define vrev64q_p16(__a) \
+  (poly16x8_t)__builtin_neon_vrev64v8hi ((int16x8_t) __a, 4)
+
+#define vrev32_s8(__a) \
+  (int8x8_t)__builtin_neon_vrev32v8qi (__a, 1)
+
+#define vrev32_s16(__a) \
+  (int16x4_t)__builtin_neon_vrev32v4hi (__a, 1)
+
+#define vrev32_u8(__a) \
+  (uint8x8_t)__builtin_neon_vrev32v8qi ((int8x8_t) __a, 0)
+
+#define vrev32_u16(__a) \
+  (uint16x4_t)__builtin_neon_vrev32v4hi ((int16x4_t) __a, 0)
+
+#define vrev32_p8(__a) \
+  (poly8x8_t)__builtin_neon_vrev32v8qi ((int8x8_t) __a, 4)
+
+#define vrev32_p16(__a) \
+  (poly16x4_t)__builtin_neon_vrev32v4hi ((int16x4_t) __a, 4)
+
+#define vrev32q_s8(__a) \
+  (int8x16_t)__builtin_neon_vrev32v16qi (__a, 1)
+
+#define vrev32q_s16(__a) \
+  (int16x8_t)__builtin_neon_vrev32v8hi (__a, 1)
+
+#define vrev32q_u8(__a) \
+  (uint8x16_t)__builtin_neon_vrev32v16qi ((int8x16_t) __a, 0)
+
+#define vrev32q_u16(__a) \
+  (uint16x8_t)__builtin_neon_vrev32v8hi ((int16x8_t) __a, 0)
+
+#define vrev32q_p8(__a) \
+  (poly8x16_t)__builtin_neon_vrev32v16qi ((int8x16_t) __a, 4)
+
+#define vrev32q_p16(__a) \
+  (poly16x8_t)__builtin_neon_vrev32v8hi ((int16x8_t) __a, 4)
+
+#define vrev16_s8(__a) \
+  (int8x8_t)__builtin_neon_vrev16v8qi (__a, 1)
+
+#define vrev16_u8(__a) \
+  (uint8x8_t)__builtin_neon_vrev16v8qi ((int8x8_t) __a, 0)
+
+#define vrev16_p8(__a) \
+  (poly8x8_t)__builtin_neon_vrev16v8qi ((int8x8_t) __a, 4)
+
+#define vrev16q_s8(__a) \
+  (int8x16_t)__builtin_neon_vrev16v16qi (__a, 1)
+
+#define vrev16q_u8(__a) \
+  (uint8x16_t)__builtin_neon_vrev16v16qi ((int8x16_t) __a, 0)
+
+#define vrev16q_p8(__a) \
+  (poly8x16_t)__builtin_neon_vrev16v16qi ((int8x16_t) __a, 4)
+
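+/* Bitwise select: each result bit is taken from __b where the
+   corresponding bit of the mask __a is set, and from __c otherwise.
+   Whatever its public type, the mask is reinterpreted as a signed
+   integer vector of matching width for the builtin.  */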
+#define vbsl_s8(__a, __b, __c) \
+  (int8x8_t)__builtin_neon_vbslv8qi ((int8x8_t) __a, __b, __c)
+
+#define vbsl_s16(__a, __b, __c) \
+  (int16x4_t)__builtin_neon_vbslv4hi ((int16x4_t) __a, __b, __c)
+
+#define vbsl_s32(__a, __b, __c) \
+  (int32x2_t)__builtin_neon_vbslv2si ((int32x2_t) __a, __b, __c)
+
+#define vbsl_s64(__a, __b, __c) \
+  (int64x1_t)__builtin_neon_vbslv1di ((int64x1_t) __a, __b, __c)
+
+#define vbsl_f32(__a, __b, __c) \
+  (float32x2_t)__builtin_neon_vbslv2sf ((int32x2_t) __a, __b, __c)
+
+#define vbsl_u8(__a, __b, __c) \
+  (uint8x8_t)__builtin_neon_vbslv8qi ((int8x8_t) __a, (int8x8_t) __b, (int8x8_t) __c)
+
+#define vbsl_u16(__a, __b, __c) \
+  (uint16x4_t)__builtin_neon_vbslv4hi ((int16x4_t) __a, (int16x4_t) __b, (int16x4_t) __c)
+
+#define vbsl_u32(__a, __b, __c) \
+  (uint32x2_t)__builtin_neon_vbslv2si ((int32x2_t) __a, (int32x2_t) __b, (int32x2_t) __c)
+
+#define vbsl_u64(__a, __b, __c) \
+  (uint64x1_t)__builtin_neon_vbslv1di ((int64x1_t) __a, (int64x1_t) __b, (int64x1_t) __c)
+
+#define vbsl_p8(__a, __b, __c) \
+  (poly8x8_t)__builtin_neon_vbslv8qi ((int8x8_t) __a, (int8x8_t) __b, (int8x8_t) __c)
+
+#define vbsl_p16(__a, __b, __c) \
+  (poly16x4_t)__builtin_neon_vbslv4hi ((int16x4_t) __a, (int16x4_t) __b, (int16x4_t) __c)
+
+#define vbslq_s8(__a, __b, __c) \
+  (int8x16_t)__builtin_neon_vbslv16qi ((int8x16_t) __a, __b, __c)
+
+#define vbslq_s16(__a, __b, __c) \
+  (int16x8_t)__builtin_neon_vbslv8hi ((int16x8_t) __a, __b, __c)
+
+#define vbslq_s32(__a, __b, __c) \
+  (int32x4_t)__builtin_neon_vbslv4si ((int32x4_t) __a, __b, __c)
+
+#define vbslq_s64(__a, __b, __c) \
+  (int64x2_t)__builtin_neon_vbslv2di ((int64x2_t) __a, __b, __c)
+
+#define vbslq_f32(__a, __b, __c) \
+  (float32x4_t)__builtin_neon_vbslv4sf ((int32x4_t) __a, __b, __c)
+
+#define vbslq_u8(__a, __b, __c) \
+  (uint8x16_t)__builtin_neon_vbslv16qi ((int8x16_t) __a, (int8x16_t) __b, (int8x16_t) __c)
+
+#define vbslq_u16(__a, __b, __c) \
+  (uint16x8_t)__builtin_neon_vbslv8hi ((int16x8_t) __a, (int16x8_t) __b, (int16x8_t) __c)
+
+#define vbslq_u32(__a, __b, __c) \
+  (uint32x4_t)__builtin_neon_vbslv4si ((int32x4_t) __a, (int32x4_t) __b, (int32x4_t) __c)
+
+#define vbslq_u64(__a, __b, __c) \
+  (uint64x2_t)__builtin_neon_vbslv2di ((int64x2_t) __a, (int64x2_t) __b, (int64x2_t) __c)
+
+#define vbslq_p8(__a, __b, __c) \
+  (poly8x16_t)__builtin_neon_vbslv16qi ((int8x16_t) __a, (int8x16_t) __b, (int8x16_t) __c)
+
+#define vbslq_p16(__a, __b, __c) \
+  (poly16x8_t)__builtin_neon_vbslv8hi ((int16x8_t) __a, (int16x8_t) __b, (int16x8_t) __c)
+
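+/* vtrn transposes pairs of elements from __a and __b, returning both
+   result vectors as an x2_t struct.  The builtin returns the internal
+   __neon_* aggregate, which is converted to the public struct type
+   through a union.  */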
+#define vtrn_s8(__a, __b) __extension__ \
+  ({ \
+     union { int8x8x2_t __i; __neon_int8x8x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vtrnv8qi (__a, __b); \
+     __rv.__i; \
+   })
+
+#define vtrn_s16(__a, __b) __extension__ \
+  ({ \
+     union { int16x4x2_t __i; __neon_int16x4x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vtrnv4hi (__a, __b); \
+     __rv.__i; \
+   })
+
+#define vtrn_s32(__a, __b) __extension__ \
+  ({ \
+     union { int32x2x2_t __i; __neon_int32x2x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vtrnv2si (__a, __b); \
+     __rv.__i; \
+   })
+
+#define vtrn_f32(__a, __b) __extension__ \
+  ({ \
+     union { float32x2x2_t __i; __neon_float32x2x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vtrnv2sf (__a, __b); \
+     __rv.__i; \
+   })
+
+#define vtrn_u8(__a, __b) __extension__ \
+  ({ \
+     union { uint8x8x2_t __i; __neon_int8x8x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vtrnv8qi ((int8x8_t) __a, (int8x8_t) __b); \
+     __rv.__i; \
+   })
+
+#define vtrn_u16(__a, __b) __extension__ \
+  ({ \
+     union { uint16x4x2_t __i; __neon_int16x4x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vtrnv4hi ((int16x4_t) __a, (int16x4_t) __b); \
+     __rv.__i; \
+   })
+
+#define vtrn_u32(__a, __b) __extension__ \
+  ({ \
+     union { uint32x2x2_t __i; __neon_int32x2x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vtrnv2si ((int32x2_t) __a, (int32x2_t) __b); \
+     __rv.__i; \
+   })
+
+#define vtrn_p8(__a, __b) __extension__ \
+  ({ \
+     union { poly8x8x2_t __i; __neon_int8x8x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vtrnv8qi ((int8x8_t) __a, (int8x8_t) __b); \
+     __rv.__i; \
+   })
+
+#define vtrn_p16(__a, __b) __extension__ \
+  ({ \
+     union { poly16x4x2_t __i; __neon_int16x4x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vtrnv4hi ((int16x4_t) __a, (int16x4_t) __b); \
+     __rv.__i; \
+   })
+
+#define vtrnq_s8(__a, __b) __extension__ \
+  ({ \
+     union { int8x16x2_t __i; __neon_int8x16x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vtrnv16qi (__a, __b); \
+     __rv.__i; \
+   })
+
+#define vtrnq_s16(__a, __b) __extension__ \
+  ({ \
+     union { int16x8x2_t __i; __neon_int16x8x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vtrnv8hi (__a, __b); \
+     __rv.__i; \
+   })
+
+#define vtrnq_s32(__a, __b) __extension__ \
+  ({ \
+     union { int32x4x2_t __i; __neon_int32x4x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vtrnv4si (__a, __b); \
+     __rv.__i; \
+   })
+
+#define vtrnq_f32(__a, __b) __extension__ \
+  ({ \
+     union { float32x4x2_t __i; __neon_float32x4x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vtrnv4sf (__a, __b); \
+     __rv.__i; \
+   })
+
+#define vtrnq_u8(__a, __b) __extension__ \
+  ({ \
+     union { uint8x16x2_t __i; __neon_int8x16x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vtrnv16qi ((int8x16_t) __a, (int8x16_t) __b); \
+     __rv.__i; \
+   })
+
+#define vtrnq_u16(__a, __b) __extension__ \
+  ({ \
+     union { uint16x8x2_t __i; __neon_int16x8x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vtrnv8hi ((int16x8_t) __a, (int16x8_t) __b); \
+     __rv.__i; \
+   })
+
+#define vtrnq_u32(__a, __b) __extension__ \
+  ({ \
+     union { uint32x4x2_t __i; __neon_int32x4x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vtrnv4si ((int32x4_t) __a, (int32x4_t) __b); \
+     __rv.__i; \
+   })
+
+#define vtrnq_p8(__a, __b) __extension__ \
+  ({ \
+     union { poly8x16x2_t __i; __neon_int8x16x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vtrnv16qi ((int8x16_t) __a, (int8x16_t) __b); \
+     __rv.__i; \
+   })
+
+#define vtrnq_p16(__a, __b) __extension__ \
+  ({ \
+     union { poly16x8x2_t __i; __neon_int16x8x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vtrnv8hi ((int16x8_t) __a, (int16x8_t) __b); \
+     __rv.__i; \
+   })
+
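+/* vzip interleaves the elements of __a and __b, returning the two
+   halves of the interleaved sequence as an x2_t struct.  */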
+#define vzip_s8(__a, __b) __extension__ \
+  ({ \
+     union { int8x8x2_t __i; __neon_int8x8x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vzipv8qi (__a, __b); \
+     __rv.__i; \
+   })
+
+#define vzip_s16(__a, __b) __extension__ \
+  ({ \
+     union { int16x4x2_t __i; __neon_int16x4x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vzipv4hi (__a, __b); \
+     __rv.__i; \
+   })
+
+#define vzip_s32(__a, __b) __extension__ \
+  ({ \
+     union { int32x2x2_t __i; __neon_int32x2x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vzipv2si (__a, __b); \
+     __rv.__i; \
+   })
+
+#define vzip_f32(__a, __b) __extension__ \
+  ({ \
+     union { float32x2x2_t __i; __neon_float32x2x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vzipv2sf (__a, __b); \
+     __rv.__i; \
+   })
+
+#define vzip_u8(__a, __b) __extension__ \
+  ({ \
+     union { uint8x8x2_t __i; __neon_int8x8x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vzipv8qi ((int8x8_t) __a, (int8x8_t) __b); \
+     __rv.__i; \
+   })
+
+#define vzip_u16(__a, __b) __extension__ \
+  ({ \
+     union { uint16x4x2_t __i; __neon_int16x4x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vzipv4hi ((int16x4_t) __a, (int16x4_t) __b); \
+     __rv.__i; \
+   })
+
+#define vzip_u32(__a, __b) __extension__ \
+  ({ \
+     union { uint32x2x2_t __i; __neon_int32x2x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vzipv2si ((int32x2_t) __a, (int32x2_t) __b); \
+     __rv.__i; \
+   })
+
+#define vzip_p8(__a, __b) __extension__ \
+  ({ \
+     union { poly8x8x2_t __i; __neon_int8x8x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vzipv8qi ((int8x8_t) __a, (int8x8_t) __b); \
+     __rv.__i; \
+   })
+
+#define vzip_p16(__a, __b) __extension__ \
+  ({ \
+     union { poly16x4x2_t __i; __neon_int16x4x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vzipv4hi ((int16x4_t) __a, (int16x4_t) __b); \
+     __rv.__i; \
+   })
+
+#define vzipq_s8(__a, __b) __extension__ \
+  ({ \
+     union { int8x16x2_t __i; __neon_int8x16x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vzipv16qi (__a, __b); \
+     __rv.__i; \
+   })
+
+#define vzipq_s16(__a, __b) __extension__ \
+  ({ \
+     union { int16x8x2_t __i; __neon_int16x8x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vzipv8hi (__a, __b); \
+     __rv.__i; \
+   })
+
+#define vzipq_s32(__a, __b) __extension__ \
+  ({ \
+     union { int32x4x2_t __i; __neon_int32x4x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vzipv4si (__a, __b); \
+     __rv.__i; \
+   })
+
+#define vzipq_f32(__a, __b) __extension__ \
+  ({ \
+     union { float32x4x2_t __i; __neon_float32x4x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vzipv4sf (__a, __b); \
+     __rv.__i; \
+   })
+
+#define vzipq_u8(__a, __b) __extension__ \
+  ({ \
+     union { uint8x16x2_t __i; __neon_int8x16x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vzipv16qi ((int8x16_t) __a, (int8x16_t) __b); \
+     __rv.__i; \
+   })
+
+#define vzipq_u16(__a, __b) __extension__ \
+  ({ \
+     union { uint16x8x2_t __i; __neon_int16x8x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vzipv8hi ((int16x8_t) __a, (int16x8_t) __b); \
+     __rv.__i; \
+   })
+
+#define vzipq_u32(__a, __b) __extension__ \
+  ({ \
+     union { uint32x4x2_t __i; __neon_int32x4x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vzipv4si ((int32x4_t) __a, (int32x4_t) __b); \
+     __rv.__i; \
+   })
+
+#define vzipq_p8(__a, __b) __extension__ \
+  ({ \
+     union { poly8x16x2_t __i; __neon_int8x16x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vzipv16qi ((int8x16_t) __a, (int8x16_t) __b); \
+     __rv.__i; \
+   })
+
+#define vzipq_p16(__a, __b) __extension__ \
+  ({ \
+     union { poly16x8x2_t __i; __neon_int16x8x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vzipv8hi ((int16x8_t) __a, (int16x8_t) __b); \
+     __rv.__i; \
+   })
+
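+/* vuzp is the inverse of vzip: it de-interleaves __a and __b, gathering
+   the even-indexed elements into the first result vector and the
+   odd-indexed elements into the second.  */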
+#define vuzp_s8(__a, __b) __extension__ \
+  ({ \
+     union { int8x8x2_t __i; __neon_int8x8x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vuzpv8qi (__a, __b); \
+     __rv.__i; \
+   })
+
+#define vuzp_s16(__a, __b) __extension__ \
+  ({ \
+     union { int16x4x2_t __i; __neon_int16x4x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vuzpv4hi (__a, __b); \
+     __rv.__i; \
+   })
+
+#define vuzp_s32(__a, __b) __extension__ \
+  ({ \
+     union { int32x2x2_t __i; __neon_int32x2x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vuzpv2si (__a, __b); \
+     __rv.__i; \
+   })
+
+#define vuzp_f32(__a, __b) __extension__ \
+  ({ \
+     union { float32x2x2_t __i; __neon_float32x2x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vuzpv2sf (__a, __b); \
+     __rv.__i; \
+   })
+
+#define vuzp_u8(__a, __b) __extension__ \
+  ({ \
+     union { uint8x8x2_t __i; __neon_int8x8x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vuzpv8qi ((int8x8_t) __a, (int8x8_t) __b); \
+     __rv.__i; \
+   })
+
+#define vuzp_u16(__a, __b) __extension__ \
+  ({ \
+     union { uint16x4x2_t __i; __neon_int16x4x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vuzpv4hi ((int16x4_t) __a, (int16x4_t) __b); \
+     __rv.__i; \
+   })
+
+#define vuzp_u32(__a, __b) __extension__ \
+  ({ \
+     union { uint32x2x2_t __i; __neon_int32x2x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vuzpv2si ((int32x2_t) __a, (int32x2_t) __b); \
+     __rv.__i; \
+   })
+
+#define vuzp_p8(__a, __b) __extension__ \
+  ({ \
+     union { poly8x8x2_t __i; __neon_int8x8x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vuzpv8qi ((int8x8_t) __a, (int8x8_t) __b); \
+     __rv.__i; \
+   })
+
+#define vuzp_p16(__a, __b) __extension__ \
+  ({ \
+     union { poly16x4x2_t __i; __neon_int16x4x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vuzpv4hi ((int16x4_t) __a, (int16x4_t) __b); \
+     __rv.__i; \
+   })
+
+#define vuzpq_s8(__a, __b) __extension__ \
+  ({ \
+     union { int8x16x2_t __i; __neon_int8x16x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vuzpv16qi (__a, __b); \
+     __rv.__i; \
+   })
+
+#define vuzpq_s16(__a, __b) __extension__ \
+  ({ \
+     union { int16x8x2_t __i; __neon_int16x8x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vuzpv8hi (__a, __b); \
+     __rv.__i; \
+   })
+
+#define vuzpq_s32(__a, __b) __extension__ \
+  ({ \
+     union { int32x4x2_t __i; __neon_int32x4x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vuzpv4si (__a, __b); \
+     __rv.__i; \
+   })
+
+#define vuzpq_f32(__a, __b) __extension__ \
+  ({ \
+     union { float32x4x2_t __i; __neon_float32x4x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vuzpv4sf (__a, __b); \
+     __rv.__i; \
+   })
+
+#define vuzpq_u8(__a, __b) __extension__ \
+  ({ \
+     union { uint8x16x2_t __i; __neon_int8x16x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vuzpv16qi ((int8x16_t) __a, (int8x16_t) __b); \
+     __rv.__i; \
+   })
+
+#define vuzpq_u16(__a, __b) __extension__ \
+  ({ \
+     union { uint16x8x2_t __i; __neon_int16x8x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vuzpv8hi ((int16x8_t) __a, (int16x8_t) __b); \
+     __rv.__i; \
+   })
+
+#define vuzpq_u32(__a, __b) __extension__ \
+  ({ \
+     union { uint32x4x2_t __i; __neon_int32x4x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vuzpv4si ((int32x4_t) __a, (int32x4_t) __b); \
+     __rv.__i; \
+   })
+
+#define vuzpq_p8(__a, __b) __extension__ \
+  ({ \
+     union { poly8x16x2_t __i; __neon_int8x16x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vuzpv16qi ((int8x16_t) __a, (int8x16_t) __b); \
+     __rv.__i; \
+   })
+
+#define vuzpq_p16(__a, __b) __extension__ \
+  ({ \
+     union { poly16x8x2_t __i; __neon_int16x8x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vuzpv8hi ((int16x8_t) __a, (int16x8_t) __b); \
+     __rv.__i; \
+   })
+
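+/* Loads.  The __neon_ptr_cast macro (defined earlier in this header)
+   casts the user's pointer to the builtin element pointer type
+   (const __builtin_neon_qi *, etc.); the float variants pass the
+   pointer straight through.  vld1 loads one vector of consecutive
+   elements from memory.  */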
+#define vld1_s8(__a) \
+  (int8x8_t)__builtin_neon_vld1v8qi (__neon_ptr_cast(const __builtin_neon_qi *, __a))
+
+#define vld1_s16(__a) \
+  (int16x4_t)__builtin_neon_vld1v4hi (__neon_ptr_cast(const __builtin_neon_hi *, __a))
+
+#define vld1_s32(__a) \
+  (int32x2_t)__builtin_neon_vld1v2si (__neon_ptr_cast(const __builtin_neon_si *, __a))
+
+#define vld1_s64(__a) \
+  (int64x1_t)__builtin_neon_vld1v1di (__neon_ptr_cast(const __builtin_neon_di *, __a))
+
+#define vld1_f32(__a) \
+  (float32x2_t)__builtin_neon_vld1v2sf (__a)
+
+#define vld1_u8(__a) \
+  (uint8x8_t)__builtin_neon_vld1v8qi (__neon_ptr_cast(const __builtin_neon_qi *, __a))
+
+#define vld1_u16(__a) \
+  (uint16x4_t)__builtin_neon_vld1v4hi (__neon_ptr_cast(const __builtin_neon_hi *, __a))
+
+#define vld1_u32(__a) \
+  (uint32x2_t)__builtin_neon_vld1v2si (__neon_ptr_cast(const __builtin_neon_si *, __a))
+
+#define vld1_u64(__a) \
+  (uint64x1_t)__builtin_neon_vld1v1di (__neon_ptr_cast(const __builtin_neon_di *, __a))
+
+#define vld1_p8(__a) \
+  (poly8x8_t)__builtin_neon_vld1v8qi (__neon_ptr_cast(const __builtin_neon_qi *, __a))
+
+#define vld1_p16(__a) \
+  (poly16x4_t)__builtin_neon_vld1v4hi (__neon_ptr_cast(const __builtin_neon_hi *, __a))
+
+#define vld1q_s8(__a) \
+  (int8x16_t)__builtin_neon_vld1v16qi (__neon_ptr_cast(const __builtin_neon_qi *, __a))
+
+#define vld1q_s16(__a) \
+  (int16x8_t)__builtin_neon_vld1v8hi (__neon_ptr_cast(const __builtin_neon_hi *, __a))
+
+#define vld1q_s32(__a) \
+  (int32x4_t)__builtin_neon_vld1v4si (__neon_ptr_cast(const __builtin_neon_si *, __a))
+
+#define vld1q_s64(__a) \
+  (int64x2_t)__builtin_neon_vld1v2di (__neon_ptr_cast(const __builtin_neon_di *, __a))
+
+#define vld1q_f32(__a) \
+  (float32x4_t)__builtin_neon_vld1v4sf (__a)
+
+#define vld1q_u8(__a) \
+  (uint8x16_t)__builtin_neon_vld1v16qi (__neon_ptr_cast(const __builtin_neon_qi *, __a))
+
+#define vld1q_u16(__a) \
+  (uint16x8_t)__builtin_neon_vld1v8hi (__neon_ptr_cast(const __builtin_neon_hi *, __a))
+
+#define vld1q_u32(__a) \
+  (uint32x4_t)__builtin_neon_vld1v4si (__neon_ptr_cast(const __builtin_neon_si *, __a))
+
+#define vld1q_u64(__a) \
+  (uint64x2_t)__builtin_neon_vld1v2di (__neon_ptr_cast(const __builtin_neon_di *, __a))
+
+#define vld1q_p8(__a) \
+  (poly8x16_t)__builtin_neon_vld1v16qi (__neon_ptr_cast(const __builtin_neon_qi *, __a))
+
+#define vld1q_p16(__a) \
+  (poly16x8_t)__builtin_neon_vld1v8hi (__neon_ptr_cast(const __builtin_neon_hi *, __a))
+
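+/* vld1_lane loads a single element from __a into lane __c of the
+   existing vector __b and returns the result.  */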
+#define vld1_lane_s8(__a, __b, __c) \
+  (int8x8_t)__builtin_neon_vld1_lanev8qi (__neon_ptr_cast(const __builtin_neon_qi *, __a), __b, __c)
+
+#define vld1_lane_s16(__a, __b, __c) \
+  (int16x4_t)__builtin_neon_vld1_lanev4hi (__neon_ptr_cast(const __builtin_neon_hi *, __a), __b, __c)
+
+#define vld1_lane_s32(__a, __b, __c) \
+  (int32x2_t)__builtin_neon_vld1_lanev2si (__neon_ptr_cast(const __builtin_neon_si *, __a), __b, __c)
+
+#define vld1_lane_f32(__a, __b, __c) \
+  (float32x2_t)__builtin_neon_vld1_lanev2sf (__a, __b, __c)
+
+#define vld1_lane_u8(__a, __b, __c) \
+  (uint8x8_t)__builtin_neon_vld1_lanev8qi (__neon_ptr_cast(const __builtin_neon_qi *, __a), (int8x8_t) __b, __c)
+
+#define vld1_lane_u16(__a, __b, __c) \
+  (uint16x4_t)__builtin_neon_vld1_lanev4hi (__neon_ptr_cast(const __builtin_neon_hi *, __a), (int16x4_t) __b, __c)
+
+#define vld1_lane_u32(__a, __b, __c) \
+  (uint32x2_t)__builtin_neon_vld1_lanev2si (__neon_ptr_cast(const __builtin_neon_si *, __a), (int32x2_t) __b, __c)
+
+#define vld1_lane_p8(__a, __b, __c) \
+  (poly8x8_t)__builtin_neon_vld1_lanev8qi (__neon_ptr_cast(const __builtin_neon_qi *, __a), (int8x8_t) __b, __c)
+
+#define vld1_lane_p16(__a, __b, __c) \
+  (poly16x4_t)__builtin_neon_vld1_lanev4hi (__neon_ptr_cast(const __builtin_neon_hi *, __a), (int16x4_t) __b, __c)
+
+#define vld1_lane_s64(__a, __b, __c) \
+  (int64x1_t)__builtin_neon_vld1_lanev1di (__neon_ptr_cast(const __builtin_neon_di *, __a), __b, __c)
+
+#define vld1_lane_u64(__a, __b, __c) \
+  (uint64x1_t)__builtin_neon_vld1_lanev1di (__neon_ptr_cast(const __builtin_neon_di *, __a), (int64x1_t) __b, __c)
+
+#define vld1q_lane_s8(__a, __b, __c) \
+  (int8x16_t)__builtin_neon_vld1_lanev16qi (__neon_ptr_cast(const __builtin_neon_qi *, __a), __b, __c)
+
+#define vld1q_lane_s16(__a, __b, __c) \
+  (int16x8_t)__builtin_neon_vld1_lanev8hi (__neon_ptr_cast(const __builtin_neon_hi *, __a), __b, __c)
+
+#define vld1q_lane_s32(__a, __b, __c) \
+  (int32x4_t)__builtin_neon_vld1_lanev4si (__neon_ptr_cast(const __builtin_neon_si *, __a), __b, __c)
+
+#define vld1q_lane_f32(__a, __b, __c) \
+  (float32x4_t)__builtin_neon_vld1_lanev4sf (__a, __b, __c)
+
+#define vld1q_lane_u8(__a, __b, __c) \
+  (uint8x16_t)__builtin_neon_vld1_lanev16qi (__neon_ptr_cast(const __builtin_neon_qi *, __a), (int8x16_t) __b, __c)
+
+#define vld1q_lane_u16(__a, __b, __c) \
+  (uint16x8_t)__builtin_neon_vld1_lanev8hi (__neon_ptr_cast(const __builtin_neon_hi *, __a), (int16x8_t) __b, __c)
+
+#define vld1q_lane_u32(__a, __b, __c) \
+  (uint32x4_t)__builtin_neon_vld1_lanev4si (__neon_ptr_cast(const __builtin_neon_si *, __a), (int32x4_t) __b, __c)
+
+#define vld1q_lane_p8(__a, __b, __c) \
+  (poly8x16_t)__builtin_neon_vld1_lanev16qi (__neon_ptr_cast(const __builtin_neon_qi *, __a), (int8x16_t) __b, __c)
+
+#define vld1q_lane_p16(__a, __b, __c) \
+  (poly16x8_t)__builtin_neon_vld1_lanev8hi (__neon_ptr_cast(const __builtin_neon_hi *, __a), (int16x8_t) __b, __c)
+
+#define vld1q_lane_s64(__a, __b, __c) \
+  (int64x2_t)__builtin_neon_vld1_lanev2di (__neon_ptr_cast(const __builtin_neon_di *, __a), __b, __c)
+
+#define vld1q_lane_u64(__a, __b, __c) \
+  (uint64x2_t)__builtin_neon_vld1_lanev2di (__neon_ptr_cast(const __builtin_neon_di *, __a), (int64x2_t) __b, __c)
+
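+/* vld1_dup loads one element from __a and replicates it across every
+   lane of the result.  */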
+#define vld1_dup_s8(__a) \
+  (int8x8_t)__builtin_neon_vld1_dupv8qi (__neon_ptr_cast(const __builtin_neon_qi *, __a))
+
+#define vld1_dup_s16(__a) \
+  (int16x4_t)__builtin_neon_vld1_dupv4hi (__neon_ptr_cast(const __builtin_neon_hi *, __a))
+
+#define vld1_dup_s32(__a) \
+  (int32x2_t)__builtin_neon_vld1_dupv2si (__neon_ptr_cast(const __builtin_neon_si *, __a))
+
+#define vld1_dup_f32(__a) \
+  (float32x2_t)__builtin_neon_vld1_dupv2sf (__a)
+
+#define vld1_dup_u8(__a) \
+  (uint8x8_t)__builtin_neon_vld1_dupv8qi (__neon_ptr_cast(const __builtin_neon_qi *, __a))
+
+#define vld1_dup_u16(__a) \
+  (uint16x4_t)__builtin_neon_vld1_dupv4hi (__neon_ptr_cast(const __builtin_neon_hi *, __a))
+
+#define vld1_dup_u32(__a) \
+  (uint32x2_t)__builtin_neon_vld1_dupv2si (__neon_ptr_cast(const __builtin_neon_si *, __a))
+
+#define vld1_dup_p8(__a) \
+  (poly8x8_t)__builtin_neon_vld1_dupv8qi (__neon_ptr_cast(const __builtin_neon_qi *, __a))
+
+#define vld1_dup_p16(__a) \
+  (poly16x4_t)__builtin_neon_vld1_dupv4hi (__neon_ptr_cast(const __builtin_neon_hi *, __a))
+
+#define vld1_dup_s64(__a) \
+  (int64x1_t)__builtin_neon_vld1_dupv1di (__neon_ptr_cast(const __builtin_neon_di *, __a))
+
+#define vld1_dup_u64(__a) \
+  (uint64x1_t)__builtin_neon_vld1_dupv1di (__neon_ptr_cast(const __builtin_neon_di *, __a))
+
+#define vld1q_dup_s8(__a) \
+  (int8x16_t)__builtin_neon_vld1_dupv16qi (__neon_ptr_cast(const __builtin_neon_qi *, __a))
+
+#define vld1q_dup_s16(__a) \
+  (int16x8_t)__builtin_neon_vld1_dupv8hi (__neon_ptr_cast(const __builtin_neon_hi *, __a))
+
+#define vld1q_dup_s32(__a) \
+  (int32x4_t)__builtin_neon_vld1_dupv4si (__neon_ptr_cast(const __builtin_neon_si *, __a))
+
+#define vld1q_dup_f32(__a) \
+  (float32x4_t)__builtin_neon_vld1_dupv4sf (__a)
+
+#define vld1q_dup_u8(__a) \
+  (uint8x16_t)__builtin_neon_vld1_dupv16qi (__neon_ptr_cast(const __builtin_neon_qi *, __a))
+
+#define vld1q_dup_u16(__a) \
+  (uint16x8_t)__builtin_neon_vld1_dupv8hi (__neon_ptr_cast(const __builtin_neon_hi *, __a))
+
+#define vld1q_dup_u32(__a) \
+  (uint32x4_t)__builtin_neon_vld1_dupv4si (__neon_ptr_cast(const __builtin_neon_si *, __a))
+
+#define vld1q_dup_p8(__a) \
+  (poly8x16_t)__builtin_neon_vld1_dupv16qi (__neon_ptr_cast(const __builtin_neon_qi *, __a))
+
+#define vld1q_dup_p16(__a) \
+  (poly16x8_t)__builtin_neon_vld1_dupv8hi (__neon_ptr_cast(const __builtin_neon_hi *, __a))
+
+#define vld1q_dup_s64(__a) \
+  (int64x2_t)__builtin_neon_vld1_dupv2di (__neon_ptr_cast(const __builtin_neon_di *, __a))
+
+#define vld1q_dup_u64(__a) \
+  (uint64x2_t)__builtin_neon_vld1_dupv2di (__neon_ptr_cast(const __builtin_neon_di *, __a))
+
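+/* Stores mirror the loads but return no value, so these macros expand
+   to the bare builtin call with no result cast; unsigned and
+   polynomial vectors are reinterpreted as signed on the way in.  */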
+#define vst1_s8(__a, __b) \
+  __builtin_neon_vst1v8qi (__neon_ptr_cast(__builtin_neon_qi *, __a), __b)
+
+#define vst1_s16(__a, __b) \
+  __builtin_neon_vst1v4hi (__neon_ptr_cast(__builtin_neon_hi *, __a), __b)
+
+#define vst1_s32(__a, __b) \
+  __builtin_neon_vst1v2si (__neon_ptr_cast(__builtin_neon_si *, __a), __b)
+
+#define vst1_s64(__a, __b) \
+  __builtin_neon_vst1v1di (__neon_ptr_cast(__builtin_neon_di *, __a), __b)
+
+#define vst1_f32(__a, __b) \
+  __builtin_neon_vst1v2sf (__a, __b)
+
+#define vst1_u8(__a, __b) \
+  __builtin_neon_vst1v8qi (__neon_ptr_cast(__builtin_neon_qi *, __a), (int8x8_t) __b)
+
+#define vst1_u16(__a, __b) \
+  __builtin_neon_vst1v4hi (__neon_ptr_cast(__builtin_neon_hi *, __a), (int16x4_t) __b)
+
+#define vst1_u32(__a, __b) \
+  __builtin_neon_vst1v2si (__neon_ptr_cast(__builtin_neon_si *, __a), (int32x2_t) __b)
+
+#define vst1_u64(__a, __b) \
+  __builtin_neon_vst1v1di (__neon_ptr_cast(__builtin_neon_di *, __a), (int64x1_t) __b)
+
+#define vst1_p8(__a, __b) \
+  __builtin_neon_vst1v8qi (__neon_ptr_cast(__builtin_neon_qi *, __a), (int8x8_t) __b)
+
+#define vst1_p16(__a, __b) \
+  __builtin_neon_vst1v4hi (__neon_ptr_cast(__builtin_neon_hi *, __a), (int16x4_t) __b)
+
+#define vst1q_s8(__a, __b) \
+  __builtin_neon_vst1v16qi (__neon_ptr_cast(__builtin_neon_qi *, __a), __b)
+
+#define vst1q_s16(__a, __b) \
+  __builtin_neon_vst1v8hi (__neon_ptr_cast(__builtin_neon_hi *, __a), __b)
+
+#define vst1q_s32(__a, __b) \
+  __builtin_neon_vst1v4si (__neon_ptr_cast(__builtin_neon_si *, __a), __b)
+
+#define vst1q_s64(__a, __b) \
+  __builtin_neon_vst1v2di (__neon_ptr_cast(__builtin_neon_di *, __a), __b)
+
+#define vst1q_f32(__a, __b) \
+  __builtin_neon_vst1v4sf (__a, __b)
+
+#define vst1q_u8(__a, __b) \
+  __builtin_neon_vst1v16qi (__neon_ptr_cast(__builtin_neon_qi *, __a), (int8x16_t) __b)
+
+#define vst1q_u16(__a, __b) \
+  __builtin_neon_vst1v8hi (__neon_ptr_cast(__builtin_neon_hi *, __a), (int16x8_t) __b)
+
+#define vst1q_u32(__a, __b) \
+  __builtin_neon_vst1v4si (__neon_ptr_cast(__builtin_neon_si *, __a), (int32x4_t) __b)
+
+#define vst1q_u64(__a, __b) \
+  __builtin_neon_vst1v2di (__neon_ptr_cast(__builtin_neon_di *, __a), (int64x2_t) __b)
+
+#define vst1q_p8(__a, __b) \
+  __builtin_neon_vst1v16qi (__neon_ptr_cast(__builtin_neon_qi *, __a), (int8x16_t) __b)
+
+#define vst1q_p16(__a, __b) \
+  __builtin_neon_vst1v8hi (__neon_ptr_cast(__builtin_neon_hi *, __a), (int16x8_t) __b)
+
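+/* vst1_lane stores only lane __c of the vector __b to __a.  */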
+#define vst1_lane_s8(__a, __b, __c) \
+  __builtin_neon_vst1_lanev8qi (__neon_ptr_cast(__builtin_neon_qi *, __a), __b, __c)
+
+#define vst1_lane_s16(__a, __b, __c) \
+  __builtin_neon_vst1_lanev4hi (__neon_ptr_cast(__builtin_neon_hi *, __a), __b, __c)
+
+#define vst1_lane_s32(__a, __b, __c) \
+  __builtin_neon_vst1_lanev2si (__neon_ptr_cast(__builtin_neon_si *, __a), __b, __c)
+
+#define vst1_lane_f32(__a, __b, __c) \
+  __builtin_neon_vst1_lanev2sf (__a, __b, __c)
+
+#define vst1_lane_u8(__a, __b, __c) \
+  __builtin_neon_vst1_lanev8qi (__neon_ptr_cast(__builtin_neon_qi *, __a), (int8x8_t) __b, __c)
+
+#define vst1_lane_u16(__a, __b, __c) \
+  __builtin_neon_vst1_lanev4hi (__neon_ptr_cast(__builtin_neon_hi *, __a), (int16x4_t) __b, __c)
+
+#define vst1_lane_u32(__a, __b, __c) \
+  __builtin_neon_vst1_lanev2si (__neon_ptr_cast(__builtin_neon_si *, __a), (int32x2_t) __b, __c)
+
+#define vst1_lane_p8(__a, __b, __c) \
+  __builtin_neon_vst1_lanev8qi (__neon_ptr_cast(__builtin_neon_qi *, __a), (int8x8_t) __b, __c)
+
+#define vst1_lane_p16(__a, __b, __c) \
+  __builtin_neon_vst1_lanev4hi (__neon_ptr_cast(__builtin_neon_hi *, __a), (int16x4_t) __b, __c)
+
+#define vst1_lane_s64(__a, __b, __c) \
+  __builtin_neon_vst1_lanev1di (__neon_ptr_cast(__builtin_neon_di *, __a), __b, __c)
+
+#define vst1_lane_u64(__a, __b, __c) \
+  __builtin_neon_vst1_lanev1di (__neon_ptr_cast(__builtin_neon_di *, __a), (int64x1_t) __b, __c)
+
+#define vst1q_lane_s8(__a, __b, __c) \
+  __builtin_neon_vst1_lanev16qi (__neon_ptr_cast(__builtin_neon_qi *, __a), __b, __c)
+
+#define vst1q_lane_s16(__a, __b, __c) \
+  __builtin_neon_vst1_lanev8hi (__neon_ptr_cast(__builtin_neon_hi *, __a), __b, __c)
+
+#define vst1q_lane_s32(__a, __b, __c) \
+  __builtin_neon_vst1_lanev4si (__neon_ptr_cast(__builtin_neon_si *, __a), __b, __c)
+
+#define vst1q_lane_f32(__a, __b, __c) \
+  __builtin_neon_vst1_lanev4sf (__a, __b, __c)
+
+#define vst1q_lane_u8(__a, __b, __c) \
+  __builtin_neon_vst1_lanev16qi (__neon_ptr_cast(__builtin_neon_qi *, __a), (int8x16_t) __b, __c)
+
+#define vst1q_lane_u16(__a, __b, __c) \
+  __builtin_neon_vst1_lanev8hi (__neon_ptr_cast(__builtin_neon_hi *, __a), (int16x8_t) __b, __c)
+
+#define vst1q_lane_u32(__a, __b, __c) \
+  __builtin_neon_vst1_lanev4si (__neon_ptr_cast(__builtin_neon_si *, __a), (int32x4_t) __b, __c)
+
+#define vst1q_lane_p8(__a, __b, __c) \
+  __builtin_neon_vst1_lanev16qi (__neon_ptr_cast(__builtin_neon_qi *, __a), (int8x16_t) __b, __c)
+
+#define vst1q_lane_p16(__a, __b, __c) \
+  __builtin_neon_vst1_lanev8hi (__neon_ptr_cast(__builtin_neon_hi *, __a), (int16x8_t) __b, __c)
+
+#define vst1q_lane_s64(__a, __b, __c) \
+  __builtin_neon_vst1_lanev2di (__neon_ptr_cast(__builtin_neon_di *, __a), __b, __c)
+
+#define vst1q_lane_u64(__a, __b, __c) \
+  __builtin_neon_vst1_lanev2di (__neon_ptr_cast(__builtin_neon_di *, __a), (int64x2_t) __b, __c)
+
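+/* vld2 loads 2-element structures from memory and de-interleaves them
+   into the two vectors of an x2_t result; as with vtrn/vzip/vuzp, a
+   union converts the builtin's internal aggregate to the public struct
+   type.  */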
+#define vld2_s8(__a) __extension__ \
+  ({ \
+     union { int8x8x2_t __i; __neon_int8x8x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld2v8qi (__neon_ptr_cast(const __builtin_neon_qi *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld2_s16(__a) __extension__ \
+  ({ \
+     union { int16x4x2_t __i; __neon_int16x4x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld2v4hi (__neon_ptr_cast(const __builtin_neon_hi *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld2_s32(__a) __extension__ \
+  ({ \
+     union { int32x2x2_t __i; __neon_int32x2x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld2v2si (__neon_ptr_cast(const __builtin_neon_si *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld2_f32(__a) __extension__ \
+  ({ \
+     union { float32x2x2_t __i; __neon_float32x2x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld2v2sf (__a); \
+     __rv.__i; \
+   })
+
+#define vld2_u8(__a) __extension__ \
+  ({ \
+     union { uint8x8x2_t __i; __neon_int8x8x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld2v8qi (__neon_ptr_cast(const __builtin_neon_qi *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld2_u16(__a) __extension__ \
+  ({ \
+     union { uint16x4x2_t __i; __neon_int16x4x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld2v4hi (__neon_ptr_cast(const __builtin_neon_hi *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld2_u32(__a) __extension__ \
+  ({ \
+     union { uint32x2x2_t __i; __neon_int32x2x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld2v2si (__neon_ptr_cast(const __builtin_neon_si *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld2_p8(__a) __extension__ \
+  ({ \
+     union { poly8x8x2_t __i; __neon_int8x8x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld2v8qi (__neon_ptr_cast(const __builtin_neon_qi *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld2_p16(__a) __extension__ \
+  ({ \
+     union { poly16x4x2_t __i; __neon_int16x4x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld2v4hi (__neon_ptr_cast(const __builtin_neon_hi *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld2_s64(__a) __extension__ \
+  ({ \
+     union { int64x1x2_t __i; __neon_int64x1x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld2v1di (__neon_ptr_cast(const __builtin_neon_di *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld2_u64(__a) __extension__ \
+  ({ \
+     union { uint64x1x2_t __i; __neon_int64x1x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld2v1di (__neon_ptr_cast(const __builtin_neon_di *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld2q_s8(__a) __extension__ \
+  ({ \
+     union { int8x16x2_t __i; __neon_int8x16x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld2v16qi (__neon_ptr_cast(const __builtin_neon_qi *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld2q_s16(__a) __extension__ \
+  ({ \
+     union { int16x8x2_t __i; __neon_int16x8x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld2v8hi (__neon_ptr_cast(const __builtin_neon_hi *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld2q_s32(__a) __extension__ \
+  ({ \
+     union { int32x4x2_t __i; __neon_int32x4x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld2v4si (__neon_ptr_cast(const __builtin_neon_si *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld2q_f32(__a) __extension__ \
+  ({ \
+     union { float32x4x2_t __i; __neon_float32x4x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld2v4sf (__a); \
+     __rv.__i; \
+   })
+
+#define vld2q_u8(__a) __extension__ \
+  ({ \
+     union { uint8x16x2_t __i; __neon_int8x16x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld2v16qi (__neon_ptr_cast(const __builtin_neon_qi *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld2q_u16(__a) __extension__ \
+  ({ \
+     union { uint16x8x2_t __i; __neon_int16x8x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld2v8hi (__neon_ptr_cast(const __builtin_neon_hi *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld2q_u32(__a) __extension__ \
+  ({ \
+     union { uint32x4x2_t __i; __neon_int32x4x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld2v4si (__neon_ptr_cast(const __builtin_neon_si *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld2q_p8(__a) __extension__ \
+  ({ \
+     union { poly8x16x2_t __i; __neon_int8x16x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld2v16qi (__neon_ptr_cast(const __builtin_neon_qi *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld2q_p16(__a) __extension__ \
+  ({ \
+     union { poly16x8x2_t __i; __neon_int16x8x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld2v8hi (__neon_ptr_cast(const __builtin_neon_hi *, __a)); \
+     __rv.__i; \
+   })
+
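[As a usage sketch (illustrative only; the function below is not part of the
patch), vld2 deinterleaves adjacent element pairs from memory, and in the
gcc-compatibility mode the .val members are plain vectors, so the builtin
vector operators apply directly:

  /* Load 8 interleaved s16 values, then add the even- and odd-indexed
     streams using the plain vector '+' operator.  */
  int16x4_t sum_streams (const int16_t *p)
  {
    int16x4x2_t v = vld2_s16 (p);   /* v.val[0] = elements 0,2,4,6 */
    return v.val[0] + v.val[1];     /* v.val[1] = elements 1,3,5,7 */
  }
]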
+#define vld2_lane_s8(__a, __b, __c) __extension__ \
+  ({ \
+     union { int8x8x2_t __i; __neon_int8x8x2_t __o; } __bu = { __b }; \
+     union { int8x8x2_t __i; __neon_int8x8x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld2_lanev8qi (__neon_ptr_cast(const __builtin_neon_qi *, __a), __bu.__o, __c); \
+     __rv.__i; \
+   })
+
+#define vld2_lane_s16(__a, __b, __c) __extension__ \
+  ({ \
+     union { int16x4x2_t __i; __neon_int16x4x2_t __o; } __bu = { __b }; \
+     union { int16x4x2_t __i; __neon_int16x4x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld2_lanev4hi (__neon_ptr_cast(const __builtin_neon_hi *, __a), __bu.__o, __c); \
+     __rv.__i; \
+   })
+
+#define vld2_lane_s32(__a, __b, __c) __extension__ \
+  ({ \
+     union { int32x2x2_t __i; __neon_int32x2x2_t __o; } __bu = { __b }; \
+     union { int32x2x2_t __i; __neon_int32x2x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld2_lanev2si (__neon_ptr_cast(const __builtin_neon_si *, __a), __bu.__o, __c); \
+     __rv.__i; \
+   })
+
+#define vld2_lane_f32(__a, __b, __c) __extension__ \
+  ({ \
+     union { float32x2x2_t __i; __neon_float32x2x2_t __o; } __bu = { __b }; \
+     union { float32x2x2_t __i; __neon_float32x2x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld2_lanev2sf (__a, __bu.__o, __c); \
+     __rv.__i; \
+   })
+
+#define vld2_lane_u8(__a, __b, __c) __extension__ \
+  ({ \
+     union { uint8x8x2_t __i; __neon_int8x8x2_t __o; } __bu = { __b }; \
+     union { uint8x8x2_t __i; __neon_int8x8x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld2_lanev8qi (__neon_ptr_cast(const __builtin_neon_qi *, __a), __bu.__o, __c); \
+     __rv.__i; \
+   })
+
+#define vld2_lane_u16(__a, __b, __c) __extension__ \
+  ({ \
+     union { uint16x4x2_t __i; __neon_int16x4x2_t __o; } __bu = { __b }; \
+     union { uint16x4x2_t __i; __neon_int16x4x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld2_lanev4hi (__neon_ptr_cast(const __builtin_neon_hi *, __a), __bu.__o, __c); \
+     __rv.__i; \
+   })
+
+#define vld2_lane_u32(__a, __b, __c) __extension__ \
+  ({ \
+     union { uint32x2x2_t __i; __neon_int32x2x2_t __o; } __bu = { __b }; \
+     union { uint32x2x2_t __i; __neon_int32x2x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld2_lanev2si (__neon_ptr_cast(const __builtin_neon_si *, __a), __bu.__o, __c); \
+     __rv.__i; \
+   })
+
+#define vld2_lane_p8(__a, __b, __c) __extension__ \
+  ({ \
+     union { poly8x8x2_t __i; __neon_int8x8x2_t __o; } __bu = { __b }; \
+     union { poly8x8x2_t __i; __neon_int8x8x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld2_lanev8qi (__neon_ptr_cast(const __builtin_neon_qi *, __a), __bu.__o, __c); \
+     __rv.__i; \
+   })
+
+#define vld2_lane_p16(__a, __b, __c) __extension__ \
+  ({ \
+     union { poly16x4x2_t __i; __neon_int16x4x2_t __o; } __bu = { __b }; \
+     union { poly16x4x2_t __i; __neon_int16x4x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld2_lanev4hi (__neon_ptr_cast(const __builtin_neon_hi *, __a), __bu.__o, __c); \
+     __rv.__i; \
+   })
+
+#define vld2q_lane_s16(__a, __b, __c) __extension__ \
+  ({ \
+     union { int16x8x2_t __i; __neon_int16x8x2_t __o; } __bu = { __b }; \
+     union { int16x8x2_t __i; __neon_int16x8x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld2_lanev8hi (__neon_ptr_cast(const __builtin_neon_hi *, __a), __bu.__o, __c); \
+     __rv.__i; \
+   })
+
+#define vld2q_lane_s32(__a, __b, __c) __extension__ \
+  ({ \
+     union { int32x4x2_t __i; __neon_int32x4x2_t __o; } __bu = { __b }; \
+     union { int32x4x2_t __i; __neon_int32x4x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld2_lanev4si (__neon_ptr_cast(const __builtin_neon_si *, __a), __bu.__o, __c); \
+     __rv.__i; \
+   })
+
+#define vld2q_lane_f32(__a, __b, __c) __extension__ \
+  ({ \
+     union { float32x4x2_t __i; __neon_float32x4x2_t __o; } __bu = { __b }; \
+     union { float32x4x2_t __i; __neon_float32x4x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld2_lanev4sf (__a, __bu.__o, __c); \
+     __rv.__i; \
+   })
+
+#define vld2q_lane_u16(__a, __b, __c) __extension__ \
+  ({ \
+     union { uint16x8x2_t __i; __neon_int16x8x2_t __o; } __bu = { __b }; \
+     union { uint16x8x2_t __i; __neon_int16x8x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld2_lanev8hi (__neon_ptr_cast(const __builtin_neon_hi *, __a), __bu.__o, __c); \
+     __rv.__i; \
+   })
+
+#define vld2q_lane_u32(__a, __b, __c) __extension__ \
+  ({ \
+     union { uint32x4x2_t __i; __neon_int32x4x2_t __o; } __bu = { __b }; \
+     union { uint32x4x2_t __i; __neon_int32x4x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld2_lanev4si (__neon_ptr_cast(const __builtin_neon_si *, __a), __bu.__o, __c); \
+     __rv.__i; \
+   })
+
+#define vld2q_lane_p16(__a, __b, __c) __extension__ \
+  ({ \
+     union { poly16x8x2_t __i; __neon_int16x8x2_t __o; } __bu = { __b }; \
+     union { poly16x8x2_t __i; __neon_int16x8x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld2_lanev8hi (__neon_ptr_cast(const __builtin_neon_hi *, __a), __bu.__o, __c); \
+     __rv.__i; \
+   })
+
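[The _lane forms take an existing register pair __b and a constant lane
index __c; note that the quadword lane variants are only generated for 16-
and 32-bit element types (there is no vld2q_lane_s8).  A sketch, with an
invented function name:

  /* Replace lane 3 of both vectors with the element pair at *p; the
     lane index must be a compile-time constant.  */
  int16x4x2_t load_pair_lane3 (int16x4x2_t acc, const int16_t *p)
  {
    return vld2_lane_s16 (p, acc, 3);
  }
]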
+#define vld2_dup_s8(__a) __extension__ \
+  ({ \
+     union { int8x8x2_t __i; __neon_int8x8x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld2_dupv8qi (__neon_ptr_cast(const __builtin_neon_qi *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld2_dup_s16(__a) __extension__ \
+  ({ \
+     union { int16x4x2_t __i; __neon_int16x4x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld2_dupv4hi (__neon_ptr_cast(const __builtin_neon_hi *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld2_dup_s32(__a) __extension__ \
+  ({ \
+     union { int32x2x2_t __i; __neon_int32x2x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld2_dupv2si (__neon_ptr_cast(const __builtin_neon_si *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld2_dup_f32(__a) __extension__ \
+  ({ \
+     union { float32x2x2_t __i; __neon_float32x2x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld2_dupv2sf (__a); \
+     __rv.__i; \
+   })
+
+#define vld2_dup_u8(__a) __extension__ \
+  ({ \
+     union { uint8x8x2_t __i; __neon_int8x8x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld2_dupv8qi (__neon_ptr_cast(const __builtin_neon_qi *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld2_dup_u16(__a) __extension__ \
+  ({ \
+     union { uint16x4x2_t __i; __neon_int16x4x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld2_dupv4hi (__neon_ptr_cast(const __builtin_neon_hi *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld2_dup_u32(__a) __extension__ \
+  ({ \
+     union { uint32x2x2_t __i; __neon_int32x2x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld2_dupv2si (__neon_ptr_cast(const __builtin_neon_si *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld2_dup_p8(__a) __extension__ \
+  ({ \
+     union { poly8x8x2_t __i; __neon_int8x8x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld2_dupv8qi (__neon_ptr_cast(const __builtin_neon_qi *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld2_dup_p16(__a) __extension__ \
+  ({ \
+     union { poly16x4x2_t __i; __neon_int16x4x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld2_dupv4hi (__neon_ptr_cast(const __builtin_neon_hi *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld2_dup_s64(__a) __extension__ \
+  ({ \
+     union { int64x1x2_t __i; __neon_int64x1x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld2_dupv1di (__neon_ptr_cast(const __builtin_neon_di *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld2_dup_u64(__a) __extension__ \
+  ({ \
+     union { uint64x1x2_t __i; __neon_int64x1x2_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld2_dupv1di (__neon_ptr_cast(const __builtin_neon_di *, __a)); \
+     __rv.__i; \
+   })
+
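[The _dup forms load a single element pair and replicate it across every
lane; these also cover the 64-bit element types, where the one-lane vectors
make "dup" degenerate to a plain two-element load.  Sketch (function name
invented):

  /* Broadcast one (even, odd) pair to all four lanes of each vector.  */
  int16x4x2_t splat_pair (const int16_t *p)
  {
    return vld2_dup_s16 (p);
  }
]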
+#define vst2_s8(__a, __b) __extension__ \
+  ({ \
+     union { int8x8x2_t __i; __neon_int8x8x2_t __o; } __bu = { __b }; \
+     __builtin_neon_vst2v8qi (__neon_ptr_cast(__builtin_neon_qi *, __a), __bu.__o); \
+   })
+
+#define vst2_s16(__a, __b) __extension__ \
+  ({ \
+     union { int16x4x2_t __i; __neon_int16x4x2_t __o; } __bu = { __b }; \
+     __builtin_neon_vst2v4hi (__neon_ptr_cast(__builtin_neon_hi *, __a), __bu.__o); \
+   })
+
+#define vst2_s32(__a, __b) __extension__ \
+  ({ \
+     union { int32x2x2_t __i; __neon_int32x2x2_t __o; } __bu = { __b }; \
+     __builtin_neon_vst2v2si (__neon_ptr_cast(__builtin_neon_si *, __a), __bu.__o); \
+   })
+
+#define vst2_f32(__a, __b) __extension__ \
+  ({ \
+     union { float32x2x2_t __i; __neon_float32x2x2_t __o; } __bu = { __b }; \
+     __builtin_neon_vst2v2sf (__a, __bu.__o); \
+   })
+
+#define vst2_u8(__a, __b) __extension__ \
+  ({ \
+     union { uint8x8x2_t __i; __neon_int8x8x2_t __o; } __bu = { __b }; \
+     __builtin_neon_vst2v8qi (__neon_ptr_cast(__builtin_neon_qi *, __a), __bu.__o); \
+   })
+
+#define vst2_u16(__a, __b) __extension__ \
+  ({ \
+     union { uint16x4x2_t __i; __neon_int16x4x2_t __o; } __bu = { __b }; \
+     __builtin_neon_vst2v4hi (__neon_ptr_cast(__builtin_neon_hi *, __a), __bu.__o); \
+   })
+
+#define vst2_u32(__a, __b) __extension__ \
+  ({ \
+     union { uint32x2x2_t __i; __neon_int32x2x2_t __o; } __bu = { __b }; \
+     __builtin_neon_vst2v2si (__neon_ptr_cast(__builtin_neon_si *, __a), __bu.__o); \
+   })
+
+#define vst2_p8(__a, __b) __extension__ \
+  ({ \
+     union { poly8x8x2_t __i; __neon_int8x8x2_t __o; } __bu = { __b }; \
+     __builtin_neon_vst2v8qi (__neon_ptr_cast(__builtin_neon_qi *, __a), __bu.__o); \
+   })
+
+#define vst2_p16(__a, __b) __extension__ \
+  ({ \
+     union { poly16x4x2_t __i; __neon_int16x4x2_t __o; } __bu = { __b }; \
+     __builtin_neon_vst2v4hi (__neon_ptr_cast(__builtin_neon_hi *, __a), __bu.__o); \
+   })
+
+#define vst2_s64(__a, __b) __extension__ \
+  ({ \
+     union { int64x1x2_t __i; __neon_int64x1x2_t __o; } __bu = { __b }; \
+     __builtin_neon_vst2v1di (__neon_ptr_cast(__builtin_neon_di *, __a), __bu.__o); \
+   })
+
+#define vst2_u64(__a, __b) __extension__ \
+  ({ \
+     union { uint64x1x2_t __i; __neon_int64x1x2_t __o; } __bu = { __b }; \
+     __builtin_neon_vst2v1di (__neon_ptr_cast(__builtin_neon_di *, __a), __bu.__o); \
+   })
+
+#define vst2q_s8(__a, __b) __extension__ \
+  ({ \
+     union { int8x16x2_t __i; __neon_int8x16x2_t __o; } __bu = { __b }; \
+     __builtin_neon_vst2v16qi (__neon_ptr_cast(__builtin_neon_qi *, __a), __bu.__o); \
+   })
+
+#define vst2q_s16(__a, __b) __extension__ \
+  ({ \
+     union { int16x8x2_t __i; __neon_int16x8x2_t __o; } __bu = { __b }; \
+     __builtin_neon_vst2v8hi (__neon_ptr_cast(__builtin_neon_hi *, __a), __bu.__o); \
+   })
+
+#define vst2q_s32(__a, __b) __extension__ \
+  ({ \
+     union { int32x4x2_t __i; __neon_int32x4x2_t __o; } __bu = { __b }; \
+     __builtin_neon_vst2v4si (__neon_ptr_cast(__builtin_neon_si *, __a), __bu.__o); \
+   })
+
+#define vst2q_f32(__a, __b) __extension__ \
+  ({ \
+     union { float32x4x2_t __i; __neon_float32x4x2_t __o; } __bu = { __b }; \
+     __builtin_neon_vst2v4sf (__a, __bu.__o); \
+   })
+
+#define vst2q_u8(__a, __b) __extension__ \
+  ({ \
+     union { uint8x16x2_t __i; __neon_int8x16x2_t __o; } __bu = { __b }; \
+     __builtin_neon_vst2v16qi (__neon_ptr_cast(__builtin_neon_qi *, __a), __bu.__o); \
+   })
+
+#define vst2q_u16(__a, __b) __extension__ \
+  ({ \
+     union { uint16x8x2_t __i; __neon_int16x8x2_t __o; } __bu = { __b }; \
+     __builtin_neon_vst2v8hi (__neon_ptr_cast(__builtin_neon_hi *, __a), __bu.__o); \
+   })
+
+#define vst2q_u32(__a, __b) __extension__ \
+  ({ \
+     union { uint32x4x2_t __i; __neon_int32x4x2_t __o; } __bu = { __b }; \
+     __builtin_neon_vst2v4si (__neon_ptr_cast(__builtin_neon_si *, __a), __bu.__o); \
+   })
+
+#define vst2q_p8(__a, __b) __extension__ \
+  ({ \
+     union { poly8x16x2_t __i; __neon_int8x16x2_t __o; } __bu = { __b }; \
+     __builtin_neon_vst2v16qi (__neon_ptr_cast(__builtin_neon_qi *, __a), __bu.__o); \
+   })
+
+#define vst2q_p16(__a, __b) __extension__ \
+  ({ \
+     union { poly16x8x2_t __i; __neon_int16x8x2_t __o; } __bu = { __b }; \
+     __builtin_neon_vst2v8hi (__neon_ptr_cast(__builtin_neon_hi *, __a), __bu.__o); \
+   })
+
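[The store macros are the inverse: __b enters through the same union so the
containerized builtin type reaches the builtin, and the two vectors are
interleaved back into memory.  Sketch (invented helper):

  /* Interleave a and b into p: a[0], b[0], a[1], b[1], ...  */
  void interleave (int16_t *p, int16x4_t a, int16x4_t b)
  {
    int16x4x2_t v = { { a, b } };
    vst2_s16 (p, v);
  }
]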
+#define vst2_lane_s8(__a, __b, __c) __extension__ \
+  ({ \
+     union { int8x8x2_t __i; __neon_int8x8x2_t __o; } __bu = { __b }; \
+     __builtin_neon_vst2_lanev8qi (__neon_ptr_cast(__builtin_neon_qi *, __a), __bu.__o, __c); \
+   })
+
+#define vst2_lane_s16(__a, __b, __c) __extension__ \
+  ({ \
+     union { int16x4x2_t __i; __neon_int16x4x2_t __o; } __bu = { __b }; \
+     __builtin_neon_vst2_lanev4hi (__neon_ptr_cast(__builtin_neon_hi *, __a), __bu.__o, __c); \
+   })
+
+#define vst2_lane_s32(__a, __b, __c) __extension__ \
+  ({ \
+     union { int32x2x2_t __i; __neon_int32x2x2_t __o; } __bu = { __b }; \
+     __builtin_neon_vst2_lanev2si (__neon_ptr_cast(__builtin_neon_si *, __a), __bu.__o, __c); \
+   })
+
+#define vst2_lane_f32(__a, __b, __c) __extension__ \
+  ({ \
+     union { float32x2x2_t __i; __neon_float32x2x2_t __o; } __bu = { __b }; \
+     __builtin_neon_vst2_lanev2sf (__a, __bu.__o, __c); \
+   })
+
+#define vst2_lane_u8(__a, __b, __c) __extension__ \
+  ({ \
+     union { uint8x8x2_t __i; __neon_int8x8x2_t __o; } __bu = { __b }; \
+     __builtin_neon_vst2_lanev8qi (__neon_ptr_cast(__builtin_neon_qi *, __a), __bu.__o, __c); \
+   })
+
+#define vst2_lane_u16(__a, __b, __c) __extension__ \
+  ({ \
+     union { uint16x4x2_t __i; __neon_int16x4x2_t __o; } __bu = { __b }; \
+     __builtin_neon_vst2_lanev4hi (__neon_ptr_cast(__builtin_neon_hi *, __a), __bu.__o, __c); \
+   })
+
+#define vst2_lane_u32(__a, __b, __c) __extension__ \
+  ({ \
+     union { uint32x2x2_t __i; __neon_int32x2x2_t __o; } __bu = { __b }; \
+     __builtin_neon_vst2_lanev2si (__neon_ptr_cast(__builtin_neon_si *, __a), __bu.__o, __c); \
+   })
+
+#define vst2_lane_p8(__a, __b, __c) __extension__ \
+  ({ \
+     union { poly8x8x2_t __i; __neon_int8x8x2_t __o; } __bu = { __b }; \
+     __builtin_neon_vst2_lanev8qi (__neon_ptr_cast(__builtin_neon_qi *, __a), __bu.__o, __c); \
+   })
+
+#define vst2_lane_p16(__a, __b, __c) __extension__ \
+  ({ \
+     union { poly16x4x2_t __i; __neon_int16x4x2_t __o; } __bu = { __b }; \
+     __builtin_neon_vst2_lanev4hi (__neon_ptr_cast(__builtin_neon_hi *, __a), __bu.__o, __c); \
+   })
+
+#define vst2q_lane_s16(__a, __b, __c) __extension__ \
+  ({ \
+     union { int16x8x2_t __i; __neon_int16x8x2_t __o; } __bu = { __b }; \
+     __builtin_neon_vst2_lanev8hi (__neon_ptr_cast(__builtin_neon_hi *, __a), __bu.__o, __c); \
+   })
+
+#define vst2q_lane_s32(__a, __b, __c) __extension__ \
+  ({ \
+     union { int32x4x2_t __i; __neon_int32x4x2_t __o; } __bu = { __b }; \
+     __builtin_neon_vst2_lanev4si (__neon_ptr_cast(__builtin_neon_si *, __a), __bu.__o, __c); \
+   })
+
+#define vst2q_lane_f32(__a, __b, __c) __extension__ \
+  ({ \
+     union { float32x4x2_t __i; __neon_float32x4x2_t __o; } __bu = { __b }; \
+     __builtin_neon_vst2_lanev4sf (__a, __bu.__o, __c); \
+   })
+
+#define vst2q_lane_u16(__a, __b, __c) __extension__ \
+  ({ \
+     union { uint16x8x2_t __i; __neon_int16x8x2_t __o; } __bu = { __b }; \
+     __builtin_neon_vst2_lanev8hi (__neon_ptr_cast(__builtin_neon_hi *, __a), __bu.__o, __c); \
+   })
+
+#define vst2q_lane_u32(__a, __b, __c) __extension__ \
+  ({ \
+     union { uint32x4x2_t __i; __neon_int32x4x2_t __o; } __bu = { __b }; \
+     __builtin_neon_vst2_lanev4si (__neon_ptr_cast(__builtin_neon_si *, __a), __bu.__o, __c); \
+   })
+
+#define vst2q_lane_p16(__a, __b, __c) __extension__ \
+  ({ \
+     union { poly16x8x2_t __i; __neon_int16x8x2_t __o; } __bu = { __b }; \
+     __builtin_neon_vst2_lanev8hi (__neon_ptr_cast(__builtin_neon_hi *, __a), __bu.__o, __c); \
+   })
+
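[vst2_lane writes only the element pair held in lane __c.  Sketch (invented
helper):

  /* Store the two s16 values from lane 0 of v to p[0] and p[1].  */
  void store_lane0 (int16_t *p, int16x4x2_t v)
  {
    vst2_lane_s16 (p, v, 0);
  }
]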
+#define vld3_s8(__a) __extension__ \
+  ({ \
+     union { int8x8x3_t __i; __neon_int8x8x3_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld3v8qi (__neon_ptr_cast(const __builtin_neon_qi *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld3_s16(__a) __extension__ \
+  ({ \
+     union { int16x4x3_t __i; __neon_int16x4x3_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld3v4hi (__neon_ptr_cast(const __builtin_neon_hi *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld3_s32(__a) __extension__ \
+  ({ \
+     union { int32x2x3_t __i; __neon_int32x2x3_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld3v2si (__neon_ptr_cast(const __builtin_neon_si *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld3_f32(__a) __extension__ \
+  ({ \
+     union { float32x2x3_t __i; __neon_float32x2x3_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld3v2sf (__a); \
+     __rv.__i; \
+   })
+
+#define vld3_u8(__a) __extension__ \
+  ({ \
+     union { uint8x8x3_t __i; __neon_int8x8x3_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld3v8qi (__neon_ptr_cast(const __builtin_neon_qi *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld3_u16(__a) __extension__ \
+  ({ \
+     union { uint16x4x3_t __i; __neon_int16x4x3_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld3v4hi (__neon_ptr_cast(const __builtin_neon_hi *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld3_u32(__a) __extension__ \
+  ({ \
+     union { uint32x2x3_t __i; __neon_int32x2x3_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld3v2si (__neon_ptr_cast(const __builtin_neon_si *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld3_p8(__a) __extension__ \
+  ({ \
+     union { poly8x8x3_t __i; __neon_int8x8x3_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld3v8qi (__neon_ptr_cast(const __builtin_neon_qi *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld3_p16(__a) __extension__ \
+  ({ \
+     union { poly16x4x3_t __i; __neon_int16x4x3_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld3v4hi (__neon_ptr_cast(const __builtin_neon_hi *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld3_s64(__a) __extension__ \
+  ({ \
+     union { int64x1x3_t __i; __neon_int64x1x3_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld3v1di (__neon_ptr_cast(const __builtin_neon_di *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld3_u64(__a) __extension__ \
+  ({ \
+     union { uint64x1x3_t __i; __neon_int64x1x3_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld3v1di (__neon_ptr_cast(const __builtin_neon_di *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld3q_s8(__a) __extension__ \
+  ({ \
+     union { int8x16x3_t __i; __neon_int8x16x3_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld3v16qi (__neon_ptr_cast(const __builtin_neon_qi *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld3q_s16(__a) __extension__ \
+  ({ \
+     union { int16x8x3_t __i; __neon_int16x8x3_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld3v8hi (__neon_ptr_cast(const __builtin_neon_hi *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld3q_s32(__a) __extension__ \
+  ({ \
+     union { int32x4x3_t __i; __neon_int32x4x3_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld3v4si (__neon_ptr_cast(const __builtin_neon_si *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld3q_f32(__a) __extension__ \
+  ({ \
+     union { float32x4x3_t __i; __neon_float32x4x3_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld3v4sf (__a); \
+     __rv.__i; \
+   })
+
+#define vld3q_u8(__a) __extension__ \
+  ({ \
+     union { uint8x16x3_t __i; __neon_int8x16x3_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld3v16qi (__neon_ptr_cast(const __builtin_neon_qi *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld3q_u16(__a) __extension__ \
+  ({ \
+     union { uint16x8x3_t __i; __neon_int16x8x3_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld3v8hi (__neon_ptr_cast(const __builtin_neon_hi *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld3q_u32(__a) __extension__ \
+  ({ \
+     union { uint32x4x3_t __i; __neon_int32x4x3_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld3v4si (__neon_ptr_cast(const __builtin_neon_si *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld3q_p8(__a) __extension__ \
+  ({ \
+     union { poly8x16x3_t __i; __neon_int8x16x3_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld3v16qi (__neon_ptr_cast(const __builtin_neon_qi *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld3q_p16(__a) __extension__ \
+  ({ \
+     union { poly16x8x3_t __i; __neon_int16x8x3_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld3v8hi (__neon_ptr_cast(const __builtin_neon_hi *, __a)); \
+     __rv.__i; \
+   })
+
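[The three-register families repeat the same union pattern with the x3
struct types.  A classic use is deinterleaving packed RGB data (sketch;
names invented):

  /* Split 24 packed RGB bytes into per-channel vectors; return red.  */
  uint8x8_t red_channel (const uint8_t *rgb)
  {
    uint8x8x3_t px = vld3_u8 (rgb);
    return px.val[0];
  }
]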
+#define vld3_lane_s8(__a, __b, __c) __extension__ \
+  ({ \
+     union { int8x8x3_t __i; __neon_int8x8x3_t __o; } __bu = { __b }; \
+     union { int8x8x3_t __i; __neon_int8x8x3_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld3_lanev8qi (__neon_ptr_cast(const __builtin_neon_qi *, __a), __bu.__o, __c); \
+     __rv.__i; \
+   })
+
+#define vld3_lane_s16(__a, __b, __c) __extension__ \
+  ({ \
+     union { int16x4x3_t __i; __neon_int16x4x3_t __o; } __bu = { __b }; \
+     union { int16x4x3_t __i; __neon_int16x4x3_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld3_lanev4hi (__neon_ptr_cast(const __builtin_neon_hi *, __a), __bu.__o, __c); \
+     __rv.__i; \
+   })
+
+#define vld3_lane_s32(__a, __b, __c) __extension__ \
+  ({ \
+     union { int32x2x3_t __i; __neon_int32x2x3_t __o; } __bu = { __b }; \
+     union { int32x2x3_t __i; __neon_int32x2x3_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld3_lanev2si (__neon_ptr_cast(const __builtin_neon_si *, __a), __bu.__o, __c); \
+     __rv.__i; \
+   })
+
+#define vld3_lane_f32(__a, __b, __c) __extension__ \
+  ({ \
+     union { float32x2x3_t __i; __neon_float32x2x3_t __o; } __bu = { __b }; \
+     union { float32x2x3_t __i; __neon_float32x2x3_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld3_lanev2sf (__a, __bu.__o, __c); \
+     __rv.__i; \
+   })
+
+#define vld3_lane_u8(__a, __b, __c) __extension__ \
+  ({ \
+     union { uint8x8x3_t __i; __neon_int8x8x3_t __o; } __bu = { __b }; \
+     union { uint8x8x3_t __i; __neon_int8x8x3_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld3_lanev8qi (__neon_ptr_cast(const __builtin_neon_qi *, __a), __bu.__o, __c); \
+     __rv.__i; \
+   })
+
+#define vld3_lane_u16(__a, __b, __c) __extension__ \
+  ({ \
+     union { uint16x4x3_t __i; __neon_int16x4x3_t __o; } __bu = { __b }; \
+     union { uint16x4x3_t __i; __neon_int16x4x3_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld3_lanev4hi (__neon_ptr_cast(const __builtin_neon_hi *, __a), __bu.__o, __c); \
+     __rv.__i; \
+   })
+
+#define vld3_lane_u32(__a, __b, __c) __extension__ \
+  ({ \
+     union { uint32x2x3_t __i; __neon_int32x2x3_t __o; } __bu = { __b }; \
+     union { uint32x2x3_t __i; __neon_int32x2x3_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld3_lanev2si (__neon_ptr_cast(const __builtin_neon_si *, __a), __bu.__o, __c); \
+     __rv.__i; \
+   })
+
+#define vld3_lane_p8(__a, __b, __c) __extension__ \
+  ({ \
+     union { poly8x8x3_t __i; __neon_int8x8x3_t __o; } __bu = { __b }; \
+     union { poly8x8x3_t __i; __neon_int8x8x3_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld3_lanev8qi (__neon_ptr_cast(const __builtin_neon_qi *, __a), __bu.__o, __c); \
+     __rv.__i; \
+   })
+
+#define vld3_lane_p16(__a, __b, __c) __extension__ \
+  ({ \
+     union { poly16x4x3_t __i; __neon_int16x4x3_t __o; } __bu = { __b }; \
+     union { poly16x4x3_t __i; __neon_int16x4x3_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld3_lanev4hi (__neon_ptr_cast(const __builtin_neon_hi *, __a), __bu.__o, __c); \
+     __rv.__i; \
+   })
+
+#define vld3q_lane_s16(__a, __b, __c) __extension__ \
+  ({ \
+     union { int16x8x3_t __i; __neon_int16x8x3_t __o; } __bu = { __b }; \
+     union { int16x8x3_t __i; __neon_int16x8x3_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld3_lanev8hi (__neon_ptr_cast(const __builtin_neon_hi *, __a), __bu.__o, __c); \
+     __rv.__i; \
+   })
+
+#define vld3q_lane_s32(__a, __b, __c) __extension__ \
+  ({ \
+     union { int32x4x3_t __i; __neon_int32x4x3_t __o; } __bu = { __b }; \
+     union { int32x4x3_t __i; __neon_int32x4x3_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld3_lanev4si (__neon_ptr_cast(const __builtin_neon_si *, __a), __bu.__o, __c); \
+     __rv.__i; \
+   })
+
+#define vld3q_lane_f32(__a, __b, __c) __extension__ \
+  ({ \
+     union { float32x4x3_t __i; __neon_float32x4x3_t __o; } __bu = { __b }; \
+     union { float32x4x3_t __i; __neon_float32x4x3_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld3_lanev4sf (__a, __bu.__o, __c); \
+     __rv.__i; \
+   })
+
+#define vld3q_lane_u16(__a, __b, __c) __extension__ \
+  ({ \
+     union { uint16x8x3_t __i; __neon_int16x8x3_t __o; } __bu = { __b }; \
+     union { uint16x8x3_t __i; __neon_int16x8x3_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld3_lanev8hi (__neon_ptr_cast(const __builtin_neon_hi *, __a), __bu.__o, __c); \
+     __rv.__i; \
+   })
+
+#define vld3q_lane_u32(__a, __b, __c) __extension__ \
+  ({ \
+     union { uint32x4x3_t __i; __neon_int32x4x3_t __o; } __bu = { __b }; \
+     union { uint32x4x3_t __i; __neon_int32x4x3_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld3_lanev4si (__neon_ptr_cast(const __builtin_neon_si *, __a), __bu.__o, __c); \
+     __rv.__i; \
+   })
+
+#define vld3q_lane_p16(__a, __b, __c) __extension__ \
+  ({ \
+     union { poly16x8x3_t __i; __neon_int16x8x3_t __o; } __bu = { __b }; \
+     union { poly16x8x3_t __i; __neon_int16x8x3_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld3_lanev8hi (__neon_ptr_cast(const __builtin_neon_hi *, __a), __bu.__o, __c); \
+     __rv.__i; \
+   })
+
+#define vld3_dup_s8(__a) __extension__ \
+  ({ \
+     union { int8x8x3_t __i; __neon_int8x8x3_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld3_dupv8qi (__neon_ptr_cast(const __builtin_neon_qi *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld3_dup_s16(__a) __extension__ \
+  ({ \
+     union { int16x4x3_t __i; __neon_int16x4x3_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld3_dupv4hi (__neon_ptr_cast(const __builtin_neon_hi *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld3_dup_s32(__a) __extension__ \
+  ({ \
+     union { int32x2x3_t __i; __neon_int32x2x3_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld3_dupv2si (__neon_ptr_cast(const __builtin_neon_si *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld3_dup_f32(__a) __extension__ \
+  ({ \
+     union { float32x2x3_t __i; __neon_float32x2x3_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld3_dupv2sf (__a); \
+     __rv.__i; \
+   })
+
+#define vld3_dup_u8(__a) __extension__ \
+  ({ \
+     union { uint8x8x3_t __i; __neon_int8x8x3_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld3_dupv8qi (__neon_ptr_cast(const __builtin_neon_qi *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld3_dup_u16(__a) __extension__ \
+  ({ \
+     union { uint16x4x3_t __i; __neon_int16x4x3_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld3_dupv4hi (__neon_ptr_cast(const __builtin_neon_hi *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld3_dup_u32(__a) __extension__ \
+  ({ \
+     union { uint32x2x3_t __i; __neon_int32x2x3_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld3_dupv2si (__neon_ptr_cast(const __builtin_neon_si *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld3_dup_p8(__a) __extension__ \
+  ({ \
+     union { poly8x8x3_t __i; __neon_int8x8x3_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld3_dupv8qi (__neon_ptr_cast(const __builtin_neon_qi *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld3_dup_p16(__a) __extension__ \
+  ({ \
+     union { poly16x4x3_t __i; __neon_int16x4x3_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld3_dupv4hi (__neon_ptr_cast(const __builtin_neon_hi *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld3_dup_s64(__a) __extension__ \
+  ({ \
+     union { int64x1x3_t __i; __neon_int64x1x3_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld3_dupv1di (__neon_ptr_cast(const __builtin_neon_di *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld3_dup_u64(__a) __extension__ \
+  ({ \
+     union { uint64x1x3_t __i; __neon_int64x1x3_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld3_dupv1di (__neon_ptr_cast(const __builtin_neon_di *, __a)); \
+     __rv.__i; \
+   })
+
+#define vst3_s8(__a, __b) __extension__ \
+  ({ \
+     union { int8x8x3_t __i; __neon_int8x8x3_t __o; } __bu = { __b }; \
+     __builtin_neon_vst3v8qi (__neon_ptr_cast(__builtin_neon_qi *, __a), __bu.__o); \
+   })
+
+#define vst3_s16(__a, __b) __extension__ \
+  ({ \
+     union { int16x4x3_t __i; __neon_int16x4x3_t __o; } __bu = { __b }; \
+     __builtin_neon_vst3v4hi (__neon_ptr_cast(__builtin_neon_hi *, __a), __bu.__o); \
+   })
+
+#define vst3_s32(__a, __b) __extension__ \
+  ({ \
+     union { int32x2x3_t __i; __neon_int32x2x3_t __o; } __bu = { __b }; \
+     __builtin_neon_vst3v2si (__neon_ptr_cast(__builtin_neon_si *, __a), __bu.__o); \
+   })
+
+#define vst3_f32(__a, __b) __extension__ \
+  ({ \
+     union { float32x2x3_t __i; __neon_float32x2x3_t __o; } __bu = { __b }; \
+     __builtin_neon_vst3v2sf (__a, __bu.__o); \
+   })
+
+#define vst3_u8(__a, __b) __extension__ \
+  ({ \
+     union { uint8x8x3_t __i; __neon_int8x8x3_t __o; } __bu = { __b }; \
+     __builtin_neon_vst3v8qi (__neon_ptr_cast(__builtin_neon_qi *, __a), __bu.__o); \
+   })
+
+#define vst3_u16(__a, __b) __extension__ \
+  ({ \
+     union { uint16x4x3_t __i; __neon_int16x4x3_t __o; } __bu = { __b }; \
+     __builtin_neon_vst3v4hi (__neon_ptr_cast(__builtin_neon_hi *, __a), __bu.__o); \
+   })
+
+#define vst3_u32(__a, __b) __extension__ \
+  ({ \
+     union { uint32x2x3_t __i; __neon_int32x2x3_t __o; } __bu = { __b }; \
+     __builtin_neon_vst3v2si (__neon_ptr_cast(__builtin_neon_si *, __a), __bu.__o); \
+   })
+
+#define vst3_p8(__a, __b) __extension__ \
+  ({ \
+     union { poly8x8x3_t __i; __neon_int8x8x3_t __o; } __bu = { __b }; \
+     __builtin_neon_vst3v8qi (__neon_ptr_cast(__builtin_neon_qi *, __a), __bu.__o); \
+   })
+
+#define vst3_p16(__a, __b) __extension__ \
+  ({ \
+     union { poly16x4x3_t __i; __neon_int16x4x3_t __o; } __bu = { __b }; \
+     __builtin_neon_vst3v4hi (__neon_ptr_cast(__builtin_neon_hi *, __a), __bu.__o); \
+   })
+
+#define vst3_s64(__a, __b) __extension__ \
+  ({ \
+     union { int64x1x3_t __i; __neon_int64x1x3_t __o; } __bu = { __b }; \
+     __builtin_neon_vst3v1di (__neon_ptr_cast(__builtin_neon_di *, __a), __bu.__o); \
+   })
+
+#define vst3_u64(__a, __b) __extension__ \
+  ({ \
+     union { uint64x1x3_t __i; __neon_int64x1x3_t __o; } __bu = { __b }; \
+     __builtin_neon_vst3v1di (__neon_ptr_cast(__builtin_neon_di *, __a), __bu.__o); \
+   })
+
+#define vst3q_s8(__a, __b) __extension__ \
+  ({ \
+     union { int8x16x3_t __i; __neon_int8x16x3_t __o; } __bu = { __b }; \
+     __builtin_neon_vst3v16qi (__neon_ptr_cast(__builtin_neon_qi *, __a), __bu.__o); \
+   })
+
+#define vst3q_s16(__a, __b) __extension__ \
+  ({ \
+     union { int16x8x3_t __i; __neon_int16x8x3_t __o; } __bu = { __b }; \
+     __builtin_neon_vst3v8hi (__neon_ptr_cast(__builtin_neon_hi *, __a), __bu.__o); \
+   })
+
+#define vst3q_s32(__a, __b) __extension__ \
+  ({ \
+     union { int32x4x3_t __i; __neon_int32x4x3_t __o; } __bu = { __b }; \
+     __builtin_neon_vst3v4si (__neon_ptr_cast(__builtin_neon_si *, __a), __bu.__o); \
+   })
+
+#define vst3q_f32(__a, __b) __extension__ \
+  ({ \
+     union { float32x4x3_t __i; __neon_float32x4x3_t __o; } __bu = { __b }; \
+     __builtin_neon_vst3v4sf (__a, __bu.__o); \
+   })
+
+#define vst3q_u8(__a, __b) __extension__ \
+  ({ \
+     union { uint8x16x3_t __i; __neon_int8x16x3_t __o; } __bu = { __b }; \
+     __builtin_neon_vst3v16qi (__neon_ptr_cast(__builtin_neon_qi *, __a), __bu.__o); \
+   })
+
+#define vst3q_u16(__a, __b) __extension__ \
+  ({ \
+     union { uint16x8x3_t __i; __neon_int16x8x3_t __o; } __bu = { __b }; \
+     __builtin_neon_vst3v8hi (__neon_ptr_cast(__builtin_neon_hi *, __a), __bu.__o); \
+   })
+
+#define vst3q_u32(__a, __b) __extension__ \
+  ({ \
+     union { uint32x4x3_t __i; __neon_int32x4x3_t __o; } __bu = { __b }; \
+     __builtin_neon_vst3v4si (__neon_ptr_cast(__builtin_neon_si *, __a), __bu.__o); \
+   })
+
+#define vst3q_p8(__a, __b) __extension__ \
+  ({ \
+     union { poly8x16x3_t __i; __neon_int8x16x3_t __o; } __bu = { __b }; \
+     __builtin_neon_vst3v16qi (__neon_ptr_cast(__builtin_neon_qi *, __a), __bu.__o); \
+   })
+
+#define vst3q_p16(__a, __b) __extension__ \
+  ({ \
+     union { poly16x8x3_t __i; __neon_int16x8x3_t __o; } __bu = { __b }; \
+     __builtin_neon_vst3v8hi (__neon_ptr_cast(__builtin_neon_hi *, __a), __bu.__o); \
+   })
+
+#define vst3_lane_s8(__a, __b, __c) __extension__ \
+  ({ \
+     union { int8x8x3_t __i; __neon_int8x8x3_t __o; } __bu = { __b }; \
+     __builtin_neon_vst3_lanev8qi (__neon_ptr_cast(__builtin_neon_qi *, __a), __bu.__o, __c); \
+   })
+
+#define vst3_lane_s16(__a, __b, __c) __extension__ \
+  ({ \
+     union { int16x4x3_t __i; __neon_int16x4x3_t __o; } __bu = { __b }; \
+     __builtin_neon_vst3_lanev4hi (__neon_ptr_cast(__builtin_neon_hi *, __a), __bu.__o, __c); \
+   })
+
+#define vst3_lane_s32(__a, __b, __c) __extension__ \
+  ({ \
+     union { int32x2x3_t __i; __neon_int32x2x3_t __o; } __bu = { __b }; \
+     __builtin_neon_vst3_lanev2si (__neon_ptr_cast(__builtin_neon_si *, __a), __bu.__o, __c); \
+   })
+
+#define vst3_lane_f32(__a, __b, __c) __extension__ \
+  ({ \
+     union { float32x2x3_t __i; __neon_float32x2x3_t __o; } __bu = { __b }; \
+     __builtin_neon_vst3_lanev2sf (__a, __bu.__o, __c); \
+   })
+
+#define vst3_lane_u8(__a, __b, __c) __extension__ \
+  ({ \
+     union { uint8x8x3_t __i; __neon_int8x8x3_t __o; } __bu = { __b }; \
+     __builtin_neon_vst3_lanev8qi (__neon_ptr_cast(__builtin_neon_qi *, __a), __bu.__o, __c); \
+   })
+
+#define vst3_lane_u16(__a, __b, __c) __extension__ \
+  ({ \
+     union { uint16x4x3_t __i; __neon_int16x4x3_t __o; } __bu = { __b }; \
+     __builtin_neon_vst3_lanev4hi (__neon_ptr_cast(__builtin_neon_hi *, __a), __bu.__o, __c); \
+   })
+
+#define vst3_lane_u32(__a, __b, __c) __extension__ \
+  ({ \
+     union { uint32x2x3_t __i; __neon_int32x2x3_t __o; } __bu = { __b }; \
+     __builtin_neon_vst3_lanev2si (__neon_ptr_cast(__builtin_neon_si *, __a), __bu.__o, __c); \
+   })
+
+#define vst3_lane_p8(__a, __b, __c) __extension__ \
+  ({ \
+     union { poly8x8x3_t __i; __neon_int8x8x3_t __o; } __bu = { __b }; \
+     __builtin_neon_vst3_lanev8qi (__neon_ptr_cast(__builtin_neon_qi *, __a), __bu.__o, __c); \
+   })
+
+#define vst3_lane_p16(__a, __b, __c) __extension__ \
+  ({ \
+     union { poly16x4x3_t __i; __neon_int16x4x3_t __o; } __bu = { __b }; \
+     __builtin_neon_vst3_lanev4hi (__neon_ptr_cast(__builtin_neon_hi *, __a), __bu.__o, __c); \
+   })
+
+#define vst3q_lane_s16(__a, __b, __c) __extension__ \
+  ({ \
+     union { int16x8x3_t __i; __neon_int16x8x3_t __o; } __bu = { __b }; \
+     __builtin_neon_vst3_lanev8hi (__neon_ptr_cast(__builtin_neon_hi *, __a), __bu.__o, __c); \
+   })
+
+#define vst3q_lane_s32(__a, __b, __c) __extension__ \
+  ({ \
+     union { int32x4x3_t __i; __neon_int32x4x3_t __o; } __bu = { __b }; \
+     __builtin_neon_vst3_lanev4si (__neon_ptr_cast(__builtin_neon_si *, __a), __bu.__o, __c); \
+   })
+
+#define vst3q_lane_f32(__a, __b, __c) __extension__ \
+  ({ \
+     union { float32x4x3_t __i; __neon_float32x4x3_t __o; } __bu = { __b }; \
+     __builtin_neon_vst3_lanev4sf (__a, __bu.__o, __c); \
+   })
+
+#define vst3q_lane_u16(__a, __b, __c) __extension__ \
+  ({ \
+     union { uint16x8x3_t __i; __neon_int16x8x3_t __o; } __bu = { __b }; \
+     __builtin_neon_vst3_lanev8hi (__neon_ptr_cast(__builtin_neon_hi *, __a), __bu.__o, __c); \
+   })
+
+#define vst3q_lane_u32(__a, __b, __c) __extension__ \
+  ({ \
+     union { uint32x4x3_t __i; __neon_int32x4x3_t __o; } __bu = { __b }; \
+     __builtin_neon_vst3_lanev4si (__neon_ptr_cast(__builtin_neon_si *, __a), __bu.__o, __c); \
+   })
+
+#define vst3q_lane_p16(__a, __b, __c) __extension__ \
+  ({ \
+     union { poly16x8x3_t __i; __neon_int16x8x3_t __o; } __bu = { __b }; \
+     __builtin_neon_vst3_lanev8hi (__neon_ptr_cast(__builtin_neon_hi *, __a), __bu.__o, __c); \
+   })
+
+#define vld4_s8(__a) __extension__ \
+  ({ \
+     union { int8x8x4_t __i; __neon_int8x8x4_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld4v8qi (__neon_ptr_cast(const __builtin_neon_qi *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld4_s16(__a) __extension__ \
+  ({ \
+     union { int16x4x4_t __i; __neon_int16x4x4_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld4v4hi (__neon_ptr_cast(const __builtin_neon_hi *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld4_s32(__a) __extension__ \
+  ({ \
+     union { int32x2x4_t __i; __neon_int32x2x4_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld4v2si (__neon_ptr_cast(const __builtin_neon_si *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld4_f32(__a) __extension__ \
+  ({ \
+     union { float32x2x4_t __i; __neon_float32x2x4_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld4v2sf (__a); \
+     __rv.__i; \
+   })
+
+#define vld4_u8(__a) __extension__ \
+  ({ \
+     union { uint8x8x4_t __i; __neon_int8x8x4_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld4v8qi (__neon_ptr_cast(const __builtin_neon_qi *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld4_u16(__a) __extension__ \
+  ({ \
+     union { uint16x4x4_t __i; __neon_int16x4x4_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld4v4hi (__neon_ptr_cast(const __builtin_neon_hi *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld4_u32(__a) __extension__ \
+  ({ \
+     union { uint32x2x4_t __i; __neon_int32x2x4_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld4v2si (__neon_ptr_cast(const __builtin_neon_si *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld4_p8(__a) __extension__ \
+  ({ \
+     union { poly8x8x4_t __i; __neon_int8x8x4_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld4v8qi (__neon_ptr_cast(const __builtin_neon_qi *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld4_p16(__a) __extension__ \
+  ({ \
+     union { poly16x4x4_t __i; __neon_int16x4x4_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld4v4hi (__neon_ptr_cast(const __builtin_neon_hi *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld4_s64(__a) __extension__ \
+  ({ \
+     union { int64x1x4_t __i; __neon_int64x1x4_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld4v1di (__neon_ptr_cast(const __builtin_neon_di *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld4_u64(__a) __extension__ \
+  ({ \
+     union { uint64x1x4_t __i; __neon_int64x1x4_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld4v1di (__neon_ptr_cast(const __builtin_neon_di *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld4q_s8(__a) __extension__ \
+  ({ \
+     union { int8x16x4_t __i; __neon_int8x16x4_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld4v16qi (__neon_ptr_cast(const __builtin_neon_qi *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld4q_s16(__a) __extension__ \
+  ({ \
+     union { int16x8x4_t __i; __neon_int16x8x4_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld4v8hi (__neon_ptr_cast(const __builtin_neon_hi *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld4q_s32(__a) __extension__ \
+  ({ \
+     union { int32x4x4_t __i; __neon_int32x4x4_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld4v4si (__neon_ptr_cast(const __builtin_neon_si *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld4q_f32(__a) __extension__ \
+  ({ \
+     union { float32x4x4_t __i; __neon_float32x4x4_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld4v4sf (__a); \
+     __rv.__i; \
+   })
+
+#define vld4q_u8(__a) __extension__ \
+  ({ \
+     union { uint8x16x4_t __i; __neon_int8x16x4_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld4v16qi (__neon_ptr_cast(const __builtin_neon_qi *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld4q_u16(__a) __extension__ \
+  ({ \
+     union { uint16x8x4_t __i; __neon_int16x8x4_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld4v8hi (__neon_ptr_cast(const __builtin_neon_hi *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld4q_u32(__a) __extension__ \
+  ({ \
+     union { uint32x4x4_t __i; __neon_int32x4x4_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld4v4si (__neon_ptr_cast(const __builtin_neon_si *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld4q_p8(__a) __extension__ \
+  ({ \
+     union { poly8x16x4_t __i; __neon_int8x16x4_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld4v16qi (__neon_ptr_cast(const __builtin_neon_qi *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld4q_p16(__a) __extension__ \
+  ({ \
+     union { poly16x8x4_t __i; __neon_int16x8x4_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld4v8hi (__neon_ptr_cast(const __builtin_neon_hi *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld4_lane_s8(__a, __b, __c) __extension__ \
+  ({ \
+     union { int8x8x4_t __i; __neon_int8x8x4_t __o; } __bu = { __b }; \
+     union { int8x8x4_t __i; __neon_int8x8x4_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld4_lanev8qi (__neon_ptr_cast(const __builtin_neon_qi *, __a), __bu.__o, __c); \
+     __rv.__i; \
+   })
+
+#define vld4_lane_s16(__a, __b, __c) __extension__ \
+  ({ \
+     union { int16x4x4_t __i; __neon_int16x4x4_t __o; } __bu = { __b }; \
+     union { int16x4x4_t __i; __neon_int16x4x4_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld4_lanev4hi (__neon_ptr_cast(const __builtin_neon_hi *, __a), __bu.__o, __c); \
+     __rv.__i; \
+   })
+
+#define vld4_lane_s32(__a, __b, __c) __extension__ \
+  ({ \
+     union { int32x2x4_t __i; __neon_int32x2x4_t __o; } __bu = { __b }; \
+     union { int32x2x4_t __i; __neon_int32x2x4_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld4_lanev2si (__neon_ptr_cast(const __builtin_neon_si *, __a), __bu.__o, __c); \
+     __rv.__i; \
+   })
+
+#define vld4_lane_f32(__a, __b, __c) __extension__ \
+  ({ \
+     union { float32x2x4_t __i; __neon_float32x2x4_t __o; } __bu = { __b }; \
+     union { float32x2x4_t __i; __neon_float32x2x4_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld4_lanev2sf (__a, __bu.__o, __c); \
+     __rv.__i; \
+   })
+
+#define vld4_lane_u8(__a, __b, __c) __extension__ \
+  ({ \
+     union { uint8x8x4_t __i; __neon_int8x8x4_t __o; } __bu = { __b }; \
+     union { uint8x8x4_t __i; __neon_int8x8x4_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld4_lanev8qi (__neon_ptr_cast(const __builtin_neon_qi *, __a), __bu.__o, __c); \
+     __rv.__i; \
+   })
+
+#define vld4_lane_u16(__a, __b, __c) __extension__ \
+  ({ \
+     union { uint16x4x4_t __i; __neon_int16x4x4_t __o; } __bu = { __b }; \
+     union { uint16x4x4_t __i; __neon_int16x4x4_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld4_lanev4hi (__neon_ptr_cast(const __builtin_neon_hi *, __a), __bu.__o, __c); \
+     __rv.__i; \
+   })
+
+#define vld4_lane_u32(__a, __b, __c) __extension__ \
+  ({ \
+     union { uint32x2x4_t __i; __neon_int32x2x4_t __o; } __bu = { __b }; \
+     union { uint32x2x4_t __i; __neon_int32x2x4_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld4_lanev2si (__neon_ptr_cast(const __builtin_neon_si *, __a), __bu.__o, __c); \
+     __rv.__i; \
+   })
+
+#define vld4_lane_p8(__a, __b, __c) __extension__ \
+  ({ \
+     union { poly8x8x4_t __i; __neon_int8x8x4_t __o; } __bu = { __b }; \
+     union { poly8x8x4_t __i; __neon_int8x8x4_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld4_lanev8qi (__neon_ptr_cast(const __builtin_neon_qi *, __a), __bu.__o, __c); \
+     __rv.__i; \
+   })
+
+#define vld4_lane_p16(__a, __b, __c) __extension__ \
+  ({ \
+     union { poly16x4x4_t __i; __neon_int16x4x4_t __o; } __bu = { __b }; \
+     union { poly16x4x4_t __i; __neon_int16x4x4_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld4_lanev4hi (__neon_ptr_cast(const __builtin_neon_hi *, __a), __bu.__o, __c); \
+     __rv.__i; \
+   })
+
+#define vld4q_lane_s16(__a, __b, __c) __extension__ \
+  ({ \
+     union { int16x8x4_t __i; __neon_int16x8x4_t __o; } __bu = { __b }; \
+     union { int16x8x4_t __i; __neon_int16x8x4_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld4_lanev8hi (__neon_ptr_cast(const __builtin_neon_hi *, __a), __bu.__o, __c); \
+     __rv.__i; \
+   })
+
+#define vld4q_lane_s32(__a, __b, __c) __extension__ \
+  ({ \
+     union { int32x4x4_t __i; __neon_int32x4x4_t __o; } __bu = { __b }; \
+     union { int32x4x4_t __i; __neon_int32x4x4_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld4_lanev4si (__neon_ptr_cast(const __builtin_neon_si *, __a), __bu.__o, __c); \
+     __rv.__i; \
+   })
+
+#define vld4q_lane_f32(__a, __b, __c) __extension__ \
+  ({ \
+     union { float32x4x4_t __i; __neon_float32x4x4_t __o; } __bu = { __b }; \
+     union { float32x4x4_t __i; __neon_float32x4x4_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld4_lanev4sf (__a, __bu.__o, __c); \
+     __rv.__i; \
+   })
+
+#define vld4q_lane_u16(__a, __b, __c) __extension__ \
+  ({ \
+     union { uint16x8x4_t __i; __neon_int16x8x4_t __o; } __bu = { __b }; \
+     union { uint16x8x4_t __i; __neon_int16x8x4_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld4_lanev8hi (__neon_ptr_cast(const __builtin_neon_hi *, __a), __bu.__o, __c); \
+     __rv.__i; \
+   })
+
+#define vld4q_lane_u32(__a, __b, __c) __extension__ \
+  ({ \
+     union { uint32x4x4_t __i; __neon_int32x4x4_t __o; } __bu = { __b }; \
+     union { uint32x4x4_t __i; __neon_int32x4x4_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld4_lanev4si (__neon_ptr_cast(const __builtin_neon_si *, __a), __bu.__o, __c); \
+     __rv.__i; \
+   })
+
+#define vld4q_lane_p16(__a, __b, __c) __extension__ \
+  ({ \
+     union { poly16x8x4_t __i; __neon_int16x8x4_t __o; } __bu = { __b }; \
+     union { poly16x8x4_t __i; __neon_int16x8x4_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld4_lanev8hi (__neon_ptr_cast(const __builtin_neon_hi *, __a), __bu.__o, __c); \
+     __rv.__i; \
+   })
+
+#define vld4_dup_s8(__a) __extension__ \
+  ({ \
+     union { int8x8x4_t __i; __neon_int8x8x4_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld4_dupv8qi (__neon_ptr_cast(const __builtin_neon_qi *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld4_dup_s16(__a) __extension__ \
+  ({ \
+     union { int16x4x4_t __i; __neon_int16x4x4_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld4_dupv4hi (__neon_ptr_cast(const __builtin_neon_hi *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld4_dup_s32(__a) __extension__ \
+  ({ \
+     union { int32x2x4_t __i; __neon_int32x2x4_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld4_dupv2si (__neon_ptr_cast(const __builtin_neon_si *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld4_dup_f32(__a) __extension__ \
+  ({ \
+     union { float32x2x4_t __i; __neon_float32x2x4_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld4_dupv2sf (__a); \
+     __rv.__i; \
+   })
+
+#define vld4_dup_u8(__a) __extension__ \
+  ({ \
+     union { uint8x8x4_t __i; __neon_int8x8x4_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld4_dupv8qi (__neon_ptr_cast(const __builtin_neon_qi *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld4_dup_u16(__a) __extension__ \
+  ({ \
+     union { uint16x4x4_t __i; __neon_int16x4x4_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld4_dupv4hi (__neon_ptr_cast(const __builtin_neon_hi *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld4_dup_u32(__a) __extension__ \
+  ({ \
+     union { uint32x2x4_t __i; __neon_int32x2x4_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld4_dupv2si (__neon_ptr_cast(const __builtin_neon_si *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld4_dup_p8(__a) __extension__ \
+  ({ \
+     union { poly8x8x4_t __i; __neon_int8x8x4_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld4_dupv8qi (__neon_ptr_cast(const __builtin_neon_qi *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld4_dup_p16(__a) __extension__ \
+  ({ \
+     union { poly16x4x4_t __i; __neon_int16x4x4_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld4_dupv4hi (__neon_ptr_cast(const __builtin_neon_hi *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld4_dup_s64(__a) __extension__ \
+  ({ \
+     union { int64x1x4_t __i; __neon_int64x1x4_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld4_dupv1di (__neon_ptr_cast(const __builtin_neon_di *, __a)); \
+     __rv.__i; \
+   })
+
+#define vld4_dup_u64(__a) __extension__ \
+  ({ \
+     union { uint64x1x4_t __i; __neon_int64x1x4_t __o; } __rv; \
+     __rv.__o = __builtin_neon_vld4_dupv1di (__neon_ptr_cast(const __builtin_neon_di *, __a)); \
+     __rv.__i; \
+   })
+
+#define vst4_s8(__a, __b) __extension__ \
+  ({ \
+     union { int8x8x4_t __i; __neon_int8x8x4_t __o; } __bu = { __b }; \
+     __builtin_neon_vst4v8qi (__neon_ptr_cast(__builtin_neon_qi *, __a), __bu.__o); \
+   })
+
+#define vst4_s16(__a, __b) __extension__ \
+  ({ \
+     union { int16x4x4_t __i; __neon_int16x4x4_t __o; } __bu = { __b }; \
+     __builtin_neon_vst4v4hi (__neon_ptr_cast(__builtin_neon_hi *, __a), __bu.__o); \
+   })
+
+#define vst4_s32(__a, __b) __extension__ \
+  ({ \
+     union { int32x2x4_t __i; __neon_int32x2x4_t __o; } __bu = { __b }; \
+     __builtin_neon_vst4v2si (__neon_ptr_cast(__builtin_neon_si *, __a), __bu.__o); \
+   })
+
+#define vst4_f32(__a, __b) __extension__ \
+  ({ \
+     union { float32x2x4_t __i; __neon_float32x2x4_t __o; } __bu = { __b }; \
+     __builtin_neon_vst4v2sf (__a, __bu.__o); \
+   })
+
+#define vst4_u8(__a, __b) __extension__ \
+  ({ \
+     union { uint8x8x4_t __i; __neon_int8x8x4_t __o; } __bu = { __b }; \
+     __builtin_neon_vst4v8qi (__neon_ptr_cast(__builtin_neon_qi *, __a), __bu.__o); \
+   })
+
+#define vst4_u16(__a, __b) __extension__ \
+  ({ \
+     union { uint16x4x4_t __i; __neon_int16x4x4_t __o; } __bu = { __b }; \
+     __builtin_neon_vst4v4hi (__neon_ptr_cast(__builtin_neon_hi *, __a), __bu.__o); \
+   })
+
+#define vst4_u32(__a, __b) __extension__ \
+  ({ \
+     union { uint32x2x4_t __i; __neon_int32x2x4_t __o; } __bu = { __b }; \
+     __builtin_neon_vst4v2si (__neon_ptr_cast(__builtin_neon_si *, __a), __bu.__o); \
+   })
+
+#define vst4_p8(__a, __b) __extension__ \
+  ({ \
+     union { poly8x8x4_t __i; __neon_int8x8x4_t __o; } __bu = { __b }; \
+     __builtin_neon_vst4v8qi (__neon_ptr_cast(__builtin_neon_qi *, __a), __bu.__o); \
+   })
+
+#define vst4_p16(__a, __b) __extension__ \
+  ({ \
+     union { poly16x4x4_t __i; __neon_int16x4x4_t __o; } __bu = { __b }; \
+     __builtin_neon_vst4v4hi (__neon_ptr_cast(__builtin_neon_hi *, __a), __bu.__o); \
+   })
+
+#define vst4_s64(__a, __b) __extension__ \
+  ({ \
+     union { int64x1x4_t __i; __neon_int64x1x4_t __o; } __bu = { __b }; \
+     __builtin_neon_vst4v1di (__neon_ptr_cast(__builtin_neon_di *, __a), __bu.__o); \
+   })
+
+#define vst4_u64(__a, __b) __extension__ \
+  ({ \
+     union { uint64x1x4_t __i; __neon_int64x1x4_t __o; } __bu = { __b }; \
+     __builtin_neon_vst4v1di (__neon_ptr_cast(__builtin_neon_di *, __a), __bu.__o); \
+   })
+
+#define vst4q_s8(__a, __b) __extension__ \
+  ({ \
+     union { int8x16x4_t __i; __neon_int8x16x4_t __o; } __bu = { __b }; \
+     __builtin_neon_vst4v16qi (__neon_ptr_cast(__builtin_neon_qi *, __a), __bu.__o); \
+   })
+
+#define vst4q_s16(__a, __b) __extension__ \
+  ({ \
+     union { int16x8x4_t __i; __neon_int16x8x4_t __o; } __bu = { __b }; \
+     __builtin_neon_vst4v8hi (__neon_ptr_cast(__builtin_neon_hi *, __a), __bu.__o); \
+   })
+
+#define vst4q_s32(__a, __b) __extension__ \
+  ({ \
+     union { int32x4x4_t __i; __neon_int32x4x4_t __o; } __bu = { __b }; \
+     __builtin_neon_vst4v4si (__neon_ptr_cast(__builtin_neon_si *, __a), __bu.__o); \
+   })
+
+#define vst4q_f32(__a, __b) __extension__ \
+  ({ \
+     union { float32x4x4_t __i; __neon_float32x4x4_t __o; } __bu = { __b }; \
+     __builtin_neon_vst4v4sf (__a, __bu.__o); \
+   })
+
+#define vst4q_u8(__a, __b) __extension__ \
+  ({ \
+     union { uint8x16x4_t __i; __neon_int8x16x4_t __o; } __bu = { __b }; \
+     __builtin_neon_vst4v16qi (__neon_ptr_cast(__builtin_neon_qi *, __a), __bu.__o); \
+   })
+
+#define vst4q_u16(__a, __b) __extension__ \
+  ({ \
+     union { uint16x8x4_t __i; __neon_int16x8x4_t __o; } __bu = { __b }; \
+     __builtin_neon_vst4v8hi (__neon_ptr_cast(__builtin_neon_hi *, __a), __bu.__o); \
+   })
+
+#define vst4q_u32(__a, __b) __extension__ \
+  ({ \
+     union { uint32x4x4_t __i; __neon_int32x4x4_t __o; } __bu = { __b }; \
+     __builtin_neon_vst4v4si (__neon_ptr_cast(__builtin_neon_si *, __a), __bu.__o); \
+   })
+
+#define vst4q_p8(__a, __b) __extension__ \
+  ({ \
+     union { poly8x16x4_t __i; __neon_int8x16x4_t __o; } __bu = { __b }; \
+     __builtin_neon_vst4v16qi (__neon_ptr_cast(__builtin_neon_qi *, __a), __bu.__o); \
+   })
+
+#define vst4q_p16(__a, __b) __extension__ \
+  ({ \
+     union { poly16x8x4_t __i; __neon_int16x8x4_t __o; } __bu = { __b }; \
+     __builtin_neon_vst4v8hi (__neon_ptr_cast(__builtin_neon_hi *, __a), __bu.__o); \
+   })
+
+#define vst4_lane_s8(__a, __b, __c) __extension__ \
+  ({ \
+     union { int8x8x4_t __i; __neon_int8x8x4_t __o; } __bu = { __b }; \
+     __builtin_neon_vst4_lanev8qi (__neon_ptr_cast(__builtin_neon_qi *, __a), __bu.__o, __c); \
+   })
+
+#define vst4_lane_s16(__a, __b, __c) __extension__ \
+  ({ \
+     union { int16x4x4_t __i; __neon_int16x4x4_t __o; } __bu = { __b }; \
+     __builtin_neon_vst4_lanev4hi (__neon_ptr_cast(__builtin_neon_hi *, __a), __bu.__o, __c); \
+   })
+
+#define vst4_lane_s32(__a, __b, __c) __extension__ \
+  ({ \
+     union { int32x2x4_t __i; __neon_int32x2x4_t __o; } __bu = { __b }; \
+     __builtin_neon_vst4_lanev2si (__neon_ptr_cast(__builtin_neon_si *, __a), __bu.__o, __c); \
+   })
+
+#define vst4_lane_f32(__a, __b, __c) __extension__ \
+  ({ \
+     union { float32x2x4_t __i; __neon_float32x2x4_t __o; } __bu = { __b }; \
+     __builtin_neon_vst4_lanev2sf (__a, __bu.__o, __c); \
+   })
+
+#define vst4_lane_u8(__a, __b, __c) __extension__ \
+  ({ \
+     union { uint8x8x4_t __i; __neon_int8x8x4_t __o; } __bu = { __b }; \
+     __builtin_neon_vst4_lanev8qi (__neon_ptr_cast(__builtin_neon_qi *, __a), __bu.__o, __c); \
+   })
+
+#define vst4_lane_u16(__a, __b, __c) __extension__ \
+  ({ \
+     union { uint16x4x4_t __i; __neon_int16x4x4_t __o; } __bu = { __b }; \
+     __builtin_neon_vst4_lanev4hi (__neon_ptr_cast(__builtin_neon_hi *, __a), __bu.__o, __c); \
+   })
+
+#define vst4_lane_u32(__a, __b, __c) __extension__ \
+  ({ \
+     union { uint32x2x4_t __i; __neon_int32x2x4_t __o; } __bu = { __b }; \
+     __builtin_neon_vst4_lanev2si (__neon_ptr_cast(__builtin_neon_si *, __a), __bu.__o, __c); \
+   })
+
+#define vst4_lane_p8(__a, __b, __c) __extension__ \
+  ({ \
+     union { poly8x8x4_t __i; __neon_int8x8x4_t __o; } __bu = { __b }; \
+     __builtin_neon_vst4_lanev8qi (__neon_ptr_cast(__builtin_neon_qi *, __a), __bu.__o, __c); \
+   })
+
+#define vst4_lane_p16(__a, __b, __c) __extension__ \
+  ({ \
+     union { poly16x4x4_t __i; __neon_int16x4x4_t __o; } __bu = { __b }; \
+     __builtin_neon_vst4_lanev4hi (__neon_ptr_cast(__builtin_neon_hi *, __a), __bu.__o, __c); \
+   })
+
+#define vst4q_lane_s16(__a, __b, __c) __extension__ \
+  ({ \
+     union { int16x8x4_t __i; __neon_int16x8x4_t __o; } __bu = { __b }; \
+     __builtin_neon_vst4_lanev8hi (__neon_ptr_cast(__builtin_neon_hi *, __a), __bu.__o, __c); \
+   })
+
+#define vst4q_lane_s32(__a, __b, __c) __extension__ \
+  ({ \
+     union { int32x4x4_t __i; __neon_int32x4x4_t __o; } __bu = { __b }; \
+     __builtin_neon_vst4_lanev4si (__neon_ptr_cast(__builtin_neon_si *, __a), __bu.__o, __c); \
+   })
+
+#define vst4q_lane_f32(__a, __b, __c) __extension__ \
+  ({ \
+     union { float32x4x4_t __i; __neon_float32x4x4_t __o; } __bu = { __b }; \
+     __builtin_neon_vst4_lanev4sf (__a, __bu.__o, __c); \
+   })
+
+#define vst4q_lane_u16(__a, __b, __c) __extension__ \
+  ({ \
+     union { uint16x8x4_t __i; __neon_int16x8x4_t __o; } __bu = { __b }; \
+     __builtin_neon_vst4_lanev8hi (__neon_ptr_cast(__builtin_neon_hi *, __a), __bu.__o, __c); \
+   })
+
+#define vst4q_lane_u32(__a, __b, __c) __extension__ \
+  ({ \
+     union { uint32x4x4_t __i; __neon_int32x4x4_t __o; } __bu = { __b }; \
+     __builtin_neon_vst4_lanev4si (__neon_ptr_cast(__builtin_neon_si *, __a), __bu.__o, __c); \
+   })
+
+#define vst4q_lane_p16(__a, __b, __c) __extension__ \
+  ({ \
+     union { poly16x8x4_t __i; __neon_int16x8x4_t __o; } __bu = { __b }; \
+     __builtin_neon_vst4_lanev8hi (__neon_ptr_cast(__builtin_neon_hi *, __a), __bu.__o, __c); \
+   })
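+
+/* A minimal usage sketch for the vst4 lane stores above (assumes llvm-gcc
+   with NEON enabled, e.g. -mfpu=neon; "buf" and "quad" are illustrative
+   names).  The union initializes __i from the ARM-style aggregate and
+   hands __o, the builtin's internal representation, to the builtin:
+
+     void store_lane (int16_t *buf, int16x4x4_t quad)
+     {
+       vst4_lane_s16 (buf, quad, 1);   // lane index must be a constant
+     }
+*/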
+
+#define vand_s8(__a, __b) \
+  (int8x8_t)__builtin_neon_vandv8qi (__a, __b, 1)
+
+#define vand_s16(__a, __b) \
+  (int16x4_t)__builtin_neon_vandv4hi (__a, __b, 1)
+
+#define vand_s32(__a, __b) \
+  (int32x2_t)__builtin_neon_vandv2si (__a, __b, 1)
+
+#define vand_s64(__a, __b) \
+  (int64x1_t)__builtin_neon_vandv1di (__a, __b, 1)
+
+#define vand_u8(__a, __b) \
+  (uint8x8_t)__builtin_neon_vandv8qi ((int8x8_t) __a, (int8x8_t) __b, 0)
+
+#define vand_u16(__a, __b) \
+  (uint16x4_t)__builtin_neon_vandv4hi ((int16x4_t) __a, (int16x4_t) __b, 0)
+
+#define vand_u32(__a, __b) \
+  (uint32x2_t)__builtin_neon_vandv2si ((int32x2_t) __a, (int32x2_t) __b, 0)
+
+#define vand_u64(__a, __b) \
+  (uint64x1_t)__builtin_neon_vandv1di ((int64x1_t) __a, (int64x1_t) __b, 0)
+
+#define vandq_s8(__a, __b) \
+  (int8x16_t)__builtin_neon_vandv16qi (__a, __b, 1)
+
+#define vandq_s16(__a, __b) \
+  (int16x8_t)__builtin_neon_vandv8hi (__a, __b, 1)
+
+#define vandq_s32(__a, __b) \
+  (int32x4_t)__builtin_neon_vandv4si (__a, __b, 1)
+
+#define vandq_s64(__a, __b) \
+  (int64x2_t)__builtin_neon_vandv2di (__a, __b, 1)
+
+#define vandq_u8(__a, __b) \
+  (uint8x16_t)__builtin_neon_vandv16qi ((int8x16_t) __a, (int8x16_t) __b, 0)
+
+#define vandq_u16(__a, __b) \
+  (uint16x8_t)__builtin_neon_vandv8hi ((int16x8_t) __a, (int16x8_t) __b, 0)
+
+#define vandq_u32(__a, __b) \
+  (uint32x4_t)__builtin_neon_vandv4si ((int32x4_t) __a, (int32x4_t) __b, 0)
+
+#define vandq_u64(__a, __b) \
+  (uint64x2_t)__builtin_neon_vandv2di ((int64x2_t) __a, (int64x2_t) __b, 0)
+
+#define vorr_s8(__a, __b) \
+  (int8x8_t)__builtin_neon_vorrv8qi (__a, __b, 1)
+
+#define vorr_s16(__a, __b) \
+  (int16x4_t)__builtin_neon_vorrv4hi (__a, __b, 1)
+
+#define vorr_s32(__a, __b) \
+  (int32x2_t)__builtin_neon_vorrv2si (__a, __b, 1)
+
+#define vorr_s64(__a, __b) \
+  (int64x1_t)__builtin_neon_vorrv1di (__a, __b, 1)
+
+#define vorr_u8(__a, __b) \
+  (uint8x8_t)__builtin_neon_vorrv8qi ((int8x8_t) __a, (int8x8_t) __b, 0)
+
+#define vorr_u16(__a, __b) \
+  (uint16x4_t)__builtin_neon_vorrv4hi ((int16x4_t) __a, (int16x4_t) __b, 0)
+
+#define vorr_u32(__a, __b) \
+  (uint32x2_t)__builtin_neon_vorrv2si ((int32x2_t) __a, (int32x2_t) __b, 0)
+
+#define vorr_u64(__a, __b) \
+  (uint64x1_t)__builtin_neon_vorrv1di ((int64x1_t) __a, (int64x1_t) __b, 0)
+
+#define vorrq_s8(__a, __b) \
+  (int8x16_t)__builtin_neon_vorrv16qi (__a, __b, 1)
+
+#define vorrq_s16(__a, __b) \
+  (int16x8_t)__builtin_neon_vorrv8hi (__a, __b, 1)
+
+#define vorrq_s32(__a, __b) \
+  (int32x4_t)__builtin_neon_vorrv4si (__a, __b, 1)
+
+#define vorrq_s64(__a, __b) \
+  (int64x2_t)__builtin_neon_vorrv2di (__a, __b, 1)
+
+#define vorrq_u8(__a, __b) \
+  (uint8x16_t)__builtin_neon_vorrv16qi ((int8x16_t) __a, (int8x16_t) __b, 0)
+
+#define vorrq_u16(__a, __b) \
+  (uint16x8_t)__builtin_neon_vorrv8hi ((int16x8_t) __a, (int16x8_t) __b, 0)
+
+#define vorrq_u32(__a, __b) \
+  (uint32x4_t)__builtin_neon_vorrv4si ((int32x4_t) __a, (int32x4_t) __b, 0)
+
+#define vorrq_u64(__a, __b) \
+  (uint64x2_t)__builtin_neon_vorrv2di ((int64x2_t) __a, (int64x2_t) __b, 0)
+
+#define veor_s8(__a, __b) \
+  (int8x8_t)__builtin_neon_veorv8qi (__a, __b, 1)
+
+#define veor_s16(__a, __b) \
+  (int16x4_t)__builtin_neon_veorv4hi (__a, __b, 1)
+
+#define veor_s32(__a, __b) \
+  (int32x2_t)__builtin_neon_veorv2si (__a, __b, 1)
+
+#define veor_s64(__a, __b) \
+  (int64x1_t)__builtin_neon_veorv1di (__a, __b, 1)
+
+#define veor_u8(__a, __b) \
+  (uint8x8_t)__builtin_neon_veorv8qi ((int8x8_t) __a, (int8x8_t) __b, 0)
+
+#define veor_u16(__a, __b) \
+  (uint16x4_t)__builtin_neon_veorv4hi ((int16x4_t) __a, (int16x4_t) __b, 0)
+
+#define veor_u32(__a, __b) \
+  (uint32x2_t)__builtin_neon_veorv2si ((int32x2_t) __a, (int32x2_t) __b, 0)
+
+#define veor_u64(__a, __b) \
+  (uint64x1_t)__builtin_neon_veorv1di ((int64x1_t) __a, (int64x1_t) __b, 0)
+
+#define veorq_s8(__a, __b) \
+  (int8x16_t)__builtin_neon_veorv16qi (__a, __b, 1)
+
+#define veorq_s16(__a, __b) \
+  (int16x8_t)__builtin_neon_veorv8hi (__a, __b, 1)
+
+#define veorq_s32(__a, __b) \
+  (int32x4_t)__builtin_neon_veorv4si (__a, __b, 1)
+
+#define veorq_s64(__a, __b) \
+  (int64x2_t)__builtin_neon_veorv2di (__a, __b, 1)
+
+#define veorq_u8(__a, __b) \
+  (uint8x16_t)__builtin_neon_veorv16qi ((int8x16_t) __a, (int8x16_t) __b, 0)
+
+#define veorq_u16(__a, __b) \
+  (uint16x8_t)__builtin_neon_veorv8hi ((int16x8_t) __a, (int16x8_t) __b, 0)
+
+#define veorq_u32(__a, __b) \
+  (uint32x4_t)__builtin_neon_veorv4si ((int32x4_t) __a, (int32x4_t) __b, 0)
+
+#define veorq_u64(__a, __b) \
+  (uint64x2_t)__builtin_neon_veorv2di ((int64x2_t) __a, (int64x2_t) __b, 0)
+
+#define vbic_s8(__a, __b) \
+  (int8x8_t)__builtin_neon_vbicv8qi (__a, __b, 1)
+
+#define vbic_s16(__a, __b) \
+  (int16x4_t)__builtin_neon_vbicv4hi (__a, __b, 1)
+
+#define vbic_s32(__a, __b) \
+  (int32x2_t)__builtin_neon_vbicv2si (__a, __b, 1)
+
+#define vbic_s64(__a, __b) \
+  (int64x1_t)__builtin_neon_vbicv1di (__a, __b, 1)
+
+#define vbic_u8(__a, __b) \
+  (uint8x8_t)__builtin_neon_vbicv8qi ((int8x8_t) __a, (int8x8_t) __b, 0)
+
+#define vbic_u16(__a, __b) \
+  (uint16x4_t)__builtin_neon_vbicv4hi ((int16x4_t) __a, (int16x4_t) __b, 0)
+
+#define vbic_u32(__a, __b) \
+  (uint32x2_t)__builtin_neon_vbicv2si ((int32x2_t) __a, (int32x2_t) __b, 0)
+
+#define vbic_u64(__a, __b) \
+  (uint64x1_t)__builtin_neon_vbicv1di ((int64x1_t) __a, (int64x1_t) __b, 0)
+
+#define vbicq_s8(__a, __b) \
+  (int8x16_t)__builtin_neon_vbicv16qi (__a, __b, 1)
+
+#define vbicq_s16(__a, __b) \
+  (int16x8_t)__builtin_neon_vbicv8hi (__a, __b, 1)
+
+#define vbicq_s32(__a, __b) \
+  (int32x4_t)__builtin_neon_vbicv4si (__a, __b, 1)
+
+#define vbicq_s64(__a, __b) \
+  (int64x2_t)__builtin_neon_vbicv2di (__a, __b, 1)
+
+#define vbicq_u8(__a, __b) \
+  (uint8x16_t)__builtin_neon_vbicv16qi ((int8x16_t) __a, (int8x16_t) __b, 0)
+
+#define vbicq_u16(__a, __b) \
+  (uint16x8_t)__builtin_neon_vbicv8hi ((int16x8_t) __a, (int16x8_t) __b, 0)
+
+#define vbicq_u32(__a, __b) \
+  (uint32x4_t)__builtin_neon_vbicv4si ((int32x4_t) __a, (int32x4_t) __b, 0)
+
+#define vbicq_u64(__a, __b) \
+  (uint64x2_t)__builtin_neon_vbicv2di ((int64x2_t) __a, (int64x2_t) __b, 0)
+
+#define vorn_s8(__a, __b) \
+  (int8x8_t)__builtin_neon_vornv8qi (__a, __b, 1)
+
+#define vorn_s16(__a, __b) \
+  (int16x4_t)__builtin_neon_vornv4hi (__a, __b, 1)
+
+#define vorn_s32(__a, __b) \
+  (int32x2_t)__builtin_neon_vornv2si (__a, __b, 1)
+
+#define vorn_s64(__a, __b) \
+  (int64x1_t)__builtin_neon_vornv1di (__a, __b, 1)
+
+#define vorn_u8(__a, __b) \
+  (uint8x8_t)__builtin_neon_vornv8qi ((int8x8_t) __a, (int8x8_t) __b, 0)
+
+#define vorn_u16(__a, __b) \
+  (uint16x4_t)__builtin_neon_vornv4hi ((int16x4_t) __a, (int16x4_t) __b, 0)
+
+#define vorn_u32(__a, __b) \
+  (uint32x2_t)__builtin_neon_vornv2si ((int32x2_t) __a, (int32x2_t) __b, 0)
+
+#define vorn_u64(__a, __b) \
+  (uint64x1_t)__builtin_neon_vornv1di ((int64x1_t) __a, (int64x1_t) __b, 0)
+
+#define vornq_s8(__a, __b) \
+  (int8x16_t)__builtin_neon_vornv16qi (__a, __b, 1)
+
+#define vornq_s16(__a, __b) \
+  (int16x8_t)__builtin_neon_vornv8hi (__a, __b, 1)
+
+#define vornq_s32(__a, __b) \
+  (int32x4_t)__builtin_neon_vornv4si (__a, __b, 1)
+
+#define vornq_s64(__a, __b) \
+  (int64x2_t)__builtin_neon_vornv2di (__a, __b, 1)
+
+#define vornq_u8(__a, __b) \
+  (uint8x16_t)__builtin_neon_vornv16qi ((int8x16_t) __a, (int8x16_t) __b, 0)
+
+#define vornq_u16(__a, __b) \
+  (uint16x8_t)__builtin_neon_vornv8hi ((int16x8_t) __a, (int16x8_t) __b, 0)
+
+#define vornq_u32(__a, __b) \
+  (uint32x4_t)__builtin_neon_vornv4si ((int32x4_t) __a, (int32x4_t) __b, 0)
+
+#define vornq_u64(__a, __b) \
+  (uint64x2_t)__builtin_neon_vornv2di ((int64x2_t) __a, (int64x2_t) __b, 0)
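+
+/* The trailing 0/1 literal in the bitwise macros above is the info word
+   (bit 0 set for signed element types); the unsigned variants cast through
+   the signed vector types because the builtins are declared only for those.
+   A usage sketch (assumptions: llvm-gcc, ARM_NEON_GCC_COMPATIBILITY defined
+   before <arm_neon.h>, so the types are plain vectors and the two forms
+   below are interchangeable):
+
+     uint32x2_t mask (uint32x2_t a, uint32x2_t b)
+     {
+       uint32x2_t via_intrinsic = vand_u32 (a, b);
+       uint32x2_t via_operator  = a & b;
+       return via_intrinsic ^ via_operator;   // all zeros: both compute a & b
+     }
+*/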
+
+
+#define vreinterpret_p8_s8(__a) \
+  (poly8x8_t)__builtin_neon_vreinterpretv8qiv8qi (__a)
+
+#define vreinterpret_p8_s16(__a) \
+  (poly8x8_t)__builtin_neon_vreinterpretv8qiv4hi (__a)
+
+#define vreinterpret_p8_s32(__a) \
+  (poly8x8_t)__builtin_neon_vreinterpretv8qiv2si (__a)
+
+#define vreinterpret_p8_s64(__a) \
+  (poly8x8_t)__builtin_neon_vreinterpretv8qiv1di (__a)
+
+#define vreinterpret_p8_f32(__a) \
+  (poly8x8_t)__builtin_neon_vreinterpretv8qiv2sf (__a)
+
+#define vreinterpret_p8_u8(__a) \
+  (poly8x8_t)__builtin_neon_vreinterpretv8qiv8qi ((int8x8_t) __a)
+
+#define vreinterpret_p8_u16(__a) \
+  (poly8x8_t)__builtin_neon_vreinterpretv8qiv4hi ((int16x4_t) __a)
+
+#define vreinterpret_p8_u32(__a) \
+  (poly8x8_t)__builtin_neon_vreinterpretv8qiv2si ((int32x2_t) __a)
+
+#define vreinterpret_p8_u64(__a) \
+  (poly8x8_t)__builtin_neon_vreinterpretv8qiv1di ((int64x1_t) __a)
+
+#define vreinterpret_p8_p16(__a) \
+  (poly8x8_t)__builtin_neon_vreinterpretv8qiv4hi ((int16x4_t) __a)
+
+#define vreinterpretq_p8_s8(__a) \
+  (poly8x16_t)__builtin_neon_vreinterpretv16qiv16qi (__a)
+
+#define vreinterpretq_p8_s16(__a) \
+  (poly8x16_t)__builtin_neon_vreinterpretv16qiv8hi (__a)
+
+#define vreinterpretq_p8_s32(__a) \
+  (poly8x16_t)__builtin_neon_vreinterpretv16qiv4si (__a)
+
+#define vreinterpretq_p8_s64(__a) \
+  (poly8x16_t)__builtin_neon_vreinterpretv16qiv2di (__a)
+
+#define vreinterpretq_p8_f32(__a) \
+  (poly8x16_t)__builtin_neon_vreinterpretv16qiv4sf (__a)
+
+#define vreinterpretq_p8_u8(__a) \
+  (poly8x16_t)__builtin_neon_vreinterpretv16qiv16qi ((int8x16_t) __a)
+
+#define vreinterpretq_p8_u16(__a) \
+  (poly8x16_t)__builtin_neon_vreinterpretv16qiv8hi ((int16x8_t) __a)
+
+#define vreinterpretq_p8_u32(__a) \
+  (poly8x16_t)__builtin_neon_vreinterpretv16qiv4si ((int32x4_t) __a)
+
+#define vreinterpretq_p8_u64(__a) \
+  (poly8x16_t)__builtin_neon_vreinterpretv16qiv2di ((int64x2_t) __a)
+
+#define vreinterpretq_p8_p16(__a) \
+  (poly8x16_t)__builtin_neon_vreinterpretv16qiv8hi ((int16x8_t) __a)
+
+#define vreinterpret_p16_s8(__a) \
+  (poly16x4_t)__builtin_neon_vreinterpretv4hiv8qi (__a)
+
+#define vreinterpret_p16_s16(__a) \
+  (poly16x4_t)__builtin_neon_vreinterpretv4hiv4hi (__a)
+
+#define vreinterpret_p16_s32(__a) \
+  (poly16x4_t)__builtin_neon_vreinterpretv4hiv2si (__a)
+
+#define vreinterpret_p16_s64(__a) \
+  (poly16x4_t)__builtin_neon_vreinterpretv4hiv1di (__a)
+
+#define vreinterpret_p16_f32(__a) \
+  (poly16x4_t)__builtin_neon_vreinterpretv4hiv2sf (__a)
+
+#define vreinterpret_p16_u8(__a) \
+  (poly16x4_t)__builtin_neon_vreinterpretv4hiv8qi ((int8x8_t) __a)
+
+#define vreinterpret_p16_u16(__a) \
+  (poly16x4_t)__builtin_neon_vreinterpretv4hiv4hi ((int16x4_t) __a)
+
+#define vreinterpret_p16_u32(__a) \
+  (poly16x4_t)__builtin_neon_vreinterpretv4hiv2si ((int32x2_t) __a)
+
+#define vreinterpret_p16_u64(__a) \
+  (poly16x4_t)__builtin_neon_vreinterpretv4hiv1di ((int64x1_t) __a)
+
+#define vreinterpret_p16_p8(__a) \
+  (poly16x4_t)__builtin_neon_vreinterpretv4hiv8qi ((int8x8_t) __a)
+
+#define vreinterpretq_p16_s8(__a) \
+  (poly16x8_t)__builtin_neon_vreinterpretv8hiv16qi (__a)
+
+#define vreinterpretq_p16_s16(__a) \
+  (poly16x8_t)__builtin_neon_vreinterpretv8hiv8hi (__a)
+
+#define vreinterpretq_p16_s32(__a) \
+  (poly16x8_t)__builtin_neon_vreinterpretv8hiv4si (__a)
+
+#define vreinterpretq_p16_s64(__a) \
+  (poly16x8_t)__builtin_neon_vreinterpretv8hiv2di (__a)
+
+#define vreinterpretq_p16_f32(__a) \
+  (poly16x8_t)__builtin_neon_vreinterpretv8hiv4sf (__a)
+
+#define vreinterpretq_p16_u8(__a) \
+  (poly16x8_t)__builtin_neon_vreinterpretv8hiv16qi ((int8x16_t) __a)
+
+#define vreinterpretq_p16_u16(__a) \
+  (poly16x8_t)__builtin_neon_vreinterpretv8hiv8hi ((int16x8_t) __a)
+
+#define vreinterpretq_p16_u32(__a) \
+  (poly16x8_t)__builtin_neon_vreinterpretv8hiv4si ((int32x4_t) __a)
+
+#define vreinterpretq_p16_u64(__a) \
+  (poly16x8_t)__builtin_neon_vreinterpretv8hiv2di ((int64x2_t) __a)
+
+#define vreinterpretq_p16_p8(__a) \
+  (poly16x8_t)__builtin_neon_vreinterpretv8hiv16qi ((int8x16_t) __a)
+
+#define vreinterpret_f32_s8(__a) \
+  (float32x2_t)__builtin_neon_vreinterpretv2sfv8qi (__a)
+
+#define vreinterpret_f32_s16(__a) \
+  (float32x2_t)__builtin_neon_vreinterpretv2sfv4hi (__a)
+
+#define vreinterpret_f32_s32(__a) \
+  (float32x2_t)__builtin_neon_vreinterpretv2sfv2si (__a)
+
+#define vreinterpret_f32_s64(__a) \
+  (float32x2_t)__builtin_neon_vreinterpretv2sfv1di (__a)
+
+#define vreinterpret_f32_u8(__a) \
+  (float32x2_t)__builtin_neon_vreinterpretv2sfv8qi ((int8x8_t) __a)
+
+#define vreinterpret_f32_u16(__a) \
+  (float32x2_t)__builtin_neon_vreinterpretv2sfv4hi ((int16x4_t) __a)
+
+#define vreinterpret_f32_u32(__a) \
+  (float32x2_t)__builtin_neon_vreinterpretv2sfv2si ((int32x2_t) __a)
+
+#define vreinterpret_f32_u64(__a) \
+  (float32x2_t)__builtin_neon_vreinterpretv2sfv1di ((int64x1_t) __a)
+
+#define vreinterpret_f32_p8(__a) \
+  (float32x2_t)__builtin_neon_vreinterpretv2sfv8qi ((int8x8_t) __a)
+
+#define vreinterpret_f32_p16(__a) \
+  (float32x2_t)__builtin_neon_vreinterpretv2sfv4hi ((int16x4_t) __a)
+
+#define vreinterpretq_f32_s8(__a) \
+  (float32x4_t)__builtin_neon_vreinterpretv4sfv16qi (__a)
+
+#define vreinterpretq_f32_s16(__a) \
+  (float32x4_t)__builtin_neon_vreinterpretv4sfv8hi (__a)
+
+#define vreinterpretq_f32_s32(__a) \
+  (float32x4_t)__builtin_neon_vreinterpretv4sfv4si (__a)
+
+#define vreinterpretq_f32_s64(__a) \
+  (float32x4_t)__builtin_neon_vreinterpretv4sfv2di (__a)
+
+#define vreinterpretq_f32_u8(__a) \
+  (float32x4_t)__builtin_neon_vreinterpretv4sfv16qi ((int8x16_t) __a)
+
+#define vreinterpretq_f32_u16(__a) \
+  (float32x4_t)__builtin_neon_vreinterpretv4sfv8hi ((int16x8_t) __a)
+
+#define vreinterpretq_f32_u32(__a) \
+  (float32x4_t)__builtin_neon_vreinterpretv4sfv4si ((int32x4_t) __a)
+
+#define vreinterpretq_f32_u64(__a) \
+  (float32x4_t)__builtin_neon_vreinterpretv4sfv2di ((int64x2_t) __a)
+
+#define vreinterpretq_f32_p8(__a) \
+  (float32x4_t)__builtin_neon_vreinterpretv4sfv16qi ((int8x16_t) __a)
+
+#define vreinterpretq_f32_p16(__a) \
+  (float32x4_t)__builtin_neon_vreinterpretv4sfv8hi ((int16x8_t) __a)
+
+#define vreinterpret_s64_s8(__a) \
+  (int64x1_t)__builtin_neon_vreinterpretv1div8qi (__a)
+
+#define vreinterpret_s64_s16(__a) \
+  (int64x1_t)__builtin_neon_vreinterpretv1div4hi (__a)
+
+#define vreinterpret_s64_s32(__a) \
+  (int64x1_t)__builtin_neon_vreinterpretv1div2si (__a)
+
+#define vreinterpret_s64_f32(__a) \
+  (int64x1_t)__builtin_neon_vreinterpretv1div2sf (__a)
+
+#define vreinterpret_s64_u8(__a) \
+  (int64x1_t)__builtin_neon_vreinterpretv1div8qi ((int8x8_t) __a)
+
+#define vreinterpret_s64_u16(__a) \
+  (int64x1_t)__builtin_neon_vreinterpretv1div4hi ((int16x4_t) __a)
+
+#define vreinterpret_s64_u32(__a) \
+  (int64x1_t)__builtin_neon_vreinterpretv1div2si ((int32x2_t) __a)
+
+#define vreinterpret_s64_u64(__a) \
+  (int64x1_t)__builtin_neon_vreinterpretv1div1di ((int64x1_t) __a)
+
+#define vreinterpret_s64_p8(__a) \
+  (int64x1_t)__builtin_neon_vreinterpretv1div8qi ((int8x8_t) __a)
+
+#define vreinterpret_s64_p16(__a) \
+  (int64x1_t)__builtin_neon_vreinterpretv1div4hi ((int16x4_t) __a)
+
+#define vreinterpretq_s64_s8(__a) \
+  (int64x2_t)__builtin_neon_vreinterpretv2div16qi (__a)
+
+#define vreinterpretq_s64_s16(__a) \
+  (int64x2_t)__builtin_neon_vreinterpretv2div8hi (__a)
+
+#define vreinterpretq_s64_s32(__a) \
+  (int64x2_t)__builtin_neon_vreinterpretv2div4si (__a)
+
+#define vreinterpretq_s64_f32(__a) \
+  (int64x2_t)__builtin_neon_vreinterpretv2div4sf (__a)
+
+#define vreinterpretq_s64_u8(__a) \
+  (int64x2_t)__builtin_neon_vreinterpretv2div16qi ((int8x16_t) __a)
+
+#define vreinterpretq_s64_u16(__a) \
+  (int64x2_t)__builtin_neon_vreinterpretv2div8hi ((int16x8_t) __a)
+
+#define vreinterpretq_s64_u32(__a) \
+  (int64x2_t)__builtin_neon_vreinterpretv2div4si ((int32x4_t) __a)
+
+#define vreinterpretq_s64_u64(__a) \
+  (int64x2_t)__builtin_neon_vreinterpretv2div2di ((int64x2_t) __a)
+
+#define vreinterpretq_s64_p8(__a) \
+  (int64x2_t)__builtin_neon_vreinterpretv2div16qi ((int8x16_t) __a)
+
+#define vreinterpretq_s64_p16(__a) \
+  (int64x2_t)__builtin_neon_vreinterpretv2div8hi ((int16x8_t) __a)
+
+#define vreinterpret_u64_s8(__a) \
+  (uint64x1_t)__builtin_neon_vreinterpretv1div8qi (__a)
+
+#define vreinterpret_u64_s16(__a) \
+  (uint64x1_t)__builtin_neon_vreinterpretv1div4hi (__a)
+
+#define vreinterpret_u64_s32(__a) \
+  (uint64x1_t)__builtin_neon_vreinterpretv1div2si (__a)
+
+#define vreinterpret_u64_s64(__a) \
+  (uint64x1_t)__builtin_neon_vreinterpretv1div1di (__a)
+
+#define vreinterpret_u64_f32(__a) \
+  (uint64x1_t)__builtin_neon_vreinterpretv1div2sf (__a)
+
+#define vreinterpret_u64_u8(__a) \
+  (uint64x1_t)__builtin_neon_vreinterpretv1div8qi ((int8x8_t) __a)
+
+#define vreinterpret_u64_u16(__a) \
+  (uint64x1_t)__builtin_neon_vreinterpretv1div4hi ((int16x4_t) __a)
+
+#define vreinterpret_u64_u32(__a) \
+  (uint64x1_t)__builtin_neon_vreinterpretv1div2si ((int32x2_t) __a)
+
+#define vreinterpret_u64_p8(__a) \
+  (uint64x1_t)__builtin_neon_vreinterpretv1div8qi ((int8x8_t) __a)
+
+#define vreinterpret_u64_p16(__a) \
+  (uint64x1_t)__builtin_neon_vreinterpretv1div4hi ((int16x4_t) __a)
+
+#define vreinterpretq_u64_s8(__a) \
+  (uint64x2_t)__builtin_neon_vreinterpretv2div16qi (__a)
+
+#define vreinterpretq_u64_s16(__a) \
+  (uint64x2_t)__builtin_neon_vreinterpretv2div8hi (__a)
+
+#define vreinterpretq_u64_s32(__a) \
+  (uint64x2_t)__builtin_neon_vreinterpretv2div4si (__a)
+
+#define vreinterpretq_u64_s64(__a) \
+  (uint64x2_t)__builtin_neon_vreinterpretv2div2di (__a)
+
+#define vreinterpretq_u64_f32(__a) \
+  (uint64x2_t)__builtin_neon_vreinterpretv2div4sf (__a)
+
+#define vreinterpretq_u64_u8(__a) \
+  (uint64x2_t)__builtin_neon_vreinterpretv2div16qi ((int8x16_t) __a)
+
+#define vreinterpretq_u64_u16(__a) \
+  (uint64x2_t)__builtin_neon_vreinterpretv2div8hi ((int16x8_t) __a)
+
+#define vreinterpretq_u64_u32(__a) \
+  (uint64x2_t)__builtin_neon_vreinterpretv2div4si ((int32x4_t) __a)
+
+#define vreinterpretq_u64_p8(__a) \
+  (uint64x2_t)__builtin_neon_vreinterpretv2div16qi ((int8x16_t) __a)
+
+#define vreinterpretq_u64_p16(__a) \
+  (uint64x2_t)__builtin_neon_vreinterpretv2div8hi ((int16x8_t) __a)
+
+#define vreinterpret_s8_s16(__a) \
+  (int8x8_t)__builtin_neon_vreinterpretv8qiv4hi (__a)
+
+#define vreinterpret_s8_s32(__a) \
+  (int8x8_t)__builtin_neon_vreinterpretv8qiv2si (__a)
+
+#define vreinterpret_s8_s64(__a) \
+  (int8x8_t)__builtin_neon_vreinterpretv8qiv1di (__a)
+
+#define vreinterpret_s8_f32(__a) \
+  (int8x8_t)__builtin_neon_vreinterpretv8qiv2sf (__a)
+
+#define vreinterpret_s8_u8(__a) \
+  (int8x8_t)__builtin_neon_vreinterpretv8qiv8qi ((int8x8_t) __a)
+
+#define vreinterpret_s8_u16(__a) \
+  (int8x8_t)__builtin_neon_vreinterpretv8qiv4hi ((int16x4_t) __a)
+
+#define vreinterpret_s8_u32(__a) \
+  (int8x8_t)__builtin_neon_vreinterpretv8qiv2si ((int32x2_t) __a)
+
+#define vreinterpret_s8_u64(__a) \
+  (int8x8_t)__builtin_neon_vreinterpretv8qiv1di ((int64x1_t) __a)
+
+#define vreinterpret_s8_p8(__a) \
+  (int8x8_t)__builtin_neon_vreinterpretv8qiv8qi ((int8x8_t) __a)
+
+#define vreinterpret_s8_p16(__a) \
+  (int8x8_t)__builtin_neon_vreinterpretv8qiv4hi ((int16x4_t) __a)
+
+#define vreinterpretq_s8_s16(__a) \
+  (int8x16_t)__builtin_neon_vreinterpretv16qiv8hi (__a)
+
+#define vreinterpretq_s8_s32(__a) \
+  (int8x16_t)__builtin_neon_vreinterpretv16qiv4si (__a)
+
+#define vreinterpretq_s8_s64(__a) \
+  (int8x16_t)__builtin_neon_vreinterpretv16qiv2di (__a)
+
+#define vreinterpretq_s8_f32(__a) \
+  (int8x16_t)__builtin_neon_vreinterpretv16qiv4sf (__a)
+
+#define vreinterpretq_s8_u8(__a) \
+  (int8x16_t)__builtin_neon_vreinterpretv16qiv16qi ((int8x16_t) __a)
+
+#define vreinterpretq_s8_u16(__a) \
+  (int8x16_t)__builtin_neon_vreinterpretv16qiv8hi ((int16x8_t) __a)
+
+#define vreinterpretq_s8_u32(__a) \
+  (int8x16_t)__builtin_neon_vreinterpretv16qiv4si ((int32x4_t) __a)
+
+#define vreinterpretq_s8_u64(__a) \
+  (int8x16_t)__builtin_neon_vreinterpretv16qiv2di ((int64x2_t) __a)
+
+#define vreinterpretq_s8_p8(__a) \
+  (int8x16_t)__builtin_neon_vreinterpretv16qiv16qi ((int8x16_t) __a)
+
+#define vreinterpretq_s8_p16(__a) \
+  (int8x16_t)__builtin_neon_vreinterpretv16qiv8hi ((int16x8_t) __a)
+
+#define vreinterpret_s16_s8(__a) \
+  (int16x4_t)__builtin_neon_vreinterpretv4hiv8qi (__a)
+
+#define vreinterpret_s16_s32(__a) \
+  (int16x4_t)__builtin_neon_vreinterpretv4hiv2si (__a)
+
+#define vreinterpret_s16_s64(__a) \
+  (int16x4_t)__builtin_neon_vreinterpretv4hiv1di (__a)
+
+#define vreinterpret_s16_f32(__a) \
+  (int16x4_t)__builtin_neon_vreinterpretv4hiv2sf (__a)
+
+#define vreinterpret_s16_u8(__a) \
+  (int16x4_t)__builtin_neon_vreinterpretv4hiv8qi ((int8x8_t) __a)
+
+#define vreinterpret_s16_u16(__a) \
+  (int16x4_t)__builtin_neon_vreinterpretv4hiv4hi ((int16x4_t) __a)
+
+#define vreinterpret_s16_u32(__a) \
+  (int16x4_t)__builtin_neon_vreinterpretv4hiv2si ((int32x2_t) __a)
+
+#define vreinterpret_s16_u64(__a) \
+  (int16x4_t)__builtin_neon_vreinterpretv4hiv1di ((int64x1_t) __a)
+
+#define vreinterpret_s16_p8(__a) \
+  (int16x4_t)__builtin_neon_vreinterpretv4hiv8qi ((int8x8_t) __a)
+
+#define vreinterpret_s16_p16(__a) \
+  (int16x4_t)__builtin_neon_vreinterpretv4hiv4hi ((int16x4_t) __a)
+
+#define vreinterpretq_s16_s8(__a) \
+  (int16x8_t)__builtin_neon_vreinterpretv8hiv16qi (__a)
+
+#define vreinterpretq_s16_s32(__a) \
+  (int16x8_t)__builtin_neon_vreinterpretv8hiv4si (__a)
+
+#define vreinterpretq_s16_s64(__a) \
+  (int16x8_t)__builtin_neon_vreinterpretv8hiv2di (__a)
+
+#define vreinterpretq_s16_f32(__a) \
+  (int16x8_t)__builtin_neon_vreinterpretv8hiv4sf (__a)
+
+#define vreinterpretq_s16_u8(__a) \
+  (int16x8_t)__builtin_neon_vreinterpretv8hiv16qi ((int8x16_t) __a)
+
+#define vreinterpretq_s16_u16(__a) \
+  (int16x8_t)__builtin_neon_vreinterpretv8hiv8hi ((int16x8_t) __a)
+
+#define vreinterpretq_s16_u32(__a) \
+  (int16x8_t)__builtin_neon_vreinterpretv8hiv4si ((int32x4_t) __a)
+
+#define vreinterpretq_s16_u64(__a) \
+  (int16x8_t)__builtin_neon_vreinterpretv8hiv2di ((int64x2_t) __a)
+
+#define vreinterpretq_s16_p8(__a) \
+  (int16x8_t)__builtin_neon_vreinterpretv8hiv16qi ((int8x16_t) __a)
+
+#define vreinterpretq_s16_p16(__a) \
+  (int16x8_t)__builtin_neon_vreinterpretv8hiv8hi ((int16x8_t) __a)
+
+#define vreinterpret_s32_s8(__a) \
+  (int32x2_t)__builtin_neon_vreinterpretv2siv8qi (__a)
+
+#define vreinterpret_s32_s16(__a) \
+  (int32x2_t)__builtin_neon_vreinterpretv2siv4hi (__a)
+
+#define vreinterpret_s32_s64(__a) \
+  (int32x2_t)__builtin_neon_vreinterpretv2siv1di (__a)
+
+#define vreinterpret_s32_f32(__a) \
+  (int32x2_t)__builtin_neon_vreinterpretv2siv2sf (__a)
+
+#define vreinterpret_s32_u8(__a) \
+  (int32x2_t)__builtin_neon_vreinterpretv2siv8qi ((int8x8_t) __a)
+
+#define vreinterpret_s32_u16(__a) \
+  (int32x2_t)__builtin_neon_vreinterpretv2siv4hi ((int16x4_t) __a)
+
+#define vreinterpret_s32_u32(__a) \
+  (int32x2_t)__builtin_neon_vreinterpretv2siv2si ((int32x2_t) __a)
+
+#define vreinterpret_s32_u64(__a) \
+  (int32x2_t)__builtin_neon_vreinterpretv2siv1di ((int64x1_t) __a)
+
+#define vreinterpret_s32_p8(__a) \
+  (int32x2_t)__builtin_neon_vreinterpretv2siv8qi ((int8x8_t) __a)
+
+#define vreinterpret_s32_p16(__a) \
+  (int32x2_t)__builtin_neon_vreinterpretv2siv4hi ((int16x4_t) __a)
+
+#define vreinterpretq_s32_s8(__a) \
+  (int32x4_t)__builtin_neon_vreinterpretv4siv16qi (__a)
+
+#define vreinterpretq_s32_s16(__a) \
+  (int32x4_t)__builtin_neon_vreinterpretv4siv8hi (__a)
+
+#define vreinterpretq_s32_s64(__a) \
+  (int32x4_t)__builtin_neon_vreinterpretv4siv2di (__a)
+
+#define vreinterpretq_s32_f32(__a) \
+  (int32x4_t)__builtin_neon_vreinterpretv4siv4sf (__a)
+
+#define vreinterpretq_s32_u8(__a) \
+  (int32x4_t)__builtin_neon_vreinterpretv4siv16qi ((int8x16_t) __a)
+
+#define vreinterpretq_s32_u16(__a) \
+  (int32x4_t)__builtin_neon_vreinterpretv4siv8hi ((int16x8_t) __a)
+
+#define vreinterpretq_s32_u32(__a) \
+  (int32x4_t)__builtin_neon_vreinterpretv4siv4si ((int32x4_t) __a)
+
+#define vreinterpretq_s32_u64(__a) \
+  (int32x4_t)__builtin_neon_vreinterpretv4siv2di ((int64x2_t) __a)
+
+#define vreinterpretq_s32_p8(__a) \
+  (int32x4_t)__builtin_neon_vreinterpretv4siv16qi ((int8x16_t) __a)
+
+#define vreinterpretq_s32_p16(__a) \
+  (int32x4_t)__builtin_neon_vreinterpretv4siv8hi ((int16x8_t) __a)
+
+#define vreinterpret_u8_s8(__a) \
+  (uint8x8_t)__builtin_neon_vreinterpretv8qiv8qi (__a)
+
+#define vreinterpret_u8_s16(__a) \
+  (uint8x8_t)__builtin_neon_vreinterpretv8qiv4hi (__a)
+
+#define vreinterpret_u8_s32(__a) \
+  (uint8x8_t)__builtin_neon_vreinterpretv8qiv2si (__a)
+
+#define vreinterpret_u8_s64(__a) \
+  (uint8x8_t)__builtin_neon_vreinterpretv8qiv1di (__a)
+
+#define vreinterpret_u8_f32(__a) \
+  (uint8x8_t)__builtin_neon_vreinterpretv8qiv2sf (__a)
+
+#define vreinterpret_u8_u16(__a) \
+  (uint8x8_t)__builtin_neon_vreinterpretv8qiv4hi ((int16x4_t) __a)
+
+#define vreinterpret_u8_u32(__a) \
+  (uint8x8_t)__builtin_neon_vreinterpretv8qiv2si ((int32x2_t) __a)
+
+#define vreinterpret_u8_u64(__a) \
+  (uint8x8_t)__builtin_neon_vreinterpretv8qiv1di ((int64x1_t) __a)
+
+#define vreinterpret_u8_p8(__a) \
+  (uint8x8_t)__builtin_neon_vreinterpretv8qiv8qi ((int8x8_t) __a)
+
+#define vreinterpret_u8_p16(__a) \
+  (uint8x8_t)__builtin_neon_vreinterpretv8qiv4hi ((int16x4_t) __a)
+
+#define vreinterpretq_u8_s8(__a) \
+  (uint8x16_t)__builtin_neon_vreinterpretv16qiv16qi (__a)
+
+#define vreinterpretq_u8_s16(__a) \
+  (uint8x16_t)__builtin_neon_vreinterpretv16qiv8hi (__a)
+
+#define vreinterpretq_u8_s32(__a) \
+  (uint8x16_t)__builtin_neon_vreinterpretv16qiv4si (__a)
+
+#define vreinterpretq_u8_s64(__a) \
+  (uint8x16_t)__builtin_neon_vreinterpretv16qiv2di (__a)
+
+#define vreinterpretq_u8_f32(__a) \
+  (uint8x16_t)__builtin_neon_vreinterpretv16qiv4sf (__a)
+
+#define vreinterpretq_u8_u16(__a) \
+  (uint8x16_t)__builtin_neon_vreinterpretv16qiv8hi ((int16x8_t) __a)
+
+#define vreinterpretq_u8_u32(__a) \
+  (uint8x16_t)__builtin_neon_vreinterpretv16qiv4si ((int32x4_t) __a)
+
+#define vreinterpretq_u8_u64(__a) \
+  (uint8x16_t)__builtin_neon_vreinterpretv16qiv2di ((int64x2_t) __a)
+
+#define vreinterpretq_u8_p8(__a) \
+  (uint8x16_t)__builtin_neon_vreinterpretv16qiv16qi ((int8x16_t) __a)
+
+#define vreinterpretq_u8_p16(__a) \
+  (uint8x16_t)__builtin_neon_vreinterpretv16qiv8hi ((int16x8_t) __a)
+
+#define vreinterpret_u16_s8(__a) \
+  (uint16x4_t)__builtin_neon_vreinterpretv4hiv8qi (__a)
+
+#define vreinterpret_u16_s16(__a) \
+  (uint16x4_t)__builtin_neon_vreinterpretv4hiv4hi (__a)
+
+#define vreinterpret_u16_s32(__a) \
+  (uint16x4_t)__builtin_neon_vreinterpretv4hiv2si (__a)
+
+#define vreinterpret_u16_s64(__a) \
+  (uint16x4_t)__builtin_neon_vreinterpretv4hiv1di (__a)
+
+#define vreinterpret_u16_f32(__a) \
+  (uint16x4_t)__builtin_neon_vreinterpretv4hiv2sf (__a)
+
+#define vreinterpret_u16_u8(__a) \
+  (uint16x4_t)__builtin_neon_vreinterpretv4hiv8qi ((int8x8_t) __a)
+
+#define vreinterpret_u16_u32(__a) \
+  (uint16x4_t)__builtin_neon_vreinterpretv4hiv2si ((int32x2_t) __a)
+
+#define vreinterpret_u16_u64(__a) \
+  (uint16x4_t)__builtin_neon_vreinterpretv4hiv1di ((int64x1_t) __a)
+
+#define vreinterpret_u16_p8(__a) \
+  (uint16x4_t)__builtin_neon_vreinterpretv4hiv8qi ((int8x8_t) __a)
+
+#define vreinterpret_u16_p16(__a) \
+  (uint16x4_t)__builtin_neon_vreinterpretv4hiv4hi ((int16x4_t) __a)
+
+#define vreinterpretq_u16_s8(__a) \
+  (uint16x8_t)__builtin_neon_vreinterpretv8hiv16qi (__a)
+
+#define vreinterpretq_u16_s16(__a) \
+  (uint16x8_t)__builtin_neon_vreinterpretv8hiv8hi (__a)
+
+#define vreinterpretq_u16_s32(__a) \
+  (uint16x8_t)__builtin_neon_vreinterpretv8hiv4si (__a)
+
+#define vreinterpretq_u16_s64(__a) \
+  (uint16x8_t)__builtin_neon_vreinterpretv8hiv2di (__a)
+
+#define vreinterpretq_u16_f32(__a) \
+  (uint16x8_t)__builtin_neon_vreinterpretv8hiv4sf (__a)
+
+#define vreinterpretq_u16_u8(__a) \
+  (uint16x8_t)__builtin_neon_vreinterpretv8hiv16qi ((int8x16_t) __a)
+
+#define vreinterpretq_u16_u32(__a) \
+  (uint16x8_t)__builtin_neon_vreinterpretv8hiv4si ((int32x4_t) __a)
+
+#define vreinterpretq_u16_u64(__a) \
+  (uint16x8_t)__builtin_neon_vreinterpretv8hiv2di ((int64x2_t) __a)
+
+#define vreinterpretq_u16_p8(__a) \
+  (uint16x8_t)__builtin_neon_vreinterpretv8hiv16qi ((int8x16_t) __a)
+
+#define vreinterpretq_u16_p16(__a) \
+  (uint16x8_t)__builtin_neon_vreinterpretv8hiv8hi ((int16x8_t) __a)
+
+#define vreinterpret_u32_s8(__a) \
+  (uint32x2_t)__builtin_neon_vreinterpretv2siv8qi (__a)
+
+#define vreinterpret_u32_s16(__a) \
+  (uint32x2_t)__builtin_neon_vreinterpretv2siv4hi (__a)
+
+#define vreinterpret_u32_s32(__a) \
+  (uint32x2_t)__builtin_neon_vreinterpretv2siv2si (__a)
+
+#define vreinterpret_u32_s64(__a) \
+  (uint32x2_t)__builtin_neon_vreinterpretv2siv1di (__a)
+
+#define vreinterpret_u32_f32(__a) \
+  (uint32x2_t)__builtin_neon_vreinterpretv2siv2sf (__a)
+
+#define vreinterpret_u32_u8(__a) \
+  (uint32x2_t)__builtin_neon_vreinterpretv2siv8qi ((int8x8_t) __a)
+
+#define vreinterpret_u32_u16(__a) \
+  (uint32x2_t)__builtin_neon_vreinterpretv2siv4hi ((int16x4_t) __a)
+
+#define vreinterpret_u32_u64(__a) \
+  (uint32x2_t)__builtin_neon_vreinterpretv2siv1di ((int64x1_t) __a)
+
+#define vreinterpret_u32_p8(__a) \
+  (uint32x2_t)__builtin_neon_vreinterpretv2siv8qi ((int8x8_t) __a)
+
+#define vreinterpret_u32_p16(__a) \
+  (uint32x2_t)__builtin_neon_vreinterpretv2siv4hi ((int16x4_t) __a)
+
+#define vreinterpretq_u32_s8(__a) \
+  (uint32x4_t)__builtin_neon_vreinterpretv4siv16qi (__a)
+
+#define vreinterpretq_u32_s16(__a) \
+  (uint32x4_t)__builtin_neon_vreinterpretv4siv8hi (__a)
+
+#define vreinterpretq_u32_s32(__a) \
+  (uint32x4_t)__builtin_neon_vreinterpretv4siv4si (__a)
+
+#define vreinterpretq_u32_s64(__a) \
+  (uint32x4_t)__builtin_neon_vreinterpretv4siv2di (__a)
+
+#define vreinterpretq_u32_f32(__a) \
+  (uint32x4_t)__builtin_neon_vreinterpretv4siv4sf (__a)
+
+#define vreinterpretq_u32_u8(__a) \
+  (uint32x4_t)__builtin_neon_vreinterpretv4siv16qi ((int8x16_t) __a)
+
+#define vreinterpretq_u32_u16(__a) \
+  (uint32x4_t)__builtin_neon_vreinterpretv4siv8hi ((int16x8_t) __a)
+
+#define vreinterpretq_u32_u64(__a) \
+  (uint32x4_t)__builtin_neon_vreinterpretv4siv2di ((int64x2_t) __a)
+
+#define vreinterpretq_u32_p8(__a) \
+  (uint32x4_t)__builtin_neon_vreinterpretv4siv16qi ((int8x16_t) __a)
+
+#define vreinterpretq_u32_p16(__a) \
+  (uint32x4_t)__builtin_neon_vreinterpretv4siv8hi ((int16x8_t) __a)
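+
+/* Usage sketch for the vreinterpret macros above (assumes llvm-gcc with
+   NEON enabled; "bits_as_float" is an illustrative name).  The builtin
+   only relabels the 64-bit value, so it should compile to no instruction:
+
+     float32x2_t bits_as_float (uint32x2_t u)
+     {
+       return vreinterpret_f32_u32 (u);
+     }
+*/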
+
+#ifdef __cplusplus
+}
+#endif
+#endif
+#endif

Copied: llvm-gcc-4.2/trunk/gcc/config/arm/arm_neon_std.h (from r103724, llvm-gcc-4.2/trunk/gcc/config/arm/arm_neon.h)
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/gcc/config/arm/arm_neon_std.h?p2=llvm-gcc-4.2/trunk/gcc/config/arm/arm_neon_std.h&p1=llvm-gcc-4.2/trunk/gcc/config/arm/arm_neon.h&r1=103724&r2=103811&rev=103811&view=diff
==============================================================================
--- llvm-gcc-4.2/trunk/gcc/config/arm/arm_neon.h (original)
+++ llvm-gcc-4.2/trunk/gcc/config/arm/arm_neon_std.h Fri May 14 16:31:28 2010
@@ -1,7 +1,8 @@
-/* LLVM LOCAL file Changed to use preprocessor macros.  */
-/* APPLE LOCAL file v7 support. Merge from Codesourcery */
-/* ARM NEON intrinsics include file. This file is generated automatically
-   using neon-gen.ml.  Please do not edit manually.
+/* Internal definitions for standard versions of NEON types and intrinsics.
+   Do not include this file directly; please use <arm_neon.h>.
+
+   This file is generated automatically using neon-gen-std.ml.
+   Please do not edit manually.
 
    Copyright (C) 2006, 2007 Free Software Foundation, Inc.
    Contributed by CodeSourcery.
@@ -46,29 +47,6 @@
 
 #include <stdint.h>
 
-typedef __builtin_neon_qi __neon_int8x8_t	__attribute__ ((__vector_size__ (8)));
-typedef __builtin_neon_hi __neon_int16x4_t	__attribute__ ((__vector_size__ (8)));
-typedef __builtin_neon_si __neon_int32x2_t	__attribute__ ((__vector_size__ (8)));
-typedef __builtin_neon_di __neon_int64x1_t	__attribute__ ((__vector_size__ (8)));
-typedef __builtin_neon_sf __neon_float32x2_t	__attribute__ ((__vector_size__ (8)));
-typedef __builtin_neon_poly8 __neon_poly8x8_t	__attribute__ ((__vector_size__ (8)));
-typedef __builtin_neon_poly16 __neon_poly16x4_t	__attribute__ ((__vector_size__ (8)));
-typedef __builtin_neon_uqi __neon_uint8x8_t	__attribute__ ((__vector_size__ (8)));
-typedef __builtin_neon_uhi __neon_uint16x4_t	__attribute__ ((__vector_size__ (8)));
-typedef __builtin_neon_usi __neon_uint32x2_t	__attribute__ ((__vector_size__ (8)));
-typedef __builtin_neon_udi __neon_uint64x1_t	__attribute__ ((__vector_size__ (8)));
-typedef __builtin_neon_qi __neon_int8x16_t	__attribute__ ((__vector_size__ (16)));
-typedef __builtin_neon_hi __neon_int16x8_t	__attribute__ ((__vector_size__ (16)));
-typedef __builtin_neon_si __neon_int32x4_t	__attribute__ ((__vector_size__ (16)));
-typedef __builtin_neon_di __neon_int64x2_t	__attribute__ ((__vector_size__ (16)));
-typedef __builtin_neon_sf __neon_float32x4_t	__attribute__ ((__vector_size__ (16)));
-typedef __builtin_neon_poly8 __neon_poly8x16_t	__attribute__ ((__vector_size__ (16)));
-typedef __builtin_neon_poly16 __neon_poly16x8_t	__attribute__ ((__vector_size__ (16)));
-typedef __builtin_neon_uqi __neon_uint8x16_t	__attribute__ ((__vector_size__ (16)));
-typedef __builtin_neon_uhi __neon_uint16x8_t	__attribute__ ((__vector_size__ (16)));
-typedef __builtin_neon_usi __neon_uint32x4_t	__attribute__ ((__vector_size__ (16)));
-typedef __builtin_neon_udi __neon_uint64x2_t	__attribute__ ((__vector_size__ (16)));
-
 typedef __builtin_neon_sf float32_t;
 typedef __builtin_neon_poly8 poly8_t;
 typedef __builtin_neon_poly16 poly16_t;

Added: llvm-gcc-4.2/trunk/gcc/config/arm/neon-gen-std.ml
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/gcc/config/arm/neon-gen-std.ml?rev=103811&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/gcc/config/arm/neon-gen-std.ml (added)
+++ llvm-gcc-4.2/trunk/gcc/config/arm/neon-gen-std.ml Fri May 14 16:31:28 2010
@@ -0,0 +1,507 @@
+(* APPLE LOCAL file v7 support. Merge from Codesourcery *)
+(* Auto-generate ARM Neon intrinsics header file.
+   Copyright (C) 2006, 2007 Free Software Foundation, Inc.
+   Contributed by CodeSourcery.
+
+   This file is part of GCC.
+
+   GCC is free software; you can redistribute it and/or modify it under
+   the terms of the GNU General Public License as published by the Free
+   Software Foundation; either version 2, or (at your option) any later
+   version.
+
+   GCC is distributed in the hope that it will be useful, but WITHOUT ANY
+   WARRANTY; without even the implied warranty of MERCHANTABILITY or
+   FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+   for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with GCC; see the file COPYING.  If not, write to the Free
+   Software Foundation, 51 Franklin Street, Fifth Floor, Boston, MA
+   02110-1301, USA.
+
+   This is an O'Caml program.  The O'Caml compiler is available from:
+
+     http://caml.inria.fr/
+
+   Or from your favourite OS's friendly packaging system. Tested with version
+   3.09.2, though other versions will probably work too.
+  
+   Compile with:
+     ocamlc -c neon.ml
+     ocamlc -o neon-gen-std neon.cmo neon-gen-std.ml
+
+   Run with:
+     ./neon-gen-std > arm_neon_std.h
+*)
+
+open Neon
+
+(* The format codes used in the following functions are documented at:
+     http://caml.inria.fr/pub/docs/manual-ocaml/libref/Format.html\
+     #6_printflikefunctionsforprettyprinting
+   (one line, remove the backslash.)
+*)
+
+(* The following functions can be used to approximate GNU indentation style.  *)
+let start_function () =
+  Format.printf "@[<v 0>";
+  ref 0
+
+let end_function nesting =
+  match !nesting with
+    0 -> Format.printf "@;@;@]"
+  | _ -> failwith ("Bad nesting (ending function at level "
+                   ^ (string_of_int !nesting) ^ ")")
+   
+let open_braceblock nesting =
+  begin match !nesting with
+    0 -> Format.printf "@,@<0>{@[<v 2>@,"
+  | _ -> Format.printf "@,@[<v 2>  @<0>{@[<v 2>@,"
+  end;
+  incr nesting
+
+let close_braceblock nesting =
+  decr nesting;
+  match !nesting with
+    0 -> Format.printf "@]@,@<0>}"
+  | _ -> Format.printf "@]@,@<0>}@]"
+
+(* LLVM LOCAL begin Print macros instead of inline functions.
+   This is needed so that immediate arguments (e.g., lane numbers, shift
+   amounts, etc.) can be checked for validity.  GCC can check them after
+   inlining, but LLVM does inlining separately.
+
+   Some macros translate to simple intrinsic calls and should not end with
+   semicolons, but for others, which use GCC's statement-expressions to
+   include unions that convert argument and/or return types, the semicolons
+   need to be emitted after every statement.  This is implemented by deferring
+   the emission of trailing semicolons so they are only added in the context
+   of statement-expressions. *)
+let print_function arity fnname body =
+  let ffmt = start_function () in
+  Format.printf "@[<v 2>#define ";
+  begin match arity with
+    Arity0 ret ->
+      Format.printf "%s()" fnname
+  | Arity1 (ret, arg0) ->
+      Format.printf "%s(__a)" fnname
+  | Arity2 (ret, arg0, arg1) ->
+      Format.printf "%s(__a, __b)" fnname
+  | Arity3 (ret, arg0, arg1, arg2) ->
+      Format.printf "%s(__a, __b, __c)" fnname
+  | Arity4 (ret, arg0, arg1, arg2, arg3) ->
+      Format.printf "%s(__a, __b, __c, __d)" fnname
+  end;
+  let rec print_lines = function
+    [] -> ()
+  | [line] -> Format.printf "%s; \\" line
+  | line::lines -> Format.printf "%s; \\@," line; print_lines lines in
+  let print_macro_body = function
+    [] -> Format.printf " \\@,";
+  | [line] -> Format.printf " \\@,";
+              Format.printf "%s" line
+  | line::lines -> Format.printf " __extension__ \\@,";
+                   Format.printf "@[<v 3>({ \\@,%s; \\@," line;
+                   print_lines lines;
+                   Format.printf "@]@, })" in
+  print_macro_body body;
+  Format.printf "@]";
+  end_function ffmt
+(* LLVM LOCAL end Print macros instead of inline functions.  *)
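+
+(* The two resulting shapes, as they appear in the generated header: a
+   single intrinsic call is emitted bare, with no trailing semicolon,
+
+     #define vand_s8(__a, __b) \
+       (int8x8_t)__builtin_neon_vandv8qi (__a, __b, 1)
+
+   while a body with declarations becomes a statement-expression, and the
+   deferred semicolons are emitted inside it:
+
+     #define vst4_lane_s8(__a, __b, __c) __extension__ \
+       ({ \
+          union { int8x8x4_t __i; __neon_int8x8x4_t __o; } __bu = { __b }; \
+          __builtin_neon_vst4_lanev8qi (__neon_ptr_cast(__builtin_neon_qi *, __a), __bu.__o, __c); \
+        })
+*)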
+
+let return_by_ptr features = List.mem ReturnPtr features
+
+let rec signed_ctype = function
+    T_uint8x8 | T_poly8x8 -> T_int8x8
+  | T_uint8x16 | T_poly8x16 -> T_int8x16
+  | T_uint16x4 | T_poly16x4 -> T_int16x4
+  | T_uint16x8 | T_poly16x8 -> T_int16x8
+  | T_uint32x2 -> T_int32x2
+  | T_uint32x4 -> T_int32x4
+  | T_uint64x1 -> T_int64x1
+  | T_uint64x2 -> T_int64x2
+  (* Cast to types defined by mode in arm.c, not random types pulled in from
+     the <stdint.h> header in use. This fixes incompatible pointer errors when
+     compiling with C++.  *)
+  | T_uint8 | T_int8 -> T_intQI
+  | T_uint16 | T_int16 -> T_intHI
+  | T_uint32 | T_int32 -> T_intSI
+  | T_uint64 | T_int64 -> T_intDI
+  | T_poly8 -> T_intQI
+  | T_poly16 -> T_intHI
+  | T_arrayof (n, elt) -> T_arrayof (n, signed_ctype elt)
+  | T_ptrto elt -> T_ptrto (signed_ctype elt)
+  | T_const elt -> T_const (signed_ctype elt)
+  | x -> x
+
+(* LLVM LOCAL begin union_string.
+   Array types are handled as structs in llvm-gcc, not as wide integers, and
+   single vector types have wrapper structs.  Unions are used here to convert
+   back and forth between these different representations.  The union_string
+   function has been updated accordingly, and it is moved below signed_ctype
+   so it can use that function.  *)
+let union_string num elts base =
+  let itype = match num with
+    1 -> elts
+  | _ -> T_arrayof (num, elts) in
+  let iname = string_of_vectype (signed_ctype itype)
+  and sname = string_of_vectype itype in
+  Printf.sprintf "union { %s __i; __neon_%s __o; } %s" sname iname base
+(* LLVM LOCAL end union_string.  *)
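+
+(* For example, union_string 4 T_uint16x4 "__bu" prints
+
+     union { uint16x4x4_t __i; __neon_int16x4x4_t __o; } __bu
+
+   as seen in the vst4 macros of the generated header: signed_ctype folds
+   the unsigned element type into the signed one, so the __o member always
+   matches the builtin's internal type.  *)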
+
+(* LLVM LOCAL begin add_cast_with_prefix.  *)
+let add_cast_with_prefix ctype cval stype_prefix =
+  let stype = signed_ctype ctype in
+  if ctype <> stype then
+    match stype with
+      T_ptrto elt ->
+        Printf.sprintf "__neon_ptr_cast(%s%s, %s)" stype_prefix (string_of_vectype stype) cval
+    | _ ->
+        Printf.sprintf "(%s%s) %s" stype_prefix (string_of_vectype stype) cval
+  else
+    cval
+
+let add_cast ctype cval = add_cast_with_prefix ctype cval ""
+(* LLVM LOCAL end add_cast_with_prefix.  *)
+
+let cast_for_return to_ty = "(" ^ (string_of_vectype to_ty) ^ ")"
+
+(* Return a tuple of a list of declarations to go at the start of the function,
+   and a list of statements needed to return THING.  *)
+(* LLVM LOCAL begin Omit "return" keywords and trailing semicolons.  *)
+let return arity return_by_ptr thing =
+  match arity with
+    Arity0 (ret) | Arity1 (ret, _) | Arity2 (ret, _, _) | Arity3 (ret, _, _, _)
+  | Arity4 (ret, _, _, _, _) ->
+    match ret with
+      T_arrayof (num, vec) ->
+        if return_by_ptr then
+          let sname = string_of_vectype ret in
+          [Printf.sprintf "%s __rv" sname],
+          [thing; "__rv"]
+        else
+          let uname = union_string num vec "__rv" in
+          [uname], ["__rv.__o = " ^ thing; "__rv.__i"]
+    (* LLVM LOCAL begin Convert vector result to wrapper struct. *)
+    | T_int8x8    | T_int8x16
+    | T_int16x4   | T_int16x8
+    | T_int32x2   | T_int32x4
+    | T_int64x1   | T_int64x2
+    | T_uint8x8   | T_uint8x16
+    | T_uint16x4  | T_uint16x8
+    | T_uint32x2  | T_uint32x4
+    | T_uint64x1  | T_uint64x2
+    | T_float32x2 | T_float32x4
+    | T_poly8x8   | T_poly8x16
+    | T_poly16x4  | T_poly16x8 ->
+        let uname = union_string 1 ret "__rv" in
+        [uname], ["__rv.__o = " ^ thing; "__rv.__i"]
+    (* LLVM LOCAL end Convert vector result to wrapper struct. *)
+    | T_void -> [], [thing]
+    | _ ->
+        [], [(cast_for_return ret) ^ thing]
+(* LLVM LOCAL end Omit "return" keywords and trailing semicolons.  *)
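+
+(* For an intrinsic whose builtin returns an aggregate, the emitted shape
+   is therefore as follows (the name "vfoo" is illustrative, not a real
+   intrinsic, and the optional info word is omitted):
+
+     #define vfoo_s16(__a) __extension__ \
+       ({ \
+          int16x4_t __ax = __a; \
+          union { int16x4x2_t __i; __neon_int16x4x2_t __o; } __rv; \
+          __rv.__o = __builtin_neon_vfoov4hi (__ax.val); \
+          __rv.__i; \
+        })
+*)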
+
+let rec element_type ctype =
+  match ctype with
+    T_arrayof (_, v) -> element_type v
+  | _ -> ctype
+
+let params return_by_ptr ps =
+  let pdecls = ref [] in
+  let ptype t p =
+    match t with
+      T_arrayof (num, elts) ->
+        let uname = union_string num elts (p ^ "u") in
+        (* LLVM LOCAL Omit trailing semicolon.  *)
+        let decl = Printf.sprintf "%s = { %s }" uname p in
+        pdecls := decl :: !pdecls;
+        p ^ "u.__o"
+    (* LLVM LOCAL begin Extract vector operand from wrapper struct. *)
+    | T_int8x8    | T_int8x16
+    | T_int16x4   | T_int16x8
+    | T_int32x2   | T_int32x4
+    | T_int64x1   | T_int64x2
+    | T_uint8x8   | T_uint8x16
+    | T_uint16x4  | T_uint16x8
+    | T_uint32x2  | T_uint32x4
+    | T_uint64x1  | T_uint64x2
+    | T_float32x2 | T_float32x4
+    | T_poly8x8   | T_poly8x16
+    | T_poly16x4  | T_poly16x8 ->
+        let decl = Printf.sprintf "%s %s = %s"
+          (string_of_vectype t) (p ^ "x") p in
+        pdecls := decl :: !pdecls;
+        add_cast_with_prefix t (p ^ "x.val") "__neon_"
+    | T_immediate (lo, hi) -> p
+    | _ ->
+        let decl = Printf.sprintf "%s %s = %s"
+          (string_of_vectype t) (p ^ "x") p in
+        pdecls := decl :: !pdecls;
+        add_cast t (p ^ "x") in
+    (* LLVM LOCAL end Extract vector operand from wrapper struct. *)
+  let plist = match ps with
+    Arity0 _ -> []
+  | Arity1 (_, t1) -> [ptype t1 "__a"]
+  | Arity2 (_, t1, t2) -> [ptype t1 "__a"; ptype t2 "__b"]
+  | Arity3 (_, t1, t2, t3) -> [ptype t1 "__a"; ptype t2 "__b"; ptype t3 "__c"]
+  | Arity4 (_, t1, t2, t3, t4) ->
+      [ptype t1 "__a"; ptype t2 "__b"; ptype t3 "__c"; ptype t4 "__d"] in
+  match ps with
+    Arity0 ret | Arity1 (ret, _) | Arity2 (ret, _, _) | Arity3 (ret, _, _, _)
+  | Arity4 (ret, _, _, _, _) ->
+      if return_by_ptr then
+        !pdecls, add_cast (T_ptrto (element_type ret)) "&__rv.val[0]" :: plist
+      else
+        !pdecls, plist
+
+let modify_params features plist =
+  let is_flipped =
+    List.exists (function Flipped _ -> true | _ -> false) features in
+  if is_flipped then
+    match plist with
+      [ a; b ] -> [ b; a ]
+    | _ ->
+      failwith ("Don't know how to flip args " ^ (String.concat ", " plist))
+  else
+    plist
+
+(* !!! Decide whether to add an extra information word based on the shape
+   form.  *)
+let extra_word shape features paramlist bits =
+  let use_word =
+    match shape with
+      All _ | Long | Long_noreg _ | Wide | Wide_noreg _ | Narrow
+    | By_scalar _ | Wide_scalar | Wide_lane | Binary_imm _ | Long_imm
+    | Narrow_imm -> true
+    | _ -> List.mem InfoWord features
+  in
+    if use_word then
+      paramlist @ [string_of_int bits]
+    else
+      paramlist
+
+(* Bit 0 represents signed (1) vs unsigned (0), or float (1) vs poly (0).
+   Bit 1 represents rounding (1) vs none (0).
+   Bit 2 represents floats & polynomials (1), or ordinary integers (0).  *)
+let infoword_value elttype features =
+  let bits02 =
+    match elt_class elttype with
+      Signed | ConvClass (Signed, _) | ConvClass (_, Signed) -> 0b001
+    | Poly -> 0b100
+    | Float -> 0b101
+    | _ -> 0b000
+  and rounding_bit = if List.mem Rounding features then 0b010 else 0b000 in
+  bits02 lor rounding_bit
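+
+(* Sample values, matching the trailing literal passed to the builtins in
+   the generated header: plain unsigned ops get 0b000 = 0 and plain signed
+   ops 0b001 = 1 (the 0/1 arguments of vand_u32 and vand_s32), a signed op
+   with Rounding gets 0b011 = 3, Float 0b101 = 5, and Poly 0b100 = 4.  *)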
+
+(* "Cast" type operations will throw an exception in mode_of_elt (actually in
+   elt_width, called from there). Deal with that here, and generate a suffix
+   with multiple modes (<to><from>).  *)
+let rec mode_suffix elttype shape =
+  try
+    let mode = mode_of_elt elttype shape in
+    string_of_mode mode
+  with MixedMode (dst, src) ->
+    let dstmode = mode_of_elt dst shape
+    and srcmode = mode_of_elt src shape in
+    string_of_mode dstmode ^ string_of_mode srcmode
+
+let print_variant opcode features shape name (ctype, asmtype, elttype) =
+  let bits = infoword_value elttype features in
+  let modesuf = mode_suffix elttype shape in
+  let return_by_ptr = return_by_ptr features in
+  let pdecls, paramlist = params return_by_ptr ctype in
+  let paramlist' = modify_params features paramlist in
+  let paramlist'' = extra_word shape features paramlist' bits in
+  let parstr = String.concat ", " paramlist'' in
+  let builtin = Printf.sprintf "__builtin_neon_%s%s (%s)"
+                  (builtin_name features name) modesuf parstr in
+  let rdecls, stmts = return ctype return_by_ptr builtin in
+  let body = pdecls @ rdecls @ stmts
+  and fnname = (intrinsic_name name) ^ "_" ^ (string_of_elt elttype) in
+  print_function ctype fnname body
+
+(* When this function processes the element types in the ops table, it rewrites
+   them in a list of tuples (a,b,c):
+     a : C type as an "arity", e.g. Arity1 (T_poly8x8, T_poly8x8)
+     b : Asm type : a single, processed element type, e.g. P16. This is the
+         type which should be attached to the asm opcode.
+     c : Variant type : the unprocessed type for this variant (e.g. in add
+         instructions which don't care about the sign, b might be i16 and c
+         might be s16.)
+*)
+
+let print_op (opcode, features, shape, name, munge, types) =
+  let sorted_types = List.sort compare types in
+  let munged_types = List.map
+    (fun elt -> let c, asm = munge shape elt in c, asm, elt) sorted_types in
+  List.iter
+    (fun variant -> print_variant opcode features shape name variant)
+    munged_types
+  
+let print_ops ops =
+  List.iter print_op ops
+
+(* Output type definitions. Table entries are:
+     cbase : "C" name for the type.
+     abase : "ARM" base name for the type (i.e. int in int8x8_t).
+     esize : element size.
+     enum : element count.
+   As far as I can tell, we can't really distinguish between polynomial
+   types and integer types in the C type system, which may allow the user
+   to make mistakes without warnings from the compiler.
+   FIXME: It's probably better to use stdint.h names here.
+*)
+
+let deftypes () =
+  let typeinfo = [
+    (* Doubleword vector types.  *)
+    "__builtin_neon_qi", "int", 8, 8;
+    "__builtin_neon_hi", "int", 16, 4;
+    "__builtin_neon_si", "int", 32, 2;
+    "__builtin_neon_di", "int", 64, 1;
+    "__builtin_neon_sf", "float", 32, 2;
+    "__builtin_neon_poly8", "poly", 8, 8;
+    "__builtin_neon_poly16", "poly", 16, 4;
+    "__builtin_neon_uqi", "uint", 8, 8;
+    "__builtin_neon_uhi", "uint", 16, 4;
+    "__builtin_neon_usi", "uint", 32, 2;
+    "__builtin_neon_udi", "uint", 64, 1;
+    
+    (* Quadword vector types.  *)
+    "__builtin_neon_qi", "int", 8, 16;
+    "__builtin_neon_hi", "int", 16, 8;
+    "__builtin_neon_si", "int", 32, 4;
+    "__builtin_neon_di", "int", 64, 2;
+    "__builtin_neon_sf", "float", 32, 4;
+    "__builtin_neon_poly8", "poly", 8, 16;
+    "__builtin_neon_poly16", "poly", 16, 8;
+    "__builtin_neon_uqi", "uint", 8, 16;
+    "__builtin_neon_uhi", "uint", 16, 8;
+    "__builtin_neon_usi", "uint", 32, 4;
+    "__builtin_neon_udi", "uint", 64, 2
+  ] in
+  (* LLVM LOCAL remove typedefs for builtin Neon vector types *)
+  (* Extra types not in <stdint.h>.  *)
+  Format.printf "typedef __builtin_neon_sf float32_t;\n";
+  Format.printf "typedef __builtin_neon_poly8 poly8_t;\n";
+  Format.printf "typedef __builtin_neon_poly16 poly16_t;\n"
+(* LLVM LOCAL begin Define containerized vector types. *)
+  ;
+  List.iter
+    (fun (cbase, abase, esize, enum) ->
+      let typename =
+        Printf.sprintf "%s%dx%d_t" abase esize enum in
+      let structname =
+        Printf.sprintf "__simd%d_%s%d_t" (esize * enum) abase esize in
+      let sfmt = start_function () in
+      Format.printf "typedef struct %s" structname;
+      open_braceblock sfmt;
+      Format.printf "__neon_%s val;" typename;
+      close_braceblock sfmt;
+      Format.printf " %s;" typename;
+      end_function sfmt)
+    typeinfo
+(* LLVM LOCAL end Define containerized vector types. *)
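+
+(* For the ("__builtin_neon_hi", "int", 16, 4) row this emits, modulo the
+   pretty-printer's exact layout:
+
+     typedef struct __simd64_int16_t
+     {
+       __neon_int16x4_t val;
+     } int16x4_t;
+*)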
+
+(* Output structs containing arrays, for load & store instructions etc.  *)
+
+let arrtypes () =
+  let typeinfo = [
+    "int", 8;    "int", 16;
+    "int", 32;   "int", 64;
+    "uint", 8;   "uint", 16;
+    "uint", 32;  "uint", 64;
+    "float", 32; "poly", 8;
+    "poly", 16
+  ] in
+  let writestruct elname elsize regsize arrsize =
+    let elnum = regsize / elsize in
+    let structname =
+      Printf.sprintf "%s%dx%dx%d_t" elname elsize elnum arrsize in
+    let sfmt = start_function () in
+    Format.printf "typedef struct %s" structname;
+    open_braceblock sfmt;
+    Format.printf "%s%dx%d_t val[%d];" elname elsize elnum arrsize;
+    close_braceblock sfmt;
+    Format.printf " %s;" structname;
+    end_function sfmt;
+  in
+    for n = 2 to 4 do
+      List.iter
+        (fun (elname, elsize) ->
+          writestruct elname elsize 64 n;
+          writestruct elname elsize 128 n)
+        typeinfo
+    done
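+
+(* For the ("int", 8) entry with regsize 64 and arrsize 2 this emits:
+
+     typedef struct int8x8x2_t
+     {
+       int8x8_t val[2];
+     } int8x8x2_t;
+*)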
+
+let print_lines = List.iter (fun s -> Format.printf "%s@\n" s)
+
+(* Do it.  *)
+
+let _ =
+  print_lines [
+"/* Internal definitions for standard versions of NEON types and intrinsics.";
+"   Do not include this file directly; please use <arm_neon.h>.";
+"";
+"   This file is generated automatically using neon-gen-std.ml.";
+"   Please do not edit manually.";
+"";
+"   Copyright (C) 2006, 2007 Free Software Foundation, Inc.";
+"   Contributed by CodeSourcery.";
+"";
+"   This file is part of GCC.";
+"";
+"   GCC is free software; you can redistribute it and/or modify it";
+"   under the terms of the GNU General Public License as published";
+"   by the Free Software Foundation; either version 2, or (at your";
+"   option) any later version.";
+"";
+"   GCC is distributed in the hope that it will be useful, but WITHOUT";
+"   ANY WARRANTY; without even the implied warranty of MERCHANTABILITY";
+"   or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public";
+"   License for more details.";
+"";
+"   You should have received a copy of the GNU General Public License";
+"   along with GCC; see the file COPYING.  If not, write to the";
+"   Free Software Foundation, 51 Franklin Street, Fifth Floor, Boston,";
+"   MA 02110-1301, USA.  */";
+"";
+"/* As a special exception, if you include this header file into source";
+"   files compiled by GCC, this header file does not by itself cause";
+"   the resulting executable to be covered by the GNU General Public";
+"   License.  This exception does not however invalidate any other";
+"   reasons why the executable file might be covered by the GNU General";
+"   Public License.  */";
+"";
+"#ifndef _GCC_ARM_NEON_H";
+"#define _GCC_ARM_NEON_H 1";
+"";
+"#ifndef __ARM_NEON__";
+"#error You must enable NEON instructions (e.g. -mfloat-abi=softfp -mfpu=neon) to use arm_neon.h";
+"#else";
+"";
+"#ifdef __cplusplus";
+"extern \"C\" {";
+(* LLVM LOCAL begin Use reinterpret_cast for pointers in C++ *)
+"#define __neon_ptr_cast(ty, ptr) reinterpret_cast<ty>(ptr)";
+"#else";
+"#define __neon_ptr_cast(ty, ptr) (ty)(ptr)";
+(* LLVM LOCAL end Use reinterpret_cast for pointers in C++ *)
+"#endif";
+"";
+"#include <stdint.h>";
+""];
+  deftypes ();
+  arrtypes ();
+  Format.print_newline ();
+  print_ops ops;
+  Format.print_newline ();
+  print_ops reinterp;
+  print_lines [
+"#ifdef __cplusplus";
+"}";
+"#endif";
+"#endif";
+"#endif"]

Modified: llvm-gcc-4.2/trunk/gcc/config/arm/neon-gen.ml
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/gcc/config/arm/neon-gen.ml?rev=103811&r1=103810&r2=103811&view=diff
==============================================================================
--- llvm-gcc-4.2/trunk/gcc/config/arm/neon-gen.ml (original)
+++ llvm-gcc-4.2/trunk/gcc/config/arm/neon-gen.ml Fri May 14 16:31:28 2010
@@ -32,7 +32,7 @@
      ocamlc -o neon-gen neon.cmo neon-gen.ml
 
    Run with:
-     ./neon-gen > arm_neon.h
+     ./neon-gen > arm_neon_gcc.h
 *)
 
 open Neon
@@ -136,35 +136,29 @@
   | x -> x
 
 (* LLVM LOCAL begin union_string.
-   Array types are handled as structs in llvm-gcc, not as wide integers, and
-   single vector types have wrapper structs.  Unions are used here to convert
+   Array types are handled as structs in llvm-gcc, not as wide integers.
+   Unions are used here to convert
    back and forth between these different representations.  The union_string
    function has been updated accordingly, and it is moved below signed_ctype
    so it can use that function.  *)
 let union_string num elts base =
-  let itype = match num with
-    1 -> elts
-  | _ -> T_arrayof (num, elts) in
+  let itype = T_arrayof (num, elts) in
   let iname = string_of_vectype (signed_ctype itype)
   and sname = string_of_vectype itype in
   Printf.sprintf "union { %s __i; __neon_%s __o; } %s" sname iname base
 (* LLVM LOCAL end union_string.  *)
 
-(* LLVM LOCAL begin add_cast_with_prefix.  *)
-let add_cast_with_prefix ctype cval stype_prefix =
+let add_cast ctype cval =
   let stype = signed_ctype ctype in
   if ctype <> stype then
     match stype with
       T_ptrto elt ->
-        Printf.sprintf "__neon_ptr_cast(%s%s, %s)" stype_prefix (string_of_vectype stype) cval
+        Printf.sprintf "__neon_ptr_cast(%s, %s)" (string_of_vectype stype) cval
     | _ ->
-        Printf.sprintf "(%s%s) %s" stype_prefix (string_of_vectype stype) cval
+        Printf.sprintf "(%s) %s" (string_of_vectype stype) cval
   else
     cval
 
-let add_cast ctype cval = add_cast_with_prefix ctype cval ""
-(* LLVM LOCAL end add_cast_with_prefix.  *)
-
 let cast_for_return to_ty = "(" ^ (string_of_vectype to_ty) ^ ")"
 
 (* Return a tuple of a list of declarations to go at the start of the function,
@@ -183,21 +177,6 @@
         else
           let uname = union_string num vec "__rv" in
           [uname], ["__rv.__o = " ^ thing; "__rv.__i"]
-    (* LLVM LOCAL begin Convert vector result to wrapper struct. *)
-    | T_int8x8    | T_int8x16
-    | T_int16x4   | T_int16x8
-    | T_int32x2   | T_int32x4
-    | T_int64x1   | T_int64x2
-    | T_uint8x8   | T_uint8x16
-    | T_uint16x4  | T_uint16x8
-    | T_uint32x2  | T_uint32x4
-    | T_uint64x1  | T_uint64x2
-    | T_float32x2 | T_float32x4
-    | T_poly8x8   | T_poly8x16
-    | T_poly16x4  | T_poly16x8 ->
-        let uname = union_string 1 ret "__rv" in
-        [uname], ["__rv.__o = " ^ thing; "__rv.__i"]
-    (* LLVM LOCAL end Convert vector result to wrapper struct. *)
     | T_void -> [], [thing]
     | _ ->
         [], [(cast_for_return ret) ^ thing]
@@ -218,29 +197,7 @@
         let decl = Printf.sprintf "%s = { %s }" uname p in
         pdecls := decl :: !pdecls;
         p ^ "u.__o"
-    (* LLVM LOCAL begin Extract vector operand from wrapper struct. *)
-    | T_int8x8    | T_int8x16
-    | T_int16x4   | T_int16x8
-    | T_int32x2   | T_int32x4
-    | T_int64x1   | T_int64x2
-    | T_uint8x8   | T_uint8x16
-    | T_uint16x4  | T_uint16x8
-    | T_uint32x2  | T_uint32x4
-    | T_uint64x1  | T_uint64x2
-    | T_float32x2 | T_float32x4
-    | T_poly8x8   | T_poly8x16
-    | T_poly16x4  | T_poly16x8 ->
-        let decl = Printf.sprintf "%s %s = %s"
-          (string_of_vectype t) (p ^ "x") p in
-        pdecls := decl :: !pdecls;
-        add_cast_with_prefix t (p ^ "x.val") "__neon_"
-    | T_immediate (lo, hi) -> p
-    | _ ->
-        let decl = Printf.sprintf "%s %s = %s"
-          (string_of_vectype t) (p ^ "x") p in
-        pdecls := decl :: !pdecls;
-        add_cast t (p ^ "x") in
-    (* LLVM LOCAL end Extract vector operand from wrapper struct. *)
+    | _ -> add_cast t p in
   let plist = match ps with
     Arity0 _ -> []
   | Arity1 (_, t1) -> [ptype t1 "__a"]
@@ -382,36 +339,19 @@
     "__builtin_neon_usi", "uint", 32, 4;
     "__builtin_neon_udi", "uint", 64, 2
   ] in
-  List.iter
-    (fun (cbase, abase, esize, enum) ->
-      let attr =
-        match enum with
-        (* LLVM LOCAL no special case for enum == 1 so int64x1_t is a vector *)
-          _ -> Printf.sprintf "\t__attribute__ ((__vector_size__ (%d)))"
-                              (esize * enum / 8) in
-      (* LLVM LOCAL Add "__neon_" prefix. *)
-      Format.printf "typedef %s __neon_%s%dx%d_t%s;@\n" cbase abase esize enum attr)
-    typeinfo;
-  Format.print_newline ();
+  (* LLVM LOCAL remove typedefs for builtin Neon vector types *)
   (* Extra types not in <stdint.h>.  *)
   Format.printf "typedef __builtin_neon_sf float32_t;\n";
   Format.printf "typedef __builtin_neon_poly8 poly8_t;\n";
   Format.printf "typedef __builtin_neon_poly16 poly16_t;\n"
 (* LLVM LOCAL begin Define containerized vector types. *)
   ;
+  Format.print_newline ();
   List.iter
     (fun (cbase, abase, esize, enum) ->
       let typename =
         Printf.sprintf "%s%dx%d_t" abase esize enum in
-      let structname =
-        Printf.sprintf "__simd%d_%s%d_t" (esize * enum) abase esize in
-      let sfmt = start_function () in
-      Format.printf "typedef struct %s" structname;
-      open_braceblock sfmt;
-      Format.printf "__neon_%s val;" typename;
-      close_braceblock sfmt;
-      Format.printf " %s;" typename;
-      end_function sfmt)
+      Format.printf "typedef __neon_%s %s;\n" typename typename)
     typeinfo
 (* LLVM LOCAL end Define containerized vector types. *)
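+
+(* With this change the ("__builtin_neon_hi", "int", 16, 4) row emits a
+   plain typedef instead of a wrapper struct:
+
+     typedef __neon_int16x4_t int16x4_t;
+*)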
 
@@ -452,10 +392,12 @@
 
 let _ =
   print_lines [
-"/* LLVM LOCAL file Changed to use preprocessor macros.  */";
-"/* APPLE LOCAL file v7 support. Merge from Codesourcery */";
-"/* ARM NEON intrinsics include file. This file is generated automatically";
-"   using neon-gen.ml.  Please do not edit manually.";
+"/* Internal definitions for GCC-compatible NEON types and intrinsics.";
+"   Do not include this file directly; please use <arm_neon.h> and define";
+"   the ARM_NEON_GCC_COMPATIBILITY macro.";
+"";
+"   This file is generated automatically using neon-gen.ml.";
+"   Please do not edit manually.";
 "";
 "   Copyright (C) 2006, 2007 Free Software Foundation, Inc.";
 "   Contributed by CodeSourcery.";
