[llvm] r230786 - [opaque pointer type] Add textual IR support for explicit type parameter to getelementptr instruction

David Blaikie dblaikie at gmail.com
Fri Feb 27 11:29:18 PST 2015


Author: dblaikie
Date: Fri Feb 27 13:29:02 2015
New Revision: 230786

URL: http://llvm.org/viewvc/llvm-project?rev=230786&view=rev
Log:
[opaque pointer type] Add textual IR support for explicit type parameter to getelementptr instruction

One of several parallel first steps to remove the target type of pointers,
replacing them with a single opaque pointer type.

This adds an explicit type parameter to the gep instruction so that when the
first parameter becomes an opaque pointer type, the type to gep through is
still available to the instructions.
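
For example, in the simplest (scalar, default address space) case this
turns an instruction like (illustrative only, not taken from any
particular test in this change):
    getelementptr inbounds i32* %p, i64 1
  ->getelementptr inbounds i32, i32* %p, i64 1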

* This doesn't modify gep operators, only instructions (operators will be
  handled separately)

* Textual IR changes only. Bitcode (including upgrade) and changing the
  in-memory representation will be in separate changes.

* geps of vectors are transformed as:
    getelementptr <4 x float*> %x, ...
  ->getelementptr float, <4 x float*> %x, ...
  Then, once the opaque pointer type is introduced, this will ultimately look
  like:
    getelementptr float, <4 x ptr> %x
  with the unambiguous interpretation that it is a vector of pointers to float.

* address spaces remain on the pointer, not the type:
    getelementptr float addrspace(1)* %x
  ->getelementptr float, float addrspace(1)* %x
  Then, eventually:
    getelementptr float, ptr addrspace(1) %x

Importantly, the massive amount of test case churn has been automated by
some crappy python code. I had to manually update a few test cases that
wouldn't fit the script's model (r228970, r229196, r229197, r229198). The
python script just massages stdin and writes the result to stdout; I then
wrapped that in a shell script to handle replacing files, and used the
usual find+xargs to migrate all the files.

update.py:
import sys
import re

ibrep = re.compile(r"(^.*?[^%\w]getelementptr inbounds )(((?:<\d* x )?)(.*?)(| addrspace\(\d\)) *\*(|>)(?:$| *(?:%|@|null|undef|blockaddress|getelementptr|addrspacecast|bitcast|inttoptr|\[\[[a-zA-Z]|\{\{).*$))")
normrep = re.compile(       r"(^.*?[^%\w]getelementptr )(((?:<\d* x )?)(.*?)(| addrspace\(\d\)) *\*(|>)(?:$| *(?:%|@|null|undef|blockaddress|getelementptr|addrspacecast|bitcast|inttoptr|\[\[[a-zA-Z]|\{\{).*$))")

def conv(match, line):
  # Rebuild the line as: prefix, explicit pointee type, ", ", then the
  # original operand text (which still carries the pointer type).
  if not match:
    return line
  line = match.groups()[0]
  if len(match.groups()[5]) == 0:
    # No ">" after the "*": any "<N x " prefix is part of the pointee type
    # itself (e.g. "<4 x float>*"), so keep it. For a vector of pointers
    # ("<4 x float*>") the ">" is present and only the element type is used.
    line += match.groups()[2]
  line += match.groups()[3]
  line += ", "
  line += match.groups()[1]
  line += "\n"
  return line

for line in sys.stdin:
  # Only rewrite gep instructions; constant-expression geps, which start
  # with "getelementptr (", are left alone since this change doesn't touch
  # gep operators.
  if line.find("getelementptr ") == line.find("getelementptr inbounds"):
    if line.find("getelementptr inbounds") != line.find("getelementptr inbounds ("):
      line = conv(re.match(ibrep, line), line)
  elif line.find("getelementptr ") != line.find("getelementptr ("):
    line = conv(re.match(normrep, line), line)
  sys.stdout.write(line)
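
As an illustration (a made-up line, not from any specific test), the
script rewrites
    %v = getelementptr inbounds [4 x i32]* %a, i64 0, i64 %i
to
    %v = getelementptr inbounds [4 x i32], [4 x i32]* %a, i64 0, i64 %i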

apply.sh:
for name in "$@"
do
  python3 `dirname "$0"`/update.py < "$name" > "$name.tmp" && mv "$name.tmp" "$name"
  rm -f "$name.tmp"
done

The actual commands:
From llvm/src:
find test/ -name *.ll | xargs ./apply.sh
From llvm/src/tools/clang:
find test/ -name *.mm -o -name *.m -o -name *.cpp -o -name *.c | xargs -I '{}' ../../apply.sh "{}"
From llvm/src/tools/polly:
find test/ -name *.ll | xargs ./apply.sh

After that, I ran check-all (with llvm, clang, clang-tools-extra, lld,
compiler-rt, and polly all checked out).

The extra 'rm' in the apply.sh script is there because a few files in
clang's test suite use interesting unicode that made my python script
throw exceptions. None of those files needed to be migrated, so it seemed
sufficient to ignore those cases.

Reviewers: rafael, dexonsmith, grosser

Differential Revision: http://reviews.llvm.org/D7636

Added:
    llvm/trunk/test/Assembler/invalid-gep-mismatched-explicit-type.ll
    llvm/trunk/test/Assembler/invalid-gep-missing-explicit-type.ll
Modified:
    llvm/trunk/lib/AsmParser/LLParser.cpp
    llvm/trunk/lib/IR/AsmWriter.cpp
    llvm/trunk/test/Analysis/BasicAA/2003-02-26-AccessSizeTest.ll
    llvm/trunk/test/Analysis/BasicAA/2003-03-04-GEPCrash.ll
    llvm/trunk/test/Analysis/BasicAA/2003-04-22-GEPProblem.ll
    llvm/trunk/test/Analysis/BasicAA/2003-04-25-GEPCrash.ll
    llvm/trunk/test/Analysis/BasicAA/2003-05-21-GEP-Problem.ll
    llvm/trunk/test/Analysis/BasicAA/2003-06-01-AliasCrash.ll
    llvm/trunk/test/Analysis/BasicAA/2003-07-03-BasicAACrash.ll
    llvm/trunk/test/Analysis/BasicAA/2003-11-04-SimpleCases.ll
    llvm/trunk/test/Analysis/BasicAA/2003-12-11-ConstExprGEP.ll
    llvm/trunk/test/Analysis/BasicAA/2004-07-28-MustAliasbug.ll
    llvm/trunk/test/Analysis/BasicAA/2006-03-03-BadArraySubscript.ll
    llvm/trunk/test/Analysis/BasicAA/2006-11-03-BasicAAVectorCrash.ll
    llvm/trunk/test/Analysis/BasicAA/2007-01-13-BasePointerBadNoAlias.ll
    llvm/trunk/test/Analysis/BasicAA/2007-08-01-NoAliasAndGEP.ll
    llvm/trunk/test/Analysis/BasicAA/2007-10-24-ArgumentsGlobals.ll
    llvm/trunk/test/Analysis/BasicAA/2007-11-05-SizeCrash.ll
    llvm/trunk/test/Analysis/BasicAA/2007-12-08-OutOfBoundsCrash.ll
    llvm/trunk/test/Analysis/BasicAA/2008-04-15-Byval.ll
    llvm/trunk/test/Analysis/BasicAA/2009-03-04-GEPNoalias.ll
    llvm/trunk/test/Analysis/BasicAA/2009-10-13-AtomicModRef.ll
    llvm/trunk/test/Analysis/BasicAA/2009-10-13-GEP-BaseNoAlias.ll
    llvm/trunk/test/Analysis/BasicAA/2010-09-15-GEP-SignedArithmetic.ll
    llvm/trunk/test/Analysis/BasicAA/2014-03-18-Maxlookup-reached.ll
    llvm/trunk/test/Analysis/BasicAA/byval.ll
    llvm/trunk/test/Analysis/BasicAA/constant-over-index.ll
    llvm/trunk/test/Analysis/BasicAA/cs-cs.ll
    llvm/trunk/test/Analysis/BasicAA/featuretest.ll
    llvm/trunk/test/Analysis/BasicAA/full-store-partial-alias.ll
    llvm/trunk/test/Analysis/BasicAA/gep-alias.ll
    llvm/trunk/test/Analysis/BasicAA/global-size.ll
    llvm/trunk/test/Analysis/BasicAA/intrinsics.ll
    llvm/trunk/test/Analysis/BasicAA/modref.ll
    llvm/trunk/test/Analysis/BasicAA/must-and-partial.ll
    llvm/trunk/test/Analysis/BasicAA/no-escape-call.ll
    llvm/trunk/test/Analysis/BasicAA/noalias-bugs.ll
    llvm/trunk/test/Analysis/BasicAA/noalias-geps.ll
    llvm/trunk/test/Analysis/BasicAA/phi-aa.ll
    llvm/trunk/test/Analysis/BasicAA/phi-spec-order.ll
    llvm/trunk/test/Analysis/BasicAA/phi-speculation.ll
    llvm/trunk/test/Analysis/BasicAA/pr18573.ll
    llvm/trunk/test/Analysis/BasicAA/store-promote.ll
    llvm/trunk/test/Analysis/BasicAA/struct-geps.ll
    llvm/trunk/test/Analysis/BasicAA/underlying-value.ll
    llvm/trunk/test/Analysis/BasicAA/unreachable-block.ll
    llvm/trunk/test/Analysis/BasicAA/zext.ll
    llvm/trunk/test/Analysis/BlockFrequencyInfo/basic.ll
    llvm/trunk/test/Analysis/BranchProbabilityInfo/basic.ll
    llvm/trunk/test/Analysis/BranchProbabilityInfo/loop.ll
    llvm/trunk/test/Analysis/BranchProbabilityInfo/pr18705.ll
    llvm/trunk/test/Analysis/CFLAliasAnalysis/const-expr-gep.ll
    llvm/trunk/test/Analysis/CFLAliasAnalysis/constant-over-index.ll
    llvm/trunk/test/Analysis/CFLAliasAnalysis/full-store-partial-alias.ll
    llvm/trunk/test/Analysis/CFLAliasAnalysis/gep-signed-arithmetic.ll
    llvm/trunk/test/Analysis/CFLAliasAnalysis/must-and-partial.ll
    llvm/trunk/test/Analysis/CFLAliasAnalysis/simple.ll
    llvm/trunk/test/Analysis/CostModel/ARM/gep.ll
    llvm/trunk/test/Analysis/CostModel/X86/gep.ll
    llvm/trunk/test/Analysis/CostModel/X86/intrinsic-cost.ll
    llvm/trunk/test/Analysis/CostModel/X86/loop_v2.ll
    llvm/trunk/test/Analysis/CostModel/X86/vectorized-loop.ll
    llvm/trunk/test/Analysis/Delinearization/a.ll
    llvm/trunk/test/Analysis/Delinearization/gcd_multiply_expr.ll
    llvm/trunk/test/Analysis/Delinearization/himeno_1.ll
    llvm/trunk/test/Analysis/Delinearization/himeno_2.ll
    llvm/trunk/test/Analysis/Delinearization/iv_times_constant_in_subscript.ll
    llvm/trunk/test/Analysis/Delinearization/multidim_ivs_and_integer_offsets_3d.ll
    llvm/trunk/test/Analysis/Delinearization/multidim_ivs_and_integer_offsets_nts_3d.ll
    llvm/trunk/test/Analysis/Delinearization/multidim_ivs_and_parameteric_offsets_3d.ll
    llvm/trunk/test/Analysis/Delinearization/multidim_only_ivs_2d.ll
    llvm/trunk/test/Analysis/Delinearization/multidim_only_ivs_2d_nested.ll
    llvm/trunk/test/Analysis/Delinearization/multidim_only_ivs_3d.ll
    llvm/trunk/test/Analysis/Delinearization/multidim_only_ivs_3d_cast.ll
    llvm/trunk/test/Analysis/Delinearization/multidim_two_accesses_different_delinearization.ll
    llvm/trunk/test/Analysis/Delinearization/undef.ll
    llvm/trunk/test/Analysis/DependenceAnalysis/Banerjee.ll
    llvm/trunk/test/Analysis/DependenceAnalysis/Coupled.ll
    llvm/trunk/test/Analysis/DependenceAnalysis/ExactRDIV.ll
    llvm/trunk/test/Analysis/DependenceAnalysis/ExactSIV.ll
    llvm/trunk/test/Analysis/DependenceAnalysis/GCD.ll
    llvm/trunk/test/Analysis/DependenceAnalysis/Invariant.ll
    llvm/trunk/test/Analysis/DependenceAnalysis/NonCanonicalizedSubscript.ll
    llvm/trunk/test/Analysis/DependenceAnalysis/Preliminary.ll
    llvm/trunk/test/Analysis/DependenceAnalysis/Propagating.ll
    llvm/trunk/test/Analysis/DependenceAnalysis/Separability.ll
    llvm/trunk/test/Analysis/DependenceAnalysis/StrongSIV.ll
    llvm/trunk/test/Analysis/DependenceAnalysis/SymbolicRDIV.ll
    llvm/trunk/test/Analysis/DependenceAnalysis/SymbolicSIV.ll
    llvm/trunk/test/Analysis/DependenceAnalysis/WeakCrossingSIV.ll
    llvm/trunk/test/Analysis/DependenceAnalysis/WeakZeroDstSIV.ll
    llvm/trunk/test/Analysis/DependenceAnalysis/WeakZeroSrcSIV.ll
    llvm/trunk/test/Analysis/DependenceAnalysis/ZIV.ll
    llvm/trunk/test/Analysis/Dominators/invoke.ll
    llvm/trunk/test/Analysis/LoopAccessAnalysis/backward-dep-different-types.ll
    llvm/trunk/test/Analysis/LoopAccessAnalysis/unsafe-and-rt-checks-no-dbg.ll
    llvm/trunk/test/Analysis/LoopAccessAnalysis/unsafe-and-rt-checks.ll
    llvm/trunk/test/Analysis/MemoryDependenceAnalysis/memdep_requires_dominator_tree.ll
    llvm/trunk/test/Analysis/ScalarEvolution/2007-07-15-NegativeStride.ll
    llvm/trunk/test/Analysis/ScalarEvolution/2008-07-12-UnneededSelect1.ll
    llvm/trunk/test/Analysis/ScalarEvolution/2008-12-08-FiniteSGE.ll
    llvm/trunk/test/Analysis/ScalarEvolution/2009-05-09-PointerEdgeCount.ll
    llvm/trunk/test/Analysis/ScalarEvolution/2012-03-26-LoadConstant.ll
    llvm/trunk/test/Analysis/ScalarEvolution/SolveQuadraticEquation.ll
    llvm/trunk/test/Analysis/ScalarEvolution/avoid-smax-0.ll
    llvm/trunk/test/Analysis/ScalarEvolution/avoid-smax-1.ll
    llvm/trunk/test/Analysis/ScalarEvolution/load.ll
    llvm/trunk/test/Analysis/ScalarEvolution/max-trip-count-address-space.ll
    llvm/trunk/test/Analysis/ScalarEvolution/max-trip-count.ll
    llvm/trunk/test/Analysis/ScalarEvolution/min-max-exprs.ll
    llvm/trunk/test/Analysis/ScalarEvolution/nsw-offset-assume.ll
    llvm/trunk/test/Analysis/ScalarEvolution/nsw-offset.ll
    llvm/trunk/test/Analysis/ScalarEvolution/nsw.ll
    llvm/trunk/test/Analysis/ScalarEvolution/pr22674.ll
    llvm/trunk/test/Analysis/ScalarEvolution/scev-aa.ll
    llvm/trunk/test/Analysis/ScalarEvolution/sext-inreg.ll
    llvm/trunk/test/Analysis/ScalarEvolution/sext-iv-0.ll
    llvm/trunk/test/Analysis/ScalarEvolution/sext-iv-1.ll
    llvm/trunk/test/Analysis/ScalarEvolution/sext-iv-2.ll
    llvm/trunk/test/Analysis/ScalarEvolution/sle.ll
    llvm/trunk/test/Analysis/ScalarEvolution/trip-count.ll
    llvm/trunk/test/Analysis/ScalarEvolution/trip-count11.ll
    llvm/trunk/test/Analysis/ScalarEvolution/trip-count12.ll
    llvm/trunk/test/Analysis/ScalarEvolution/trip-count2.ll
    llvm/trunk/test/Analysis/ScalarEvolution/trip-count3.ll
    llvm/trunk/test/Analysis/ScalarEvolution/trip-count4.ll
    llvm/trunk/test/Analysis/ScalarEvolution/trip-count5.ll
    llvm/trunk/test/Analysis/ScalarEvolution/trip-count6.ll
    llvm/trunk/test/Analysis/ScalarEvolution/trip-count7.ll
    llvm/trunk/test/Analysis/ScopedNoAliasAA/basic-domains.ll
    llvm/trunk/test/Analysis/ScopedNoAliasAA/basic.ll
    llvm/trunk/test/Analysis/ScopedNoAliasAA/basic2.ll
    llvm/trunk/test/Analysis/TypeBasedAliasAnalysis/dynamic-indices.ll
    llvm/trunk/test/Analysis/TypeBasedAliasAnalysis/licm.ll
    llvm/trunk/test/Analysis/TypeBasedAliasAnalysis/placement-tbaa.ll
    llvm/trunk/test/Analysis/TypeBasedAliasAnalysis/precedence.ll
    llvm/trunk/test/Analysis/TypeBasedAliasAnalysis/tbaa-path.ll
    llvm/trunk/test/Analysis/ValueTracking/memory-dereferenceable.ll
    llvm/trunk/test/Assembler/2002-08-19-BytecodeReader.ll
    llvm/trunk/test/Assembler/2004-04-04-GetElementPtrIndexTypes.ll
    llvm/trunk/test/Assembler/2004-06-07-VerifierBug.ll
    llvm/trunk/test/Assembler/2007-01-05-Cmp-ConstExpr.ll
    llvm/trunk/test/Assembler/flags.ll
    llvm/trunk/test/Assembler/getelementptr.ll
    llvm/trunk/test/Assembler/getelementptr_struct.ll
    llvm/trunk/test/Assembler/getelementptr_vec_idx1.ll
    llvm/trunk/test/Assembler/getelementptr_vec_idx2.ll
    llvm/trunk/test/Assembler/getelementptr_vec_idx3.ll
    llvm/trunk/test/Assembler/getelementptr_vec_struct.ll
    llvm/trunk/test/Bitcode/constantsTest.3.2.ll
    llvm/trunk/test/Bitcode/function-encoding-rel-operands.ll
    llvm/trunk/test/Bitcode/memInstructions.3.2.ll
    llvm/trunk/test/BugPoint/compile-custom.ll   (contents, props changed)
    llvm/trunk/test/CodeGen/AArch64/128bit_load_store.ll
    llvm/trunk/test/CodeGen/AArch64/PBQP-chain.ll
    llvm/trunk/test/CodeGen/AArch64/PBQP-coalesce-benefit.ll
    llvm/trunk/test/CodeGen/AArch64/PBQP-csr.ll
    llvm/trunk/test/CodeGen/AArch64/Redundantstore.ll
    llvm/trunk/test/CodeGen/AArch64/aarch64-2014-08-11-MachineCombinerCrash.ll
    llvm/trunk/test/CodeGen/AArch64/aarch64-a57-fp-load-balancing.ll
    llvm/trunk/test/CodeGen/AArch64/aarch64-address-type-promotion-assertion.ll
    llvm/trunk/test/CodeGen/AArch64/aarch64-address-type-promotion.ll
    llvm/trunk/test/CodeGen/AArch64/aarch64-gep-opt.ll
    llvm/trunk/test/CodeGen/AArch64/and-mask-removal.ll
    llvm/trunk/test/CodeGen/AArch64/arm64-2011-03-21-Unaligned-Frame-Index.ll
    llvm/trunk/test/CodeGen/AArch64/arm64-2012-05-22-LdStOptBug.ll
    llvm/trunk/test/CodeGen/AArch64/arm64-2012-07-11-InstrEmitterBug.ll
    llvm/trunk/test/CodeGen/AArch64/arm64-abi-varargs.ll
    llvm/trunk/test/CodeGen/AArch64/arm64-abi_align.ll
    llvm/trunk/test/CodeGen/AArch64/arm64-addr-mode-folding.ll
    llvm/trunk/test/CodeGen/AArch64/arm64-addr-type-promotion.ll
    llvm/trunk/test/CodeGen/AArch64/arm64-addrmode.ll
    llvm/trunk/test/CodeGen/AArch64/arm64-atomic.ll
    llvm/trunk/test/CodeGen/AArch64/arm64-bcc.ll
    llvm/trunk/test/CodeGen/AArch64/arm64-big-endian-varargs.ll
    llvm/trunk/test/CodeGen/AArch64/arm64-big-stack.ll
    llvm/trunk/test/CodeGen/AArch64/arm64-bitfield-extract.ll
    llvm/trunk/test/CodeGen/AArch64/arm64-cast-opt.ll
    llvm/trunk/test/CodeGen/AArch64/arm64-ccmp-heuristics.ll
    llvm/trunk/test/CodeGen/AArch64/arm64-ccmp.ll
    llvm/trunk/test/CodeGen/AArch64/arm64-collect-loh-garbage-crash.ll
    llvm/trunk/test/CodeGen/AArch64/arm64-complex-copy-noneon.ll
    llvm/trunk/test/CodeGen/AArch64/arm64-const-addr.ll
    llvm/trunk/test/CodeGen/AArch64/arm64-cse.ll
    llvm/trunk/test/CodeGen/AArch64/arm64-dagcombiner-dead-indexed-load.ll
    llvm/trunk/test/CodeGen/AArch64/arm64-dagcombiner-load-slicing.ll
    llvm/trunk/test/CodeGen/AArch64/arm64-early-ifcvt.ll
    llvm/trunk/test/CodeGen/AArch64/arm64-elf-globals.ll
    llvm/trunk/test/CodeGen/AArch64/arm64-extend.ll
    llvm/trunk/test/CodeGen/AArch64/arm64-extern-weak.ll
    llvm/trunk/test/CodeGen/AArch64/arm64-fast-isel-addr-offset.ll
    llvm/trunk/test/CodeGen/AArch64/arm64-fast-isel-alloca.ll
    llvm/trunk/test/CodeGen/AArch64/arm64-fast-isel-indirectbr.ll
    llvm/trunk/test/CodeGen/AArch64/arm64-fast-isel-intrinsic.ll
    llvm/trunk/test/CodeGen/AArch64/arm64-fast-isel.ll
    llvm/trunk/test/CodeGen/AArch64/arm64-fastisel-gep-promote-before-add.ll
    llvm/trunk/test/CodeGen/AArch64/arm64-fold-address.ll
    llvm/trunk/test/CodeGen/AArch64/arm64-fold-lsl.ll
    llvm/trunk/test/CodeGen/AArch64/arm64-indexed-memory.ll
    llvm/trunk/test/CodeGen/AArch64/arm64-indexed-vector-ldst-2.ll
    llvm/trunk/test/CodeGen/AArch64/arm64-indexed-vector-ldst.ll
    llvm/trunk/test/CodeGen/AArch64/arm64-inline-asm.ll
    llvm/trunk/test/CodeGen/AArch64/arm64-large-frame.ll
    llvm/trunk/test/CodeGen/AArch64/arm64-ldp.ll
    llvm/trunk/test/CodeGen/AArch64/arm64-ldur.ll
    llvm/trunk/test/CodeGen/AArch64/arm64-memset-inline.ll
    llvm/trunk/test/CodeGen/AArch64/arm64-misched-basic-A53.ll
    llvm/trunk/test/CodeGen/AArch64/arm64-misched-basic-A57.ll
    llvm/trunk/test/CodeGen/AArch64/arm64-prefetch.ll
    llvm/trunk/test/CodeGen/AArch64/arm64-register-offset-addressing.ll
    llvm/trunk/test/CodeGen/AArch64/arm64-rev.ll
    llvm/trunk/test/CodeGen/AArch64/arm64-scaled_iv.ll
    llvm/trunk/test/CodeGen/AArch64/arm64-scvt.ll
    llvm/trunk/test/CodeGen/AArch64/arm64-spill-lr.ll
    llvm/trunk/test/CodeGen/AArch64/arm64-st1.ll
    llvm/trunk/test/CodeGen/AArch64/arm64-stp.ll
    llvm/trunk/test/CodeGen/AArch64/arm64-stur.ll
    llvm/trunk/test/CodeGen/AArch64/arm64-this-return.ll
    llvm/trunk/test/CodeGen/AArch64/arm64-triv-disjoint-mem-access.ll
    llvm/trunk/test/CodeGen/AArch64/arm64-trunc-store.ll
    llvm/trunk/test/CodeGen/AArch64/arm64-vector-ldst.ll
    llvm/trunk/test/CodeGen/AArch64/arm64-virtual_base.ll
    llvm/trunk/test/CodeGen/AArch64/arm64-volatile.ll
    llvm/trunk/test/CodeGen/AArch64/arm64-zextload-unscaled.ll
    llvm/trunk/test/CodeGen/AArch64/assertion-rc-mismatch.ll
    llvm/trunk/test/CodeGen/AArch64/cmpwithshort.ll
    llvm/trunk/test/CodeGen/AArch64/combine-comparisons-by-cse.ll
    llvm/trunk/test/CodeGen/AArch64/complex-copy-noneon.ll
    llvm/trunk/test/CodeGen/AArch64/eliminate-trunc.ll
    llvm/trunk/test/CodeGen/AArch64/extern-weak.ll
    llvm/trunk/test/CodeGen/AArch64/f16-convert.ll
    llvm/trunk/test/CodeGen/AArch64/fast-isel-gep.ll
    llvm/trunk/test/CodeGen/AArch64/func-argpassing.ll
    llvm/trunk/test/CodeGen/AArch64/global-merge-3.ll
    llvm/trunk/test/CodeGen/AArch64/i128-align.ll
    llvm/trunk/test/CodeGen/AArch64/intrinsics-memory-barrier.ll
    llvm/trunk/test/CodeGen/AArch64/ldst-opt.ll
    llvm/trunk/test/CodeGen/AArch64/ldst-regoffset.ll
    llvm/trunk/test/CodeGen/AArch64/ldst-unscaledimm.ll
    llvm/trunk/test/CodeGen/AArch64/ldst-unsignedimm.ll
    llvm/trunk/test/CodeGen/AArch64/paired-load.ll
    llvm/trunk/test/CodeGen/AArch64/ragreedy-csr.ll
    llvm/trunk/test/CodeGen/AArch64/stack_guard_remat.ll
    llvm/trunk/test/CodeGen/AArch64/zero-reg.ll
    llvm/trunk/test/CodeGen/ARM/2006-11-10-CycleInDAG.ll
    llvm/trunk/test/CodeGen/ARM/2007-01-19-InfiniteLoop.ll
    llvm/trunk/test/CodeGen/ARM/2007-03-07-CombinerCrash.ll
    llvm/trunk/test/CodeGen/ARM/2007-03-13-InstrSched.ll
    llvm/trunk/test/CodeGen/ARM/2007-03-21-JoinIntervalsCrash.ll
    llvm/trunk/test/CodeGen/ARM/2007-03-27-RegScavengerAssert.ll
    llvm/trunk/test/CodeGen/ARM/2007-04-02-RegScavengerAssert.ll
    llvm/trunk/test/CodeGen/ARM/2007-04-03-UndefinedSymbol.ll
    llvm/trunk/test/CodeGen/ARM/2007-05-03-BadPostIndexedLd.ll
    llvm/trunk/test/CodeGen/ARM/2007-05-23-BadPreIndexedStore.ll
    llvm/trunk/test/CodeGen/ARM/2007-08-15-ReuseBug.ll
    llvm/trunk/test/CodeGen/ARM/2008-02-29-RegAllocLocal.ll
    llvm/trunk/test/CodeGen/ARM/2008-04-10-ScavengerAssert.ll
    llvm/trunk/test/CodeGen/ARM/2008-05-19-LiveIntervalsBug.ll
    llvm/trunk/test/CodeGen/ARM/2009-03-09-AddrModeBug.ll
    llvm/trunk/test/CodeGen/ARM/2009-04-08-AggregateAddr.ll
    llvm/trunk/test/CodeGen/ARM/2009-06-22-CoalescerBug.ll
    llvm/trunk/test/CodeGen/ARM/2009-06-30-RegScavengerAssert.ll
    llvm/trunk/test/CodeGen/ARM/2009-06-30-RegScavengerAssert2.ll
    llvm/trunk/test/CodeGen/ARM/2009-06-30-RegScavengerAssert3.ll
    llvm/trunk/test/CodeGen/ARM/2009-06-30-RegScavengerAssert4.ll
    llvm/trunk/test/CodeGen/ARM/2009-07-01-CommuteBug.ll
    llvm/trunk/test/CodeGen/ARM/2009-07-18-RewriterBug.ll
    llvm/trunk/test/CodeGen/ARM/2009-07-22-SchedulerAssert.ll
    llvm/trunk/test/CodeGen/ARM/2009-08-15-RegScavenger-EarlyClobber.ll
    llvm/trunk/test/CodeGen/ARM/2009-08-21-PostRAKill.ll
    llvm/trunk/test/CodeGen/ARM/2009-08-21-PostRAKill2.ll
    llvm/trunk/test/CodeGen/ARM/2009-08-21-PostRAKill3.ll
    llvm/trunk/test/CodeGen/ARM/2009-08-31-LSDA-Name.ll
    llvm/trunk/test/CodeGen/ARM/2009-09-13-InvalidSubreg.ll
    llvm/trunk/test/CodeGen/ARM/2009-09-28-LdStOptiBug.ll
    llvm/trunk/test/CodeGen/ARM/2009-11-01-NeonMoves.ll
    llvm/trunk/test/CodeGen/ARM/2009-11-13-ScavengerAssert.ll
    llvm/trunk/test/CodeGen/ARM/2009-11-13-ScavengerAssert2.ll
    llvm/trunk/test/CodeGen/ARM/2009-11-13-VRRewriterCrash.ll
    llvm/trunk/test/CodeGen/ARM/2009-12-02-vtrn-undef.ll
    llvm/trunk/test/CodeGen/ARM/2010-03-04-eabi-fp-spill.ll
    llvm/trunk/test/CodeGen/ARM/2010-03-04-stm-undef-addr.ll
    llvm/trunk/test/CodeGen/ARM/2010-05-17-FastAllocCrash.ll
    llvm/trunk/test/CodeGen/ARM/2010-05-18-PostIndexBug.ll
    llvm/trunk/test/CodeGen/ARM/2010-05-21-BuildVector.ll
    llvm/trunk/test/CodeGen/ARM/2010-06-21-LdStMultipleBug.ll
    llvm/trunk/test/CodeGen/ARM/2010-06-21-nondarwin-tc.ll
    llvm/trunk/test/CodeGen/ARM/2010-06-25-Thumb2ITInvalidIterator.ll
    llvm/trunk/test/CodeGen/ARM/2010-07-26-GlobalMerge.ll
    llvm/trunk/test/CodeGen/ARM/2010-08-04-StackVariable.ll
    llvm/trunk/test/CodeGen/ARM/2010-11-15-SpillEarlyClobber.ll
    llvm/trunk/test/CodeGen/ARM/2010-12-15-elf-lcomm.ll
    llvm/trunk/test/CodeGen/ARM/2010-12-17-LocalStackSlotCrash.ll
    llvm/trunk/test/CodeGen/ARM/2011-02-04-AntidepMultidef.ll
    llvm/trunk/test/CodeGen/ARM/2011-02-07-AntidepClobber.ll
    llvm/trunk/test/CodeGen/ARM/2011-03-10-DAGCombineCrash.ll
    llvm/trunk/test/CodeGen/ARM/2011-03-15-LdStMultipleBug.ll
    llvm/trunk/test/CodeGen/ARM/2011-04-07-schediv.ll
    llvm/trunk/test/CodeGen/ARM/2011-04-11-MachineLICMBug.ll
    llvm/trunk/test/CodeGen/ARM/2011-08-29-ldr_pre_imm.ll
    llvm/trunk/test/CodeGen/ARM/2011-10-26-ExpandUnalignedLoadCrash.ll
    llvm/trunk/test/CodeGen/ARM/2011-11-14-EarlyClobber.ll
    llvm/trunk/test/CodeGen/ARM/2012-08-27-CopyPhysRegCrash.ll
    llvm/trunk/test/CodeGen/ARM/2012-10-04-AAPCS-byval-align8.ll
    llvm/trunk/test/CodeGen/ARM/2012-10-04-FixedFrame-vs-byval.ll
    llvm/trunk/test/CodeGen/ARM/2013-01-21-PR14992.ll
    llvm/trunk/test/CodeGen/ARM/2013-04-18-load-overlap-PR14824.ll
    llvm/trunk/test/CodeGen/ARM/2013-05-07-ByteLoadSameAddress.ll
    llvm/trunk/test/CodeGen/ARM/2014-07-18-earlyclobber-str-post.ll
    llvm/trunk/test/CodeGen/ARM/2015-01-21-thumbv4t-ldstr-opt.ll
    llvm/trunk/test/CodeGen/ARM/MergeConsecutiveStores.ll
    llvm/trunk/test/CodeGen/ARM/Windows/chkstk-movw-movt-isel.ll
    llvm/trunk/test/CodeGen/ARM/Windows/stack-probe-non-default.ll
    llvm/trunk/test/CodeGen/ARM/Windows/vla.ll
    llvm/trunk/test/CodeGen/ARM/a15-partial-update.ll
    llvm/trunk/test/CodeGen/ARM/arm-and-tst-peephole.ll
    llvm/trunk/test/CodeGen/ARM/arm-negative-stride.ll
    llvm/trunk/test/CodeGen/ARM/avoid-cpsr-rmw.ll
    llvm/trunk/test/CodeGen/ARM/bfx.ll
    llvm/trunk/test/CodeGen/ARM/bx_fold.ll
    llvm/trunk/test/CodeGen/ARM/call.ll
    llvm/trunk/test/CodeGen/ARM/call_nolink.ll
    llvm/trunk/test/CodeGen/ARM/coalesce-subregs.ll
    llvm/trunk/test/CodeGen/ARM/code-placement.ll
    llvm/trunk/test/CodeGen/ARM/commute-movcc.ll
    llvm/trunk/test/CodeGen/ARM/compare-call.ll
    llvm/trunk/test/CodeGen/ARM/crash-greedy-v6.ll
    llvm/trunk/test/CodeGen/ARM/crash.ll
    llvm/trunk/test/CodeGen/ARM/debug-frame-vararg.ll
    llvm/trunk/test/CodeGen/ARM/debug-info-blocks.ll
    llvm/trunk/test/CodeGen/ARM/debug-info-d16-reg.ll
    llvm/trunk/test/CodeGen/ARM/debug-info-s16-reg.ll
    llvm/trunk/test/CodeGen/ARM/divmod.ll
    llvm/trunk/test/CodeGen/ARM/dyn-stackalloc.ll
    llvm/trunk/test/CodeGen/ARM/fast-isel-align.ll
    llvm/trunk/test/CodeGen/ARM/fast-isel-ldr-str-arm.ll
    llvm/trunk/test/CodeGen/ARM/fast-isel-ldr-str-thumb-neg-index.ll
    llvm/trunk/test/CodeGen/ARM/fast-isel-ldrh-strh-arm.ll
    llvm/trunk/test/CodeGen/ARM/fast-isel-pred.ll
    llvm/trunk/test/CodeGen/ARM/fast-isel-redefinition.ll
    llvm/trunk/test/CodeGen/ARM/fastisel-gep-promote-before-add.ll
    llvm/trunk/test/CodeGen/ARM/flag-crash.ll
    llvm/trunk/test/CodeGen/ARM/fpmem.ll
    llvm/trunk/test/CodeGen/ARM/ifconv-kills.ll
    llvm/trunk/test/CodeGen/ARM/ifcvt-branch-weight-bug.ll
    llvm/trunk/test/CodeGen/ARM/ifcvt-branch-weight.ll
    llvm/trunk/test/CodeGen/ARM/ifcvt11.ll
    llvm/trunk/test/CodeGen/ARM/indirect-reg-input.ll
    llvm/trunk/test/CodeGen/ARM/indirectbr-2.ll
    llvm/trunk/test/CodeGen/ARM/indirectbr.ll
    llvm/trunk/test/CodeGen/ARM/inline-diagnostics.ll
    llvm/trunk/test/CodeGen/ARM/inlineasm-64bit.ll
    llvm/trunk/test/CodeGen/ARM/intrinsics-memory-barrier.ll
    llvm/trunk/test/CodeGen/ARM/ldr.ll
    llvm/trunk/test/CodeGen/ARM/ldr_frame.ll
    llvm/trunk/test/CodeGen/ARM/ldr_pre.ll
    llvm/trunk/test/CodeGen/ARM/ldrd.ll
    llvm/trunk/test/CodeGen/ARM/ldst-f32-2-i32.ll
    llvm/trunk/test/CodeGen/ARM/ldstrex.ll
    llvm/trunk/test/CodeGen/ARM/lsr-code-insertion.ll
    llvm/trunk/test/CodeGen/ARM/lsr-icmp-imm.ll
    llvm/trunk/test/CodeGen/ARM/lsr-scale-addr-mode.ll
    llvm/trunk/test/CodeGen/ARM/lsr-unfolded-offset.ll
    llvm/trunk/test/CodeGen/ARM/machine-cse-cmp.ll
    llvm/trunk/test/CodeGen/ARM/machine-licm.ll
    llvm/trunk/test/CodeGen/ARM/memset-inline.ll
    llvm/trunk/test/CodeGen/ARM/misched-copy-arm.ll
    llvm/trunk/test/CodeGen/ARM/negative-offset.ll
    llvm/trunk/test/CodeGen/ARM/no-tail-call.ll
    llvm/trunk/test/CodeGen/ARM/phi.ll
    llvm/trunk/test/CodeGen/ARM/pr13249.ll
    llvm/trunk/test/CodeGen/ARM/prefetch.ll
    llvm/trunk/test/CodeGen/ARM/reg_sequence.ll
    llvm/trunk/test/CodeGen/ARM/saxpy10-a9.ll
    llvm/trunk/test/CodeGen/ARM/shifter_operand.ll
    llvm/trunk/test/CodeGen/ARM/sjlj-prepare-critical-edge.ll
    llvm/trunk/test/CodeGen/ARM/ssp-data-layout.ll
    llvm/trunk/test/CodeGen/ARM/stack-alignment.ll
    llvm/trunk/test/CodeGen/ARM/stack-protector-bmovpcb_call.ll
    llvm/trunk/test/CodeGen/ARM/stack_guard_remat.ll
    llvm/trunk/test/CodeGen/ARM/str_pre.ll
    llvm/trunk/test/CodeGen/ARM/struct-byval-frame-index.ll
    llvm/trunk/test/CodeGen/ARM/struct_byval.ll
    llvm/trunk/test/CodeGen/ARM/swift-vldm.ll
    llvm/trunk/test/CodeGen/ARM/tail-dup.ll
    llvm/trunk/test/CodeGen/ARM/test-sharedidx.ll
    llvm/trunk/test/CodeGen/ARM/this-return.ll
    llvm/trunk/test/CodeGen/ARM/thumb1-varalloc.ll
    llvm/trunk/test/CodeGen/ARM/trunc_ldr.ll
    llvm/trunk/test/CodeGen/ARM/unaligned_load_store_vector.ll
    llvm/trunk/test/CodeGen/ARM/undef-sext.ll
    llvm/trunk/test/CodeGen/ARM/vector-load.ll
    llvm/trunk/test/CodeGen/ARM/vector-spilling.ll
    llvm/trunk/test/CodeGen/ARM/vector-store.ll
    llvm/trunk/test/CodeGen/ARM/vfp.ll
    llvm/trunk/test/CodeGen/ARM/vld1.ll
    llvm/trunk/test/CodeGen/ARM/vld2.ll
    llvm/trunk/test/CodeGen/ARM/vld3.ll
    llvm/trunk/test/CodeGen/ARM/vld4.ll
    llvm/trunk/test/CodeGen/ARM/vlddup.ll
    llvm/trunk/test/CodeGen/ARM/vldlane.ll
    llvm/trunk/test/CodeGen/ARM/vldm-liveness.ll
    llvm/trunk/test/CodeGen/ARM/vldm-sched-a9.ll
    llvm/trunk/test/CodeGen/ARM/vmov.ll
    llvm/trunk/test/CodeGen/ARM/vmul.ll
    llvm/trunk/test/CodeGen/ARM/vrev.ll
    llvm/trunk/test/CodeGen/ARM/vst1.ll
    llvm/trunk/test/CodeGen/ARM/vst2.ll
    llvm/trunk/test/CodeGen/ARM/vst3.ll
    llvm/trunk/test/CodeGen/ARM/vst4.ll
    llvm/trunk/test/CodeGen/ARM/vstlane.ll
    llvm/trunk/test/CodeGen/ARM/warn-stack.ll
    llvm/trunk/test/CodeGen/ARM/wrong-t2stmia-size-opt.ll
    llvm/trunk/test/CodeGen/ARM/zextload_demandedbits.ll
    llvm/trunk/test/CodeGen/BPF/basictest.ll
    llvm/trunk/test/CodeGen/BPF/byval.ll
    llvm/trunk/test/CodeGen/BPF/ex1.ll
    llvm/trunk/test/CodeGen/BPF/load.ll
    llvm/trunk/test/CodeGen/BPF/loops.ll
    llvm/trunk/test/CodeGen/BPF/sanity.ll
    llvm/trunk/test/CodeGen/Generic/2003-05-28-ManyArgs.ll
    llvm/trunk/test/CodeGen/Generic/2003-05-30-BadFoldGEP.ll
    llvm/trunk/test/CodeGen/Generic/2003-07-29-BadConstSbyte.ll
    llvm/trunk/test/CodeGen/Generic/2006-03-01-dagcombineinfloop.ll
    llvm/trunk/test/CodeGen/Generic/2006-05-06-GEP-Cast-Sink-Crash.ll
    llvm/trunk/test/CodeGen/Generic/2006-06-13-ComputeMaskedBitsCrash.ll
    llvm/trunk/test/CodeGen/Generic/2006-09-02-LocalAllocCrash.ll
    llvm/trunk/test/CodeGen/Generic/2008-01-30-LoadCrash.ll
    llvm/trunk/test/CodeGen/Generic/2008-02-20-MatchingMem.ll
    llvm/trunk/test/CodeGen/Generic/2014-02-05-OpaqueConstants.ll
    llvm/trunk/test/CodeGen/Generic/badFoldGEP.ll
    llvm/trunk/test/CodeGen/Generic/cast-fp.ll
    llvm/trunk/test/CodeGen/Generic/constindices.ll
    llvm/trunk/test/CodeGen/Generic/crash.ll
    llvm/trunk/test/CodeGen/Generic/hello.ll
    llvm/trunk/test/CodeGen/Generic/negintconst.ll
    llvm/trunk/test/CodeGen/Generic/print-add.ll
    llvm/trunk/test/CodeGen/Generic/print-arith-fp.ll
    llvm/trunk/test/CodeGen/Generic/print-arith-int.ll
    llvm/trunk/test/CodeGen/Generic/print-int.ll
    llvm/trunk/test/CodeGen/Generic/print-mul-exp.ll
    llvm/trunk/test/CodeGen/Generic/print-mul.ll
    llvm/trunk/test/CodeGen/Generic/print-shift.ll
    llvm/trunk/test/CodeGen/Generic/select.ll
    llvm/trunk/test/CodeGen/Generic/undef-phi.ll
    llvm/trunk/test/CodeGen/Generic/vector.ll
    llvm/trunk/test/CodeGen/Hexagon/always-ext.ll
    llvm/trunk/test/CodeGen/Hexagon/cext-check.ll
    llvm/trunk/test/CodeGen/Hexagon/cext-valid-packet2.ll
    llvm/trunk/test/CodeGen/Hexagon/combine_ir.ll
    llvm/trunk/test/CodeGen/Hexagon/hwloop-cleanup.ll
    llvm/trunk/test/CodeGen/Hexagon/hwloop-const.ll
    llvm/trunk/test/CodeGen/Hexagon/hwloop-dbg.ll
    llvm/trunk/test/CodeGen/Hexagon/hwloop-le.ll
    llvm/trunk/test/CodeGen/Hexagon/hwloop-lt.ll
    llvm/trunk/test/CodeGen/Hexagon/hwloop-lt1.ll
    llvm/trunk/test/CodeGen/Hexagon/hwloop-ne.ll
    llvm/trunk/test/CodeGen/Hexagon/i16_VarArg.ll
    llvm/trunk/test/CodeGen/Hexagon/i1_VarArg.ll
    llvm/trunk/test/CodeGen/Hexagon/i8_VarArg.ll
    llvm/trunk/test/CodeGen/Hexagon/idxload-with-zero-offset.ll
    llvm/trunk/test/CodeGen/Hexagon/memops.ll
    llvm/trunk/test/CodeGen/Hexagon/memops1.ll
    llvm/trunk/test/CodeGen/Hexagon/memops2.ll
    llvm/trunk/test/CodeGen/Hexagon/memops3.ll
    llvm/trunk/test/CodeGen/Hexagon/postinc-load.ll
    llvm/trunk/test/CodeGen/Hexagon/postinc-store.ll
    llvm/trunk/test/CodeGen/Hexagon/remove_lsr.ll
    llvm/trunk/test/CodeGen/Hexagon/union-1.ll
    llvm/trunk/test/CodeGen/MSP430/2009-11-08-InvalidResNo.ll
    llvm/trunk/test/CodeGen/MSP430/2009-12-22-InlineAsm.ll
    llvm/trunk/test/CodeGen/MSP430/AddrMode-bis-rx.ll
    llvm/trunk/test/CodeGen/MSP430/AddrMode-bis-xr.ll
    llvm/trunk/test/CodeGen/MSP430/AddrMode-mov-rx.ll
    llvm/trunk/test/CodeGen/MSP430/AddrMode-mov-xr.ll
    llvm/trunk/test/CodeGen/MSP430/byval.ll
    llvm/trunk/test/CodeGen/MSP430/indirectbr.ll
    llvm/trunk/test/CodeGen/MSP430/indirectbr2.ll
    llvm/trunk/test/CodeGen/MSP430/postinc.ll
    llvm/trunk/test/CodeGen/Mips/2008-07-03-SRet.ll
    llvm/trunk/test/CodeGen/Mips/2008-10-13-LegalizerBug.ll
    llvm/trunk/test/CodeGen/Mips/2008-11-10-xint_to_fp.ll
    llvm/trunk/test/CodeGen/Mips/Fast-ISel/overflt.ll
    llvm/trunk/test/CodeGen/Mips/addressing-mode.ll
    llvm/trunk/test/CodeGen/Mips/align16.ll
    llvm/trunk/test/CodeGen/Mips/alloca.ll
    llvm/trunk/test/CodeGen/Mips/alloca16.ll
    llvm/trunk/test/CodeGen/Mips/brdelayslot.ll
    llvm/trunk/test/CodeGen/Mips/brind.ll
    llvm/trunk/test/CodeGen/Mips/cconv/arguments-float.ll
    llvm/trunk/test/CodeGen/Mips/cconv/arguments-fp128.ll
    llvm/trunk/test/CodeGen/Mips/cconv/arguments-hard-float-varargs.ll
    llvm/trunk/test/CodeGen/Mips/cconv/arguments-hard-float.ll
    llvm/trunk/test/CodeGen/Mips/cconv/arguments-hard-fp128.ll
    llvm/trunk/test/CodeGen/Mips/cconv/arguments-varargs-small-structs-byte.ll
    llvm/trunk/test/CodeGen/Mips/cconv/arguments-varargs-small-structs-combinations.ll
    llvm/trunk/test/CodeGen/Mips/cconv/arguments-varargs-small-structs-multiple-args.ll
    llvm/trunk/test/CodeGen/Mips/cconv/arguments-varargs.ll
    llvm/trunk/test/CodeGen/Mips/cconv/arguments.ll
    llvm/trunk/test/CodeGen/Mips/cmplarge.ll
    llvm/trunk/test/CodeGen/Mips/dsp-patterns.ll
    llvm/trunk/test/CodeGen/Mips/fp-indexed-ls.ll
    llvm/trunk/test/CodeGen/Mips/fp-spill-reload.ll
    llvm/trunk/test/CodeGen/Mips/hfptrcall.ll
    llvm/trunk/test/CodeGen/Mips/largeimm1.ll
    llvm/trunk/test/CodeGen/Mips/largeimmprinting.ll
    llvm/trunk/test/CodeGen/Mips/memcpy.ll
    llvm/trunk/test/CodeGen/Mips/micromips-delay-slot-jr.ll
    llvm/trunk/test/CodeGen/Mips/micromips-sw-lw-16.ll
    llvm/trunk/test/CodeGen/Mips/mips16_fpret.ll
    llvm/trunk/test/CodeGen/Mips/misha.ll
    llvm/trunk/test/CodeGen/Mips/mno-ldc1-sdc1.ll
    llvm/trunk/test/CodeGen/Mips/msa/frameindex.ll
    llvm/trunk/test/CodeGen/Mips/msa/spill.ll
    llvm/trunk/test/CodeGen/Mips/nacl-align.ll
    llvm/trunk/test/CodeGen/Mips/o32_cc_byval.ll
    llvm/trunk/test/CodeGen/Mips/prevent-hoisting.ll
    llvm/trunk/test/CodeGen/Mips/sr1.ll
    llvm/trunk/test/CodeGen/Mips/stackcoloring.ll
    llvm/trunk/test/CodeGen/Mips/swzero.ll
    llvm/trunk/test/CodeGen/NVPTX/access-non-generic.ll
    llvm/trunk/test/CodeGen/NVPTX/bug21465.ll
    llvm/trunk/test/CodeGen/NVPTX/bug22322.ll
    llvm/trunk/test/CodeGen/NVPTX/call-with-alloca-buffer.ll
    llvm/trunk/test/CodeGen/NVPTX/ldu-reg-plus-offset.ll
    llvm/trunk/test/CodeGen/NVPTX/load-sext-i1.ll
    llvm/trunk/test/CodeGen/NVPTX/noduplicate-syncthreads.ll
    llvm/trunk/test/CodeGen/NVPTX/nounroll.ll
    llvm/trunk/test/CodeGen/NVPTX/pr17529.ll
    llvm/trunk/test/CodeGen/NVPTX/sched1.ll
    llvm/trunk/test/CodeGen/NVPTX/sched2.ll
    llvm/trunk/test/CodeGen/PowerPC/2006-05-12-rlwimi-crash.ll
    llvm/trunk/test/CodeGen/PowerPC/2006-07-07-ComputeMaskedBits.ll
    llvm/trunk/test/CodeGen/PowerPC/2006-12-07-LargeAlloca.ll
    llvm/trunk/test/CodeGen/PowerPC/2007-01-31-InlineAsmAddrMode.ll
    llvm/trunk/test/CodeGen/PowerPC/2007-03-30-SpillerCrash.ll
    llvm/trunk/test/CodeGen/PowerPC/2007-05-14-InlineAsmSelectCrash.ll
    llvm/trunk/test/CodeGen/PowerPC/2007-06-28-BCCISelBug.ll
    llvm/trunk/test/CodeGen/PowerPC/2007-08-04-CoalescerAssert.ll
    llvm/trunk/test/CodeGen/PowerPC/2007-09-08-unaligned.ll
    llvm/trunk/test/CodeGen/PowerPC/2007-11-16-landingpad-split.ll
    llvm/trunk/test/CodeGen/PowerPC/2008-03-17-RegScavengerCrash.ll
    llvm/trunk/test/CodeGen/PowerPC/2008-03-24-AddressRegImm.ll
    llvm/trunk/test/CodeGen/PowerPC/2008-04-23-CoalescerCrash.ll
    llvm/trunk/test/CodeGen/PowerPC/2008-07-15-Bswap.ll
    llvm/trunk/test/CodeGen/PowerPC/2008-09-12-CoalescerBug.ll
    llvm/trunk/test/CodeGen/PowerPC/2009-03-17-LSRBug.ll
    llvm/trunk/test/CodeGen/PowerPC/2009-08-17-inline-asm-addr-mode-breakage.ll
    llvm/trunk/test/CodeGen/PowerPC/2009-11-15-ProcImpDefsBug.ll
    llvm/trunk/test/CodeGen/PowerPC/2011-12-05-NoSpillDupCR.ll
    llvm/trunk/test/CodeGen/PowerPC/2011-12-06-SpillAndRestoreCR.ll
    llvm/trunk/test/CodeGen/PowerPC/2013-05-15-preinc-fold.ll
    llvm/trunk/test/CodeGen/PowerPC/2013-07-01-PHIElimBug.ll
    llvm/trunk/test/CodeGen/PowerPC/a2-fp-basic.ll
    llvm/trunk/test/CodeGen/PowerPC/add-fi.ll
    llvm/trunk/test/CodeGen/PowerPC/addi-licm.ll
    llvm/trunk/test/CodeGen/PowerPC/addi-reassoc.ll
    llvm/trunk/test/CodeGen/PowerPC/anon_aggr.ll
    llvm/trunk/test/CodeGen/PowerPC/atomics-indexed.ll
    llvm/trunk/test/CodeGen/PowerPC/bdzlr.ll
    llvm/trunk/test/CodeGen/PowerPC/bswap-load-store.ll
    llvm/trunk/test/CodeGen/PowerPC/byval-aliased.ll
    llvm/trunk/test/CodeGen/PowerPC/code-align.ll
    llvm/trunk/test/CodeGen/PowerPC/complex-return.ll
    llvm/trunk/test/CodeGen/PowerPC/cr-spills.ll
    llvm/trunk/test/CodeGen/PowerPC/ctrloop-cpsgn.ll
    llvm/trunk/test/CodeGen/PowerPC/ctrloop-fp64.ll
    llvm/trunk/test/CodeGen/PowerPC/ctrloop-i64.ll
    llvm/trunk/test/CodeGen/PowerPC/ctrloop-le.ll
    llvm/trunk/test/CodeGen/PowerPC/ctrloop-lt.ll
    llvm/trunk/test/CodeGen/PowerPC/ctrloop-ne.ll
    llvm/trunk/test/CodeGen/PowerPC/ctrloop-s000.ll
    llvm/trunk/test/CodeGen/PowerPC/ctrloop-sums.ll
    llvm/trunk/test/CodeGen/PowerPC/delete-node.ll
    llvm/trunk/test/CodeGen/PowerPC/dyn-alloca-aligned.ll
    llvm/trunk/test/CodeGen/PowerPC/fast-isel-redefinition.ll
    llvm/trunk/test/CodeGen/PowerPC/fastisel-gep-promote-before-add.ll
    llvm/trunk/test/CodeGen/PowerPC/flt-preinc.ll
    llvm/trunk/test/CodeGen/PowerPC/glob-comp-aa-crash.ll
    llvm/trunk/test/CodeGen/PowerPC/ia-mem-r0.ll
    llvm/trunk/test/CodeGen/PowerPC/indexed-load.ll
    llvm/trunk/test/CodeGen/PowerPC/indirectbr.ll
    llvm/trunk/test/CodeGen/PowerPC/lbzux.ll
    llvm/trunk/test/CodeGen/PowerPC/ld-st-upd.ll
    llvm/trunk/test/CodeGen/PowerPC/ldtoc-inv.ll
    llvm/trunk/test/CodeGen/PowerPC/loop-data-prefetch.ll
    llvm/trunk/test/CodeGen/PowerPC/lsa.ll
    llvm/trunk/test/CodeGen/PowerPC/lsr-postinc-pos.ll
    llvm/trunk/test/CodeGen/PowerPC/mcm-8.ll
    llvm/trunk/test/CodeGen/PowerPC/mem-rr-addr-mode.ll
    llvm/trunk/test/CodeGen/PowerPC/mem_update.ll
    llvm/trunk/test/CodeGen/PowerPC/misched-inorder-latency.ll
    llvm/trunk/test/CodeGen/PowerPC/misched.ll
    llvm/trunk/test/CodeGen/PowerPC/post-ra-ec.ll
    llvm/trunk/test/CodeGen/PowerPC/ppc440-fp-basic.ll
    llvm/trunk/test/CodeGen/PowerPC/ppc64-align-long-double.ll
    llvm/trunk/test/CodeGen/PowerPC/ppc64-byval-align.ll
    llvm/trunk/test/CodeGen/PowerPC/ppc64-gep-opt.ll
    llvm/trunk/test/CodeGen/PowerPC/ppc64-toc.ll
    llvm/trunk/test/CodeGen/PowerPC/pr15031.ll
    llvm/trunk/test/CodeGen/PowerPC/pr16556-2.ll
    llvm/trunk/test/CodeGen/PowerPC/pr17354.ll
    llvm/trunk/test/CodeGen/PowerPC/pr20442.ll
    llvm/trunk/test/CodeGen/PowerPC/preincprep-invoke.ll
    llvm/trunk/test/CodeGen/PowerPC/qpx-unalperm.ll
    llvm/trunk/test/CodeGen/PowerPC/reg-coalesce-simple.ll
    llvm/trunk/test/CodeGen/PowerPC/resolvefi-basereg.ll
    llvm/trunk/test/CodeGen/PowerPC/resolvefi-disp.ll
    llvm/trunk/test/CodeGen/PowerPC/s000-alias-misched.ll
    llvm/trunk/test/CodeGen/PowerPC/split-index-tc.ll
    llvm/trunk/test/CodeGen/PowerPC/stack-realign.ll
    llvm/trunk/test/CodeGen/PowerPC/stdux-constuse.ll
    llvm/trunk/test/CodeGen/PowerPC/stfiwx.ll
    llvm/trunk/test/CodeGen/PowerPC/store-update.ll
    llvm/trunk/test/CodeGen/PowerPC/structsinmem.ll
    llvm/trunk/test/CodeGen/PowerPC/structsinregs.ll
    llvm/trunk/test/CodeGen/PowerPC/stwu8.ll
    llvm/trunk/test/CodeGen/PowerPC/stwux.ll
    llvm/trunk/test/CodeGen/PowerPC/subreg-postra-2.ll
    llvm/trunk/test/CodeGen/PowerPC/subreg-postra.ll
    llvm/trunk/test/CodeGen/PowerPC/tls-cse.ll
    llvm/trunk/test/CodeGen/PowerPC/toc-load-sched-bug.ll
    llvm/trunk/test/CodeGen/PowerPC/trampoline.ll
    llvm/trunk/test/CodeGen/PowerPC/unal-altivec-wint.ll
    llvm/trunk/test/CodeGen/PowerPC/unal-altivec.ll
    llvm/trunk/test/CodeGen/PowerPC/unal-altivec2.ll
    llvm/trunk/test/CodeGen/PowerPC/varargs-struct-float.ll
    llvm/trunk/test/CodeGen/PowerPC/vec-abi-align.ll
    llvm/trunk/test/CodeGen/PowerPC/vec_misaligned.ll
    llvm/trunk/test/CodeGen/PowerPC/vsx-fma-m.ll
    llvm/trunk/test/CodeGen/PowerPC/vsx-infl-copy1.ll
    llvm/trunk/test/CodeGen/PowerPC/vsx-infl-copy2.ll
    llvm/trunk/test/CodeGen/PowerPC/vsx-ldst-builtin-le.ll
    llvm/trunk/test/CodeGen/PowerPC/zext-free.ll
    llvm/trunk/test/CodeGen/R600/32-bit-local-address-space.ll
    llvm/trunk/test/CodeGen/R600/add.ll
    llvm/trunk/test/CodeGen/R600/add_i64.ll
    llvm/trunk/test/CodeGen/R600/address-space.ll
    llvm/trunk/test/CodeGen/R600/and.ll
    llvm/trunk/test/CodeGen/R600/array-ptr-calc-i32.ll
    llvm/trunk/test/CodeGen/R600/array-ptr-calc-i64.ll
    llvm/trunk/test/CodeGen/R600/atomic_cmp_swap_local.ll
    llvm/trunk/test/CodeGen/R600/atomic_load_add.ll
    llvm/trunk/test/CodeGen/R600/atomic_load_sub.ll
    llvm/trunk/test/CodeGen/R600/call.ll
    llvm/trunk/test/CodeGen/R600/codegen-prepare-addrmode-sext.ll
    llvm/trunk/test/CodeGen/R600/combine_vloads.ll
    llvm/trunk/test/CodeGen/R600/commute_modifiers.ll
    llvm/trunk/test/CodeGen/R600/copy-to-reg.ll
    llvm/trunk/test/CodeGen/R600/ctpop.ll
    llvm/trunk/test/CodeGen/R600/ctpop64.ll
    llvm/trunk/test/CodeGen/R600/dagcombiner-bug-illegal-vec4-int-to-fp.ll
    llvm/trunk/test/CodeGen/R600/disconnected-predset-break-bug.ll
    llvm/trunk/test/CodeGen/R600/ds-negative-offset-addressing-mode-loop.ll
    llvm/trunk/test/CodeGen/R600/ds_read2.ll
    llvm/trunk/test/CodeGen/R600/ds_read2_offset_order.ll
    llvm/trunk/test/CodeGen/R600/ds_read2st64.ll
    llvm/trunk/test/CodeGen/R600/ds_write2.ll
    llvm/trunk/test/CodeGen/R600/ds_write2st64.ll
    llvm/trunk/test/CodeGen/R600/endcf-loop-header.ll
    llvm/trunk/test/CodeGen/R600/extract_vector_elt_i16.ll
    llvm/trunk/test/CodeGen/R600/fabs.f64.ll
    llvm/trunk/test/CodeGen/R600/fadd.ll
    llvm/trunk/test/CodeGen/R600/fcmp.ll
    llvm/trunk/test/CodeGen/R600/fdiv.f64.ll
    llvm/trunk/test/CodeGen/R600/fdiv.ll
    llvm/trunk/test/CodeGen/R600/flat-address-space.ll
    llvm/trunk/test/CodeGen/R600/fma-combine.ll
    llvm/trunk/test/CodeGen/R600/fma.ll
    llvm/trunk/test/CodeGen/R600/fmax_legacy.f64.ll
    llvm/trunk/test/CodeGen/R600/fmax_legacy.ll
    llvm/trunk/test/CodeGen/R600/fmin_legacy.f64.ll
    llvm/trunk/test/CodeGen/R600/fmin_legacy.ll
    llvm/trunk/test/CodeGen/R600/fmul.ll
    llvm/trunk/test/CodeGen/R600/fmuladd.ll
    llvm/trunk/test/CodeGen/R600/fp_to_sint.f64.ll
    llvm/trunk/test/CodeGen/R600/fp_to_uint.f64.ll
    llvm/trunk/test/CodeGen/R600/frem.ll
    llvm/trunk/test/CodeGen/R600/fsub.ll
    llvm/trunk/test/CodeGen/R600/fsub64.ll
    llvm/trunk/test/CodeGen/R600/gep-address-space.ll
    llvm/trunk/test/CodeGen/R600/global-directive.ll
    llvm/trunk/test/CodeGen/R600/global-zero-initializer.ll
    llvm/trunk/test/CodeGen/R600/global_atomics.ll
    llvm/trunk/test/CodeGen/R600/gv-const-addrspace-fail.ll
    llvm/trunk/test/CodeGen/R600/gv-const-addrspace.ll
    llvm/trunk/test/CodeGen/R600/icmp-select-sete-reverse-args.ll
    llvm/trunk/test/CodeGen/R600/indirect-private-64.ll
    llvm/trunk/test/CodeGen/R600/insert_vector_elt.ll
    llvm/trunk/test/CodeGen/R600/large-alloca.ll
    llvm/trunk/test/CodeGen/R600/lds-initializer.ll
    llvm/trunk/test/CodeGen/R600/lds-output-queue.ll
    llvm/trunk/test/CodeGen/R600/lds-zero-initializer.ll
    llvm/trunk/test/CodeGen/R600/legalizedag-bug-expand-setcc.ll
    llvm/trunk/test/CodeGen/R600/llvm.AMDGPU.barrier.global.ll
    llvm/trunk/test/CodeGen/R600/llvm.AMDGPU.barrier.local.ll
    llvm/trunk/test/CodeGen/R600/llvm.AMDGPU.class.ll
    llvm/trunk/test/CodeGen/R600/llvm.AMDGPU.div_fmas.ll
    llvm/trunk/test/CodeGen/R600/llvm.AMDGPU.div_scale.ll
    llvm/trunk/test/CodeGen/R600/llvm.AMDGPU.umad24.ll
    llvm/trunk/test/CodeGen/R600/llvm.SI.imageload.ll
    llvm/trunk/test/CodeGen/R600/llvm.SI.load.dword.ll
    llvm/trunk/test/CodeGen/R600/llvm.round.f64.ll
    llvm/trunk/test/CodeGen/R600/load.ll
    llvm/trunk/test/CodeGen/R600/local-64.ll
    llvm/trunk/test/CodeGen/R600/local-atomics.ll
    llvm/trunk/test/CodeGen/R600/local-atomics64.ll
    llvm/trunk/test/CodeGen/R600/local-memory-two-objects.ll
    llvm/trunk/test/CodeGen/R600/local-memory.ll
    llvm/trunk/test/CodeGen/R600/loop-address.ll
    llvm/trunk/test/CodeGen/R600/loop-idiom.ll
    llvm/trunk/test/CodeGen/R600/m0-spill.ll
    llvm/trunk/test/CodeGen/R600/mad-combine.ll
    llvm/trunk/test/CodeGen/R600/mad-sub.ll
    llvm/trunk/test/CodeGen/R600/madak.ll
    llvm/trunk/test/CodeGen/R600/madmk.ll
    llvm/trunk/test/CodeGen/R600/max.ll
    llvm/trunk/test/CodeGen/R600/max3.ll
    llvm/trunk/test/CodeGen/R600/min.ll
    llvm/trunk/test/CodeGen/R600/min3.ll
    llvm/trunk/test/CodeGen/R600/missing-store.ll
    llvm/trunk/test/CodeGen/R600/mubuf.ll
    llvm/trunk/test/CodeGen/R600/mul.ll
    llvm/trunk/test/CodeGen/R600/no-shrink-extloads.ll
    llvm/trunk/test/CodeGen/R600/operand-folding.ll
    llvm/trunk/test/CodeGen/R600/or.ll
    llvm/trunk/test/CodeGen/R600/private-memory-atomics.ll
    llvm/trunk/test/CodeGen/R600/private-memory-broken.ll
    llvm/trunk/test/CodeGen/R600/private-memory.ll
    llvm/trunk/test/CodeGen/R600/register-count-comments.ll
    llvm/trunk/test/CodeGen/R600/rsq.ll
    llvm/trunk/test/CodeGen/R600/salu-to-valu.ll
    llvm/trunk/test/CodeGen/R600/schedule-global-loads.ll
    llvm/trunk/test/CodeGen/R600/scratch-buffer.ll
    llvm/trunk/test/CodeGen/R600/sdiv.ll
    llvm/trunk/test/CodeGen/R600/sdivrem24.ll
    llvm/trunk/test/CodeGen/R600/selectcc-opt.ll
    llvm/trunk/test/CodeGen/R600/setcc.ll
    llvm/trunk/test/CodeGen/R600/sext-in-reg.ll
    llvm/trunk/test/CodeGen/R600/sgpr-control-flow.ll
    llvm/trunk/test/CodeGen/R600/sgpr-copy.ll
    llvm/trunk/test/CodeGen/R600/shl.ll
    llvm/trunk/test/CodeGen/R600/shl_add_constant.ll
    llvm/trunk/test/CodeGen/R600/shl_add_ptr.ll
    llvm/trunk/test/CodeGen/R600/si-lod-bias.ll
    llvm/trunk/test/CodeGen/R600/si-sgpr-spill.ll
    llvm/trunk/test/CodeGen/R600/si-triv-disjoint-mem-access.ll
    llvm/trunk/test/CodeGen/R600/si-vector-hang.ll
    llvm/trunk/test/CodeGen/R600/simplify-demanded-bits-build-pair.ll
    llvm/trunk/test/CodeGen/R600/sint_to_fp.f64.ll
    llvm/trunk/test/CodeGen/R600/smrd.ll
    llvm/trunk/test/CodeGen/R600/split-scalar-i64-add.ll
    llvm/trunk/test/CodeGen/R600/sra.ll
    llvm/trunk/test/CodeGen/R600/srem.ll
    llvm/trunk/test/CodeGen/R600/srl.ll
    llvm/trunk/test/CodeGen/R600/store-barrier.ll
    llvm/trunk/test/CodeGen/R600/store-vector-ptrs.ll
    llvm/trunk/test/CodeGen/R600/store.ll
    llvm/trunk/test/CodeGen/R600/sub.ll
    llvm/trunk/test/CodeGen/R600/trunc.ll
    llvm/trunk/test/CodeGen/R600/tti-unroll-prefs.ll
    llvm/trunk/test/CodeGen/R600/udiv.ll
    llvm/trunk/test/CodeGen/R600/udivrem24.ll
    llvm/trunk/test/CodeGen/R600/uint_to_fp.f64.ll
    llvm/trunk/test/CodeGen/R600/unaligned-load-store.ll
    llvm/trunk/test/CodeGen/R600/unhandled-loop-condition-assertion.ll
    llvm/trunk/test/CodeGen/R600/unroll.ll
    llvm/trunk/test/CodeGen/R600/urem.ll
    llvm/trunk/test/CodeGen/R600/v_cndmask.ll
    llvm/trunk/test/CodeGen/R600/valu-i1.ll
    llvm/trunk/test/CodeGen/R600/vector-alloca.ll
    llvm/trunk/test/CodeGen/R600/vop-shrink.ll
    llvm/trunk/test/CodeGen/R600/wait.ll
    llvm/trunk/test/CodeGen/R600/wrong-transalu-pos-fix.ll
    llvm/trunk/test/CodeGen/SPARC/2011-01-22-SRet.ll
    llvm/trunk/test/CodeGen/SPARC/64abi.ll
    llvm/trunk/test/CodeGen/SPARC/64bit.ll
    llvm/trunk/test/CodeGen/SPARC/basictest.ll
    llvm/trunk/test/CodeGen/SPARC/leafproc.ll
    llvm/trunk/test/CodeGen/SPARC/setjmp.ll
    llvm/trunk/test/CodeGen/SPARC/spillsize.ll
    llvm/trunk/test/CodeGen/SPARC/varargs.ll
    llvm/trunk/test/CodeGen/SystemZ/alloca-01.ll
    llvm/trunk/test/CodeGen/SystemZ/alloca-02.ll
    llvm/trunk/test/CodeGen/SystemZ/and-01.ll
    llvm/trunk/test/CodeGen/SystemZ/and-03.ll
    llvm/trunk/test/CodeGen/SystemZ/and-05.ll
    llvm/trunk/test/CodeGen/SystemZ/and-08.ll
    llvm/trunk/test/CodeGen/SystemZ/asm-18.ll
    llvm/trunk/test/CodeGen/SystemZ/atomicrmw-add-05.ll
    llvm/trunk/test/CodeGen/SystemZ/atomicrmw-add-06.ll
    llvm/trunk/test/CodeGen/SystemZ/atomicrmw-and-05.ll
    llvm/trunk/test/CodeGen/SystemZ/atomicrmw-and-06.ll
    llvm/trunk/test/CodeGen/SystemZ/atomicrmw-minmax-03.ll
    llvm/trunk/test/CodeGen/SystemZ/atomicrmw-minmax-04.ll
    llvm/trunk/test/CodeGen/SystemZ/atomicrmw-or-05.ll
    llvm/trunk/test/CodeGen/SystemZ/atomicrmw-or-06.ll
    llvm/trunk/test/CodeGen/SystemZ/atomicrmw-sub-05.ll
    llvm/trunk/test/CodeGen/SystemZ/atomicrmw-sub-06.ll
    llvm/trunk/test/CodeGen/SystemZ/atomicrmw-xchg-03.ll
    llvm/trunk/test/CodeGen/SystemZ/atomicrmw-xchg-04.ll
    llvm/trunk/test/CodeGen/SystemZ/atomicrmw-xor-05.ll
    llvm/trunk/test/CodeGen/SystemZ/atomicrmw-xor-06.ll
    llvm/trunk/test/CodeGen/SystemZ/branch-06.ll
    llvm/trunk/test/CodeGen/SystemZ/bswap-02.ll
    llvm/trunk/test/CodeGen/SystemZ/bswap-03.ll
    llvm/trunk/test/CodeGen/SystemZ/bswap-04.ll
    llvm/trunk/test/CodeGen/SystemZ/bswap-05.ll
    llvm/trunk/test/CodeGen/SystemZ/cmpxchg-03.ll
    llvm/trunk/test/CodeGen/SystemZ/cmpxchg-04.ll
    llvm/trunk/test/CodeGen/SystemZ/cond-load-01.ll
    llvm/trunk/test/CodeGen/SystemZ/cond-load-02.ll
    llvm/trunk/test/CodeGen/SystemZ/cond-store-01.ll
    llvm/trunk/test/CodeGen/SystemZ/cond-store-02.ll
    llvm/trunk/test/CodeGen/SystemZ/cond-store-03.ll
    llvm/trunk/test/CodeGen/SystemZ/cond-store-04.ll
    llvm/trunk/test/CodeGen/SystemZ/cond-store-05.ll
    llvm/trunk/test/CodeGen/SystemZ/cond-store-06.ll
    llvm/trunk/test/CodeGen/SystemZ/cond-store-07.ll
    llvm/trunk/test/CodeGen/SystemZ/cond-store-08.ll
    llvm/trunk/test/CodeGen/SystemZ/fp-add-01.ll
    llvm/trunk/test/CodeGen/SystemZ/fp-add-02.ll
    llvm/trunk/test/CodeGen/SystemZ/fp-cmp-01.ll
    llvm/trunk/test/CodeGen/SystemZ/fp-cmp-02.ll
    llvm/trunk/test/CodeGen/SystemZ/fp-conv-02.ll
    llvm/trunk/test/CodeGen/SystemZ/fp-conv-03.ll
    llvm/trunk/test/CodeGen/SystemZ/fp-conv-04.ll
    llvm/trunk/test/CodeGen/SystemZ/fp-div-01.ll
    llvm/trunk/test/CodeGen/SystemZ/fp-div-02.ll
    llvm/trunk/test/CodeGen/SystemZ/fp-move-03.ll
    llvm/trunk/test/CodeGen/SystemZ/fp-move-04.ll
    llvm/trunk/test/CodeGen/SystemZ/fp-move-06.ll
    llvm/trunk/test/CodeGen/SystemZ/fp-move-07.ll
    llvm/trunk/test/CodeGen/SystemZ/fp-mul-01.ll
    llvm/trunk/test/CodeGen/SystemZ/fp-mul-02.ll
    llvm/trunk/test/CodeGen/SystemZ/fp-mul-03.ll
    llvm/trunk/test/CodeGen/SystemZ/fp-mul-04.ll
    llvm/trunk/test/CodeGen/SystemZ/fp-mul-06.ll
    llvm/trunk/test/CodeGen/SystemZ/fp-mul-07.ll
    llvm/trunk/test/CodeGen/SystemZ/fp-mul-08.ll
    llvm/trunk/test/CodeGen/SystemZ/fp-mul-09.ll
    llvm/trunk/test/CodeGen/SystemZ/fp-sqrt-01.ll
    llvm/trunk/test/CodeGen/SystemZ/fp-sqrt-02.ll
    llvm/trunk/test/CodeGen/SystemZ/fp-sub-01.ll
    llvm/trunk/test/CodeGen/SystemZ/fp-sub-02.ll
    llvm/trunk/test/CodeGen/SystemZ/frame-01.ll
    llvm/trunk/test/CodeGen/SystemZ/frame-05.ll
    llvm/trunk/test/CodeGen/SystemZ/frame-06.ll
    llvm/trunk/test/CodeGen/SystemZ/frame-07.ll
    llvm/trunk/test/CodeGen/SystemZ/frame-08.ll
    llvm/trunk/test/CodeGen/SystemZ/frame-09.ll
    llvm/trunk/test/CodeGen/SystemZ/frame-13.ll
    llvm/trunk/test/CodeGen/SystemZ/frame-14.ll
    llvm/trunk/test/CodeGen/SystemZ/frame-15.ll
    llvm/trunk/test/CodeGen/SystemZ/frame-16.ll
    llvm/trunk/test/CodeGen/SystemZ/insert-01.ll
    llvm/trunk/test/CodeGen/SystemZ/insert-02.ll
    llvm/trunk/test/CodeGen/SystemZ/int-add-01.ll
    llvm/trunk/test/CodeGen/SystemZ/int-add-02.ll
    llvm/trunk/test/CodeGen/SystemZ/int-add-03.ll
    llvm/trunk/test/CodeGen/SystemZ/int-add-04.ll
    llvm/trunk/test/CodeGen/SystemZ/int-add-05.ll
    llvm/trunk/test/CodeGen/SystemZ/int-add-08.ll
    llvm/trunk/test/CodeGen/SystemZ/int-add-10.ll
    llvm/trunk/test/CodeGen/SystemZ/int-add-11.ll
    llvm/trunk/test/CodeGen/SystemZ/int-add-12.ll
    llvm/trunk/test/CodeGen/SystemZ/int-cmp-01.ll
    llvm/trunk/test/CodeGen/SystemZ/int-cmp-02.ll
    llvm/trunk/test/CodeGen/SystemZ/int-cmp-03.ll
    llvm/trunk/test/CodeGen/SystemZ/int-cmp-04.ll
    llvm/trunk/test/CodeGen/SystemZ/int-cmp-05.ll
    llvm/trunk/test/CodeGen/SystemZ/int-cmp-06.ll
    llvm/trunk/test/CodeGen/SystemZ/int-cmp-07.ll
    llvm/trunk/test/CodeGen/SystemZ/int-cmp-08.ll
    llvm/trunk/test/CodeGen/SystemZ/int-cmp-15.ll
    llvm/trunk/test/CodeGen/SystemZ/int-cmp-22.ll
    llvm/trunk/test/CodeGen/SystemZ/int-cmp-23.ll
    llvm/trunk/test/CodeGen/SystemZ/int-cmp-32.ll
    llvm/trunk/test/CodeGen/SystemZ/int-cmp-33.ll
    llvm/trunk/test/CodeGen/SystemZ/int-cmp-34.ll
    llvm/trunk/test/CodeGen/SystemZ/int-cmp-35.ll
    llvm/trunk/test/CodeGen/SystemZ/int-cmp-48.ll
    llvm/trunk/test/CodeGen/SystemZ/int-const-03.ll
    llvm/trunk/test/CodeGen/SystemZ/int-const-04.ll
    llvm/trunk/test/CodeGen/SystemZ/int-const-05.ll
    llvm/trunk/test/CodeGen/SystemZ/int-const-06.ll
    llvm/trunk/test/CodeGen/SystemZ/int-conv-01.ll
    llvm/trunk/test/CodeGen/SystemZ/int-conv-02.ll
    llvm/trunk/test/CodeGen/SystemZ/int-conv-03.ll
    llvm/trunk/test/CodeGen/SystemZ/int-conv-04.ll
    llvm/trunk/test/CodeGen/SystemZ/int-conv-05.ll
    llvm/trunk/test/CodeGen/SystemZ/int-conv-06.ll
    llvm/trunk/test/CodeGen/SystemZ/int-conv-07.ll
    llvm/trunk/test/CodeGen/SystemZ/int-conv-08.ll
    llvm/trunk/test/CodeGen/SystemZ/int-conv-09.ll
    llvm/trunk/test/CodeGen/SystemZ/int-conv-10.ll
    llvm/trunk/test/CodeGen/SystemZ/int-div-01.ll
    llvm/trunk/test/CodeGen/SystemZ/int-div-02.ll
    llvm/trunk/test/CodeGen/SystemZ/int-div-03.ll
    llvm/trunk/test/CodeGen/SystemZ/int-div-04.ll
    llvm/trunk/test/CodeGen/SystemZ/int-div-05.ll
    llvm/trunk/test/CodeGen/SystemZ/int-move-02.ll
    llvm/trunk/test/CodeGen/SystemZ/int-move-03.ll
    llvm/trunk/test/CodeGen/SystemZ/int-move-04.ll
    llvm/trunk/test/CodeGen/SystemZ/int-move-05.ll
    llvm/trunk/test/CodeGen/SystemZ/int-move-06.ll
    llvm/trunk/test/CodeGen/SystemZ/int-move-07.ll
    llvm/trunk/test/CodeGen/SystemZ/int-move-08.ll
    llvm/trunk/test/CodeGen/SystemZ/int-mul-01.ll
    llvm/trunk/test/CodeGen/SystemZ/int-mul-02.ll
    llvm/trunk/test/CodeGen/SystemZ/int-mul-03.ll
    llvm/trunk/test/CodeGen/SystemZ/int-mul-04.ll
    llvm/trunk/test/CodeGen/SystemZ/int-mul-08.ll
    llvm/trunk/test/CodeGen/SystemZ/int-sub-01.ll
    llvm/trunk/test/CodeGen/SystemZ/int-sub-02.ll
    llvm/trunk/test/CodeGen/SystemZ/int-sub-03.ll
    llvm/trunk/test/CodeGen/SystemZ/int-sub-04.ll
    llvm/trunk/test/CodeGen/SystemZ/int-sub-05.ll
    llvm/trunk/test/CodeGen/SystemZ/int-sub-06.ll
    llvm/trunk/test/CodeGen/SystemZ/int-sub-07.ll
    llvm/trunk/test/CodeGen/SystemZ/loop-01.ll
    llvm/trunk/test/CodeGen/SystemZ/memcpy-01.ll
    llvm/trunk/test/CodeGen/SystemZ/memcpy-02.ll
    llvm/trunk/test/CodeGen/SystemZ/or-01.ll
    llvm/trunk/test/CodeGen/SystemZ/or-03.ll
    llvm/trunk/test/CodeGen/SystemZ/or-05.ll
    llvm/trunk/test/CodeGen/SystemZ/or-08.ll
    llvm/trunk/test/CodeGen/SystemZ/prefetch-01.ll
    llvm/trunk/test/CodeGen/SystemZ/spill-01.ll
    llvm/trunk/test/CodeGen/SystemZ/unaligned-01.ll
    llvm/trunk/test/CodeGen/SystemZ/xor-01.ll
    llvm/trunk/test/CodeGen/SystemZ/xor-03.ll
    llvm/trunk/test/CodeGen/SystemZ/xor-05.ll
    llvm/trunk/test/CodeGen/SystemZ/xor-08.ll
    llvm/trunk/test/CodeGen/Thumb/2009-08-12-ConstIslandAssert.ll
    llvm/trunk/test/CodeGen/Thumb/2009-08-12-RegInfoAssert.ll
    llvm/trunk/test/CodeGen/Thumb/2009-08-20-ISelBug.ll
    llvm/trunk/test/CodeGen/Thumb/2009-12-17-pre-regalloc-taildup.ll
    llvm/trunk/test/CodeGen/Thumb/2010-07-15-debugOrdering.ll
    llvm/trunk/test/CodeGen/Thumb/2011-05-11-DAGLegalizer.ll
    llvm/trunk/test/CodeGen/Thumb/2014-06-10-thumb1-ldst-opt-bug.ll
    llvm/trunk/test/CodeGen/Thumb/PR17309.ll
    llvm/trunk/test/CodeGen/Thumb/asmprinter-bug.ll
    llvm/trunk/test/CodeGen/Thumb/dyn-stackalloc.ll
    llvm/trunk/test/CodeGen/Thumb/ldm-merge-call.ll
    llvm/trunk/test/CodeGen/Thumb/ldm-stm-base-materialization.ll
    llvm/trunk/test/CodeGen/Thumb/ldr_frame.ll
    llvm/trunk/test/CodeGen/Thumb/stack_guard_remat.ll
    llvm/trunk/test/CodeGen/Thumb/vargs.ll
    llvm/trunk/test/CodeGen/Thumb2/2009-07-17-CrossRegClassCopy.ll
    llvm/trunk/test/CodeGen/Thumb2/2009-07-21-ISelBug.ll
    llvm/trunk/test/CodeGen/Thumb2/2009-07-30-PEICrash.ll
    llvm/trunk/test/CodeGen/Thumb2/2009-08-01-WrongLDRBOpc.ll
    llvm/trunk/test/CodeGen/Thumb2/2009-08-02-CoalescerBug.ll
    llvm/trunk/test/CodeGen/Thumb2/2009-08-04-CoalescerBug.ll
    llvm/trunk/test/CodeGen/Thumb2/2009-08-04-ScavengerAssert.ll
    llvm/trunk/test/CodeGen/Thumb2/2009-08-07-NeonFPBug.ll
    llvm/trunk/test/CodeGen/Thumb2/2009-08-10-ISelBug.ll
    llvm/trunk/test/CodeGen/Thumb2/2009-09-01-PostRAProlog.ll
    llvm/trunk/test/CodeGen/Thumb2/2009-09-28-ITBlockBug.ll
    llvm/trunk/test/CodeGen/Thumb2/2009-11-11-ScavengerAssert.ll
    llvm/trunk/test/CodeGen/Thumb2/2009-12-01-LoopIVUsers.ll
    llvm/trunk/test/CodeGen/Thumb2/2010-01-06-TailDuplicateLabels.ll
    llvm/trunk/test/CodeGen/Thumb2/2010-01-19-RemovePredicates.ll
    llvm/trunk/test/CodeGen/Thumb2/2010-03-08-addi12-ccout.ll
    llvm/trunk/test/CodeGen/Thumb2/2010-03-15-AsmCCClobber.ll
    llvm/trunk/test/CodeGen/Thumb2/2010-08-10-VarSizedAllocaBug.ll
    llvm/trunk/test/CodeGen/Thumb2/2011-06-07-TwoAddrEarlyClobber.ll
    llvm/trunk/test/CodeGen/Thumb2/2011-12-16-T2SizeReduceAssert.ll
    llvm/trunk/test/CodeGen/Thumb2/2012-01-13-CBNZBug.ll
    llvm/trunk/test/CodeGen/Thumb2/2013-02-19-tail-call-register-hint.ll
    llvm/trunk/test/CodeGen/Thumb2/constant-islands.ll
    llvm/trunk/test/CodeGen/Thumb2/crash.ll
    llvm/trunk/test/CodeGen/Thumb2/cross-rc-coalescing-2.ll
    llvm/trunk/test/CodeGen/Thumb2/frameless2.ll
    llvm/trunk/test/CodeGen/Thumb2/lsr-deficiency.ll
    llvm/trunk/test/CodeGen/Thumb2/machine-licm.ll
    llvm/trunk/test/CodeGen/Thumb2/pic-load.ll
    llvm/trunk/test/CodeGen/Thumb2/stack_guard_remat.ll
    llvm/trunk/test/CodeGen/Thumb2/thumb2-cbnz.ll
    llvm/trunk/test/CodeGen/Thumb2/thumb2-ldr.ll
    llvm/trunk/test/CodeGen/Thumb2/thumb2-ldr_pre.ll
    llvm/trunk/test/CodeGen/Thumb2/thumb2-ldrb.ll
    llvm/trunk/test/CodeGen/Thumb2/thumb2-ldrh.ll
    llvm/trunk/test/CodeGen/Thumb2/thumb2-str.ll
    llvm/trunk/test/CodeGen/Thumb2/thumb2-str_pre.ll
    llvm/trunk/test/CodeGen/Thumb2/thumb2-strb.ll
    llvm/trunk/test/CodeGen/Thumb2/thumb2-strh.ll
    llvm/trunk/test/CodeGen/X86/2006-04-27-ISelFoldingBug.ll
    llvm/trunk/test/CodeGen/X86/2006-05-02-InstrSched1.ll
    llvm/trunk/test/CodeGen/X86/2006-05-08-InstrSched.ll
    llvm/trunk/test/CodeGen/X86/2006-05-11-InstrSched.ll
    llvm/trunk/test/CodeGen/X86/2006-08-16-CycleInDAG.ll
    llvm/trunk/test/CodeGen/X86/2006-09-01-CycleInDAG.ll
    llvm/trunk/test/CodeGen/X86/2006-10-10-FindModifiedNodeSlotBug.ll
    llvm/trunk/test/CodeGen/X86/2006-10-12-CycleInDAG.ll
    llvm/trunk/test/CodeGen/X86/2006-11-12-CSRetCC.ll
    llvm/trunk/test/CodeGen/X86/2006-12-16-InlineAsmCrash.ll
    llvm/trunk/test/CodeGen/X86/2007-01-08-X86-64-Pointer.ll
    llvm/trunk/test/CodeGen/X86/2007-01-13-StackPtrIndex.ll
    llvm/trunk/test/CodeGen/X86/2007-02-04-OrAddrMode.ll
    llvm/trunk/test/CodeGen/X86/2007-02-16-BranchFold.ll
    llvm/trunk/test/CodeGen/X86/2007-03-15-GEP-Idx-Sink.ll
    llvm/trunk/test/CodeGen/X86/2007-04-17-LiveIntervalAssert.ll
    llvm/trunk/test/CodeGen/X86/2007-06-04-X86-64-CtorAsmBugs.ll
    llvm/trunk/test/CodeGen/X86/2007-06-29-VecFPConstantCSEBug.ll
    llvm/trunk/test/CodeGen/X86/2007-07-18-Vector-Extract.ll
    llvm/trunk/test/CodeGen/X86/2007-08-09-IllegalX86-64Asm.ll
    llvm/trunk/test/CodeGen/X86/2007-09-05-InvalidAsm.ll
    llvm/trunk/test/CodeGen/X86/2007-10-12-SpillerUnfold1.ll
    llvm/trunk/test/CodeGen/X86/2007-10-12-SpillerUnfold2.ll
    llvm/trunk/test/CodeGen/X86/2007-10-30-LSRCrash.ll
    llvm/trunk/test/CodeGen/X86/2007-11-06-InstrSched.ll
    llvm/trunk/test/CodeGen/X86/2007-12-16-BURRSchedCrash.ll
    llvm/trunk/test/CodeGen/X86/2007-12-18-LoadCSEBug.ll
    llvm/trunk/test/CodeGen/X86/2008-01-08-SchedulerCrash.ll
    llvm/trunk/test/CodeGen/X86/2008-01-16-InvalidDAGCombineXform.ll
    llvm/trunk/test/CodeGen/X86/2008-02-06-LoadFoldingBug.ll
    llvm/trunk/test/CodeGen/X86/2008-02-18-TailMergingBug.ll
    llvm/trunk/test/CodeGen/X86/2008-02-20-InlineAsmClobber.ll
    llvm/trunk/test/CodeGen/X86/2008-02-22-LocalRegAllocBug.ll
    llvm/trunk/test/CodeGen/X86/2008-02-25-InlineAsmBug.ll
    llvm/trunk/test/CodeGen/X86/2008-02-25-X86-64-CoalescerBug.ll
    llvm/trunk/test/CodeGen/X86/2008-02-27-DeadSlotElimBug.ll
    llvm/trunk/test/CodeGen/X86/2008-03-07-APIntBug.ll
    llvm/trunk/test/CodeGen/X86/2008-03-12-ThreadLocalAlias.ll
    llvm/trunk/test/CodeGen/X86/2008-03-14-SpillerCrash.ll
    llvm/trunk/test/CodeGen/X86/2008-03-23-DarwinAsmComments.ll
    llvm/trunk/test/CodeGen/X86/2008-03-31-SpillerFoldingBug.ll
    llvm/trunk/test/CodeGen/X86/2008-04-16-ReMatBug.ll
    llvm/trunk/test/CodeGen/X86/2008-05-09-PHIElimBug.ll
    llvm/trunk/test/CodeGen/X86/2008-05-12-tailmerge-5.ll
    llvm/trunk/test/CodeGen/X86/2008-05-21-CoalescerBug.ll
    llvm/trunk/test/CodeGen/X86/2008-07-07-DanglingDeadInsts.ll
    llvm/trunk/test/CodeGen/X86/2008-09-18-inline-asm-2.ll
    llvm/trunk/test/CodeGen/X86/2008-09-29-ReMatBug.ll
    llvm/trunk/test/CodeGen/X86/2008-11-06-testb.ll
    llvm/trunk/test/CodeGen/X86/2008-12-01-loop-iv-used-outside-loop.ll
    llvm/trunk/test/CodeGen/X86/2008-12-02-dagcombine-1.ll
    llvm/trunk/test/CodeGen/X86/2008-12-02-dagcombine-2.ll
    llvm/trunk/test/CodeGen/X86/2008-12-23-crazy-address.ll
    llvm/trunk/test/CodeGen/X86/2009-02-09-ivs-different-sizes.ll
    llvm/trunk/test/CodeGen/X86/2009-02-11-codegenprepare-reuse.ll
    llvm/trunk/test/CodeGen/X86/2009-02-12-DebugInfoVLA.ll
    llvm/trunk/test/CodeGen/X86/2009-02-26-MachineLICMBug.ll
    llvm/trunk/test/CodeGen/X86/2009-03-03-BTHang.ll
    llvm/trunk/test/CodeGen/X86/2009-03-05-burr-list-crash.ll
    llvm/trunk/test/CodeGen/X86/2009-04-12-picrel.ll
    llvm/trunk/test/CodeGen/X86/2009-04-14-IllegalRegs.ll
    llvm/trunk/test/CodeGen/X86/2009-04-16-SpillerUnfold.ll
    llvm/trunk/test/CodeGen/X86/2009-04-27-CoalescerAssert.ll
    llvm/trunk/test/CodeGen/X86/2009-04-29-IndirectDestOperands.ll
    llvm/trunk/test/CodeGen/X86/2009-04-29-LinearScanBug.ll
    llvm/trunk/test/CodeGen/X86/2009-04-29-RegAllocAssert.ll
    llvm/trunk/test/CodeGen/X86/2009-04-scale.ll
    llvm/trunk/test/CodeGen/X86/2009-05-30-ISelBug.ll
    llvm/trunk/test/CodeGen/X86/2009-06-02-RewriterBug.ll
    llvm/trunk/test/CodeGen/X86/2009-06-04-VirtualLiveIn.ll
    llvm/trunk/test/CodeGen/X86/2009-07-20-CoalescerBug.ll
    llvm/trunk/test/CodeGen/X86/2009-08-06-inlineasm.ll
    llvm/trunk/test/CodeGen/X86/2009-08-14-Win64MemoryIndirectArg.ll
    llvm/trunk/test/CodeGen/X86/2009-09-10-LoadFoldingBug.ll
    llvm/trunk/test/CodeGen/X86/2009-09-10-SpillComments.ll
    llvm/trunk/test/CodeGen/X86/2009-09-21-NoSpillLoopCount.ll
    llvm/trunk/test/CodeGen/X86/2009-10-19-EmergencySpill.ll
    llvm/trunk/test/CodeGen/X86/2009-10-25-RewriterBug.ll
    llvm/trunk/test/CodeGen/X86/2009-11-16-MachineLICM.ll
    llvm/trunk/test/CodeGen/X86/2009-11-16-UnfoldMemOpBug.ll
    llvm/trunk/test/CodeGen/X86/2009-11-17-UpdateTerminator.ll
    llvm/trunk/test/CodeGen/X86/2009-12-11-TLSNoRedZone.ll
    llvm/trunk/test/CodeGen/X86/2010-01-13-OptExtBug.ll
    llvm/trunk/test/CodeGen/X86/2010-01-15-SelectionDAGCycle.ll
    llvm/trunk/test/CodeGen/X86/2010-01-18-DbgValue.ll
    llvm/trunk/test/CodeGen/X86/2010-02-04-SchedulerBug.ll
    llvm/trunk/test/CodeGen/X86/2010-02-19-TailCallRetAddrBug.ll
    llvm/trunk/test/CodeGen/X86/2010-03-05-ConstantFoldCFG.ll
    llvm/trunk/test/CodeGen/X86/2010-03-17-ISelBug.ll
    llvm/trunk/test/CodeGen/X86/2010-04-08-CoalescerBug.ll
    llvm/trunk/test/CodeGen/X86/2010-04-30-LocalAlloc-LandingPad.ll
    llvm/trunk/test/CodeGen/X86/2010-05-26-DotDebugLoc.ll
    llvm/trunk/test/CodeGen/X86/2010-06-25-CoalescerSubRegDefDead.ll
    llvm/trunk/test/CodeGen/X86/2010-08-04-StackVariable.ll
    llvm/trunk/test/CodeGen/X86/2010-09-16-asmcrash.ll
    llvm/trunk/test/CodeGen/X86/2010-09-17-SideEffectsInChain.ll
    llvm/trunk/test/CodeGen/X86/2010-11-09-MOVLPS.ll
    llvm/trunk/test/CodeGen/X86/2011-02-21-VirtRegRewriter-KillSubReg.ll
    llvm/trunk/test/CodeGen/X86/2011-03-02-DAGCombiner.ll
    llvm/trunk/test/CodeGen/X86/2011-03-09-Physreg-Coalescing.ll
    llvm/trunk/test/CodeGen/X86/2011-04-13-SchedCmpJmp.ll
    llvm/trunk/test/CodeGen/X86/2011-05-26-UnreachableBlockElim.ll
    llvm/trunk/test/CodeGen/X86/2011-05-27-CrossClassCoalescing.ll
    llvm/trunk/test/CodeGen/X86/2011-06-03-x87chain.ll
    llvm/trunk/test/CodeGen/X86/2011-06-12-FastAllocSpill.ll
    llvm/trunk/test/CodeGen/X86/2011-06-19-QuicksortCoalescerBug.ll
    llvm/trunk/test/CodeGen/X86/2011-07-13-BadFrameIndexDisplacement.ll
    llvm/trunk/test/CodeGen/X86/2011-10-12-MachineCSE.ll
    llvm/trunk/test/CodeGen/X86/2012-01-10-UndefExceptionEdge.ll
    llvm/trunk/test/CodeGen/X86/2012-03-26-PostRALICMBug.ll
    llvm/trunk/test/CodeGen/X86/2012-04-26-sdglue.ll
    llvm/trunk/test/CodeGen/X86/2012-09-28-CGPBug.ll
    llvm/trunk/test/CodeGen/X86/2012-10-02-DAGCycle.ll
    llvm/trunk/test/CodeGen/X86/2012-10-03-DAGCycle.ll
    llvm/trunk/test/CodeGen/X86/2012-10-18-crash-dagco.ll
    llvm/trunk/test/CodeGen/X86/2012-11-28-merge-store-alias.ll
    llvm/trunk/test/CodeGen/X86/2012-11-30-handlemove-dbg.ll
    llvm/trunk/test/CodeGen/X86/2012-11-30-misched-dbg.ll
    llvm/trunk/test/CodeGen/X86/2012-11-30-regpres-dbg.ll
    llvm/trunk/test/CodeGen/X86/2012-12-06-python27-miscompile.ll
    llvm/trunk/test/CodeGen/X86/2012-12-1-merge-multiple.ll
    llvm/trunk/test/CodeGen/X86/2012-12-19-NoImplicitFloat.ll
    llvm/trunk/test/CodeGen/X86/2014-08-29-CompactUnwind.ll
    llvm/trunk/test/CodeGen/X86/GC/badreadproto.ll
    llvm/trunk/test/CodeGen/X86/GC/badwriteproto.ll
    llvm/trunk/test/CodeGen/X86/GC/inline.ll
    llvm/trunk/test/CodeGen/X86/GC/inline2.ll
    llvm/trunk/test/CodeGen/X86/MachineSink-eflags.ll
    llvm/trunk/test/CodeGen/X86/MergeConsecutiveStores.ll
    llvm/trunk/test/CodeGen/X86/StackColoring-dbg.ll
    llvm/trunk/test/CodeGen/X86/StackColoring.ll
    llvm/trunk/test/CodeGen/X86/SwitchLowering.ll
    llvm/trunk/test/CodeGen/X86/abi-isel.ll
    llvm/trunk/test/CodeGen/X86/addr-mode-matcher.ll
    llvm/trunk/test/CodeGen/X86/aligned-variadic.ll
    llvm/trunk/test/CodeGen/X86/atom-cmpb.ll
    llvm/trunk/test/CodeGen/X86/atom-fixup-lea1.ll
    llvm/trunk/test/CodeGen/X86/atom-fixup-lea2.ll
    llvm/trunk/test/CodeGen/X86/atom-fixup-lea3.ll
    llvm/trunk/test/CodeGen/X86/atom-fixup-lea4.ll
    llvm/trunk/test/CodeGen/X86/atom-lea-sp.ll
    llvm/trunk/test/CodeGen/X86/atomic-dagsched.ll
    llvm/trunk/test/CodeGen/X86/avoid-loop-align-2.ll
    llvm/trunk/test/CodeGen/X86/avoid-loop-align.ll
    llvm/trunk/test/CodeGen/X86/avoid_complex_am.ll
    llvm/trunk/test/CodeGen/X86/avx-basic.ll
    llvm/trunk/test/CodeGen/X86/avx-splat.ll
    llvm/trunk/test/CodeGen/X86/avx-vextractf128.ll
    llvm/trunk/test/CodeGen/X86/avx-vinsertf128.ll
    llvm/trunk/test/CodeGen/X86/avx.ll
    llvm/trunk/test/CodeGen/X86/avx512-i1test.ll   (contents, props changed)
    llvm/trunk/test/CodeGen/X86/avx512er-intrinsics.ll
    llvm/trunk/test/CodeGen/X86/block-placement.ll
    llvm/trunk/test/CodeGen/X86/break-false-dep.ll
    llvm/trunk/test/CodeGen/X86/byval-align.ll
    llvm/trunk/test/CodeGen/X86/byval.ll
    llvm/trunk/test/CodeGen/X86/byval2.ll
    llvm/trunk/test/CodeGen/X86/byval3.ll
    llvm/trunk/test/CodeGen/X86/byval4.ll
    llvm/trunk/test/CodeGen/X86/byval5.ll
    llvm/trunk/test/CodeGen/X86/byval7.ll
    llvm/trunk/test/CodeGen/X86/call-push.ll
    llvm/trunk/test/CodeGen/X86/chain_order.ll
    llvm/trunk/test/CodeGen/X86/change-compare-stride-1.ll
    llvm/trunk/test/CodeGen/X86/cmp.ll
    llvm/trunk/test/CodeGen/X86/coalesce-esp.ll
    llvm/trunk/test/CodeGen/X86/coalescer-commute1.ll
    llvm/trunk/test/CodeGen/X86/coalescer-commute4.ll
    llvm/trunk/test/CodeGen/X86/coalescer-cross.ll
    llvm/trunk/test/CodeGen/X86/code_placement.ll
    llvm/trunk/test/CodeGen/X86/codegen-prepare-addrmode-sext.ll
    llvm/trunk/test/CodeGen/X86/codegen-prepare-cast.ll
    llvm/trunk/test/CodeGen/X86/codegen-prepare-crash.ll
    llvm/trunk/test/CodeGen/X86/codegen-prepare.ll
    llvm/trunk/test/CodeGen/X86/combiner-aa-0.ll
    llvm/trunk/test/CodeGen/X86/combiner-aa-1.ll
    llvm/trunk/test/CodeGen/X86/compact-unwind.ll
    llvm/trunk/test/CodeGen/X86/complex-asm.ll
    llvm/trunk/test/CodeGen/X86/const-base-addr.ll
    llvm/trunk/test/CodeGen/X86/constant-combines.ll
    llvm/trunk/test/CodeGen/X86/convert-2-addr-3-addr-inc64.ll
    llvm/trunk/test/CodeGen/X86/cppeh-catch-all.ll
    llvm/trunk/test/CodeGen/X86/cppeh-catch-scalar.ll
    llvm/trunk/test/CodeGen/X86/cppeh-frame-vars.ll
    llvm/trunk/test/CodeGen/X86/crash-O0.ll
    llvm/trunk/test/CodeGen/X86/crash.ll
    llvm/trunk/test/CodeGen/X86/dagcombine-cse.ll
    llvm/trunk/test/CodeGen/X86/dbg-changes-codegen-branch-folding.ll
    llvm/trunk/test/CodeGen/X86/dbg-changes-codegen.ll
    llvm/trunk/test/CodeGen/X86/dbg-combine.ll
    llvm/trunk/test/CodeGen/X86/dynamic-alloca-lifetime.ll
    llvm/trunk/test/CodeGen/X86/early-ifcvt.ll
    llvm/trunk/test/CodeGen/X86/extract-extract.ll
    llvm/trunk/test/CodeGen/X86/fast-isel-gep.ll
    llvm/trunk/test/CodeGen/X86/fast-isel-x86-64.ll
    llvm/trunk/test/CodeGen/X86/fast-isel.ll
    llvm/trunk/test/CodeGen/X86/fastcc-byval.ll
    llvm/trunk/test/CodeGen/X86/fastcc-sret.ll
    llvm/trunk/test/CodeGen/X86/fastisel-gep-promote-before-add.ll
    llvm/trunk/test/CodeGen/X86/fold-add.ll
    llvm/trunk/test/CodeGen/X86/fold-and-shift.ll
    llvm/trunk/test/CodeGen/X86/fold-call-3.ll
    llvm/trunk/test/CodeGen/X86/fold-call-oper.ll
    llvm/trunk/test/CodeGen/X86/fold-call.ll
    llvm/trunk/test/CodeGen/X86/fold-load-vec.ll
    llvm/trunk/test/CodeGen/X86/fold-mul-lohi.ll
    llvm/trunk/test/CodeGen/X86/fold-tied-op.ll
    llvm/trunk/test/CodeGen/X86/full-lsr.ll
    llvm/trunk/test/CodeGen/X86/gather-addresses.ll
    llvm/trunk/test/CodeGen/X86/gs-fold.ll
    llvm/trunk/test/CodeGen/X86/h-register-addressing-32.ll
    llvm/trunk/test/CodeGen/X86/h-register-addressing-64.ll
    llvm/trunk/test/CodeGen/X86/h-registers-2.ll
    llvm/trunk/test/CodeGen/X86/huge-stack-offset.ll
    llvm/trunk/test/CodeGen/X86/i128-mul.ll
    llvm/trunk/test/CodeGen/X86/inalloca-ctor.ll
    llvm/trunk/test/CodeGen/X86/inalloca-invoke.ll
    llvm/trunk/test/CodeGen/X86/inalloca-stdcall.ll
    llvm/trunk/test/CodeGen/X86/inalloca.ll
    llvm/trunk/test/CodeGen/X86/ins_subreg_coalesce-3.ll
    llvm/trunk/test/CodeGen/X86/insert-positions.ll
    llvm/trunk/test/CodeGen/X86/isel-optnone.ll
    llvm/trunk/test/CodeGen/X86/isel-sink.ll
    llvm/trunk/test/CodeGen/X86/isel-sink2.ll
    llvm/trunk/test/CodeGen/X86/isel-sink3.ll
    llvm/trunk/test/CodeGen/X86/jump_sign.ll
    llvm/trunk/test/CodeGen/X86/large-gep-chain.ll
    llvm/trunk/test/CodeGen/X86/large-gep-scale.ll
    llvm/trunk/test/CodeGen/X86/lea-5.ll
    llvm/trunk/test/CodeGen/X86/legalize-sub-zero-2.ll
    llvm/trunk/test/CodeGen/X86/licm-nested.ll
    llvm/trunk/test/CodeGen/X86/liveness-local-regalloc.ll
    llvm/trunk/test/CodeGen/X86/load-slice.ll
    llvm/trunk/test/CodeGen/X86/loop-hoist.ll
    llvm/trunk/test/CodeGen/X86/loop-strength-reduce-2.ll
    llvm/trunk/test/CodeGen/X86/loop-strength-reduce-3.ll
    llvm/trunk/test/CodeGen/X86/loop-strength-reduce.ll
    llvm/trunk/test/CodeGen/X86/loop-strength-reduce2.ll
    llvm/trunk/test/CodeGen/X86/loop-strength-reduce4.ll
    llvm/trunk/test/CodeGen/X86/loop-strength-reduce7.ll
    llvm/trunk/test/CodeGen/X86/loop-strength-reduce8.ll
    llvm/trunk/test/CodeGen/X86/lsr-delayed-fold.ll
    llvm/trunk/test/CodeGen/X86/lsr-i386.ll
    llvm/trunk/test/CodeGen/X86/lsr-interesting-step.ll
    llvm/trunk/test/CodeGen/X86/lsr-loop-exit-cond.ll
    llvm/trunk/test/CodeGen/X86/lsr-normalization.ll
    llvm/trunk/test/CodeGen/X86/lsr-quadratic-expand.ll
    llvm/trunk/test/CodeGen/X86/lsr-redundant-addressing.ll
    llvm/trunk/test/CodeGen/X86/lsr-reuse-trunc.ll
    llvm/trunk/test/CodeGen/X86/lsr-reuse.ll
    llvm/trunk/test/CodeGen/X86/lsr-static-addr.ll
    llvm/trunk/test/CodeGen/X86/machine-cse.ll
    llvm/trunk/test/CodeGen/X86/masked-iv-safe.ll
    llvm/trunk/test/CodeGen/X86/masked-iv-unsafe.ll
    llvm/trunk/test/CodeGen/X86/mem-intrin-base-reg.ll
    llvm/trunk/test/CodeGen/X86/memset-3.ll
    llvm/trunk/test/CodeGen/X86/memset.ll
    llvm/trunk/test/CodeGen/X86/merge_store.ll
    llvm/trunk/test/CodeGen/X86/mingw-alloca.ll
    llvm/trunk/test/CodeGen/X86/misched-aa-colored.ll
    llvm/trunk/test/CodeGen/X86/misched-aa-mmos.ll
    llvm/trunk/test/CodeGen/X86/misched-balance.ll
    llvm/trunk/test/CodeGen/X86/misched-crash.ll
    llvm/trunk/test/CodeGen/X86/misched-fusion.ll
    llvm/trunk/test/CodeGen/X86/misched-matmul.ll
    llvm/trunk/test/CodeGen/X86/misched-matrix.ll
    llvm/trunk/test/CodeGen/X86/mmx-arith.ll
    llvm/trunk/test/CodeGen/X86/movmsk.ll
    llvm/trunk/test/CodeGen/X86/ms-inline-asm.ll
    llvm/trunk/test/CodeGen/X86/mul128_sext_loop.ll
    llvm/trunk/test/CodeGen/X86/muloti.ll
    llvm/trunk/test/CodeGen/X86/multiple-loop-post-inc.ll
    llvm/trunk/test/CodeGen/X86/musttail-indirect.ll
    llvm/trunk/test/CodeGen/X86/musttail-thiscall.ll
    llvm/trunk/test/CodeGen/X86/musttail-varargs.ll
    llvm/trunk/test/CodeGen/X86/nancvt.ll
    llvm/trunk/test/CodeGen/X86/negate-add-zero.ll
    llvm/trunk/test/CodeGen/X86/nosse-varargs.ll
    llvm/trunk/test/CodeGen/X86/optimize-max-0.ll
    llvm/trunk/test/CodeGen/X86/optimize-max-1.ll
    llvm/trunk/test/CodeGen/X86/optimize-max-2.ll
    llvm/trunk/test/CodeGen/X86/optimize-max-3.ll
    llvm/trunk/test/CodeGen/X86/or-address.ll
    llvm/trunk/test/CodeGen/X86/peep-test-0.ll
    llvm/trunk/test/CodeGen/X86/peep-test-1.ll
    llvm/trunk/test/CodeGen/X86/peephole-fold-movsd.ll
    llvm/trunk/test/CodeGen/X86/phi-bit-propagation.ll
    llvm/trunk/test/CodeGen/X86/phielim-split.ll
    llvm/trunk/test/CodeGen/X86/phys_subreg_coalesce-3.ll
    llvm/trunk/test/CodeGen/X86/postra-licm.ll
    llvm/trunk/test/CodeGen/X86/pr13899.ll
    llvm/trunk/test/CodeGen/X86/pr14333.ll
    llvm/trunk/test/CodeGen/X86/pr15309.ll
    llvm/trunk/test/CodeGen/X86/pr18162.ll
    llvm/trunk/test/CodeGen/X86/pr18846.ll
    llvm/trunk/test/CodeGen/X86/pr20020.ll
    llvm/trunk/test/CodeGen/X86/pr2177.ll
    llvm/trunk/test/CodeGen/X86/pr21792.ll
    llvm/trunk/test/CodeGen/X86/pr2656.ll
    llvm/trunk/test/CodeGen/X86/pr2849.ll
    llvm/trunk/test/CodeGen/X86/pr3154.ll
    llvm/trunk/test/CodeGen/X86/pr3317.ll
    llvm/trunk/test/CodeGen/X86/pre-ra-sched.ll
    llvm/trunk/test/CodeGen/X86/private-2.ll
    llvm/trunk/test/CodeGen/X86/psubus.ll
    llvm/trunk/test/CodeGen/X86/ragreedy-bug.ll
    llvm/trunk/test/CodeGen/X86/ragreedy-hoist-spill.ll
    llvm/trunk/test/CodeGen/X86/ragreedy-last-chance-recoloring.ll
    llvm/trunk/test/CodeGen/X86/rd-mod-wr-eflags.ll
    llvm/trunk/test/CodeGen/X86/rdrand.ll
    llvm/trunk/test/CodeGen/X86/regalloc-reconcile-broken-hints.ll
    llvm/trunk/test/CodeGen/X86/regpressure.ll
    llvm/trunk/test/CodeGen/X86/remat-fold-load.ll
    llvm/trunk/test/CodeGen/X86/remat-invalid-liveness.ll
    llvm/trunk/test/CodeGen/X86/remat-scalar-zero.ll
    llvm/trunk/test/CodeGen/X86/reverse_branches.ll
    llvm/trunk/test/CodeGen/X86/rip-rel-lea.ll
    llvm/trunk/test/CodeGen/X86/scalar_widen_div.ll
    llvm/trunk/test/CodeGen/X86/seh-finally.ll   (contents, props changed)
    llvm/trunk/test/CodeGen/X86/select.ll
    llvm/trunk/test/CodeGen/X86/sext-load.ll
    llvm/trunk/test/CodeGen/X86/shift-combine.ll
    llvm/trunk/test/CodeGen/X86/shift-folding.ll
    llvm/trunk/test/CodeGen/X86/shl-i64.ll
    llvm/trunk/test/CodeGen/X86/sibcall-4.ll
    llvm/trunk/test/CodeGen/X86/sibcall.ll
    llvm/trunk/test/CodeGen/X86/sink-hoist.ll
    llvm/trunk/test/CodeGen/X86/sink-out-of-loop.ll
    llvm/trunk/test/CodeGen/X86/slow-incdec.ll
    llvm/trunk/test/CodeGen/X86/soft-fp.ll
    llvm/trunk/test/CodeGen/X86/sse-domains.ll
    llvm/trunk/test/CodeGen/X86/sse2.ll
    llvm/trunk/test/CodeGen/X86/sse41.ll
    llvm/trunk/test/CodeGen/X86/ssp-data-layout.ll
    llvm/trunk/test/CodeGen/X86/stack-align.ll
    llvm/trunk/test/CodeGen/X86/stack-protector-vreg-to-vreg-copy.ll
    llvm/trunk/test/CodeGen/X86/stack-protector-weight.ll
    llvm/trunk/test/CodeGen/X86/stack-protector.ll
    llvm/trunk/test/CodeGen/X86/stack-update-frame-opcode.ll
    llvm/trunk/test/CodeGen/X86/stack_guard_remat.ll
    llvm/trunk/test/CodeGen/X86/stdarg.ll
    llvm/trunk/test/CodeGen/X86/store_op_load_fold2.ll
    llvm/trunk/test/CodeGen/X86/stride-nine-with-base-reg.ll
    llvm/trunk/test/CodeGen/X86/stride-reuse.ll
    llvm/trunk/test/CodeGen/X86/subreg-to-reg-2.ll
    llvm/trunk/test/CodeGen/X86/sunkaddr-ext.ll
    llvm/trunk/test/CodeGen/X86/tail-opts.ll
    llvm/trunk/test/CodeGen/X86/tailcall-64.ll
    llvm/trunk/test/CodeGen/X86/tailcall-returndup-void.ll
    llvm/trunk/test/CodeGen/X86/tailcall-ri64.ll
    llvm/trunk/test/CodeGen/X86/tailcallbyval.ll
    llvm/trunk/test/CodeGen/X86/tailcallbyval64.ll
    llvm/trunk/test/CodeGen/X86/this-return-64.ll
    llvm/trunk/test/CodeGen/X86/twoaddr-pass-sink.ll
    llvm/trunk/test/CodeGen/X86/unaligned-32-byte-memops.ll
    llvm/trunk/test/CodeGen/X86/unaligned-load.ll
    llvm/trunk/test/CodeGen/X86/unaligned-spill-folding.ll
    llvm/trunk/test/CodeGen/X86/unwindraise.ll
    llvm/trunk/test/CodeGen/X86/vaargs.ll
    llvm/trunk/test/CodeGen/X86/vec_align.ll
    llvm/trunk/test/CodeGen/X86/vec_ins_extract.ll
    llvm/trunk/test/CodeGen/X86/vec_loadsingles.ll
    llvm/trunk/test/CodeGen/X86/vec_setcc-2.ll
    llvm/trunk/test/CodeGen/X86/vector-gep.ll
    llvm/trunk/test/CodeGen/X86/vortex-bug.ll
    llvm/trunk/test/CodeGen/X86/vselect-avx.ll
    llvm/trunk/test/CodeGen/X86/vselect-minmax.ll
    llvm/trunk/test/CodeGen/X86/warn-stack.ll
    llvm/trunk/test/CodeGen/X86/widen_arith-1.ll
    llvm/trunk/test/CodeGen/X86/widen_arith-2.ll
    llvm/trunk/test/CodeGen/X86/widen_arith-3.ll
    llvm/trunk/test/CodeGen/X86/widen_arith-4.ll
    llvm/trunk/test/CodeGen/X86/widen_arith-5.ll
    llvm/trunk/test/CodeGen/X86/widen_arith-6.ll
    llvm/trunk/test/CodeGen/X86/widen_cast-1.ll
    llvm/trunk/test/CodeGen/X86/widen_cast-2.ll
    llvm/trunk/test/CodeGen/X86/widen_cast-4.ll
    llvm/trunk/test/CodeGen/X86/widen_load-1.ll
    llvm/trunk/test/CodeGen/X86/win32_sret.ll
    llvm/trunk/test/CodeGen/X86/win64_frame.ll
    llvm/trunk/test/CodeGen/X86/x32-lea-1.ll
    llvm/trunk/test/CodeGen/X86/x86-64-disp.ll
    llvm/trunk/test/CodeGen/X86/x86-64-jumps.ll
    llvm/trunk/test/CodeGen/X86/x86-64-sret-return.ll
    llvm/trunk/test/CodeGen/X86/x86-64-static-relo-movl.ll
    llvm/trunk/test/CodeGen/X86/zext-sext.ll
    llvm/trunk/test/CodeGen/X86/zlib-longest-match.ll
    llvm/trunk/test/CodeGen/XCore/2009-01-08-Crash.ll
    llvm/trunk/test/CodeGen/XCore/2010-02-25-LSR-Crash.ll
    llvm/trunk/test/CodeGen/XCore/codemodel.ll
    llvm/trunk/test/CodeGen/XCore/epilogue_prologue.ll
    llvm/trunk/test/CodeGen/XCore/indirectbr.ll
    llvm/trunk/test/CodeGen/XCore/load.ll
    llvm/trunk/test/CodeGen/XCore/offset_folding.ll
    llvm/trunk/test/CodeGen/XCore/scavenging.ll
    llvm/trunk/test/CodeGen/XCore/store.ll
    llvm/trunk/test/CodeGen/XCore/trampoline.ll
    llvm/trunk/test/DebugInfo/2010-05-03-OriginDIE.ll
    llvm/trunk/test/DebugInfo/AArch64/cfi-eof-prologue.ll
    llvm/trunk/test/DebugInfo/AArch64/frameindices.ll
    llvm/trunk/test/DebugInfo/AArch64/struct_by_value.ll
    llvm/trunk/test/DebugInfo/ARM/cfi-eof-prologue.ll
    llvm/trunk/test/DebugInfo/ARM/lowerbdgdeclare_vla.ll
    llvm/trunk/test/DebugInfo/ARM/selectiondag-deadcode.ll
    llvm/trunk/test/DebugInfo/SystemZ/variable-loc.ll
    llvm/trunk/test/DebugInfo/X86/2011-12-16-BadStructRef.ll
    llvm/trunk/test/DebugInfo/X86/DW_AT_byte_size.ll
    llvm/trunk/test/DebugInfo/X86/DW_AT_object_pointer.ll
    llvm/trunk/test/DebugInfo/X86/arguments.ll
    llvm/trunk/test/DebugInfo/X86/array.ll
    llvm/trunk/test/DebugInfo/X86/array2.ll
    llvm/trunk/test/DebugInfo/X86/block-capture.ll
    llvm/trunk/test/DebugInfo/X86/cu-ranges-odr.ll
    llvm/trunk/test/DebugInfo/X86/dbg-byval-parameter.ll
    llvm/trunk/test/DebugInfo/X86/dbg-declare-arg.ll
    llvm/trunk/test/DebugInfo/X86/dbg-value-dag-combine.ll
    llvm/trunk/test/DebugInfo/X86/dbg-value-inlined-parameter.ll
    llvm/trunk/test/DebugInfo/X86/dbg-value-range.ll
    llvm/trunk/test/DebugInfo/X86/debug-info-block-captured-self.ll
    llvm/trunk/test/DebugInfo/X86/debug-info-blocks.ll
    llvm/trunk/test/DebugInfo/X86/debug-info-static-member.ll
    llvm/trunk/test/DebugInfo/X86/debug-loc-offset.ll
    llvm/trunk/test/DebugInfo/X86/decl-derived-member.ll
    llvm/trunk/test/DebugInfo/X86/earlydup-crash.ll
    llvm/trunk/test/DebugInfo/X86/elf-names.ll
    llvm/trunk/test/DebugInfo/X86/empty-and-one-elem-array.ll
    llvm/trunk/test/DebugInfo/X86/instcombine-instrinsics.ll
    llvm/trunk/test/DebugInfo/X86/misched-dbg-value.ll
    llvm/trunk/test/DebugInfo/X86/nodebug_with_debug_loc.ll
    llvm/trunk/test/DebugInfo/X86/op_deref.ll
    llvm/trunk/test/DebugInfo/X86/pieces-2.ll
    llvm/trunk/test/DebugInfo/X86/pr12831.ll
    llvm/trunk/test/DebugInfo/X86/recursive_inlining.ll
    llvm/trunk/test/DebugInfo/X86/sret.ll
    llvm/trunk/test/DebugInfo/X86/sroasplit-1.ll
    llvm/trunk/test/DebugInfo/X86/sroasplit-2.ll
    llvm/trunk/test/DebugInfo/X86/sroasplit-3.ll
    llvm/trunk/test/DebugInfo/X86/sroasplit-4.ll
    llvm/trunk/test/DebugInfo/X86/subregisters.ll
    llvm/trunk/test/DebugInfo/X86/vla.ll
    llvm/trunk/test/DebugInfo/block-asan.ll
    llvm/trunk/test/DebugInfo/debug-info-always-inline.ll
    llvm/trunk/test/DebugInfo/incorrect-variable-debugloc.ll
    llvm/trunk/test/DebugInfo/inheritance.ll
    llvm/trunk/test/ExecutionEngine/MCJIT/2002-12-16-ArgTest.ll
    llvm/trunk/test/ExecutionEngine/MCJIT/2003-05-07-ArgumentTest.ll
    llvm/trunk/test/ExecutionEngine/MCJIT/2008-06-05-APInt-OverAShr.ll
    llvm/trunk/test/ExecutionEngine/MCJIT/fpbitcast.ll
    llvm/trunk/test/ExecutionEngine/MCJIT/pr13727.ll
    llvm/trunk/test/ExecutionEngine/MCJIT/remote/test-common-symbols-remote.ll
    llvm/trunk/test/ExecutionEngine/MCJIT/test-common-symbols.ll
    llvm/trunk/test/ExecutionEngine/OrcJIT/2002-12-16-ArgTest.ll
    llvm/trunk/test/ExecutionEngine/OrcJIT/2003-05-07-ArgumentTest.ll
    llvm/trunk/test/ExecutionEngine/OrcJIT/2008-06-05-APInt-OverAShr.ll
    llvm/trunk/test/ExecutionEngine/OrcJIT/fpbitcast.ll
    llvm/trunk/test/ExecutionEngine/OrcJIT/pr13727.ll
    llvm/trunk/test/ExecutionEngine/OrcJIT/remote/test-common-symbols-remote.ll
    llvm/trunk/test/ExecutionEngine/OrcJIT/test-common-symbols.ll
    llvm/trunk/test/ExecutionEngine/fma3-jit.ll
    llvm/trunk/test/ExecutionEngine/test-interp-vec-loadstore.ll
    llvm/trunk/test/Feature/globalvars.ll
    llvm/trunk/test/Feature/memorymarkers.ll
    llvm/trunk/test/Feature/recursivetype.ll
    llvm/trunk/test/Feature/testalloca.ll
    llvm/trunk/test/Feature/testconstants.ll
    llvm/trunk/test/Feature/weak_constant.ll
    llvm/trunk/test/Instrumentation/AddressSanitizer/X86/bug_11395.ll
    llvm/trunk/test/Instrumentation/AddressSanitizer/stack-poisoning.ll
    llvm/trunk/test/Instrumentation/AddressSanitizer/ubsan.ll
    llvm/trunk/test/Instrumentation/BoundsChecking/phi.ll
    llvm/trunk/test/Instrumentation/BoundsChecking/simple-32.ll
    llvm/trunk/test/Instrumentation/BoundsChecking/simple.ll
    llvm/trunk/test/Instrumentation/DataFlowSanitizer/abilist.ll
    llvm/trunk/test/Instrumentation/DataFlowSanitizer/load.ll
    llvm/trunk/test/Instrumentation/DataFlowSanitizer/store.ll
    llvm/trunk/test/Instrumentation/MemorySanitizer/msan_basic.ll
    llvm/trunk/test/Instrumentation/MemorySanitizer/store-long-origin.ll
    llvm/trunk/test/Instrumentation/SanitizerCoverage/coverage-dbg.ll
    llvm/trunk/test/Instrumentation/ThreadSanitizer/read_from_global.ll
    llvm/trunk/test/Integer/2007-01-19-TruncSext.ll
    llvm/trunk/test/LTO/X86/keep-used-puts-during-instcombine.ll
    llvm/trunk/test/LTO/X86/no-undefined-puts-when-implemented.ll
    llvm/trunk/test/Linker/2004-05-07-TypeResolution2.ll
    llvm/trunk/test/Linker/AppendingLinkage.ll
    llvm/trunk/test/Linker/DbgDeclare2.ll
    llvm/trunk/test/Linker/Inputs/PR11464.b.ll
    llvm/trunk/test/Linker/Inputs/opaque.ll
    llvm/trunk/test/Linker/Inputs/testlink.ll
    llvm/trunk/test/Linker/opaque.ll
    llvm/trunk/test/Linker/partial-type-refinement-link.ll
    llvm/trunk/test/Linker/testlink.ll
    llvm/trunk/test/Linker/type-unique-src-type.ll
    llvm/trunk/test/Linker/type-unique-type-array-a.ll
    llvm/trunk/test/Linker/type-unique-type-array-b.ll
    llvm/trunk/test/MC/ARM/elf-reloc-03.ll
    llvm/trunk/test/MC/COFF/linker-options.ll   (contents, props changed)
    llvm/trunk/test/Other/2008-06-04-FieldSizeInPacked.ll
    llvm/trunk/test/Other/constant-fold-gep.ll
    llvm/trunk/test/Other/lint.ll
    llvm/trunk/test/Transforms/ADCE/2002-05-23-ZeroArgPHITest.ll
    llvm/trunk/test/Transforms/ADCE/2002-05-28-Crash.ll
    llvm/trunk/test/Transforms/ADCE/2003-06-24-BadSuccessor.ll
    llvm/trunk/test/Transforms/ADCE/2003-06-24-BasicFunctionality.ll
    llvm/trunk/test/Transforms/ADCE/basictest1.ll
    llvm/trunk/test/Transforms/ADCE/basictest2.ll
    llvm/trunk/test/Transforms/AlignmentFromAssumptions/simple.ll
    llvm/trunk/test/Transforms/AlignmentFromAssumptions/simple32.ll
    llvm/trunk/test/Transforms/AlignmentFromAssumptions/start-unk.ll
    llvm/trunk/test/Transforms/ArgumentPromotion/2008-07-02-array-indexing.ll
    llvm/trunk/test/Transforms/ArgumentPromotion/aggregate-promote.ll
    llvm/trunk/test/Transforms/ArgumentPromotion/attrs.ll
    llvm/trunk/test/Transforms/ArgumentPromotion/byval-2.ll
    llvm/trunk/test/Transforms/ArgumentPromotion/byval.ll
    llvm/trunk/test/Transforms/ArgumentPromotion/crash.ll
    llvm/trunk/test/Transforms/ArgumentPromotion/fp80.ll
    llvm/trunk/test/Transforms/ArgumentPromotion/inalloca.ll
    llvm/trunk/test/Transforms/BBVectorize/X86/loop1.ll
    llvm/trunk/test/Transforms/BBVectorize/X86/pr15289.ll
    llvm/trunk/test/Transforms/BBVectorize/X86/sh-rec.ll
    llvm/trunk/test/Transforms/BBVectorize/X86/sh-rec2.ll
    llvm/trunk/test/Transforms/BBVectorize/X86/sh-rec3.ll
    llvm/trunk/test/Transforms/BBVectorize/X86/simple-ldstr.ll
    llvm/trunk/test/Transforms/BBVectorize/X86/wr-aliases.ll
    llvm/trunk/test/Transforms/BBVectorize/func-alias.ll
    llvm/trunk/test/Transforms/BBVectorize/ld1.ll
    llvm/trunk/test/Transforms/BBVectorize/loop1.ll
    llvm/trunk/test/Transforms/BBVectorize/metadata.ll
    llvm/trunk/test/Transforms/BBVectorize/no-ldstr-conn.ll
    llvm/trunk/test/Transforms/BBVectorize/simple-ldstr-ptrs.ll
    llvm/trunk/test/Transforms/BBVectorize/simple-ldstr.ll
    llvm/trunk/test/Transforms/CodeGenPrepare/X86/sink-addrspacecast.ll
    llvm/trunk/test/Transforms/CodeGenPrepare/statepoint-relocate.ll
    llvm/trunk/test/Transforms/ConstProp/2009-09-01-GEP-Crash.ll
    llvm/trunk/test/Transforms/ConstantHoisting/AArch64/const-addr.ll
    llvm/trunk/test/Transforms/ConstantHoisting/PowerPC/const-base-addr.ll
    llvm/trunk/test/Transforms/ConstantHoisting/X86/const-base-addr.ll
    llvm/trunk/test/Transforms/ConstantHoisting/X86/delete-dead-cast-inst.ll
    llvm/trunk/test/Transforms/DeadArgElim/2008-01-16-VarargsParamAttrs.ll
    llvm/trunk/test/Transforms/DeadStoreElimination/2011-03-25-DSEMiscompile.ll
    llvm/trunk/test/Transforms/DeadStoreElimination/2011-09-06-EndOfFunction.ll
    llvm/trunk/test/Transforms/DeadStoreElimination/2011-09-06-MemCpy.ll
    llvm/trunk/test/Transforms/DeadStoreElimination/OverwriteStoreEnd.ll
    llvm/trunk/test/Transforms/DeadStoreElimination/PartialStore.ll
    llvm/trunk/test/Transforms/DeadStoreElimination/const-pointers.ll
    llvm/trunk/test/Transforms/DeadStoreElimination/crash.ll
    llvm/trunk/test/Transforms/DeadStoreElimination/cs-cs-aliasing.ll
    llvm/trunk/test/Transforms/DeadStoreElimination/dominate.ll
    llvm/trunk/test/Transforms/DeadStoreElimination/free.ll
    llvm/trunk/test/Transforms/DeadStoreElimination/libcalls.ll
    llvm/trunk/test/Transforms/DeadStoreElimination/lifetime.ll
    llvm/trunk/test/Transforms/DeadStoreElimination/no-targetdata.ll
    llvm/trunk/test/Transforms/DeadStoreElimination/pr11390.ll
    llvm/trunk/test/Transforms/DeadStoreElimination/simple.ll
    llvm/trunk/test/Transforms/FunctionAttrs/nocapture.ll
    llvm/trunk/test/Transforms/GCOVProfiling/linezero.ll
    llvm/trunk/test/Transforms/GVN/2007-07-25-NestedLoop.ll
    llvm/trunk/test/Transforms/GVN/2007-07-25-SinglePredecessor.ll
    llvm/trunk/test/Transforms/GVN/2007-07-31-NoDomInherit.ll
    llvm/trunk/test/Transforms/GVN/2008-02-12-UndefLoad.ll
    llvm/trunk/test/Transforms/GVN/2008-12-09-SelfRemove.ll
    llvm/trunk/test/Transforms/GVN/2008-12-12-RLE-Crash.ll
    llvm/trunk/test/Transforms/GVN/2008-12-14-rle-reanalyze.ll
    llvm/trunk/test/Transforms/GVN/2008-12-15-CacheVisited.ll
    llvm/trunk/test/Transforms/GVN/2009-01-22-SortInvalidation.ll
    llvm/trunk/test/Transforms/GVN/2009-02-17-LoadPRECrash.ll
    llvm/trunk/test/Transforms/GVN/2009-06-17-InvalidPRE.ll
    llvm/trunk/test/Transforms/GVN/2010-05-08-OneBit.ll
    llvm/trunk/test/Transforms/GVN/2011-06-01-NonLocalMemdepMiscompile.ll
    llvm/trunk/test/Transforms/GVN/calls-readonly.ll
    llvm/trunk/test/Transforms/GVN/cond_br2.ll
    llvm/trunk/test/Transforms/GVN/crash-no-aa.ll
    llvm/trunk/test/Transforms/GVN/crash.ll
    llvm/trunk/test/Transforms/GVN/load-constant-mem.ll
    llvm/trunk/test/Transforms/GVN/load-pre-licm.ll
    llvm/trunk/test/Transforms/GVN/load-pre-nonlocal.ll
    llvm/trunk/test/Transforms/GVN/lpre-call-wrap-2.ll
    llvm/trunk/test/Transforms/GVN/lpre-call-wrap.ll
    llvm/trunk/test/Transforms/GVN/non-local-offset.ll
    llvm/trunk/test/Transforms/GVN/nonescaping-malloc.ll
    llvm/trunk/test/Transforms/GVN/null-aliases-nothing.ll
    llvm/trunk/test/Transforms/GVN/phi-translate-partial-alias.ll
    llvm/trunk/test/Transforms/GVN/phi-translate.ll
    llvm/trunk/test/Transforms/GVN/pr17852.ll
    llvm/trunk/test/Transforms/GVN/pre-gep-load.ll
    llvm/trunk/test/Transforms/GVN/pre-load.ll
    llvm/trunk/test/Transforms/GVN/rle-must-alias.ll
    llvm/trunk/test/Transforms/GVN/rle-phi-translate.ll
    llvm/trunk/test/Transforms/GVN/rle.ll
    llvm/trunk/test/Transforms/GlobalDCE/indirectbr.ll
    llvm/trunk/test/Transforms/GlobalOpt/2005-09-27-Crash.ll
    llvm/trunk/test/Transforms/GlobalOpt/2007-05-13-Crash.ll
    llvm/trunk/test/Transforms/GlobalOpt/2007-11-09-GEP-GEP-Crash.ll
    llvm/trunk/test/Transforms/GlobalOpt/2008-01-13-OutOfRangeSROA.ll
    llvm/trunk/test/Transforms/GlobalOpt/2008-12-16-HeapSRACrash-2.ll
    llvm/trunk/test/Transforms/GlobalOpt/2008-12-16-HeapSRACrash.ll
    llvm/trunk/test/Transforms/GlobalOpt/2009-01-13-phi-user.ll
    llvm/trunk/test/Transforms/GlobalOpt/2009-11-16-MallocSingleStoreToGlobalVar.ll
    llvm/trunk/test/Transforms/GlobalOpt/constantfold-initializers.ll
    llvm/trunk/test/Transforms/GlobalOpt/crash.ll
    llvm/trunk/test/Transforms/GlobalOpt/ctor-list-opt-constexpr.ll
    llvm/trunk/test/Transforms/GlobalOpt/ctor-list-opt.ll
    llvm/trunk/test/Transforms/GlobalOpt/deadfunction.ll
    llvm/trunk/test/Transforms/GlobalOpt/deadglobal-2.ll
    llvm/trunk/test/Transforms/GlobalOpt/globalsra-partial.ll
    llvm/trunk/test/Transforms/GlobalOpt/globalsra-unknown-index.ll
    llvm/trunk/test/Transforms/GlobalOpt/heap-sra-1.ll
    llvm/trunk/test/Transforms/GlobalOpt/heap-sra-2.ll
    llvm/trunk/test/Transforms/GlobalOpt/heap-sra-3.ll
    llvm/trunk/test/Transforms/GlobalOpt/heap-sra-4.ll
    llvm/trunk/test/Transforms/GlobalOpt/heap-sra-phi.ll
    llvm/trunk/test/Transforms/GlobalOpt/load-store-global.ll
    llvm/trunk/test/Transforms/GlobalOpt/malloc-promote-2.ll
    llvm/trunk/test/Transforms/GlobalOpt/malloc-promote-3.ll
    llvm/trunk/test/Transforms/GlobalOpt/memcpy.ll
    llvm/trunk/test/Transforms/GlobalOpt/unnamed-addr.ll
    llvm/trunk/test/Transforms/GlobalOpt/zeroinitializer-gep-load.ll
    llvm/trunk/test/Transforms/IPConstantProp/2009-09-24-byval-ptr.ll
    llvm/trunk/test/Transforms/IPConstantProp/dangling-block-address.ll
    llvm/trunk/test/Transforms/IRCE/decrementing-loop.ll
    llvm/trunk/test/Transforms/IRCE/low-becount.ll
    llvm/trunk/test/Transforms/IRCE/multiple-access-no-preloop.ll
    llvm/trunk/test/Transforms/IRCE/not-likely-taken.ll
    llvm/trunk/test/Transforms/IRCE/single-access-no-preloop.ll
    llvm/trunk/test/Transforms/IRCE/single-access-with-preloop.ll
    llvm/trunk/test/Transforms/IRCE/unhandled.ll
    llvm/trunk/test/Transforms/IRCE/with-parent-loops.ll
    llvm/trunk/test/Transforms/IndVarSimplify/2005-11-18-Crash.ll
    llvm/trunk/test/Transforms/IndVarSimplify/2007-01-06-TripCount.ll
    llvm/trunk/test/Transforms/IndVarSimplify/2008-09-02-IVType.ll
    llvm/trunk/test/Transforms/IndVarSimplify/2009-04-14-shorten_iv_vars.ll
    llvm/trunk/test/Transforms/IndVarSimplify/2009-04-15-shorten-iv-vars-2.ll
    llvm/trunk/test/Transforms/IndVarSimplify/2011-09-27-hoistsext.ll
    llvm/trunk/test/Transforms/IndVarSimplify/2011-10-27-lftrnull.ll
    llvm/trunk/test/Transforms/IndVarSimplify/2011-11-01-lftrptr.ll
    llvm/trunk/test/Transforms/IndVarSimplify/2011-11-15-multiexit.ll
    llvm/trunk/test/Transforms/IndVarSimplify/NVPTX/no-widen-expensive.ll
    llvm/trunk/test/Transforms/IndVarSimplify/ada-loops.ll
    llvm/trunk/test/Transforms/IndVarSimplify/ashr-tripcount.ll
    llvm/trunk/test/Transforms/IndVarSimplify/backedge-on-min-max.ll
    llvm/trunk/test/Transforms/IndVarSimplify/casted-argument.ll
    llvm/trunk/test/Transforms/IndVarSimplify/dangling-use.ll
    llvm/trunk/test/Transforms/IndVarSimplify/elim-extend.ll
    llvm/trunk/test/Transforms/IndVarSimplify/eliminate-comparison.ll
    llvm/trunk/test/Transforms/IndVarSimplify/eliminate-rem.ll
    llvm/trunk/test/Transforms/IndVarSimplify/indirectbr.ll
    llvm/trunk/test/Transforms/IndVarSimplify/iv-fold.ll
    llvm/trunk/test/Transforms/IndVarSimplify/iv-sext.ll
    llvm/trunk/test/Transforms/IndVarSimplify/iv-widen.ll
    llvm/trunk/test/Transforms/IndVarSimplify/iv-zext.ll
    llvm/trunk/test/Transforms/IndVarSimplify/lftr-address-space-pointers.ll
    llvm/trunk/test/Transforms/IndVarSimplify/lftr-promote.ll
    llvm/trunk/test/Transforms/IndVarSimplify/lftr-reuse.ll
    llvm/trunk/test/Transforms/IndVarSimplify/lftr-zext.ll
    llvm/trunk/test/Transforms/IndVarSimplify/loop_evaluate7.ll
    llvm/trunk/test/Transforms/IndVarSimplify/loop_evaluate8.ll
    llvm/trunk/test/Transforms/IndVarSimplify/masked-iv.ll
    llvm/trunk/test/Transforms/IndVarSimplify/no-iv-rewrite.ll
    llvm/trunk/test/Transforms/IndVarSimplify/overflowcheck.ll
    llvm/trunk/test/Transforms/IndVarSimplify/polynomial-expand.ll
    llvm/trunk/test/Transforms/IndVarSimplify/preserve-signed-wrap.ll
    llvm/trunk/test/Transforms/IndVarSimplify/promote-iv-to-eliminate-casts.ll
    llvm/trunk/test/Transforms/IndVarSimplify/sharpen-range.ll
    llvm/trunk/test/Transforms/IndVarSimplify/signed-trip-count.ll
    llvm/trunk/test/Transforms/IndVarSimplify/sink-alloca.ll
    llvm/trunk/test/Transforms/IndVarSimplify/udiv.ll
    llvm/trunk/test/Transforms/IndVarSimplify/uglygep.ll
    llvm/trunk/test/Transforms/IndVarSimplify/ult-sub-to-eq.ll
    llvm/trunk/test/Transforms/IndVarSimplify/variable-stride-ivs-0.ll
    llvm/trunk/test/Transforms/IndVarSimplify/widen-loop-comp.ll
    llvm/trunk/test/Transforms/IndVarSimplify/widen-nsw.ll
    llvm/trunk/test/Transforms/Inline/2009-01-13-RecursiveInlineCrash.ll
    llvm/trunk/test/Transforms/Inline/align.ll
    llvm/trunk/test/Transforms/Inline/alloca-bonus.ll
    llvm/trunk/test/Transforms/Inline/alloca-dbgdeclare.ll
    llvm/trunk/test/Transforms/Inline/alloca-in-scc.ll
    llvm/trunk/test/Transforms/Inline/alloca-merge-align-nodl.ll
    llvm/trunk/test/Transforms/Inline/alloca-merge-align.ll
    llvm/trunk/test/Transforms/Inline/basictest.ll
    llvm/trunk/test/Transforms/Inline/byval.ll
    llvm/trunk/test/Transforms/Inline/byval_lifetime.ll
    llvm/trunk/test/Transforms/Inline/devirtualize-3.ll
    llvm/trunk/test/Transforms/Inline/devirtualize.ll
    llvm/trunk/test/Transforms/Inline/inline-byval-bonus.ll
    llvm/trunk/test/Transforms/Inline/inline-fast-math-flags.ll
    llvm/trunk/test/Transforms/Inline/inline-musttail-varargs.ll
    llvm/trunk/test/Transforms/Inline/inline-vla.ll
    llvm/trunk/test/Transforms/Inline/inline_dbg_declare.ll
    llvm/trunk/test/Transforms/Inline/inline_minisize.ll
    llvm/trunk/test/Transforms/Inline/noalias-cs.ll
    llvm/trunk/test/Transforms/Inline/noalias.ll
    llvm/trunk/test/Transforms/Inline/noalias2.ll
    llvm/trunk/test/Transforms/Inline/ptr-diff.ll
    llvm/trunk/test/Transforms/InstCombine/2006-12-08-Phi-ICmp-Op-Fold.ll
    llvm/trunk/test/Transforms/InstCombine/2006-12-08-Select-ICmp.ll
    llvm/trunk/test/Transforms/InstCombine/2006-12-15-Range-Test.ll
    llvm/trunk/test/Transforms/InstCombine/2007-02-07-PointerCast.ll
    llvm/trunk/test/Transforms/InstCombine/2007-03-25-BadShiftMask.ll
    llvm/trunk/test/Transforms/InstCombine/2007-05-14-Crash.ll
    llvm/trunk/test/Transforms/InstCombine/2007-10-10-EliminateMemCpy.ll
    llvm/trunk/test/Transforms/InstCombine/2007-10-12-Crash.ll
    llvm/trunk/test/Transforms/InstCombine/2007-10-28-stacksave.ll
    llvm/trunk/test/Transforms/InstCombine/2007-12-12-GEPScale.ll
    llvm/trunk/test/Transforms/InstCombine/2008-01-14-VarArgTrampoline.ll
    llvm/trunk/test/Transforms/InstCombine/2008-05-08-LiveStoreDelete.ll
    llvm/trunk/test/Transforms/InstCombine/2008-05-08-StrLenSink.ll
    llvm/trunk/test/Transforms/InstCombine/2008-05-09-SinkOfInvoke.ll
    llvm/trunk/test/Transforms/InstCombine/2008-06-24-StackRestore.ll
    llvm/trunk/test/Transforms/InstCombine/2008-08-05-And.ll
    llvm/trunk/test/Transforms/InstCombine/2009-01-08-AlignAlloca.ll
    llvm/trunk/test/Transforms/InstCombine/2009-01-24-EmptyStruct.ll
    llvm/trunk/test/Transforms/InstCombine/2009-02-20-InstCombine-SROA.ll
    llvm/trunk/test/Transforms/InstCombine/2009-02-25-CrashZeroSizeArray.ll
    llvm/trunk/test/Transforms/InstCombine/2009-12-17-CmpSelectNull.ll
    llvm/trunk/test/Transforms/InstCombine/2010-11-21-SizeZeroTypeGEP.ll
    llvm/trunk/test/Transforms/InstCombine/2011-05-13-InBoundsGEP.ll
    llvm/trunk/test/Transforms/InstCombine/2011-09-03-Trampoline.ll
    llvm/trunk/test/Transforms/InstCombine/2012-09-17-ZeroSizedAlloca.ll
    llvm/trunk/test/Transforms/InstCombine/2012-09-24-MemcpyFromGlobalCrash.ll
    llvm/trunk/test/Transforms/InstCombine/2012-10-25-vector-of-pointers.ll
    llvm/trunk/test/Transforms/InstCombine/2013-03-05-Combine-BitcastTy-Into-Alloca.ll
    llvm/trunk/test/Transforms/InstCombine/add3.ll
    llvm/trunk/test/Transforms/InstCombine/addrspacecast.ll
    llvm/trunk/test/Transforms/InstCombine/align-2d-gep.ll
    llvm/trunk/test/Transforms/InstCombine/align-addr.ll
    llvm/trunk/test/Transforms/InstCombine/aligned-altivec.ll
    llvm/trunk/test/Transforms/InstCombine/aligned-qpx.ll
    llvm/trunk/test/Transforms/InstCombine/alloca.ll
    llvm/trunk/test/Transforms/InstCombine/assume-loop-align.ll
    llvm/trunk/test/Transforms/InstCombine/assume-redundant.ll
    llvm/trunk/test/Transforms/InstCombine/cast.ll
    llvm/trunk/test/Transforms/InstCombine/constant-fold-address-space-pointer.ll
    llvm/trunk/test/Transforms/InstCombine/constant-fold-gep.ll
    llvm/trunk/test/Transforms/InstCombine/crash.ll
    llvm/trunk/test/Transforms/InstCombine/descale-zero.ll
    llvm/trunk/test/Transforms/InstCombine/disable-simplify-libcalls.ll
    llvm/trunk/test/Transforms/InstCombine/div-shift-crash.ll
    llvm/trunk/test/Transforms/InstCombine/enforce-known-alignment.ll
    llvm/trunk/test/Transforms/InstCombine/extractvalue.ll
    llvm/trunk/test/Transforms/InstCombine/fp-ret-bitcast.ll
    llvm/trunk/test/Transforms/InstCombine/fprintf-1.ll
    llvm/trunk/test/Transforms/InstCombine/fputs-1.ll
    llvm/trunk/test/Transforms/InstCombine/fwrite-1.ll
    llvm/trunk/test/Transforms/InstCombine/gep-addrspace.ll
    llvm/trunk/test/Transforms/InstCombine/gep-sext.ll
    llvm/trunk/test/Transforms/InstCombine/gepphigep.ll
    llvm/trunk/test/Transforms/InstCombine/getelementptr.ll
    llvm/trunk/test/Transforms/InstCombine/icmp.ll
    llvm/trunk/test/Transforms/InstCombine/load-cmp.ll
    llvm/trunk/test/Transforms/InstCombine/load.ll
    llvm/trunk/test/Transforms/InstCombine/load3.ll
    llvm/trunk/test/Transforms/InstCombine/loadstore-alignment.ll
    llvm/trunk/test/Transforms/InstCombine/loadstore-metadata.ll
    llvm/trunk/test/Transforms/InstCombine/lshr-phi.ll
    llvm/trunk/test/Transforms/InstCombine/mem-gep-zidx.ll
    llvm/trunk/test/Transforms/InstCombine/memcmp-1.ll
    llvm/trunk/test/Transforms/InstCombine/memcpy-from-global.ll
    llvm/trunk/test/Transforms/InstCombine/memmove.ll
    llvm/trunk/test/Transforms/InstCombine/memset.ll
    llvm/trunk/test/Transforms/InstCombine/memset2.ll
    llvm/trunk/test/Transforms/InstCombine/multi-size-address-space-pointer.ll
    llvm/trunk/test/Transforms/InstCombine/objsize.ll
    llvm/trunk/test/Transforms/InstCombine/phi-merge-gep.ll
    llvm/trunk/test/Transforms/InstCombine/phi.ll
    llvm/trunk/test/Transforms/InstCombine/pr2645-0.ll
    llvm/trunk/test/Transforms/InstCombine/pr2645-1.ll
    llvm/trunk/test/Transforms/InstCombine/printf-1.ll
    llvm/trunk/test/Transforms/InstCombine/printf-2.ll
    llvm/trunk/test/Transforms/InstCombine/puts-1.ll
    llvm/trunk/test/Transforms/InstCombine/select-cmp-br.ll
    llvm/trunk/test/Transforms/InstCombine/shufflemask-undef.ll
    llvm/trunk/test/Transforms/InstCombine/signed-comparison.ll
    llvm/trunk/test/Transforms/InstCombine/simplify-libcalls.ll
    llvm/trunk/test/Transforms/InstCombine/sprintf-1.ll
    llvm/trunk/test/Transforms/InstCombine/sqrt.ll
    llvm/trunk/test/Transforms/InstCombine/stack-overalign.ll
    llvm/trunk/test/Transforms/InstCombine/stacksaverestore.ll
    llvm/trunk/test/Transforms/InstCombine/store.ll
    llvm/trunk/test/Transforms/InstCombine/stpcpy-1.ll
    llvm/trunk/test/Transforms/InstCombine/stpcpy-2.ll
    llvm/trunk/test/Transforms/InstCombine/stpcpy_chk-1.ll
    llvm/trunk/test/Transforms/InstCombine/stpcpy_chk-2.ll
    llvm/trunk/test/Transforms/InstCombine/strcat-1.ll
    llvm/trunk/test/Transforms/InstCombine/strcat-2.ll
    llvm/trunk/test/Transforms/InstCombine/strcat-3.ll
    llvm/trunk/test/Transforms/InstCombine/strchr-1.ll
    llvm/trunk/test/Transforms/InstCombine/strchr-2.ll
    llvm/trunk/test/Transforms/InstCombine/strcmp-1.ll
    llvm/trunk/test/Transforms/InstCombine/strcmp-2.ll
    llvm/trunk/test/Transforms/InstCombine/strcpy-1.ll
    llvm/trunk/test/Transforms/InstCombine/strcpy-2.ll
    llvm/trunk/test/Transforms/InstCombine/strcpy_chk-1.ll
    llvm/trunk/test/Transforms/InstCombine/strcpy_chk-2.ll
    llvm/trunk/test/Transforms/InstCombine/strcpy_chk-64.ll
    llvm/trunk/test/Transforms/InstCombine/strcspn-1.ll
    llvm/trunk/test/Transforms/InstCombine/strcspn-2.ll
    llvm/trunk/test/Transforms/InstCombine/strlen-1.ll
    llvm/trunk/test/Transforms/InstCombine/strlen-2.ll
    llvm/trunk/test/Transforms/InstCombine/strncat-1.ll
    llvm/trunk/test/Transforms/InstCombine/strncat-2.ll
    llvm/trunk/test/Transforms/InstCombine/strncat-3.ll
    llvm/trunk/test/Transforms/InstCombine/strncmp-1.ll
    llvm/trunk/test/Transforms/InstCombine/strncmp-2.ll
    llvm/trunk/test/Transforms/InstCombine/strncpy-1.ll
    llvm/trunk/test/Transforms/InstCombine/strncpy-2.ll
    llvm/trunk/test/Transforms/InstCombine/strncpy_chk-1.ll
    llvm/trunk/test/Transforms/InstCombine/strncpy_chk-2.ll
    llvm/trunk/test/Transforms/InstCombine/strpbrk-1.ll
    llvm/trunk/test/Transforms/InstCombine/strpbrk-2.ll
    llvm/trunk/test/Transforms/InstCombine/strrchr-1.ll
    llvm/trunk/test/Transforms/InstCombine/strrchr-2.ll
    llvm/trunk/test/Transforms/InstCombine/strspn-1.ll
    llvm/trunk/test/Transforms/InstCombine/strstr-1.ll
    llvm/trunk/test/Transforms/InstCombine/strstr-2.ll
    llvm/trunk/test/Transforms/InstCombine/struct-assign-tbaa.ll
    llvm/trunk/test/Transforms/InstCombine/sub.ll
    llvm/trunk/test/Transforms/InstCombine/vec_phi_extract.ll
    llvm/trunk/test/Transforms/InstCombine/vector-casts.ll
    llvm/trunk/test/Transforms/InstCombine/vector_gep1.ll
    llvm/trunk/test/Transforms/InstCombine/vector_gep2.ll
    llvm/trunk/test/Transforms/InstCombine/weak-symbols.ll
    llvm/trunk/test/Transforms/InstCombine/zext-or-icmp.ll
    llvm/trunk/test/Transforms/InstMerge/ld_hoist1.ll
    llvm/trunk/test/Transforms/InstMerge/ld_hoist_st_sink.ll
    llvm/trunk/test/Transforms/InstMerge/st_sink_barrier_call.ll
    llvm/trunk/test/Transforms/InstMerge/st_sink_bugfix_22613.ll
    llvm/trunk/test/Transforms/InstMerge/st_sink_no_barrier_call.ll
    llvm/trunk/test/Transforms/InstMerge/st_sink_no_barrier_load.ll
    llvm/trunk/test/Transforms/InstMerge/st_sink_no_barrier_store.ll
    llvm/trunk/test/Transforms/InstMerge/st_sink_two_stores.ll
    llvm/trunk/test/Transforms/InstMerge/st_sink_with_barrier.ll
    llvm/trunk/test/Transforms/InstSimplify/call.ll
    llvm/trunk/test/Transforms/InstSimplify/compare.ll
    llvm/trunk/test/Transforms/InstSimplify/gep.ll
    llvm/trunk/test/Transforms/InstSimplify/noalias-ptr.ll
    llvm/trunk/test/Transforms/InstSimplify/past-the-end.ll
    llvm/trunk/test/Transforms/InstSimplify/ptr_diff.ll
    llvm/trunk/test/Transforms/InstSimplify/vector_gep.ll
    llvm/trunk/test/Transforms/JumpThreading/2010-08-26-and.ll
    llvm/trunk/test/Transforms/JumpThreading/landing-pad.ll
    llvm/trunk/test/Transforms/JumpThreading/lvi-load.ll
    llvm/trunk/test/Transforms/JumpThreading/phi-eq.ll
    llvm/trunk/test/Transforms/LCSSA/2006-06-03-IncorrectIDFPhis.ll
    llvm/trunk/test/Transforms/LCSSA/2006-07-09-NoDominator.ll
    llvm/trunk/test/Transforms/LICM/2003-02-26-LoopExitNotDominated.ll
    llvm/trunk/test/Transforms/LICM/2004-11-17-UndefIndexCrash.ll
    llvm/trunk/test/Transforms/LICM/2007-05-22-VolatileSink.ll
    llvm/trunk/test/Transforms/LICM/2007-07-30-AliasSet.ll
    llvm/trunk/test/Transforms/LICM/2007-09-17-PromoteValue.ll
    llvm/trunk/test/Transforms/LICM/2008-07-22-LoadGlobalConstant.ll
    llvm/trunk/test/Transforms/LICM/2011-07-06-Alignment.ll
    llvm/trunk/test/Transforms/LICM/PR21582.ll
    llvm/trunk/test/Transforms/LICM/crash.ll
    llvm/trunk/test/Transforms/LICM/hoist-bitcast-load.ll
    llvm/trunk/test/Transforms/LICM/hoist-deref-load.ll
    llvm/trunk/test/Transforms/LICM/scalar_promote.ll
    llvm/trunk/test/Transforms/LICM/sinking.ll
    llvm/trunk/test/Transforms/LICM/speculate.ll
    llvm/trunk/test/Transforms/LoadCombine/load-combine-aa.ll
    llvm/trunk/test/Transforms/LoadCombine/load-combine-assume.ll
    llvm/trunk/test/Transforms/LoadCombine/load-combine.ll
    llvm/trunk/test/Transforms/LoopDeletion/2008-05-06-Phi.ll
    llvm/trunk/test/Transforms/LoopIdiom/basic-address-space.ll
    llvm/trunk/test/Transforms/LoopIdiom/basic.ll
    llvm/trunk/test/Transforms/LoopIdiom/crash.ll
    llvm/trunk/test/Transforms/LoopIdiom/debug-line.ll
    llvm/trunk/test/Transforms/LoopIdiom/memset_noidiom.ll
    llvm/trunk/test/Transforms/LoopIdiom/non-canonical-loop.ll
    llvm/trunk/test/Transforms/LoopIdiom/scev-invalidation.ll
    llvm/trunk/test/Transforms/LoopReroll/basic.ll
    llvm/trunk/test/Transforms/LoopReroll/nonconst_lb.ll
    llvm/trunk/test/Transforms/LoopReroll/reduction.ll
    llvm/trunk/test/Transforms/LoopRotate/PhiRename-1.ll
    llvm/trunk/test/Transforms/LoopRotate/PhiSelfReference-1.ll
    llvm/trunk/test/Transforms/LoopRotate/basic.ll
    llvm/trunk/test/Transforms/LoopRotate/crash.ll
    llvm/trunk/test/Transforms/LoopRotate/dbgvalue.ll
    llvm/trunk/test/Transforms/LoopRotate/multiple-exits.ll
    llvm/trunk/test/Transforms/LoopRotate/nosimplifylatch.ll
    llvm/trunk/test/Transforms/LoopRotate/phi-duplicate.ll
    llvm/trunk/test/Transforms/LoopRotate/pr22337.ll
    llvm/trunk/test/Transforms/LoopRotate/simplifylatch.ll
    llvm/trunk/test/Transforms/LoopSimplify/2003-08-15-PreheadersFail.ll
    llvm/trunk/test/Transforms/LoopSimplify/merge-exits.ll
    llvm/trunk/test/Transforms/LoopSimplify/notify-scev.ll
    llvm/trunk/test/Transforms/LoopSimplify/phi-node-simplify.ll
    llvm/trunk/test/Transforms/LoopStrengthReduce/2005-08-15-AddRecIV.ll
    llvm/trunk/test/Transforms/LoopStrengthReduce/2007-04-23-UseIterator.ll
    llvm/trunk/test/Transforms/LoopStrengthReduce/2009-04-28-no-reduce-mul.ll
    llvm/trunk/test/Transforms/LoopStrengthReduce/2011-07-19-CritEdgeBreakCrash.ll
    llvm/trunk/test/Transforms/LoopStrengthReduce/2011-10-03-CritEdgeMerge.ll
    llvm/trunk/test/Transforms/LoopStrengthReduce/2011-10-06-ReusePhi.ll
    llvm/trunk/test/Transforms/LoopStrengthReduce/2011-10-13-SCEVChain.ll
    llvm/trunk/test/Transforms/LoopStrengthReduce/2011-10-14-IntPtr.ll
    llvm/trunk/test/Transforms/LoopStrengthReduce/2011-12-19-PostincQuadratic.ll
    llvm/trunk/test/Transforms/LoopStrengthReduce/2012-01-02-nopreheader.ll
    llvm/trunk/test/Transforms/LoopStrengthReduce/2012-01-16-nopreheader.ll
    llvm/trunk/test/Transforms/LoopStrengthReduce/2012-03-15-nopreheader.ll
    llvm/trunk/test/Transforms/LoopStrengthReduce/2012-03-26-constexpr.ll
    llvm/trunk/test/Transforms/LoopStrengthReduce/2012-07-13-ExpandUDiv.ll
    llvm/trunk/test/Transforms/LoopStrengthReduce/2012-07-18-LimitReassociate.ll
    llvm/trunk/test/Transforms/LoopStrengthReduce/2013-01-14-ReuseCast.ll
    llvm/trunk/test/Transforms/LoopStrengthReduce/AArch64/lsr-memcpy.ll
    llvm/trunk/test/Transforms/LoopStrengthReduce/AArch64/lsr-memset.ll
    llvm/trunk/test/Transforms/LoopStrengthReduce/ARM/2012-06-15-lsr-noaddrmode.ll
    llvm/trunk/test/Transforms/LoopStrengthReduce/ARM/ivchain-ARM.ll
    llvm/trunk/test/Transforms/LoopStrengthReduce/X86/2011-12-04-loserreg.ll
    llvm/trunk/test/Transforms/LoopStrengthReduce/X86/2012-01-13-phielim.ll
    llvm/trunk/test/Transforms/LoopStrengthReduce/X86/ivchain-X86.ll
    llvm/trunk/test/Transforms/LoopStrengthReduce/X86/ivchain-stress-X86.ll
    llvm/trunk/test/Transforms/LoopStrengthReduce/X86/no_superflous_induction_vars.ll
    llvm/trunk/test/Transforms/LoopStrengthReduce/addrec-gep-address-space.ll
    llvm/trunk/test/Transforms/LoopStrengthReduce/addrec-gep.ll
    llvm/trunk/test/Transforms/LoopStrengthReduce/address-space-loop.ll
    llvm/trunk/test/Transforms/LoopStrengthReduce/dominate-assert.ll
    llvm/trunk/test/Transforms/LoopStrengthReduce/dont-hoist-simple-loop-constants.ll
    llvm/trunk/test/Transforms/LoopStrengthReduce/dont_insert_redundant_ops.ll
    llvm/trunk/test/Transforms/LoopStrengthReduce/dont_reduce_bytes.ll
    llvm/trunk/test/Transforms/LoopStrengthReduce/hoist-parent-preheader.ll
    llvm/trunk/test/Transforms/LoopStrengthReduce/invariant_value_first.ll
    llvm/trunk/test/Transforms/LoopStrengthReduce/invariant_value_first_arg.ll
    llvm/trunk/test/Transforms/LoopStrengthReduce/ivchain.ll
    llvm/trunk/test/Transforms/LoopStrengthReduce/ops_after_indvar.ll
    llvm/trunk/test/Transforms/LoopStrengthReduce/phi_node_update_multiple_preds.ll
    llvm/trunk/test/Transforms/LoopStrengthReduce/post-inc-icmpzero.ll
    llvm/trunk/test/Transforms/LoopStrengthReduce/pr12018.ll
    llvm/trunk/test/Transforms/LoopStrengthReduce/pr12048.ll
    llvm/trunk/test/Transforms/LoopStrengthReduce/pr3086.ll
    llvm/trunk/test/Transforms/LoopStrengthReduce/preserve-gep-loop-variant.ll
    llvm/trunk/test/Transforms/LoopStrengthReduce/related_indvars.ll
    llvm/trunk/test/Transforms/LoopStrengthReduce/remove_indvar.ll
    llvm/trunk/test/Transforms/LoopStrengthReduce/scaling_factor_cost_crash.ll
    llvm/trunk/test/Transforms/LoopStrengthReduce/share_code_in_preheader.ll
    llvm/trunk/test/Transforms/LoopStrengthReduce/uglygep-address-space.ll
    llvm/trunk/test/Transforms/LoopStrengthReduce/uglygep.ll
    llvm/trunk/test/Transforms/LoopStrengthReduce/use_postinc_value_outside_loop.ll
    llvm/trunk/test/Transforms/LoopStrengthReduce/var_stride_used_by_compare.ll
    llvm/trunk/test/Transforms/LoopUnroll/2007-05-05-UnrollMiscomp.ll
    llvm/trunk/test/Transforms/LoopUnroll/2011-08-08-PhiUpdate.ll
    llvm/trunk/test/Transforms/LoopUnroll/2011-08-09-IVSimplify.ll
    llvm/trunk/test/Transforms/LoopUnroll/2011-10-01-NoopTrunc.ll
    llvm/trunk/test/Transforms/LoopUnroll/PowerPC/a2-unrolling.ll
    llvm/trunk/test/Transforms/LoopUnroll/X86/partial.ll
    llvm/trunk/test/Transforms/LoopUnroll/ephemeral.ll
    llvm/trunk/test/Transforms/LoopUnroll/full-unroll-heuristics.ll
    llvm/trunk/test/Transforms/LoopUnroll/ignore-annotation-intrinsic-cost.ll
    llvm/trunk/test/Transforms/LoopUnroll/runtime-loop.ll
    llvm/trunk/test/Transforms/LoopUnroll/runtime-loop1.ll
    llvm/trunk/test/Transforms/LoopUnroll/runtime-loop2.ll
    llvm/trunk/test/Transforms/LoopUnroll/runtime-loop3.ll
    llvm/trunk/test/Transforms/LoopUnroll/scevunroll.ll
    llvm/trunk/test/Transforms/LoopUnroll/shifted-tripcount.ll
    llvm/trunk/test/Transforms/LoopUnroll/unroll-pragmas-disabled.ll
    llvm/trunk/test/Transforms/LoopUnroll/unroll-pragmas.ll
    llvm/trunk/test/Transforms/LoopUnswitch/2007-07-18-DomInfo.ll
    llvm/trunk/test/Transforms/LoopUnswitch/2011-09-26-EHCrash.ll
    llvm/trunk/test/Transforms/LoopUnswitch/2012-04-30-LoopUnswitch-LPad-Crash.ll
    llvm/trunk/test/Transforms/LoopUnswitch/basictest.ll
    llvm/trunk/test/Transforms/LoopUnswitch/preserve-analyses.ll
    llvm/trunk/test/Transforms/LoopVectorize/12-12-11-if-conv.ll
    llvm/trunk/test/Transforms/LoopVectorize/2012-10-22-isconsec.ll
    llvm/trunk/test/Transforms/LoopVectorize/AArch64/aarch64-unroll.ll
    llvm/trunk/test/Transforms/LoopVectorize/AArch64/arbitrary-induction-step.ll
    llvm/trunk/test/Transforms/LoopVectorize/AArch64/arm64-unroll.ll
    llvm/trunk/test/Transforms/LoopVectorize/AArch64/gather-cost.ll
    llvm/trunk/test/Transforms/LoopVectorize/AArch64/sdiv-pow2.ll
    llvm/trunk/test/Transforms/LoopVectorize/ARM/arm-unroll.ll
    llvm/trunk/test/Transforms/LoopVectorize/ARM/gather-cost.ll
    llvm/trunk/test/Transforms/LoopVectorize/ARM/gcc-examples.ll
    llvm/trunk/test/Transforms/LoopVectorize/ARM/width-detect.ll
    llvm/trunk/test/Transforms/LoopVectorize/PowerPC/small-loop-rdx.ll
    llvm/trunk/test/Transforms/LoopVectorize/PowerPC/vsx-tsvc-s173.ll
    llvm/trunk/test/Transforms/LoopVectorize/X86/already-vectorized.ll
    llvm/trunk/test/Transforms/LoopVectorize/X86/assume.ll
    llvm/trunk/test/Transforms/LoopVectorize/X86/avx1.ll
    llvm/trunk/test/Transforms/LoopVectorize/X86/avx512.ll
    llvm/trunk/test/Transforms/LoopVectorize/X86/constant-vector-operand.ll
    llvm/trunk/test/Transforms/LoopVectorize/X86/conversion-cost.ll
    llvm/trunk/test/Transforms/LoopVectorize/X86/cost-model.ll
    llvm/trunk/test/Transforms/LoopVectorize/X86/fp32_to_uint32-cost-model.ll
    llvm/trunk/test/Transforms/LoopVectorize/X86/fp64_to_uint32-cost-model.ll
    llvm/trunk/test/Transforms/LoopVectorize/X86/fp_to_sint8-cost-model.ll
    llvm/trunk/test/Transforms/LoopVectorize/X86/gather-cost.ll
    llvm/trunk/test/Transforms/LoopVectorize/X86/gcc-examples.ll
    llvm/trunk/test/Transforms/LoopVectorize/X86/illegal-parallel-loop-uniform-write.ll
    llvm/trunk/test/Transforms/LoopVectorize/X86/masked_load_store.ll
    llvm/trunk/test/Transforms/LoopVectorize/X86/metadata-enable.ll
    llvm/trunk/test/Transforms/LoopVectorize/X86/min-trip-count-switch.ll
    llvm/trunk/test/Transforms/LoopVectorize/X86/no-vector.ll
    llvm/trunk/test/Transforms/LoopVectorize/X86/parallel-loops-after-reg2mem.ll
    llvm/trunk/test/Transforms/LoopVectorize/X86/parallel-loops.ll
    llvm/trunk/test/Transforms/LoopVectorize/X86/powof2div.ll
    llvm/trunk/test/Transforms/LoopVectorize/X86/reduction-crash.ll
    llvm/trunk/test/Transforms/LoopVectorize/X86/small-size.ll
    llvm/trunk/test/Transforms/LoopVectorize/X86/struct-store.ll
    llvm/trunk/test/Transforms/LoopVectorize/X86/tripcount.ll
    llvm/trunk/test/Transforms/LoopVectorize/X86/uint64_to_fp64-cost-model.ll
    llvm/trunk/test/Transforms/LoopVectorize/X86/unroll-pm.ll
    llvm/trunk/test/Transforms/LoopVectorize/X86/unroll-small-loops.ll
    llvm/trunk/test/Transforms/LoopVectorize/X86/unroll_selection.ll
    llvm/trunk/test/Transforms/LoopVectorize/X86/vect.omp.force.ll
    llvm/trunk/test/Transforms/LoopVectorize/X86/vect.omp.force.small-tc.ll
    llvm/trunk/test/Transforms/LoopVectorize/X86/vector-scalar-select-cost.ll
    llvm/trunk/test/Transforms/LoopVectorize/X86/vector_ptr_load_store.ll
    llvm/trunk/test/Transforms/LoopVectorize/X86/vectorization-remarks-missed.ll
    llvm/trunk/test/Transforms/LoopVectorize/X86/vectorization-remarks.ll
    llvm/trunk/test/Transforms/LoopVectorize/X86/x86_fp80-vector-store.ll
    llvm/trunk/test/Transforms/LoopVectorize/XCore/no-vector-registers.ll
    llvm/trunk/test/Transforms/LoopVectorize/align.ll
    llvm/trunk/test/Transforms/LoopVectorize/bsd_regex.ll
    llvm/trunk/test/Transforms/LoopVectorize/bzip_reverse_loops.ll
    llvm/trunk/test/Transforms/LoopVectorize/calloc.ll
    llvm/trunk/test/Transforms/LoopVectorize/cast-induction.ll
    llvm/trunk/test/Transforms/LoopVectorize/conditional-assignment.ll
    llvm/trunk/test/Transforms/LoopVectorize/control-flow.ll
    llvm/trunk/test/Transforms/LoopVectorize/cpp-new-array.ll
    llvm/trunk/test/Transforms/LoopVectorize/dbg.value.ll
    llvm/trunk/test/Transforms/LoopVectorize/debugloc.ll
    llvm/trunk/test/Transforms/LoopVectorize/duplicated-metadata.ll
    llvm/trunk/test/Transforms/LoopVectorize/ee-crash.ll
    llvm/trunk/test/Transforms/LoopVectorize/exact.ll
    llvm/trunk/test/Transforms/LoopVectorize/flags.ll
    llvm/trunk/test/Transforms/LoopVectorize/float-reduction.ll
    llvm/trunk/test/Transforms/LoopVectorize/funcall.ll
    llvm/trunk/test/Transforms/LoopVectorize/gcc-examples.ll
    llvm/trunk/test/Transforms/LoopVectorize/global_alias.ll
    llvm/trunk/test/Transforms/LoopVectorize/hoist-loads.ll
    llvm/trunk/test/Transforms/LoopVectorize/if-conversion-edgemasks.ll
    llvm/trunk/test/Transforms/LoopVectorize/if-conversion-nest.ll
    llvm/trunk/test/Transforms/LoopVectorize/if-conversion-reduction.ll
    llvm/trunk/test/Transforms/LoopVectorize/if-conversion.ll
    llvm/trunk/test/Transforms/LoopVectorize/if-pred-stores.ll
    llvm/trunk/test/Transforms/LoopVectorize/incorrect-dom-info.ll
    llvm/trunk/test/Transforms/LoopVectorize/increment.ll
    llvm/trunk/test/Transforms/LoopVectorize/induction.ll
    llvm/trunk/test/Transforms/LoopVectorize/induction_plus.ll
    llvm/trunk/test/Transforms/LoopVectorize/intrinsic.ll
    llvm/trunk/test/Transforms/LoopVectorize/lifetime.ll
    llvm/trunk/test/Transforms/LoopVectorize/loop-form.ll
    llvm/trunk/test/Transforms/LoopVectorize/loop-vect-memdep.ll
    llvm/trunk/test/Transforms/LoopVectorize/memdep.ll
    llvm/trunk/test/Transforms/LoopVectorize/metadata-unroll.ll
    llvm/trunk/test/Transforms/LoopVectorize/metadata-width.ll
    llvm/trunk/test/Transforms/LoopVectorize/metadata.ll
    llvm/trunk/test/Transforms/LoopVectorize/minmax_reduction.ll
    llvm/trunk/test/Transforms/LoopVectorize/multiple-address-spaces.ll
    llvm/trunk/test/Transforms/LoopVectorize/no_array_bounds.ll
    llvm/trunk/test/Transforms/LoopVectorize/no_idiv_reduction.ll
    llvm/trunk/test/Transforms/LoopVectorize/no_int_induction.ll
    llvm/trunk/test/Transforms/LoopVectorize/no_switch.ll
    llvm/trunk/test/Transforms/LoopVectorize/nofloat.ll
    llvm/trunk/test/Transforms/LoopVectorize/non-const-n.ll
    llvm/trunk/test/Transforms/LoopVectorize/nsw-crash.ll
    llvm/trunk/test/Transforms/LoopVectorize/opt.ll
    llvm/trunk/test/Transforms/LoopVectorize/ptr_loops.ll
    llvm/trunk/test/Transforms/LoopVectorize/read-only.ll
    llvm/trunk/test/Transforms/LoopVectorize/reduction.ll
    llvm/trunk/test/Transforms/LoopVectorize/reverse_induction.ll
    llvm/trunk/test/Transforms/LoopVectorize/reverse_iter.ll
    llvm/trunk/test/Transforms/LoopVectorize/runtime-check-address-space.ll
    llvm/trunk/test/Transforms/LoopVectorize/runtime-check-readonly-address-space.ll
    llvm/trunk/test/Transforms/LoopVectorize/runtime-check-readonly.ll
    llvm/trunk/test/Transforms/LoopVectorize/runtime-check.ll
    llvm/trunk/test/Transforms/LoopVectorize/runtime-limit.ll
    llvm/trunk/test/Transforms/LoopVectorize/safegep.ll
    llvm/trunk/test/Transforms/LoopVectorize/same-base-access.ll
    llvm/trunk/test/Transforms/LoopVectorize/scalar-select.ll
    llvm/trunk/test/Transforms/LoopVectorize/scev-exitlim-crash.ll
    llvm/trunk/test/Transforms/LoopVectorize/simple-unroll.ll
    llvm/trunk/test/Transforms/LoopVectorize/small-loop.ll
    llvm/trunk/test/Transforms/LoopVectorize/start-non-zero.ll
    llvm/trunk/test/Transforms/LoopVectorize/store-shuffle-bug.ll
    llvm/trunk/test/Transforms/LoopVectorize/struct_access.ll
    llvm/trunk/test/Transforms/LoopVectorize/tbaa-nodep.ll
    llvm/trunk/test/Transforms/LoopVectorize/undef-inst-bug.ll
    llvm/trunk/test/Transforms/LoopVectorize/unroll_novec.ll
    llvm/trunk/test/Transforms/LoopVectorize/unsized-pointee-crash.ll
    llvm/trunk/test/Transforms/LoopVectorize/value-ptr-bug.ll
    llvm/trunk/test/Transforms/LoopVectorize/vect.omp.persistence.ll
    llvm/trunk/test/Transforms/LoopVectorize/vect.stats.ll
    llvm/trunk/test/Transforms/LoopVectorize/vectorize-once.ll
    llvm/trunk/test/Transforms/LoopVectorize/version-mem-access.ll
    llvm/trunk/test/Transforms/LoopVectorize/write-only.ll
    llvm/trunk/test/Transforms/LowerBitSets/simple.ll
    llvm/trunk/test/Transforms/Mem2Reg/2005-06-30-ReadBeforeWrite.ll
    llvm/trunk/test/Transforms/Mem2Reg/ignore-lifetime.ll
    llvm/trunk/test/Transforms/MemCpyOpt/2008-02-24-MultipleUseofSRet.ll
    llvm/trunk/test/Transforms/MemCpyOpt/2008-03-13-ReturnSlotBitcast.ll
    llvm/trunk/test/Transforms/MemCpyOpt/2011-06-02-CallSlotOverwritten.ll
    llvm/trunk/test/Transforms/MemCpyOpt/align.ll
    llvm/trunk/test/Transforms/MemCpyOpt/atomic.ll
    llvm/trunk/test/Transforms/MemCpyOpt/callslot_deref.ll
    llvm/trunk/test/Transforms/MemCpyOpt/crash.ll
    llvm/trunk/test/Transforms/MemCpyOpt/form-memset.ll
    llvm/trunk/test/Transforms/MemCpyOpt/loadstore-sret.ll
    llvm/trunk/test/Transforms/MemCpyOpt/memcpy-to-memset.ll
    llvm/trunk/test/Transforms/MemCpyOpt/memcpy-undef.ll
    llvm/trunk/test/Transforms/MemCpyOpt/memcpy.ll
    llvm/trunk/test/Transforms/MemCpyOpt/memmove.ll
    llvm/trunk/test/Transforms/MemCpyOpt/smaller.ll
    llvm/trunk/test/Transforms/MemCpyOpt/sret.ll
    llvm/trunk/test/Transforms/MergeFunc/2011-02-08-RemoveEqual.ll
    llvm/trunk/test/Transforms/MergeFunc/address-spaces.ll
    llvm/trunk/test/Transforms/MergeFunc/crash.ll
    llvm/trunk/test/Transforms/MergeFunc/inttoptr-address-space.ll
    llvm/trunk/test/Transforms/MergeFunc/inttoptr.ll
    llvm/trunk/test/Transforms/MergeFunc/mergefunc-struct-return.ll
    llvm/trunk/test/Transforms/MergeFunc/vector-GEP-crash.ll
    llvm/trunk/test/Transforms/MetaRenamer/metarenamer.ll
    llvm/trunk/test/Transforms/ObjCARC/allocas.ll
    llvm/trunk/test/Transforms/ObjCARC/basic.ll
    llvm/trunk/test/Transforms/ObjCARC/contract-storestrong-ivar.ll
    llvm/trunk/test/Transforms/ObjCARC/escape.ll
    llvm/trunk/test/Transforms/ObjCARC/move-and-form-retain-autorelease.ll
    llvm/trunk/test/Transforms/ObjCARC/nested.ll
    llvm/trunk/test/Transforms/ObjCARC/retain-block-side-effects.ll
    llvm/trunk/test/Transforms/ObjCARC/weak-copies.ll
    llvm/trunk/test/Transforms/PhaseOrdering/2010-03-22-empty-baseclass.ll
    llvm/trunk/test/Transforms/PhaseOrdering/PR6627.ll
    llvm/trunk/test/Transforms/PhaseOrdering/basic.ll
    llvm/trunk/test/Transforms/PhaseOrdering/scev.ll
    llvm/trunk/test/Transforms/Reassociate/2011-01-26-UseAfterFree.ll
    llvm/trunk/test/Transforms/Reassociate/looptest.ll
    llvm/trunk/test/Transforms/RewriteStatepointsForGC/basics.ll
    llvm/trunk/test/Transforms/SCCP/2002-08-30-GetElementPtrTest.ll
    llvm/trunk/test/Transforms/SCCP/2003-06-24-OverdefinedPHIValue.ll
    llvm/trunk/test/Transforms/SCCP/2006-10-23-IPSCCP-Crash.ll
    llvm/trunk/test/Transforms/SCCP/2006-12-04-PackedType.ll
    llvm/trunk/test/Transforms/SCCP/apint-array.ll
    llvm/trunk/test/Transforms/SCCP/apint-bigarray.ll
    llvm/trunk/test/Transforms/SCCP/apint-bigint2.ll
    llvm/trunk/test/Transforms/SCCP/apint-ipsccp4.ll
    llvm/trunk/test/Transforms/SCCP/apint-load.ll
    llvm/trunk/test/Transforms/SCCP/apint-select.ll
    llvm/trunk/test/Transforms/SCCP/loadtest.ll
    llvm/trunk/test/Transforms/SLPVectorizer/AArch64/commute.ll
    llvm/trunk/test/Transforms/SLPVectorizer/AArch64/load-store-q.ll
    llvm/trunk/test/Transforms/SLPVectorizer/AArch64/sdiv-pow2.ll
    llvm/trunk/test/Transforms/SLPVectorizer/ARM/memory.ll
    llvm/trunk/test/Transforms/SLPVectorizer/ARM/sroa.ll
    llvm/trunk/test/Transforms/SLPVectorizer/R600/simplebb.ll
    llvm/trunk/test/Transforms/SLPVectorizer/X86/addsub.ll
    llvm/trunk/test/Transforms/SLPVectorizer/X86/align.ll
    llvm/trunk/test/Transforms/SLPVectorizer/X86/bad_types.ll
    llvm/trunk/test/Transforms/SLPVectorizer/X86/barriercall.ll
    llvm/trunk/test/Transforms/SLPVectorizer/X86/call.ll
    llvm/trunk/test/Transforms/SLPVectorizer/X86/cast.ll
    llvm/trunk/test/Transforms/SLPVectorizer/X86/cmp_sel.ll
    llvm/trunk/test/Transforms/SLPVectorizer/X86/compare-reduce.ll
    llvm/trunk/test/Transforms/SLPVectorizer/X86/consecutive-access.ll
    llvm/trunk/test/Transforms/SLPVectorizer/X86/continue_vectorizing.ll
    llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_7zip.ll
    llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_bullet.ll
    llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_bullet3.ll
    llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_cmpop.ll
    llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_dequeue.ll
    llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_gep.ll
    llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_lencod.ll
    llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_mandeltext.ll
    llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_scheduling.ll
    llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_sim4b1.ll
    llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_smallpt.ll
    llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_vectorizeTree.ll
    llvm/trunk/test/Transforms/SLPVectorizer/X86/cross_block_slp.ll
    llvm/trunk/test/Transforms/SLPVectorizer/X86/cse.ll
    llvm/trunk/test/Transforms/SLPVectorizer/X86/cycle_dup.ll
    llvm/trunk/test/Transforms/SLPVectorizer/X86/debug_info.ll
    llvm/trunk/test/Transforms/SLPVectorizer/X86/diamond.ll
    llvm/trunk/test/Transforms/SLPVectorizer/X86/external_user.ll
    llvm/trunk/test/Transforms/SLPVectorizer/X86/extract.ll
    llvm/trunk/test/Transforms/SLPVectorizer/X86/extract_in_tree_user.ll
    llvm/trunk/test/Transforms/SLPVectorizer/X86/extractcost.ll
    llvm/trunk/test/Transforms/SLPVectorizer/X86/flag.ll
    llvm/trunk/test/Transforms/SLPVectorizer/X86/gep.ll
    llvm/trunk/test/Transforms/SLPVectorizer/X86/hoist.ll
    llvm/trunk/test/Transforms/SLPVectorizer/X86/horizontal.ll
    llvm/trunk/test/Transforms/SLPVectorizer/X86/implicitfloat.ll
    llvm/trunk/test/Transforms/SLPVectorizer/X86/in-tree-user.ll
    llvm/trunk/test/Transforms/SLPVectorizer/X86/intrinsic.ll
    llvm/trunk/test/Transforms/SLPVectorizer/X86/long_chains.ll
    llvm/trunk/test/Transforms/SLPVectorizer/X86/loopinvariant.ll
    llvm/trunk/test/Transforms/SLPVectorizer/X86/metadata.ll
    llvm/trunk/test/Transforms/SLPVectorizer/X86/multi_block.ll
    llvm/trunk/test/Transforms/SLPVectorizer/X86/multi_user.ll
    llvm/trunk/test/Transforms/SLPVectorizer/X86/odd_store.ll
    llvm/trunk/test/Transforms/SLPVectorizer/X86/operandorder.ll
    llvm/trunk/test/Transforms/SLPVectorizer/X86/opt.ll
    llvm/trunk/test/Transforms/SLPVectorizer/X86/phi.ll
    llvm/trunk/test/Transforms/SLPVectorizer/X86/phi_overalignedtype.ll
    llvm/trunk/test/Transforms/SLPVectorizer/X86/powof2div.ll
    llvm/trunk/test/Transforms/SLPVectorizer/X86/pr16899.ll
    llvm/trunk/test/Transforms/SLPVectorizer/X86/pr19657.ll
    llvm/trunk/test/Transforms/SLPVectorizer/X86/propagate_ir_flags.ll
    llvm/trunk/test/Transforms/SLPVectorizer/X86/reduction.ll
    llvm/trunk/test/Transforms/SLPVectorizer/X86/reduction2.ll
    llvm/trunk/test/Transforms/SLPVectorizer/X86/return.ll
    llvm/trunk/test/Transforms/SLPVectorizer/X86/rgb_phi.ll
    llvm/trunk/test/Transforms/SLPVectorizer/X86/saxpy.ll
    llvm/trunk/test/Transforms/SLPVectorizer/X86/scheduling.ll
    llvm/trunk/test/Transforms/SLPVectorizer/X86/simple-loop.ll
    llvm/trunk/test/Transforms/SLPVectorizer/X86/simplebb.ll
    llvm/trunk/test/Transforms/SLPVectorizer/X86/tiny-tree.ll
    llvm/trunk/test/Transforms/SLPVectorizer/X86/unreachable.ll
    llvm/trunk/test/Transforms/SLPVectorizer/XCore/no-vector-registers.ll
    llvm/trunk/test/Transforms/SROA/address-spaces.ll
    llvm/trunk/test/Transforms/SROA/alignment.ll
    llvm/trunk/test/Transforms/SROA/basictest.ll
    llvm/trunk/test/Transforms/SROA/big-endian.ll
    llvm/trunk/test/Transforms/SROA/fca.ll
    llvm/trunk/test/Transforms/SROA/phi-and-select.ll
    llvm/trunk/test/Transforms/SROA/slice-order-independence.ll
    llvm/trunk/test/Transforms/SROA/slice-width.ll
    llvm/trunk/test/Transforms/SROA/vector-promotion.ll
    llvm/trunk/test/Transforms/SampleProfile/branch.ll
    llvm/trunk/test/Transforms/ScalarRepl/2003-05-29-ArrayFail.ll
    llvm/trunk/test/Transforms/ScalarRepl/2003-09-12-IncorrectPromote.ll
    llvm/trunk/test/Transforms/ScalarRepl/2003-10-29-ArrayProblem.ll
    llvm/trunk/test/Transforms/ScalarRepl/2006-11-07-InvalidArrayPromote.ll
    llvm/trunk/test/Transforms/ScalarRepl/2007-05-29-MemcpyPreserve.ll
    llvm/trunk/test/Transforms/ScalarRepl/2007-11-03-bigendian_apint.ll
    llvm/trunk/test/Transforms/ScalarRepl/2008-01-29-PromoteBug.ll
    llvm/trunk/test/Transforms/ScalarRepl/2008-02-28-SubElementExtractCrash.ll
    llvm/trunk/test/Transforms/ScalarRepl/2008-06-05-loadstore-agg.ll
    llvm/trunk/test/Transforms/ScalarRepl/2008-08-22-out-of-range-array-promote.ll
    llvm/trunk/test/Transforms/ScalarRepl/2008-09-22-vector-gep.ll
    llvm/trunk/test/Transforms/ScalarRepl/2009-02-02-ScalarPromoteOutOfRange.ll
    llvm/trunk/test/Transforms/ScalarRepl/2009-03-04-MemCpyAlign.ll
    llvm/trunk/test/Transforms/ScalarRepl/2009-12-11-NeonTypes.ll
    llvm/trunk/test/Transforms/ScalarRepl/2011-05-06-CapturedAlloca.ll
    llvm/trunk/test/Transforms/ScalarRepl/2011-06-08-VectorExtractValue.ll
    llvm/trunk/test/Transforms/ScalarRepl/2011-06-17-VectorPartialMemset.ll
    llvm/trunk/test/Transforms/ScalarRepl/2011-10-22-VectorCrash.ll
    llvm/trunk/test/Transforms/ScalarRepl/2011-11-11-EmptyStruct.ll
    llvm/trunk/test/Transforms/ScalarRepl/AggregatePromote.ll
    llvm/trunk/test/Transforms/ScalarRepl/address-space.ll
    llvm/trunk/test/Transforms/ScalarRepl/arraytest.ll
    llvm/trunk/test/Transforms/ScalarRepl/badarray.ll
    llvm/trunk/test/Transforms/ScalarRepl/basictest.ll
    llvm/trunk/test/Transforms/ScalarRepl/bitfield-sroa.ll
    llvm/trunk/test/Transforms/ScalarRepl/copy-aggregate.ll
    llvm/trunk/test/Transforms/ScalarRepl/crash.ll
    llvm/trunk/test/Transforms/ScalarRepl/inline-vector.ll
    llvm/trunk/test/Transforms/ScalarRepl/lifetime.ll
    llvm/trunk/test/Transforms/ScalarRepl/load-store-aggregate.ll
    llvm/trunk/test/Transforms/ScalarRepl/memset-aggregate-byte-leader.ll
    llvm/trunk/test/Transforms/ScalarRepl/memset-aggregate.ll
    llvm/trunk/test/Transforms/ScalarRepl/negative-memset.ll
    llvm/trunk/test/Transforms/ScalarRepl/nonzero-first-index.ll
    llvm/trunk/test/Transforms/ScalarRepl/not-a-vector.ll
    llvm/trunk/test/Transforms/ScalarRepl/phi-cycle.ll
    llvm/trunk/test/Transforms/ScalarRepl/phi-select.ll
    llvm/trunk/test/Transforms/ScalarRepl/sroa_two.ll
    llvm/trunk/test/Transforms/ScalarRepl/union-pointer.ll
    llvm/trunk/test/Transforms/ScalarRepl/vector_promote.ll
    llvm/trunk/test/Transforms/ScalarRepl/volatile.ll
    llvm/trunk/test/Transforms/Scalarizer/basic.ll
    llvm/trunk/test/Transforms/Scalarizer/dbginfo.ll
    llvm/trunk/test/Transforms/SeparateConstOffsetFromGEP/NVPTX/split-gep-and-gvn.ll
    llvm/trunk/test/Transforms/SeparateConstOffsetFromGEP/NVPTX/split-gep.ll
    llvm/trunk/test/Transforms/SimplifyCFG/2005-08-01-PHIUpdateFail.ll
    llvm/trunk/test/Transforms/SimplifyCFG/2005-12-03-IncorrectPHIFold.ll
    llvm/trunk/test/Transforms/SimplifyCFG/2006-08-03-Crash.ll
    llvm/trunk/test/Transforms/SimplifyCFG/2006-12-08-Ptr-ICmp-Branch.ll
    llvm/trunk/test/Transforms/SimplifyCFG/X86/switch-covered-bug.ll
    llvm/trunk/test/Transforms/SimplifyCFG/X86/switch-table-bug.ll
    llvm/trunk/test/Transforms/SimplifyCFG/X86/switch_to_lookup_table.ll
    llvm/trunk/test/Transforms/SimplifyCFG/attr-noduplicate.ll
    llvm/trunk/test/Transforms/SimplifyCFG/branch-fold-dbg.ll
    llvm/trunk/test/Transforms/SimplifyCFG/indirectbr.ll
    llvm/trunk/test/Transforms/SimplifyCFG/multiple-phis.ll
    llvm/trunk/test/Transforms/SimplifyCFG/phi-undef-loadstore.ll
    llvm/trunk/test/Transforms/SimplifyCFG/select-gep.ll
    llvm/trunk/test/Transforms/SimplifyCFG/speculate-store.ll
    llvm/trunk/test/Transforms/SimplifyCFG/speculate-with-offset.ll
    llvm/trunk/test/Transforms/SimplifyCFG/switch_create.ll
    llvm/trunk/test/Transforms/SimplifyCFG/unreachable-blocks.ll
    llvm/trunk/test/Transforms/SimplifyCFG/volatile-phioper.ll
    llvm/trunk/test/Transforms/Sink/basic.ll
    llvm/trunk/test/Transforms/StructurizeCFG/branch-on-argument.ll
    llvm/trunk/test/Transforms/StructurizeCFG/loop-multiple-exits.ll
    llvm/trunk/test/Transforms/StructurizeCFG/post-order-traversal-bug.ll
    llvm/trunk/test/Verifier/2002-11-05-GetelementptrPointers.ll
    llvm/trunk/test/Verifier/2010-08-07-PointerIntrinsic.ll
    llvm/trunk/test/Verifier/bitcast-address-space-through-constant-inttoptr-inside-gep-instruction.ll
    llvm/trunk/test/Verifier/inalloca-vararg.ll   (contents, props changed)
    llvm/trunk/test/tools/gold/slp-vectorize.ll
    llvm/trunk/test/tools/gold/vectorize.ll
    llvm/trunk/unittests/IR/ConstantsTest.cpp

Modified: llvm/trunk/lib/AsmParser/LLParser.cpp
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/AsmParser/LLParser.cpp?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/lib/AsmParser/LLParser.cpp (original)
+++ llvm/trunk/lib/AsmParser/LLParser.cpp Fri Feb 27 13:29:02 2015
@@ -5440,7 +5440,19 @@ int LLParser::ParseGetElementPtr(Instruc
 
   bool InBounds = EatIfPresent(lltok::kw_inbounds);
 
-  if (ParseTypeAndValue(Ptr, Loc, PFS)) return true;
+  Type *Ty = nullptr;
+  LocTy ExplicitTypeLoc = Lex.getLoc();
+  if (ParseType(Ty) ||
+      ParseToken(lltok::comma, "expected comma after getelementptr's type") ||
+      ParseTypeAndValue(Ptr, Loc, PFS))
+    return true;
+
+  Type *PtrTy = Ptr->getType();
+  if (VectorType *VT = dyn_cast<VectorType>(PtrTy))
+    PtrTy = VT->getElementType();
+  if (Ty != cast<SequentialType>(PtrTy)->getElementType())
+    return Error(ExplicitTypeLoc,
+                 "explicit pointee type doesn't match operand's pointee type");
 
   Type *BaseType = Ptr->getType();
   PointerType *BasePointerType = dyn_cast<PointerType>(BaseType->getScalarType());
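
A quick illustration of the parser check above (not part of the patch; the value
names %x, %p and %q are invented for the example). Given an i32* value %x, the
explicit pointee type has to match the operand's pointee type, otherwise
ParseGetElementPtr reports the new error at the location of the explicit type:

  %p = getelementptr i32, i32* %x, i64 1   ; accepted: i32 matches the i32* operand
  %q = getelementptr i64, i32* %x, i64 1   ; rejected: "explicit pointee type
                                           ;  doesn't match operand's pointee type"

The dyn_cast<VectorType> above is what lets a vector-of-pointers operand be
checked against the element type of its contained pointer type.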

Modified: llvm/trunk/lib/IR/AsmWriter.cpp
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/IR/AsmWriter.cpp?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/lib/IR/AsmWriter.cpp (original)
+++ llvm/trunk/lib/IR/AsmWriter.cpp Fri Feb 27 13:29:02 2015
@@ -2898,6 +2898,11 @@ void AssemblyWriter::printInstruction(co
     Out << ", ";
     TypePrinter.print(I.getType(), Out);
   } else if (Operand) {   // Print the normal way.
+    if (const GetElementPtrInst *GEP = dyn_cast<GetElementPtrInst>(&I)) {
+      Out << ' ';
+      TypePrinter.print(GEP->getSourceElementType(), Out);
+      Out << ',';
+    }
 
     // PrintAllTypes - Instructions who have operands of all the same type
     // omit the type from all but the first operand.  If the instruction has
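
For illustration only (value names invented): with the AsmWriter change above, a
GetElementPtrInst now prints its source element type, followed by a comma, ahead
of the operand list, so an instruction is written as

  %elt = getelementptr inbounds [10 x i32], [10 x i32]* %buf, i64 0, i64 %idx

where it previously printed as

  %elt = getelementptr inbounds [10 x i32]* %buf, i64 0, i64 %idx

which is exactly the shape of the mechanical rewrite applied to the test files
below.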

Modified: llvm/trunk/test/Analysis/BasicAA/2003-02-26-AccessSizeTest.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/BasicAA/2003-02-26-AccessSizeTest.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/BasicAA/2003-02-26-AccessSizeTest.ll (original)
+++ llvm/trunk/test/Analysis/BasicAA/2003-02-26-AccessSizeTest.ll Fri Feb 27 13:29:02 2015
@@ -11,7 +11,7 @@ define i32 @test() {
   store i32 0, i32* %A
   %X = load i32* %A
   %B = bitcast i32* %A to i8*
-  %C = getelementptr i8* %B, i64 1
+  %C = getelementptr i8, i8* %B, i64 1
   store i8 1, i8* %C    ; Aliases %A
   %Y.DONOTREMOVE = load i32* %A
   %Z = sub i32 %X, %Y.DONOTREMOVE

Modified: llvm/trunk/test/Analysis/BasicAA/2003-03-04-GEPCrash.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/BasicAA/2003-03-04-GEPCrash.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/BasicAA/2003-03-04-GEPCrash.ll (original)
+++ llvm/trunk/test/Analysis/BasicAA/2003-03-04-GEPCrash.ll Fri Feb 27 13:29:02 2015
@@ -1,7 +1,7 @@
 ; RUN: opt < %s -basicaa -aa-eval -disable-output 2>/dev/null
 ; Test for a bug in BasicAA which caused a crash when querying equality of P1&P2
 define void @test({[2 x i32],[2 x i32]}* %A, i64 %X, i64 %Y) {
-	%P1 = getelementptr {[2 x i32],[2 x i32]}* %A, i64 0, i32 0, i64 %X
-	%P2 = getelementptr {[2 x i32],[2 x i32]}* %A, i64 0, i32 1, i64 %Y
+	%P1 = getelementptr {[2 x i32],[2 x i32]}, {[2 x i32],[2 x i32]}* %A, i64 0, i32 0, i64 %X
+	%P2 = getelementptr {[2 x i32],[2 x i32]}, {[2 x i32],[2 x i32]}* %A, i64 0, i32 1, i64 %Y
 	ret void
 }

Modified: llvm/trunk/test/Analysis/BasicAA/2003-04-22-GEPProblem.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/BasicAA/2003-04-22-GEPProblem.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/BasicAA/2003-04-22-GEPProblem.ll (original)
+++ llvm/trunk/test/Analysis/BasicAA/2003-04-22-GEPProblem.ll Fri Feb 27 13:29:02 2015
@@ -4,8 +4,8 @@
 
 define i32 @test(i32 *%Ptr, i64 %V) {
 ; CHECK: sub i32 %X, %Y
-  %P2 = getelementptr i32* %Ptr, i64 1
-  %P1 = getelementptr i32* %Ptr, i64 %V
+  %P2 = getelementptr i32, i32* %Ptr, i64 1
+  %P1 = getelementptr i32, i32* %Ptr, i64 %V
   %X = load i32* %P1
   store i32 5, i32* %P2
   %Y = load i32* %P1

Modified: llvm/trunk/test/Analysis/BasicAA/2003-04-25-GEPCrash.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/BasicAA/2003-04-25-GEPCrash.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/BasicAA/2003-04-25-GEPCrash.ll (original)
+++ llvm/trunk/test/Analysis/BasicAA/2003-04-25-GEPCrash.ll Fri Feb 27 13:29:02 2015
@@ -1,7 +1,7 @@
 ; RUN: opt < %s -basicaa -aa-eval -disable-output 2>/dev/null
 ; Test for a bug in BasicAA which caused a crash when querying equality of P1&P2
 define void @test([17 x i16]* %mask_bits) {
-	%P1 = getelementptr [17 x i16]* %mask_bits, i64 0, i64 0
-	%P2 = getelementptr [17 x i16]* %mask_bits, i64 252645134, i64 0
+	%P1 = getelementptr [17 x i16], [17 x i16]* %mask_bits, i64 0, i64 0
+	%P2 = getelementptr [17 x i16], [17 x i16]* %mask_bits, i64 252645134, i64 0
 	ret void
 }

Modified: llvm/trunk/test/Analysis/BasicAA/2003-05-21-GEP-Problem.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/BasicAA/2003-05-21-GEP-Problem.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/BasicAA/2003-05-21-GEP-Problem.ll (original)
+++ llvm/trunk/test/Analysis/BasicAA/2003-05-21-GEP-Problem.ll Fri Feb 27 13:29:02 2015
@@ -6,13 +6,13 @@ define void @table_reindex(%struct..apr_
 	br label %loopentry
 
 loopentry:		; preds = %0, %no_exit
-	%tmp.101 = getelementptr %struct..apr_table_t* %t.1, i64 0, i32 0, i32 2
+	%tmp.101 = getelementptr %struct..apr_table_t, %struct..apr_table_t* %t.1, i64 0, i32 0, i32 2
 	%tmp.11 = load i32* %tmp.101		; <i32> [#uses=0]
 	br i1 false, label %no_exit, label %UnifiedExitNode
 
 no_exit:		; preds = %loopentry
 	%tmp.25 = sext i32 0 to i64		; <i64> [#uses=1]
-	%tmp.261 = getelementptr %struct..apr_table_t* %t.1, i64 0, i32 3, i64 %tmp.25		; <i32*> [#uses=1]
+	%tmp.261 = getelementptr %struct..apr_table_t, %struct..apr_table_t* %t.1, i64 0, i32 3, i64 %tmp.25		; <i32*> [#uses=1]
 	store i32 0, i32* %tmp.261
 	br label %loopentry
 

Modified: llvm/trunk/test/Analysis/BasicAA/2003-06-01-AliasCrash.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/BasicAA/2003-06-01-AliasCrash.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/BasicAA/2003-06-01-AliasCrash.ll (original)
+++ llvm/trunk/test/Analysis/BasicAA/2003-06-01-AliasCrash.ll Fri Feb 27 13:29:02 2015
@@ -1,11 +1,11 @@
 ; RUN: opt < %s -basicaa -aa-eval -disable-output 2>/dev/null
 
 define i32 @MTConcat([3 x i32]* %a.1) {
-	%tmp.961 = getelementptr [3 x i32]* %a.1, i64 0, i64 4
+	%tmp.961 = getelementptr [3 x i32], [3 x i32]* %a.1, i64 0, i64 4
 	%tmp.97 = load i32* %tmp.961
-	%tmp.119 = getelementptr [3 x i32]* %a.1, i64 1, i64 0
+	%tmp.119 = getelementptr [3 x i32], [3 x i32]* %a.1, i64 1, i64 0
 	%tmp.120 = load i32* %tmp.119
-	%tmp.1541 = getelementptr [3 x i32]* %a.1, i64 0, i64 4
+	%tmp.1541 = getelementptr [3 x i32], [3 x i32]* %a.1, i64 0, i64 4
 	%tmp.155 = load i32* %tmp.1541
 	ret i32 0
 }

Modified: llvm/trunk/test/Analysis/BasicAA/2003-07-03-BasicAACrash.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/BasicAA/2003-07-03-BasicAACrash.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/BasicAA/2003-07-03-BasicAACrash.ll (original)
+++ llvm/trunk/test/Analysis/BasicAA/2003-07-03-BasicAACrash.ll Fri Feb 27 13:29:02 2015
@@ -4,7 +4,7 @@
 %struct..RefRect = type { %struct..RefPoint, %struct..RefPoint }
 
 define i32 @BMT_CommitPartDrawObj() {
-	%tmp.19111 = getelementptr %struct..RefRect* null, i64 0, i32 0, i32 1, i32 2
-	%tmp.20311 = getelementptr %struct..RefRect* null, i64 0, i32 1, i32 1, i32 2
+	%tmp.19111 = getelementptr %struct..RefRect, %struct..RefRect* null, i64 0, i32 0, i32 1, i32 2
+	%tmp.20311 = getelementptr %struct..RefRect, %struct..RefRect* null, i64 0, i32 1, i32 1, i32 2
 	ret i32 0
 }

Modified: llvm/trunk/test/Analysis/BasicAA/2003-11-04-SimpleCases.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/BasicAA/2003-11-04-SimpleCases.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/BasicAA/2003-11-04-SimpleCases.ll (original)
+++ llvm/trunk/test/Analysis/BasicAA/2003-11-04-SimpleCases.ll Fri Feb 27 13:29:02 2015
@@ -11,10 +11,10 @@ target datalayout = "e-m:e-i64:64-f80:12
 ; CHECK-NOT:   MayAlias:
 
 define void @test(%T* %P) {
-  %A = getelementptr %T* %P, i64 0
-  %B = getelementptr %T* %P, i64 0, i32 0
-  %C = getelementptr %T* %P, i64 0, i32 1
-  %D = getelementptr %T* %P, i64 0, i32 1, i64 0
-  %E = getelementptr %T* %P, i64 0, i32 1, i64 5
+  %A = getelementptr %T, %T* %P, i64 0
+  %B = getelementptr %T, %T* %P, i64 0, i32 0
+  %C = getelementptr %T, %T* %P, i64 0, i32 1
+  %D = getelementptr %T, %T* %P, i64 0, i32 1, i64 0
+  %E = getelementptr %T, %T* %P, i64 0, i32 1, i64 5
   ret void
 }

Modified: llvm/trunk/test/Analysis/BasicAA/2003-12-11-ConstExprGEP.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/BasicAA/2003-12-11-ConstExprGEP.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/BasicAA/2003-12-11-ConstExprGEP.ll (original)
+++ llvm/trunk/test/Analysis/BasicAA/2003-12-11-ConstExprGEP.ll Fri Feb 27 13:29:02 2015
@@ -13,10 +13,10 @@ target datalayout = "e-m:e-i64:64-f80:12
 ; CHECK-NOT:   MayAlias:
 
 define void @test() {
-  %D = getelementptr %T* @G, i64 0, i32 0
-  %E = getelementptr %T* @G, i64 0, i32 1, i64 5
-  %F = getelementptr i32* getelementptr (%T* @G, i64 0, i32 0), i64 0
-  %X = getelementptr [10 x i8]* getelementptr (%T* @G, i64 0, i32 1), i64 0, i64 5
+  %D = getelementptr %T, %T* @G, i64 0, i32 0
+  %E = getelementptr %T, %T* @G, i64 0, i32 1, i64 5
+  %F = getelementptr i32, i32* getelementptr (%T* @G, i64 0, i32 0), i64 0
+  %X = getelementptr [10 x i8], [10 x i8]* getelementptr (%T* @G, i64 0, i32 1), i64 0, i64 5
 
   ret void
 }

Modified: llvm/trunk/test/Analysis/BasicAA/2004-07-28-MustAliasbug.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/BasicAA/2004-07-28-MustAliasbug.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/BasicAA/2004-07-28-MustAliasbug.ll (original)
+++ llvm/trunk/test/Analysis/BasicAA/2004-07-28-MustAliasbug.ll Fri Feb 27 13:29:02 2015
@@ -2,9 +2,9 @@
 
 define void @test({i32,i32 }* %P) {
 ; CHECK: store i32 0, i32* %X
-  %Q = getelementptr {i32,i32}* %P, i32 1
-  %X = getelementptr {i32,i32}* %Q, i32 0, i32 1
-  %Y = getelementptr {i32,i32}* %Q, i32 1, i32 1
+  %Q = getelementptr {i32,i32}, {i32,i32}* %P, i32 1
+  %X = getelementptr {i32,i32}, {i32,i32}* %Q, i32 0, i32 1
+  %Y = getelementptr {i32,i32}, {i32,i32}* %Q, i32 1, i32 1
   store i32 0, i32* %X
   store i32 1, i32* %Y
   ret void

Modified: llvm/trunk/test/Analysis/BasicAA/2006-03-03-BadArraySubscript.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/BasicAA/2006-03-03-BadArraySubscript.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/BasicAA/2006-03-03-BadArraySubscript.ll (original)
+++ llvm/trunk/test/Analysis/BasicAA/2006-03-03-BadArraySubscript.ll Fri Feb 27 13:29:02 2015
@@ -12,11 +12,11 @@ entry:
 
 no_exit:		; preds = %no_exit, %entry
 	%i.0.0 = phi i32 [ 0, %entry ], [ %inc, %no_exit ]		; <i32> [#uses=2]
-	%tmp.6 = getelementptr [3 x [3 x i32]]* %X, i32 0, i32 0, i32 %i.0.0		; <i32*> [#uses=1]
+	%tmp.6 = getelementptr [3 x [3 x i32]], [3 x [3 x i32]]* %X, i32 0, i32 0, i32 %i.0.0		; <i32*> [#uses=1]
 	store i32 1, i32* %tmp.6
-	%tmp.8 = getelementptr [3 x [3 x i32]]* %X, i32 0, i32 0, i32 0		; <i32*> [#uses=1]
+	%tmp.8 = getelementptr [3 x [3 x i32]], [3 x [3 x i32]]* %X, i32 0, i32 0, i32 0		; <i32*> [#uses=1]
 	%tmp.9 = load i32* %tmp.8		; <i32> [#uses=1]
-	%tmp.11 = getelementptr [3 x [3 x i32]]* %X, i32 0, i32 1, i32 0		; <i32*> [#uses=1]
+	%tmp.11 = getelementptr [3 x [3 x i32]], [3 x [3 x i32]]* %X, i32 0, i32 1, i32 0		; <i32*> [#uses=1]
 	%tmp.12 = load i32* %tmp.11		; <i32> [#uses=1]
 	%tmp.13 = add i32 %tmp.12, %tmp.9		; <i32> [#uses=1]
 	%inc = add i32 %i.0.0, 1		; <i32> [#uses=2]
@@ -25,7 +25,7 @@ no_exit:		; preds = %no_exit, %entry
 
 loopexit:		; preds = %no_exit, %entry
 	%Y.0.1 = phi i32 [ 0, %entry ], [ %tmp.13, %no_exit ]		; <i32> [#uses=1]
-	%tmp.4 = getelementptr [3 x [3 x i32]]* %X, i32 0, i32 0		; <[3 x i32]*> [#uses=1]
+	%tmp.4 = getelementptr [3 x [3 x i32]], [3 x [3 x i32]]* %X, i32 0, i32 0		; <[3 x i32]*> [#uses=1]
 	%tmp.15 = call i32 (...)* @foo( [3 x i32]* %tmp.4, i32 %Y.0.1 )		; <i32> [#uses=0]
 	ret void
 }

Modified: llvm/trunk/test/Analysis/BasicAA/2006-11-03-BasicAAVectorCrash.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/BasicAA/2006-11-03-BasicAAVectorCrash.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/BasicAA/2006-11-03-BasicAAVectorCrash.ll (original)
+++ llvm/trunk/test/Analysis/BasicAA/2006-11-03-BasicAAVectorCrash.ll Fri Feb 27 13:29:02 2015
@@ -22,7 +22,7 @@ cond_true264.i:		; preds = %bb239.i
 	ret void
 
 cond_false277.i:		; preds = %bb239.i
-	%tmp1062.i = getelementptr [2 x <4 x i32>]* null, i32 0, i32 1		; <<4 x i32>*> [#uses=1]
+	%tmp1062.i = getelementptr [2 x <4 x i32>], [2 x <4 x i32>]* null, i32 0, i32 1		; <<4 x i32>*> [#uses=1]
 	store <4 x i32> zeroinitializer, <4 x i32>* %tmp1062.i
 	br i1 false, label %cond_true1032.i, label %cond_false1063.i85
 
@@ -33,7 +33,7 @@ bb1013.i:		; preds = %bb205.i
 	ret void
 
 cond_true1032.i:		; preds = %cond_false277.i
-	%tmp1187.i = getelementptr [2 x <4 x i32>]* null, i32 0, i32 0, i32 7		; <i32*> [#uses=1]
+	%tmp1187.i = getelementptr [2 x <4 x i32>], [2 x <4 x i32>]* null, i32 0, i32 0, i32 7		; <i32*> [#uses=1]
 	store i32 0, i32* %tmp1187.i
 	br label %bb2037.i
 

Modified: llvm/trunk/test/Analysis/BasicAA/2007-01-13-BasePointerBadNoAlias.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/BasicAA/2007-01-13-BasePointerBadNoAlias.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/BasicAA/2007-01-13-BasePointerBadNoAlias.ll (original)
+++ llvm/trunk/test/Analysis/BasicAA/2007-01-13-BasePointerBadNoAlias.ll Fri Feb 27 13:29:02 2015
@@ -21,11 +21,11 @@ target triple = "i686-apple-darwin8"
 ; CHECK:   ret i32 %Z
 
 define i32 @test(%struct.closure_type* %tmp18169) {
-	%tmp18174 = getelementptr %struct.closure_type* %tmp18169, i32 0, i32 4, i32 0, i32 0		; <i32*> [#uses=2]
+	%tmp18174 = getelementptr %struct.closure_type, %struct.closure_type* %tmp18169, i32 0, i32 4, i32 0, i32 0		; <i32*> [#uses=2]
 	%tmp18269 = bitcast i32* %tmp18174  to %struct.STYLE*		; <%struct.STYLE*> [#uses=1]
 	%A = load i32* %tmp18174		; <i32> [#uses=1]
 
-        %tmp18272 = getelementptr %struct.STYLE* %tmp18269, i32 0, i32 0, i32 0, i32 2          ; <i16*> [#uses=1]
+        %tmp18272 = getelementptr %struct.STYLE, %struct.STYLE* %tmp18269, i32 0, i32 0, i32 0, i32 2          ; <i16*> [#uses=1]
         store i16 123, i16* %tmp18272
 
 	%Q = load i32* %tmp18174		; <i32> [#uses=1]

Modified: llvm/trunk/test/Analysis/BasicAA/2007-08-01-NoAliasAndGEP.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/BasicAA/2007-08-01-NoAliasAndGEP.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/BasicAA/2007-08-01-NoAliasAndGEP.ll (original)
+++ llvm/trunk/test/Analysis/BasicAA/2007-08-01-NoAliasAndGEP.ll Fri Feb 27 13:29:02 2015
@@ -8,10 +8,10 @@ target datalayout = "e-m:e-i64:64-f80:12
 ; CHECK: 6 partial alias responses
 
 define void @foo(i32* noalias %p, i32* noalias %q, i32 %i, i32 %j) {
-  %Ipointer = getelementptr i32* %p, i32 %i
-  %qi = getelementptr i32* %q, i32 %i
-  %Jpointer = getelementptr i32* %p, i32 %j
-  %qj = getelementptr i32* %q, i32 %j
+  %Ipointer = getelementptr i32, i32* %p, i32 %i
+  %qi = getelementptr i32, i32* %q, i32 %i
+  %Jpointer = getelementptr i32, i32* %p, i32 %j
+  %qj = getelementptr i32, i32* %q, i32 %j
   store i32 0, i32* %p
   store i32 0, i32* %Ipointer
   store i32 0, i32* %Jpointer

Modified: llvm/trunk/test/Analysis/BasicAA/2007-10-24-ArgumentsGlobals.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/BasicAA/2007-10-24-ArgumentsGlobals.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/BasicAA/2007-10-24-ArgumentsGlobals.ll (original)
+++ llvm/trunk/test/Analysis/BasicAA/2007-10-24-ArgumentsGlobals.ll Fri Feb 27 13:29:02 2015
@@ -9,7 +9,7 @@ define i32 @_Z3fooP1A(%struct.A* %b) {
 ; CHECK: ret i32 %tmp7
 entry:
         store i32 1, i32* getelementptr (%struct.B* @a, i32 0, i32 0, i32 0), align 8
-        %tmp4 = getelementptr %struct.A* %b, i32 0, i32 0               ;<i32*> [#uses=1]
+        %tmp4 = getelementptr %struct.A, %struct.A* %b, i32 0, i32 0               ;<i32*> [#uses=1]
         store i32 0, i32* %tmp4, align 4
         %tmp7 = load i32* getelementptr (%struct.B* @a, i32 0, i32 0, i32 0), align 8           ; <i32> [#uses=1]
         ret i32 %tmp7

Modified: llvm/trunk/test/Analysis/BasicAA/2007-11-05-SizeCrash.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/BasicAA/2007-11-05-SizeCrash.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/BasicAA/2007-11-05-SizeCrash.ll (original)
+++ llvm/trunk/test/Analysis/BasicAA/2007-11-05-SizeCrash.ll Fri Feb 27 13:29:02 2015
@@ -14,14 +14,14 @@ target triple = "x86_64-unknown-linux-gn
 
 define i32 @uhci_suspend(%struct.usb_hcd* %hcd) {
 entry:
-        %tmp17 = getelementptr %struct.usb_hcd* %hcd, i32 0, i32 2, i64 1      
+        %tmp17 = getelementptr %struct.usb_hcd, %struct.usb_hcd* %hcd, i32 0, i32 2, i64 1      
         ; <i64*> [#uses=1]
         %tmp1718 = bitcast i64* %tmp17 to i32*          ; <i32*> [#uses=1]
         %tmp19 = load i32* %tmp1718, align 4            ; <i32> [#uses=0]
         br i1 false, label %cond_true34, label %done_okay
 
 cond_true34:            ; preds = %entry
-        %tmp631 = getelementptr %struct.usb_hcd* %hcd, i32 0, i32 2, i64
+        %tmp631 = getelementptr %struct.usb_hcd, %struct.usb_hcd* %hcd, i32 0, i32 2, i64
 2305843009213693950            ; <i64*> [#uses=1]
         %tmp70 = bitcast i64* %tmp631 to %struct.device**
 

Modified: llvm/trunk/test/Analysis/BasicAA/2007-12-08-OutOfBoundsCrash.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/BasicAA/2007-12-08-OutOfBoundsCrash.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/BasicAA/2007-12-08-OutOfBoundsCrash.ll (original)
+++ llvm/trunk/test/Analysis/BasicAA/2007-12-08-OutOfBoundsCrash.ll Fri Feb 27 13:29:02 2015
@@ -13,7 +13,7 @@ target triple = "x86_64-unknown-linux-gn
 
 define i32 @ehci_pci_setup(%struct.usb_hcd* %hcd) {
 entry:
-	%tmp14 = getelementptr %struct.usb_hcd* %hcd, i32 0, i32 0, i32 0		; <%struct.device**> [#uses=1]
+	%tmp14 = getelementptr %struct.usb_hcd, %struct.usb_hcd* %hcd, i32 0, i32 0, i32 0		; <%struct.device**> [#uses=1]
 	%tmp15 = load %struct.device** %tmp14, align 8		; <%struct.device*> [#uses=0]
 	br i1 false, label %bb25, label %return
 
@@ -21,7 +21,7 @@ bb25:		; preds = %entry
 	br i1 false, label %cond_true, label %return
 
 cond_true:		; preds = %bb25
-	%tmp601 = getelementptr %struct.usb_hcd* %hcd, i32 0, i32 1, i64 2305843009213693951		; <i64*> [#uses=1]
+	%tmp601 = getelementptr %struct.usb_hcd, %struct.usb_hcd* %hcd, i32 0, i32 1, i64 2305843009213693951		; <i64*> [#uses=1]
 	%tmp67 = bitcast i64* %tmp601 to %struct.device**		; <%struct.device**> [#uses=1]
 	%tmp68 = load %struct.device** %tmp67, align 8		; <%struct.device*> [#uses=0]
 	ret i32 undef

Modified: llvm/trunk/test/Analysis/BasicAA/2008-04-15-Byval.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/BasicAA/2008-04-15-Byval.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/BasicAA/2008-04-15-Byval.ll (original)
+++ llvm/trunk/test/Analysis/BasicAA/2008-04-15-Byval.ll Fri Feb 27 13:29:02 2015
@@ -7,8 +7,8 @@ target triple = "i386-apple-darwin8"
 define void @foo(%struct.x* byval align 4  %X) nounwind  {
 ; CHECK: store i32 2, i32* %tmp1
 entry:
-	%tmp = getelementptr %struct.x* %X, i32 0, i32 0		; <[4 x i32]*> [#uses=1]
-	%tmp1 = getelementptr [4 x i32]* %tmp, i32 0, i32 3		; <i32*> [#uses=1]
+	%tmp = getelementptr %struct.x, %struct.x* %X, i32 0, i32 0		; <[4 x i32]*> [#uses=1]
+	%tmp1 = getelementptr [4 x i32], [4 x i32]* %tmp, i32 0, i32 3		; <i32*> [#uses=1]
 	store i32 2, i32* %tmp1, align 4
 	%tmp2 = call i32 (...)* @bar( %struct.x* byval align 4  %X ) nounwind 		; <i32> [#uses=0]
 	br label %return

Modified: llvm/trunk/test/Analysis/BasicAA/2009-03-04-GEPNoalias.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/BasicAA/2009-03-04-GEPNoalias.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/BasicAA/2009-03-04-GEPNoalias.ll (original)
+++ llvm/trunk/test/Analysis/BasicAA/2009-03-04-GEPNoalias.ll Fri Feb 27 13:29:02 2015
@@ -6,7 +6,7 @@ define i32 @test(i32 %x) {
 ; CHECK: load i32* %a
   %a = call i32* @noalias()
   store i32 1, i32* %a
-  %b = getelementptr i32* %a, i32 %x
+  %b = getelementptr i32, i32* %a, i32 %x
   store i32 2, i32* %b
 
   %c = load i32* %a

Modified: llvm/trunk/test/Analysis/BasicAA/2009-10-13-AtomicModRef.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/BasicAA/2009-10-13-AtomicModRef.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/BasicAA/2009-10-13-AtomicModRef.ll (original)
+++ llvm/trunk/test/Analysis/BasicAA/2009-10-13-AtomicModRef.ll Fri Feb 27 13:29:02 2015
@@ -2,8 +2,8 @@
 target datalayout = "E-p:64:64:64-a0:0:8-f32:32:32-f64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:32:64-v64:64:64-v128:128:128"
 
 define i8 @foo(i8* %ptr) {
-  %P = getelementptr i8* %ptr, i32 0
-  %Q = getelementptr i8* %ptr, i32 1
+  %P = getelementptr i8, i8* %ptr, i32 0
+  %Q = getelementptr i8, i8* %ptr, i32 1
 ; CHECK: getelementptr
   %X = load i8* %P
   %Y = atomicrmw add i8* %Q, i8 1 monotonic

Modified: llvm/trunk/test/Analysis/BasicAA/2009-10-13-GEP-BaseNoAlias.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/BasicAA/2009-10-13-GEP-BaseNoAlias.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/BasicAA/2009-10-13-GEP-BaseNoAlias.ll (original)
+++ llvm/trunk/test/Analysis/BasicAA/2009-10-13-GEP-BaseNoAlias.ll Fri Feb 27 13:29:02 2015
@@ -15,7 +15,7 @@ entry:
   br i1 %tmp, label %bb, label %bb1
 
 bb:
-  %b = getelementptr i32* %a, i32 0
+  %b = getelementptr i32, i32* %a, i32 0
   br label %bb2
 
 bb1:

Modified: llvm/trunk/test/Analysis/BasicAA/2010-09-15-GEP-SignedArithmetic.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/BasicAA/2010-09-15-GEP-SignedArithmetic.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/BasicAA/2010-09-15-GEP-SignedArithmetic.ll (original)
+++ llvm/trunk/test/Analysis/BasicAA/2010-09-15-GEP-SignedArithmetic.ll Fri Feb 27 13:29:02 2015
@@ -8,7 +8,7 @@ target datalayout = "e-p:32:32:32"
 define i32 @test(i32* %tab, i32 %indvar) nounwind {
   %tmp31 = mul i32 %indvar, -2
   %tmp32 = add i32 %tmp31, 30
-  %t.5 = getelementptr i32* %tab, i32 %tmp32
+  %t.5 = getelementptr i32, i32* %tab, i32 %tmp32
   %loada = load i32* %tab
   store i32 0, i32* %t.5
   %loadb = load i32* %tab

Modified: llvm/trunk/test/Analysis/BasicAA/2014-03-18-Maxlookup-reached.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/BasicAA/2014-03-18-Maxlookup-reached.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/BasicAA/2014-03-18-Maxlookup-reached.ll (original)
+++ llvm/trunk/test/Analysis/BasicAA/2014-03-18-Maxlookup-reached.ll Fri Feb 27 13:29:02 2015
@@ -10,25 +10,25 @@ target datalayout = "e"
 
 define i32 @main() {
   %t = alloca %struct.foo, align 4
-  %1 = getelementptr inbounds %struct.foo* %t, i32 0, i32 0
+  %1 = getelementptr inbounds %struct.foo, %struct.foo* %t, i32 0, i32 0
   store i32 1, i32* %1, align 4
-  %2 = getelementptr inbounds %struct.foo* %t, i64 1
+  %2 = getelementptr inbounds %struct.foo, %struct.foo* %t, i64 1
   %3 = bitcast %struct.foo* %2 to i8*
-  %4 = getelementptr inbounds i8* %3, i32 -1
+  %4 = getelementptr inbounds i8, i8* %3, i32 -1
   store i8 0, i8* %4
-  %5 = getelementptr inbounds i8* %4, i32 -1
+  %5 = getelementptr inbounds i8, i8* %4, i32 -1
   store i8 0, i8* %5
-  %6 = getelementptr inbounds i8* %5, i32 -1
+  %6 = getelementptr inbounds i8, i8* %5, i32 -1
   store i8 0, i8* %6
-  %7 = getelementptr inbounds i8* %6, i32 -1
+  %7 = getelementptr inbounds i8, i8* %6, i32 -1
   store i8 0, i8* %7
-  %8 = getelementptr inbounds i8* %7, i32 -1
+  %8 = getelementptr inbounds i8, i8* %7, i32 -1
   store i8 0, i8* %8
-  %9 = getelementptr inbounds i8* %8, i32 -1
+  %9 = getelementptr inbounds i8, i8* %8, i32 -1
   store i8 0, i8* %9
-  %10 = getelementptr inbounds i8* %9, i32 -1
+  %10 = getelementptr inbounds i8, i8* %9, i32 -1
   store i8 0, i8* %10
-  %11 = getelementptr inbounds i8* %10, i32 -1
+  %11 = getelementptr inbounds i8, i8* %10, i32 -1
   store i8 0, i8* %11
   %12 = load i32* %1, align 4
   ret i32 %12

Modified: llvm/trunk/test/Analysis/BasicAA/byval.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/BasicAA/byval.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/BasicAA/byval.ll (original)
+++ llvm/trunk/test/Analysis/BasicAA/byval.ll Fri Feb 27 13:29:02 2015
@@ -7,7 +7,7 @@ target triple = "i686-apple-darwin8"
 define i32 @foo(%struct.x* byval  %a) nounwind  {
 ; CHECK: ret i32 1
   %tmp1 = tail call i32 (...)* @bar( %struct.x* %a ) nounwind 		; <i32> [#uses=0]
-  %tmp2 = getelementptr %struct.x* %a, i32 0, i32 0		; <i32*> [#uses=2]
+  %tmp2 = getelementptr %struct.x, %struct.x* %a, i32 0, i32 0		; <i32*> [#uses=2]
   store i32 1, i32* %tmp2, align 4
   store i32 2, i32* @g, align 4
   %tmp4 = load i32* %tmp2, align 4		; <i32> [#uses=1]

Modified: llvm/trunk/test/Analysis/BasicAA/constant-over-index.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/BasicAA/constant-over-index.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/BasicAA/constant-over-index.ll (original)
+++ llvm/trunk/test/Analysis/BasicAA/constant-over-index.ll Fri Feb 27 13:29:02 2015
@@ -11,13 +11,13 @@ target datalayout = "e-m:e-i64:64-f80:12
 
 define void @foo([3 x [3 x double]]* noalias %p) {
 entry:
-  %p3 = getelementptr [3 x [3 x double]]* %p, i64 0, i64 0, i64 3
+  %p3 = getelementptr [3 x [3 x double]], [3 x [3 x double]]* %p, i64 0, i64 0, i64 3
   br label %loop
 
 loop:
   %i = phi i64 [ 0, %entry ], [ %i.next, %loop ]
 
-  %p.0.i.0 = getelementptr [3 x [3 x double]]* %p, i64 0, i64 %i, i64 0
+  %p.0.i.0 = getelementptr [3 x [3 x double]], [3 x [3 x double]]* %p, i64 0, i64 %i, i64 0
 
   store volatile double 0.0, double* %p3
   store volatile double 0.1, double* %p.0.i.0

Modified: llvm/trunk/test/Analysis/BasicAA/cs-cs.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/BasicAA/cs-cs.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/BasicAA/cs-cs.ll (original)
+++ llvm/trunk/test/Analysis/BasicAA/cs-cs.ll Fri Feb 27 13:29:02 2015
@@ -12,7 +12,7 @@ declare void @a_readonly_func(i8 *) noin
 
 define <8 x i16> @test1(i8* %p, <8 x i16> %y) {
 entry:
-  %q = getelementptr i8* %p, i64 16
+  %q = getelementptr i8, i8* %p, i64 16
   %a = call <8 x i16> @llvm.arm.neon.vld1.v8i16(i8* %p, i32 16) nounwind
   call void @llvm.arm.neon.vst1.v8i16(i8* %q, <8 x i16> %y, i32 16)
   %b = call <8 x i16> @llvm.arm.neon.vld1.v8i16(i8* %p, i32 16) nounwind
@@ -70,7 +70,7 @@ define void @test2a(i8* noalias %P, i8*
 
 define void @test2b(i8* noalias %P, i8* noalias %Q) nounwind ssp {
   tail call void @llvm.memcpy.p0i8.p0i8.i64(i8* %P, i8* %Q, i64 12, i32 1, i1 false)
-  %R = getelementptr i8* %P, i64 12
+  %R = getelementptr i8, i8* %P, i64 12
   tail call void @llvm.memcpy.p0i8.p0i8.i64(i8* %R, i8* %Q, i64 12, i32 1, i1 false)
   ret void
 
@@ -91,7 +91,7 @@ define void @test2b(i8* noalias %P, i8*
 
 define void @test2c(i8* noalias %P, i8* noalias %Q) nounwind ssp {
   tail call void @llvm.memcpy.p0i8.p0i8.i64(i8* %P, i8* %Q, i64 12, i32 1, i1 false)
-  %R = getelementptr i8* %P, i64 11
+  %R = getelementptr i8, i8* %P, i64 11
   tail call void @llvm.memcpy.p0i8.p0i8.i64(i8* %R, i8* %Q, i64 12, i32 1, i1 false)
   ret void
 
@@ -112,7 +112,7 @@ define void @test2c(i8* noalias %P, i8*
 
 define void @test2d(i8* noalias %P, i8* noalias %Q) nounwind ssp {
   tail call void @llvm.memcpy.p0i8.p0i8.i64(i8* %P, i8* %Q, i64 12, i32 1, i1 false)
-  %R = getelementptr i8* %P, i64 -12
+  %R = getelementptr i8, i8* %P, i64 -12
   tail call void @llvm.memcpy.p0i8.p0i8.i64(i8* %R, i8* %Q, i64 12, i32 1, i1 false)
   ret void
 
@@ -133,7 +133,7 @@ define void @test2d(i8* noalias %P, i8*
 
 define void @test2e(i8* noalias %P, i8* noalias %Q) nounwind ssp {
   tail call void @llvm.memcpy.p0i8.p0i8.i64(i8* %P, i8* %Q, i64 12, i32 1, i1 false)
-  %R = getelementptr i8* %P, i64 -11
+  %R = getelementptr i8, i8* %P, i64 -11
   tail call void @llvm.memcpy.p0i8.p0i8.i64(i8* %R, i8* %Q, i64 12, i32 1, i1 false)
   ret void
 

Modified: llvm/trunk/test/Analysis/BasicAA/featuretest.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/BasicAA/featuretest.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/BasicAA/featuretest.ll (original)
+++ llvm/trunk/test/Analysis/BasicAA/featuretest.ll Fri Feb 27 13:29:02 2015
@@ -18,10 +18,10 @@ define i32 @different_array_test(i64 %A,
         call void @external(i32* %Array1)
         call void @external(i32* %Array2)
 
-	%pointer = getelementptr i32* %Array1, i64 %A
+	%pointer = getelementptr i32, i32* %Array1, i64 %A
 	%val = load i32* %pointer
 
-	%pointer2 = getelementptr i32* %Array2, i64 %B
+	%pointer2 = getelementptr i32, i32* %Array2, i64 %B
 	store i32 7, i32* %pointer2
 
 	%REMOVE = load i32* %pointer ; redundant with above load
@@ -38,8 +38,8 @@ define i32 @constant_array_index_test()
 	%Array = alloca i32, i32 100
         call void @external(i32* %Array)
 
-	%P1 = getelementptr i32* %Array, i64 7
-	%P2 = getelementptr i32* %Array, i64 6
+	%P1 = getelementptr i32, i32* %Array, i64 7
+	%P2 = getelementptr i32, i32* %Array, i64 6
 	
 	%A = load i32* %P1
 	store i32 1, i32* %P2   ; Should not invalidate load
@@ -54,7 +54,7 @@ define i32 @constant_array_index_test()
 ; they cannot alias.
 define i32 @gep_distance_test(i32* %A) {
         %REMOVEu = load i32* %A
-        %B = getelementptr i32* %A, i64 2  ; Cannot alias A
+        %B = getelementptr i32, i32* %A, i64 2  ; Cannot alias A
         store i32 7, i32* %B
         %REMOVEv = load i32* %A
         %r = sub i32 %REMOVEu, %REMOVEv
@@ -66,9 +66,9 @@ define i32 @gep_distance_test(i32* %A) {
 ; Test that if two pointers are spaced out by a constant offset, that they
 ; cannot alias, even if there is a variable offset between them...
 define i32 @gep_distance_test2({i32,i32}* %A, i64 %distance) {
-	%A1 = getelementptr {i32,i32}* %A, i64 0, i32 0
+	%A1 = getelementptr {i32,i32}, {i32,i32}* %A, i64 0, i32 0
 	%REMOVEu = load i32* %A1
-	%B = getelementptr {i32,i32}* %A, i64 %distance, i32 1
+	%B = getelementptr {i32,i32}, {i32,i32}* %A, i64 %distance, i32 1
 	store i32 7, i32* %B    ; B cannot alias A, it's at least 4 bytes away
 	%REMOVEv = load i32* %A1
         %r = sub i32 %REMOVEu, %REMOVEv
@@ -82,7 +82,7 @@ define i32 @gep_distance_test2({i32,i32}
 define i32 @gep_distance_test3(i32 * %A) {
 	%X = load i32* %A
 	%B = bitcast i32* %A to i8*
-	%C = getelementptr i8* %B, i64 4
+	%C = getelementptr i8, i8* %B, i64 4
         store i8 42, i8* %C
 	%Y = load i32* %A
         %R = sub i32 %X, %Y
@@ -112,12 +112,12 @@ define i32 @constexpr_test() {
 define i16 @zext_sext_confusion(i16* %row2col, i5 %j) nounwind{
 entry:
   %sum5.cast = zext i5 %j to i64             ; <i64> [#uses=1]
-  %P1 = getelementptr i16* %row2col, i64 %sum5.cast
+  %P1 = getelementptr i16, i16* %row2col, i64 %sum5.cast
   %row2col.load.1.2 = load i16* %P1, align 1 ; <i16> [#uses=1]
   
   %sum13.cast31 = sext i5 %j to i6          ; <i6> [#uses=1]
   %sum13.cast = zext i6 %sum13.cast31 to i64      ; <i64> [#uses=1]
-  %P2 = getelementptr i16* %row2col, i64 %sum13.cast
+  %P2 = getelementptr i16, i16* %row2col, i64 %sum13.cast
   %row2col.load.1.6 = load i16* %P2, align 1 ; <i16> [#uses=1]
   
   %.ret = sub i16 %row2col.load.1.6, %row2col.load.1.2 ; <i16> [#uses=1]

Modified: llvm/trunk/test/Analysis/BasicAA/full-store-partial-alias.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/BasicAA/full-store-partial-alias.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/BasicAA/full-store-partial-alias.ll (original)
+++ llvm/trunk/test/Analysis/BasicAA/full-store-partial-alias.ll Fri Feb 27 13:29:02 2015
@@ -18,12 +18,12 @@ define i32 @signbit(double %x) nounwind
 ; CHECK:   ret i32 0
 entry:
   %u = alloca %union.anon, align 8
-  %tmp9 = getelementptr inbounds %union.anon* %u, i64 0, i32 0
+  %tmp9 = getelementptr inbounds %union.anon, %union.anon* %u, i64 0, i32 0
   store double %x, double* %tmp9, align 8, !tbaa !0
   %tmp2 = load i32* bitcast (i64* @endianness_test to i32*), align 8, !tbaa !3
   %idxprom = sext i32 %tmp2 to i64
   %tmp4 = bitcast %union.anon* %u to [2 x i32]*
-  %arrayidx = getelementptr inbounds [2 x i32]* %tmp4, i64 0, i64 %idxprom
+  %arrayidx = getelementptr inbounds [2 x i32], [2 x i32]* %tmp4, i64 0, i64 %idxprom
   %tmp5 = load i32* %arrayidx, align 4, !tbaa !3
   %tmp5.lobit = lshr i32 %tmp5, 31
   ret i32 %tmp5.lobit

Modified: llvm/trunk/test/Analysis/BasicAA/gep-alias.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/BasicAA/gep-alias.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/BasicAA/gep-alias.ll (original)
+++ llvm/trunk/test/Analysis/BasicAA/gep-alias.ll Fri Feb 27 13:29:02 2015
@@ -6,11 +6,11 @@ target datalayout = "e-p:32:32:32-p1:16:
 define i32 @test1(i8 * %P) {
 entry:
 	%Q = bitcast i8* %P to {i32, i32}*
-	%R = getelementptr {i32, i32}* %Q, i32 0, i32 1
+	%R = getelementptr {i32, i32}, {i32, i32}* %Q, i32 0, i32 1
 	%S = load i32* %R
 
 	%q = bitcast i8* %P to {i32, i32}*
-	%r = getelementptr {i32, i32}* %q, i32 0, i32 1
+	%r = getelementptr {i32, i32}, {i32, i32}* %q, i32 0, i32 1
 	%s = load i32* %r
 
 	%t = sub i32 %S, %s
@@ -22,10 +22,10 @@ entry:
 define i32 @test2(i8 * %P) {
 entry:
 	%Q = bitcast i8* %P to {i32, i32, i32}*
-	%R = getelementptr {i32, i32, i32}* %Q, i32 0, i32 1
+	%R = getelementptr {i32, i32, i32}, {i32, i32, i32}* %Q, i32 0, i32 1
 	%S = load i32* %R
 
-	%r = getelementptr {i32, i32, i32}* %Q, i32 0, i32 2
+	%r = getelementptr {i32, i32, i32}, {i32, i32, i32}* %Q, i32 0, i32 2
   store i32 42, i32* %r
 
 	%s = load i32* %R
@@ -40,11 +40,11 @@ entry:
 ; This was a miscompilation.
 define i32 @test3({float, {i32, i32, i32}}* %P) {
 entry:
-  %P2 = getelementptr {float, {i32, i32, i32}}* %P, i32 0, i32 1
-	%R = getelementptr {i32, i32, i32}* %P2, i32 0, i32 1
+  %P2 = getelementptr {float, {i32, i32, i32}}, {float, {i32, i32, i32}}* %P, i32 0, i32 1
+	%R = getelementptr {i32, i32, i32}, {i32, i32, i32}* %P2, i32 0, i32 1
 	%S = load i32* %R
 
-	%r = getelementptr {i32, i32, i32}* %P2, i32 0, i32 2
+	%r = getelementptr {i32, i32, i32}, {i32, i32, i32}* %P2, i32 0, i32 2
   store i32 42, i32* %r
 
 	%s = load i32* %R
@@ -62,9 +62,9 @@ entry:
 
 define i32 @test4(%SmallPtrSet64* %P) {
 entry:
-  %tmp2 = getelementptr inbounds %SmallPtrSet64* %P, i64 0, i32 0, i32 1
+  %tmp2 = getelementptr inbounds %SmallPtrSet64, %SmallPtrSet64* %P, i64 0, i32 0, i32 1
   store i32 64, i32* %tmp2, align 8
-  %tmp3 = getelementptr inbounds %SmallPtrSet64* %P, i64 0, i32 0, i32 4, i64 64
+  %tmp3 = getelementptr inbounds %SmallPtrSet64, %SmallPtrSet64* %P, i64 0, i32 0, i32 4, i64 64
   store i8* null, i8** %tmp3, align 8
   %tmp4 = load i32* %tmp2, align 8
 	ret i32 %tmp4
@@ -74,9 +74,9 @@ entry:
 
 ; P[i] != p[i+1]
 define i32 @test5(i32* %p, i64 %i) {
-  %pi = getelementptr i32* %p, i64 %i
+  %pi = getelementptr i32, i32* %p, i64 %i
   %i.next = add i64 %i, 1
-  %pi.next = getelementptr i32* %p, i64 %i.next
+  %pi.next = getelementptr i32, i32* %p, i64 %i.next
   %x = load i32* %pi
   store i32 42, i32* %pi.next
   %y = load i32* %pi
@@ -87,9 +87,9 @@ define i32 @test5(i32* %p, i64 %i) {
 }
 
 define i32 @test5_as1_smaller_size(i32 addrspace(1)* %p, i8 %i) {
-  %pi = getelementptr i32 addrspace(1)* %p, i8 %i
+  %pi = getelementptr i32, i32 addrspace(1)* %p, i8 %i
   %i.next = add i8 %i, 1
-  %pi.next = getelementptr i32 addrspace(1)* %p, i8 %i.next
+  %pi.next = getelementptr i32, i32 addrspace(1)* %p, i8 %i.next
   %x = load i32 addrspace(1)* %pi
   store i32 42, i32 addrspace(1)* %pi.next
   %y = load i32 addrspace(1)* %pi
@@ -101,9 +101,9 @@ define i32 @test5_as1_smaller_size(i32 a
 }
 
 define i32 @test5_as1_same_size(i32 addrspace(1)* %p, i16 %i) {
-  %pi = getelementptr i32 addrspace(1)* %p, i16 %i
+  %pi = getelementptr i32, i32 addrspace(1)* %p, i16 %i
   %i.next = add i16 %i, 1
-  %pi.next = getelementptr i32 addrspace(1)* %p, i16 %i.next
+  %pi.next = getelementptr i32, i32 addrspace(1)* %p, i16 %i.next
   %x = load i32 addrspace(1)* %pi
   store i32 42, i32 addrspace(1)* %pi.next
   %y = load i32 addrspace(1)* %pi
@@ -116,9 +116,9 @@ define i32 @test5_as1_same_size(i32 addr
 ; P[i] != p[(i*4)|1]
 define i32 @test6(i32* %p, i64 %i1) {
   %i = shl i64 %i1, 2
-  %pi = getelementptr i32* %p, i64 %i
+  %pi = getelementptr i32, i32* %p, i64 %i
   %i.next = or i64 %i, 1
-  %pi.next = getelementptr i32* %p, i64 %i.next
+  %pi.next = getelementptr i32, i32* %p, i64 %i.next
   %x = load i32* %pi
   store i32 42, i32* %pi.next
   %y = load i32* %pi
@@ -130,9 +130,9 @@ define i32 @test6(i32* %p, i64 %i1) {
 
 ; P[1] != P[i*4]
 define i32 @test7(i32* %p, i64 %i) {
-  %pi = getelementptr i32* %p, i64 1
+  %pi = getelementptr i32, i32* %p, i64 1
   %i.next = shl i64 %i, 2
-  %pi.next = getelementptr i32* %p, i64 %i.next
+  %pi.next = getelementptr i32, i32* %p, i64 %i.next
   %x = load i32* %pi
   store i32 42, i32* %pi.next
   %y = load i32* %pi
@@ -146,10 +146,10 @@ define i32 @test7(i32* %p, i64 %i) {
 ; PR1143
 define i32 @test8(i32* %p, i16 %i) {
   %i1 = zext i16 %i to i32
-  %pi = getelementptr i32* %p, i32 %i1
+  %pi = getelementptr i32, i32* %p, i32 %i1
   %i.next = add i16 %i, 1
   %i.next2 = zext i16 %i.next to i32
-  %pi.next = getelementptr i32* %p, i32 %i.next2
+  %pi.next = getelementptr i32, i32* %p, i32 %i.next2
   %x = load i32* %pi
   store i32 42, i32* %pi.next
   %y = load i32* %pi
@@ -163,12 +163,12 @@ define i8 @test9([4 x i8] *%P, i32 %i, i
   %i2 = shl i32 %i, 2
   %i3 = add i32 %i2, 1
   ; P2 = P + 1 + 4*i
-  %P2 = getelementptr [4 x i8] *%P, i32 0, i32 %i3
+  %P2 = getelementptr [4 x i8], [4 x i8] *%P, i32 0, i32 %i3
 
   %j2 = shl i32 %j, 2
 
   ; P4 = P + 4*j
-  %P4 = getelementptr [4 x i8]* %P, i32 0, i32 %j2
+  %P4 = getelementptr [4 x i8], [4 x i8]* %P, i32 0, i32 %j2
 
   %x = load i8* %P2
   store i8 42, i8* %P4
@@ -183,10 +183,10 @@ define i8 @test10([4 x i8] *%P, i32 %i)
   %i2 = shl i32 %i, 2
   %i3 = add i32 %i2, 4
   ; P2 = P + 4 + 4*i
-  %P2 = getelementptr [4 x i8] *%P, i32 0, i32 %i3
+  %P2 = getelementptr [4 x i8], [4 x i8] *%P, i32 0, i32 %i3
 
   ; P4 = P + 4*i
-  %P4 = getelementptr [4 x i8]* %P, i32 0, i32 %i2
+  %P4 = getelementptr [4 x i8], [4 x i8]* %P, i32 0, i32 %i2
 
   %x = load i8* %P2
   store i8 42, i8* %P4
@@ -201,10 +201,10 @@ define i8 @test10([4 x i8] *%P, i32 %i)
 define float @test11(i32 %indvar, [4 x [2 x float]]* %q) nounwind ssp {
   %tmp = mul i32 %indvar, -1
   %dec = add i32 %tmp, 3
-  %scevgep = getelementptr [4 x [2 x float]]* %q, i32 0, i32 %dec
+  %scevgep = getelementptr [4 x [2 x float]], [4 x [2 x float]]* %q, i32 0, i32 %dec
   %scevgep35 = bitcast [2 x float]* %scevgep to i64*
-  %arrayidx28 = getelementptr inbounds [4 x [2 x float]]* %q, i32 0, i32 0
-  %y29 = getelementptr inbounds [2 x float]* %arrayidx28, i32 0, i32 1
+  %arrayidx28 = getelementptr inbounds [4 x [2 x float]], [4 x [2 x float]]* %q, i32 0, i32 0
+  %y29 = getelementptr inbounds [2 x float], [2 x float]* %arrayidx28, i32 0, i32 1
   store float 1.0, float* %y29, align 4
   store i64 0, i64* %scevgep35, align 4
   %tmp30 = load float* %y29, align 4
@@ -216,9 +216,9 @@ define float @test11(i32 %indvar, [4 x [
 ; (This was a miscompilation.)
 define i32 @test12(i32 %x, i32 %y, i8* %p) nounwind {
   %a = bitcast i8* %p to [13 x i8]*
-  %b = getelementptr [13 x i8]* %a, i32 %x
+  %b = getelementptr [13 x i8], [13 x i8]* %a, i32 %x
   %c = bitcast [13 x i8]* %b to [15 x i8]*
-  %d = getelementptr [15 x i8]* %c, i32 %y, i32 8
+  %d = getelementptr [15 x i8], [15 x i8]* %c, i32 %y, i32 8
   %castd = bitcast i8* %d to i32*
   %castp = bitcast i8* %p to i32*
   store i32 1, i32* %castp

Modified: llvm/trunk/test/Analysis/BasicAA/global-size.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/BasicAA/global-size.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/BasicAA/global-size.ll (original)
+++ llvm/trunk/test/Analysis/BasicAA/global-size.ll Fri Feb 27 13:29:02 2015
@@ -35,9 +35,9 @@ define i16 @test1_as1(i32 addrspace(1)*
 ; CHECK-LABEL: @test2(
 define i8 @test2(i32 %tmp79, i32 %w.2, i32 %indvar89) nounwind {
   %tmp92 = add i32 %tmp79, %indvar89
-  %arrayidx412 = getelementptr [0 x i8]* @window, i32 0, i32 %tmp92
+  %arrayidx412 = getelementptr [0 x i8], [0 x i8]* @window, i32 0, i32 %tmp92
   %tmp93 = add i32 %w.2, %indvar89
-  %arrayidx416 = getelementptr [0 x i8]* @window, i32 0, i32 %tmp93
+  %arrayidx416 = getelementptr [0 x i8], [0 x i8]* @window, i32 0, i32 %tmp93
 
   %A = load i8* %arrayidx412, align 1
   store i8 4, i8* %arrayidx416, align 1

Modified: llvm/trunk/test/Analysis/BasicAA/intrinsics.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/BasicAA/intrinsics.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/BasicAA/intrinsics.ll (original)
+++ llvm/trunk/test/Analysis/BasicAA/intrinsics.ll Fri Feb 27 13:29:02 2015
@@ -21,13 +21,13 @@ entry:
 
 ; CHECK:      define <8 x i16> @test1(i8* %p, <8 x i16> %y) {
 ; CHECK-NEXT: entry:
-; CHECK-NEXT:   %q = getelementptr i8* %p, i64 16
+; CHECK-NEXT:   %q = getelementptr i8, i8* %p, i64 16
 ; CHECK-NEXT:   %a = call <8 x i16> @llvm.arm.neon.vld1.v8i16(i8* %p, i32 16) [[ATTR]]
 ; CHECK-NEXT:   call void @llvm.arm.neon.vst1.v8i16(i8* %q, <8 x i16> %y, i32 16)
 ; CHECK-NEXT:   %c = add <8 x i16> %a, %a
 define <8 x i16> @test1(i8* %p, <8 x i16> %y) {
 entry:
-  %q = getelementptr i8* %p, i64 16
+  %q = getelementptr i8, i8* %p, i64 16
   %a = call <8 x i16> @llvm.arm.neon.vld1.v8i16(i8* %p, i32 16) nounwind
   call void @llvm.arm.neon.vst1.v8i16(i8* %q, <8 x i16> %y, i32 16)
   %b = call <8 x i16> @llvm.arm.neon.vld1.v8i16(i8* %p, i32 16) nounwind

Modified: llvm/trunk/test/Analysis/BasicAA/modref.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/BasicAA/modref.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/BasicAA/modref.ll (original)
+++ llvm/trunk/test/Analysis/BasicAA/modref.ll Fri Feb 27 13:29:02 2015
@@ -36,7 +36,7 @@ define i8 @test1() {
 
 define i8 @test2(i8* %P) {
 ; CHECK-LABEL: @test2
-  %P2 = getelementptr i8* %P, i32 127
+  %P2 = getelementptr i8, i8* %P, i32 127
   store i8 1, i8* %P2  ;; Not dead across memset
   call void @llvm.memset.p0i8.i8(i8* %P, i8 2, i8 127, i32 0, i1 false)
   %A = load i8* %P2
@@ -46,7 +46,7 @@ define i8 @test2(i8* %P) {
 
 define i8 @test2a(i8* %P) {
 ; CHECK-LABEL: @test2
-  %P2 = getelementptr i8* %P, i32 126
+  %P2 = getelementptr i8, i8* %P, i32 126
 
   ;; FIXME: DSE isn't zapping this dead store.
   store i8 1, i8* %P2  ;; Dead, clobbered by memset.
@@ -64,7 +64,7 @@ define void @test3(i8* %P, i8 %X) {
 ; CHECK-NOT: %Y
   %Y = add i8 %X, 1     ;; Dead, because the only use (the store) is dead.
 
-  %P2 = getelementptr i8* %P, i32 2
+  %P2 = getelementptr i8, i8* %P, i32 2
   store i8 %Y, i8* %P2  ;; Not read by lifetime.end, should be removed.
 ; CHECK: store i8 2, i8* %P2
   call void @llvm.lifetime.end(i64 1, i8* %P)
@@ -78,7 +78,7 @@ define void @test3a(i8* %P, i8 %X) {
 ; CHECK-LABEL: @test3a
   %Y = add i8 %X, 1     ;; Dead, because the only use (the store) is dead.
 
-  %P2 = getelementptr i8* %P, i32 2
+  %P2 = getelementptr i8, i8* %P, i32 2
   store i8 %Y, i8* %P2
 ; CHECK-NEXT: call void @llvm.lifetime.end
   call void @llvm.lifetime.end(i64 10, i8* %P)
@@ -135,7 +135,7 @@ define i32 @test7() nounwind uwtable ssp
 entry:
   %x = alloca i32, align 4
   store i32 0, i32* %x, align 4
-  %add.ptr = getelementptr inbounds i32* %x, i64 1
+  %add.ptr = getelementptr inbounds i32, i32* %x, i64 1
   call void @test7decl(i32* %add.ptr)
   %tmp = load i32* %x, align 4
   ret i32 %tmp

Modified: llvm/trunk/test/Analysis/BasicAA/must-and-partial.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/BasicAA/must-and-partial.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/BasicAA/must-and-partial.ll (original)
+++ llvm/trunk/test/Analysis/BasicAA/must-and-partial.ll Fri Feb 27 13:29:02 2015
@@ -9,7 +9,7 @@ target datalayout = "e-p:64:64:64-i1:8:8
 ; CHECK: PartialAlias:  i16* %bigbase0, i8* %phi
 define i8 @test0(i8* %base, i1 %x) {
 entry:
-  %baseplusone = getelementptr i8* %base, i64 1
+  %baseplusone = getelementptr i8, i8* %base, i64 1
   br i1 %x, label %red, label %green
 red:
   br label %green
@@ -27,7 +27,7 @@ green:
 ; CHECK: PartialAlias:  i16* %bigbase1, i8* %sel
 define i8 @test1(i8* %base, i1 %x) {
 entry:
-  %baseplusone = getelementptr i8* %base, i64 1
+  %baseplusone = getelementptr i8, i8* %base, i64 1
   %sel = select i1 %x, i8* %baseplusone, i8* %base
   store i8 0, i8* %sel
 

Modified: llvm/trunk/test/Analysis/BasicAA/no-escape-call.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/BasicAA/no-escape-call.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/BasicAA/no-escape-call.ll (original)
+++ llvm/trunk/test/Analysis/BasicAA/no-escape-call.ll Fri Feb 27 13:29:02 2015
@@ -8,12 +8,12 @@ define i1 @foo(i32 %i) nounwind  {
 entry:
 	%arr = alloca [10 x i8*]		; <[10 x i8*]*> [#uses=1]
 	%tmp2 = call i8* @getPtr( ) nounwind 		; <i8*> [#uses=2]
-	%tmp4 = getelementptr [10 x i8*]* %arr, i32 0, i32 %i		; <i8**> [#uses=2]
+	%tmp4 = getelementptr [10 x i8*], [10 x i8*]* %arr, i32 0, i32 %i		; <i8**> [#uses=2]
 	store i8* %tmp2, i8** %tmp4, align 4
-	%tmp10 = getelementptr i8* %tmp2, i32 10		; <i8*> [#uses=1]
+	%tmp10 = getelementptr i8, i8* %tmp2, i32 10		; <i8*> [#uses=1]
 	store i8 42, i8* %tmp10, align 1
 	%tmp14 = load i8** %tmp4, align 4		; <i8*> [#uses=1]
-	%tmp16 = getelementptr i8* %tmp14, i32 10		; <i8*> [#uses=1]
+	%tmp16 = getelementptr i8, i8* %tmp14, i32 10		; <i8*> [#uses=1]
 	%tmp17 = load i8* %tmp16, align 1		; <i8> [#uses=1]
 	%tmp19 = icmp eq i8 %tmp17, 42		; <i1> [#uses=1]
 	ret i1 %tmp19

Modified: llvm/trunk/test/Analysis/BasicAA/noalias-bugs.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/BasicAA/noalias-bugs.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/BasicAA/noalias-bugs.ll (original)
+++ llvm/trunk/test/Analysis/BasicAA/noalias-bugs.ll Fri Feb 27 13:29:02 2015
@@ -12,12 +12,12 @@ target triple = "x86_64-unknown-linux-gn
 
 define i64 @testcase(%nested * noalias %p1, %nested * noalias %p2,
                      i32 %a, i32 %b) {
-  %ptr = getelementptr inbounds %nested* %p1, i64 -1, i32 0
-  %ptr.64 = getelementptr inbounds %nested.i64* %ptr, i64 0, i32 0
-  %ptr2= getelementptr inbounds %nested* %p2, i64 0, i32 0
+  %ptr = getelementptr inbounds %nested, %nested* %p1, i64 -1, i32 0
+  %ptr.64 = getelementptr inbounds %nested.i64, %nested.i64* %ptr, i64 0, i32 0
+  %ptr2= getelementptr inbounds %nested, %nested* %p2, i64 0, i32 0
   %cmp = icmp ult i32 %a, %b
   %either_ptr = select i1 %cmp, %nested.i64* %ptr2, %nested.i64* %ptr
-  %either_ptr.64 = getelementptr inbounds %nested.i64* %either_ptr, i64 0, i32 0
+  %either_ptr.64 = getelementptr inbounds %nested.i64, %nested.i64* %either_ptr, i64 0, i32 0
 
 ; Because either_ptr.64 and ptr.64 can alias (we used to return noalias)
 ; elimination of the first store is not valid.

Modified: llvm/trunk/test/Analysis/BasicAA/noalias-geps.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/BasicAA/noalias-geps.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/BasicAA/noalias-geps.ll (original)
+++ llvm/trunk/test/Analysis/BasicAA/noalias-geps.ll Fri Feb 27 13:29:02 2015
@@ -5,26 +5,26 @@ target datalayout = "e-p:32:32:32-i1:8:8
 ; Check that geps with equal base offsets of noalias base pointers stay noalias.
 define i32 @test(i32* %p, i16 %i) {
 ; CHECK-LABEL: Function: test:
-  %pi = getelementptr i32* %p, i32 0
-  %pi.next = getelementptr i32* %p, i32 1
+  %pi = getelementptr i32, i32* %p, i32 0
+  %pi.next = getelementptr i32, i32* %p, i32 1
   %b = icmp eq i16 %i, 0
   br i1 %b, label %bb1, label %bb2
 
 bb1:
-  %f = getelementptr i32* %pi, i32 1
-  %g = getelementptr i32* %pi.next, i32 1
+  %f = getelementptr i32, i32* %pi, i32 1
+  %g = getelementptr i32, i32* %pi.next, i32 1
   br label %bb3
 bb2:
-  %f2 = getelementptr i32* %pi, i32 1
-  %g2 = getelementptr i32* %pi.next, i32 1
+  %f2 = getelementptr i32, i32* %pi, i32 1
+  %g2 = getelementptr i32, i32* %pi.next, i32 1
   br label %bb3
 
 bb3:
   %ptr_phi = phi i32* [ %f, %bb1 ], [ %f2, %bb2 ]
   %ptr_phi2 = phi i32* [ %g, %bb1 ], [ %g2, %bb2 ]
 ; CHECK: NoAlias: i32* %f1, i32* %g1
-  %f1 = getelementptr i32* %ptr_phi , i32 1
-  %g1 = getelementptr i32* %ptr_phi2 , i32 1
+  %f1 = getelementptr i32, i32* %ptr_phi , i32 1
+  %g1 = getelementptr i32, i32* %ptr_phi2 , i32 1
 
 ret i32 0
 }
@@ -32,25 +32,25 @@ ret i32 0
 ; Check that geps with equal indices of noalias base pointers stay noalias.
 define i32 @test2([2 x i32]* %p, i32 %i) {
 ; CHECK-LABEL: Function: test2:
-  %pi = getelementptr [2 x i32]* %p, i32 0
-  %pi.next = getelementptr [2 x i32]* %p, i32 1
+  %pi = getelementptr [2 x i32], [2 x i32]* %p, i32 0
+  %pi.next = getelementptr [2 x i32], [2 x i32]* %p, i32 1
   %b = icmp eq i32 %i, 0
   br i1 %b, label %bb1, label %bb2
 
 bb1:
-  %f = getelementptr [2 x i32]* %pi, i32 1
-  %g = getelementptr [2 x i32]* %pi.next, i32 1
+  %f = getelementptr [2 x i32], [2 x i32]* %pi, i32 1
+  %g = getelementptr [2 x i32], [2 x i32]* %pi.next, i32 1
   br label %bb3
 bb2:
-  %f2 = getelementptr [2 x i32]* %pi, i32 1
-  %g2 = getelementptr [2 x i32]* %pi.next, i32 1
+  %f2 = getelementptr [2 x i32], [2 x i32]* %pi, i32 1
+  %g2 = getelementptr [2 x i32], [2 x i32]* %pi.next, i32 1
   br label %bb3
 bb3:
   %ptr_phi = phi [2 x i32]* [ %f, %bb1 ], [ %f2, %bb2 ]
   %ptr_phi2 = phi [2 x i32]* [ %g, %bb1 ], [ %g2, %bb2 ]
 ; CHECK: NoAlias: i32* %f1, i32* %g1
-  %f1 = getelementptr [2 x i32]* %ptr_phi , i32 1, i32 %i
-  %g1 = getelementptr [2 x i32]* %ptr_phi2 , i32 1, i32 %i
+  %f1 = getelementptr [2 x i32], [2 x i32]* %ptr_phi , i32 1, i32 %i
+  %g1 = getelementptr [2 x i32], [2 x i32]* %ptr_phi2 , i32 1, i32 %i
 
 ret i32 0
 }

Modified: llvm/trunk/test/Analysis/BasicAA/phi-aa.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/BasicAA/phi-aa.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/BasicAA/phi-aa.ll (original)
+++ llvm/trunk/test/Analysis/BasicAA/phi-aa.ll Fri Feb 27 13:29:02 2015
@@ -54,7 +54,7 @@ codeRepl:
 for.body:
   %1 = load i32* %jj7, align 4
   %idxprom4 = zext i32 %1 to i64
-  %arrayidx5 = getelementptr inbounds [100 x i32]* %oa5, i64 0, i64 %idxprom4
+  %arrayidx5 = getelementptr inbounds [100 x i32], [100 x i32]* %oa5, i64 0, i64 %idxprom4
   %2 = load i32* %arrayidx5, align 4
   %sub6 = sub i32 %2, 6
   store i32 %sub6, i32* %arrayidx5, align 4
@@ -63,7 +63,7 @@ for.body:
   store i32 %3, i32* %arrayidx5, align 4
   %sub11 = add i32 %1, -1
   %idxprom12 = zext i32 %sub11 to i64
-  %arrayidx13 = getelementptr inbounds [100 x i32]* %oa5, i64 0, i64 %idxprom12
+  %arrayidx13 = getelementptr inbounds [100 x i32], [100 x i32]* %oa5, i64 0, i64 %idxprom12
   call void @inc(i32* %jj7)
   br label %codeRepl
 

Modified: llvm/trunk/test/Analysis/BasicAA/phi-spec-order.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/BasicAA/phi-spec-order.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/BasicAA/phi-spec-order.ll (original)
+++ llvm/trunk/test/Analysis/BasicAA/phi-spec-order.ll Fri Feb 27 13:29:02 2015
@@ -23,23 +23,23 @@ for.body4:
   %lsr.iv = phi i32 [ %lsr.iv.next, %for.body4 ], [ 16000, %for.cond2.preheader ]
   %lsr.iv46 = bitcast [16000 x double]* %lsr.iv4 to <4 x double>*
   %lsr.iv12 = bitcast [16000 x double]* %lsr.iv1 to <4 x double>*
-  %scevgep11 = getelementptr <4 x double>* %lsr.iv46, i64 -2
+  %scevgep11 = getelementptr <4 x double>, <4 x double>* %lsr.iv46, i64 -2
   %i6 = load <4 x double>* %scevgep11, align 32
   %add = fadd <4 x double> %i6, <double 1.000000e+00, double 1.000000e+00, double 1.000000e+00, double 1.000000e+00>
   store <4 x double> %add, <4 x double>* %lsr.iv12, align 32
-  %scevgep10 = getelementptr <4 x double>* %lsr.iv46, i64 -1
+  %scevgep10 = getelementptr <4 x double>, <4 x double>* %lsr.iv46, i64 -1
   %i7 = load <4 x double>* %scevgep10, align 32
   %add.4 = fadd <4 x double> %i7, <double 1.000000e+00, double 1.000000e+00, double 1.000000e+00, double 1.000000e+00>
-  %scevgep9 = getelementptr <4 x double>* %lsr.iv12, i64 1
+  %scevgep9 = getelementptr <4 x double>, <4 x double>* %lsr.iv12, i64 1
   store <4 x double> %add.4, <4 x double>* %scevgep9, align 32
   %i8 = load <4 x double>* %lsr.iv46, align 32
   %add.8 = fadd <4 x double> %i8, <double 1.000000e+00, double 1.000000e+00, double 1.000000e+00, double 1.000000e+00>
-  %scevgep8 = getelementptr <4 x double>* %lsr.iv12, i64 2
+  %scevgep8 = getelementptr <4 x double>, <4 x double>* %lsr.iv12, i64 2
   store <4 x double> %add.8, <4 x double>* %scevgep8, align 32
-  %scevgep7 = getelementptr <4 x double>* %lsr.iv46, i64 1
+  %scevgep7 = getelementptr <4 x double>, <4 x double>* %lsr.iv46, i64 1
   %i9 = load <4 x double>* %scevgep7, align 32
   %add.12 = fadd <4 x double> %i9, <double 1.000000e+00, double 1.000000e+00, double 1.000000e+00, double 1.000000e+00>
-  %scevgep3 = getelementptr <4 x double>* %lsr.iv12, i64 3
+  %scevgep3 = getelementptr <4 x double>, <4 x double>* %lsr.iv12, i64 3
   store <4 x double> %add.12, <4 x double>* %scevgep3, align 32
 
 ; CHECK: NoAlias:{{[ \t]+}}<4 x double>* %scevgep11, <4 x double>* %scevgep7
@@ -50,9 +50,9 @@ for.body4:
 ; CHECK: NoAlias:{{[ \t]+}}<4 x double>* %scevgep3, <4 x double>* %scevgep9
 
   %lsr.iv.next = add i32 %lsr.iv, -16
-  %scevgep = getelementptr [16000 x double]* %lsr.iv1, i64 0, i64 16
+  %scevgep = getelementptr [16000 x double], [16000 x double]* %lsr.iv1, i64 0, i64 16
   %i10 = bitcast double* %scevgep to [16000 x double]*
-  %scevgep5 = getelementptr [16000 x double]* %lsr.iv4, i64 0, i64 16
+  %scevgep5 = getelementptr [16000 x double], [16000 x double]* %lsr.iv4, i64 0, i64 16
   %i11 = bitcast double* %scevgep5 to [16000 x double]*
   %exitcond.15 = icmp eq i32 %lsr.iv.next, 0
   br i1 %exitcond.15, label %for.end, label %for.body4

Modified: llvm/trunk/test/Analysis/BasicAA/phi-speculation.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/BasicAA/phi-speculation.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/BasicAA/phi-speculation.ll (original)
+++ llvm/trunk/test/Analysis/BasicAA/phi-speculation.ll Fri Feb 27 13:29:02 2015
@@ -8,7 +8,7 @@ target datalayout =
 ; CHECK: NoAlias: i32* %ptr2_phi, i32* %ptr_phi
 define i32 @test_noalias_1(i32* %ptr2, i32 %count, i32* %coeff) {
 entry:
-  %ptr = getelementptr inbounds i32* %ptr2, i64 1
+  %ptr = getelementptr inbounds i32, i32* %ptr2, i64 1
   br label %while.body
 
 while.body:
@@ -24,8 +24,8 @@ while.body:
   %mul = mul nsw i32 %1, %2
   %add = add nsw i32 %mul, %result.09
   %tobool = icmp eq i32 %dec, 0
-  %ptr_inc = getelementptr inbounds i32* %ptr_phi, i64 1
-  %ptr2_inc = getelementptr inbounds i32* %ptr2_phi, i64 1
+  %ptr_inc = getelementptr inbounds i32, i32* %ptr_phi, i64 1
+  %ptr2_inc = getelementptr inbounds i32, i32* %ptr2_phi, i64 1
   br i1 %tobool, label %the_exit, label %while.body
 
 the_exit:
@@ -37,7 +37,7 @@ the_exit:
 ; CHECK: NoAlias: i32* %ptr2_phi, i32* %ptr_phi
 define i32 @test_noalias_2(i32* %ptr2, i32 %count, i32* %coeff) {
 entry:
-  %ptr = getelementptr inbounds i32* %ptr2, i64 1
+  %ptr = getelementptr inbounds i32, i32* %ptr2, i64 1
   br label %outer.while.header
 
 outer.while.header:
@@ -59,13 +59,13 @@ while.body:
   %mul = mul nsw i32 %1, %2
   %add = add nsw i32 %mul, %result.09
   %tobool = icmp eq i32 %dec, 0
-  %ptr_inc = getelementptr inbounds i32* %ptr_phi, i64 1
-  %ptr2_inc = getelementptr inbounds i32* %ptr2_phi, i64 1
+  %ptr_inc = getelementptr inbounds i32, i32* %ptr_phi, i64 1
+  %ptr2_inc = getelementptr inbounds i32, i32* %ptr2_phi, i64 1
   br i1 %tobool, label %outer.while.backedge, label %while.body
 
 outer.while.backedge:
-  %ptr_inc_outer = getelementptr inbounds i32* %ptr_phi, i64 1
-  %ptr2_inc_outer = getelementptr inbounds i32* %ptr2_phi, i64 1
+  %ptr_inc_outer = getelementptr inbounds i32, i32* %ptr_phi, i64 1
+  %ptr2_inc_outer = getelementptr inbounds i32, i32* %ptr2_phi, i64 1
   %dec.outer = add nsw i32 %num.outer, -1
   %br.cond = icmp eq i32 %dec.outer, 0
   br i1 %br.cond, label %the_exit, label %outer.while.header

Modified: llvm/trunk/test/Analysis/BasicAA/pr18573.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/BasicAA/pr18573.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/BasicAA/pr18573.ll (original)
+++ llvm/trunk/test/Analysis/BasicAA/pr18573.ll Fri Feb 27 13:29:02 2015
@@ -11,7 +11,7 @@ declare <8 x float> @llvm.x86.avx2.gathe
 define <8 x float> @foo1(i8* noalias readonly %arr.ptr, <8 x i32>* noalias readonly %vix.ptr, i8* noalias %t2.ptr) #1 {
 allocas:
   %vix = load <8 x i32>* %vix.ptr, align 4
-  %t1.ptr = getelementptr i8* %arr.ptr, i8 4
+  %t1.ptr = getelementptr i8, i8* %arr.ptr, i8 4
   
   %v1 = tail call <8 x float> @llvm.x86.avx2.gather.d.ps.256(<8 x float> undef, i8* %arr.ptr, <8 x i32> %vix, <8 x float> <float 0xFFFFFFFFE0000000, float 0xFFFFFFFFE0000000, float 0xFFFFFFFFE0000000, float 0xFFFFFFFFE0000000, float 0xFFFFFFFFE0000000, float 0xFFFFFFFFE0000000, float 0xFFFFFFFFE0000000, float 0xFFFFFFFFE0000000>, i8 1) #2
   store i8 1, i8* %t1.ptr, align 4
@@ -32,7 +32,7 @@ allocas:
 define <8 x float> @foo2(i8* noalias readonly %arr.ptr, <8 x i32>* noalias readonly %vix.ptr, i8* noalias %t2.ptr) #1 {
 allocas:
   %vix = load <8 x i32>* %vix.ptr, align 4
-  %t1.ptr = getelementptr i8* %arr.ptr, i8 4
+  %t1.ptr = getelementptr i8, i8* %arr.ptr, i8 4
   
   %v1 = tail call <8 x float> @llvm.x86.avx2.gather.d.ps.256(<8 x float> undef, i8* %arr.ptr, <8 x i32> %vix, <8 x float> <float 0xFFFFFFFFE0000000, float 0xFFFFFFFFE0000000, float 0xFFFFFFFFE0000000, float 0xFFFFFFFFE0000000, float 0xFFFFFFFFE0000000, float 0xFFFFFFFFE0000000, float 0xFFFFFFFFE0000000, float 0xFFFFFFFFE0000000>, i8 1) #2
   store i8 1, i8* %t2.ptr, align 4

Modified: llvm/trunk/test/Analysis/BasicAA/store-promote.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/BasicAA/store-promote.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/BasicAA/store-promote.ll (original)
+++ llvm/trunk/test/Analysis/BasicAA/store-promote.ll Fri Feb 27 13:29:02 2015
@@ -36,10 +36,10 @@ define i32 @test2(i1 %c) {
 
 Loop:           ; preds = %Loop, %0
         %AVal = load i32* @A            ; <i32> [#uses=2]
-        %C0 = getelementptr [2 x i32]* @C, i64 0, i64 0         ; <i32*> [#uses=1]
+        %C0 = getelementptr [2 x i32], [2 x i32]* @C, i64 0, i64 0         ; <i32*> [#uses=1]
         store i32 %AVal, i32* %C0
         %BVal = load i32* @B            ; <i32> [#uses=2]
-        %C1 = getelementptr [2 x i32]* @C, i64 0, i64 1         ; <i32*> [#uses=1]
+        %C1 = getelementptr [2 x i32], [2 x i32]* @C, i64 0, i64 1         ; <i32*> [#uses=1]
         store i32 %BVal, i32* %C1
         br i1 %c, label %Out, label %Loop
 

Modified: llvm/trunk/test/Analysis/BasicAA/struct-geps.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/BasicAA/struct-geps.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/BasicAA/struct-geps.ll (original)
+++ llvm/trunk/test/Analysis/BasicAA/struct-geps.ll Fri Feb 27 13:29:02 2015
@@ -27,9 +27,9 @@ target datalayout = "e-m:e-i64:64-f80:12
 ; CHECK-DAG: MustAlias: i32* %y, i80* %y_10
 
 define void @test_simple(%struct* %st, i64 %i, i64 %j, i64 %k) {
-  %x = getelementptr %struct* %st, i64 %i, i32 0
-  %y = getelementptr %struct* %st, i64 %j, i32 1
-  %z = getelementptr %struct* %st, i64 %k, i32 2
+  %x = getelementptr %struct, %struct* %st, i64 %i, i32 0
+  %y = getelementptr %struct, %struct* %st, i64 %j, i32 1
+  %z = getelementptr %struct, %struct* %st, i64 %k, i32 2
   %y_12 = bitcast i32* %y to %struct*
   %y_10 = bitcast i32* %y to i80*
   %y_8 = bitcast i32* %y to i64*
@@ -59,9 +59,9 @@ define void @test_simple(%struct* %st, i
 ; CHECK-DAG: MustAlias: i32* %y, i80* %y_10
 
 define void @test_in_array([1 x %struct]* %st, i64 %i, i64 %j, i64 %k, i64 %i1, i64 %j1, i64 %k1) {
-  %x = getelementptr [1 x %struct]* %st, i64 %i, i64 %i1, i32 0
-  %y = getelementptr [1 x %struct]* %st, i64 %j, i64 %j1, i32 1
-  %z = getelementptr [1 x %struct]* %st, i64 %k, i64 %k1, i32 2
+  %x = getelementptr [1 x %struct], [1 x %struct]* %st, i64 %i, i64 %i1, i32 0
+  %y = getelementptr [1 x %struct], [1 x %struct]* %st, i64 %j, i64 %j1, i32 1
+  %z = getelementptr [1 x %struct], [1 x %struct]* %st, i64 %k, i64 %k1, i32 2
   %y_12 = bitcast i32* %y to %struct*
   %y_10 = bitcast i32* %y to i80*
   %y_8 = bitcast i32* %y to i64*
@@ -91,9 +91,9 @@ define void @test_in_array([1 x %struct]
 ; CHECK-DAG: MustAlias: i32* %y, i80* %y_10
 
 define void @test_in_3d_array([1 x [1 x [1 x %struct]]]* %st, i64 %i, i64 %j, i64 %k, i64 %i1, i64 %j1, i64 %k1, i64 %i2, i64 %j2, i64 %k2, i64 %i3, i64 %j3, i64 %k3) {
-  %x = getelementptr [1 x [1 x [1 x %struct]]]* %st, i64 %i, i64 %i1, i64 %i2, i64 %i3, i32 0
-  %y = getelementptr [1 x [1 x [1 x %struct]]]* %st, i64 %j, i64 %j1, i64 %j2, i64 %j3, i32 1
-  %z = getelementptr [1 x [1 x [1 x %struct]]]* %st, i64 %k, i64 %k1, i64 %k2, i64 %k3, i32 2
+  %x = getelementptr [1 x [1 x [1 x %struct]]], [1 x [1 x [1 x %struct]]]* %st, i64 %i, i64 %i1, i64 %i2, i64 %i3, i32 0
+  %y = getelementptr [1 x [1 x [1 x %struct]]], [1 x [1 x [1 x %struct]]]* %st, i64 %j, i64 %j1, i64 %j2, i64 %j3, i32 1
+  %z = getelementptr [1 x [1 x [1 x %struct]]], [1 x [1 x [1 x %struct]]]* %st, i64 %k, i64 %k1, i64 %k2, i64 %k3, i32 2
   %y_12 = bitcast i32* %y to %struct*
   %y_10 = bitcast i32* %y to i80*
   %y_8 = bitcast i32* %y to i64*
@@ -116,13 +116,13 @@ define void @test_in_3d_array([1 x [1 x
 ; CHECK-DAG: PartialAlias: i32* %y2, i32* %z
 
 define void @test_same_underlying_object_same_indices(%struct* %st, i64 %i, i64 %j, i64 %k) {
-  %st2 = getelementptr %struct* %st, i32 10
-  %x2 = getelementptr %struct* %st2, i64 %i, i32 0
-  %y2 = getelementptr %struct* %st2, i64 %j, i32 1
-  %z2 = getelementptr %struct* %st2, i64 %k, i32 2
-  %x = getelementptr %struct* %st, i64 %i, i32 0
-  %y = getelementptr %struct* %st, i64 %j, i32 1
-  %z = getelementptr %struct* %st, i64 %k, i32 2
+  %st2 = getelementptr %struct, %struct* %st, i32 10
+  %x2 = getelementptr %struct, %struct* %st2, i64 %i, i32 0
+  %y2 = getelementptr %struct, %struct* %st2, i64 %j, i32 1
+  %z2 = getelementptr %struct, %struct* %st2, i64 %k, i32 2
+  %x = getelementptr %struct, %struct* %st, i64 %i, i32 0
+  %y = getelementptr %struct, %struct* %st, i64 %j, i32 1
+  %z = getelementptr %struct, %struct* %st, i64 %k, i32 2
   ret void
 }
 
@@ -142,13 +142,13 @@ define void @test_same_underlying_object
 ; CHECK-DAG: PartialAlias: i32* %y2, i32* %z
 
 define void @test_same_underlying_object_different_indices(%struct* %st, i64 %i1, i64 %j1, i64 %k1, i64 %i2, i64 %k2, i64 %j2) {
-  %st2 = getelementptr %struct* %st, i32 10
-  %x2 = getelementptr %struct* %st2, i64 %i2, i32 0
-  %y2 = getelementptr %struct* %st2, i64 %j2, i32 1
-  %z2 = getelementptr %struct* %st2, i64 %k2, i32 2
-  %x = getelementptr %struct* %st, i64 %i1, i32 0
-  %y = getelementptr %struct* %st, i64 %j1, i32 1
-  %z = getelementptr %struct* %st, i64 %k1, i32 2
+  %st2 = getelementptr %struct, %struct* %st, i32 10
+  %x2 = getelementptr %struct, %struct* %st2, i64 %i2, i32 0
+  %y2 = getelementptr %struct, %struct* %st2, i64 %j2, i32 1
+  %z2 = getelementptr %struct, %struct* %st2, i64 %k2, i32 2
+  %x = getelementptr %struct, %struct* %st, i64 %i1, i32 0
+  %y = getelementptr %struct, %struct* %st, i64 %j1, i32 1
+  %z = getelementptr %struct, %struct* %st, i64 %k1, i32 2
   ret void
 }
 
@@ -158,7 +158,7 @@ define void @test_same_underlying_object
 ; CHECK-LABEL: test_struct_in_array
 ; CHECK-DAG: MustAlias: i32* %x, i32* %y
 define void @test_struct_in_array(%struct2* %st, i64 %i, i64 %j, i64 %k) {
-  %x = getelementptr %struct2* %st, i32 0, i32 1, i32 1, i32 0
-  %y = getelementptr %struct2* %st, i32 0, i32 0, i32 1, i32 1
+  %x = getelementptr %struct2, %struct2* %st, i32 0, i32 1, i32 1, i32 0
+  %y = getelementptr %struct2, %struct2* %st, i32 0, i32 0, i32 1, i32 1
   ret void
 }

Modified: llvm/trunk/test/Analysis/BasicAA/underlying-value.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/BasicAA/underlying-value.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/BasicAA/underlying-value.ll (original)
+++ llvm/trunk/test/Analysis/BasicAA/underlying-value.ll Fri Feb 27 13:29:02 2015
@@ -14,9 +14,9 @@ for.cond2:
   br i1 false, label %for.body5, label %for.cond
 
 for.body5:                                        ; preds = %for.cond2
-  %arrayidx = getelementptr inbounds [2 x i64]* undef, i32 0, i64 0
+  %arrayidx = getelementptr inbounds [2 x i64], [2 x i64]* undef, i32 0, i64 0
   %tmp7 = load i64* %arrayidx, align 8
-  %arrayidx9 = getelementptr inbounds [2 x i64]* undef, i32 0, i64 undef
+  %arrayidx9 = getelementptr inbounds [2 x i64], [2 x i64]* undef, i32 0, i64 undef
   %tmp10 = load i64* %arrayidx9, align 8
   br label %for.cond2
 

Modified: llvm/trunk/test/Analysis/BasicAA/unreachable-block.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/BasicAA/unreachable-block.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/BasicAA/unreachable-block.ll (original)
+++ llvm/trunk/test/Analysis/BasicAA/unreachable-block.ll Fri Feb 27 13:29:02 2015
@@ -11,6 +11,6 @@ bb:
   %t = select i1 undef, i32* %t, i32* undef
   %p = select i1 undef, i32* %p, i32* %p
   %q = select i1 undef, i32* undef, i32* %p
-  %a = getelementptr i8* %a, i32 0
+  %a = getelementptr i8, i8* %a, i32 0
   unreachable
 }

Modified: llvm/trunk/test/Analysis/BasicAA/zext.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/BasicAA/zext.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/BasicAA/zext.ll (original)
+++ llvm/trunk/test/Analysis/BasicAA/zext.ll Fri Feb 27 13:29:02 2015
@@ -7,10 +7,10 @@ target triple = "x86_64-unknown-linux-gn
 
 define void @test_with_zext() {
   %1 = tail call i8* @malloc(i64 120)
-  %a = getelementptr inbounds i8* %1, i64 8
-  %2 = getelementptr inbounds i8* %1, i64 16
+  %a = getelementptr inbounds i8, i8* %1, i64 8
+  %2 = getelementptr inbounds i8, i8* %1, i64 16
   %3 = zext i32 3 to i64
-  %b = getelementptr inbounds i8* %2, i64 %3
+  %b = getelementptr inbounds i8, i8* %2, i64 %3
   ret void
 }
 
@@ -19,10 +19,10 @@ define void @test_with_zext() {
 
 define void @test_with_lshr(i64 %i) {
   %1 = tail call i8* @malloc(i64 120)
-  %a = getelementptr inbounds i8* %1, i64 8
-  %2 = getelementptr inbounds i8* %1, i64 16
+  %a = getelementptr inbounds i8, i8* %1, i64 8
+  %2 = getelementptr inbounds i8, i8* %1, i64 16
   %3 = lshr i64 %i, 2
-  %b = getelementptr inbounds i8* %2, i64 %3
+  %b = getelementptr inbounds i8, i8* %2, i64 %3
   ret void
 }
 
@@ -34,10 +34,10 @@ define void @test_with_a_loop(i8* %mem)
 
 for.loop:
   %i = phi i32 [ 0, %0 ], [ %i.plus1, %for.loop ]
-  %a = getelementptr inbounds i8* %mem, i64 8
-  %a.plus1 = getelementptr inbounds i8* %mem, i64 16
+  %a = getelementptr inbounds i8, i8* %mem, i64 8
+  %a.plus1 = getelementptr inbounds i8, i8* %mem, i64 16
   %i.64 = zext i32 %i to i64
-  %b = getelementptr inbounds i8* %a.plus1, i64 %i.64
+  %b = getelementptr inbounds i8, i8* %a.plus1, i64 %i.64
   %i.plus1 = add nuw nsw i32 %i, 1
   %cmp = icmp eq i32 %i.plus1, 10
   br i1 %cmp, label %for.loop.exit, label %for.loop
@@ -55,12 +55,12 @@ define void @test_with_varying_base_poin
 for.loop:
   %mem = phi i8* [ %mem.orig, %0 ], [ %mem.plus1, %for.loop ]
   %i = phi i32 [ 0, %0 ], [ %i.plus1, %for.loop ]
-  %a = getelementptr inbounds i8* %mem, i64 8
-  %a.plus1 = getelementptr inbounds i8* %mem, i64 16
+  %a = getelementptr inbounds i8, i8* %mem, i64 8
+  %a.plus1 = getelementptr inbounds i8, i8* %mem, i64 16
   %i.64 = zext i32 %i to i64
-  %b = getelementptr inbounds i8* %a.plus1, i64 %i.64
+  %b = getelementptr inbounds i8, i8* %a.plus1, i64 %i.64
   %i.plus1 = add nuw nsw i32 %i, 1
-  %mem.plus1 = getelementptr inbounds i8* %mem, i64 8
+  %mem.plus1 = getelementptr inbounds i8, i8* %mem, i64 8
   %cmp = icmp eq i32 %i.plus1, 10
   br i1 %cmp, label %for.loop.exit, label %for.loop
 
@@ -74,10 +74,10 @@ for.loop.exit:
 define void @test_sign_extension(i32 %p) {
   %1 = tail call i8* @malloc(i64 120)
   %p.64 = zext i32 %p to i64
-  %a = getelementptr inbounds i8* %1, i64 %p.64
+  %a = getelementptr inbounds i8, i8* %1, i64 %p.64
   %p.minus1 = add i32 %p, -1
   %p.minus1.64 = zext i32 %p.minus1 to i64
-  %b.i8 = getelementptr inbounds i8* %1, i64 %p.minus1.64
+  %b.i8 = getelementptr inbounds i8, i8* %1, i64 %p.minus1.64
   %b.i64 = bitcast i8* %b.i8 to i64*
   ret void
 }
@@ -91,13 +91,13 @@ define void @test_fe_tools([8 x i32]* %v
 for.loop:
   %i = phi i32 [ 0, %reorder ], [ %i.next, %for.loop ]
   %idxprom = zext i32 %i to i64
-  %b = getelementptr inbounds [8 x i32]* %values, i64 0, i64 %idxprom
+  %b = getelementptr inbounds [8 x i32], [8 x i32]* %values, i64 0, i64 %idxprom
   %i.next = add nuw nsw i32 %i, 1
   %1 = icmp eq i32 %i.next, 10
   br i1 %1, label %for.loop.exit, label %for.loop
 
 reorder:
-  %a = getelementptr inbounds [8 x i32]* %values, i64 0, i64 1
+  %a = getelementptr inbounds [8 x i32], [8 x i32]* %values, i64 0, i64 1
   br label %for.loop
 
 for.loop.exit:
@@ -123,13 +123,13 @@ define void @test_spec2006() {
 ; <label>:2                                       ; preds = %.lr.ph, %2
   %i = phi i32 [ %d.val, %.lr.ph ], [ %i.plus1, %2 ]
   %i.promoted = sext i32 %i to i64
-  %x = getelementptr inbounds [1 x [2 x i32*]]* %h, i64 0, i64 %d.promoted, i64 %i.promoted
+  %x = getelementptr inbounds [1 x [2 x i32*]], [1 x [2 x i32*]]* %h, i64 0, i64 %d.promoted, i64 %i.promoted
   %i.plus1 = add nsw i32 %i, 1
   %cmp = icmp slt i32 %i.plus1, 2
   br i1 %cmp, label %2, label %3
 
 ; <label>:3                                      ; preds = %._crit_edge, %0
-  %y = getelementptr inbounds [1 x [2 x i32*]]* %h, i64 0, i64 0, i64 1
+  %y = getelementptr inbounds [1 x [2 x i32*]], [1 x [2 x i32*]]* %h, i64 0, i64 0, i64 1
   ret void
 }
 
@@ -138,8 +138,8 @@ define void @test_spec2006() {
 
 define void @test_modulo_analysis_easy_case(i64 %i) {
   %h = alloca [1 x [2 x i32*]], align 16
-  %x = getelementptr inbounds [1 x [2 x i32*]]* %h, i64 0, i64 %i, i64 0
-  %y = getelementptr inbounds [1 x [2 x i32*]]* %h, i64 0, i64 0, i64 1
+  %x = getelementptr inbounds [1 x [2 x i32*]], [1 x [2 x i32*]]* %h, i64 0, i64 %i, i64 0
+  %y = getelementptr inbounds [1 x [2 x i32*]], [1 x [2 x i32*]]* %h, i64 0, i64 0, i64 1
   ret void
 }
 
@@ -153,8 +153,8 @@ define void @test_modulo_analysis_in_loo
 for.loop:
   %i = phi i32 [ 0, %0 ], [ %i.plus1, %for.loop ]
   %i.promoted = sext i32 %i to i64
-  %x = getelementptr inbounds [1 x [2 x i32*]]* %h, i64 0, i64 %i.promoted, i64 0
-  %y = getelementptr inbounds [1 x [2 x i32*]]* %h, i64 0, i64 0, i64 1
+  %x = getelementptr inbounds [1 x [2 x i32*]], [1 x [2 x i32*]]* %h, i64 0, i64 %i.promoted, i64 0
+  %y = getelementptr inbounds [1 x [2 x i32*]], [1 x [2 x i32*]]* %h, i64 0, i64 0, i64 1
   %i.plus1 = add nsw i32 %i, 1
   %cmp = icmp slt i32 %i.plus1, 2
   br i1 %cmp, label %for.loop, label %for.loop.exit
@@ -175,8 +175,8 @@ define void @test_modulo_analysis_with_g
 for.loop:
   %i = phi i32 [ 0, %0 ], [ %i.plus1, %for.loop ]
   %i.promoted = sext i32 %i to i64
-  %x = getelementptr inbounds [1 x [2 x i32*]]* %h, i64 0, i64 %i.promoted, i64 %b.promoted
-  %y = getelementptr inbounds [1 x [2 x i32*]]* %h, i64 0, i64 0, i64 1
+  %x = getelementptr inbounds [1 x [2 x i32*]], [1 x [2 x i32*]]* %h, i64 0, i64 %i.promoted, i64 %b.promoted
+  %y = getelementptr inbounds [1 x [2 x i32*]], [1 x [2 x i32*]]* %h, i64 0, i64 0, i64 1
   %i.plus1 = add nsw i32 %i, 1
   %cmp = icmp slt i32 %i.plus1, 2
   br i1 %cmp, label %for.loop, label %for.loop.exit
@@ -188,10 +188,10 @@ for.loop.exit:
 ; CHECK-LABEL: test_const_eval
 ; CHECK: NoAlias: i8* %a, i8* %b
 define void @test_const_eval(i8* %ptr, i64 %offset) {
-  %a = getelementptr inbounds i8* %ptr, i64 %offset
-  %a.dup = getelementptr inbounds i8* %ptr, i64 %offset
+  %a = getelementptr inbounds i8, i8* %ptr, i64 %offset
+  %a.dup = getelementptr inbounds i8, i8* %ptr, i64 %offset
   %three = zext i32 3 to i64
-  %b = getelementptr inbounds i8* %a.dup, i64 %three
+  %b = getelementptr inbounds i8, i8* %a.dup, i64 %three
   ret void
 }
 
@@ -200,8 +200,8 @@ define void @test_const_eval(i8* %ptr, i
 define void @test_const_eval_scaled(i8* %ptr) {
   %three = zext i32 3 to i64
   %six = mul i64 %three, 2
-  %a = getelementptr inbounds i8* %ptr, i64 %six
-  %b = getelementptr inbounds i8* %ptr, i64 6
+  %a = getelementptr inbounds i8, i8* %ptr, i64 %six
+  %b = getelementptr inbounds i8, i8* %ptr, i64 6
   ret void
 }
 

Modified: llvm/trunk/test/Analysis/BlockFrequencyInfo/basic.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/BlockFrequencyInfo/basic.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/BlockFrequencyInfo/basic.ll (original)
+++ llvm/trunk/test/Analysis/BlockFrequencyInfo/basic.ll Fri Feb 27 13:29:02 2015
@@ -12,7 +12,7 @@ entry:
 body:
   %iv = phi i32 [ 0, %entry ], [ %next, %body ]
   %base = phi i32 [ 0, %entry ], [ %sum, %body ]
-  %arrayidx = getelementptr inbounds i32* %a, i32 %iv
+  %arrayidx = getelementptr inbounds i32, i32* %a, i32 %iv
   %0 = load i32* %arrayidx
   %sum = add nsw i32 %0, %base
   %next = add i32 %iv, 1

Modified: llvm/trunk/test/Analysis/BranchProbabilityInfo/basic.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/BranchProbabilityInfo/basic.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/BranchProbabilityInfo/basic.ll (original)
+++ llvm/trunk/test/Analysis/BranchProbabilityInfo/basic.ll Fri Feb 27 13:29:02 2015
@@ -9,7 +9,7 @@ entry:
 body:
   %iv = phi i32 [ 0, %entry ], [ %next, %body ]
   %base = phi i32 [ 0, %entry ], [ %sum, %body ]
-  %arrayidx = getelementptr inbounds i32* %a, i32 %iv
+  %arrayidx = getelementptr inbounds i32, i32* %a, i32 %iv
   %0 = load i32* %arrayidx
   %sum = add nsw i32 %0, %base
   %next = add i32 %iv, 1
@@ -153,7 +153,7 @@ define i32 @test_cold_call_sites(i32* %a
 ; CHECK: edge entry -> else probability is 64 / 68 = 94.1176% [HOT edge]
 
 entry:
-  %gep1 = getelementptr i32* %a, i32 1
+  %gep1 = getelementptr i32, i32* %a, i32 1
   %val1 = load i32* %gep1
   %cond1 = icmp ugt i32 %val1, 1
   br i1 %cond1, label %then, label %else
@@ -164,7 +164,7 @@ then:
   br label %exit
 
 else:
-  %gep2 = getelementptr i32* %a, i32 2
+  %gep2 = getelementptr i32, i32* %a, i32 2
   %val2 = load i32* %gep2
   %val3 = call i32 @regular_function(i32 %val2)
   br label %exit

Modified: llvm/trunk/test/Analysis/BranchProbabilityInfo/loop.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/BranchProbabilityInfo/loop.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/BranchProbabilityInfo/loop.ll (original)
+++ llvm/trunk/test/Analysis/BranchProbabilityInfo/loop.ll Fri Feb 27 13:29:02 2015
@@ -305,8 +305,8 @@ entry:
 
 for.body.lr.ph:
   %cmp216 = icmp sgt i32 %b, 0
-  %arrayidx5 = getelementptr inbounds i32* %c, i64 1
-  %arrayidx9 = getelementptr inbounds i32* %c, i64 2
+  %arrayidx5 = getelementptr inbounds i32, i32* %c, i64 1
+  %arrayidx9 = getelementptr inbounds i32, i32* %c, i64 2
   br label %for.body
 ; CHECK: edge for.body.lr.ph -> for.body probability is 16 / 16 = 100%
 

Modified: llvm/trunk/test/Analysis/BranchProbabilityInfo/pr18705.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/BranchProbabilityInfo/pr18705.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/BranchProbabilityInfo/pr18705.ll (original)
+++ llvm/trunk/test/Analysis/BranchProbabilityInfo/pr18705.ll Fri Feb 27 13:29:02 2015
@@ -22,22 +22,22 @@ while.body:
   %d.addr.010 = phi i32* [ %d, %while.body.lr.ph ], [ %incdec.ptr4, %if.end ]
   %c.addr.09 = phi i32* [ %c, %while.body.lr.ph ], [ %c.addr.1, %if.end ]
   %indvars.iv.next = add nsw i64 %indvars.iv, -1
-  %arrayidx = getelementptr inbounds float* %f0, i64 %indvars.iv.next
+  %arrayidx = getelementptr inbounds float, float* %f0, i64 %indvars.iv.next
   %1 = load float* %arrayidx, align 4
-  %arrayidx2 = getelementptr inbounds float* %f1, i64 %indvars.iv.next
+  %arrayidx2 = getelementptr inbounds float, float* %f1, i64 %indvars.iv.next
   %2 = load float* %arrayidx2, align 4
   %cmp = fcmp une float %1, %2
   br i1 %cmp, label %if.then, label %if.else
 
 if.then:
-  %incdec.ptr = getelementptr inbounds i32* %b.addr.011, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %b.addr.011, i64 1
   %3 = load i32* %b.addr.011, align 4
   %add = add nsw i32 %3, 12
   store i32 %add, i32* %b.addr.011, align 4
   br label %if.end
 
 if.else:
-  %incdec.ptr3 = getelementptr inbounds i32* %c.addr.09, i64 1
+  %incdec.ptr3 = getelementptr inbounds i32, i32* %c.addr.09, i64 1
   %4 = load i32* %c.addr.09, align 4
   %sub = add nsw i32 %4, -13
   store i32 %sub, i32* %c.addr.09, align 4
@@ -46,7 +46,7 @@ if.else:
 if.end:
   %c.addr.1 = phi i32* [ %c.addr.09, %if.then ], [ %incdec.ptr3, %if.else ]
   %b.addr.1 = phi i32* [ %incdec.ptr, %if.then ], [ %b.addr.011, %if.else ]
-  %incdec.ptr4 = getelementptr inbounds i32* %d.addr.010, i64 1
+  %incdec.ptr4 = getelementptr inbounds i32, i32* %d.addr.010, i64 1
   store i32 14, i32* %d.addr.010, align 4
   %5 = trunc i64 %indvars.iv.next to i32
   %tobool = icmp eq i32 %5, 0

Modified: llvm/trunk/test/Analysis/CFLAliasAnalysis/const-expr-gep.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/CFLAliasAnalysis/const-expr-gep.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/CFLAliasAnalysis/const-expr-gep.ll (original)
+++ llvm/trunk/test/Analysis/CFLAliasAnalysis/const-expr-gep.ll Fri Feb 27 13:29:02 2015
@@ -12,10 +12,10 @@
 ; CHECK-NOT:   May:
 
 define void @test() {
-  %D = getelementptr %T* @G, i64 0, i32 0
-  %E = getelementptr %T* @G, i64 0, i32 1, i64 5
-  %F = getelementptr i32* getelementptr (%T* @G, i64 0, i32 0), i64 0
-  %X = getelementptr [10 x i8]* getelementptr (%T* @G, i64 0, i32 1), i64 0, i64 5
+  %D = getelementptr %T, %T* @G, i64 0, i32 0
+  %E = getelementptr %T, %T* @G, i64 0, i32 1, i64 5
+  %F = getelementptr i32, i32* getelementptr (%T* @G, i64 0, i32 0), i64 0
+  %X = getelementptr [10 x i8], [10 x i8]* getelementptr (%T* @G, i64 0, i32 1), i64 0, i64 5
 
   ret void
 }

Modified: llvm/trunk/test/Analysis/CFLAliasAnalysis/constant-over-index.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/CFLAliasAnalysis/constant-over-index.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/CFLAliasAnalysis/constant-over-index.ll (original)
+++ llvm/trunk/test/Analysis/CFLAliasAnalysis/constant-over-index.ll Fri Feb 27 13:29:02 2015
@@ -10,13 +10,13 @@
 
 define void @foo([3 x [3 x double]]* noalias %p) {
 entry:
-  %p3 = getelementptr [3 x [3 x double]]* %p, i64 0, i64 0, i64 3
+  %p3 = getelementptr [3 x [3 x double]], [3 x [3 x double]]* %p, i64 0, i64 0, i64 3
   br label %loop
 
 loop:
   %i = phi i64 [ 0, %entry ], [ %i.next, %loop ]
 
-  %p.0.i.0 = getelementptr [3 x [3 x double]]* %p, i64 0, i64 %i, i64 0
+  %p.0.i.0 = getelementptr [3 x [3 x double]], [3 x [3 x double]]* %p, i64 0, i64 %i, i64 0
 
   store volatile double 0.0, double* %p3
   store volatile double 0.1, double* %p.0.i.0

Modified: llvm/trunk/test/Analysis/CFLAliasAnalysis/full-store-partial-alias.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/CFLAliasAnalysis/full-store-partial-alias.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/CFLAliasAnalysis/full-store-partial-alias.ll (original)
+++ llvm/trunk/test/Analysis/CFLAliasAnalysis/full-store-partial-alias.ll Fri Feb 27 13:29:02 2015
@@ -20,12 +20,12 @@ define i32 @signbit(double %x) nounwind
 ; CHECK: ret i32 0
 entry:
   %u = alloca %union.anon, align 8
-  %tmp9 = getelementptr inbounds %union.anon* %u, i64 0, i32 0
+  %tmp9 = getelementptr inbounds %union.anon, %union.anon* %u, i64 0, i32 0
   store double %x, double* %tmp9, align 8, !tbaa !0
   %tmp2 = load i32* bitcast (i64* @endianness_test to i32*), align 8, !tbaa !3
   %idxprom = sext i32 %tmp2 to i64
   %tmp4 = bitcast %union.anon* %u to [2 x i32]*
-  %arrayidx = getelementptr inbounds [2 x i32]* %tmp4, i64 0, i64 %idxprom
+  %arrayidx = getelementptr inbounds [2 x i32], [2 x i32]* %tmp4, i64 0, i64 %idxprom
   %tmp5 = load i32* %arrayidx, align 4, !tbaa !3
   %tmp5.lobit = lshr i32 %tmp5, 31
   ret i32 %tmp5.lobit

Modified: llvm/trunk/test/Analysis/CFLAliasAnalysis/gep-signed-arithmetic.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/CFLAliasAnalysis/gep-signed-arithmetic.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/CFLAliasAnalysis/gep-signed-arithmetic.ll (original)
+++ llvm/trunk/test/Analysis/CFLAliasAnalysis/gep-signed-arithmetic.ll Fri Feb 27 13:29:02 2015
@@ -10,7 +10,7 @@ define i32 @test(i32 %indvar) nounwind {
   %tab = alloca i32, align 4
   %tmp31 = mul i32 %indvar, -2
   %tmp32 = add i32 %tmp31, 30
-  %t.5 = getelementptr i32* %tab, i32 %tmp32
+  %t.5 = getelementptr i32, i32* %tab, i32 %tmp32
   %loada = load i32* %tab
   store i32 0, i32* %t.5
   %loadb = load i32* %tab

Modified: llvm/trunk/test/Analysis/CFLAliasAnalysis/must-and-partial.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/CFLAliasAnalysis/must-and-partial.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/CFLAliasAnalysis/must-and-partial.ll (original)
+++ llvm/trunk/test/Analysis/CFLAliasAnalysis/must-and-partial.ll Fri Feb 27 13:29:02 2015
@@ -10,7 +10,7 @@ target datalayout = "e-p:64:64:64-i1:8:8
 define i8 @test0(i1 %x) {
 entry:
   %base = alloca i8, align 4
-  %baseplusone = getelementptr i8* %base, i64 1
+  %baseplusone = getelementptr i8, i8* %base, i64 1
   br i1 %x, label %red, label %green
 red:
   br label %green
@@ -30,7 +30,7 @@ green:
 define i8 @test1(i1 %x) {
 entry:
   %base = alloca i8, align 4
-  %baseplusone = getelementptr i8* %base, i64 1
+  %baseplusone = getelementptr i8, i8* %base, i64 1
   %sel = select i1 %x, i8* %baseplusone, i8* %base
   store i8 0, i8* %sel
 
@@ -45,9 +45,9 @@ entry:
 ; even if they are nocapture
 ; CHECK: MayAlias:  double* %A, double* %Index
 define void @testr2(double* nocapture readonly %A, double* nocapture readonly %Index) {
-  %arrayidx22 = getelementptr inbounds double* %Index, i64 2
+  %arrayidx22 = getelementptr inbounds double, double* %Index, i64 2
   %1 = load double* %arrayidx22
-  %arrayidx25 = getelementptr inbounds double* %A, i64 2
+  %arrayidx25 = getelementptr inbounds double, double* %A, i64 2
   %2 = load double* %arrayidx25
   %mul26 = fmul double %1, %2
   ret void

Modified: llvm/trunk/test/Analysis/CFLAliasAnalysis/simple.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/CFLAliasAnalysis/simple.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/CFLAliasAnalysis/simple.ll (original)
+++ llvm/trunk/test/Analysis/CFLAliasAnalysis/simple.ll Fri Feb 27 13:29:02 2015
@@ -9,10 +9,10 @@
 ; CHECK-NOT:   May:
 
 define void @test(%T* %P) {
-  %A = getelementptr %T* %P, i64 0
-  %B = getelementptr %T* %P, i64 0, i32 0
-  %C = getelementptr %T* %P, i64 0, i32 1
-  %D = getelementptr %T* %P, i64 0, i32 1, i64 0
-  %E = getelementptr %T* %P, i64 0, i32 1, i64 5
+  %A = getelementptr %T, %T* %P, i64 0
+  %B = getelementptr %T, %T* %P, i64 0, i32 0
+  %C = getelementptr %T, %T* %P, i64 0, i32 1
+  %D = getelementptr %T, %T* %P, i64 0, i32 1, i64 0
+  %E = getelementptr %T, %T* %P, i64 0, i32 1, i64 5
   ret void
 }

Modified: llvm/trunk/test/Analysis/CostModel/ARM/gep.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/CostModel/ARM/gep.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/CostModel/ARM/gep.ll (original)
+++ llvm/trunk/test/Analysis/CostModel/ARM/gep.ll Fri Feb 27 13:29:02 2015
@@ -6,37 +6,37 @@ target triple = "thumbv7-apple-ios6.0.0"
 define void @test_geps() {
   ; Cost of scalar integer geps should be one. We can't always expect it to be
   ; folded into the instruction addressing mode.
-;CHECK: cost of 1 for instruction: {{.*}} getelementptr inbounds i8*
-  %a0 = getelementptr inbounds i8* undef, i32 0
-;CHECK: cost of 1 for instruction: {{.*}} getelementptr inbounds i16*
-  %a1 = getelementptr inbounds i16* undef, i32 0
-;CHECK: cost of 1 for instruction: {{.*}} getelementptr inbounds i32*
-  %a2 = getelementptr inbounds i32* undef, i32 0
+;CHECK: cost of 1 for instruction: {{.*}} getelementptr inbounds i8, i8*
+  %a0 = getelementptr inbounds i8, i8* undef, i32 0
+;CHECK: cost of 1 for instruction: {{.*}} getelementptr inbounds i16, i16*
+  %a1 = getelementptr inbounds i16, i16* undef, i32 0
+;CHECK: cost of 1 for instruction: {{.*}} getelementptr inbounds i32, i32*
+  %a2 = getelementptr inbounds i32, i32* undef, i32 0
 
-;CHECK: cost of 1 for instruction: {{.*}} getelementptr inbounds i64*
-  %a3 = getelementptr inbounds i64* undef, i32 0
+;CHECK: cost of 1 for instruction: {{.*}} getelementptr inbounds i64, i64*
+  %a3 = getelementptr inbounds i64, i64* undef, i32 0
 
   ; Cost of scalar floating point geps should be one. We cannot fold the address
   ; computation.
-;CHECK: cost of 1 for instruction: {{.*}} getelementptr inbounds float*
-  %a4 = getelementptr inbounds float* undef, i32 0
-;CHECK: cost of 1 for instruction: {{.*}} getelementptr inbounds double*
-  %a5 = getelementptr inbounds double* undef, i32 0
+;CHECK: cost of 1 for instruction: {{.*}} getelementptr inbounds float, float*
+  %a4 = getelementptr inbounds float, float* undef, i32 0
+;CHECK: cost of 1 for instruction: {{.*}} getelementptr inbounds double, double*
+  %a5 = getelementptr inbounds double, double* undef, i32 0
 
 
   ; Cost of vector geps should be one. We cannot fold the address computation.
-;CHECK: cost of 1 for instruction: {{.*}} getelementptr inbounds <4 x i8>*
-  %a7 = getelementptr inbounds <4 x i8>* undef, i32 0
-;CHECK: cost of 1 for instruction: {{.*}} getelementptr inbounds <4 x i16>*
-  %a8 = getelementptr inbounds <4 x i16>* undef, i32 0
-;CHECK: cost of 1 for instruction: {{.*}} getelementptr inbounds <4 x i32>*
-  %a9 = getelementptr inbounds <4 x i32>* undef, i32 0
-;CHECK: cost of 1 for instruction: {{.*}} getelementptr inbounds <4 x i64>*
-  %a10 = getelementptr inbounds <4 x i64>* undef, i32 0
-;CHECK: cost of 1 for instruction: {{.*}} getelementptr inbounds <4 x float>*
-  %a11 = getelementptr inbounds <4 x float>* undef, i32 0
-;CHECK: cost of 1 for instruction: {{.*}} getelementptr inbounds <4 x double>*
-  %a12 = getelementptr inbounds <4 x double>* undef, i32 0
+;CHECK: cost of 1 for instruction: {{.*}} getelementptr inbounds <4 x i8>, <4 x i8>*
+  %a7 = getelementptr inbounds <4 x i8>, <4 x i8>* undef, i32 0
+;CHECK: cost of 1 for instruction: {{.*}} getelementptr inbounds <4 x i16>, <4 x i16>*
+  %a8 = getelementptr inbounds <4 x i16>, <4 x i16>* undef, i32 0
+;CHECK: cost of 1 for instruction: {{.*}} getelementptr inbounds <4 x i32>, <4 x i32>*
+  %a9 = getelementptr inbounds <4 x i32>, <4 x i32>* undef, i32 0
+;CHECK: cost of 1 for instruction: {{.*}} getelementptr inbounds <4 x i64>, <4 x i64>*
+  %a10 = getelementptr inbounds <4 x i64>, <4 x i64>* undef, i32 0
+;CHECK: cost of 1 for instruction: {{.*}} getelementptr inbounds <4 x float>, <4 x float>*
+  %a11 = getelementptr inbounds <4 x float>, <4 x float>* undef, i32 0
+;CHECK: cost of 1 for instruction: {{.*}} getelementptr inbounds <4 x double>, <4 x double>*
+  %a12 = getelementptr inbounds <4 x double>, <4 x double>* undef, i32 0
 
 
   ret void

Modified: llvm/trunk/test/Analysis/CostModel/X86/gep.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/CostModel/X86/gep.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/CostModel/X86/gep.ll (original)
+++ llvm/trunk/test/Analysis/CostModel/X86/gep.ll Fri Feb 27 13:29:02 2015
@@ -7,33 +7,33 @@ target triple = "x86_64-apple-macosx10.8
 define void @test_geps() {
  ; Cost of geps should be zero. We expect it to be folded into
   ; the instruction addressing mode.
-;CHECK:  cost of 0 for instruction: {{.*}} getelementptr inbounds i8*
-  %a0 = getelementptr inbounds i8* undef, i32 0
-;CHECK: cost of 0 for instruction: {{.*}} getelementptr inbounds i16*
-  %a1 = getelementptr inbounds i16* undef, i32 0
-;CHECK: cost of 0 for instruction: {{.*}} getelementptr inbounds i32*
-  %a2 = getelementptr inbounds i32* undef, i32 0
-;CHECK: cost of 0 for instruction: {{.*}} getelementptr inbounds i64*
-  %a3 = getelementptr inbounds i64* undef, i32 0
+;CHECK:  cost of 0 for instruction: {{.*}} getelementptr inbounds i8, i8*
+  %a0 = getelementptr inbounds i8, i8* undef, i32 0
+;CHECK: cost of 0 for instruction: {{.*}} getelementptr inbounds i16, i16*
+  %a1 = getelementptr inbounds i16, i16* undef, i32 0
+;CHECK: cost of 0 for instruction: {{.*}} getelementptr inbounds i32, i32*
+  %a2 = getelementptr inbounds i32, i32* undef, i32 0
+;CHECK: cost of 0 for instruction: {{.*}} getelementptr inbounds i64, i64*
+  %a3 = getelementptr inbounds i64, i64* undef, i32 0
 
-;CHECK: cost of 0 for instruction: {{.*}} getelementptr inbounds float*
-  %a4 = getelementptr inbounds float* undef, i32 0
-;CHECK: cost of 0 for instruction: {{.*}} getelementptr inbounds double*
-  %a5 = getelementptr inbounds double* undef, i32 0
+;CHECK: cost of 0 for instruction: {{.*}} getelementptr inbounds float, float*
+  %a4 = getelementptr inbounds float, float* undef, i32 0
+;CHECK: cost of 0 for instruction: {{.*}} getelementptr inbounds double, double*
+  %a5 = getelementptr inbounds double, double* undef, i32 0
 
  ; Vector geps should also have zero cost.
-;CHECK: cost of 0 for instruction: {{.*}} getelementptr inbounds <4 x i8>*
-  %a7 = getelementptr inbounds <4 x i8>* undef, i32 0
-;CHECK: cost of 0 for instruction: {{.*}} getelementptr inbounds <4 x i16>*
-  %a8 = getelementptr inbounds <4 x i16>* undef, i32 0
-;CHECK: cost of 0 for instruction: {{.*}} getelementptr inbounds <4 x i32>*
-  %a9 = getelementptr inbounds <4 x i32>* undef, i32 0
-;CHECK: cost of 0 for instruction: {{.*}} getelementptr inbounds <4 x i64>*
-  %a10 = getelementptr inbounds <4 x i64>* undef, i32 0
-;CHECK: cost of 0 for instruction: {{.*}} getelementptr inbounds <4 x float>*
-  %a11 = getelementptr inbounds <4 x float>* undef, i32 0
-;CHECK: cost of 0 for instruction: {{.*}} getelementptr inbounds <4 x double>*
-  %a12 = getelementptr inbounds <4 x double>* undef, i32 0
+;CHECK: cost of 0 for instruction: {{.*}} getelementptr inbounds <4 x i8>, <4 x i8>*
+  %a7 = getelementptr inbounds <4 x i8>, <4 x i8>* undef, i32 0
+;CHECK: cost of 0 for instruction: {{.*}} getelementptr inbounds <4 x i16>, <4 x i16>*
+  %a8 = getelementptr inbounds <4 x i16>, <4 x i16>* undef, i32 0
+;CHECK: cost of 0 for instruction: {{.*}} getelementptr inbounds <4 x i32>, <4 x i32>*
+  %a9 = getelementptr inbounds <4 x i32>, <4 x i32>* undef, i32 0
+;CHECK: cost of 0 for instruction: {{.*}} getelementptr inbounds <4 x i64>, <4 x i64>*
+  %a10 = getelementptr inbounds <4 x i64>, <4 x i64>* undef, i32 0
+;CHECK: cost of 0 for instruction: {{.*}} getelementptr inbounds <4 x float>, <4 x float>*
+  %a11 = getelementptr inbounds <4 x float>, <4 x float>* undef, i32 0
+;CHECK: cost of 0 for instruction: {{.*}} getelementptr inbounds <4 x double>, <4 x double>*
+  %a12 = getelementptr inbounds <4 x double>, <4 x double>* undef, i32 0
 
 
   ret void

Modified: llvm/trunk/test/Analysis/CostModel/X86/intrinsic-cost.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/CostModel/X86/intrinsic-cost.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/CostModel/X86/intrinsic-cost.ll (original)
+++ llvm/trunk/test/Analysis/CostModel/X86/intrinsic-cost.ll Fri Feb 27 13:29:02 2015
@@ -9,7 +9,7 @@ vector.ph:
 
 vector.body:                                      ; preds = %vector.body, %vector.ph
   %index = phi i64 [ 0, %vector.ph ], [ %index.next, %vector.body ]
-  %0 = getelementptr inbounds float* %f, i64 %index
+  %0 = getelementptr inbounds float, float* %f, i64 %index
   %1 = bitcast float* %0 to <4 x float>*
   %wide.load = load <4 x float>* %1, align 4
   %2 = call <4 x float> @llvm.ceil.v4f32(<4 x float> %wide.load)
@@ -37,7 +37,7 @@ vector.ph:
 
 vector.body:                                      ; preds = %vector.body, %vector.ph
   %index = phi i64 [ 0, %vector.ph ], [ %index.next, %vector.body ]
-  %0 = getelementptr inbounds float* %f, i64 %index
+  %0 = getelementptr inbounds float, float* %f, i64 %index
   %1 = bitcast float* %0 to <4 x float>*
   %wide.load = load <4 x float>* %1, align 4
   %2 = call <4 x float> @llvm.nearbyint.v4f32(<4 x float> %wide.load)
@@ -65,7 +65,7 @@ vector.ph:
 
 vector.body:                                      ; preds = %vector.body, %vector.ph
   %index = phi i64 [ 0, %vector.ph ], [ %index.next, %vector.body ]
-  %0 = getelementptr inbounds float* %f, i64 %index
+  %0 = getelementptr inbounds float, float* %f, i64 %index
   %1 = bitcast float* %0 to <4 x float>*
   %wide.load = load <4 x float>* %1, align 4
   %2 = call <4 x float> @llvm.fmuladd.v4f32(<4 x float> %wide.load, <4 x float> %b, <4 x float> %c)

Modified: llvm/trunk/test/Analysis/CostModel/X86/loop_v2.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/CostModel/X86/loop_v2.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/CostModel/X86/loop_v2.ll (original)
+++ llvm/trunk/test/Analysis/CostModel/X86/loop_v2.ll Fri Feb 27 13:29:02 2015
@@ -10,16 +10,16 @@ vector.ph:
 vector.body:                                      ; preds = %vector.body, %vector.ph
   %index = phi i64 [ 0, %vector.ph ], [ %index.next, %vector.body ]
   %vec.phi = phi <2 x i32> [ zeroinitializer, %vector.ph ], [ %12, %vector.body ]
-  %0 = getelementptr inbounds i32* %A, i64 %index
+  %0 = getelementptr inbounds i32, i32* %A, i64 %index
   %1 = bitcast i32* %0 to <2 x i32>*
   %2 = load <2 x i32>* %1, align 4
   %3 = sext <2 x i32> %2 to <2 x i64>
   ;CHECK: cost of 1 {{.*}} extract
   %4 = extractelement <2 x i64> %3, i32 0
-  %5 = getelementptr inbounds i32* %A, i64 %4
+  %5 = getelementptr inbounds i32, i32* %A, i64 %4
   ;CHECK: cost of 1 {{.*}} extract
   %6 = extractelement <2 x i64> %3, i32 1
-  %7 = getelementptr inbounds i32* %A, i64 %6
+  %7 = getelementptr inbounds i32, i32* %A, i64 %6
   %8 = load i32* %5, align 4
   ;CHECK: cost of 1 {{.*}} insert
   %9 = insertelement <2 x i32> undef, i32 %8, i32 0

Modified: llvm/trunk/test/Analysis/CostModel/X86/vectorized-loop.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/CostModel/X86/vectorized-loop.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/CostModel/X86/vectorized-loop.ll (original)
+++ llvm/trunk/test/Analysis/CostModel/X86/vectorized-loop.ll Fri Feb 27 13:29:02 2015
@@ -25,14 +25,14 @@ for.body.lr.ph:
 vector.body:                                      ; preds = %for.body.lr.ph, %vector.body
   %index = phi i64 [ %index.next, %vector.body ], [ %0, %for.body.lr.ph ]
   %3 = add i64 %index, 2
-  %4 = getelementptr inbounds i32* %B, i64 %3
+  %4 = getelementptr inbounds i32, i32* %B, i64 %3
   ;CHECK: cost of 0 {{.*}} bitcast
   %5 = bitcast i32* %4 to <8 x i32>*
   ;CHECK: cost of 2 {{.*}} load
   %6 = load <8 x i32>* %5, align 4
   ;CHECK: cost of 4 {{.*}} mul
   %7 = mul nsw <8 x i32> %6, <i32 5, i32 5, i32 5, i32 5, i32 5, i32 5, i32 5, i32 5>
-  %8 = getelementptr inbounds i32* %A, i64 %index
+  %8 = getelementptr inbounds i32, i32* %A, i64 %index
   %9 = bitcast i32* %8 to <8 x i32>*
   ;CHECK: cost of 2 {{.*}} load
   %10 = load <8 x i32>* %9, align 4
@@ -52,12 +52,12 @@ middle.block:
 for.body:                                         ; preds = %middle.block, %for.body
   %indvars.iv = phi i64 [ %indvars.iv.next, %for.body ], [ %end.idx.rnd.down, %middle.block ]
   %13 = add nsw i64 %indvars.iv, 2
-  %arrayidx = getelementptr inbounds i32* %B, i64 %13
+  %arrayidx = getelementptr inbounds i32, i32* %B, i64 %13
   ;CHECK: cost of 1 {{.*}} load
   %14 = load i32* %arrayidx, align 4
   ;CHECK: cost of 1 {{.*}} mul
   %mul = mul nsw i32 %14, 5
-  %arrayidx2 = getelementptr inbounds i32* %A, i64 %indvars.iv
+  %arrayidx2 = getelementptr inbounds i32, i32* %A, i64 %indvars.iv
   ;CHECK: cost of 1 {{.*}} load
   %15 = load i32* %arrayidx2, align 4
   %add3 = add nsw i32 %15, %mul

Modified: llvm/trunk/test/Analysis/Delinearization/a.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/Delinearization/a.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/Delinearization/a.ll (original)
+++ llvm/trunk/test/Analysis/Delinearization/a.ll Fri Feb 27 13:29:02 2015
@@ -52,7 +52,7 @@ for.k:
   %mul.us.us = mul nsw i64 %k.029.us.us, 5
   %arrayidx.sum.us.us = add i64 %mul.us.us, 7
   %arrayidx10.sum.us.us = add i64 %arrayidx.sum.us.us, %tmp27.us.us
-  %arrayidx11.us.us = getelementptr inbounds i32* %A, i64 %arrayidx10.sum.us.us
+  %arrayidx11.us.us = getelementptr inbounds i32, i32* %A, i64 %arrayidx10.sum.us.us
   store i32 1, i32* %arrayidx11.us.us, align 4
   %inc.us.us = add nsw i64 %k.029.us.us, 1
   %exitcond = icmp eq i64 %inc.us.us, %o

Modified: llvm/trunk/test/Analysis/Delinearization/gcd_multiply_expr.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/Delinearization/gcd_multiply_expr.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/Delinearization/gcd_multiply_expr.ll (original)
+++ llvm/trunk/test/Analysis/Delinearization/gcd_multiply_expr.ll Fri Feb 27 13:29:02 2015
@@ -64,56 +64,56 @@ for.body4.i.preheader:
 for.body4.i:
   %8 = phi i32 [ %inc.7.i, %for.body4.i ], [ %.pr.i, %for.body4.i.preheader ]
   %arrayidx.sum1 = add i32 %add.i, %8
-  %arrayidx.i = getelementptr inbounds i8* %3, i32 %arrayidx.sum1
+  %arrayidx.i = getelementptr inbounds i8, i8* %3, i32 %arrayidx.sum1
   %9 = load i8* %arrayidx.i, align 1
   %conv.i = sext i8 %9 to i32
   store i32 %conv.i, i32* @c, align 4
   %inc.i = add nsw i32 %8, 1
   store i32 %inc.i, i32* @b, align 4
   %arrayidx.sum2 = add i32 %add.i, %inc.i
-  %arrayidx.1.i = getelementptr inbounds i8* %3, i32 %arrayidx.sum2
+  %arrayidx.1.i = getelementptr inbounds i8, i8* %3, i32 %arrayidx.sum2
   %10 = load i8* %arrayidx.1.i, align 1
   %conv.1.i = sext i8 %10 to i32
   store i32 %conv.1.i, i32* @c, align 4
   %inc.1.i = add nsw i32 %8, 2
   store i32 %inc.1.i, i32* @b, align 4
   %arrayidx.sum3 = add i32 %add.i, %inc.1.i
-  %arrayidx.2.i = getelementptr inbounds i8* %3, i32 %arrayidx.sum3
+  %arrayidx.2.i = getelementptr inbounds i8, i8* %3, i32 %arrayidx.sum3
   %11 = load i8* %arrayidx.2.i, align 1
   %conv.2.i = sext i8 %11 to i32
   store i32 %conv.2.i, i32* @c, align 4
   %inc.2.i = add nsw i32 %8, 3
   store i32 %inc.2.i, i32* @b, align 4
   %arrayidx.sum4 = add i32 %add.i, %inc.2.i
-  %arrayidx.3.i = getelementptr inbounds i8* %3, i32 %arrayidx.sum4
+  %arrayidx.3.i = getelementptr inbounds i8, i8* %3, i32 %arrayidx.sum4
   %12 = load i8* %arrayidx.3.i, align 1
   %conv.3.i = sext i8 %12 to i32
   store i32 %conv.3.i, i32* @c, align 4
   %inc.3.i = add nsw i32 %8, 4
   store i32 %inc.3.i, i32* @b, align 4
   %arrayidx.sum5 = add i32 %add.i, %inc.3.i
-  %arrayidx.4.i = getelementptr inbounds i8* %3, i32 %arrayidx.sum5
+  %arrayidx.4.i = getelementptr inbounds i8, i8* %3, i32 %arrayidx.sum5
   %13 = load i8* %arrayidx.4.i, align 1
   %conv.4.i = sext i8 %13 to i32
   store i32 %conv.4.i, i32* @c, align 4
   %inc.4.i = add nsw i32 %8, 5
   store i32 %inc.4.i, i32* @b, align 4
   %arrayidx.sum6 = add i32 %add.i, %inc.4.i
-  %arrayidx.5.i = getelementptr inbounds i8* %3, i32 %arrayidx.sum6
+  %arrayidx.5.i = getelementptr inbounds i8, i8* %3, i32 %arrayidx.sum6
   %14 = load i8* %arrayidx.5.i, align 1
   %conv.5.i = sext i8 %14 to i32
   store i32 %conv.5.i, i32* @c, align 4
   %inc.5.i = add nsw i32 %8, 6
   store i32 %inc.5.i, i32* @b, align 4
   %arrayidx.sum7 = add i32 %add.i, %inc.5.i
-  %arrayidx.6.i = getelementptr inbounds i8* %3, i32 %arrayidx.sum7
+  %arrayidx.6.i = getelementptr inbounds i8, i8* %3, i32 %arrayidx.sum7
   %15 = load i8* %arrayidx.6.i, align 1
   %conv.6.i = sext i8 %15 to i32
   store i32 %conv.6.i, i32* @c, align 4
   %inc.6.i = add nsw i32 %8, 7
   store i32 %inc.6.i, i32* @b, align 4
   %arrayidx.sum8 = add i32 %add.i, %inc.6.i
-  %arrayidx.7.i = getelementptr inbounds i8* %3, i32 %arrayidx.sum8
+  %arrayidx.7.i = getelementptr inbounds i8, i8* %3, i32 %arrayidx.sum8
   %16 = load i8* %arrayidx.7.i, align 1
   %conv.7.i = sext i8 %16 to i32
   store i32 %conv.7.i, i32* @c, align 4
@@ -135,7 +135,7 @@ for.body4.ur.i.preheader:
 for.body4.ur.i:
   %20 = phi i32 [ %inc.ur.i, %for.body4.ur.i ], [ %.ph, %for.body4.ur.i.preheader ]
   %arrayidx.sum = add i32 %add.i, %20
-  %arrayidx.ur.i = getelementptr inbounds i8* %3, i32 %arrayidx.sum
+  %arrayidx.ur.i = getelementptr inbounds i8, i8* %3, i32 %arrayidx.sum
   %21 = load i8* %arrayidx.ur.i, align 1
   %conv.ur.i = sext i8 %21 to i32
   store i32 %conv.ur.i, i32* @c, align 4

Modified: llvm/trunk/test/Analysis/Delinearization/himeno_1.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/Delinearization/himeno_1.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/Delinearization/himeno_1.ll (original)
+++ llvm/trunk/test/Analysis/Delinearization/himeno_1.ll Fri Feb 27 13:29:02 2015
@@ -35,23 +35,23 @@
 
 define void @jacobi(i32 %nn, %struct.Mat* nocapture %a, %struct.Mat* nocapture %p) nounwind uwtable {
 entry:
-  %p.rows.ptr = getelementptr inbounds %struct.Mat* %p, i64 0, i32 2
+  %p.rows.ptr = getelementptr inbounds %struct.Mat, %struct.Mat* %p, i64 0, i32 2
   %p.rows = load i32* %p.rows.ptr
   %p.rows.sub = add i32 %p.rows, -1
   %p.rows.sext = sext i32 %p.rows.sub to i64
-  %p.cols.ptr = getelementptr inbounds %struct.Mat* %p, i64 0, i32 3
+  %p.cols.ptr = getelementptr inbounds %struct.Mat, %struct.Mat* %p, i64 0, i32 3
   %p.cols = load i32* %p.cols.ptr
   %p.cols.sub = add i32 %p.cols, -1
   %p.cols.sext = sext i32 %p.cols.sub to i64
-  %p.deps.ptr = getelementptr inbounds %struct.Mat* %p, i64 0, i32 4
+  %p.deps.ptr = getelementptr inbounds %struct.Mat, %struct.Mat* %p, i64 0, i32 4
   %p.deps = load i32* %p.deps.ptr
   %p.deps.sub = add i32 %p.deps, -1
   %p.deps.sext = sext i32 %p.deps.sub to i64
-  %a.cols.ptr = getelementptr inbounds %struct.Mat* %a, i64 0, i32 3
+  %a.cols.ptr = getelementptr inbounds %struct.Mat, %struct.Mat* %a, i64 0, i32 3
   %a.cols = load i32* %a.cols.ptr
-  %a.deps.ptr = getelementptr inbounds %struct.Mat* %a, i64 0, i32 4
+  %a.deps.ptr = getelementptr inbounds %struct.Mat, %struct.Mat* %a, i64 0, i32 4
   %a.deps = load i32* %a.deps.ptr
-  %a.base.ptr = getelementptr inbounds %struct.Mat* %a, i64 0, i32 0
+  %a.base.ptr = getelementptr inbounds %struct.Mat, %struct.Mat* %a, i64 0, i32 0
   %a.base = load float** %a.base.ptr, align 8
   br label %for.i
 
@@ -71,7 +71,7 @@ for.k:
   %tmp2 = add i64 %tmp1, %j
   %tmp3 = mul i64 %tmp2, %a.deps.sext
   %tmp4 = add nsw i64 %k, %tmp3
-  %arrayidx = getelementptr inbounds float* %a.base, i64 %tmp4
+  %arrayidx = getelementptr inbounds float, float* %a.base, i64 %tmp4
   store float 1.000000e+00, float* %arrayidx
   %k.inc = add nsw i64 %k, 1
   %k.exitcond = icmp eq i64 %k.inc, %p.deps.sext

Modified: llvm/trunk/test/Analysis/Delinearization/himeno_2.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/Delinearization/himeno_2.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/Delinearization/himeno_2.ll (original)
+++ llvm/trunk/test/Analysis/Delinearization/himeno_2.ll Fri Feb 27 13:29:02 2015
@@ -35,25 +35,25 @@
 
 define void @jacobi(i32 %nn, %struct.Mat* nocapture %a, %struct.Mat* nocapture %p) nounwind uwtable {
 entry:
-  %p.rows.ptr = getelementptr inbounds %struct.Mat* %p, i64 0, i32 2
+  %p.rows.ptr = getelementptr inbounds %struct.Mat, %struct.Mat* %p, i64 0, i32 2
   %p.rows = load i32* %p.rows.ptr
   %p.rows.sub = add i32 %p.rows, -1
   %p.rows.sext = sext i32 %p.rows.sub to i64
-  %p.cols.ptr = getelementptr inbounds %struct.Mat* %p, i64 0, i32 3
+  %p.cols.ptr = getelementptr inbounds %struct.Mat, %struct.Mat* %p, i64 0, i32 3
   %p.cols = load i32* %p.cols.ptr
   %p.cols.sub = add i32 %p.cols, -1
   %p.cols.sext = sext i32 %p.cols.sub to i64
-  %p.deps.ptr = getelementptr inbounds %struct.Mat* %p, i64 0, i32 4
+  %p.deps.ptr = getelementptr inbounds %struct.Mat, %struct.Mat* %p, i64 0, i32 4
   %p.deps = load i32* %p.deps.ptr
   %p.deps.sub = add i32 %p.deps, -1
   %p.deps.sext = sext i32 %p.deps.sub to i64
-  %a.cols.ptr = getelementptr inbounds %struct.Mat* %a, i64 0, i32 3
+  %a.cols.ptr = getelementptr inbounds %struct.Mat, %struct.Mat* %a, i64 0, i32 3
   %a.cols = load i32* %a.cols.ptr
   %a.cols.sext = sext i32 %a.cols to i64
-  %a.deps.ptr = getelementptr inbounds %struct.Mat* %a, i64 0, i32 4
+  %a.deps.ptr = getelementptr inbounds %struct.Mat, %struct.Mat* %a, i64 0, i32 4
   %a.deps = load i32* %a.deps.ptr
   %a.deps.sext = sext i32 %a.deps to i64
-  %a.base.ptr = getelementptr inbounds %struct.Mat* %a, i64 0, i32 0
+  %a.base.ptr = getelementptr inbounds %struct.Mat, %struct.Mat* %a, i64 0, i32 0
   %a.base = load float** %a.base.ptr, align 8
   br label %for.i
 
@@ -71,7 +71,7 @@ for.k:
   %tmp2 = add i64 %tmp1, %j
   %tmp3 = mul i64 %tmp2, %a.deps.sext
   %tmp4 = add nsw i64 %k, %tmp3
-  %arrayidx = getelementptr inbounds float* %a.base, i64 %tmp4
+  %arrayidx = getelementptr inbounds float, float* %a.base, i64 %tmp4
   store float 1.000000e+00, float* %arrayidx
   %k.inc = add nsw i64 %k, 1
   %k.exitcond = icmp eq i64 %k.inc, %p.deps.sext

Modified: llvm/trunk/test/Analysis/Delinearization/iv_times_constant_in_subscript.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/Delinearization/iv_times_constant_in_subscript.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/Delinearization/iv_times_constant_in_subscript.ll (original)
+++ llvm/trunk/test/Analysis/Delinearization/iv_times_constant_in_subscript.ll Fri Feb 27 13:29:02 2015
@@ -29,7 +29,7 @@ for.j:
   %j = phi i64 [ 0, %for.i ], [ %j.inc, %for.j ]
   %prodj = mul i64 %j, 2
   %vlaarrayidx.sum = add i64 %prodj, %tmp
-  %arrayidx = getelementptr inbounds double* %A, i64 %vlaarrayidx.sum
+  %arrayidx = getelementptr inbounds double, double* %A, i64 %vlaarrayidx.sum
   store double 1.0, double* %arrayidx
   %j.inc = add nsw i64 %j, 1
   %j.exitcond = icmp eq i64 %j.inc, %m

Modified: llvm/trunk/test/Analysis/Delinearization/multidim_ivs_and_integer_offsets_3d.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/Delinearization/multidim_ivs_and_integer_offsets_3d.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/Delinearization/multidim_ivs_and_integer_offsets_3d.ll (original)
+++ llvm/trunk/test/Analysis/Delinearization/multidim_ivs_and_integer_offsets_3d.ll Fri Feb 27 13:29:02 2015
@@ -34,7 +34,7 @@ for.k:
   %subscript2 = mul i64 %subscript1, %o
   %offset2 = add nsw i64 %k, 7
   %subscript = add i64 %subscript2, %offset2
-  %idx = getelementptr inbounds double* %A, i64 %subscript
+  %idx = getelementptr inbounds double, double* %A, i64 %subscript
   store double 1.0, double* %idx
   br label %for.k.inc
 

Modified: llvm/trunk/test/Analysis/Delinearization/multidim_ivs_and_integer_offsets_nts_3d.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/Delinearization/multidim_ivs_and_integer_offsets_nts_3d.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/Delinearization/multidim_ivs_and_integer_offsets_nts_3d.ll (original)
+++ llvm/trunk/test/Analysis/Delinearization/multidim_ivs_and_integer_offsets_nts_3d.ll Fri Feb 27 13:29:02 2015
@@ -51,7 +51,7 @@ for.body6.us.us:
   %k.019.us.us = phi i64 [ 0, %for.body6.lr.ph.us.us ], [ %inc.us.us, %for.body6.us.us ]
   %arrayidx.sum.us.us = add i64 %k.019.us.us, 7
   %arrayidx9.sum.us.us = add i64 %arrayidx.sum.us.us, %tmp17.us.us
-  %arrayidx10.us.us = getelementptr inbounds double* %A, i64 %arrayidx9.sum.us.us
+  %arrayidx10.us.us = getelementptr inbounds double, double* %A, i64 %arrayidx9.sum.us.us
   store double 1.000000e+00, double* %arrayidx10.us.us, align 8
   %inc.us.us = add nsw i64 %k.019.us.us, 1
   %exitcond = icmp eq i64 %inc.us.us, %o

Modified: llvm/trunk/test/Analysis/Delinearization/multidim_ivs_and_parameteric_offsets_3d.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/Delinearization/multidim_ivs_and_parameteric_offsets_3d.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/Delinearization/multidim_ivs_and_parameteric_offsets_3d.ll (original)
+++ llvm/trunk/test/Analysis/Delinearization/multidim_ivs_and_parameteric_offsets_3d.ll Fri Feb 27 13:29:02 2015
@@ -34,7 +34,7 @@ for.k:
   %subscript2 = mul i64 %subscript1, %o
   %offset2 = add nsw i64 %k, %r
   %subscript = add i64 %subscript2, %offset2
-  %idx = getelementptr inbounds double* %A, i64 %subscript
+  %idx = getelementptr inbounds double, double* %A, i64 %subscript
   store double 1.0, double* %idx
   br label %for.k.inc
 

Modified: llvm/trunk/test/Analysis/Delinearization/multidim_only_ivs_2d.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/Delinearization/multidim_only_ivs_2d.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/Delinearization/multidim_only_ivs_2d.ll (original)
+++ llvm/trunk/test/Analysis/Delinearization/multidim_only_ivs_2d.ll Fri Feb 27 13:29:02 2015
@@ -34,7 +34,7 @@ for.i:
 for.j:
   %j = phi i64 [ 0, %for.i ], [ %j.inc, %for.j ]
   %vlaarrayidx.sum = add i64 %j, %tmp
-  %arrayidx = getelementptr inbounds double* %A, i64 %vlaarrayidx.sum
+  %arrayidx = getelementptr inbounds double, double* %A, i64 %vlaarrayidx.sum
   %val = load double* %arrayidx
   store double %val, double* %arrayidx
   %j.inc = add nsw i64 %j, 1

Modified: llvm/trunk/test/Analysis/Delinearization/multidim_only_ivs_2d_nested.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/Delinearization/multidim_only_ivs_2d_nested.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/Delinearization/multidim_only_ivs_2d_nested.ll (original)
+++ llvm/trunk/test/Analysis/Delinearization/multidim_only_ivs_2d_nested.ll Fri Feb 27 13:29:02 2015
@@ -53,7 +53,7 @@ for.body9.lr.ph.us.us:
 for.body9.us.us:                                  ; preds = %for.body9.us.us, %for.body9.lr.ph.us.us
   %j.021.us.us = phi i64 [ 0, %for.body9.lr.ph.us.us ], [ %inc.us.us, %for.body9.us.us ]
   %arrayidx.sum.us.us = add i64 %j.021.us.us, %0
-  %arrayidx10.us.us = getelementptr inbounds double* %vla.us, i64 %arrayidx.sum.us.us
+  %arrayidx10.us.us = getelementptr inbounds double, double* %vla.us, i64 %arrayidx.sum.us.us
   store double 1.000000e+00, double* %arrayidx10.us.us, align 8
   %inc.us.us = add nsw i64 %j.021.us.us, 1
   %exitcond50 = icmp eq i64 %inc.us.us, %indvars.iv48

Modified: llvm/trunk/test/Analysis/Delinearization/multidim_only_ivs_3d.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/Delinearization/multidim_only_ivs_3d.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/Delinearization/multidim_only_ivs_3d.ll (original)
+++ llvm/trunk/test/Analysis/Delinearization/multidim_only_ivs_3d.ll Fri Feb 27 13:29:02 2015
@@ -31,7 +31,7 @@ for.k:
   %subscript1 = add i64 %j, %subscript0
   %subscript2 = mul i64 %subscript1, %o
   %subscript = add i64 %subscript2, %k
-  %idx = getelementptr inbounds double* %A, i64 %subscript
+  %idx = getelementptr inbounds double, double* %A, i64 %subscript
   store double 1.0, double* %idx
   br label %for.k.inc
 

Modified: llvm/trunk/test/Analysis/Delinearization/multidim_only_ivs_3d_cast.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/Delinearization/multidim_only_ivs_3d_cast.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/Delinearization/multidim_only_ivs_3d_cast.ll (original)
+++ llvm/trunk/test/Analysis/Delinearization/multidim_only_ivs_3d_cast.ll Fri Feb 27 13:29:02 2015
@@ -38,7 +38,7 @@ for.k:
   %tmp.us.us = add i64 %j, %tmp
   %tmp17.us.us = mul i64 %tmp.us.us, %n_zext
   %subscript = add i64 %tmp17.us.us, %k
-  %idx = getelementptr inbounds double* %A, i64 %subscript
+  %idx = getelementptr inbounds double, double* %A, i64 %subscript
   store double 1.0, double* %idx
   br label %for.k.inc
 

Modified: llvm/trunk/test/Analysis/Delinearization/multidim_two_accesses_different_delinearization.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/Delinearization/multidim_two_accesses_different_delinearization.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/Delinearization/multidim_two_accesses_different_delinearization.ll (original)
+++ llvm/trunk/test/Analysis/Delinearization/multidim_two_accesses_different_delinearization.ll Fri Feb 27 13:29:02 2015
@@ -23,11 +23,11 @@ for.j:
   %j = phi i64 [ 0, %for.i ], [ %j.inc, %for.j ]
   %tmp = mul nsw i64 %i, %m
   %vlaarrayidx.sum = add i64 %j, %tmp
-  %arrayidx = getelementptr inbounds double* %A, i64 %vlaarrayidx.sum
+  %arrayidx = getelementptr inbounds double, double* %A, i64 %vlaarrayidx.sum
   store double 1.0, double* %arrayidx
   %tmp1 = mul nsw i64 %j, %n
   %vlaarrayidx.sum1 = add i64 %i, %tmp1
-  %arrayidx1 = getelementptr inbounds double* %A, i64 %vlaarrayidx.sum1
+  %arrayidx1 = getelementptr inbounds double, double* %A, i64 %vlaarrayidx.sum1
   store double 1.0, double* %arrayidx1
   %j.inc = add nsw i64 %j, 1
   %j.exitcond = icmp eq i64 %j.inc, %m

Modified: llvm/trunk/test/Analysis/Delinearization/undef.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/Delinearization/undef.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/Delinearization/undef.ll (original)
+++ llvm/trunk/test/Analysis/Delinearization/undef.ll Fri Feb 27 13:29:02 2015
@@ -20,7 +20,7 @@ for.body60:
   %tmp5 = add i64 %iy.067, %0
   %tmp6 = mul i64 %tmp5, undef
   %arrayidx69.sum = add i64 undef, %tmp6
-  %arrayidx70 = getelementptr inbounds double* %Ey, i64 %arrayidx69.sum
+  %arrayidx70 = getelementptr inbounds double, double* %Ey, i64 %arrayidx69.sum
   %1 = load double* %arrayidx70, align 8
   %inc = add nsw i64 %ix.062, 1
   br i1 false, label %for.body60, label %for.end

Modified: llvm/trunk/test/Analysis/DependenceAnalysis/Banerjee.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/DependenceAnalysis/Banerjee.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/DependenceAnalysis/Banerjee.ll (original)
+++ llvm/trunk/test/Analysis/DependenceAnalysis/Banerjee.ll Fri Feb 27 13:29:02 2015
@@ -40,21 +40,21 @@ for.body3:
   %B.addr.11 = phi i64* [ %B.addr.04, %for.cond1.preheader ], [ %incdec.ptr, %for.body3 ]
   %mul = mul nsw i64 %i.03, 10
   %add = add nsw i64 %mul, %j.02
-  %arrayidx = getelementptr inbounds i64* %A, i64 %add
+  %arrayidx = getelementptr inbounds i64, i64* %A, i64 %add
   store i64 0, i64* %arrayidx, align 8
   %mul4 = mul nsw i64 %i.03, 10
   %add5 = add nsw i64 %mul4, %j.02
   %sub = add nsw i64 %add5, -1
-  %arrayidx6 = getelementptr inbounds i64* %A, i64 %sub
+  %arrayidx6 = getelementptr inbounds i64, i64* %A, i64 %sub
   %0 = load i64* %arrayidx6, align 8
-  %incdec.ptr = getelementptr inbounds i64* %B.addr.11, i64 1
+  %incdec.ptr = getelementptr inbounds i64, i64* %B.addr.11, i64 1
   store i64 %0, i64* %B.addr.11, align 8
   %inc = add nsw i64 %j.02, 1
   %exitcond = icmp ne i64 %inc, 11
   br i1 %exitcond, label %for.body3, label %for.inc7
 
 for.inc7:                                         ; preds = %for.body3
-  %scevgep = getelementptr i64* %B.addr.04, i64 10
+  %scevgep = getelementptr i64, i64* %B.addr.04, i64 10
   %inc8 = add nsw i64 %i.03, 1
   %exitcond5 = icmp ne i64 %inc8, 11
   br i1 %exitcond5, label %for.cond1.preheader, label %for.end9
@@ -109,21 +109,21 @@ for.body3:
   %B.addr.12 = phi i64* [ %incdec.ptr, %for.body3 ], [ %B.addr.06, %for.body3.preheader ]
   %mul = mul nsw i64 %i.05, 10
   %add = add nsw i64 %mul, %j.03
-  %arrayidx = getelementptr inbounds i64* %A, i64 %add
+  %arrayidx = getelementptr inbounds i64, i64* %A, i64 %add
   store i64 0, i64* %arrayidx, align 8
   %mul4 = mul nsw i64 %i.05, 10
   %add5 = add nsw i64 %mul4, %j.03
   %sub = add nsw i64 %add5, -1
-  %arrayidx6 = getelementptr inbounds i64* %A, i64 %sub
+  %arrayidx6 = getelementptr inbounds i64, i64* %A, i64 %sub
   %2 = load i64* %arrayidx6, align 8
-  %incdec.ptr = getelementptr inbounds i64* %B.addr.12, i64 1
+  %incdec.ptr = getelementptr inbounds i64, i64* %B.addr.12, i64 1
   store i64 %2, i64* %B.addr.12, align 8
   %inc = add nsw i64 %j.03, 1
   %exitcond = icmp eq i64 %inc, %1
   br i1 %exitcond, label %for.inc7.loopexit, label %for.body3
 
 for.inc7.loopexit:                                ; preds = %for.body3
-  %scevgep = getelementptr i64* %B.addr.06, i64 %m
+  %scevgep = getelementptr i64, i64* %B.addr.06, i64 %m
   br label %for.inc7
 
 for.inc7:                                         ; preds = %for.inc7.loopexit, %for.cond1.preheader
@@ -175,21 +175,21 @@ for.body3:
   %B.addr.11 = phi i64* [ %B.addr.04, %for.cond1.preheader ], [ %incdec.ptr, %for.body3 ]
   %mul = mul nsw i64 %i.03, 10
   %add = add nsw i64 %mul, %j.02
-  %arrayidx = getelementptr inbounds i64* %A, i64 %add
+  %arrayidx = getelementptr inbounds i64, i64* %A, i64 %add
   store i64 0, i64* %arrayidx, align 8
   %mul4 = mul nsw i64 %i.03, 10
   %add5 = add nsw i64 %mul4, %j.02
   %add6 = add nsw i64 %add5, 100
-  %arrayidx7 = getelementptr inbounds i64* %A, i64 %add6
+  %arrayidx7 = getelementptr inbounds i64, i64* %A, i64 %add6
   %0 = load i64* %arrayidx7, align 8
-  %incdec.ptr = getelementptr inbounds i64* %B.addr.11, i64 1
+  %incdec.ptr = getelementptr inbounds i64, i64* %B.addr.11, i64 1
   store i64 %0, i64* %B.addr.11, align 8
   %inc = add nsw i64 %j.02, 1
   %exitcond = icmp ne i64 %inc, 10
   br i1 %exitcond, label %for.body3, label %for.inc8
 
 for.inc8:                                         ; preds = %for.body3
-  %scevgep = getelementptr i64* %B.addr.04, i64 10
+  %scevgep = getelementptr i64, i64* %B.addr.04, i64 10
   %inc9 = add nsw i64 %i.03, 1
   %exitcond5 = icmp ne i64 %inc9, 10
   br i1 %exitcond5, label %for.cond1.preheader, label %for.end10
@@ -234,21 +234,21 @@ for.body3:
   %B.addr.11 = phi i64* [ %B.addr.04, %for.cond1.preheader ], [ %incdec.ptr, %for.body3 ]
   %mul = mul nsw i64 %i.03, 10
   %add = add nsw i64 %mul, %j.02
-  %arrayidx = getelementptr inbounds i64* %A, i64 %add
+  %arrayidx = getelementptr inbounds i64, i64* %A, i64 %add
   store i64 0, i64* %arrayidx, align 8
   %mul4 = mul nsw i64 %i.03, 10
   %add5 = add nsw i64 %mul4, %j.02
   %add6 = add nsw i64 %add5, 99
-  %arrayidx7 = getelementptr inbounds i64* %A, i64 %add6
+  %arrayidx7 = getelementptr inbounds i64, i64* %A, i64 %add6
   %0 = load i64* %arrayidx7, align 8
-  %incdec.ptr = getelementptr inbounds i64* %B.addr.11, i64 1
+  %incdec.ptr = getelementptr inbounds i64, i64* %B.addr.11, i64 1
   store i64 %0, i64* %B.addr.11, align 8
   %inc = add nsw i64 %j.02, 1
   %exitcond = icmp ne i64 %inc, 10
   br i1 %exitcond, label %for.body3, label %for.inc8
 
 for.inc8:                                         ; preds = %for.body3
-  %scevgep = getelementptr i64* %B.addr.04, i64 10
+  %scevgep = getelementptr i64, i64* %B.addr.04, i64 10
   %inc9 = add nsw i64 %i.03, 1
   %exitcond5 = icmp ne i64 %inc9, 10
   br i1 %exitcond5, label %for.cond1.preheader, label %for.end10
@@ -293,21 +293,21 @@ for.body3:
   %B.addr.11 = phi i64* [ %B.addr.04, %for.cond1.preheader ], [ %incdec.ptr, %for.body3 ]
   %mul = mul nsw i64 %i.03, 10
   %add = add nsw i64 %mul, %j.02
-  %arrayidx = getelementptr inbounds i64* %A, i64 %add
+  %arrayidx = getelementptr inbounds i64, i64* %A, i64 %add
   store i64 0, i64* %arrayidx, align 8
   %mul4 = mul nsw i64 %i.03, 10
   %add5 = add nsw i64 %mul4, %j.02
   %sub = add nsw i64 %add5, -100
-  %arrayidx6 = getelementptr inbounds i64* %A, i64 %sub
+  %arrayidx6 = getelementptr inbounds i64, i64* %A, i64 %sub
   %0 = load i64* %arrayidx6, align 8
-  %incdec.ptr = getelementptr inbounds i64* %B.addr.11, i64 1
+  %incdec.ptr = getelementptr inbounds i64, i64* %B.addr.11, i64 1
   store i64 %0, i64* %B.addr.11, align 8
   %inc = add nsw i64 %j.02, 1
   %exitcond = icmp ne i64 %inc, 10
   br i1 %exitcond, label %for.body3, label %for.inc7
 
 for.inc7:                                         ; preds = %for.body3
-  %scevgep = getelementptr i64* %B.addr.04, i64 10
+  %scevgep = getelementptr i64, i64* %B.addr.04, i64 10
   %inc8 = add nsw i64 %i.03, 1
   %exitcond5 = icmp ne i64 %inc8, 10
   br i1 %exitcond5, label %for.cond1.preheader, label %for.end9
@@ -352,21 +352,21 @@ for.body3:
   %B.addr.11 = phi i64* [ %B.addr.04, %for.cond1.preheader ], [ %incdec.ptr, %for.body3 ]
   %mul = mul nsw i64 %i.03, 10
   %add = add nsw i64 %mul, %j.02
-  %arrayidx = getelementptr inbounds i64* %A, i64 %add
+  %arrayidx = getelementptr inbounds i64, i64* %A, i64 %add
   store i64 0, i64* %arrayidx, align 8
   %mul4 = mul nsw i64 %i.03, 10
   %add5 = add nsw i64 %mul4, %j.02
   %sub = add nsw i64 %add5, -99
-  %arrayidx6 = getelementptr inbounds i64* %A, i64 %sub
+  %arrayidx6 = getelementptr inbounds i64, i64* %A, i64 %sub
   %0 = load i64* %arrayidx6, align 8
-  %incdec.ptr = getelementptr inbounds i64* %B.addr.11, i64 1
+  %incdec.ptr = getelementptr inbounds i64, i64* %B.addr.11, i64 1
   store i64 %0, i64* %B.addr.11, align 8
   %inc = add nsw i64 %j.02, 1
   %exitcond = icmp ne i64 %inc, 10
   br i1 %exitcond, label %for.body3, label %for.inc7
 
 for.inc7:                                         ; preds = %for.body3
-  %scevgep = getelementptr i64* %B.addr.04, i64 10
+  %scevgep = getelementptr i64, i64* %B.addr.04, i64 10
   %inc8 = add nsw i64 %i.03, 1
   %exitcond5 = icmp ne i64 %inc8, 10
   br i1 %exitcond5, label %for.cond1.preheader, label %for.end9
@@ -411,21 +411,21 @@ for.body3:
   %B.addr.11 = phi i64* [ %B.addr.04, %for.cond1.preheader ], [ %incdec.ptr, %for.body3 ]
   %mul = mul nsw i64 %i.03, 10
   %add = add nsw i64 %mul, %j.02
-  %arrayidx = getelementptr inbounds i64* %A, i64 %add
+  %arrayidx = getelementptr inbounds i64, i64* %A, i64 %add
   store i64 0, i64* %arrayidx, align 8
   %mul4 = mul nsw i64 %i.03, 10
   %add5 = add nsw i64 %mul4, %j.02
   %add6 = add nsw i64 %add5, 9
-  %arrayidx7 = getelementptr inbounds i64* %A, i64 %add6
+  %arrayidx7 = getelementptr inbounds i64, i64* %A, i64 %add6
   %0 = load i64* %arrayidx7, align 8
-  %incdec.ptr = getelementptr inbounds i64* %B.addr.11, i64 1
+  %incdec.ptr = getelementptr inbounds i64, i64* %B.addr.11, i64 1
   store i64 %0, i64* %B.addr.11, align 8
   %inc = add nsw i64 %j.02, 1
   %exitcond = icmp ne i64 %inc, 10
   br i1 %exitcond, label %for.body3, label %for.inc8
 
 for.inc8:                                         ; preds = %for.body3
-  %scevgep = getelementptr i64* %B.addr.04, i64 10
+  %scevgep = getelementptr i64, i64* %B.addr.04, i64 10
   %inc9 = add nsw i64 %i.03, 1
   %exitcond5 = icmp ne i64 %inc9, 10
   br i1 %exitcond5, label %for.cond1.preheader, label %for.end10
@@ -470,21 +470,21 @@ for.body3:
   %B.addr.11 = phi i64* [ %B.addr.04, %for.cond1.preheader ], [ %incdec.ptr, %for.body3 ]
   %mul = mul nsw i64 %i.03, 10
   %add = add nsw i64 %mul, %j.02
-  %arrayidx = getelementptr inbounds i64* %A, i64 %add
+  %arrayidx = getelementptr inbounds i64, i64* %A, i64 %add
   store i64 0, i64* %arrayidx, align 8
   %mul4 = mul nsw i64 %i.03, 10
   %add5 = add nsw i64 %mul4, %j.02
   %add6 = add nsw i64 %add5, 10
-  %arrayidx7 = getelementptr inbounds i64* %A, i64 %add6
+  %arrayidx7 = getelementptr inbounds i64, i64* %A, i64 %add6
   %0 = load i64* %arrayidx7, align 8
-  %incdec.ptr = getelementptr inbounds i64* %B.addr.11, i64 1
+  %incdec.ptr = getelementptr inbounds i64, i64* %B.addr.11, i64 1
   store i64 %0, i64* %B.addr.11, align 8
   %inc = add nsw i64 %j.02, 1
   %exitcond = icmp ne i64 %inc, 10
   br i1 %exitcond, label %for.body3, label %for.inc8
 
 for.inc8:                                         ; preds = %for.body3
-  %scevgep = getelementptr i64* %B.addr.04, i64 10
+  %scevgep = getelementptr i64, i64* %B.addr.04, i64 10
   %inc9 = add nsw i64 %i.03, 1
   %exitcond5 = icmp ne i64 %inc9, 10
   br i1 %exitcond5, label %for.cond1.preheader, label %for.end10
@@ -529,21 +529,21 @@ for.body3:
   %B.addr.11 = phi i64* [ %B.addr.04, %for.cond1.preheader ], [ %incdec.ptr, %for.body3 ]
   %mul = mul nsw i64 %i.03, 10
   %add = add nsw i64 %mul, %j.02
-  %arrayidx = getelementptr inbounds i64* %A, i64 %add
+  %arrayidx = getelementptr inbounds i64, i64* %A, i64 %add
   store i64 0, i64* %arrayidx, align 8
   %mul4 = mul nsw i64 %i.03, 10
   %add5 = add nsw i64 %mul4, %j.02
   %add6 = add nsw i64 %add5, 11
-  %arrayidx7 = getelementptr inbounds i64* %A, i64 %add6
+  %arrayidx7 = getelementptr inbounds i64, i64* %A, i64 %add6
   %0 = load i64* %arrayidx7, align 8
-  %incdec.ptr = getelementptr inbounds i64* %B.addr.11, i64 1
+  %incdec.ptr = getelementptr inbounds i64, i64* %B.addr.11, i64 1
   store i64 %0, i64* %B.addr.11, align 8
   %inc = add nsw i64 %j.02, 1
   %exitcond = icmp ne i64 %inc, 10
   br i1 %exitcond, label %for.body3, label %for.inc8
 
 for.inc8:                                         ; preds = %for.body3
-  %scevgep = getelementptr i64* %B.addr.04, i64 10
+  %scevgep = getelementptr i64, i64* %B.addr.04, i64 10
   %inc9 = add nsw i64 %i.03, 1
   %exitcond5 = icmp ne i64 %inc9, 10
   br i1 %exitcond5, label %for.cond1.preheader, label %for.end10
@@ -589,21 +589,21 @@ for.body3:
   %mul = mul nsw i64 %i.03, 30
   %mul4 = mul nsw i64 %j.02, 500
   %add = add nsw i64 %mul, %mul4
-  %arrayidx = getelementptr inbounds i64* %A, i64 %add
+  %arrayidx = getelementptr inbounds i64, i64* %A, i64 %add
   store i64 0, i64* %arrayidx, align 8
   %0 = mul i64 %j.02, -500
   %sub = add i64 %i.03, %0
   %add6 = add nsw i64 %sub, 11
-  %arrayidx7 = getelementptr inbounds i64* %A, i64 %add6
+  %arrayidx7 = getelementptr inbounds i64, i64* %A, i64 %add6
   %1 = load i64* %arrayidx7, align 8
-  %incdec.ptr = getelementptr inbounds i64* %B.addr.11, i64 1
+  %incdec.ptr = getelementptr inbounds i64, i64* %B.addr.11, i64 1
   store i64 %1, i64* %B.addr.11, align 8
   %inc = add nsw i64 %j.02, 1
   %exitcond = icmp ne i64 %inc, 20
   br i1 %exitcond, label %for.body3, label %for.inc8
 
 for.inc8:                                         ; preds = %for.body3
-  %scevgep = getelementptr i64* %B.addr.04, i64 20
+  %scevgep = getelementptr i64, i64* %B.addr.04, i64 20
   %inc9 = add nsw i64 %i.03, 1
   %exitcond5 = icmp ne i64 %inc9, 20
   br i1 %exitcond5, label %for.cond1.preheader, label %for.end10
@@ -648,21 +648,21 @@ for.body3:
   %B.addr.11 = phi i64* [ %B.addr.04, %for.cond1.preheader ], [ %incdec.ptr, %for.body3 ]
   %mul = mul nsw i64 %j.02, 500
   %add = add nsw i64 %i.03, %mul
-  %arrayidx = getelementptr inbounds i64* %A, i64 %add
+  %arrayidx = getelementptr inbounds i64, i64* %A, i64 %add
   store i64 0, i64* %arrayidx, align 8
   %0 = mul i64 %j.02, -500
   %sub = add i64 %i.03, %0
   %add5 = add nsw i64 %sub, 11
-  %arrayidx6 = getelementptr inbounds i64* %A, i64 %add5
+  %arrayidx6 = getelementptr inbounds i64, i64* %A, i64 %add5
   %1 = load i64* %arrayidx6, align 8
-  %incdec.ptr = getelementptr inbounds i64* %B.addr.11, i64 1
+  %incdec.ptr = getelementptr inbounds i64, i64* %B.addr.11, i64 1
   store i64 %1, i64* %B.addr.11, align 8
   %inc = add nsw i64 %j.02, 1
   %exitcond = icmp ne i64 %inc, 20
   br i1 %exitcond, label %for.body3, label %for.inc7
 
 for.inc7:                                         ; preds = %for.body3
-  %scevgep = getelementptr i64* %B.addr.04, i64 20
+  %scevgep = getelementptr i64, i64* %B.addr.04, i64 20
   %inc8 = add nsw i64 %i.03, 1
   %exitcond5 = icmp ne i64 %inc8, 20
   br i1 %exitcond5, label %for.cond1.preheader, label %for.end9
@@ -707,21 +707,21 @@ for.body3:
   %B.addr.11 = phi i64* [ %B.addr.04, %for.cond1.preheader ], [ %incdec.ptr, %for.body3 ]
   %mul = mul nsw i64 %i.03, 300
   %add = add nsw i64 %mul, %j.02
-  %arrayidx = getelementptr inbounds i64* %A, i64 %add
+  %arrayidx = getelementptr inbounds i64, i64* %A, i64 %add
   store i64 0, i64* %arrayidx, align 8
   %mul4 = mul nsw i64 %i.03, 250
   %sub = sub nsw i64 %mul4, %j.02
   %add5 = add nsw i64 %sub, 11
-  %arrayidx6 = getelementptr inbounds i64* %A, i64 %add5
+  %arrayidx6 = getelementptr inbounds i64, i64* %A, i64 %add5
   %0 = load i64* %arrayidx6, align 8
-  %incdec.ptr = getelementptr inbounds i64* %B.addr.11, i64 1
+  %incdec.ptr = getelementptr inbounds i64, i64* %B.addr.11, i64 1
   store i64 %0, i64* %B.addr.11, align 8
   %inc = add nsw i64 %j.02, 1
   %exitcond = icmp ne i64 %inc, 20
   br i1 %exitcond, label %for.body3, label %for.inc7
 
 for.inc7:                                         ; preds = %for.body3
-  %scevgep = getelementptr i64* %B.addr.04, i64 20
+  %scevgep = getelementptr i64, i64* %B.addr.04, i64 20
   %inc8 = add nsw i64 %i.03, 1
   %exitcond5 = icmp ne i64 %inc8, 20
   br i1 %exitcond5, label %for.cond1.preheader, label %for.end9
@@ -766,21 +766,21 @@ for.body3:
   %B.addr.11 = phi i64* [ %B.addr.04, %for.cond1.preheader ], [ %incdec.ptr, %for.body3 ]
   %mul = mul nsw i64 %i.03, 100
   %add = add nsw i64 %mul, %j.02
-  %arrayidx = getelementptr inbounds i64* %A, i64 %add
+  %arrayidx = getelementptr inbounds i64, i64* %A, i64 %add
   store i64 0, i64* %arrayidx, align 8
   %mul4 = mul nsw i64 %i.03, 100
   %sub = sub nsw i64 %mul4, %j.02
   %add5 = add nsw i64 %sub, 11
-  %arrayidx6 = getelementptr inbounds i64* %A, i64 %add5
+  %arrayidx6 = getelementptr inbounds i64, i64* %A, i64 %add5
   %0 = load i64* %arrayidx6, align 8
-  %incdec.ptr = getelementptr inbounds i64* %B.addr.11, i64 1
+  %incdec.ptr = getelementptr inbounds i64, i64* %B.addr.11, i64 1
   store i64 %0, i64* %B.addr.11, align 8
   %inc = add nsw i64 %j.02, 1
   %exitcond = icmp ne i64 %inc, 20
   br i1 %exitcond, label %for.body3, label %for.inc7
 
 for.inc7:                                         ; preds = %for.body3
-  %scevgep = getelementptr i64* %B.addr.04, i64 20
+  %scevgep = getelementptr i64, i64* %B.addr.04, i64 20
   %inc8 = add nsw i64 %i.03, 1
   %exitcond5 = icmp ne i64 %inc8, 20
   br i1 %exitcond5, label %for.cond1.preheader, label %for.end9

Modified: llvm/trunk/test/Analysis/DependenceAnalysis/Coupled.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/DependenceAnalysis/Coupled.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/DependenceAnalysis/Coupled.ll (original)
+++ llvm/trunk/test/Analysis/DependenceAnalysis/Coupled.ll Fri Feb 27 13:29:02 2015
@@ -24,13 +24,13 @@ for.body:
   %i.02 = phi i64 [ 0, %entry ], [ %inc, %for.body ]
   %B.addr.01 = phi i32* [ %B, %entry ], [ %incdec.ptr, %for.body ]
   %conv = trunc i64 %i.02 to i32
-  %arrayidx1 = getelementptr inbounds [100 x i32]* %A, i64 %i.02, i64 %i.02
+  %arrayidx1 = getelementptr inbounds [100 x i32], [100 x i32]* %A, i64 %i.02, i64 %i.02
   store i32 %conv, i32* %arrayidx1, align 4
   %add = add nsw i64 %i.02, 9
   %add2 = add nsw i64 %i.02, 10
-  %arrayidx4 = getelementptr inbounds [100 x i32]* %A, i64 %add2, i64 %add
+  %arrayidx4 = getelementptr inbounds [100 x i32], [100 x i32]* %A, i64 %add2, i64 %add
   %0 = load i32* %arrayidx4, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.01, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.01, i64 1
   store i32 %0, i32* %B.addr.01, align 4
   %inc = add nsw i64 %i.02, 1
   %exitcond = icmp ne i64 %inc, 50
@@ -60,13 +60,13 @@ for.body:
   %i.02 = phi i64 [ 0, %entry ], [ %inc, %for.body ]
   %B.addr.01 = phi i32* [ %B, %entry ], [ %incdec.ptr, %for.body ]
   %conv = trunc i64 %i.02 to i32
-  %arrayidx1 = getelementptr inbounds [100 x i32]* %A, i64 %i.02, i64 %i.02
+  %arrayidx1 = getelementptr inbounds [100 x i32], [100 x i32]* %A, i64 %i.02, i64 %i.02
   store i32 %conv, i32* %arrayidx1, align 4
   %add = add nsw i64 %i.02, 9
   %add2 = add nsw i64 %i.02, 9
-  %arrayidx4 = getelementptr inbounds [100 x i32]* %A, i64 %add2, i64 %add
+  %arrayidx4 = getelementptr inbounds [100 x i32], [100 x i32]* %A, i64 %add2, i64 %add
   %0 = load i32* %arrayidx4, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.01, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.01, i64 1
   store i32 %0, i32* %B.addr.01, align 4
   %inc = add nsw i64 %i.02, 1
   %exitcond = icmp ne i64 %inc, 50
@@ -100,11 +100,11 @@ for.body:
   %sub = add nsw i64 %mul, -6
   %mul1 = mul nsw i64 %i.02, 3
   %sub2 = add nsw i64 %mul1, -6
-  %arrayidx3 = getelementptr inbounds [100 x i32]* %A, i64 %sub2, i64 %sub
+  %arrayidx3 = getelementptr inbounds [100 x i32], [100 x i32]* %A, i64 %sub2, i64 %sub
   store i32 %conv, i32* %arrayidx3, align 4
-  %arrayidx5 = getelementptr inbounds [100 x i32]* %A, i64 %i.02, i64 %i.02
+  %arrayidx5 = getelementptr inbounds [100 x i32], [100 x i32]* %A, i64 %i.02, i64 %i.02
   %0 = load i32* %arrayidx5, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.01, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.01, i64 1
   store i32 %0, i32* %B.addr.01, align 4
   %inc = add nsw i64 %i.02, 1
   %exitcond = icmp ne i64 %inc, 50
@@ -138,11 +138,11 @@ for.body:
   %sub = add nsw i64 %mul, -5
   %mul1 = mul nsw i64 %i.02, 3
   %sub2 = add nsw i64 %mul1, -6
-  %arrayidx3 = getelementptr inbounds [100 x i32]* %A, i64 %sub2, i64 %sub
+  %arrayidx3 = getelementptr inbounds [100 x i32], [100 x i32]* %A, i64 %sub2, i64 %sub
   store i32 %conv, i32* %arrayidx3, align 4
-  %arrayidx5 = getelementptr inbounds [100 x i32]* %A, i64 %i.02, i64 %i.02
+  %arrayidx5 = getelementptr inbounds [100 x i32], [100 x i32]* %A, i64 %i.02, i64 %i.02
   %0 = load i32* %arrayidx5, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.01, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.01, i64 1
   store i32 %0, i32* %B.addr.01, align 4
   %inc = add nsw i64 %i.02, 1
   %exitcond = icmp ne i64 %inc, 50
@@ -177,11 +177,11 @@ for.body:
   %sub = sub nsw i64 %mul, %conv1
   %mul2 = mul nsw i64 %i.02, 3
   %sub3 = add nsw i64 %mul2, -6
-  %arrayidx4 = getelementptr inbounds [100 x i32]* %A, i64 %sub3, i64 %sub
+  %arrayidx4 = getelementptr inbounds [100 x i32], [100 x i32]* %A, i64 %sub3, i64 %sub
   store i32 %conv, i32* %arrayidx4, align 4
-  %arrayidx6 = getelementptr inbounds [100 x i32]* %A, i64 %i.02, i64 %i.02
+  %arrayidx6 = getelementptr inbounds [100 x i32], [100 x i32]* %A, i64 %i.02, i64 %i.02
   %0 = load i32* %arrayidx6, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.01, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.01, i64 1
   store i32 %0, i32* %B.addr.01, align 4
   %inc = add nsw i64 %i.02, 1
   %exitcond = icmp ne i64 %inc, 50
@@ -218,11 +218,11 @@ for.body:
   %conv3 = sext i32 %n to i64
   %sub4 = sub nsw i64 %mul2, %conv3
   %add = add nsw i64 %sub4, 1
-  %arrayidx5 = getelementptr inbounds [100 x i32]* %A, i64 %add, i64 %sub
+  %arrayidx5 = getelementptr inbounds [100 x i32], [100 x i32]* %A, i64 %add, i64 %sub
   store i32 %conv, i32* %arrayidx5, align 4
-  %arrayidx7 = getelementptr inbounds [100 x i32]* %A, i64 %i.02, i64 %i.02
+  %arrayidx7 = getelementptr inbounds [100 x i32], [100 x i32]* %A, i64 %i.02, i64 %i.02
   %0 = load i32* %arrayidx7, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.01, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.01, i64 1
   store i32 %0, i32* %B.addr.01, align 4
   %inc = add nsw i64 %i.02, 1
   %exitcond = icmp ne i64 %inc, 50
@@ -254,11 +254,11 @@ for.body:
   %conv = trunc i64 %i.02 to i32
   %mul = mul nsw i64 %i.02, 3
   %sub = add nsw i64 %mul, -6
-  %arrayidx1 = getelementptr inbounds [100 x i32]* %A, i64 %i.02, i64 %sub
+  %arrayidx1 = getelementptr inbounds [100 x i32], [100 x i32]* %A, i64 %i.02, i64 %sub
   store i32 %conv, i32* %arrayidx1, align 4
-  %arrayidx3 = getelementptr inbounds [100 x i32]* %A, i64 %i.02, i64 %i.02
+  %arrayidx3 = getelementptr inbounds [100 x i32], [100 x i32]* %A, i64 %i.02, i64 %i.02
   %0 = load i32* %arrayidx3, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.01, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.01, i64 1
   store i32 %0, i32* %B.addr.01, align 4
   %inc = add nsw i64 %i.02, 1
   %exitcond = icmp ne i64 %inc, 50
@@ -290,11 +290,11 @@ for.body:
   %conv = trunc i64 %i.02 to i32
   %mul = mul nsw i64 %i.02, 3
   %sub = add nsw i64 %mul, -5
-  %arrayidx1 = getelementptr inbounds [100 x i32]* %A, i64 %i.02, i64 %sub
+  %arrayidx1 = getelementptr inbounds [100 x i32], [100 x i32]* %A, i64 %i.02, i64 %sub
   store i32 %conv, i32* %arrayidx1, align 4
-  %arrayidx3 = getelementptr inbounds [100 x i32]* %A, i64 %i.02, i64 %i.02
+  %arrayidx3 = getelementptr inbounds [100 x i32], [100 x i32]* %A, i64 %i.02, i64 %i.02
   %0 = load i32* %arrayidx3, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.01, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.01, i64 1
   store i32 %0, i32* %B.addr.01, align 4
   %inc = add nsw i64 %i.02, 1
   %exitcond = icmp ne i64 %inc, 50
@@ -327,11 +327,11 @@ for.body:
   %sub = sub nsw i64 3, %i.02
   %mul = mul nsw i64 %i.02, 3
   %sub1 = add nsw i64 %mul, -18
-  %arrayidx2 = getelementptr inbounds [100 x i32]* %A, i64 %sub1, i64 %sub
+  %arrayidx2 = getelementptr inbounds [100 x i32], [100 x i32]* %A, i64 %sub1, i64 %sub
   store i32 %conv, i32* %arrayidx2, align 4
-  %arrayidx4 = getelementptr inbounds [100 x i32]* %A, i64 %i.02, i64 %i.02
+  %arrayidx4 = getelementptr inbounds [100 x i32], [100 x i32]* %A, i64 %i.02, i64 %i.02
   %0 = load i32* %arrayidx4, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.01, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.01, i64 1
   store i32 %0, i32* %B.addr.01, align 4
   %inc = add nsw i64 %i.02, 1
   %exitcond = icmp ne i64 %inc, 16
@@ -364,11 +364,11 @@ for.body:
   %sub = sub nsw i64 2, %i.02
   %mul = mul nsw i64 %i.02, 3
   %sub1 = add nsw i64 %mul, -18
-  %arrayidx2 = getelementptr inbounds [100 x i32]* %A, i64 %sub1, i64 %sub
+  %arrayidx2 = getelementptr inbounds [100 x i32], [100 x i32]* %A, i64 %sub1, i64 %sub
   store i32 %conv, i32* %arrayidx2, align 4
-  %arrayidx4 = getelementptr inbounds [100 x i32]* %A, i64 %i.02, i64 %i.02
+  %arrayidx4 = getelementptr inbounds [100 x i32], [100 x i32]* %A, i64 %i.02, i64 %i.02
   %0 = load i32* %arrayidx4, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.01, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.01, i64 1
   store i32 %0, i32* %B.addr.01, align 4
   %inc = add nsw i64 %i.02, 1
   %exitcond = icmp ne i64 %inc, 16
@@ -402,11 +402,11 @@ for.body:
   %sub = sub nsw i64 6, %i.02
   %mul = mul nsw i64 %i.02, 3
   %sub1 = add nsw i64 %mul, -18
-  %arrayidx2 = getelementptr inbounds [100 x i32]* %A, i64 %sub1, i64 %sub
+  %arrayidx2 = getelementptr inbounds [100 x i32], [100 x i32]* %A, i64 %sub1, i64 %sub
   store i32 %conv, i32* %arrayidx2, align 4
-  %arrayidx4 = getelementptr inbounds [100 x i32]* %A, i64 %i.02, i64 %i.02
+  %arrayidx4 = getelementptr inbounds [100 x i32], [100 x i32]* %A, i64 %i.02, i64 %i.02
   %0 = load i32* %arrayidx4, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.01, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.01, i64 1
   store i32 %0, i32* %B.addr.01, align 4
   %inc = add nsw i64 %i.02, 1
   %exitcond = icmp ne i64 %inc, 16
@@ -440,11 +440,11 @@ for.body:
   %sub = sub nsw i64 18, %i.02
   %mul = mul nsw i64 %i.02, 3
   %sub1 = add nsw i64 %mul, -18
-  %arrayidx2 = getelementptr inbounds [100 x i32]* %A, i64 %sub1, i64 %sub
+  %arrayidx2 = getelementptr inbounds [100 x i32], [100 x i32]* %A, i64 %sub1, i64 %sub
   store i32 %conv, i32* %arrayidx2, align 4
-  %arrayidx4 = getelementptr inbounds [100 x i32]* %A, i64 %i.02, i64 %i.02
+  %arrayidx4 = getelementptr inbounds [100 x i32], [100 x i32]* %A, i64 %i.02, i64 %i.02
   %0 = load i32* %arrayidx4, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.01, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.01, i64 1
   store i32 %0, i32* %B.addr.01, align 4
   %inc = add nsw i64 %i.02, 1
   %exitcond = icmp ne i64 %inc, 16
@@ -478,11 +478,11 @@ for.body:
   %sub = sub nsw i64 22, %i.02
   %mul = mul nsw i64 %i.02, 3
   %sub1 = add nsw i64 %mul, -18
-  %arrayidx2 = getelementptr inbounds [100 x i32]* %A, i64 %sub1, i64 %sub
+  %arrayidx2 = getelementptr inbounds [100 x i32], [100 x i32]* %A, i64 %sub1, i64 %sub
   store i32 %conv, i32* %arrayidx2, align 4
-  %arrayidx4 = getelementptr inbounds [100 x i32]* %A, i64 %i.02, i64 %i.02
+  %arrayidx4 = getelementptr inbounds [100 x i32], [100 x i32]* %A, i64 %i.02, i64 %i.02
   %0 = load i32* %arrayidx4, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.01, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.01, i64 1
   store i32 %0, i32* %B.addr.01, align 4
   %inc = add nsw i64 %i.02, 1
   %exitcond = icmp ne i64 %inc, 13
@@ -515,11 +515,11 @@ for.body:
   %sub = sub nsw i64 22, %i.02
   %mul = mul nsw i64 %i.02, 3
   %sub1 = add nsw i64 %mul, -18
-  %arrayidx2 = getelementptr inbounds [100 x i32]* %A, i64 %sub1, i64 %sub
+  %arrayidx2 = getelementptr inbounds [100 x i32], [100 x i32]* %A, i64 %sub1, i64 %sub
   store i32 %conv, i32* %arrayidx2, align 4
-  %arrayidx4 = getelementptr inbounds [100 x i32]* %A, i64 %i.02, i64 %i.02
+  %arrayidx4 = getelementptr inbounds [100 x i32], [100 x i32]* %A, i64 %i.02, i64 %i.02
   %0 = load i32* %arrayidx4, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.01, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.01, i64 1
   store i32 %0, i32* %B.addr.01, align 4
   %inc = add nsw i64 %i.02, 1
   %exitcond = icmp ne i64 %inc, 12
@@ -552,11 +552,11 @@ for.body:
   %sub = sub nsw i64 18, %i.02
   %mul = mul nsw i64 %i.02, 3
   %sub1 = add nsw i64 %mul, -18
-  %arrayidx3 = getelementptr inbounds [100 x [100 x i32]]* %A, i64 %sub1, i64 %sub, i64 %i.02
+  %arrayidx3 = getelementptr inbounds [100 x [100 x i32]], [100 x [100 x i32]]* %A, i64 %sub1, i64 %sub, i64 %i.02
   store i32 %conv, i32* %arrayidx3, align 4
-  %arrayidx6 = getelementptr inbounds [100 x [100 x i32]]* %A, i64 %i.02, i64 %i.02, i64 %i.02
+  %arrayidx6 = getelementptr inbounds [100 x [100 x i32]], [100 x [100 x i32]]* %A, i64 %i.02, i64 %i.02, i64 %i.02
   %0 = load i32* %arrayidx6, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.01, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.01, i64 1
   store i32 %0, i32* %B.addr.01, align 4
   %inc = add nsw i64 %i.02, 1
   %exitcond = icmp ne i64 %inc, 100
@@ -589,11 +589,11 @@ for.body:
   %sub = sub nsw i64 22, %i.02
   %mul = mul nsw i64 %i.02, 3
   %sub1 = add nsw i64 %mul, -18
-  %arrayidx3 = getelementptr inbounds [100 x [100 x i32]]* %A, i64 %sub1, i64 %sub, i64 %i.02
+  %arrayidx3 = getelementptr inbounds [100 x [100 x i32]], [100 x [100 x i32]]* %A, i64 %sub1, i64 %sub, i64 %i.02
   store i32 %conv, i32* %arrayidx3, align 4
-  %arrayidx6 = getelementptr inbounds [100 x [100 x i32]]* %A, i64 %i.02, i64 %i.02, i64 %i.02
+  %arrayidx6 = getelementptr inbounds [100 x [100 x i32]], [100 x [100 x i32]]* %A, i64 %i.02, i64 %i.02, i64 %i.02
   %0 = load i32* %arrayidx6, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.01, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.01, i64 1
   store i32 %0, i32* %B.addr.01, align 4
   %inc = add nsw i64 %i.02, 1
   %exitcond = icmp ne i64 %inc, 100

Modified: llvm/trunk/test/Analysis/DependenceAnalysis/ExactRDIV.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/DependenceAnalysis/ExactRDIV.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/DependenceAnalysis/ExactRDIV.ll (original)
+++ llvm/trunk/test/Analysis/DependenceAnalysis/ExactRDIV.ll Fri Feb 27 13:29:02 2015
@@ -26,7 +26,7 @@ for.body:
   %conv = trunc i64 %i.03 to i32
   %mul = shl nsw i64 %i.03, 2
   %add = add nsw i64 %mul, 10
-  %arrayidx = getelementptr inbounds i32* %A, i64 %add
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %add
   store i32 %conv, i32* %arrayidx, align 4
   %inc = add nsw i64 %i.03, 1
   %exitcond5 = icmp ne i64 %inc, 10
@@ -40,9 +40,9 @@ for.body4:
   %B.addr.01 = phi i32* [ %incdec.ptr, %for.body4 ], [ %B, %for.body4.preheader ]
   %mul5 = shl nsw i64 %j.02, 1
   %add64 = or i64 %mul5, 1
-  %arrayidx7 = getelementptr inbounds i32* %A, i64 %add64
+  %arrayidx7 = getelementptr inbounds i32, i32* %A, i64 %add64
   %0 = load i32* %arrayidx7, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.01, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.01, i64 1
   store i32 %0, i32* %B.addr.01, align 4
   %inc9 = add nsw i64 %j.02, 1
   %exitcond = icmp ne i64 %inc9, 10
@@ -74,7 +74,7 @@ for.body:
   %conv = trunc i64 %i.03 to i32
   %mul = mul nsw i64 %i.03, 11
   %sub = add nsw i64 %mul, -45
-  %arrayidx = getelementptr inbounds i32* %A, i64 %sub
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %sub
   store i32 %conv, i32* %arrayidx, align 4
   %inc = add nsw i64 %i.03, 1
   %exitcond4 = icmp ne i64 %inc, 5
@@ -86,9 +86,9 @@ for.body4.preheader:
 for.body4:                                        ; preds = %for.body4.preheader, %for.body4
   %j.02 = phi i64 [ %inc7, %for.body4 ], [ 0, %for.body4.preheader ]
   %B.addr.01 = phi i32* [ %incdec.ptr, %for.body4 ], [ %B, %for.body4.preheader ]
-  %arrayidx5 = getelementptr inbounds i32* %A, i64 %j.02
+  %arrayidx5 = getelementptr inbounds i32, i32* %A, i64 %j.02
   %0 = load i32* %arrayidx5, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.01, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.01, i64 1
   store i32 %0, i32* %B.addr.01, align 4
   %inc7 = add nsw i64 %j.02, 1
   %exitcond = icmp ne i64 %inc7, 10
@@ -120,7 +120,7 @@ for.body:
   %conv = trunc i64 %i.03 to i32
   %mul = mul nsw i64 %i.03, 11
   %sub = add nsw i64 %mul, -45
-  %arrayidx = getelementptr inbounds i32* %A, i64 %sub
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %sub
   store i32 %conv, i32* %arrayidx, align 4
   %inc = add nsw i64 %i.03, 1
   %exitcond4 = icmp ne i64 %inc, 6
@@ -132,9 +132,9 @@ for.body4.preheader:
 for.body4:                                        ; preds = %for.body4.preheader, %for.body4
   %j.02 = phi i64 [ %inc7, %for.body4 ], [ 0, %for.body4.preheader ]
   %B.addr.01 = phi i32* [ %incdec.ptr, %for.body4 ], [ %B, %for.body4.preheader ]
-  %arrayidx5 = getelementptr inbounds i32* %A, i64 %j.02
+  %arrayidx5 = getelementptr inbounds i32, i32* %A, i64 %j.02
   %0 = load i32* %arrayidx5, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.01, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.01, i64 1
   store i32 %0, i32* %B.addr.01, align 4
   %inc7 = add nsw i64 %j.02, 1
   %exitcond = icmp ne i64 %inc7, 10
@@ -166,7 +166,7 @@ for.body:
   %conv = trunc i64 %i.03 to i32
   %mul = mul nsw i64 %i.03, 11
   %sub = add nsw i64 %mul, -45
-  %arrayidx = getelementptr inbounds i32* %A, i64 %sub
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %sub
   store i32 %conv, i32* %arrayidx, align 4
   %inc = add nsw i64 %i.03, 1
   %exitcond4 = icmp ne i64 %inc, 5
@@ -178,9 +178,9 @@ for.body4.preheader:
 for.body4:                                        ; preds = %for.body4.preheader, %for.body4
   %j.02 = phi i64 [ %inc7, %for.body4 ], [ 0, %for.body4.preheader ]
   %B.addr.01 = phi i32* [ %incdec.ptr, %for.body4 ], [ %B, %for.body4.preheader ]
-  %arrayidx5 = getelementptr inbounds i32* %A, i64 %j.02
+  %arrayidx5 = getelementptr inbounds i32, i32* %A, i64 %j.02
   %0 = load i32* %arrayidx5, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.01, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.01, i64 1
   store i32 %0, i32* %B.addr.01, align 4
   %inc7 = add nsw i64 %j.02, 1
   %exitcond = icmp ne i64 %inc7, 11
@@ -212,7 +212,7 @@ for.body:
   %conv = trunc i64 %i.03 to i32
   %mul = mul nsw i64 %i.03, 11
   %sub = add nsw i64 %mul, -45
-  %arrayidx = getelementptr inbounds i32* %A, i64 %sub
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %sub
   store i32 %conv, i32* %arrayidx, align 4
   %inc = add nsw i64 %i.03, 1
   %exitcond4 = icmp ne i64 %inc, 6
@@ -224,9 +224,9 @@ for.body4.preheader:
 for.body4:                                        ; preds = %for.body4.preheader, %for.body4
   %j.02 = phi i64 [ %inc7, %for.body4 ], [ 0, %for.body4.preheader ]
   %B.addr.01 = phi i32* [ %incdec.ptr, %for.body4 ], [ %B, %for.body4.preheader ]
-  %arrayidx5 = getelementptr inbounds i32* %A, i64 %j.02
+  %arrayidx5 = getelementptr inbounds i32, i32* %A, i64 %j.02
   %0 = load i32* %arrayidx5, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.01, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.01, i64 1
   store i32 %0, i32* %B.addr.01, align 4
   %inc7 = add nsw i64 %j.02, 1
   %exitcond = icmp ne i64 %inc7, 11
@@ -258,7 +258,7 @@ for.body:
   %conv = trunc i64 %i.03 to i32
   %mul = mul nsw i64 %i.03, -11
   %add = add nsw i64 %mul, 45
-  %arrayidx = getelementptr inbounds i32* %A, i64 %add
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %add
   store i32 %conv, i32* %arrayidx, align 4
   %inc = add nsw i64 %i.03, 1
   %exitcond4 = icmp ne i64 %inc, 5
@@ -271,9 +271,9 @@ for.body4:
   %j.02 = phi i64 [ %inc7, %for.body4 ], [ 0, %for.body4.preheader ]
   %B.addr.01 = phi i32* [ %incdec.ptr, %for.body4 ], [ %B, %for.body4.preheader ]
   %sub = sub nsw i64 0, %j.02
-  %arrayidx5 = getelementptr inbounds i32* %A, i64 %sub
+  %arrayidx5 = getelementptr inbounds i32, i32* %A, i64 %sub
   %0 = load i32* %arrayidx5, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.01, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.01, i64 1
   store i32 %0, i32* %B.addr.01, align 4
   %inc7 = add nsw i64 %j.02, 1
   %exitcond = icmp ne i64 %inc7, 10
@@ -305,7 +305,7 @@ for.body:
   %conv = trunc i64 %i.03 to i32
   %mul = mul nsw i64 %i.03, -11
   %add = add nsw i64 %mul, 45
-  %arrayidx = getelementptr inbounds i32* %A, i64 %add
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %add
   store i32 %conv, i32* %arrayidx, align 4
   %inc = add nsw i64 %i.03, 1
   %exitcond4 = icmp ne i64 %inc, 6
@@ -318,9 +318,9 @@ for.body4:
   %j.02 = phi i64 [ %inc7, %for.body4 ], [ 0, %for.body4.preheader ]
   %B.addr.01 = phi i32* [ %incdec.ptr, %for.body4 ], [ %B, %for.body4.preheader ]
   %sub = sub nsw i64 0, %j.02
-  %arrayidx5 = getelementptr inbounds i32* %A, i64 %sub
+  %arrayidx5 = getelementptr inbounds i32, i32* %A, i64 %sub
   %0 = load i32* %arrayidx5, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.01, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.01, i64 1
   store i32 %0, i32* %B.addr.01, align 4
   %inc7 = add nsw i64 %j.02, 1
   %exitcond = icmp ne i64 %inc7, 10
@@ -352,7 +352,7 @@ for.body:
   %conv = trunc i64 %i.03 to i32
   %mul = mul nsw i64 %i.03, -11
   %add = add nsw i64 %mul, 45
-  %arrayidx = getelementptr inbounds i32* %A, i64 %add
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %add
   store i32 %conv, i32* %arrayidx, align 4
   %inc = add nsw i64 %i.03, 1
   %exitcond4 = icmp ne i64 %inc, 5
@@ -365,9 +365,9 @@ for.body4:
   %j.02 = phi i64 [ %inc7, %for.body4 ], [ 0, %for.body4.preheader ]
   %B.addr.01 = phi i32* [ %incdec.ptr, %for.body4 ], [ %B, %for.body4.preheader ]
   %sub = sub nsw i64 0, %j.02
-  %arrayidx5 = getelementptr inbounds i32* %A, i64 %sub
+  %arrayidx5 = getelementptr inbounds i32, i32* %A, i64 %sub
   %0 = load i32* %arrayidx5, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.01, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.01, i64 1
   store i32 %0, i32* %B.addr.01, align 4
   %inc7 = add nsw i64 %j.02, 1
   %exitcond = icmp ne i64 %inc7, 11
@@ -399,7 +399,7 @@ for.body:
   %conv = trunc i64 %i.03 to i32
   %mul = mul nsw i64 %i.03, -11
   %add = add nsw i64 %mul, 45
-  %arrayidx = getelementptr inbounds i32* %A, i64 %add
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %add
   store i32 %conv, i32* %arrayidx, align 4
   %inc = add nsw i64 %i.03, 1
   %exitcond4 = icmp ne i64 %inc, 6
@@ -412,9 +412,9 @@ for.body4:
   %j.02 = phi i64 [ %inc7, %for.body4 ], [ 0, %for.body4.preheader ]
   %B.addr.01 = phi i32* [ %incdec.ptr, %for.body4 ], [ %B, %for.body4.preheader ]
   %sub = sub nsw i64 0, %j.02
-  %arrayidx5 = getelementptr inbounds i32* %A, i64 %sub
+  %arrayidx5 = getelementptr inbounds i32, i32* %A, i64 %sub
   %0 = load i32* %arrayidx5, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.01, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.01, i64 1
   store i32 %0, i32* %B.addr.01, align 4
   %inc7 = add nsw i64 %j.02, 1
   %exitcond = icmp ne i64 %inc7, 11
@@ -452,18 +452,18 @@ for.body3:
   %conv = trunc i64 %i.03 to i32
   %mul = mul nsw i64 %i.03, 11
   %sub = sub nsw i64 %mul, %j.02
-  %arrayidx = getelementptr inbounds i32* %A, i64 %sub
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %sub
   store i32 %conv, i32* %arrayidx, align 4
-  %arrayidx4 = getelementptr inbounds i32* %A, i64 45
+  %arrayidx4 = getelementptr inbounds i32, i32* %A, i64 45
   %0 = load i32* %arrayidx4, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.11, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.11, i64 1
   store i32 %0, i32* %B.addr.11, align 4
   %inc = add nsw i64 %j.02, 1
   %exitcond = icmp ne i64 %inc, 10
   br i1 %exitcond, label %for.body3, label %for.inc5
 
 for.inc5:                                         ; preds = %for.body3
-  %scevgep = getelementptr i32* %B.addr.04, i64 10
+  %scevgep = getelementptr i32, i32* %B.addr.04, i64 10
   %inc6 = add nsw i64 %i.03, 1
   %exitcond5 = icmp ne i64 %inc6, 5
   br i1 %exitcond5, label %for.cond1.preheader, label %for.end7
@@ -501,18 +501,18 @@ for.body3:
   %conv = trunc i64 %i.03 to i32
   %mul = mul nsw i64 %i.03, 11
   %sub = sub nsw i64 %mul, %j.02
-  %arrayidx = getelementptr inbounds i32* %A, i64 %sub
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %sub
   store i32 %conv, i32* %arrayidx, align 4
-  %arrayidx4 = getelementptr inbounds i32* %A, i64 45
+  %arrayidx4 = getelementptr inbounds i32, i32* %A, i64 45
   %0 = load i32* %arrayidx4, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.11, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.11, i64 1
   store i32 %0, i32* %B.addr.11, align 4
   %inc = add nsw i64 %j.02, 1
   %exitcond = icmp ne i64 %inc, 10
   br i1 %exitcond, label %for.body3, label %for.inc5
 
 for.inc5:                                         ; preds = %for.body3
-  %scevgep = getelementptr i32* %B.addr.04, i64 10
+  %scevgep = getelementptr i32, i32* %B.addr.04, i64 10
   %inc6 = add nsw i64 %i.03, 1
   %exitcond5 = icmp ne i64 %inc6, 6
   br i1 %exitcond5, label %for.cond1.preheader, label %for.end7
@@ -549,18 +549,18 @@ for.body3:
   %conv = trunc i64 %i.03 to i32
   %mul = mul nsw i64 %i.03, 11
   %sub = sub nsw i64 %mul, %j.02
-  %arrayidx = getelementptr inbounds i32* %A, i64 %sub
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %sub
   store i32 %conv, i32* %arrayidx, align 4
-  %arrayidx4 = getelementptr inbounds i32* %A, i64 45
+  %arrayidx4 = getelementptr inbounds i32, i32* %A, i64 45
   %0 = load i32* %arrayidx4, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.11, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.11, i64 1
   store i32 %0, i32* %B.addr.11, align 4
   %inc = add nsw i64 %j.02, 1
   %exitcond = icmp ne i64 %inc, 11
   br i1 %exitcond, label %for.body3, label %for.inc5
 
 for.inc5:                                         ; preds = %for.body3
-  %scevgep = getelementptr i32* %B.addr.04, i64 11
+  %scevgep = getelementptr i32, i32* %B.addr.04, i64 11
   %inc6 = add nsw i64 %i.03, 1
   %exitcond5 = icmp ne i64 %inc6, 5
   br i1 %exitcond5, label %for.cond1.preheader, label %for.end7
@@ -597,18 +597,18 @@ for.body3:
   %conv = trunc i64 %i.03 to i32
   %mul = mul nsw i64 %i.03, 11
   %sub = sub nsw i64 %mul, %j.02
-  %arrayidx = getelementptr inbounds i32* %A, i64 %sub
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %sub
   store i32 %conv, i32* %arrayidx, align 4
-  %arrayidx4 = getelementptr inbounds i32* %A, i64 45
+  %arrayidx4 = getelementptr inbounds i32, i32* %A, i64 45
   %0 = load i32* %arrayidx4, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.11, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.11, i64 1
   store i32 %0, i32* %B.addr.11, align 4
   %inc = add nsw i64 %j.02, 1
   %exitcond = icmp ne i64 %inc, 11
   br i1 %exitcond, label %for.body3, label %for.inc5
 
 for.inc5:                                         ; preds = %for.body3
-  %scevgep = getelementptr i32* %B.addr.04, i64 11
+  %scevgep = getelementptr i32, i32* %B.addr.04, i64 11
   %inc6 = add nsw i64 %i.03, 1
   %exitcond5 = icmp ne i64 %inc6, 6
   br i1 %exitcond5, label %for.cond1.preheader, label %for.end7

Modified: llvm/trunk/test/Analysis/DependenceAnalysis/ExactSIV.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/DependenceAnalysis/ExactSIV.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/DependenceAnalysis/ExactSIV.ll (original)
+++ llvm/trunk/test/Analysis/DependenceAnalysis/ExactSIV.ll Fri Feb 27 13:29:02 2015
@@ -25,13 +25,13 @@ for.body:
   %B.addr.01 = phi i32* [ %B, %entry ], [ %incdec.ptr, %for.body ]
   %conv = trunc i64 %i.02 to i32
   %add = add i64 %i.02, 10
-  %arrayidx = getelementptr inbounds i32* %A, i64 %add
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %add
   store i32 %conv, i32* %arrayidx, align 4
   %mul = shl i64 %i.02, 1
   %add13 = or i64 %mul, 1
-  %arrayidx2 = getelementptr inbounds i32* %A, i64 %add13
+  %arrayidx2 = getelementptr inbounds i32, i32* %A, i64 %add13
   %0 = load i32* %arrayidx2, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.01, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.01, i64 1
   store i32 %0, i32* %B.addr.01, align 4
   %inc = add i64 %i.02, 1
   %exitcond = icmp ne i64 %inc, 10
@@ -63,13 +63,13 @@ for.body:
   %conv = trunc i64 %i.02 to i32
   %mul = shl i64 %i.02, 2
   %add = add i64 %mul, 10
-  %arrayidx = getelementptr inbounds i32* %A, i64 %add
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %add
   store i32 %conv, i32* %arrayidx, align 4
   %mul1 = shl i64 %i.02, 1
   %add23 = or i64 %mul1, 1
-  %arrayidx3 = getelementptr inbounds i32* %A, i64 %add23
+  %arrayidx3 = getelementptr inbounds i32, i32* %A, i64 %add23
   %0 = load i32* %arrayidx3, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.01, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.01, i64 1
   store i32 %0, i32* %B.addr.01, align 4
   %inc = add i64 %i.02, 1
   %exitcond = icmp ne i64 %inc, 10
@@ -100,12 +100,12 @@ for.body:
   %B.addr.01 = phi i32* [ %B, %entry ], [ %incdec.ptr, %for.body ]
   %conv = trunc i64 %i.02 to i32
   %mul = mul i64 %i.02, 6
-  %arrayidx = getelementptr inbounds i32* %A, i64 %mul
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %mul
   store i32 %conv, i32* %arrayidx, align 4
   %add = add i64 %i.02, 60
-  %arrayidx1 = getelementptr inbounds i32* %A, i64 %add
+  %arrayidx1 = getelementptr inbounds i32, i32* %A, i64 %add
   %0 = load i32* %arrayidx1, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.01, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.01, i64 1
   store i32 %0, i32* %B.addr.01, align 4
   %inc = add i64 %i.02, 1
   %exitcond = icmp ne i64 %inc, 10
@@ -136,12 +136,12 @@ for.body:
   %B.addr.01 = phi i32* [ %B, %entry ], [ %incdec.ptr, %for.body ]
   %conv = trunc i64 %i.02 to i32
   %mul = mul i64 %i.02, 6
-  %arrayidx = getelementptr inbounds i32* %A, i64 %mul
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %mul
   store i32 %conv, i32* %arrayidx, align 4
   %add = add i64 %i.02, 60
-  %arrayidx1 = getelementptr inbounds i32* %A, i64 %add
+  %arrayidx1 = getelementptr inbounds i32, i32* %A, i64 %add
   %0 = load i32* %arrayidx1, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.01, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.01, i64 1
   store i32 %0, i32* %B.addr.01, align 4
   %inc = add i64 %i.02, 1
   %exitcond = icmp ne i64 %inc, 11
@@ -172,12 +172,12 @@ for.body:
   %B.addr.01 = phi i32* [ %B, %entry ], [ %incdec.ptr, %for.body ]
   %conv = trunc i64 %i.02 to i32
   %mul = mul i64 %i.02, 6
-  %arrayidx = getelementptr inbounds i32* %A, i64 %mul
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %mul
   store i32 %conv, i32* %arrayidx, align 4
   %add = add i64 %i.02, 60
-  %arrayidx1 = getelementptr inbounds i32* %A, i64 %add
+  %arrayidx1 = getelementptr inbounds i32, i32* %A, i64 %add
   %0 = load i32* %arrayidx1, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.01, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.01, i64 1
   store i32 %0, i32* %B.addr.01, align 4
   %inc = add i64 %i.02, 1
   %exitcond = icmp ne i64 %inc, 12
@@ -208,12 +208,12 @@ for.body:
   %B.addr.01 = phi i32* [ %B, %entry ], [ %incdec.ptr, %for.body ]
   %conv = trunc i64 %i.02 to i32
   %mul = mul i64 %i.02, 6
-  %arrayidx = getelementptr inbounds i32* %A, i64 %mul
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %mul
   store i32 %conv, i32* %arrayidx, align 4
   %add = add i64 %i.02, 60
-  %arrayidx1 = getelementptr inbounds i32* %A, i64 %add
+  %arrayidx1 = getelementptr inbounds i32, i32* %A, i64 %add
   %0 = load i32* %arrayidx1, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.01, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.01, i64 1
   store i32 %0, i32* %B.addr.01, align 4
   %inc = add i64 %i.02, 1
   %exitcond = icmp ne i64 %inc, 13
@@ -244,12 +244,12 @@ for.body:
   %B.addr.01 = phi i32* [ %B, %entry ], [ %incdec.ptr, %for.body ]
   %conv = trunc i64 %i.02 to i32
   %mul = mul i64 %i.02, 6
-  %arrayidx = getelementptr inbounds i32* %A, i64 %mul
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %mul
   store i32 %conv, i32* %arrayidx, align 4
   %add = add i64 %i.02, 60
-  %arrayidx1 = getelementptr inbounds i32* %A, i64 %add
+  %arrayidx1 = getelementptr inbounds i32, i32* %A, i64 %add
   %0 = load i32* %arrayidx1, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.01, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.01, i64 1
   store i32 %0, i32* %B.addr.01, align 4
   %inc = add i64 %i.02, 1
   %exitcond = icmp ne i64 %inc, 18
@@ -280,12 +280,12 @@ for.body:
   %B.addr.01 = phi i32* [ %B, %entry ], [ %incdec.ptr, %for.body ]
   %conv = trunc i64 %i.02 to i32
   %mul = mul i64 %i.02, 6
-  %arrayidx = getelementptr inbounds i32* %A, i64 %mul
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %mul
   store i32 %conv, i32* %arrayidx, align 4
   %add = add i64 %i.02, 60
-  %arrayidx1 = getelementptr inbounds i32* %A, i64 %add
+  %arrayidx1 = getelementptr inbounds i32, i32* %A, i64 %add
   %0 = load i32* %arrayidx1, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.01, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.01, i64 1
   store i32 %0, i32* %B.addr.01, align 4
   %inc = add i64 %i.02, 1
   %exitcond = icmp ne i64 %inc, 19
@@ -316,12 +316,12 @@ for.body:
   %B.addr.01 = phi i32* [ %B, %entry ], [ %incdec.ptr, %for.body ]
   %conv = trunc i64 %i.02 to i32
   %mul = mul i64 %i.02, -6
-  %arrayidx = getelementptr inbounds i32* %A, i64 %mul
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %mul
   store i32 %conv, i32* %arrayidx, align 4
   %sub1 = sub i64 -60, %i.02
-  %arrayidx2 = getelementptr inbounds i32* %A, i64 %sub1
+  %arrayidx2 = getelementptr inbounds i32, i32* %A, i64 %sub1
   %0 = load i32* %arrayidx2, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.01, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.01, i64 1
   store i32 %0, i32* %B.addr.01, align 4
   %inc = add i64 %i.02, 1
   %exitcond = icmp ne i64 %inc, 10
@@ -352,12 +352,12 @@ for.body:
   %B.addr.01 = phi i32* [ %B, %entry ], [ %incdec.ptr, %for.body ]
   %conv = trunc i64 %i.02 to i32
   %mul = mul i64 %i.02, -6
-  %arrayidx = getelementptr inbounds i32* %A, i64 %mul
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %mul
   store i32 %conv, i32* %arrayidx, align 4
   %sub1 = sub i64 -60, %i.02
-  %arrayidx2 = getelementptr inbounds i32* %A, i64 %sub1
+  %arrayidx2 = getelementptr inbounds i32, i32* %A, i64 %sub1
   %0 = load i32* %arrayidx2, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.01, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.01, i64 1
   store i32 %0, i32* %B.addr.01, align 4
   %inc = add i64 %i.02, 1
   %exitcond = icmp ne i64 %inc, 11
@@ -388,12 +388,12 @@ for.body:
   %B.addr.01 = phi i32* [ %B, %entry ], [ %incdec.ptr, %for.body ]
   %conv = trunc i64 %i.02 to i32
   %mul = mul i64 %i.02, -6
-  %arrayidx = getelementptr inbounds i32* %A, i64 %mul
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %mul
   store i32 %conv, i32* %arrayidx, align 4
   %sub1 = sub i64 -60, %i.02
-  %arrayidx2 = getelementptr inbounds i32* %A, i64 %sub1
+  %arrayidx2 = getelementptr inbounds i32, i32* %A, i64 %sub1
   %0 = load i32* %arrayidx2, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.01, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.01, i64 1
   store i32 %0, i32* %B.addr.01, align 4
   %inc = add i64 %i.02, 1
   %exitcond = icmp ne i64 %inc, 12
@@ -424,12 +424,12 @@ for.body:
   %B.addr.01 = phi i32* [ %B, %entry ], [ %incdec.ptr, %for.body ]
   %conv = trunc i64 %i.02 to i32
   %mul = mul i64 %i.02, -6
-  %arrayidx = getelementptr inbounds i32* %A, i64 %mul
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %mul
   store i32 %conv, i32* %arrayidx, align 4
   %sub1 = sub i64 -60, %i.02
-  %arrayidx2 = getelementptr inbounds i32* %A, i64 %sub1
+  %arrayidx2 = getelementptr inbounds i32, i32* %A, i64 %sub1
   %0 = load i32* %arrayidx2, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.01, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.01, i64 1
   store i32 %0, i32* %B.addr.01, align 4
   %inc = add i64 %i.02, 1
   %exitcond = icmp ne i64 %inc, 13
@@ -460,12 +460,12 @@ for.body:
   %B.addr.01 = phi i32* [ %B, %entry ], [ %incdec.ptr, %for.body ]
   %conv = trunc i64 %i.02 to i32
   %mul = mul i64 %i.02, -6
-  %arrayidx = getelementptr inbounds i32* %A, i64 %mul
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %mul
   store i32 %conv, i32* %arrayidx, align 4
   %sub1 = sub i64 -60, %i.02
-  %arrayidx2 = getelementptr inbounds i32* %A, i64 %sub1
+  %arrayidx2 = getelementptr inbounds i32, i32* %A, i64 %sub1
   %0 = load i32* %arrayidx2, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.01, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.01, i64 1
   store i32 %0, i32* %B.addr.01, align 4
   %inc = add i64 %i.02, 1
   %exitcond = icmp ne i64 %inc, 18
@@ -496,12 +496,12 @@ for.body:
   %B.addr.01 = phi i32* [ %B, %entry ], [ %incdec.ptr, %for.body ]
   %conv = trunc i64 %i.02 to i32
   %mul = mul i64 %i.02, -6
-  %arrayidx = getelementptr inbounds i32* %A, i64 %mul
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %mul
   store i32 %conv, i32* %arrayidx, align 4
   %sub1 = sub i64 -60, %i.02
-  %arrayidx2 = getelementptr inbounds i32* %A, i64 %sub1
+  %arrayidx2 = getelementptr inbounds i32, i32* %A, i64 %sub1
   %0 = load i32* %arrayidx2, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.01, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.01, i64 1
   store i32 %0, i32* %B.addr.01, align 4
   %inc = add i64 %i.02, 1
   %exitcond = icmp ne i64 %inc, 19

Modified: llvm/trunk/test/Analysis/DependenceAnalysis/GCD.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/DependenceAnalysis/GCD.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/DependenceAnalysis/GCD.ll (original)
+++ llvm/trunk/test/Analysis/DependenceAnalysis/GCD.ll Fri Feb 27 13:29:02 2015
@@ -43,21 +43,21 @@ for.body3:
   %mul = shl nsw i64 %i.03, 1
   %mul4 = shl nsw i64 %j.02, 2
   %sub = sub nsw i64 %mul, %mul4
-  %arrayidx = getelementptr inbounds i32* %A, i64 %sub
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %sub
   store i32 %conv, i32* %arrayidx, align 4
   %mul5 = mul nsw i64 %i.03, 6
   %mul6 = shl nsw i64 %j.02, 3
   %add = add nsw i64 %mul5, %mul6
-  %arrayidx7 = getelementptr inbounds i32* %A, i64 %add
+  %arrayidx7 = getelementptr inbounds i32, i32* %A, i64 %add
   %0 = load i32* %arrayidx7, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.11, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.11, i64 1
   store i32 %0, i32* %B.addr.11, align 4
   %inc = add nsw i64 %j.02, 1
   %exitcond = icmp ne i64 %inc, 100
   br i1 %exitcond, label %for.body3, label %for.inc8
 
 for.inc8:                                         ; preds = %for.body3
-  %scevgep = getelementptr i32* %B.addr.04, i64 100
+  %scevgep = getelementptr i32, i32* %B.addr.04, i64 100
   %inc9 = add nsw i64 %i.03, 1
   %exitcond5 = icmp ne i64 %inc9, 100
   br i1 %exitcond5, label %for.cond1.preheader, label %for.end10
@@ -104,22 +104,22 @@ for.body3:
   %mul = shl nsw i64 %i.03, 1
   %mul4 = shl nsw i64 %j.02, 2
   %sub = sub nsw i64 %mul, %mul4
-  %arrayidx = getelementptr inbounds i32* %A, i64 %sub
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %sub
   store i32 %conv, i32* %arrayidx, align 4
   %mul5 = mul nsw i64 %i.03, 6
   %mul6 = shl nsw i64 %j.02, 3
   %add = add nsw i64 %mul5, %mul6
   %add7 = or i64 %add, 1
-  %arrayidx8 = getelementptr inbounds i32* %A, i64 %add7
+  %arrayidx8 = getelementptr inbounds i32, i32* %A, i64 %add7
   %0 = load i32* %arrayidx8, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.11, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.11, i64 1
   store i32 %0, i32* %B.addr.11, align 4
   %inc = add nsw i64 %j.02, 1
   %exitcond = icmp ne i64 %inc, 100
   br i1 %exitcond, label %for.body3, label %for.inc9
 
 for.inc9:                                         ; preds = %for.body3
-  %scevgep = getelementptr i32* %B.addr.04, i64 100
+  %scevgep = getelementptr i32, i32* %B.addr.04, i64 100
   %inc10 = add nsw i64 %i.03, 1
   %exitcond5 = icmp ne i64 %inc10, 100
   br i1 %exitcond5, label %for.cond1.preheader, label %for.end11
@@ -167,21 +167,21 @@ for.body3:
   %mul4 = shl nsw i64 %j.02, 2
   %sub = sub nsw i64 %mul, %mul4
   %add5 = or i64 %sub, 1
-  %arrayidx = getelementptr inbounds i32* %A, i64 %add5
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %add5
   store i32 %conv, i32* %arrayidx, align 4
   %mul5 = mul nsw i64 %i.03, 6
   %mul6 = shl nsw i64 %j.02, 3
   %add7 = add nsw i64 %mul5, %mul6
-  %arrayidx8 = getelementptr inbounds i32* %A, i64 %add7
+  %arrayidx8 = getelementptr inbounds i32, i32* %A, i64 %add7
   %0 = load i32* %arrayidx8, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.11, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.11, i64 1
   store i32 %0, i32* %B.addr.11, align 4
   %inc = add nsw i64 %j.02, 1
   %exitcond = icmp ne i64 %inc, 100
   br i1 %exitcond, label %for.body3, label %for.inc9
 
 for.inc9:                                         ; preds = %for.body3
-  %scevgep = getelementptr i32* %B.addr.04, i64 100
+  %scevgep = getelementptr i32, i32* %B.addr.04, i64 100
   %inc10 = add nsw i64 %i.03, 1
   %exitcond6 = icmp ne i64 %inc10, 100
   br i1 %exitcond6, label %for.cond1.preheader, label %for.end11
@@ -227,21 +227,21 @@ for.body3:
   %conv = trunc i64 %i.03 to i32
   %mul = shl nsw i64 %j.02, 1
   %add = add nsw i64 %i.03, %mul
-  %arrayidx = getelementptr inbounds i32* %A, i64 %add
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %add
   store i32 %conv, i32* %arrayidx, align 4
   %mul4 = shl nsw i64 %j.02, 1
   %add5 = add nsw i64 %i.03, %mul4
   %sub = add nsw i64 %add5, -1
-  %arrayidx6 = getelementptr inbounds i32* %A, i64 %sub
+  %arrayidx6 = getelementptr inbounds i32, i32* %A, i64 %sub
   %0 = load i32* %arrayidx6, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.11, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.11, i64 1
   store i32 %0, i32* %B.addr.11, align 4
   %inc = add nsw i64 %j.02, 1
   %exitcond = icmp ne i64 %inc, 100
   br i1 %exitcond, label %for.body3, label %for.inc7
 
 for.inc7:                                         ; preds = %for.body3
-  %scevgep = getelementptr i32* %B.addr.04, i64 100
+  %scevgep = getelementptr i32, i32* %B.addr.04, i64 100
   %inc8 = add nsw i64 %i.03, 1
   %exitcond5 = icmp ne i64 %inc8, 100
   br i1 %exitcond5, label %for.cond1.preheader, label %for.end9
@@ -292,7 +292,7 @@ for.body3:
   %mul6 = mul nsw i64 %M, 9
   %mul7 = mul nsw i64 %mul6, %N
   %add8 = add nsw i64 %add, %mul7
-  %arrayidx = getelementptr inbounds i32* %A, i64 %add8
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %add8
   store i32 %conv, i32* %arrayidx, align 4
   %mul9 = mul nsw i64 %i.03, 15
   %mul10 = mul nsw i64 %j.02, 20
@@ -302,16 +302,16 @@ for.body3:
   %mul14 = mul nsw i64 %mul13, %M
   %sub = sub nsw i64 %add12, %mul14
   %add15 = add nsw i64 %sub, 4
-  %arrayidx16 = getelementptr inbounds i32* %A, i64 %add15
+  %arrayidx16 = getelementptr inbounds i32, i32* %A, i64 %add15
   %0 = load i32* %arrayidx16, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.11, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.11, i64 1
   store i32 %0, i32* %B.addr.11, align 4
   %inc = add nsw i64 %j.02, 1
   %exitcond = icmp ne i64 %inc, 100
   br i1 %exitcond, label %for.body3, label %for.inc17
 
 for.inc17:                                        ; preds = %for.body3
-  %scevgep = getelementptr i32* %B.addr.04, i64 100
+  %scevgep = getelementptr i32, i32* %B.addr.04, i64 100
   %inc18 = add nsw i64 %i.03, 1
   %exitcond5 = icmp ne i64 %inc18, 100
   br i1 %exitcond5, label %for.cond1.preheader, label %for.end19
@@ -362,7 +362,7 @@ for.body3:
   %mul6 = mul nsw i64 %M, 9
   %mul7 = mul nsw i64 %mul6, %N
   %add8 = add nsw i64 %add, %mul7
-  %arrayidx = getelementptr inbounds i32* %A, i64 %add8
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %add8
   store i32 %conv, i32* %arrayidx, align 4
   %mul9 = mul nsw i64 %i.03, 15
   %mul10 = mul nsw i64 %j.02, 20
@@ -372,16 +372,16 @@ for.body3:
   %mul14 = mul nsw i64 %mul13, %M
   %sub = sub nsw i64 %add12, %mul14
   %add15 = add nsw i64 %sub, 5
-  %arrayidx16 = getelementptr inbounds i32* %A, i64 %add15
+  %arrayidx16 = getelementptr inbounds i32, i32* %A, i64 %add15
   %0 = load i32* %arrayidx16, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.11, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.11, i64 1
   store i32 %0, i32* %B.addr.11, align 4
   %inc = add nsw i64 %j.02, 1
   %exitcond = icmp ne i64 %inc, 100
   br i1 %exitcond, label %for.body3, label %for.inc17
 
 for.inc17:                                        ; preds = %for.body3
-  %scevgep = getelementptr i32* %B.addr.04, i64 100
+  %scevgep = getelementptr i32, i32* %B.addr.04, i64 100
   %inc18 = add nsw i64 %i.03, 1
   %exitcond5 = icmp ne i64 %inc18, 100
   br i1 %exitcond5, label %for.cond1.preheader, label %for.end19
@@ -437,23 +437,23 @@ for.body3:
   %mul4 = shl nsw i64 %i.06, 1
   %0 = mul nsw i64 %mul4, %n
   %arrayidx.sum = add i64 %0, %mul
-  %arrayidx5 = getelementptr inbounds i32* %A, i64 %arrayidx.sum
+  %arrayidx5 = getelementptr inbounds i32, i32* %A, i64 %arrayidx.sum
   store i32 %conv, i32* %arrayidx5, align 4
   %mul6 = mul nsw i64 %j.03, 6
   %add7 = or i64 %mul6, 1
   %mul7 = shl nsw i64 %i.06, 3
   %1 = mul nsw i64 %mul7, %n
   %arrayidx8.sum = add i64 %1, %add7
-  %arrayidx9 = getelementptr inbounds i32* %A, i64 %arrayidx8.sum
+  %arrayidx9 = getelementptr inbounds i32, i32* %A, i64 %arrayidx8.sum
   %2 = load i32* %arrayidx9, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.12, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.12, i64 1
   store i32 %2, i32* %B.addr.12, align 4
   %inc = add nsw i64 %j.03, 1
   %exitcond = icmp ne i64 %inc, %n
   br i1 %exitcond, label %for.body3, label %for.inc10.loopexit
 
 for.inc10.loopexit:                               ; preds = %for.body3
-  %scevgep = getelementptr i32* %B.addr.05, i64 %n
+  %scevgep = getelementptr i32, i32* %B.addr.05, i64 %n
   br label %for.inc10
 
 for.inc10:                                        ; preds = %for.inc10.loopexit, %for.cond1.preheader
@@ -523,7 +523,7 @@ for.body3:
   %idxprom5 = sext i32 %mul4 to i64
   %6 = mul nsw i64 %idxprom5, %0
   %arrayidx.sum = add i64 %6, %idxprom
-  %arrayidx6 = getelementptr inbounds i32* %A, i64 %arrayidx.sum
+  %arrayidx6 = getelementptr inbounds i32, i32* %A, i64 %arrayidx.sum
   %7 = trunc i64 %indvars.iv8 to i32
   store i32 %7, i32* %arrayidx6, align 4
   %8 = trunc i64 %indvars.iv to i32
@@ -535,9 +535,9 @@ for.body3:
   %idxprom10 = sext i32 %mul9 to i64
   %10 = mul nsw i64 %idxprom10, %0
   %arrayidx11.sum = add i64 %10, %idxprom8
-  %arrayidx12 = getelementptr inbounds i32* %A, i64 %arrayidx11.sum
+  %arrayidx12 = getelementptr inbounds i32, i32* %A, i64 %arrayidx11.sum
   %11 = load i32* %arrayidx12, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.12, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.12, i64 1
   store i32 %11, i32* %B.addr.12, align 4
   %indvars.iv.next = add i64 %indvars.iv, 1
   %lftr.wideiv = trunc i64 %indvars.iv.next to i32
@@ -545,7 +545,7 @@ for.body3:
   br i1 %exitcond, label %for.body3, label %for.inc13.loopexit
 
 for.inc13.loopexit:                               ; preds = %for.body3
-  %scevgep = getelementptr i32* %B.addr.05, i64 %3
+  %scevgep = getelementptr i32, i32* %B.addr.05, i64 %3
   br label %for.inc13
 
 for.inc13:                                        ; preds = %for.inc13.loopexit, %for.cond1.preheader
@@ -613,7 +613,7 @@ for.body3:
   %mul5 = shl nsw i32 %3, 2
   %add = add nsw i32 %mul4, %mul5
   %idxprom = sext i32 %add to i64
-  %arrayidx = getelementptr inbounds i32* %A, i64 %idxprom
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %idxprom
   store i32 %i.06, i32* %arrayidx, align 4
   %mul6 = shl nsw i32 %n, 3
   %mul7 = mul nsw i32 %mul6, %i.06
@@ -622,9 +622,9 @@ for.body3:
   %add9 = add nsw i32 %mul7, %mul8
   %add10 = or i32 %add9, 1
   %idxprom11 = sext i32 %add10 to i64
-  %arrayidx12 = getelementptr inbounds i32* %A, i64 %idxprom11
+  %arrayidx12 = getelementptr inbounds i32, i32* %A, i64 %idxprom11
   %5 = load i32* %arrayidx12, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.12, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.12, i64 1
   store i32 %5, i32* %B.addr.12, align 4
   %indvars.iv.next = add i64 %indvars.iv, 1
   %lftr.wideiv = trunc i64 %indvars.iv.next to i32
@@ -632,7 +632,7 @@ for.body3:
   br i1 %exitcond, label %for.body3, label %for.inc13.loopexit
 
 for.inc13.loopexit:                               ; preds = %for.body3
-  %scevgep = getelementptr i32* %B.addr.05, i64 %2
+  %scevgep = getelementptr i32, i32* %B.addr.05, i64 %2
   br label %for.inc13
 
 for.inc13:                                        ; preds = %for.inc13.loopexit, %for.cond1.preheader
@@ -702,7 +702,7 @@ for.body3:
   %idxprom5 = zext i32 %mul4 to i64
   %6 = mul nsw i64 %idxprom5, %0
   %arrayidx.sum = add i64 %6, %idxprom
-  %arrayidx6 = getelementptr inbounds i32* %A, i64 %arrayidx.sum
+  %arrayidx6 = getelementptr inbounds i32, i32* %A, i64 %arrayidx.sum
   %7 = trunc i64 %indvars.iv8 to i32
   store i32 %7, i32* %arrayidx6, align 4
   %8 = trunc i64 %indvars.iv to i32
@@ -714,9 +714,9 @@ for.body3:
   %idxprom10 = zext i32 %mul9 to i64
   %10 = mul nsw i64 %idxprom10, %0
   %arrayidx11.sum = add i64 %10, %idxprom8
-  %arrayidx12 = getelementptr inbounds i32* %A, i64 %arrayidx11.sum
+  %arrayidx12 = getelementptr inbounds i32, i32* %A, i64 %arrayidx11.sum
   %11 = load i32* %arrayidx12, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.12, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.12, i64 1
   store i32 %11, i32* %B.addr.12, align 4
   %indvars.iv.next = add i64 %indvars.iv, 1
   %lftr.wideiv = trunc i64 %indvars.iv.next to i32
@@ -724,7 +724,7 @@ for.body3:
   br i1 %exitcond, label %for.body3, label %for.inc13.loopexit
 
 for.inc13.loopexit:                               ; preds = %for.body3
-  %scevgep = getelementptr i32* %B.addr.05, i64 %3
+  %scevgep = getelementptr i32, i32* %B.addr.05, i64 %3
   br label %for.inc13
 
 for.inc13:                                        ; preds = %for.inc13.loopexit, %for.cond1.preheader

Modified: llvm/trunk/test/Analysis/DependenceAnalysis/Invariant.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/DependenceAnalysis/Invariant.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/DependenceAnalysis/Invariant.ll (original)
+++ llvm/trunk/test/Analysis/DependenceAnalysis/Invariant.ll Fri Feb 27 13:29:02 2015
@@ -19,9 +19,9 @@ for.cond1.preheader:
 for.body3:
   %j.02 = phi i32 [ 0, %for.cond1.preheader ], [ %add8, %for.body3 ]
   %res.11 = phi float [ %res.03, %for.cond1.preheader ], [ %add.res.1, %for.body3 ]
-  %arrayidx4 = getelementptr inbounds [40 x float]* %rr, i32 %j.02, i32 %j.02
+  %arrayidx4 = getelementptr inbounds [40 x float], [40 x float]* %rr, i32 %j.02, i32 %j.02
   %0 = load float* %arrayidx4, align 4
-  %arrayidx6 = getelementptr inbounds [40 x float]* %rr, i32 %i.04, i32 %j.02
+  %arrayidx6 = getelementptr inbounds [40 x float], [40 x float]* %rr, i32 %i.04, i32 %j.02
   %1 = load float* %arrayidx6, align 4
   %add = fadd float %0, %1
   %cmp7 = fcmp ogt float %add, %g

Modified: llvm/trunk/test/Analysis/DependenceAnalysis/NonCanonicalizedSubscript.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/DependenceAnalysis/NonCanonicalizedSubscript.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/DependenceAnalysis/NonCanonicalizedSubscript.ll (original)
+++ llvm/trunk/test/Analysis/DependenceAnalysis/NonCanonicalizedSubscript.ll Fri Feb 27 13:29:02 2015
@@ -26,8 +26,8 @@ for.body:
 ; DELIN: da analyze - anti [=|<]!
 ; DELIN: da analyze - none!
   %i = phi i64 [ 0, %entry ], [ %i.inc, %for.body ]
-  %a.addr = getelementptr [100 x [100 x i32]]* %a, i64 0, i64 %i, i64 %i
-  %a.addr.2 = getelementptr [100 x [100 x i32]]* %a, i64 0, i64 %i, i32 5
+  %a.addr = getelementptr [100 x [100 x i32]], [100 x [100 x i32]]* %a, i64 0, i64 %i, i64 %i
+  %a.addr.2 = getelementptr [100 x [100 x i32]], [100 x [100 x i32]]* %a, i64 0, i64 %i, i32 5
   %0 = load i32* %a.addr, align 4
   %1 = add i32 %0, 1
   store i32 %1, i32* %a.addr.2, align 4

Modified: llvm/trunk/test/Analysis/DependenceAnalysis/Preliminary.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/DependenceAnalysis/Preliminary.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/DependenceAnalysis/Preliminary.ll (original)
+++ llvm/trunk/test/Analysis/DependenceAnalysis/Preliminary.ll Fri Feb 27 13:29:02 2015
@@ -17,7 +17,7 @@ entry:
 ; CHECK: da analyze - confused!
 ; CHECK: da analyze - none!
 
-  %arrayidx1 = getelementptr inbounds i32* %B, i64 1
+  %arrayidx1 = getelementptr inbounds i32, i32* %B, i64 1
   %0 = load i32* %arrayidx1, align 4
   ret i32 %0
 }
@@ -35,7 +35,7 @@ entry:
 ; CHECK: da analyze - none!
 ; CHECK: da analyze - none!
 
-  %arrayidx1 = getelementptr inbounds i32* %B, i64 1
+  %arrayidx1 = getelementptr inbounds i32, i32* %B, i64 1
   %0 = load i32* %arrayidx1, align 4
   ret i32 %0
 }
@@ -84,7 +84,7 @@ for.body6.preheader:
 
 for.body6:                                        ; preds = %for.body6.preheader, %for.body6
   %k.02 = phi i64 [ %inc, %for.body6 ], [ 0, %for.body6.preheader ]
-  %arrayidx8 = getelementptr inbounds [100 x [100 x i64]]* %A, i64 %i.011, i64 %j.07, i64 %k.02
+  %arrayidx8 = getelementptr inbounds [100 x [100 x i64]], [100 x [100 x i64]]* %A, i64 %i.011, i64 %j.07, i64 %k.02
   store i64 %i.011, i64* %arrayidx8, align 8
   %inc = add nsw i64 %k.02, 1
   %exitcond13 = icmp ne i64 %inc, %n
@@ -106,16 +106,16 @@ for.body12:
   %add = add nsw i64 %k9.05, 1
   %add13 = add nsw i64 %j.07, 2
   %add14 = add nsw i64 %i.011, 3
-  %arrayidx17 = getelementptr inbounds [100 x [100 x i64]]* %A, i64 %add14, i64 %add13, i64 %add
+  %arrayidx17 = getelementptr inbounds [100 x [100 x i64]], [100 x [100 x i64]]* %A, i64 %add14, i64 %add13, i64 %add
   %0 = load i64* %arrayidx17, align 8
-  %incdec.ptr = getelementptr inbounds i64* %B.addr.24, i64 1
+  %incdec.ptr = getelementptr inbounds i64, i64* %B.addr.24, i64 1
   store i64 %0, i64* %B.addr.24, align 8
   %inc19 = add nsw i64 %k9.05, 1
   %exitcond = icmp ne i64 %inc19, %n
   br i1 %exitcond, label %for.body12, label %for.inc21.loopexit
 
 for.inc21.loopexit:                               ; preds = %for.body12
-  %scevgep = getelementptr i64* %B.addr.18, i64 %n
+  %scevgep = getelementptr i64, i64* %B.addr.18, i64 %n
   br label %for.inc21
 
 for.inc21:                                        ; preds = %for.inc21.loopexit, %for.cond10.loopexit
@@ -281,7 +281,7 @@ for.body33:
   %add3547 = or i64 %mul, 1
   %sub = add nsw i64 %k.037, -1
   %sub36 = add nsw i64 %i.045, -3
-  %arrayidx43 = getelementptr inbounds [100 x [100 x [100 x [100 x [100 x [100 x [100 x i64]]]]]]]* %A, i64 %sub36, i64 %j.041, i64 2, i64 %sub, i64 %add3547, i64 %m.029, i64 %add34, i64 %add
+  %arrayidx43 = getelementptr inbounds [100 x [100 x [100 x [100 x [100 x [100 x [100 x i64]]]]]]], [100 x [100 x [100 x [100 x [100 x [100 x [100 x i64]]]]]]]* %A, i64 %sub36, i64 %j.041, i64 2, i64 %sub, i64 %add3547, i64 %m.029, i64 %add34, i64 %add
   store i64 %i.045, i64* %arrayidx43, align 8
   %add44 = add nsw i64 %t.03, 2
   %add45 = add nsw i64 %n, 1
@@ -289,16 +289,16 @@ for.body33:
   %sub47 = add nsw i64 %mul46, -1
   %sub48 = sub nsw i64 1, %k.037
   %add49 = add nsw i64 %i.045, 3
-  %arrayidx57 = getelementptr inbounds [100 x [100 x [100 x [100 x [100 x [100 x [100 x i64]]]]]]]* %A, i64 %add49, i64 2, i64 %u.06, i64 %sub48, i64 %sub47, i64 %o.025, i64 %add45, i64 %add44
+  %arrayidx57 = getelementptr inbounds [100 x [100 x [100 x [100 x [100 x [100 x [100 x i64]]]]]]], [100 x [100 x [100 x [100 x [100 x [100 x [100 x i64]]]]]]]* %A, i64 %add49, i64 2, i64 %u.06, i64 %sub48, i64 %sub47, i64 %o.025, i64 %add45, i64 %add44
   %0 = load i64* %arrayidx57, align 8
-  %incdec.ptr = getelementptr inbounds i64* %B.addr.112, i64 1
+  %incdec.ptr = getelementptr inbounds i64, i64* %B.addr.112, i64 1
   store i64 %0, i64* %B.addr.112, align 8
   %inc = add nsw i64 %t.03, 1
   %exitcond = icmp ne i64 %inc, %n
   br i1 %exitcond, label %for.body33, label %for.inc58.loopexit
 
 for.inc58.loopexit:                               ; preds = %for.body33
-  %scevgep = getelementptr i64* %B.addr.105, i64 %n
+  %scevgep = getelementptr i64, i64* %B.addr.105, i64 %n
   br label %for.inc58
 
 for.inc58:                                        ; preds = %for.inc58.loopexit, %for.cond31.preheader
@@ -441,12 +441,12 @@ for.body:
   %conv2 = sext i8 %i.03 to i32
   %conv3 = sext i8 %i.03 to i64
   %add = add i64 %conv3, 2
-  %arrayidx = getelementptr inbounds i32* %A, i64 %add
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %add
   store i32 %conv2, i32* %arrayidx, align 4
   %idxprom4 = sext i8 %i.03 to i64
-  %arrayidx5 = getelementptr inbounds i32* %A, i64 %idxprom4
+  %arrayidx5 = getelementptr inbounds i32, i32* %A, i64 %idxprom4
   %0 = load i32* %arrayidx5, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.02, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.02, i64 1
   store i32 %0, i32* %B.addr.02, align 4
   %inc = add i8 %i.03, 1
   %conv = sext i8 %inc to i64
@@ -487,12 +487,12 @@ for.body:
   %conv2 = sext i16 %i.03 to i32
   %conv3 = sext i16 %i.03 to i64
   %add = add i64 %conv3, 2
-  %arrayidx = getelementptr inbounds i32* %A, i64 %add
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %add
   store i32 %conv2, i32* %arrayidx, align 4
   %idxprom4 = sext i16 %i.03 to i64
-  %arrayidx5 = getelementptr inbounds i32* %A, i64 %idxprom4
+  %arrayidx5 = getelementptr inbounds i32, i32* %A, i64 %idxprom4
   %0 = load i32* %arrayidx5, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.02, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.02, i64 1
   store i32 %0, i32* %B.addr.02, align 4
   %inc = add i16 %i.03, 1
   %conv = sext i16 %inc to i64
@@ -531,12 +531,12 @@ for.body:
   %indvars.iv = phi i64 [ 0, %for.body.preheader ], [ %indvars.iv.next, %for.body ]
   %B.addr.02 = phi i32* [ %incdec.ptr, %for.body ], [ %B, %for.body.preheader ]
   %0 = add nsw i64 %indvars.iv, 2
-  %arrayidx = getelementptr inbounds i32* %A, i64 %0
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %0
   %1 = trunc i64 %indvars.iv to i32
   store i32 %1, i32* %arrayidx, align 4
-  %arrayidx3 = getelementptr inbounds i32* %A, i64 %indvars.iv
+  %arrayidx3 = getelementptr inbounds i32, i32* %A, i64 %indvars.iv
   %2 = load i32* %arrayidx3, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.02, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.02, i64 1
   store i32 %2, i32* %B.addr.02, align 4
   %indvars.iv.next = add i64 %indvars.iv, 1
   %exitcond = icmp ne i64 %indvars.iv.next, %n
@@ -557,7 +557,7 @@ for.end:
 define void @p7(i32* %A, i32* %B, i8 signext %n) nounwind uwtable ssp {
 entry:
   %idxprom = sext i8 %n to i64
-  %arrayidx = getelementptr inbounds i32* %A, i64 %idxprom
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %idxprom
 
 ; CHECK: da analyze - none!
 ; CHECK: da analyze - none!
@@ -569,7 +569,7 @@ entry:
   store i32 0, i32* %arrayidx, align 4
   %conv = sext i8 %n to i64
   %add = add i64 %conv, 1
-  %arrayidx2 = getelementptr inbounds i32* %A, i64 %add
+  %arrayidx2 = getelementptr inbounds i32, i32* %A, i64 %add
   %0 = load i32* %arrayidx2, align 4
   store i32 %0, i32* %B, align 4
   ret void
@@ -583,7 +583,7 @@ entry:
 define void @p8(i32* %A, i32* %B, i16 signext %n) nounwind uwtable ssp {
 entry:
   %idxprom = sext i16 %n to i64
-  %arrayidx = getelementptr inbounds i32* %A, i64 %idxprom
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %idxprom
   store i32 0, i32* %arrayidx, align 4
 
 ; CHECK: da analyze - none!
@@ -595,7 +595,7 @@ entry:
 
   %conv = sext i16 %n to i64
   %add = add i64 %conv, 1
-  %arrayidx2 = getelementptr inbounds i32* %A, i64 %add
+  %arrayidx2 = getelementptr inbounds i32, i32* %A, i64 %add
   %0 = load i32* %arrayidx2, align 4
   store i32 %0, i32* %B, align 4
   ret void
@@ -609,7 +609,7 @@ entry:
 define void @p9(i32* %A, i32* %B, i32 %n) nounwind uwtable ssp {
 entry:
   %idxprom = sext i32 %n to i64
-  %arrayidx = getelementptr inbounds i32* %A, i64 %idxprom
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %idxprom
   store i32 0, i32* %arrayidx, align 4
 
 ; CHECK: da analyze - none!
@@ -621,7 +621,7 @@ entry:
 
   %add = add nsw i32 %n, 1
   %idxprom1 = sext i32 %add to i64
-  %arrayidx2 = getelementptr inbounds i32* %A, i64 %idxprom1
+  %arrayidx2 = getelementptr inbounds i32, i32* %A, i64 %idxprom1
   %0 = load i32* %arrayidx2, align 4
   store i32 %0, i32* %B, align 4
   ret void
@@ -635,7 +635,7 @@ entry:
 define void @p10(i32* %A, i32* %B, i32 %n) nounwind uwtable ssp {
 entry:
   %idxprom = zext i32 %n to i64
-  %arrayidx = getelementptr inbounds i32* %A, i64 %idxprom
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %idxprom
   store i32 0, i32* %arrayidx, align 4
 
 ; CHECK: da analyze - none!
@@ -647,7 +647,7 @@ entry:
 
   %add = add i32 %n, 1
   %idxprom1 = zext i32 %add to i64
-  %arrayidx2 = getelementptr inbounds i32* %A, i64 %idxprom1
+  %arrayidx2 = getelementptr inbounds i32, i32* %A, i64 %idxprom1
   %0 = load i32* %arrayidx2, align 4
   store i32 %0, i32* %B, align 4
   ret void
@@ -668,7 +668,7 @@ define void @f(%struct.S* %s, i32 %size)
 entry:
   %idx.ext = zext i32 %size to i64
   %add.ptr.sum = add i64 %idx.ext, -1
-  %add.ptr1 = getelementptr inbounds %struct.S* %s, i64 %add.ptr.sum
+  %add.ptr1 = getelementptr inbounds %struct.S, %struct.S* %s, i64 %add.ptr.sum
   %cmp1 = icmp eq i64 %add.ptr.sum, 0
   br i1 %cmp1, label %while.end, label %while.body.preheader
 
@@ -681,11 +681,11 @@ while.body.preheader:
 
 while.body:                                       ; preds = %while.body.preheader, %while.body
   %i.02 = phi %struct.S* [ %incdec.ptr, %while.body ], [ %s, %while.body.preheader ]
-  %0 = getelementptr inbounds %struct.S* %i.02, i64 1, i32 0
+  %0 = getelementptr inbounds %struct.S, %struct.S* %i.02, i64 1, i32 0
   %1 = load i32* %0, align 4
-  %2 = getelementptr inbounds %struct.S* %i.02, i64 0, i32 0
+  %2 = getelementptr inbounds %struct.S, %struct.S* %i.02, i64 0, i32 0
   store i32 %1, i32* %2, align 4
-  %incdec.ptr = getelementptr inbounds %struct.S* %i.02, i64 1
+  %incdec.ptr = getelementptr inbounds %struct.S, %struct.S* %i.02, i64 1
   %cmp = icmp eq %struct.S* %incdec.ptr, %add.ptr1
   br i1 %cmp, label %while.end.loopexit, label %while.body
 

Modified: llvm/trunk/test/Analysis/DependenceAnalysis/Propagating.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/DependenceAnalysis/Propagating.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/DependenceAnalysis/Propagating.ll (original)
+++ llvm/trunk/test/Analysis/DependenceAnalysis/Propagating.ll Fri Feb 27 13:29:02 2015
@@ -32,19 +32,19 @@ for.body3:
   %conv = trunc i64 %i.03 to i32
   %add = add nsw i64 %i.03, %j.02
   %add4 = add nsw i64 %i.03, 1
-  %arrayidx5 = getelementptr inbounds [100 x i32]* %A, i64 %add4, i64 %add
+  %arrayidx5 = getelementptr inbounds [100 x i32], [100 x i32]* %A, i64 %add4, i64 %add
   store i32 %conv, i32* %arrayidx5, align 4
   %add6 = add nsw i64 %i.03, %j.02
-  %arrayidx8 = getelementptr inbounds [100 x i32]* %A, i64 %i.03, i64 %add6
+  %arrayidx8 = getelementptr inbounds [100 x i32], [100 x i32]* %A, i64 %i.03, i64 %add6
   %0 = load i32* %arrayidx8, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.11, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.11, i64 1
   store i32 %0, i32* %B.addr.11, align 4
   %inc = add nsw i64 %j.02, 1
   %exitcond = icmp ne i64 %inc, 100
   br i1 %exitcond, label %for.body3, label %for.inc9
 
 for.inc9:                                         ; preds = %for.body3
-  %scevgep = getelementptr i32* %B.addr.04, i64 100
+  %scevgep = getelementptr i32, i32* %B.addr.04, i64 100
   %inc10 = add nsw i64 %i.03, 1
   %exitcond5 = icmp ne i64 %inc10, 100
   br i1 %exitcond5, label %for.cond1.preheader, label %for.end11
@@ -88,26 +88,26 @@ for.body6:
   %add = add nsw i64 %j.03, %k.02
   %add7 = add nsw i64 %i.05, 1
   %sub = sub nsw i64 %j.03, %i.05
-  %arrayidx9 = getelementptr inbounds [100 x [100 x i32]]* %A, i64 %sub, i64 %add7, i64 %add
+  %arrayidx9 = getelementptr inbounds [100 x [100 x i32]], [100 x [100 x i32]]* %A, i64 %sub, i64 %add7, i64 %add
   store i32 %conv, i32* %arrayidx9, align 4
   %add10 = add nsw i64 %j.03, %k.02
   %sub11 = sub nsw i64 %j.03, %i.05
-  %arrayidx14 = getelementptr inbounds [100 x [100 x i32]]* %A, i64 %sub11, i64 %i.05, i64 %add10
+  %arrayidx14 = getelementptr inbounds [100 x [100 x i32]], [100 x [100 x i32]]* %A, i64 %sub11, i64 %i.05, i64 %add10
   %0 = load i32* %arrayidx14, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.21, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.21, i64 1
   store i32 %0, i32* %B.addr.21, align 4
   %inc = add nsw i64 %k.02, 1
   %exitcond = icmp ne i64 %inc, 100
   br i1 %exitcond, label %for.body6, label %for.inc15
 
 for.inc15:                                        ; preds = %for.body6
-  %scevgep = getelementptr i32* %B.addr.14, i64 100
+  %scevgep = getelementptr i32, i32* %B.addr.14, i64 100
   %inc16 = add nsw i64 %j.03, 1
   %exitcond8 = icmp ne i64 %inc16, 100
   br i1 %exitcond8, label %for.cond4.preheader, label %for.inc18
 
 for.inc18:                                        ; preds = %for.inc15
-  %scevgep7 = getelementptr i32* %B.addr.06, i64 10000
+  %scevgep7 = getelementptr i32, i32* %B.addr.06, i64 10000
   %inc19 = add nsw i64 %i.05, 1
   %exitcond9 = icmp ne i64 %inc19, 100
   br i1 %exitcond9, label %for.cond1.preheader, label %for.end20
@@ -144,20 +144,20 @@ for.body3:
   %conv = trunc i64 %i.03 to i32
   %mul = shl nsw i64 %i.03, 1
   %sub = add nsw i64 %i.03, -1
-  %arrayidx4 = getelementptr inbounds [100 x i32]* %A, i64 %sub, i64 %mul
+  %arrayidx4 = getelementptr inbounds [100 x i32], [100 x i32]* %A, i64 %sub, i64 %mul
   store i32 %conv, i32* %arrayidx4, align 4
   %add = add nsw i64 %i.03, %j.02
   %add5 = add nsw i64 %add, 110
-  %arrayidx7 = getelementptr inbounds [100 x i32]* %A, i64 %i.03, i64 %add5
+  %arrayidx7 = getelementptr inbounds [100 x i32], [100 x i32]* %A, i64 %i.03, i64 %add5
   %0 = load i32* %arrayidx7, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.11, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.11, i64 1
   store i32 %0, i32* %B.addr.11, align 4
   %inc = add nsw i64 %j.02, 1
   %exitcond = icmp ne i64 %inc, 100
   br i1 %exitcond, label %for.body3, label %for.inc8
 
 for.inc8:                                         ; preds = %for.body3
-  %scevgep = getelementptr i32* %B.addr.04, i64 100
+  %scevgep = getelementptr i32, i32* %B.addr.04, i64 100
   %inc9 = add nsw i64 %i.03, 1
   %exitcond5 = icmp ne i64 %inc9, 100
   br i1 %exitcond5, label %for.cond1.preheader, label %for.end10
@@ -194,21 +194,21 @@ for.body3:
   %conv = trunc i64 %i.03 to i32
   %mul = shl nsw i64 %j.02, 1
   %add = add nsw i64 %mul, %i.03
-  %arrayidx4 = getelementptr inbounds [100 x i32]* %A, i64 %i.03, i64 %add
+  %arrayidx4 = getelementptr inbounds [100 x i32], [100 x i32]* %A, i64 %i.03, i64 %add
   store i32 %conv, i32* %arrayidx4, align 4
   %mul5 = shl nsw i64 %j.02, 1
   %sub = sub nsw i64 %mul5, %i.03
   %add6 = add nsw i64 %sub, 5
-  %arrayidx8 = getelementptr inbounds [100 x i32]* %A, i64 %i.03, i64 %add6
+  %arrayidx8 = getelementptr inbounds [100 x i32], [100 x i32]* %A, i64 %i.03, i64 %add6
   %0 = load i32* %arrayidx8, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.11, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.11, i64 1
   store i32 %0, i32* %B.addr.11, align 4
   %inc = add nsw i64 %j.02, 1
   %exitcond = icmp ne i64 %inc, 100
   br i1 %exitcond, label %for.body3, label %for.inc9
 
 for.inc9:                                         ; preds = %for.body3
-  %scevgep = getelementptr i32* %B.addr.04, i64 100
+  %scevgep = getelementptr i32, i32* %B.addr.04, i64 100
   %inc10 = add nsw i64 %i.03, 1
   %exitcond5 = icmp ne i64 %inc10, 100
   br i1 %exitcond5, label %for.cond1.preheader, label %for.end11
@@ -247,20 +247,20 @@ for.body3:
   %add = add nsw i64 %mul, %j.02
   %add4 = add nsw i64 %add, 1
   %add5 = add nsw i64 %i.03, 2
-  %arrayidx6 = getelementptr inbounds [100 x i32]* %A, i64 %add5, i64 %add4
+  %arrayidx6 = getelementptr inbounds [100 x i32], [100 x i32]* %A, i64 %add5, i64 %add4
   store i32 %conv, i32* %arrayidx6, align 4
   %mul7 = shl nsw i64 %i.03, 1
   %add8 = add nsw i64 %mul7, %j.02
-  %arrayidx10 = getelementptr inbounds [100 x i32]* %A, i64 %i.03, i64 %add8
+  %arrayidx10 = getelementptr inbounds [100 x i32], [100 x i32]* %A, i64 %i.03, i64 %add8
   %0 = load i32* %arrayidx10, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.11, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.11, i64 1
   store i32 %0, i32* %B.addr.11, align 4
   %inc = add nsw i64 %j.02, 1
   %exitcond = icmp ne i64 %inc, 100
   br i1 %exitcond, label %for.body3, label %for.inc11
 
 for.inc11:                                        ; preds = %for.body3
-  %scevgep = getelementptr i32* %B.addr.04, i64 100
+  %scevgep = getelementptr i32, i32* %B.addr.04, i64 100
   %inc12 = add nsw i64 %i.03, 1
   %exitcond5 = icmp ne i64 %inc12, 100
   br i1 %exitcond5, label %for.cond1.preheader, label %for.end13
@@ -301,20 +301,20 @@ for.body3:
   %sub = sub nsw i64 22, %i.03
   %mul4 = mul nsw i64 %i.03, 3
   %sub5 = add nsw i64 %mul4, -18
-  %arrayidx7 = getelementptr inbounds [100 x [100 x i32]]* %A, i64 %sub5, i64 %sub, i64 %add
+  %arrayidx7 = getelementptr inbounds [100 x [100 x i32]], [100 x [100 x i32]]* %A, i64 %sub5, i64 %sub, i64 %add
   store i32 %conv, i32* %arrayidx7, align 4
   %mul8 = mul nsw i64 %i.03, 3
   %add9 = add nsw i64 %mul8, %j.02
-  %arrayidx12 = getelementptr inbounds [100 x [100 x i32]]* %A, i64 %i.03, i64 %i.03, i64 %add9
+  %arrayidx12 = getelementptr inbounds [100 x [100 x i32]], [100 x [100 x i32]]* %A, i64 %i.03, i64 %i.03, i64 %add9
   %0 = load i32* %arrayidx12, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.11, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.11, i64 1
   store i32 %0, i32* %B.addr.11, align 4
   %inc = add nsw i64 %j.02, 1
   %exitcond = icmp ne i64 %inc, 100
   br i1 %exitcond, label %for.body3, label %for.inc13
 
 for.inc13:                                        ; preds = %for.body3
-  %scevgep = getelementptr i32* %B.addr.04, i64 100
+  %scevgep = getelementptr i32, i32* %B.addr.04, i64 100
   %inc14 = add nsw i64 %i.03, 1
   %exitcond5 = icmp ne i64 %inc14, 100
   br i1 %exitcond5, label %for.cond1.preheader, label %for.end15
@@ -353,21 +353,21 @@ for.body3:
   %add = add nsw i64 %mul, %j.02
   %add4 = add nsw i64 %add, 2
   %add5 = add nsw i64 %i.03, 1
-  %arrayidx6 = getelementptr inbounds [100 x i32]* %A, i64 %add5, i64 %add4
+  %arrayidx6 = getelementptr inbounds [100 x i32], [100 x i32]* %A, i64 %add5, i64 %add4
   store i32 %conv, i32* %arrayidx6, align 4
   %mul7 = shl nsw i64 %i.03, 3
   %add8 = add nsw i64 %mul7, %j.02
   %mul9 = shl nsw i64 %i.03, 1
-  %arrayidx11 = getelementptr inbounds [100 x i32]* %A, i64 %mul9, i64 %add8
+  %arrayidx11 = getelementptr inbounds [100 x i32], [100 x i32]* %A, i64 %mul9, i64 %add8
   %0 = load i32* %arrayidx11, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.11, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.11, i64 1
   store i32 %0, i32* %B.addr.11, align 4
   %inc = add nsw i64 %j.02, 1
   %exitcond = icmp ne i64 %inc, 100
   br i1 %exitcond, label %for.body3, label %for.inc12
 
 for.inc12:                                        ; preds = %for.body3
-  %scevgep = getelementptr i32* %B.addr.04, i64 100
+  %scevgep = getelementptr i32, i32* %B.addr.04, i64 100
   %inc13 = add nsw i64 %i.03, 1
   %exitcond5 = icmp ne i64 %inc13, 100
   br i1 %exitcond5, label %for.cond1.preheader, label %for.end14
@@ -408,22 +408,22 @@ for.body3:
   %add4 = add nsw i64 %add, 2
   %mul5 = shl nsw i64 %i.03, 1
   %add6 = add nsw i64 %mul5, 4
-  %arrayidx7 = getelementptr inbounds [100 x i32]* %A, i64 %add6, i64 %add4
+  %arrayidx7 = getelementptr inbounds [100 x i32], [100 x i32]* %A, i64 %add6, i64 %add4
   store i32 %conv, i32* %arrayidx7, align 4
   %mul8 = mul nsw i64 %i.03, 5
   %add9 = add nsw i64 %mul8, %j.02
   %mul10 = mul nsw i64 %i.03, -2
   %add11 = add nsw i64 %mul10, 20
-  %arrayidx13 = getelementptr inbounds [100 x i32]* %A, i64 %add11, i64 %add9
+  %arrayidx13 = getelementptr inbounds [100 x i32], [100 x i32]* %A, i64 %add11, i64 %add9
   %0 = load i32* %arrayidx13, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.11, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.11, i64 1
   store i32 %0, i32* %B.addr.11, align 4
   %inc = add nsw i64 %j.02, 1
   %exitcond = icmp ne i64 %inc, 100
   br i1 %exitcond, label %for.body3, label %for.inc14
 
 for.inc14:                                        ; preds = %for.body3
-  %scevgep = getelementptr i32* %B.addr.04, i64 100
+  %scevgep = getelementptr i32, i32* %B.addr.04, i64 100
   %inc15 = add nsw i64 %i.03, 1
   %exitcond5 = icmp ne i64 %inc15, 100
   br i1 %exitcond5, label %for.cond1.preheader, label %for.end16
@@ -459,22 +459,22 @@ for.body3:
   %B.addr.11 = phi i32* [ %B.addr.04, %for.cond1.preheader ], [ %incdec.ptr, %for.body3 ]
   %conv = trunc i64 %i.03 to i32
   %add = add nsw i64 %j.02, 2
-  %arrayidx4 = getelementptr inbounds [100 x i32]* %A, i64 4, i64 %add
+  %arrayidx4 = getelementptr inbounds [100 x i32], [100 x i32]* %A, i64 4, i64 %add
   store i32 %conv, i32* %arrayidx4, align 4
   %mul = mul nsw i64 %i.03, 5
   %add5 = add nsw i64 %mul, %j.02
   %mul6 = mul nsw i64 %i.03, -2
   %add7 = add nsw i64 %mul6, 4
-  %arrayidx9 = getelementptr inbounds [100 x i32]* %A, i64 %add7, i64 %add5
+  %arrayidx9 = getelementptr inbounds [100 x i32], [100 x i32]* %A, i64 %add7, i64 %add5
   %0 = load i32* %arrayidx9, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.11, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.11, i64 1
   store i32 %0, i32* %B.addr.11, align 4
   %inc = add nsw i64 %j.02, 1
   %exitcond = icmp ne i64 %inc, 100
   br i1 %exitcond, label %for.body3, label %for.inc10
 
 for.inc10:                                        ; preds = %for.body3
-  %scevgep = getelementptr i32* %B.addr.04, i64 100
+  %scevgep = getelementptr i32, i32* %B.addr.04, i64 100
   %inc11 = add nsw i64 %i.03, 1
   %exitcond5 = icmp ne i64 %inc11, 100
   br i1 %exitcond5, label %for.cond1.preheader, label %for.end12
@@ -514,18 +514,18 @@ for.body3:
   %add4 = add nsw i64 %add, 2
   %mul5 = shl nsw i64 %i.03, 1
   %add6 = add nsw i64 %mul5, 4
-  %arrayidx7 = getelementptr inbounds [100 x i32]* %A, i64 %add6, i64 %add4
+  %arrayidx7 = getelementptr inbounds [100 x i32], [100 x i32]* %A, i64 %add6, i64 %add4
   store i32 %conv, i32* %arrayidx7, align 4
-  %arrayidx9 = getelementptr inbounds [100 x i32]* %A, i64 4, i64 %j.02
+  %arrayidx9 = getelementptr inbounds [100 x i32], [100 x i32]* %A, i64 4, i64 %j.02
   %0 = load i32* %arrayidx9, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.11, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.11, i64 1
   store i32 %0, i32* %B.addr.11, align 4
   %inc = add nsw i64 %j.02, 1
   %exitcond = icmp ne i64 %inc, 100
   br i1 %exitcond, label %for.body3, label %for.inc10
 
 for.inc10:                                        ; preds = %for.body3
-  %scevgep = getelementptr i32* %B.addr.04, i64 100
+  %scevgep = getelementptr i32, i32* %B.addr.04, i64 100
   %inc11 = add nsw i64 %i.03, 1
   %exitcond5 = icmp ne i64 %inc11, 100
   br i1 %exitcond5, label %for.cond1.preheader, label %for.end12

Modified: llvm/trunk/test/Analysis/DependenceAnalysis/Separability.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/DependenceAnalysis/Separability.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/DependenceAnalysis/Separability.ll (original)
+++ llvm/trunk/test/Analysis/DependenceAnalysis/Separability.ll Fri Feb 27 13:29:02 2015
@@ -44,33 +44,33 @@ for.body9:
   %conv = trunc i64 %i.07 to i32
   %add = add nsw i64 %j.05, %k.03
   %idxprom = sext i32 %n to i64
-  %arrayidx11 = getelementptr inbounds [100 x [100 x i32]]* %A, i64 %idxprom, i64 %i.07, i64 %add
+  %arrayidx11 = getelementptr inbounds [100 x [100 x i32]], [100 x [100 x i32]]* %A, i64 %idxprom, i64 %i.07, i64 %add
   store i32 %conv, i32* %arrayidx11, align 4
   %mul = shl nsw i64 %j.05, 1
   %sub = sub nsw i64 %mul, %l.02
   %add12 = add nsw i64 %i.07, 10
-  %arrayidx15 = getelementptr inbounds [100 x [100 x i32]]* %A, i64 10, i64 %add12, i64 %sub
+  %arrayidx15 = getelementptr inbounds [100 x [100 x i32]], [100 x [100 x i32]]* %A, i64 10, i64 %add12, i64 %sub
   %0 = load i32* %arrayidx15, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.31, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.31, i64 1
   store i32 %0, i32* %B.addr.31, align 4
   %inc = add nsw i64 %l.02, 1
   %exitcond = icmp ne i64 %inc, 50
   br i1 %exitcond, label %for.body9, label %for.inc16
 
 for.inc16:                                        ; preds = %for.body9
-  %scevgep = getelementptr i32* %B.addr.24, i64 50
+  %scevgep = getelementptr i32, i32* %B.addr.24, i64 50
   %inc17 = add nsw i64 %k.03, 1
   %exitcond10 = icmp ne i64 %inc17, 50
   br i1 %exitcond10, label %for.cond7.preheader, label %for.inc19
 
 for.inc19:                                        ; preds = %for.inc16
-  %scevgep9 = getelementptr i32* %B.addr.16, i64 2500
+  %scevgep9 = getelementptr i32, i32* %B.addr.16, i64 2500
   %inc20 = add nsw i64 %j.05, 1
   %exitcond12 = icmp ne i64 %inc20, 50
   br i1 %exitcond12, label %for.cond4.preheader, label %for.inc22
 
 for.inc22:                                        ; preds = %for.inc19
-  %scevgep11 = getelementptr i32* %B.addr.08, i64 125000
+  %scevgep11 = getelementptr i32, i32* %B.addr.08, i64 125000
   %inc23 = add nsw i64 %i.07, 1
   %exitcond13 = icmp ne i64 %inc23, 50
   br i1 %exitcond13, label %for.cond1.preheader, label %for.end24
@@ -118,33 +118,33 @@ for.body9:
   %B.addr.31 = phi i32* [ %B.addr.24, %for.cond7.preheader ], [ %incdec.ptr, %for.body9 ]
   %conv = trunc i64 %i.07 to i32
   %add = add nsw i64 %j.05, %k.03
-  %arrayidx11 = getelementptr inbounds [100 x [100 x i32]]* %A, i64 %i.07, i64 %i.07, i64 %add
+  %arrayidx11 = getelementptr inbounds [100 x [100 x i32]], [100 x [100 x i32]]* %A, i64 %i.07, i64 %i.07, i64 %add
   store i32 %conv, i32* %arrayidx11, align 4
   %mul = shl nsw i64 %j.05, 1
   %sub = sub nsw i64 %mul, %l.02
   %add12 = add nsw i64 %i.07, 10
-  %arrayidx15 = getelementptr inbounds [100 x [100 x i32]]* %A, i64 10, i64 %add12, i64 %sub
+  %arrayidx15 = getelementptr inbounds [100 x [100 x i32]], [100 x [100 x i32]]* %A, i64 10, i64 %add12, i64 %sub
   %0 = load i32* %arrayidx15, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.31, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.31, i64 1
   store i32 %0, i32* %B.addr.31, align 4
   %inc = add nsw i64 %l.02, 1
   %exitcond = icmp ne i64 %inc, 50
   br i1 %exitcond, label %for.body9, label %for.inc16
 
 for.inc16:                                        ; preds = %for.body9
-  %scevgep = getelementptr i32* %B.addr.24, i64 50
+  %scevgep = getelementptr i32, i32* %B.addr.24, i64 50
   %inc17 = add nsw i64 %k.03, 1
   %exitcond10 = icmp ne i64 %inc17, 50
   br i1 %exitcond10, label %for.cond7.preheader, label %for.inc19
 
 for.inc19:                                        ; preds = %for.inc16
-  %scevgep9 = getelementptr i32* %B.addr.16, i64 2500
+  %scevgep9 = getelementptr i32, i32* %B.addr.16, i64 2500
   %inc20 = add nsw i64 %j.05, 1
   %exitcond12 = icmp ne i64 %inc20, 50
   br i1 %exitcond12, label %for.cond4.preheader, label %for.inc22
 
 for.inc22:                                        ; preds = %for.inc19
-  %scevgep11 = getelementptr i32* %B.addr.08, i64 125000
+  %scevgep11 = getelementptr i32, i32* %B.addr.08, i64 125000
   %inc23 = add nsw i64 %i.07, 1
   %exitcond13 = icmp ne i64 %inc23, 50
   br i1 %exitcond13, label %for.cond1.preheader, label %for.end24
@@ -192,33 +192,33 @@ for.body9:
   %B.addr.31 = phi i32* [ %B.addr.24, %for.cond7.preheader ], [ %incdec.ptr, %for.body9 ]
   %conv = trunc i64 %i.07 to i32
   %add = add nsw i64 %i.07, %k.03
-  %arrayidx12 = getelementptr inbounds [100 x [100 x [100 x i32]]]* %A, i64 %i.07, i64 %i.07, i64 %add, i64 %l.02
+  %arrayidx12 = getelementptr inbounds [100 x [100 x [100 x i32]]], [100 x [100 x [100 x i32]]]* %A, i64 %i.07, i64 %i.07, i64 %add, i64 %l.02
   store i32 %conv, i32* %arrayidx12, align 4
   %add13 = add nsw i64 %l.02, 10
   %add14 = add nsw i64 %j.05, %k.03
   %add15 = add nsw i64 %i.07, 10
-  %arrayidx19 = getelementptr inbounds [100 x [100 x [100 x i32]]]* %A, i64 10, i64 %add15, i64 %add14, i64 %add13
+  %arrayidx19 = getelementptr inbounds [100 x [100 x [100 x i32]]], [100 x [100 x [100 x i32]]]* %A, i64 10, i64 %add15, i64 %add14, i64 %add13
   %0 = load i32* %arrayidx19, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.31, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.31, i64 1
   store i32 %0, i32* %B.addr.31, align 4
   %inc = add nsw i64 %l.02, 1
   %exitcond = icmp ne i64 %inc, 50
   br i1 %exitcond, label %for.body9, label %for.inc20
 
 for.inc20:                                        ; preds = %for.body9
-  %scevgep = getelementptr i32* %B.addr.24, i64 50
+  %scevgep = getelementptr i32, i32* %B.addr.24, i64 50
   %inc21 = add nsw i64 %k.03, 1
   %exitcond10 = icmp ne i64 %inc21, 50
   br i1 %exitcond10, label %for.cond7.preheader, label %for.inc23
 
 for.inc23:                                        ; preds = %for.inc20
-  %scevgep9 = getelementptr i32* %B.addr.16, i64 2500
+  %scevgep9 = getelementptr i32, i32* %B.addr.16, i64 2500
   %inc24 = add nsw i64 %j.05, 1
   %exitcond12 = icmp ne i64 %inc24, 50
   br i1 %exitcond12, label %for.cond4.preheader, label %for.inc26
 
 for.inc26:                                        ; preds = %for.inc23
-  %scevgep11 = getelementptr i32* %B.addr.08, i64 125000
+  %scevgep11 = getelementptr i32, i32* %B.addr.08, i64 125000
   %inc27 = add nsw i64 %i.07, 1
   %exitcond13 = icmp ne i64 %inc27, 50
   br i1 %exitcond13, label %for.cond1.preheader, label %for.end28
@@ -267,33 +267,33 @@ for.body9:
   %conv = trunc i64 %i.07 to i32
   %add = add nsw i64 %l.02, %k.03
   %add10 = add nsw i64 %i.07, %k.03
-  %arrayidx13 = getelementptr inbounds [100 x [100 x [100 x i32]]]* %A, i64 %i.07, i64 %i.07, i64 %add10, i64 %add
+  %arrayidx13 = getelementptr inbounds [100 x [100 x [100 x i32]]], [100 x [100 x [100 x i32]]]* %A, i64 %i.07, i64 %i.07, i64 %add10, i64 %add
   store i32 %conv, i32* %arrayidx13, align 4
   %add14 = add nsw i64 %l.02, 10
   %add15 = add nsw i64 %j.05, %k.03
   %add16 = add nsw i64 %i.07, 10
-  %arrayidx20 = getelementptr inbounds [100 x [100 x [100 x i32]]]* %A, i64 10, i64 %add16, i64 %add15, i64 %add14
+  %arrayidx20 = getelementptr inbounds [100 x [100 x [100 x i32]]], [100 x [100 x [100 x i32]]]* %A, i64 10, i64 %add16, i64 %add15, i64 %add14
   %0 = load i32* %arrayidx20, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.31, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.31, i64 1
   store i32 %0, i32* %B.addr.31, align 4
   %inc = add nsw i64 %l.02, 1
   %exitcond = icmp ne i64 %inc, 50
   br i1 %exitcond, label %for.body9, label %for.inc21
 
 for.inc21:                                        ; preds = %for.body9
-  %scevgep = getelementptr i32* %B.addr.24, i64 50
+  %scevgep = getelementptr i32, i32* %B.addr.24, i64 50
   %inc22 = add nsw i64 %k.03, 1
   %exitcond10 = icmp ne i64 %inc22, 50
   br i1 %exitcond10, label %for.cond7.preheader, label %for.inc24
 
 for.inc24:                                        ; preds = %for.inc21
-  %scevgep9 = getelementptr i32* %B.addr.16, i64 2500
+  %scevgep9 = getelementptr i32, i32* %B.addr.16, i64 2500
   %inc25 = add nsw i64 %j.05, 1
   %exitcond12 = icmp ne i64 %inc25, 50
   br i1 %exitcond12, label %for.cond4.preheader, label %for.inc27
 
 for.inc27:                                        ; preds = %for.inc24
-  %scevgep11 = getelementptr i32* %B.addr.08, i64 125000
+  %scevgep11 = getelementptr i32, i32* %B.addr.08, i64 125000
   %inc28 = add nsw i64 %i.07, 1
   %exitcond13 = icmp ne i64 %inc28, 50
   br i1 %exitcond13, label %for.cond1.preheader, label %for.end29

Modified: llvm/trunk/test/Analysis/DependenceAnalysis/StrongSIV.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/DependenceAnalysis/StrongSIV.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/DependenceAnalysis/StrongSIV.ll (original)
+++ llvm/trunk/test/Analysis/DependenceAnalysis/StrongSIV.ll Fri Feb 27 13:29:02 2015
@@ -28,12 +28,12 @@ for.body:
   %indvars.iv = phi i64 [ 0, %for.body.preheader ], [ %indvars.iv.next, %for.body ]
   %B.addr.02 = phi i32* [ %incdec.ptr, %for.body ], [ %B, %for.body.preheader ]
   %0 = add nsw i64 %indvars.iv, 2
-  %arrayidx = getelementptr inbounds i32* %A, i64 %0
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %0
   %1 = trunc i64 %indvars.iv to i32
   store i32 %1, i32* %arrayidx, align 4
-  %arrayidx3 = getelementptr inbounds i32* %A, i64 %indvars.iv
+  %arrayidx3 = getelementptr inbounds i32, i32* %A, i64 %indvars.iv
   %2 = load i32* %arrayidx3, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.02, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.02, i64 1
   store i32 %2, i32* %B.addr.02, align 4
   %indvars.iv.next = add i64 %indvars.iv, 1
   %exitcond = icmp ne i64 %indvars.iv.next, %n
@@ -72,11 +72,11 @@ for.body:
   %B.addr.02 = phi i32* [ %incdec.ptr, %for.body ], [ %B, %for.body.preheader ]
   %conv2 = trunc i64 %i.03 to i32
   %add = add nsw i64 %i.03, 2
-  %arrayidx = getelementptr inbounds i32* %A, i64 %add
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %add
   store i32 %conv2, i32* %arrayidx, align 4
-  %arrayidx3 = getelementptr inbounds i32* %A, i64 %i.03
+  %arrayidx3 = getelementptr inbounds i32, i32* %A, i64 %i.03
   %1 = load i32* %arrayidx3, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.02, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.02, i64 1
   store i32 %1, i32* %B.addr.02, align 4
   %inc = add nsw i64 %i.03, 1
   %exitcond = icmp ne i64 %inc, %0
@@ -114,11 +114,11 @@ for.body:
   %B.addr.02 = phi i32* [ %incdec.ptr, %for.body ], [ %B, %for.body.preheader ]
   %conv = trunc i64 %i.03 to i32
   %add = add i64 %i.03, 2
-  %arrayidx = getelementptr inbounds i32* %A, i64 %add
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %add
   store i32 %conv, i32* %arrayidx, align 4
-  %arrayidx1 = getelementptr inbounds i32* %A, i64 %i.03
+  %arrayidx1 = getelementptr inbounds i32, i32* %A, i64 %i.03
   %0 = load i32* %arrayidx1, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.02, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.02, i64 1
   store i32 %0, i32* %B.addr.02, align 4
   %inc = add i64 %i.03, 1
   %exitcond = icmp ne i64 %inc, %n
@@ -155,12 +155,12 @@ for.body:
   %indvars.iv = phi i64 [ 0, %for.body.preheader ], [ %indvars.iv.next, %for.body ]
   %B.addr.02 = phi i32* [ %incdec.ptr, %for.body ], [ %B, %for.body.preheader ]
   %0 = add nsw i64 %indvars.iv, 2
-  %arrayidx = getelementptr inbounds i32* %A, i64 %0
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %0
   %1 = trunc i64 %indvars.iv to i32
   store i32 %1, i32* %arrayidx, align 4
-  %arrayidx2 = getelementptr inbounds i32* %A, i64 %indvars.iv
+  %arrayidx2 = getelementptr inbounds i32, i32* %A, i64 %indvars.iv
   %2 = load i32* %arrayidx2, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.02, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.02, i64 1
   store i32 %2, i32* %B.addr.02, align 4
   %indvars.iv.next = add i64 %indvars.iv, 1
   %lftr.wideiv = trunc i64 %indvars.iv.next to i32
@@ -195,11 +195,11 @@ for.body:
   %B.addr.01 = phi i32* [ %B, %entry ], [ %incdec.ptr, %for.body ]
   %conv = trunc i64 %i.02 to i32
   %add = add i64 %i.02, 19
-  %arrayidx = getelementptr inbounds i32* %A, i64 %add
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %add
   store i32 %conv, i32* %arrayidx, align 4
-  %arrayidx1 = getelementptr inbounds i32* %A, i64 %i.02
+  %arrayidx1 = getelementptr inbounds i32, i32* %A, i64 %i.02
   %0 = load i32* %arrayidx1, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.01, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.01, i64 1
   store i32 %0, i32* %B.addr.01, align 4
   %inc = add i64 %i.02, 1
   %exitcond = icmp ne i64 %inc, 19
@@ -230,11 +230,11 @@ for.body:
   %B.addr.01 = phi i32* [ %B, %entry ], [ %incdec.ptr, %for.body ]
   %conv = trunc i64 %i.02 to i32
   %add = add i64 %i.02, 19
-  %arrayidx = getelementptr inbounds i32* %A, i64 %add
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %add
   store i32 %conv, i32* %arrayidx, align 4
-  %arrayidx1 = getelementptr inbounds i32* %A, i64 %i.02
+  %arrayidx1 = getelementptr inbounds i32, i32* %A, i64 %i.02
   %0 = load i32* %arrayidx1, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.01, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.01, i64 1
   store i32 %0, i32* %B.addr.01, align 4
   %inc = add i64 %i.02, 1
   %exitcond = icmp ne i64 %inc, 20
@@ -266,12 +266,12 @@ for.body:
   %conv = trunc i64 %i.02 to i32
   %mul = shl i64 %i.02, 1
   %add = add i64 %mul, 6
-  %arrayidx = getelementptr inbounds i32* %A, i64 %add
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %add
   store i32 %conv, i32* %arrayidx, align 4
   %mul1 = shl i64 %i.02, 1
-  %arrayidx2 = getelementptr inbounds i32* %A, i64 %mul1
+  %arrayidx2 = getelementptr inbounds i32, i32* %A, i64 %mul1
   %0 = load i32* %arrayidx2, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.01, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.01, i64 1
   store i32 %0, i32* %B.addr.01, align 4
   %inc = add i64 %i.02, 1
   %exitcond = icmp ne i64 %inc, 20
@@ -303,12 +303,12 @@ for.body:
   %conv = trunc i64 %i.02 to i32
   %mul = shl i64 %i.02, 1
   %add = add i64 %mul, 7
-  %arrayidx = getelementptr inbounds i32* %A, i64 %add
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %add
   store i32 %conv, i32* %arrayidx, align 4
   %mul1 = shl i64 %i.02, 1
-  %arrayidx2 = getelementptr inbounds i32* %A, i64 %mul1
+  %arrayidx2 = getelementptr inbounds i32, i32* %A, i64 %mul1
   %0 = load i32* %arrayidx2, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.01, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.01, i64 1
   store i32 %0, i32* %B.addr.01, align 4
   %inc = add i64 %i.02, 1
   %exitcond = icmp ne i64 %inc, 20
@@ -339,11 +339,11 @@ for.body:
   %B.addr.01 = phi i32* [ %B, %entry ], [ %incdec.ptr, %for.body ]
   %conv = trunc i64 %i.02 to i32
   %add = add i64 %i.02, %n
-  %arrayidx = getelementptr inbounds i32* %A, i64 %add
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %add
   store i32 %conv, i32* %arrayidx, align 4
-  %arrayidx1 = getelementptr inbounds i32* %A, i64 %i.02
+  %arrayidx1 = getelementptr inbounds i32, i32* %A, i64 %i.02
   %0 = load i32* %arrayidx1, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.01, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.01, i64 1
   store i32 %0, i32* %B.addr.01, align 4
   %inc = add i64 %i.02, 1
   %exitcond = icmp ne i64 %inc, 20
@@ -378,13 +378,13 @@ for.body:
   %B.addr.02 = phi i32* [ %incdec.ptr, %for.body ], [ %B, %for.body.preheader ]
   %conv = trunc i64 %i.03 to i32
   %add = add i64 %i.03, %n
-  %arrayidx = getelementptr inbounds i32* %A, i64 %add
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %add
   store i32 %conv, i32* %arrayidx, align 4
   %mul = shl i64 %n, 1
   %add1 = add i64 %i.03, %mul
-  %arrayidx2 = getelementptr inbounds i32* %A, i64 %add1
+  %arrayidx2 = getelementptr inbounds i32, i32* %A, i64 %add1
   %0 = load i32* %arrayidx2, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.02, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.02, i64 1
   store i32 %0, i32* %B.addr.02, align 4
   %inc = add i64 %i.03, 1
   %exitcond = icmp ne i64 %inc, %n
@@ -419,13 +419,13 @@ for.body:
   %conv = trunc i64 %i.02 to i32
   %mul = mul i64 %i.02, %n
   %add = add i64 %mul, 5
-  %arrayidx = getelementptr inbounds i32* %A, i64 %add
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %add
   store i32 %conv, i32* %arrayidx, align 4
   %mul1 = mul i64 %i.02, %n
   %add2 = add i64 %mul1, 5
-  %arrayidx3 = getelementptr inbounds i32* %A, i64 %add2
+  %arrayidx3 = getelementptr inbounds i32, i32* %A, i64 %add2
   %0 = load i32* %arrayidx3, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.01, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.01, i64 1
   store i32 %0, i32* %B.addr.01, align 4
   %inc = add i64 %i.02, 1
   %exitcond = icmp ne i64 %inc, 1000

Modified: llvm/trunk/test/Analysis/DependenceAnalysis/SymbolicRDIV.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/DependenceAnalysis/SymbolicRDIV.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/DependenceAnalysis/SymbolicRDIV.ll (original)
+++ llvm/trunk/test/Analysis/DependenceAnalysis/SymbolicRDIV.ll Fri Feb 27 13:29:02 2015
@@ -41,7 +41,7 @@ for.body:
   %conv = trunc i64 %i.05 to i32
   %mul = shl nsw i64 %i.05, 1
   %add = add i64 %mul, %n1
-  %arrayidx = getelementptr inbounds i32* %A, i64 %add
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %add
   store i32 %conv, i32* %arrayidx, align 4
   %inc = add nsw i64 %i.05, 1
   %exitcond = icmp ne i64 %inc, %n1
@@ -52,9 +52,9 @@ for.body4:
   %B.addr.02 = phi i32* [ %incdec.ptr, %for.body4 ], [ %B, %for.body4.preheader ]
   %mul56 = add i64 %j.03, %n1
   %add7 = mul i64 %mul56, 3
-  %arrayidx8 = getelementptr inbounds i32* %A, i64 %add7
+  %arrayidx8 = getelementptr inbounds i32, i32* %A, i64 %add7
   %0 = load i32* %arrayidx8, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.02, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.02, i64 1
   store i32 %0, i32* %B.addr.02, align 4
   %inc10 = add nsw i64 %j.03, 1
   %exitcond7 = icmp ne i64 %inc10, %n2
@@ -105,7 +105,7 @@ for.body:
   %mul = shl nsw i64 %i.05, 1
   %mul1 = mul i64 %n2, 5
   %add = add i64 %mul, %mul1
-  %arrayidx = getelementptr inbounds i32* %A, i64 %add
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %add
   store i32 %conv, i32* %arrayidx, align 4
   %inc = add nsw i64 %i.05, 1
   %exitcond = icmp ne i64 %inc, %n1
@@ -117,9 +117,9 @@ for.body5:
   %mul6 = mul nsw i64 %j.03, 3
   %mul7 = shl i64 %n2, 1
   %add8 = add i64 %mul6, %mul7
-  %arrayidx9 = getelementptr inbounds i32* %A, i64 %add8
+  %arrayidx9 = getelementptr inbounds i32, i32* %A, i64 %add8
   %0 = load i32* %arrayidx9, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.02, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.02, i64 1
   store i32 %0, i32* %B.addr.02, align 4
   %inc11 = add nsw i64 %j.03, 1
   %exitcond6 = icmp ne i64 %inc11, %n2
@@ -169,7 +169,7 @@ for.body:
   %conv = trunc i64 %i.05 to i32
   %mul = shl nsw i64 %i.05, 1
   %sub = sub i64 %mul, %n2
-  %arrayidx = getelementptr inbounds i32* %A, i64 %sub
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %sub
   store i32 %conv, i32* %arrayidx, align 4
   %inc = add nsw i64 %i.05, 1
   %exitcond = icmp ne i64 %inc, %n1
@@ -180,9 +180,9 @@ for.body4:
   %B.addr.02 = phi i32* [ %incdec.ptr, %for.body4 ], [ %B, %for.body4.preheader ]
   %mul6 = shl i64 %n1, 1
   %add = sub i64 %mul6, %j.03
-  %arrayidx7 = getelementptr inbounds i32* %A, i64 %add
+  %arrayidx7 = getelementptr inbounds i32, i32* %A, i64 %add
   %0 = load i32* %arrayidx7, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.02, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.02, i64 1
   store i32 %0, i32* %B.addr.02, align 4
   %inc9 = add nsw i64 %j.03, 1
   %exitcond6 = icmp ne i64 %inc9, %n2
@@ -231,7 +231,7 @@ for.body:
   %i.05 = phi i64 [ %inc, %for.body ], [ 0, %for.body.preheader ]
   %conv = trunc i64 %i.05 to i32
   %add = sub i64 %n2, %i.05
-  %arrayidx = getelementptr inbounds i32* %A, i64 %add
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %add
   store i32 %conv, i32* %arrayidx, align 4
   %inc = add nsw i64 %i.05, 1
   %exitcond = icmp ne i64 %inc, %n1
@@ -241,9 +241,9 @@ for.body4:
   %j.03 = phi i64 [ %inc8, %for.body4 ], [ 0, %for.body4.preheader ]
   %B.addr.02 = phi i32* [ %incdec.ptr, %for.body4 ], [ %B, %for.body4.preheader ]
   %sub5 = sub i64 %j.03, %n1
-  %arrayidx6 = getelementptr inbounds i32* %A, i64 %sub5
+  %arrayidx6 = getelementptr inbounds i32, i32* %A, i64 %sub5
   %0 = load i32* %arrayidx6, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.02, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.02, i64 1
   store i32 %0, i32* %B.addr.02, align 4
   %inc8 = add nsw i64 %j.03, 1
   %exitcond6 = icmp ne i64 %inc8, %n2
@@ -293,7 +293,7 @@ for.body:
   %conv = trunc i64 %i.05 to i32
   %mul = shl i64 %n1, 1
   %add = sub i64 %mul, %i.05
-  %arrayidx = getelementptr inbounds i32* %A, i64 %add
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %add
   store i32 %conv, i32* %arrayidx, align 4
   %inc = add nsw i64 %i.05, 1
   %exitcond = icmp ne i64 %inc, %n1
@@ -303,9 +303,9 @@ for.body4:
   %j.03 = phi i64 [ %inc9, %for.body4 ], [ 0, %for.body4.preheader ]
   %B.addr.02 = phi i32* [ %incdec.ptr, %for.body4 ], [ %B, %for.body4.preheader ]
   %add6 = sub i64 %n1, %j.03
-  %arrayidx7 = getelementptr inbounds i32* %A, i64 %add6
+  %arrayidx7 = getelementptr inbounds i32, i32* %A, i64 %add6
   %0 = load i32* %arrayidx7, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.02, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.02, i64 1
   store i32 %0, i32* %B.addr.02, align 4
   %inc9 = add nsw i64 %j.03, 1
   %exitcond6 = icmp ne i64 %inc9, %n2
@@ -354,7 +354,7 @@ for.body:
   %i.05 = phi i64 [ %inc, %for.body ], [ 0, %for.body.preheader ]
   %conv = trunc i64 %i.05 to i32
   %add = sub i64 %n2, %i.05
-  %arrayidx = getelementptr inbounds i32* %A, i64 %add
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %add
   store i32 %conv, i32* %arrayidx, align 4
   %inc = add nsw i64 %i.05, 1
   %exitcond = icmp ne i64 %inc, %n1
@@ -365,9 +365,9 @@ for.body4:
   %B.addr.02 = phi i32* [ %incdec.ptr, %for.body4 ], [ %B, %for.body4.preheader ]
   %mul = shl i64 %n2, 1
   %add6 = sub i64 %mul, %j.03
-  %arrayidx7 = getelementptr inbounds i32* %A, i64 %add6
+  %arrayidx7 = getelementptr inbounds i32, i32* %A, i64 %add6
   %0 = load i32* %arrayidx7, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.02, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.02, i64 1
   store i32 %0, i32* %B.addr.02, align 4
   %inc9 = add nsw i64 %j.03, 1
   %exitcond6 = icmp ne i64 %inc9, %n2
@@ -417,19 +417,19 @@ for.body3:
   %conv = trunc i64 %i.05 to i32
   %sub = sub nsw i64 %j.03, %i.05
   %add = add i64 %sub, %n2
-  %arrayidx = getelementptr inbounds i32* %A, i64 %add
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %add
   store i32 %conv, i32* %arrayidx, align 4
   %mul = shl i64 %n2, 1
-  %arrayidx4 = getelementptr inbounds i32* %A, i64 %mul
+  %arrayidx4 = getelementptr inbounds i32, i32* %A, i64 %mul
   %0 = load i32* %arrayidx4, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.12, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.12, i64 1
   store i32 %0, i32* %B.addr.12, align 4
   %inc = add nsw i64 %j.03, 1
   %exitcond = icmp ne i64 %inc, %n2
   br i1 %exitcond, label %for.body3, label %for.inc5.loopexit
 
 for.inc5.loopexit:                                ; preds = %for.body3
-  %scevgep = getelementptr i32* %B.addr.06, i64 %n2
+  %scevgep = getelementptr i32, i32* %B.addr.06, i64 %n2
   br label %for.inc5
 
 for.inc5:                                         ; preds = %for.inc5.loopexit, %for.cond1.preheader

Modified: llvm/trunk/test/Analysis/DependenceAnalysis/SymbolicSIV.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/DependenceAnalysis/SymbolicSIV.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/DependenceAnalysis/SymbolicSIV.ll (original)
+++ llvm/trunk/test/Analysis/DependenceAnalysis/SymbolicSIV.ll Fri Feb 27 13:29:02 2015
@@ -30,13 +30,13 @@ for.body:
   %conv = trunc i64 %i.03 to i32
   %mul = shl nsw i64 %i.03, 1
   %add = add i64 %mul, %n
-  %arrayidx = getelementptr inbounds i32* %A, i64 %add
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %add
   store i32 %conv, i32* %arrayidx, align 4
   %mul14 = add i64 %i.03, %n
   %add3 = mul i64 %mul14, 3
-  %arrayidx4 = getelementptr inbounds i32* %A, i64 %add3
+  %arrayidx4 = getelementptr inbounds i32, i32* %A, i64 %add3
   %0 = load i32* %arrayidx4, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.02, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.02, i64 1
   store i32 %0, i32* %B.addr.02, align 4
   %inc = add nsw i64 %i.03, 1
   %exitcond = icmp ne i64 %inc, %n
@@ -76,14 +76,14 @@ for.body:
   %mul = shl nsw i64 %i.03, 1
   %mul1 = mul i64 %n, 5
   %add = add i64 %mul, %mul1
-  %arrayidx = getelementptr inbounds i32* %A, i64 %add
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %add
   store i32 %conv, i32* %arrayidx, align 4
   %mul2 = mul nsw i64 %i.03, 3
   %mul3 = shl i64 %n, 1
   %add4 = add i64 %mul2, %mul3
-  %arrayidx5 = getelementptr inbounds i32* %A, i64 %add4
+  %arrayidx5 = getelementptr inbounds i32, i32* %A, i64 %add4
   %0 = load i32* %arrayidx5, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.02, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.02, i64 1
   store i32 %0, i32* %B.addr.02, align 4
   %inc = add nsw i64 %i.03, 1
   %exitcond = icmp ne i64 %inc, %n
@@ -122,13 +122,13 @@ for.body:
   %conv = trunc i64 %i.03 to i32
   %mul = shl nsw i64 %i.03, 1
   %sub = sub i64 %mul, %n
-  %arrayidx = getelementptr inbounds i32* %A, i64 %sub
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %sub
   store i32 %conv, i32* %arrayidx, align 4
   %mul2 = shl i64 %n, 1
   %add = sub i64 %mul2, %i.03
-  %arrayidx3 = getelementptr inbounds i32* %A, i64 %add
+  %arrayidx3 = getelementptr inbounds i32, i32* %A, i64 %add
   %0 = load i32* %arrayidx3, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.02, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.02, i64 1
   store i32 %0, i32* %B.addr.02, align 4
   %inc = add nsw i64 %i.03, 1
   %exitcond = icmp ne i64 %inc, %n
@@ -168,13 +168,13 @@ for.body:
   %mul = mul nsw i64 %i.03, -2
   %add = add i64 %mul, %n
   %add1 = add i64 %add, 1
-  %arrayidx = getelementptr inbounds i32* %A, i64 %add1
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %add1
   store i32 %conv, i32* %arrayidx, align 4
   %mul2 = shl i64 %n, 1
   %sub = sub i64 %i.03, %mul2
-  %arrayidx3 = getelementptr inbounds i32* %A, i64 %sub
+  %arrayidx3 = getelementptr inbounds i32, i32* %A, i64 %sub
   %0 = load i32* %arrayidx3, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.02, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.02, i64 1
   store i32 %0, i32* %B.addr.02, align 4
   %inc = add nsw i64 %i.03, 1
   %exitcond = icmp ne i64 %inc, %n
@@ -214,12 +214,12 @@ for.body:
   %mul = mul nsw i64 %i.03, -2
   %mul1 = mul i64 %n, 3
   %add = add i64 %mul, %mul1
-  %arrayidx = getelementptr inbounds i32* %A, i64 %add
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %add
   store i32 %conv, i32* %arrayidx, align 4
   %add2 = sub i64 %n, %i.03
-  %arrayidx3 = getelementptr inbounds i32* %A, i64 %add2
+  %arrayidx3 = getelementptr inbounds i32, i32* %A, i64 %add2
   %0 = load i32* %arrayidx3, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.02, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.02, i64 1
   store i32 %0, i32* %B.addr.02, align 4
   %inc = add nsw i64 %i.03, 1
   %exitcond = icmp ne i64 %inc, %n
@@ -259,13 +259,13 @@ for.body:
   %mul = mul nsw i64 %i.03, -2
   %mul1 = shl i64 %n, 1
   %sub = sub i64 %mul, %mul1
-  %arrayidx = getelementptr inbounds i32* %A, i64 %sub
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %sub
   store i32 %conv, i32* %arrayidx, align 4
   %sub2 = sub nsw i64 0, %i.03
   %sub3 = sub i64 %sub2, %n
-  %arrayidx4 = getelementptr inbounds i32* %A, i64 %sub3
+  %arrayidx4 = getelementptr inbounds i32, i32* %A, i64 %sub3
   %0 = load i32* %arrayidx4, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.02, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.02, i64 1
   store i32 %0, i32* %B.addr.02, align 4
   %inc = add nsw i64 %i.03, 1
   %exitcond = icmp ne i64 %inc, %n
@@ -306,12 +306,12 @@ for.body:
   %conv = trunc i64 %i.03 to i32
   %add = add i64 %i.03, %n
   %add1 = add i64 %add, 1
-  %arrayidx = getelementptr inbounds i32* %A, i64 %add1
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %add1
   store i32 %conv, i32* %arrayidx, align 4
   %sub = sub i64 0, %i.03
-  %arrayidx2 = getelementptr inbounds i32* %A, i64 %sub
+  %arrayidx2 = getelementptr inbounds i32, i32* %A, i64 %sub
   %0 = load i32* %arrayidx2, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.02, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.02, i64 1
   store i32 %0, i32* %B.addr.02, align 4
   %inc = add i64 %i.03, 1
   %exitcond = icmp ne i64 %inc, %n
@@ -351,16 +351,16 @@ for.body:
   %mul = shl i64 %N, 2
   %mul1 = mul i64 %mul, %i.03
   %add = add i64 %mul1, %M
-  %arrayidx = getelementptr inbounds i32* %A, i64 %add
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %add
   store i32 %conv, i32* %arrayidx, align 4
   %mul2 = shl i64 %N, 2
   %mul3 = mul i64 %mul2, %i.03
   %mul4 = mul i64 %M, 3
   %add5 = add i64 %mul3, %mul4
   %add6 = add i64 %add5, 1
-  %arrayidx7 = getelementptr inbounds i32* %A, i64 %add6
+  %arrayidx7 = getelementptr inbounds i32, i32* %A, i64 %add6
   %0 = load i32* %arrayidx7, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.02, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.02, i64 1
   store i32 %0, i32* %B.addr.02, align 4
   %inc = add nsw i64 %i.03, 1
   %exitcond = icmp ne i64 %inc, %n
@@ -400,16 +400,16 @@ for.body:
   %mul = shl i64 %N, 1
   %mul1 = mul i64 %mul, %i.03
   %add = add i64 %mul1, %M
-  %arrayidx = getelementptr inbounds i32* %A, i64 %add
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %add
   store i32 %conv, i32* %arrayidx, align 4
   %mul2 = shl i64 %N, 1
   %mul3 = mul i64 %mul2, %i.03
   %0 = mul i64 %M, -3
   %sub = add i64 %mul3, %0
   %add5 = add i64 %sub, 2
-  %arrayidx6 = getelementptr inbounds i32* %A, i64 %add5
+  %arrayidx6 = getelementptr inbounds i32, i32* %A, i64 %add5
   %1 = load i32* %arrayidx6, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.02, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.02, i64 1
   store i32 %1, i32* %B.addr.02, align 4
   %inc = add nsw i64 %i.03, 1
   %exitcond = icmp ne i64 %inc, %n

Modified: llvm/trunk/test/Analysis/DependenceAnalysis/WeakCrossingSIV.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/DependenceAnalysis/WeakCrossingSIV.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/DependenceAnalysis/WeakCrossingSIV.ll (original)
+++ llvm/trunk/test/Analysis/DependenceAnalysis/WeakCrossingSIV.ll Fri Feb 27 13:29:02 2015
@@ -30,13 +30,13 @@ for.body:
   %conv = trunc i64 %i.03 to i32
   %mul = mul i64 %i.03, %n
   %add = add i64 %mul, 1
-  %arrayidx = getelementptr inbounds i32* %A, i64 %add
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %add
   store i32 %conv, i32* %arrayidx, align 4
   %mul1 = mul i64 %i.03, %n
   %sub = sub i64 1, %mul1
-  %arrayidx2 = getelementptr inbounds i32* %A, i64 %sub
+  %arrayidx2 = getelementptr inbounds i32, i32* %A, i64 %sub
   %0 = load i32* %arrayidx2, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.02, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.02, i64 1
   store i32 %0, i32* %B.addr.02, align 4
   %inc = add i64 %i.03, 1
   %exitcond = icmp ne i64 %inc, %n
@@ -75,13 +75,13 @@ for.body:
   %B.addr.02 = phi i32* [ %incdec.ptr, %for.body ], [ %B, %for.body.preheader ]
   %conv = trunc i64 %i.03 to i32
   %add = add i64 %i.03, %n
-  %arrayidx = getelementptr inbounds i32* %A, i64 %add
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %add
   store i32 %conv, i32* %arrayidx, align 4
   %add1 = add i64 %n, 1
   %sub = sub i64 %add1, %i.03
-  %arrayidx2 = getelementptr inbounds i32* %A, i64 %sub
+  %arrayidx2 = getelementptr inbounds i32, i32* %A, i64 %sub
   %0 = load i32* %arrayidx2, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.02, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.02, i64 1
   store i32 %0, i32* %B.addr.02, align 4
   %inc = add i64 %i.03, 1
   %exitcond = icmp ne i64 %inc, %n
@@ -114,12 +114,12 @@ for.body:
   %i.02 = phi i64 [ 0, %entry ], [ %inc, %for.body ]
   %B.addr.01 = phi i32* [ %B, %entry ], [ %incdec.ptr, %for.body ]
   %conv = trunc i64 %i.02 to i32
-  %arrayidx = getelementptr inbounds i32* %A, i64 %i.02
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %i.02
   store i32 %conv, i32* %arrayidx, align 4
   %sub = sub i64 6, %i.02
-  %arrayidx1 = getelementptr inbounds i32* %A, i64 %sub
+  %arrayidx1 = getelementptr inbounds i32, i32* %A, i64 %sub
   %0 = load i32* %arrayidx1, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.01, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.01, i64 1
   store i32 %0, i32* %B.addr.01, align 4
   %inc = add i64 %i.02, 1
   %exitcond = icmp ne i64 %inc, 3
@@ -149,12 +149,12 @@ for.body:
   %i.02 = phi i64 [ 0, %entry ], [ %inc, %for.body ]
   %B.addr.01 = phi i32* [ %B, %entry ], [ %incdec.ptr, %for.body ]
   %conv = trunc i64 %i.02 to i32
-  %arrayidx = getelementptr inbounds i32* %A, i64 %i.02
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %i.02
   store i32 %conv, i32* %arrayidx, align 4
   %sub = sub i64 6, %i.02
-  %arrayidx1 = getelementptr inbounds i32* %A, i64 %sub
+  %arrayidx1 = getelementptr inbounds i32, i32* %A, i64 %sub
   %0 = load i32* %arrayidx1, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.01, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.01, i64 1
   store i32 %0, i32* %B.addr.01, align 4
   %inc = add i64 %i.02, 1
   %exitcond = icmp ne i64 %inc, 4
@@ -184,12 +184,12 @@ for.body:
   %i.02 = phi i64 [ 0, %entry ], [ %inc, %for.body ]
   %B.addr.01 = phi i32* [ %B, %entry ], [ %incdec.ptr, %for.body ]
   %conv = trunc i64 %i.02 to i32
-  %arrayidx = getelementptr inbounds i32* %A, i64 %i.02
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %i.02
   store i32 %conv, i32* %arrayidx, align 4
   %sub = sub i64 -6, %i.02
-  %arrayidx1 = getelementptr inbounds i32* %A, i64 %sub
+  %arrayidx1 = getelementptr inbounds i32, i32* %A, i64 %sub
   %0 = load i32* %arrayidx1, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.01, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.01, i64 1
   store i32 %0, i32* %B.addr.01, align 4
   %inc = add i64 %i.02, 1
   %exitcond = icmp ne i64 %inc, 10
@@ -224,13 +224,13 @@ for.body:
   %B.addr.02 = phi i32* [ %incdec.ptr, %for.body ], [ %B, %for.body.preheader ]
   %conv = trunc i64 %i.03 to i32
   %mul = mul i64 %i.03, 3
-  %arrayidx = getelementptr inbounds i32* %A, i64 %mul
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %mul
   store i32 %conv, i32* %arrayidx, align 4
   %0 = mul i64 %i.03, -3
   %sub = add i64 %0, 5
-  %arrayidx2 = getelementptr inbounds i32* %A, i64 %sub
+  %arrayidx2 = getelementptr inbounds i32, i32* %A, i64 %sub
   %1 = load i32* %arrayidx2, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.02, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.02, i64 1
   store i32 %1, i32* %B.addr.02, align 4
   %inc = add i64 %i.03, 1
   %exitcond = icmp ne i64 %inc, %n
@@ -264,12 +264,12 @@ for.body:
   %i.02 = phi i64 [ 0, %entry ], [ %inc, %for.body ]
   %B.addr.01 = phi i32* [ %B, %entry ], [ %incdec.ptr, %for.body ]
   %conv = trunc i64 %i.02 to i32
-  %arrayidx = getelementptr inbounds i32* %A, i64 %i.02
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %i.02
   store i32 %conv, i32* %arrayidx, align 4
   %sub = sub i64 5, %i.02
-  %arrayidx1 = getelementptr inbounds i32* %A, i64 %sub
+  %arrayidx1 = getelementptr inbounds i32, i32* %A, i64 %sub
   %0 = load i32* %arrayidx1, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.01, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.01, i64 1
   store i32 %0, i32* %B.addr.01, align 4
   %inc = add i64 %i.02, 1
   %exitcond = icmp ne i64 %inc, 4

Modified: llvm/trunk/test/Analysis/DependenceAnalysis/WeakZeroDstSIV.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/DependenceAnalysis/WeakZeroDstSIV.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/DependenceAnalysis/WeakZeroDstSIV.ll (original)
+++ llvm/trunk/test/Analysis/DependenceAnalysis/WeakZeroDstSIV.ll Fri Feb 27 13:29:02 2015
@@ -26,11 +26,11 @@ for.body:
   %conv = trunc i64 %i.02 to i32
   %mul = shl i64 %i.02, 1
   %add = add i64 %mul, 10
-  %arrayidx = getelementptr inbounds i32* %A, i64 %add
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %add
   store i32 %conv, i32* %arrayidx, align 4
-  %arrayidx1 = getelementptr inbounds i32* %A, i64 10
+  %arrayidx1 = getelementptr inbounds i32, i32* %A, i64 10
   %0 = load i32* %arrayidx1, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.01, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.01, i64 1
   store i32 %0, i32* %B.addr.01, align 4
   %inc = add i64 %i.02, 1
   %exitcond = icmp ne i64 %inc, 30
@@ -66,11 +66,11 @@ for.body:
   %conv = trunc i64 %i.03 to i32
   %mul = mul i64 %i.03, %n
   %add = add i64 %mul, 10
-  %arrayidx = getelementptr inbounds i32* %A, i64 %add
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %add
   store i32 %conv, i32* %arrayidx, align 4
-  %arrayidx1 = getelementptr inbounds i32* %A, i64 10
+  %arrayidx1 = getelementptr inbounds i32, i32* %A, i64 10
   %0 = load i32* %arrayidx1, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.02, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.02, i64 1
   store i32 %0, i32* %B.addr.02, align 4
   %inc = add i64 %i.03, 1
   %exitcond = icmp ne i64 %inc, %n
@@ -104,11 +104,11 @@ for.body:
   %B.addr.01 = phi i32* [ %B, %entry ], [ %incdec.ptr, %for.body ]
   %conv = trunc i64 %i.02 to i32
   %mul = shl i64 %i.02, 1
-  %arrayidx = getelementptr inbounds i32* %A, i64 %mul
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %mul
   store i32 %conv, i32* %arrayidx, align 4
-  %arrayidx1 = getelementptr inbounds i32* %A, i64 10
+  %arrayidx1 = getelementptr inbounds i32, i32* %A, i64 10
   %0 = load i32* %arrayidx1, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.01, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.01, i64 1
   store i32 %0, i32* %B.addr.01, align 4
   %inc = add i64 %i.02, 1
   %exitcond = icmp ne i64 %inc, 5
@@ -139,11 +139,11 @@ for.body:
   %B.addr.01 = phi i32* [ %B, %entry ], [ %incdec.ptr, %for.body ]
   %conv = trunc i64 %i.02 to i32
   %mul = shl i64 %i.02, 1
-  %arrayidx = getelementptr inbounds i32* %A, i64 %mul
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %mul
   store i32 %conv, i32* %arrayidx, align 4
-  %arrayidx1 = getelementptr inbounds i32* %A, i64 10
+  %arrayidx1 = getelementptr inbounds i32, i32* %A, i64 10
   %0 = load i32* %arrayidx1, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.01, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.01, i64 1
   store i32 %0, i32* %B.addr.01, align 4
   %inc = add i64 %i.02, 1
   %exitcond = icmp ne i64 %inc, 6
@@ -174,11 +174,11 @@ for.body:
   %B.addr.01 = phi i32* [ %B, %entry ], [ %incdec.ptr, %for.body ]
   %conv = trunc i64 %i.02 to i32
   %mul = shl i64 %i.02, 1
-  %arrayidx = getelementptr inbounds i32* %A, i64 %mul
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %mul
   store i32 %conv, i32* %arrayidx, align 4
-  %arrayidx1 = getelementptr inbounds i32* %A, i64 10
+  %arrayidx1 = getelementptr inbounds i32, i32* %A, i64 10
   %0 = load i32* %arrayidx1, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.01, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.01, i64 1
   store i32 %0, i32* %B.addr.01, align 4
   %inc = add i64 %i.02, 1
   %exitcond = icmp ne i64 %inc, 7
@@ -209,11 +209,11 @@ for.body:
   %B.addr.01 = phi i32* [ %B, %entry ], [ %incdec.ptr, %for.body ]
   %conv = trunc i64 %i.02 to i32
   %mul = shl i64 %i.02, 1
-  %arrayidx = getelementptr inbounds i32* %A, i64 %mul
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %mul
   store i32 %conv, i32* %arrayidx, align 4
-  %arrayidx1 = getelementptr inbounds i32* %A, i64 -10
+  %arrayidx1 = getelementptr inbounds i32, i32* %A, i64 -10
   %0 = load i32* %arrayidx1, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.01, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.01, i64 1
   store i32 %0, i32* %B.addr.01, align 4
   %inc = add i64 %i.02, 1
   %exitcond = icmp ne i64 %inc, 7
@@ -248,11 +248,11 @@ for.body:
   %B.addr.02 = phi i32* [ %incdec.ptr, %for.body ], [ %B, %for.body.preheader ]
   %conv = trunc i64 %i.03 to i32
   %mul = mul i64 %i.03, 3
-  %arrayidx = getelementptr inbounds i32* %A, i64 %mul
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %mul
   store i32 %conv, i32* %arrayidx, align 4
-  %arrayidx1 = getelementptr inbounds i32* %A, i64 10
+  %arrayidx1 = getelementptr inbounds i32, i32* %A, i64 10
   %0 = load i32* %arrayidx1, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.02, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.02, i64 1
   store i32 %0, i32* %B.addr.02, align 4
   %inc = add i64 %i.03, 1
   %exitcond = icmp ne i64 %inc, %n

Modified: llvm/trunk/test/Analysis/DependenceAnalysis/WeakZeroSrcSIV.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/DependenceAnalysis/WeakZeroSrcSIV.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/DependenceAnalysis/WeakZeroSrcSIV.ll (original)
+++ llvm/trunk/test/Analysis/DependenceAnalysis/WeakZeroSrcSIV.ll Fri Feb 27 13:29:02 2015
@@ -24,13 +24,13 @@ for.body:
   %i.02 = phi i64 [ 0, %entry ], [ %inc, %for.body ]
   %B.addr.01 = phi i32* [ %B, %entry ], [ %incdec.ptr, %for.body ]
   %conv = trunc i64 %i.02 to i32
-  %arrayidx = getelementptr inbounds i32* %A, i64 10
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 10
   store i32 %conv, i32* %arrayidx, align 4
   %mul = shl i64 %i.02, 1
   %add = add i64 %mul, 10
-  %arrayidx1 = getelementptr inbounds i32* %A, i64 %add
+  %arrayidx1 = getelementptr inbounds i32, i32* %A, i64 %add
   %0 = load i32* %arrayidx1, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.01, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.01, i64 1
   store i32 %0, i32* %B.addr.01, align 4
   %inc = add i64 %i.02, 1
   %exitcond = icmp ne i64 %inc, 30
@@ -64,13 +64,13 @@ for.body:
   %i.03 = phi i64 [ %inc, %for.body ], [ 0, %for.body.preheader ]
   %B.addr.02 = phi i32* [ %incdec.ptr, %for.body ], [ %B, %for.body.preheader ]
   %conv = trunc i64 %i.03 to i32
-  %arrayidx = getelementptr inbounds i32* %A, i64 10
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 10
   store i32 %conv, i32* %arrayidx, align 4
   %mul = mul i64 %i.03, %n
   %add = add i64 %mul, 10
-  %arrayidx1 = getelementptr inbounds i32* %A, i64 %add
+  %arrayidx1 = getelementptr inbounds i32, i32* %A, i64 %add
   %0 = load i32* %arrayidx1, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.02, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.02, i64 1
   store i32 %0, i32* %B.addr.02, align 4
   %inc = add i64 %i.03, 1
   %exitcond = icmp ne i64 %inc, %n
@@ -103,12 +103,12 @@ for.body:
   %i.02 = phi i64 [ 0, %entry ], [ %inc, %for.body ]
   %B.addr.01 = phi i32* [ %B, %entry ], [ %incdec.ptr, %for.body ]
   %conv = trunc i64 %i.02 to i32
-  %arrayidx = getelementptr inbounds i32* %A, i64 10
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 10
   store i32 %conv, i32* %arrayidx, align 4
   %mul = shl i64 %i.02, 1
-  %arrayidx1 = getelementptr inbounds i32* %A, i64 %mul
+  %arrayidx1 = getelementptr inbounds i32, i32* %A, i64 %mul
   %0 = load i32* %arrayidx1, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.01, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.01, i64 1
   store i32 %0, i32* %B.addr.01, align 4
   %inc = add i64 %i.02, 1
   %exitcond = icmp ne i64 %inc, 5
@@ -138,12 +138,12 @@ for.body:
   %i.02 = phi i64 [ 0, %entry ], [ %inc, %for.body ]
   %B.addr.01 = phi i32* [ %B, %entry ], [ %incdec.ptr, %for.body ]
   %conv = trunc i64 %i.02 to i32
-  %arrayidx = getelementptr inbounds i32* %A, i64 10
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 10
   store i32 %conv, i32* %arrayidx, align 4
   %mul = shl i64 %i.02, 1
-  %arrayidx1 = getelementptr inbounds i32* %A, i64 %mul
+  %arrayidx1 = getelementptr inbounds i32, i32* %A, i64 %mul
   %0 = load i32* %arrayidx1, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.01, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.01, i64 1
   store i32 %0, i32* %B.addr.01, align 4
   %inc = add i64 %i.02, 1
   %exitcond = icmp ne i64 %inc, 6
@@ -173,12 +173,12 @@ for.body:
   %i.02 = phi i64 [ 0, %entry ], [ %inc, %for.body ]
   %B.addr.01 = phi i32* [ %B, %entry ], [ %incdec.ptr, %for.body ]
   %conv = trunc i64 %i.02 to i32
-  %arrayidx = getelementptr inbounds i32* %A, i64 10
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 10
   store i32 %conv, i32* %arrayidx, align 4
   %mul = shl i64 %i.02, 1
-  %arrayidx1 = getelementptr inbounds i32* %A, i64 %mul
+  %arrayidx1 = getelementptr inbounds i32, i32* %A, i64 %mul
   %0 = load i32* %arrayidx1, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.01, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.01, i64 1
   store i32 %0, i32* %B.addr.01, align 4
   %inc = add i64 %i.02, 1
   %exitcond = icmp ne i64 %inc, 7
@@ -208,12 +208,12 @@ for.body:
   %i.02 = phi i64 [ 0, %entry ], [ %inc, %for.body ]
   %B.addr.01 = phi i32* [ %B, %entry ], [ %incdec.ptr, %for.body ]
   %conv = trunc i64 %i.02 to i32
-  %arrayidx = getelementptr inbounds i32* %A, i64 -10
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 -10
   store i32 %conv, i32* %arrayidx, align 4
   %mul = shl i64 %i.02, 1
-  %arrayidx1 = getelementptr inbounds i32* %A, i64 %mul
+  %arrayidx1 = getelementptr inbounds i32, i32* %A, i64 %mul
   %0 = load i32* %arrayidx1, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.01, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.01, i64 1
   store i32 %0, i32* %B.addr.01, align 4
   %inc = add i64 %i.02, 1
   %exitcond = icmp ne i64 %inc, 7
@@ -247,12 +247,12 @@ for.body:
   %i.03 = phi i64 [ %inc, %for.body ], [ 0, %for.body.preheader ]
   %B.addr.02 = phi i32* [ %incdec.ptr, %for.body ], [ %B, %for.body.preheader ]
   %conv = trunc i64 %i.03 to i32
-  %arrayidx = getelementptr inbounds i32* %A, i64 10
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 10
   store i32 %conv, i32* %arrayidx, align 4
   %mul = mul i64 %i.03, 3
-  %arrayidx1 = getelementptr inbounds i32* %A, i64 %mul
+  %arrayidx1 = getelementptr inbounds i32, i32* %A, i64 %mul
   %0 = load i32* %arrayidx1, align 4
-  %incdec.ptr = getelementptr inbounds i32* %B.addr.02, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %B.addr.02, i64 1
   store i32 %0, i32* %B.addr.02, align 4
   %inc = add i64 %i.03, 1
   %exitcond = icmp ne i64 %inc, %n

Modified: llvm/trunk/test/Analysis/DependenceAnalysis/ZIV.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/DependenceAnalysis/ZIV.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/DependenceAnalysis/ZIV.ll (original)
+++ llvm/trunk/test/Analysis/DependenceAnalysis/ZIV.ll Fri Feb 27 13:29:02 2015
@@ -11,7 +11,7 @@ target triple = "x86_64-apple-macosx10.6
 define void @z0(i32* %A, i32* %B, i64 %n) nounwind uwtable ssp {
 entry:
   %add = add i64 %n, 1
-  %arrayidx = getelementptr inbounds i32* %A, i64 %add
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %add
   store i32 0, i32* %arrayidx, align 4
 
 ; CHECK: da analyze - none!
@@ -22,7 +22,7 @@ entry:
 ; CHECK: da analyze - none!
 
   %add1 = add i64 %n, 1
-  %arrayidx2 = getelementptr inbounds i32* %A, i64 %add1
+  %arrayidx2 = getelementptr inbounds i32, i32* %A, i64 %add1
   %0 = load i32* %arrayidx2, align 4
   store i32 %0, i32* %B, align 4
   ret void
@@ -34,7 +34,7 @@ entry:
 
 define void @z1(i32* %A, i32* %B, i64 %n) nounwind uwtable ssp {
 entry:
-  %arrayidx = getelementptr inbounds i32* %A, i64 %n
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %n
   store i32 0, i32* %arrayidx, align 4
 
 ; CHECK: da analyze - none!
@@ -45,7 +45,7 @@ entry:
 ; CHECK: da analyze - none!
 
   %add = add i64 %n, 1
-  %arrayidx1 = getelementptr inbounds i32* %A, i64 %add
+  %arrayidx1 = getelementptr inbounds i32, i32* %A, i64 %add
   %0 = load i32* %arrayidx1, align 4
   store i32 %0, i32* %B, align 4
   ret void
@@ -57,7 +57,7 @@ entry:
 
 define void @z2(i32* %A, i32* %B, i64 %n, i64 %m) nounwind uwtable ssp {
 entry:
-  %arrayidx = getelementptr inbounds i32* %A, i64 %n
+  %arrayidx = getelementptr inbounds i32, i32* %A, i64 %n
   store i32 0, i32* %arrayidx, align 4
 
 ; CHECK: da analyze - none!
@@ -67,7 +67,7 @@ entry:
 ; CHECK: da analyze - confused!
 ; CHECK: da analyze - none!
 
-  %arrayidx1 = getelementptr inbounds i32* %A, i64 %m
+  %arrayidx1 = getelementptr inbounds i32, i32* %A, i64 %m
   %0 = load i32* %arrayidx1, align 4
   store i32 %0, i32* %B, align 4
   ret void

Modified: llvm/trunk/test/Analysis/Dominators/invoke.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/Dominators/invoke.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/Dominators/invoke.ll (original)
+++ llvm/trunk/test/Analysis/Dominators/invoke.ll Fri Feb 27 13:29:02 2015
@@ -7,7 +7,7 @@ define void @f() {
   invoke void @__dynamic_cast()
           to label %bb1 unwind label %bb2
 bb1:
-  %Hidden = getelementptr inbounds i32* %v1, i64 1
+  %Hidden = getelementptr inbounds i32, i32* %v1, i64 1
   ret void
 bb2:
   %lpad.loopexit80 = landingpad { i8*, i32 } personality i8* bitcast (i32 (...)* @__gxx_personality_v0 to i8*)

Modified: llvm/trunk/test/Analysis/LoopAccessAnalysis/backward-dep-different-types.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/LoopAccessAnalysis/backward-dep-different-types.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/LoopAccessAnalysis/backward-dep-different-types.ll (original)
+++ llvm/trunk/test/Analysis/LoopAccessAnalysis/backward-dep-different-types.ll Fri Feb 27 13:29:02 2015
@@ -27,10 +27,10 @@ entry:
 for.body:                                         ; preds = %for.body, %entry
   %storemerge3 = phi i64 [ 0, %entry ], [ %add, %for.body ]
 
-  %arrayidxA = getelementptr inbounds i32* %a, i64 %storemerge3
+  %arrayidxA = getelementptr inbounds i32, i32* %a, i64 %storemerge3
   %loadA = load i32* %arrayidxA, align 2
 
-  %arrayidxB = getelementptr inbounds i32* %b, i64 %storemerge3
+  %arrayidxB = getelementptr inbounds i32, i32* %b, i64 %storemerge3
   %loadB = load i32* %arrayidxB, align 2
 
   %mul = mul i32 %loadB, %loadA
@@ -38,7 +38,7 @@ for.body:
   %add = add nuw nsw i64 %storemerge3, 1
 
   %a_float = bitcast i32* %a to float*
-  %arrayidxA_plus_2 = getelementptr inbounds float* %a_float, i64 %add
+  %arrayidxA_plus_2 = getelementptr inbounds float, float* %a_float, i64 %add
   %mul_float = sitofp i32 %mul to float
   store float %mul_float, float* %arrayidxA_plus_2, align 2
 

Modified: llvm/trunk/test/Analysis/LoopAccessAnalysis/unsafe-and-rt-checks-no-dbg.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/LoopAccessAnalysis/unsafe-and-rt-checks-no-dbg.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/LoopAccessAnalysis/unsafe-and-rt-checks-no-dbg.ll (original)
+++ llvm/trunk/test/Analysis/LoopAccessAnalysis/unsafe-and-rt-checks-no-dbg.ll Fri Feb 27 13:29:02 2015
@@ -15,11 +15,11 @@ target triple = "x86_64-apple-macosx10.1
 
 ; CHECK: Run-time memory checks:
 ; CHECK-NEXT: 0:
-; CHECK-NEXT:   %arrayidxA_plus_2 = getelementptr inbounds i16* %a, i64 %add
-; CHECK-NEXT:   %arrayidxB = getelementptr inbounds i16* %b, i64 %storemerge3
+; CHECK-NEXT:   %arrayidxA_plus_2 = getelementptr inbounds i16, i16* %a, i64 %add
+; CHECK-NEXT:   %arrayidxB = getelementptr inbounds i16, i16* %b, i64 %storemerge3
 ; CHECK-NEXT: 1:
-; CHECK-NEXT:   %arrayidxA_plus_2 = getelementptr inbounds i16* %a, i64 %add
-; CHECK-NEXT:   %arrayidxC = getelementptr inbounds i16* %c, i64 %storemerge3
+; CHECK-NEXT:   %arrayidxA_plus_2 = getelementptr inbounds i16, i16* %a, i64 %add
+; CHECK-NEXT:   %arrayidxC = getelementptr inbounds i16, i16* %c, i64 %storemerge3
 
 @n = global i32 20, align 4
 @B = common global i16* null, align 8
@@ -36,20 +36,20 @@ entry:
 for.body:                                         ; preds = %for.body, %entry
   %storemerge3 = phi i64 [ 0, %entry ], [ %add, %for.body ]
 
-  %arrayidxA = getelementptr inbounds i16* %a, i64 %storemerge3
+  %arrayidxA = getelementptr inbounds i16, i16* %a, i64 %storemerge3
   %loadA = load i16* %arrayidxA, align 2
 
-  %arrayidxB = getelementptr inbounds i16* %b, i64 %storemerge3
+  %arrayidxB = getelementptr inbounds i16, i16* %b, i64 %storemerge3
   %loadB = load i16* %arrayidxB, align 2
 
-  %arrayidxC = getelementptr inbounds i16* %c, i64 %storemerge3
+  %arrayidxC = getelementptr inbounds i16, i16* %c, i64 %storemerge3
   %loadC = load i16* %arrayidxC, align 2
 
   %mul = mul i16 %loadB, %loadA
   %mul1 = mul i16 %mul, %loadC
 
   %add = add nuw nsw i64 %storemerge3, 1
-  %arrayidxA_plus_2 = getelementptr inbounds i16* %a, i64 %add
+  %arrayidxA_plus_2 = getelementptr inbounds i16, i16* %a, i64 %add
   store i16 %mul1, i16* %arrayidxA_plus_2, align 2
 
   %exitcond = icmp eq i64 %add, 20

Modified: llvm/trunk/test/Analysis/LoopAccessAnalysis/unsafe-and-rt-checks.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/LoopAccessAnalysis/unsafe-and-rt-checks.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/LoopAccessAnalysis/unsafe-and-rt-checks.ll (original)
+++ llvm/trunk/test/Analysis/LoopAccessAnalysis/unsafe-and-rt-checks.ll Fri Feb 27 13:29:02 2015
@@ -16,11 +16,11 @@ target triple = "x86_64-apple-macosx10.1
 
 ; CHECK: Run-time memory checks:
 ; CHECK-NEXT: 0:
-; CHECK-NEXT:   %arrayidxA_plus_2 = getelementptr inbounds i16* %a, i64 %add
-; CHECK-NEXT:   %arrayidxB = getelementptr inbounds i16* %b, i64 %storemerge3
+; CHECK-NEXT:   %arrayidxA_plus_2 = getelementptr inbounds i16, i16* %a, i64 %add
+; CHECK-NEXT:   %arrayidxB = getelementptr inbounds i16, i16* %b, i64 %storemerge3
 ; CHECK-NEXT: 1:
-; CHECK-NEXT:   %arrayidxA_plus_2 = getelementptr inbounds i16* %a, i64 %add
-; CHECK-NEXT:   %arrayidxC = getelementptr inbounds i16* %c, i64 %storemerge3
+; CHECK-NEXT:   %arrayidxA_plus_2 = getelementptr inbounds i16, i16* %a, i64 %add
+; CHECK-NEXT:   %arrayidxC = getelementptr inbounds i16, i16* %c, i64 %storemerge3
 
 @n = global i32 20, align 4
 @B = common global i16* null, align 8
@@ -37,20 +37,20 @@ entry:
 for.body:                                         ; preds = %for.body, %entry
   %storemerge3 = phi i64 [ 0, %entry ], [ %add, %for.body ]
 
-  %arrayidxA = getelementptr inbounds i16* %a, i64 %storemerge3
+  %arrayidxA = getelementptr inbounds i16, i16* %a, i64 %storemerge3
   %loadA = load i16* %arrayidxA, align 2
 
-  %arrayidxB = getelementptr inbounds i16* %b, i64 %storemerge3
+  %arrayidxB = getelementptr inbounds i16, i16* %b, i64 %storemerge3
   %loadB = load i16* %arrayidxB, align 2
 
-  %arrayidxC = getelementptr inbounds i16* %c, i64 %storemerge3
+  %arrayidxC = getelementptr inbounds i16, i16* %c, i64 %storemerge3
   %loadC = load i16* %arrayidxC, align 2
 
   %mul = mul i16 %loadB, %loadA
   %mul1 = mul i16 %mul, %loadC
 
   %add = add nuw nsw i64 %storemerge3, 1
-  %arrayidxA_plus_2 = getelementptr inbounds i16* %a, i64 %add
+  %arrayidxA_plus_2 = getelementptr inbounds i16, i16* %a, i64 %add
   store i16 %mul1, i16* %arrayidxA_plus_2, align 2
 
   %exitcond = icmp eq i64 %add, 20

Modified: llvm/trunk/test/Analysis/MemoryDependenceAnalysis/memdep_requires_dominator_tree.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/MemoryDependenceAnalysis/memdep_requires_dominator_tree.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/MemoryDependenceAnalysis/memdep_requires_dominator_tree.ll (original)
+++ llvm/trunk/test/Analysis/MemoryDependenceAnalysis/memdep_requires_dominator_tree.ll Fri Feb 27 13:29:02 2015
@@ -9,8 +9,8 @@ for.exit:
 
 for.body:                                         ; preds = %for.body, %entry
   %i.01 = phi i32 [ 0, %entry ], [ %tmp8.7, %for.body ]
-  %arrayidx = getelementptr i32* %bufUInt, i32 %i.01
-  %arrayidx5 = getelementptr i32* %pattern, i32 %i.01
+  %arrayidx = getelementptr i32, i32* %bufUInt, i32 %i.01
+  %arrayidx5 = getelementptr i32, i32* %pattern, i32 %i.01
   %tmp6 = load i32* %arrayidx5, align 4
   store i32 %tmp6, i32* %arrayidx, align 4
   %tmp8.7 = add i32 %i.01, 8

Modified: llvm/trunk/test/Analysis/ScalarEvolution/2007-07-15-NegativeStride.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/ScalarEvolution/2007-07-15-NegativeStride.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/ScalarEvolution/2007-07-15-NegativeStride.ll (original)
+++ llvm/trunk/test/Analysis/ScalarEvolution/2007-07-15-NegativeStride.ll Fri Feb 27 13:29:02 2015
@@ -11,7 +11,7 @@ entry:
 
 bb:		; preds = %bb, %entry
 	%i.01.0 = phi i32 [ 100, %entry ], [ %tmp4, %bb ]		; <i32> [#uses=2]
-	%tmp1 = getelementptr [101 x i32]* @array, i32 0, i32 %i.01.0		; <i32*> [#uses=1]
+	%tmp1 = getelementptr [101 x i32], [101 x i32]* @array, i32 0, i32 %i.01.0		; <i32*> [#uses=1]
 	store i32 %x, i32* %tmp1
 	%tmp4 = add i32 %i.01.0, -1		; <i32> [#uses=2]
 	%tmp7 = icmp sgt i32 %tmp4, -1		; <i1> [#uses=1]

Modified: llvm/trunk/test/Analysis/ScalarEvolution/2008-07-12-UnneededSelect1.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/ScalarEvolution/2008-07-12-UnneededSelect1.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/ScalarEvolution/2008-07-12-UnneededSelect1.ll (original)
+++ llvm/trunk/test/Analysis/ScalarEvolution/2008-07-12-UnneededSelect1.ll Fri Feb 27 13:29:02 2015
@@ -19,7 +19,7 @@ bb:		; preds = %bb1, %bb.nph
 	load i32* %srcptr, align 4		; <i32>:1 [#uses=2]
 	and i32 %1, 255		; <i32>:2 [#uses=1]
 	and i32 %1, -256		; <i32>:3 [#uses=1]
-	getelementptr [256 x i8]* @lut, i32 0, i32 %2		; <i8*>:4 [#uses=1]
+	getelementptr [256 x i8], [256 x i8]* @lut, i32 0, i32 %2		; <i8*>:4 [#uses=1]
 	load i8* %4, align 1		; <i8>:5 [#uses=1]
 	zext i8 %5 to i32		; <i32>:6 [#uses=1]
 	or i32 %6, %3		; <i32>:7 [#uses=1]

Modified: llvm/trunk/test/Analysis/ScalarEvolution/2008-12-08-FiniteSGE.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/ScalarEvolution/2008-12-08-FiniteSGE.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/ScalarEvolution/2008-12-08-FiniteSGE.ll (original)
+++ llvm/trunk/test/Analysis/ScalarEvolution/2008-12-08-FiniteSGE.ll Fri Feb 27 13:29:02 2015
@@ -9,9 +9,9 @@ bb1.thread:
 bb1:		; preds = %bb1, %bb1.thread
 	%indvar = phi i32 [ 0, %bb1.thread ], [ %indvar.next, %bb1 ]		; <i32> [#uses=4]
 	%i.0.reg2mem.0 = sub i32 255, %indvar		; <i32> [#uses=2]
-	%0 = getelementptr i32* %alp, i32 %i.0.reg2mem.0		; <i32*> [#uses=1]
+	%0 = getelementptr i32, i32* %alp, i32 %i.0.reg2mem.0		; <i32*> [#uses=1]
 	%1 = load i32* %0, align 4		; <i32> [#uses=1]
-	%2 = getelementptr i32* %lam, i32 %i.0.reg2mem.0		; <i32*> [#uses=1]
+	%2 = getelementptr i32, i32* %lam, i32 %i.0.reg2mem.0		; <i32*> [#uses=1]
 	store i32 %1, i32* %2, align 4
 	%3 = sub i32 254, %indvar		; <i32> [#uses=1]
 	%4 = icmp slt i32 %3, 0		; <i1> [#uses=1]

Modified: llvm/trunk/test/Analysis/ScalarEvolution/2009-05-09-PointerEdgeCount.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/ScalarEvolution/2009-05-09-PointerEdgeCount.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/ScalarEvolution/2009-05-09-PointerEdgeCount.ll (original)
+++ llvm/trunk/test/Analysis/ScalarEvolution/2009-05-09-PointerEdgeCount.ll Fri Feb 27 13:29:02 2015
@@ -11,18 +11,18 @@ target datalayout = "E-p:64:64:64-a0:0:8
 define void @_Z3foov() nounwind {
 entry:
 	%x = alloca %struct.NonPod, align 8		; <%struct.NonPod*> [#uses=2]
-	%0 = getelementptr %struct.NonPod* %x, i32 0, i32 0		; <[2 x %struct.Foo]*> [#uses=1]
-	%1 = getelementptr [2 x %struct.Foo]* %0, i32 1, i32 0		; <%struct.Foo*> [#uses=1]
+	%0 = getelementptr %struct.NonPod, %struct.NonPod* %x, i32 0, i32 0		; <[2 x %struct.Foo]*> [#uses=1]
+	%1 = getelementptr [2 x %struct.Foo], [2 x %struct.Foo]* %0, i32 1, i32 0		; <%struct.Foo*> [#uses=1]
 	br label %bb1.i
 
 bb1.i:		; preds = %bb2.i, %entry
 	%.0.i = phi %struct.Foo* [ %1, %entry ], [ %4, %bb2.i ]		; <%struct.Foo*> [#uses=2]
-	%2 = getelementptr %struct.NonPod* %x, i32 0, i32 0, i32 0		; <%struct.Foo*> [#uses=1]
+	%2 = getelementptr %struct.NonPod, %struct.NonPod* %x, i32 0, i32 0, i32 0		; <%struct.Foo*> [#uses=1]
 	%3 = icmp eq %struct.Foo* %.0.i, %2		; <i1> [#uses=1]
 	br i1 %3, label %_ZN6NonPodD1Ev.exit, label %bb2.i
 
 bb2.i:		; preds = %bb1.i
-	%4 = getelementptr %struct.Foo* %.0.i, i32 -1		; <%struct.Foo*> [#uses=1]
+	%4 = getelementptr %struct.Foo, %struct.Foo* %.0.i, i32 -1		; <%struct.Foo*> [#uses=1]
 	br label %bb1.i
 
 _ZN6NonPodD1Ev.exit:		; preds = %bb1.i

Modified: llvm/trunk/test/Analysis/ScalarEvolution/2012-03-26-LoadConstant.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/ScalarEvolution/2012-03-26-LoadConstant.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/ScalarEvolution/2012-03-26-LoadConstant.ll (original)
+++ llvm/trunk/test/Analysis/ScalarEvolution/2012-03-26-LoadConstant.ll Fri Feb 27 13:29:02 2015
@@ -25,7 +25,7 @@ for.cond:
 
 for.body:                                         ; preds = %for.cond
   %idxprom = sext i32 %0 to i64
-  %arrayidx = getelementptr inbounds [0 x i32]* getelementptr inbounds ([1 x [0 x i32]]* @g_244, i32 0, i64 0), i32 0, i64 %idxprom
+  %arrayidx = getelementptr inbounds [0 x i32], [0 x i32]* getelementptr inbounds ([1 x [0 x i32]]* @g_244, i32 0, i64 0), i32 0, i64 %idxprom
   %1 = load i32* %arrayidx, align 1
   store i32 %1, i32* @func_21_l_773, align 4
   store i32 1, i32* @g_814, align 4

Modified: llvm/trunk/test/Analysis/ScalarEvolution/SolveQuadraticEquation.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/ScalarEvolution/SolveQuadraticEquation.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/ScalarEvolution/SolveQuadraticEquation.ll (original)
+++ llvm/trunk/test/Analysis/ScalarEvolution/SolveQuadraticEquation.ll Fri Feb 27 13:29:02 2015
@@ -10,7 +10,7 @@ entry:
         br label %bb3
 
 bb:             ; preds = %bb3
-        %tmp = getelementptr [1000 x i32]* @A, i32 0, i32 %i.0          ; <i32*> [#uses=1]
+        %tmp = getelementptr [1000 x i32], [1000 x i32]* @A, i32 0, i32 %i.0          ; <i32*> [#uses=1]
         store i32 123, i32* %tmp
         %tmp2 = add i32 %i.0, 1         ; <i32> [#uses=1]
         br label %bb3

Modified: llvm/trunk/test/Analysis/ScalarEvolution/avoid-smax-0.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/ScalarEvolution/avoid-smax-0.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/ScalarEvolution/avoid-smax-0.ll (original)
+++ llvm/trunk/test/Analysis/ScalarEvolution/avoid-smax-0.ll Fri Feb 27 13:29:02 2015
@@ -20,10 +20,10 @@ bb3.preheader:
 
 bb3:
 	%i.0 = phi i32 [ %7, %bb3 ], [ 0, %bb3.preheader ]
-	getelementptr i32* %p, i32 %i.0
+	getelementptr i32, i32* %p, i32 %i.0
 	load i32* %3, align 4
 	add i32 %4, 1
-	getelementptr i32* %p, i32 %i.0
+	getelementptr i32, i32* %p, i32 %i.0
 	store i32 %5, i32* %6, align 4
 	add i32 %i.0, 1
 	icmp slt i32 %7, %n

Modified: llvm/trunk/test/Analysis/ScalarEvolution/avoid-smax-1.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/ScalarEvolution/avoid-smax-1.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/ScalarEvolution/avoid-smax-1.ll (original)
+++ llvm/trunk/test/Analysis/ScalarEvolution/avoid-smax-1.ll Fri Feb 27 13:29:02 2015
@@ -35,9 +35,9 @@ bb6:		; preds = %bb7, %bb.nph7
 	%7 = add i32 %x.06, %4		; <i32> [#uses=1]
 	%8 = shl i32 %x.06, 1		; <i32> [#uses=1]
 	%9 = add i32 %6, %8		; <i32> [#uses=1]
-	%10 = getelementptr i8* %r, i32 %9		; <i8*> [#uses=1]
+	%10 = getelementptr i8, i8* %r, i32 %9		; <i8*> [#uses=1]
 	%11 = load i8* %10, align 1		; <i8> [#uses=1]
-	%12 = getelementptr i8* %j, i32 %7		; <i8*> [#uses=1]
+	%12 = getelementptr i8, i8* %j, i32 %7		; <i8*> [#uses=1]
 	store i8 %11, i8* %12, align 1
 	%13 = add i32 %x.06, 1		; <i32> [#uses=2]
 	br label %bb7
@@ -102,18 +102,18 @@ bb14:		; preds = %bb15, %bb.nph3
 	%x.12 = phi i32 [ %40, %bb15 ], [ 0, %bb.nph3 ]		; <i32> [#uses=5]
 	%29 = shl i32 %x.12, 2		; <i32> [#uses=1]
 	%30 = add i32 %29, %25		; <i32> [#uses=1]
-	%31 = getelementptr i8* %r, i32 %30		; <i8*> [#uses=1]
+	%31 = getelementptr i8, i8* %r, i32 %30		; <i8*> [#uses=1]
 	%32 = load i8* %31, align 1		; <i8> [#uses=1]
 	%.sum = add i32 %26, %x.12		; <i32> [#uses=1]
-	%33 = getelementptr i8* %j, i32 %.sum		; <i8*> [#uses=1]
+	%33 = getelementptr i8, i8* %j, i32 %.sum		; <i8*> [#uses=1]
 	store i8 %32, i8* %33, align 1
 	%34 = shl i32 %x.12, 2		; <i32> [#uses=1]
 	%35 = or i32 %34, 2		; <i32> [#uses=1]
 	%36 = add i32 %35, %25		; <i32> [#uses=1]
-	%37 = getelementptr i8* %r, i32 %36		; <i8*> [#uses=1]
+	%37 = getelementptr i8, i8* %r, i32 %36		; <i8*> [#uses=1]
 	%38 = load i8* %37, align 1		; <i8> [#uses=1]
 	%.sum6 = add i32 %27, %x.12		; <i32> [#uses=1]
-	%39 = getelementptr i8* %j, i32 %.sum6		; <i8*> [#uses=1]
+	%39 = getelementptr i8, i8* %j, i32 %.sum6		; <i8*> [#uses=1]
 	store i8 %38, i8* %39, align 1
 	%40 = add i32 %x.12, 1		; <i32> [#uses=2]
 	br label %bb15
@@ -168,10 +168,10 @@ bb23:		; preds = %bb24, %bb.nph
 	%y.21 = phi i32 [ %57, %bb24 ], [ 0, %bb.nph ]		; <i32> [#uses=3]
 	%53 = mul i32 %y.21, %50		; <i32> [#uses=1]
 	%.sum1 = add i32 %53, %51		; <i32> [#uses=1]
-	%54 = getelementptr i8* %r, i32 %.sum1		; <i8*> [#uses=1]
+	%54 = getelementptr i8, i8* %r, i32 %.sum1		; <i8*> [#uses=1]
 	%55 = mul i32 %y.21, %w		; <i32> [#uses=1]
 	%.sum5 = add i32 %55, %.sum3		; <i32> [#uses=1]
-	%56 = getelementptr i8* %j, i32 %.sum5		; <i8*> [#uses=1]
+	%56 = getelementptr i8, i8* %j, i32 %.sum5		; <i8*> [#uses=1]
 	tail call void @llvm.memcpy.p0i8.p0i8.i32(i8* %56, i8* %54, i32 %w, i32 1, i1 false)
 	%57 = add i32 %y.21, 1		; <i32> [#uses=2]
 	br label %bb24
@@ -186,7 +186,7 @@ bb24.bb26_crit_edge:		; preds = %bb24
 bb26:		; preds = %bb24.bb26_crit_edge, %bb22
 	%59 = mul i32 %x, %w		; <i32> [#uses=1]
 	%.sum4 = add i32 %.sum3, %59		; <i32> [#uses=1]
-	%60 = getelementptr i8* %j, i32 %.sum4		; <i8*> [#uses=1]
+	%60 = getelementptr i8, i8* %j, i32 %.sum4		; <i8*> [#uses=1]
 	%61 = mul i32 %x, %w		; <i32> [#uses=1]
 	%62 = sdiv i32 %61, 2		; <i32> [#uses=1]
 	tail call void @llvm.memset.p0i8.i32(i8* %60, i8 -128, i32 %62, i32 1, i1 false)
@@ -204,9 +204,9 @@ bb.nph11:		; preds = %bb29
 bb30:		; preds = %bb31, %bb.nph11
 	%y.310 = phi i32 [ %70, %bb31 ], [ 0, %bb.nph11 ]		; <i32> [#uses=3]
 	%66 = mul i32 %y.310, %64		; <i32> [#uses=1]
-	%67 = getelementptr i8* %r, i32 %66		; <i8*> [#uses=1]
+	%67 = getelementptr i8, i8* %r, i32 %66		; <i8*> [#uses=1]
 	%68 = mul i32 %y.310, %w		; <i32> [#uses=1]
-	%69 = getelementptr i8* %j, i32 %68		; <i8*> [#uses=1]
+	%69 = getelementptr i8, i8* %j, i32 %68		; <i8*> [#uses=1]
 	tail call void @llvm.memcpy.p0i8.p0i8.i32(i8* %69, i8* %67, i32 %w, i32 1, i1 false)
 	%70 = add i32 %y.310, 1		; <i32> [#uses=2]
 	br label %bb31
@@ -220,7 +220,7 @@ bb31.bb33_crit_edge:		; preds = %bb31
 
 bb33:		; preds = %bb31.bb33_crit_edge, %bb29
 	%72 = mul i32 %x, %w		; <i32> [#uses=1]
-	%73 = getelementptr i8* %j, i32 %72		; <i8*> [#uses=1]
+	%73 = getelementptr i8, i8* %j, i32 %72		; <i8*> [#uses=1]
 	%74 = mul i32 %x, %w		; <i32> [#uses=1]
 	%75 = sdiv i32 %74, 2		; <i32> [#uses=1]
 	tail call void @llvm.memset.p0i8.i32(i8* %73, i8 -128, i32 %75, i32 1, i1 false)

Modified: llvm/trunk/test/Analysis/ScalarEvolution/load.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/ScalarEvolution/load.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/ScalarEvolution/load.ll (original)
+++ llvm/trunk/test/Analysis/ScalarEvolution/load.ll Fri Feb 27 13:29:02 2015
@@ -16,10 +16,10 @@ for.body:
   %sum.04 = phi i32 [ 0, %entry ], [ %add2, %for.body ]
 ; CHECK: -->  %sum.04{{ *}}Exits: 2450
   %i.03 = phi i32 [ 0, %entry ], [ %inc, %for.body ]
-  %arrayidx = getelementptr inbounds [50 x i32]* @arr1, i32 0, i32 %i.03
+  %arrayidx = getelementptr inbounds [50 x i32], [50 x i32]* @arr1, i32 0, i32 %i.03
   %0 = load i32* %arrayidx, align 4
 ; CHECK: -->  %0{{ *}}Exits: 50
-  %arrayidx1 = getelementptr inbounds [50 x i32]* @arr2, i32 0, i32 %i.03
+  %arrayidx1 = getelementptr inbounds [50 x i32], [50 x i32]* @arr2, i32 0, i32 %i.03
   %1 = load i32* %arrayidx1, align 4
 ; CHECK: -->  %1{{ *}}Exits: 0
   %add = add i32 %0, %sum.04
@@ -51,10 +51,10 @@ for.body:
 ; CHECK: -->  %sum.02{{ *}}Exits: 10
   %n.01 = phi %struct.ListNode* [ bitcast ({ %struct.ListNode*, i32, [4 x i8] }* @node5 to %struct.ListNode*), %entry ], [ %1, %for.body ]
 ; CHECK: -->  %n.01{{ *}}Exits: @node1
-  %i = getelementptr inbounds %struct.ListNode* %n.01, i64 0, i32 1
+  %i = getelementptr inbounds %struct.ListNode, %struct.ListNode* %n.01, i64 0, i32 1
   %0 = load i32* %i, align 4
   %add = add nsw i32 %0, %sum.02
-  %next = getelementptr inbounds %struct.ListNode* %n.01, i64 0, i32 0
+  %next = getelementptr inbounds %struct.ListNode, %struct.ListNode* %n.01, i64 0, i32 0
   %1 = load %struct.ListNode** %next, align 8
 ; CHECK: -->  %1{{ *}}Exits: 0
   %cmp = icmp eq %struct.ListNode* %1, null

Modified: llvm/trunk/test/Analysis/ScalarEvolution/max-trip-count-address-space.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/ScalarEvolution/max-trip-count-address-space.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/ScalarEvolution/max-trip-count-address-space.ll (original)
+++ llvm/trunk/test/Analysis/ScalarEvolution/max-trip-count-address-space.ll Fri Feb 27 13:29:02 2015
@@ -21,7 +21,7 @@ bb:		; preds = %bb1, %bb.nph
 	%p.01 = phi i8 [ %4, %bb1 ], [ -1, %bb.nph ]		; <i8> [#uses=2]
 	%1 = sext i8 %p.01 to i32		; <i32> [#uses=1]
 	%2 = sext i32 %i.02 to i64		; <i64> [#uses=1]
-	%3 = getelementptr i32 addrspace(1)* %d, i64 %2		; <i32*> [#uses=1]
+	%3 = getelementptr i32, i32 addrspace(1)* %d, i64 %2		; <i32*> [#uses=1]
 	store i32 %1, i32 addrspace(1)* %3, align 4
 	%4 = add i8 %p.01, 1		; <i8> [#uses=1]
 	%5 = add i32 %i.02, 1		; <i32> [#uses=2]
@@ -50,7 +50,7 @@ for.body.lr.ph:
 
 for.body:                                         ; preds = %for.body, %for.body.lr.ph
   %indvar = phi i64 [ %indvar.next, %for.body ], [ 0, %for.body.lr.ph ]
-  %arrayidx = getelementptr i8 addrspace(1)* %a, i64 %indvar
+  %arrayidx = getelementptr i8, i8 addrspace(1)* %a, i64 %indvar
   store i8 0, i8 addrspace(1)* %arrayidx, align 1
   %indvar.next = add i64 %indvar, 1
   %exitcond = icmp ne i64 %indvar.next, %tmp

Modified: llvm/trunk/test/Analysis/ScalarEvolution/max-trip-count.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/ScalarEvolution/max-trip-count.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/ScalarEvolution/max-trip-count.ll (original)
+++ llvm/trunk/test/Analysis/ScalarEvolution/max-trip-count.ll Fri Feb 27 13:29:02 2015
@@ -17,7 +17,7 @@ bb:		; preds = %bb1, %bb.nph
 	%p.01 = phi i8 [ %4, %bb1 ], [ -1, %bb.nph ]		; <i8> [#uses=2]
 	%1 = sext i8 %p.01 to i32		; <i32> [#uses=1]
 	%2 = sext i32 %i.02 to i64		; <i64> [#uses=1]
-	%3 = getelementptr i32* %d, i64 %2		; <i32*> [#uses=1]
+	%3 = getelementptr i32, i32* %d, i64 %2		; <i32*> [#uses=1]
 	store i32 %1, i32* %3, align 4
 	%4 = add i8 %p.01, 1		; <i8> [#uses=1]
 	%5 = add i32 %i.02, 1		; <i32> [#uses=2]
@@ -82,7 +82,7 @@ for.body.lr.ph:
 
 for.body:                                         ; preds = %for.body, %for.body.lr.ph
   %indvar = phi i64 [ %indvar.next, %for.body ], [ 0, %for.body.lr.ph ]
-  %arrayidx = getelementptr i8* %a, i64 %indvar
+  %arrayidx = getelementptr i8, i8* %a, i64 %indvar
   store i8 0, i8* %arrayidx, align 1
   %indvar.next = add i64 %indvar, 1
   %exitcond = icmp ne i64 %indvar.next, %tmp

Modified: llvm/trunk/test/Analysis/ScalarEvolution/min-max-exprs.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/ScalarEvolution/min-max-exprs.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/ScalarEvolution/min-max-exprs.ll (original)
+++ llvm/trunk/test/Analysis/ScalarEvolution/min-max-exprs.ll Fri Feb 27 13:29:02 2015
@@ -34,7 +34,7 @@ bb2:
 ;                  min(N, i+3)
 ; CHECK:           select i1 %tmp4, i64 %tmp5, i64 %tmp6
 ; CHECK-NEXT:  --> (-1 + (-1 * ((-1 + (-1 * (sext i32 {3,+,1}<nw><%bb1> to i64))) smax (-1 + (-1 * (sext i32 %N to i64))))))
-  %tmp11 = getelementptr inbounds i32* %A, i64 %tmp9
+  %tmp11 = getelementptr inbounds i32, i32* %A, i64 %tmp9
   %tmp12 = load i32* %tmp11, align 4
   %tmp13 = shl nsw i32 %tmp12, 1
   %tmp14 = icmp sge i32 3, %i.0
@@ -43,7 +43,7 @@ bb2:
 ;                  max(0, i - 3)
 ; CHECK:           select i1 %tmp14, i64 0, i64 %tmp17
 ; CHECK-NEXT: -->  (-3 + (3 smax {0,+,1}<nuw><nsw><%bb1>))
-  %tmp21 = getelementptr inbounds i32* %A, i64 %tmp19
+  %tmp21 = getelementptr inbounds i32, i32* %A, i64 %tmp19
   store i32 %tmp13, i32* %tmp21, align 4
   %tmp23 = add nuw nsw i32 %i.0, 1
   br label %bb1

Modified: llvm/trunk/test/Analysis/ScalarEvolution/nsw-offset-assume.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/ScalarEvolution/nsw-offset-assume.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/ScalarEvolution/nsw-offset-assume.ll (original)
+++ llvm/trunk/test/Analysis/ScalarEvolution/nsw-offset-assume.ll Fri Feb 27 13:29:02 2015
@@ -24,13 +24,13 @@ bb:
 ; CHECK: -->  {0,+,2}<nuw><nsw><%bb>
   %1 = sext i32 %i.01 to i64                      ; <i64> [#uses=1]
 
-; CHECK: %2 = getelementptr inbounds double* %d, i64 %1
+; CHECK: %2 = getelementptr inbounds double, double* %d, i64 %1
 ; CHECK: -->  {%d,+,16}<nsw><%bb>
-  %2 = getelementptr inbounds double* %d, i64 %1  ; <double*> [#uses=1]
+  %2 = getelementptr inbounds double, double* %d, i64 %1  ; <double*> [#uses=1]
 
   %3 = load double* %2, align 8                   ; <double> [#uses=1]
   %4 = sext i32 %i.01 to i64                      ; <i64> [#uses=1]
-  %5 = getelementptr inbounds double* %q, i64 %4  ; <double*> [#uses=1]
+  %5 = getelementptr inbounds double, double* %q, i64 %4  ; <double*> [#uses=1]
   %6 = load double* %5, align 8                   ; <double> [#uses=1]
   %7 = or i32 %i.01, 1                            ; <i32> [#uses=1]
 
@@ -38,9 +38,9 @@ bb:
 ; CHECK: -->  {1,+,2}<nuw><nsw><%bb>
   %8 = sext i32 %7 to i64                         ; <i64> [#uses=1]
 
-; CHECK: %9 = getelementptr inbounds double* %q, i64 %8
+; CHECK: %9 = getelementptr inbounds double, double* %q, i64 %8
 ; CHECK: {(8 + %q),+,16}<nsw><%bb>
-  %9 = getelementptr inbounds double* %q, i64 %8  ; <double*> [#uses=1]
+  %9 = getelementptr inbounds double, double* %q, i64 %8  ; <double*> [#uses=1]
 
 ; Artificially repeat the above three instructions, this time using
 ; add nsw instead of or.
@@ -50,16 +50,16 @@ bb:
 ; CHECK: -->  {1,+,2}<nuw><nsw><%bb>
   %t8 = sext i32 %t7 to i64                         ; <i64> [#uses=1]
 
-; CHECK: %t9 = getelementptr inbounds double* %q, i64 %t8
+; CHECK: %t9 = getelementptr inbounds double, double* %q, i64 %t8
 ; CHECK: {(8 + %q),+,16}<nsw><%bb>
-  %t9 = getelementptr inbounds double* %q, i64 %t8  ; <double*> [#uses=1]
+  %t9 = getelementptr inbounds double, double* %q, i64 %t8  ; <double*> [#uses=1]
 
   %10 = load double* %9, align 8                  ; <double> [#uses=1]
   %11 = fadd double %6, %10                       ; <double> [#uses=1]
   %12 = fadd double %11, 3.200000e+00             ; <double> [#uses=1]
   %13 = fmul double %3, %12                       ; <double> [#uses=1]
   %14 = sext i32 %i.01 to i64                     ; <i64> [#uses=1]
-  %15 = getelementptr inbounds double* %d, i64 %14 ; <double*> [#uses=1]
+  %15 = getelementptr inbounds double, double* %d, i64 %14 ; <double*> [#uses=1]
   store double %13, double* %15, align 8
   %16 = add nsw i32 %i.01, 2                      ; <i32> [#uses=2]
   br label %bb1

Modified: llvm/trunk/test/Analysis/ScalarEvolution/nsw-offset.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/ScalarEvolution/nsw-offset.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/ScalarEvolution/nsw-offset.ll (original)
+++ llvm/trunk/test/Analysis/ScalarEvolution/nsw-offset.ll Fri Feb 27 13:29:02 2015
@@ -22,13 +22,13 @@ bb:
 ; CHECK: -->  {0,+,2}<nuw><nsw><%bb>
   %1 = sext i32 %i.01 to i64                      ; <i64> [#uses=1]
 
-; CHECK: %2 = getelementptr inbounds double* %d, i64 %1
+; CHECK: %2 = getelementptr inbounds double, double* %d, i64 %1
 ; CHECK: -->  {%d,+,16}<nsw><%bb>
-  %2 = getelementptr inbounds double* %d, i64 %1  ; <double*> [#uses=1]
+  %2 = getelementptr inbounds double, double* %d, i64 %1  ; <double*> [#uses=1]
 
   %3 = load double* %2, align 8                   ; <double> [#uses=1]
   %4 = sext i32 %i.01 to i64                      ; <i64> [#uses=1]
-  %5 = getelementptr inbounds double* %q, i64 %4  ; <double*> [#uses=1]
+  %5 = getelementptr inbounds double, double* %q, i64 %4  ; <double*> [#uses=1]
   %6 = load double* %5, align 8                   ; <double> [#uses=1]
   %7 = or i32 %i.01, 1                            ; <i32> [#uses=1]
 
@@ -36,9 +36,9 @@ bb:
 ; CHECK: -->  {1,+,2}<nuw><nsw><%bb>
   %8 = sext i32 %7 to i64                         ; <i64> [#uses=1]
 
-; CHECK: %9 = getelementptr inbounds double* %q, i64 %8
+; CHECK: %9 = getelementptr inbounds double, double* %q, i64 %8
 ; CHECK: {(8 + %q),+,16}<nsw><%bb>
-  %9 = getelementptr inbounds double* %q, i64 %8  ; <double*> [#uses=1]
+  %9 = getelementptr inbounds double, double* %q, i64 %8  ; <double*> [#uses=1]
 
 ; Artificially repeat the above three instructions, this time using
 ; add nsw instead of or.
@@ -48,16 +48,16 @@ bb:
 ; CHECK: -->  {1,+,2}<nuw><nsw><%bb>
   %t8 = sext i32 %t7 to i64                         ; <i64> [#uses=1]
 
-; CHECK: %t9 = getelementptr inbounds double* %q, i64 %t8
+; CHECK: %t9 = getelementptr inbounds double, double* %q, i64 %t8
 ; CHECK: {(8 + %q),+,16}<nsw><%bb>
-  %t9 = getelementptr inbounds double* %q, i64 %t8  ; <double*> [#uses=1]
+  %t9 = getelementptr inbounds double, double* %q, i64 %t8  ; <double*> [#uses=1]
 
   %10 = load double* %9, align 8                  ; <double> [#uses=1]
   %11 = fadd double %6, %10                       ; <double> [#uses=1]
   %12 = fadd double %11, 3.200000e+00             ; <double> [#uses=1]
   %13 = fmul double %3, %12                       ; <double> [#uses=1]
   %14 = sext i32 %i.01 to i64                     ; <i64> [#uses=1]
-  %15 = getelementptr inbounds double* %d, i64 %14 ; <double*> [#uses=1]
+  %15 = getelementptr inbounds double, double* %d, i64 %14 ; <double*> [#uses=1]
   store double %13, double* %15, align 8
   %16 = add nsw i32 %i.01, 2                      ; <i32> [#uses=2]
   br label %bb1

Modified: llvm/trunk/test/Analysis/ScalarEvolution/nsw.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/ScalarEvolution/nsw.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/ScalarEvolution/nsw.ll (original)
+++ llvm/trunk/test/Analysis/ScalarEvolution/nsw.ll Fri Feb 27 13:29:02 2015
@@ -19,11 +19,11 @@ bb:		; preds = %bb1, %bb.nph
 ; CHECK: %i.01
 ; CHECK-NEXT: -->  {0,+,1}<nuw><nsw><%bb>
 	%tmp2 = sext i32 %i.01 to i64		; <i64> [#uses=1]
-	%tmp3 = getelementptr double* %p, i64 %tmp2		; <double*> [#uses=1]
+	%tmp3 = getelementptr double, double* %p, i64 %tmp2		; <double*> [#uses=1]
 	%tmp4 = load double* %tmp3, align 8		; <double> [#uses=1]
 	%tmp5 = fmul double %tmp4, 9.200000e+00		; <double> [#uses=1]
 	%tmp6 = sext i32 %i.01 to i64		; <i64> [#uses=1]
-	%tmp7 = getelementptr double* %p, i64 %tmp6		; <double*> [#uses=1]
+	%tmp7 = getelementptr double, double* %p, i64 %tmp6		; <double*> [#uses=1]
 ; CHECK: %tmp7
 ; CHECK-NEXT:   -->  {%p,+,8}<%bb>
 	store double %tmp5, double* %tmp7, align 8
@@ -36,7 +36,7 @@ bb1:		; preds = %bb
 	%phitmp = sext i32 %tmp8 to i64		; <i64> [#uses=1]
 ; CHECK: %phitmp
 ; CHECK-NEXT: -->  {1,+,1}<nuw><nsw><%bb>
-	%tmp9 = getelementptr double* %p, i64 %phitmp		; <double*> [#uses=1]
+	%tmp9 = getelementptr double, double* %p, i64 %phitmp		; <double*> [#uses=1]
 ; CHECK: %tmp9
 ; CHECK-NEXT:  -->  {(8 + %p),+,8}<%bb>
 	%tmp10 = load double* %tmp9, align 8		; <double> [#uses=1]
@@ -64,7 +64,7 @@ for.body.i.i:
 ; CHECK: %__first.addr.02.i.i
 ; CHECK-NEXT: -->  {%begin,+,4}<nuw><%for.body.i.i>
   store i32 0, i32* %__first.addr.02.i.i, align 4
-  %ptrincdec.i.i = getelementptr inbounds i32* %__first.addr.02.i.i, i64 1
+  %ptrincdec.i.i = getelementptr inbounds i32, i32* %__first.addr.02.i.i, i64 1
 ; CHECK: %ptrincdec.i.i
 ; CHECK-NEXT: -->  {(4 + %begin),+,4}<nuw><%for.body.i.i>
   %cmp.i.i = icmp eq i32* %ptrincdec.i.i, %end
@@ -90,10 +90,10 @@ for.body.i.i:
   %tmp = add nsw i64 %indvar.i.i, 1
 ; CHECK: %tmp =
 ; CHECK: {1,+,1}<nuw><nsw><%for.body.i.i>
-  %ptrincdec.i.i = getelementptr inbounds i32* %begin, i64 %tmp
+  %ptrincdec.i.i = getelementptr inbounds i32, i32* %begin, i64 %tmp
 ; CHECK: %ptrincdec.i.i =
 ; CHECK: {(4 + %begin),+,4}<nsw><%for.body.i.i>
-  %__first.addr.08.i.i = getelementptr inbounds i32* %begin, i64 %indvar.i.i
+  %__first.addr.08.i.i = getelementptr inbounds i32, i32* %begin, i64 %indvar.i.i
 ; CHECK: %__first.addr.08.i.i
 ; CHECK: {%begin,+,4}<nsw><%for.body.i.i>
   store i32 0, i32* %__first.addr.08.i.i, align 4
@@ -127,14 +127,14 @@ exit:
 ; CHECK: -->  {(4 + %arg),+,4}<nuw><%bb1>		Exits: (8 + %arg)<nsw>
 define i32 @PR12375(i32* readnone %arg) {
 bb:
-  %tmp = getelementptr inbounds i32* %arg, i64 2
+  %tmp = getelementptr inbounds i32, i32* %arg, i64 2
   br label %bb1
 
 bb1:                                              ; preds = %bb1, %bb
   %tmp2 = phi i32* [ %arg, %bb ], [ %tmp5, %bb1 ]
   %tmp3 = phi i32 [ 0, %bb ], [ %tmp4, %bb1 ]
   %tmp4 = add nsw i32 %tmp3, 1
-  %tmp5 = getelementptr inbounds i32* %tmp2, i64 1
+  %tmp5 = getelementptr inbounds i32, i32* %tmp2, i64 1
   %tmp6 = icmp ult i32* %tmp5, %tmp
   br i1 %tmp6, label %bb1, label %bb7
 
@@ -151,7 +151,7 @@ bb:
 bb2:                                              ; preds = %bb2, %bb
   %tmp = phi i32* [ %arg, %bb ], [ %tmp4, %bb2 ]
   %tmp3 = icmp ult i32* %tmp, %arg1
-  %tmp4 = getelementptr inbounds i32* %tmp, i64 1
+  %tmp4 = getelementptr inbounds i32, i32* %tmp, i64 1
   br i1 %tmp3, label %bb2, label %bb5
 
 bb5:                                              ; preds = %bb2

Modified: llvm/trunk/test/Analysis/ScalarEvolution/pr22674.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/ScalarEvolution/pr22674.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/ScalarEvolution/pr22674.ll (original)
+++ llvm/trunk/test/Analysis/ScalarEvolution/pr22674.ll Fri Feb 27 13:29:02 2015
@@ -44,11 +44,11 @@ cond.false:
   unreachable
 
 _ZNK4llvm12AttributeSet3endEj.exit:               ; preds = %for.end
-  %second.i.i.i = getelementptr inbounds %"struct.std::pair.241.2040.3839.6152.6923.7694.8465.9493.10007.10264.18507"* undef, i32 %I.099.lcssa129, i32 1
+  %second.i.i.i = getelementptr inbounds %"struct.std::pair.241.2040.3839.6152.6923.7694.8465.9493.10007.10264.18507", %"struct.std::pair.241.2040.3839.6152.6923.7694.8465.9493.10007.10264.18507"* undef, i32 %I.099.lcssa129, i32 1
   %0 = load %"class.llvm::AttributeSetNode.230.2029.3828.6141.6912.7683.8454.9482.9996.10253.18506"** %second.i.i.i, align 4, !tbaa !2
-  %NumAttrs.i.i.i = getelementptr inbounds %"class.llvm::AttributeSetNode.230.2029.3828.6141.6912.7683.8454.9482.9996.10253.18506"* %0, i32 0, i32 1
+  %NumAttrs.i.i.i = getelementptr inbounds %"class.llvm::AttributeSetNode.230.2029.3828.6141.6912.7683.8454.9482.9996.10253.18506", %"class.llvm::AttributeSetNode.230.2029.3828.6141.6912.7683.8454.9482.9996.10253.18506"* %0, i32 0, i32 1
   %1 = load i32* %NumAttrs.i.i.i, align 4, !tbaa !8
-  %add.ptr.i.i.i55 = getelementptr inbounds %"class.llvm::Attribute.222.2021.3820.6133.6904.7675.8446.9474.9988.10245.18509"* undef, i32 %1
+  %add.ptr.i.i.i55 = getelementptr inbounds %"class.llvm::Attribute.222.2021.3820.6133.6904.7675.8446.9474.9988.10245.18509", %"class.llvm::Attribute.222.2021.3820.6133.6904.7675.8446.9474.9988.10245.18509"* undef, i32 %1
   br i1 undef, label %return, label %for.body11
 
 for.cond9:                                        ; preds = %_ZNK4llvm9Attribute13getKindAsEnumEv.exit
@@ -70,7 +70,7 @@ _ZNK4llvm9Attribute15isEnumAttributeEv.e
   ]
 
 _ZNK4llvm9Attribute13getKindAsEnumEv.exit:        ; preds = %_ZNK4llvm9Attribute15isEnumAttributeEv.exit, %_ZNK4llvm9Attribute15isEnumAttributeEv.exit
-  %incdec.ptr = getelementptr inbounds %"class.llvm::Attribute.222.2021.3820.6133.6904.7675.8446.9474.9988.10245.18509"* %I5.096, i32 1
+  %incdec.ptr = getelementptr inbounds %"class.llvm::Attribute.222.2021.3820.6133.6904.7675.8446.9474.9988.10245.18509", %"class.llvm::Attribute.222.2021.3820.6133.6904.7675.8446.9474.9988.10245.18509"* %I5.096, i32 1
   br i1 undef, label %for.cond9, label %return
 
 cond.false21:                                     ; preds = %_ZNK4llvm9Attribute15isEnumAttributeEv.exit, %for.body11

Modified: llvm/trunk/test/Analysis/ScalarEvolution/scev-aa.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/ScalarEvolution/scev-aa.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/ScalarEvolution/scev-aa.ll (original)
+++ llvm/trunk/test/Analysis/ScalarEvolution/scev-aa.ll Fri Feb 27 13:29:02 2015
@@ -19,9 +19,9 @@ entry:
 
 bb:
   %i = phi i64 [ 0, %entry ], [ %i.next, %bb ]
-  %pi = getelementptr double* %p, i64 %i
+  %pi = getelementptr double, double* %p, i64 %i
   %i.next = add i64 %i, 1
-  %pi.next = getelementptr double* %p, i64 %i.next
+  %pi.next = getelementptr double, double* %p, i64 %i.next
   %x = load double* %pi
   %y = load double* %pi.next
   %z = fmul double %x, %y
@@ -58,9 +58,9 @@ bb:
   %i.next = add i64 %i, 1
 
   %e = add i64 %i, %j
-  %pi.j = getelementptr double* %p, i64 %e
+  %pi.j = getelementptr double, double* %p, i64 %e
   %f = add i64 %i.next, %j
-  %pi.next.j = getelementptr double* %p, i64 %f
+  %pi.next.j = getelementptr double, double* %p, i64 %f
   %x = load double* %pi.j
   %y = load double* %pi.next.j
   %z = fmul double %x, %y
@@ -68,7 +68,7 @@ bb:
 
   %o = add i64 %j, 91
   %g = add i64 %i, %o
-  %pi.j.next = getelementptr double* %p, i64 %g
+  %pi.j.next = getelementptr double, double* %p, i64 %g
   %a = load double* %pi.j.next
   %b = fmul double %x, %a
   store double %b, double* %pi.j.next
@@ -115,9 +115,9 @@ bb:
   %i.next = add i64 %i, 1
 
   %e = add i64 %i, %j
-  %pi.j = getelementptr double* %p, i64 %e
+  %pi.j = getelementptr double, double* %p, i64 %e
   %f = add i64 %i.next, %j
-  %pi.next.j = getelementptr double* %p, i64 %f
+  %pi.next.j = getelementptr double, double* %p, i64 %f
   %x = load double* %pi.j
   %y = load double* %pi.next.j
   %z = fmul double %x, %y
@@ -125,7 +125,7 @@ bb:
 
   %o = add i64 %j, %n
   %g = add i64 %i, %o
-  %pi.j.next = getelementptr double* %p, i64 %g
+  %pi.j.next = getelementptr double, double* %p, i64 %g
   %a = load double* %pi.j.next
   %b = fmul double %x, %a
   store double %b, double* %pi.j.next
@@ -161,12 +161,12 @@ return:
 define void @foo() {
 entry:
   %A = alloca %struct.A
-  %B = getelementptr %struct.A* %A, i32 0, i32 0
+  %B = getelementptr %struct.A, %struct.A* %A, i32 0, i32 0
   %Q = bitcast %struct.B* %B to %struct.A*
-  %Z = getelementptr %struct.A* %Q, i32 0, i32 1
-  %C = getelementptr %struct.B* %B, i32 1
+  %Z = getelementptr %struct.A, %struct.A* %Q, i32 0, i32 1
+  %C = getelementptr %struct.B, %struct.B* %B, i32 1
   %X = bitcast %struct.B* %C to i32*
-  %Y = getelementptr %struct.A* %A, i32 0, i32 1
+  %Y = getelementptr %struct.A, %struct.A* %A, i32 0, i32 1
   ret void
 }
 
@@ -181,12 +181,12 @@ entry:
 
 define void @bar() {
   %M = alloca %struct.A
-  %N = getelementptr %struct.A* %M, i32 0, i32 0
+  %N = getelementptr %struct.A, %struct.A* %M, i32 0, i32 0
   %O = bitcast %struct.B* %N to %struct.A*
-  %P = getelementptr %struct.A* %O, i32 0, i32 1
-  %R = getelementptr %struct.B* %N, i32 1
+  %P = getelementptr %struct.A, %struct.A* %O, i32 0, i32 1
+  %R = getelementptr %struct.B, %struct.B* %N, i32 1
   %W = bitcast %struct.B* %R to i32*
-  %V = getelementptr %struct.A* %M, i32 0, i32 1
+  %V = getelementptr %struct.A, %struct.A* %M, i32 0, i32 1
   ret void
 }
 
@@ -200,7 +200,7 @@ entry:
 for.body:                                         ; preds = %entry, %for.body
   %i = phi i64 [ %inc, %for.body ], [ 0, %entry ] ; <i64> [#uses=2]
   %inc = add nsw i64 %i, 1                         ; <i64> [#uses=2]
-  %arrayidx = getelementptr inbounds i64* %p, i64 %inc
+  %arrayidx = getelementptr inbounds i64, i64* %p, i64 %inc
   store i64 0, i64* %arrayidx
   %tmp6 = load i64* %p                            ; <i64> [#uses=1]
   %cmp = icmp slt i64 %inc, %tmp6                 ; <i1> [#uses=1]

Modified: llvm/trunk/test/Analysis/ScalarEvolution/sext-inreg.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/ScalarEvolution/sext-inreg.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/ScalarEvolution/sext-inreg.ll (original)
+++ llvm/trunk/test/Analysis/ScalarEvolution/sext-inreg.ll Fri Feb 27 13:29:02 2015
@@ -16,7 +16,7 @@ bb:		; preds = %bb, %entry
 	%t2 = ashr i64 %t1, 7		; <i32> [#uses=1]
 	%s1 = shl i64 %i.01, 5		; <i32> [#uses=1]
 	%s2 = ashr i64 %s1, 5		; <i32> [#uses=1]
-	%t3 = getelementptr i64* %x, i64 %i.01		; <i64*> [#uses=1]
+	%t3 = getelementptr i64, i64* %x, i64 %i.01		; <i64*> [#uses=1]
 	store i64 0, i64* %t3, align 1
 	%indvar.next = add i64 %i.01, 199		; <i32> [#uses=2]
 	%exitcond = icmp eq i64 %indvar.next, %n		; <i1> [#uses=1]

Modified: llvm/trunk/test/Analysis/ScalarEvolution/sext-iv-0.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/ScalarEvolution/sext-iv-0.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/ScalarEvolution/sext-iv-0.ll (original)
+++ llvm/trunk/test/Analysis/ScalarEvolution/sext-iv-0.ll Fri Feb 27 13:29:02 2015
@@ -23,11 +23,11 @@ bb1:		; preds = %bb1, %bb1.thread
 	%2 = sext i9 %1 to i64		; <i64> [#uses=1]
 ; CHECK: %2
 ; CHECK-NEXT: -->  {-128,+,1}<nsw><%bb1>	Exits: 127
-	%3 = getelementptr double* %x, i64 %2		; <double*> [#uses=1]
+	%3 = getelementptr double, double* %x, i64 %2		; <double*> [#uses=1]
 	%4 = load double* %3, align 8		; <double> [#uses=1]
 	%5 = fmul double %4, 3.900000e+00		; <double> [#uses=1]
 	%6 = sext i8 %0 to i64		; <i64> [#uses=1]
-	%7 = getelementptr double* %x, i64 %6		; <double*> [#uses=1]
+	%7 = getelementptr double, double* %x, i64 %6		; <double*> [#uses=1]
 	store double %5, double* %7, align 8
 	%8 = add i64 %i.0.reg2mem.0, 1		; <i64> [#uses=2]
 	%9 = icmp sgt i64 %8, 127		; <i1> [#uses=1]

Modified: llvm/trunk/test/Analysis/ScalarEvolution/sext-iv-1.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/ScalarEvolution/sext-iv-1.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/ScalarEvolution/sext-iv-1.ll (original)
+++ llvm/trunk/test/Analysis/ScalarEvolution/sext-iv-1.ll Fri Feb 27 13:29:02 2015
@@ -23,11 +23,11 @@ bb1:		; preds = %bb1, %bb1.thread
 	%0 = trunc i64 %i.0.reg2mem.0 to i7		; <i8> [#uses=1]
 	%1 = trunc i64 %i.0.reg2mem.0 to i9		; <i8> [#uses=1]
 	%2 = sext i9 %1 to i64		; <i64> [#uses=1]
-	%3 = getelementptr double* %x, i64 %2		; <double*> [#uses=1]
+	%3 = getelementptr double, double* %x, i64 %2		; <double*> [#uses=1]
 	%4 = load double* %3, align 8		; <double> [#uses=1]
 	%5 = fmul double %4, 3.900000e+00		; <double> [#uses=1]
 	%6 = sext i7 %0 to i64		; <i64> [#uses=1]
-	%7 = getelementptr double* %x, i64 %6		; <double*> [#uses=1]
+	%7 = getelementptr double, double* %x, i64 %6		; <double*> [#uses=1]
 	store double %5, double* %7, align 8
 	%8 = add i64 %i.0.reg2mem.0, 1		; <i64> [#uses=2]
 	%9 = icmp sgt i64 %8, 127		; <i1> [#uses=1]
@@ -46,11 +46,11 @@ bb1:		; preds = %bb1, %bb1.thread
 	%0 = trunc i64 %i.0.reg2mem.0 to i8		; <i8> [#uses=1]
 	%1 = trunc i64 %i.0.reg2mem.0 to i9		; <i8> [#uses=1]
 	%2 = sext i9 %1 to i64		; <i64> [#uses=1]
-	%3 = getelementptr double* %x, i64 %2		; <double*> [#uses=1]
+	%3 = getelementptr double, double* %x, i64 %2		; <double*> [#uses=1]
 	%4 = load double* %3, align 8		; <double> [#uses=1]
 	%5 = fmul double %4, 3.900000e+00		; <double> [#uses=1]
 	%6 = sext i8 %0 to i64		; <i64> [#uses=1]
-	%7 = getelementptr double* %x, i64 %6		; <double*> [#uses=1]
+	%7 = getelementptr double, double* %x, i64 %6		; <double*> [#uses=1]
 	store double %5, double* %7, align 8
 	%8 = add i64 %i.0.reg2mem.0, 1		; <i64> [#uses=2]
 	%9 = icmp sgt i64 %8, 128		; <i1> [#uses=1]
@@ -69,11 +69,11 @@ bb1:		; preds = %bb1, %bb1.thread
 	%0 = trunc i64 %i.0.reg2mem.0 to i8		; <i8> [#uses=1]
 	%1 = trunc i64 %i.0.reg2mem.0 to i9		; <i8> [#uses=1]
 	%2 = sext i9 %1 to i64		; <i64> [#uses=1]
-	%3 = getelementptr double* %x, i64 %2		; <double*> [#uses=1]
+	%3 = getelementptr double, double* %x, i64 %2		; <double*> [#uses=1]
 	%4 = load double* %3, align 8		; <double> [#uses=1]
 	%5 = fmul double %4, 3.900000e+00		; <double> [#uses=1]
 	%6 = sext i8 %0 to i64		; <i64> [#uses=1]
-	%7 = getelementptr double* %x, i64 %6		; <double*> [#uses=1]
+	%7 = getelementptr double, double* %x, i64 %6		; <double*> [#uses=1]
 	store double %5, double* %7, align 8
 	%8 = add i64 %i.0.reg2mem.0, 1		; <i64> [#uses=2]
 	%9 = icmp sgt i64 %8, 127		; <i1> [#uses=1]
@@ -92,11 +92,11 @@ bb1:		; preds = %bb1, %bb1.thread
 	%0 = trunc i64 %i.0.reg2mem.0 to i8		; <i8> [#uses=1]
 	%1 = trunc i64 %i.0.reg2mem.0 to i9		; <i8> [#uses=1]
 	%2 = sext i9 %1 to i64		; <i64> [#uses=1]
-	%3 = getelementptr double* %x, i64 %2		; <double*> [#uses=1]
+	%3 = getelementptr double, double* %x, i64 %2		; <double*> [#uses=1]
 	%4 = load double* %3, align 8		; <double> [#uses=1]
 	%5 = fmul double %4, 3.900000e+00		; <double> [#uses=1]
 	%6 = sext i8 %0 to i64		; <i64> [#uses=1]
-	%7 = getelementptr double* %x, i64 %6		; <double*> [#uses=1]
+	%7 = getelementptr double, double* %x, i64 %6		; <double*> [#uses=1]
 	store double %5, double* %7, align 8
 	%8 = add i64 %i.0.reg2mem.0, -1		; <i64> [#uses=2]
 	%9 = icmp sgt i64 %8, 127		; <i1> [#uses=1]

Modified: llvm/trunk/test/Analysis/ScalarEvolution/sext-iv-2.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/ScalarEvolution/sext-iv-2.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/ScalarEvolution/sext-iv-2.ll (original)
+++ llvm/trunk/test/Analysis/ScalarEvolution/sext-iv-2.ll Fri Feb 27 13:29:02 2015
@@ -32,7 +32,7 @@ bb1:		; preds = %bb2, %bb.nph
 	%tmp4 = mul i32 %tmp3, %i.02		; <i32> [#uses=1]
 	%tmp5 = sext i32 %i.02 to i64		; <i64> [#uses=1]
 	%tmp6 = sext i32 %j.01 to i64		; <i64> [#uses=1]
-	%tmp7 = getelementptr [32 x [256 x i32]]* @table, i64 0, i64 %tmp5, i64 %tmp6		; <i32*> [#uses=1]
+	%tmp7 = getelementptr [32 x [256 x i32]], [32 x [256 x i32]]* @table, i64 0, i64 %tmp5, i64 %tmp6		; <i32*> [#uses=1]
 	store i32 %tmp4, i32* %tmp7, align 4
 	%tmp8 = add i32 %j.01, 1		; <i32> [#uses=2]
 	br label %bb2

Modified: llvm/trunk/test/Analysis/ScalarEvolution/sle.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/ScalarEvolution/sle.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/ScalarEvolution/sle.ll (original)
+++ llvm/trunk/test/Analysis/ScalarEvolution/sle.ll Fri Feb 27 13:29:02 2015
@@ -14,7 +14,7 @@ entry:
 
 for.body:                                         ; preds = %for.body, %entry
   %i = phi i64 [ %i.next, %for.body ], [ 0, %entry ] ; <i64> [#uses=2]
-  %arrayidx = getelementptr double* %p, i64 %i    ; <double*> [#uses=2]
+  %arrayidx = getelementptr double, double* %p, i64 %i    ; <double*> [#uses=2]
   %t4 = load double* %arrayidx                    ; <double> [#uses=1]
   %mul = fmul double %t4, 2.200000e+00            ; <double> [#uses=1]
   store double %mul, double* %arrayidx

Modified: llvm/trunk/test/Analysis/ScalarEvolution/trip-count.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/ScalarEvolution/trip-count.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/ScalarEvolution/trip-count.ll (original)
+++ llvm/trunk/test/Analysis/ScalarEvolution/trip-count.ll Fri Feb 27 13:29:02 2015
@@ -10,7 +10,7 @@ entry:
         br label %bb3
 
 bb:             ; preds = %bb3
-        %tmp = getelementptr [1000 x i32]* @A, i32 0, i32 %i.0          ; <i32*> [#uses=1]
+        %tmp = getelementptr [1000 x i32], [1000 x i32]* @A, i32 0, i32 %i.0          ; <i32*> [#uses=1]
         store i32 123, i32* %tmp
         %tmp2 = add i32 %i.0, 1         ; <i32> [#uses=1]
         br label %bb3

Modified: llvm/trunk/test/Analysis/ScalarEvolution/trip-count11.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/ScalarEvolution/trip-count11.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/ScalarEvolution/trip-count11.ll (original)
+++ llvm/trunk/test/Analysis/ScalarEvolution/trip-count11.ll Fri Feb 27 13:29:02 2015
@@ -20,7 +20,7 @@ for.cond:
 
 for.inc:                                          ; preds = %for.cond
   %idxprom = sext i32 %i.0 to i64
-  %arrayidx = getelementptr inbounds [8 x i32]* @foo.a, i64 0, i64 %idxprom
+  %arrayidx = getelementptr inbounds [8 x i32], [8 x i32]* @foo.a, i64 0, i64 %idxprom
   %0 = load i32* %arrayidx, align 4
   %add = add nsw i32 %sum.0, %0
   %inc = add nsw i32 %i.0, 1
@@ -43,7 +43,7 @@ for.cond:
 
 for.inc:                                          ; preds = %for.cond
   %idxprom = sext i32 %i.0 to i64
-  %arrayidx = getelementptr inbounds [8 x i32] addrspace(1)* @foo.a_as1, i64 0, i64 %idxprom
+  %arrayidx = getelementptr inbounds [8 x i32], [8 x i32] addrspace(1)* @foo.a_as1, i64 0, i64 %idxprom
   %0 = load i32 addrspace(1)* %arrayidx, align 4
   %add = add nsw i32 %sum.0, %0
   %inc = add nsw i32 %i.0, 1

Modified: llvm/trunk/test/Analysis/ScalarEvolution/trip-count12.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/ScalarEvolution/trip-count12.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/ScalarEvolution/trip-count12.ll (original)
+++ llvm/trunk/test/Analysis/ScalarEvolution/trip-count12.ll Fri Feb 27 13:29:02 2015
@@ -16,7 +16,7 @@ for.body:
   %p.addr.05 = phi i16* [ %incdec.ptr, %for.body ], [ %p, %for.body.preheader ]
   %len.addr.04 = phi i32 [ %sub, %for.body ], [ %len, %for.body.preheader ]
   %res.03 = phi i32 [ %add, %for.body ], [ 0, %for.body.preheader ]
-  %incdec.ptr = getelementptr inbounds i16* %p.addr.05, i32 1
+  %incdec.ptr = getelementptr inbounds i16, i16* %p.addr.05, i32 1
   %0 = load i16* %p.addr.05, align 2
   %conv = zext i16 %0 to i32
   %add = add i32 %conv, %res.03

Modified: llvm/trunk/test/Analysis/ScalarEvolution/trip-count2.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/ScalarEvolution/trip-count2.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/ScalarEvolution/trip-count2.ll (original)
+++ llvm/trunk/test/Analysis/ScalarEvolution/trip-count2.ll Fri Feb 27 13:29:02 2015
@@ -10,7 +10,7 @@ entry:
         br label %bb3
 
 bb:             ; preds = %bb3
-        %tmp = getelementptr [1000 x i32]* @A, i32 0, i32 %i.0          ; <i32*> [#uses=1]
+        %tmp = getelementptr [1000 x i32], [1000 x i32]* @A, i32 0, i32 %i.0          ; <i32*> [#uses=1]
         store i32 123, i32* %tmp
         %tmp4 = mul i32 %i.0, 4         ; <i32> [#uses=1]
         %tmp5 = or i32 %tmp4, 1

Modified: llvm/trunk/test/Analysis/ScalarEvolution/trip-count3.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/ScalarEvolution/trip-count3.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/ScalarEvolution/trip-count3.ll (original)
+++ llvm/trunk/test/Analysis/ScalarEvolution/trip-count3.ll Fri Feb 27 13:29:02 2015
@@ -48,10 +48,10 @@ sha_update.exit.exitStub:
   ret void
 
 bb2.i:                                            ; preds = %bb3.i
-  %1 = getelementptr %struct.SHA_INFO* %sha_info, i64 0, i32 3
+  %1 = getelementptr %struct.SHA_INFO, %struct.SHA_INFO* %sha_info, i64 0, i32 3
   %2 = bitcast [16 x i32]* %1 to i8*
   call void @llvm.memcpy.p0i8.p0i8.i64(i8* %2, i8* %buffer_addr.0.i, i64 64, i32 1, i1 false)
-  %3 = getelementptr %struct.SHA_INFO* %sha_info, i64 0, i32 3, i64 0
+  %3 = getelementptr %struct.SHA_INFO, %struct.SHA_INFO* %sha_info, i64 0, i32 3, i64 0
   %4 = bitcast i32* %3 to i8*
   br label %codeRepl
 
@@ -61,7 +61,7 @@ codeRepl:
 
 byte_reverse.exit.i:                              ; preds = %codeRepl
   call fastcc void @sha_transform(%struct.SHA_INFO* %sha_info) nounwind
-  %5 = getelementptr i8* %buffer_addr.0.i, i64 64
+  %5 = getelementptr i8, i8* %buffer_addr.0.i, i64 64
   %6 = add i32 %count_addr.0.i, -64
   br label %bb3.i
 

Modified: llvm/trunk/test/Analysis/ScalarEvolution/trip-count4.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/ScalarEvolution/trip-count4.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/ScalarEvolution/trip-count4.ll (original)
+++ llvm/trunk/test/Analysis/ScalarEvolution/trip-count4.ll Fri Feb 27 13:29:02 2015
@@ -12,7 +12,7 @@ loop:		; preds = %loop, %entry
 	%indvar = phi i64 [ %n, %entry ], [ %indvar.next, %loop ]		; <i64> [#uses=4]
 	%s0 = shl i64 %indvar, 8		; <i64> [#uses=1]
 	%indvar.i8 = ashr i64 %s0, 8		; <i64> [#uses=1]
-	%t0 = getelementptr double* %d, i64 %indvar.i8		; <double*> [#uses=2]
+	%t0 = getelementptr double, double* %d, i64 %indvar.i8		; <double*> [#uses=2]
 	%t1 = load double* %t0		; <double> [#uses=1]
 	%t2 = fmul double %t1, 1.000000e-01		; <double> [#uses=1]
 	store double %t2, double* %t0

Modified: llvm/trunk/test/Analysis/ScalarEvolution/trip-count5.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/ScalarEvolution/trip-count5.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/ScalarEvolution/trip-count5.ll (original)
+++ llvm/trunk/test/Analysis/ScalarEvolution/trip-count5.ll Fri Feb 27 13:29:02 2015
@@ -21,12 +21,12 @@ bb:		; preds = %bb1, %bb.nph
 	%hiPart.035 = phi i32 [ %tmp12, %bb1 ], [ 0, %bb.nph ]		; <i32> [#uses=2]
 	%peakCount.034 = phi float [ %tmp19, %bb1 ], [ %tmp3, %bb.nph ]		; <float> [#uses=1]
 	%tmp6 = sext i32 %hiPart.035 to i64		; <i64> [#uses=1]
-	%tmp7 = getelementptr float* %pTmp1, i64 %tmp6		; <float*> [#uses=1]
+	%tmp7 = getelementptr float, float* %pTmp1, i64 %tmp6		; <float*> [#uses=1]
 	%tmp8 = load float* %tmp7, align 4		; <float> [#uses=1]
 	%tmp10 = fadd float %tmp8, %distERBhi.036		; <float> [#uses=3]
 	%tmp12 = add i32 %hiPart.035, 1		; <i32> [#uses=3]
 	%tmp15 = sext i32 %tmp12 to i64		; <i64> [#uses=1]
-	%tmp16 = getelementptr float* %peakWeight, i64 %tmp15		; <float*> [#uses=1]
+	%tmp16 = getelementptr float, float* %peakWeight, i64 %tmp15		; <float*> [#uses=1]
 	%tmp17 = load float* %tmp16, align 4		; <float> [#uses=1]
 	%tmp19 = fadd float %tmp17, %peakCount.034		; <float> [#uses=2]
 	br label %bb1

Modified: llvm/trunk/test/Analysis/ScalarEvolution/trip-count6.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/ScalarEvolution/trip-count6.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/ScalarEvolution/trip-count6.ll (original)
+++ llvm/trunk/test/Analysis/ScalarEvolution/trip-count6.ll Fri Feb 27 13:29:02 2015
@@ -12,7 +12,7 @@ entry:
 bb:             ; preds = %bb4, %entry
   %mode.0 = phi i8 [ 0, %entry ], [ %indvar.next, %bb4 ]                ; <i8> [#uses=4]
   zext i8 %mode.0 to i32                ; <i32>:1 [#uses=1]
-  getelementptr [4 x i32]* @mode_table, i32 0, i32 %1           ; <i32*>:2 [#uses=1]
+  getelementptr [4 x i32], [4 x i32]* @mode_table, i32 0, i32 %1           ; <i32*>:2 [#uses=1]
   load i32* %2, align 4         ; <i32>:3 [#uses=1]
   icmp eq i32 %3, %0            ; <i1>:4 [#uses=1]
   br i1 %4, label %bb1, label %bb2

Modified: llvm/trunk/test/Analysis/ScalarEvolution/trip-count7.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/ScalarEvolution/trip-count7.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/ScalarEvolution/trip-count7.ll (original)
+++ llvm/trunk/test/Analysis/ScalarEvolution/trip-count7.ll Fri Feb 27 13:29:02 2015
@@ -72,7 +72,7 @@ bb.i:		; preds = %bb7.i
 	%tmp = add i32 %j.0.i, 1		; <i32> [#uses=5]
 	store i32 0, i32* %q, align 4
 	%tmp1 = sext i32 %tmp to i64		; <i64> [#uses=1]
-	%tmp2 = getelementptr [9 x i32]* %a, i64 0, i64 %tmp1		; <i32*> [#uses=1]
+	%tmp2 = getelementptr [9 x i32], [9 x i32]* %a, i64 0, i64 %tmp1		; <i32*> [#uses=1]
 	%tmp3 = load i32* %tmp2, align 4		; <i32> [#uses=1]
 	%tmp4 = icmp eq i32 %tmp3, 0		; <i1> [#uses=1]
 	br i1 %tmp4, label %bb.i.bb7.i.backedge_crit_edge, label %bb1.i
@@ -80,7 +80,7 @@ bb.i:		; preds = %bb7.i
 bb1.i:		; preds = %bb.i
 	%tmp5 = add i32 %j.0.i, 2		; <i32> [#uses=1]
 	%tmp6 = sext i32 %tmp5 to i64		; <i64> [#uses=1]
-	%tmp7 = getelementptr [17 x i32]* %b, i64 0, i64 %tmp6		; <i32*> [#uses=1]
+	%tmp7 = getelementptr [17 x i32], [17 x i32]* %b, i64 0, i64 %tmp6		; <i32*> [#uses=1]
 	%tmp8 = load i32* %tmp7, align 4		; <i32> [#uses=1]
 	%tmp9 = icmp eq i32 %tmp8, 0		; <i1> [#uses=1]
 	br i1 %tmp9, label %bb1.i.bb7.i.backedge_crit_edge, label %bb2.i
@@ -88,24 +88,24 @@ bb1.i:		; preds = %bb.i
 bb2.i:		; preds = %bb1.i
 	%tmp10 = sub i32 7, %j.0.i		; <i32> [#uses=1]
 	%tmp11 = sext i32 %tmp10 to i64		; <i64> [#uses=1]
-	%tmp12 = getelementptr [15 x i32]* %c, i64 0, i64 %tmp11		; <i32*> [#uses=1]
+	%tmp12 = getelementptr [15 x i32], [15 x i32]* %c, i64 0, i64 %tmp11		; <i32*> [#uses=1]
 	%tmp13 = load i32* %tmp12, align 4		; <i32> [#uses=1]
 	%tmp14 = icmp eq i32 %tmp13, 0		; <i1> [#uses=1]
 	br i1 %tmp14, label %bb2.i.bb7.i.backedge_crit_edge, label %bb3.i
 
 bb3.i:		; preds = %bb2.i
-	%tmp15 = getelementptr [9 x i32]* %x1, i64 0, i64 1		; <i32*> [#uses=1]
+	%tmp15 = getelementptr [9 x i32], [9 x i32]* %x1, i64 0, i64 1		; <i32*> [#uses=1]
 	store i32 %tmp, i32* %tmp15, align 4
 	%tmp16 = sext i32 %tmp to i64		; <i64> [#uses=1]
-	%tmp17 = getelementptr [9 x i32]* %a, i64 0, i64 %tmp16		; <i32*> [#uses=1]
+	%tmp17 = getelementptr [9 x i32], [9 x i32]* %a, i64 0, i64 %tmp16		; <i32*> [#uses=1]
 	store i32 0, i32* %tmp17, align 4
 	%tmp18 = add i32 %j.0.i, 2		; <i32> [#uses=1]
 	%tmp19 = sext i32 %tmp18 to i64		; <i64> [#uses=1]
-	%tmp20 = getelementptr [17 x i32]* %b, i64 0, i64 %tmp19		; <i32*> [#uses=1]
+	%tmp20 = getelementptr [17 x i32], [17 x i32]* %b, i64 0, i64 %tmp19		; <i32*> [#uses=1]
 	store i32 0, i32* %tmp20, align 4
 	%tmp21 = sub i32 7, %j.0.i		; <i32> [#uses=1]
 	%tmp22 = sext i32 %tmp21 to i64		; <i64> [#uses=1]
-	%tmp23 = getelementptr [15 x i32]* %c, i64 0, i64 %tmp22		; <i32*> [#uses=1]
+	%tmp23 = getelementptr [15 x i32], [15 x i32]* %c, i64 0, i64 %tmp22		; <i32*> [#uses=1]
 	store i32 0, i32* %tmp23, align 4
 	call void @Try(i32 2, i32* %q, i32* %b9, i32* %a10, i32* %c11, i32* %x1.sub) nounwind
 	%tmp24 = load i32* %q, align 4		; <i32> [#uses=1]
@@ -114,15 +114,15 @@ bb3.i:		; preds = %bb2.i
 
 bb5.i:		; preds = %bb3.i
 	%tmp26 = sext i32 %tmp to i64		; <i64> [#uses=1]
-	%tmp27 = getelementptr [9 x i32]* %a, i64 0, i64 %tmp26		; <i32*> [#uses=1]
+	%tmp27 = getelementptr [9 x i32], [9 x i32]* %a, i64 0, i64 %tmp26		; <i32*> [#uses=1]
 	store i32 1, i32* %tmp27, align 4
 	%tmp28 = add i32 %j.0.i, 2		; <i32> [#uses=1]
 	%tmp29 = sext i32 %tmp28 to i64		; <i64> [#uses=1]
-	%tmp30 = getelementptr [17 x i32]* %b, i64 0, i64 %tmp29		; <i32*> [#uses=1]
+	%tmp30 = getelementptr [17 x i32], [17 x i32]* %b, i64 0, i64 %tmp29		; <i32*> [#uses=1]
 	store i32 1, i32* %tmp30, align 4
 	%tmp31 = sub i32 7, %j.0.i		; <i32> [#uses=1]
 	%tmp32 = sext i32 %tmp31 to i64		; <i64> [#uses=1]
-	%tmp33 = getelementptr [15 x i32]* %c, i64 0, i64 %tmp32		; <i32*> [#uses=1]
+	%tmp33 = getelementptr [15 x i32], [15 x i32]* %c, i64 0, i64 %tmp32		; <i32*> [#uses=1]
 	store i32 1, i32* %tmp33, align 4
 	br label %bb7.i.backedge
 

Modified: llvm/trunk/test/Analysis/ScopedNoAliasAA/basic-domains.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/ScopedNoAliasAA/basic-domains.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/ScopedNoAliasAA/basic-domains.ll (original)
+++ llvm/trunk/test/Analysis/ScopedNoAliasAA/basic-domains.ll Fri Feb 27 13:29:02 2015
@@ -6,15 +6,15 @@ define void @foo1(float* nocapture %a, f
 entry:
 ; CHECK-LABEL: Function: foo1
   %0 = load float* %c, align 4, !alias.scope !9
-  %arrayidx.i = getelementptr inbounds float* %a, i64 5
+  %arrayidx.i = getelementptr inbounds float, float* %a, i64 5
   store float %0, float* %arrayidx.i, align 4, !noalias !6
 
   %1 = load float* %c, align 4, !alias.scope !5
-  %arrayidx.i2 = getelementptr inbounds float* %a, i64 15
+  %arrayidx.i2 = getelementptr inbounds float, float* %a, i64 15
   store float %1, float* %arrayidx.i2, align 4, !noalias !6
 
   %2 = load float* %c, align 4, !alias.scope !6
-  %arrayidx.i3 = getelementptr inbounds float* %a, i64 16
+  %arrayidx.i3 = getelementptr inbounds float, float* %a, i64 16
   store float %2, float* %arrayidx.i3, align 4, !noalias !5
 
   ret void

Modified: llvm/trunk/test/Analysis/ScopedNoAliasAA/basic.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/ScopedNoAliasAA/basic.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/ScopedNoAliasAA/basic.ll (original)
+++ llvm/trunk/test/Analysis/ScopedNoAliasAA/basic.ll Fri Feb 27 13:29:02 2015
@@ -6,10 +6,10 @@ define void @foo1(float* nocapture %a, f
 entry:
 ; CHECK-LABEL: Function: foo1
   %0 = load float* %c, align 4, !alias.scope !1
-  %arrayidx.i = getelementptr inbounds float* %a, i64 5
+  %arrayidx.i = getelementptr inbounds float, float* %a, i64 5
   store float %0, float* %arrayidx.i, align 4, !noalias !1
   %1 = load float* %c, align 4
-  %arrayidx = getelementptr inbounds float* %a, i64 7
+  %arrayidx = getelementptr inbounds float, float* %a, i64 7
   store float %1, float* %arrayidx, align 4
   ret void
 

Modified: llvm/trunk/test/Analysis/ScopedNoAliasAA/basic2.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/ScopedNoAliasAA/basic2.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/ScopedNoAliasAA/basic2.ll (original)
+++ llvm/trunk/test/Analysis/ScopedNoAliasAA/basic2.ll Fri Feb 27 13:29:02 2015
@@ -6,12 +6,12 @@ define void @foo2(float* nocapture %a, f
 entry:
 ; CHECK-LABEL: Function: foo2
   %0 = load float* %c, align 4, !alias.scope !0
-  %arrayidx.i = getelementptr inbounds float* %a, i64 5
+  %arrayidx.i = getelementptr inbounds float, float* %a, i64 5
   store float %0, float* %arrayidx.i, align 4, !alias.scope !5, !noalias !4
-  %arrayidx1.i = getelementptr inbounds float* %b, i64 8
+  %arrayidx1.i = getelementptr inbounds float, float* %b, i64 8
   store float %0, float* %arrayidx1.i, align 4, !alias.scope !0, !noalias !5
   %1 = load float* %c, align 4
-  %arrayidx = getelementptr inbounds float* %a, i64 7
+  %arrayidx = getelementptr inbounds float, float* %a, i64 7
   store float %1, float* %arrayidx, align 4
   ret void
 

Modified: llvm/trunk/test/Analysis/TypeBasedAliasAnalysis/dynamic-indices.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/TypeBasedAliasAnalysis/dynamic-indices.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/TypeBasedAliasAnalysis/dynamic-indices.ll (original)
+++ llvm/trunk/test/Analysis/TypeBasedAliasAnalysis/dynamic-indices.ll Fri Feb 27 13:29:02 2015
@@ -12,7 +12,7 @@ target datalayout = "e-p:64:64:64-i1:8:8
 ; CHECK: define void @vrlh(
 
 ; CHECK: for.end:
-; CHECK:   %arrayidx31 = getelementptr inbounds %union.vector_t* %t, i64 0, i32 0, i64 1
+; CHECK:   %arrayidx31 = getelementptr inbounds %union.vector_t, %union.vector_t* %t, i64 0, i32 0, i64 1
 ; CHECK:   %tmp32 = load i64* %arrayidx31, align 8, !tbaa [[TAG:!.*]]
 
 define void @vrlh(%union.vector_t* %va, %union.vector_t* %vb, %union.vector_t* %vd) nounwind {
@@ -25,21 +25,21 @@ for.body:
   %sub = sub nsw i32 7, %i.01
   %idxprom = sext i32 %sub to i64
   %half = bitcast %union.vector_t* %vb to [8 x i16]*
-  %arrayidx = getelementptr inbounds [8 x i16]* %half, i64 0, i64 %idxprom
+  %arrayidx = getelementptr inbounds [8 x i16], [8 x i16]* %half, i64 0, i64 %idxprom
   %tmp4 = load i16* %arrayidx, align 2, !tbaa !0
   %conv = zext i16 %tmp4 to i32
   %and = and i32 %conv, 15
   %sub6 = sub nsw i32 7, %i.01
   %idxprom7 = sext i32 %sub6 to i64
   %half9 = bitcast %union.vector_t* %va to [8 x i16]*
-  %arrayidx10 = getelementptr inbounds [8 x i16]* %half9, i64 0, i64 %idxprom7
+  %arrayidx10 = getelementptr inbounds [8 x i16], [8 x i16]* %half9, i64 0, i64 %idxprom7
   %tmp11 = load i16* %arrayidx10, align 2, !tbaa !0
   %conv12 = zext i16 %tmp11 to i32
   %shl = shl i32 %conv12, %and
   %sub15 = sub nsw i32 7, %i.01
   %idxprom16 = sext i32 %sub15 to i64
   %half18 = bitcast %union.vector_t* %va to [8 x i16]*
-  %arrayidx19 = getelementptr inbounds [8 x i16]* %half18, i64 0, i64 %idxprom16
+  %arrayidx19 = getelementptr inbounds [8 x i16], [8 x i16]* %half18, i64 0, i64 %idxprom16
   %tmp20 = load i16* %arrayidx19, align 2, !tbaa !0
   %conv21 = zext i16 %tmp20 to i32
   %sub23 = sub nsw i32 16, %and
@@ -49,20 +49,20 @@ for.body:
   %sub26 = sub nsw i32 7, %i.01
   %idxprom27 = sext i32 %sub26 to i64
   %half28 = bitcast %union.vector_t* %t to [8 x i16]*
-  %arrayidx29 = getelementptr inbounds [8 x i16]* %half28, i64 0, i64 %idxprom27
+  %arrayidx29 = getelementptr inbounds [8 x i16], [8 x i16]* %half28, i64 0, i64 %idxprom27
   store i16 %conv24, i16* %arrayidx29, align 2, !tbaa !0
   %inc = add nsw i32 %i.01, 1
   %cmp = icmp slt i32 %inc, 8
   br i1 %cmp, label %for.body, label %for.end
 
 for.end:                                          ; preds = %for.body
-  %arrayidx31 = getelementptr inbounds %union.vector_t* %t, i64 0, i32 0, i64 1
+  %arrayidx31 = getelementptr inbounds %union.vector_t, %union.vector_t* %t, i64 0, i32 0, i64 1
   %tmp32 = load i64* %arrayidx31, align 8, !tbaa !3
-  %arrayidx35 = getelementptr inbounds %union.vector_t* %vd, i64 0, i32 0, i64 1
+  %arrayidx35 = getelementptr inbounds %union.vector_t, %union.vector_t* %vd, i64 0, i32 0, i64 1
   store i64 %tmp32, i64* %arrayidx35, align 8, !tbaa !3
-  %arrayidx37 = getelementptr inbounds %union.vector_t* %t, i64 0, i32 0, i64 0
+  %arrayidx37 = getelementptr inbounds %union.vector_t, %union.vector_t* %t, i64 0, i32 0, i64 0
   %tmp38 = load i64* %arrayidx37, align 8, !tbaa !3
-  %arrayidx41 = getelementptr inbounds %union.vector_t* %vd, i64 0, i32 0, i64 0
+  %arrayidx41 = getelementptr inbounds %union.vector_t, %union.vector_t* %vd, i64 0, i32 0, i64 0
   store i64 %tmp38, i64* %arrayidx41, align 8, !tbaa !3
   ret void
 }
@@ -75,13 +75,13 @@ for.end:
 
 define i32 @test0(%struct.X* %a) nounwind {
 entry:
-  %i = getelementptr inbounds %struct.X* %a, i64 0, i32 0
+  %i = getelementptr inbounds %struct.X, %struct.X* %a, i64 0, i32 0
   store i32 0, i32* %i, align 4, !tbaa !4
   br label %for.body
 
 for.body:                                         ; preds = %entry, %for.body
   %i2.01 = phi i64 [ 0, %entry ], [ %inc, %for.body ]
-  %f = getelementptr inbounds %struct.X* %a, i64 %i2.01, i32 1
+  %f = getelementptr inbounds %struct.X, %struct.X* %a, i64 %i2.01, i32 1
   %tmp6 = load float* %f, align 4, !tbaa !5
   %mul = fmul float %tmp6, 0x40019999A0000000
   store float %mul, float* %f, align 4, !tbaa !5
@@ -90,7 +90,7 @@ for.body:
   br i1 %cmp, label %for.body, label %for.end
 
 for.end:                                          ; preds = %for.body
-  %i9 = getelementptr inbounds %struct.X* %a, i64 0, i32 0
+  %i9 = getelementptr inbounds %struct.X, %struct.X* %a, i64 0, i32 0
   %tmp10 = load i32* %i9, align 4, !tbaa !4
   ret i32 %tmp10
 }
@@ -103,13 +103,13 @@ for.end:
 
 define float @test1(%struct.X* %a) nounwind {
 entry:
-  %f = getelementptr inbounds %struct.X* %a, i64 0, i32 1
+  %f = getelementptr inbounds %struct.X, %struct.X* %a, i64 0, i32 1
   store float 0x3FD3333340000000, float* %f, align 4, !tbaa !5
   br label %for.body
 
 for.body:                                         ; preds = %entry, %for.body
   %i.01 = phi i64 [ 0, %entry ], [ %inc, %for.body ]
-  %i5 = getelementptr inbounds %struct.X* %a, i64 %i.01, i32 0
+  %i5 = getelementptr inbounds %struct.X, %struct.X* %a, i64 %i.01, i32 0
   %tmp6 = load i32* %i5, align 4, !tbaa !4
   %mul = mul nsw i32 %tmp6, 3
   store i32 %mul, i32* %i5, align 4, !tbaa !4
@@ -118,7 +118,7 @@ for.body:
   br i1 %cmp, label %for.body, label %for.end
 
 for.end:                                          ; preds = %for.body
-  %f9 = getelementptr inbounds %struct.X* %a, i64 0, i32 1
+  %f9 = getelementptr inbounds %struct.X, %struct.X* %a, i64 0, i32 1
   %tmp10 = load float* %f9, align 4, !tbaa !5
   ret float %tmp10
 }

Modified: llvm/trunk/test/Analysis/TypeBasedAliasAnalysis/licm.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/TypeBasedAliasAnalysis/licm.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/TypeBasedAliasAnalysis/licm.ll (original)
+++ llvm/trunk/test/Analysis/TypeBasedAliasAnalysis/licm.ll Fri Feb 27 13:29:02 2015
@@ -17,7 +17,7 @@ entry:
 for.body:                                         ; preds = %entry, %for.body
   %i.07 = phi i64 [ %inc, %for.body ], [ 0, %entry ]
   %tmp3 = load double** @P, !tbaa !1
-  %scevgep = getelementptr double* %tmp3, i64 %i.07
+  %scevgep = getelementptr double, double* %tmp3, i64 %i.07
   %tmp4 = load double* %scevgep, !tbaa !2
   %mul = fmul double %tmp4, 2.300000e+00
   store double %mul, double* %scevgep, !tbaa !2

Modified: llvm/trunk/test/Analysis/TypeBasedAliasAnalysis/placement-tbaa.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/TypeBasedAliasAnalysis/placement-tbaa.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/TypeBasedAliasAnalysis/placement-tbaa.ll (original)
+++ llvm/trunk/test/Analysis/TypeBasedAliasAnalysis/placement-tbaa.ll Fri Feb 27 13:29:02 2015
@@ -34,7 +34,7 @@ entry:
   %0 = bitcast i8* %call to %struct.Foo*
   store %struct.Foo* %0, %struct.Foo** %f, align 8, !tbaa !4
   %1 = load %struct.Foo** %f, align 8, !tbaa !4
-  %i = getelementptr inbounds %struct.Foo* %1, i32 0, i32 0
+  %i = getelementptr inbounds %struct.Foo, %struct.Foo* %1, i32 0, i32 0
   store i64 1, i64* %i, align 8, !tbaa !6
   store i32 0, i32* %i1, align 4, !tbaa !0
   br label %for.cond
@@ -59,7 +59,7 @@ new.cont:
   %7 = phi %struct.Bar* [ %6, %new.notnull ], [ null, %for.body ]
   store %struct.Bar* %7, %struct.Bar** %b, align 8, !tbaa !4
   %8 = load %struct.Bar** %b, align 8, !tbaa !4
-  %p = getelementptr inbounds %struct.Bar* %8, i32 0, i32 0
+  %p = getelementptr inbounds %struct.Bar, %struct.Bar* %8, i32 0, i32 0
   store i8* null, i8** %p, align 8, !tbaa !9
   %9 = load %struct.Foo** %f, align 8, !tbaa !4
   %10 = bitcast %struct.Foo* %9 to i8*
@@ -76,7 +76,7 @@ new.cont4:
   %13 = load i32* %i1, align 4, !tbaa !0
   %conv = sext i32 %13 to i64
   %14 = load %struct.Foo** %f, align 8, !tbaa !4
-  %i5 = getelementptr inbounds %struct.Foo* %14, i32 0, i32 0
+  %i5 = getelementptr inbounds %struct.Foo, %struct.Foo* %14, i32 0, i32 0
   store i64 %conv, i64* %i5, align 8, !tbaa !6
   br label %for.inc
 
@@ -88,7 +88,7 @@ for.inc:
 
 for.end:
   %16 = load %struct.Foo** %f, align 8, !tbaa !4
-  %i6 = getelementptr inbounds %struct.Foo* %16, i32 0, i32 0
+  %i6 = getelementptr inbounds %struct.Foo, %struct.Foo* %16, i32 0, i32 0
   %17 = load i64* %i6, align 8, !tbaa !6
   ret i64 %17
 }

Modified: llvm/trunk/test/Analysis/TypeBasedAliasAnalysis/precedence.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/TypeBasedAliasAnalysis/precedence.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/TypeBasedAliasAnalysis/precedence.ll (original)
+++ llvm/trunk/test/Analysis/TypeBasedAliasAnalysis/precedence.ll Fri Feb 27 13:29:02 2015
@@ -33,7 +33,7 @@ define i64 @offset(i64* %x) nounwind {
 entry:
   store i64 0, i64* %x, !tbaa !4
   %0 = bitcast i64* %x to i8*
-  %1 = getelementptr i8* %0, i64 1
+  %1 = getelementptr i8, i8* %0, i64 1
   store i8 1, i8* %1, !tbaa !5
   %tmp3 = load i64* %x, !tbaa !4
   ret i64 %tmp3

Modified: llvm/trunk/test/Analysis/TypeBasedAliasAnalysis/tbaa-path.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/TypeBasedAliasAnalysis/tbaa-path.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/TypeBasedAliasAnalysis/tbaa-path.ll (original)
+++ llvm/trunk/test/Analysis/TypeBasedAliasAnalysis/tbaa-path.ll Fri Feb 27 13:29:02 2015
@@ -28,7 +28,7 @@ entry:
   %0 = load i32** %s.addr, align 8, !tbaa !0
   store i32 1, i32* %0, align 4, !tbaa !6
   %1 = load %struct.StructA** %A.addr, align 8, !tbaa !0
-  %f32 = getelementptr inbounds %struct.StructA* %1, i32 0, i32 1
+  %f32 = getelementptr inbounds %struct.StructA, %struct.StructA* %1, i32 0, i32 1
   store i32 4, i32* %f32, align 4, !tbaa !8
   %2 = load i32** %s.addr, align 8, !tbaa !0
   %3 = load i32* %2, align 4, !tbaa !6
@@ -54,7 +54,7 @@ entry:
   %0 = load i32** %s.addr, align 8, !tbaa !0
   store i32 1, i32* %0, align 4, !tbaa !6
   %1 = load %struct.StructA** %A.addr, align 8, !tbaa !0
-  %f16 = getelementptr inbounds %struct.StructA* %1, i32 0, i32 0
+  %f16 = getelementptr inbounds %struct.StructA, %struct.StructA* %1, i32 0, i32 0
   store i16 4, i16* %f16, align 2, !tbaa !11
   %2 = load i32** %s.addr, align 8, !tbaa !0
   %3 = load i32* %2, align 4, !tbaa !6
@@ -78,14 +78,14 @@ entry:
   store %struct.StructB* %B, %struct.StructB** %B.addr, align 8, !tbaa !0
   store i64 %count, i64* %count.addr, align 8, !tbaa !4
   %0 = load %struct.StructA** %A.addr, align 8, !tbaa !0
-  %f32 = getelementptr inbounds %struct.StructA* %0, i32 0, i32 1
+  %f32 = getelementptr inbounds %struct.StructA, %struct.StructA* %0, i32 0, i32 1
   store i32 1, i32* %f32, align 4, !tbaa !8
   %1 = load %struct.StructB** %B.addr, align 8, !tbaa !0
-  %a = getelementptr inbounds %struct.StructB* %1, i32 0, i32 1
-  %f321 = getelementptr inbounds %struct.StructA* %a, i32 0, i32 1
+  %a = getelementptr inbounds %struct.StructB, %struct.StructB* %1, i32 0, i32 1
+  %f321 = getelementptr inbounds %struct.StructA, %struct.StructA* %a, i32 0, i32 1
   store i32 4, i32* %f321, align 4, !tbaa !12
   %2 = load %struct.StructA** %A.addr, align 8, !tbaa !0
-  %f322 = getelementptr inbounds %struct.StructA* %2, i32 0, i32 1
+  %f322 = getelementptr inbounds %struct.StructA, %struct.StructA* %2, i32 0, i32 1
   %3 = load i32* %f322, align 4, !tbaa !8
   ret i32 %3
 }
@@ -107,14 +107,14 @@ entry:
   store %struct.StructB* %B, %struct.StructB** %B.addr, align 8, !tbaa !0
   store i64 %count, i64* %count.addr, align 8, !tbaa !4
   %0 = load %struct.StructA** %A.addr, align 8, !tbaa !0
-  %f32 = getelementptr inbounds %struct.StructA* %0, i32 0, i32 1
+  %f32 = getelementptr inbounds %struct.StructA, %struct.StructA* %0, i32 0, i32 1
   store i32 1, i32* %f32, align 4, !tbaa !8
   %1 = load %struct.StructB** %B.addr, align 8, !tbaa !0
-  %a = getelementptr inbounds %struct.StructB* %1, i32 0, i32 1
-  %f16 = getelementptr inbounds %struct.StructA* %a, i32 0, i32 0
+  %a = getelementptr inbounds %struct.StructB, %struct.StructB* %1, i32 0, i32 1
+  %f16 = getelementptr inbounds %struct.StructA, %struct.StructA* %a, i32 0, i32 0
   store i16 4, i16* %f16, align 2, !tbaa !14
   %2 = load %struct.StructA** %A.addr, align 8, !tbaa !0
-  %f321 = getelementptr inbounds %struct.StructA* %2, i32 0, i32 1
+  %f321 = getelementptr inbounds %struct.StructA, %struct.StructA* %2, i32 0, i32 1
   %3 = load i32* %f321, align 4, !tbaa !8
   ret i32 %3
 }
@@ -136,13 +136,13 @@ entry:
   store %struct.StructB* %B, %struct.StructB** %B.addr, align 8, !tbaa !0
   store i64 %count, i64* %count.addr, align 8, !tbaa !4
   %0 = load %struct.StructA** %A.addr, align 8, !tbaa !0
-  %f32 = getelementptr inbounds %struct.StructA* %0, i32 0, i32 1
+  %f32 = getelementptr inbounds %struct.StructA, %struct.StructA* %0, i32 0, i32 1
   store i32 1, i32* %f32, align 4, !tbaa !8
   %1 = load %struct.StructB** %B.addr, align 8, !tbaa !0
-  %f321 = getelementptr inbounds %struct.StructB* %1, i32 0, i32 2
+  %f321 = getelementptr inbounds %struct.StructB, %struct.StructB* %1, i32 0, i32 2
   store i32 4, i32* %f321, align 4, !tbaa !15
   %2 = load %struct.StructA** %A.addr, align 8, !tbaa !0
-  %f322 = getelementptr inbounds %struct.StructA* %2, i32 0, i32 1
+  %f322 = getelementptr inbounds %struct.StructA, %struct.StructA* %2, i32 0, i32 1
   %3 = load i32* %f322, align 4, !tbaa !8
   ret i32 %3
 }
@@ -164,14 +164,14 @@ entry:
   store %struct.StructB* %B, %struct.StructB** %B.addr, align 8, !tbaa !0
   store i64 %count, i64* %count.addr, align 8, !tbaa !4
   %0 = load %struct.StructA** %A.addr, align 8, !tbaa !0
-  %f32 = getelementptr inbounds %struct.StructA* %0, i32 0, i32 1
+  %f32 = getelementptr inbounds %struct.StructA, %struct.StructA* %0, i32 0, i32 1
   store i32 1, i32* %f32, align 4, !tbaa !8
   %1 = load %struct.StructB** %B.addr, align 8, !tbaa !0
-  %a = getelementptr inbounds %struct.StructB* %1, i32 0, i32 1
-  %f32_2 = getelementptr inbounds %struct.StructA* %a, i32 0, i32 3
+  %a = getelementptr inbounds %struct.StructB, %struct.StructB* %1, i32 0, i32 1
+  %f32_2 = getelementptr inbounds %struct.StructA, %struct.StructA* %a, i32 0, i32 3
   store i32 4, i32* %f32_2, align 4, !tbaa !16
   %2 = load %struct.StructA** %A.addr, align 8, !tbaa !0
-  %f321 = getelementptr inbounds %struct.StructA* %2, i32 0, i32 1
+  %f321 = getelementptr inbounds %struct.StructA, %struct.StructA* %2, i32 0, i32 1
   %3 = load i32* %f321, align 4, !tbaa !8
   ret i32 %3
 }
@@ -193,13 +193,13 @@ entry:
   store %struct.StructS* %S, %struct.StructS** %S.addr, align 8, !tbaa !0
   store i64 %count, i64* %count.addr, align 8, !tbaa !4
   %0 = load %struct.StructA** %A.addr, align 8, !tbaa !0
-  %f32 = getelementptr inbounds %struct.StructA* %0, i32 0, i32 1
+  %f32 = getelementptr inbounds %struct.StructA, %struct.StructA* %0, i32 0, i32 1
   store i32 1, i32* %f32, align 4, !tbaa !8
   %1 = load %struct.StructS** %S.addr, align 8, !tbaa !0
-  %f321 = getelementptr inbounds %struct.StructS* %1, i32 0, i32 1
+  %f321 = getelementptr inbounds %struct.StructS, %struct.StructS* %1, i32 0, i32 1
   store i32 4, i32* %f321, align 4, !tbaa !17
   %2 = load %struct.StructA** %A.addr, align 8, !tbaa !0
-  %f322 = getelementptr inbounds %struct.StructA* %2, i32 0, i32 1
+  %f322 = getelementptr inbounds %struct.StructA, %struct.StructA* %2, i32 0, i32 1
   %3 = load i32* %f322, align 4, !tbaa !8
   ret i32 %3
 }
@@ -221,13 +221,13 @@ entry:
   store %struct.StructS* %S, %struct.StructS** %S.addr, align 8, !tbaa !0
   store i64 %count, i64* %count.addr, align 8, !tbaa !4
   %0 = load %struct.StructA** %A.addr, align 8, !tbaa !0
-  %f32 = getelementptr inbounds %struct.StructA* %0, i32 0, i32 1
+  %f32 = getelementptr inbounds %struct.StructA, %struct.StructA* %0, i32 0, i32 1
   store i32 1, i32* %f32, align 4, !tbaa !8
   %1 = load %struct.StructS** %S.addr, align 8, !tbaa !0
-  %f16 = getelementptr inbounds %struct.StructS* %1, i32 0, i32 0
+  %f16 = getelementptr inbounds %struct.StructS, %struct.StructS* %1, i32 0, i32 0
   store i16 4, i16* %f16, align 2, !tbaa !19
   %2 = load %struct.StructA** %A.addr, align 8, !tbaa !0
-  %f321 = getelementptr inbounds %struct.StructA* %2, i32 0, i32 1
+  %f321 = getelementptr inbounds %struct.StructA, %struct.StructA* %2, i32 0, i32 1
   %3 = load i32* %f321, align 4, !tbaa !8
   ret i32 %3
 }
@@ -249,13 +249,13 @@ entry:
   store %struct.StructS2* %S2, %struct.StructS2** %S2.addr, align 8, !tbaa !0
   store i64 %count, i64* %count.addr, align 8, !tbaa !4
   %0 = load %struct.StructS** %S.addr, align 8, !tbaa !0
-  %f32 = getelementptr inbounds %struct.StructS* %0, i32 0, i32 1
+  %f32 = getelementptr inbounds %struct.StructS, %struct.StructS* %0, i32 0, i32 1
   store i32 1, i32* %f32, align 4, !tbaa !17
   %1 = load %struct.StructS2** %S2.addr, align 8, !tbaa !0
-  %f321 = getelementptr inbounds %struct.StructS2* %1, i32 0, i32 1
+  %f321 = getelementptr inbounds %struct.StructS2, %struct.StructS2* %1, i32 0, i32 1
   store i32 4, i32* %f321, align 4, !tbaa !20
   %2 = load %struct.StructS** %S.addr, align 8, !tbaa !0
-  %f322 = getelementptr inbounds %struct.StructS* %2, i32 0, i32 1
+  %f322 = getelementptr inbounds %struct.StructS, %struct.StructS* %2, i32 0, i32 1
   %3 = load i32* %f322, align 4, !tbaa !17
   ret i32 %3
 }
@@ -277,13 +277,13 @@ entry:
   store %struct.StructS2* %S2, %struct.StructS2** %S2.addr, align 8, !tbaa !0
   store i64 %count, i64* %count.addr, align 8, !tbaa !4
   %0 = load %struct.StructS** %S.addr, align 8, !tbaa !0
-  %f32 = getelementptr inbounds %struct.StructS* %0, i32 0, i32 1
+  %f32 = getelementptr inbounds %struct.StructS, %struct.StructS* %0, i32 0, i32 1
   store i32 1, i32* %f32, align 4, !tbaa !17
   %1 = load %struct.StructS2** %S2.addr, align 8, !tbaa !0
-  %f16 = getelementptr inbounds %struct.StructS2* %1, i32 0, i32 0
+  %f16 = getelementptr inbounds %struct.StructS2, %struct.StructS2* %1, i32 0, i32 0
   store i16 4, i16* %f16, align 2, !tbaa !22
   %2 = load %struct.StructS** %S.addr, align 8, !tbaa !0
-  %f321 = getelementptr inbounds %struct.StructS* %2, i32 0, i32 1
+  %f321 = getelementptr inbounds %struct.StructS, %struct.StructS* %2, i32 0, i32 1
   %3 = load i32* %f321, align 4, !tbaa !17
   ret i32 %3
 }
@@ -305,19 +305,19 @@ entry:
   store %struct.StructD* %D, %struct.StructD** %D.addr, align 8, !tbaa !0
   store i64 %count, i64* %count.addr, align 8, !tbaa !4
   %0 = load %struct.StructC** %C.addr, align 8, !tbaa !0
-  %b = getelementptr inbounds %struct.StructC* %0, i32 0, i32 1
-  %a = getelementptr inbounds %struct.StructB* %b, i32 0, i32 1
-  %f32 = getelementptr inbounds %struct.StructA* %a, i32 0, i32 1
+  %b = getelementptr inbounds %struct.StructC, %struct.StructC* %0, i32 0, i32 1
+  %a = getelementptr inbounds %struct.StructB, %struct.StructB* %b, i32 0, i32 1
+  %f32 = getelementptr inbounds %struct.StructA, %struct.StructA* %a, i32 0, i32 1
   store i32 1, i32* %f32, align 4, !tbaa !23
   %1 = load %struct.StructD** %D.addr, align 8, !tbaa !0
-  %b1 = getelementptr inbounds %struct.StructD* %1, i32 0, i32 1
-  %a2 = getelementptr inbounds %struct.StructB* %b1, i32 0, i32 1
-  %f323 = getelementptr inbounds %struct.StructA* %a2, i32 0, i32 1
+  %b1 = getelementptr inbounds %struct.StructD, %struct.StructD* %1, i32 0, i32 1
+  %a2 = getelementptr inbounds %struct.StructB, %struct.StructB* %b1, i32 0, i32 1
+  %f323 = getelementptr inbounds %struct.StructA, %struct.StructA* %a2, i32 0, i32 1
   store i32 4, i32* %f323, align 4, !tbaa !25
   %2 = load %struct.StructC** %C.addr, align 8, !tbaa !0
-  %b4 = getelementptr inbounds %struct.StructC* %2, i32 0, i32 1
-  %a5 = getelementptr inbounds %struct.StructB* %b4, i32 0, i32 1
-  %f326 = getelementptr inbounds %struct.StructA* %a5, i32 0, i32 1
+  %b4 = getelementptr inbounds %struct.StructC, %struct.StructC* %2, i32 0, i32 1
+  %a5 = getelementptr inbounds %struct.StructB, %struct.StructB* %b4, i32 0, i32 1
+  %f326 = getelementptr inbounds %struct.StructA, %struct.StructA* %a5, i32 0, i32 1
   %3 = load i32* %f326, align 4, !tbaa !23
   ret i32 %3
 }
@@ -341,22 +341,22 @@ entry:
   store %struct.StructD* %D, %struct.StructD** %D.addr, align 8, !tbaa !0
   store i64 %count, i64* %count.addr, align 8, !tbaa !4
   %0 = load %struct.StructC** %C.addr, align 8, !tbaa !0
-  %b = getelementptr inbounds %struct.StructC* %0, i32 0, i32 1
+  %b = getelementptr inbounds %struct.StructC, %struct.StructC* %0, i32 0, i32 1
   store %struct.StructB* %b, %struct.StructB** %b1, align 8, !tbaa !0
   %1 = load %struct.StructD** %D.addr, align 8, !tbaa !0
-  %b3 = getelementptr inbounds %struct.StructD* %1, i32 0, i32 1
+  %b3 = getelementptr inbounds %struct.StructD, %struct.StructD* %1, i32 0, i32 1
   store %struct.StructB* %b3, %struct.StructB** %b2, align 8, !tbaa !0
   %2 = load %struct.StructB** %b1, align 8, !tbaa !0
-  %a = getelementptr inbounds %struct.StructB* %2, i32 0, i32 1
-  %f32 = getelementptr inbounds %struct.StructA* %a, i32 0, i32 1
+  %a = getelementptr inbounds %struct.StructB, %struct.StructB* %2, i32 0, i32 1
+  %f32 = getelementptr inbounds %struct.StructA, %struct.StructA* %a, i32 0, i32 1
   store i32 1, i32* %f32, align 4, !tbaa !12
   %3 = load %struct.StructB** %b2, align 8, !tbaa !0
-  %a4 = getelementptr inbounds %struct.StructB* %3, i32 0, i32 1
-  %f325 = getelementptr inbounds %struct.StructA* %a4, i32 0, i32 1
+  %a4 = getelementptr inbounds %struct.StructB, %struct.StructB* %3, i32 0, i32 1
+  %f325 = getelementptr inbounds %struct.StructA, %struct.StructA* %a4, i32 0, i32 1
   store i32 4, i32* %f325, align 4, !tbaa !12
   %4 = load %struct.StructB** %b1, align 8, !tbaa !0
-  %a6 = getelementptr inbounds %struct.StructB* %4, i32 0, i32 1
-  %f327 = getelementptr inbounds %struct.StructA* %a6, i32 0, i32 1
+  %a6 = getelementptr inbounds %struct.StructB, %struct.StructB* %4, i32 0, i32 1
+  %f327 = getelementptr inbounds %struct.StructA, %struct.StructA* %a6, i32 0, i32 1
   %5 = load i32* %f327, align 4, !tbaa !12
   ret i32 %5
 }

Modified: llvm/trunk/test/Analysis/ValueTracking/memory-dereferenceable.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/ValueTracking/memory-dereferenceable.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Analysis/ValueTracking/memory-dereferenceable.ll (original)
+++ llvm/trunk/test/Analysis/ValueTracking/memory-dereferenceable.ll Fri Feb 27 13:29:02 2015
@@ -17,7 +17,7 @@ define void @test(i32 addrspace(1)* dere
 ; CHECK: %relocate
 ; CHECK-NOT: %nparam
 entry:
-    %globalptr = getelementptr inbounds [6 x i8]* @globalstr, i32 0, i32 0
+    %globalptr = getelementptr inbounds [6 x i8], [6 x i8]* @globalstr, i32 0, i32 0
     %load1 = load i8* %globalptr
     %alloca = alloca i1
     %load2 = load i1* %alloca
@@ -25,7 +25,7 @@ entry:
     %tok = tail call i32 (i1 ()*, i32, i32, ...)* @llvm.experimental.gc.statepoint.p0f_i1f(i1 ()* @return_i1, i32 0, i32 0, i32 0, i32 addrspace(1)* %dparam)
     %relocate = call i32 addrspace(1)* @llvm.experimental.gc.relocate.p1i32(i32 %tok, i32 4, i32 4)
     %load4 = load i32 addrspace(1)* %relocate
-    %nparam = getelementptr i32 addrspace(1)* %dparam, i32 5
+    %nparam = getelementptr i32, i32 addrspace(1)* %dparam, i32 5
     %load5 = load i32 addrspace(1)* %nparam
     ret void
 }

Modified: llvm/trunk/test/Assembler/2002-08-19-BytecodeReader.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Assembler/2002-08-19-BytecodeReader.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Assembler/2002-08-19-BytecodeReader.ll (original)
+++ llvm/trunk/test/Assembler/2002-08-19-BytecodeReader.ll Fri Feb 27 13:29:02 2015
@@ -10,9 +10,9 @@
 @search = external global %CHESS_POSITION		; <%CHESS_POSITION*> [#uses=2]
 
 define void @Evaluate() {
-	%reg1321 = getelementptr %CHESS_POSITION* @search, i64 0, i32 1		; <i32*> [#uses=1]
+	%reg1321 = getelementptr %CHESS_POSITION, %CHESS_POSITION* @search, i64 0, i32 1		; <i32*> [#uses=1]
 	%reg114 = load i32* %reg1321		; <i32> [#uses=0]
-	%reg1801 = getelementptr %CHESS_POSITION* @search, i64 0, i32 0		; <i32*> [#uses=1]
+	%reg1801 = getelementptr %CHESS_POSITION, %CHESS_POSITION* @search, i64 0, i32 0		; <i32*> [#uses=1]
 	%reg182 = load i32* %reg1801		; <i32> [#uses=0]
 	ret void
 }

Modified: llvm/trunk/test/Assembler/2004-04-04-GetElementPtrIndexTypes.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Assembler/2004-04-04-GetElementPtrIndexTypes.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Assembler/2004-04-04-GetElementPtrIndexTypes.ll (original)
+++ llvm/trunk/test/Assembler/2004-04-04-GetElementPtrIndexTypes.ll Fri Feb 27 13:29:02 2015
@@ -2,10 +2,10 @@
 ; RUN: verify-uselistorder %s
 
 define i32* @t1({ float, i32 }* %X) {
-        %W = getelementptr { float, i32 }* %X, i32 20, i32 1            ; <i32*> [#uses=0]
-        %X.upgrd.1 = getelementptr { float, i32 }* %X, i64 20, i32 1            ; <i32*> [#uses=0]
-        %Y = getelementptr { float, i32 }* %X, i64 20, i32 1            ; <i32*> [#uses=1]
-        %Z = getelementptr { float, i32 }* %X, i64 20, i32 1            ; <i32*> [#uses=0]
+        %W = getelementptr { float, i32 }, { float, i32 }* %X, i32 20, i32 1            ; <i32*> [#uses=0]
+        %X.upgrd.1 = getelementptr { float, i32 }, { float, i32 }* %X, i64 20, i32 1            ; <i32*> [#uses=0]
+        %Y = getelementptr { float, i32 }, { float, i32 }* %X, i64 20, i32 1            ; <i32*> [#uses=1]
+        %Z = getelementptr { float, i32 }, { float, i32 }* %X, i64 20, i32 1            ; <i32*> [#uses=0]
         ret i32* %Y
 }
 

Modified: llvm/trunk/test/Assembler/2004-06-07-VerifierBug.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Assembler/2004-06-07-VerifierBug.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Assembler/2004-06-07-VerifierBug.ll (original)
+++ llvm/trunk/test/Assembler/2004-06-07-VerifierBug.ll Fri Feb 27 13:29:02 2015
@@ -6,7 +6,7 @@ entry:
      ret void
 
 loop:           ; preds = %loop
-     %tmp.4.i9 = getelementptr i32* null, i32 %tmp.5.i10             ; <i32*> [#uses=1]
+     %tmp.4.i9 = getelementptr i32, i32* null, i32 %tmp.5.i10             ; <i32*> [#uses=1]
      %tmp.5.i10 = load i32* %tmp.4.i9                ; <i32> [#uses=1]
      br label %loop
 }

Modified: llvm/trunk/test/Assembler/2007-01-05-Cmp-ConstExpr.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Assembler/2007-01-05-Cmp-ConstExpr.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Assembler/2007-01-05-Cmp-ConstExpr.ll (original)
+++ llvm/trunk/test/Assembler/2007-01-05-Cmp-ConstExpr.ll Fri Feb 27 13:29:02 2015
@@ -6,7 +6,7 @@
 
 define i32 @main(i32 %argc, i8** %argv) {
 entry:
-        %tmp65 = getelementptr i8** %argv, i32 1                ; <i8**> [#uses=1]
+        %tmp65 = getelementptr i8*, i8** %argv, i32 1                ; <i8**> [#uses=1]
         %tmp66 = load i8** %tmp65               ; <i8*> [#uses=0]
         br i1 icmp ne (i32 sub (i32 ptrtoint (i8* getelementptr ([4 x i8]* @str, i32 0, i64 1) to i32), i32 ptrtoint ([4 x i8]* @str to i32)), i32 1), label %exit_1, label %exit_2
 

Modified: llvm/trunk/test/Assembler/flags.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Assembler/flags.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Assembler/flags.ll (original)
+++ llvm/trunk/test/Assembler/flags.ll Fri Feb 27 13:29:02 2015
@@ -148,14 +148,14 @@ define i64 @lshr_exact(i64 %x, i64 %y) {
 }
 
 define i64* @gep_nw(i64* %p, i64 %x) {
-; CHECK: %z = getelementptr inbounds i64* %p, i64 %x
-	%z = getelementptr inbounds i64* %p, i64 %x
+; CHECK: %z = getelementptr inbounds i64, i64* %p, i64 %x
+	%z = getelementptr inbounds i64, i64* %p, i64 %x
         ret i64* %z
 }
 
 define i64* @gep_plain(i64* %p, i64 %x) {
-; CHECK: %z = getelementptr i64* %p, i64 %x
-	%z = getelementptr i64* %p, i64 %x
+; CHECK: %z = getelementptr i64, i64* %p, i64 %x
+	%z = getelementptr i64, i64* %p, i64 %x
         ret i64* %z
 }
 

Modified: llvm/trunk/test/Assembler/getelementptr.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Assembler/getelementptr.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Assembler/getelementptr.ll (original)
+++ llvm/trunk/test/Assembler/getelementptr.ll Fri Feb 27 13:29:02 2015
@@ -22,8 +22,8 @@
 ; See if i92 indices work too.
 define i32 *@test({i32, i32}* %t, i92 %n) {
 ; CHECK: @test
-; CHECK: %B = getelementptr { i32, i32 }* %t, i92 %n, i32 0
-  %B = getelementptr {i32, i32}* %t, i92 %n, i32 0
+; CHECK: %B = getelementptr { i32, i32 }, { i32, i32 }* %t, i92 %n, i32 0
+  %B = getelementptr {i32, i32}, {i32, i32}* %t, i92 %n, i32 0
   ret i32* %B
 }
 
@@ -33,12 +33,12 @@ define i32 *@test({i32, i32}* %t, i92 %n
 
 ; Verify that struct GEP works with a vector of pointers.
 define <2 x i32*> @test7(<2 x {i32, i32}*> %a) {
-  %w = getelementptr <2 x {i32, i32}*> %a, <2 x i32> <i32 5, i32 9>, <2 x i32> zeroinitializer
+  %w = getelementptr {i32, i32}, <2 x {i32, i32}*> %a, <2 x i32> <i32 5, i32 9>, <2 x i32> zeroinitializer
   ret <2 x i32*> %w
 }
 
 ; Verify that array GEP works with a vector of pointers.
 define <2 x i8*> @test8(<2 x [2 x i8]*> %a) {
-  %w = getelementptr <2 x  [2 x i8]*> %a, <2 x i32> <i32 0, i32 0>, <2 x i8> <i8 0, i8 1>
+  %w = getelementptr  [2 x i8], <2 x  [2 x i8]*> %a, <2 x i32> <i32 0, i32 0>, <2 x i8> <i8 0, i8 1>
   ret <2 x i8*> %w
 }

Modified: llvm/trunk/test/Assembler/getelementptr_struct.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Assembler/getelementptr_struct.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Assembler/getelementptr_struct.ll (original)
+++ llvm/trunk/test/Assembler/getelementptr_struct.ll Fri Feb 27 13:29:02 2015
@@ -9,7 +9,7 @@
 
 define i32* @foo(%ST* %s) {
 entry:
-  %reg = getelementptr %ST* %s, i32 1, i64 2, i32 1, i32 5, i32 13
+  %reg = getelementptr %ST, %ST* %s, i32 1, i64 2, i32 1, i32 5, i32 13
   ret i32* %reg
 }
 

Modified: llvm/trunk/test/Assembler/getelementptr_vec_idx1.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Assembler/getelementptr_vec_idx1.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Assembler/getelementptr_vec_idx1.ll (original)
+++ llvm/trunk/test/Assembler/getelementptr_vec_idx1.ll Fri Feb 27 13:29:02 2015
@@ -5,6 +5,6 @@
 ; CHECK: getelementptr index type missmatch
 
 define i32 @test(i32* %a) {
-  %w = getelementptr i32* %a, <2 x i32> <i32 5, i32 9>
+  %w = getelementptr i32, i32* %a, <2 x i32> <i32 5, i32 9>
   ret i32 %w
 }

Modified: llvm/trunk/test/Assembler/getelementptr_vec_idx2.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Assembler/getelementptr_vec_idx2.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Assembler/getelementptr_vec_idx2.ll (original)
+++ llvm/trunk/test/Assembler/getelementptr_vec_idx2.ll Fri Feb 27 13:29:02 2015
@@ -5,6 +5,6 @@
 ; CHECK: getelementptr index type missmatch
 
 define <2 x i32> @test(<2 x i32*> %a) {
-  %w = getelementptr <2 x i32*> %a, i32 2
+  %w = getelementptr i32, <2 x i32*> %a, i32 2
   ret <2 x i32> %w
 }

Modified: llvm/trunk/test/Assembler/getelementptr_vec_idx3.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Assembler/getelementptr_vec_idx3.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Assembler/getelementptr_vec_idx3.ll (original)
+++ llvm/trunk/test/Assembler/getelementptr_vec_idx3.ll Fri Feb 27 13:29:02 2015
@@ -5,6 +5,6 @@
 ; CHECK: getelementptr index type missmatch
 
 define <4 x i32> @test(<4 x i32>* %a) {
-  %w = getelementptr <4 x i32>* %a, <2 x i32> <i32 5, i32 9>
+  %w = getelementptr <4 x i32>, <4 x i32>* %a, <2 x i32> <i32 5, i32 9>
   ret i32 %w
 }

Modified: llvm/trunk/test/Assembler/getelementptr_vec_struct.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Assembler/getelementptr_vec_struct.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Assembler/getelementptr_vec_struct.ll (original)
+++ llvm/trunk/test/Assembler/getelementptr_vec_struct.ll Fri Feb 27 13:29:02 2015
@@ -5,6 +5,6 @@
 ; CHECK: invalid getelementptr indices
 
 define <2 x i32*> @test7(<2 x {i32, i32}*> %a) {
-  %w = getelementptr <2 x {i32, i32}*> %a, <2 x i32> <i32 5, i32 9>, <2 x i32> <i32 0, i32 1>
+  %w = getelementptr {i32, i32}, <2 x {i32, i32}*> %a, <2 x i32> <i32 5, i32 9>, <2 x i32> <i32 0, i32 1>
   ret <2 x i32*> %w
 }

Added: llvm/trunk/test/Assembler/invalid-gep-mismatched-explicit-type.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Assembler/invalid-gep-mismatched-explicit-type.ll?rev=230786&view=auto
==============================================================================
--- llvm/trunk/test/Assembler/invalid-gep-mismatched-explicit-type.ll (added)
+++ llvm/trunk/test/Assembler/invalid-gep-mismatched-explicit-type.ll Fri Feb 27 13:29:02 2015
@@ -0,0 +1,6 @@
+; RUN: not llvm-as < %s 2>&1 | FileCheck %s
+; CHECK: <stdin>:4:22: error: explicit pointee type doesn't match operand's pointee type
+define void @test(i32* %t) {
+  %x = getelementptr i16, i32* %t, i32 0
+  ret void
+}

Added: llvm/trunk/test/Assembler/invalid-gep-missing-explicit-type.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Assembler/invalid-gep-missing-explicit-type.ll?rev=230786&view=auto
==============================================================================
--- llvm/trunk/test/Assembler/invalid-gep-missing-explicit-type.ll (added)
+++ llvm/trunk/test/Assembler/invalid-gep-missing-explicit-type.ll Fri Feb 27 13:29:02 2015
@@ -0,0 +1,7 @@
+; RUN: not llvm-as < %s 2>&1 | FileCheck %s
+; CHECK: <stdin>:4:27: error: expected comma after getelementptr's type
+define void @test(i32* %t) {
+  %x = getelementptr i32* %t, i32 0
+  ret void
+}
+

Modified: llvm/trunk/test/Bitcode/constantsTest.3.2.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Bitcode/constantsTest.3.2.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Bitcode/constantsTest.3.2.ll (original)
+++ llvm/trunk/test/Bitcode/constantsTest.3.2.ll Fri Feb 27 13:29:02 2015
@@ -99,10 +99,10 @@ entry:
   inttoptr i8 1 to i8*
   ; CHECK-NEXT: bitcast i32 1 to <2 x i16>
   bitcast i32 1 to <2 x i16>
-  ; CHECK-NEXT: getelementptr i32* @X, i32 0
-  getelementptr i32* @X, i32 0
-  ; CHECK-NEXT: getelementptr inbounds i32* @X, i32 0
-  getelementptr inbounds i32* @X, i32 0
+  ; CHECK-NEXT: getelementptr i32, i32* @X, i32 0
+  getelementptr i32, i32* @X, i32 0
+  ; CHECK-NEXT: getelementptr inbounds i32, i32* @X, i32 0
+  getelementptr inbounds i32, i32* @X, i32 0
   ; CHECK: select i1 true, i32 1, i32 0
   select i1 true ,i32 1, i32 0
   ; CHECK-NEXT: icmp eq i32 1, 0

Modified: llvm/trunk/test/Bitcode/function-encoding-rel-operands.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Bitcode/function-encoding-rel-operands.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Bitcode/function-encoding-rel-operands.ll (original)
+++ llvm/trunk/test/Bitcode/function-encoding-rel-operands.ll Fri Feb 27 13:29:02 2015
@@ -43,7 +43,7 @@ define double @test_float_binops(i32 %a)
 ; CHECK: INST_RET {{.*}}op0=1
 define i1 @test_load(i32 %a, {i32, i32}* %ptr) nounwind {
 entry:
-  %0 = getelementptr inbounds {i32, i32}* %ptr, i32 %a, i32 0
+  %0 = getelementptr inbounds {i32, i32}, {i32, i32}* %ptr, i32 %a, i32 0
   %1 = load i32* %0
   %2 = icmp eq i32 %1, %a
   ret i1 %2

Modified: llvm/trunk/test/Bitcode/memInstructions.3.2.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Bitcode/memInstructions.3.2.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/Bitcode/memInstructions.3.2.ll (original)
+++ llvm/trunk/test/Bitcode/memInstructions.3.2.ll Fri Feb 27 13:29:02 2015
@@ -313,14 +313,14 @@ entry:
 
 define void @getelementptr({i8, i8}* %s, <4 x i8*> %ptrs, <4 x i64> %offsets ){
 entry:
-; CHECK: %res1 = getelementptr { i8, i8 }* %s, i32 1, i32 1
-  %res1 = getelementptr {i8, i8}* %s, i32 1, i32 1
+; CHECK: %res1 = getelementptr { i8, i8 }, { i8, i8 }* %s, i32 1, i32 1
+  %res1 = getelementptr {i8, i8}, {i8, i8}* %s, i32 1, i32 1
 
-; CHECK-NEXT: %res2 = getelementptr inbounds { i8, i8 }* %s, i32 1, i32 1
-  %res2 = getelementptr inbounds {i8, i8}* %s, i32 1, i32 1
+; CHECK-NEXT: %res2 = getelementptr inbounds { i8, i8 }, { i8, i8 }* %s, i32 1, i32 1
+  %res2 = getelementptr inbounds {i8, i8}, {i8, i8}* %s, i32 1, i32 1
 
-; CHECK-NEXT: %res3 = getelementptr <4 x i8*> %ptrs, <4 x i64> %offsets
-  %res3 = getelementptr <4 x i8*> %ptrs, <4 x i64> %offsets
+; CHECK-NEXT: %res3 = getelementptr i8, <4 x i8*> %ptrs, <4 x i64> %offsets
+  %res3 = getelementptr i8, <4 x i8*> %ptrs, <4 x i64> %offsets
 
   ret void
 }

Modified: llvm/trunk/test/BugPoint/compile-custom.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/BugPoint/compile-custom.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
    (empty)

Propchange: llvm/trunk/test/BugPoint/compile-custom.ll
------------------------------------------------------------------------------
--- svn:executable (original)
+++ svn:executable (removed)
@@ -1 +0,0 @@
-*

Modified: llvm/trunk/test/CodeGen/AArch64/128bit_load_store.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/128bit_load_store.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/128bit_load_store.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/128bit_load_store.ll Fri Feb 27 13:29:02 2015
@@ -45,7 +45,7 @@ define void @test_ld_st_p128(i128* nocap
 entry:
   %0 = bitcast i128* %ptr to fp128*
   %1 = load fp128* %0, align 16
-  %add.ptr = getelementptr inbounds i128* %ptr, i64 1
+  %add.ptr = getelementptr inbounds i128, i128* %ptr, i64 1
   %2 = bitcast i128* %add.ptr to fp128*
   store fp128 %1, fp128* %2, align 16
   ret void

Modified: llvm/trunk/test/CodeGen/AArch64/PBQP-chain.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/PBQP-chain.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/PBQP-chain.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/PBQP-chain.ll Fri Feb 27 13:29:02 2015
@@ -27,73 +27,73 @@ entry:
   %mul = fmul fast double %1, %0
   %2 = load double* %y, align 8
   %mul7 = fmul fast double %2, %0
-  %arrayidx.1 = getelementptr inbounds double* %c, i64 1
+  %arrayidx.1 = getelementptr inbounds double, double* %c, i64 1
   %3 = load double* %arrayidx.1, align 8
-  %arrayidx2.1 = getelementptr inbounds double* %x, i64 1
+  %arrayidx2.1 = getelementptr inbounds double, double* %x, i64 1
   %4 = load double* %arrayidx2.1, align 8
   %mul.1 = fmul fast double %4, %3
   %add.1 = fadd fast double %mul.1, %mul
-  %arrayidx6.1 = getelementptr inbounds double* %y, i64 1
+  %arrayidx6.1 = getelementptr inbounds double, double* %y, i64 1
   %5 = load double* %arrayidx6.1, align 8
   %mul7.1 = fmul fast double %5, %3
   %add8.1 = fadd fast double %mul7.1, %mul7
-  %arrayidx.2 = getelementptr inbounds double* %c, i64 2
+  %arrayidx.2 = getelementptr inbounds double, double* %c, i64 2
   %6 = load double* %arrayidx.2, align 8
-  %arrayidx2.2 = getelementptr inbounds double* %x, i64 2
+  %arrayidx2.2 = getelementptr inbounds double, double* %x, i64 2
   %7 = load double* %arrayidx2.2, align 8
   %mul.2 = fmul fast double %7, %6
   %add.2 = fadd fast double %mul.2, %add.1
-  %arrayidx6.2 = getelementptr inbounds double* %y, i64 2
+  %arrayidx6.2 = getelementptr inbounds double, double* %y, i64 2
   %8 = load double* %arrayidx6.2, align 8
   %mul7.2 = fmul fast double %8, %6
   %add8.2 = fadd fast double %mul7.2, %add8.1
-  %arrayidx.3 = getelementptr inbounds double* %c, i64 3
+  %arrayidx.3 = getelementptr inbounds double, double* %c, i64 3
   %9 = load double* %arrayidx.3, align 8
-  %arrayidx2.3 = getelementptr inbounds double* %x, i64 3
+  %arrayidx2.3 = getelementptr inbounds double, double* %x, i64 3
   %10 = load double* %arrayidx2.3, align 8
   %mul.3 = fmul fast double %10, %9
   %add.3 = fadd fast double %mul.3, %add.2
-  %arrayidx6.3 = getelementptr inbounds double* %y, i64 3
+  %arrayidx6.3 = getelementptr inbounds double, double* %y, i64 3
   %11 = load double* %arrayidx6.3, align 8
   %mul7.3 = fmul fast double %11, %9
   %add8.3 = fadd fast double %mul7.3, %add8.2
-  %arrayidx.4 = getelementptr inbounds double* %c, i64 4
+  %arrayidx.4 = getelementptr inbounds double, double* %c, i64 4
   %12 = load double* %arrayidx.4, align 8
-  %arrayidx2.4 = getelementptr inbounds double* %x, i64 4
+  %arrayidx2.4 = getelementptr inbounds double, double* %x, i64 4
   %13 = load double* %arrayidx2.4, align 8
   %mul.4 = fmul fast double %13, %12
   %add.4 = fadd fast double %mul.4, %add.3
-  %arrayidx6.4 = getelementptr inbounds double* %y, i64 4
+  %arrayidx6.4 = getelementptr inbounds double, double* %y, i64 4
   %14 = load double* %arrayidx6.4, align 8
   %mul7.4 = fmul fast double %14, %12
   %add8.4 = fadd fast double %mul7.4, %add8.3
-  %arrayidx.5 = getelementptr inbounds double* %c, i64 5
+  %arrayidx.5 = getelementptr inbounds double, double* %c, i64 5
   %15 = load double* %arrayidx.5, align 8
-  %arrayidx2.5 = getelementptr inbounds double* %x, i64 5
+  %arrayidx2.5 = getelementptr inbounds double, double* %x, i64 5
   %16 = load double* %arrayidx2.5, align 8
   %mul.5 = fmul fast double %16, %15
   %add.5 = fadd fast double %mul.5, %add.4
-  %arrayidx6.5 = getelementptr inbounds double* %y, i64 5
+  %arrayidx6.5 = getelementptr inbounds double, double* %y, i64 5
   %17 = load double* %arrayidx6.5, align 8
   %mul7.5 = fmul fast double %17, %15
   %add8.5 = fadd fast double %mul7.5, %add8.4
-  %arrayidx.6 = getelementptr inbounds double* %c, i64 6
+  %arrayidx.6 = getelementptr inbounds double, double* %c, i64 6
   %18 = load double* %arrayidx.6, align 8
-  %arrayidx2.6 = getelementptr inbounds double* %x, i64 6
+  %arrayidx2.6 = getelementptr inbounds double, double* %x, i64 6
   %19 = load double* %arrayidx2.6, align 8
   %mul.6 = fmul fast double %19, %18
   %add.6 = fadd fast double %mul.6, %add.5
-  %arrayidx6.6 = getelementptr inbounds double* %y, i64 6
+  %arrayidx6.6 = getelementptr inbounds double, double* %y, i64 6
   %20 = load double* %arrayidx6.6, align 8
   %mul7.6 = fmul fast double %20, %18
   %add8.6 = fadd fast double %mul7.6, %add8.5
-  %arrayidx.7 = getelementptr inbounds double* %c, i64 7
+  %arrayidx.7 = getelementptr inbounds double, double* %c, i64 7
   %21 = load double* %arrayidx.7, align 8
-  %arrayidx2.7 = getelementptr inbounds double* %x, i64 7
+  %arrayidx2.7 = getelementptr inbounds double, double* %x, i64 7
   %22 = load double* %arrayidx2.7, align 8
   %mul.7 = fmul fast double %22, %21
   %add.7 = fadd fast double %mul.7, %add.6
-  %arrayidx6.7 = getelementptr inbounds double* %y, i64 7
+  %arrayidx6.7 = getelementptr inbounds double, double* %y, i64 7
   %23 = load double* %arrayidx6.7, align 8
   %mul7.7 = fmul fast double %23, %21
   %add8.7 = fadd fast double %mul7.7, %add8.6

Modified: llvm/trunk/test/CodeGen/AArch64/PBQP-coalesce-benefit.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/PBQP-coalesce-benefit.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/PBQP-coalesce-benefit.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/PBQP-coalesce-benefit.ll Fri Feb 27 13:29:02 2015
@@ -6,7 +6,7 @@ entry:
   %0 = load i32* %c, align 4
 ; CHECK-NOT: mov	 w{{[0-9]*}}, w0
   %add = add nsw i32 %0, %acc
-  %arrayidx1 = getelementptr inbounds i32* %c, i64 1
+  %arrayidx1 = getelementptr inbounds i32, i32* %c, i64 1
   %1 = load i32* %arrayidx1, align 4
   %add2 = add nsw i32 %add, %1
   ret i32 %add2

Modified: llvm/trunk/test/CodeGen/AArch64/PBQP-csr.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/PBQP-csr.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/PBQP-csr.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/PBQP-csr.ll Fri Feb 27 13:29:02 2015
@@ -11,16 +11,16 @@
 define void @test_csr(%pl* nocapture readnone %this, %rs* nocapture %r) align 2 {
 ;CHECK-NOT: stp {{d[0-9]+}}, {{d[0-9]+}}
 entry:
-  %x.i = getelementptr inbounds %rs* %r, i64 0, i32 7, i32 0
-  %y.i = getelementptr inbounds %rs* %r, i64 0, i32 7, i32 1
-  %z.i = getelementptr inbounds %rs* %r, i64 0, i32 7, i32 2
-  %x.i61 = getelementptr inbounds %rs* %r, i64 0, i32 8, i32 0
-  %y.i62 = getelementptr inbounds %rs* %r, i64 0, i32 8, i32 1
-  %z.i63 = getelementptr inbounds %rs* %r, i64 0, i32 8, i32 2
-  %x.i58 = getelementptr inbounds %rs* %r, i64 0, i32 9, i32 0
-  %y.i59 = getelementptr inbounds %rs* %r, i64 0, i32 9, i32 1
-  %z.i60 = getelementptr inbounds %rs* %r, i64 0, i32 9, i32 2
-  %na = getelementptr inbounds %rs* %r, i64 0, i32 0
+  %x.i = getelementptr inbounds %rs, %rs* %r, i64 0, i32 7, i32 0
+  %y.i = getelementptr inbounds %rs, %rs* %r, i64 0, i32 7, i32 1
+  %z.i = getelementptr inbounds %rs, %rs* %r, i64 0, i32 7, i32 2
+  %x.i61 = getelementptr inbounds %rs, %rs* %r, i64 0, i32 8, i32 0
+  %y.i62 = getelementptr inbounds %rs, %rs* %r, i64 0, i32 8, i32 1
+  %z.i63 = getelementptr inbounds %rs, %rs* %r, i64 0, i32 8, i32 2
+  %x.i58 = getelementptr inbounds %rs, %rs* %r, i64 0, i32 9, i32 0
+  %y.i59 = getelementptr inbounds %rs, %rs* %r, i64 0, i32 9, i32 1
+  %z.i60 = getelementptr inbounds %rs, %rs* %r, i64 0, i32 9, i32 2
+  %na = getelementptr inbounds %rs, %rs* %r, i64 0, i32 0
   %0 = bitcast double* %x.i to i8*
   call void @llvm.memset.p0i8.i64(i8* %0, i8 0, i64 72, i32 8, i1 false)
   %1 = load i32* %na, align 4
@@ -28,9 +28,9 @@ entry:
   br i1 %cmp70, label %for.body.lr.ph, label %for.end
 
 for.body.lr.ph:                                   ; preds = %entry
-  %fn = getelementptr inbounds %rs* %r, i64 0, i32 4
+  %fn = getelementptr inbounds %rs, %rs* %r, i64 0, i32 4
   %2 = load %v** %fn, align 8
-  %fs = getelementptr inbounds %rs* %r, i64 0, i32 5
+  %fs = getelementptr inbounds %rs, %rs* %r, i64 0, i32 5
   %3 = load %v** %fs, align 8
   %4 = sext i32 %1 to i64
   br label %for.body
@@ -42,18 +42,18 @@ for.body:
   %7 = phi <2 x double> [ zeroinitializer, %for.body.lr.ph ], [ %22, %for.body ]
   %8 = phi <2 x double> [ zeroinitializer, %for.body.lr.ph ], [ %26, %for.body ]
   %9 = phi <2 x double> [ zeroinitializer, %for.body.lr.ph ], [ %28, %for.body ]
-  %x.i54 = getelementptr inbounds %v* %2, i64 %indvars.iv, i32 0
-  %x1.i = getelementptr inbounds %v* %3, i64 %indvars.iv, i32 0
-  %y.i56 = getelementptr inbounds %v* %2, i64 %indvars.iv, i32 1
+  %x.i54 = getelementptr inbounds %v, %v* %2, i64 %indvars.iv, i32 0
+  %x1.i = getelementptr inbounds %v, %v* %3, i64 %indvars.iv, i32 0
+  %y.i56 = getelementptr inbounds %v, %v* %2, i64 %indvars.iv, i32 1
   %10 = bitcast double* %x.i54 to <2 x double>*
   %11 = load <2 x double>* %10, align 8
-  %y2.i = getelementptr inbounds %v* %3, i64 %indvars.iv, i32 1
+  %y2.i = getelementptr inbounds %v, %v* %3, i64 %indvars.iv, i32 1
   %12 = bitcast double* %x1.i to <2 x double>*
   %13 = load <2 x double>* %12, align 8
   %14 = fadd fast <2 x double> %13, %11
-  %z.i57 = getelementptr inbounds %v* %2, i64 %indvars.iv, i32 2
+  %z.i57 = getelementptr inbounds %v, %v* %2, i64 %indvars.iv, i32 2
   %15 = load double* %z.i57, align 8
-  %z4.i = getelementptr inbounds %v* %3, i64 %indvars.iv, i32 2
+  %z4.i = getelementptr inbounds %v, %v* %3, i64 %indvars.iv, i32 2
   %16 = load double* %z4.i, align 8
   %add5.i = fadd fast double %16, %15
   %17 = fadd fast <2 x double> %6, %11

Modified: llvm/trunk/test/CodeGen/AArch64/Redundantstore.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/Redundantstore.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/Redundantstore.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/Redundantstore.ll Fri Feb 27 13:29:02 2015
@@ -13,11 +13,11 @@ entry:
   %and = and i64 %conv, -8
   %conv2 = trunc i64 %and to i32
   %add.ptr.sum = add nsw i64 %and, -4
-  %add.ptr3 = getelementptr inbounds i8* %0, i64 %add.ptr.sum
+  %add.ptr3 = getelementptr inbounds i8, i8* %0, i64 %add.ptr.sum
   %size4 = bitcast i8* %add.ptr3 to i32*
   store i32 %conv2, i32* %size4, align 4
   %add.ptr.sum9 = add nsw i64 %and, -4
-  %add.ptr5 = getelementptr inbounds i8* %0, i64 %add.ptr.sum9
+  %add.ptr5 = getelementptr inbounds i8, i8* %0, i64 %add.ptr.sum9
   %size6 = bitcast i8* %add.ptr5 to i32*
   store i32 %conv2, i32* %size6, align 4
   ret i8* %0

Modified: llvm/trunk/test/CodeGen/AArch64/aarch64-2014-08-11-MachineCombinerCrash.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/aarch64-2014-08-11-MachineCombinerCrash.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/aarch64-2014-08-11-MachineCombinerCrash.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/aarch64-2014-08-11-MachineCombinerCrash.ll Fri Feb 27 13:29:02 2015
@@ -8,11 +8,11 @@ entry:
   br label %for.body, !dbg !39
 
 for.body:                                         ; preds = %for.body, %entry
-  %arrayidx5 = getelementptr inbounds i32* null, i64 1, !dbg !43
+  %arrayidx5 = getelementptr inbounds i32, i32* null, i64 1, !dbg !43
   %0 = load i32* null, align 4, !dbg !45, !tbaa !46
   %s1 = sub nsw i32 0, %0, !dbg !50
   %n1 = sext i32 %s1 to i64, !dbg !50
-  %arrayidx21 = getelementptr inbounds i32* null, i64 3, !dbg !51
+  %arrayidx21 = getelementptr inbounds i32, i32* null, i64 3, !dbg !51
   %add53 = add nsw i64 %n1, 0, !dbg !52
   %add55 = add nsw i64 %n1, 0, !dbg !53
   %mul63 = mul nsw i64 %add53, -20995, !dbg !54

Modified: llvm/trunk/test/CodeGen/AArch64/aarch64-a57-fp-load-balancing.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/aarch64-a57-fp-load-balancing.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/aarch64-a57-fp-load-balancing.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/aarch64-a57-fp-load-balancing.ll Fri Feb 27 13:29:02 2015
@@ -30,13 +30,13 @@ target triple = "aarch64"
 define void @f1(double* nocapture readonly %p, double* nocapture %q) #0 {
 entry:
   %0 = load double* %p, align 8
-  %arrayidx1 = getelementptr inbounds double* %p, i64 1
+  %arrayidx1 = getelementptr inbounds double, double* %p, i64 1
   %1 = load double* %arrayidx1, align 8
-  %arrayidx2 = getelementptr inbounds double* %p, i64 2
+  %arrayidx2 = getelementptr inbounds double, double* %p, i64 2
   %2 = load double* %arrayidx2, align 8
-  %arrayidx3 = getelementptr inbounds double* %p, i64 3
+  %arrayidx3 = getelementptr inbounds double, double* %p, i64 3
   %3 = load double* %arrayidx3, align 8
-  %arrayidx4 = getelementptr inbounds double* %p, i64 4
+  %arrayidx4 = getelementptr inbounds double, double* %p, i64 4
   %4 = load double* %arrayidx4, align 8
   %mul = fmul fast double %0, %1
   %add = fadd fast double %mul, %4
@@ -47,18 +47,18 @@ entry:
   %mul8 = fmul fast double %2, %3
   %add9 = fadd fast double %mul8, %sub
   store double %add9, double* %q, align 8
-  %arrayidx11 = getelementptr inbounds double* %p, i64 5
+  %arrayidx11 = getelementptr inbounds double, double* %p, i64 5
   %5 = load double* %arrayidx11, align 8
-  %arrayidx12 = getelementptr inbounds double* %p, i64 6
+  %arrayidx12 = getelementptr inbounds double, double* %p, i64 6
   %6 = load double* %arrayidx12, align 8
-  %arrayidx13 = getelementptr inbounds double* %p, i64 7
+  %arrayidx13 = getelementptr inbounds double, double* %p, i64 7
   %7 = load double* %arrayidx13, align 8
   %mul15 = fmul fast double %6, %7
   %mul16 = fmul fast double %0, %5
   %add17 = fadd fast double %mul16, %mul15
   %mul18 = fmul fast double %5, %6
   %add19 = fadd fast double %mul18, %add17
-  %arrayidx20 = getelementptr inbounds double* %q, i64 1
+  %arrayidx20 = getelementptr inbounds double, double* %q, i64 1
   store double %add19, double* %arrayidx20, align 8
   ret void
 }
@@ -82,19 +82,19 @@ entry:
 define void @f2(double* nocapture readonly %p, double* nocapture %q) #0 {
 entry:
   %0 = load double* %p, align 8
-  %arrayidx1 = getelementptr inbounds double* %p, i64 1
+  %arrayidx1 = getelementptr inbounds double, double* %p, i64 1
   %1 = load double* %arrayidx1, align 8
-  %arrayidx2 = getelementptr inbounds double* %p, i64 2
+  %arrayidx2 = getelementptr inbounds double, double* %p, i64 2
   %2 = load double* %arrayidx2, align 8
-  %arrayidx3 = getelementptr inbounds double* %p, i64 3
+  %arrayidx3 = getelementptr inbounds double, double* %p, i64 3
   %3 = load double* %arrayidx3, align 8
-  %arrayidx4 = getelementptr inbounds double* %p, i64 4
+  %arrayidx4 = getelementptr inbounds double, double* %p, i64 4
   %4 = load double* %arrayidx4, align 8
-  %arrayidx5 = getelementptr inbounds double* %p, i64 5
+  %arrayidx5 = getelementptr inbounds double, double* %p, i64 5
   %5 = load double* %arrayidx5, align 8
-  %arrayidx6 = getelementptr inbounds double* %p, i64 6
+  %arrayidx6 = getelementptr inbounds double, double* %p, i64 6
   %6 = load double* %arrayidx6, align 8
-  %arrayidx7 = getelementptr inbounds double* %p, i64 7
+  %arrayidx7 = getelementptr inbounds double, double* %p, i64 7
   %7 = load double* %arrayidx7, align 8
   %mul = fmul fast double %0, %1
   %add = fadd fast double %mul, %7
@@ -110,7 +110,7 @@ entry:
   %mul16 = fmul fast double %2, %3
   %add17 = fadd fast double %mul16, %sub
   store double %add17, double* %q, align 8
-  %arrayidx19 = getelementptr inbounds double* %q, i64 1
+  %arrayidx19 = getelementptr inbounds double, double* %q, i64 1
   store double %add15, double* %arrayidx19, align 8
   ret void
 }
@@ -128,13 +128,13 @@ entry:
 define void @f3(double* nocapture readonly %p, double* nocapture %q) #0 {
 entry:
   %0 = load double* %p, align 8
-  %arrayidx1 = getelementptr inbounds double* %p, i64 1
+  %arrayidx1 = getelementptr inbounds double, double* %p, i64 1
   %1 = load double* %arrayidx1, align 8
-  %arrayidx2 = getelementptr inbounds double* %p, i64 2
+  %arrayidx2 = getelementptr inbounds double, double* %p, i64 2
   %2 = load double* %arrayidx2, align 8
-  %arrayidx3 = getelementptr inbounds double* %p, i64 3
+  %arrayidx3 = getelementptr inbounds double, double* %p, i64 3
   %3 = load double* %arrayidx3, align 8
-  %arrayidx4 = getelementptr inbounds double* %p, i64 4
+  %arrayidx4 = getelementptr inbounds double, double* %p, i64 4
   %4 = load double* %arrayidx4, align 8
   %mul = fmul fast double %0, %1
   %add = fadd fast double %mul, %4
@@ -177,19 +177,19 @@ declare void @g(...) #1
 define void @f4(float* nocapture readonly %p, float* nocapture %q) #0 {
 entry:
   %0 = load float* %p, align 4
-  %arrayidx1 = getelementptr inbounds float* %p, i64 1
+  %arrayidx1 = getelementptr inbounds float, float* %p, i64 1
   %1 = load float* %arrayidx1, align 4
-  %arrayidx2 = getelementptr inbounds float* %p, i64 2
+  %arrayidx2 = getelementptr inbounds float, float* %p, i64 2
   %2 = load float* %arrayidx2, align 4
-  %arrayidx3 = getelementptr inbounds float* %p, i64 3
+  %arrayidx3 = getelementptr inbounds float, float* %p, i64 3
   %3 = load float* %arrayidx3, align 4
-  %arrayidx4 = getelementptr inbounds float* %p, i64 4
+  %arrayidx4 = getelementptr inbounds float, float* %p, i64 4
   %4 = load float* %arrayidx4, align 4
-  %arrayidx5 = getelementptr inbounds float* %p, i64 5
+  %arrayidx5 = getelementptr inbounds float, float* %p, i64 5
   %5 = load float* %arrayidx5, align 4
-  %arrayidx6 = getelementptr inbounds float* %p, i64 6
+  %arrayidx6 = getelementptr inbounds float, float* %p, i64 6
   %6 = load float* %arrayidx6, align 4
-  %arrayidx7 = getelementptr inbounds float* %p, i64 7
+  %arrayidx7 = getelementptr inbounds float, float* %p, i64 7
   %7 = load float* %arrayidx7, align 4
   %mul = fmul fast float %0, %1
   %add = fadd fast float %mul, %7
@@ -205,7 +205,7 @@ entry:
   %mul16 = fmul fast float %2, %3
   %add17 = fadd fast float %mul16, %sub
   store float %add17, float* %q, align 4
-  %arrayidx19 = getelementptr inbounds float* %q, i64 1
+  %arrayidx19 = getelementptr inbounds float, float* %q, i64 1
   store float %add15, float* %arrayidx19, align 4
   ret void
 }
@@ -223,13 +223,13 @@ entry:
 define void @f5(float* nocapture readonly %p, float* nocapture %q) #0 {
 entry:
   %0 = load float* %p, align 4
-  %arrayidx1 = getelementptr inbounds float* %p, i64 1
+  %arrayidx1 = getelementptr inbounds float, float* %p, i64 1
   %1 = load float* %arrayidx1, align 4
-  %arrayidx2 = getelementptr inbounds float* %p, i64 2
+  %arrayidx2 = getelementptr inbounds float, float* %p, i64 2
   %2 = load float* %arrayidx2, align 4
-  %arrayidx3 = getelementptr inbounds float* %p, i64 3
+  %arrayidx3 = getelementptr inbounds float, float* %p, i64 3
   %3 = load float* %arrayidx3, align 4
-  %arrayidx4 = getelementptr inbounds float* %p, i64 4
+  %arrayidx4 = getelementptr inbounds float, float* %p, i64 4
   %4 = load float* %arrayidx4, align 4
   %mul = fmul fast float %0, %1
   %add = fadd fast float %mul, %4
@@ -265,13 +265,13 @@ if.end:
 define void @f6(double* nocapture readonly %p, double* nocapture %q) #0 {
 entry:
   %0 = load double* %p, align 8
-  %arrayidx1 = getelementptr inbounds double* %p, i64 1
+  %arrayidx1 = getelementptr inbounds double, double* %p, i64 1
   %1 = load double* %arrayidx1, align 8
-  %arrayidx2 = getelementptr inbounds double* %p, i64 2
+  %arrayidx2 = getelementptr inbounds double, double* %p, i64 2
   %2 = load double* %arrayidx2, align 8
-  %arrayidx3 = getelementptr inbounds double* %p, i64 3
+  %arrayidx3 = getelementptr inbounds double, double* %p, i64 3
   %3 = load double* %arrayidx3, align 8
-  %arrayidx4 = getelementptr inbounds double* %p, i64 4
+  %arrayidx4 = getelementptr inbounds double, double* %p, i64 4
   %4 = load double* %arrayidx4, align 8
   %mul = fmul fast double %0, %1
   %add = fadd fast double %mul, %4
@@ -300,13 +300,13 @@ declare double @hh(double) #1
 define void @f7(double* nocapture readonly %p, double* nocapture %q) #0 {
 entry:
   %0 = load double* %p, align 8
-  %arrayidx1 = getelementptr inbounds double* %p, i64 1
+  %arrayidx1 = getelementptr inbounds double, double* %p, i64 1
   %1 = load double* %arrayidx1, align 8
-  %arrayidx2 = getelementptr inbounds double* %p, i64 2
+  %arrayidx2 = getelementptr inbounds double, double* %p, i64 2
   %2 = load double* %arrayidx2, align 8
-  %arrayidx3 = getelementptr inbounds double* %p, i64 3
+  %arrayidx3 = getelementptr inbounds double, double* %p, i64 3
   %3 = load double* %arrayidx3, align 8
-  %arrayidx4 = getelementptr inbounds double* %p, i64 4
+  %arrayidx4 = getelementptr inbounds double, double* %p, i64 4
   %4 = load double* %arrayidx4, align 8
   %mul = fmul fast double %0, %1
   %add = fadd fast double %mul, %4

Modified: llvm/trunk/test/CodeGen/AArch64/aarch64-address-type-promotion-assertion.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/aarch64-address-type-promotion-assertion.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/aarch64-address-type-promotion-assertion.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/aarch64-address-type-promotion-assertion.ll Fri Feb 27 13:29:02 2015
@@ -16,9 +16,9 @@ if.then274:
 
 invoke.cont291:
   %idxprom.i.i.i605 = sext i32 %0 to i64
-  %arrayidx.i.i.i607 = getelementptr inbounds double* undef, i64 %idxprom.i.i.i605
+  %arrayidx.i.i.i607 = getelementptr inbounds double, double* undef, i64 %idxprom.i.i.i605
   %idxprom.i.i.i596 = sext i32 %0 to i64
-  %arrayidx.i.i.i598 = getelementptr inbounds double* undef, i64 %idxprom.i.i.i596
+  %arrayidx.i.i.i598 = getelementptr inbounds double, double* undef, i64 %idxprom.i.i.i596
   br label %if.end356
 
 if.else313:
@@ -30,7 +30,7 @@ invoke.cont317:
 
 invoke.cont326:
   %idxprom.i.i.i587 = sext i32 %0 to i64
-  %arrayidx.i.i.i589 = getelementptr inbounds double* undef, i64 %idxprom.i.i.i587
+  %arrayidx.i.i.i589 = getelementptr inbounds double, double* undef, i64 %idxprom.i.i.i587
   %sub329 = fsub fast double undef, undef
   br label %invoke.cont334
 
@@ -40,12 +40,12 @@ invoke.cont334:
 
 invoke.cont342:
   %idxprom.i.i.i578 = sext i32 %0 to i64
-  %arrayidx.i.i.i580 = getelementptr inbounds double* undef, i64 %idxprom.i.i.i578
+  %arrayidx.i.i.i580 = getelementptr inbounds double, double* undef, i64 %idxprom.i.i.i578
   br label %if.end356
 
 invoke.cont353:
   %idxprom.i.i.i572 = sext i32 %0 to i64
-  %arrayidx.i.i.i574 = getelementptr inbounds double* undef, i64 %idxprom.i.i.i572
+  %arrayidx.i.i.i574 = getelementptr inbounds double, double* undef, i64 %idxprom.i.i.i572
   br label %if.end356
 
 if.end356:

Modified: llvm/trunk/test/CodeGen/AArch64/aarch64-address-type-promotion.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/aarch64-address-type-promotion.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/aarch64-address-type-promotion.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/aarch64-address-type-promotion.ll Fri Feb 27 13:29:02 2015
@@ -14,15 +14,15 @@ entry:
 ; CHECK-NEXT: ret
   %add = add nsw i32 %i, 1
   %idxprom = sext i32 %add to i64
-  %arrayidx = getelementptr inbounds i32* %a, i64 %idxprom
+  %arrayidx = getelementptr inbounds i32, i32* %a, i64 %idxprom
   %0 = load i32* %arrayidx, align 4
   %add1 = add nsw i32 %i, 2
   %idxprom2 = sext i32 %add1 to i64
-  %arrayidx3 = getelementptr inbounds i32* %a, i64 %idxprom2
+  %arrayidx3 = getelementptr inbounds i32, i32* %a, i64 %idxprom2
   %1 = load i32* %arrayidx3, align 4
   %add4 = add nsw i32 %1, %0
   %idxprom5 = sext i32 %i to i64
-  %arrayidx6 = getelementptr inbounds i32* %a, i64 %idxprom5
+  %arrayidx6 = getelementptr inbounds i32, i32* %a, i64 %idxprom5
   store i32 %add4, i32* %arrayidx6, align 4
   ret void
 }

Modified: llvm/trunk/test/CodeGen/AArch64/aarch64-gep-opt.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/aarch64-gep-opt.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/aarch64-gep-opt.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/aarch64-gep-opt.ll Fri Feb 27 13:29:02 2015
@@ -14,13 +14,13 @@ target triple = "aarch64-linux-gnueabi"
 ; Check that when two complex GEPs are used in two basic blocks, LLVM can
 ; eliminate the common subexpression for the second use.
 define void @test_GEP_CSE([240 x %struct]* %string, i32* %adj, i32 %lib, i64 %idxprom) {
-  %liberties = getelementptr [240 x %struct]* %string, i64 1, i64 %idxprom, i32 3
+  %liberties = getelementptr [240 x %struct], [240 x %struct]* %string, i64 1, i64 %idxprom, i32 3
   %1 = load i32* %liberties, align 4
   %cmp = icmp eq i32 %1, %lib
   br i1 %cmp, label %if.then, label %if.end
 
 if.then:                                          ; preds = %entry
-  %origin = getelementptr [240 x %struct]* %string, i64 1, i64 %idxprom, i32 2
+  %origin = getelementptr [240 x %struct], [240 x %struct]* %string, i64 1, i64 %idxprom, i32 2
   %2 = load i32* %origin, align 4
   store i32 %2, i32* %adj, align 4
   br label %if.end
@@ -50,11 +50,11 @@ if.end:
 ; CHECK-UseAA-LABEL: @test_GEP_CSE(
 ; CHECK-UseAA: [[PTR0:%[a-zA-Z0-9]+]] = bitcast [240 x %struct]* %string to i8*
 ; CHECK-UseAA: [[IDX:%[a-zA-Z0-9]+]] = mul i64 %idxprom, 96
-; CHECK-UseAA: [[PTR1:%[a-zA-Z0-9]+]] = getelementptr i8* [[PTR0]], i64 [[IDX]]
-; CHECK-UseAA: getelementptr i8* [[PTR1]], i64 23052
+; CHECK-UseAA: [[PTR1:%[a-zA-Z0-9]+]] = getelementptr i8, i8* [[PTR0]], i64 [[IDX]]
+; CHECK-UseAA: getelementptr i8, i8* [[PTR1]], i64 23052
 ; CHECK-UseAA: bitcast
 ; CHECK-UseAA: if.then:
-; CHECK-UseAA: getelementptr i8* [[PTR1]], i64 23048
+; CHECK-UseAA: getelementptr i8, i8* [[PTR1]], i64 23048
 ; CHECK-UseAA: bitcast
 
 %class.my = type { i32, [128 x i32], i32, [256 x %struct.pt]}
@@ -65,9 +65,9 @@ if.end:
 ; calculation and code gen can generate a better addressing mode for the second
 ; use.
 define void @test_GEP_across_BB(%class.my* %this, i64 %idx) {
-  %1 = getelementptr %class.my* %this, i64 0, i32 3, i64 %idx, i32 1
+  %1 = getelementptr %class.my, %class.my* %this, i64 0, i32 3, i64 %idx, i32 1
   %2 = load i32* %1, align 4
-  %3 = getelementptr %class.my* %this, i64 0, i32 3, i64 %idx, i32 2
+  %3 = getelementptr %class.my, %class.my* %this, i64 0, i32 3, i64 %idx, i32 2
   %4 = load i32* %3, align 4
   %5 = icmp eq i32 %2, %4
   br i1 %5, label %if.true, label %exit
@@ -99,12 +99,12 @@ exit:
 
 ; CHECK-UseAA-LABEL: test_GEP_across_BB(
 ; CHECK-UseAA: [[PTR0:%[a-zA-Z0-9]+]] = getelementptr
-; CHECK-UseAA: getelementptr i8* [[PTR0]], i64 528
-; CHECK-UseAA: getelementptr i8* [[PTR0]], i64 532
+; CHECK-UseAA: getelementptr i8, i8* [[PTR0]], i64 528
+; CHECK-UseAA: getelementptr i8, i8* [[PTR0]], i64 532
 ; CHECK-UseAA: if.true:
-; CHECK-UseAA: {{%sunk[a-zA-Z0-9]+}} = getelementptr i8* [[PTR0]], i64 532
+; CHECK-UseAA: {{%sunk[a-zA-Z0-9]+}} = getelementptr i8, i8* [[PTR0]], i64 532
 ; CHECK-UseAA: exit:
-; CHECK-UseAA: {{%sunk[a-zA-Z0-9]+}} = getelementptr i8* [[PTR0]], i64 528
+; CHECK-UseAA: {{%sunk[a-zA-Z0-9]+}} = getelementptr i8, i8* [[PTR0]], i64 528
 
 %struct.S = type { float, double }
 @struct_array = global [1024 x %struct.S] zeroinitializer, align 16
@@ -118,7 +118,7 @@ define double* @test-struct_1(i32 %i) {
 entry:
   %add = add nsw i32 %i, 5
   %idxprom = sext i32 %add to i64
-  %p = getelementptr [1024 x %struct.S]* @struct_array, i64 0, i64 %idxprom, i32 1
+  %p = getelementptr [1024 x %struct.S], [1024 x %struct.S]* @struct_array, i64 0, i64 %idxprom, i32 1
   ret double* %p
 }
 ; CHECK-NoAA-LABEL: @test-struct_1(
@@ -126,7 +126,7 @@ entry:
 ; CHECK-NoAA: add i64 %{{[a-zA-Z0-9]+}}, 88
 
 ; CHECK-UseAA-LABEL: @test-struct_1(
-; CHECK-UseAA: getelementptr i8* %{{[a-zA-Z0-9]+}}, i64 88
+; CHECK-UseAA: getelementptr i8, i8* %{{[a-zA-Z0-9]+}}, i64 88
 
 %struct3 = type { i64, i32 }
 %struct2 = type { %struct3, i32 }
@@ -140,7 +140,7 @@ entry:
 define %struct2* @test-struct_2(%struct0* %ptr, i64 %idx) {
 entry:
   %arrayidx = add nsw i64 %idx, -2
-  %ptr2 = getelementptr %struct0* %ptr, i64 0, i32 3, i64 %arrayidx, i32 1
+  %ptr2 = getelementptr %struct0, %struct0* %ptr, i64 0, i32 3, i64 %arrayidx, i32 1
   ret %struct2* %ptr2
 }
 ; CHECK-NoAA-LABEL: @test-struct_2(
@@ -148,14 +148,14 @@ entry:
 ; CHECK-NoAA: add i64 %{{[a-zA-Z0-9]+}}, -40
 
 ; CHECK-UseAA-LABEL: @test-struct_2(
-; CHECK-UseAA: getelementptr i8* %{{[a-zA-Z0-9]+}}, i64 -40
+; CHECK-UseAA: getelementptr i8, i8* %{{[a-zA-Z0-9]+}}, i64 -40
 
 ; Test that when an index is added from two constants, the SeparateConstOffsetFromGEP
 ; pass does not generate an incorrect result.
 define void @test_const_add([3 x i32]* %in) {
   %inc = add nsw i32 2, 1
   %idxprom = sext i32 %inc to i64
-  %arrayidx = getelementptr [3 x i32]* %in, i64 %idxprom, i64 2
+  %arrayidx = getelementptr [3 x i32], [3 x i32]* %in, i64 %idxprom, i64 2
   store i32 0, i32* %arrayidx, align 4
   ret void
 }
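
The CHECK-UseAA constants in test_GEP_CSE above can be cross-checked with a few
lines of arithmetic. This is only a sanity-check sketch: the 96-byte size of
%struct comes from the existing "mul i64 %idxprom, 96" check, and the 12- and
8-byte field offsets are inferred from the emitted constants, since the %struct
definition itself sits outside the hunk.

# sanity check of the CHECK-UseAA offsets (assumes sizeof(%struct) == 96,
# per the "mul i64 %idxprom, 96" line; field offsets inferred, not looked up)
struct_size = 96
outer = 240 * struct_size        # stride of one [240 x %struct]; i64 1 -> 23040
print(outer + 12)                # 23052, the %liberties offset (field i32 3)
print(outer + 8)                 # 23048, the %origin offset (field i32 2)

Together with the per-element mul, that is exactly the base+23052 / base+23048
split the UseAA checks expect.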

Modified: llvm/trunk/test/CodeGen/AArch64/and-mask-removal.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/and-mask-removal.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/and-mask-removal.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/and-mask-removal.ll Fri Feb 27 13:29:02 2015
@@ -8,7 +8,7 @@
 define void @new_position(i32 %pos) {
 entry:
   %idxprom = sext i32 %pos to i64
-  %arrayidx = getelementptr inbounds [400 x i8]* @board, i64 0, i64 %idxprom
+  %arrayidx = getelementptr inbounds [400 x i8], [400 x i8]* @board, i64 0, i64 %idxprom
   %tmp = load i8* %arrayidx, align 1
   %.off = add i8 %tmp, -1
   %switch = icmp ult i8 %.off, 2
@@ -16,7 +16,7 @@ entry:
 
 if.then:                                          ; preds = %entry
   %tmp1 = load i32* @next_string, align 4
-  %arrayidx8 = getelementptr inbounds [400 x i32]* @string_number, i64 0, i64 %idxprom
+  %arrayidx8 = getelementptr inbounds [400 x i32], [400 x i32]* @string_number, i64 0, i64 %idxprom
   store i32 %tmp1, i32* %arrayidx8, align 4
   br label %if.end
 

Modified: llvm/trunk/test/CodeGen/AArch64/arm64-2011-03-21-Unaligned-Frame-Index.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/arm64-2011-03-21-Unaligned-Frame-Index.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/arm64-2011-03-21-Unaligned-Frame-Index.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/arm64-2011-03-21-Unaligned-Frame-Index.ll Fri Feb 27 13:29:02 2015
@@ -5,7 +5,7 @@ define void @foo(i64 %val) {
 ;   instruction that can handle that.
 ; CHECK: stur x0, [sp, #20]
   %a = alloca [49 x i32], align 4
-  %p32 = getelementptr inbounds [49 x i32]* %a, i64 0, i64 2
+  %p32 = getelementptr inbounds [49 x i32], [49 x i32]* %a, i64 0, i64 2
   %p = bitcast i32* %p32 to i64*
   store i64 %val, i64* %p, align 8
   ret void

Modified: llvm/trunk/test/CodeGen/AArch64/arm64-2012-05-22-LdStOptBug.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/arm64-2012-05-22-LdStOptBug.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/arm64-2012-05-22-LdStOptBug.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/arm64-2012-05-22-LdStOptBug.ll Fri Feb 27 13:29:02 2015
@@ -17,19 +17,19 @@ entry:
 ; CHECK: ldp d{{[0-9]+}}, d{{[0-9]+}}
   %ivar = load i64* @"OBJC_IVAR_$_UIScreen._bounds", align 8, !invariant.load !4
   %0 = bitcast %0* %self to i8*
-  %add.ptr = getelementptr inbounds i8* %0, i64 %ivar
+  %add.ptr = getelementptr inbounds i8, i8* %0, i64 %ivar
   %add.ptr10.0 = bitcast i8* %add.ptr to double*
   %tmp11 = load double* %add.ptr10.0, align 8
   %add.ptr.sum = add i64 %ivar, 8
-  %add.ptr10.1 = getelementptr inbounds i8* %0, i64 %add.ptr.sum
+  %add.ptr10.1 = getelementptr inbounds i8, i8* %0, i64 %add.ptr.sum
   %1 = bitcast i8* %add.ptr10.1 to double*
   %tmp12 = load double* %1, align 8
   %add.ptr.sum17 = add i64 %ivar, 16
-  %add.ptr4.1 = getelementptr inbounds i8* %0, i64 %add.ptr.sum17
+  %add.ptr4.1 = getelementptr inbounds i8, i8* %0, i64 %add.ptr.sum17
   %add.ptr4.1.0 = bitcast i8* %add.ptr4.1 to double*
   %tmp = load double* %add.ptr4.1.0, align 8
   %add.ptr4.1.sum = add i64 %ivar, 24
-  %add.ptr4.1.1 = getelementptr inbounds i8* %0, i64 %add.ptr4.1.sum
+  %add.ptr4.1.1 = getelementptr inbounds i8, i8* %0, i64 %add.ptr4.1.sum
   %2 = bitcast i8* %add.ptr4.1.1 to double*
   %tmp5 = load double* %2, align 8
   %insert14 = insertvalue %struct.CGPoint undef, double %tmp11, 0

Modified: llvm/trunk/test/CodeGen/AArch64/arm64-2012-07-11-InstrEmitterBug.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/arm64-2012-07-11-InstrEmitterBug.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/arm64-2012-07-11-InstrEmitterBug.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/arm64-2012-07-11-InstrEmitterBug.ll Fri Feb 27 13:29:02 2015
@@ -44,7 +44,7 @@ cond.true43:
   unreachable
 
 cond.false45:                                     ; preds = %for.body14
-  %add.ptr = getelementptr inbounds i8* %path, i64 %conv30
+  %add.ptr = getelementptr inbounds i8, i8* %path, i64 %conv30
   unreachable
 
 if.end56:                                         ; preds = %for.cond10, %entry

Modified: llvm/trunk/test/CodeGen/AArch64/arm64-abi-varargs.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/arm64-abi-varargs.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/arm64-abi-varargs.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/arm64-abi-varargs.ll Fri Feb 27 13:29:02 2015
@@ -159,11 +159,11 @@ entry:
   %0 = va_arg i8** %args, i32
   store i32 %0, i32* %vc, align 4
   %ap.cur = load i8** %args
-  %1 = getelementptr i8* %ap.cur, i32 15
+  %1 = getelementptr i8, i8* %ap.cur, i32 15
   %2 = ptrtoint i8* %1 to i64
   %3 = and i64 %2, -16
   %ap.align = inttoptr i64 %3 to i8*
-  %ap.next = getelementptr i8* %ap.align, i32 16
+  %ap.next = getelementptr i8, i8* %ap.align, i32 16
   store i8* %ap.next, i8** %args
   %4 = bitcast i8* %ap.align to %struct.s41*
   %5 = bitcast %struct.s41* %vs to i8*

Modified: llvm/trunk/test/CodeGen/AArch64/arm64-abi_align.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/arm64-abi_align.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/arm64-abi_align.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/arm64-abi_align.ll Fri Feb 27 13:29:02 2015
@@ -260,14 +260,14 @@ entry:
 ; FAST: ldr w[[B:[0-9]+]], [x2]
 ; FAST: add w[[C:[0-9]+]], w[[A]], w0
 ; FAST: add {{w[0-9]+}}, w[[C]], w[[B]]
-  %i1 = getelementptr inbounds %struct.s42* %s1, i64 0, i32 0
+  %i1 = getelementptr inbounds %struct.s42, %struct.s42* %s1, i64 0, i32 0
   %0 = load i32* %i1, align 4, !tbaa !0
-  %i2 = getelementptr inbounds %struct.s42* %s2, i64 0, i32 0
+  %i2 = getelementptr inbounds %struct.s42, %struct.s42* %s2, i64 0, i32 0
   %1 = load i32* %i2, align 4, !tbaa !0
-  %s = getelementptr inbounds %struct.s42* %s1, i64 0, i32 1
+  %s = getelementptr inbounds %struct.s42, %struct.s42* %s1, i64 0, i32 1
   %2 = load i16* %s, align 2, !tbaa !3
   %conv = sext i16 %2 to i32
-  %s5 = getelementptr inbounds %struct.s42* %s2, i64 0, i32 1
+  %s5 = getelementptr inbounds %struct.s42, %struct.s42* %s2, i64 0, i32 1
   %3 = load i16* %s5, align 2, !tbaa !3
   %conv6 = sext i16 %3 to i32
   %add = add i32 %0, %i
@@ -369,14 +369,14 @@ entry:
 ; FAST: ldr w[[B:[0-9]+]], [x2]
 ; FAST: add w[[C:[0-9]+]], w[[A]], w0
 ; FAST: add {{w[0-9]+}}, w[[C]], w[[B]]
-  %i1 = getelementptr inbounds %struct.s43* %s1, i64 0, i32 0
+  %i1 = getelementptr inbounds %struct.s43, %struct.s43* %s1, i64 0, i32 0
   %0 = load i32* %i1, align 4, !tbaa !0
-  %i2 = getelementptr inbounds %struct.s43* %s2, i64 0, i32 0
+  %i2 = getelementptr inbounds %struct.s43, %struct.s43* %s2, i64 0, i32 0
   %1 = load i32* %i2, align 4, !tbaa !0
-  %s = getelementptr inbounds %struct.s43* %s1, i64 0, i32 1
+  %s = getelementptr inbounds %struct.s43, %struct.s43* %s1, i64 0, i32 1
   %2 = load i16* %s, align 2, !tbaa !3
   %conv = sext i16 %2 to i32
-  %s5 = getelementptr inbounds %struct.s43* %s2, i64 0, i32 1
+  %s5 = getelementptr inbounds %struct.s43, %struct.s43* %s2, i64 0, i32 1
   %3 = load i16* %s5, align 2, !tbaa !3
   %conv6 = sext i16 %3 to i32
   %add = add i32 %0, %i

Modified: llvm/trunk/test/CodeGen/AArch64/arm64-addr-mode-folding.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/arm64-addr-mode-folding.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/arm64-addr-mode-folding.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/arm64-addr-mode-folding.ll Fri Feb 27 13:29:02 2015
@@ -12,10 +12,10 @@ define i32 @fct(i32 %i1, i32 %i2) {
 entry:
   %idxprom = sext i32 %i1 to i64
   %0 = load i8** @block, align 8
-  %arrayidx = getelementptr inbounds i8* %0, i64 %idxprom
+  %arrayidx = getelementptr inbounds i8, i8* %0, i64 %idxprom
   %1 = load i8* %arrayidx, align 1
   %idxprom1 = sext i32 %i2 to i64
-  %arrayidx2 = getelementptr inbounds i8* %0, i64 %idxprom1
+  %arrayidx2 = getelementptr inbounds i8, i8* %0, i64 %idxprom1
   %2 = load i8* %arrayidx2, align 1
   %cmp = icmp eq i8 %1, %2
   br i1 %cmp, label %if.end, label %if.then
@@ -29,10 +29,10 @@ if.end:
   %inc = add nsw i32 %i1, 1
   %inc9 = add nsw i32 %i2, 1
   %idxprom10 = sext i32 %inc to i64
-  %arrayidx11 = getelementptr inbounds i8* %0, i64 %idxprom10
+  %arrayidx11 = getelementptr inbounds i8, i8* %0, i64 %idxprom10
   %3 = load i8* %arrayidx11, align 1
   %idxprom12 = sext i32 %inc9 to i64
-  %arrayidx13 = getelementptr inbounds i8* %0, i64 %idxprom12
+  %arrayidx13 = getelementptr inbounds i8, i8* %0, i64 %idxprom12
   %4 = load i8* %arrayidx13, align 1
   %cmp16 = icmp eq i8 %3, %4
   br i1 %cmp16, label %if.end23, label %if.then18
@@ -46,10 +46,10 @@ if.end23:
   %inc24 = add nsw i32 %i1, 2
   %inc25 = add nsw i32 %i2, 2
   %idxprom26 = sext i32 %inc24 to i64
-  %arrayidx27 = getelementptr inbounds i8* %0, i64 %idxprom26
+  %arrayidx27 = getelementptr inbounds i8, i8* %0, i64 %idxprom26
   %5 = load i8* %arrayidx27, align 1
   %idxprom28 = sext i32 %inc25 to i64
-  %arrayidx29 = getelementptr inbounds i8* %0, i64 %idxprom28
+  %arrayidx29 = getelementptr inbounds i8, i8* %0, i64 %idxprom28
   %6 = load i8* %arrayidx29, align 1
   %cmp32 = icmp eq i8 %5, %6
   br i1 %cmp32, label %return, label %if.then34
@@ -72,10 +72,10 @@ define i32 @fct1(i32 %i1, i32 %i2) optsi
 entry:
   %idxprom = sext i32 %i1 to i64
   %0 = load i8** @block, align 8
-  %arrayidx = getelementptr inbounds i8* %0, i64 %idxprom
+  %arrayidx = getelementptr inbounds i8, i8* %0, i64 %idxprom
   %1 = load i8* %arrayidx, align 1
   %idxprom1 = sext i32 %i2 to i64
-  %arrayidx2 = getelementptr inbounds i8* %0, i64 %idxprom1
+  %arrayidx2 = getelementptr inbounds i8, i8* %0, i64 %idxprom1
   %2 = load i8* %arrayidx2, align 1
   %cmp = icmp eq i8 %1, %2
   br i1 %cmp, label %if.end, label %if.then
@@ -89,10 +89,10 @@ if.end:
   %inc = add nsw i32 %i1, 1
   %inc9 = add nsw i32 %i2, 1
   %idxprom10 = sext i32 %inc to i64
-  %arrayidx11 = getelementptr inbounds i8* %0, i64 %idxprom10
+  %arrayidx11 = getelementptr inbounds i8, i8* %0, i64 %idxprom10
   %3 = load i8* %arrayidx11, align 1
   %idxprom12 = sext i32 %inc9 to i64
-  %arrayidx13 = getelementptr inbounds i8* %0, i64 %idxprom12
+  %arrayidx13 = getelementptr inbounds i8, i8* %0, i64 %idxprom12
   %4 = load i8* %arrayidx13, align 1
   %cmp16 = icmp eq i8 %3, %4
   br i1 %cmp16, label %if.end23, label %if.then18
@@ -106,10 +106,10 @@ if.end23:
   %inc24 = add nsw i32 %i1, 2
   %inc25 = add nsw i32 %i2, 2
   %idxprom26 = sext i32 %inc24 to i64
-  %arrayidx27 = getelementptr inbounds i8* %0, i64 %idxprom26
+  %arrayidx27 = getelementptr inbounds i8, i8* %0, i64 %idxprom26
   %5 = load i8* %arrayidx27, align 1
   %idxprom28 = sext i32 %inc25 to i64
-  %arrayidx29 = getelementptr inbounds i8* %0, i64 %idxprom28
+  %arrayidx29 = getelementptr inbounds i8, i8* %0, i64 %idxprom28
   %6 = load i8* %arrayidx29, align 1
   %cmp32 = icmp eq i8 %5, %6
   br i1 %cmp32, label %return, label %if.then34
@@ -135,7 +135,7 @@ entry:
 
 if.then:                                          ; preds = %entry
   %idxprom = zext i8 %c to i64
-  %arrayidx = getelementptr inbounds i32* %array, i64 %idxprom
+  %arrayidx = getelementptr inbounds i32, i32* %array, i64 %idxprom
   %0 = load volatile i32* %arrayidx, align 4
   %1 = load volatile i32* %arrayidx, align 4
   %add3 = add nsw i32 %1, %0
@@ -159,7 +159,7 @@ entry:
 
 if.then:                                          ; preds = %entry
   %idxprom = zext i8 %c to i64
-  %arrayidx = getelementptr inbounds i32* %array, i64 %idxprom
+  %arrayidx = getelementptr inbounds i32, i32* %array, i64 %idxprom
   %0 = load volatile i32* %arrayidx, align 4
   %1 = load volatile i32* %arrayidx, align 4
   %add3 = add nsw i32 %1, %0

Modified: llvm/trunk/test/CodeGen/AArch64/arm64-addr-type-promotion.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/arm64-addr-type-promotion.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/arm64-addr-type-promotion.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/arm64-addr-type-promotion.ll Fri Feb 27 13:29:02 2015
@@ -29,10 +29,10 @@ define zeroext i8 @fullGtU(i32 %i1, i32
 entry:
   %idxprom = sext i32 %i1 to i64
   %tmp = load i8** @block, align 8
-  %arrayidx = getelementptr inbounds i8* %tmp, i64 %idxprom
+  %arrayidx = getelementptr inbounds i8, i8* %tmp, i64 %idxprom
   %tmp1 = load i8* %arrayidx, align 1
   %idxprom1 = sext i32 %i2 to i64
-  %arrayidx2 = getelementptr inbounds i8* %tmp, i64 %idxprom1
+  %arrayidx2 = getelementptr inbounds i8, i8* %tmp, i64 %idxprom1
   %tmp2 = load i8* %arrayidx2, align 1
   %cmp = icmp eq i8 %tmp1, %tmp2
   br i1 %cmp, label %if.end, label %if.then
@@ -46,10 +46,10 @@ if.end:
   %inc = add nsw i32 %i1, 1
   %inc10 = add nsw i32 %i2, 1
   %idxprom11 = sext i32 %inc to i64
-  %arrayidx12 = getelementptr inbounds i8* %tmp, i64 %idxprom11
+  %arrayidx12 = getelementptr inbounds i8, i8* %tmp, i64 %idxprom11
   %tmp3 = load i8* %arrayidx12, align 1
   %idxprom13 = sext i32 %inc10 to i64
-  %arrayidx14 = getelementptr inbounds i8* %tmp, i64 %idxprom13
+  %arrayidx14 = getelementptr inbounds i8, i8* %tmp, i64 %idxprom13
   %tmp4 = load i8* %arrayidx14, align 1
   %cmp17 = icmp eq i8 %tmp3, %tmp4
   br i1 %cmp17, label %if.end25, label %if.then19
@@ -63,10 +63,10 @@ if.end25:
   %inc26 = add nsw i32 %i1, 2
   %inc27 = add nsw i32 %i2, 2
   %idxprom28 = sext i32 %inc26 to i64
-  %arrayidx29 = getelementptr inbounds i8* %tmp, i64 %idxprom28
+  %arrayidx29 = getelementptr inbounds i8, i8* %tmp, i64 %idxprom28
   %tmp5 = load i8* %arrayidx29, align 1
   %idxprom30 = sext i32 %inc27 to i64
-  %arrayidx31 = getelementptr inbounds i8* %tmp, i64 %idxprom30
+  %arrayidx31 = getelementptr inbounds i8, i8* %tmp, i64 %idxprom30
   %tmp6 = load i8* %arrayidx31, align 1
   %cmp34 = icmp eq i8 %tmp5, %tmp6
   br i1 %cmp34, label %return, label %if.then36

Modified: llvm/trunk/test/CodeGen/AArch64/arm64-addrmode.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/arm64-addrmode.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/arm64-addrmode.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/arm64-addrmode.ll Fri Feb 27 13:29:02 2015
@@ -8,7 +8,7 @@
 ; CHECK: ldr xzr, [x{{[0-9]+}}, #8]
 ; CHECK: ret
 define void @t1() {
-  %incdec.ptr = getelementptr inbounds i64* @object, i64 1
+  %incdec.ptr = getelementptr inbounds i64, i64* @object, i64 1
   %tmp = load volatile i64* %incdec.ptr, align 8
   ret void
 }
@@ -20,7 +20,7 @@ define void @t1() {
 ; CHECK: [[ADDREG]]]
 ; CHECK: ret
 define void @t2() {
-  %incdec.ptr = getelementptr inbounds i64* @object, i64 -33
+  %incdec.ptr = getelementptr inbounds i64, i64* @object, i64 -33
   %tmp = load volatile i64* %incdec.ptr, align 8
   ret void
 }
@@ -30,7 +30,7 @@ define void @t2() {
 ; CHECK: ldr xzr, [x{{[0-9]+}}, #32760]
 ; CHECK: ret
 define void @t3() {
-  %incdec.ptr = getelementptr inbounds i64* @object, i64 4095
+  %incdec.ptr = getelementptr inbounds i64, i64* @object, i64 4095
   %tmp = load volatile i64* %incdec.ptr, align 8
   ret void
 }
@@ -41,7 +41,7 @@ define void @t3() {
 ; CHECK: ldr xzr, [x{{[0-9]+}}, x[[NUM]]]
 ; CHECK: ret
 define void @t4() {
-  %incdec.ptr = getelementptr inbounds i64* @object, i64 4096
+  %incdec.ptr = getelementptr inbounds i64, i64* @object, i64 4096
   %tmp = load volatile i64* %incdec.ptr, align 8
   ret void
 }
@@ -51,7 +51,7 @@ define void @t4() {
 ; CHECK: ldr xzr, [x{{[0-9]+}}, x{{[0-9]+}}, lsl #3]
 ; CHECK: ret
 define void @t5(i64 %a) {
-  %incdec.ptr = getelementptr inbounds i64* @object, i64 %a
+  %incdec.ptr = getelementptr inbounds i64, i64* @object, i64 %a
   %tmp = load volatile i64* %incdec.ptr, align 8
   ret void
 }
@@ -63,8 +63,8 @@ define void @t5(i64 %a) {
 ; CHECK: ldr xzr, [x{{[0-9]+}}, x[[NUM]]]
 ; CHECK: ret
 define void @t6(i64 %a) {
-  %tmp1 = getelementptr inbounds i64* @object, i64 %a
-  %incdec.ptr = getelementptr inbounds i64* %tmp1, i64 4096
+  %tmp1 = getelementptr inbounds i64, i64* @object, i64 %a
+  %incdec.ptr = getelementptr inbounds i64, i64* %tmp1, i64 4096
   %tmp = load volatile i64* %incdec.ptr, align 8
   ret void
 }
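
For t3 and t4 above, the dividing line between the two checks is just the
encoding limit of the unsigned-offset load form: a 64-bit ldr can fold an
immediate of up to 4095 scaled by the 8-byte access size, so i64 4095 still
folds while i64 4096 does not. A rough check, assuming that 12-bit
scaled-immediate limit (an AArch64 encoding detail, not something stated in
the test itself):

elem = 8                        # sizeof(i64)
limit = 4095 * elem             # 32760, largest foldable scaled offset
print(4095 * elem, 4095 * elem <= limit)   # 32760 True  -> "ldr xzr, [x..., #32760]"
print(4096 * elem, 4096 * elem <= limit)   # 32768 False -> offset goes via a register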

Modified: llvm/trunk/test/CodeGen/AArch64/arm64-atomic.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/arm64-atomic.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/arm64-atomic.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/arm64-atomic.ll Fri Feb 27 13:29:02 2015
@@ -115,21 +115,21 @@ define i32 @atomic_load(i32* %p) {
 
 define i8 @atomic_load_relaxed_8(i8* %p, i32 %off32) {
 ; CHECK-LABEL: atomic_load_relaxed_8:
-  %ptr_unsigned = getelementptr i8* %p, i32 4095
+  %ptr_unsigned = getelementptr i8, i8* %p, i32 4095
   %val_unsigned = load atomic i8* %ptr_unsigned monotonic, align 1
 ; CHECK: ldrb {{w[0-9]+}}, [x0, #4095]
 
-  %ptr_regoff = getelementptr i8* %p, i32 %off32
+  %ptr_regoff = getelementptr i8, i8* %p, i32 %off32
   %val_regoff = load atomic i8* %ptr_regoff unordered, align 1
   %tot1 = add i8 %val_unsigned, %val_regoff
 ; CHECK: ldrb {{w[0-9]+}}, [x0, w1, sxtw]
 
-  %ptr_unscaled = getelementptr i8* %p, i32 -256
+  %ptr_unscaled = getelementptr i8, i8* %p, i32 -256
   %val_unscaled = load atomic i8* %ptr_unscaled monotonic, align 1
   %tot2 = add i8 %tot1, %val_unscaled
 ; CHECK: ldurb {{w[0-9]+}}, [x0, #-256]
 
-  %ptr_random = getelementptr i8* %p, i32 1191936 ; 0x123000 (i.e. ADD imm)
+  %ptr_random = getelementptr i8, i8* %p, i32 1191936 ; 0x123000 (i.e. ADD imm)
   %val_random = load atomic i8* %ptr_random unordered, align 1
   %tot3 = add i8 %tot2, %val_random
 ; CHECK: add x[[ADDR:[0-9]+]], x0, #291, lsl #12
@@ -140,21 +140,21 @@ define i8 @atomic_load_relaxed_8(i8* %p,
 
 define i16 @atomic_load_relaxed_16(i16* %p, i32 %off32) {
 ; CHECK-LABEL: atomic_load_relaxed_16:
-  %ptr_unsigned = getelementptr i16* %p, i32 4095
+  %ptr_unsigned = getelementptr i16, i16* %p, i32 4095
   %val_unsigned = load atomic i16* %ptr_unsigned monotonic, align 2
 ; CHECK: ldrh {{w[0-9]+}}, [x0, #8190]
 
-  %ptr_regoff = getelementptr i16* %p, i32 %off32
+  %ptr_regoff = getelementptr i16, i16* %p, i32 %off32
   %val_regoff = load atomic i16* %ptr_regoff unordered, align 2
   %tot1 = add i16 %val_unsigned, %val_regoff
 ; CHECK: ldrh {{w[0-9]+}}, [x0, w1, sxtw #1]
 
-  %ptr_unscaled = getelementptr i16* %p, i32 -128
+  %ptr_unscaled = getelementptr i16, i16* %p, i32 -128
   %val_unscaled = load atomic i16* %ptr_unscaled monotonic, align 2
   %tot2 = add i16 %tot1, %val_unscaled
 ; CHECK: ldurh {{w[0-9]+}}, [x0, #-256]
 
-  %ptr_random = getelementptr i16* %p, i32 595968 ; 0x123000/2 (i.e. ADD imm)
+  %ptr_random = getelementptr i16, i16* %p, i32 595968 ; 0x123000/2 (i.e. ADD imm)
   %val_random = load atomic i16* %ptr_random unordered, align 2
   %tot3 = add i16 %tot2, %val_random
 ; CHECK: add x[[ADDR:[0-9]+]], x0, #291, lsl #12
@@ -165,21 +165,21 @@ define i16 @atomic_load_relaxed_16(i16*
 
 define i32 @atomic_load_relaxed_32(i32* %p, i32 %off32) {
 ; CHECK-LABEL: atomic_load_relaxed_32:
-  %ptr_unsigned = getelementptr i32* %p, i32 4095
+  %ptr_unsigned = getelementptr i32, i32* %p, i32 4095
   %val_unsigned = load atomic i32* %ptr_unsigned monotonic, align 4
 ; CHECK: ldr {{w[0-9]+}}, [x0, #16380]
 
-  %ptr_regoff = getelementptr i32* %p, i32 %off32
+  %ptr_regoff = getelementptr i32, i32* %p, i32 %off32
   %val_regoff = load atomic i32* %ptr_regoff unordered, align 4
   %tot1 = add i32 %val_unsigned, %val_regoff
 ; CHECK: ldr {{w[0-9]+}}, [x0, w1, sxtw #2]
 
-  %ptr_unscaled = getelementptr i32* %p, i32 -64
+  %ptr_unscaled = getelementptr i32, i32* %p, i32 -64
   %val_unscaled = load atomic i32* %ptr_unscaled monotonic, align 4
   %tot2 = add i32 %tot1, %val_unscaled
 ; CHECK: ldur {{w[0-9]+}}, [x0, #-256]
 
-  %ptr_random = getelementptr i32* %p, i32 297984 ; 0x123000/4 (i.e. ADD imm)
+  %ptr_random = getelementptr i32, i32* %p, i32 297984 ; 0x123000/4 (i.e. ADD imm)
   %val_random = load atomic i32* %ptr_random unordered, align 4
   %tot3 = add i32 %tot2, %val_random
 ; CHECK: add x[[ADDR:[0-9]+]], x0, #291, lsl #12
@@ -190,21 +190,21 @@ define i32 @atomic_load_relaxed_32(i32*
 
 define i64 @atomic_load_relaxed_64(i64* %p, i32 %off32) {
 ; CHECK-LABEL: atomic_load_relaxed_64:
-  %ptr_unsigned = getelementptr i64* %p, i32 4095
+  %ptr_unsigned = getelementptr i64, i64* %p, i32 4095
   %val_unsigned = load atomic i64* %ptr_unsigned monotonic, align 8
 ; CHECK: ldr {{x[0-9]+}}, [x0, #32760]
 
-  %ptr_regoff = getelementptr i64* %p, i32 %off32
+  %ptr_regoff = getelementptr i64, i64* %p, i32 %off32
   %val_regoff = load atomic i64* %ptr_regoff unordered, align 8
   %tot1 = add i64 %val_unsigned, %val_regoff
 ; CHECK: ldr {{x[0-9]+}}, [x0, w1, sxtw #3]
 
-  %ptr_unscaled = getelementptr i64* %p, i32 -32
+  %ptr_unscaled = getelementptr i64, i64* %p, i32 -32
   %val_unscaled = load atomic i64* %ptr_unscaled monotonic, align 8
   %tot2 = add i64 %tot1, %val_unscaled
 ; CHECK: ldur {{x[0-9]+}}, [x0, #-256]
 
-  %ptr_random = getelementptr i64* %p, i32 148992 ; 0x123000/8 (i.e. ADD imm)
+  %ptr_random = getelementptr i64, i64* %p, i32 148992 ; 0x123000/8 (i.e. ADD imm)
   %val_random = load atomic i64* %ptr_random unordered, align 8
   %tot3 = add i64 %tot2, %val_random
 ; CHECK: add x[[ADDR:[0-9]+]], x0, #291, lsl #12
@@ -223,19 +223,19 @@ define void @atomc_store(i32* %p) {
 
 define void @atomic_store_relaxed_8(i8* %p, i32 %off32, i8 %val) {
 ; CHECK-LABEL: atomic_store_relaxed_8:
-  %ptr_unsigned = getelementptr i8* %p, i32 4095
+  %ptr_unsigned = getelementptr i8, i8* %p, i32 4095
   store atomic i8 %val, i8* %ptr_unsigned monotonic, align 1
 ; CHECK: strb {{w[0-9]+}}, [x0, #4095]
 
-  %ptr_regoff = getelementptr i8* %p, i32 %off32
+  %ptr_regoff = getelementptr i8, i8* %p, i32 %off32
   store atomic i8 %val, i8* %ptr_regoff unordered, align 1
 ; CHECK: strb {{w[0-9]+}}, [x0, w1, sxtw]
 
-  %ptr_unscaled = getelementptr i8* %p, i32 -256
+  %ptr_unscaled = getelementptr i8, i8* %p, i32 -256
   store atomic i8 %val, i8* %ptr_unscaled monotonic, align 1
 ; CHECK: sturb {{w[0-9]+}}, [x0, #-256]
 
-  %ptr_random = getelementptr i8* %p, i32 1191936 ; 0x123000 (i.e. ADD imm)
+  %ptr_random = getelementptr i8, i8* %p, i32 1191936 ; 0x123000 (i.e. ADD imm)
   store atomic i8 %val, i8* %ptr_random unordered, align 1
 ; CHECK: add x[[ADDR:[0-9]+]], x0, #291, lsl #12
 ; CHECK: strb {{w[0-9]+}}, [x[[ADDR]]]
@@ -245,19 +245,19 @@ define void @atomic_store_relaxed_8(i8*
 
 define void @atomic_store_relaxed_16(i16* %p, i32 %off32, i16 %val) {
 ; CHECK-LABEL: atomic_store_relaxed_16:
-  %ptr_unsigned = getelementptr i16* %p, i32 4095
+  %ptr_unsigned = getelementptr i16, i16* %p, i32 4095
   store atomic i16 %val, i16* %ptr_unsigned monotonic, align 2
 ; CHECK: strh {{w[0-9]+}}, [x0, #8190]
 
-  %ptr_regoff = getelementptr i16* %p, i32 %off32
+  %ptr_regoff = getelementptr i16, i16* %p, i32 %off32
   store atomic i16 %val, i16* %ptr_regoff unordered, align 2
 ; CHECK: strh {{w[0-9]+}}, [x0, w1, sxtw #1]
 
-  %ptr_unscaled = getelementptr i16* %p, i32 -128
+  %ptr_unscaled = getelementptr i16, i16* %p, i32 -128
   store atomic i16 %val, i16* %ptr_unscaled monotonic, align 2
 ; CHECK: sturh {{w[0-9]+}}, [x0, #-256]
 
-  %ptr_random = getelementptr i16* %p, i32 595968 ; 0x123000/2 (i.e. ADD imm)
+  %ptr_random = getelementptr i16, i16* %p, i32 595968 ; 0x123000/2 (i.e. ADD imm)
   store atomic i16 %val, i16* %ptr_random unordered, align 2
 ; CHECK: add x[[ADDR:[0-9]+]], x0, #291, lsl #12
 ; CHECK: strh {{w[0-9]+}}, [x[[ADDR]]]
@@ -267,19 +267,19 @@ define void @atomic_store_relaxed_16(i16
 
 define void @atomic_store_relaxed_32(i32* %p, i32 %off32, i32 %val) {
 ; CHECK-LABEL: atomic_store_relaxed_32:
-  %ptr_unsigned = getelementptr i32* %p, i32 4095
+  %ptr_unsigned = getelementptr i32, i32* %p, i32 4095
   store atomic i32 %val, i32* %ptr_unsigned monotonic, align 4
 ; CHECK: str {{w[0-9]+}}, [x0, #16380]
 
-  %ptr_regoff = getelementptr i32* %p, i32 %off32
+  %ptr_regoff = getelementptr i32, i32* %p, i32 %off32
   store atomic i32 %val, i32* %ptr_regoff unordered, align 4
 ; CHECK: str {{w[0-9]+}}, [x0, w1, sxtw #2]
 
-  %ptr_unscaled = getelementptr i32* %p, i32 -64
+  %ptr_unscaled = getelementptr i32, i32* %p, i32 -64
   store atomic i32 %val, i32* %ptr_unscaled monotonic, align 4
 ; CHECK: stur {{w[0-9]+}}, [x0, #-256]
 
-  %ptr_random = getelementptr i32* %p, i32 297984 ; 0x123000/4 (i.e. ADD imm)
+  %ptr_random = getelementptr i32, i32* %p, i32 297984 ; 0x123000/4 (i.e. ADD imm)
   store atomic i32 %val, i32* %ptr_random unordered, align 4
 ; CHECK: add x[[ADDR:[0-9]+]], x0, #291, lsl #12
 ; CHECK: str {{w[0-9]+}}, [x[[ADDR]]]
@@ -289,19 +289,19 @@ define void @atomic_store_relaxed_32(i32
 
 define void @atomic_store_relaxed_64(i64* %p, i32 %off32, i64 %val) {
 ; CHECK-LABEL: atomic_store_relaxed_64:
-  %ptr_unsigned = getelementptr i64* %p, i32 4095
+  %ptr_unsigned = getelementptr i64, i64* %p, i32 4095
   store atomic i64 %val, i64* %ptr_unsigned monotonic, align 8
 ; CHECK: str {{x[0-9]+}}, [x0, #32760]
 
-  %ptr_regoff = getelementptr i64* %p, i32 %off32
+  %ptr_regoff = getelementptr i64, i64* %p, i32 %off32
   store atomic i64 %val, i64* %ptr_regoff unordered, align 8
 ; CHECK: str {{x[0-9]+}}, [x0, w1, sxtw #3]
 
-  %ptr_unscaled = getelementptr i64* %p, i32 -32
+  %ptr_unscaled = getelementptr i64, i64* %p, i32 -32
   store atomic i64 %val, i64* %ptr_unscaled monotonic, align 8
 ; CHECK: stur {{x[0-9]+}}, [x0, #-256]
 
-  %ptr_random = getelementptr i64* %p, i32 148992 ; 0x123000/8 (i.e. ADD imm)
+  %ptr_random = getelementptr i64, i64* %p, i32 148992 ; 0x123000/8 (i.e. ADD imm)
   store atomic i64 %val, i64* %ptr_random unordered, align 8
 ; CHECK: add x[[ADDR:[0-9]+]], x0, #291, lsl #12
 ; CHECK: str {{x[0-9]+}}, [x[[ADDR]]]
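
The "0x123000 (i.e. ADD imm)" comments scattered through these atomic tests all
describe the same byte offset; only the GEP index changes with the element size
so that the final address stays constant. A small check of that bookkeeping
(the "#291, lsl #12" pairing comes straight from the CHECK lines):

byte_offset = 0x123000
print(byte_offset, byte_offset == (291 << 12))   # 1191936 True, matches "add ..., #291, lsl #12"
for size in (1, 2, 4, 8):                        # i8, i16, i32, i64
    print(byte_offset // size)                   # 1191936, 595968, 297984, 148992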

Modified: llvm/trunk/test/CodeGen/AArch64/arm64-bcc.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/arm64-bcc.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/arm64-bcc.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/arm64-bcc.ll Fri Feb 27 13:29:02 2015
@@ -27,9 +27,9 @@ entry:
 define { i64, i1 } @foo(i64* , %Sstruct* , i1, i64) {
 entry:
   %.sroa.0 = alloca i72, align 16
-  %.count.value = getelementptr inbounds %Sstruct* %1, i64 0, i32 0, i32 0
+  %.count.value = getelementptr inbounds %Sstruct, %Sstruct* %1, i64 0, i32 0, i32 0
   %4 = load i64* %.count.value, align 8
-  %.repeatedValue.value = getelementptr inbounds %Sstruct* %1, i64 0, i32 1, i32 0
+  %.repeatedValue.value = getelementptr inbounds %Sstruct, %Sstruct* %1, i64 0, i32 1, i32 0
   %5 = load i32* %.repeatedValue.value, align 8
   %6 = icmp eq i64 %4, 0
   br label %7

Modified: llvm/trunk/test/CodeGen/AArch64/arm64-big-endian-varargs.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/arm64-big-endian-varargs.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/arm64-big-endian-varargs.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/arm64-big-endian-varargs.ll Fri Feb 27 13:29:02 2015
@@ -21,7 +21,7 @@ entry:
   %vl = alloca %struct.__va_list, align 8
   %vl1 = bitcast %struct.__va_list* %vl to i8*
   call void @llvm.va_start(i8* %vl1)
-  %vr_offs_p = getelementptr inbounds %struct.__va_list* %vl, i64 0, i32 4
+  %vr_offs_p = getelementptr inbounds %struct.__va_list, %struct.__va_list* %vl, i64 0, i32 4
   %vr_offs = load i32* %vr_offs_p, align 4
   %0 = icmp sgt i32 %vr_offs, -1
   br i1 %0, label %vaarg.on_stack, label %vaarg.maybe_reg
@@ -33,19 +33,19 @@ vaarg.maybe_reg:
   br i1 %inreg, label %vaarg.in_reg, label %vaarg.on_stack
 
 vaarg.in_reg:                                     ; preds = %vaarg.maybe_reg
-  %reg_top_p = getelementptr inbounds %struct.__va_list* %vl, i64 0, i32 2
+  %reg_top_p = getelementptr inbounds %struct.__va_list, %struct.__va_list* %vl, i64 0, i32 2
   %reg_top = load i8** %reg_top_p, align 8
   %1 = sext i32 %vr_offs to i64
-  %2 = getelementptr i8* %reg_top, i64 %1
+  %2 = getelementptr i8, i8* %reg_top, i64 %1
   %3 = ptrtoint i8* %2 to i64
   %align_be = add i64 %3, 8
   %4 = inttoptr i64 %align_be to i8*
   br label %vaarg.end
 
 vaarg.on_stack:                                   ; preds = %vaarg.maybe_reg, %entry
-  %stack_p = getelementptr inbounds %struct.__va_list* %vl, i64 0, i32 0
+  %stack_p = getelementptr inbounds %struct.__va_list, %struct.__va_list* %vl, i64 0, i32 0
   %stack = load i8** %stack_p, align 8
-  %new_stack = getelementptr i8* %stack, i64 8
+  %new_stack = getelementptr i8, i8* %stack, i64 8
   store i8* %new_stack, i8** %stack_p, align 8
   br label %vaarg.end
 

Modified: llvm/trunk/test/CodeGen/AArch64/arm64-big-stack.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/arm64-big-stack.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/arm64-big-stack.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/arm64-big-stack.ll Fri Feb 27 13:29:02 2015
@@ -13,7 +13,7 @@ target triple = "arm64-apple-macosx10"
 define void @foo() nounwind ssp {
 entry:
   %buffer = alloca [33554432 x i8], align 1
-  %arraydecay = getelementptr inbounds [33554432 x i8]* %buffer, i64 0, i64 0
+  %arraydecay = getelementptr inbounds [33554432 x i8], [33554432 x i8]* %buffer, i64 0, i64 0
   call void @doit(i8* %arraydecay) nounwind
   ret void
 }

Modified: llvm/trunk/test/CodeGen/AArch64/arm64-bitfield-extract.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/arm64-bitfield-extract.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/arm64-bitfield-extract.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/arm64-bitfield-extract.ll Fri Feb 27 13:29:02 2015
@@ -13,7 +13,7 @@ define void @foo(%struct.X* nocapture %x
 
   %tmp = bitcast %struct.X* %x to i32*
   %tmp1 = load i32* %tmp, align 4
-  %b = getelementptr inbounds %struct.Y* %y, i64 0, i32 1
+  %b = getelementptr inbounds %struct.Y, %struct.Y* %y, i64 0, i32 1
   %bf.clear = lshr i32 %tmp1, 3
   %bf.clear.lobit = and i32 %bf.clear, 1
   %frombool = trunc i32 %bf.clear.lobit to i8
@@ -47,7 +47,7 @@ define void @fct1(%struct.Z* nocapture %
 
   %tmp = bitcast %struct.Z* %x to i64*
   %tmp1 = load i64* %tmp, align 4
-  %b = getelementptr inbounds %struct.A* %y, i64 0, i32 0
+  %b = getelementptr inbounds %struct.A, %struct.A* %y, i64 0, i32 0
   %bf.clear = lshr i64 %tmp1, 3
   %bf.clear.lobit = and i64 %bf.clear, 1
   store i64 %bf.clear.lobit, i64* %b, align 8
@@ -421,7 +421,7 @@ entry:
   br i1 %tobool, label %if.end, label %if.then
 
 if.then:                                          ; preds = %entry
-  %arrayidx3 = getelementptr inbounds [65536 x i8]* @first_ones, i64 0, i64 %x.sroa.5.0.extract.shift
+  %arrayidx3 = getelementptr inbounds [65536 x i8], [65536 x i8]* @first_ones, i64 0, i64 %x.sroa.5.0.extract.shift
   %0 = load i8* %arrayidx3, align 1
   %conv = zext i8 %0 to i32
   br label %return
@@ -443,7 +443,7 @@ if.then7:
 ; CHECK-NOT: and
 ; CHECK-NOT: ubfm
   %idxprom10 = and i64 %x.sroa.3.0.extract.shift, 65535
-  %arrayidx11 = getelementptr inbounds [65536 x i8]* @first_ones, i64 0, i64 %idxprom10
+  %arrayidx11 = getelementptr inbounds [65536 x i8], [65536 x i8]* @first_ones, i64 0, i64 %idxprom10
   %1 = load i8* %arrayidx11, align 1
   %conv12 = zext i8 %1 to i32
   %add = add nsw i32 %conv12, 16
@@ -466,7 +466,7 @@ if.then17:
 ; CHECK-NOT: and
 ; CHECK-NOT: ubfm
   %idxprom20 = and i64 %x.sroa.1.0.extract.shift, 65535
-  %arrayidx21 = getelementptr inbounds [65536 x i8]* @first_ones, i64 0, i64 %idxprom20
+  %arrayidx21 = getelementptr inbounds [65536 x i8], [65536 x i8]* @first_ones, i64 0, i64 %idxprom20
   %2 = load i8* %arrayidx21, align 1
   %conv22 = zext i8 %2 to i32
   %add23 = add nsw i32 %conv22, 32
@@ -509,7 +509,7 @@ define i64 @fct21(i64 %x) {
 entry:
   %shr = lshr i64 %x, 4
   %and = and i64 %shr, 15
-  %arrayidx = getelementptr inbounds [8 x [64 x i64]]* @arr, i64 0, i64 0, i64 %and
+  %arrayidx = getelementptr inbounds [8 x [64 x i64]], [8 x [64 x i64]]* @arr, i64 0, i64 0, i64 %and
   %0 = load i64* %arrayidx, align 8
   ret i64 %0
 }

Modified: llvm/trunk/test/CodeGen/AArch64/arm64-cast-opt.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/arm64-cast-opt.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/arm64-cast-opt.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/arm64-cast-opt.ll Fri Feb 27 13:29:02 2015
@@ -12,10 +12,10 @@ define zeroext i8 @foo(i32 %i1, i32 %i2)
 entry:
   %idxprom = sext i32 %i1 to i64
   %0 = load i8** @block, align 8
-  %arrayidx = getelementptr inbounds i8* %0, i64 %idxprom
+  %arrayidx = getelementptr inbounds i8, i8* %0, i64 %idxprom
   %1 = load i8* %arrayidx, align 1
   %idxprom1 = sext i32 %i2 to i64
-  %arrayidx2 = getelementptr inbounds i8* %0, i64 %idxprom1
+  %arrayidx2 = getelementptr inbounds i8, i8* %0, i64 %idxprom1
   %2 = load i8* %arrayidx2, align 1
   %cmp = icmp eq i8 %1, %2
   br i1 %cmp, label %return, label %if.then

Modified: llvm/trunk/test/CodeGen/AArch64/arm64-ccmp-heuristics.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/arm64-ccmp-heuristics.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/arm64-ccmp-heuristics.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/arm64-ccmp-heuristics.ll Fri Feb 27 13:29:02 2015
@@ -30,22 +30,22 @@ for.body:
   %i.092 = phi i64 [ 1, %entry ], [ %inc53, %for.inc ]
   %numLeft.091 = phi i32 [ 0, %entry ], [ %numLeft.1, %for.inc ]
   %2 = load i8** @mazeRoute, align 8, !tbaa !3
-  %arrayidx = getelementptr inbounds i8* %2, i64 %i.092
+  %arrayidx = getelementptr inbounds i8, i8* %2, i64 %i.092
   %3 = load i8* %arrayidx, align 1, !tbaa !1
   %tobool = icmp eq i8 %3, 0
   br i1 %tobool, label %for.inc, label %if.then
 
 if.then:                                          ; preds = %for.body
   %4 = load i64** @TOP, align 8, !tbaa !3
-  %arrayidx1 = getelementptr inbounds i64* %4, i64 %i.092
+  %arrayidx1 = getelementptr inbounds i64, i64* %4, i64 %i.092
   %5 = load i64* %arrayidx1, align 8, !tbaa !0
   %6 = load i64** @netsAssign, align 8, !tbaa !3
-  %arrayidx2 = getelementptr inbounds i64* %6, i64 %5
+  %arrayidx2 = getelementptr inbounds i64, i64* %6, i64 %5
   %7 = load i64* %arrayidx2, align 8, !tbaa !0
   %8 = load i64** @BOT, align 8, !tbaa !3
-  %arrayidx3 = getelementptr inbounds i64* %8, i64 %i.092
+  %arrayidx3 = getelementptr inbounds i64, i64* %8, i64 %i.092
   %9 = load i64* %arrayidx3, align 8, !tbaa !0
-  %arrayidx4 = getelementptr inbounds i64* %6, i64 %9
+  %arrayidx4 = getelementptr inbounds i64, i64* %6, i64 %9
   %10 = load i64* %arrayidx4, align 8, !tbaa !0
   %cmp5 = icmp ugt i64 %i.092, 1
   %cmp6 = icmp ugt i64 %10, 1
@@ -65,14 +65,14 @@ land.lhs.true7.if.else_crit_edge:
 
 if.then9:                                         ; preds = %land.lhs.true7
   %12 = load i8** @mazeRoute, align 8, !tbaa !3
-  %arrayidx10 = getelementptr inbounds i8* %12, i64 %i.092
+  %arrayidx10 = getelementptr inbounds i8, i8* %12, i64 %i.092
   store i8 0, i8* %arrayidx10, align 1, !tbaa !1
   %13 = load i64** @TOP, align 8, !tbaa !3
-  %arrayidx11 = getelementptr inbounds i64* %13, i64 %i.092
+  %arrayidx11 = getelementptr inbounds i64, i64* %13, i64 %i.092
   %14 = load i64* %arrayidx11, align 8, !tbaa !0
   tail call fastcc void @CleanNet(i64 %14)
   %15 = load i64** @BOT, align 8, !tbaa !3
-  %arrayidx12 = getelementptr inbounds i64* %15, i64 %i.092
+  %arrayidx12 = getelementptr inbounds i64, i64* %15, i64 %i.092
   %16 = load i64* %arrayidx12, align 8, !tbaa !0
   tail call fastcc void @CleanNet(i64 %16)
   br label %for.inc
@@ -92,14 +92,14 @@ land.lhs.true16:
 
 if.then20:                                        ; preds = %land.lhs.true16
   %19 = load i8** @mazeRoute, align 8, !tbaa !3
-  %arrayidx21 = getelementptr inbounds i8* %19, i64 %i.092
+  %arrayidx21 = getelementptr inbounds i8, i8* %19, i64 %i.092
   store i8 0, i8* %arrayidx21, align 1, !tbaa !1
   %20 = load i64** @TOP, align 8, !tbaa !3
-  %arrayidx22 = getelementptr inbounds i64* %20, i64 %i.092
+  %arrayidx22 = getelementptr inbounds i64, i64* %20, i64 %i.092
   %21 = load i64* %arrayidx22, align 8, !tbaa !0
   tail call fastcc void @CleanNet(i64 %21)
   %22 = load i64** @BOT, align 8, !tbaa !3
-  %arrayidx23 = getelementptr inbounds i64* %22, i64 %i.092
+  %arrayidx23 = getelementptr inbounds i64, i64* %22, i64 %i.092
   %23 = load i64* %arrayidx23, align 8, !tbaa !0
   tail call fastcc void @CleanNet(i64 %23)
   br label %for.inc
@@ -120,14 +120,14 @@ land.lhs.true28:
 
 if.then32:                                        ; preds = %land.lhs.true28
   %25 = load i8** @mazeRoute, align 8, !tbaa !3
-  %arrayidx33 = getelementptr inbounds i8* %25, i64 %i.092
+  %arrayidx33 = getelementptr inbounds i8, i8* %25, i64 %i.092
   store i8 0, i8* %arrayidx33, align 1, !tbaa !1
   %26 = load i64** @TOP, align 8, !tbaa !3
-  %arrayidx34 = getelementptr inbounds i64* %26, i64 %i.092
+  %arrayidx34 = getelementptr inbounds i64, i64* %26, i64 %i.092
   %27 = load i64* %arrayidx34, align 8, !tbaa !0
   tail call fastcc void @CleanNet(i64 %27)
   %28 = load i64** @BOT, align 8, !tbaa !3
-  %arrayidx35 = getelementptr inbounds i64* %28, i64 %i.092
+  %arrayidx35 = getelementptr inbounds i64, i64* %28, i64 %i.092
   %29 = load i64* %arrayidx35, align 8, !tbaa !0
   tail call fastcc void @CleanNet(i64 %29)
   br label %for.inc
@@ -150,14 +150,14 @@ land.lhs.true40:
 
 if.then44:                                        ; preds = %land.lhs.true40
   %32 = load i8** @mazeRoute, align 8, !tbaa !3
-  %arrayidx45 = getelementptr inbounds i8* %32, i64 %i.092
+  %arrayidx45 = getelementptr inbounds i8, i8* %32, i64 %i.092
   store i8 0, i8* %arrayidx45, align 1, !tbaa !1
   %33 = load i64** @TOP, align 8, !tbaa !3
-  %arrayidx46 = getelementptr inbounds i64* %33, i64 %i.092
+  %arrayidx46 = getelementptr inbounds i64, i64* %33, i64 %i.092
   %34 = load i64* %arrayidx46, align 8, !tbaa !0
   tail call fastcc void @CleanNet(i64 %34)
   %35 = load i64** @BOT, align 8, !tbaa !3
-  %arrayidx47 = getelementptr inbounds i64* %35, i64 %i.092
+  %arrayidx47 = getelementptr inbounds i64, i64* %35, i64 %i.092
   %36 = load i64* %arrayidx47, align 8, !tbaa !0
   tail call fastcc void @CleanNet(i64 %36)
   br label %for.inc

Modified: llvm/trunk/test/CodeGen/AArch64/arm64-ccmp.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/arm64-ccmp.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/arm64-ccmp.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/arm64-ccmp.ll Fri Feb 27 13:29:02 2015
@@ -281,9 +281,9 @@ if.end85:
 
 sw.bb.i.i:
   %ref.tr.i.i = phi %str1* [ %0, %sw.bb.i.i ], [ undef, %entry ]
-  %operands.i.i = getelementptr inbounds %str1* %ref.tr.i.i, i64 0, i32 0, i32 2
+  %operands.i.i = getelementptr inbounds %str1, %str1* %ref.tr.i.i, i64 0, i32 0, i32 2
   %arrayidx.i.i = bitcast i32* %operands.i.i to %str1**
   %0 = load %str1** %arrayidx.i.i, align 8
-  %code1.i.i.phi.trans.insert = getelementptr inbounds %str1* %0, i64 0, i32 0, i32 0, i64 16
+  %code1.i.i.phi.trans.insert = getelementptr inbounds %str1, %str1* %0, i64 0, i32 0, i32 0, i64 16
   br label %sw.bb.i.i
 }

Modified: llvm/trunk/test/CodeGen/AArch64/arm64-collect-loh-garbage-crash.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/arm64-collect-loh-garbage-crash.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/arm64-collect-loh-garbage-crash.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/arm64-collect-loh-garbage-crash.ll Fri Feb 27 13:29:02 2015
@@ -27,7 +27,7 @@ if.then83:
   tail call void asm sideeffect "", "~{x19},~{x20},~{x21},~{x22},~{x23},~{x24},~{x25},~{x26},~{x27}"()
   %tmp2 = load %"class.H4ISP::H4ISPDevice"** @pH4ISPDevice, align 8
   tail call void asm sideeffect "", "~{x19},~{x20},~{x21},~{x22},~{x23},~{x24},~{x25},~{x26},~{x28}"()
-  %pCameraManager.i268 = getelementptr inbounds %"class.H4ISP::H4ISPDevice"* %tmp2, i64 0, i32 3
+  %pCameraManager.i268 = getelementptr inbounds %"class.H4ISP::H4ISPDevice", %"class.H4ISP::H4ISPDevice"* %tmp2, i64 0, i32 3
   %tmp3 = load %"class.H4ISP::H4ISPCameraManager"** %pCameraManager.i268, align 8
   %tobool.i269 = icmp eq %"class.H4ISP::H4ISPCameraManager"* %tmp3, null
   br i1 %tobool.i269, label %if.then83, label %end

Modified: llvm/trunk/test/CodeGen/AArch64/arm64-complex-copy-noneon.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/arm64-complex-copy-noneon.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/arm64-complex-copy-noneon.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/arm64-complex-copy-noneon.ll Fri Feb 27 13:29:02 2015
@@ -8,13 +8,13 @@ define void @store_combine() nounwind {
   %src = alloca { double, double }, align 8
   %dst = alloca { double, double }, align 8
 
-  %src.realp = getelementptr inbounds { double, double }* %src, i32 0, i32 0
+  %src.realp = getelementptr inbounds { double, double }, { double, double }* %src, i32 0, i32 0
   %src.real = load double* %src.realp
-  %src.imagp = getelementptr inbounds { double, double }* %src, i32 0, i32 1
+  %src.imagp = getelementptr inbounds { double, double }, { double, double }* %src, i32 0, i32 1
   %src.imag = load double* %src.imagp
 
-  %dst.realp = getelementptr inbounds { double, double }* %dst, i32 0, i32 0
-  %dst.imagp = getelementptr inbounds { double, double }* %dst, i32 0, i32 1
+  %dst.realp = getelementptr inbounds { double, double }, { double, double }* %dst, i32 0, i32 0
+  %dst.imagp = getelementptr inbounds { double, double }, { double, double }* %dst, i32 0, i32 1
   store double %src.real, double* %dst.realp
   store double %src.imag, double* %dst.imagp
   ret void

Modified: llvm/trunk/test/CodeGen/AArch64/arm64-const-addr.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/arm64-const-addr.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/arm64-const-addr.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/arm64-const-addr.ll Fri Feb 27 13:29:02 2015
@@ -10,12 +10,12 @@ define i32 @test1() nounwind {
 ; CHECK-NEXT:   ldp w9, w10, [x8, #4]
 ; CHECK:        ldr w8, [x8, #12]
   %at = inttoptr i64 68141056 to %T*
-  %o1 = getelementptr %T* %at, i32 0, i32 1
+  %o1 = getelementptr %T, %T* %at, i32 0, i32 1
   %t1 = load i32* %o1
-  %o2 = getelementptr %T* %at, i32 0, i32 2
+  %o2 = getelementptr %T, %T* %at, i32 0, i32 2
   %t2 = load i32* %o2
   %a1 = add i32 %t1, %t2
-  %o3 = getelementptr %T* %at, i32 0, i32 3
+  %o3 = getelementptr %T, %T* %at, i32 0, i32 3
   %t3 = load i32* %o3
   %a2 = add i32 %a1, %t3
   ret i32 %a2

Modified: llvm/trunk/test/CodeGen/AArch64/arm64-cse.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/arm64-cse.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/arm64-cse.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/arm64-cse.ll Fri Feb 27 13:29:02 2015
@@ -25,7 +25,7 @@ if.end:
  %s2 = sub nsw i32 %s, %size
  %s3 = sub nsw i32 %sub, %s2
  store i32 %s3, i32* %offset, align 4
- %add.ptr = getelementptr inbounds i8* %base, i32 %sub
+ %add.ptr = getelementptr inbounds i8, i8* %base, i32 %sub
  br label %return
 
 return:
@@ -50,7 +50,7 @@ entry:
 if.end:
  %sub = sub nsw i32 %0, 1
  store i32 %sub, i32* %offset, align 4
- %add.ptr = getelementptr inbounds i8* %base, i32 %sub
+ %add.ptr = getelementptr inbounds i8, i8* %base, i32 %sub
  br label %return
 
 return:

Modified: llvm/trunk/test/CodeGen/AArch64/arm64-dagcombiner-dead-indexed-load.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/arm64-dagcombiner-dead-indexed-load.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/arm64-dagcombiner-dead-indexed-load.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/arm64-dagcombiner-dead-indexed-load.ll Fri Feb 27 13:29:02 2015
@@ -17,7 +17,7 @@ target triple = "arm64-apple-ios"
 ; CHECK-NOT: str
 define void @test(%"struct.SU"* nocapture %su) {
 entry:
-  %r1 = getelementptr inbounds %"struct.SU"* %su, i64 1, i32 5
+  %r1 = getelementptr inbounds %"struct.SU", %"struct.SU"* %su, i64 1, i32 5
   %r2 = bitcast %"struct.BO"* %r1 to i48*
   %r3 = load i48* %r2, align 8
   %r4 = and i48 %r3, -4294967296

Modified: llvm/trunk/test/CodeGen/AArch64/arm64-dagcombiner-load-slicing.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/arm64-dagcombiner-load-slicing.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/arm64-dagcombiner-load-slicing.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/arm64-dagcombiner-load-slicing.ll Fri Feb 27 13:29:02 2015
@@ -14,7 +14,7 @@
 ; CHECK: ret
 define void @test(%class.Complex* nocapture %out, i64 %out_start) {
 entry:
-  %arrayidx = getelementptr inbounds %class.Complex* %out, i64 %out_start
+  %arrayidx = getelementptr inbounds %class.Complex, %class.Complex* %out, i64 %out_start
   %0 = bitcast %class.Complex* %arrayidx to i64*
   %1 = load i64* %0, align 4
   %t0.sroa.0.0.extract.trunc = trunc i64 %1 to i32
@@ -23,12 +23,12 @@ entry:
   %t0.sroa.2.0.extract.trunc = trunc i64 %t0.sroa.2.0.extract.shift to i32
   %3 = bitcast i32 %t0.sroa.2.0.extract.trunc to float
   %add = add i64 %out_start, 8
-  %arrayidx2 = getelementptr inbounds %class.Complex* %out, i64 %add
-  %i.i = getelementptr inbounds %class.Complex* %arrayidx2, i64 0, i32 0
+  %arrayidx2 = getelementptr inbounds %class.Complex, %class.Complex* %out, i64 %add
+  %i.i = getelementptr inbounds %class.Complex, %class.Complex* %arrayidx2, i64 0, i32 0
   %4 = load float* %i.i, align 4
   %add.i = fadd float %4, %2
   %retval.sroa.0.0.vec.insert.i = insertelement <2 x float> undef, float %add.i, i32 0
-  %r.i = getelementptr inbounds %class.Complex* %arrayidx2, i64 0, i32 1
+  %r.i = getelementptr inbounds %class.Complex, %class.Complex* %arrayidx2, i64 0, i32 1
   %5 = load float* %r.i, align 4
   %add5.i = fadd float %5, %3
   %retval.sroa.0.4.vec.insert.i = insertelement <2 x float> %retval.sroa.0.0.vec.insert.i, float %add5.i, i32 1
@@ -46,7 +46,7 @@ entry:
 ; CHECK: ret
 define void @test_int(%class.Complex_int* nocapture %out, i64 %out_start) {
 entry:
-  %arrayidx = getelementptr inbounds %class.Complex_int* %out, i64 %out_start
+  %arrayidx = getelementptr inbounds %class.Complex_int, %class.Complex_int* %out, i64 %out_start
   %0 = bitcast %class.Complex_int* %arrayidx to i64*
   %1 = load i64* %0, align 4
   %t0.sroa.0.0.extract.trunc = trunc i64 %1 to i32
@@ -55,12 +55,12 @@ entry:
   %t0.sroa.2.0.extract.trunc = trunc i64 %t0.sroa.2.0.extract.shift to i32
   %3 = bitcast i32 %t0.sroa.2.0.extract.trunc to i32
   %add = add i64 %out_start, 8
-  %arrayidx2 = getelementptr inbounds %class.Complex_int* %out, i64 %add
-  %i.i = getelementptr inbounds %class.Complex_int* %arrayidx2, i64 0, i32 0
+  %arrayidx2 = getelementptr inbounds %class.Complex_int, %class.Complex_int* %out, i64 %add
+  %i.i = getelementptr inbounds %class.Complex_int, %class.Complex_int* %arrayidx2, i64 0, i32 0
   %4 = load i32* %i.i, align 4
   %add.i = add i32 %4, %2
   %retval.sroa.0.0.vec.insert.i = insertelement <2 x i32> undef, i32 %add.i, i32 0
-  %r.i = getelementptr inbounds %class.Complex_int* %arrayidx2, i64 0, i32 1
+  %r.i = getelementptr inbounds %class.Complex_int, %class.Complex_int* %arrayidx2, i64 0, i32 1
   %5 = load i32* %r.i, align 4
   %add5.i = add i32 %5, %3
   %retval.sroa.0.4.vec.insert.i = insertelement <2 x i32> %retval.sroa.0.0.vec.insert.i, i32 %add5.i, i32 1
@@ -78,7 +78,7 @@ entry:
 ; CHECK: ret
 define void @test_long(%class.Complex_long* nocapture %out, i64 %out_start) {
 entry:
-  %arrayidx = getelementptr inbounds %class.Complex_long* %out, i64 %out_start
+  %arrayidx = getelementptr inbounds %class.Complex_long, %class.Complex_long* %out, i64 %out_start
   %0 = bitcast %class.Complex_long* %arrayidx to i128*
   %1 = load i128* %0, align 4
   %t0.sroa.0.0.extract.trunc = trunc i128 %1 to i64
@@ -87,12 +87,12 @@ entry:
   %t0.sroa.2.0.extract.trunc = trunc i128 %t0.sroa.2.0.extract.shift to i64
   %3 = bitcast i64 %t0.sroa.2.0.extract.trunc to i64
   %add = add i64 %out_start, 8
-  %arrayidx2 = getelementptr inbounds %class.Complex_long* %out, i64 %add
-  %i.i = getelementptr inbounds %class.Complex_long* %arrayidx2, i32 0, i32 0
+  %arrayidx2 = getelementptr inbounds %class.Complex_long, %class.Complex_long* %out, i64 %add
+  %i.i = getelementptr inbounds %class.Complex_long, %class.Complex_long* %arrayidx2, i32 0, i32 0
   %4 = load i64* %i.i, align 4
   %add.i = add i64 %4, %2
   %retval.sroa.0.0.vec.insert.i = insertelement <2 x i64> undef, i64 %add.i, i32 0
-  %r.i = getelementptr inbounds %class.Complex_long* %arrayidx2, i32 0, i32 1
+  %r.i = getelementptr inbounds %class.Complex_long, %class.Complex_long* %arrayidx2, i32 0, i32 1
   %5 = load i64* %r.i, align 4
   %add5.i = add i64 %5, %3
   %retval.sroa.0.4.vec.insert.i = insertelement <2 x i64> %retval.sroa.0.0.vec.insert.i, i64 %add5.i, i32 1

Modified: llvm/trunk/test/CodeGen/AArch64/arm64-early-ifcvt.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/arm64-early-ifcvt.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/arm64-early-ifcvt.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/arm64-early-ifcvt.ll Fri Feb 27 13:29:02 2015
@@ -14,7 +14,7 @@ do.body:
   %min.0 = phi i32 [ 0, %entry ], [ %min.1, %do.cond ]
   %n.addr.0 = phi i32 [ %n, %entry ], [ %dec, %do.cond ]
   %p.addr.0 = phi i32* [ %p, %entry ], [ %incdec.ptr, %do.cond ]
-  %incdec.ptr = getelementptr inbounds i32* %p.addr.0, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %p.addr.0, i64 1
   %0 = load i32* %p.addr.0, align 4
   %cmp = icmp sgt i32 %0, %max.0
   br i1 %cmp, label %do.cond, label %if.else
@@ -412,7 +412,7 @@ if.then.i146:
 is_sbox.exit155:                                  ; preds = %if.then.i146, %for.body
   %seg_offset.0.i151 = phi i32 [ %add9.i145, %if.then.i146 ], [ undef, %for.body ]
   %idxprom15.i152 = sext i32 %seg_offset.0.i151 to i64
-  %arrayidx18.i154 = getelementptr inbounds i32* null, i64 %idxprom15.i152
+  %arrayidx18.i154 = getelementptr inbounds i32, i32* null, i64 %idxprom15.i152
   %x1 = load i32* %arrayidx18.i154, align 4
   br i1 undef, label %for.body51, label %for.body
 

Modified: llvm/trunk/test/CodeGen/AArch64/arm64-elf-globals.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/arm64-elf-globals.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/arm64-elf-globals.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/arm64-elf-globals.ll Fri Feb 27 13:29:02 2015
@@ -96,7 +96,7 @@ define i32 @test_vis() {
 @var_default = external global [2 x i32]
 
 define i32 @test_default_align() {
-  %addr = getelementptr [2 x i32]* @var_default, i32 0, i32 0
+  %addr = getelementptr [2 x i32], [2 x i32]* @var_default, i32 0, i32 0
   %val = load i32* %addr
   ret i32 %val
 ; CHECK-LABEL: test_default_align:

Modified: llvm/trunk/test/CodeGen/AArch64/arm64-extend.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/arm64-extend.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/arm64-extend.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/arm64-extend.ll Fri Feb 27 13:29:02 2015
@@ -8,7 +8,7 @@ define i64 @foo(i32 %i) {
 ; CHECK:  ldrsw x0, [x[[REG1]], w0, sxtw #2]
 ; CHECK:  ret
   %idxprom = sext i32 %i to i64
-  %arrayidx = getelementptr inbounds [0 x i32]* @array, i64 0, i64 %idxprom
+  %arrayidx = getelementptr inbounds [0 x i32], [0 x i32]* @array, i64 0, i64 %idxprom
   %tmp1 = load i32* %arrayidx, align 4
   %conv = sext i32 %tmp1 to i64
   ret i64 %conv

Modified: llvm/trunk/test/CodeGen/AArch64/arm64-extern-weak.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/arm64-extern-weak.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/arm64-extern-weak.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/arm64-extern-weak.ll Fri Feb 27 13:29:02 2015
@@ -30,7 +30,7 @@ define i32()* @foo() {
 @arr_var = extern_weak global [10 x i32]
 
 define i32* @bar() {
-  %addr = getelementptr [10 x i32]* @arr_var, i32 0, i32 5
+  %addr = getelementptr [10 x i32], [10 x i32]* @arr_var, i32 0, i32 5
 ; CHECK: adrp x[[ARR_VAR_HI:[0-9]+]], :got:arr_var
 ; CHECK: ldr [[ARR_VAR:x[0-9]+]], [x[[ARR_VAR_HI]], :got_lo12:arr_var]
 ; CHECK: add x0, [[ARR_VAR]], #20

Modified: llvm/trunk/test/CodeGen/AArch64/arm64-fast-isel-addr-offset.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/arm64-fast-isel-addr-offset.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/arm64-fast-isel-addr-offset.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/arm64-fast-isel-addr-offset.ll Fri Feb 27 13:29:02 2015
@@ -41,7 +41,7 @@ entry:
 ; CHECK: movk x[[REG]], #0x73ce, lsl #16
 ; CHECK: movk x[[REG]], #0x2ff2
   %0 = load i8** @pd2, align 8
-  %arrayidx = getelementptr inbounds i8* %0, i64 12345678901234
+  %arrayidx = getelementptr inbounds i8, i8* %0, i64 12345678901234
   %1 = load i8* %arrayidx, align 1
   ret i8 %1
 }

Modified: llvm/trunk/test/CodeGen/AArch64/arm64-fast-isel-alloca.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/arm64-fast-isel-alloca.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/arm64-fast-isel-alloca.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/arm64-fast-isel-alloca.ll Fri Feb 27 13:29:02 2015
@@ -18,7 +18,7 @@ entry:
 ; CHECK: mov [[REG:x[0-9]+]], sp
 ; CHECK-NEXT: add x0, [[REG]], #8
   %E = alloca %struct.S2Ty, align 4
-  %B = getelementptr inbounds %struct.S2Ty* %E, i32 0, i32 1
+  %B = getelementptr inbounds %struct.S2Ty, %struct.S2Ty* %E, i32 0, i32 1
   call void @takeS1(%struct.S1Ty* %B)
   ret void
 }

Modified: llvm/trunk/test/CodeGen/AArch64/arm64-fast-isel-indirectbr.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/arm64-fast-isel-indirectbr.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/arm64-fast-isel-indirectbr.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/arm64-fast-isel-indirectbr.ll Fri Feb 27 13:29:02 2015
@@ -10,7 +10,7 @@ entry:
   store i32 %target, i32* %target.addr, align 4
   %0 = load i32* %target.addr, align 4
   %idxprom = zext i32 %0 to i64
-  %arrayidx = getelementptr inbounds [2 x i8*]* @fn.table, i32 0, i64 %idxprom
+  %arrayidx = getelementptr inbounds [2 x i8*], [2 x i8*]* @fn.table, i32 0, i64 %idxprom
   %1 = load i8** %arrayidx, align 8
   br label %indirectgoto
 

Modified: llvm/trunk/test/CodeGen/AArch64/arm64-fast-isel-intrinsic.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/arm64-fast-isel-intrinsic.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/arm64-fast-isel-intrinsic.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/arm64-fast-isel-intrinsic.ll Fri Feb 27 13:29:02 2015
@@ -142,7 +142,7 @@ define void @test_distant_memcpy(i8* %ds
 ; ARM64: ldrb [[BYTE:w[0-9]+]], [x[[ADDR]]]
 ; ARM64: strb [[BYTE]], [x0]
   %array = alloca i8, i32 8192
-  %elem = getelementptr i8* %array, i32 8000
+  %elem = getelementptr i8, i8* %array, i32 8000
   call void @llvm.memcpy.p0i8.p0i8.i64(i8* %dst, i8* %elem, i64 1, i32 1, i1 false)
   ret void
 }

Modified: llvm/trunk/test/CodeGen/AArch64/arm64-fast-isel.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/arm64-fast-isel.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/arm64-fast-isel.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/arm64-fast-isel.ll Fri Feb 27 13:29:02 2015
@@ -48,7 +48,7 @@ entry:
 ; CHECK-LABEL: t2:
 ; CHECK: ldur w0, [x0, #-4]
 ; CHECK: ret
-  %0 = getelementptr i32 *%ptr, i32 -1
+  %0 = getelementptr i32, i32 *%ptr, i32 -1
   %1 = load i32* %0, align 4
   ret i32 %1
 }
@@ -58,7 +58,7 @@ entry:
 ; CHECK-LABEL: t3:
 ; CHECK: ldur w0, [x0, #-256]
 ; CHECK: ret
-  %0 = getelementptr i32 *%ptr, i32 -64
+  %0 = getelementptr i32, i32 *%ptr, i32 -64
   %1 = load i32* %0, align 4
   ret i32 %1
 }
@@ -68,7 +68,7 @@ entry:
 ; CHECK-LABEL: t4:
 ; CHECK: stur wzr, [x0, #-4]
 ; CHECK: ret
-  %0 = getelementptr i32 *%ptr, i32 -1
+  %0 = getelementptr i32, i32 *%ptr, i32 -1
   store i32 0, i32* %0, align 4
   ret void
 }
@@ -78,7 +78,7 @@ entry:
 ; CHECK-LABEL: t5:
 ; CHECK: stur wzr, [x0, #-256]
 ; CHECK: ret
-  %0 = getelementptr i32 *%ptr, i32 -64
+  %0 = getelementptr i32, i32 *%ptr, i32 -64
   store i32 0, i32* %0, align 4
   ret void
 }

Modified: llvm/trunk/test/CodeGen/AArch64/arm64-fastisel-gep-promote-before-add.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/arm64-fastisel-gep-promote-before-add.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/arm64-fastisel-gep-promote-before-add.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/arm64-fastisel-gep-promote-before-add.ll Fri Feb 27 13:29:02 2015
@@ -10,7 +10,7 @@ entry:
 
   ; CHECK-LABEL: _gep_promotion:
   ; CHECK: ldrb {{[a-z][0-9]+}}, {{\[[a-z][0-9]+\]}}
-  %arrayidx = getelementptr inbounds i8* %0, i8 %add
+  %arrayidx = getelementptr inbounds i8, i8* %0, i8 %add
 
   %1 = load i8* %arrayidx, align 1
   ret i8 %1

Modified: llvm/trunk/test/CodeGen/AArch64/arm64-fold-address.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/arm64-fold-address.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/arm64-fold-address.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/arm64-fold-address.ll Fri Feb 27 13:29:02 2015
@@ -16,19 +16,19 @@ entry:
 ; CHECK: ret
   %ivar = load i64* @"OBJC_IVAR_$_UIScreen._bounds", align 8, !invariant.load !4
   %0 = bitcast %0* %self to i8*
-  %add.ptr = getelementptr inbounds i8* %0, i64 %ivar
+  %add.ptr = getelementptr inbounds i8, i8* %0, i64 %ivar
   %add.ptr10.0 = bitcast i8* %add.ptr to double*
   %tmp11 = load double* %add.ptr10.0, align 8
   %add.ptr.sum = add i64 %ivar, 8
-  %add.ptr10.1 = getelementptr inbounds i8* %0, i64 %add.ptr.sum
+  %add.ptr10.1 = getelementptr inbounds i8, i8* %0, i64 %add.ptr.sum
   %1 = bitcast i8* %add.ptr10.1 to double*
   %tmp12 = load double* %1, align 8
   %add.ptr.sum17 = add i64 %ivar, 16
-  %add.ptr4.1 = getelementptr inbounds i8* %0, i64 %add.ptr.sum17
+  %add.ptr4.1 = getelementptr inbounds i8, i8* %0, i64 %add.ptr.sum17
   %add.ptr4.1.0 = bitcast i8* %add.ptr4.1 to double*
   %tmp = load double* %add.ptr4.1.0, align 8
   %add.ptr4.1.sum = add i64 %ivar, 24
-  %add.ptr4.1.1 = getelementptr inbounds i8* %0, i64 %add.ptr4.1.sum
+  %add.ptr4.1.1 = getelementptr inbounds i8, i8* %0, i64 %add.ptr4.1.sum
   %2 = bitcast i8* %add.ptr4.1.1 to double*
   %tmp5 = load double* %2, align 8
   %insert14 = insertvalue %struct.CGPoint undef, double %tmp11, 0
@@ -48,16 +48,16 @@ entry:
 ; CHECK: ret
   %ivar = load i64* @"OBJC_IVAR_$_UIScreen._bounds", align 8, !invariant.load !4
   %0 = bitcast %0* %self to i8*
-  %add.ptr = getelementptr inbounds i8* %0, i64 %ivar
+  %add.ptr = getelementptr inbounds i8, i8* %0, i64 %ivar
   %add.ptr10.0 = bitcast i8* %add.ptr to double*
   %tmp11 = load double* %add.ptr10.0, align 8
-  %add.ptr10.1 = getelementptr inbounds i8* %0, i64 %ivar
+  %add.ptr10.1 = getelementptr inbounds i8, i8* %0, i64 %ivar
   %1 = bitcast i8* %add.ptr10.1 to double*
   %tmp12 = load double* %1, align 8
-  %add.ptr4.1 = getelementptr inbounds i8* %0, i64 %ivar
+  %add.ptr4.1 = getelementptr inbounds i8, i8* %0, i64 %ivar
   %add.ptr4.1.0 = bitcast i8* %add.ptr4.1 to double*
   %tmp = load double* %add.ptr4.1.0, align 8
-  %add.ptr4.1.1 = getelementptr inbounds i8* %0, i64 %ivar
+  %add.ptr4.1.1 = getelementptr inbounds i8, i8* %0, i64 %ivar
   %2 = bitcast i8* %add.ptr4.1.1 to double*
   %tmp5 = load double* %2, align 8
   %insert14 = insertvalue %struct.CGPoint undef, double %tmp11, 0

Modified: llvm/trunk/test/CodeGen/AArch64/arm64-fold-lsl.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/arm64-fold-lsl.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/arm64-fold-lsl.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/arm64-fold-lsl.ll Fri Feb 27 13:29:02 2015
@@ -13,7 +13,7 @@ define i16 @load_halfword(%struct.a* %ct
   %shr81 = lshr i32 %xor72, 9
   %conv82 = zext i32 %shr81 to i64
   %idxprom83 = and i64 %conv82, 255
-  %arrayidx86 = getelementptr inbounds %struct.a* %ctx, i64 0, i64 %idxprom83
+  %arrayidx86 = getelementptr inbounds %struct.a, %struct.a* %ctx, i64 0, i64 %idxprom83
   %result = load i16* %arrayidx86, align 2
   ret i16 %result
 }
@@ -25,7 +25,7 @@ define i32 @load_word(%struct.b* %ctx, i
   %shr81 = lshr i32 %xor72, 9
   %conv82 = zext i32 %shr81 to i64
   %idxprom83 = and i64 %conv82, 255
-  %arrayidx86 = getelementptr inbounds %struct.b* %ctx, i64 0, i64 %idxprom83
+  %arrayidx86 = getelementptr inbounds %struct.b, %struct.b* %ctx, i64 0, i64 %idxprom83
   %result = load i32* %arrayidx86, align 4
   ret i32 %result
 }
@@ -37,7 +37,7 @@ define i64 @load_doubleword(%struct.c* %
   %shr81 = lshr i32 %xor72, 9
   %conv82 = zext i32 %shr81 to i64
   %idxprom83 = and i64 %conv82, 255
-  %arrayidx86 = getelementptr inbounds %struct.c* %ctx, i64 0, i64 %idxprom83
+  %arrayidx86 = getelementptr inbounds %struct.c, %struct.c* %ctx, i64 0, i64 %idxprom83
   %result = load i64* %arrayidx86, align 8
   ret i64 %result
 }
@@ -49,7 +49,7 @@ define void @store_halfword(%struct.a* %
   %shr81 = lshr i32 %xor72, 9
   %conv82 = zext i32 %shr81 to i64
   %idxprom83 = and i64 %conv82, 255
-  %arrayidx86 = getelementptr inbounds %struct.a* %ctx, i64 0, i64 %idxprom83
+  %arrayidx86 = getelementptr inbounds %struct.a, %struct.a* %ctx, i64 0, i64 %idxprom83
   store i16 %val, i16* %arrayidx86, align 8
   ret void
 }
@@ -61,7 +61,7 @@ define void @store_word(%struct.b* %ctx,
   %shr81 = lshr i32 %xor72, 9
   %conv82 = zext i32 %shr81 to i64
   %idxprom83 = and i64 %conv82, 255
-  %arrayidx86 = getelementptr inbounds %struct.b* %ctx, i64 0, i64 %idxprom83
+  %arrayidx86 = getelementptr inbounds %struct.b, %struct.b* %ctx, i64 0, i64 %idxprom83
   store i32 %val, i32* %arrayidx86, align 8
   ret void
 }
@@ -73,7 +73,7 @@ define void @store_doubleword(%struct.c*
   %shr81 = lshr i32 %xor72, 9
   %conv82 = zext i32 %shr81 to i64
   %idxprom83 = and i64 %conv82, 255
-  %arrayidx86 = getelementptr inbounds %struct.c* %ctx, i64 0, i64 %idxprom83
+  %arrayidx86 = getelementptr inbounds %struct.c, %struct.c* %ctx, i64 0, i64 %idxprom83
   store i64 %val, i64* %arrayidx86, align 8
   ret void
 }

Modified: llvm/trunk/test/CodeGen/AArch64/arm64-indexed-memory.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/arm64-indexed-memory.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/arm64-indexed-memory.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/arm64-indexed-memory.ll Fri Feb 27 13:29:02 2015
@@ -5,7 +5,7 @@ define void @store64(i64** nocapture %ou
 ; CHECK: str x{{[0-9+]}}, [x{{[0-9+]}}], #8
 ; CHECK: ret
   %tmp = load i64** %out, align 8
-  %incdec.ptr = getelementptr inbounds i64* %tmp, i64 1
+  %incdec.ptr = getelementptr inbounds i64, i64* %tmp, i64 1
   store i64 %spacing, i64* %tmp, align 4
   store i64* %incdec.ptr, i64** %out, align 8
   ret void
@@ -16,7 +16,7 @@ define void @store32(i32** nocapture %ou
 ; CHECK: str w{{[0-9+]}}, [x{{[0-9+]}}], #4
 ; CHECK: ret
   %tmp = load i32** %out, align 8
-  %incdec.ptr = getelementptr inbounds i32* %tmp, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %tmp, i64 1
   store i32 %spacing, i32* %tmp, align 4
   store i32* %incdec.ptr, i32** %out, align 8
   ret void
@@ -27,7 +27,7 @@ define void @store16(i16** nocapture %ou
 ; CHECK: strh w{{[0-9+]}}, [x{{[0-9+]}}], #2
 ; CHECK: ret
   %tmp = load i16** %out, align 8
-  %incdec.ptr = getelementptr inbounds i16* %tmp, i64 1
+  %incdec.ptr = getelementptr inbounds i16, i16* %tmp, i64 1
   store i16 %spacing, i16* %tmp, align 4
   store i16* %incdec.ptr, i16** %out, align 8
   ret void
@@ -38,7 +38,7 @@ define void @store8(i8** nocapture %out,
 ; CHECK: strb w{{[0-9+]}}, [x{{[0-9+]}}], #1
 ; CHECK: ret
   %tmp = load i8** %out, align 8
-  %incdec.ptr = getelementptr inbounds i8* %tmp, i64 1
+  %incdec.ptr = getelementptr inbounds i8, i8* %tmp, i64 1
   store i8 %spacing, i8* %tmp, align 4
   store i8* %incdec.ptr, i8** %out, align 8
   ret void
@@ -49,7 +49,7 @@ define void @truncst64to32(i32** nocaptu
 ; CHECK: str w{{[0-9+]}}, [x{{[0-9+]}}], #4
 ; CHECK: ret
   %tmp = load i32** %out, align 8
-  %incdec.ptr = getelementptr inbounds i32* %tmp, i64 1
+  %incdec.ptr = getelementptr inbounds i32, i32* %tmp, i64 1
   %trunc = trunc i64 %spacing to i32
   store i32 %trunc, i32* %tmp, align 4
   store i32* %incdec.ptr, i32** %out, align 8
@@ -61,7 +61,7 @@ define void @truncst64to16(i16** nocaptu
 ; CHECK: strh w{{[0-9+]}}, [x{{[0-9+]}}], #2
 ; CHECK: ret
   %tmp = load i16** %out, align 8
-  %incdec.ptr = getelementptr inbounds i16* %tmp, i64 1
+  %incdec.ptr = getelementptr inbounds i16, i16* %tmp, i64 1
   %trunc = trunc i64 %spacing to i16
   store i16 %trunc, i16* %tmp, align 4
   store i16* %incdec.ptr, i16** %out, align 8
@@ -73,7 +73,7 @@ define void @truncst64to8(i8** nocapture
 ; CHECK: strb w{{[0-9+]}}, [x{{[0-9+]}}], #1
 ; CHECK: ret
   %tmp = load i8** %out, align 8
-  %incdec.ptr = getelementptr inbounds i8* %tmp, i64 1
+  %incdec.ptr = getelementptr inbounds i8, i8* %tmp, i64 1
   %trunc = trunc i64 %spacing to i8
   store i8 %trunc, i8* %tmp, align 4
   store i8* %incdec.ptr, i8** %out, align 8
@@ -86,7 +86,7 @@ define void @storef32(float** nocapture
 ; CHECK: str s{{[0-9+]}}, [x{{[0-9+]}}], #4
 ; CHECK: ret
   %tmp = load float** %out, align 8
-  %incdec.ptr = getelementptr inbounds float* %tmp, i64 1
+  %incdec.ptr = getelementptr inbounds float, float* %tmp, i64 1
   store float %spacing, float* %tmp, align 4
   store float* %incdec.ptr, float** %out, align 8
   ret void
@@ -97,7 +97,7 @@ define void @storef64(double** nocapture
 ; CHECK: str d{{[0-9+]}}, [x{{[0-9+]}}], #8
 ; CHECK: ret
   %tmp = load double** %out, align 8
-  %incdec.ptr = getelementptr inbounds double* %tmp, i64 1
+  %incdec.ptr = getelementptr inbounds double, double* %tmp, i64 1
   store double %spacing, double* %tmp, align 4
   store double* %incdec.ptr, double** %out, align 8
   ret void
@@ -109,7 +109,7 @@ define double * @pref64(double** nocaptu
 ; CHECK-NEXT: str     d0, [x0, #32]!
 ; CHECK-NEXT: ret
   %tmp = load double** %out, align 8
-  %ptr = getelementptr inbounds double* %tmp, i64 4
+  %ptr = getelementptr inbounds double, double* %tmp, i64 4
   store double %spacing, double* %ptr, align 4
   ret double *%ptr
 }
@@ -120,7 +120,7 @@ define float * @pref32(float** nocapture
 ; CHECK-NEXT: str     s0, [x0, #12]!
 ; CHECK-NEXT: ret
   %tmp = load float** %out, align 8
-  %ptr = getelementptr inbounds float* %tmp, i64 3
+  %ptr = getelementptr inbounds float, float* %tmp, i64 3
   store float %spacing, float* %ptr, align 4
   ret float *%ptr
 }
@@ -131,7 +131,7 @@ define i64 * @pre64(i64** nocapture %out
 ; CHECK-NEXT: str     x1, [x0, #16]!
 ; CHECK-NEXT: ret
   %tmp = load i64** %out, align 8
-  %ptr = getelementptr inbounds i64* %tmp, i64 2
+  %ptr = getelementptr inbounds i64, i64* %tmp, i64 2
   store i64 %spacing, i64* %ptr, align 4
   ret i64 *%ptr
 }
@@ -142,7 +142,7 @@ define i32 * @pre32(i32** nocapture %out
 ; CHECK-NEXT: str     w1, [x0, #8]!
 ; CHECK-NEXT: ret
   %tmp = load i32** %out, align 8
-  %ptr = getelementptr inbounds i32* %tmp, i64 2
+  %ptr = getelementptr inbounds i32, i32* %tmp, i64 2
   store i32 %spacing, i32* %ptr, align 4
   ret i32 *%ptr
 }
@@ -153,7 +153,7 @@ define i16 * @pre16(i16** nocapture %out
 ; CHECK-NEXT: strh    w1, [x0, #4]!
 ; CHECK-NEXT: ret
   %tmp = load i16** %out, align 8
-  %ptr = getelementptr inbounds i16* %tmp, i64 2
+  %ptr = getelementptr inbounds i16, i16* %tmp, i64 2
   store i16 %spacing, i16* %ptr, align 4
   ret i16 *%ptr
 }
@@ -164,7 +164,7 @@ define i8 * @pre8(i8** nocapture %out, i
 ; CHECK-NEXT: strb    w1, [x0, #2]!
 ; CHECK-NEXT: ret
   %tmp = load i8** %out, align 8
-  %ptr = getelementptr inbounds i8* %tmp, i64 2
+  %ptr = getelementptr inbounds i8, i8* %tmp, i64 2
   store i8 %spacing, i8* %ptr, align 4
   ret i8 *%ptr
 }
@@ -175,7 +175,7 @@ define i32 * @pretrunc64to32(i32** nocap
 ; CHECK-NEXT: str     w1, [x0, #8]!
 ; CHECK-NEXT: ret
   %tmp = load i32** %out, align 8
-  %ptr = getelementptr inbounds i32* %tmp, i64 2
+  %ptr = getelementptr inbounds i32, i32* %tmp, i64 2
   %trunc = trunc i64 %spacing to i32
   store i32 %trunc, i32* %ptr, align 4
   ret i32 *%ptr
@@ -187,7 +187,7 @@ define i16 * @pretrunc64to16(i16** nocap
 ; CHECK-NEXT: strh    w1, [x0, #4]!
 ; CHECK-NEXT: ret
   %tmp = load i16** %out, align 8
-  %ptr = getelementptr inbounds i16* %tmp, i64 2
+  %ptr = getelementptr inbounds i16, i16* %tmp, i64 2
   %trunc = trunc i64 %spacing to i16
   store i16 %trunc, i16* %ptr, align 4
   ret i16 *%ptr
@@ -199,7 +199,7 @@ define i8 * @pretrunc64to8(i8** nocaptur
 ; CHECK-NEXT: strb    w1, [x0, #2]!
 ; CHECK-NEXT: ret
   %tmp = load i8** %out, align 8
-  %ptr = getelementptr inbounds i8* %tmp, i64 2
+  %ptr = getelementptr inbounds i8, i8* %tmp, i64 2
   %trunc = trunc i64 %spacing to i8
   store i8 %trunc, i8* %ptr, align 4
   ret i8 *%ptr
@@ -213,7 +213,7 @@ define double* @preidxf64(double* %src,
 ; CHECK: ldr     d0, [x0, #8]!
 ; CHECK: str     d0, [x1]
 ; CHECK: ret
-  %ptr = getelementptr inbounds double* %src, i64 1
+  %ptr = getelementptr inbounds double, double* %src, i64 1
   %tmp = load double* %ptr, align 4
   store double %tmp, double* %out, align 4
   ret double* %ptr
@@ -224,7 +224,7 @@ define float* @preidxf32(float* %src, fl
 ; CHECK: ldr     s0, [x0, #4]!
 ; CHECK: str     s0, [x1]
 ; CHECK: ret
-  %ptr = getelementptr inbounds float* %src, i64 1
+  %ptr = getelementptr inbounds float, float* %src, i64 1
   %tmp = load float* %ptr, align 4
   store float %tmp, float* %out, align 4
   ret float* %ptr
@@ -235,7 +235,7 @@ define i64* @preidx64(i64* %src, i64* %o
 ; CHECK: ldr     x[[REG:[0-9]+]], [x0, #8]!
 ; CHECK: str     x[[REG]], [x1]
 ; CHECK: ret
-  %ptr = getelementptr inbounds i64* %src, i64 1
+  %ptr = getelementptr inbounds i64, i64* %src, i64 1
   %tmp = load i64* %ptr, align 4
   store i64 %tmp, i64* %out, align 4
   ret i64* %ptr
@@ -245,7 +245,7 @@ define i32* @preidx32(i32* %src, i32* %o
 ; CHECK: ldr     w[[REG:[0-9]+]], [x0, #4]!
 ; CHECK: str     w[[REG]], [x1]
 ; CHECK: ret
-  %ptr = getelementptr inbounds i32* %src, i64 1
+  %ptr = getelementptr inbounds i32, i32* %src, i64 1
   %tmp = load i32* %ptr, align 4
   store i32 %tmp, i32* %out, align 4
   ret i32* %ptr
@@ -255,7 +255,7 @@ define i16* @preidx16zext32(i16* %src, i
 ; CHECK: ldrh    w[[REG:[0-9]+]], [x0, #2]!
 ; CHECK: str     w[[REG]], [x1]
 ; CHECK: ret
-  %ptr = getelementptr inbounds i16* %src, i64 1
+  %ptr = getelementptr inbounds i16, i16* %src, i64 1
   %tmp = load i16* %ptr, align 4
   %ext = zext i16 %tmp to i32
   store i32 %ext, i32* %out, align 4
@@ -266,7 +266,7 @@ define i16* @preidx16zext64(i16* %src, i
 ; CHECK: ldrh    w[[REG:[0-9]+]], [x0, #2]!
 ; CHECK: str     x[[REG]], [x1]
 ; CHECK: ret
-  %ptr = getelementptr inbounds i16* %src, i64 1
+  %ptr = getelementptr inbounds i16, i16* %src, i64 1
   %tmp = load i16* %ptr, align 4
   %ext = zext i16 %tmp to i64
   store i64 %ext, i64* %out, align 4
@@ -277,7 +277,7 @@ define i8* @preidx8zext32(i8* %src, i32*
 ; CHECK: ldrb    w[[REG:[0-9]+]], [x0, #1]!
 ; CHECK: str     w[[REG]], [x1]
 ; CHECK: ret
-  %ptr = getelementptr inbounds i8* %src, i64 1
+  %ptr = getelementptr inbounds i8, i8* %src, i64 1
   %tmp = load i8* %ptr, align 4
   %ext = zext i8 %tmp to i32
   store i32 %ext, i32* %out, align 4
@@ -288,7 +288,7 @@ define i8* @preidx8zext64(i8* %src, i64*
 ; CHECK: ldrb    w[[REG:[0-9]+]], [x0, #1]!
 ; CHECK: str     x[[REG]], [x1]
 ; CHECK: ret
-  %ptr = getelementptr inbounds i8* %src, i64 1
+  %ptr = getelementptr inbounds i8, i8* %src, i64 1
   %tmp = load i8* %ptr, align 4
   %ext = zext i8 %tmp to i64
   store i64 %ext, i64* %out, align 4
@@ -299,7 +299,7 @@ define i32* @preidx32sext64(i32* %src, i
 ; CHECK: ldrsw   x[[REG:[0-9]+]], [x0, #4]!
 ; CHECK: str     x[[REG]], [x1]
 ; CHECK: ret
-  %ptr = getelementptr inbounds i32* %src, i64 1
+  %ptr = getelementptr inbounds i32, i32* %src, i64 1
   %tmp = load i32* %ptr, align 4
   %ext = sext i32 %tmp to i64
   store i64 %ext, i64* %out, align 8
@@ -310,7 +310,7 @@ define i16* @preidx16sext32(i16* %src, i
 ; CHECK: ldrsh   w[[REG:[0-9]+]], [x0, #2]!
 ; CHECK: str     w[[REG]], [x1]
 ; CHECK: ret
-  %ptr = getelementptr inbounds i16* %src, i64 1
+  %ptr = getelementptr inbounds i16, i16* %src, i64 1
   %tmp = load i16* %ptr, align 4
   %ext = sext i16 %tmp to i32
   store i32 %ext, i32* %out, align 4
@@ -321,7 +321,7 @@ define i16* @preidx16sext64(i16* %src, i
 ; CHECK: ldrsh   x[[REG:[0-9]+]], [x0, #2]!
 ; CHECK: str     x[[REG]], [x1]
 ; CHECK: ret
-  %ptr = getelementptr inbounds i16* %src, i64 1
+  %ptr = getelementptr inbounds i16, i16* %src, i64 1
   %tmp = load i16* %ptr, align 4
   %ext = sext i16 %tmp to i64
   store i64 %ext, i64* %out, align 4
@@ -332,7 +332,7 @@ define i8* @preidx8sext32(i8* %src, i32*
 ; CHECK: ldrsb   w[[REG:[0-9]+]], [x0, #1]!
 ; CHECK: str     w[[REG]], [x1]
 ; CHECK: ret
-  %ptr = getelementptr inbounds i8* %src, i64 1
+  %ptr = getelementptr inbounds i8, i8* %src, i64 1
   %tmp = load i8* %ptr, align 4
   %ext = sext i8 %tmp to i32
   store i32 %ext, i32* %out, align 4
@@ -343,7 +343,7 @@ define i8* @preidx8sext64(i8* %src, i64*
 ; CHECK: ldrsb   x[[REG:[0-9]+]], [x0, #1]!
 ; CHECK: str     x[[REG]], [x1]
 ; CHECK: ret
-  %ptr = getelementptr inbounds i8* %src, i64 1
+  %ptr = getelementptr inbounds i8, i8* %src, i64 1
   %tmp = load i8* %ptr, align 4
   %ext = sext i8 %tmp to i64
   store i64 %ext, i64* %out, align 4
@@ -358,6 +358,6 @@ define i64* @postidx_clobber(i64* %addr)
 ; ret
  %paddr = bitcast i64* %addr to i64**
  store i64* %addr, i64** %paddr
- %newaddr = getelementptr i64* %addr, i32 1
+ %newaddr = getelementptr i64, i64* %addr, i32 1
  ret i64* %newaddr
 }

Modified: llvm/trunk/test/CodeGen/AArch64/arm64-indexed-vector-ldst-2.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/arm64-indexed-vector-ldst-2.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/arm64-indexed-vector-ldst-2.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/arm64-indexed-vector-ldst-2.ll Fri Feb 27 13:29:02 2015
@@ -9,7 +9,7 @@ target triple = "arm64-apple-ios7.0.0"
 ; Function Attrs: nounwind ssp
 define void @f(double* %P1) #0 {
 entry:
-  %arrayidx4 = getelementptr inbounds double* %P1, i64 1
+  %arrayidx4 = getelementptr inbounds double, double* %P1, i64 1
   %0 = load double* %arrayidx4, align 8, !tbaa !1
   %1 = load double* %P1, align 8, !tbaa !1
   %2 = insertelement <2 x double> undef, double %0, i32 0

Modified: llvm/trunk/test/CodeGen/AArch64/arm64-indexed-vector-ldst.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/arm64-indexed-vector-ldst.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/arm64-indexed-vector-ldst.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/arm64-indexed-vector-ldst.ll Fri Feb 27 13:29:02 2015
@@ -5,7 +5,7 @@
 define <8 x i8> @test_v8i8_pre_load(<8 x i8>* %addr) {
 ; CHECK-LABEL: test_v8i8_pre_load:
 ; CHECK: ldr d0, [x0, #40]!
-  %newaddr = getelementptr <8 x i8>* %addr, i32 5
+  %newaddr = getelementptr <8 x i8>, <8 x i8>* %addr, i32 5
   %val = load <8 x i8>* %newaddr, align 8
   store <8 x i8>* %newaddr, <8 x i8>** bitcast(i8** @ptr to <8 x i8>**)
   ret <8 x i8> %val
@@ -14,7 +14,7 @@ define <8 x i8> @test_v8i8_pre_load(<8 x
 define <8 x i8> @test_v8i8_post_load(<8 x i8>* %addr) {
 ; CHECK-LABEL: test_v8i8_post_load:
 ; CHECK: ldr d0, [x0], #40
-  %newaddr = getelementptr <8 x i8>* %addr, i32 5
+  %newaddr = getelementptr <8 x i8>, <8 x i8>* %addr, i32 5
   %val = load <8 x i8>* %addr, align 8
   store <8 x i8>* %newaddr, <8 x i8>** bitcast(i8** @ptr to <8 x i8>**)
   ret <8 x i8> %val
@@ -23,7 +23,7 @@ define <8 x i8> @test_v8i8_post_load(<8
 define void @test_v8i8_pre_store(<8 x i8> %in, <8 x i8>* %addr) {
 ; CHECK-LABEL: test_v8i8_pre_store:
 ; CHECK: str d0, [x0, #40]!
-  %newaddr = getelementptr <8 x i8>* %addr, i32 5
+  %newaddr = getelementptr <8 x i8>, <8 x i8>* %addr, i32 5
   store <8 x i8> %in, <8 x i8>* %newaddr, align 8
   store <8 x i8>* %newaddr, <8 x i8>** bitcast(i8** @ptr to <8 x i8>**)
   ret void
@@ -32,7 +32,7 @@ define void @test_v8i8_pre_store(<8 x i8
 define void @test_v8i8_post_store(<8 x i8> %in, <8 x i8>* %addr) {
 ; CHECK-LABEL: test_v8i8_post_store:
 ; CHECK: str d0, [x0], #40
-  %newaddr = getelementptr <8 x i8>* %addr, i32 5
+  %newaddr = getelementptr <8 x i8>, <8 x i8>* %addr, i32 5
   store <8 x i8> %in, <8 x i8>* %addr, align 8
   store <8 x i8>* %newaddr, <8 x i8>** bitcast(i8** @ptr to <8 x i8>**)
   ret void
@@ -41,7 +41,7 @@ define void @test_v8i8_post_store(<8 x i
 define <4 x i16> @test_v4i16_pre_load(<4 x i16>* %addr) {
 ; CHECK-LABEL: test_v4i16_pre_load:
 ; CHECK: ldr d0, [x0, #40]!
-  %newaddr = getelementptr <4 x i16>* %addr, i32 5
+  %newaddr = getelementptr <4 x i16>, <4 x i16>* %addr, i32 5
   %val = load <4 x i16>* %newaddr, align 8
   store <4 x i16>* %newaddr, <4 x i16>** bitcast(i8** @ptr to <4 x i16>**)
   ret <4 x i16> %val
@@ -50,7 +50,7 @@ define <4 x i16> @test_v4i16_pre_load(<4
 define <4 x i16> @test_v4i16_post_load(<4 x i16>* %addr) {
 ; CHECK-LABEL: test_v4i16_post_load:
 ; CHECK: ldr d0, [x0], #40
-  %newaddr = getelementptr <4 x i16>* %addr, i32 5
+  %newaddr = getelementptr <4 x i16>, <4 x i16>* %addr, i32 5
   %val = load <4 x i16>* %addr, align 8
   store <4 x i16>* %newaddr, <4 x i16>** bitcast(i8** @ptr to <4 x i16>**)
   ret <4 x i16> %val
@@ -59,7 +59,7 @@ define <4 x i16> @test_v4i16_post_load(<
 define void @test_v4i16_pre_store(<4 x i16> %in, <4 x i16>* %addr) {
 ; CHECK-LABEL: test_v4i16_pre_store:
 ; CHECK: str d0, [x0, #40]!
-  %newaddr = getelementptr <4 x i16>* %addr, i32 5
+  %newaddr = getelementptr <4 x i16>, <4 x i16>* %addr, i32 5
   store <4 x i16> %in, <4 x i16>* %newaddr, align 8
   store <4 x i16>* %newaddr, <4 x i16>** bitcast(i8** @ptr to <4 x i16>**)
   ret void
@@ -68,7 +68,7 @@ define void @test_v4i16_pre_store(<4 x i
 define void @test_v4i16_post_store(<4 x i16> %in, <4 x i16>* %addr) {
 ; CHECK-LABEL: test_v4i16_post_store:
 ; CHECK: str d0, [x0], #40
-  %newaddr = getelementptr <4 x i16>* %addr, i32 5
+  %newaddr = getelementptr <4 x i16>, <4 x i16>* %addr, i32 5
   store <4 x i16> %in, <4 x i16>* %addr, align 8
   store <4 x i16>* %newaddr, <4 x i16>** bitcast(i8** @ptr to <4 x i16>**)
   ret void
@@ -77,7 +77,7 @@ define void @test_v4i16_post_store(<4 x
 define <2 x i32> @test_v2i32_pre_load(<2 x i32>* %addr) {
 ; CHECK-LABEL: test_v2i32_pre_load:
 ; CHECK: ldr d0, [x0, #40]!
-  %newaddr = getelementptr <2 x i32>* %addr, i32 5
+  %newaddr = getelementptr <2 x i32>, <2 x i32>* %addr, i32 5
   %val = load <2 x i32>* %newaddr, align 8
   store <2 x i32>* %newaddr, <2 x i32>** bitcast(i8** @ptr to <2 x i32>**)
   ret <2 x i32> %val
@@ -86,7 +86,7 @@ define <2 x i32> @test_v2i32_pre_load(<2
 define <2 x i32> @test_v2i32_post_load(<2 x i32>* %addr) {
 ; CHECK-LABEL: test_v2i32_post_load:
 ; CHECK: ldr d0, [x0], #40
-  %newaddr = getelementptr <2 x i32>* %addr, i32 5
+  %newaddr = getelementptr <2 x i32>, <2 x i32>* %addr, i32 5
   %val = load <2 x i32>* %addr, align 8
   store <2 x i32>* %newaddr, <2 x i32>** bitcast(i8** @ptr to <2 x i32>**)
   ret <2 x i32> %val
@@ -95,7 +95,7 @@ define <2 x i32> @test_v2i32_post_load(<
 define void @test_v2i32_pre_store(<2 x i32> %in, <2 x i32>* %addr) {
 ; CHECK-LABEL: test_v2i32_pre_store:
 ; CHECK: str d0, [x0, #40]!
-  %newaddr = getelementptr <2 x i32>* %addr, i32 5
+  %newaddr = getelementptr <2 x i32>, <2 x i32>* %addr, i32 5
   store <2 x i32> %in, <2 x i32>* %newaddr, align 8
   store <2 x i32>* %newaddr, <2 x i32>** bitcast(i8** @ptr to <2 x i32>**)
   ret void
@@ -104,7 +104,7 @@ define void @test_v2i32_pre_store(<2 x i
 define void @test_v2i32_post_store(<2 x i32> %in, <2 x i32>* %addr) {
 ; CHECK-LABEL: test_v2i32_post_store:
 ; CHECK: str d0, [x0], #40
-  %newaddr = getelementptr <2 x i32>* %addr, i32 5
+  %newaddr = getelementptr <2 x i32>, <2 x i32>* %addr, i32 5
   store <2 x i32> %in, <2 x i32>* %addr, align 8
   store <2 x i32>* %newaddr, <2 x i32>** bitcast(i8** @ptr to <2 x i32>**)
   ret void
@@ -113,7 +113,7 @@ define void @test_v2i32_post_store(<2 x
 define <2 x float> @test_v2f32_pre_load(<2 x float>* %addr) {
 ; CHECK-LABEL: test_v2f32_pre_load:
 ; CHECK: ldr d0, [x0, #40]!
-  %newaddr = getelementptr <2 x float>* %addr, i32 5
+  %newaddr = getelementptr <2 x float>, <2 x float>* %addr, i32 5
   %val = load <2 x float>* %newaddr, align 8
   store <2 x float>* %newaddr, <2 x float>** bitcast(i8** @ptr to <2 x float>**)
   ret <2 x float> %val
@@ -122,7 +122,7 @@ define <2 x float> @test_v2f32_pre_load(
 define <2 x float> @test_v2f32_post_load(<2 x float>* %addr) {
 ; CHECK-LABEL: test_v2f32_post_load:
 ; CHECK: ldr d0, [x0], #40
-  %newaddr = getelementptr <2 x float>* %addr, i32 5
+  %newaddr = getelementptr <2 x float>, <2 x float>* %addr, i32 5
   %val = load <2 x float>* %addr, align 8
   store <2 x float>* %newaddr, <2 x float>** bitcast(i8** @ptr to <2 x float>**)
   ret <2 x float> %val
@@ -131,7 +131,7 @@ define <2 x float> @test_v2f32_post_load
 define void @test_v2f32_pre_store(<2 x float> %in, <2 x float>* %addr) {
 ; CHECK-LABEL: test_v2f32_pre_store:
 ; CHECK: str d0, [x0, #40]!
-  %newaddr = getelementptr <2 x float>* %addr, i32 5
+  %newaddr = getelementptr <2 x float>, <2 x float>* %addr, i32 5
   store <2 x float> %in, <2 x float>* %newaddr, align 8
   store <2 x float>* %newaddr, <2 x float>** bitcast(i8** @ptr to <2 x float>**)
   ret void
@@ -140,7 +140,7 @@ define void @test_v2f32_pre_store(<2 x f
 define void @test_v2f32_post_store(<2 x float> %in, <2 x float>* %addr) {
 ; CHECK-LABEL: test_v2f32_post_store:
 ; CHECK: str d0, [x0], #40
-  %newaddr = getelementptr <2 x float>* %addr, i32 5
+  %newaddr = getelementptr <2 x float>, <2 x float>* %addr, i32 5
   store <2 x float> %in, <2 x float>* %addr, align 8
   store <2 x float>* %newaddr, <2 x float>** bitcast(i8** @ptr to <2 x float>**)
   ret void
@@ -149,7 +149,7 @@ define void @test_v2f32_post_store(<2 x
 define <1 x i64> @test_v1i64_pre_load(<1 x i64>* %addr) {
 ; CHECK-LABEL: test_v1i64_pre_load:
 ; CHECK: ldr d0, [x0, #40]!
-  %newaddr = getelementptr <1 x i64>* %addr, i32 5
+  %newaddr = getelementptr <1 x i64>, <1 x i64>* %addr, i32 5
   %val = load <1 x i64>* %newaddr, align 8
   store <1 x i64>* %newaddr, <1 x i64>** bitcast(i8** @ptr to <1 x i64>**)
   ret <1 x i64> %val
@@ -158,7 +158,7 @@ define <1 x i64> @test_v1i64_pre_load(<1
 define <1 x i64> @test_v1i64_post_load(<1 x i64>* %addr) {
 ; CHECK-LABEL: test_v1i64_post_load:
 ; CHECK: ldr d0, [x0], #40
-  %newaddr = getelementptr <1 x i64>* %addr, i32 5
+  %newaddr = getelementptr <1 x i64>, <1 x i64>* %addr, i32 5
   %val = load <1 x i64>* %addr, align 8
   store <1 x i64>* %newaddr, <1 x i64>** bitcast(i8** @ptr to <1 x i64>**)
   ret <1 x i64> %val
@@ -167,7 +167,7 @@ define <1 x i64> @test_v1i64_post_load(<
 define void @test_v1i64_pre_store(<1 x i64> %in, <1 x i64>* %addr) {
 ; CHECK-LABEL: test_v1i64_pre_store:
 ; CHECK: str d0, [x0, #40]!
-  %newaddr = getelementptr <1 x i64>* %addr, i32 5
+  %newaddr = getelementptr <1 x i64>, <1 x i64>* %addr, i32 5
   store <1 x i64> %in, <1 x i64>* %newaddr, align 8
   store <1 x i64>* %newaddr, <1 x i64>** bitcast(i8** @ptr to <1 x i64>**)
   ret void
@@ -176,7 +176,7 @@ define void @test_v1i64_pre_store(<1 x i
 define void @test_v1i64_post_store(<1 x i64> %in, <1 x i64>* %addr) {
 ; CHECK-LABEL: test_v1i64_post_store:
 ; CHECK: str d0, [x0], #40
-  %newaddr = getelementptr <1 x i64>* %addr, i32 5
+  %newaddr = getelementptr <1 x i64>, <1 x i64>* %addr, i32 5
   store <1 x i64> %in, <1 x i64>* %addr, align 8
   store <1 x i64>* %newaddr, <1 x i64>** bitcast(i8** @ptr to <1 x i64>**)
   ret void
@@ -185,7 +185,7 @@ define void @test_v1i64_post_store(<1 x
 define <16 x i8> @test_v16i8_pre_load(<16 x i8>* %addr) {
 ; CHECK-LABEL: test_v16i8_pre_load:
 ; CHECK: ldr q0, [x0, #80]!
-  %newaddr = getelementptr <16 x i8>* %addr, i32 5
+  %newaddr = getelementptr <16 x i8>, <16 x i8>* %addr, i32 5
   %val = load <16 x i8>* %newaddr, align 8
   store <16 x i8>* %newaddr, <16 x i8>** bitcast(i8** @ptr to <16 x i8>**)
   ret <16 x i8> %val
@@ -194,7 +194,7 @@ define <16 x i8> @test_v16i8_pre_load(<1
 define <16 x i8> @test_v16i8_post_load(<16 x i8>* %addr) {
 ; CHECK-LABEL: test_v16i8_post_load:
 ; CHECK: ldr q0, [x0], #80
-  %newaddr = getelementptr <16 x i8>* %addr, i32 5
+  %newaddr = getelementptr <16 x i8>, <16 x i8>* %addr, i32 5
   %val = load <16 x i8>* %addr, align 8
   store <16 x i8>* %newaddr, <16 x i8>** bitcast(i8** @ptr to <16 x i8>**)
   ret <16 x i8> %val
@@ -203,7 +203,7 @@ define <16 x i8> @test_v16i8_post_load(<
 define void @test_v16i8_pre_store(<16 x i8> %in, <16 x i8>* %addr) {
 ; CHECK-LABEL: test_v16i8_pre_store:
 ; CHECK: str q0, [x0, #80]!
-  %newaddr = getelementptr <16 x i8>* %addr, i32 5
+  %newaddr = getelementptr <16 x i8>, <16 x i8>* %addr, i32 5
   store <16 x i8> %in, <16 x i8>* %newaddr, align 8
   store <16 x i8>* %newaddr, <16 x i8>** bitcast(i8** @ptr to <16 x i8>**)
   ret void
@@ -212,7 +212,7 @@ define void @test_v16i8_pre_store(<16 x
 define void @test_v16i8_post_store(<16 x i8> %in, <16 x i8>* %addr) {
 ; CHECK-LABEL: test_v16i8_post_store:
 ; CHECK: str q0, [x0], #80
-  %newaddr = getelementptr <16 x i8>* %addr, i32 5
+  %newaddr = getelementptr <16 x i8>, <16 x i8>* %addr, i32 5
   store <16 x i8> %in, <16 x i8>* %addr, align 8
   store <16 x i8>* %newaddr, <16 x i8>** bitcast(i8** @ptr to <16 x i8>**)
   ret void
@@ -221,7 +221,7 @@ define void @test_v16i8_post_store(<16 x
 define <8 x i16> @test_v8i16_pre_load(<8 x i16>* %addr) {
 ; CHECK-LABEL: test_v8i16_pre_load:
 ; CHECK: ldr q0, [x0, #80]!
-  %newaddr = getelementptr <8 x i16>* %addr, i32 5
+  %newaddr = getelementptr <8 x i16>, <8 x i16>* %addr, i32 5
   %val = load <8 x i16>* %newaddr, align 8
   store <8 x i16>* %newaddr, <8 x i16>** bitcast(i8** @ptr to <8 x i16>**)
   ret <8 x i16> %val
@@ -230,7 +230,7 @@ define <8 x i16> @test_v8i16_pre_load(<8
 define <8 x i16> @test_v8i16_post_load(<8 x i16>* %addr) {
 ; CHECK-LABEL: test_v8i16_post_load:
 ; CHECK: ldr q0, [x0], #80
-  %newaddr = getelementptr <8 x i16>* %addr, i32 5
+  %newaddr = getelementptr <8 x i16>, <8 x i16>* %addr, i32 5
   %val = load <8 x i16>* %addr, align 8
   store <8 x i16>* %newaddr, <8 x i16>** bitcast(i8** @ptr to <8 x i16>**)
   ret <8 x i16> %val
@@ -239,7 +239,7 @@ define <8 x i16> @test_v8i16_post_load(<
 define void @test_v8i16_pre_store(<8 x i16> %in, <8 x i16>* %addr) {
 ; CHECK-LABEL: test_v8i16_pre_store:
 ; CHECK: str q0, [x0, #80]!
-  %newaddr = getelementptr <8 x i16>* %addr, i32 5
+  %newaddr = getelementptr <8 x i16>, <8 x i16>* %addr, i32 5
   store <8 x i16> %in, <8 x i16>* %newaddr, align 8
   store <8 x i16>* %newaddr, <8 x i16>** bitcast(i8** @ptr to <8 x i16>**)
   ret void
@@ -248,7 +248,7 @@ define void @test_v8i16_pre_store(<8 x i
 define void @test_v8i16_post_store(<8 x i16> %in, <8 x i16>* %addr) {
 ; CHECK-LABEL: test_v8i16_post_store:
 ; CHECK: str q0, [x0], #80
-  %newaddr = getelementptr <8 x i16>* %addr, i32 5
+  %newaddr = getelementptr <8 x i16>, <8 x i16>* %addr, i32 5
   store <8 x i16> %in, <8 x i16>* %addr, align 8
   store <8 x i16>* %newaddr, <8 x i16>** bitcast(i8** @ptr to <8 x i16>**)
   ret void
@@ -257,7 +257,7 @@ define void @test_v8i16_post_store(<8 x
 define <4 x i32> @test_v4i32_pre_load(<4 x i32>* %addr) {
 ; CHECK-LABEL: test_v4i32_pre_load:
 ; CHECK: ldr q0, [x0, #80]!
-  %newaddr = getelementptr <4 x i32>* %addr, i32 5
+  %newaddr = getelementptr <4 x i32>, <4 x i32>* %addr, i32 5
   %val = load <4 x i32>* %newaddr, align 8
   store <4 x i32>* %newaddr, <4 x i32>** bitcast(i8** @ptr to <4 x i32>**)
   ret <4 x i32> %val
@@ -266,7 +266,7 @@ define <4 x i32> @test_v4i32_pre_load(<4
 define <4 x i32> @test_v4i32_post_load(<4 x i32>* %addr) {
 ; CHECK-LABEL: test_v4i32_post_load:
 ; CHECK: ldr q0, [x0], #80
-  %newaddr = getelementptr <4 x i32>* %addr, i32 5
+  %newaddr = getelementptr <4 x i32>, <4 x i32>* %addr, i32 5
   %val = load <4 x i32>* %addr, align 8
   store <4 x i32>* %newaddr, <4 x i32>** bitcast(i8** @ptr to <4 x i32>**)
   ret <4 x i32> %val
@@ -275,7 +275,7 @@ define <4 x i32> @test_v4i32_post_load(<
 define void @test_v4i32_pre_store(<4 x i32> %in, <4 x i32>* %addr) {
 ; CHECK-LABEL: test_v4i32_pre_store:
 ; CHECK: str q0, [x0, #80]!
-  %newaddr = getelementptr <4 x i32>* %addr, i32 5
+  %newaddr = getelementptr <4 x i32>, <4 x i32>* %addr, i32 5
   store <4 x i32> %in, <4 x i32>* %newaddr, align 8
   store <4 x i32>* %newaddr, <4 x i32>** bitcast(i8** @ptr to <4 x i32>**)
   ret void
@@ -284,7 +284,7 @@ define void @test_v4i32_pre_store(<4 x i
 define void @test_v4i32_post_store(<4 x i32> %in, <4 x i32>* %addr) {
 ; CHECK-LABEL: test_v4i32_post_store:
 ; CHECK: str q0, [x0], #80
-  %newaddr = getelementptr <4 x i32>* %addr, i32 5
+  %newaddr = getelementptr <4 x i32>, <4 x i32>* %addr, i32 5
   store <4 x i32> %in, <4 x i32>* %addr, align 8
   store <4 x i32>* %newaddr, <4 x i32>** bitcast(i8** @ptr to <4 x i32>**)
   ret void
@@ -294,7 +294,7 @@ define void @test_v4i32_post_store(<4 x
 define <4 x float> @test_v4f32_pre_load(<4 x float>* %addr) {
 ; CHECK-LABEL: test_v4f32_pre_load:
 ; CHECK: ldr q0, [x0, #80]!
-  %newaddr = getelementptr <4 x float>* %addr, i32 5
+  %newaddr = getelementptr <4 x float>, <4 x float>* %addr, i32 5
   %val = load <4 x float>* %newaddr, align 8
   store <4 x float>* %newaddr, <4 x float>** bitcast(i8** @ptr to <4 x float>**)
   ret <4 x float> %val
@@ -303,7 +303,7 @@ define <4 x float> @test_v4f32_pre_load(
 define <4 x float> @test_v4f32_post_load(<4 x float>* %addr) {
 ; CHECK-LABEL: test_v4f32_post_load:
 ; CHECK: ldr q0, [x0], #80
-  %newaddr = getelementptr <4 x float>* %addr, i32 5
+  %newaddr = getelementptr <4 x float>, <4 x float>* %addr, i32 5
   %val = load <4 x float>* %addr, align 8
   store <4 x float>* %newaddr, <4 x float>** bitcast(i8** @ptr to <4 x float>**)
   ret <4 x float> %val
@@ -312,7 +312,7 @@ define <4 x float> @test_v4f32_post_load
 define void @test_v4f32_pre_store(<4 x float> %in, <4 x float>* %addr) {
 ; CHECK-LABEL: test_v4f32_pre_store:
 ; CHECK: str q0, [x0, #80]!
-  %newaddr = getelementptr <4 x float>* %addr, i32 5
+  %newaddr = getelementptr <4 x float>, <4 x float>* %addr, i32 5
   store <4 x float> %in, <4 x float>* %newaddr, align 8
   store <4 x float>* %newaddr, <4 x float>** bitcast(i8** @ptr to <4 x float>**)
   ret void
@@ -321,7 +321,7 @@ define void @test_v4f32_pre_store(<4 x f
 define void @test_v4f32_post_store(<4 x float> %in, <4 x float>* %addr) {
 ; CHECK-LABEL: test_v4f32_post_store:
 ; CHECK: str q0, [x0], #80
-  %newaddr = getelementptr <4 x float>* %addr, i32 5
+  %newaddr = getelementptr <4 x float>, <4 x float>* %addr, i32 5
   store <4 x float> %in, <4 x float>* %addr, align 8
   store <4 x float>* %newaddr, <4 x float>** bitcast(i8** @ptr to <4 x float>**)
   ret void
@@ -331,7 +331,7 @@ define void @test_v4f32_post_store(<4 x
 define <2 x i64> @test_v2i64_pre_load(<2 x i64>* %addr) {
 ; CHECK-LABEL: test_v2i64_pre_load:
 ; CHECK: ldr q0, [x0, #80]!
-  %newaddr = getelementptr <2 x i64>* %addr, i32 5
+  %newaddr = getelementptr <2 x i64>, <2 x i64>* %addr, i32 5
   %val = load <2 x i64>* %newaddr, align 8
   store <2 x i64>* %newaddr, <2 x i64>** bitcast(i8** @ptr to <2 x i64>**)
   ret <2 x i64> %val
@@ -340,7 +340,7 @@ define <2 x i64> @test_v2i64_pre_load(<2
 define <2 x i64> @test_v2i64_post_load(<2 x i64>* %addr) {
 ; CHECK-LABEL: test_v2i64_post_load:
 ; CHECK: ldr q0, [x0], #80
-  %newaddr = getelementptr <2 x i64>* %addr, i32 5
+  %newaddr = getelementptr <2 x i64>, <2 x i64>* %addr, i32 5
   %val = load <2 x i64>* %addr, align 8
   store <2 x i64>* %newaddr, <2 x i64>** bitcast(i8** @ptr to <2 x i64>**)
   ret <2 x i64> %val
@@ -349,7 +349,7 @@ define <2 x i64> @test_v2i64_post_load(<
 define void @test_v2i64_pre_store(<2 x i64> %in, <2 x i64>* %addr) {
 ; CHECK-LABEL: test_v2i64_pre_store:
 ; CHECK: str q0, [x0, #80]!
-  %newaddr = getelementptr <2 x i64>* %addr, i32 5
+  %newaddr = getelementptr <2 x i64>, <2 x i64>* %addr, i32 5
   store <2 x i64> %in, <2 x i64>* %newaddr, align 8
   store <2 x i64>* %newaddr, <2 x i64>** bitcast(i8** @ptr to <2 x i64>**)
   ret void
@@ -358,7 +358,7 @@ define void @test_v2i64_pre_store(<2 x i
 define void @test_v2i64_post_store(<2 x i64> %in, <2 x i64>* %addr) {
 ; CHECK-LABEL: test_v2i64_post_store:
 ; CHECK: str q0, [x0], #80
-  %newaddr = getelementptr <2 x i64>* %addr, i32 5
+  %newaddr = getelementptr <2 x i64>, <2 x i64>* %addr, i32 5
   store <2 x i64> %in, <2 x i64>* %addr, align 8
   store <2 x i64>* %newaddr, <2 x i64>** bitcast(i8** @ptr to <2 x i64>**)
   ret void
@@ -368,7 +368,7 @@ define void @test_v2i64_post_store(<2 x
 define <2 x double> @test_v2f64_pre_load(<2 x double>* %addr) {
 ; CHECK-LABEL: test_v2f64_pre_load:
 ; CHECK: ldr q0, [x0, #80]!
-  %newaddr = getelementptr <2 x double>* %addr, i32 5
+  %newaddr = getelementptr <2 x double>, <2 x double>* %addr, i32 5
   %val = load <2 x double>* %newaddr, align 8
   store <2 x double>* %newaddr, <2 x double>** bitcast(i8** @ptr to <2 x double>**)
   ret <2 x double> %val
@@ -377,7 +377,7 @@ define <2 x double> @test_v2f64_pre_load
 define <2 x double> @test_v2f64_post_load(<2 x double>* %addr) {
 ; CHECK-LABEL: test_v2f64_post_load:
 ; CHECK: ldr q0, [x0], #80
-  %newaddr = getelementptr <2 x double>* %addr, i32 5
+  %newaddr = getelementptr <2 x double>, <2 x double>* %addr, i32 5
   %val = load <2 x double>* %addr, align 8
   store <2 x double>* %newaddr, <2 x double>** bitcast(i8** @ptr to <2 x double>**)
   ret <2 x double> %val
@@ -386,7 +386,7 @@ define <2 x double> @test_v2f64_post_loa
 define void @test_v2f64_pre_store(<2 x double> %in, <2 x double>* %addr) {
 ; CHECK-LABEL: test_v2f64_pre_store:
 ; CHECK: str q0, [x0, #80]!
-  %newaddr = getelementptr <2 x double>* %addr, i32 5
+  %newaddr = getelementptr <2 x double>, <2 x double>* %addr, i32 5
   store <2 x double> %in, <2 x double>* %newaddr, align 8
   store <2 x double>* %newaddr, <2 x double>** bitcast(i8** @ptr to <2 x double>**)
   ret void
@@ -395,7 +395,7 @@ define void @test_v2f64_pre_store(<2 x d
 define void @test_v2f64_post_store(<2 x double> %in, <2 x double>* %addr) {
 ; CHECK-LABEL: test_v2f64_post_store:
 ; CHECK: str q0, [x0], #80
-  %newaddr = getelementptr <2 x double>* %addr, i32 5
+  %newaddr = getelementptr <2 x double>, <2 x double>* %addr, i32 5
   store <2 x double> %in, <2 x double>* %addr, align 8
   store <2 x double>* %newaddr, <2 x double>** bitcast(i8** @ptr to <2 x double>**)
   ret void
@@ -407,7 +407,7 @@ define i8* @test_v16i8_post_imm_st1_lane
   %elt = extractelement <16 x i8> %in, i32 3
   store i8 %elt, i8* %addr
 
-  %newaddr = getelementptr i8* %addr, i32 1
+  %newaddr = getelementptr i8, i8* %addr, i32 1
   ret i8* %newaddr
 }
 
@@ -418,7 +418,7 @@ define i8* @test_v16i8_post_reg_st1_lane
   %elt = extractelement <16 x i8> %in, i32 3
   store i8 %elt, i8* %addr
 
-  %newaddr = getelementptr i8* %addr, i32 2
+  %newaddr = getelementptr i8, i8* %addr, i32 2
   ret i8* %newaddr
 }
 
@@ -429,7 +429,7 @@ define i16* @test_v8i16_post_imm_st1_lan
   %elt = extractelement <8 x i16> %in, i32 3
   store i16 %elt, i16* %addr
 
-  %newaddr = getelementptr i16* %addr, i32 1
+  %newaddr = getelementptr i16, i16* %addr, i32 1
   ret i16* %newaddr
 }
 
@@ -440,7 +440,7 @@ define i16* @test_v8i16_post_reg_st1_lan
   %elt = extractelement <8 x i16> %in, i32 3
   store i16 %elt, i16* %addr
 
-  %newaddr = getelementptr i16* %addr, i32 2
+  %newaddr = getelementptr i16, i16* %addr, i32 2
   ret i16* %newaddr
 }
 
@@ -450,7 +450,7 @@ define i32* @test_v4i32_post_imm_st1_lan
   %elt = extractelement <4 x i32> %in, i32 3
   store i32 %elt, i32* %addr
 
-  %newaddr = getelementptr i32* %addr, i32 1
+  %newaddr = getelementptr i32, i32* %addr, i32 1
   ret i32* %newaddr
 }
 
@@ -461,7 +461,7 @@ define i32* @test_v4i32_post_reg_st1_lan
   %elt = extractelement <4 x i32> %in, i32 3
   store i32 %elt, i32* %addr
 
-  %newaddr = getelementptr i32* %addr, i32 2
+  %newaddr = getelementptr i32, i32* %addr, i32 2
   ret i32* %newaddr
 }
 
@@ -471,7 +471,7 @@ define float* @test_v4f32_post_imm_st1_l
   %elt = extractelement <4 x float> %in, i32 3
   store float %elt, float* %addr
 
-  %newaddr = getelementptr float* %addr, i32 1
+  %newaddr = getelementptr float, float* %addr, i32 1
   ret float* %newaddr
 }
 
@@ -482,7 +482,7 @@ define float* @test_v4f32_post_reg_st1_l
   %elt = extractelement <4 x float> %in, i32 3
   store float %elt, float* %addr
 
-  %newaddr = getelementptr float* %addr, i32 2
+  %newaddr = getelementptr float, float* %addr, i32 2
   ret float* %newaddr
 }
 
@@ -492,7 +492,7 @@ define i64* @test_v2i64_post_imm_st1_lan
   %elt = extractelement <2 x i64> %in, i64 1
   store i64 %elt, i64* %addr
 
-  %newaddr = getelementptr i64* %addr, i64 1
+  %newaddr = getelementptr i64, i64* %addr, i64 1
   ret i64* %newaddr
 }
 
@@ -503,7 +503,7 @@ define i64* @test_v2i64_post_reg_st1_lan
   %elt = extractelement <2 x i64> %in, i64 1
   store i64 %elt, i64* %addr
 
-  %newaddr = getelementptr i64* %addr, i64 2
+  %newaddr = getelementptr i64, i64* %addr, i64 2
   ret i64* %newaddr
 }
 
@@ -513,7 +513,7 @@ define double* @test_v2f64_post_imm_st1_
   %elt = extractelement <2 x double> %in, i32 1
   store double %elt, double* %addr
 
-  %newaddr = getelementptr double* %addr, i32 1
+  %newaddr = getelementptr double, double* %addr, i32 1
   ret double* %newaddr
 }
 
@@ -524,7 +524,7 @@ define double* @test_v2f64_post_reg_st1_
   %elt = extractelement <2 x double> %in, i32 1
   store double %elt, double* %addr
 
-  %newaddr = getelementptr double* %addr, i32 2
+  %newaddr = getelementptr double, double* %addr, i32 2
   ret double* %newaddr
 }
 
@@ -534,7 +534,7 @@ define i8* @test_v8i8_post_imm_st1_lane(
   %elt = extractelement <8 x i8> %in, i32 3
   store i8 %elt, i8* %addr
 
-  %newaddr = getelementptr i8* %addr, i32 1
+  %newaddr = getelementptr i8, i8* %addr, i32 1
   ret i8* %newaddr
 }
 
@@ -545,7 +545,7 @@ define i8* @test_v8i8_post_reg_st1_lane(
   %elt = extractelement <8 x i8> %in, i32 3
   store i8 %elt, i8* %addr
 
-  %newaddr = getelementptr i8* %addr, i32 2
+  %newaddr = getelementptr i8, i8* %addr, i32 2
   ret i8* %newaddr
 }
 
@@ -555,7 +555,7 @@ define i16* @test_v4i16_post_imm_st1_lan
   %elt = extractelement <4 x i16> %in, i32 3
   store i16 %elt, i16* %addr
 
-  %newaddr = getelementptr i16* %addr, i32 1
+  %newaddr = getelementptr i16, i16* %addr, i32 1
   ret i16* %newaddr
 }
 
@@ -566,7 +566,7 @@ define i16* @test_v4i16_post_reg_st1_lan
   %elt = extractelement <4 x i16> %in, i32 3
   store i16 %elt, i16* %addr
 
-  %newaddr = getelementptr i16* %addr, i32 2
+  %newaddr = getelementptr i16, i16* %addr, i32 2
   ret i16* %newaddr
 }
 
@@ -576,7 +576,7 @@ define i32* @test_v2i32_post_imm_st1_lan
   %elt = extractelement <2 x i32> %in, i32 1
   store i32 %elt, i32* %addr
 
-  %newaddr = getelementptr i32* %addr, i32 1
+  %newaddr = getelementptr i32, i32* %addr, i32 1
   ret i32* %newaddr
 }
 
@@ -587,7 +587,7 @@ define i32* @test_v2i32_post_reg_st1_lan
   %elt = extractelement <2 x i32> %in, i32 1
   store i32 %elt, i32* %addr
 
-  %newaddr = getelementptr i32* %addr, i32 2
+  %newaddr = getelementptr i32, i32* %addr, i32 2
   ret i32* %newaddr
 }
 
@@ -597,7 +597,7 @@ define float* @test_v2f32_post_imm_st1_l
   %elt = extractelement <2 x float> %in, i32 1
   store float %elt, float* %addr
 
-  %newaddr = getelementptr float* %addr, i32 1
+  %newaddr = getelementptr float, float* %addr, i32 1
   ret float* %newaddr
 }
 
@@ -608,7 +608,7 @@ define float* @test_v2f32_post_reg_st1_l
   %elt = extractelement <2 x float> %in, i32 1
   store float %elt, float* %addr
 
-  %newaddr = getelementptr float* %addr, i32 2
+  %newaddr = getelementptr float, float* %addr, i32 2
   ret float* %newaddr
 }
 
@@ -616,7 +616,7 @@ define { <16 x i8>, <16 x i8> } @test_v1
 ;CHECK-LABEL: test_v16i8_post_imm_ld2:
 ;CHECK: ld2.16b { v0, v1 }, [x0], #32
   %ld2 = tail call { <16 x i8>, <16 x i8> } @llvm.aarch64.neon.ld2.v16i8.p0i8(i8* %A)
-  %tmp = getelementptr i8* %A, i32 32
+  %tmp = getelementptr i8, i8* %A, i32 32
   store i8* %tmp, i8** %ptr
   ret { <16 x i8>, <16 x i8> } %ld2
 }
@@ -625,7 +625,7 @@ define { <16 x i8>, <16 x i8> } @test_v1
 ;CHECK-LABEL: test_v16i8_post_reg_ld2:
 ;CHECK: ld2.16b { v0, v1 }, [x0], x{{[0-9]+}}
   %ld2 = tail call { <16 x i8>, <16 x i8> } @llvm.aarch64.neon.ld2.v16i8.p0i8(i8* %A)
-  %tmp = getelementptr i8* %A, i64 %inc
+  %tmp = getelementptr i8, i8* %A, i64 %inc
   store i8* %tmp, i8** %ptr
   ret { <16 x i8>, <16 x i8> } %ld2
 }
@@ -637,7 +637,7 @@ define { <8 x i8>, <8 x i8> } @test_v8i8
 ;CHECK-LABEL: test_v8i8_post_imm_ld2:
 ;CHECK: ld2.8b { v0, v1 }, [x0], #16
   %ld2 = tail call { <8 x i8>, <8 x i8> } @llvm.aarch64.neon.ld2.v8i8.p0i8(i8* %A)
-  %tmp = getelementptr i8* %A, i32 16
+  %tmp = getelementptr i8, i8* %A, i32 16
   store i8* %tmp, i8** %ptr
   ret { <8 x i8>, <8 x i8> } %ld2
 }
@@ -646,7 +646,7 @@ define { <8 x i8>, <8 x i8> } @test_v8i8
 ;CHECK-LABEL: test_v8i8_post_reg_ld2:
 ;CHECK: ld2.8b { v0, v1 }, [x0], x{{[0-9]+}}
   %ld2 = tail call { <8 x i8>, <8 x i8> } @llvm.aarch64.neon.ld2.v8i8.p0i8(i8* %A)
-  %tmp = getelementptr i8* %A, i64 %inc
+  %tmp = getelementptr i8, i8* %A, i64 %inc
   store i8* %tmp, i8** %ptr
   ret { <8 x i8>, <8 x i8> } %ld2
 }
@@ -658,7 +658,7 @@ define { <8 x i16>, <8 x i16> } @test_v8
 ;CHECK-LABEL: test_v8i16_post_imm_ld2:
 ;CHECK: ld2.8h { v0, v1 }, [x0], #32
   %ld2 = tail call { <8 x i16>, <8 x i16> } @llvm.aarch64.neon.ld2.v8i16.p0i16(i16* %A)
-  %tmp = getelementptr i16* %A, i32 16
+  %tmp = getelementptr i16, i16* %A, i32 16
   store i16* %tmp, i16** %ptr
   ret { <8 x i16>, <8 x i16> } %ld2
 }
@@ -667,7 +667,7 @@ define { <8 x i16>, <8 x i16> } @test_v8
 ;CHECK-LABEL: test_v8i16_post_reg_ld2:
 ;CHECK: ld2.8h { v0, v1 }, [x0], x{{[0-9]+}}
   %ld2 = tail call { <8 x i16>, <8 x i16> } @llvm.aarch64.neon.ld2.v8i16.p0i16(i16* %A)
-  %tmp = getelementptr i16* %A, i64 %inc
+  %tmp = getelementptr i16, i16* %A, i64 %inc
   store i16* %tmp, i16** %ptr
   ret { <8 x i16>, <8 x i16> } %ld2
 }
@@ -679,7 +679,7 @@ define { <4 x i16>, <4 x i16> } @test_v4
 ;CHECK-LABEL: test_v4i16_post_imm_ld2:
 ;CHECK: ld2.4h { v0, v1 }, [x0], #16
   %ld2 = tail call { <4 x i16>, <4 x i16> } @llvm.aarch64.neon.ld2.v4i16.p0i16(i16* %A)
-  %tmp = getelementptr i16* %A, i32 8
+  %tmp = getelementptr i16, i16* %A, i32 8
   store i16* %tmp, i16** %ptr
   ret { <4 x i16>, <4 x i16> } %ld2
 }
@@ -688,7 +688,7 @@ define { <4 x i16>, <4 x i16> } @test_v4
 ;CHECK-LABEL: test_v4i16_post_reg_ld2:
 ;CHECK: ld2.4h { v0, v1 }, [x0], x{{[0-9]+}}
   %ld2 = tail call { <4 x i16>, <4 x i16> } @llvm.aarch64.neon.ld2.v4i16.p0i16(i16* %A)
-  %tmp = getelementptr i16* %A, i64 %inc
+  %tmp = getelementptr i16, i16* %A, i64 %inc
   store i16* %tmp, i16** %ptr
   ret { <4 x i16>, <4 x i16> } %ld2
 }
@@ -700,7 +700,7 @@ define { <4 x i32>, <4 x i32> } @test_v4
 ;CHECK-LABEL: test_v4i32_post_imm_ld2:
 ;CHECK: ld2.4s { v0, v1 }, [x0], #32
   %ld2 = tail call { <4 x i32>, <4 x i32> } @llvm.aarch64.neon.ld2.v4i32.p0i32(i32* %A)
-  %tmp = getelementptr i32* %A, i32 8
+  %tmp = getelementptr i32, i32* %A, i32 8
   store i32* %tmp, i32** %ptr
   ret { <4 x i32>, <4 x i32> } %ld2
 }
@@ -709,7 +709,7 @@ define { <4 x i32>, <4 x i32> } @test_v4
 ;CHECK-LABEL: test_v4i32_post_reg_ld2:
 ;CHECK: ld2.4s { v0, v1 }, [x0], x{{[0-9]+}}
   %ld2 = tail call { <4 x i32>, <4 x i32> } @llvm.aarch64.neon.ld2.v4i32.p0i32(i32* %A)
-  %tmp = getelementptr i32* %A, i64 %inc
+  %tmp = getelementptr i32, i32* %A, i64 %inc
   store i32* %tmp, i32** %ptr
   ret { <4 x i32>, <4 x i32> } %ld2
 }
@@ -721,7 +721,7 @@ define { <2 x i32>, <2 x i32> } @test_v2
 ;CHECK-LABEL: test_v2i32_post_imm_ld2:
 ;CHECK: ld2.2s { v0, v1 }, [x0], #16
   %ld2 = tail call { <2 x i32>, <2 x i32> } @llvm.aarch64.neon.ld2.v2i32.p0i32(i32* %A)
-  %tmp = getelementptr i32* %A, i32 4
+  %tmp = getelementptr i32, i32* %A, i32 4
   store i32* %tmp, i32** %ptr
   ret { <2 x i32>, <2 x i32> } %ld2
 }
@@ -730,7 +730,7 @@ define { <2 x i32>, <2 x i32> } @test_v2
 ;CHECK-LABEL: test_v2i32_post_reg_ld2:
 ;CHECK: ld2.2s { v0, v1 }, [x0], x{{[0-9]+}}
   %ld2 = tail call { <2 x i32>, <2 x i32> } @llvm.aarch64.neon.ld2.v2i32.p0i32(i32* %A)
-  %tmp = getelementptr i32* %A, i64 %inc
+  %tmp = getelementptr i32, i32* %A, i64 %inc
   store i32* %tmp, i32** %ptr
   ret { <2 x i32>, <2 x i32> } %ld2
 }
@@ -742,7 +742,7 @@ define { <2 x i64>, <2 x i64> } @test_v2
 ;CHECK-LABEL: test_v2i64_post_imm_ld2:
 ;CHECK: ld2.2d { v0, v1 }, [x0], #32
   %ld2 = tail call { <2 x i64>, <2 x i64> } @llvm.aarch64.neon.ld2.v2i64.p0i64(i64* %A)
-  %tmp = getelementptr i64* %A, i32 4
+  %tmp = getelementptr i64, i64* %A, i32 4
   store i64* %tmp, i64** %ptr
   ret { <2 x i64>, <2 x i64> } %ld2
 }
@@ -751,7 +751,7 @@ define { <2 x i64>, <2 x i64> } @test_v2
 ;CHECK-LABEL: test_v2i64_post_reg_ld2:
 ;CHECK: ld2.2d { v0, v1 }, [x0], x{{[0-9]+}}
   %ld2 = tail call { <2 x i64>, <2 x i64> } @llvm.aarch64.neon.ld2.v2i64.p0i64(i64* %A)
-  %tmp = getelementptr i64* %A, i64 %inc
+  %tmp = getelementptr i64, i64* %A, i64 %inc
   store i64* %tmp, i64** %ptr
   ret { <2 x i64>, <2 x i64> } %ld2
 }
@@ -763,7 +763,7 @@ define { <1 x i64>, <1 x i64> } @test_v1
 ;CHECK-LABEL: test_v1i64_post_imm_ld2:
 ;CHECK: ld1.1d { v0, v1 }, [x0], #16
   %ld2 = tail call { <1 x i64>, <1 x i64> } @llvm.aarch64.neon.ld2.v1i64.p0i64(i64* %A)
-  %tmp = getelementptr i64* %A, i32 2
+  %tmp = getelementptr i64, i64* %A, i32 2
   store i64* %tmp, i64** %ptr
   ret { <1 x i64>, <1 x i64> } %ld2
 }
@@ -772,7 +772,7 @@ define { <1 x i64>, <1 x i64> } @test_v1
 ;CHECK-LABEL: test_v1i64_post_reg_ld2:
 ;CHECK: ld1.1d { v0, v1 }, [x0], x{{[0-9]+}}
   %ld2 = tail call { <1 x i64>, <1 x i64> } @llvm.aarch64.neon.ld2.v1i64.p0i64(i64* %A)
-  %tmp = getelementptr i64* %A, i64 %inc
+  %tmp = getelementptr i64, i64* %A, i64 %inc
   store i64* %tmp, i64** %ptr
   ret { <1 x i64>, <1 x i64> } %ld2
 }
@@ -784,7 +784,7 @@ define { <4 x float>, <4 x float> } @tes
 ;CHECK-LABEL: test_v4f32_post_imm_ld2:
 ;CHECK: ld2.4s { v0, v1 }, [x0], #32
   %ld2 = tail call { <4 x float>, <4 x float> } @llvm.aarch64.neon.ld2.v4f32.p0f32(float* %A)
-  %tmp = getelementptr float* %A, i32 8
+  %tmp = getelementptr float, float* %A, i32 8
   store float* %tmp, float** %ptr
   ret { <4 x float>, <4 x float> } %ld2
 }
@@ -793,7 +793,7 @@ define { <4 x float>, <4 x float> } @tes
 ;CHECK-LABEL: test_v4f32_post_reg_ld2:
 ;CHECK: ld2.4s { v0, v1 }, [x0], x{{[0-9]+}}
   %ld2 = tail call { <4 x float>, <4 x float> } @llvm.aarch64.neon.ld2.v4f32.p0f32(float* %A)
-  %tmp = getelementptr float* %A, i64 %inc
+  %tmp = getelementptr float, float* %A, i64 %inc
   store float* %tmp, float** %ptr
   ret { <4 x float>, <4 x float> } %ld2
 }
@@ -805,7 +805,7 @@ define { <2 x float>, <2 x float> } @tes
 ;CHECK-LABEL: test_v2f32_post_imm_ld2:
 ;CHECK: ld2.2s { v0, v1 }, [x0], #16
   %ld2 = tail call { <2 x float>, <2 x float> } @llvm.aarch64.neon.ld2.v2f32.p0f32(float* %A)
-  %tmp = getelementptr float* %A, i32 4
+  %tmp = getelementptr float, float* %A, i32 4
   store float* %tmp, float** %ptr
   ret { <2 x float>, <2 x float> } %ld2
 }
@@ -814,7 +814,7 @@ define { <2 x float>, <2 x float> } @tes
 ;CHECK-LABEL: test_v2f32_post_reg_ld2:
 ;CHECK: ld2.2s { v0, v1 }, [x0], x{{[0-9]+}}
   %ld2 = tail call { <2 x float>, <2 x float> } @llvm.aarch64.neon.ld2.v2f32.p0f32(float* %A)
-  %tmp = getelementptr float* %A, i64 %inc
+  %tmp = getelementptr float, float* %A, i64 %inc
   store float* %tmp, float** %ptr
   ret { <2 x float>, <2 x float> } %ld2
 }
@@ -826,7 +826,7 @@ define { <2 x double>, <2 x double> } @t
 ;CHECK-LABEL: test_v2f64_post_imm_ld2:
 ;CHECK: ld2.2d { v0, v1 }, [x0], #32
   %ld2 = tail call { <2 x double>, <2 x double> } @llvm.aarch64.neon.ld2.v2f64.p0f64(double* %A)
-  %tmp = getelementptr double* %A, i32 4
+  %tmp = getelementptr double, double* %A, i32 4
   store double* %tmp, double** %ptr
   ret { <2 x double>, <2 x double> } %ld2
 }
@@ -835,7 +835,7 @@ define { <2 x double>, <2 x double> } @t
 ;CHECK-LABEL: test_v2f64_post_reg_ld2:
 ;CHECK: ld2.2d { v0, v1 }, [x0], x{{[0-9]+}}
   %ld2 = tail call { <2 x double>, <2 x double> } @llvm.aarch64.neon.ld2.v2f64.p0f64(double* %A)
-  %tmp = getelementptr double* %A, i64 %inc
+  %tmp = getelementptr double, double* %A, i64 %inc
   store double* %tmp, double** %ptr
   ret { <2 x double>, <2 x double> } %ld2
 }
@@ -847,7 +847,7 @@ define { <1 x double>, <1 x double> } @t
 ;CHECK-LABEL: test_v1f64_post_imm_ld2:
 ;CHECK: ld1.1d { v0, v1 }, [x0], #16
   %ld2 = tail call { <1 x double>, <1 x double> } @llvm.aarch64.neon.ld2.v1f64.p0f64(double* %A)
-  %tmp = getelementptr double* %A, i32 2
+  %tmp = getelementptr double, double* %A, i32 2
   store double* %tmp, double** %ptr
   ret { <1 x double>, <1 x double> } %ld2
 }
@@ -856,7 +856,7 @@ define { <1 x double>, <1 x double> } @t
 ;CHECK-LABEL: test_v1f64_post_reg_ld2:
 ;CHECK: ld1.1d { v0, v1 }, [x0], x{{[0-9]+}}
   %ld2 = tail call { <1 x double>, <1 x double> } @llvm.aarch64.neon.ld2.v1f64.p0f64(double* %A)
-  %tmp = getelementptr double* %A, i64 %inc
+  %tmp = getelementptr double, double* %A, i64 %inc
   store double* %tmp, double** %ptr
   ret { <1 x double>, <1 x double> } %ld2
 }
@@ -868,7 +868,7 @@ define { <16 x i8>, <16 x i8>, <16 x i8>
 ;CHECK-LABEL: test_v16i8_post_imm_ld3:
 ;CHECK: ld3.16b { v0, v1, v2 }, [x0], #48
   %ld3 = tail call { <16 x i8>, <16 x i8>, <16 x i8> } @llvm.aarch64.neon.ld3.v16i8.p0i8(i8* %A)
-  %tmp = getelementptr i8* %A, i32 48
+  %tmp = getelementptr i8, i8* %A, i32 48
   store i8* %tmp, i8** %ptr
   ret { <16 x i8>, <16 x i8>, <16 x i8> } %ld3
 }
@@ -877,7 +877,7 @@ define { <16 x i8>, <16 x i8>, <16 x i8>
 ;CHECK-LABEL: test_v16i8_post_reg_ld3:
 ;CHECK: ld3.16b { v0, v1, v2 }, [x0], x{{[0-9]+}}
   %ld3 = tail call { <16 x i8>, <16 x i8>, <16 x i8> } @llvm.aarch64.neon.ld3.v16i8.p0i8(i8* %A)
-  %tmp = getelementptr i8* %A, i64 %inc
+  %tmp = getelementptr i8, i8* %A, i64 %inc
   store i8* %tmp, i8** %ptr
   ret { <16 x i8>, <16 x i8>, <16 x i8> } %ld3
 }
@@ -889,7 +889,7 @@ define { <8 x i8>, <8 x i8>, <8 x i8> }
 ;CHECK-LABEL: test_v8i8_post_imm_ld3:
 ;CHECK: ld3.8b { v0, v1, v2 }, [x0], #24
   %ld3 = tail call { <8 x i8>, <8 x i8>, <8 x i8> } @llvm.aarch64.neon.ld3.v8i8.p0i8(i8* %A)
-  %tmp = getelementptr i8* %A, i32 24
+  %tmp = getelementptr i8, i8* %A, i32 24
   store i8* %tmp, i8** %ptr
   ret { <8 x i8>, <8 x i8>, <8 x i8> } %ld3
 }
@@ -898,7 +898,7 @@ define { <8 x i8>, <8 x i8>, <8 x i8> }
 ;CHECK-LABEL: test_v8i8_post_reg_ld3:
 ;CHECK: ld3.8b { v0, v1, v2 }, [x0], x{{[0-9]+}}
   %ld3 = tail call { <8 x i8>, <8 x i8>, <8 x i8> } @llvm.aarch64.neon.ld3.v8i8.p0i8(i8* %A)
-  %tmp = getelementptr i8* %A, i64 %inc
+  %tmp = getelementptr i8, i8* %A, i64 %inc
   store i8* %tmp, i8** %ptr
   ret { <8 x i8>, <8 x i8>, <8 x i8> } %ld3
 }
@@ -910,7 +910,7 @@ define { <8 x i16>, <8 x i16>, <8 x i16>
 ;CHECK-LABEL: test_v8i16_post_imm_ld3:
 ;CHECK: ld3.8h { v0, v1, v2 }, [x0], #48
   %ld3 = tail call { <8 x i16>, <8 x i16>, <8 x i16> } @llvm.aarch64.neon.ld3.v8i16.p0i16(i16* %A)
-  %tmp = getelementptr i16* %A, i32 24
+  %tmp = getelementptr i16, i16* %A, i32 24
   store i16* %tmp, i16** %ptr
   ret { <8 x i16>, <8 x i16>, <8 x i16> } %ld3
 }
@@ -919,7 +919,7 @@ define { <8 x i16>, <8 x i16>, <8 x i16>
 ;CHECK-LABEL: test_v8i16_post_reg_ld3:
 ;CHECK: ld3.8h { v0, v1, v2 }, [x0], x{{[0-9]+}}
   %ld3 = tail call { <8 x i16>, <8 x i16>, <8 x i16> } @llvm.aarch64.neon.ld3.v8i16.p0i16(i16* %A)
-  %tmp = getelementptr i16* %A, i64 %inc
+  %tmp = getelementptr i16, i16* %A, i64 %inc
   store i16* %tmp, i16** %ptr
   ret { <8 x i16>, <8 x i16>, <8 x i16> } %ld3
 }
@@ -931,7 +931,7 @@ define { <4 x i16>, <4 x i16>, <4 x i16>
 ;CHECK-LABEL: test_v4i16_post_imm_ld3:
 ;CHECK: ld3.4h { v0, v1, v2 }, [x0], #24
   %ld3 = tail call { <4 x i16>, <4 x i16>, <4 x i16> } @llvm.aarch64.neon.ld3.v4i16.p0i16(i16* %A)
-  %tmp = getelementptr i16* %A, i32 12
+  %tmp = getelementptr i16, i16* %A, i32 12
   store i16* %tmp, i16** %ptr
   ret { <4 x i16>, <4 x i16>, <4 x i16> } %ld3
 }
@@ -940,7 +940,7 @@ define { <4 x i16>, <4 x i16>, <4 x i16>
 ;CHECK-LABEL: test_v4i16_post_reg_ld3:
 ;CHECK: ld3.4h { v0, v1, v2 }, [x0], x{{[0-9]+}}
   %ld3 = tail call { <4 x i16>, <4 x i16>, <4 x i16> } @llvm.aarch64.neon.ld3.v4i16.p0i16(i16* %A)
-  %tmp = getelementptr i16* %A, i64 %inc
+  %tmp = getelementptr i16, i16* %A, i64 %inc
   store i16* %tmp, i16** %ptr
   ret { <4 x i16>, <4 x i16>, <4 x i16> } %ld3
 }
@@ -952,7 +952,7 @@ define { <4 x i32>, <4 x i32>, <4 x i32>
 ;CHECK-LABEL: test_v4i32_post_imm_ld3:
 ;CHECK: ld3.4s { v0, v1, v2 }, [x0], #48
   %ld3 = tail call { <4 x i32>, <4 x i32>, <4 x i32> } @llvm.aarch64.neon.ld3.v4i32.p0i32(i32* %A)
-  %tmp = getelementptr i32* %A, i32 12
+  %tmp = getelementptr i32, i32* %A, i32 12
   store i32* %tmp, i32** %ptr
   ret { <4 x i32>, <4 x i32>, <4 x i32> } %ld3
 }
@@ -961,7 +961,7 @@ define { <4 x i32>, <4 x i32>, <4 x i32>
 ;CHECK-LABEL: test_v4i32_post_reg_ld3:
 ;CHECK: ld3.4s { v0, v1, v2 }, [x0], x{{[0-9]+}}
   %ld3 = tail call { <4 x i32>, <4 x i32>, <4 x i32> } @llvm.aarch64.neon.ld3.v4i32.p0i32(i32* %A)
-  %tmp = getelementptr i32* %A, i64 %inc
+  %tmp = getelementptr i32, i32* %A, i64 %inc
   store i32* %tmp, i32** %ptr
   ret { <4 x i32>, <4 x i32>, <4 x i32> } %ld3
 }
@@ -973,7 +973,7 @@ define { <2 x i32>, <2 x i32>, <2 x i32>
 ;CHECK-LABEL: test_v2i32_post_imm_ld3:
 ;CHECK: ld3.2s { v0, v1, v2 }, [x0], #24
   %ld3 = tail call { <2 x i32>, <2 x i32>, <2 x i32> } @llvm.aarch64.neon.ld3.v2i32.p0i32(i32* %A)
-  %tmp = getelementptr i32* %A, i32 6
+  %tmp = getelementptr i32, i32* %A, i32 6
   store i32* %tmp, i32** %ptr
   ret { <2 x i32>, <2 x i32>, <2 x i32> } %ld3
 }
@@ -982,7 +982,7 @@ define { <2 x i32>, <2 x i32>, <2 x i32>
 ;CHECK-LABEL: test_v2i32_post_reg_ld3:
 ;CHECK: ld3.2s { v0, v1, v2 }, [x0], x{{[0-9]+}}
   %ld3 = tail call { <2 x i32>, <2 x i32>, <2 x i32> } @llvm.aarch64.neon.ld3.v2i32.p0i32(i32* %A)
-  %tmp = getelementptr i32* %A, i64 %inc
+  %tmp = getelementptr i32, i32* %A, i64 %inc
   store i32* %tmp, i32** %ptr
   ret { <2 x i32>, <2 x i32>, <2 x i32> } %ld3
 }
@@ -994,7 +994,7 @@ define { <2 x i64>, <2 x i64>, <2 x i64>
 ;CHECK-LABEL: test_v2i64_post_imm_ld3:
 ;CHECK: ld3.2d { v0, v1, v2 }, [x0], #48
   %ld3 = tail call { <2 x i64>, <2 x i64>, <2 x i64> } @llvm.aarch64.neon.ld3.v2i64.p0i64(i64* %A)
-  %tmp = getelementptr i64* %A, i32 6
+  %tmp = getelementptr i64, i64* %A, i32 6
   store i64* %tmp, i64** %ptr
   ret { <2 x i64>, <2 x i64>, <2 x i64> } %ld3
 }
@@ -1003,7 +1003,7 @@ define { <2 x i64>, <2 x i64>, <2 x i64>
 ;CHECK-LABEL: test_v2i64_post_reg_ld3:
 ;CHECK: ld3.2d { v0, v1, v2 }, [x0], x{{[0-9]+}}
   %ld3 = tail call { <2 x i64>, <2 x i64>, <2 x i64> } @llvm.aarch64.neon.ld3.v2i64.p0i64(i64* %A)
-  %tmp = getelementptr i64* %A, i64 %inc
+  %tmp = getelementptr i64, i64* %A, i64 %inc
   store i64* %tmp, i64** %ptr
   ret { <2 x i64>, <2 x i64>, <2 x i64> } %ld3
 }
@@ -1015,7 +1015,7 @@ define { <1 x i64>, <1 x i64>, <1 x i64>
 ;CHECK-LABEL: test_v1i64_post_imm_ld3:
 ;CHECK: ld1.1d { v0, v1, v2 }, [x0], #24
   %ld3 = tail call { <1 x i64>, <1 x i64>, <1 x i64> } @llvm.aarch64.neon.ld3.v1i64.p0i64(i64* %A)
-  %tmp = getelementptr i64* %A, i32 3
+  %tmp = getelementptr i64, i64* %A, i32 3
   store i64* %tmp, i64** %ptr
   ret { <1 x i64>, <1 x i64>, <1 x i64> } %ld3
 }
@@ -1024,7 +1024,7 @@ define { <1 x i64>, <1 x i64>, <1 x i64>
 ;CHECK-LABEL: test_v1i64_post_reg_ld3:
 ;CHECK: ld1.1d { v0, v1, v2 }, [x0], x{{[0-9]+}}
   %ld3 = tail call { <1 x i64>, <1 x i64>, <1 x i64> } @llvm.aarch64.neon.ld3.v1i64.p0i64(i64* %A)
-  %tmp = getelementptr i64* %A, i64 %inc
+  %tmp = getelementptr i64, i64* %A, i64 %inc
   store i64* %tmp, i64** %ptr
   ret { <1 x i64>, <1 x i64>, <1 x i64> } %ld3
 }
@@ -1036,7 +1036,7 @@ define { <4 x float>, <4 x float>, <4 x
 ;CHECK-LABEL: test_v4f32_post_imm_ld3:
 ;CHECK: ld3.4s { v0, v1, v2 }, [x0], #48
   %ld3 = tail call { <4 x float>, <4 x float>, <4 x float> } @llvm.aarch64.neon.ld3.v4f32.p0f32(float* %A)
-  %tmp = getelementptr float* %A, i32 12
+  %tmp = getelementptr float, float* %A, i32 12
   store float* %tmp, float** %ptr
   ret { <4 x float>, <4 x float>, <4 x float> } %ld3
 }
@@ -1045,7 +1045,7 @@ define { <4 x float>, <4 x float>, <4 x
 ;CHECK-LABEL: test_v4f32_post_reg_ld3:
 ;CHECK: ld3.4s { v0, v1, v2 }, [x0], x{{[0-9]+}}
   %ld3 = tail call { <4 x float>, <4 x float>, <4 x float> } @llvm.aarch64.neon.ld3.v4f32.p0f32(float* %A)
-  %tmp = getelementptr float* %A, i64 %inc
+  %tmp = getelementptr float, float* %A, i64 %inc
   store float* %tmp, float** %ptr
   ret { <4 x float>, <4 x float>, <4 x float> } %ld3
 }
@@ -1057,7 +1057,7 @@ define { <2 x float>, <2 x float>, <2 x
 ;CHECK-LABEL: test_v2f32_post_imm_ld3:
 ;CHECK: ld3.2s { v0, v1, v2 }, [x0], #24
   %ld3 = tail call { <2 x float>, <2 x float>, <2 x float> } @llvm.aarch64.neon.ld3.v2f32.p0f32(float* %A)
-  %tmp = getelementptr float* %A, i32 6
+  %tmp = getelementptr float, float* %A, i32 6
   store float* %tmp, float** %ptr
   ret { <2 x float>, <2 x float>, <2 x float> } %ld3
 }
@@ -1066,7 +1066,7 @@ define { <2 x float>, <2 x float>, <2 x
 ;CHECK-LABEL: test_v2f32_post_reg_ld3:
 ;CHECK: ld3.2s { v0, v1, v2 }, [x0], x{{[0-9]+}}
   %ld3 = tail call { <2 x float>, <2 x float>, <2 x float> } @llvm.aarch64.neon.ld3.v2f32.p0f32(float* %A)
-  %tmp = getelementptr float* %A, i64 %inc
+  %tmp = getelementptr float, float* %A, i64 %inc
   store float* %tmp, float** %ptr
   ret { <2 x float>, <2 x float>, <2 x float> } %ld3
 }
@@ -1078,7 +1078,7 @@ define { <2 x double>, <2 x double>, <2
 ;CHECK-LABEL: test_v2f64_post_imm_ld3:
 ;CHECK: ld3.2d { v0, v1, v2 }, [x0], #48
   %ld3 = tail call { <2 x double>, <2 x double>, <2 x double> } @llvm.aarch64.neon.ld3.v2f64.p0f64(double* %A)
-  %tmp = getelementptr double* %A, i32 6
+  %tmp = getelementptr double, double* %A, i32 6
   store double* %tmp, double** %ptr
   ret { <2 x double>, <2 x double>, <2 x double> } %ld3
 }
@@ -1087,7 +1087,7 @@ define { <2 x double>, <2 x double>, <2
 ;CHECK-LABEL: test_v2f64_post_reg_ld3:
 ;CHECK: ld3.2d { v0, v1, v2 }, [x0], x{{[0-9]+}}
   %ld3 = tail call { <2 x double>, <2 x double>, <2 x double> } @llvm.aarch64.neon.ld3.v2f64.p0f64(double* %A)
-  %tmp = getelementptr double* %A, i64 %inc
+  %tmp = getelementptr double, double* %A, i64 %inc
   store double* %tmp, double** %ptr
   ret { <2 x double>, <2 x double>, <2 x double> } %ld3
 }
@@ -1099,7 +1099,7 @@ define { <1 x double>, <1 x double>, <1
 ;CHECK-LABEL: test_v1f64_post_imm_ld3:
 ;CHECK: ld1.1d { v0, v1, v2 }, [x0], #24
   %ld3 = tail call { <1 x double>, <1 x double>, <1 x double> } @llvm.aarch64.neon.ld3.v1f64.p0f64(double* %A)
-  %tmp = getelementptr double* %A, i32 3
+  %tmp = getelementptr double, double* %A, i32 3
   store double* %tmp, double** %ptr
   ret { <1 x double>, <1 x double>, <1 x double> } %ld3
 }
@@ -1108,7 +1108,7 @@ define { <1 x double>, <1 x double>, <1
 ;CHECK-LABEL: test_v1f64_post_reg_ld3:
 ;CHECK: ld1.1d { v0, v1, v2 }, [x0], x{{[0-9]+}}
   %ld3 = tail call { <1 x double>, <1 x double>, <1 x double> } @llvm.aarch64.neon.ld3.v1f64.p0f64(double* %A)
-  %tmp = getelementptr double* %A, i64 %inc
+  %tmp = getelementptr double, double* %A, i64 %inc
   store double* %tmp, double** %ptr
   ret { <1 x double>, <1 x double>, <1 x double> } %ld3
 }
@@ -1120,7 +1120,7 @@ define { <16 x i8>, <16 x i8>, <16 x i8>
 ;CHECK-LABEL: test_v16i8_post_imm_ld4:
 ;CHECK: ld4.16b { v0, v1, v2, v3 }, [x0], #64
   %ld4 = tail call { <16 x i8>, <16 x i8>, <16 x i8>, <16 x i8> } @llvm.aarch64.neon.ld4.v16i8.p0i8(i8* %A)
-  %tmp = getelementptr i8* %A, i32 64
+  %tmp = getelementptr i8, i8* %A, i32 64
   store i8* %tmp, i8** %ptr
   ret { <16 x i8>, <16 x i8>, <16 x i8>, <16 x i8> } %ld4
 }
@@ -1129,7 +1129,7 @@ define { <16 x i8>, <16 x i8>, <16 x i8>
 ;CHECK-LABEL: test_v16i8_post_reg_ld4:
 ;CHECK: ld4.16b { v0, v1, v2, v3 }, [x0], x{{[0-9]+}}
   %ld4 = tail call { <16 x i8>, <16 x i8>, <16 x i8>, <16 x i8> } @llvm.aarch64.neon.ld4.v16i8.p0i8(i8* %A)
-  %tmp = getelementptr i8* %A, i64 %inc
+  %tmp = getelementptr i8, i8* %A, i64 %inc
   store i8* %tmp, i8** %ptr
   ret { <16 x i8>, <16 x i8>, <16 x i8>, <16 x i8> } %ld4
 }
@@ -1141,7 +1141,7 @@ define { <8 x i8>, <8 x i8>, <8 x i8>, <
 ;CHECK-LABEL: test_v8i8_post_imm_ld4:
 ;CHECK: ld4.8b { v0, v1, v2, v3 }, [x0], #32
   %ld4 = tail call { <8 x i8>, <8 x i8>, <8 x i8>, <8 x i8> } @llvm.aarch64.neon.ld4.v8i8.p0i8(i8* %A)
-  %tmp = getelementptr i8* %A, i32 32
+  %tmp = getelementptr i8, i8* %A, i32 32
   store i8* %tmp, i8** %ptr
   ret { <8 x i8>, <8 x i8>, <8 x i8>, <8 x i8> } %ld4
 }
@@ -1150,7 +1150,7 @@ define { <8 x i8>, <8 x i8>, <8 x i8>, <
 ;CHECK-LABEL: test_v8i8_post_reg_ld4:
 ;CHECK: ld4.8b { v0, v1, v2, v3 }, [x0], x{{[0-9]+}}
   %ld4 = tail call { <8 x i8>, <8 x i8>, <8 x i8>, <8 x i8> } @llvm.aarch64.neon.ld4.v8i8.p0i8(i8* %A)
-  %tmp = getelementptr i8* %A, i64 %inc
+  %tmp = getelementptr i8, i8* %A, i64 %inc
   store i8* %tmp, i8** %ptr
   ret { <8 x i8>, <8 x i8>, <8 x i8>, <8 x i8> } %ld4
 }
@@ -1162,7 +1162,7 @@ define { <8 x i16>, <8 x i16>, <8 x i16>
 ;CHECK-LABEL: test_v8i16_post_imm_ld4:
 ;CHECK: ld4.8h { v0, v1, v2, v3 }, [x0], #64
   %ld4 = tail call { <8 x i16>, <8 x i16>, <8 x i16>, <8 x i16> } @llvm.aarch64.neon.ld4.v8i16.p0i16(i16* %A)
-  %tmp = getelementptr i16* %A, i32 32
+  %tmp = getelementptr i16, i16* %A, i32 32
   store i16* %tmp, i16** %ptr
   ret { <8 x i16>, <8 x i16>, <8 x i16>, <8 x i16> } %ld4
 }
@@ -1171,7 +1171,7 @@ define { <8 x i16>, <8 x i16>, <8 x i16>
 ;CHECK-LABEL: test_v8i16_post_reg_ld4:
 ;CHECK: ld4.8h { v0, v1, v2, v3 }, [x0], x{{[0-9]+}}
   %ld4 = tail call { <8 x i16>, <8 x i16>, <8 x i16>, <8 x i16> } @llvm.aarch64.neon.ld4.v8i16.p0i16(i16* %A)
-  %tmp = getelementptr i16* %A, i64 %inc
+  %tmp = getelementptr i16, i16* %A, i64 %inc
   store i16* %tmp, i16** %ptr
   ret { <8 x i16>, <8 x i16>, <8 x i16>, <8 x i16> } %ld4
 }
@@ -1183,7 +1183,7 @@ define { <4 x i16>, <4 x i16>, <4 x i16>
 ;CHECK-LABEL: test_v4i16_post_imm_ld4:
 ;CHECK: ld4.4h { v0, v1, v2, v3 }, [x0], #32
   %ld4 = tail call { <4 x i16>, <4 x i16>, <4 x i16>, <4 x i16> } @llvm.aarch64.neon.ld4.v4i16.p0i16(i16* %A)
-  %tmp = getelementptr i16* %A, i32 16
+  %tmp = getelementptr i16, i16* %A, i32 16
   store i16* %tmp, i16** %ptr
   ret { <4 x i16>, <4 x i16>, <4 x i16>, <4 x i16> } %ld4
 }
@@ -1192,7 +1192,7 @@ define { <4 x i16>, <4 x i16>, <4 x i16>
 ;CHECK-LABEL: test_v4i16_post_reg_ld4:
 ;CHECK: ld4.4h { v0, v1, v2, v3 }, [x0], x{{[0-9]+}}
   %ld4 = tail call { <4 x i16>, <4 x i16>, <4 x i16>, <4 x i16> } @llvm.aarch64.neon.ld4.v4i16.p0i16(i16* %A)
-  %tmp = getelementptr i16* %A, i64 %inc
+  %tmp = getelementptr i16, i16* %A, i64 %inc
   store i16* %tmp, i16** %ptr
   ret { <4 x i16>, <4 x i16>, <4 x i16>, <4 x i16> } %ld4
 }
@@ -1204,7 +1204,7 @@ define { <4 x i32>, <4 x i32>, <4 x i32>
 ;CHECK-LABEL: test_v4i32_post_imm_ld4:
 ;CHECK: ld4.4s { v0, v1, v2, v3 }, [x0], #64
   %ld4 = tail call { <4 x i32>, <4 x i32>, <4 x i32>, <4 x i32> } @llvm.aarch64.neon.ld4.v4i32.p0i32(i32* %A)
-  %tmp = getelementptr i32* %A, i32 16
+  %tmp = getelementptr i32, i32* %A, i32 16
   store i32* %tmp, i32** %ptr
   ret { <4 x i32>, <4 x i32>, <4 x i32>, <4 x i32> } %ld4
 }
@@ -1213,7 +1213,7 @@ define { <4 x i32>, <4 x i32>, <4 x i32>
 ;CHECK-LABEL: test_v4i32_post_reg_ld4:
 ;CHECK: ld4.4s { v0, v1, v2, v3 }, [x0], x{{[0-9]+}}
   %ld4 = tail call { <4 x i32>, <4 x i32>, <4 x i32>, <4 x i32> } @llvm.aarch64.neon.ld4.v4i32.p0i32(i32* %A)
-  %tmp = getelementptr i32* %A, i64 %inc
+  %tmp = getelementptr i32, i32* %A, i64 %inc
   store i32* %tmp, i32** %ptr
   ret { <4 x i32>, <4 x i32>, <4 x i32>, <4 x i32> } %ld4
 }
@@ -1225,7 +1225,7 @@ define { <2 x i32>, <2 x i32>, <2 x i32>
 ;CHECK-LABEL: test_v2i32_post_imm_ld4:
 ;CHECK: ld4.2s { v0, v1, v2, v3 }, [x0], #32
   %ld4 = tail call { <2 x i32>, <2 x i32>, <2 x i32>, <2 x i32> } @llvm.aarch64.neon.ld4.v2i32.p0i32(i32* %A)
-  %tmp = getelementptr i32* %A, i32 8
+  %tmp = getelementptr i32, i32* %A, i32 8
   store i32* %tmp, i32** %ptr
   ret { <2 x i32>, <2 x i32>, <2 x i32>, <2 x i32> } %ld4
 }
@@ -1234,7 +1234,7 @@ define { <2 x i32>, <2 x i32>, <2 x i32>
 ;CHECK-LABEL: test_v2i32_post_reg_ld4:
 ;CHECK: ld4.2s { v0, v1, v2, v3 }, [x0], x{{[0-9]+}}
   %ld4 = tail call { <2 x i32>, <2 x i32>, <2 x i32>, <2 x i32> } @llvm.aarch64.neon.ld4.v2i32.p0i32(i32* %A)
-  %tmp = getelementptr i32* %A, i64 %inc
+  %tmp = getelementptr i32, i32* %A, i64 %inc
   store i32* %tmp, i32** %ptr
   ret { <2 x i32>, <2 x i32>, <2 x i32>, <2 x i32> } %ld4
 }
@@ -1246,7 +1246,7 @@ define { <2 x i64>, <2 x i64>, <2 x i64>
 ;CHECK-LABEL: test_v2i64_post_imm_ld4:
 ;CHECK: ld4.2d { v0, v1, v2, v3 }, [x0], #64
   %ld4 = tail call { <2 x i64>, <2 x i64>, <2 x i64>, <2 x i64> } @llvm.aarch64.neon.ld4.v2i64.p0i64(i64* %A)
-  %tmp = getelementptr i64* %A, i32 8
+  %tmp = getelementptr i64, i64* %A, i32 8
   store i64* %tmp, i64** %ptr
   ret { <2 x i64>, <2 x i64>, <2 x i64>, <2 x i64> } %ld4
 }
@@ -1255,7 +1255,7 @@ define { <2 x i64>, <2 x i64>, <2 x i64>
 ;CHECK-LABEL: test_v2i64_post_reg_ld4:
 ;CHECK: ld4.2d { v0, v1, v2, v3 }, [x0], x{{[0-9]+}}
   %ld4 = tail call { <2 x i64>, <2 x i64>, <2 x i64>, <2 x i64> } @llvm.aarch64.neon.ld4.v2i64.p0i64(i64* %A)
-  %tmp = getelementptr i64* %A, i64 %inc
+  %tmp = getelementptr i64, i64* %A, i64 %inc
   store i64* %tmp, i64** %ptr
   ret { <2 x i64>, <2 x i64>, <2 x i64>, <2 x i64> } %ld4
 }
@@ -1267,7 +1267,7 @@ define { <1 x i64>, <1 x i64>, <1 x i64>
 ;CHECK-LABEL: test_v1i64_post_imm_ld4:
 ;CHECK: ld1.1d { v0, v1, v2, v3 }, [x0], #32
   %ld4 = tail call { <1 x i64>, <1 x i64>, <1 x i64>, <1 x i64> } @llvm.aarch64.neon.ld4.v1i64.p0i64(i64* %A)
-  %tmp = getelementptr i64* %A, i32 4
+  %tmp = getelementptr i64, i64* %A, i32 4
   store i64* %tmp, i64** %ptr
   ret { <1 x i64>, <1 x i64>, <1 x i64>, <1 x i64> } %ld4
 }
@@ -1276,7 +1276,7 @@ define { <1 x i64>, <1 x i64>, <1 x i64>
 ;CHECK-LABEL: test_v1i64_post_reg_ld4:
 ;CHECK: ld1.1d { v0, v1, v2, v3 }, [x0], x{{[0-9]+}}
   %ld4 = tail call { <1 x i64>, <1 x i64>, <1 x i64>, <1 x i64> } @llvm.aarch64.neon.ld4.v1i64.p0i64(i64* %A)
-  %tmp = getelementptr i64* %A, i64 %inc
+  %tmp = getelementptr i64, i64* %A, i64 %inc
   store i64* %tmp, i64** %ptr
   ret { <1 x i64>, <1 x i64>, <1 x i64>, <1 x i64> } %ld4
 }
@@ -1288,7 +1288,7 @@ define { <4 x float>, <4 x float>, <4 x
 ;CHECK-LABEL: test_v4f32_post_imm_ld4:
 ;CHECK: ld4.4s { v0, v1, v2, v3 }, [x0], #64
   %ld4 = tail call { <4 x float>, <4 x float>, <4 x float>, <4 x float> } @llvm.aarch64.neon.ld4.v4f32.p0f32(float* %A)
-  %tmp = getelementptr float* %A, i32 16
+  %tmp = getelementptr float, float* %A, i32 16
   store float* %tmp, float** %ptr
   ret { <4 x float>, <4 x float>, <4 x float>, <4 x float> } %ld4
 }
@@ -1297,7 +1297,7 @@ define { <4 x float>, <4 x float>, <4 x
 ;CHECK-LABEL: test_v4f32_post_reg_ld4:
 ;CHECK: ld4.4s { v0, v1, v2, v3 }, [x0], x{{[0-9]+}}
   %ld4 = tail call { <4 x float>, <4 x float>, <4 x float>, <4 x float> } @llvm.aarch64.neon.ld4.v4f32.p0f32(float* %A)
-  %tmp = getelementptr float* %A, i64 %inc
+  %tmp = getelementptr float, float* %A, i64 %inc
   store float* %tmp, float** %ptr
   ret { <4 x float>, <4 x float>, <4 x float>, <4 x float> } %ld4
 }
@@ -1309,7 +1309,7 @@ define { <2 x float>, <2 x float>, <2 x
 ;CHECK-LABEL: test_v2f32_post_imm_ld4:
 ;CHECK: ld4.2s { v0, v1, v2, v3 }, [x0], #32
   %ld4 = tail call { <2 x float>, <2 x float>, <2 x float>, <2 x float> } @llvm.aarch64.neon.ld4.v2f32.p0f32(float* %A)
-  %tmp = getelementptr float* %A, i32 8
+  %tmp = getelementptr float, float* %A, i32 8
   store float* %tmp, float** %ptr
   ret { <2 x float>, <2 x float>, <2 x float>, <2 x float> } %ld4
 }
@@ -1318,7 +1318,7 @@ define { <2 x float>, <2 x float>, <2 x
 ;CHECK-LABEL: test_v2f32_post_reg_ld4:
 ;CHECK: ld4.2s { v0, v1, v2, v3 }, [x0], x{{[0-9]+}}
   %ld4 = tail call { <2 x float>, <2 x float>, <2 x float>, <2 x float> } @llvm.aarch64.neon.ld4.v2f32.p0f32(float* %A)
-  %tmp = getelementptr float* %A, i64 %inc
+  %tmp = getelementptr float, float* %A, i64 %inc
   store float* %tmp, float** %ptr
   ret { <2 x float>, <2 x float>, <2 x float>, <2 x float> } %ld4
 }
@@ -1330,7 +1330,7 @@ define { <2 x double>, <2 x double>, <2
 ;CHECK-LABEL: test_v2f64_post_imm_ld4:
 ;CHECK: ld4.2d { v0, v1, v2, v3 }, [x0], #64
   %ld4 = tail call { <2 x double>, <2 x double>, <2 x double>, <2 x double> } @llvm.aarch64.neon.ld4.v2f64.p0f64(double* %A)
-  %tmp = getelementptr double* %A, i32 8
+  %tmp = getelementptr double, double* %A, i32 8
   store double* %tmp, double** %ptr
   ret { <2 x double>, <2 x double>, <2 x double>, <2 x double> } %ld4
 }
@@ -1339,7 +1339,7 @@ define { <2 x double>, <2 x double>, <2
 ;CHECK-LABEL: test_v2f64_post_reg_ld4:
 ;CHECK: ld4.2d { v0, v1, v2, v3 }, [x0], x{{[0-9]+}}
   %ld4 = tail call { <2 x double>, <2 x double>, <2 x double>, <2 x double> } @llvm.aarch64.neon.ld4.v2f64.p0f64(double* %A)
-  %tmp = getelementptr double* %A, i64 %inc
+  %tmp = getelementptr double, double* %A, i64 %inc
   store double* %tmp, double** %ptr
   ret { <2 x double>, <2 x double>, <2 x double>, <2 x double> } %ld4
 }
@@ -1351,7 +1351,7 @@ define { <1 x double>, <1 x double>, <1
 ;CHECK-LABEL: test_v1f64_post_imm_ld4:
 ;CHECK: ld1.1d { v0, v1, v2, v3 }, [x0], #32
   %ld4 = tail call { <1 x double>, <1 x double>, <1 x double>, <1 x double> } @llvm.aarch64.neon.ld4.v1f64.p0f64(double* %A)
-  %tmp = getelementptr double* %A, i32 4
+  %tmp = getelementptr double, double* %A, i32 4
   store double* %tmp, double** %ptr
   ret { <1 x double>, <1 x double>, <1 x double>, <1 x double> } %ld4
 }
@@ -1360,7 +1360,7 @@ define { <1 x double>, <1 x double>, <1
 ;CHECK-LABEL: test_v1f64_post_reg_ld4:
 ;CHECK: ld1.1d { v0, v1, v2, v3 }, [x0], x{{[0-9]+}}
   %ld4 = tail call { <1 x double>, <1 x double>, <1 x double>, <1 x double> } @llvm.aarch64.neon.ld4.v1f64.p0f64(double* %A)
-  %tmp = getelementptr double* %A, i64 %inc
+  %tmp = getelementptr double, double* %A, i64 %inc
   store double* %tmp, double** %ptr
   ret { <1 x double>, <1 x double>, <1 x double>, <1 x double> } %ld4
 }
@@ -1371,7 +1371,7 @@ define { <16 x i8>, <16 x i8> } @test_v1
 ;CHECK-LABEL: test_v16i8_post_imm_ld1x2:
 ;CHECK: ld1.16b { v0, v1 }, [x0], #32
   %ld1x2 = tail call { <16 x i8>, <16 x i8> } @llvm.aarch64.neon.ld1x2.v16i8.p0i8(i8* %A)
-  %tmp = getelementptr i8* %A, i32 32
+  %tmp = getelementptr i8, i8* %A, i32 32
   store i8* %tmp, i8** %ptr
   ret { <16 x i8>, <16 x i8> } %ld1x2
 }
@@ -1380,7 +1380,7 @@ define { <16 x i8>, <16 x i8> } @test_v1
 ;CHECK-LABEL: test_v16i8_post_reg_ld1x2:
 ;CHECK: ld1.16b { v0, v1 }, [x0], x{{[0-9]+}}
   %ld1x2 = tail call { <16 x i8>, <16 x i8> } @llvm.aarch64.neon.ld1x2.v16i8.p0i8(i8* %A)
-  %tmp = getelementptr i8* %A, i64 %inc
+  %tmp = getelementptr i8, i8* %A, i64 %inc
   store i8* %tmp, i8** %ptr
   ret { <16 x i8>, <16 x i8> } %ld1x2
 }
@@ -1392,7 +1392,7 @@ define { <8 x i8>, <8 x i8> } @test_v8i8
 ;CHECK-LABEL: test_v8i8_post_imm_ld1x2:
 ;CHECK: ld1.8b { v0, v1 }, [x0], #16
   %ld1x2 = tail call { <8 x i8>, <8 x i8> } @llvm.aarch64.neon.ld1x2.v8i8.p0i8(i8* %A)
-  %tmp = getelementptr i8* %A, i32 16
+  %tmp = getelementptr i8, i8* %A, i32 16
   store i8* %tmp, i8** %ptr
   ret { <8 x i8>, <8 x i8> } %ld1x2
 }
@@ -1401,7 +1401,7 @@ define { <8 x i8>, <8 x i8> } @test_v8i8
 ;CHECK-LABEL: test_v8i8_post_reg_ld1x2:
 ;CHECK: ld1.8b { v0, v1 }, [x0], x{{[0-9]+}}
   %ld1x2 = tail call { <8 x i8>, <8 x i8> } @llvm.aarch64.neon.ld1x2.v8i8.p0i8(i8* %A)
-  %tmp = getelementptr i8* %A, i64 %inc
+  %tmp = getelementptr i8, i8* %A, i64 %inc
   store i8* %tmp, i8** %ptr
   ret { <8 x i8>, <8 x i8> } %ld1x2
 }
@@ -1413,7 +1413,7 @@ define { <8 x i16>, <8 x i16> } @test_v8
 ;CHECK-LABEL: test_v8i16_post_imm_ld1x2:
 ;CHECK: ld1.8h { v0, v1 }, [x0], #32
   %ld1x2 = tail call { <8 x i16>, <8 x i16> } @llvm.aarch64.neon.ld1x2.v8i16.p0i16(i16* %A)
-  %tmp = getelementptr i16* %A, i32 16
+  %tmp = getelementptr i16, i16* %A, i32 16
   store i16* %tmp, i16** %ptr
   ret { <8 x i16>, <8 x i16> } %ld1x2
 }
@@ -1422,7 +1422,7 @@ define { <8 x i16>, <8 x i16> } @test_v8
 ;CHECK-LABEL: test_v8i16_post_reg_ld1x2:
 ;CHECK: ld1.8h { v0, v1 }, [x0], x{{[0-9]+}}
   %ld1x2 = tail call { <8 x i16>, <8 x i16> } @llvm.aarch64.neon.ld1x2.v8i16.p0i16(i16* %A)
-  %tmp = getelementptr i16* %A, i64 %inc
+  %tmp = getelementptr i16, i16* %A, i64 %inc
   store i16* %tmp, i16** %ptr
   ret { <8 x i16>, <8 x i16> } %ld1x2
 }
@@ -1434,7 +1434,7 @@ define { <4 x i16>, <4 x i16> } @test_v4
 ;CHECK-LABEL: test_v4i16_post_imm_ld1x2:
 ;CHECK: ld1.4h { v0, v1 }, [x0], #16
   %ld1x2 = tail call { <4 x i16>, <4 x i16> } @llvm.aarch64.neon.ld1x2.v4i16.p0i16(i16* %A)
-  %tmp = getelementptr i16* %A, i32 8
+  %tmp = getelementptr i16, i16* %A, i32 8
   store i16* %tmp, i16** %ptr
   ret { <4 x i16>, <4 x i16> } %ld1x2
 }
@@ -1443,7 +1443,7 @@ define { <4 x i16>, <4 x i16> } @test_v4
 ;CHECK-LABEL: test_v4i16_post_reg_ld1x2:
 ;CHECK: ld1.4h { v0, v1 }, [x0], x{{[0-9]+}}
   %ld1x2 = tail call { <4 x i16>, <4 x i16> } @llvm.aarch64.neon.ld1x2.v4i16.p0i16(i16* %A)
-  %tmp = getelementptr i16* %A, i64 %inc
+  %tmp = getelementptr i16, i16* %A, i64 %inc
   store i16* %tmp, i16** %ptr
   ret { <4 x i16>, <4 x i16> } %ld1x2
 }
@@ -1455,7 +1455,7 @@ define { <4 x i32>, <4 x i32> } @test_v4
 ;CHECK-LABEL: test_v4i32_post_imm_ld1x2:
 ;CHECK: ld1.4s { v0, v1 }, [x0], #32
   %ld1x2 = tail call { <4 x i32>, <4 x i32> } @llvm.aarch64.neon.ld1x2.v4i32.p0i32(i32* %A)
-  %tmp = getelementptr i32* %A, i32 8
+  %tmp = getelementptr i32, i32* %A, i32 8
   store i32* %tmp, i32** %ptr
   ret { <4 x i32>, <4 x i32> } %ld1x2
 }
@@ -1464,7 +1464,7 @@ define { <4 x i32>, <4 x i32> } @test_v4
 ;CHECK-LABEL: test_v4i32_post_reg_ld1x2:
 ;CHECK: ld1.4s { v0, v1 }, [x0], x{{[0-9]+}}
   %ld1x2 = tail call { <4 x i32>, <4 x i32> } @llvm.aarch64.neon.ld1x2.v4i32.p0i32(i32* %A)
-  %tmp = getelementptr i32* %A, i64 %inc
+  %tmp = getelementptr i32, i32* %A, i64 %inc
   store i32* %tmp, i32** %ptr
   ret { <4 x i32>, <4 x i32> } %ld1x2
 }
@@ -1476,7 +1476,7 @@ define { <2 x i32>, <2 x i32> } @test_v2
 ;CHECK-LABEL: test_v2i32_post_imm_ld1x2:
 ;CHECK: ld1.2s { v0, v1 }, [x0], #16
   %ld1x2 = tail call { <2 x i32>, <2 x i32> } @llvm.aarch64.neon.ld1x2.v2i32.p0i32(i32* %A)
-  %tmp = getelementptr i32* %A, i32 4
+  %tmp = getelementptr i32, i32* %A, i32 4
   store i32* %tmp, i32** %ptr
   ret { <2 x i32>, <2 x i32> } %ld1x2
 }
@@ -1485,7 +1485,7 @@ define { <2 x i32>, <2 x i32> } @test_v2
 ;CHECK-LABEL: test_v2i32_post_reg_ld1x2:
 ;CHECK: ld1.2s { v0, v1 }, [x0], x{{[0-9]+}}
   %ld1x2 = tail call { <2 x i32>, <2 x i32> } @llvm.aarch64.neon.ld1x2.v2i32.p0i32(i32* %A)
-  %tmp = getelementptr i32* %A, i64 %inc
+  %tmp = getelementptr i32, i32* %A, i64 %inc
   store i32* %tmp, i32** %ptr
   ret { <2 x i32>, <2 x i32> } %ld1x2
 }
@@ -1497,7 +1497,7 @@ define { <2 x i64>, <2 x i64> } @test_v2
 ;CHECK-LABEL: test_v2i64_post_imm_ld1x2:
 ;CHECK: ld1.2d { v0, v1 }, [x0], #32
   %ld1x2 = tail call { <2 x i64>, <2 x i64> } @llvm.aarch64.neon.ld1x2.v2i64.p0i64(i64* %A)
-  %tmp = getelementptr i64* %A, i32 4
+  %tmp = getelementptr i64, i64* %A, i32 4
   store i64* %tmp, i64** %ptr
   ret { <2 x i64>, <2 x i64> } %ld1x2
 }
@@ -1506,7 +1506,7 @@ define { <2 x i64>, <2 x i64> } @test_v2
 ;CHECK-LABEL: test_v2i64_post_reg_ld1x2:
 ;CHECK: ld1.2d { v0, v1 }, [x0], x{{[0-9]+}}
   %ld1x2 = tail call { <2 x i64>, <2 x i64> } @llvm.aarch64.neon.ld1x2.v2i64.p0i64(i64* %A)
-  %tmp = getelementptr i64* %A, i64 %inc
+  %tmp = getelementptr i64, i64* %A, i64 %inc
   store i64* %tmp, i64** %ptr
   ret { <2 x i64>, <2 x i64> } %ld1x2
 }
@@ -1518,7 +1518,7 @@ define { <1 x i64>, <1 x i64> } @test_v1
 ;CHECK-LABEL: test_v1i64_post_imm_ld1x2:
 ;CHECK: ld1.1d { v0, v1 }, [x0], #16
   %ld1x2 = tail call { <1 x i64>, <1 x i64> } @llvm.aarch64.neon.ld1x2.v1i64.p0i64(i64* %A)
-  %tmp = getelementptr i64* %A, i32 2
+  %tmp = getelementptr i64, i64* %A, i32 2
   store i64* %tmp, i64** %ptr
   ret { <1 x i64>, <1 x i64> } %ld1x2
 }
@@ -1527,7 +1527,7 @@ define { <1 x i64>, <1 x i64> } @test_v1
 ;CHECK-LABEL: test_v1i64_post_reg_ld1x2:
 ;CHECK: ld1.1d { v0, v1 }, [x0], x{{[0-9]+}}
   %ld1x2 = tail call { <1 x i64>, <1 x i64> } @llvm.aarch64.neon.ld1x2.v1i64.p0i64(i64* %A)
-  %tmp = getelementptr i64* %A, i64 %inc
+  %tmp = getelementptr i64, i64* %A, i64 %inc
   store i64* %tmp, i64** %ptr
   ret { <1 x i64>, <1 x i64> } %ld1x2
 }
@@ -1539,7 +1539,7 @@ define { <4 x float>, <4 x float> } @tes
 ;CHECK-LABEL: test_v4f32_post_imm_ld1x2:
 ;CHECK: ld1.4s { v0, v1 }, [x0], #32
   %ld1x2 = tail call { <4 x float>, <4 x float> } @llvm.aarch64.neon.ld1x2.v4f32.p0f32(float* %A)
-  %tmp = getelementptr float* %A, i32 8
+  %tmp = getelementptr float, float* %A, i32 8
   store float* %tmp, float** %ptr
   ret { <4 x float>, <4 x float> } %ld1x2
 }
@@ -1548,7 +1548,7 @@ define { <4 x float>, <4 x float> } @tes
 ;CHECK-LABEL: test_v4f32_post_reg_ld1x2:
 ;CHECK: ld1.4s { v0, v1 }, [x0], x{{[0-9]+}}
   %ld1x2 = tail call { <4 x float>, <4 x float> } @llvm.aarch64.neon.ld1x2.v4f32.p0f32(float* %A)
-  %tmp = getelementptr float* %A, i64 %inc
+  %tmp = getelementptr float, float* %A, i64 %inc
   store float* %tmp, float** %ptr
   ret { <4 x float>, <4 x float> } %ld1x2
 }
@@ -1560,7 +1560,7 @@ define { <2 x float>, <2 x float> } @tes
 ;CHECK-LABEL: test_v2f32_post_imm_ld1x2:
 ;CHECK: ld1.2s { v0, v1 }, [x0], #16
   %ld1x2 = tail call { <2 x float>, <2 x float> } @llvm.aarch64.neon.ld1x2.v2f32.p0f32(float* %A)
-  %tmp = getelementptr float* %A, i32 4
+  %tmp = getelementptr float, float* %A, i32 4
   store float* %tmp, float** %ptr
   ret { <2 x float>, <2 x float> } %ld1x2
 }
@@ -1569,7 +1569,7 @@ define { <2 x float>, <2 x float> } @tes
 ;CHECK-LABEL: test_v2f32_post_reg_ld1x2:
 ;CHECK: ld1.2s { v0, v1 }, [x0], x{{[0-9]+}}
   %ld1x2 = tail call { <2 x float>, <2 x float> } @llvm.aarch64.neon.ld1x2.v2f32.p0f32(float* %A)
-  %tmp = getelementptr float* %A, i64 %inc
+  %tmp = getelementptr float, float* %A, i64 %inc
   store float* %tmp, float** %ptr
   ret { <2 x float>, <2 x float> } %ld1x2
 }
@@ -1581,7 +1581,7 @@ define { <2 x double>, <2 x double> } @t
 ;CHECK-LABEL: test_v2f64_post_imm_ld1x2:
 ;CHECK: ld1.2d { v0, v1 }, [x0], #32
   %ld1x2 = tail call { <2 x double>, <2 x double> } @llvm.aarch64.neon.ld1x2.v2f64.p0f64(double* %A)
-  %tmp = getelementptr double* %A, i32 4
+  %tmp = getelementptr double, double* %A, i32 4
   store double* %tmp, double** %ptr
   ret { <2 x double>, <2 x double> } %ld1x2
 }
@@ -1590,7 +1590,7 @@ define { <2 x double>, <2 x double> } @t
 ;CHECK-LABEL: test_v2f64_post_reg_ld1x2:
 ;CHECK: ld1.2d { v0, v1 }, [x0], x{{[0-9]+}}
   %ld1x2 = tail call { <2 x double>, <2 x double> } @llvm.aarch64.neon.ld1x2.v2f64.p0f64(double* %A)
-  %tmp = getelementptr double* %A, i64 %inc
+  %tmp = getelementptr double, double* %A, i64 %inc
   store double* %tmp, double** %ptr
   ret { <2 x double>, <2 x double> } %ld1x2
 }
@@ -1602,7 +1602,7 @@ define { <1 x double>, <1 x double> } @t
 ;CHECK-LABEL: test_v1f64_post_imm_ld1x2:
 ;CHECK: ld1.1d { v0, v1 }, [x0], #16
   %ld1x2 = tail call { <1 x double>, <1 x double> } @llvm.aarch64.neon.ld1x2.v1f64.p0f64(double* %A)
-  %tmp = getelementptr double* %A, i32 2
+  %tmp = getelementptr double, double* %A, i32 2
   store double* %tmp, double** %ptr
   ret { <1 x double>, <1 x double> } %ld1x2
 }
@@ -1611,7 +1611,7 @@ define { <1 x double>, <1 x double> } @t
 ;CHECK-LABEL: test_v1f64_post_reg_ld1x2:
 ;CHECK: ld1.1d { v0, v1 }, [x0], x{{[0-9]+}}
   %ld1x2 = tail call { <1 x double>, <1 x double> } @llvm.aarch64.neon.ld1x2.v1f64.p0f64(double* %A)
-  %tmp = getelementptr double* %A, i64 %inc
+  %tmp = getelementptr double, double* %A, i64 %inc
   store double* %tmp, double** %ptr
   ret { <1 x double>, <1 x double> } %ld1x2
 }
@@ -1623,7 +1623,7 @@ define { <16 x i8>, <16 x i8>, <16 x i8>
 ;CHECK-LABEL: test_v16i8_post_imm_ld1x3:
 ;CHECK: ld1.16b { v0, v1, v2 }, [x0], #48
   %ld1x3 = tail call { <16 x i8>, <16 x i8>, <16 x i8> } @llvm.aarch64.neon.ld1x3.v16i8.p0i8(i8* %A)
-  %tmp = getelementptr i8* %A, i32 48
+  %tmp = getelementptr i8, i8* %A, i32 48
   store i8* %tmp, i8** %ptr
   ret { <16 x i8>, <16 x i8>, <16 x i8> } %ld1x3
 }
@@ -1632,7 +1632,7 @@ define { <16 x i8>, <16 x i8>, <16 x i8>
 ;CHECK-LABEL: test_v16i8_post_reg_ld1x3:
 ;CHECK: ld1.16b { v0, v1, v2 }, [x0], x{{[0-9]+}}
   %ld1x3 = tail call { <16 x i8>, <16 x i8>, <16 x i8> } @llvm.aarch64.neon.ld1x3.v16i8.p0i8(i8* %A)
-  %tmp = getelementptr i8* %A, i64 %inc
+  %tmp = getelementptr i8, i8* %A, i64 %inc
   store i8* %tmp, i8** %ptr
   ret { <16 x i8>, <16 x i8>, <16 x i8> } %ld1x3
 }
@@ -1644,7 +1644,7 @@ define { <8 x i8>, <8 x i8>, <8 x i8> }
 ;CHECK-LABEL: test_v8i8_post_imm_ld1x3:
 ;CHECK: ld1.8b { v0, v1, v2 }, [x0], #24
   %ld1x3 = tail call { <8 x i8>, <8 x i8>, <8 x i8> } @llvm.aarch64.neon.ld1x3.v8i8.p0i8(i8* %A)
-  %tmp = getelementptr i8* %A, i32 24
+  %tmp = getelementptr i8, i8* %A, i32 24
   store i8* %tmp, i8** %ptr
   ret { <8 x i8>, <8 x i8>, <8 x i8> } %ld1x3
 }
@@ -1653,7 +1653,7 @@ define { <8 x i8>, <8 x i8>, <8 x i8> }
 ;CHECK-LABEL: test_v8i8_post_reg_ld1x3:
 ;CHECK: ld1.8b { v0, v1, v2 }, [x0], x{{[0-9]+}}
   %ld1x3 = tail call { <8 x i8>, <8 x i8>, <8 x i8> } @llvm.aarch64.neon.ld1x3.v8i8.p0i8(i8* %A)
-  %tmp = getelementptr i8* %A, i64 %inc
+  %tmp = getelementptr i8, i8* %A, i64 %inc
   store i8* %tmp, i8** %ptr
   ret { <8 x i8>, <8 x i8>, <8 x i8> } %ld1x3
 }
@@ -1665,7 +1665,7 @@ define { <8 x i16>, <8 x i16>, <8 x i16>
 ;CHECK-LABEL: test_v8i16_post_imm_ld1x3:
 ;CHECK: ld1.8h { v0, v1, v2 }, [x0], #48
   %ld1x3 = tail call { <8 x i16>, <8 x i16>, <8 x i16> } @llvm.aarch64.neon.ld1x3.v8i16.p0i16(i16* %A)
-  %tmp = getelementptr i16* %A, i32 24
+  %tmp = getelementptr i16, i16* %A, i32 24
   store i16* %tmp, i16** %ptr
   ret { <8 x i16>, <8 x i16>, <8 x i16> } %ld1x3
 }
@@ -1674,7 +1674,7 @@ define { <8 x i16>, <8 x i16>, <8 x i16>
 ;CHECK-LABEL: test_v8i16_post_reg_ld1x3:
 ;CHECK: ld1.8h { v0, v1, v2 }, [x0], x{{[0-9]+}}
   %ld1x3 = tail call { <8 x i16>, <8 x i16>, <8 x i16> } @llvm.aarch64.neon.ld1x3.v8i16.p0i16(i16* %A)
-  %tmp = getelementptr i16* %A, i64 %inc
+  %tmp = getelementptr i16, i16* %A, i64 %inc
   store i16* %tmp, i16** %ptr
   ret { <8 x i16>, <8 x i16>, <8 x i16> } %ld1x3
 }
@@ -1686,7 +1686,7 @@ define { <4 x i16>, <4 x i16>, <4 x i16>
 ;CHECK-LABEL: test_v4i16_post_imm_ld1x3:
 ;CHECK: ld1.4h { v0, v1, v2 }, [x0], #24
   %ld1x3 = tail call { <4 x i16>, <4 x i16>, <4 x i16> } @llvm.aarch64.neon.ld1x3.v4i16.p0i16(i16* %A)
-  %tmp = getelementptr i16* %A, i32 12
+  %tmp = getelementptr i16, i16* %A, i32 12
   store i16* %tmp, i16** %ptr
   ret { <4 x i16>, <4 x i16>, <4 x i16> } %ld1x3
 }
@@ -1695,7 +1695,7 @@ define { <4 x i16>, <4 x i16>, <4 x i16>
 ;CHECK-LABEL: test_v4i16_post_reg_ld1x3:
 ;CHECK: ld1.4h { v0, v1, v2 }, [x0], x{{[0-9]+}}
   %ld1x3 = tail call { <4 x i16>, <4 x i16>, <4 x i16> } @llvm.aarch64.neon.ld1x3.v4i16.p0i16(i16* %A)
-  %tmp = getelementptr i16* %A, i64 %inc
+  %tmp = getelementptr i16, i16* %A, i64 %inc
   store i16* %tmp, i16** %ptr
   ret { <4 x i16>, <4 x i16>, <4 x i16> } %ld1x3
 }
@@ -1707,7 +1707,7 @@ define { <4 x i32>, <4 x i32>, <4 x i32>
 ;CHECK-LABEL: test_v4i32_post_imm_ld1x3:
 ;CHECK: ld1.4s { v0, v1, v2 }, [x0], #48
   %ld1x3 = tail call { <4 x i32>, <4 x i32>, <4 x i32> } @llvm.aarch64.neon.ld1x3.v4i32.p0i32(i32* %A)
-  %tmp = getelementptr i32* %A, i32 12
+  %tmp = getelementptr i32, i32* %A, i32 12
   store i32* %tmp, i32** %ptr
   ret { <4 x i32>, <4 x i32>, <4 x i32> } %ld1x3
 }
@@ -1716,7 +1716,7 @@ define { <4 x i32>, <4 x i32>, <4 x i32>
 ;CHECK-LABEL: test_v4i32_post_reg_ld1x3:
 ;CHECK: ld1.4s { v0, v1, v2 }, [x0], x{{[0-9]+}}
   %ld1x3 = tail call { <4 x i32>, <4 x i32>, <4 x i32> } @llvm.aarch64.neon.ld1x3.v4i32.p0i32(i32* %A)
-  %tmp = getelementptr i32* %A, i64 %inc
+  %tmp = getelementptr i32, i32* %A, i64 %inc
   store i32* %tmp, i32** %ptr
   ret { <4 x i32>, <4 x i32>, <4 x i32> } %ld1x3
 }
@@ -1728,7 +1728,7 @@ define { <2 x i32>, <2 x i32>, <2 x i32>
 ;CHECK-LABEL: test_v2i32_post_imm_ld1x3:
 ;CHECK: ld1.2s { v0, v1, v2 }, [x0], #24
   %ld1x3 = tail call { <2 x i32>, <2 x i32>, <2 x i32> } @llvm.aarch64.neon.ld1x3.v2i32.p0i32(i32* %A)
-  %tmp = getelementptr i32* %A, i32 6
+  %tmp = getelementptr i32, i32* %A, i32 6
   store i32* %tmp, i32** %ptr
   ret { <2 x i32>, <2 x i32>, <2 x i32> } %ld1x3
 }
@@ -1737,7 +1737,7 @@ define { <2 x i32>, <2 x i32>, <2 x i32>
 ;CHECK-LABEL: test_v2i32_post_reg_ld1x3:
 ;CHECK: ld1.2s { v0, v1, v2 }, [x0], x{{[0-9]+}}
   %ld1x3 = tail call { <2 x i32>, <2 x i32>, <2 x i32> } @llvm.aarch64.neon.ld1x3.v2i32.p0i32(i32* %A)
-  %tmp = getelementptr i32* %A, i64 %inc
+  %tmp = getelementptr i32, i32* %A, i64 %inc
   store i32* %tmp, i32** %ptr
   ret { <2 x i32>, <2 x i32>, <2 x i32> } %ld1x3
 }
@@ -1749,7 +1749,7 @@ define { <2 x i64>, <2 x i64>, <2 x i64>
 ;CHECK-LABEL: test_v2i64_post_imm_ld1x3:
 ;CHECK: ld1.2d { v0, v1, v2 }, [x0], #48
   %ld1x3 = tail call { <2 x i64>, <2 x i64>, <2 x i64> } @llvm.aarch64.neon.ld1x3.v2i64.p0i64(i64* %A)
-  %tmp = getelementptr i64* %A, i32 6
+  %tmp = getelementptr i64, i64* %A, i32 6
   store i64* %tmp, i64** %ptr
   ret { <2 x i64>, <2 x i64>, <2 x i64> } %ld1x3
 }
@@ -1758,7 +1758,7 @@ define { <2 x i64>, <2 x i64>, <2 x i64>
 ;CHECK-LABEL: test_v2i64_post_reg_ld1x3:
 ;CHECK: ld1.2d { v0, v1, v2 }, [x0], x{{[0-9]+}}
   %ld1x3 = tail call { <2 x i64>, <2 x i64>, <2 x i64> } @llvm.aarch64.neon.ld1x3.v2i64.p0i64(i64* %A)
-  %tmp = getelementptr i64* %A, i64 %inc
+  %tmp = getelementptr i64, i64* %A, i64 %inc
   store i64* %tmp, i64** %ptr
   ret { <2 x i64>, <2 x i64>, <2 x i64> } %ld1x3
 }
@@ -1770,7 +1770,7 @@ define { <1 x i64>, <1 x i64>, <1 x i64>
 ;CHECK-LABEL: test_v1i64_post_imm_ld1x3:
 ;CHECK: ld1.1d { v0, v1, v2 }, [x0], #24
   %ld1x3 = tail call { <1 x i64>, <1 x i64>, <1 x i64> } @llvm.aarch64.neon.ld1x3.v1i64.p0i64(i64* %A)
-  %tmp = getelementptr i64* %A, i32 3
+  %tmp = getelementptr i64, i64* %A, i32 3
   store i64* %tmp, i64** %ptr
   ret { <1 x i64>, <1 x i64>, <1 x i64> } %ld1x3
 }
@@ -1779,7 +1779,7 @@ define { <1 x i64>, <1 x i64>, <1 x i64>
 ;CHECK-LABEL: test_v1i64_post_reg_ld1x3:
 ;CHECK: ld1.1d { v0, v1, v2 }, [x0], x{{[0-9]+}}
   %ld1x3 = tail call { <1 x i64>, <1 x i64>, <1 x i64> } @llvm.aarch64.neon.ld1x3.v1i64.p0i64(i64* %A)
-  %tmp = getelementptr i64* %A, i64 %inc
+  %tmp = getelementptr i64, i64* %A, i64 %inc
   store i64* %tmp, i64** %ptr
   ret { <1 x i64>, <1 x i64>, <1 x i64> } %ld1x3
 }
@@ -1791,7 +1791,7 @@ define { <4 x float>, <4 x float>, <4 x
 ;CHECK-LABEL: test_v4f32_post_imm_ld1x3:
 ;CHECK: ld1.4s { v0, v1, v2 }, [x0], #48
   %ld1x3 = tail call { <4 x float>, <4 x float>, <4 x float> } @llvm.aarch64.neon.ld1x3.v4f32.p0f32(float* %A)
-  %tmp = getelementptr float* %A, i32 12
+  %tmp = getelementptr float, float* %A, i32 12
   store float* %tmp, float** %ptr
   ret { <4 x float>, <4 x float>, <4 x float> } %ld1x3
 }
@@ -1800,7 +1800,7 @@ define { <4 x float>, <4 x float>, <4 x
 ;CHECK-LABEL: test_v4f32_post_reg_ld1x3:
 ;CHECK: ld1.4s { v0, v1, v2 }, [x0], x{{[0-9]+}}
   %ld1x3 = tail call { <4 x float>, <4 x float>, <4 x float> } @llvm.aarch64.neon.ld1x3.v4f32.p0f32(float* %A)
-  %tmp = getelementptr float* %A, i64 %inc
+  %tmp = getelementptr float, float* %A, i64 %inc
   store float* %tmp, float** %ptr
   ret { <4 x float>, <4 x float>, <4 x float> } %ld1x3
 }
@@ -1812,7 +1812,7 @@ define { <2 x float>, <2 x float>, <2 x
 ;CHECK-LABEL: test_v2f32_post_imm_ld1x3:
 ;CHECK: ld1.2s { v0, v1, v2 }, [x0], #24
   %ld1x3 = tail call { <2 x float>, <2 x float>, <2 x float> } @llvm.aarch64.neon.ld1x3.v2f32.p0f32(float* %A)
-  %tmp = getelementptr float* %A, i32 6
+  %tmp = getelementptr float, float* %A, i32 6
   store float* %tmp, float** %ptr
   ret { <2 x float>, <2 x float>, <2 x float> } %ld1x3
 }
@@ -1821,7 +1821,7 @@ define { <2 x float>, <2 x float>, <2 x
 ;CHECK-LABEL: test_v2f32_post_reg_ld1x3:
 ;CHECK: ld1.2s { v0, v1, v2 }, [x0], x{{[0-9]+}}
   %ld1x3 = tail call { <2 x float>, <2 x float>, <2 x float> } @llvm.aarch64.neon.ld1x3.v2f32.p0f32(float* %A)
-  %tmp = getelementptr float* %A, i64 %inc
+  %tmp = getelementptr float, float* %A, i64 %inc
   store float* %tmp, float** %ptr
   ret { <2 x float>, <2 x float>, <2 x float> } %ld1x3
 }
@@ -1833,7 +1833,7 @@ define { <2 x double>, <2 x double>, <2
 ;CHECK-LABEL: test_v2f64_post_imm_ld1x3:
 ;CHECK: ld1.2d { v0, v1, v2 }, [x0], #48
   %ld1x3 = tail call { <2 x double>, <2 x double>, <2 x double> } @llvm.aarch64.neon.ld1x3.v2f64.p0f64(double* %A)
-  %tmp = getelementptr double* %A, i32 6
+  %tmp = getelementptr double, double* %A, i32 6
   store double* %tmp, double** %ptr
   ret { <2 x double>, <2 x double>, <2 x double> } %ld1x3
 }
@@ -1842,7 +1842,7 @@ define { <2 x double>, <2 x double>, <2
 ;CHECK-LABEL: test_v2f64_post_reg_ld1x3:
 ;CHECK: ld1.2d { v0, v1, v2 }, [x0], x{{[0-9]+}}
   %ld1x3 = tail call { <2 x double>, <2 x double>, <2 x double> } @llvm.aarch64.neon.ld1x3.v2f64.p0f64(double* %A)
-  %tmp = getelementptr double* %A, i64 %inc
+  %tmp = getelementptr double, double* %A, i64 %inc
   store double* %tmp, double** %ptr
   ret { <2 x double>, <2 x double>, <2 x double> } %ld1x3
 }
@@ -1854,7 +1854,7 @@ define { <1 x double>, <1 x double>, <1
 ;CHECK-LABEL: test_v1f64_post_imm_ld1x3:
 ;CHECK: ld1.1d { v0, v1, v2 }, [x0], #24
   %ld1x3 = tail call { <1 x double>, <1 x double>, <1 x double> } @llvm.aarch64.neon.ld1x3.v1f64.p0f64(double* %A)
-  %tmp = getelementptr double* %A, i32 3
+  %tmp = getelementptr double, double* %A, i32 3
   store double* %tmp, double** %ptr
   ret { <1 x double>, <1 x double>, <1 x double> } %ld1x3
 }
@@ -1863,7 +1863,7 @@ define { <1 x double>, <1 x double>, <1
 ;CHECK-LABEL: test_v1f64_post_reg_ld1x3:
 ;CHECK: ld1.1d { v0, v1, v2 }, [x0], x{{[0-9]+}}
   %ld1x3 = tail call { <1 x double>, <1 x double>, <1 x double> } @llvm.aarch64.neon.ld1x3.v1f64.p0f64(double* %A)
-  %tmp = getelementptr double* %A, i64 %inc
+  %tmp = getelementptr double, double* %A, i64 %inc
   store double* %tmp, double** %ptr
   ret { <1 x double>, <1 x double>, <1 x double> } %ld1x3
 }
@@ -1875,7 +1875,7 @@ define { <16 x i8>, <16 x i8>, <16 x i8>
 ;CHECK-LABEL: test_v16i8_post_imm_ld1x4:
 ;CHECK: ld1.16b { v0, v1, v2, v3 }, [x0], #64
   %ld1x4 = tail call { <16 x i8>, <16 x i8>, <16 x i8>, <16 x i8> } @llvm.aarch64.neon.ld1x4.v16i8.p0i8(i8* %A)
-  %tmp = getelementptr i8* %A, i32 64
+  %tmp = getelementptr i8, i8* %A, i32 64
   store i8* %tmp, i8** %ptr
   ret { <16 x i8>, <16 x i8>, <16 x i8>, <16 x i8> } %ld1x4
 }
@@ -1884,7 +1884,7 @@ define { <16 x i8>, <16 x i8>, <16 x i8>
 ;CHECK-LABEL: test_v16i8_post_reg_ld1x4:
 ;CHECK: ld1.16b { v0, v1, v2, v3 }, [x0], x{{[0-9]+}}
   %ld1x4 = tail call { <16 x i8>, <16 x i8>, <16 x i8>, <16 x i8> } @llvm.aarch64.neon.ld1x4.v16i8.p0i8(i8* %A)
-  %tmp = getelementptr i8* %A, i64 %inc
+  %tmp = getelementptr i8, i8* %A, i64 %inc
   store i8* %tmp, i8** %ptr
   ret { <16 x i8>, <16 x i8>, <16 x i8>, <16 x i8> } %ld1x4
 }
@@ -1896,7 +1896,7 @@ define { <8 x i8>, <8 x i8>, <8 x i8>, <
 ;CHECK-LABEL: test_v8i8_post_imm_ld1x4:
 ;CHECK: ld1.8b { v0, v1, v2, v3 }, [x0], #32
   %ld1x4 = tail call { <8 x i8>, <8 x i8>, <8 x i8>, <8 x i8> } @llvm.aarch64.neon.ld1x4.v8i8.p0i8(i8* %A)
-  %tmp = getelementptr i8* %A, i32 32
+  %tmp = getelementptr i8, i8* %A, i32 32
   store i8* %tmp, i8** %ptr
   ret { <8 x i8>, <8 x i8>, <8 x i8>, <8 x i8> } %ld1x4
 }
@@ -1905,7 +1905,7 @@ define { <8 x i8>, <8 x i8>, <8 x i8>, <
 ;CHECK-LABEL: test_v8i8_post_reg_ld1x4:
 ;CHECK: ld1.8b { v0, v1, v2, v3 }, [x0], x{{[0-9]+}}
   %ld1x4 = tail call { <8 x i8>, <8 x i8>, <8 x i8>, <8 x i8> } @llvm.aarch64.neon.ld1x4.v8i8.p0i8(i8* %A)
-  %tmp = getelementptr i8* %A, i64 %inc
+  %tmp = getelementptr i8, i8* %A, i64 %inc
   store i8* %tmp, i8** %ptr
   ret { <8 x i8>, <8 x i8>, <8 x i8>, <8 x i8> } %ld1x4
 }
@@ -1917,7 +1917,7 @@ define { <8 x i16>, <8 x i16>, <8 x i16>
 ;CHECK-LABEL: test_v8i16_post_imm_ld1x4:
 ;CHECK: ld1.8h { v0, v1, v2, v3 }, [x0], #64
   %ld1x4 = tail call { <8 x i16>, <8 x i16>, <8 x i16>, <8 x i16> } @llvm.aarch64.neon.ld1x4.v8i16.p0i16(i16* %A)
-  %tmp = getelementptr i16* %A, i32 32
+  %tmp = getelementptr i16, i16* %A, i32 32
   store i16* %tmp, i16** %ptr
   ret { <8 x i16>, <8 x i16>, <8 x i16>, <8 x i16> } %ld1x4
 }
@@ -1926,7 +1926,7 @@ define { <8 x i16>, <8 x i16>, <8 x i16>
 ;CHECK-LABEL: test_v8i16_post_reg_ld1x4:
 ;CHECK: ld1.8h { v0, v1, v2, v3 }, [x0], x{{[0-9]+}}
   %ld1x4 = tail call { <8 x i16>, <8 x i16>, <8 x i16>, <8 x i16> } @llvm.aarch64.neon.ld1x4.v8i16.p0i16(i16* %A)
-  %tmp = getelementptr i16* %A, i64 %inc
+  %tmp = getelementptr i16, i16* %A, i64 %inc
   store i16* %tmp, i16** %ptr
   ret { <8 x i16>, <8 x i16>, <8 x i16>, <8 x i16> } %ld1x4
 }
@@ -1938,7 +1938,7 @@ define { <4 x i16>, <4 x i16>, <4 x i16>
 ;CHECK-LABEL: test_v4i16_post_imm_ld1x4:
 ;CHECK: ld1.4h { v0, v1, v2, v3 }, [x0], #32
   %ld1x4 = tail call { <4 x i16>, <4 x i16>, <4 x i16>, <4 x i16> } @llvm.aarch64.neon.ld1x4.v4i16.p0i16(i16* %A)
-  %tmp = getelementptr i16* %A, i32 16
+  %tmp = getelementptr i16, i16* %A, i32 16
   store i16* %tmp, i16** %ptr
   ret { <4 x i16>, <4 x i16>, <4 x i16>, <4 x i16> } %ld1x4
 }
@@ -1947,7 +1947,7 @@ define { <4 x i16>, <4 x i16>, <4 x i16>
 ;CHECK-LABEL: test_v4i16_post_reg_ld1x4:
 ;CHECK: ld1.4h { v0, v1, v2, v3 }, [x0], x{{[0-9]+}}
   %ld1x4 = tail call { <4 x i16>, <4 x i16>, <4 x i16>, <4 x i16> } @llvm.aarch64.neon.ld1x4.v4i16.p0i16(i16* %A)
-  %tmp = getelementptr i16* %A, i64 %inc
+  %tmp = getelementptr i16, i16* %A, i64 %inc
   store i16* %tmp, i16** %ptr
   ret { <4 x i16>, <4 x i16>, <4 x i16>, <4 x i16> } %ld1x4
 }
@@ -1959,7 +1959,7 @@ define { <4 x i32>, <4 x i32>, <4 x i32>
 ;CHECK-LABEL: test_v4i32_post_imm_ld1x4:
 ;CHECK: ld1.4s { v0, v1, v2, v3 }, [x0], #64
   %ld1x4 = tail call { <4 x i32>, <4 x i32>, <4 x i32>, <4 x i32> } @llvm.aarch64.neon.ld1x4.v4i32.p0i32(i32* %A)
-  %tmp = getelementptr i32* %A, i32 16
+  %tmp = getelementptr i32, i32* %A, i32 16
   store i32* %tmp, i32** %ptr
   ret { <4 x i32>, <4 x i32>, <4 x i32>, <4 x i32> } %ld1x4
 }
@@ -1968,7 +1968,7 @@ define { <4 x i32>, <4 x i32>, <4 x i32>
 ;CHECK-LABEL: test_v4i32_post_reg_ld1x4:
 ;CHECK: ld1.4s { v0, v1, v2, v3 }, [x0], x{{[0-9]+}}
   %ld1x4 = tail call { <4 x i32>, <4 x i32>, <4 x i32>, <4 x i32> } @llvm.aarch64.neon.ld1x4.v4i32.p0i32(i32* %A)
-  %tmp = getelementptr i32* %A, i64 %inc
+  %tmp = getelementptr i32, i32* %A, i64 %inc
   store i32* %tmp, i32** %ptr
   ret { <4 x i32>, <4 x i32>, <4 x i32>, <4 x i32> } %ld1x4
 }
@@ -1980,7 +1980,7 @@ define { <2 x i32>, <2 x i32>, <2 x i32>
 ;CHECK-LABEL: test_v2i32_post_imm_ld1x4:
 ;CHECK: ld1.2s { v0, v1, v2, v3 }, [x0], #32
   %ld1x4 = tail call { <2 x i32>, <2 x i32>, <2 x i32>, <2 x i32> } @llvm.aarch64.neon.ld1x4.v2i32.p0i32(i32* %A)
-  %tmp = getelementptr i32* %A, i32 8
+  %tmp = getelementptr i32, i32* %A, i32 8
   store i32* %tmp, i32** %ptr
   ret { <2 x i32>, <2 x i32>, <2 x i32>, <2 x i32> } %ld1x4
 }
@@ -1989,7 +1989,7 @@ define { <2 x i32>, <2 x i32>, <2 x i32>
 ;CHECK-LABEL: test_v2i32_post_reg_ld1x4:
 ;CHECK: ld1.2s { v0, v1, v2, v3 }, [x0], x{{[0-9]+}}
   %ld1x4 = tail call { <2 x i32>, <2 x i32>, <2 x i32>, <2 x i32> } @llvm.aarch64.neon.ld1x4.v2i32.p0i32(i32* %A)
-  %tmp = getelementptr i32* %A, i64 %inc
+  %tmp = getelementptr i32, i32* %A, i64 %inc
   store i32* %tmp, i32** %ptr
   ret { <2 x i32>, <2 x i32>, <2 x i32>, <2 x i32> } %ld1x4
 }
@@ -2001,7 +2001,7 @@ define { <2 x i64>, <2 x i64>, <2 x i64>
 ;CHECK-LABEL: test_v2i64_post_imm_ld1x4:
 ;CHECK: ld1.2d { v0, v1, v2, v3 }, [x0], #64
   %ld1x4 = tail call { <2 x i64>, <2 x i64>, <2 x i64>, <2 x i64> } @llvm.aarch64.neon.ld1x4.v2i64.p0i64(i64* %A)
-  %tmp = getelementptr i64* %A, i32 8
+  %tmp = getelementptr i64, i64* %A, i32 8
   store i64* %tmp, i64** %ptr
   ret { <2 x i64>, <2 x i64>, <2 x i64>, <2 x i64> } %ld1x4
 }
@@ -2010,7 +2010,7 @@ define { <2 x i64>, <2 x i64>, <2 x i64>
 ;CHECK-LABEL: test_v2i64_post_reg_ld1x4:
 ;CHECK: ld1.2d { v0, v1, v2, v3 }, [x0], x{{[0-9]+}}
   %ld1x4 = tail call { <2 x i64>, <2 x i64>, <2 x i64>, <2 x i64> } @llvm.aarch64.neon.ld1x4.v2i64.p0i64(i64* %A)
-  %tmp = getelementptr i64* %A, i64 %inc
+  %tmp = getelementptr i64, i64* %A, i64 %inc
   store i64* %tmp, i64** %ptr
   ret { <2 x i64>, <2 x i64>, <2 x i64>, <2 x i64> } %ld1x4
 }
@@ -2022,7 +2022,7 @@ define { <1 x i64>, <1 x i64>, <1 x i64>
 ;CHECK-LABEL: test_v1i64_post_imm_ld1x4:
 ;CHECK: ld1.1d { v0, v1, v2, v3 }, [x0], #32
   %ld1x4 = tail call { <1 x i64>, <1 x i64>, <1 x i64>, <1 x i64> } @llvm.aarch64.neon.ld1x4.v1i64.p0i64(i64* %A)
-  %tmp = getelementptr i64* %A, i32 4
+  %tmp = getelementptr i64, i64* %A, i32 4
   store i64* %tmp, i64** %ptr
   ret { <1 x i64>, <1 x i64>, <1 x i64>, <1 x i64> } %ld1x4
 }
@@ -2031,7 +2031,7 @@ define { <1 x i64>, <1 x i64>, <1 x i64>
 ;CHECK-LABEL: test_v1i64_post_reg_ld1x4:
 ;CHECK: ld1.1d { v0, v1, v2, v3 }, [x0], x{{[0-9]+}}
   %ld1x4 = tail call { <1 x i64>, <1 x i64>, <1 x i64>, <1 x i64> } @llvm.aarch64.neon.ld1x4.v1i64.p0i64(i64* %A)
-  %tmp = getelementptr i64* %A, i64 %inc
+  %tmp = getelementptr i64, i64* %A, i64 %inc
   store i64* %tmp, i64** %ptr
   ret { <1 x i64>, <1 x i64>, <1 x i64>, <1 x i64> } %ld1x4
 }
@@ -2043,7 +2043,7 @@ define { <4 x float>, <4 x float>, <4 x
 ;CHECK-LABEL: test_v4f32_post_imm_ld1x4:
 ;CHECK: ld1.4s { v0, v1, v2, v3 }, [x0], #64
   %ld1x4 = tail call { <4 x float>, <4 x float>, <4 x float>, <4 x float> } @llvm.aarch64.neon.ld1x4.v4f32.p0f32(float* %A)
-  %tmp = getelementptr float* %A, i32 16
+  %tmp = getelementptr float, float* %A, i32 16
   store float* %tmp, float** %ptr
   ret { <4 x float>, <4 x float>, <4 x float>, <4 x float> } %ld1x4
 }
@@ -2052,7 +2052,7 @@ define { <4 x float>, <4 x float>, <4 x
 ;CHECK-LABEL: test_v4f32_post_reg_ld1x4:
 ;CHECK: ld1.4s { v0, v1, v2, v3 }, [x0], x{{[0-9]+}}
   %ld1x4 = tail call { <4 x float>, <4 x float>, <4 x float>, <4 x float> } @llvm.aarch64.neon.ld1x4.v4f32.p0f32(float* %A)
-  %tmp = getelementptr float* %A, i64 %inc
+  %tmp = getelementptr float, float* %A, i64 %inc
   store float* %tmp, float** %ptr
   ret { <4 x float>, <4 x float>, <4 x float>, <4 x float> } %ld1x4
 }
@@ -2064,7 +2064,7 @@ define { <2 x float>, <2 x float>, <2 x
 ;CHECK-LABEL: test_v2f32_post_imm_ld1x4:
 ;CHECK: ld1.2s { v0, v1, v2, v3 }, [x0], #32
   %ld1x4 = tail call { <2 x float>, <2 x float>, <2 x float>, <2 x float> } @llvm.aarch64.neon.ld1x4.v2f32.p0f32(float* %A)
-  %tmp = getelementptr float* %A, i32 8
+  %tmp = getelementptr float, float* %A, i32 8
   store float* %tmp, float** %ptr
   ret { <2 x float>, <2 x float>, <2 x float>, <2 x float> } %ld1x4
 }
@@ -2073,7 +2073,7 @@ define { <2 x float>, <2 x float>, <2 x
 ;CHECK-LABEL: test_v2f32_post_reg_ld1x4:
 ;CHECK: ld1.2s { v0, v1, v2, v3 }, [x0], x{{[0-9]+}}
   %ld1x4 = tail call { <2 x float>, <2 x float>, <2 x float>, <2 x float> } @llvm.aarch64.neon.ld1x4.v2f32.p0f32(float* %A)
-  %tmp = getelementptr float* %A, i64 %inc
+  %tmp = getelementptr float, float* %A, i64 %inc
   store float* %tmp, float** %ptr
   ret { <2 x float>, <2 x float>, <2 x float>, <2 x float> } %ld1x4
 }
@@ -2085,7 +2085,7 @@ define { <2 x double>, <2 x double>, <2
 ;CHECK-LABEL: test_v2f64_post_imm_ld1x4:
 ;CHECK: ld1.2d { v0, v1, v2, v3 }, [x0], #64
   %ld1x4 = tail call { <2 x double>, <2 x double>, <2 x double>, <2 x double> } @llvm.aarch64.neon.ld1x4.v2f64.p0f64(double* %A)
-  %tmp = getelementptr double* %A, i32 8
+  %tmp = getelementptr double, double* %A, i32 8
   store double* %tmp, double** %ptr
   ret { <2 x double>, <2 x double>, <2 x double>, <2 x double> } %ld1x4
 }
@@ -2094,7 +2094,7 @@ define { <2 x double>, <2 x double>, <2
 ;CHECK-LABEL: test_v2f64_post_reg_ld1x4:
 ;CHECK: ld1.2d { v0, v1, v2, v3 }, [x0], x{{[0-9]+}}
   %ld1x4 = tail call { <2 x double>, <2 x double>, <2 x double>, <2 x double> } @llvm.aarch64.neon.ld1x4.v2f64.p0f64(double* %A)
-  %tmp = getelementptr double* %A, i64 %inc
+  %tmp = getelementptr double, double* %A, i64 %inc
   store double* %tmp, double** %ptr
   ret { <2 x double>, <2 x double>, <2 x double>, <2 x double> } %ld1x4
 }
@@ -2106,7 +2106,7 @@ define { <1 x double>, <1 x double>, <1
 ;CHECK-LABEL: test_v1f64_post_imm_ld1x4:
 ;CHECK: ld1.1d { v0, v1, v2, v3 }, [x0], #32
   %ld1x4 = tail call { <1 x double>, <1 x double>, <1 x double>, <1 x double> } @llvm.aarch64.neon.ld1x4.v1f64.p0f64(double* %A)
-  %tmp = getelementptr double* %A, i32 4
+  %tmp = getelementptr double, double* %A, i32 4
   store double* %tmp, double** %ptr
   ret { <1 x double>, <1 x double>, <1 x double>, <1 x double> } %ld1x4
 }
@@ -2115,7 +2115,7 @@ define { <1 x double>, <1 x double>, <1
 ;CHECK-LABEL: test_v1f64_post_reg_ld1x4:
 ;CHECK: ld1.1d { v0, v1, v2, v3 }, [x0], x{{[0-9]+}}
   %ld1x4 = tail call { <1 x double>, <1 x double>, <1 x double>, <1 x double> } @llvm.aarch64.neon.ld1x4.v1f64.p0f64(double* %A)
-  %tmp = getelementptr double* %A, i64 %inc
+  %tmp = getelementptr double, double* %A, i64 %inc
   store double* %tmp, double** %ptr
   ret { <1 x double>, <1 x double>, <1 x double>, <1 x double> } %ld1x4
 }
@@ -2127,7 +2127,7 @@ define { <16 x i8>, <16 x i8> } @test_v1
 ;CHECK-LABEL: test_v16i8_post_imm_ld2r:
 ;CHECK: ld2r.16b { v0, v1 }, [x0], #2
   %ld2 = call { <16 x i8>, <16 x i8> } @llvm.aarch64.neon.ld2r.v16i8.p0i8(i8* %A)
-  %tmp = getelementptr i8* %A, i32 2
+  %tmp = getelementptr i8, i8* %A, i32 2
   store i8* %tmp, i8** %ptr
   ret { <16 x i8>, <16 x i8> } %ld2
 }
@@ -2136,7 +2136,7 @@ define { <16 x i8>, <16 x i8> } @test_v1
 ;CHECK-LABEL: test_v16i8_post_reg_ld2r:
 ;CHECK: ld2r.16b { v0, v1 }, [x0], x{{[0-9]+}}
   %ld2 = call { <16 x i8>, <16 x i8> } @llvm.aarch64.neon.ld2r.v16i8.p0i8(i8* %A)
-  %tmp = getelementptr i8* %A, i64 %inc
+  %tmp = getelementptr i8, i8* %A, i64 %inc
   store i8* %tmp, i8** %ptr
   ret { <16 x i8>, <16 x i8> } %ld2
 }
@@ -2148,7 +2148,7 @@ define { <8 x i8>, <8 x i8> } @test_v8i8
 ;CHECK-LABEL: test_v8i8_post_imm_ld2r:
 ;CHECK: ld2r.8b { v0, v1 }, [x0], #2
   %ld2 = call { <8 x i8>, <8 x i8> } @llvm.aarch64.neon.ld2r.v8i8.p0i8(i8* %A)
-  %tmp = getelementptr i8* %A, i32 2
+  %tmp = getelementptr i8, i8* %A, i32 2
   store i8* %tmp, i8** %ptr
   ret { <8 x i8>, <8 x i8> } %ld2
 }
@@ -2157,7 +2157,7 @@ define { <8 x i8>, <8 x i8> } @test_v8i8
 ;CHECK-LABEL: test_v8i8_post_reg_ld2r:
 ;CHECK: ld2r.8b { v0, v1 }, [x0], x{{[0-9]+}}
   %ld2 = call { <8 x i8>, <8 x i8> } @llvm.aarch64.neon.ld2r.v8i8.p0i8(i8* %A)
-  %tmp = getelementptr i8* %A, i64 %inc
+  %tmp = getelementptr i8, i8* %A, i64 %inc
   store i8* %tmp, i8** %ptr
   ret { <8 x i8>, <8 x i8> } %ld2
 }
@@ -2169,7 +2169,7 @@ define { <8 x i16>, <8 x i16> } @test_v8
 ;CHECK-LABEL: test_v8i16_post_imm_ld2r:
 ;CHECK: ld2r.8h { v0, v1 }, [x0], #4
   %ld2 = call { <8 x i16>, <8 x i16> } @llvm.aarch64.neon.ld2r.v8i16.p0i16(i16* %A)
-  %tmp = getelementptr i16* %A, i32 2
+  %tmp = getelementptr i16, i16* %A, i32 2
   store i16* %tmp, i16** %ptr
   ret { <8 x i16>, <8 x i16> } %ld2
 }
@@ -2178,7 +2178,7 @@ define { <8 x i16>, <8 x i16> } @test_v8
 ;CHECK-LABEL: test_v8i16_post_reg_ld2r:
 ;CHECK: ld2r.8h { v0, v1 }, [x0], x{{[0-9]+}}
   %ld2 = call { <8 x i16>, <8 x i16> } @llvm.aarch64.neon.ld2r.v8i16.p0i16(i16* %A)
-  %tmp = getelementptr i16* %A, i64 %inc
+  %tmp = getelementptr i16, i16* %A, i64 %inc
   store i16* %tmp, i16** %ptr
   ret { <8 x i16>, <8 x i16> } %ld2
 }
@@ -2190,7 +2190,7 @@ define { <4 x i16>, <4 x i16> } @test_v4
 ;CHECK-LABEL: test_v4i16_post_imm_ld2r:
 ;CHECK: ld2r.4h { v0, v1 }, [x0], #4
   %ld2 = call { <4 x i16>, <4 x i16> } @llvm.aarch64.neon.ld2r.v4i16.p0i16(i16* %A)
-  %tmp = getelementptr i16* %A, i32 2
+  %tmp = getelementptr i16, i16* %A, i32 2
   store i16* %tmp, i16** %ptr
   ret { <4 x i16>, <4 x i16> } %ld2
 }
@@ -2199,7 +2199,7 @@ define { <4 x i16>, <4 x i16> } @test_v4
 ;CHECK-LABEL: test_v4i16_post_reg_ld2r:
 ;CHECK: ld2r.4h { v0, v1 }, [x0], x{{[0-9]+}}
   %ld2 = call { <4 x i16>, <4 x i16> } @llvm.aarch64.neon.ld2r.v4i16.p0i16(i16* %A)
-  %tmp = getelementptr i16* %A, i64 %inc
+  %tmp = getelementptr i16, i16* %A, i64 %inc
   store i16* %tmp, i16** %ptr
   ret { <4 x i16>, <4 x i16> } %ld2
 }
@@ -2211,7 +2211,7 @@ define { <4 x i32>, <4 x i32> } @test_v4
 ;CHECK-LABEL: test_v4i32_post_imm_ld2r:
 ;CHECK: ld2r.4s { v0, v1 }, [x0], #8
   %ld2 = call { <4 x i32>, <4 x i32> } @llvm.aarch64.neon.ld2r.v4i32.p0i32(i32* %A)
-  %tmp = getelementptr i32* %A, i32 2
+  %tmp = getelementptr i32, i32* %A, i32 2
   store i32* %tmp, i32** %ptr
   ret { <4 x i32>, <4 x i32> } %ld2
 }
@@ -2220,7 +2220,7 @@ define { <4 x i32>, <4 x i32> } @test_v4
 ;CHECK-LABEL: test_v4i32_post_reg_ld2r:
 ;CHECK: ld2r.4s { v0, v1 }, [x0], x{{[0-9]+}}
   %ld2 = call { <4 x i32>, <4 x i32> } @llvm.aarch64.neon.ld2r.v4i32.p0i32(i32* %A)
-  %tmp = getelementptr i32* %A, i64 %inc
+  %tmp = getelementptr i32, i32* %A, i64 %inc
   store i32* %tmp, i32** %ptr
   ret { <4 x i32>, <4 x i32> } %ld2
 }
@@ -2231,7 +2231,7 @@ define { <2 x i32>, <2 x i32> } @test_v2
 ;CHECK-LABEL: test_v2i32_post_imm_ld2r:
 ;CHECK: ld2r.2s { v0, v1 }, [x0], #8
   %ld2 = call { <2 x i32>, <2 x i32> } @llvm.aarch64.neon.ld2r.v2i32.p0i32(i32* %A)
-  %tmp = getelementptr i32* %A, i32 2
+  %tmp = getelementptr i32, i32* %A, i32 2
   store i32* %tmp, i32** %ptr
   ret { <2 x i32>, <2 x i32> } %ld2
 }
@@ -2240,7 +2240,7 @@ define { <2 x i32>, <2 x i32> } @test_v2
 ;CHECK-LABEL: test_v2i32_post_reg_ld2r:
 ;CHECK: ld2r.2s { v0, v1 }, [x0], x{{[0-9]+}}
   %ld2 = call { <2 x i32>, <2 x i32> } @llvm.aarch64.neon.ld2r.v2i32.p0i32(i32* %A)
-  %tmp = getelementptr i32* %A, i64 %inc
+  %tmp = getelementptr i32, i32* %A, i64 %inc
   store i32* %tmp, i32** %ptr
   ret { <2 x i32>, <2 x i32> } %ld2
 }
@@ -2252,7 +2252,7 @@ define { <2 x i64>, <2 x i64> } @test_v2
 ;CHECK-LABEL: test_v2i64_post_imm_ld2r:
 ;CHECK: ld2r.2d { v0, v1 }, [x0], #16
   %ld2 = call { <2 x i64>, <2 x i64> } @llvm.aarch64.neon.ld2r.v2i64.p0i64(i64* %A)
-  %tmp = getelementptr i64* %A, i32 2
+  %tmp = getelementptr i64, i64* %A, i32 2
   store i64* %tmp, i64** %ptr
   ret { <2 x i64>, <2 x i64> } %ld2
 }
@@ -2261,7 +2261,7 @@ define { <2 x i64>, <2 x i64> } @test_v2
 ;CHECK-LABEL: test_v2i64_post_reg_ld2r:
 ;CHECK: ld2r.2d { v0, v1 }, [x0], x{{[0-9]+}}
   %ld2 = call { <2 x i64>, <2 x i64> } @llvm.aarch64.neon.ld2r.v2i64.p0i64(i64* %A)
-  %tmp = getelementptr i64* %A, i64 %inc
+  %tmp = getelementptr i64, i64* %A, i64 %inc
   store i64* %tmp, i64** %ptr
   ret { <2 x i64>, <2 x i64> } %ld2
 }
@@ -2272,7 +2272,7 @@ define { <1 x i64>, <1 x i64> } @test_v1
 ;CHECK-LABEL: test_v1i64_post_imm_ld2r:
 ;CHECK: ld2r.1d { v0, v1 }, [x0], #16
   %ld2 = call { <1 x i64>, <1 x i64> } @llvm.aarch64.neon.ld2r.v1i64.p0i64(i64* %A)
-  %tmp = getelementptr i64* %A, i32 2
+  %tmp = getelementptr i64, i64* %A, i32 2
   store i64* %tmp, i64** %ptr
   ret { <1 x i64>, <1 x i64> } %ld2
 }
@@ -2281,7 +2281,7 @@ define { <1 x i64>, <1 x i64> } @test_v1
 ;CHECK-LABEL: test_v1i64_post_reg_ld2r:
 ;CHECK: ld2r.1d { v0, v1 }, [x0], x{{[0-9]+}}
   %ld2 = call { <1 x i64>, <1 x i64> } @llvm.aarch64.neon.ld2r.v1i64.p0i64(i64* %A)
-  %tmp = getelementptr i64* %A, i64 %inc
+  %tmp = getelementptr i64, i64* %A, i64 %inc
   store i64* %tmp, i64** %ptr
   ret { <1 x i64>, <1 x i64> } %ld2
 }
@@ -2293,7 +2293,7 @@ define { <4 x float>, <4 x float> } @tes
 ;CHECK-LABEL: test_v4f32_post_imm_ld2r:
 ;CHECK: ld2r.4s { v0, v1 }, [x0], #8
   %ld2 = call { <4 x float>, <4 x float> } @llvm.aarch64.neon.ld2r.v4f32.p0f32(float* %A)
-  %tmp = getelementptr float* %A, i32 2
+  %tmp = getelementptr float, float* %A, i32 2
   store float* %tmp, float** %ptr
   ret { <4 x float>, <4 x float> } %ld2
 }
@@ -2302,7 +2302,7 @@ define { <4 x float>, <4 x float> } @tes
 ;CHECK-LABEL: test_v4f32_post_reg_ld2r:
 ;CHECK: ld2r.4s { v0, v1 }, [x0], x{{[0-9]+}}
   %ld2 = call { <4 x float>, <4 x float> } @llvm.aarch64.neon.ld2r.v4f32.p0f32(float* %A)
-  %tmp = getelementptr float* %A, i64 %inc
+  %tmp = getelementptr float, float* %A, i64 %inc
   store float* %tmp, float** %ptr
   ret { <4 x float>, <4 x float> } %ld2
 }
@@ -2313,7 +2313,7 @@ define { <2 x float>, <2 x float> } @tes
 ;CHECK-LABEL: test_v2f32_post_imm_ld2r:
 ;CHECK: ld2r.2s { v0, v1 }, [x0], #8
   %ld2 = call { <2 x float>, <2 x float> } @llvm.aarch64.neon.ld2r.v2f32.p0f32(float* %A)
-  %tmp = getelementptr float* %A, i32 2
+  %tmp = getelementptr float, float* %A, i32 2
   store float* %tmp, float** %ptr
   ret { <2 x float>, <2 x float> } %ld2
 }
@@ -2322,7 +2322,7 @@ define { <2 x float>, <2 x float> } @tes
 ;CHECK-LABEL: test_v2f32_post_reg_ld2r:
 ;CHECK: ld2r.2s { v0, v1 }, [x0], x{{[0-9]+}}
   %ld2 = call { <2 x float>, <2 x float> } @llvm.aarch64.neon.ld2r.v2f32.p0f32(float* %A)
-  %tmp = getelementptr float* %A, i64 %inc
+  %tmp = getelementptr float, float* %A, i64 %inc
   store float* %tmp, float** %ptr
   ret { <2 x float>, <2 x float> } %ld2
 }
@@ -2334,7 +2334,7 @@ define { <2 x double>, <2 x double> } @t
 ;CHECK-LABEL: test_v2f64_post_imm_ld2r:
 ;CHECK: ld2r.2d { v0, v1 }, [x0], #16
   %ld2 = call { <2 x double>, <2 x double> } @llvm.aarch64.neon.ld2r.v2f64.p0f64(double* %A)
-  %tmp = getelementptr double* %A, i32 2
+  %tmp = getelementptr double, double* %A, i32 2
   store double* %tmp, double** %ptr
   ret { <2 x double>, <2 x double> } %ld2
 }
@@ -2343,7 +2343,7 @@ define { <2 x double>, <2 x double> } @t
 ;CHECK-LABEL: test_v2f64_post_reg_ld2r:
 ;CHECK: ld2r.2d { v0, v1 }, [x0], x{{[0-9]+}}
   %ld2 = call { <2 x double>, <2 x double> } @llvm.aarch64.neon.ld2r.v2f64.p0f64(double* %A)
-  %tmp = getelementptr double* %A, i64 %inc
+  %tmp = getelementptr double, double* %A, i64 %inc
   store double* %tmp, double** %ptr
   ret { <2 x double>, <2 x double> } %ld2
 }
@@ -2354,7 +2354,7 @@ define { <1 x double>, <1 x double> } @t
 ;CHECK-LABEL: test_v1f64_post_imm_ld2r:
 ;CHECK: ld2r.1d { v0, v1 }, [x0], #16
   %ld2 = call { <1 x double>, <1 x double> } @llvm.aarch64.neon.ld2r.v1f64.p0f64(double* %A)
-  %tmp = getelementptr double* %A, i32 2
+  %tmp = getelementptr double, double* %A, i32 2
   store double* %tmp, double** %ptr
   ret { <1 x double>, <1 x double> } %ld2
 }
@@ -2363,7 +2363,7 @@ define { <1 x double>, <1 x double> } @t
 ;CHECK-LABEL: test_v1f64_post_reg_ld2r:
 ;CHECK: ld2r.1d { v0, v1 }, [x0], x{{[0-9]+}}
   %ld2 = call { <1 x double>, <1 x double> } @llvm.aarch64.neon.ld2r.v1f64.p0f64(double* %A)
-  %tmp = getelementptr double* %A, i64 %inc
+  %tmp = getelementptr double, double* %A, i64 %inc
   store double* %tmp, double** %ptr
   ret { <1 x double>, <1 x double> } %ld2
 }
@@ -2375,7 +2375,7 @@ define { <16 x i8>, <16 x i8>, <16 x i8>
 ;CHECK-LABEL: test_v16i8_post_imm_ld3r:
 ;CHECK: ld3r.16b { v0, v1, v2 }, [x0], #3
   %ld3 = call { <16 x i8>, <16 x i8>, <16 x i8> } @llvm.aarch64.neon.ld3r.v16i8.p0i8(i8* %A)
-  %tmp = getelementptr i8* %A, i32 3
+  %tmp = getelementptr i8, i8* %A, i32 3
   store i8* %tmp, i8** %ptr
   ret { <16 x i8>, <16 x i8>, <16 x i8> } %ld3
 }
@@ -2384,7 +2384,7 @@ define { <16 x i8>, <16 x i8>, <16 x i8>
 ;CHECK-LABEL: test_v16i8_post_reg_ld3r:
 ;CHECK: ld3r.16b { v0, v1, v2 }, [x0], x{{[0-9]+}}
   %ld3 = call { <16 x i8>, <16 x i8>, <16 x i8> } @llvm.aarch64.neon.ld3r.v16i8.p0i8(i8* %A)
-  %tmp = getelementptr i8* %A, i64 %inc
+  %tmp = getelementptr i8, i8* %A, i64 %inc
   store i8* %tmp, i8** %ptr
   ret { <16 x i8>, <16 x i8>, <16 x i8> } %ld3
 }
@@ -2396,7 +2396,7 @@ define { <8 x i8>, <8 x i8>, <8 x i8> }
 ;CHECK-LABEL: test_v8i8_post_imm_ld3r:
 ;CHECK: ld3r.8b { v0, v1, v2 }, [x0], #3
   %ld3 = call { <8 x i8>, <8 x i8>, <8 x i8> } @llvm.aarch64.neon.ld3r.v8i8.p0i8(i8* %A)
-  %tmp = getelementptr i8* %A, i32 3
+  %tmp = getelementptr i8, i8* %A, i32 3
   store i8* %tmp, i8** %ptr
   ret { <8 x i8>, <8 x i8>, <8 x i8> } %ld3
 }
@@ -2405,7 +2405,7 @@ define { <8 x i8>, <8 x i8>, <8 x i8> }
 ;CHECK-LABEL: test_v8i8_post_reg_ld3r:
 ;CHECK: ld3r.8b { v0, v1, v2 }, [x0], x{{[0-9]+}}
   %ld3 = call { <8 x i8>, <8 x i8>, <8 x i8> } @llvm.aarch64.neon.ld3r.v8i8.p0i8(i8* %A)
-  %tmp = getelementptr i8* %A, i64 %inc
+  %tmp = getelementptr i8, i8* %A, i64 %inc
   store i8* %tmp, i8** %ptr
   ret { <8 x i8>, <8 x i8>, <8 x i8> } %ld3
 }
@@ -2417,7 +2417,7 @@ define { <8 x i16>, <8 x i16>, <8 x i16>
 ;CHECK-LABEL: test_v8i16_post_imm_ld3r:
 ;CHECK: ld3r.8h { v0, v1, v2 }, [x0], #6
   %ld3 = call { <8 x i16>, <8 x i16>, <8 x i16> } @llvm.aarch64.neon.ld3r.v8i16.p0i16(i16* %A)
-  %tmp = getelementptr i16* %A, i32 3
+  %tmp = getelementptr i16, i16* %A, i32 3
   store i16* %tmp, i16** %ptr
   ret { <8 x i16>, <8 x i16>, <8 x i16> } %ld3
 }
@@ -2426,7 +2426,7 @@ define { <8 x i16>, <8 x i16>, <8 x i16>
 ;CHECK-LABEL: test_v8i16_post_reg_ld3r:
 ;CHECK: ld3r.8h { v0, v1, v2 }, [x0], x{{[0-9]+}}
   %ld3 = call { <8 x i16>, <8 x i16>, <8 x i16> } @llvm.aarch64.neon.ld3r.v8i16.p0i16(i16* %A)
-  %tmp = getelementptr i16* %A, i64 %inc
+  %tmp = getelementptr i16, i16* %A, i64 %inc
   store i16* %tmp, i16** %ptr
   ret { <8 x i16>, <8 x i16>, <8 x i16> } %ld3
 }
@@ -2438,7 +2438,7 @@ define { <4 x i16>, <4 x i16>, <4 x i16>
 ;CHECK-LABEL: test_v4i16_post_imm_ld3r:
 ;CHECK: ld3r.4h { v0, v1, v2 }, [x0], #6
   %ld3 = call { <4 x i16>, <4 x i16>, <4 x i16> } @llvm.aarch64.neon.ld3r.v4i16.p0i16(i16* %A)
-  %tmp = getelementptr i16* %A, i32 3
+  %tmp = getelementptr i16, i16* %A, i32 3
   store i16* %tmp, i16** %ptr
   ret { <4 x i16>, <4 x i16>, <4 x i16> } %ld3
 }
@@ -2447,7 +2447,7 @@ define { <4 x i16>, <4 x i16>, <4 x i16>
 ;CHECK-LABEL: test_v4i16_post_reg_ld3r:
 ;CHECK: ld3r.4h { v0, v1, v2 }, [x0], x{{[0-9]+}}
   %ld3 = call { <4 x i16>, <4 x i16>, <4 x i16> } @llvm.aarch64.neon.ld3r.v4i16.p0i16(i16* %A)
-  %tmp = getelementptr i16* %A, i64 %inc
+  %tmp = getelementptr i16, i16* %A, i64 %inc
   store i16* %tmp, i16** %ptr
   ret { <4 x i16>, <4 x i16>, <4 x i16> } %ld3
 }
@@ -2459,7 +2459,7 @@ define { <4 x i32>, <4 x i32>, <4 x i32>
 ;CHECK-LABEL: test_v4i32_post_imm_ld3r:
 ;CHECK: ld3r.4s { v0, v1, v2 }, [x0], #12
   %ld3 = call { <4 x i32>, <4 x i32>, <4 x i32> } @llvm.aarch64.neon.ld3r.v4i32.p0i32(i32* %A)
-  %tmp = getelementptr i32* %A, i32 3
+  %tmp = getelementptr i32, i32* %A, i32 3
   store i32* %tmp, i32** %ptr
   ret { <4 x i32>, <4 x i32>, <4 x i32> } %ld3
 }
@@ -2468,7 +2468,7 @@ define { <4 x i32>, <4 x i32>, <4 x i32>
 ;CHECK-LABEL: test_v4i32_post_reg_ld3r:
 ;CHECK: ld3r.4s { v0, v1, v2 }, [x0], x{{[0-9]+}}
   %ld3 = call { <4 x i32>, <4 x i32>, <4 x i32> } @llvm.aarch64.neon.ld3r.v4i32.p0i32(i32* %A)
-  %tmp = getelementptr i32* %A, i64 %inc
+  %tmp = getelementptr i32, i32* %A, i64 %inc
   store i32* %tmp, i32** %ptr
   ret { <4 x i32>, <4 x i32>, <4 x i32> } %ld3
 }
@@ -2479,7 +2479,7 @@ define { <2 x i32>, <2 x i32>, <2 x i32>
 ;CHECK-LABEL: test_v2i32_post_imm_ld3r:
 ;CHECK: ld3r.2s { v0, v1, v2 }, [x0], #12
   %ld3 = call { <2 x i32>, <2 x i32>, <2 x i32> } @llvm.aarch64.neon.ld3r.v2i32.p0i32(i32* %A)
-  %tmp = getelementptr i32* %A, i32 3
+  %tmp = getelementptr i32, i32* %A, i32 3
   store i32* %tmp, i32** %ptr
   ret { <2 x i32>, <2 x i32>, <2 x i32> } %ld3
 }
@@ -2488,7 +2488,7 @@ define { <2 x i32>, <2 x i32>, <2 x i32>
 ;CHECK-LABEL: test_v2i32_post_reg_ld3r:
 ;CHECK: ld3r.2s { v0, v1, v2 }, [x0], x{{[0-9]+}}
   %ld3 = call { <2 x i32>, <2 x i32>, <2 x i32> } @llvm.aarch64.neon.ld3r.v2i32.p0i32(i32* %A)
-  %tmp = getelementptr i32* %A, i64 %inc
+  %tmp = getelementptr i32, i32* %A, i64 %inc
   store i32* %tmp, i32** %ptr
   ret { <2 x i32>, <2 x i32>, <2 x i32> } %ld3
 }
@@ -2500,7 +2500,7 @@ define { <2 x i64>, <2 x i64>, <2 x i64>
 ;CHECK-LABEL: test_v2i64_post_imm_ld3r:
 ;CHECK: ld3r.2d { v0, v1, v2 }, [x0], #24
   %ld3 = call { <2 x i64>, <2 x i64>, <2 x i64> } @llvm.aarch64.neon.ld3r.v2i64.p0i64(i64* %A)
-  %tmp = getelementptr i64* %A, i32 3
+  %tmp = getelementptr i64, i64* %A, i32 3
   store i64* %tmp, i64** %ptr
   ret { <2 x i64>, <2 x i64>, <2 x i64> } %ld3
 }
@@ -2509,7 +2509,7 @@ define { <2 x i64>, <2 x i64>, <2 x i64>
 ;CHECK-LABEL: test_v2i64_post_reg_ld3r:
 ;CHECK: ld3r.2d { v0, v1, v2 }, [x0], x{{[0-9]+}}
   %ld3 = call { <2 x i64>, <2 x i64>, <2 x i64> } @llvm.aarch64.neon.ld3r.v2i64.p0i64(i64* %A)
-  %tmp = getelementptr i64* %A, i64 %inc
+  %tmp = getelementptr i64, i64* %A, i64 %inc
   store i64* %tmp, i64** %ptr
   ret { <2 x i64>, <2 x i64>, <2 x i64> } %ld3
 }
@@ -2520,7 +2520,7 @@ define { <1 x i64>, <1 x i64>, <1 x i64>
 ;CHECK-LABEL: test_v1i64_post_imm_ld3r:
 ;CHECK: ld3r.1d { v0, v1, v2 }, [x0], #24
   %ld3 = call { <1 x i64>, <1 x i64>, <1 x i64> } @llvm.aarch64.neon.ld3r.v1i64.p0i64(i64* %A)
-  %tmp = getelementptr i64* %A, i32 3
+  %tmp = getelementptr i64, i64* %A, i32 3
   store i64* %tmp, i64** %ptr
   ret { <1 x i64>, <1 x i64>, <1 x i64> } %ld3
 }
@@ -2529,7 +2529,7 @@ define { <1 x i64>, <1 x i64>, <1 x i64>
 ;CHECK-LABEL: test_v1i64_post_reg_ld3r:
 ;CHECK: ld3r.1d { v0, v1, v2 }, [x0], x{{[0-9]+}}
   %ld3 = call { <1 x i64>, <1 x i64>, <1 x i64> } @llvm.aarch64.neon.ld3r.v1i64.p0i64(i64* %A)
-  %tmp = getelementptr i64* %A, i64 %inc
+  %tmp = getelementptr i64, i64* %A, i64 %inc
   store i64* %tmp, i64** %ptr
   ret { <1 x i64>, <1 x i64>, <1 x i64> } %ld3
 }
@@ -2541,7 +2541,7 @@ define { <4 x float>, <4 x float>, <4 x
 ;CHECK-LABEL: test_v4f32_post_imm_ld3r:
 ;CHECK: ld3r.4s { v0, v1, v2 }, [x0], #12
   %ld3 = call { <4 x float>, <4 x float>, <4 x float> } @llvm.aarch64.neon.ld3r.v4f32.p0f32(float* %A)
-  %tmp = getelementptr float* %A, i32 3
+  %tmp = getelementptr float, float* %A, i32 3
   store float* %tmp, float** %ptr
   ret { <4 x float>, <4 x float>, <4 x float> } %ld3
 }
@@ -2550,7 +2550,7 @@ define { <4 x float>, <4 x float>, <4 x
 ;CHECK-LABEL: test_v4f32_post_reg_ld3r:
 ;CHECK: ld3r.4s { v0, v1, v2 }, [x0], x{{[0-9]+}}
   %ld3 = call { <4 x float>, <4 x float>, <4 x float> } @llvm.aarch64.neon.ld3r.v4f32.p0f32(float* %A)
-  %tmp = getelementptr float* %A, i64 %inc
+  %tmp = getelementptr float, float* %A, i64 %inc
   store float* %tmp, float** %ptr
   ret { <4 x float>, <4 x float>, <4 x float> } %ld3
 }
@@ -2561,7 +2561,7 @@ define { <2 x float>, <2 x float>, <2 x
 ;CHECK-LABEL: test_v2f32_post_imm_ld3r:
 ;CHECK: ld3r.2s { v0, v1, v2 }, [x0], #12
   %ld3 = call { <2 x float>, <2 x float>, <2 x float> } @llvm.aarch64.neon.ld3r.v2f32.p0f32(float* %A)
-  %tmp = getelementptr float* %A, i32 3
+  %tmp = getelementptr float, float* %A, i32 3
   store float* %tmp, float** %ptr
   ret { <2 x float>, <2 x float>, <2 x float> } %ld3
 }
@@ -2570,7 +2570,7 @@ define { <2 x float>, <2 x float>, <2 x
 ;CHECK-LABEL: test_v2f32_post_reg_ld3r:
 ;CHECK: ld3r.2s { v0, v1, v2 }, [x0], x{{[0-9]+}}
   %ld3 = call { <2 x float>, <2 x float>, <2 x float> } @llvm.aarch64.neon.ld3r.v2f32.p0f32(float* %A)
-  %tmp = getelementptr float* %A, i64 %inc
+  %tmp = getelementptr float, float* %A, i64 %inc
   store float* %tmp, float** %ptr
   ret { <2 x float>, <2 x float>, <2 x float> } %ld3
 }
@@ -2582,7 +2582,7 @@ define { <2 x double>, <2 x double>, <2
 ;CHECK-LABEL: test_v2f64_post_imm_ld3r:
 ;CHECK: ld3r.2d { v0, v1, v2 }, [x0], #24
   %ld3 = call { <2 x double>, <2 x double>, <2 x double> } @llvm.aarch64.neon.ld3r.v2f64.p0f64(double* %A)
-  %tmp = getelementptr double* %A, i32 3
+  %tmp = getelementptr double, double* %A, i32 3
   store double* %tmp, double** %ptr
   ret { <2 x double>, <2 x double>, <2 x double> } %ld3
 }
@@ -2591,7 +2591,7 @@ define { <2 x double>, <2 x double>, <2
 ;CHECK-LABEL: test_v2f64_post_reg_ld3r:
 ;CHECK: ld3r.2d { v0, v1, v2 }, [x0], x{{[0-9]+}}
   %ld3 = call { <2 x double>, <2 x double>, <2 x double> } @llvm.aarch64.neon.ld3r.v2f64.p0f64(double* %A)
-  %tmp = getelementptr double* %A, i64 %inc
+  %tmp = getelementptr double, double* %A, i64 %inc
   store double* %tmp, double** %ptr
   ret { <2 x double>, <2 x double>, <2 x double> } %ld3
 }
@@ -2602,7 +2602,7 @@ define { <1 x double>, <1 x double>, <1
 ;CHECK-LABEL: test_v1f64_post_imm_ld3r:
 ;CHECK: ld3r.1d { v0, v1, v2 }, [x0], #24
   %ld3 = call { <1 x double>, <1 x double>, <1 x double> } @llvm.aarch64.neon.ld3r.v1f64.p0f64(double* %A)
-  %tmp = getelementptr double* %A, i32 3
+  %tmp = getelementptr double, double* %A, i32 3
   store double* %tmp, double** %ptr
   ret { <1 x double>, <1 x double>, <1 x double> } %ld3
 }
@@ -2611,7 +2611,7 @@ define { <1 x double>, <1 x double>, <1
 ;CHECK-LABEL: test_v1f64_post_reg_ld3r:
 ;CHECK: ld3r.1d { v0, v1, v2 }, [x0], x{{[0-9]+}}
   %ld3 = call { <1 x double>, <1 x double>, <1 x double> } @llvm.aarch64.neon.ld3r.v1f64.p0f64(double* %A)
-  %tmp = getelementptr double* %A, i64 %inc
+  %tmp = getelementptr double, double* %A, i64 %inc
   store double* %tmp, double** %ptr
   ret { <1 x double>, <1 x double>, <1 x double> } %ld3
 }
@@ -2623,7 +2623,7 @@ define { <16 x i8>, <16 x i8>, <16 x i8>
 ;CHECK-LABEL: test_v16i8_post_imm_ld4r:
 ;CHECK: ld4r.16b { v0, v1, v2, v3 }, [x0], #4
   %ld4 = call { <16 x i8>, <16 x i8>, <16 x i8>, <16 x i8> } @llvm.aarch64.neon.ld4r.v16i8.p0i8(i8* %A)
-  %tmp = getelementptr i8* %A, i32 4
+  %tmp = getelementptr i8, i8* %A, i32 4
   store i8* %tmp, i8** %ptr
   ret { <16 x i8>, <16 x i8>, <16 x i8>, <16 x i8> } %ld4
 }
@@ -2632,7 +2632,7 @@ define { <16 x i8>, <16 x i8>, <16 x i8>
 ;CHECK-LABEL: test_v16i8_post_reg_ld4r:
 ;CHECK: ld4r.16b { v0, v1, v2, v3 }, [x0], x{{[0-9]+}}
   %ld4 = call { <16 x i8>, <16 x i8>, <16 x i8>, <16 x i8> } @llvm.aarch64.neon.ld4r.v16i8.p0i8(i8* %A)
-  %tmp = getelementptr i8* %A, i64 %inc
+  %tmp = getelementptr i8, i8* %A, i64 %inc
   store i8* %tmp, i8** %ptr
   ret { <16 x i8>, <16 x i8>, <16 x i8>, <16 x i8> } %ld4
 }
@@ -2644,7 +2644,7 @@ define { <8 x i8>, <8 x i8>, <8 x i8>, <
 ;CHECK-LABEL: test_v8i8_post_imm_ld4r:
 ;CHECK: ld4r.8b { v0, v1, v2, v3 }, [x0], #4
   %ld4 = call { <8 x i8>, <8 x i8>, <8 x i8>, <8 x i8> } @llvm.aarch64.neon.ld4r.v8i8.p0i8(i8* %A)
-  %tmp = getelementptr i8* %A, i32 4
+  %tmp = getelementptr i8, i8* %A, i32 4
   store i8* %tmp, i8** %ptr
   ret { <8 x i8>, <8 x i8>, <8 x i8>, <8 x i8> } %ld4
 }
@@ -2653,7 +2653,7 @@ define { <8 x i8>, <8 x i8>, <8 x i8>, <
 ;CHECK-LABEL: test_v8i8_post_reg_ld4r:
 ;CHECK: ld4r.8b { v0, v1, v2, v3 }, [x0], x{{[0-9]+}}
   %ld4 = call { <8 x i8>, <8 x i8>, <8 x i8>, <8 x i8> } @llvm.aarch64.neon.ld4r.v8i8.p0i8(i8* %A)
-  %tmp = getelementptr i8* %A, i64 %inc
+  %tmp = getelementptr i8, i8* %A, i64 %inc
   store i8* %tmp, i8** %ptr
   ret { <8 x i8>, <8 x i8>, <8 x i8>, <8 x i8> } %ld4
 }
@@ -2665,7 +2665,7 @@ define { <8 x i16>, <8 x i16>, <8 x i16>
 ;CHECK-LABEL: test_v8i16_post_imm_ld4r:
 ;CHECK: ld4r.8h { v0, v1, v2, v3 }, [x0], #8
   %ld4 = call { <8 x i16>, <8 x i16>, <8 x i16>, <8 x i16> } @llvm.aarch64.neon.ld4r.v8i16.p0i16(i16* %A)
-  %tmp = getelementptr i16* %A, i32 4
+  %tmp = getelementptr i16, i16* %A, i32 4
   store i16* %tmp, i16** %ptr
   ret { <8 x i16>, <8 x i16>, <8 x i16>, <8 x i16> } %ld4
 }
@@ -2674,7 +2674,7 @@ define { <8 x i16>, <8 x i16>, <8 x i16>
 ;CHECK-LABEL: test_v8i16_post_reg_ld4r:
 ;CHECK: ld4r.8h { v0, v1, v2, v3 }, [x0], x{{[0-9]+}}
   %ld4 = call { <8 x i16>, <8 x i16>, <8 x i16>, <8 x i16> } @llvm.aarch64.neon.ld4r.v8i16.p0i16(i16* %A)
-  %tmp = getelementptr i16* %A, i64 %inc
+  %tmp = getelementptr i16, i16* %A, i64 %inc
   store i16* %tmp, i16** %ptr
   ret { <8 x i16>, <8 x i16>, <8 x i16>, <8 x i16> } %ld4
 }
@@ -2686,7 +2686,7 @@ define { <4 x i16>, <4 x i16>, <4 x i16>
 ;CHECK-LABEL: test_v4i16_post_imm_ld4r:
 ;CHECK: ld4r.4h { v0, v1, v2, v3 }, [x0], #8
   %ld4 = call { <4 x i16>, <4 x i16>, <4 x i16>, <4 x i16> } @llvm.aarch64.neon.ld4r.v4i16.p0i16(i16* %A)
-  %tmp = getelementptr i16* %A, i32 4
+  %tmp = getelementptr i16, i16* %A, i32 4
   store i16* %tmp, i16** %ptr
   ret { <4 x i16>, <4 x i16>, <4 x i16>, <4 x i16> } %ld4
 }
@@ -2695,7 +2695,7 @@ define { <4 x i16>, <4 x i16>, <4 x i16>
 ;CHECK-LABEL: test_v4i16_post_reg_ld4r:
 ;CHECK: ld4r.4h { v0, v1, v2, v3 }, [x0], x{{[0-9]+}}
   %ld4 = call { <4 x i16>, <4 x i16>, <4 x i16>, <4 x i16> } @llvm.aarch64.neon.ld4r.v4i16.p0i16(i16* %A)
-  %tmp = getelementptr i16* %A, i64 %inc
+  %tmp = getelementptr i16, i16* %A, i64 %inc
   store i16* %tmp, i16** %ptr
   ret { <4 x i16>, <4 x i16>, <4 x i16>, <4 x i16> } %ld4
 }
@@ -2707,7 +2707,7 @@ define { <4 x i32>, <4 x i32>, <4 x i32>
 ;CHECK-LABEL: test_v4i32_post_imm_ld4r:
 ;CHECK: ld4r.4s { v0, v1, v2, v3 }, [x0], #16
   %ld4 = call { <4 x i32>, <4 x i32>, <4 x i32>, <4 x i32> } @llvm.aarch64.neon.ld4r.v4i32.p0i32(i32* %A)
-  %tmp = getelementptr i32* %A, i32 4
+  %tmp = getelementptr i32, i32* %A, i32 4
   store i32* %tmp, i32** %ptr
   ret { <4 x i32>, <4 x i32>, <4 x i32>, <4 x i32> } %ld4
 }
@@ -2716,7 +2716,7 @@ define { <4 x i32>, <4 x i32>, <4 x i32>
 ;CHECK-LABEL: test_v4i32_post_reg_ld4r:
 ;CHECK: ld4r.4s { v0, v1, v2, v3 }, [x0], x{{[0-9]+}}
   %ld4 = call { <4 x i32>, <4 x i32>, <4 x i32>, <4 x i32> } @llvm.aarch64.neon.ld4r.v4i32.p0i32(i32* %A)
-  %tmp = getelementptr i32* %A, i64 %inc
+  %tmp = getelementptr i32, i32* %A, i64 %inc
   store i32* %tmp, i32** %ptr
   ret { <4 x i32>, <4 x i32>, <4 x i32>, <4 x i32> } %ld4
 }
@@ -2727,7 +2727,7 @@ define { <2 x i32>, <2 x i32>, <2 x i32>
 ;CHECK-LABEL: test_v2i32_post_imm_ld4r:
 ;CHECK: ld4r.2s { v0, v1, v2, v3 }, [x0], #16
   %ld4 = call { <2 x i32>, <2 x i32>, <2 x i32>, <2 x i32> } @llvm.aarch64.neon.ld4r.v2i32.p0i32(i32* %A)
-  %tmp = getelementptr i32* %A, i32 4
+  %tmp = getelementptr i32, i32* %A, i32 4
   store i32* %tmp, i32** %ptr
   ret { <2 x i32>, <2 x i32>, <2 x i32>, <2 x i32> } %ld4
 }
@@ -2736,7 +2736,7 @@ define { <2 x i32>, <2 x i32>, <2 x i32>
 ;CHECK-LABEL: test_v2i32_post_reg_ld4r:
 ;CHECK: ld4r.2s { v0, v1, v2, v3 }, [x0], x{{[0-9]+}}
   %ld4 = call { <2 x i32>, <2 x i32>, <2 x i32>, <2 x i32> } @llvm.aarch64.neon.ld4r.v2i32.p0i32(i32* %A)
-  %tmp = getelementptr i32* %A, i64 %inc
+  %tmp = getelementptr i32, i32* %A, i64 %inc
   store i32* %tmp, i32** %ptr
   ret { <2 x i32>, <2 x i32>, <2 x i32>, <2 x i32> } %ld4
 }
@@ -2748,7 +2748,7 @@ define { <2 x i64>, <2 x i64>, <2 x i64>
 ;CHECK-LABEL: test_v2i64_post_imm_ld4r:
 ;CHECK: ld4r.2d { v0, v1, v2, v3 }, [x0], #32
   %ld4 = call { <2 x i64>, <2 x i64>, <2 x i64>, <2 x i64> } @llvm.aarch64.neon.ld4r.v2i64.p0i64(i64* %A)
-  %tmp = getelementptr i64* %A, i32 4
+  %tmp = getelementptr i64, i64* %A, i32 4
   store i64* %tmp, i64** %ptr
   ret { <2 x i64>, <2 x i64>, <2 x i64>, <2 x i64> } %ld4
 }
@@ -2757,7 +2757,7 @@ define { <2 x i64>, <2 x i64>, <2 x i64>
 ;CHECK-LABEL: test_v2i64_post_reg_ld4r:
 ;CHECK: ld4r.2d { v0, v1, v2, v3 }, [x0], x{{[0-9]+}}
   %ld4 = call { <2 x i64>, <2 x i64>, <2 x i64>, <2 x i64> } @llvm.aarch64.neon.ld4r.v2i64.p0i64(i64* %A)
-  %tmp = getelementptr i64* %A, i64 %inc
+  %tmp = getelementptr i64, i64* %A, i64 %inc
   store i64* %tmp, i64** %ptr
   ret { <2 x i64>, <2 x i64>, <2 x i64>, <2 x i64> } %ld4
 }
@@ -2768,7 +2768,7 @@ define { <1 x i64>, <1 x i64>, <1 x i64>
 ;CHECK-LABEL: test_v1i64_post_imm_ld4r:
 ;CHECK: ld4r.1d { v0, v1, v2, v3 }, [x0], #32
   %ld4 = call { <1 x i64>, <1 x i64>, <1 x i64>, <1 x i64> } @llvm.aarch64.neon.ld4r.v1i64.p0i64(i64* %A)
-  %tmp = getelementptr i64* %A, i32 4
+  %tmp = getelementptr i64, i64* %A, i32 4
   store i64* %tmp, i64** %ptr
   ret { <1 x i64>, <1 x i64>, <1 x i64>, <1 x i64> } %ld4
 }
@@ -2777,7 +2777,7 @@ define { <1 x i64>, <1 x i64>, <1 x i64>
 ;CHECK-LABEL: test_v1i64_post_reg_ld4r:
 ;CHECK: ld4r.1d { v0, v1, v2, v3 }, [x0], x{{[0-9]+}}
   %ld4 = call { <1 x i64>, <1 x i64>, <1 x i64>, <1 x i64> } @llvm.aarch64.neon.ld4r.v1i64.p0i64(i64* %A)
-  %tmp = getelementptr i64* %A, i64 %inc
+  %tmp = getelementptr i64, i64* %A, i64 %inc
   store i64* %tmp, i64** %ptr
   ret { <1 x i64>, <1 x i64>, <1 x i64>, <1 x i64> } %ld4
 }
@@ -2789,7 +2789,7 @@ define { <4 x float>, <4 x float>, <4 x
 ;CHECK-LABEL: test_v4f32_post_imm_ld4r:
 ;CHECK: ld4r.4s { v0, v1, v2, v3 }, [x0], #16
   %ld4 = call { <4 x float>, <4 x float>, <4 x float>, <4 x float> } @llvm.aarch64.neon.ld4r.v4f32.p0f32(float* %A)
-  %tmp = getelementptr float* %A, i32 4
+  %tmp = getelementptr float, float* %A, i32 4
   store float* %tmp, float** %ptr
   ret { <4 x float>, <4 x float>, <4 x float>, <4 x float> } %ld4
 }
@@ -2798,7 +2798,7 @@ define { <4 x float>, <4 x float>, <4 x
 ;CHECK-LABEL: test_v4f32_post_reg_ld4r:
 ;CHECK: ld4r.4s { v0, v1, v2, v3 }, [x0], x{{[0-9]+}}
   %ld4 = call { <4 x float>, <4 x float>, <4 x float>, <4 x float> } @llvm.aarch64.neon.ld4r.v4f32.p0f32(float* %A)
-  %tmp = getelementptr float* %A, i64 %inc
+  %tmp = getelementptr float, float* %A, i64 %inc
   store float* %tmp, float** %ptr
   ret { <4 x float>, <4 x float>, <4 x float>, <4 x float> } %ld4
 }
@@ -2809,7 +2809,7 @@ define { <2 x float>, <2 x float>, <2 x
 ;CHECK-LABEL: test_v2f32_post_imm_ld4r:
 ;CHECK: ld4r.2s { v0, v1, v2, v3 }, [x0], #16
   %ld4 = call { <2 x float>, <2 x float>, <2 x float>, <2 x float> } @llvm.aarch64.neon.ld4r.v2f32.p0f32(float* %A)
-  %tmp = getelementptr float* %A, i32 4
+  %tmp = getelementptr float, float* %A, i32 4
   store float* %tmp, float** %ptr
   ret { <2 x float>, <2 x float>, <2 x float>, <2 x float> } %ld4
 }
@@ -2818,7 +2818,7 @@ define { <2 x float>, <2 x float>, <2 x
 ;CHECK-LABEL: test_v2f32_post_reg_ld4r:
 ;CHECK: ld4r.2s { v0, v1, v2, v3 }, [x0], x{{[0-9]+}}
   %ld4 = call { <2 x float>, <2 x float>, <2 x float>, <2 x float> } @llvm.aarch64.neon.ld4r.v2f32.p0f32(float* %A)
-  %tmp = getelementptr float* %A, i64 %inc
+  %tmp = getelementptr float, float* %A, i64 %inc
   store float* %tmp, float** %ptr
   ret { <2 x float>, <2 x float>, <2 x float>, <2 x float> } %ld4
 }
@@ -2830,7 +2830,7 @@ define { <2 x double>, <2 x double>, <2
 ;CHECK-LABEL: test_v2f64_post_imm_ld4r:
 ;CHECK: ld4r.2d { v0, v1, v2, v3 }, [x0], #32
   %ld4 = call { <2 x double>, <2 x double>, <2 x double>, <2 x double> } @llvm.aarch64.neon.ld4r.v2f64.p0f64(double* %A)
-  %tmp = getelementptr double* %A, i32 4
+  %tmp = getelementptr double, double* %A, i32 4
   store double* %tmp, double** %ptr
   ret { <2 x double>, <2 x double>, <2 x double>, <2 x double> } %ld4
 }
@@ -2839,7 +2839,7 @@ define { <2 x double>, <2 x double>, <2
 ;CHECK-LABEL: test_v2f64_post_reg_ld4r:
 ;CHECK: ld4r.2d { v0, v1, v2, v3 }, [x0], x{{[0-9]+}}
   %ld4 = call { <2 x double>, <2 x double>, <2 x double>, <2 x double> } @llvm.aarch64.neon.ld4r.v2f64.p0f64(double* %A)
-  %tmp = getelementptr double* %A, i64 %inc
+  %tmp = getelementptr double, double* %A, i64 %inc
   store double* %tmp, double** %ptr
   ret { <2 x double>, <2 x double>, <2 x double>, <2 x double> } %ld4
 }
@@ -2850,7 +2850,7 @@ define { <1 x double>, <1 x double>, <1
 ;CHECK-LABEL: test_v1f64_post_imm_ld4r:
 ;CHECK: ld4r.1d { v0, v1, v2, v3 }, [x0], #32
   %ld4 = call { <1 x double>, <1 x double>, <1 x double>, <1 x double> } @llvm.aarch64.neon.ld4r.v1f64.p0f64(double* %A)
-  %tmp = getelementptr double* %A, i32 4
+  %tmp = getelementptr double, double* %A, i32 4
   store double* %tmp, double** %ptr
   ret { <1 x double>, <1 x double>, <1 x double>, <1 x double> } %ld4
 }
@@ -2859,7 +2859,7 @@ define { <1 x double>, <1 x double>, <1
 ;CHECK-LABEL: test_v1f64_post_reg_ld4r:
 ;CHECK: ld4r.1d { v0, v1, v2, v3 }, [x0], x{{[0-9]+}}
   %ld4 = call { <1 x double>, <1 x double>, <1 x double>, <1 x double> } @llvm.aarch64.neon.ld4r.v1f64.p0f64(double* %A)
-  %tmp = getelementptr double* %A, i64 %inc
+  %tmp = getelementptr double, double* %A, i64 %inc
   store double* %tmp, double** %ptr
   ret { <1 x double>, <1 x double>, <1 x double>, <1 x double> } %ld4
 }
@@ -2871,7 +2871,7 @@ define { <16 x i8>, <16 x i8> } @test_v1
 ;CHECK-LABEL: test_v16i8_post_imm_ld2lane:
 ;CHECK: ld2.b { v0, v1 }[0], [x0], #2
   %ld2 = call { <16 x i8>, <16 x i8> } @llvm.aarch64.neon.ld2lane.v16i8.p0i8(<16 x i8> %B, <16 x i8> %C, i64 0, i8* %A)
-  %tmp = getelementptr i8* %A, i32 2
+  %tmp = getelementptr i8, i8* %A, i32 2
   store i8* %tmp, i8** %ptr
   ret { <16 x i8>, <16 x i8> } %ld2
 }
@@ -2880,7 +2880,7 @@ define { <16 x i8>, <16 x i8> } @test_v1
 ;CHECK-LABEL: test_v16i8_post_reg_ld2lane:
 ;CHECK: ld2.b { v0, v1 }[0], [x0], x{{[0-9]+}}
   %ld2 = call { <16 x i8>, <16 x i8> } @llvm.aarch64.neon.ld2lane.v16i8.p0i8(<16 x i8> %B, <16 x i8> %C, i64 0, i8* %A)
-  %tmp = getelementptr i8* %A, i64 %inc
+  %tmp = getelementptr i8, i8* %A, i64 %inc
   store i8* %tmp, i8** %ptr
   ret { <16 x i8>, <16 x i8> } %ld2
 }
@@ -2892,7 +2892,7 @@ define { <8 x i8>, <8 x i8> } @test_v8i8
 ;CHECK-LABEL: test_v8i8_post_imm_ld2lane:
 ;CHECK: ld2.b { v0, v1 }[0], [x0], #2
   %ld2 = call { <8 x i8>, <8 x i8> } @llvm.aarch64.neon.ld2lane.v8i8.p0i8(<8 x i8> %B, <8 x i8> %C, i64 0, i8* %A)
-  %tmp = getelementptr i8* %A, i32 2
+  %tmp = getelementptr i8, i8* %A, i32 2
   store i8* %tmp, i8** %ptr
   ret { <8 x i8>, <8 x i8> } %ld2
 }
@@ -2901,7 +2901,7 @@ define { <8 x i8>, <8 x i8> } @test_v8i8
 ;CHECK-LABEL: test_v8i8_post_reg_ld2lane:
 ;CHECK: ld2.b { v0, v1 }[0], [x0], x{{[0-9]+}}
   %ld2 = call { <8 x i8>, <8 x i8> } @llvm.aarch64.neon.ld2lane.v8i8.p0i8(<8 x i8> %B, <8 x i8> %C, i64 0, i8* %A)
-  %tmp = getelementptr i8* %A, i64 %inc
+  %tmp = getelementptr i8, i8* %A, i64 %inc
   store i8* %tmp, i8** %ptr
   ret { <8 x i8>, <8 x i8> } %ld2
 }
@@ -2913,7 +2913,7 @@ define { <8 x i16>, <8 x i16> } @test_v8
 ;CHECK-LABEL: test_v8i16_post_imm_ld2lane:
 ;CHECK: ld2.h { v0, v1 }[0], [x0], #4
   %ld2 = call { <8 x i16>, <8 x i16> } @llvm.aarch64.neon.ld2lane.v8i16.p0i16(<8 x i16> %B, <8 x i16> %C, i64 0, i16* %A)
-  %tmp = getelementptr i16* %A, i32 2
+  %tmp = getelementptr i16, i16* %A, i32 2
   store i16* %tmp, i16** %ptr
   ret { <8 x i16>, <8 x i16> } %ld2
 }
@@ -2922,7 +2922,7 @@ define { <8 x i16>, <8 x i16> } @test_v8
 ;CHECK-LABEL: test_v8i16_post_reg_ld2lane:
 ;CHECK: ld2.h { v0, v1 }[0], [x0], x{{[0-9]+}}
   %ld2 = call { <8 x i16>, <8 x i16> } @llvm.aarch64.neon.ld2lane.v8i16.p0i16(<8 x i16> %B, <8 x i16> %C, i64 0, i16* %A)
-  %tmp = getelementptr i16* %A, i64 %inc
+  %tmp = getelementptr i16, i16* %A, i64 %inc
   store i16* %tmp, i16** %ptr
   ret { <8 x i16>, <8 x i16> } %ld2
 }
@@ -2934,7 +2934,7 @@ define { <4 x i16>, <4 x i16> } @test_v4
 ;CHECK-LABEL: test_v4i16_post_imm_ld2lane:
 ;CHECK: ld2.h { v0, v1 }[0], [x0], #4
   %ld2 = call { <4 x i16>, <4 x i16> } @llvm.aarch64.neon.ld2lane.v4i16.p0i16(<4 x i16> %B, <4 x i16> %C, i64 0, i16* %A)
-  %tmp = getelementptr i16* %A, i32 2
+  %tmp = getelementptr i16, i16* %A, i32 2
   store i16* %tmp, i16** %ptr
   ret { <4 x i16>, <4 x i16> } %ld2
 }
@@ -2943,7 +2943,7 @@ define { <4 x i16>, <4 x i16> } @test_v4
 ;CHECK-LABEL: test_v4i16_post_reg_ld2lane:
 ;CHECK: ld2.h { v0, v1 }[0], [x0], x{{[0-9]+}}
   %ld2 = call { <4 x i16>, <4 x i16> } @llvm.aarch64.neon.ld2lane.v4i16.p0i16(<4 x i16> %B, <4 x i16> %C, i64 0, i16* %A)
-  %tmp = getelementptr i16* %A, i64 %inc
+  %tmp = getelementptr i16, i16* %A, i64 %inc
   store i16* %tmp, i16** %ptr
   ret { <4 x i16>, <4 x i16> } %ld2
 }
@@ -2955,7 +2955,7 @@ define { <4 x i32>, <4 x i32> } @test_v4
 ;CHECK-LABEL: test_v4i32_post_imm_ld2lane:
 ;CHECK: ld2.s { v0, v1 }[0], [x0], #8
   %ld2 = call { <4 x i32>, <4 x i32> } @llvm.aarch64.neon.ld2lane.v4i32.p0i32(<4 x i32> %B, <4 x i32> %C, i64 0, i32* %A)
-  %tmp = getelementptr i32* %A, i32 2
+  %tmp = getelementptr i32, i32* %A, i32 2
   store i32* %tmp, i32** %ptr
   ret { <4 x i32>, <4 x i32> } %ld2
 }
@@ -2964,7 +2964,7 @@ define { <4 x i32>, <4 x i32> } @test_v4
 ;CHECK-LABEL: test_v4i32_post_reg_ld2lane:
 ;CHECK: ld2.s { v0, v1 }[0], [x0], x{{[0-9]+}}
   %ld2 = call { <4 x i32>, <4 x i32> } @llvm.aarch64.neon.ld2lane.v4i32.p0i32(<4 x i32> %B, <4 x i32> %C, i64 0, i32* %A)
-  %tmp = getelementptr i32* %A, i64 %inc
+  %tmp = getelementptr i32, i32* %A, i64 %inc
   store i32* %tmp, i32** %ptr
   ret { <4 x i32>, <4 x i32> } %ld2
 }
@@ -2976,7 +2976,7 @@ define { <2 x i32>, <2 x i32> } @test_v2
 ;CHECK-LABEL: test_v2i32_post_imm_ld2lane:
 ;CHECK: ld2.s { v0, v1 }[0], [x0], #8
   %ld2 = call { <2 x i32>, <2 x i32> } @llvm.aarch64.neon.ld2lane.v2i32.p0i32(<2 x i32> %B, <2 x i32> %C, i64 0, i32* %A)
-  %tmp = getelementptr i32* %A, i32 2
+  %tmp = getelementptr i32, i32* %A, i32 2
   store i32* %tmp, i32** %ptr
   ret { <2 x i32>, <2 x i32> } %ld2
 }
@@ -2985,7 +2985,7 @@ define { <2 x i32>, <2 x i32> } @test_v2
 ;CHECK-LABEL: test_v2i32_post_reg_ld2lane:
 ;CHECK: ld2.s { v0, v1 }[0], [x0], x{{[0-9]+}}
   %ld2 = call { <2 x i32>, <2 x i32> } @llvm.aarch64.neon.ld2lane.v2i32.p0i32(<2 x i32> %B, <2 x i32> %C, i64 0, i32* %A)
-  %tmp = getelementptr i32* %A, i64 %inc
+  %tmp = getelementptr i32, i32* %A, i64 %inc
   store i32* %tmp, i32** %ptr
   ret { <2 x i32>, <2 x i32> } %ld2
 }
@@ -2997,7 +2997,7 @@ define { <2 x i64>, <2 x i64> } @test_v2
 ;CHECK-LABEL: test_v2i64_post_imm_ld2lane:
 ;CHECK: ld2.d { v0, v1 }[0], [x0], #16
   %ld2 = call { <2 x i64>, <2 x i64> } @llvm.aarch64.neon.ld2lane.v2i64.p0i64(<2 x i64> %B, <2 x i64> %C, i64 0, i64* %A)
-  %tmp = getelementptr i64* %A, i32 2
+  %tmp = getelementptr i64, i64* %A, i32 2
   store i64* %tmp, i64** %ptr
   ret { <2 x i64>, <2 x i64> } %ld2
 }
@@ -3006,7 +3006,7 @@ define { <2 x i64>, <2 x i64> } @test_v2
 ;CHECK-LABEL: test_v2i64_post_reg_ld2lane:
 ;CHECK: ld2.d { v0, v1 }[0], [x0], x{{[0-9]+}}
   %ld2 = call { <2 x i64>, <2 x i64> } @llvm.aarch64.neon.ld2lane.v2i64.p0i64(<2 x i64> %B, <2 x i64> %C, i64 0, i64* %A)
-  %tmp = getelementptr i64* %A, i64 %inc
+  %tmp = getelementptr i64, i64* %A, i64 %inc
   store i64* %tmp, i64** %ptr
   ret { <2 x i64>, <2 x i64> } %ld2
 }
@@ -3018,7 +3018,7 @@ define { <1 x i64>, <1 x i64> } @test_v1
 ;CHECK-LABEL: test_v1i64_post_imm_ld2lane:
 ;CHECK: ld2.d { v0, v1 }[0], [x0], #16
   %ld2 = call { <1 x i64>, <1 x i64> } @llvm.aarch64.neon.ld2lane.v1i64.p0i64(<1 x i64> %B, <1 x i64> %C, i64 0, i64* %A)
-  %tmp = getelementptr i64* %A, i32 2
+  %tmp = getelementptr i64, i64* %A, i32 2
   store i64* %tmp, i64** %ptr
   ret { <1 x i64>, <1 x i64> } %ld2
 }
@@ -3027,7 +3027,7 @@ define { <1 x i64>, <1 x i64> } @test_v1
 ;CHECK-LABEL: test_v1i64_post_reg_ld2lane:
 ;CHECK: ld2.d { v0, v1 }[0], [x0], x{{[0-9]+}}
   %ld2 = call { <1 x i64>, <1 x i64> } @llvm.aarch64.neon.ld2lane.v1i64.p0i64(<1 x i64> %B, <1 x i64> %C, i64 0, i64* %A)
-  %tmp = getelementptr i64* %A, i64 %inc
+  %tmp = getelementptr i64, i64* %A, i64 %inc
   store i64* %tmp, i64** %ptr
   ret { <1 x i64>, <1 x i64> } %ld2
 }
@@ -3039,7 +3039,7 @@ define { <4 x float>, <4 x float> } @tes
 ;CHECK-LABEL: test_v4f32_post_imm_ld2lane:
 ;CHECK: ld2.s { v0, v1 }[0], [x0], #8
   %ld2 = call { <4 x float>, <4 x float> } @llvm.aarch64.neon.ld2lane.v4f32.p0f32(<4 x float> %B, <4 x float> %C, i64 0, float* %A)
-  %tmp = getelementptr float* %A, i32 2
+  %tmp = getelementptr float, float* %A, i32 2
   store float* %tmp, float** %ptr
   ret { <4 x float>, <4 x float> } %ld2
 }
@@ -3048,7 +3048,7 @@ define { <4 x float>, <4 x float> } @tes
 ;CHECK-LABEL: test_v4f32_post_reg_ld2lane:
 ;CHECK: ld2.s { v0, v1 }[0], [x0], x{{[0-9]+}}
   %ld2 = call { <4 x float>, <4 x float> } @llvm.aarch64.neon.ld2lane.v4f32.p0f32(<4 x float> %B, <4 x float> %C, i64 0, float* %A)
-  %tmp = getelementptr float* %A, i64 %inc
+  %tmp = getelementptr float, float* %A, i64 %inc
   store float* %tmp, float** %ptr
   ret { <4 x float>, <4 x float> } %ld2
 }
@@ -3060,7 +3060,7 @@ define { <2 x float>, <2 x float> } @tes
 ;CHECK-LABEL: test_v2f32_post_imm_ld2lane:
 ;CHECK: ld2.s { v0, v1 }[0], [x0], #8
   %ld2 = call { <2 x float>, <2 x float> } @llvm.aarch64.neon.ld2lane.v2f32.p0f32(<2 x float> %B, <2 x float> %C, i64 0, float* %A)
-  %tmp = getelementptr float* %A, i32 2
+  %tmp = getelementptr float, float* %A, i32 2
   store float* %tmp, float** %ptr
   ret { <2 x float>, <2 x float> } %ld2
 }
@@ -3069,7 +3069,7 @@ define { <2 x float>, <2 x float> } @tes
 ;CHECK-LABEL: test_v2f32_post_reg_ld2lane:
 ;CHECK: ld2.s { v0, v1 }[0], [x0], x{{[0-9]+}}
   %ld2 = call { <2 x float>, <2 x float> } @llvm.aarch64.neon.ld2lane.v2f32.p0f32(<2 x float> %B, <2 x float> %C, i64 0, float* %A)
-  %tmp = getelementptr float* %A, i64 %inc
+  %tmp = getelementptr float, float* %A, i64 %inc
   store float* %tmp, float** %ptr
   ret { <2 x float>, <2 x float> } %ld2
 }
@@ -3081,7 +3081,7 @@ define { <2 x double>, <2 x double> } @t
 ;CHECK-LABEL: test_v2f64_post_imm_ld2lane:
 ;CHECK: ld2.d { v0, v1 }[0], [x0], #16
   %ld2 = call { <2 x double>, <2 x double> } @llvm.aarch64.neon.ld2lane.v2f64.p0f64(<2 x double> %B, <2 x double> %C, i64 0, double* %A)
-  %tmp = getelementptr double* %A, i32 2
+  %tmp = getelementptr double, double* %A, i32 2
   store double* %tmp, double** %ptr
   ret { <2 x double>, <2 x double> } %ld2
 }
@@ -3090,7 +3090,7 @@ define { <2 x double>, <2 x double> } @t
 ;CHECK-LABEL: test_v2f64_post_reg_ld2lane:
 ;CHECK: ld2.d { v0, v1 }[0], [x0], x{{[0-9]+}}
   %ld2 = call { <2 x double>, <2 x double> } @llvm.aarch64.neon.ld2lane.v2f64.p0f64(<2 x double> %B, <2 x double> %C, i64 0, double* %A)
-  %tmp = getelementptr double* %A, i64 %inc
+  %tmp = getelementptr double, double* %A, i64 %inc
   store double* %tmp, double** %ptr
   ret { <2 x double>, <2 x double> } %ld2
 }
@@ -3102,7 +3102,7 @@ define { <1 x double>, <1 x double> } @t
 ;CHECK-LABEL: test_v1f64_post_imm_ld2lane:
 ;CHECK: ld2.d { v0, v1 }[0], [x0], #16
   %ld2 = call { <1 x double>, <1 x double> } @llvm.aarch64.neon.ld2lane.v1f64.p0f64(<1 x double> %B, <1 x double> %C, i64 0, double* %A)
-  %tmp = getelementptr double* %A, i32 2
+  %tmp = getelementptr double, double* %A, i32 2
   store double* %tmp, double** %ptr
   ret { <1 x double>, <1 x double> } %ld2
 }
@@ -3111,7 +3111,7 @@ define { <1 x double>, <1 x double> } @t
 ;CHECK-LABEL: test_v1f64_post_reg_ld2lane:
 ;CHECK: ld2.d { v0, v1 }[0], [x0], x{{[0-9]+}}
   %ld2 = call { <1 x double>, <1 x double> } @llvm.aarch64.neon.ld2lane.v1f64.p0f64(<1 x double> %B, <1 x double> %C, i64 0, double* %A)
-  %tmp = getelementptr double* %A, i64 %inc
+  %tmp = getelementptr double, double* %A, i64 %inc
   store double* %tmp, double** %ptr
   ret { <1 x double>, <1 x double> } %ld2
 }
@@ -3123,7 +3123,7 @@ define { <16 x i8>, <16 x i8>, <16 x i8>
 ;CHECK-LABEL: test_v16i8_post_imm_ld3lane:
 ;CHECK: ld3.b { v0, v1, v2 }[0], [x0], #3
   %ld3 = call { <16 x i8>, <16 x i8>, <16 x i8> } @llvm.aarch64.neon.ld3lane.v16i8.p0i8(<16 x i8> %B, <16 x i8> %C, <16 x i8> %D, i64 0, i8* %A)
-  %tmp = getelementptr i8* %A, i32 3
+  %tmp = getelementptr i8, i8* %A, i32 3
   store i8* %tmp, i8** %ptr
   ret { <16 x i8>, <16 x i8>, <16 x i8> } %ld3
 }
@@ -3132,7 +3132,7 @@ define { <16 x i8>, <16 x i8>, <16 x i8>
 ;CHECK-LABEL: test_v16i8_post_reg_ld3lane:
 ;CHECK: ld3.b { v0, v1, v2 }[0], [x0], x{{[0-9]+}}
   %ld3 = call { <16 x i8>, <16 x i8>, <16 x i8> } @llvm.aarch64.neon.ld3lane.v16i8.p0i8(<16 x i8> %B, <16 x i8> %C, <16 x i8> %D, i64 0, i8* %A)
-  %tmp = getelementptr i8* %A, i64 %inc
+  %tmp = getelementptr i8, i8* %A, i64 %inc
   store i8* %tmp, i8** %ptr
   ret { <16 x i8>, <16 x i8>, <16 x i8> } %ld3
 }
@@ -3144,7 +3144,7 @@ define { <8 x i8>, <8 x i8>, <8 x i8> }
 ;CHECK-LABEL: test_v8i8_post_imm_ld3lane:
 ;CHECK: ld3.b { v0, v1, v2 }[0], [x0], #3
   %ld3 = call { <8 x i8>, <8 x i8>, <8 x i8> } @llvm.aarch64.neon.ld3lane.v8i8.p0i8(<8 x i8> %B, <8 x i8> %C, <8 x i8> %D, i64 0, i8* %A)
-  %tmp = getelementptr i8* %A, i32 3
+  %tmp = getelementptr i8, i8* %A, i32 3
   store i8* %tmp, i8** %ptr
   ret { <8 x i8>, <8 x i8>, <8 x i8> } %ld3
 }
@@ -3153,7 +3153,7 @@ define { <8 x i8>, <8 x i8>, <8 x i8> }
 ;CHECK-LABEL: test_v8i8_post_reg_ld3lane:
 ;CHECK: ld3.b { v0, v1, v2 }[0], [x0], x{{[0-9]+}}
   %ld3 = call { <8 x i8>, <8 x i8>, <8 x i8> } @llvm.aarch64.neon.ld3lane.v8i8.p0i8(<8 x i8> %B, <8 x i8> %C, <8 x i8> %D, i64 0, i8* %A)
-  %tmp = getelementptr i8* %A, i64 %inc
+  %tmp = getelementptr i8, i8* %A, i64 %inc
   store i8* %tmp, i8** %ptr
   ret { <8 x i8>, <8 x i8>, <8 x i8> } %ld3
 }
@@ -3165,7 +3165,7 @@ define { <8 x i16>, <8 x i16>, <8 x i16>
 ;CHECK-LABEL: test_v8i16_post_imm_ld3lane:
 ;CHECK: ld3.h { v0, v1, v2 }[0], [x0], #6
   %ld3 = call { <8 x i16>, <8 x i16>, <8 x i16> } @llvm.aarch64.neon.ld3lane.v8i16.p0i16(<8 x i16> %B, <8 x i16> %C, <8 x i16> %D, i64 0, i16* %A)
-  %tmp = getelementptr i16* %A, i32 3
+  %tmp = getelementptr i16, i16* %A, i32 3
   store i16* %tmp, i16** %ptr
   ret { <8 x i16>, <8 x i16>, <8 x i16> } %ld3
 }
@@ -3174,7 +3174,7 @@ define { <8 x i16>, <8 x i16>, <8 x i16>
 ;CHECK-LABEL: test_v8i16_post_reg_ld3lane:
 ;CHECK: ld3.h { v0, v1, v2 }[0], [x0], x{{[0-9]+}}
   %ld3 = call { <8 x i16>, <8 x i16>, <8 x i16> } @llvm.aarch64.neon.ld3lane.v8i16.p0i16(<8 x i16> %B, <8 x i16> %C, <8 x i16> %D, i64 0, i16* %A)
-  %tmp = getelementptr i16* %A, i64 %inc
+  %tmp = getelementptr i16, i16* %A, i64 %inc
   store i16* %tmp, i16** %ptr
   ret { <8 x i16>, <8 x i16>, <8 x i16> } %ld3
 }
@@ -3186,7 +3186,7 @@ define { <4 x i16>, <4 x i16>, <4 x i16>
 ;CHECK-LABEL: test_v4i16_post_imm_ld3lane:
 ;CHECK: ld3.h { v0, v1, v2 }[0], [x0], #6
   %ld3 = call { <4 x i16>, <4 x i16>, <4 x i16> } @llvm.aarch64.neon.ld3lane.v4i16.p0i16(<4 x i16> %B, <4 x i16> %C, <4 x i16> %D, i64 0, i16* %A)
-  %tmp = getelementptr i16* %A, i32 3
+  %tmp = getelementptr i16, i16* %A, i32 3
   store i16* %tmp, i16** %ptr
   ret { <4 x i16>, <4 x i16>, <4 x i16> } %ld3
 }
@@ -3195,7 +3195,7 @@ define { <4 x i16>, <4 x i16>, <4 x i16>
 ;CHECK-LABEL: test_v4i16_post_reg_ld3lane:
 ;CHECK: ld3.h { v0, v1, v2 }[0], [x0], x{{[0-9]+}}
   %ld3 = call { <4 x i16>, <4 x i16>, <4 x i16> } @llvm.aarch64.neon.ld3lane.v4i16.p0i16(<4 x i16> %B, <4 x i16> %C, <4 x i16> %D, i64 0, i16* %A)
-  %tmp = getelementptr i16* %A, i64 %inc
+  %tmp = getelementptr i16, i16* %A, i64 %inc
   store i16* %tmp, i16** %ptr
   ret { <4 x i16>, <4 x i16>, <4 x i16> } %ld3
 }
@@ -3207,7 +3207,7 @@ define { <4 x i32>, <4 x i32>, <4 x i32>
 ;CHECK-LABEL: test_v4i32_post_imm_ld3lane:
 ;CHECK: ld3.s { v0, v1, v2 }[0], [x0], #12
   %ld3 = call { <4 x i32>, <4 x i32>, <4 x i32> } @llvm.aarch64.neon.ld3lane.v4i32.p0i32(<4 x i32> %B, <4 x i32> %C, <4 x i32> %D, i64 0, i32* %A)
-  %tmp = getelementptr i32* %A, i32 3
+  %tmp = getelementptr i32, i32* %A, i32 3
   store i32* %tmp, i32** %ptr
   ret { <4 x i32>, <4 x i32>, <4 x i32> } %ld3
 }
@@ -3216,7 +3216,7 @@ define { <4 x i32>, <4 x i32>, <4 x i32>
 ;CHECK-LABEL: test_v4i32_post_reg_ld3lane:
 ;CHECK: ld3.s { v0, v1, v2 }[0], [x0], x{{[0-9]+}}
   %ld3 = call { <4 x i32>, <4 x i32>, <4 x i32> } @llvm.aarch64.neon.ld3lane.v4i32.p0i32(<4 x i32> %B, <4 x i32> %C, <4 x i32> %D, i64 0, i32* %A)
-  %tmp = getelementptr i32* %A, i64 %inc
+  %tmp = getelementptr i32, i32* %A, i64 %inc
   store i32* %tmp, i32** %ptr
   ret { <4 x i32>, <4 x i32>, <4 x i32> } %ld3
 }
@@ -3228,7 +3228,7 @@ define { <2 x i32>, <2 x i32>, <2 x i32>
 ;CHECK-LABEL: test_v2i32_post_imm_ld3lane:
 ;CHECK: ld3.s { v0, v1, v2 }[0], [x0], #12
   %ld3 = call { <2 x i32>, <2 x i32>, <2 x i32> } @llvm.aarch64.neon.ld3lane.v2i32.p0i32(<2 x i32> %B, <2 x i32> %C, <2 x i32> %D, i64 0, i32* %A)
-  %tmp = getelementptr i32* %A, i32 3
+  %tmp = getelementptr i32, i32* %A, i32 3
   store i32* %tmp, i32** %ptr
   ret { <2 x i32>, <2 x i32>, <2 x i32> } %ld3
 }
@@ -3237,7 +3237,7 @@ define { <2 x i32>, <2 x i32>, <2 x i32>
 ;CHECK-LABEL: test_v2i32_post_reg_ld3lane:
 ;CHECK: ld3.s { v0, v1, v2 }[0], [x0], x{{[0-9]+}}
   %ld3 = call { <2 x i32>, <2 x i32>, <2 x i32> } @llvm.aarch64.neon.ld3lane.v2i32.p0i32(<2 x i32> %B, <2 x i32> %C, <2 x i32> %D, i64 0, i32* %A)
-  %tmp = getelementptr i32* %A, i64 %inc
+  %tmp = getelementptr i32, i32* %A, i64 %inc
   store i32* %tmp, i32** %ptr
   ret { <2 x i32>, <2 x i32>, <2 x i32> } %ld3
 }
@@ -3249,7 +3249,7 @@ define { <2 x i64>, <2 x i64>, <2 x i64>
 ;CHECK-LABEL: test_v2i64_post_imm_ld3lane:
 ;CHECK: ld3.d { v0, v1, v2 }[0], [x0], #24
   %ld3 = call { <2 x i64>, <2 x i64>, <2 x i64> } @llvm.aarch64.neon.ld3lane.v2i64.p0i64(<2 x i64> %B, <2 x i64> %C, <2 x i64> %D, i64 0, i64* %A)
-  %tmp = getelementptr i64* %A, i32 3
+  %tmp = getelementptr i64, i64* %A, i32 3
   store i64* %tmp, i64** %ptr
   ret { <2 x i64>, <2 x i64>, <2 x i64> } %ld3
 }
@@ -3258,7 +3258,7 @@ define { <2 x i64>, <2 x i64>, <2 x i64>
 ;CHECK-LABEL: test_v2i64_post_reg_ld3lane:
 ;CHECK: ld3.d { v0, v1, v2 }[0], [x0], x{{[0-9]+}}
   %ld3 = call { <2 x i64>, <2 x i64>, <2 x i64> } @llvm.aarch64.neon.ld3lane.v2i64.p0i64(<2 x i64> %B, <2 x i64> %C, <2 x i64> %D, i64 0, i64* %A)
-  %tmp = getelementptr i64* %A, i64 %inc
+  %tmp = getelementptr i64, i64* %A, i64 %inc
   store i64* %tmp, i64** %ptr
   ret { <2 x i64>, <2 x i64>, <2 x i64> } %ld3
 }
@@ -3270,7 +3270,7 @@ define { <1 x i64>, <1 x i64>, <1 x i64>
 ;CHECK-LABEL: test_v1i64_post_imm_ld3lane:
 ;CHECK: ld3.d { v0, v1, v2 }[0], [x0], #24
   %ld3 = call { <1 x i64>, <1 x i64>, <1 x i64> } @llvm.aarch64.neon.ld3lane.v1i64.p0i64(<1 x i64> %B, <1 x i64> %C, <1 x i64> %D, i64 0, i64* %A)
-  %tmp = getelementptr i64* %A, i32 3
+  %tmp = getelementptr i64, i64* %A, i32 3
   store i64* %tmp, i64** %ptr
   ret { <1 x i64>, <1 x i64>, <1 x i64> } %ld3
 }
@@ -3279,7 +3279,7 @@ define { <1 x i64>, <1 x i64>, <1 x i64>
 ;CHECK-LABEL: test_v1i64_post_reg_ld3lane:
 ;CHECK: ld3.d { v0, v1, v2 }[0], [x0], x{{[0-9]+}}
   %ld3 = call { <1 x i64>, <1 x i64>, <1 x i64> } @llvm.aarch64.neon.ld3lane.v1i64.p0i64(<1 x i64> %B, <1 x i64> %C, <1 x i64> %D, i64 0, i64* %A)
-  %tmp = getelementptr i64* %A, i64 %inc
+  %tmp = getelementptr i64, i64* %A, i64 %inc
   store i64* %tmp, i64** %ptr
   ret { <1 x i64>, <1 x i64>, <1 x i64> } %ld3
 }
@@ -3291,7 +3291,7 @@ define { <4 x float>, <4 x float>, <4 x
 ;CHECK-LABEL: test_v4f32_post_imm_ld3lane:
 ;CHECK: ld3.s { v0, v1, v2 }[0], [x0], #12
   %ld3 = call { <4 x float>, <4 x float>, <4 x float> } @llvm.aarch64.neon.ld3lane.v4f32.p0f32(<4 x float> %B, <4 x float> %C, <4 x float> %D, i64 0, float* %A)
-  %tmp = getelementptr float* %A, i32 3
+  %tmp = getelementptr float, float* %A, i32 3
   store float* %tmp, float** %ptr
   ret { <4 x float>, <4 x float>, <4 x float> } %ld3
 }
@@ -3300,7 +3300,7 @@ define { <4 x float>, <4 x float>, <4 x
 ;CHECK-LABEL: test_v4f32_post_reg_ld3lane:
 ;CHECK: ld3.s { v0, v1, v2 }[0], [x0], x{{[0-9]+}}
   %ld3 = call { <4 x float>, <4 x float>, <4 x float> } @llvm.aarch64.neon.ld3lane.v4f32.p0f32(<4 x float> %B, <4 x float> %C, <4 x float> %D, i64 0, float* %A)
-  %tmp = getelementptr float* %A, i64 %inc
+  %tmp = getelementptr float, float* %A, i64 %inc
   store float* %tmp, float** %ptr
   ret { <4 x float>, <4 x float>, <4 x float> } %ld3
 }
@@ -3312,7 +3312,7 @@ define { <2 x float>, <2 x float>, <2 x
 ;CHECK-LABEL: test_v2f32_post_imm_ld3lane:
 ;CHECK: ld3.s { v0, v1, v2 }[0], [x0], #12
   %ld3 = call { <2 x float>, <2 x float>, <2 x float> } @llvm.aarch64.neon.ld3lane.v2f32.p0f32(<2 x float> %B, <2 x float> %C, <2 x float> %D, i64 0, float* %A)
-  %tmp = getelementptr float* %A, i32 3
+  %tmp = getelementptr float, float* %A, i32 3
   store float* %tmp, float** %ptr
   ret { <2 x float>, <2 x float>, <2 x float> } %ld3
 }
@@ -3321,7 +3321,7 @@ define { <2 x float>, <2 x float>, <2 x
 ;CHECK-LABEL: test_v2f32_post_reg_ld3lane:
 ;CHECK: ld3.s { v0, v1, v2 }[0], [x0], x{{[0-9]+}}
   %ld3 = call { <2 x float>, <2 x float>, <2 x float> } @llvm.aarch64.neon.ld3lane.v2f32.p0f32(<2 x float> %B, <2 x float> %C, <2 x float> %D, i64 0, float* %A)
-  %tmp = getelementptr float* %A, i64 %inc
+  %tmp = getelementptr float, float* %A, i64 %inc
   store float* %tmp, float** %ptr
   ret { <2 x float>, <2 x float>, <2 x float> } %ld3
 }
@@ -3333,7 +3333,7 @@ define { <2 x double>, <2 x double>, <2
 ;CHECK-LABEL: test_v2f64_post_imm_ld3lane:
 ;CHECK: ld3.d { v0, v1, v2 }[0], [x0], #24
   %ld3 = call { <2 x double>, <2 x double>, <2 x double> } @llvm.aarch64.neon.ld3lane.v2f64.p0f64(<2 x double> %B, <2 x double> %C, <2 x double> %D, i64 0, double* %A)
-  %tmp = getelementptr double* %A, i32 3
+  %tmp = getelementptr double, double* %A, i32 3
   store double* %tmp, double** %ptr
   ret { <2 x double>, <2 x double>, <2 x double> } %ld3
 }
@@ -3342,7 +3342,7 @@ define { <2 x double>, <2 x double>, <2
 ;CHECK-LABEL: test_v2f64_post_reg_ld3lane:
 ;CHECK: ld3.d { v0, v1, v2 }[0], [x0], x{{[0-9]+}}
   %ld3 = call { <2 x double>, <2 x double>, <2 x double> } @llvm.aarch64.neon.ld3lane.v2f64.p0f64(<2 x double> %B, <2 x double> %C, <2 x double> %D, i64 0, double* %A)
-  %tmp = getelementptr double* %A, i64 %inc
+  %tmp = getelementptr double, double* %A, i64 %inc
   store double* %tmp, double** %ptr
   ret { <2 x double>, <2 x double>, <2 x double> } %ld3
 }
@@ -3354,7 +3354,7 @@ define { <1 x double>, <1 x double>, <1
 ;CHECK-LABEL: test_v1f64_post_imm_ld3lane:
 ;CHECK: ld3.d { v0, v1, v2 }[0], [x0], #24
   %ld3 = call { <1 x double>, <1 x double>, <1 x double> } @llvm.aarch64.neon.ld3lane.v1f64.p0f64(<1 x double> %B, <1 x double> %C, <1 x double> %D, i64 0, double* %A)
-  %tmp = getelementptr double* %A, i32 3
+  %tmp = getelementptr double, double* %A, i32 3
   store double* %tmp, double** %ptr
   ret { <1 x double>, <1 x double>, <1 x double> } %ld3
 }
@@ -3363,7 +3363,7 @@ define { <1 x double>, <1 x double>, <1
 ;CHECK-LABEL: test_v1f64_post_reg_ld3lane:
 ;CHECK: ld3.d { v0, v1, v2 }[0], [x0], x{{[0-9]+}}
   %ld3 = call { <1 x double>, <1 x double>, <1 x double> } @llvm.aarch64.neon.ld3lane.v1f64.p0f64(<1 x double> %B, <1 x double> %C, <1 x double> %D, i64 0, double* %A)
-  %tmp = getelementptr double* %A, i64 %inc
+  %tmp = getelementptr double, double* %A, i64 %inc
   store double* %tmp, double** %ptr
   ret { <1 x double>, <1 x double>, <1 x double> } %ld3
 }
@@ -3375,7 +3375,7 @@ define { <16 x i8>, <16 x i8>, <16 x i8>
 ;CHECK-LABEL: test_v16i8_post_imm_ld4lane:
 ;CHECK: ld4.b { v0, v1, v2, v3 }[0], [x0], #4
   %ld4 = call { <16 x i8>, <16 x i8>, <16 x i8>, <16 x i8> } @llvm.aarch64.neon.ld4lane.v16i8.p0i8(<16 x i8> %B, <16 x i8> %C, <16 x i8> %D, <16 x i8> %E, i64 0, i8* %A)
-  %tmp = getelementptr i8* %A, i32 4
+  %tmp = getelementptr i8, i8* %A, i32 4
   store i8* %tmp, i8** %ptr
   ret { <16 x i8>, <16 x i8>, <16 x i8>, <16 x i8> } %ld4
 }
@@ -3384,7 +3384,7 @@ define { <16 x i8>, <16 x i8>, <16 x i8>
 ;CHECK-LABEL: test_v16i8_post_reg_ld4lane:
 ;CHECK: ld4.b { v0, v1, v2, v3 }[0], [x0], x{{[0-9]+}}
   %ld4 = call { <16 x i8>, <16 x i8>, <16 x i8>, <16 x i8> } @llvm.aarch64.neon.ld4lane.v16i8.p0i8(<16 x i8> %B, <16 x i8> %C, <16 x i8> %D, <16 x i8> %E, i64 0, i8* %A)
-  %tmp = getelementptr i8* %A, i64 %inc
+  %tmp = getelementptr i8, i8* %A, i64 %inc
   store i8* %tmp, i8** %ptr
   ret { <16 x i8>, <16 x i8>, <16 x i8>, <16 x i8> } %ld4
 }
@@ -3396,7 +3396,7 @@ define { <8 x i8>, <8 x i8>, <8 x i8>, <
 ;CHECK-LABEL: test_v8i8_post_imm_ld4lane:
 ;CHECK: ld4.b { v0, v1, v2, v3 }[0], [x0], #4
   %ld4 = call { <8 x i8>, <8 x i8>, <8 x i8>, <8 x i8> } @llvm.aarch64.neon.ld4lane.v8i8.p0i8(<8 x i8> %B, <8 x i8> %C, <8 x i8> %D, <8 x i8> %E, i64 0, i8* %A)
-  %tmp = getelementptr i8* %A, i32 4
+  %tmp = getelementptr i8, i8* %A, i32 4
   store i8* %tmp, i8** %ptr
   ret { <8 x i8>, <8 x i8>, <8 x i8>, <8 x i8> } %ld4
 }
@@ -3405,7 +3405,7 @@ define { <8 x i8>, <8 x i8>, <8 x i8>, <
 ;CHECK-LABEL: test_v8i8_post_reg_ld4lane:
 ;CHECK: ld4.b { v0, v1, v2, v3 }[0], [x0], x{{[0-9]+}}
   %ld4 = call { <8 x i8>, <8 x i8>, <8 x i8>, <8 x i8> } @llvm.aarch64.neon.ld4lane.v8i8.p0i8(<8 x i8> %B, <8 x i8> %C, <8 x i8> %D, <8 x i8> %E, i64 0, i8* %A)
-  %tmp = getelementptr i8* %A, i64 %inc
+  %tmp = getelementptr i8, i8* %A, i64 %inc
   store i8* %tmp, i8** %ptr
   ret { <8 x i8>, <8 x i8>, <8 x i8>, <8 x i8> } %ld4
 }
@@ -3417,7 +3417,7 @@ define { <8 x i16>, <8 x i16>, <8 x i16>
 ;CHECK-LABEL: test_v8i16_post_imm_ld4lane:
 ;CHECK: ld4.h { v0, v1, v2, v3 }[0], [x0], #8
   %ld4 = call { <8 x i16>, <8 x i16>, <8 x i16>, <8 x i16> } @llvm.aarch64.neon.ld4lane.v8i16.p0i16(<8 x i16> %B, <8 x i16> %C, <8 x i16> %D, <8 x i16> %E, i64 0, i16* %A)
-  %tmp = getelementptr i16* %A, i32 4
+  %tmp = getelementptr i16, i16* %A, i32 4
   store i16* %tmp, i16** %ptr
   ret { <8 x i16>, <8 x i16>, <8 x i16>, <8 x i16> } %ld4
 }
@@ -3426,7 +3426,7 @@ define { <8 x i16>, <8 x i16>, <8 x i16>
 ;CHECK-LABEL: test_v8i16_post_reg_ld4lane:
 ;CHECK: ld4.h { v0, v1, v2, v3 }[0], [x0], x{{[0-9]+}}
   %ld4 = call { <8 x i16>, <8 x i16>, <8 x i16>, <8 x i16> } @llvm.aarch64.neon.ld4lane.v8i16.p0i16(<8 x i16> %B, <8 x i16> %C, <8 x i16> %D, <8 x i16> %E, i64 0, i16* %A)
-  %tmp = getelementptr i16* %A, i64 %inc
+  %tmp = getelementptr i16, i16* %A, i64 %inc
   store i16* %tmp, i16** %ptr
   ret { <8 x i16>, <8 x i16>, <8 x i16>, <8 x i16> } %ld4
 }
@@ -3438,7 +3438,7 @@ define { <4 x i16>, <4 x i16>, <4 x i16>
 ;CHECK-LABEL: test_v4i16_post_imm_ld4lane:
 ;CHECK: ld4.h { v0, v1, v2, v3 }[0], [x0], #8
   %ld4 = call { <4 x i16>, <4 x i16>, <4 x i16>, <4 x i16> } @llvm.aarch64.neon.ld4lane.v4i16.p0i16(<4 x i16> %B, <4 x i16> %C, <4 x i16> %D, <4 x i16> %E, i64 0, i16* %A)
-  %tmp = getelementptr i16* %A, i32 4
+  %tmp = getelementptr i16, i16* %A, i32 4
   store i16* %tmp, i16** %ptr
   ret { <4 x i16>, <4 x i16>, <4 x i16>, <4 x i16> } %ld4
 }
@@ -3447,7 +3447,7 @@ define { <4 x i16>, <4 x i16>, <4 x i16>
 ;CHECK-LABEL: test_v4i16_post_reg_ld4lane:
 ;CHECK: ld4.h { v0, v1, v2, v3 }[0], [x0], x{{[0-9]+}}
   %ld4 = call { <4 x i16>, <4 x i16>, <4 x i16>, <4 x i16> } @llvm.aarch64.neon.ld4lane.v4i16.p0i16(<4 x i16> %B, <4 x i16> %C, <4 x i16> %D, <4 x i16> %E, i64 0, i16* %A)
-  %tmp = getelementptr i16* %A, i64 %inc
+  %tmp = getelementptr i16, i16* %A, i64 %inc
   store i16* %tmp, i16** %ptr
   ret { <4 x i16>, <4 x i16>, <4 x i16>, <4 x i16> } %ld4
 }
@@ -3459,7 +3459,7 @@ define { <4 x i32>, <4 x i32>, <4 x i32>
 ;CHECK-LABEL: test_v4i32_post_imm_ld4lane:
 ;CHECK: ld4.s { v0, v1, v2, v3 }[0], [x0], #16
   %ld4 = call { <4 x i32>, <4 x i32>, <4 x i32>, <4 x i32> } @llvm.aarch64.neon.ld4lane.v4i32.p0i32(<4 x i32> %B, <4 x i32> %C, <4 x i32> %D, <4 x i32> %E, i64 0, i32* %A)
-  %tmp = getelementptr i32* %A, i32 4
+  %tmp = getelementptr i32, i32* %A, i32 4
   store i32* %tmp, i32** %ptr
   ret { <4 x i32>, <4 x i32>, <4 x i32>, <4 x i32> } %ld4
 }
@@ -3468,7 +3468,7 @@ define { <4 x i32>, <4 x i32>, <4 x i32>
 ;CHECK-LABEL: test_v4i32_post_reg_ld4lane:
 ;CHECK: ld4.s { v0, v1, v2, v3 }[0], [x0], x{{[0-9]+}}
   %ld4 = call { <4 x i32>, <4 x i32>, <4 x i32>, <4 x i32> } @llvm.aarch64.neon.ld4lane.v4i32.p0i32(<4 x i32> %B, <4 x i32> %C, <4 x i32> %D, <4 x i32> %E, i64 0, i32* %A)
-  %tmp = getelementptr i32* %A, i64 %inc
+  %tmp = getelementptr i32, i32* %A, i64 %inc
   store i32* %tmp, i32** %ptr
   ret { <4 x i32>, <4 x i32>, <4 x i32>, <4 x i32> } %ld4
 }
@@ -3480,7 +3480,7 @@ define { <2 x i32>, <2 x i32>, <2 x i32>
 ;CHECK-LABEL: test_v2i32_post_imm_ld4lane:
 ;CHECK: ld4.s { v0, v1, v2, v3 }[0], [x0], #16
   %ld4 = call { <2 x i32>, <2 x i32>, <2 x i32>, <2 x i32> } @llvm.aarch64.neon.ld4lane.v2i32.p0i32(<2 x i32> %B, <2 x i32> %C, <2 x i32> %D, <2 x i32> %E, i64 0, i32* %A)
-  %tmp = getelementptr i32* %A, i32 4
+  %tmp = getelementptr i32, i32* %A, i32 4
   store i32* %tmp, i32** %ptr
   ret { <2 x i32>, <2 x i32>, <2 x i32>, <2 x i32> } %ld4
 }
@@ -3489,7 +3489,7 @@ define { <2 x i32>, <2 x i32>, <2 x i32>
 ;CHECK-LABEL: test_v2i32_post_reg_ld4lane:
 ;CHECK: ld4.s { v0, v1, v2, v3 }[0], [x0], x{{[0-9]+}}
   %ld4 = call { <2 x i32>, <2 x i32>, <2 x i32>, <2 x i32> } @llvm.aarch64.neon.ld4lane.v2i32.p0i32(<2 x i32> %B, <2 x i32> %C, <2 x i32> %D, <2 x i32> %E, i64 0, i32* %A)
-  %tmp = getelementptr i32* %A, i64 %inc
+  %tmp = getelementptr i32, i32* %A, i64 %inc
   store i32* %tmp, i32** %ptr
   ret { <2 x i32>, <2 x i32>, <2 x i32>, <2 x i32> } %ld4
 }
@@ -3501,7 +3501,7 @@ define { <2 x i64>, <2 x i64>, <2 x i64>
 ;CHECK-LABEL: test_v2i64_post_imm_ld4lane:
 ;CHECK: ld4.d { v0, v1, v2, v3 }[0], [x0], #32
   %ld4 = call { <2 x i64>, <2 x i64>, <2 x i64>, <2 x i64> } @llvm.aarch64.neon.ld4lane.v2i64.p0i64(<2 x i64> %B, <2 x i64> %C, <2 x i64> %D, <2 x i64> %E, i64 0, i64* %A)
-  %tmp = getelementptr i64* %A, i32 4
+  %tmp = getelementptr i64, i64* %A, i32 4
   store i64* %tmp, i64** %ptr
   ret { <2 x i64>, <2 x i64>, <2 x i64>, <2 x i64> } %ld4
 }
@@ -3510,7 +3510,7 @@ define { <2 x i64>, <2 x i64>, <2 x i64>
 ;CHECK-LABEL: test_v2i64_post_reg_ld4lane:
 ;CHECK: ld4.d { v0, v1, v2, v3 }[0], [x0], x{{[0-9]+}}
   %ld4 = call { <2 x i64>, <2 x i64>, <2 x i64>, <2 x i64> } @llvm.aarch64.neon.ld4lane.v2i64.p0i64(<2 x i64> %B, <2 x i64> %C, <2 x i64> %D, <2 x i64> %E, i64 0, i64* %A)
-  %tmp = getelementptr i64* %A, i64 %inc
+  %tmp = getelementptr i64, i64* %A, i64 %inc
   store i64* %tmp, i64** %ptr
   ret { <2 x i64>, <2 x i64>, <2 x i64>, <2 x i64> } %ld4
 }
@@ -3522,7 +3522,7 @@ define { <1 x i64>, <1 x i64>, <1 x i64>
 ;CHECK-LABEL: test_v1i64_post_imm_ld4lane:
 ;CHECK: ld4.d { v0, v1, v2, v3 }[0], [x0], #32
   %ld4 = call { <1 x i64>, <1 x i64>, <1 x i64>, <1 x i64> } @llvm.aarch64.neon.ld4lane.v1i64.p0i64(<1 x i64> %B, <1 x i64> %C, <1 x i64> %D, <1 x i64> %E, i64 0, i64* %A)
-  %tmp = getelementptr i64* %A, i32 4
+  %tmp = getelementptr i64, i64* %A, i32 4
   store i64* %tmp, i64** %ptr
   ret { <1 x i64>, <1 x i64>, <1 x i64>, <1 x i64> } %ld4
 }
@@ -3531,7 +3531,7 @@ define { <1 x i64>, <1 x i64>, <1 x i64>
 ;CHECK-LABEL: test_v1i64_post_reg_ld4lane:
 ;CHECK: ld4.d { v0, v1, v2, v3 }[0], [x0], x{{[0-9]+}}
   %ld4 = call { <1 x i64>, <1 x i64>, <1 x i64>, <1 x i64> } @llvm.aarch64.neon.ld4lane.v1i64.p0i64(<1 x i64> %B, <1 x i64> %C, <1 x i64> %D, <1 x i64> %E, i64 0, i64* %A)
-  %tmp = getelementptr i64* %A, i64 %inc
+  %tmp = getelementptr i64, i64* %A, i64 %inc
   store i64* %tmp, i64** %ptr
   ret { <1 x i64>, <1 x i64>, <1 x i64>, <1 x i64> } %ld4
 }
@@ -3543,7 +3543,7 @@ define { <4 x float>, <4 x float>, <4 x
 ;CHECK-LABEL: test_v4f32_post_imm_ld4lane:
 ;CHECK: ld4.s { v0, v1, v2, v3 }[0], [x0], #16
   %ld4 = call { <4 x float>, <4 x float>, <4 x float>, <4 x float> } @llvm.aarch64.neon.ld4lane.v4f32.p0f32(<4 x float> %B, <4 x float> %C, <4 x float> %D, <4 x float> %E, i64 0, float* %A)
-  %tmp = getelementptr float* %A, i32 4
+  %tmp = getelementptr float, float* %A, i32 4
   store float* %tmp, float** %ptr
   ret { <4 x float>, <4 x float>, <4 x float>, <4 x float> } %ld4
 }
@@ -3552,7 +3552,7 @@ define { <4 x float>, <4 x float>, <4 x
 ;CHECK-LABEL: test_v4f32_post_reg_ld4lane:
 ;CHECK: ld4.s { v0, v1, v2, v3 }[0], [x0], x{{[0-9]+}}
   %ld4 = call { <4 x float>, <4 x float>, <4 x float>, <4 x float> } @llvm.aarch64.neon.ld4lane.v4f32.p0f32(<4 x float> %B, <4 x float> %C, <4 x float> %D, <4 x float> %E, i64 0, float* %A)
-  %tmp = getelementptr float* %A, i64 %inc
+  %tmp = getelementptr float, float* %A, i64 %inc
   store float* %tmp, float** %ptr
   ret { <4 x float>, <4 x float>, <4 x float>, <4 x float> } %ld4
 }
@@ -3564,7 +3564,7 @@ define { <2 x float>, <2 x float>, <2 x
 ;CHECK-LABEL: test_v2f32_post_imm_ld4lane:
 ;CHECK: ld4.s { v0, v1, v2, v3 }[0], [x0], #16
   %ld4 = call { <2 x float>, <2 x float>, <2 x float>, <2 x float> } @llvm.aarch64.neon.ld4lane.v2f32.p0f32(<2 x float> %B, <2 x float> %C, <2 x float> %D, <2 x float> %E, i64 0, float* %A)
-  %tmp = getelementptr float* %A, i32 4
+  %tmp = getelementptr float, float* %A, i32 4
   store float* %tmp, float** %ptr
   ret { <2 x float>, <2 x float>, <2 x float>, <2 x float> } %ld4
 }
@@ -3573,7 +3573,7 @@ define { <2 x float>, <2 x float>, <2 x
 ;CHECK-LABEL: test_v2f32_post_reg_ld4lane:
 ;CHECK: ld4.s { v0, v1, v2, v3 }[0], [x0], x{{[0-9]+}}
   %ld4 = call { <2 x float>, <2 x float>, <2 x float>, <2 x float> } @llvm.aarch64.neon.ld4lane.v2f32.p0f32(<2 x float> %B, <2 x float> %C, <2 x float> %D, <2 x float> %E, i64 0, float* %A)
-  %tmp = getelementptr float* %A, i64 %inc
+  %tmp = getelementptr float, float* %A, i64 %inc
   store float* %tmp, float** %ptr
   ret { <2 x float>, <2 x float>, <2 x float>, <2 x float> } %ld4
 }
@@ -3585,7 +3585,7 @@ define { <2 x double>, <2 x double>, <2
 ;CHECK-LABEL: test_v2f64_post_imm_ld4lane:
 ;CHECK: ld4.d { v0, v1, v2, v3 }[0], [x0], #32
   %ld4 = call { <2 x double>, <2 x double>, <2 x double>, <2 x double> } @llvm.aarch64.neon.ld4lane.v2f64.p0f64(<2 x double> %B, <2 x double> %C, <2 x double> %D, <2 x double> %E, i64 0, double* %A)
-  %tmp = getelementptr double* %A, i32 4
+  %tmp = getelementptr double, double* %A, i32 4
   store double* %tmp, double** %ptr
   ret { <2 x double>, <2 x double>, <2 x double>, <2 x double> } %ld4
 }
@@ -3594,7 +3594,7 @@ define { <2 x double>, <2 x double>, <2
 ;CHECK-LABEL: test_v2f64_post_reg_ld4lane:
 ;CHECK: ld4.d { v0, v1, v2, v3 }[0], [x0], x{{[0-9]+}}
   %ld4 = call { <2 x double>, <2 x double>, <2 x double>, <2 x double> } @llvm.aarch64.neon.ld4lane.v2f64.p0f64(<2 x double> %B, <2 x double> %C, <2 x double> %D, <2 x double> %E, i64 0, double* %A)
-  %tmp = getelementptr double* %A, i64 %inc
+  %tmp = getelementptr double, double* %A, i64 %inc
   store double* %tmp, double** %ptr
   ret { <2 x double>, <2 x double>, <2 x double>, <2 x double> } %ld4
 }
@@ -3606,7 +3606,7 @@ define { <1 x double>, <1 x double>, <1
 ;CHECK-LABEL: test_v1f64_post_imm_ld4lane:
 ;CHECK: ld4.d { v0, v1, v2, v3 }[0], [x0], #32
   %ld4 = call { <1 x double>, <1 x double>, <1 x double>, <1 x double> } @llvm.aarch64.neon.ld4lane.v1f64.p0f64(<1 x double> %B, <1 x double> %C, <1 x double> %D, <1 x double> %E, i64 0, double* %A)
-  %tmp = getelementptr double* %A, i32 4
+  %tmp = getelementptr double, double* %A, i32 4
   store double* %tmp, double** %ptr
   ret { <1 x double>, <1 x double>, <1 x double>, <1 x double> } %ld4
 }
@@ -3615,7 +3615,7 @@ define { <1 x double>, <1 x double>, <1
 ;CHECK-LABEL: test_v1f64_post_reg_ld4lane:
 ;CHECK: ld4.d { v0, v1, v2, v3 }[0], [x0], x{{[0-9]+}}
   %ld4 = call { <1 x double>, <1 x double>, <1 x double>, <1 x double> } @llvm.aarch64.neon.ld4lane.v1f64.p0f64(<1 x double> %B, <1 x double> %C, <1 x double> %D, <1 x double> %E, i64 0, double* %A)
-  %tmp = getelementptr double* %A, i64 %inc
+  %tmp = getelementptr double, double* %A, i64 %inc
   store double* %tmp, double** %ptr
   ret { <1 x double>, <1 x double>, <1 x double>, <1 x double> } %ld4
 }
@@ -3627,7 +3627,7 @@ define i8* @test_v16i8_post_imm_st2(i8*
 ;CHECK-LABEL: test_v16i8_post_imm_st2:
 ;CHECK: st2.16b { v0, v1 }, [x0], #32
   call void @llvm.aarch64.neon.st2.v16i8.p0i8(<16 x i8> %B, <16 x i8> %C, i8* %A)
-  %tmp = getelementptr i8* %A, i32 32
+  %tmp = getelementptr i8, i8* %A, i32 32
   ret i8* %tmp
 }
 
@@ -3635,7 +3635,7 @@ define i8* @test_v16i8_post_reg_st2(i8*
 ;CHECK-LABEL: test_v16i8_post_reg_st2:
 ;CHECK: st2.16b { v0, v1 }, [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st2.v16i8.p0i8(<16 x i8> %B, <16 x i8> %C, i8* %A)
-  %tmp = getelementptr i8* %A, i64 %inc
+  %tmp = getelementptr i8, i8* %A, i64 %inc
   ret i8* %tmp
 }
 
@@ -3646,7 +3646,7 @@ define i8* @test_v8i8_post_imm_st2(i8* %
 ;CHECK-LABEL: test_v8i8_post_imm_st2:
 ;CHECK: st2.8b { v0, v1 }, [x0], #16
   call void @llvm.aarch64.neon.st2.v8i8.p0i8(<8 x i8> %B, <8 x i8> %C, i8* %A)
-  %tmp = getelementptr i8* %A, i32 16
+  %tmp = getelementptr i8, i8* %A, i32 16
   ret i8* %tmp
 }
 
@@ -3654,7 +3654,7 @@ define i8* @test_v8i8_post_reg_st2(i8* %
 ;CHECK-LABEL: test_v8i8_post_reg_st2:
 ;CHECK: st2.8b { v0, v1 }, [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st2.v8i8.p0i8(<8 x i8> %B, <8 x i8> %C, i8* %A)
-  %tmp = getelementptr i8* %A, i64 %inc
+  %tmp = getelementptr i8, i8* %A, i64 %inc
   ret i8* %tmp
 }
 
@@ -3665,7 +3665,7 @@ define i16* @test_v8i16_post_imm_st2(i16
 ;CHECK-LABEL: test_v8i16_post_imm_st2:
 ;CHECK: st2.8h { v0, v1 }, [x0], #32
   call void @llvm.aarch64.neon.st2.v8i16.p0i16(<8 x i16> %B, <8 x i16> %C, i16* %A)
-  %tmp = getelementptr i16* %A, i32 16
+  %tmp = getelementptr i16, i16* %A, i32 16
   ret i16* %tmp
 }
 
@@ -3673,7 +3673,7 @@ define i16* @test_v8i16_post_reg_st2(i16
 ;CHECK-LABEL: test_v8i16_post_reg_st2:
 ;CHECK: st2.8h { v0, v1 }, [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st2.v8i16.p0i16(<8 x i16> %B, <8 x i16> %C, i16* %A)
-  %tmp = getelementptr i16* %A, i64 %inc
+  %tmp = getelementptr i16, i16* %A, i64 %inc
   ret i16* %tmp
 }
 
@@ -3684,7 +3684,7 @@ define i16* @test_v4i16_post_imm_st2(i16
 ;CHECK-LABEL: test_v4i16_post_imm_st2:
 ;CHECK: st2.4h { v0, v1 }, [x0], #16
   call void @llvm.aarch64.neon.st2.v4i16.p0i16(<4 x i16> %B, <4 x i16> %C, i16* %A)
-  %tmp = getelementptr i16* %A, i32 8
+  %tmp = getelementptr i16, i16* %A, i32 8
   ret i16* %tmp
 }
 
@@ -3692,7 +3692,7 @@ define i16* @test_v4i16_post_reg_st2(i16
 ;CHECK-LABEL: test_v4i16_post_reg_st2:
 ;CHECK: st2.4h { v0, v1 }, [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st2.v4i16.p0i16(<4 x i16> %B, <4 x i16> %C, i16* %A)
-  %tmp = getelementptr i16* %A, i64 %inc
+  %tmp = getelementptr i16, i16* %A, i64 %inc
   ret i16* %tmp
 }
 
@@ -3703,7 +3703,7 @@ define i32* @test_v4i32_post_imm_st2(i32
 ;CHECK-LABEL: test_v4i32_post_imm_st2:
 ;CHECK: st2.4s { v0, v1 }, [x0], #32
   call void @llvm.aarch64.neon.st2.v4i32.p0i32(<4 x i32> %B, <4 x i32> %C, i32* %A)
-  %tmp = getelementptr i32* %A, i32 8
+  %tmp = getelementptr i32, i32* %A, i32 8
   ret i32* %tmp
 }
 
@@ -3711,7 +3711,7 @@ define i32* @test_v4i32_post_reg_st2(i32
 ;CHECK-LABEL: test_v4i32_post_reg_st2:
 ;CHECK: st2.4s { v0, v1 }, [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st2.v4i32.p0i32(<4 x i32> %B, <4 x i32> %C, i32* %A)
-  %tmp = getelementptr i32* %A, i64 %inc
+  %tmp = getelementptr i32, i32* %A, i64 %inc
   ret i32* %tmp
 }
 
@@ -3722,7 +3722,7 @@ define i32* @test_v2i32_post_imm_st2(i32
 ;CHECK-LABEL: test_v2i32_post_imm_st2:
 ;CHECK: st2.2s { v0, v1 }, [x0], #16
   call void @llvm.aarch64.neon.st2.v2i32.p0i32(<2 x i32> %B, <2 x i32> %C, i32* %A)
-  %tmp = getelementptr i32* %A, i32 4
+  %tmp = getelementptr i32, i32* %A, i32 4
   ret i32* %tmp
 }
 
@@ -3730,7 +3730,7 @@ define i32* @test_v2i32_post_reg_st2(i32
 ;CHECK-LABEL: test_v2i32_post_reg_st2:
 ;CHECK: st2.2s { v0, v1 }, [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st2.v2i32.p0i32(<2 x i32> %B, <2 x i32> %C, i32* %A)
-  %tmp = getelementptr i32* %A, i64 %inc
+  %tmp = getelementptr i32, i32* %A, i64 %inc
   ret i32* %tmp
 }
 
@@ -3741,7 +3741,7 @@ define i64* @test_v2i64_post_imm_st2(i64
 ;CHECK-LABEL: test_v2i64_post_imm_st2:
 ;CHECK: st2.2d { v0, v1 }, [x0], #32
   call void @llvm.aarch64.neon.st2.v2i64.p0i64(<2 x i64> %B, <2 x i64> %C, i64* %A)
-  %tmp = getelementptr i64* %A, i64 4
+  %tmp = getelementptr i64, i64* %A, i64 4
   ret i64* %tmp
 }
 
@@ -3749,7 +3749,7 @@ define i64* @test_v2i64_post_reg_st2(i64
 ;CHECK-LABEL: test_v2i64_post_reg_st2:
 ;CHECK: st2.2d { v0, v1 }, [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st2.v2i64.p0i64(<2 x i64> %B, <2 x i64> %C, i64* %A)
-  %tmp = getelementptr i64* %A, i64 %inc
+  %tmp = getelementptr i64, i64* %A, i64 %inc
   ret i64* %tmp
 }
 
@@ -3760,7 +3760,7 @@ define i64* @test_v1i64_post_imm_st2(i64
 ;CHECK-LABEL: test_v1i64_post_imm_st2:
 ;CHECK: st1.1d { v0, v1 }, [x0], #16
   call void @llvm.aarch64.neon.st2.v1i64.p0i64(<1 x i64> %B, <1 x i64> %C, i64* %A)
-  %tmp = getelementptr i64* %A, i64 2
+  %tmp = getelementptr i64, i64* %A, i64 2
   ret i64* %tmp
 }
 
@@ -3768,7 +3768,7 @@ define i64* @test_v1i64_post_reg_st2(i64
 ;CHECK-LABEL: test_v1i64_post_reg_st2:
 ;CHECK: st1.1d { v0, v1 }, [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st2.v1i64.p0i64(<1 x i64> %B, <1 x i64> %C, i64* %A)
-  %tmp = getelementptr i64* %A, i64 %inc
+  %tmp = getelementptr i64, i64* %A, i64 %inc
   ret i64* %tmp
 }
 
@@ -3779,7 +3779,7 @@ define float* @test_v4f32_post_imm_st2(f
 ;CHECK-LABEL: test_v4f32_post_imm_st2:
 ;CHECK: st2.4s { v0, v1 }, [x0], #32
   call void @llvm.aarch64.neon.st2.v4f32.p0f32(<4 x float> %B, <4 x float> %C, float* %A)
-  %tmp = getelementptr float* %A, i32 8
+  %tmp = getelementptr float, float* %A, i32 8
   ret float* %tmp
 }
 
@@ -3787,7 +3787,7 @@ define float* @test_v4f32_post_reg_st2(f
 ;CHECK-LABEL: test_v4f32_post_reg_st2:
 ;CHECK: st2.4s { v0, v1 }, [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st2.v4f32.p0f32(<4 x float> %B, <4 x float> %C, float* %A)
-  %tmp = getelementptr float* %A, i64 %inc
+  %tmp = getelementptr float, float* %A, i64 %inc
   ret float* %tmp
 }
 
@@ -3798,7 +3798,7 @@ define float* @test_v2f32_post_imm_st2(f
 ;CHECK-LABEL: test_v2f32_post_imm_st2:
 ;CHECK: st2.2s { v0, v1 }, [x0], #16
   call void @llvm.aarch64.neon.st2.v2f32.p0f32(<2 x float> %B, <2 x float> %C, float* %A)
-  %tmp = getelementptr float* %A, i32 4
+  %tmp = getelementptr float, float* %A, i32 4
   ret float* %tmp
 }
 
@@ -3806,7 +3806,7 @@ define float* @test_v2f32_post_reg_st2(f
 ;CHECK-LABEL: test_v2f32_post_reg_st2:
 ;CHECK: st2.2s { v0, v1 }, [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st2.v2f32.p0f32(<2 x float> %B, <2 x float> %C, float* %A)
-  %tmp = getelementptr float* %A, i64 %inc
+  %tmp = getelementptr float, float* %A, i64 %inc
   ret float* %tmp
 }
 
@@ -3817,7 +3817,7 @@ define double* @test_v2f64_post_imm_st2(
 ;CHECK-LABEL: test_v2f64_post_imm_st2:
 ;CHECK: st2.2d { v0, v1 }, [x0], #32
   call void @llvm.aarch64.neon.st2.v2f64.p0f64(<2 x double> %B, <2 x double> %C, double* %A)
-  %tmp = getelementptr double* %A, i64 4
+  %tmp = getelementptr double, double* %A, i64 4
   ret double* %tmp
 }
 
@@ -3825,7 +3825,7 @@ define double* @test_v2f64_post_reg_st2(
 ;CHECK-LABEL: test_v2f64_post_reg_st2:
 ;CHECK: st2.2d { v0, v1 }, [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st2.v2f64.p0f64(<2 x double> %B, <2 x double> %C, double* %A)
-  %tmp = getelementptr double* %A, i64 %inc
+  %tmp = getelementptr double, double* %A, i64 %inc
   ret double* %tmp
 }
 
@@ -3836,7 +3836,7 @@ define double* @test_v1f64_post_imm_st2(
 ;CHECK-LABEL: test_v1f64_post_imm_st2:
 ;CHECK: st1.1d { v0, v1 }, [x0], #16
   call void @llvm.aarch64.neon.st2.v1f64.p0f64(<1 x double> %B, <1 x double> %C, double* %A)
-  %tmp = getelementptr double* %A, i64 2
+  %tmp = getelementptr double, double* %A, i64 2
   ret double* %tmp
 }
 
@@ -3844,7 +3844,7 @@ define double* @test_v1f64_post_reg_st2(
 ;CHECK-LABEL: test_v1f64_post_reg_st2:
 ;CHECK: st1.1d { v0, v1 }, [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st2.v1f64.p0f64(<1 x double> %B, <1 x double> %C, double* %A)
-  %tmp = getelementptr double* %A, i64 %inc
+  %tmp = getelementptr double, double* %A, i64 %inc
   ret double* %tmp
 }
 
@@ -3855,7 +3855,7 @@ define i8* @test_v16i8_post_imm_st3(i8*
 ;CHECK-LABEL: test_v16i8_post_imm_st3:
 ;CHECK: st3.16b { v0, v1, v2 }, [x0], #48
   call void @llvm.aarch64.neon.st3.v16i8.p0i8(<16 x i8> %B, <16 x i8> %C, <16 x i8> %D, i8* %A)
-  %tmp = getelementptr i8* %A, i32 48
+  %tmp = getelementptr i8, i8* %A, i32 48
   ret i8* %tmp
 }
 
@@ -3863,7 +3863,7 @@ define i8* @test_v16i8_post_reg_st3(i8*
 ;CHECK-LABEL: test_v16i8_post_reg_st3:
 ;CHECK: st3.16b { v0, v1, v2 }, [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st3.v16i8.p0i8(<16 x i8> %B, <16 x i8> %C, <16 x i8> %D, i8* %A)
-  %tmp = getelementptr i8* %A, i64 %inc
+  %tmp = getelementptr i8, i8* %A, i64 %inc
   ret i8* %tmp
 }
 
@@ -3874,7 +3874,7 @@ define i8* @test_v8i8_post_imm_st3(i8* %
 ;CHECK-LABEL: test_v8i8_post_imm_st3:
 ;CHECK: st3.8b { v0, v1, v2 }, [x0], #24
   call void @llvm.aarch64.neon.st3.v8i8.p0i8(<8 x i8> %B, <8 x i8> %C, <8 x i8> %D, i8* %A)
-  %tmp = getelementptr i8* %A, i32 24
+  %tmp = getelementptr i8, i8* %A, i32 24
   ret i8* %tmp
 }
 
@@ -3882,7 +3882,7 @@ define i8* @test_v8i8_post_reg_st3(i8* %
 ;CHECK-LABEL: test_v8i8_post_reg_st3:
 ;CHECK: st3.8b { v0, v1, v2 }, [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st3.v8i8.p0i8(<8 x i8> %B, <8 x i8> %C, <8 x i8> %D, i8* %A)
-  %tmp = getelementptr i8* %A, i64 %inc
+  %tmp = getelementptr i8, i8* %A, i64 %inc
   ret i8* %tmp
 }
 
@@ -3893,7 +3893,7 @@ define i16* @test_v8i16_post_imm_st3(i16
 ;CHECK-LABEL: test_v8i16_post_imm_st3:
 ;CHECK: st3.8h { v0, v1, v2 }, [x0], #48
   call void @llvm.aarch64.neon.st3.v8i16.p0i16(<8 x i16> %B, <8 x i16> %C, <8 x i16> %D, i16* %A)
-  %tmp = getelementptr i16* %A, i32 24
+  %tmp = getelementptr i16, i16* %A, i32 24
   ret i16* %tmp
 }
 
@@ -3901,7 +3901,7 @@ define i16* @test_v8i16_post_reg_st3(i16
 ;CHECK-LABEL: test_v8i16_post_reg_st3:
 ;CHECK: st3.8h { v0, v1, v2 }, [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st3.v8i16.p0i16(<8 x i16> %B, <8 x i16> %C, <8 x i16> %D, i16* %A)
-  %tmp = getelementptr i16* %A, i64 %inc
+  %tmp = getelementptr i16, i16* %A, i64 %inc
   ret i16* %tmp
 }
 
@@ -3912,7 +3912,7 @@ define i16* @test_v4i16_post_imm_st3(i16
 ;CHECK-LABEL: test_v4i16_post_imm_st3:
 ;CHECK: st3.4h { v0, v1, v2 }, [x0], #24
   call void @llvm.aarch64.neon.st3.v4i16.p0i16(<4 x i16> %B, <4 x i16> %C, <4 x i16> %D, i16* %A)
-  %tmp = getelementptr i16* %A, i32 12
+  %tmp = getelementptr i16, i16* %A, i32 12
   ret i16* %tmp
 }
 
@@ -3920,7 +3920,7 @@ define i16* @test_v4i16_post_reg_st3(i16
 ;CHECK-LABEL: test_v4i16_post_reg_st3:
 ;CHECK: st3.4h { v0, v1, v2 }, [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st3.v4i16.p0i16(<4 x i16> %B, <4 x i16> %C, <4 x i16> %D, i16* %A)
-  %tmp = getelementptr i16* %A, i64 %inc
+  %tmp = getelementptr i16, i16* %A, i64 %inc
   ret i16* %tmp
 }
 
@@ -3931,7 +3931,7 @@ define i32* @test_v4i32_post_imm_st3(i32
 ;CHECK-LABEL: test_v4i32_post_imm_st3:
 ;CHECK: st3.4s { v0, v1, v2 }, [x0], #48
   call void @llvm.aarch64.neon.st3.v4i32.p0i32(<4 x i32> %B, <4 x i32> %C, <4 x i32> %D, i32* %A)
-  %tmp = getelementptr i32* %A, i32 12
+  %tmp = getelementptr i32, i32* %A, i32 12
   ret i32* %tmp
 }
 
@@ -3939,7 +3939,7 @@ define i32* @test_v4i32_post_reg_st3(i32
 ;CHECK-LABEL: test_v4i32_post_reg_st3:
 ;CHECK: st3.4s { v0, v1, v2 }, [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st3.v4i32.p0i32(<4 x i32> %B, <4 x i32> %C, <4 x i32> %D, i32* %A)
-  %tmp = getelementptr i32* %A, i64 %inc
+  %tmp = getelementptr i32, i32* %A, i64 %inc
   ret i32* %tmp
 }
 
@@ -3950,7 +3950,7 @@ define i32* @test_v2i32_post_imm_st3(i32
 ;CHECK-LABEL: test_v2i32_post_imm_st3:
 ;CHECK: st3.2s { v0, v1, v2 }, [x0], #24
   call void @llvm.aarch64.neon.st3.v2i32.p0i32(<2 x i32> %B, <2 x i32> %C, <2 x i32> %D, i32* %A)
-  %tmp = getelementptr i32* %A, i32 6
+  %tmp = getelementptr i32, i32* %A, i32 6
   ret i32* %tmp
 }
 
@@ -3958,7 +3958,7 @@ define i32* @test_v2i32_post_reg_st3(i32
 ;CHECK-LABEL: test_v2i32_post_reg_st3:
 ;CHECK: st3.2s { v0, v1, v2 }, [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st3.v2i32.p0i32(<2 x i32> %B, <2 x i32> %C, <2 x i32> %D, i32* %A)
-  %tmp = getelementptr i32* %A, i64 %inc
+  %tmp = getelementptr i32, i32* %A, i64 %inc
   ret i32* %tmp
 }
 
@@ -3969,7 +3969,7 @@ define i64* @test_v2i64_post_imm_st3(i64
 ;CHECK-LABEL: test_v2i64_post_imm_st3:
 ;CHECK: st3.2d { v0, v1, v2 }, [x0], #48
   call void @llvm.aarch64.neon.st3.v2i64.p0i64(<2 x i64> %B, <2 x i64> %C, <2 x i64> %D, i64* %A)
-  %tmp = getelementptr i64* %A, i64 6
+  %tmp = getelementptr i64, i64* %A, i64 6
   ret i64* %tmp
 }
 
@@ -3977,7 +3977,7 @@ define i64* @test_v2i64_post_reg_st3(i64
 ;CHECK-LABEL: test_v2i64_post_reg_st3:
 ;CHECK: st3.2d { v0, v1, v2 }, [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st3.v2i64.p0i64(<2 x i64> %B, <2 x i64> %C, <2 x i64> %D, i64* %A)
-  %tmp = getelementptr i64* %A, i64 %inc
+  %tmp = getelementptr i64, i64* %A, i64 %inc
   ret i64* %tmp
 }
 
@@ -3988,7 +3988,7 @@ define i64* @test_v1i64_post_imm_st3(i64
 ;CHECK-LABEL: test_v1i64_post_imm_st3:
 ;CHECK: st1.1d { v0, v1, v2 }, [x0], #24
   call void @llvm.aarch64.neon.st3.v1i64.p0i64(<1 x i64> %B, <1 x i64> %C, <1 x i64> %D, i64* %A)
-  %tmp = getelementptr i64* %A, i64 3
+  %tmp = getelementptr i64, i64* %A, i64 3
   ret i64* %tmp
 }
 
@@ -3996,7 +3996,7 @@ define i64* @test_v1i64_post_reg_st3(i64
 ;CHECK-LABEL: test_v1i64_post_reg_st3:
 ;CHECK: st1.1d { v0, v1, v2 }, [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st3.v1i64.p0i64(<1 x i64> %B, <1 x i64> %C, <1 x i64> %D, i64* %A)
-  %tmp = getelementptr i64* %A, i64 %inc
+  %tmp = getelementptr i64, i64* %A, i64 %inc
   ret i64* %tmp
 }
 
@@ -4007,7 +4007,7 @@ define float* @test_v4f32_post_imm_st3(f
 ;CHECK-LABEL: test_v4f32_post_imm_st3:
 ;CHECK: st3.4s { v0, v1, v2 }, [x0], #48
   call void @llvm.aarch64.neon.st3.v4f32.p0f32(<4 x float> %B, <4 x float> %C, <4 x float> %D, float* %A)
-  %tmp = getelementptr float* %A, i32 12
+  %tmp = getelementptr float, float* %A, i32 12
   ret float* %tmp
 }
 
@@ -4015,7 +4015,7 @@ define float* @test_v4f32_post_reg_st3(f
 ;CHECK-LABEL: test_v4f32_post_reg_st3:
 ;CHECK: st3.4s { v0, v1, v2 }, [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st3.v4f32.p0f32(<4 x float> %B, <4 x float> %C, <4 x float> %D, float* %A)
-  %tmp = getelementptr float* %A, i64 %inc
+  %tmp = getelementptr float, float* %A, i64 %inc
   ret float* %tmp
 }
 
@@ -4026,7 +4026,7 @@ define float* @test_v2f32_post_imm_st3(f
 ;CHECK-LABEL: test_v2f32_post_imm_st3:
 ;CHECK: st3.2s { v0, v1, v2 }, [x0], #24
   call void @llvm.aarch64.neon.st3.v2f32.p0f32(<2 x float> %B, <2 x float> %C, <2 x float> %D, float* %A)
-  %tmp = getelementptr float* %A, i32 6
+  %tmp = getelementptr float, float* %A, i32 6
   ret float* %tmp
 }
 
@@ -4034,7 +4034,7 @@ define float* @test_v2f32_post_reg_st3(f
 ;CHECK-LABEL: test_v2f32_post_reg_st3:
 ;CHECK: st3.2s { v0, v1, v2 }, [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st3.v2f32.p0f32(<2 x float> %B, <2 x float> %C, <2 x float> %D, float* %A)
-  %tmp = getelementptr float* %A, i64 %inc
+  %tmp = getelementptr float, float* %A, i64 %inc
   ret float* %tmp
 }
 
@@ -4045,7 +4045,7 @@ define double* @test_v2f64_post_imm_st3(
 ;CHECK-LABEL: test_v2f64_post_imm_st3:
 ;CHECK: st3.2d { v0, v1, v2 }, [x0], #48
   call void @llvm.aarch64.neon.st3.v2f64.p0f64(<2 x double> %B, <2 x double> %C, <2 x double> %D, double* %A)
-  %tmp = getelementptr double* %A, i64 6
+  %tmp = getelementptr double, double* %A, i64 6
   ret double* %tmp
 }
 
@@ -4053,7 +4053,7 @@ define double* @test_v2f64_post_reg_st3(
 ;CHECK-LABEL: test_v2f64_post_reg_st3:
 ;CHECK: st3.2d { v0, v1, v2 }, [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st3.v2f64.p0f64(<2 x double> %B, <2 x double> %C, <2 x double> %D, double* %A)
-  %tmp = getelementptr double* %A, i64 %inc
+  %tmp = getelementptr double, double* %A, i64 %inc
   ret double* %tmp
 }
 
@@ -4064,7 +4064,7 @@ define double* @test_v1f64_post_imm_st3(
 ;CHECK-LABEL: test_v1f64_post_imm_st3:
 ;CHECK: st1.1d { v0, v1, v2 }, [x0], #24
   call void @llvm.aarch64.neon.st3.v1f64.p0f64(<1 x double> %B, <1 x double> %C, <1 x double> %D, double* %A)
-  %tmp = getelementptr double* %A, i64 3
+  %tmp = getelementptr double, double* %A, i64 3
   ret double* %tmp
 }
 
@@ -4072,7 +4072,7 @@ define double* @test_v1f64_post_reg_st3(
 ;CHECK-LABEL: test_v1f64_post_reg_st3:
 ;CHECK: st1.1d { v0, v1, v2 }, [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st3.v1f64.p0f64(<1 x double> %B, <1 x double> %C, <1 x double> %D, double* %A)
-  %tmp = getelementptr double* %A, i64 %inc
+  %tmp = getelementptr double, double* %A, i64 %inc
   ret double* %tmp
 }
 
@@ -4083,7 +4083,7 @@ define i8* @test_v16i8_post_imm_st4(i8*
 ;CHECK-LABEL: test_v16i8_post_imm_st4:
 ;CHECK: st4.16b { v0, v1, v2, v3 }, [x0], #64
   call void @llvm.aarch64.neon.st4.v16i8.p0i8(<16 x i8> %B, <16 x i8> %C, <16 x i8> %D, <16 x i8> %E, i8* %A)
-  %tmp = getelementptr i8* %A, i32 64
+  %tmp = getelementptr i8, i8* %A, i32 64
   ret i8* %tmp
 }
 
@@ -4091,7 +4091,7 @@ define i8* @test_v16i8_post_reg_st4(i8*
 ;CHECK-LABEL: test_v16i8_post_reg_st4:
 ;CHECK: st4.16b { v0, v1, v2, v3 }, [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st4.v16i8.p0i8(<16 x i8> %B, <16 x i8> %C, <16 x i8> %D, <16 x i8> %E, i8* %A)
-  %tmp = getelementptr i8* %A, i64 %inc
+  %tmp = getelementptr i8, i8* %A, i64 %inc
   ret i8* %tmp
 }
 
@@ -4102,7 +4102,7 @@ define i8* @test_v8i8_post_imm_st4(i8* %
 ;CHECK-LABEL: test_v8i8_post_imm_st4:
 ;CHECK: st4.8b { v0, v1, v2, v3 }, [x0], #32
   call void @llvm.aarch64.neon.st4.v8i8.p0i8(<8 x i8> %B, <8 x i8> %C, <8 x i8> %D, <8 x i8> %E, i8* %A)
-  %tmp = getelementptr i8* %A, i32 32
+  %tmp = getelementptr i8, i8* %A, i32 32
   ret i8* %tmp
 }
 
@@ -4110,7 +4110,7 @@ define i8* @test_v8i8_post_reg_st4(i8* %
 ;CHECK-LABEL: test_v8i8_post_reg_st4:
 ;CHECK: st4.8b { v0, v1, v2, v3 }, [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st4.v8i8.p0i8(<8 x i8> %B, <8 x i8> %C, <8 x i8> %D, <8 x i8> %E, i8* %A)
-  %tmp = getelementptr i8* %A, i64 %inc
+  %tmp = getelementptr i8, i8* %A, i64 %inc
   ret i8* %tmp
 }
 
@@ -4121,7 +4121,7 @@ define i16* @test_v8i16_post_imm_st4(i16
 ;CHECK-LABEL: test_v8i16_post_imm_st4:
 ;CHECK: st4.8h { v0, v1, v2, v3 }, [x0], #64
   call void @llvm.aarch64.neon.st4.v8i16.p0i16(<8 x i16> %B, <8 x i16> %C, <8 x i16> %D, <8 x i16> %E, i16* %A)
-  %tmp = getelementptr i16* %A, i32 32
+  %tmp = getelementptr i16, i16* %A, i32 32
   ret i16* %tmp
 }
 
@@ -4129,7 +4129,7 @@ define i16* @test_v8i16_post_reg_st4(i16
 ;CHECK-LABEL: test_v8i16_post_reg_st4:
 ;CHECK: st4.8h { v0, v1, v2, v3 }, [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st4.v8i16.p0i16(<8 x i16> %B, <8 x i16> %C, <8 x i16> %D, <8 x i16> %E, i16* %A)
-  %tmp = getelementptr i16* %A, i64 %inc
+  %tmp = getelementptr i16, i16* %A, i64 %inc
   ret i16* %tmp
 }
 
@@ -4140,7 +4140,7 @@ define i16* @test_v4i16_post_imm_st4(i16
 ;CHECK-LABEL: test_v4i16_post_imm_st4:
 ;CHECK: st4.4h { v0, v1, v2, v3 }, [x0], #32
   call void @llvm.aarch64.neon.st4.v4i16.p0i16(<4 x i16> %B, <4 x i16> %C, <4 x i16> %D, <4 x i16> %E, i16* %A)
-  %tmp = getelementptr i16* %A, i32 16
+  %tmp = getelementptr i16, i16* %A, i32 16
   ret i16* %tmp
 }
 
@@ -4148,7 +4148,7 @@ define i16* @test_v4i16_post_reg_st4(i16
 ;CHECK-LABEL: test_v4i16_post_reg_st4:
 ;CHECK: st4.4h { v0, v1, v2, v3 }, [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st4.v4i16.p0i16(<4 x i16> %B, <4 x i16> %C, <4 x i16> %D, <4 x i16> %E, i16* %A)
-  %tmp = getelementptr i16* %A, i64 %inc
+  %tmp = getelementptr i16, i16* %A, i64 %inc
   ret i16* %tmp
 }
 
@@ -4159,7 +4159,7 @@ define i32* @test_v4i32_post_imm_st4(i32
 ;CHECK-LABEL: test_v4i32_post_imm_st4:
 ;CHECK: st4.4s { v0, v1, v2, v3 }, [x0], #64
   call void @llvm.aarch64.neon.st4.v4i32.p0i32(<4 x i32> %B, <4 x i32> %C, <4 x i32> %D, <4 x i32> %E, i32* %A)
-  %tmp = getelementptr i32* %A, i32 16
+  %tmp = getelementptr i32, i32* %A, i32 16
   ret i32* %tmp
 }
 
@@ -4167,7 +4167,7 @@ define i32* @test_v4i32_post_reg_st4(i32
 ;CHECK-LABEL: test_v4i32_post_reg_st4:
 ;CHECK: st4.4s { v0, v1, v2, v3 }, [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st4.v4i32.p0i32(<4 x i32> %B, <4 x i32> %C, <4 x i32> %D, <4 x i32> %E, i32* %A)
-  %tmp = getelementptr i32* %A, i64 %inc
+  %tmp = getelementptr i32, i32* %A, i64 %inc
   ret i32* %tmp
 }
 
@@ -4178,7 +4178,7 @@ define i32* @test_v2i32_post_imm_st4(i32
 ;CHECK-LABEL: test_v2i32_post_imm_st4:
 ;CHECK: st4.2s { v0, v1, v2, v3 }, [x0], #32
   call void @llvm.aarch64.neon.st4.v2i32.p0i32(<2 x i32> %B, <2 x i32> %C, <2 x i32> %D, <2 x i32> %E, i32* %A)
-  %tmp = getelementptr i32* %A, i32 8
+  %tmp = getelementptr i32, i32* %A, i32 8
   ret i32* %tmp
 }
 
@@ -4186,7 +4186,7 @@ define i32* @test_v2i32_post_reg_st4(i32
 ;CHECK-LABEL: test_v2i32_post_reg_st4:
 ;CHECK: st4.2s { v0, v1, v2, v3 }, [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st4.v2i32.p0i32(<2 x i32> %B, <2 x i32> %C, <2 x i32> %D, <2 x i32> %E, i32* %A)
-  %tmp = getelementptr i32* %A, i64 %inc
+  %tmp = getelementptr i32, i32* %A, i64 %inc
   ret i32* %tmp
 }
 
@@ -4197,7 +4197,7 @@ define i64* @test_v2i64_post_imm_st4(i64
 ;CHECK-LABEL: test_v2i64_post_imm_st4:
 ;CHECK: st4.2d { v0, v1, v2, v3 }, [x0], #64
   call void @llvm.aarch64.neon.st4.v2i64.p0i64(<2 x i64> %B, <2 x i64> %C, <2 x i64> %D, <2 x i64> %E, i64* %A)
-  %tmp = getelementptr i64* %A, i64 8
+  %tmp = getelementptr i64, i64* %A, i64 8
   ret i64* %tmp
 }
 
@@ -4205,7 +4205,7 @@ define i64* @test_v2i64_post_reg_st4(i64
 ;CHECK-LABEL: test_v2i64_post_reg_st4:
 ;CHECK: st4.2d { v0, v1, v2, v3 }, [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st4.v2i64.p0i64(<2 x i64> %B, <2 x i64> %C, <2 x i64> %D, <2 x i64> %E, i64* %A)
-  %tmp = getelementptr i64* %A, i64 %inc
+  %tmp = getelementptr i64, i64* %A, i64 %inc
   ret i64* %tmp
 }
 
@@ -4216,7 +4216,7 @@ define i64* @test_v1i64_post_imm_st4(i64
 ;CHECK-LABEL: test_v1i64_post_imm_st4:
 ;CHECK: st1.1d { v0, v1, v2, v3 }, [x0], #32
   call void @llvm.aarch64.neon.st4.v1i64.p0i64(<1 x i64> %B, <1 x i64> %C, <1 x i64> %D, <1 x i64> %E, i64* %A)
-  %tmp = getelementptr i64* %A, i64 4
+  %tmp = getelementptr i64, i64* %A, i64 4
   ret i64* %tmp
 }
 
@@ -4224,7 +4224,7 @@ define i64* @test_v1i64_post_reg_st4(i64
 ;CHECK-LABEL: test_v1i64_post_reg_st4:
 ;CHECK: st1.1d { v0, v1, v2, v3 }, [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st4.v1i64.p0i64(<1 x i64> %B, <1 x i64> %C, <1 x i64> %D, <1 x i64> %E, i64* %A)
-  %tmp = getelementptr i64* %A, i64 %inc
+  %tmp = getelementptr i64, i64* %A, i64 %inc
   ret i64* %tmp
 }
 
@@ -4235,7 +4235,7 @@ define float* @test_v4f32_post_imm_st4(f
 ;CHECK-LABEL: test_v4f32_post_imm_st4:
 ;CHECK: st4.4s { v0, v1, v2, v3 }, [x0], #64
   call void @llvm.aarch64.neon.st4.v4f32.p0f32(<4 x float> %B, <4 x float> %C, <4 x float> %D, <4 x float> %E, float* %A)
-  %tmp = getelementptr float* %A, i32 16
+  %tmp = getelementptr float, float* %A, i32 16
   ret float* %tmp
 }
 
@@ -4243,7 +4243,7 @@ define float* @test_v4f32_post_reg_st4(f
 ;CHECK-LABEL: test_v4f32_post_reg_st4:
 ;CHECK: st4.4s { v0, v1, v2, v3 }, [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st4.v4f32.p0f32(<4 x float> %B, <4 x float> %C, <4 x float> %D, <4 x float> %E, float* %A)
-  %tmp = getelementptr float* %A, i64 %inc
+  %tmp = getelementptr float, float* %A, i64 %inc
   ret float* %tmp
 }
 
@@ -4254,7 +4254,7 @@ define float* @test_v2f32_post_imm_st4(f
 ;CHECK-LABEL: test_v2f32_post_imm_st4:
 ;CHECK: st4.2s { v0, v1, v2, v3 }, [x0], #32
   call void @llvm.aarch64.neon.st4.v2f32.p0f32(<2 x float> %B, <2 x float> %C, <2 x float> %D, <2 x float> %E, float* %A)
-  %tmp = getelementptr float* %A, i32 8
+  %tmp = getelementptr float, float* %A, i32 8
   ret float* %tmp
 }
 
@@ -4262,7 +4262,7 @@ define float* @test_v2f32_post_reg_st4(f
 ;CHECK-LABEL: test_v2f32_post_reg_st4:
 ;CHECK: st4.2s { v0, v1, v2, v3 }, [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st4.v2f32.p0f32(<2 x float> %B, <2 x float> %C, <2 x float> %D, <2 x float> %E, float* %A)
-  %tmp = getelementptr float* %A, i64 %inc
+  %tmp = getelementptr float, float* %A, i64 %inc
   ret float* %tmp
 }
 
@@ -4273,7 +4273,7 @@ define double* @test_v2f64_post_imm_st4(
 ;CHECK-LABEL: test_v2f64_post_imm_st4:
 ;CHECK: st4.2d { v0, v1, v2, v3 }, [x0], #64
   call void @llvm.aarch64.neon.st4.v2f64.p0f64(<2 x double> %B, <2 x double> %C, <2 x double> %D, <2 x double> %E, double* %A)
-  %tmp = getelementptr double* %A, i64 8
+  %tmp = getelementptr double, double* %A, i64 8
   ret double* %tmp
 }
 
@@ -4281,7 +4281,7 @@ define double* @test_v2f64_post_reg_st4(
 ;CHECK-LABEL: test_v2f64_post_reg_st4:
 ;CHECK: st4.2d { v0, v1, v2, v3 }, [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st4.v2f64.p0f64(<2 x double> %B, <2 x double> %C, <2 x double> %D, <2 x double> %E, double* %A)
-  %tmp = getelementptr double* %A, i64 %inc
+  %tmp = getelementptr double, double* %A, i64 %inc
   ret double* %tmp
 }
 
@@ -4292,7 +4292,7 @@ define double* @test_v1f64_post_imm_st4(
 ;CHECK-LABEL: test_v1f64_post_imm_st4:
 ;CHECK: st1.1d { v0, v1, v2, v3 }, [x0], #32
   call void @llvm.aarch64.neon.st4.v1f64.p0f64(<1 x double> %B, <1 x double> %C, <1 x double> %D, <1 x double> %E, double* %A)
-  %tmp = getelementptr double* %A, i64 4
+  %tmp = getelementptr double, double* %A, i64 4
   ret double* %tmp
 }
 
@@ -4300,7 +4300,7 @@ define double* @test_v1f64_post_reg_st4(
 ;CHECK-LABEL: test_v1f64_post_reg_st4:
 ;CHECK: st1.1d { v0, v1, v2, v3 }, [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st4.v1f64.p0f64(<1 x double> %B, <1 x double> %C, <1 x double> %D, <1 x double> %E, double* %A)
-  %tmp = getelementptr double* %A, i64 %inc
+  %tmp = getelementptr double, double* %A, i64 %inc
   ret double* %tmp
 }
 
@@ -4311,7 +4311,7 @@ define i8* @test_v16i8_post_imm_st1x2(i8
 ;CHECK-LABEL: test_v16i8_post_imm_st1x2:
 ;CHECK: st1.16b { v0, v1 }, [x0], #32
   call void @llvm.aarch64.neon.st1x2.v16i8.p0i8(<16 x i8> %B, <16 x i8> %C, i8* %A)
-  %tmp = getelementptr i8* %A, i32 32
+  %tmp = getelementptr i8, i8* %A, i32 32
   ret i8* %tmp
 }
 
@@ -4319,7 +4319,7 @@ define i8* @test_v16i8_post_reg_st1x2(i8
 ;CHECK-LABEL: test_v16i8_post_reg_st1x2:
 ;CHECK: st1.16b { v0, v1 }, [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st1x2.v16i8.p0i8(<16 x i8> %B, <16 x i8> %C, i8* %A)
-  %tmp = getelementptr i8* %A, i64 %inc
+  %tmp = getelementptr i8, i8* %A, i64 %inc
   ret i8* %tmp
 }
 
@@ -4330,7 +4330,7 @@ define i8* @test_v8i8_post_imm_st1x2(i8*
 ;CHECK-LABEL: test_v8i8_post_imm_st1x2:
 ;CHECK: st1.8b { v0, v1 }, [x0], #16
   call void @llvm.aarch64.neon.st1x2.v8i8.p0i8(<8 x i8> %B, <8 x i8> %C, i8* %A)
-  %tmp = getelementptr i8* %A, i32 16
+  %tmp = getelementptr i8, i8* %A, i32 16
   ret i8* %tmp
 }
 
@@ -4338,7 +4338,7 @@ define i8* @test_v8i8_post_reg_st1x2(i8*
 ;CHECK-LABEL: test_v8i8_post_reg_st1x2:
 ;CHECK: st1.8b { v0, v1 }, [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st1x2.v8i8.p0i8(<8 x i8> %B, <8 x i8> %C, i8* %A)
-  %tmp = getelementptr i8* %A, i64 %inc
+  %tmp = getelementptr i8, i8* %A, i64 %inc
   ret i8* %tmp
 }
 
@@ -4349,7 +4349,7 @@ define i16* @test_v8i16_post_imm_st1x2(i
 ;CHECK-LABEL: test_v8i16_post_imm_st1x2:
 ;CHECK: st1.8h { v0, v1 }, [x0], #32
   call void @llvm.aarch64.neon.st1x2.v8i16.p0i16(<8 x i16> %B, <8 x i16> %C, i16* %A)
-  %tmp = getelementptr i16* %A, i32 16
+  %tmp = getelementptr i16, i16* %A, i32 16
   ret i16* %tmp
 }
 
@@ -4357,7 +4357,7 @@ define i16* @test_v8i16_post_reg_st1x2(i
 ;CHECK-LABEL: test_v8i16_post_reg_st1x2:
 ;CHECK: st1.8h { v0, v1 }, [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st1x2.v8i16.p0i16(<8 x i16> %B, <8 x i16> %C, i16* %A)
-  %tmp = getelementptr i16* %A, i64 %inc
+  %tmp = getelementptr i16, i16* %A, i64 %inc
   ret i16* %tmp
 }
 
@@ -4368,7 +4368,7 @@ define i16* @test_v4i16_post_imm_st1x2(i
 ;CHECK-LABEL: test_v4i16_post_imm_st1x2:
 ;CHECK: st1.4h { v0, v1 }, [x0], #16
   call void @llvm.aarch64.neon.st1x2.v4i16.p0i16(<4 x i16> %B, <4 x i16> %C, i16* %A)
-  %tmp = getelementptr i16* %A, i32 8
+  %tmp = getelementptr i16, i16* %A, i32 8
   ret i16* %tmp
 }
 
@@ -4376,7 +4376,7 @@ define i16* @test_v4i16_post_reg_st1x2(i
 ;CHECK-LABEL: test_v4i16_post_reg_st1x2:
 ;CHECK: st1.4h { v0, v1 }, [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st1x2.v4i16.p0i16(<4 x i16> %B, <4 x i16> %C, i16* %A)
-  %tmp = getelementptr i16* %A, i64 %inc
+  %tmp = getelementptr i16, i16* %A, i64 %inc
   ret i16* %tmp
 }
 
@@ -4387,7 +4387,7 @@ define i32* @test_v4i32_post_imm_st1x2(i
 ;CHECK-LABEL: test_v4i32_post_imm_st1x2:
 ;CHECK: st1.4s { v0, v1 }, [x0], #32
   call void @llvm.aarch64.neon.st1x2.v4i32.p0i32(<4 x i32> %B, <4 x i32> %C, i32* %A)
-  %tmp = getelementptr i32* %A, i32 8
+  %tmp = getelementptr i32, i32* %A, i32 8
   ret i32* %tmp
 }
 
@@ -4395,7 +4395,7 @@ define i32* @test_v4i32_post_reg_st1x2(i
 ;CHECK-LABEL: test_v4i32_post_reg_st1x2:
 ;CHECK: st1.4s { v0, v1 }, [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st1x2.v4i32.p0i32(<4 x i32> %B, <4 x i32> %C, i32* %A)
-  %tmp = getelementptr i32* %A, i64 %inc
+  %tmp = getelementptr i32, i32* %A, i64 %inc
   ret i32* %tmp
 }
 
@@ -4406,7 +4406,7 @@ define i32* @test_v2i32_post_imm_st1x2(i
 ;CHECK-LABEL: test_v2i32_post_imm_st1x2:
 ;CHECK: st1.2s { v0, v1 }, [x0], #16
   call void @llvm.aarch64.neon.st1x2.v2i32.p0i32(<2 x i32> %B, <2 x i32> %C, i32* %A)
-  %tmp = getelementptr i32* %A, i32 4
+  %tmp = getelementptr i32, i32* %A, i32 4
   ret i32* %tmp
 }
 
@@ -4414,7 +4414,7 @@ define i32* @test_v2i32_post_reg_st1x2(i
 ;CHECK-LABEL: test_v2i32_post_reg_st1x2:
 ;CHECK: st1.2s { v0, v1 }, [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st1x2.v2i32.p0i32(<2 x i32> %B, <2 x i32> %C, i32* %A)
-  %tmp = getelementptr i32* %A, i64 %inc
+  %tmp = getelementptr i32, i32* %A, i64 %inc
   ret i32* %tmp
 }
 
@@ -4425,7 +4425,7 @@ define i64* @test_v2i64_post_imm_st1x2(i
 ;CHECK-LABEL: test_v2i64_post_imm_st1x2:
 ;CHECK: st1.2d { v0, v1 }, [x0], #32
   call void @llvm.aarch64.neon.st1x2.v2i64.p0i64(<2 x i64> %B, <2 x i64> %C, i64* %A)
-  %tmp = getelementptr i64* %A, i64 4
+  %tmp = getelementptr i64, i64* %A, i64 4
   ret i64* %tmp
 }
 
@@ -4433,7 +4433,7 @@ define i64* @test_v2i64_post_reg_st1x2(i
 ;CHECK-LABEL: test_v2i64_post_reg_st1x2:
 ;CHECK: st1.2d { v0, v1 }, [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st1x2.v2i64.p0i64(<2 x i64> %B, <2 x i64> %C, i64* %A)
-  %tmp = getelementptr i64* %A, i64 %inc
+  %tmp = getelementptr i64, i64* %A, i64 %inc
   ret i64* %tmp
 }
 
@@ -4444,7 +4444,7 @@ define i64* @test_v1i64_post_imm_st1x2(i
 ;CHECK-LABEL: test_v1i64_post_imm_st1x2:
 ;CHECK: st1.1d { v0, v1 }, [x0], #16
   call void @llvm.aarch64.neon.st1x2.v1i64.p0i64(<1 x i64> %B, <1 x i64> %C, i64* %A)
-  %tmp = getelementptr i64* %A, i64 2
+  %tmp = getelementptr i64, i64* %A, i64 2
   ret i64* %tmp
 }
 
@@ -4452,7 +4452,7 @@ define i64* @test_v1i64_post_reg_st1x2(i
 ;CHECK-LABEL: test_v1i64_post_reg_st1x2:
 ;CHECK: st1.1d { v0, v1 }, [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st1x2.v1i64.p0i64(<1 x i64> %B, <1 x i64> %C, i64* %A)
-  %tmp = getelementptr i64* %A, i64 %inc
+  %tmp = getelementptr i64, i64* %A, i64 %inc
   ret i64* %tmp
 }
 
@@ -4463,7 +4463,7 @@ define float* @test_v4f32_post_imm_st1x2
 ;CHECK-LABEL: test_v4f32_post_imm_st1x2:
 ;CHECK: st1.4s { v0, v1 }, [x0], #32
   call void @llvm.aarch64.neon.st1x2.v4f32.p0f32(<4 x float> %B, <4 x float> %C, float* %A)
-  %tmp = getelementptr float* %A, i32 8
+  %tmp = getelementptr float, float* %A, i32 8
   ret float* %tmp
 }
 
@@ -4471,7 +4471,7 @@ define float* @test_v4f32_post_reg_st1x2
 ;CHECK-LABEL: test_v4f32_post_reg_st1x2:
 ;CHECK: st1.4s { v0, v1 }, [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st1x2.v4f32.p0f32(<4 x float> %B, <4 x float> %C, float* %A)
-  %tmp = getelementptr float* %A, i64 %inc
+  %tmp = getelementptr float, float* %A, i64 %inc
   ret float* %tmp
 }
 
@@ -4482,7 +4482,7 @@ define float* @test_v2f32_post_imm_st1x2
 ;CHECK-LABEL: test_v2f32_post_imm_st1x2:
 ;CHECK: st1.2s { v0, v1 }, [x0], #16
   call void @llvm.aarch64.neon.st1x2.v2f32.p0f32(<2 x float> %B, <2 x float> %C, float* %A)
-  %tmp = getelementptr float* %A, i32 4
+  %tmp = getelementptr float, float* %A, i32 4
   ret float* %tmp
 }
 
@@ -4490,7 +4490,7 @@ define float* @test_v2f32_post_reg_st1x2
 ;CHECK-LABEL: test_v2f32_post_reg_st1x2:
 ;CHECK: st1.2s { v0, v1 }, [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st1x2.v2f32.p0f32(<2 x float> %B, <2 x float> %C, float* %A)
-  %tmp = getelementptr float* %A, i64 %inc
+  %tmp = getelementptr float, float* %A, i64 %inc
   ret float* %tmp
 }
 
@@ -4501,7 +4501,7 @@ define double* @test_v2f64_post_imm_st1x
 ;CHECK-LABEL: test_v2f64_post_imm_st1x2:
 ;CHECK: st1.2d { v0, v1 }, [x0], #32
   call void @llvm.aarch64.neon.st1x2.v2f64.p0f64(<2 x double> %B, <2 x double> %C, double* %A)
-  %tmp = getelementptr double* %A, i64 4
+  %tmp = getelementptr double, double* %A, i64 4
   ret double* %tmp
 }
 
@@ -4509,7 +4509,7 @@ define double* @test_v2f64_post_reg_st1x
 ;CHECK-LABEL: test_v2f64_post_reg_st1x2:
 ;CHECK: st1.2d { v0, v1 }, [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st1x2.v2f64.p0f64(<2 x double> %B, <2 x double> %C, double* %A)
-  %tmp = getelementptr double* %A, i64 %inc
+  %tmp = getelementptr double, double* %A, i64 %inc
   ret double* %tmp
 }
 
@@ -4520,7 +4520,7 @@ define double* @test_v1f64_post_imm_st1x
 ;CHECK-LABEL: test_v1f64_post_imm_st1x2:
 ;CHECK: st1.1d { v0, v1 }, [x0], #16
   call void @llvm.aarch64.neon.st1x2.v1f64.p0f64(<1 x double> %B, <1 x double> %C, double* %A)
-  %tmp = getelementptr double* %A, i64 2
+  %tmp = getelementptr double, double* %A, i64 2
   ret double* %tmp
 }
 
@@ -4528,7 +4528,7 @@ define double* @test_v1f64_post_reg_st1x
 ;CHECK-LABEL: test_v1f64_post_reg_st1x2:
 ;CHECK: st1.1d { v0, v1 }, [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st1x2.v1f64.p0f64(<1 x double> %B, <1 x double> %C, double* %A)
-  %tmp = getelementptr double* %A, i64 %inc
+  %tmp = getelementptr double, double* %A, i64 %inc
   ret double* %tmp
 }
 
@@ -4539,7 +4539,7 @@ define i8* @test_v16i8_post_imm_st1x3(i8
 ;CHECK-LABEL: test_v16i8_post_imm_st1x3:
 ;CHECK: st1.16b { v0, v1, v2 }, [x0], #48
   call void @llvm.aarch64.neon.st1x3.v16i8.p0i8(<16 x i8> %B, <16 x i8> %C, <16 x i8> %D, i8* %A)
-  %tmp = getelementptr i8* %A, i32 48
+  %tmp = getelementptr i8, i8* %A, i32 48
   ret i8* %tmp
 }
 
@@ -4547,7 +4547,7 @@ define i8* @test_v16i8_post_reg_st1x3(i8
 ;CHECK-LABEL: test_v16i8_post_reg_st1x3:
 ;CHECK: st1.16b { v0, v1, v2 }, [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st1x3.v16i8.p0i8(<16 x i8> %B, <16 x i8> %C, <16 x i8> %D, i8* %A)
-  %tmp = getelementptr i8* %A, i64 %inc
+  %tmp = getelementptr i8, i8* %A, i64 %inc
   ret i8* %tmp
 }
 
@@ -4558,7 +4558,7 @@ define i8* @test_v8i8_post_imm_st1x3(i8*
 ;CHECK-LABEL: test_v8i8_post_imm_st1x3:
 ;CHECK: st1.8b { v0, v1, v2 }, [x0], #24
   call void @llvm.aarch64.neon.st1x3.v8i8.p0i8(<8 x i8> %B, <8 x i8> %C, <8 x i8> %D, i8* %A)
-  %tmp = getelementptr i8* %A, i32 24
+  %tmp = getelementptr i8, i8* %A, i32 24
   ret i8* %tmp
 }
 
@@ -4566,7 +4566,7 @@ define i8* @test_v8i8_post_reg_st1x3(i8*
 ;CHECK-LABEL: test_v8i8_post_reg_st1x3:
 ;CHECK: st1.8b { v0, v1, v2 }, [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st1x3.v8i8.p0i8(<8 x i8> %B, <8 x i8> %C, <8 x i8> %D, i8* %A)
-  %tmp = getelementptr i8* %A, i64 %inc
+  %tmp = getelementptr i8, i8* %A, i64 %inc
   ret i8* %tmp
 }
 
@@ -4577,7 +4577,7 @@ define i16* @test_v8i16_post_imm_st1x3(i
 ;CHECK-LABEL: test_v8i16_post_imm_st1x3:
 ;CHECK: st1.8h { v0, v1, v2 }, [x0], #48
   call void @llvm.aarch64.neon.st1x3.v8i16.p0i16(<8 x i16> %B, <8 x i16> %C, <8 x i16> %D, i16* %A)
-  %tmp = getelementptr i16* %A, i32 24
+  %tmp = getelementptr i16, i16* %A, i32 24
   ret i16* %tmp
 }
 
@@ -4585,7 +4585,7 @@ define i16* @test_v8i16_post_reg_st1x3(i
 ;CHECK-LABEL: test_v8i16_post_reg_st1x3:
 ;CHECK: st1.8h { v0, v1, v2 }, [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st1x3.v8i16.p0i16(<8 x i16> %B, <8 x i16> %C, <8 x i16> %D, i16* %A)
-  %tmp = getelementptr i16* %A, i64 %inc
+  %tmp = getelementptr i16, i16* %A, i64 %inc
   ret i16* %tmp
 }
 
@@ -4596,7 +4596,7 @@ define i16* @test_v4i16_post_imm_st1x3(i
 ;CHECK-LABEL: test_v4i16_post_imm_st1x3:
 ;CHECK: st1.4h { v0, v1, v2 }, [x0], #24
   call void @llvm.aarch64.neon.st1x3.v4i16.p0i16(<4 x i16> %B, <4 x i16> %C, <4 x i16> %D, i16* %A)
-  %tmp = getelementptr i16* %A, i32 12
+  %tmp = getelementptr i16, i16* %A, i32 12
   ret i16* %tmp
 }
 
@@ -4604,7 +4604,7 @@ define i16* @test_v4i16_post_reg_st1x3(i
 ;CHECK-LABEL: test_v4i16_post_reg_st1x3:
 ;CHECK: st1.4h { v0, v1, v2 }, [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st1x3.v4i16.p0i16(<4 x i16> %B, <4 x i16> %C, <4 x i16> %D, i16* %A)
-  %tmp = getelementptr i16* %A, i64 %inc
+  %tmp = getelementptr i16, i16* %A, i64 %inc
   ret i16* %tmp
 }
 
@@ -4615,7 +4615,7 @@ define i32* @test_v4i32_post_imm_st1x3(i
 ;CHECK-LABEL: test_v4i32_post_imm_st1x3:
 ;CHECK: st1.4s { v0, v1, v2 }, [x0], #48
   call void @llvm.aarch64.neon.st1x3.v4i32.p0i32(<4 x i32> %B, <4 x i32> %C, <4 x i32> %D, i32* %A)
-  %tmp = getelementptr i32* %A, i32 12
+  %tmp = getelementptr i32, i32* %A, i32 12
   ret i32* %tmp
 }
 
@@ -4623,7 +4623,7 @@ define i32* @test_v4i32_post_reg_st1x3(i
 ;CHECK-LABEL: test_v4i32_post_reg_st1x3:
 ;CHECK: st1.4s { v0, v1, v2 }, [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st1x3.v4i32.p0i32(<4 x i32> %B, <4 x i32> %C, <4 x i32> %D, i32* %A)
-  %tmp = getelementptr i32* %A, i64 %inc
+  %tmp = getelementptr i32, i32* %A, i64 %inc
   ret i32* %tmp
 }
 
@@ -4634,7 +4634,7 @@ define i32* @test_v2i32_post_imm_st1x3(i
 ;CHECK-LABEL: test_v2i32_post_imm_st1x3:
 ;CHECK: st1.2s { v0, v1, v2 }, [x0], #24
   call void @llvm.aarch64.neon.st1x3.v2i32.p0i32(<2 x i32> %B, <2 x i32> %C, <2 x i32> %D, i32* %A)
-  %tmp = getelementptr i32* %A, i32 6
+  %tmp = getelementptr i32, i32* %A, i32 6
   ret i32* %tmp
 }
 
@@ -4642,7 +4642,7 @@ define i32* @test_v2i32_post_reg_st1x3(i
 ;CHECK-LABEL: test_v2i32_post_reg_st1x3:
 ;CHECK: st1.2s { v0, v1, v2 }, [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st1x3.v2i32.p0i32(<2 x i32> %B, <2 x i32> %C, <2 x i32> %D, i32* %A)
-  %tmp = getelementptr i32* %A, i64 %inc
+  %tmp = getelementptr i32, i32* %A, i64 %inc
   ret i32* %tmp
 }
 
@@ -4653,7 +4653,7 @@ define i64* @test_v2i64_post_imm_st1x3(i
 ;CHECK-LABEL: test_v2i64_post_imm_st1x3:
 ;CHECK: st1.2d { v0, v1, v2 }, [x0], #48
   call void @llvm.aarch64.neon.st1x3.v2i64.p0i64(<2 x i64> %B, <2 x i64> %C, <2 x i64> %D, i64* %A)
-  %tmp = getelementptr i64* %A, i64 6
+  %tmp = getelementptr i64, i64* %A, i64 6
   ret i64* %tmp
 }
 
@@ -4661,7 +4661,7 @@ define i64* @test_v2i64_post_reg_st1x3(i
 ;CHECK-LABEL: test_v2i64_post_reg_st1x3:
 ;CHECK: st1.2d { v0, v1, v2 }, [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st1x3.v2i64.p0i64(<2 x i64> %B, <2 x i64> %C, <2 x i64> %D, i64* %A)
-  %tmp = getelementptr i64* %A, i64 %inc
+  %tmp = getelementptr i64, i64* %A, i64 %inc
   ret i64* %tmp
 }
 
@@ -4672,7 +4672,7 @@ define i64* @test_v1i64_post_imm_st1x3(i
 ;CHECK-LABEL: test_v1i64_post_imm_st1x3:
 ;CHECK: st1.1d { v0, v1, v2 }, [x0], #24
   call void @llvm.aarch64.neon.st1x3.v1i64.p0i64(<1 x i64> %B, <1 x i64> %C, <1 x i64> %D, i64* %A)
-  %tmp = getelementptr i64* %A, i64 3
+  %tmp = getelementptr i64, i64* %A, i64 3
   ret i64* %tmp
 }
 
@@ -4680,7 +4680,7 @@ define i64* @test_v1i64_post_reg_st1x3(i
 ;CHECK-LABEL: test_v1i64_post_reg_st1x3:
 ;CHECK: st1.1d { v0, v1, v2 }, [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st1x3.v1i64.p0i64(<1 x i64> %B, <1 x i64> %C, <1 x i64> %D, i64* %A)
-  %tmp = getelementptr i64* %A, i64 %inc
+  %tmp = getelementptr i64, i64* %A, i64 %inc
   ret i64* %tmp
 }
 
@@ -4691,7 +4691,7 @@ define float* @test_v4f32_post_imm_st1x3
 ;CHECK-LABEL: test_v4f32_post_imm_st1x3:
 ;CHECK: st1.4s { v0, v1, v2 }, [x0], #48
   call void @llvm.aarch64.neon.st1x3.v4f32.p0f32(<4 x float> %B, <4 x float> %C, <4 x float> %D, float* %A)
-  %tmp = getelementptr float* %A, i32 12
+  %tmp = getelementptr float, float* %A, i32 12
   ret float* %tmp
 }
 
@@ -4699,7 +4699,7 @@ define float* @test_v4f32_post_reg_st1x3
 ;CHECK-LABEL: test_v4f32_post_reg_st1x3:
 ;CHECK: st1.4s { v0, v1, v2 }, [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st1x3.v4f32.p0f32(<4 x float> %B, <4 x float> %C, <4 x float> %D, float* %A)
-  %tmp = getelementptr float* %A, i64 %inc
+  %tmp = getelementptr float, float* %A, i64 %inc
   ret float* %tmp
 }
 
@@ -4710,7 +4710,7 @@ define float* @test_v2f32_post_imm_st1x3
 ;CHECK-LABEL: test_v2f32_post_imm_st1x3:
 ;CHECK: st1.2s { v0, v1, v2 }, [x0], #24
   call void @llvm.aarch64.neon.st1x3.v2f32.p0f32(<2 x float> %B, <2 x float> %C, <2 x float> %D, float* %A)
-  %tmp = getelementptr float* %A, i32 6
+  %tmp = getelementptr float, float* %A, i32 6
   ret float* %tmp
 }
 
@@ -4718,7 +4718,7 @@ define float* @test_v2f32_post_reg_st1x3
 ;CHECK-LABEL: test_v2f32_post_reg_st1x3:
 ;CHECK: st1.2s { v0, v1, v2 }, [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st1x3.v2f32.p0f32(<2 x float> %B, <2 x float> %C, <2 x float> %D, float* %A)
-  %tmp = getelementptr float* %A, i64 %inc
+  %tmp = getelementptr float, float* %A, i64 %inc
   ret float* %tmp
 }
 
@@ -4729,7 +4729,7 @@ define double* @test_v2f64_post_imm_st1x
 ;CHECK-LABEL: test_v2f64_post_imm_st1x3:
 ;CHECK: st1.2d { v0, v1, v2 }, [x0], #48
   call void @llvm.aarch64.neon.st1x3.v2f64.p0f64(<2 x double> %B, <2 x double> %C, <2 x double> %D, double* %A)
-  %tmp = getelementptr double* %A, i64 6
+  %tmp = getelementptr double, double* %A, i64 6
   ret double* %tmp
 }
 
@@ -4737,7 +4737,7 @@ define double* @test_v2f64_post_reg_st1x
 ;CHECK-LABEL: test_v2f64_post_reg_st1x3:
 ;CHECK: st1.2d { v0, v1, v2 }, [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st1x3.v2f64.p0f64(<2 x double> %B, <2 x double> %C, <2 x double> %D, double* %A)
-  %tmp = getelementptr double* %A, i64 %inc
+  %tmp = getelementptr double, double* %A, i64 %inc
   ret double* %tmp
 }
 
@@ -4748,7 +4748,7 @@ define double* @test_v1f64_post_imm_st1x
 ;CHECK-LABEL: test_v1f64_post_imm_st1x3:
 ;CHECK: st1.1d { v0, v1, v2 }, [x0], #24
   call void @llvm.aarch64.neon.st1x3.v1f64.p0f64(<1 x double> %B, <1 x double> %C, <1 x double> %D, double* %A)
-  %tmp = getelementptr double* %A, i64 3
+  %tmp = getelementptr double, double* %A, i64 3
   ret double* %tmp
 }
 
@@ -4756,7 +4756,7 @@ define double* @test_v1f64_post_reg_st1x
 ;CHECK-LABEL: test_v1f64_post_reg_st1x3:
 ;CHECK: st1.1d { v0, v1, v2 }, [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st1x3.v1f64.p0f64(<1 x double> %B, <1 x double> %C, <1 x double> %D, double* %A)
-  %tmp = getelementptr double* %A, i64 %inc
+  %tmp = getelementptr double, double* %A, i64 %inc
   ret double* %tmp
 }
 
@@ -4767,7 +4767,7 @@ define i8* @test_v16i8_post_imm_st1x4(i8
 ;CHECK-LABEL: test_v16i8_post_imm_st1x4:
 ;CHECK: st1.16b { v0, v1, v2, v3 }, [x0], #64
   call void @llvm.aarch64.neon.st1x4.v16i8.p0i8(<16 x i8> %B, <16 x i8> %C, <16 x i8> %D, <16 x i8> %E, i8* %A)
-  %tmp = getelementptr i8* %A, i32 64
+  %tmp = getelementptr i8, i8* %A, i32 64
   ret i8* %tmp
 }
 
@@ -4775,7 +4775,7 @@ define i8* @test_v16i8_post_reg_st1x4(i8
 ;CHECK-LABEL: test_v16i8_post_reg_st1x4:
 ;CHECK: st1.16b { v0, v1, v2, v3 }, [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st1x4.v16i8.p0i8(<16 x i8> %B, <16 x i8> %C, <16 x i8> %D, <16 x i8> %E, i8* %A)
-  %tmp = getelementptr i8* %A, i64 %inc
+  %tmp = getelementptr i8, i8* %A, i64 %inc
   ret i8* %tmp
 }
 
@@ -4786,7 +4786,7 @@ define i8* @test_v8i8_post_imm_st1x4(i8*
 ;CHECK-LABEL: test_v8i8_post_imm_st1x4:
 ;CHECK: st1.8b { v0, v1, v2, v3 }, [x0], #32
   call void @llvm.aarch64.neon.st1x4.v8i8.p0i8(<8 x i8> %B, <8 x i8> %C, <8 x i8> %D, <8 x i8> %E, i8* %A)
-  %tmp = getelementptr i8* %A, i32 32
+  %tmp = getelementptr i8, i8* %A, i32 32
   ret i8* %tmp
 }
 
@@ -4794,7 +4794,7 @@ define i8* @test_v8i8_post_reg_st1x4(i8*
 ;CHECK-LABEL: test_v8i8_post_reg_st1x4:
 ;CHECK: st1.8b { v0, v1, v2, v3 }, [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st1x4.v8i8.p0i8(<8 x i8> %B, <8 x i8> %C, <8 x i8> %D, <8 x i8> %E, i8* %A)
-  %tmp = getelementptr i8* %A, i64 %inc
+  %tmp = getelementptr i8, i8* %A, i64 %inc
   ret i8* %tmp
 }
 
@@ -4805,7 +4805,7 @@ define i16* @test_v8i16_post_imm_st1x4(i
 ;CHECK-LABEL: test_v8i16_post_imm_st1x4:
 ;CHECK: st1.8h { v0, v1, v2, v3 }, [x0], #64
   call void @llvm.aarch64.neon.st1x4.v8i16.p0i16(<8 x i16> %B, <8 x i16> %C, <8 x i16> %D, <8 x i16> %E, i16* %A)
-  %tmp = getelementptr i16* %A, i32 32
+  %tmp = getelementptr i16, i16* %A, i32 32
   ret i16* %tmp
 }
 
@@ -4813,7 +4813,7 @@ define i16* @test_v8i16_post_reg_st1x4(i
 ;CHECK-LABEL: test_v8i16_post_reg_st1x4:
 ;CHECK: st1.8h { v0, v1, v2, v3 }, [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st1x4.v8i16.p0i16(<8 x i16> %B, <8 x i16> %C, <8 x i16> %D, <8 x i16> %E, i16* %A)
-  %tmp = getelementptr i16* %A, i64 %inc
+  %tmp = getelementptr i16, i16* %A, i64 %inc
   ret i16* %tmp
 }
 
@@ -4824,7 +4824,7 @@ define i16* @test_v4i16_post_imm_st1x4(i
 ;CHECK-LABEL: test_v4i16_post_imm_st1x4:
 ;CHECK: st1.4h { v0, v1, v2, v3 }, [x0], #32
   call void @llvm.aarch64.neon.st1x4.v4i16.p0i16(<4 x i16> %B, <4 x i16> %C, <4 x i16> %D, <4 x i16> %E, i16* %A)
-  %tmp = getelementptr i16* %A, i32 16
+  %tmp = getelementptr i16, i16* %A, i32 16
   ret i16* %tmp
 }
 
@@ -4832,7 +4832,7 @@ define i16* @test_v4i16_post_reg_st1x4(i
 ;CHECK-LABEL: test_v4i16_post_reg_st1x4:
 ;CHECK: st1.4h { v0, v1, v2, v3 }, [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st1x4.v4i16.p0i16(<4 x i16> %B, <4 x i16> %C, <4 x i16> %D, <4 x i16> %E, i16* %A)
-  %tmp = getelementptr i16* %A, i64 %inc
+  %tmp = getelementptr i16, i16* %A, i64 %inc
   ret i16* %tmp
 }
 
@@ -4843,7 +4843,7 @@ define i32* @test_v4i32_post_imm_st1x4(i
 ;CHECK-LABEL: test_v4i32_post_imm_st1x4:
 ;CHECK: st1.4s { v0, v1, v2, v3 }, [x0], #64
   call void @llvm.aarch64.neon.st1x4.v4i32.p0i32(<4 x i32> %B, <4 x i32> %C, <4 x i32> %D, <4 x i32> %E, i32* %A)
-  %tmp = getelementptr i32* %A, i32 16
+  %tmp = getelementptr i32, i32* %A, i32 16
   ret i32* %tmp
 }
 
@@ -4851,7 +4851,7 @@ define i32* @test_v4i32_post_reg_st1x4(i
 ;CHECK-LABEL: test_v4i32_post_reg_st1x4:
 ;CHECK: st1.4s { v0, v1, v2, v3 }, [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st1x4.v4i32.p0i32(<4 x i32> %B, <4 x i32> %C, <4 x i32> %D, <4 x i32> %E, i32* %A)
-  %tmp = getelementptr i32* %A, i64 %inc
+  %tmp = getelementptr i32, i32* %A, i64 %inc
   ret i32* %tmp
 }
 
@@ -4862,7 +4862,7 @@ define i32* @test_v2i32_post_imm_st1x4(i
 ;CHECK-LABEL: test_v2i32_post_imm_st1x4:
 ;CHECK: st1.2s { v0, v1, v2, v3 }, [x0], #32
   call void @llvm.aarch64.neon.st1x4.v2i32.p0i32(<2 x i32> %B, <2 x i32> %C, <2 x i32> %D, <2 x i32> %E, i32* %A)
-  %tmp = getelementptr i32* %A, i32 8
+  %tmp = getelementptr i32, i32* %A, i32 8
   ret i32* %tmp
 }
 
@@ -4870,7 +4870,7 @@ define i32* @test_v2i32_post_reg_st1x4(i
 ;CHECK-LABEL: test_v2i32_post_reg_st1x4:
 ;CHECK: st1.2s { v0, v1, v2, v3 }, [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st1x4.v2i32.p0i32(<2 x i32> %B, <2 x i32> %C, <2 x i32> %D, <2 x i32> %E, i32* %A)
-  %tmp = getelementptr i32* %A, i64 %inc
+  %tmp = getelementptr i32, i32* %A, i64 %inc
   ret i32* %tmp
 }
 
@@ -4881,7 +4881,7 @@ define i64* @test_v2i64_post_imm_st1x4(i
 ;CHECK-LABEL: test_v2i64_post_imm_st1x4:
 ;CHECK: st1.2d { v0, v1, v2, v3 }, [x0], #64
   call void @llvm.aarch64.neon.st1x4.v2i64.p0i64(<2 x i64> %B, <2 x i64> %C, <2 x i64> %D, <2 x i64> %E, i64* %A)
-  %tmp = getelementptr i64* %A, i64 8
+  %tmp = getelementptr i64, i64* %A, i64 8
   ret i64* %tmp
 }
 
@@ -4889,7 +4889,7 @@ define i64* @test_v2i64_post_reg_st1x4(i
 ;CHECK-LABEL: test_v2i64_post_reg_st1x4:
 ;CHECK: st1.2d { v0, v1, v2, v3 }, [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st1x4.v2i64.p0i64(<2 x i64> %B, <2 x i64> %C, <2 x i64> %D, <2 x i64> %E, i64* %A)
-  %tmp = getelementptr i64* %A, i64 %inc
+  %tmp = getelementptr i64, i64* %A, i64 %inc
   ret i64* %tmp
 }
 
@@ -4900,7 +4900,7 @@ define i64* @test_v1i64_post_imm_st1x4(i
 ;CHECK-LABEL: test_v1i64_post_imm_st1x4:
 ;CHECK: st1.1d { v0, v1, v2, v3 }, [x0], #32
   call void @llvm.aarch64.neon.st1x4.v1i64.p0i64(<1 x i64> %B, <1 x i64> %C, <1 x i64> %D, <1 x i64> %E, i64* %A)
-  %tmp = getelementptr i64* %A, i64 4
+  %tmp = getelementptr i64, i64* %A, i64 4
   ret i64* %tmp
 }
 
@@ -4908,7 +4908,7 @@ define i64* @test_v1i64_post_reg_st1x4(i
 ;CHECK-LABEL: test_v1i64_post_reg_st1x4:
 ;CHECK: st1.1d { v0, v1, v2, v3 }, [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st1x4.v1i64.p0i64(<1 x i64> %B, <1 x i64> %C, <1 x i64> %D, <1 x i64> %E, i64* %A)
-  %tmp = getelementptr i64* %A, i64 %inc
+  %tmp = getelementptr i64, i64* %A, i64 %inc
   ret i64* %tmp
 }
 
@@ -4919,7 +4919,7 @@ define float* @test_v4f32_post_imm_st1x4
 ;CHECK-LABEL: test_v4f32_post_imm_st1x4:
 ;CHECK: st1.4s { v0, v1, v2, v3 }, [x0], #64
   call void @llvm.aarch64.neon.st1x4.v4f32.p0f32(<4 x float> %B, <4 x float> %C, <4 x float> %D, <4 x float> %E, float* %A)
-  %tmp = getelementptr float* %A, i32 16
+  %tmp = getelementptr float, float* %A, i32 16
   ret float* %tmp
 }
 
@@ -4927,7 +4927,7 @@ define float* @test_v4f32_post_reg_st1x4
 ;CHECK-LABEL: test_v4f32_post_reg_st1x4:
 ;CHECK: st1.4s { v0, v1, v2, v3 }, [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st1x4.v4f32.p0f32(<4 x float> %B, <4 x float> %C, <4 x float> %D, <4 x float> %E, float* %A)
-  %tmp = getelementptr float* %A, i64 %inc
+  %tmp = getelementptr float, float* %A, i64 %inc
   ret float* %tmp
 }
 
@@ -4938,7 +4938,7 @@ define float* @test_v2f32_post_imm_st1x4
 ;CHECK-LABEL: test_v2f32_post_imm_st1x4:
 ;CHECK: st1.2s { v0, v1, v2, v3 }, [x0], #32
   call void @llvm.aarch64.neon.st1x4.v2f32.p0f32(<2 x float> %B, <2 x float> %C, <2 x float> %D, <2 x float> %E, float* %A)
-  %tmp = getelementptr float* %A, i32 8
+  %tmp = getelementptr float, float* %A, i32 8
   ret float* %tmp
 }
 
@@ -4946,7 +4946,7 @@ define float* @test_v2f32_post_reg_st1x4
 ;CHECK-LABEL: test_v2f32_post_reg_st1x4:
 ;CHECK: st1.2s { v0, v1, v2, v3 }, [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st1x4.v2f32.p0f32(<2 x float> %B, <2 x float> %C, <2 x float> %D, <2 x float> %E, float* %A)
-  %tmp = getelementptr float* %A, i64 %inc
+  %tmp = getelementptr float, float* %A, i64 %inc
   ret float* %tmp
 }
 
@@ -4957,7 +4957,7 @@ define double* @test_v2f64_post_imm_st1x
 ;CHECK-LABEL: test_v2f64_post_imm_st1x4:
 ;CHECK: st1.2d { v0, v1, v2, v3 }, [x0], #64
   call void @llvm.aarch64.neon.st1x4.v2f64.p0f64(<2 x double> %B, <2 x double> %C, <2 x double> %D, <2 x double> %E, double* %A)
-  %tmp = getelementptr double* %A, i64 8
+  %tmp = getelementptr double, double* %A, i64 8
   ret double* %tmp
 }
 
@@ -4965,7 +4965,7 @@ define double* @test_v2f64_post_reg_st1x
 ;CHECK-LABEL: test_v2f64_post_reg_st1x4:
 ;CHECK: st1.2d { v0, v1, v2, v3 }, [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st1x4.v2f64.p0f64(<2 x double> %B, <2 x double> %C, <2 x double> %D, <2 x double> %E, double* %A)
-  %tmp = getelementptr double* %A, i64 %inc
+  %tmp = getelementptr double, double* %A, i64 %inc
   ret double* %tmp
 }
 
@@ -4976,7 +4976,7 @@ define double* @test_v1f64_post_imm_st1x
 ;CHECK-LABEL: test_v1f64_post_imm_st1x4:
 ;CHECK: st1.1d { v0, v1, v2, v3 }, [x0], #32
   call void @llvm.aarch64.neon.st1x4.v1f64.p0f64(<1 x double> %B, <1 x double> %C, <1 x double> %D, <1 x double> %E, double* %A)
-  %tmp = getelementptr double* %A, i64 4
+  %tmp = getelementptr double, double* %A, i64 4
   ret double* %tmp
 }
 
@@ -4984,7 +4984,7 @@ define double* @test_v1f64_post_reg_st1x
 ;CHECK-LABEL: test_v1f64_post_reg_st1x4:
 ;CHECK: st1.1d { v0, v1, v2, v3 }, [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st1x4.v1f64.p0f64(<1 x double> %B, <1 x double> %C, <1 x double> %D, <1 x double> %E, double* %A)
-  %tmp = getelementptr double* %A, i64 %inc
+  %tmp = getelementptr double, double* %A, i64 %inc
   ret double* %tmp
 }
 
@@ -4993,13 +4993,13 @@ declare void @llvm.aarch64.neon.st1x4.v1
 
 define i8* @test_v16i8_post_imm_st2lanelane(i8* %A, i8** %ptr, <16 x i8> %B, <16 x i8> %C) {
   call void @llvm.aarch64.neon.st2lanelane.v16i8.p0i8(<16 x i8> %B, <16 x i8> %C, i64 0, i64 1, i8* %A)
-  %tmp = getelementptr i8* %A, i32 2
+  %tmp = getelementptr i8, i8* %A, i32 2
   ret i8* %tmp
 }
 
 define i8* @test_v16i8_post_reg_st2lanelane(i8* %A, i8** %ptr, <16 x i8> %B, <16 x i8> %C, i64 %inc) {
   call void @llvm.aarch64.neon.st2lanelane.v16i8.p0i8(<16 x i8> %B, <16 x i8> %C, i64 0, i64 1, i8* %A)
-  %tmp = getelementptr i8* %A, i64 %inc
+  %tmp = getelementptr i8, i8* %A, i64 %inc
   ret i8* %tmp
 }
 
@@ -5010,7 +5010,7 @@ define i8* @test_v16i8_post_imm_st2lane(
 ;CHECK-LABEL: test_v16i8_post_imm_st2lane:
 ;CHECK: st2.b { v0, v1 }[0], [x0], #2
   call void @llvm.aarch64.neon.st2lane.v16i8.p0i8(<16 x i8> %B, <16 x i8> %C, i64 0, i8* %A)
-  %tmp = getelementptr i8* %A, i32 2
+  %tmp = getelementptr i8, i8* %A, i32 2
   ret i8* %tmp
 }
 
@@ -5018,7 +5018,7 @@ define i8* @test_v16i8_post_reg_st2lane(
 ;CHECK-LABEL: test_v16i8_post_reg_st2lane:
 ;CHECK: st2.b { v0, v1 }[0], [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st2lane.v16i8.p0i8(<16 x i8> %B, <16 x i8> %C, i64 0, i8* %A)
-  %tmp = getelementptr i8* %A, i64 %inc
+  %tmp = getelementptr i8, i8* %A, i64 %inc
   ret i8* %tmp
 }
 
@@ -5029,7 +5029,7 @@ define i8* @test_v8i8_post_imm_st2lane(i
 ;CHECK-LABEL: test_v8i8_post_imm_st2lane:
 ;CHECK: st2.b { v0, v1 }[0], [x0], #2
   call void @llvm.aarch64.neon.st2lane.v8i8.p0i8(<8 x i8> %B, <8 x i8> %C, i64 0, i8* %A)
-  %tmp = getelementptr i8* %A, i32 2
+  %tmp = getelementptr i8, i8* %A, i32 2
   ret i8* %tmp
 }
 
@@ -5037,7 +5037,7 @@ define i8* @test_v8i8_post_reg_st2lane(i
 ;CHECK-LABEL: test_v8i8_post_reg_st2lane:
 ;CHECK: st2.b { v0, v1 }[0], [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st2lane.v8i8.p0i8(<8 x i8> %B, <8 x i8> %C, i64 0, i8* %A)
-  %tmp = getelementptr i8* %A, i64 %inc
+  %tmp = getelementptr i8, i8* %A, i64 %inc
   ret i8* %tmp
 }
 
@@ -5048,7 +5048,7 @@ define i16* @test_v8i16_post_imm_st2lane
 ;CHECK-LABEL: test_v8i16_post_imm_st2lane:
 ;CHECK: st2.h { v0, v1 }[0], [x0], #4
   call void @llvm.aarch64.neon.st2lane.v8i16.p0i16(<8 x i16> %B, <8 x i16> %C, i64 0, i16* %A)
-  %tmp = getelementptr i16* %A, i32 2
+  %tmp = getelementptr i16, i16* %A, i32 2
   ret i16* %tmp
 }
 
@@ -5056,7 +5056,7 @@ define i16* @test_v8i16_post_reg_st2lane
 ;CHECK-LABEL: test_v8i16_post_reg_st2lane:
 ;CHECK: st2.h { v0, v1 }[0], [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st2lane.v8i16.p0i16(<8 x i16> %B, <8 x i16> %C, i64 0, i16* %A)
-  %tmp = getelementptr i16* %A, i64 %inc
+  %tmp = getelementptr i16, i16* %A, i64 %inc
   ret i16* %tmp
 }
 
@@ -5067,7 +5067,7 @@ define i16* @test_v4i16_post_imm_st2lane
 ;CHECK-LABEL: test_v4i16_post_imm_st2lane:
 ;CHECK: st2.h { v0, v1 }[0], [x0], #4
   call void @llvm.aarch64.neon.st2lane.v4i16.p0i16(<4 x i16> %B, <4 x i16> %C, i64 0, i16* %A)
-  %tmp = getelementptr i16* %A, i32 2
+  %tmp = getelementptr i16, i16* %A, i32 2
   ret i16* %tmp
 }
 
@@ -5075,7 +5075,7 @@ define i16* @test_v4i16_post_reg_st2lane
 ;CHECK-LABEL: test_v4i16_post_reg_st2lane:
 ;CHECK: st2.h { v0, v1 }[0], [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st2lane.v4i16.p0i16(<4 x i16> %B, <4 x i16> %C, i64 0, i16* %A)
-  %tmp = getelementptr i16* %A, i64 %inc
+  %tmp = getelementptr i16, i16* %A, i64 %inc
   ret i16* %tmp
 }
 
@@ -5086,7 +5086,7 @@ define i32* @test_v4i32_post_imm_st2lane
 ;CHECK-LABEL: test_v4i32_post_imm_st2lane:
 ;CHECK: st2.s { v0, v1 }[0], [x0], #8
   call void @llvm.aarch64.neon.st2lane.v4i32.p0i32(<4 x i32> %B, <4 x i32> %C, i64 0, i32* %A)
-  %tmp = getelementptr i32* %A, i32 2
+  %tmp = getelementptr i32, i32* %A, i32 2
   ret i32* %tmp
 }
 
@@ -5094,7 +5094,7 @@ define i32* @test_v4i32_post_reg_st2lane
 ;CHECK-LABEL: test_v4i32_post_reg_st2lane:
 ;CHECK: st2.s { v0, v1 }[0], [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st2lane.v4i32.p0i32(<4 x i32> %B, <4 x i32> %C, i64 0, i32* %A)
-  %tmp = getelementptr i32* %A, i64 %inc
+  %tmp = getelementptr i32, i32* %A, i64 %inc
   ret i32* %tmp
 }
 
@@ -5105,7 +5105,7 @@ define i32* @test_v2i32_post_imm_st2lane
 ;CHECK-LABEL: test_v2i32_post_imm_st2lane:
 ;CHECK: st2.s { v0, v1 }[0], [x0], #8
   call void @llvm.aarch64.neon.st2lane.v2i32.p0i32(<2 x i32> %B, <2 x i32> %C, i64 0, i32* %A)
-  %tmp = getelementptr i32* %A, i32 2
+  %tmp = getelementptr i32, i32* %A, i32 2
   ret i32* %tmp
 }
 
@@ -5113,7 +5113,7 @@ define i32* @test_v2i32_post_reg_st2lane
 ;CHECK-LABEL: test_v2i32_post_reg_st2lane:
 ;CHECK: st2.s { v0, v1 }[0], [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st2lane.v2i32.p0i32(<2 x i32> %B, <2 x i32> %C, i64 0, i32* %A)
-  %tmp = getelementptr i32* %A, i64 %inc
+  %tmp = getelementptr i32, i32* %A, i64 %inc
   ret i32* %tmp
 }
 
@@ -5124,7 +5124,7 @@ define i64* @test_v2i64_post_imm_st2lane
 ;CHECK-LABEL: test_v2i64_post_imm_st2lane:
 ;CHECK: st2.d { v0, v1 }[0], [x0], #16
   call void @llvm.aarch64.neon.st2lane.v2i64.p0i64(<2 x i64> %B, <2 x i64> %C, i64 0, i64* %A)
-  %tmp = getelementptr i64* %A, i64 2
+  %tmp = getelementptr i64, i64* %A, i64 2
   ret i64* %tmp
 }
 
@@ -5132,7 +5132,7 @@ define i64* @test_v2i64_post_reg_st2lane
 ;CHECK-LABEL: test_v2i64_post_reg_st2lane:
 ;CHECK: st2.d { v0, v1 }[0], [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st2lane.v2i64.p0i64(<2 x i64> %B, <2 x i64> %C, i64 0, i64* %A)
-  %tmp = getelementptr i64* %A, i64 %inc
+  %tmp = getelementptr i64, i64* %A, i64 %inc
   ret i64* %tmp
 }
 
@@ -5143,7 +5143,7 @@ define i64* @test_v1i64_post_imm_st2lane
 ;CHECK-LABEL: test_v1i64_post_imm_st2lane:
 ;CHECK: st2.d { v0, v1 }[0], [x0], #16
   call void @llvm.aarch64.neon.st2lane.v1i64.p0i64(<1 x i64> %B, <1 x i64> %C, i64 0, i64* %A)
-  %tmp = getelementptr i64* %A, i64 2
+  %tmp = getelementptr i64, i64* %A, i64 2
   ret i64* %tmp
 }
 
@@ -5151,7 +5151,7 @@ define i64* @test_v1i64_post_reg_st2lane
 ;CHECK-LABEL: test_v1i64_post_reg_st2lane:
 ;CHECK: st2.d { v0, v1 }[0], [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st2lane.v1i64.p0i64(<1 x i64> %B, <1 x i64> %C, i64 0, i64* %A)
-  %tmp = getelementptr i64* %A, i64 %inc
+  %tmp = getelementptr i64, i64* %A, i64 %inc
   ret i64* %tmp
 }
 
@@ -5162,7 +5162,7 @@ define float* @test_v4f32_post_imm_st2la
 ;CHECK-LABEL: test_v4f32_post_imm_st2lane:
 ;CHECK: st2.s { v0, v1 }[0], [x0], #8
   call void @llvm.aarch64.neon.st2lane.v4f32.p0f32(<4 x float> %B, <4 x float> %C, i64 0, float* %A)
-  %tmp = getelementptr float* %A, i32 2
+  %tmp = getelementptr float, float* %A, i32 2
   ret float* %tmp
 }
 
@@ -5170,7 +5170,7 @@ define float* @test_v4f32_post_reg_st2la
 ;CHECK-LABEL: test_v4f32_post_reg_st2lane:
 ;CHECK: st2.s { v0, v1 }[0], [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st2lane.v4f32.p0f32(<4 x float> %B, <4 x float> %C, i64 0, float* %A)
-  %tmp = getelementptr float* %A, i64 %inc
+  %tmp = getelementptr float, float* %A, i64 %inc
   ret float* %tmp
 }
 
@@ -5181,7 +5181,7 @@ define float* @test_v2f32_post_imm_st2la
 ;CHECK-LABEL: test_v2f32_post_imm_st2lane:
 ;CHECK: st2.s { v0, v1 }[0], [x0], #8
   call void @llvm.aarch64.neon.st2lane.v2f32.p0f32(<2 x float> %B, <2 x float> %C, i64 0, float* %A)
-  %tmp = getelementptr float* %A, i32 2
+  %tmp = getelementptr float, float* %A, i32 2
   ret float* %tmp
 }
 
@@ -5189,7 +5189,7 @@ define float* @test_v2f32_post_reg_st2la
 ;CHECK-LABEL: test_v2f32_post_reg_st2lane:
 ;CHECK: st2.s { v0, v1 }[0], [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st2lane.v2f32.p0f32(<2 x float> %B, <2 x float> %C, i64 0, float* %A)
-  %tmp = getelementptr float* %A, i64 %inc
+  %tmp = getelementptr float, float* %A, i64 %inc
   ret float* %tmp
 }
 
@@ -5200,7 +5200,7 @@ define double* @test_v2f64_post_imm_st2l
 ;CHECK-LABEL: test_v2f64_post_imm_st2lane:
 ;CHECK: st2.d { v0, v1 }[0], [x0], #16
   call void @llvm.aarch64.neon.st2lane.v2f64.p0f64(<2 x double> %B, <2 x double> %C, i64 0, double* %A)
-  %tmp = getelementptr double* %A, i64 2
+  %tmp = getelementptr double, double* %A, i64 2
   ret double* %tmp
 }
 
@@ -5208,7 +5208,7 @@ define double* @test_v2f64_post_reg_st2l
 ;CHECK-LABEL: test_v2f64_post_reg_st2lane:
 ;CHECK: st2.d { v0, v1 }[0], [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st2lane.v2f64.p0f64(<2 x double> %B, <2 x double> %C, i64 0, double* %A)
-  %tmp = getelementptr double* %A, i64 %inc
+  %tmp = getelementptr double, double* %A, i64 %inc
   ret double* %tmp
 }
 
@@ -5219,7 +5219,7 @@ define double* @test_v1f64_post_imm_st2l
 ;CHECK-LABEL: test_v1f64_post_imm_st2lane:
 ;CHECK: st2.d { v0, v1 }[0], [x0], #16
   call void @llvm.aarch64.neon.st2lane.v1f64.p0f64(<1 x double> %B, <1 x double> %C, i64 0, double* %A)
-  %tmp = getelementptr double* %A, i64 2
+  %tmp = getelementptr double, double* %A, i64 2
   ret double* %tmp
 }
 
@@ -5227,7 +5227,7 @@ define double* @test_v1f64_post_reg_st2l
 ;CHECK-LABEL: test_v1f64_post_reg_st2lane:
 ;CHECK: st2.d { v0, v1 }[0], [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st2lane.v1f64.p0f64(<1 x double> %B, <1 x double> %C, i64 0, double* %A)
-  %tmp = getelementptr double* %A, i64 %inc
+  %tmp = getelementptr double, double* %A, i64 %inc
   ret double* %tmp
 }
 
@@ -5238,7 +5238,7 @@ define i8* @test_v16i8_post_imm_st3lane(
 ;CHECK-LABEL: test_v16i8_post_imm_st3lane:
 ;CHECK: st3.b { v0, v1, v2 }[0], [x0], #3
   call void @llvm.aarch64.neon.st3lane.v16i8.p0i8(<16 x i8> %B, <16 x i8> %C, <16 x i8> %D, i64 0, i8* %A)
-  %tmp = getelementptr i8* %A, i32 3
+  %tmp = getelementptr i8, i8* %A, i32 3
   ret i8* %tmp
 }
 
@@ -5246,7 +5246,7 @@ define i8* @test_v16i8_post_reg_st3lane(
 ;CHECK-LABEL: test_v16i8_post_reg_st3lane:
 ;CHECK: st3.b { v0, v1, v2 }[0], [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st3lane.v16i8.p0i8(<16 x i8> %B, <16 x i8> %C, <16 x i8> %D, i64 0, i8* %A)
-  %tmp = getelementptr i8* %A, i64 %inc
+  %tmp = getelementptr i8, i8* %A, i64 %inc
   ret i8* %tmp
 }
 
@@ -5257,7 +5257,7 @@ define i8* @test_v8i8_post_imm_st3lane(i
 ;CHECK-LABEL: test_v8i8_post_imm_st3lane:
 ;CHECK: st3.b { v0, v1, v2 }[0], [x0], #3
   call void @llvm.aarch64.neon.st3lane.v8i8.p0i8(<8 x i8> %B, <8 x i8> %C, <8 x i8> %D, i64 0, i8* %A)
-  %tmp = getelementptr i8* %A, i32 3
+  %tmp = getelementptr i8, i8* %A, i32 3
   ret i8* %tmp
 }
 
@@ -5265,7 +5265,7 @@ define i8* @test_v8i8_post_reg_st3lane(i
 ;CHECK-LABEL: test_v8i8_post_reg_st3lane:
 ;CHECK: st3.b { v0, v1, v2 }[0], [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st3lane.v8i8.p0i8(<8 x i8> %B, <8 x i8> %C, <8 x i8> %D, i64 0, i8* %A)
-  %tmp = getelementptr i8* %A, i64 %inc
+  %tmp = getelementptr i8, i8* %A, i64 %inc
   ret i8* %tmp
 }
 
@@ -5276,7 +5276,7 @@ define i16* @test_v8i16_post_imm_st3lane
 ;CHECK-LABEL: test_v8i16_post_imm_st3lane:
 ;CHECK: st3.h { v0, v1, v2 }[0], [x0], #6
   call void @llvm.aarch64.neon.st3lane.v8i16.p0i16(<8 x i16> %B, <8 x i16> %C, <8 x i16> %D, i64 0, i16* %A)
-  %tmp = getelementptr i16* %A, i32 3
+  %tmp = getelementptr i16, i16* %A, i32 3
   ret i16* %tmp
 }
 
@@ -5284,7 +5284,7 @@ define i16* @test_v8i16_post_reg_st3lane
 ;CHECK-LABEL: test_v8i16_post_reg_st3lane:
 ;CHECK: st3.h { v0, v1, v2 }[0], [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st3lane.v8i16.p0i16(<8 x i16> %B, <8 x i16> %C, <8 x i16> %D, i64 0, i16* %A)
-  %tmp = getelementptr i16* %A, i64 %inc
+  %tmp = getelementptr i16, i16* %A, i64 %inc
   ret i16* %tmp
 }
 
@@ -5295,7 +5295,7 @@ define i16* @test_v4i16_post_imm_st3lane
 ;CHECK-LABEL: test_v4i16_post_imm_st3lane:
 ;CHECK: st3.h { v0, v1, v2 }[0], [x0], #6
   call void @llvm.aarch64.neon.st3lane.v4i16.p0i16(<4 x i16> %B, <4 x i16> %C, <4 x i16> %D, i64 0, i16* %A)
-  %tmp = getelementptr i16* %A, i32 3
+  %tmp = getelementptr i16, i16* %A, i32 3
   ret i16* %tmp
 }
 
@@ -5303,7 +5303,7 @@ define i16* @test_v4i16_post_reg_st3lane
 ;CHECK-LABEL: test_v4i16_post_reg_st3lane:
 ;CHECK: st3.h { v0, v1, v2 }[0], [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st3lane.v4i16.p0i16(<4 x i16> %B, <4 x i16> %C, <4 x i16> %D, i64 0, i16* %A)
-  %tmp = getelementptr i16* %A, i64 %inc
+  %tmp = getelementptr i16, i16* %A, i64 %inc
   ret i16* %tmp
 }
 
@@ -5314,7 +5314,7 @@ define i32* @test_v4i32_post_imm_st3lane
 ;CHECK-LABEL: test_v4i32_post_imm_st3lane:
 ;CHECK: st3.s { v0, v1, v2 }[0], [x0], #12
   call void @llvm.aarch64.neon.st3lane.v4i32.p0i32(<4 x i32> %B, <4 x i32> %C, <4 x i32> %D, i64 0, i32* %A)
-  %tmp = getelementptr i32* %A, i32 3
+  %tmp = getelementptr i32, i32* %A, i32 3
   ret i32* %tmp
 }
 
@@ -5322,7 +5322,7 @@ define i32* @test_v4i32_post_reg_st3lane
 ;CHECK-LABEL: test_v4i32_post_reg_st3lane:
 ;CHECK: st3.s { v0, v1, v2 }[0], [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st3lane.v4i32.p0i32(<4 x i32> %B, <4 x i32> %C, <4 x i32> %D, i64 0, i32* %A)
-  %tmp = getelementptr i32* %A, i64 %inc
+  %tmp = getelementptr i32, i32* %A, i64 %inc
   ret i32* %tmp
 }
 
@@ -5333,7 +5333,7 @@ define i32* @test_v2i32_post_imm_st3lane
 ;CHECK-LABEL: test_v2i32_post_imm_st3lane:
 ;CHECK: st3.s { v0, v1, v2 }[0], [x0], #12
   call void @llvm.aarch64.neon.st3lane.v2i32.p0i32(<2 x i32> %B, <2 x i32> %C, <2 x i32> %D, i64 0, i32* %A)
-  %tmp = getelementptr i32* %A, i32 3
+  %tmp = getelementptr i32, i32* %A, i32 3
   ret i32* %tmp
 }
 
@@ -5341,7 +5341,7 @@ define i32* @test_v2i32_post_reg_st3lane
 ;CHECK-LABEL: test_v2i32_post_reg_st3lane:
 ;CHECK: st3.s { v0, v1, v2 }[0], [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st3lane.v2i32.p0i32(<2 x i32> %B, <2 x i32> %C, <2 x i32> %D, i64 0, i32* %A)
-  %tmp = getelementptr i32* %A, i64 %inc
+  %tmp = getelementptr i32, i32* %A, i64 %inc
   ret i32* %tmp
 }
 
@@ -5352,7 +5352,7 @@ define i64* @test_v2i64_post_imm_st3lane
 ;CHECK-LABEL: test_v2i64_post_imm_st3lane:
 ;CHECK: st3.d { v0, v1, v2 }[0], [x0], #24
   call void @llvm.aarch64.neon.st3lane.v2i64.p0i64(<2 x i64> %B, <2 x i64> %C, <2 x i64> %D, i64 0, i64* %A)
-  %tmp = getelementptr i64* %A, i64 3
+  %tmp = getelementptr i64, i64* %A, i64 3
   ret i64* %tmp
 }
 
@@ -5360,7 +5360,7 @@ define i64* @test_v2i64_post_reg_st3lane
 ;CHECK-LABEL: test_v2i64_post_reg_st3lane:
 ;CHECK: st3.d { v0, v1, v2 }[0], [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st3lane.v2i64.p0i64(<2 x i64> %B, <2 x i64> %C, <2 x i64> %D, i64 0, i64* %A)
-  %tmp = getelementptr i64* %A, i64 %inc
+  %tmp = getelementptr i64, i64* %A, i64 %inc
   ret i64* %tmp
 }
 
@@ -5371,7 +5371,7 @@ define i64* @test_v1i64_post_imm_st3lane
 ;CHECK-LABEL: test_v1i64_post_imm_st3lane:
 ;CHECK: st3.d { v0, v1, v2 }[0], [x0], #24
   call void @llvm.aarch64.neon.st3lane.v1i64.p0i64(<1 x i64> %B, <1 x i64> %C, <1 x i64> %D, i64 0, i64* %A)
-  %tmp = getelementptr i64* %A, i64 3
+  %tmp = getelementptr i64, i64* %A, i64 3
   ret i64* %tmp
 }
 
@@ -5379,7 +5379,7 @@ define i64* @test_v1i64_post_reg_st3lane
 ;CHECK-LABEL: test_v1i64_post_reg_st3lane:
 ;CHECK: st3.d { v0, v1, v2 }[0], [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st3lane.v1i64.p0i64(<1 x i64> %B, <1 x i64> %C, <1 x i64> %D, i64 0, i64* %A)
-  %tmp = getelementptr i64* %A, i64 %inc
+  %tmp = getelementptr i64, i64* %A, i64 %inc
   ret i64* %tmp
 }
 
@@ -5390,7 +5390,7 @@ define float* @test_v4f32_post_imm_st3la
 ;CHECK-LABEL: test_v4f32_post_imm_st3lane:
 ;CHECK: st3.s { v0, v1, v2 }[0], [x0], #12
   call void @llvm.aarch64.neon.st3lane.v4f32.p0f32(<4 x float> %B, <4 x float> %C, <4 x float> %D, i64 0, float* %A)
-  %tmp = getelementptr float* %A, i32 3
+  %tmp = getelementptr float, float* %A, i32 3
   ret float* %tmp
 }
 
@@ -5398,7 +5398,7 @@ define float* @test_v4f32_post_reg_st3la
 ;CHECK-LABEL: test_v4f32_post_reg_st3lane:
 ;CHECK: st3.s { v0, v1, v2 }[0], [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st3lane.v4f32.p0f32(<4 x float> %B, <4 x float> %C, <4 x float> %D, i64 0, float* %A)
-  %tmp = getelementptr float* %A, i64 %inc
+  %tmp = getelementptr float, float* %A, i64 %inc
   ret float* %tmp
 }
 
@@ -5409,7 +5409,7 @@ define float* @test_v2f32_post_imm_st3la
 ;CHECK-LABEL: test_v2f32_post_imm_st3lane:
 ;CHECK: st3.s { v0, v1, v2 }[0], [x0], #12
   call void @llvm.aarch64.neon.st3lane.v2f32.p0f32(<2 x float> %B, <2 x float> %C, <2 x float> %D, i64 0, float* %A)
-  %tmp = getelementptr float* %A, i32 3
+  %tmp = getelementptr float, float* %A, i32 3
   ret float* %tmp
 }
 
@@ -5417,7 +5417,7 @@ define float* @test_v2f32_post_reg_st3la
 ;CHECK-LABEL: test_v2f32_post_reg_st3lane:
 ;CHECK: st3.s { v0, v1, v2 }[0], [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st3lane.v2f32.p0f32(<2 x float> %B, <2 x float> %C, <2 x float> %D, i64 0, float* %A)
-  %tmp = getelementptr float* %A, i64 %inc
+  %tmp = getelementptr float, float* %A, i64 %inc
   ret float* %tmp
 }
 
@@ -5428,7 +5428,7 @@ define double* @test_v2f64_post_imm_st3l
 ;CHECK-LABEL: test_v2f64_post_imm_st3lane:
 ;CHECK: st3.d { v0, v1, v2 }[0], [x0], #24
   call void @llvm.aarch64.neon.st3lane.v2f64.p0f64(<2 x double> %B, <2 x double> %C, <2 x double> %D, i64 0, double* %A)
-  %tmp = getelementptr double* %A, i64 3
+  %tmp = getelementptr double, double* %A, i64 3
   ret double* %tmp
 }
 
@@ -5436,7 +5436,7 @@ define double* @test_v2f64_post_reg_st3l
 ;CHECK-LABEL: test_v2f64_post_reg_st3lane:
 ;CHECK: st3.d { v0, v1, v2 }[0], [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st3lane.v2f64.p0f64(<2 x double> %B, <2 x double> %C, <2 x double> %D, i64 0, double* %A)
-  %tmp = getelementptr double* %A, i64 %inc
+  %tmp = getelementptr double, double* %A, i64 %inc
   ret double* %tmp
 }
 
@@ -5447,7 +5447,7 @@ define double* @test_v1f64_post_imm_st3l
 ;CHECK-LABEL: test_v1f64_post_imm_st3lane:
 ;CHECK: st3.d { v0, v1, v2 }[0], [x0], #24
   call void @llvm.aarch64.neon.st3lane.v1f64.p0f64(<1 x double> %B, <1 x double> %C, <1 x double> %D, i64 0, double* %A)
-  %tmp = getelementptr double* %A, i64 3
+  %tmp = getelementptr double, double* %A, i64 3
   ret double* %tmp
 }
 
@@ -5455,7 +5455,7 @@ define double* @test_v1f64_post_reg_st3l
 ;CHECK-LABEL: test_v1f64_post_reg_st3lane:
 ;CHECK: st3.d { v0, v1, v2 }[0], [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st3lane.v1f64.p0f64(<1 x double> %B, <1 x double> %C, <1 x double> %D, i64 0, double* %A)
-  %tmp = getelementptr double* %A, i64 %inc
+  %tmp = getelementptr double, double* %A, i64 %inc
   ret double* %tmp
 }
 
@@ -5466,7 +5466,7 @@ define i8* @test_v16i8_post_imm_st4lane(
 ;CHECK-LABEL: test_v16i8_post_imm_st4lane:
 ;CHECK: st4.b { v0, v1, v2, v3 }[0], [x0], #4
   call void @llvm.aarch64.neon.st4lane.v16i8.p0i8(<16 x i8> %B, <16 x i8> %C, <16 x i8> %D, <16 x i8> %E, i64 0, i8* %A)
-  %tmp = getelementptr i8* %A, i32 4
+  %tmp = getelementptr i8, i8* %A, i32 4
   ret i8* %tmp
 }
 
@@ -5474,7 +5474,7 @@ define i8* @test_v16i8_post_reg_st4lane(
 ;CHECK-LABEL: test_v16i8_post_reg_st4lane:
 ;CHECK: st4.b { v0, v1, v2, v3 }[0], [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st4lane.v16i8.p0i8(<16 x i8> %B, <16 x i8> %C, <16 x i8> %D, <16 x i8> %E, i64 0, i8* %A)
-  %tmp = getelementptr i8* %A, i64 %inc
+  %tmp = getelementptr i8, i8* %A, i64 %inc
   ret i8* %tmp
 }
 
@@ -5485,7 +5485,7 @@ define i8* @test_v8i8_post_imm_st4lane(i
 ;CHECK-LABEL: test_v8i8_post_imm_st4lane:
 ;CHECK: st4.b { v0, v1, v2, v3 }[0], [x0], #4
   call void @llvm.aarch64.neon.st4lane.v8i8.p0i8(<8 x i8> %B, <8 x i8> %C, <8 x i8> %D, <8 x i8> %E, i64 0, i8* %A)
-  %tmp = getelementptr i8* %A, i32 4
+  %tmp = getelementptr i8, i8* %A, i32 4
   ret i8* %tmp
 }
 
@@ -5493,7 +5493,7 @@ define i8* @test_v8i8_post_reg_st4lane(i
 ;CHECK-LABEL: test_v8i8_post_reg_st4lane:
 ;CHECK: st4.b { v0, v1, v2, v3 }[0], [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st4lane.v8i8.p0i8(<8 x i8> %B, <8 x i8> %C, <8 x i8> %D, <8 x i8> %E, i64 0, i8* %A)
-  %tmp = getelementptr i8* %A, i64 %inc
+  %tmp = getelementptr i8, i8* %A, i64 %inc
   ret i8* %tmp
 }
 
@@ -5504,7 +5504,7 @@ define i16* @test_v8i16_post_imm_st4lane
 ;CHECK-LABEL: test_v8i16_post_imm_st4lane:
 ;CHECK: st4.h { v0, v1, v2, v3 }[0], [x0], #8
   call void @llvm.aarch64.neon.st4lane.v8i16.p0i16(<8 x i16> %B, <8 x i16> %C, <8 x i16> %D, <8 x i16> %E, i64 0, i16* %A)
-  %tmp = getelementptr i16* %A, i32 4
+  %tmp = getelementptr i16, i16* %A, i32 4
   ret i16* %tmp
 }
 
@@ -5512,7 +5512,7 @@ define i16* @test_v8i16_post_reg_st4lane
 ;CHECK-LABEL: test_v8i16_post_reg_st4lane:
 ;CHECK: st4.h { v0, v1, v2, v3 }[0], [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st4lane.v8i16.p0i16(<8 x i16> %B, <8 x i16> %C, <8 x i16> %D, <8 x i16> %E, i64 0, i16* %A)
-  %tmp = getelementptr i16* %A, i64 %inc
+  %tmp = getelementptr i16, i16* %A, i64 %inc
   ret i16* %tmp
 }
 
@@ -5523,7 +5523,7 @@ define i16* @test_v4i16_post_imm_st4lane
 ;CHECK-LABEL: test_v4i16_post_imm_st4lane:
 ;CHECK: st4.h { v0, v1, v2, v3 }[0], [x0], #8
   call void @llvm.aarch64.neon.st4lane.v4i16.p0i16(<4 x i16> %B, <4 x i16> %C, <4 x i16> %D, <4 x i16> %E, i64 0, i16* %A)
-  %tmp = getelementptr i16* %A, i32 4
+  %tmp = getelementptr i16, i16* %A, i32 4
   ret i16* %tmp
 }
 
@@ -5531,7 +5531,7 @@ define i16* @test_v4i16_post_reg_st4lane
 ;CHECK-LABEL: test_v4i16_post_reg_st4lane:
 ;CHECK: st4.h { v0, v1, v2, v3 }[0], [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st4lane.v4i16.p0i16(<4 x i16> %B, <4 x i16> %C, <4 x i16> %D, <4 x i16> %E, i64 0, i16* %A)
-  %tmp = getelementptr i16* %A, i64 %inc
+  %tmp = getelementptr i16, i16* %A, i64 %inc
   ret i16* %tmp
 }
 
@@ -5542,7 +5542,7 @@ define i32* @test_v4i32_post_imm_st4lane
 ;CHECK-LABEL: test_v4i32_post_imm_st4lane:
 ;CHECK: st4.s { v0, v1, v2, v3 }[0], [x0], #16
   call void @llvm.aarch64.neon.st4lane.v4i32.p0i32(<4 x i32> %B, <4 x i32> %C, <4 x i32> %D, <4 x i32> %E, i64 0, i32* %A)
-  %tmp = getelementptr i32* %A, i32 4
+  %tmp = getelementptr i32, i32* %A, i32 4
   ret i32* %tmp
 }
 
@@ -5550,7 +5550,7 @@ define i32* @test_v4i32_post_reg_st4lane
 ;CHECK-LABEL: test_v4i32_post_reg_st4lane:
 ;CHECK: st4.s { v0, v1, v2, v3 }[0], [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st4lane.v4i32.p0i32(<4 x i32> %B, <4 x i32> %C, <4 x i32> %D, <4 x i32> %E, i64 0, i32* %A)
-  %tmp = getelementptr i32* %A, i64 %inc
+  %tmp = getelementptr i32, i32* %A, i64 %inc
   ret i32* %tmp
 }
 
@@ -5561,7 +5561,7 @@ define i32* @test_v2i32_post_imm_st4lane
 ;CHECK-LABEL: test_v2i32_post_imm_st4lane:
 ;CHECK: st4.s { v0, v1, v2, v3 }[0], [x0], #16
   call void @llvm.aarch64.neon.st4lane.v2i32.p0i32(<2 x i32> %B, <2 x i32> %C, <2 x i32> %D, <2 x i32> %E, i64 0, i32* %A)
-  %tmp = getelementptr i32* %A, i32 4
+  %tmp = getelementptr i32, i32* %A, i32 4
   ret i32* %tmp
 }
 
@@ -5569,7 +5569,7 @@ define i32* @test_v2i32_post_reg_st4lane
 ;CHECK-LABEL: test_v2i32_post_reg_st4lane:
 ;CHECK: st4.s { v0, v1, v2, v3 }[0], [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st4lane.v2i32.p0i32(<2 x i32> %B, <2 x i32> %C, <2 x i32> %D, <2 x i32> %E, i64 0, i32* %A)
-  %tmp = getelementptr i32* %A, i64 %inc
+  %tmp = getelementptr i32, i32* %A, i64 %inc
   ret i32* %tmp
 }
 
@@ -5580,7 +5580,7 @@ define i64* @test_v2i64_post_imm_st4lane
 ;CHECK-LABEL: test_v2i64_post_imm_st4lane:
 ;CHECK: st4.d { v0, v1, v2, v3 }[0], [x0], #32
   call void @llvm.aarch64.neon.st4lane.v2i64.p0i64(<2 x i64> %B, <2 x i64> %C, <2 x i64> %D, <2 x i64> %E, i64 0, i64* %A)
-  %tmp = getelementptr i64* %A, i64 4
+  %tmp = getelementptr i64, i64* %A, i64 4
   ret i64* %tmp
 }
 
@@ -5588,7 +5588,7 @@ define i64* @test_v2i64_post_reg_st4lane
 ;CHECK-LABEL: test_v2i64_post_reg_st4lane:
 ;CHECK: st4.d { v0, v1, v2, v3 }[0], [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st4lane.v2i64.p0i64(<2 x i64> %B, <2 x i64> %C, <2 x i64> %D, <2 x i64> %E, i64 0, i64* %A)
-  %tmp = getelementptr i64* %A, i64 %inc
+  %tmp = getelementptr i64, i64* %A, i64 %inc
   ret i64* %tmp
 }
 
@@ -5599,7 +5599,7 @@ define i64* @test_v1i64_post_imm_st4lane
 ;CHECK-LABEL: test_v1i64_post_imm_st4lane:
 ;CHECK: st4.d { v0, v1, v2, v3 }[0], [x0], #32
   call void @llvm.aarch64.neon.st4lane.v1i64.p0i64(<1 x i64> %B, <1 x i64> %C, <1 x i64> %D, <1 x i64> %E, i64 0, i64* %A)
-  %tmp = getelementptr i64* %A, i64 4
+  %tmp = getelementptr i64, i64* %A, i64 4
   ret i64* %tmp
 }
 
@@ -5607,7 +5607,7 @@ define i64* @test_v1i64_post_reg_st4lane
 ;CHECK-LABEL: test_v1i64_post_reg_st4lane:
 ;CHECK: st4.d { v0, v1, v2, v3 }[0], [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st4lane.v1i64.p0i64(<1 x i64> %B, <1 x i64> %C, <1 x i64> %D, <1 x i64> %E, i64 0, i64* %A)
-  %tmp = getelementptr i64* %A, i64 %inc
+  %tmp = getelementptr i64, i64* %A, i64 %inc
   ret i64* %tmp
 }
 
@@ -5618,7 +5618,7 @@ define float* @test_v4f32_post_imm_st4la
 ;CHECK-LABEL: test_v4f32_post_imm_st4lane:
 ;CHECK: st4.s { v0, v1, v2, v3 }[0], [x0], #16
   call void @llvm.aarch64.neon.st4lane.v4f32.p0f32(<4 x float> %B, <4 x float> %C, <4 x float> %D, <4 x float> %E, i64 0, float* %A)
-  %tmp = getelementptr float* %A, i32 4
+  %tmp = getelementptr float, float* %A, i32 4
   ret float* %tmp
 }
 
@@ -5626,7 +5626,7 @@ define float* @test_v4f32_post_reg_st4la
 ;CHECK-LABEL: test_v4f32_post_reg_st4lane:
 ;CHECK: st4.s { v0, v1, v2, v3 }[0], [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st4lane.v4f32.p0f32(<4 x float> %B, <4 x float> %C, <4 x float> %D, <4 x float> %E, i64 0, float* %A)
-  %tmp = getelementptr float* %A, i64 %inc
+  %tmp = getelementptr float, float* %A, i64 %inc
   ret float* %tmp
 }
 
@@ -5637,7 +5637,7 @@ define float* @test_v2f32_post_imm_st4la
 ;CHECK-LABEL: test_v2f32_post_imm_st4lane:
 ;CHECK: st4.s { v0, v1, v2, v3 }[0], [x0], #16
   call void @llvm.aarch64.neon.st4lane.v2f32.p0f32(<2 x float> %B, <2 x float> %C, <2 x float> %D, <2 x float> %E, i64 0, float* %A)
-  %tmp = getelementptr float* %A, i32 4
+  %tmp = getelementptr float, float* %A, i32 4
   ret float* %tmp
 }
 
@@ -5645,7 +5645,7 @@ define float* @test_v2f32_post_reg_st4la
 ;CHECK-LABEL: test_v2f32_post_reg_st4lane:
 ;CHECK: st4.s { v0, v1, v2, v3 }[0], [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st4lane.v2f32.p0f32(<2 x float> %B, <2 x float> %C, <2 x float> %D, <2 x float> %E, i64 0, float* %A)
-  %tmp = getelementptr float* %A, i64 %inc
+  %tmp = getelementptr float, float* %A, i64 %inc
   ret float* %tmp
 }
 
@@ -5656,7 +5656,7 @@ define double* @test_v2f64_post_imm_st4l
 ;CHECK-LABEL: test_v2f64_post_imm_st4lane:
 ;CHECK: st4.d { v0, v1, v2, v3 }[0], [x0], #32
   call void @llvm.aarch64.neon.st4lane.v2f64.p0f64(<2 x double> %B, <2 x double> %C, <2 x double> %D, <2 x double> %E, i64 0, double* %A)
-  %tmp = getelementptr double* %A, i64 4
+  %tmp = getelementptr double, double* %A, i64 4
   ret double* %tmp
 }
 
@@ -5664,7 +5664,7 @@ define double* @test_v2f64_post_reg_st4l
 ;CHECK-LABEL: test_v2f64_post_reg_st4lane:
 ;CHECK: st4.d { v0, v1, v2, v3 }[0], [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st4lane.v2f64.p0f64(<2 x double> %B, <2 x double> %C, <2 x double> %D, <2 x double> %E, i64 0, double* %A)
-  %tmp = getelementptr double* %A, i64 %inc
+  %tmp = getelementptr double, double* %A, i64 %inc
   ret double* %tmp
 }
 
@@ -5675,7 +5675,7 @@ define double* @test_v1f64_post_imm_st4l
 ;CHECK-LABEL: test_v1f64_post_imm_st4lane:
 ;CHECK: st4.d { v0, v1, v2, v3 }[0], [x0], #32
   call void @llvm.aarch64.neon.st4lane.v1f64.p0f64(<1 x double> %B, <1 x double> %C, <1 x double> %D, <1 x double> %E, i64 0, double* %A)
-  %tmp = getelementptr double* %A, i64 4
+  %tmp = getelementptr double, double* %A, i64 4
   ret double* %tmp
 }
 
@@ -5683,7 +5683,7 @@ define double* @test_v1f64_post_reg_st4l
 ;CHECK-LABEL: test_v1f64_post_reg_st4lane:
 ;CHECK: st4.d { v0, v1, v2, v3 }[0], [x0], x{{[0-9]+}}
   call void @llvm.aarch64.neon.st4lane.v1f64.p0f64(<1 x double> %B, <1 x double> %C, <1 x double> %D, <1 x double> %E, i64 0, double* %A)
-  %tmp = getelementptr double* %A, i64 %inc
+  %tmp = getelementptr double, double* %A, i64 %inc
   ret double* %tmp
 }
 
@@ -5709,7 +5709,7 @@ define <16 x i8> @test_v16i8_post_imm_ld
   %tmp15 = insertelement <16 x i8> %tmp14, i8 %tmp1, i32 13
   %tmp16 = insertelement <16 x i8> %tmp15, i8 %tmp1, i32 14
   %tmp17 = insertelement <16 x i8> %tmp16, i8 %tmp1, i32 15
-  %tmp18 = getelementptr i8* %bar, i64 1
+  %tmp18 = getelementptr i8, i8* %bar, i64 1
   store i8* %tmp18, i8** %ptr
   ret <16 x i8> %tmp17
 }
@@ -5734,7 +5734,7 @@ define <16 x i8> @test_v16i8_post_reg_ld
   %tmp15 = insertelement <16 x i8> %tmp14, i8 %tmp1, i32 13
   %tmp16 = insertelement <16 x i8> %tmp15, i8 %tmp1, i32 14
   %tmp17 = insertelement <16 x i8> %tmp16, i8 %tmp1, i32 15
-  %tmp18 = getelementptr i8* %bar, i64 %inc
+  %tmp18 = getelementptr i8, i8* %bar, i64 %inc
   store i8* %tmp18, i8** %ptr
   ret <16 x i8> %tmp17
 }
@@ -5751,7 +5751,7 @@ define <8 x i8> @test_v8i8_post_imm_ld1r
   %tmp7 = insertelement <8 x i8> %tmp6, i8 %tmp1, i32 5
   %tmp8 = insertelement <8 x i8> %tmp7, i8 %tmp1, i32 6
   %tmp9 = insertelement <8 x i8> %tmp8, i8 %tmp1, i32 7
-  %tmp10 = getelementptr i8* %bar, i64 1
+  %tmp10 = getelementptr i8, i8* %bar, i64 1
   store i8* %tmp10, i8** %ptr
   ret <8 x i8> %tmp9
 }
@@ -5768,7 +5768,7 @@ define <8 x i8> @test_v8i8_post_reg_ld1r
   %tmp7 = insertelement <8 x i8> %tmp6, i8 %tmp1, i32 5
   %tmp8 = insertelement <8 x i8> %tmp7, i8 %tmp1, i32 6
   %tmp9 = insertelement <8 x i8> %tmp8, i8 %tmp1, i32 7
-  %tmp10 = getelementptr i8* %bar, i64 %inc
+  %tmp10 = getelementptr i8, i8* %bar, i64 %inc
   store i8* %tmp10, i8** %ptr
   ret <8 x i8> %tmp9
 }
@@ -5785,7 +5785,7 @@ define <8 x i16> @test_v8i16_post_imm_ld
   %tmp7 = insertelement <8 x i16> %tmp6, i16 %tmp1, i32 5
   %tmp8 = insertelement <8 x i16> %tmp7, i16 %tmp1, i32 6
   %tmp9 = insertelement <8 x i16> %tmp8, i16 %tmp1, i32 7
-  %tmp10 = getelementptr i16* %bar, i64 1
+  %tmp10 = getelementptr i16, i16* %bar, i64 1
   store i16* %tmp10, i16** %ptr
   ret <8 x i16> %tmp9
 }
@@ -5802,7 +5802,7 @@ define <8 x i16> @test_v8i16_post_reg_ld
   %tmp7 = insertelement <8 x i16> %tmp6, i16 %tmp1, i32 5
   %tmp8 = insertelement <8 x i16> %tmp7, i16 %tmp1, i32 6
   %tmp9 = insertelement <8 x i16> %tmp8, i16 %tmp1, i32 7
-  %tmp10 = getelementptr i16* %bar, i64 %inc
+  %tmp10 = getelementptr i16, i16* %bar, i64 %inc
   store i16* %tmp10, i16** %ptr
   ret <8 x i16> %tmp9
 }
@@ -5815,7 +5815,7 @@ define <4 x i16> @test_v4i16_post_imm_ld
   %tmp3 = insertelement <4 x i16> %tmp2, i16 %tmp1, i32 1
   %tmp4 = insertelement <4 x i16> %tmp3, i16 %tmp1, i32 2
   %tmp5 = insertelement <4 x i16> %tmp4, i16 %tmp1, i32 3
-  %tmp6 = getelementptr i16* %bar, i64 1
+  %tmp6 = getelementptr i16, i16* %bar, i64 1
   store i16* %tmp6, i16** %ptr
   ret <4 x i16> %tmp5
 }
@@ -5828,7 +5828,7 @@ define <4 x i16> @test_v4i16_post_reg_ld
   %tmp3 = insertelement <4 x i16> %tmp2, i16 %tmp1, i32 1
   %tmp4 = insertelement <4 x i16> %tmp3, i16 %tmp1, i32 2
   %tmp5 = insertelement <4 x i16> %tmp4, i16 %tmp1, i32 3
-  %tmp6 = getelementptr i16* %bar, i64 %inc
+  %tmp6 = getelementptr i16, i16* %bar, i64 %inc
   store i16* %tmp6, i16** %ptr
   ret <4 x i16> %tmp5
 }
@@ -5841,7 +5841,7 @@ define <4 x i32> @test_v4i32_post_imm_ld
   %tmp3 = insertelement <4 x i32> %tmp2, i32 %tmp1, i32 1
   %tmp4 = insertelement <4 x i32> %tmp3, i32 %tmp1, i32 2
   %tmp5 = insertelement <4 x i32> %tmp4, i32 %tmp1, i32 3
-  %tmp6 = getelementptr i32* %bar, i64 1
+  %tmp6 = getelementptr i32, i32* %bar, i64 1
   store i32* %tmp6, i32** %ptr
   ret <4 x i32> %tmp5
 }
@@ -5854,7 +5854,7 @@ define <4 x i32> @test_v4i32_post_reg_ld
   %tmp3 = insertelement <4 x i32> %tmp2, i32 %tmp1, i32 1
   %tmp4 = insertelement <4 x i32> %tmp3, i32 %tmp1, i32 2
   %tmp5 = insertelement <4 x i32> %tmp4, i32 %tmp1, i32 3
-  %tmp6 = getelementptr i32* %bar, i64 %inc
+  %tmp6 = getelementptr i32, i32* %bar, i64 %inc
   store i32* %tmp6, i32** %ptr
   ret <4 x i32> %tmp5
 }
@@ -5865,7 +5865,7 @@ define <2 x i32> @test_v2i32_post_imm_ld
   %tmp1 = load i32* %bar
   %tmp2 = insertelement <2 x i32> <i32 undef, i32 undef>, i32 %tmp1, i32 0
   %tmp3 = insertelement <2 x i32> %tmp2, i32 %tmp1, i32 1
-  %tmp4 = getelementptr i32* %bar, i64 1
+  %tmp4 = getelementptr i32, i32* %bar, i64 1
   store i32* %tmp4, i32** %ptr
   ret <2 x i32> %tmp3
 }
@@ -5876,7 +5876,7 @@ define <2 x i32> @test_v2i32_post_reg_ld
   %tmp1 = load i32* %bar
   %tmp2 = insertelement <2 x i32> <i32 undef, i32 undef>, i32 %tmp1, i32 0
   %tmp3 = insertelement <2 x i32> %tmp2, i32 %tmp1, i32 1
-  %tmp4 = getelementptr i32* %bar, i64 %inc
+  %tmp4 = getelementptr i32, i32* %bar, i64 %inc
   store i32* %tmp4, i32** %ptr
   ret <2 x i32> %tmp3
 }
@@ -5887,7 +5887,7 @@ define <2 x i64> @test_v2i64_post_imm_ld
   %tmp1 = load i64* %bar
   %tmp2 = insertelement <2 x i64> <i64 undef, i64 undef>, i64 %tmp1, i32 0
   %tmp3 = insertelement <2 x i64> %tmp2, i64 %tmp1, i32 1
-  %tmp4 = getelementptr i64* %bar, i64 1
+  %tmp4 = getelementptr i64, i64* %bar, i64 1
   store i64* %tmp4, i64** %ptr
   ret <2 x i64> %tmp3
 }
@@ -5898,7 +5898,7 @@ define <2 x i64> @test_v2i64_post_reg_ld
   %tmp1 = load i64* %bar
   %tmp2 = insertelement <2 x i64> <i64 undef, i64 undef>, i64 %tmp1, i32 0
   %tmp3 = insertelement <2 x i64> %tmp2, i64 %tmp1, i32 1
-  %tmp4 = getelementptr i64* %bar, i64 %inc
+  %tmp4 = getelementptr i64, i64* %bar, i64 %inc
   store i64* %tmp4, i64** %ptr
   ret <2 x i64> %tmp3
 }
@@ -5911,7 +5911,7 @@ define <4 x float> @test_v4f32_post_imm_
   %tmp3 = insertelement <4 x float> %tmp2, float %tmp1, i32 1
   %tmp4 = insertelement <4 x float> %tmp3, float %tmp1, i32 2
   %tmp5 = insertelement <4 x float> %tmp4, float %tmp1, i32 3
-  %tmp6 = getelementptr float* %bar, i64 1
+  %tmp6 = getelementptr float, float* %bar, i64 1
   store float* %tmp6, float** %ptr
   ret <4 x float> %tmp5
 }
@@ -5924,7 +5924,7 @@ define <4 x float> @test_v4f32_post_reg_
   %tmp3 = insertelement <4 x float> %tmp2, float %tmp1, i32 1
   %tmp4 = insertelement <4 x float> %tmp3, float %tmp1, i32 2
   %tmp5 = insertelement <4 x float> %tmp4, float %tmp1, i32 3
-  %tmp6 = getelementptr float* %bar, i64 %inc
+  %tmp6 = getelementptr float, float* %bar, i64 %inc
   store float* %tmp6, float** %ptr
   ret <4 x float> %tmp5
 }
@@ -5935,7 +5935,7 @@ define <2 x float> @test_v2f32_post_imm_
   %tmp1 = load float* %bar
   %tmp2 = insertelement <2 x float> <float undef, float undef>, float %tmp1, i32 0
   %tmp3 = insertelement <2 x float> %tmp2, float %tmp1, i32 1
-  %tmp4 = getelementptr float* %bar, i64 1
+  %tmp4 = getelementptr float, float* %bar, i64 1
   store float* %tmp4, float** %ptr
   ret <2 x float> %tmp3
 }
@@ -5946,7 +5946,7 @@ define <2 x float> @test_v2f32_post_reg_
   %tmp1 = load float* %bar
   %tmp2 = insertelement <2 x float> <float undef, float undef>, float %tmp1, i32 0
   %tmp3 = insertelement <2 x float> %tmp2, float %tmp1, i32 1
-  %tmp4 = getelementptr float* %bar, i64 %inc
+  %tmp4 = getelementptr float, float* %bar, i64 %inc
   store float* %tmp4, float** %ptr
   ret <2 x float> %tmp3
 }
@@ -5957,7 +5957,7 @@ define <2 x double> @test_v2f64_post_imm
   %tmp1 = load double* %bar
   %tmp2 = insertelement <2 x double> <double undef, double undef>, double %tmp1, i32 0
   %tmp3 = insertelement <2 x double> %tmp2, double %tmp1, i32 1
-  %tmp4 = getelementptr double* %bar, i64 1
+  %tmp4 = getelementptr double, double* %bar, i64 1
   store double* %tmp4, double** %ptr
   ret <2 x double> %tmp3
 }
@@ -5968,7 +5968,7 @@ define <2 x double> @test_v2f64_post_reg
   %tmp1 = load double* %bar
   %tmp2 = insertelement <2 x double> <double undef, double undef>, double %tmp1, i32 0
   %tmp3 = insertelement <2 x double> %tmp2, double %tmp1, i32 1
-  %tmp4 = getelementptr double* %bar, i64 %inc
+  %tmp4 = getelementptr double, double* %bar, i64 %inc
   store double* %tmp4, double** %ptr
   ret <2 x double> %tmp3
 }
@@ -5978,7 +5978,7 @@ define <16 x i8> @test_v16i8_post_imm_ld
 ; CHECK: ld1.b { v0 }[1], [x0], #1
   %tmp1 = load i8* %bar
   %tmp2 = insertelement <16 x i8> %A, i8 %tmp1, i32 1
-  %tmp3 = getelementptr i8* %bar, i64 1
+  %tmp3 = getelementptr i8, i8* %bar, i64 1
   store i8* %tmp3, i8** %ptr
   ret <16 x i8> %tmp2
 }
@@ -5988,7 +5988,7 @@ define <16 x i8> @test_v16i8_post_reg_ld
 ; CHECK: ld1.b { v0 }[1], [x0], x{{[0-9]+}}
   %tmp1 = load i8* %bar
   %tmp2 = insertelement <16 x i8> %A, i8 %tmp1, i32 1
-  %tmp3 = getelementptr i8* %bar, i64 %inc
+  %tmp3 = getelementptr i8, i8* %bar, i64 %inc
   store i8* %tmp3, i8** %ptr
   ret <16 x i8> %tmp2
 }
@@ -5998,7 +5998,7 @@ define <8 x i8> @test_v8i8_post_imm_ld1l
 ; CHECK: ld1.b { v0 }[1], [x0], #1
   %tmp1 = load i8* %bar
   %tmp2 = insertelement <8 x i8> %A, i8 %tmp1, i32 1
-  %tmp3 = getelementptr i8* %bar, i64 1
+  %tmp3 = getelementptr i8, i8* %bar, i64 1
   store i8* %tmp3, i8** %ptr
   ret <8 x i8> %tmp2
 }
@@ -6008,7 +6008,7 @@ define <8 x i8> @test_v8i8_post_reg_ld1l
 ; CHECK: ld1.b { v0 }[1], [x0], x{{[0-9]+}}
   %tmp1 = load i8* %bar
   %tmp2 = insertelement <8 x i8> %A, i8 %tmp1, i32 1
-  %tmp3 = getelementptr i8* %bar, i64 %inc
+  %tmp3 = getelementptr i8, i8* %bar, i64 %inc
   store i8* %tmp3, i8** %ptr
   ret <8 x i8> %tmp2
 }
@@ -6018,7 +6018,7 @@ define <8 x i16> @test_v8i16_post_imm_ld
 ; CHECK: ld1.h { v0 }[1], [x0], #2
   %tmp1 = load i16* %bar
   %tmp2 = insertelement <8 x i16> %A, i16 %tmp1, i32 1
-  %tmp3 = getelementptr i16* %bar, i64 1
+  %tmp3 = getelementptr i16, i16* %bar, i64 1
   store i16* %tmp3, i16** %ptr
   ret <8 x i16> %tmp2
 }
@@ -6028,7 +6028,7 @@ define <8 x i16> @test_v8i16_post_reg_ld
 ; CHECK: ld1.h { v0 }[1], [x0], x{{[0-9]+}}
   %tmp1 = load i16* %bar
   %tmp2 = insertelement <8 x i16> %A, i16 %tmp1, i32 1
-  %tmp3 = getelementptr i16* %bar, i64 %inc
+  %tmp3 = getelementptr i16, i16* %bar, i64 %inc
   store i16* %tmp3, i16** %ptr
   ret <8 x i16> %tmp2
 }
@@ -6038,7 +6038,7 @@ define <4 x i16> @test_v4i16_post_imm_ld
 ; CHECK: ld1.h { v0 }[1], [x0], #2
   %tmp1 = load i16* %bar
   %tmp2 = insertelement <4 x i16> %A, i16 %tmp1, i32 1
-  %tmp3 = getelementptr i16* %bar, i64 1
+  %tmp3 = getelementptr i16, i16* %bar, i64 1
   store i16* %tmp3, i16** %ptr
   ret <4 x i16> %tmp2
 }
@@ -6048,7 +6048,7 @@ define <4 x i16> @test_v4i16_post_reg_ld
 ; CHECK: ld1.h { v0 }[1], [x0], x{{[0-9]+}}
   %tmp1 = load i16* %bar
   %tmp2 = insertelement <4 x i16> %A, i16 %tmp1, i32 1
-  %tmp3 = getelementptr i16* %bar, i64 %inc
+  %tmp3 = getelementptr i16, i16* %bar, i64 %inc
   store i16* %tmp3, i16** %ptr
   ret <4 x i16> %tmp2
 }
@@ -6058,7 +6058,7 @@ define <4 x i32> @test_v4i32_post_imm_ld
 ; CHECK: ld1.s { v0 }[1], [x0], #4
   %tmp1 = load i32* %bar
   %tmp2 = insertelement <4 x i32> %A, i32 %tmp1, i32 1
-  %tmp3 = getelementptr i32* %bar, i64 1
+  %tmp3 = getelementptr i32, i32* %bar, i64 1
   store i32* %tmp3, i32** %ptr
   ret <4 x i32> %tmp2
 }
@@ -6068,7 +6068,7 @@ define <4 x i32> @test_v4i32_post_reg_ld
 ; CHECK: ld1.s { v0 }[1], [x0], x{{[0-9]+}}
   %tmp1 = load i32* %bar
   %tmp2 = insertelement <4 x i32> %A, i32 %tmp1, i32 1
-  %tmp3 = getelementptr i32* %bar, i64 %inc
+  %tmp3 = getelementptr i32, i32* %bar, i64 %inc
   store i32* %tmp3, i32** %ptr
   ret <4 x i32> %tmp2
 }
@@ -6078,7 +6078,7 @@ define <2 x i32> @test_v2i32_post_imm_ld
 ; CHECK: ld1.s { v0 }[1], [x0], #4
   %tmp1 = load i32* %bar
   %tmp2 = insertelement <2 x i32> %A, i32 %tmp1, i32 1
-  %tmp3 = getelementptr i32* %bar, i64 1
+  %tmp3 = getelementptr i32, i32* %bar, i64 1
   store i32* %tmp3, i32** %ptr
   ret <2 x i32> %tmp2
 }
@@ -6088,7 +6088,7 @@ define <2 x i32> @test_v2i32_post_reg_ld
 ; CHECK: ld1.s { v0 }[1], [x0], x{{[0-9]+}}
   %tmp1 = load i32* %bar
   %tmp2 = insertelement <2 x i32> %A, i32 %tmp1, i32 1
-  %tmp3 = getelementptr i32* %bar, i64 %inc
+  %tmp3 = getelementptr i32, i32* %bar, i64 %inc
   store i32* %tmp3, i32** %ptr
   ret <2 x i32> %tmp2
 }
@@ -6098,7 +6098,7 @@ define <2 x i64> @test_v2i64_post_imm_ld
 ; CHECK: ld1.d { v0 }[1], [x0], #8
   %tmp1 = load i64* %bar
   %tmp2 = insertelement <2 x i64> %A, i64 %tmp1, i32 1
-  %tmp3 = getelementptr i64* %bar, i64 1
+  %tmp3 = getelementptr i64, i64* %bar, i64 1
   store i64* %tmp3, i64** %ptr
   ret <2 x i64> %tmp2
 }
@@ -6108,7 +6108,7 @@ define <2 x i64> @test_v2i64_post_reg_ld
 ; CHECK: ld1.d { v0 }[1], [x0], x{{[0-9]+}}
   %tmp1 = load i64* %bar
   %tmp2 = insertelement <2 x i64> %A, i64 %tmp1, i32 1
-  %tmp3 = getelementptr i64* %bar, i64 %inc
+  %tmp3 = getelementptr i64, i64* %bar, i64 %inc
   store i64* %tmp3, i64** %ptr
   ret <2 x i64> %tmp2
 }
@@ -6118,7 +6118,7 @@ define <4 x float> @test_v4f32_post_imm_
 ; CHECK: ld1.s { v0 }[1], [x0], #4
   %tmp1 = load float* %bar
   %tmp2 = insertelement <4 x float> %A, float %tmp1, i32 1
-  %tmp3 = getelementptr float* %bar, i64 1
+  %tmp3 = getelementptr float, float* %bar, i64 1
   store float* %tmp3, float** %ptr
   ret <4 x float> %tmp2
 }
@@ -6128,7 +6128,7 @@ define <4 x float> @test_v4f32_post_reg_
 ; CHECK: ld1.s { v0 }[1], [x0], x{{[0-9]+}}
   %tmp1 = load float* %bar
   %tmp2 = insertelement <4 x float> %A, float %tmp1, i32 1
-  %tmp3 = getelementptr float* %bar, i64 %inc
+  %tmp3 = getelementptr float, float* %bar, i64 %inc
   store float* %tmp3, float** %ptr
   ret <4 x float> %tmp2
 }
@@ -6138,7 +6138,7 @@ define <2 x float> @test_v2f32_post_imm_
 ; CHECK: ld1.s { v0 }[1], [x0], #4
   %tmp1 = load float* %bar
   %tmp2 = insertelement <2 x float> %A, float %tmp1, i32 1
-  %tmp3 = getelementptr float* %bar, i64 1
+  %tmp3 = getelementptr float, float* %bar, i64 1
   store float* %tmp3, float** %ptr
   ret <2 x float> %tmp2
 }
@@ -6148,7 +6148,7 @@ define <2 x float> @test_v2f32_post_reg_
 ; CHECK: ld1.s { v0 }[1], [x0], x{{[0-9]+}}
   %tmp1 = load float* %bar
   %tmp2 = insertelement <2 x float> %A, float %tmp1, i32 1
-  %tmp3 = getelementptr float* %bar, i64 %inc
+  %tmp3 = getelementptr float, float* %bar, i64 %inc
   store float* %tmp3, float** %ptr
   ret <2 x float> %tmp2
 }
@@ -6158,7 +6158,7 @@ define <2 x double> @test_v2f64_post_imm
 ; CHECK: ld1.d { v0 }[1], [x0], #8
   %tmp1 = load double* %bar
   %tmp2 = insertelement <2 x double> %A, double %tmp1, i32 1
-  %tmp3 = getelementptr double* %bar, i64 1
+  %tmp3 = getelementptr double, double* %bar, i64 1
   store double* %tmp3, double** %ptr
   ret <2 x double> %tmp2
 }
@@ -6168,7 +6168,7 @@ define <2 x double> @test_v2f64_post_reg
 ; CHECK: ld1.d { v0 }[1], [x0], x{{[0-9]+}}
   %tmp1 = load double* %bar
   %tmp2 = insertelement <2 x double> %A, double %tmp1, i32 1
-  %tmp3 = getelementptr double* %bar, i64 %inc
+  %tmp3 = getelementptr double, double* %bar, i64 %inc
   store double* %tmp3, double** %ptr
   ret <2 x double> %tmp2
 }
\ No newline at end of file

Modified: llvm/trunk/test/CodeGen/AArch64/arm64-inline-asm.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/arm64-inline-asm.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/arm64-inline-asm.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/arm64-inline-asm.ll Fri Feb 27 13:29:02 2015
@@ -136,7 +136,7 @@ entry:
   ; CHECK-LABEL: t10:
   %data = alloca <2 x float>, align 8
   %a = alloca [2 x float], align 4
-  %arraydecay = getelementptr inbounds [2 x float]* %a, i32 0, i32 0
+  %arraydecay = getelementptr inbounds [2 x float], [2 x float]* %a, i32 0, i32 0
   %0 = load <2 x float>* %data, align 8
   call void asm sideeffect "ldr ${1:q}, [$0]\0A", "r,w"(float* %arraydecay, <2 x float> %0) nounwind
   ; CHECK: ldr {{q[0-9]+}}, [{{x[0-9]+}}]

Modified: llvm/trunk/test/CodeGen/AArch64/arm64-large-frame.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/arm64-large-frame.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/arm64-large-frame.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/arm64-large-frame.ll Fri Feb 27 13:29:02 2015
@@ -23,7 +23,7 @@ define void @test_bigframe() {
 ; CHECK: add {{x[0-9]+}}, [[TMP1]], #3344
   store volatile i8* %var1, i8** @addr
 
-  %var1plus2 = getelementptr i8* %var1, i32 2
+  %var1plus2 = getelementptr i8, i8* %var1, i32 2
   store volatile i8* %var1plus2, i8** @addr
 
 ; CHECK: add [[TMP:x[0-9]+]], sp, #4095, lsl #12
@@ -31,12 +31,12 @@ define void @test_bigframe() {
 ; CHECK: add {{x[0-9]+}}, [[TMP1]], #3328
   store volatile i8* %var2, i8** @addr
 
-  %var2plus2 = getelementptr i8* %var2, i32 2
+  %var2plus2 = getelementptr i8, i8* %var2, i32 2
   store volatile i8* %var2plus2, i8** @addr
 
   store volatile i8* %var3, i8** @addr
 
-  %var3plus2 = getelementptr i8* %var3, i32 2
+  %var3plus2 = getelementptr i8, i8* %var3, i32 2
   store volatile i8* %var3plus2, i8** @addr
 
 ; CHECK: add sp, sp, #4095, lsl #12

Modified: llvm/trunk/test/CodeGen/AArch64/arm64-ldp.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/arm64-ldp.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/arm64-ldp.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/arm64-ldp.ll Fri Feb 27 13:29:02 2015
@@ -6,7 +6,7 @@
 ; CHECK: ldp
 define i32 @ldp_int(i32* %p) nounwind {
   %tmp = load i32* %p, align 4
-  %add.ptr = getelementptr inbounds i32* %p, i64 1
+  %add.ptr = getelementptr inbounds i32, i32* %p, i64 1
   %tmp1 = load i32* %add.ptr, align 4
   %add = add nsw i32 %tmp1, %tmp
   ret i32 %add
@@ -16,7 +16,7 @@ define i32 @ldp_int(i32* %p) nounwind {
 ; CHECK: ldpsw
 define i64 @ldp_sext_int(i32* %p) nounwind {
   %tmp = load i32* %p, align 4
-  %add.ptr = getelementptr inbounds i32* %p, i64 1
+  %add.ptr = getelementptr inbounds i32, i32* %p, i64 1
   %tmp1 = load i32* %add.ptr, align 4
   %sexttmp = sext i32 %tmp to i64
   %sexttmp1 = sext i32 %tmp1 to i64
@@ -28,7 +28,7 @@ define i64 @ldp_sext_int(i32* %p) nounwi
 ; CHECK: ldp
 define i64 @ldp_long(i64* %p) nounwind {
   %tmp = load i64* %p, align 8
-  %add.ptr = getelementptr inbounds i64* %p, i64 1
+  %add.ptr = getelementptr inbounds i64, i64* %p, i64 1
   %tmp1 = load i64* %add.ptr, align 8
   %add = add nsw i64 %tmp1, %tmp
   ret i64 %add
@@ -38,7 +38,7 @@ define i64 @ldp_long(i64* %p) nounwind {
 ; CHECK: ldp
 define float @ldp_float(float* %p) nounwind {
   %tmp = load float* %p, align 4
-  %add.ptr = getelementptr inbounds float* %p, i64 1
+  %add.ptr = getelementptr inbounds float, float* %p, i64 1
   %tmp1 = load float* %add.ptr, align 4
   %add = fadd float %tmp, %tmp1
   ret float %add
@@ -48,7 +48,7 @@ define float @ldp_float(float* %p) nounw
 ; CHECK: ldp
 define double @ldp_double(double* %p) nounwind {
   %tmp = load double* %p, align 8
-  %add.ptr = getelementptr inbounds double* %p, i64 1
+  %add.ptr = getelementptr inbounds double, double* %p, i64 1
   %tmp1 = load double* %add.ptr, align 8
   %add = fadd double %tmp, %tmp1
   ret double %add
@@ -60,9 +60,9 @@ define i32 @ldur_int(i32* %a) nounwind {
 ; LDUR_CHK: ldp     [[DST1:w[0-9]+]], [[DST2:w[0-9]+]], [x0, #-8]
 ; LDUR_CHK-NEXT: add     w{{[0-9]+}}, [[DST2]], [[DST1]]
 ; LDUR_CHK-NEXT: ret
-  %p1 = getelementptr inbounds i32* %a, i32 -1
+  %p1 = getelementptr inbounds i32, i32* %a, i32 -1
   %tmp1 = load i32* %p1, align 2
-  %p2 = getelementptr inbounds i32* %a, i32 -2
+  %p2 = getelementptr inbounds i32, i32* %a, i32 -2
   %tmp2 = load i32* %p2, align 2
   %tmp3 = add i32 %tmp1, %tmp2
   ret i32 %tmp3
@@ -73,9 +73,9 @@ define i64 @ldur_sext_int(i32* %a) nounw
 ; LDUR_CHK: ldpsw     [[DST1:x[0-9]+]], [[DST2:x[0-9]+]], [x0, #-8]
 ; LDUR_CHK-NEXT: add     x{{[0-9]+}}, [[DST2]], [[DST1]]
 ; LDUR_CHK-NEXT: ret
-  %p1 = getelementptr inbounds i32* %a, i32 -1
+  %p1 = getelementptr inbounds i32, i32* %a, i32 -1
   %tmp1 = load i32* %p1, align 2
-  %p2 = getelementptr inbounds i32* %a, i32 -2
+  %p2 = getelementptr inbounds i32, i32* %a, i32 -2
   %tmp2 = load i32* %p2, align 2
   %sexttmp1 = sext i32 %tmp1 to i64
   %sexttmp2 = sext i32 %tmp2 to i64
@@ -88,9 +88,9 @@ define i64 @ldur_long(i64* %a) nounwind
 ; LDUR_CHK: ldp     [[DST1:x[0-9]+]], [[DST2:x[0-9]+]], [x0, #-16]
 ; LDUR_CHK-NEXT: add     x{{[0-9]+}}, [[DST2]], [[DST1]]
 ; LDUR_CHK-NEXT: ret
-  %p1 = getelementptr inbounds i64* %a, i64 -1
+  %p1 = getelementptr inbounds i64, i64* %a, i64 -1
   %tmp1 = load i64* %p1, align 2
-  %p2 = getelementptr inbounds i64* %a, i64 -2
+  %p2 = getelementptr inbounds i64, i64* %a, i64 -2
   %tmp2 = load i64* %p2, align 2
   %tmp3 = add i64 %tmp1, %tmp2
   ret i64 %tmp3
@@ -101,9 +101,9 @@ define float @ldur_float(float* %a) {
 ; LDUR_CHK: ldp     [[DST1:s[0-9]+]], [[DST2:s[0-9]+]], [x0, #-8]
 ; LDUR_CHK-NEXT: add     s{{[0-9]+}}, [[DST2]], [[DST1]]
 ; LDUR_CHK-NEXT: ret
-  %p1 = getelementptr inbounds float* %a, i64 -1
+  %p1 = getelementptr inbounds float, float* %a, i64 -1
   %tmp1 = load float* %p1, align 2
-  %p2 = getelementptr inbounds float* %a, i64 -2
+  %p2 = getelementptr inbounds float, float* %a, i64 -2
   %tmp2 = load float* %p2, align 2
   %tmp3 = fadd float %tmp1, %tmp2
   ret float %tmp3
@@ -114,9 +114,9 @@ define double @ldur_double(double* %a) {
 ; LDUR_CHK: ldp     [[DST1:d[0-9]+]], [[DST2:d[0-9]+]], [x0, #-16]
 ; LDUR_CHK-NEXT: add     d{{[0-9]+}}, [[DST2]], [[DST1]]
 ; LDUR_CHK-NEXT: ret
-  %p1 = getelementptr inbounds double* %a, i64 -1
+  %p1 = getelementptr inbounds double, double* %a, i64 -1
   %tmp1 = load double* %p1, align 2
-  %p2 = getelementptr inbounds double* %a, i64 -2
+  %p2 = getelementptr inbounds double, double* %a, i64 -2
   %tmp2 = load double* %p2, align 2
   %tmp3 = fadd double %tmp1, %tmp2
   ret double %tmp3
@@ -129,9 +129,9 @@ define i64 @pairUpBarelyIn(i64* %a) noun
 ; LDUR_CHK: ldp     [[DST1:x[0-9]+]], [[DST2:x[0-9]+]], [x0, #-256]
 ; LDUR_CHK-NEXT: add     x{{[0-9]+}}, [[DST2]], [[DST1]]
 ; LDUR_CHK-NEXT: ret
-  %p1 = getelementptr inbounds i64* %a, i64 -31
+  %p1 = getelementptr inbounds i64, i64* %a, i64 -31
   %tmp1 = load i64* %p1, align 2
-  %p2 = getelementptr inbounds i64* %a, i64 -32
+  %p2 = getelementptr inbounds i64, i64* %a, i64 -32
   %tmp2 = load i64* %p2, align 2
   %tmp3 = add i64 %tmp1, %tmp2
   ret i64 %tmp3
@@ -143,9 +143,9 @@ define i64 @pairUpBarelyInSext(i32* %a)
 ; LDUR_CHK: ldpsw     [[DST1:x[0-9]+]], [[DST2:x[0-9]+]], [x0, #-256]
 ; LDUR_CHK-NEXT: add     x{{[0-9]+}}, [[DST2]], [[DST1]]
 ; LDUR_CHK-NEXT: ret
-  %p1 = getelementptr inbounds i32* %a, i64 -63
+  %p1 = getelementptr inbounds i32, i32* %a, i64 -63
   %tmp1 = load i32* %p1, align 2
-  %p2 = getelementptr inbounds i32* %a, i64 -64
+  %p2 = getelementptr inbounds i32, i32* %a, i64 -64
   %tmp2 = load i32* %p2, align 2
   %sexttmp1 = sext i32 %tmp1 to i64
   %sexttmp2 = sext i32 %tmp2 to i64
@@ -160,9 +160,9 @@ define i64 @pairUpBarelyOut(i64* %a) nou
 ; are used---just check that there isn't an ldp before the add
 ; LDUR_CHK: add
 ; LDUR_CHK-NEXT: ret
-  %p1 = getelementptr inbounds i64* %a, i64 -32
+  %p1 = getelementptr inbounds i64, i64* %a, i64 -32
   %tmp1 = load i64* %p1, align 2
-  %p2 = getelementptr inbounds i64* %a, i64 -33
+  %p2 = getelementptr inbounds i64, i64* %a, i64 -33
   %tmp2 = load i64* %p2, align 2
   %tmp3 = add i64 %tmp1, %tmp2
   ret i64 %tmp3
@@ -175,9 +175,9 @@ define i64 @pairUpBarelyOutSext(i32* %a)
 ; are used---just check that there isn't an ldp before the add
 ; LDUR_CHK: add
 ; LDUR_CHK-NEXT: ret
-  %p1 = getelementptr inbounds i32* %a, i64 -64
+  %p1 = getelementptr inbounds i32, i32* %a, i64 -64
   %tmp1 = load i32* %p1, align 2
-  %p2 = getelementptr inbounds i32* %a, i64 -65
+  %p2 = getelementptr inbounds i32, i32* %a, i64 -65
   %tmp2 = load i32* %p2, align 2
   %sexttmp1 = sext i32 %tmp1 to i64
   %sexttmp2 = sext i32 %tmp2 to i64
@@ -192,15 +192,15 @@ define i64 @pairUpNotAligned(i64* %a) no
 ; LDUR_CHK-NEXT: ldur
 ; LDUR_CHK-NEXT: add
 ; LDUR_CHK-NEXT: ret
-  %p1 = getelementptr inbounds i64* %a, i64 -18
+  %p1 = getelementptr inbounds i64, i64* %a, i64 -18
   %bp1 = bitcast i64* %p1 to i8*
-  %bp1p1 = getelementptr inbounds i8* %bp1, i64 1
+  %bp1p1 = getelementptr inbounds i8, i8* %bp1, i64 1
   %dp1 = bitcast i8* %bp1p1 to i64*
   %tmp1 = load i64* %dp1, align 1
 
-  %p2 = getelementptr inbounds i64* %a, i64 -17
+  %p2 = getelementptr inbounds i64, i64* %a, i64 -17
   %bp2 = bitcast i64* %p2 to i8*
-  %bp2p1 = getelementptr inbounds i8* %bp2, i64 1
+  %bp2p1 = getelementptr inbounds i8, i8* %bp2, i64 1
   %dp2 = bitcast i8* %bp2p1 to i64*
   %tmp2 = load i64* %dp2, align 1
 
@@ -215,15 +215,15 @@ define i64 @pairUpNotAlignedSext(i32* %a
 ; LDUR_CHK-NEXT: ldursw
 ; LDUR_CHK-NEXT: add
 ; LDUR_CHK-NEXT: ret
-  %p1 = getelementptr inbounds i32* %a, i64 -18
+  %p1 = getelementptr inbounds i32, i32* %a, i64 -18
   %bp1 = bitcast i32* %p1 to i8*
-  %bp1p1 = getelementptr inbounds i8* %bp1, i64 1
+  %bp1p1 = getelementptr inbounds i8, i8* %bp1, i64 1
   %dp1 = bitcast i8* %bp1p1 to i32*
   %tmp1 = load i32* %dp1, align 1
 
-  %p2 = getelementptr inbounds i32* %a, i64 -17
+  %p2 = getelementptr inbounds i32, i32* %a, i64 -17
   %bp2 = bitcast i32* %p2 to i8*
-  %bp2p1 = getelementptr inbounds i8* %bp2, i64 1
+  %bp2p1 = getelementptr inbounds i8, i8* %bp2, i64 1
   %dp2 = bitcast i8* %bp2p1 to i32*
   %tmp2 = load i32* %dp2, align 1
 

Modified: llvm/trunk/test/CodeGen/AArch64/arm64-ldur.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/arm64-ldur.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/arm64-ldur.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/arm64-ldur.ll Fri Feb 27 13:29:02 2015
@@ -4,7 +4,7 @@ define i64 @_f0(i64* %p) {
 ; CHECK: f0:
 ; CHECK: ldur x0, [x0, #-8]
 ; CHECK-NEXT: ret
-  %tmp = getelementptr inbounds i64* %p, i64 -1
+  %tmp = getelementptr inbounds i64, i64* %p, i64 -1
   %ret = load i64* %tmp, align 2
   ret i64 %ret
 }
@@ -12,7 +12,7 @@ define i32 @_f1(i32* %p) {
 ; CHECK: f1:
 ; CHECK: ldur w0, [x0, #-4]
 ; CHECK-NEXT: ret
-  %tmp = getelementptr inbounds i32* %p, i64 -1
+  %tmp = getelementptr inbounds i32, i32* %p, i64 -1
   %ret = load i32* %tmp, align 2
   ret i32 %ret
 }
@@ -20,7 +20,7 @@ define i16 @_f2(i16* %p) {
 ; CHECK: f2:
 ; CHECK: ldurh w0, [x0, #-2]
 ; CHECK-NEXT: ret
-  %tmp = getelementptr inbounds i16* %p, i64 -1
+  %tmp = getelementptr inbounds i16, i16* %p, i64 -1
   %ret = load i16* %tmp, align 2
   ret i16 %ret
 }
@@ -28,7 +28,7 @@ define i8 @_f3(i8* %p) {
 ; CHECK: f3:
 ; CHECK: ldurb w0, [x0, #-1]
 ; CHECK-NEXT: ret
-  %tmp = getelementptr inbounds i8* %p, i64 -1
+  %tmp = getelementptr inbounds i8, i8* %p, i64 -1
   %ret = load i8* %tmp, align 2
   ret i8 %ret
 }
@@ -37,7 +37,7 @@ define i64 @zext32(i8* %a) nounwind ssp
 ; CHECK-LABEL: zext32:
 ; CHECK: ldur w0, [x0, #-12]
 ; CHECK-NEXT: ret
-  %p = getelementptr inbounds i8* %a, i64 -12
+  %p = getelementptr inbounds i8, i8* %a, i64 -12
   %tmp1 = bitcast i8* %p to i32*
   %tmp2 = load i32* %tmp1, align 4
   %ret = zext i32 %tmp2 to i64
@@ -48,7 +48,7 @@ define i64 @zext16(i8* %a) nounwind ssp
 ; CHECK-LABEL: zext16:
 ; CHECK: ldurh w0, [x0, #-12]
 ; CHECK-NEXT: ret
-  %p = getelementptr inbounds i8* %a, i64 -12
+  %p = getelementptr inbounds i8, i8* %a, i64 -12
   %tmp1 = bitcast i8* %p to i16*
   %tmp2 = load i16* %tmp1, align 2
   %ret = zext i16 %tmp2 to i64
@@ -59,7 +59,7 @@ define i64 @zext8(i8* %a) nounwind ssp {
 ; CHECK-LABEL: zext8:
 ; CHECK: ldurb w0, [x0, #-12]
 ; CHECK-NEXT: ret
-  %p = getelementptr inbounds i8* %a, i64 -12
+  %p = getelementptr inbounds i8, i8* %a, i64 -12
   %tmp2 = load i8* %p, align 1
   %ret = zext i8 %tmp2 to i64
 

Modified: llvm/trunk/test/CodeGen/AArch64/arm64-memset-inline.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/arm64-memset-inline.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/arm64-memset-inline.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/arm64-memset-inline.ll Fri Feb 27 13:29:02 2015
@@ -16,7 +16,7 @@ entry:
 ; CHECK: stp xzr, xzr, [sp, #16]
 ; CHECK: str xzr, [sp, #8]
   %buf = alloca [26 x i8], align 1
-  %0 = getelementptr inbounds [26 x i8]* %buf, i32 0, i32 0
+  %0 = getelementptr inbounds [26 x i8], [26 x i8]* %buf, i32 0, i32 0
   call void @llvm.memset.p0i8.i32(i8* %0, i8 0, i32 26, i32 1, i1 false)
   call void @something(i8* %0) nounwind
   ret void

Modified: llvm/trunk/test/CodeGen/AArch64/arm64-misched-basic-A53.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/arm64-misched-basic-A53.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/arm64-misched-basic-A53.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/arm64-misched-basic-A53.ll Fri Feb 27 13:29:02 2015
@@ -41,7 +41,7 @@ for.cond:
 for.body:                                         ; preds = %for.cond
   %3 = load i32* %i, align 4
   %idxprom = sext i32 %3 to i64
-  %arrayidx = getelementptr inbounds [8 x i32]* %x, i32 0, i64 %idxprom
+  %arrayidx = getelementptr inbounds [8 x i32], [8 x i32]* %x, i32 0, i64 %idxprom
   %4 = load i32* %arrayidx, align 4
   %add = add nsw i32 %4, 1
   store i32 %add, i32* %xx, align 4
@@ -56,7 +56,7 @@ for.body:
   store i32 %add3, i32* %xx, align 4
   %8 = load i32* %i, align 4
   %idxprom4 = sext i32 %8 to i64
-  %arrayidx5 = getelementptr inbounds [8 x i32]* %y, i32 0, i64 %idxprom4
+  %arrayidx5 = getelementptr inbounds [8 x i32], [8 x i32]* %y, i32 0, i64 %idxprom4
   %9 = load i32* %arrayidx5, align 4
   %10 = load i32* %yy, align 4
   %mul = mul nsw i32 %10, %9
@@ -116,7 +116,7 @@ attributes #1 = { nounwind }
 ; Nothing explicit to check other than llc not crashing.
 define { <16 x i8>, <16 x i8> } @test_v16i8_post_imm_ld2(i8* %A, i8** %ptr) {
   %ld2 = tail call { <16 x i8>, <16 x i8> } @llvm.aarch64.neon.ld2.v16i8.p0i8(i8* %A)
-  %tmp = getelementptr i8* %A, i32 32
+  %tmp = getelementptr i8, i8* %A, i32 32
   store i8* %tmp, i8** %ptr
   ret { <16 x i8>, <16 x i8> } %ld2
 }

Modified: llvm/trunk/test/CodeGen/AArch64/arm64-misched-basic-A57.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/arm64-misched-basic-A57.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/arm64-misched-basic-A57.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/arm64-misched-basic-A57.ll Fri Feb 27 13:29:02 2015
@@ -49,7 +49,7 @@ for.body:
   %3 = load i32* %yy, align 4
   %4 = load i32* %i, align 4
   %idxprom = sext i32 %4 to i64
-  %arrayidx = getelementptr inbounds [8 x i32]* %x, i32 0, i64 %idxprom
+  %arrayidx = getelementptr inbounds [8 x i32], [8 x i32]* %x, i32 0, i64 %idxprom
   %5 = load i32* %arrayidx, align 4
   %add = add nsw i32 %5, 1
   store i32 %add, i32* %xx, align 4
@@ -64,7 +64,7 @@ for.body:
   store i32 %add3, i32* %xx, align 4
   %9 = load i32* %i, align 4
   %idxprom4 = sext i32 %9 to i64
-  %arrayidx5 = getelementptr inbounds [8 x i32]* %y, i32 0, i64 %idxprom4
+  %arrayidx5 = getelementptr inbounds [8 x i32], [8 x i32]* %y, i32 0, i64 %idxprom4
   %10 = load i32* %arrayidx5, align 4
 
   %add4 = add nsw i32 %9, %add

Modified: llvm/trunk/test/CodeGen/AArch64/arm64-prefetch.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/arm64-prefetch.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/arm64-prefetch.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/arm64-prefetch.ll Fri Feb 27 13:29:02 2015
@@ -39,75 +39,75 @@ entry:
   %add = add nsw i32 %tmp1, %i
   %idxprom = sext i32 %add to i64
   %tmp2 = load i32** @a, align 8, !tbaa !3
-  %arrayidx = getelementptr inbounds i32* %tmp2, i64 %idxprom
+  %arrayidx = getelementptr inbounds i32, i32* %tmp2, i64 %idxprom
   %tmp3 = bitcast i32* %arrayidx to i8*
 
   ; CHECK: prfm pldl1strm
   call void @llvm.prefetch(i8* %tmp3, i32 0, i32 0, i32 1)
   %tmp4 = load i32** @a, align 8, !tbaa !3
-  %arrayidx3 = getelementptr inbounds i32* %tmp4, i64 %idxprom
+  %arrayidx3 = getelementptr inbounds i32, i32* %tmp4, i64 %idxprom
   %tmp5 = bitcast i32* %arrayidx3 to i8*
 
   ; CHECK: prfm pldl3keep
   call void @llvm.prefetch(i8* %tmp5, i32 0, i32 1, i32 1)
   %tmp6 = load i32** @a, align 8, !tbaa !3
-  %arrayidx6 = getelementptr inbounds i32* %tmp6, i64 %idxprom
+  %arrayidx6 = getelementptr inbounds i32, i32* %tmp6, i64 %idxprom
   %tmp7 = bitcast i32* %arrayidx6 to i8*
 
   ; CHECK: prfm pldl2keep
   call void @llvm.prefetch(i8* %tmp7, i32 0, i32 2, i32 1)
   %tmp8 = load i32** @a, align 8, !tbaa !3
-  %arrayidx9 = getelementptr inbounds i32* %tmp8, i64 %idxprom
+  %arrayidx9 = getelementptr inbounds i32, i32* %tmp8, i64 %idxprom
   %tmp9 = bitcast i32* %arrayidx9 to i8*
 
   ; CHECK: prfm pldl1keep
   call void @llvm.prefetch(i8* %tmp9, i32 0, i32 3, i32 1)
   %tmp10 = load i32** @a, align 8, !tbaa !3
-  %arrayidx12 = getelementptr inbounds i32* %tmp10, i64 %idxprom
+  %arrayidx12 = getelementptr inbounds i32, i32* %tmp10, i64 %idxprom
   %tmp11 = bitcast i32* %arrayidx12 to i8*
 
 
   ; CHECK: prfm plil1strm
   call void @llvm.prefetch(i8* %tmp11, i32 0, i32 0, i32 0)
   %tmp12 = load i32** @a, align 8, !tbaa !3
-  %arrayidx15 = getelementptr inbounds i32* %tmp12, i64 %idxprom
+  %arrayidx15 = getelementptr inbounds i32, i32* %tmp12, i64 %idxprom
   %tmp13 = bitcast i32* %arrayidx3 to i8*
 
   ; CHECK: prfm plil3keep
   call void @llvm.prefetch(i8* %tmp13, i32 0, i32 1, i32 0)
   %tmp14 = load i32** @a, align 8, !tbaa !3
-  %arrayidx18 = getelementptr inbounds i32* %tmp14, i64 %idxprom
+  %arrayidx18 = getelementptr inbounds i32, i32* %tmp14, i64 %idxprom
   %tmp15 = bitcast i32* %arrayidx6 to i8*
 
   ; CHECK: prfm plil2keep
   call void @llvm.prefetch(i8* %tmp15, i32 0, i32 2, i32 0)
   %tmp16 = load i32** @a, align 8, !tbaa !3
-  %arrayidx21 = getelementptr inbounds i32* %tmp16, i64 %idxprom
+  %arrayidx21 = getelementptr inbounds i32, i32* %tmp16, i64 %idxprom
   %tmp17 = bitcast i32* %arrayidx9 to i8*
 
   ; CHECK: prfm plil1keep
   call void @llvm.prefetch(i8* %tmp17, i32 0, i32 3, i32 0)
   %tmp18 = load i32** @a, align 8, !tbaa !3
-  %arrayidx24 = getelementptr inbounds i32* %tmp18, i64 %idxprom
+  %arrayidx24 = getelementptr inbounds i32, i32* %tmp18, i64 %idxprom
   %tmp19 = bitcast i32* %arrayidx12 to i8*
 
 
   ; CHECK: prfm pstl1strm
   call void @llvm.prefetch(i8* %tmp19, i32 1, i32 0, i32 1)
   %tmp20 = load i32** @a, align 8, !tbaa !3
-  %arrayidx27 = getelementptr inbounds i32* %tmp20, i64 %idxprom
+  %arrayidx27 = getelementptr inbounds i32, i32* %tmp20, i64 %idxprom
   %tmp21 = bitcast i32* %arrayidx15 to i8*
 
   ; CHECK: prfm pstl3keep
   call void @llvm.prefetch(i8* %tmp21, i32 1, i32 1, i32 1)
   %tmp22 = load i32** @a, align 8, !tbaa !3
-  %arrayidx30 = getelementptr inbounds i32* %tmp22, i64 %idxprom
+  %arrayidx30 = getelementptr inbounds i32, i32* %tmp22, i64 %idxprom
   %tmp23 = bitcast i32* %arrayidx18 to i8*
 
   ; CHECK: prfm pstl2keep
   call void @llvm.prefetch(i8* %tmp23, i32 1, i32 2, i32 1)
   %tmp24 = load i32** @a, align 8, !tbaa !3
-  %arrayidx33 = getelementptr inbounds i32* %tmp24, i64 %idxprom
+  %arrayidx33 = getelementptr inbounds i32, i32* %tmp24, i64 %idxprom
   %tmp25 = bitcast i32* %arrayidx21 to i8*
 
   ; CHECK: prfm pstl1keep

Modified: llvm/trunk/test/CodeGen/AArch64/arm64-register-offset-addressing.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/arm64-register-offset-addressing.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/arm64-register-offset-addressing.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/arm64-register-offset-addressing.ll Fri Feb 27 13:29:02 2015
@@ -5,7 +5,7 @@ define i8 @test_64bit_add(i16* %a, i64 %
 ; CHECK: lsl [[REG:x[0-9]+]], x1, #1
 ; CHECK: ldrb w0, [x0, [[REG]]]
 ; CHECK: ret
-  %tmp1 = getelementptr inbounds i16* %a, i64 %b
+  %tmp1 = getelementptr inbounds i16, i16* %a, i64 %b
   %tmp2 = load i16* %tmp1
   %tmp3 = trunc i16 %tmp2 to i8
   ret i8 %tmp3
@@ -18,7 +18,7 @@ define void @ldst_8bit(i8* %base, i64 %o
 
    %off32.sext.tmp = shl i64 %offset, 32
    %off32.sext = ashr i64 %off32.sext.tmp, 32
-   %addr8_sxtw = getelementptr i8* %base, i64 %off32.sext
+   %addr8_sxtw = getelementptr i8, i8* %base, i64 %off32.sext
    %val8_sxtw = load volatile i8* %addr8_sxtw
    %val32_signed = sext i8 %val8_sxtw to i32
    store volatile i32 %val32_signed, i32* @var_32bit

Modified: llvm/trunk/test/CodeGen/AArch64/arm64-rev.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/arm64-rev.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/arm64-rev.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/arm64-rev.ll Fri Feb 27 13:29:02 2015
@@ -217,7 +217,7 @@ entry:
   %0 = bitcast float* %source to <4 x float>*
   %tmp2 = load <4 x float>* %0, align 4
   %tmp5 = shufflevector <4 x float> <float 0.000000e+00, float undef, float undef, float undef>, <4 x float> %tmp2, <4 x i32> <i32 0, i32 7, i32 0, i32 0>
-  %arrayidx8 = getelementptr inbounds <4 x float>* %dest, i32 11
+  %arrayidx8 = getelementptr inbounds <4 x float>, <4 x float>* %dest, i32 11
   store <4 x float> %tmp5, <4 x float>* %arrayidx8, align 4
   ret void
 }

Modified: llvm/trunk/test/CodeGen/AArch64/arm64-scaled_iv.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/arm64-scaled_iv.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/arm64-scaled_iv.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/arm64-scaled_iv.ll Fri Feb 27 13:29:02 2015
@@ -17,15 +17,15 @@ for.body:
 ; CHECK-NOT: phi
   %indvars.iv = phi i64 [ 1, %entry ], [ %indvars.iv.next, %for.body ]
   %tmp = add nsw i64 %indvars.iv, -1
-  %arrayidx = getelementptr inbounds double* %b, i64 %tmp
+  %arrayidx = getelementptr inbounds double, double* %b, i64 %tmp
   %tmp1 = load double* %arrayidx, align 8
 ; The induction variable should carry the scaling factor: 1 * 8 = 8.
 ; CHECK: [[IVNEXT]] = add nuw nsw i64 [[IV]], 8
   %indvars.iv.next = add i64 %indvars.iv, 1
-  %arrayidx2 = getelementptr inbounds double* %c, i64 %indvars.iv.next
+  %arrayidx2 = getelementptr inbounds double, double* %c, i64 %indvars.iv.next
   %tmp2 = load double* %arrayidx2, align 8
   %mul = fmul double %tmp1, %tmp2
-  %arrayidx4 = getelementptr inbounds double* %a, i64 %indvars.iv
+  %arrayidx4 = getelementptr inbounds double, double* %a, i64 %indvars.iv
   store double %mul, double* %arrayidx4, align 8
   %lftr.wideiv = trunc i64 %indvars.iv.next to i32
 ; Comparison should be 19 * 8 = 152.

Modified: llvm/trunk/test/CodeGen/AArch64/arm64-scvt.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/arm64-scvt.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/arm64-scvt.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/arm64-scvt.ll Fri Feb 27 13:29:02 2015
@@ -75,7 +75,7 @@ define float @fct1(i8* nocapture %sp0) {
 ; CHECK-NEXT: ucvtf [[REG:s[0-9]+]], s[[REGNUM]]
 ; CHECK-NEXT: fmul s0, [[REG]], [[REG]]
 entry:
-  %addr = getelementptr i8* %sp0, i64 1
+  %addr = getelementptr i8, i8* %sp0, i64 1
   %pix_sp0.0.copyload = load i8* %addr, align 1
   %val = uitofp i8 %pix_sp0.0.copyload to float
   %vmull.i = fmul float %val, %val
@@ -88,7 +88,7 @@ define float @fct2(i16* nocapture %sp0)
 ; CHECK-NEXT: ucvtf [[REG:s[0-9]+]], s[[REGNUM]]
 ; CHECK-NEXT: fmul s0, [[REG]], [[REG]]
 entry:
-  %addr = getelementptr i16* %sp0, i64 1
+  %addr = getelementptr i16, i16* %sp0, i64 1
   %pix_sp0.0.copyload = load i16* %addr, align 1
   %val = uitofp i16 %pix_sp0.0.copyload to float
   %vmull.i = fmul float %val, %val
@@ -101,7 +101,7 @@ define float @fct3(i32* nocapture %sp0)
 ; CHECK-NEXT: ucvtf [[REG:s[0-9]+]], s[[REGNUM]]
 ; CHECK-NEXT: fmul s0, [[REG]], [[REG]]
 entry:
-  %addr = getelementptr i32* %sp0, i64 1
+  %addr = getelementptr i32, i32* %sp0, i64 1
   %pix_sp0.0.copyload = load i32* %addr, align 1
   %val = uitofp i32 %pix_sp0.0.copyload to float
   %vmull.i = fmul float %val, %val
@@ -115,7 +115,7 @@ define float @fct4(i64* nocapture %sp0)
 ; CHECK-NEXT: ucvtf [[REG:s[0-9]+]], x[[REGNUM]]
 ; CHECK-NEXT: fmul s0, [[REG]], [[REG]]
 entry:
-  %addr = getelementptr i64* %sp0, i64 1
+  %addr = getelementptr i64, i64* %sp0, i64 1
   %pix_sp0.0.copyload = load i64* %addr, align 1
   %val = uitofp i64 %pix_sp0.0.copyload to float
   %vmull.i = fmul float %val, %val
@@ -129,7 +129,7 @@ define float @fct5(i8* nocapture %sp0, i
 ; CHECK-NEXT: ucvtf [[REG:s[0-9]+]], s[[REGNUM]]
 ; CHECK-NEXT: fmul s0, [[REG]], [[REG]]
 entry:
-  %addr = getelementptr i8* %sp0, i64 %offset
+  %addr = getelementptr i8, i8* %sp0, i64 %offset
   %pix_sp0.0.copyload = load i8* %addr, align 1
   %val = uitofp i8 %pix_sp0.0.copyload to float
   %vmull.i = fmul float %val, %val
@@ -142,7 +142,7 @@ define float @fct6(i16* nocapture %sp0,
 ; CHECK-NEXT: ucvtf [[REG:s[0-9]+]], s[[REGNUM]]
 ; CHECK-NEXT: fmul s0, [[REG]], [[REG]]
 entry:
-  %addr = getelementptr i16* %sp0, i64 %offset
+  %addr = getelementptr i16, i16* %sp0, i64 %offset
   %pix_sp0.0.copyload = load i16* %addr, align 1
   %val = uitofp i16 %pix_sp0.0.copyload to float
   %vmull.i = fmul float %val, %val
@@ -155,7 +155,7 @@ define float @fct7(i32* nocapture %sp0,
 ; CHECK-NEXT: ucvtf [[REG:s[0-9]+]], s[[REGNUM]]
 ; CHECK-NEXT: fmul s0, [[REG]], [[REG]]
 entry:
-  %addr = getelementptr i32* %sp0, i64 %offset
+  %addr = getelementptr i32, i32* %sp0, i64 %offset
   %pix_sp0.0.copyload = load i32* %addr, align 1
   %val = uitofp i32 %pix_sp0.0.copyload to float
   %vmull.i = fmul float %val, %val
@@ -169,7 +169,7 @@ define float @fct8(i64* nocapture %sp0,
 ; CHECK-NEXT: ucvtf [[REG:s[0-9]+]], x[[REGNUM]]
 ; CHECK-NEXT: fmul s0, [[REG]], [[REG]]
 entry:
-  %addr = getelementptr i64* %sp0, i64 %offset
+  %addr = getelementptr i64, i64* %sp0, i64 %offset
   %pix_sp0.0.copyload = load i64* %addr, align 1
   %val = uitofp i64 %pix_sp0.0.copyload to float
   %vmull.i = fmul float %val, %val
@@ -184,7 +184,7 @@ define double @fct9(i8* nocapture %sp0)
 ; CHECK-NEXT: ucvtf [[REG:d[0-9]+]], d[[REGNUM]]
 ; CHECK-NEXT: fmul d0, [[REG]], [[REG]]
 entry:
-  %addr = getelementptr i8* %sp0, i64 1
+  %addr = getelementptr i8, i8* %sp0, i64 1
   %pix_sp0.0.copyload = load i8* %addr, align 1
   %val = uitofp i8 %pix_sp0.0.copyload to double
   %vmull.i = fmul double %val, %val
@@ -197,7 +197,7 @@ define double @fct10(i16* nocapture %sp0
 ; CHECK-NEXT: ucvtf [[REG:d[0-9]+]], d[[REGNUM]]
 ; CHECK-NEXT: fmul d0, [[REG]], [[REG]]
 entry:
-  %addr = getelementptr i16* %sp0, i64 1
+  %addr = getelementptr i16, i16* %sp0, i64 1
   %pix_sp0.0.copyload = load i16* %addr, align 1
   %val = uitofp i16 %pix_sp0.0.copyload to double
   %vmull.i = fmul double %val, %val
@@ -210,7 +210,7 @@ define double @fct11(i32* nocapture %sp0
 ; CHECK-NEXT: ucvtf [[REG:d[0-9]+]], d[[REGNUM]]
 ; CHECK-NEXT: fmul d0, [[REG]], [[REG]]
 entry:
-  %addr = getelementptr i32* %sp0, i64 1
+  %addr = getelementptr i32, i32* %sp0, i64 1
   %pix_sp0.0.copyload = load i32* %addr, align 1
   %val = uitofp i32 %pix_sp0.0.copyload to double
   %vmull.i = fmul double %val, %val
@@ -223,7 +223,7 @@ define double @fct12(i64* nocapture %sp0
 ; CHECK-NEXT: ucvtf [[REG:d[0-9]+]], d[[REGNUM]]
 ; CHECK-NEXT: fmul d0, [[REG]], [[REG]]
 entry:
-  %addr = getelementptr i64* %sp0, i64 1
+  %addr = getelementptr i64, i64* %sp0, i64 1
   %pix_sp0.0.copyload = load i64* %addr, align 1
   %val = uitofp i64 %pix_sp0.0.copyload to double
   %vmull.i = fmul double %val, %val
@@ -237,7 +237,7 @@ define double @fct13(i8* nocapture %sp0,
 ; CHECK-NEXT: ucvtf [[REG:d[0-9]+]], d[[REGNUM]]
 ; CHECK-NEXT: fmul d0, [[REG]], [[REG]]
 entry:
-  %addr = getelementptr i8* %sp0, i64 %offset
+  %addr = getelementptr i8, i8* %sp0, i64 %offset
   %pix_sp0.0.copyload = load i8* %addr, align 1
   %val = uitofp i8 %pix_sp0.0.copyload to double
   %vmull.i = fmul double %val, %val
@@ -250,7 +250,7 @@ define double @fct14(i16* nocapture %sp0
 ; CHECK-NEXT: ucvtf [[REG:d[0-9]+]], d[[REGNUM]]
 ; CHECK-NEXT: fmul d0, [[REG]], [[REG]]
 entry:
-  %addr = getelementptr i16* %sp0, i64 %offset
+  %addr = getelementptr i16, i16* %sp0, i64 %offset
   %pix_sp0.0.copyload = load i16* %addr, align 1
   %val = uitofp i16 %pix_sp0.0.copyload to double
   %vmull.i = fmul double %val, %val
@@ -263,7 +263,7 @@ define double @fct15(i32* nocapture %sp0
 ; CHECK-NEXT: ucvtf [[REG:d[0-9]+]], d[[REGNUM]]
 ; CHECK-NEXT: fmul d0, [[REG]], [[REG]]
 entry:
-  %addr = getelementptr i32* %sp0, i64 %offset
+  %addr = getelementptr i32, i32* %sp0, i64 %offset
   %pix_sp0.0.copyload = load i32* %addr, align 1
   %val = uitofp i32 %pix_sp0.0.copyload to double
   %vmull.i = fmul double %val, %val
@@ -276,7 +276,7 @@ define double @fct16(i64* nocapture %sp0
 ; CHECK-NEXT: ucvtf [[REG:d[0-9]+]], d[[REGNUM]]
 ; CHECK-NEXT: fmul d0, [[REG]], [[REG]]
 entry:
-  %addr = getelementptr i64* %sp0, i64 %offset
+  %addr = getelementptr i64, i64* %sp0, i64 %offset
   %pix_sp0.0.copyload = load i64* %addr, align 1
   %val = uitofp i64 %pix_sp0.0.copyload to double
   %vmull.i = fmul double %val, %val
@@ -415,7 +415,7 @@ define float @sfct1(i8* nocapture %sp0)
 ; CHECK-A57-NEXT: scvtf [[REG:s[0-9]+]], w[[REGNUM]]
 ; CHECK-A57-NEXT: fmul s0, [[REG]], [[REG]]
 entry:
-  %addr = getelementptr i8* %sp0, i64 1
+  %addr = getelementptr i8, i8* %sp0, i64 1
   %pix_sp0.0.copyload = load i8* %addr, align 1
   %val = sitofp i8 %pix_sp0.0.copyload to float
   %vmull.i = fmul float %val, %val
@@ -429,7 +429,7 @@ define float @sfct2(i16* nocapture %sp0)
 ; CHECK: scvtf [[REG:s[0-9]+]], s[[SEXTREG]]
 ; CHECK-NEXT: fmul s0, [[REG]], [[REG]]
 entry:
-  %addr = getelementptr i16* %sp0, i64 1
+  %addr = getelementptr i16, i16* %sp0, i64 1
   %pix_sp0.0.copyload = load i16* %addr, align 1
   %val = sitofp i16 %pix_sp0.0.copyload to float
   %vmull.i = fmul float %val, %val
@@ -442,7 +442,7 @@ define float @sfct3(i32* nocapture %sp0)
 ; CHECK-NEXT: scvtf [[REG:s[0-9]+]], s[[SEXTREG]]
 ; CHECK-NEXT: fmul s0, [[REG]], [[REG]]
 entry:
-  %addr = getelementptr i32* %sp0, i64 1
+  %addr = getelementptr i32, i32* %sp0, i64 1
   %pix_sp0.0.copyload = load i32* %addr, align 1
   %val = sitofp i32 %pix_sp0.0.copyload to float
   %vmull.i = fmul float %val, %val
@@ -456,7 +456,7 @@ define float @sfct4(i64* nocapture %sp0)
 ; CHECK-NEXT: scvtf [[REG:s[0-9]+]], x[[REGNUM]]
 ; CHECK-NEXT: fmul s0, [[REG]], [[REG]]
 entry:
-  %addr = getelementptr i64* %sp0, i64 1
+  %addr = getelementptr i64, i64* %sp0, i64 1
   %pix_sp0.0.copyload = load i64* %addr, align 1
   %val = sitofp i64 %pix_sp0.0.copyload to float
   %vmull.i = fmul float %val, %val
@@ -476,7 +476,7 @@ define float @sfct5(i8* nocapture %sp0,
 ; CHECK-A57-NEXT: scvtf [[REG:s[0-9]+]], w[[REGNUM]]
 ; CHECK-A57-NEXT: fmul s0, [[REG]], [[REG]]
 entry:
-  %addr = getelementptr i8* %sp0, i64 %offset
+  %addr = getelementptr i8, i8* %sp0, i64 %offset
   %pix_sp0.0.copyload = load i8* %addr, align 1
   %val = sitofp i8 %pix_sp0.0.copyload to float
   %vmull.i = fmul float %val, %val
@@ -490,7 +490,7 @@ define float @sfct6(i16* nocapture %sp0,
 ; CHECK: scvtf [[REG:s[0-9]+]], s[[SEXTREG]]
 ; CHECK-NEXT: fmul s0, [[REG]], [[REG]]
 entry:
-  %addr = getelementptr i16* %sp0, i64 %offset
+  %addr = getelementptr i16, i16* %sp0, i64 %offset
   %pix_sp0.0.copyload = load i16* %addr, align 1
   %val = sitofp i16 %pix_sp0.0.copyload to float
   %vmull.i = fmul float %val, %val
@@ -503,7 +503,7 @@ define float @sfct7(i32* nocapture %sp0,
 ; CHECK-NEXT: scvtf [[REG:s[0-9]+]], s[[SEXTREG]]
 ; CHECK-NEXT: fmul s0, [[REG]], [[REG]]
 entry:
-  %addr = getelementptr i32* %sp0, i64 %offset
+  %addr = getelementptr i32, i32* %sp0, i64 %offset
   %pix_sp0.0.copyload = load i32* %addr, align 1
   %val = sitofp i32 %pix_sp0.0.copyload to float
   %vmull.i = fmul float %val, %val
@@ -517,7 +517,7 @@ define float @sfct8(i64* nocapture %sp0,
 ; CHECK-NEXT: scvtf [[REG:s[0-9]+]], x[[REGNUM]]
 ; CHECK-NEXT: fmul s0, [[REG]], [[REG]]
 entry:
-  %addr = getelementptr i64* %sp0, i64 %offset
+  %addr = getelementptr i64, i64* %sp0, i64 %offset
   %pix_sp0.0.copyload = load i64* %addr, align 1
   %val = sitofp i64 %pix_sp0.0.copyload to float
   %vmull.i = fmul float %val, %val
@@ -531,7 +531,7 @@ define double @sfct9(i8* nocapture %sp0)
 ; CHECK-NEXT: scvtf [[REG:d[0-9]+]], w[[REGNUM]]
 ; CHECK-NEXT: fmul d0, [[REG]], [[REG]]
 entry:
-  %addr = getelementptr i8* %sp0, i64 1
+  %addr = getelementptr i8, i8* %sp0, i64 1
   %pix_sp0.0.copyload = load i8* %addr, align 1
   %val = sitofp i8 %pix_sp0.0.copyload to double
   %vmull.i = fmul double %val, %val
@@ -550,7 +550,7 @@ define double @sfct10(i16* nocapture %sp
 ; CHECK-A57-NEXT: scvtf [[REG:d[0-9]+]], w[[REGNUM]]
 ; CHECK-A57-NEXT: fmul d0, [[REG]], [[REG]]
 entry:
-  %addr = getelementptr i16* %sp0, i64 1
+  %addr = getelementptr i16, i16* %sp0, i64 1
   %pix_sp0.0.copyload = load i16* %addr, align 1
   %val = sitofp i16 %pix_sp0.0.copyload to double
   %vmull.i = fmul double %val, %val
@@ -564,7 +564,7 @@ define double @sfct11(i32* nocapture %sp
 ; CHECK: scvtf [[REG:d[0-9]+]], d[[SEXTREG]]
 ; CHECK-NEXT: fmul d0, [[REG]], [[REG]]
 entry:
-  %addr = getelementptr i32* %sp0, i64 1
+  %addr = getelementptr i32, i32* %sp0, i64 1
   %pix_sp0.0.copyload = load i32* %addr, align 1
   %val = sitofp i32 %pix_sp0.0.copyload to double
   %vmull.i = fmul double %val, %val
@@ -577,7 +577,7 @@ define double @sfct12(i64* nocapture %sp
 ; CHECK-NEXT: scvtf [[REG:d[0-9]+]], d[[SEXTREG]]
 ; CHECK-NEXT: fmul d0, [[REG]], [[REG]]
 entry:
-  %addr = getelementptr i64* %sp0, i64 1
+  %addr = getelementptr i64, i64* %sp0, i64 1
   %pix_sp0.0.copyload = load i64* %addr, align 1
   %val = sitofp i64 %pix_sp0.0.copyload to double
   %vmull.i = fmul double %val, %val
@@ -591,7 +591,7 @@ define double @sfct13(i8* nocapture %sp0
 ; CHECK-NEXT: scvtf [[REG:d[0-9]+]], w[[REGNUM]]
 ; CHECK-NEXT: fmul d0, [[REG]], [[REG]]
 entry:
-  %addr = getelementptr i8* %sp0, i64 %offset
+  %addr = getelementptr i8, i8* %sp0, i64 %offset
   %pix_sp0.0.copyload = load i8* %addr, align 1
   %val = sitofp i8 %pix_sp0.0.copyload to double
   %vmull.i = fmul double %val, %val
@@ -610,7 +610,7 @@ define double @sfct14(i16* nocapture %sp
 ; CHECK-A57-NEXT: scvtf [[REG:d[0-9]+]], w[[REGNUM]]
 ; CHECK-A57-NEXT: fmul d0, [[REG]], [[REG]]
 entry:
-  %addr = getelementptr i16* %sp0, i64 %offset
+  %addr = getelementptr i16, i16* %sp0, i64 %offset
   %pix_sp0.0.copyload = load i16* %addr, align 1
   %val = sitofp i16 %pix_sp0.0.copyload to double
   %vmull.i = fmul double %val, %val
@@ -624,7 +624,7 @@ define double @sfct15(i32* nocapture %sp
 ; CHECK: scvtf [[REG:d[0-9]+]], d[[SEXTREG]]
 ; CHECK-NEXT: fmul d0, [[REG]], [[REG]]
 entry:
-  %addr = getelementptr i32* %sp0, i64 %offset
+  %addr = getelementptr i32, i32* %sp0, i64 %offset
   %pix_sp0.0.copyload = load i32* %addr, align 1
   %val = sitofp i32 %pix_sp0.0.copyload to double
   %vmull.i = fmul double %val, %val
@@ -637,7 +637,7 @@ define double @sfct16(i64* nocapture %sp
 ; CHECK-NEXT: scvtf [[REG:d[0-9]+]], d[[SEXTREG]]
 ; CHECK-NEXT: fmul d0, [[REG]], [[REG]]
 entry:
-  %addr = getelementptr i64* %sp0, i64 %offset
+  %addr = getelementptr i64, i64* %sp0, i64 %offset
   %pix_sp0.0.copyload = load i64* %addr, align 1
   %val = sitofp i64 %pix_sp0.0.copyload to double
   %vmull.i = fmul double %val, %val
@@ -799,7 +799,7 @@ define double @codesize_sfct11(i32* noca
 ; CHECK-NEXT: scvtf [[REG:d[0-9]+]], w[[REGNUM]]
 ; CHECK-NEXT: fmul d0, [[REG]], [[REG]]
 entry:
-  %addr = getelementptr i32* %sp0, i64 1
+  %addr = getelementptr i32, i32* %sp0, i64 1
   %pix_sp0.0.copyload = load i32* %addr, align 1
   %val = sitofp i32 %pix_sp0.0.copyload to double
   %vmull.i = fmul double %val, %val

Modified: llvm/trunk/test/CodeGen/AArch64/arm64-spill-lr.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/arm64-spill-lr.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/arm64-spill-lr.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/arm64-spill-lr.ll Fri Feb 27 13:29:02 2015
@@ -11,7 +11,7 @@ entry:
   %stack = alloca [128 x i32], align 4
   %0 = bitcast [128 x i32]* %stack to i8*
   %idxprom = sext i32 %a to i64
-  %arrayidx = getelementptr inbounds [128 x i32]* %stack, i64 0, i64 %idxprom
+  %arrayidx = getelementptr inbounds [128 x i32], [128 x i32]* %stack, i64 0, i64 %idxprom
   store i32 %b, i32* %arrayidx, align 4
   %1 = load volatile i32* @bar, align 4
   %2 = load volatile i32* @bar, align 4
@@ -34,7 +34,7 @@ entry:
   %19 = load volatile i32* @bar, align 4
   %20 = load volatile i32* @bar, align 4
   %idxprom1 = sext i32 %c to i64
-  %arrayidx2 = getelementptr inbounds [128 x i32]* %stack, i64 0, i64 %idxprom1
+  %arrayidx2 = getelementptr inbounds [128 x i32], [128 x i32]* %stack, i64 0, i64 %idxprom1
   %21 = load i32* %arrayidx2, align 4
   %factor = mul i32 %h, -2
   %factor67 = mul i32 %g, -2

Modified: llvm/trunk/test/CodeGen/AArch64/arm64-st1.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/arm64-st1.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/arm64-st1.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/arm64-st1.ll Fri Feb 27 13:29:02 2015
@@ -12,7 +12,7 @@ define void @st1lane_ro_16b(<16 x i8> %A
 ; CHECK-LABEL: st1lane_ro_16b
 ; CHECK: add x[[XREG:[0-9]+]], x0, x1
 ; CHECK: st1.b { v0 }[1], [x[[XREG]]]
-  %ptr = getelementptr i8* %D, i64 %offset
+  %ptr = getelementptr i8, i8* %D, i64 %offset
   %tmp = extractelement <16 x i8> %A, i32 1
   store i8 %tmp, i8* %ptr
   ret void
@@ -22,7 +22,7 @@ define void @st1lane0_ro_16b(<16 x i8> %
 ; CHECK-LABEL: st1lane0_ro_16b
 ; CHECK: add x[[XREG:[0-9]+]], x0, x1
 ; CHECK: st1.b { v0 }[0], [x[[XREG]]]
-  %ptr = getelementptr i8* %D, i64 %offset
+  %ptr = getelementptr i8, i8* %D, i64 %offset
   %tmp = extractelement <16 x i8> %A, i32 0
   store i8 %tmp, i8* %ptr
   ret void
@@ -40,7 +40,7 @@ define void @st1lane_ro_8h(<8 x i16> %A,
 ; CHECK-LABEL: st1lane_ro_8h
 ; CHECK: add x[[XREG:[0-9]+]], x0, x1
 ; CHECK: st1.h { v0 }[1], [x[[XREG]]]
-  %ptr = getelementptr i16* %D, i64 %offset
+  %ptr = getelementptr i16, i16* %D, i64 %offset
   %tmp = extractelement <8 x i16> %A, i32 1
   store i16 %tmp, i16* %ptr
   ret void
@@ -49,7 +49,7 @@ define void @st1lane_ro_8h(<8 x i16> %A,
 define void @st1lane0_ro_8h(<8 x i16> %A, i16* %D, i64 %offset) {
 ; CHECK-LABEL: st1lane0_ro_8h
 ; CHECK: str h0, [x0, x1, lsl #1]
-  %ptr = getelementptr i16* %D, i64 %offset
+  %ptr = getelementptr i16, i16* %D, i64 %offset
   %tmp = extractelement <8 x i16> %A, i32 0
   store i16 %tmp, i16* %ptr
   ret void
@@ -67,7 +67,7 @@ define void @st1lane_ro_4s(<4 x i32> %A,
 ; CHECK-LABEL: st1lane_ro_4s
 ; CHECK: add x[[XREG:[0-9]+]], x0, x1
 ; CHECK: st1.s { v0 }[1], [x[[XREG]]]
-  %ptr = getelementptr i32* %D, i64 %offset
+  %ptr = getelementptr i32, i32* %D, i64 %offset
   %tmp = extractelement <4 x i32> %A, i32 1
   store i32 %tmp, i32* %ptr
   ret void
@@ -76,7 +76,7 @@ define void @st1lane_ro_4s(<4 x i32> %A,
 define void @st1lane0_ro_4s(<4 x i32> %A, i32* %D, i64 %offset) {
 ; CHECK-LABEL: st1lane0_ro_4s
 ; CHECK: str s0, [x0, x1, lsl #2]
-  %ptr = getelementptr i32* %D, i64 %offset
+  %ptr = getelementptr i32, i32* %D, i64 %offset
   %tmp = extractelement <4 x i32> %A, i32 0
   store i32 %tmp, i32* %ptr
   ret void
@@ -94,7 +94,7 @@ define void @st1lane_ro_4s_float(<4 x fl
 ; CHECK-LABEL: st1lane_ro_4s_float
 ; CHECK: add x[[XREG:[0-9]+]], x0, x1
 ; CHECK: st1.s { v0 }[1], [x[[XREG]]]
-  %ptr = getelementptr float* %D, i64 %offset
+  %ptr = getelementptr float, float* %D, i64 %offset
   %tmp = extractelement <4 x float> %A, i32 1
   store float %tmp, float* %ptr
   ret void
@@ -103,7 +103,7 @@ define void @st1lane_ro_4s_float(<4 x fl
 define void @st1lane0_ro_4s_float(<4 x float> %A, float* %D, i64 %offset) {
 ; CHECK-LABEL: st1lane0_ro_4s_float
 ; CHECK: str s0, [x0, x1, lsl #2]
-  %ptr = getelementptr float* %D, i64 %offset
+  %ptr = getelementptr float, float* %D, i64 %offset
   %tmp = extractelement <4 x float> %A, i32 0
   store float %tmp, float* %ptr
   ret void
@@ -121,7 +121,7 @@ define void @st1lane_ro_2d(<2 x i64> %A,
 ; CHECK-LABEL: st1lane_ro_2d
 ; CHECK: add x[[XREG:[0-9]+]], x0, x1
 ; CHECK: st1.d { v0 }[1], [x[[XREG]]]
-  %ptr = getelementptr i64* %D, i64 %offset
+  %ptr = getelementptr i64, i64* %D, i64 %offset
   %tmp = extractelement <2 x i64> %A, i32 1
   store i64 %tmp, i64* %ptr
   ret void
@@ -130,7 +130,7 @@ define void @st1lane_ro_2d(<2 x i64> %A,
 define void @st1lane0_ro_2d(<2 x i64> %A, i64* %D, i64 %offset) {
 ; CHECK-LABEL: st1lane0_ro_2d
 ; CHECK: str d0, [x0, x1, lsl #3]
-  %ptr = getelementptr i64* %D, i64 %offset
+  %ptr = getelementptr i64, i64* %D, i64 %offset
   %tmp = extractelement <2 x i64> %A, i32 0
   store i64 %tmp, i64* %ptr
   ret void
@@ -148,7 +148,7 @@ define void @st1lane_ro_2d_double(<2 x d
 ; CHECK-LABEL: st1lane_ro_2d_double
 ; CHECK: add x[[XREG:[0-9]+]], x0, x1
 ; CHECK: st1.d { v0 }[1], [x[[XREG]]]
-  %ptr = getelementptr double* %D, i64 %offset
+  %ptr = getelementptr double, double* %D, i64 %offset
   %tmp = extractelement <2 x double> %A, i32 1
   store double %tmp, double* %ptr
   ret void
@@ -157,7 +157,7 @@ define void @st1lane_ro_2d_double(<2 x d
 define void @st1lane0_ro_2d_double(<2 x double> %A, double* %D, i64 %offset) {
 ; CHECK-LABEL: st1lane0_ro_2d_double
 ; CHECK: str d0, [x0, x1, lsl #3]
-  %ptr = getelementptr double* %D, i64 %offset
+  %ptr = getelementptr double, double* %D, i64 %offset
   %tmp = extractelement <2 x double> %A, i32 0
   store double %tmp, double* %ptr
   ret void
@@ -175,7 +175,7 @@ define void @st1lane_ro_8b(<8 x i8> %A,
 ; CHECK-LABEL: st1lane_ro_8b
 ; CHECK: add x[[XREG:[0-9]+]], x0, x1
 ; CHECK: st1.b { v0 }[1], [x[[XREG]]]
-  %ptr = getelementptr i8* %D, i64 %offset
+  %ptr = getelementptr i8, i8* %D, i64 %offset
   %tmp = extractelement <8 x i8> %A, i32 1
   store i8 %tmp, i8* %ptr
   ret void
@@ -185,7 +185,7 @@ define void @st1lane0_ro_8b(<8 x i8> %A,
 ; CHECK-LABEL: st1lane0_ro_8b
 ; CHECK: add x[[XREG:[0-9]+]], x0, x1
 ; CHECK: st1.b { v0 }[0], [x[[XREG]]]
-  %ptr = getelementptr i8* %D, i64 %offset
+  %ptr = getelementptr i8, i8* %D, i64 %offset
   %tmp = extractelement <8 x i8> %A, i32 0
   store i8 %tmp, i8* %ptr
   ret void
@@ -203,7 +203,7 @@ define void @st1lane_ro_4h(<4 x i16> %A,
 ; CHECK-LABEL: st1lane_ro_4h
 ; CHECK: add x[[XREG:[0-9]+]], x0, x1
 ; CHECK: st1.h { v0 }[1], [x[[XREG]]]
-  %ptr = getelementptr i16* %D, i64 %offset
+  %ptr = getelementptr i16, i16* %D, i64 %offset
   %tmp = extractelement <4 x i16> %A, i32 1
   store i16 %tmp, i16* %ptr
   ret void
@@ -212,7 +212,7 @@ define void @st1lane_ro_4h(<4 x i16> %A,
 define void @st1lane0_ro_4h(<4 x i16> %A, i16* %D, i64 %offset) {
 ; CHECK-LABEL: st1lane0_ro_4h
 ; CHECK: str h0, [x0, x1, lsl #1]
-  %ptr = getelementptr i16* %D, i64 %offset
+  %ptr = getelementptr i16, i16* %D, i64 %offset
   %tmp = extractelement <4 x i16> %A, i32 0
   store i16 %tmp, i16* %ptr
   ret void
@@ -230,7 +230,7 @@ define void @st1lane_ro_2s(<2 x i32> %A,
 ; CHECK-LABEL: st1lane_ro_2s
 ; CHECK: add x[[XREG:[0-9]+]], x0, x1
 ; CHECK: st1.s { v0 }[1], [x[[XREG]]]
-  %ptr = getelementptr i32* %D, i64 %offset
+  %ptr = getelementptr i32, i32* %D, i64 %offset
   %tmp = extractelement <2 x i32> %A, i32 1
   store i32 %tmp, i32* %ptr
   ret void
@@ -239,7 +239,7 @@ define void @st1lane_ro_2s(<2 x i32> %A,
 define void @st1lane0_ro_2s(<2 x i32> %A, i32* %D, i64 %offset) {
 ; CHECK-LABEL: st1lane0_ro_2s
 ; CHECK: str s0, [x0, x1, lsl #2]
-  %ptr = getelementptr i32* %D, i64 %offset
+  %ptr = getelementptr i32, i32* %D, i64 %offset
   %tmp = extractelement <2 x i32> %A, i32 0
   store i32 %tmp, i32* %ptr
   ret void
@@ -257,7 +257,7 @@ define void @st1lane_ro_2s_float(<2 x fl
 ; CHECK-LABEL: st1lane_ro_2s_float
 ; CHECK: add x[[XREG:[0-9]+]], x0, x1
 ; CHECK: st1.s { v0 }[1], [x[[XREG]]]
-  %ptr = getelementptr float* %D, i64 %offset
+  %ptr = getelementptr float, float* %D, i64 %offset
   %tmp = extractelement <2 x float> %A, i32 1
   store float %tmp, float* %ptr
   ret void
@@ -266,7 +266,7 @@ define void @st1lane_ro_2s_float(<2 x fl
 define void @st1lane0_ro_2s_float(<2 x float> %A, float* %D, i64 %offset) {
 ; CHECK-LABEL: st1lane0_ro_2s_float
 ; CHECK: str s0, [x0, x1, lsl #2]
-  %ptr = getelementptr float* %D, i64 %offset
+  %ptr = getelementptr float, float* %D, i64 %offset
   %tmp = extractelement <2 x float> %A, i32 0
   store float %tmp, float* %ptr
   ret void

Modified: llvm/trunk/test/CodeGen/AArch64/arm64-stp.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/arm64-stp.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/arm64-stp.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/arm64-stp.ll Fri Feb 27 13:29:02 2015
@@ -6,7 +6,7 @@
 ; CHECK: stp w0, w1, [x2]
 define void @stp_int(i32 %a, i32 %b, i32* nocapture %p) nounwind {
   store i32 %a, i32* %p, align 4
-  %add.ptr = getelementptr inbounds i32* %p, i64 1
+  %add.ptr = getelementptr inbounds i32, i32* %p, i64 1
   store i32 %b, i32* %add.ptr, align 4
   ret void
 }
@@ -15,7 +15,7 @@ define void @stp_int(i32 %a, i32 %b, i32
 ; CHECK: stp x0, x1, [x2]
 define void @stp_long(i64 %a, i64 %b, i64* nocapture %p) nounwind {
   store i64 %a, i64* %p, align 8
-  %add.ptr = getelementptr inbounds i64* %p, i64 1
+  %add.ptr = getelementptr inbounds i64, i64* %p, i64 1
   store i64 %b, i64* %add.ptr, align 8
   ret void
 }
@@ -24,7 +24,7 @@ define void @stp_long(i64 %a, i64 %b, i6
 ; CHECK: stp s0, s1, [x0]
 define void @stp_float(float %a, float %b, float* nocapture %p) nounwind {
   store float %a, float* %p, align 4
-  %add.ptr = getelementptr inbounds float* %p, i64 1
+  %add.ptr = getelementptr inbounds float, float* %p, i64 1
   store float %b, float* %add.ptr, align 4
   ret void
 }
@@ -33,7 +33,7 @@ define void @stp_float(float %a, float %
 ; CHECK: stp d0, d1, [x0]
 define void @stp_double(double %a, double %b, double* nocapture %p) nounwind {
   store double %a, double* %p, align 8
-  %add.ptr = getelementptr inbounds double* %p, i64 1
+  %add.ptr = getelementptr inbounds double, double* %p, i64 1
   store double %b, double* %add.ptr, align 8
   ret void
 }
@@ -43,9 +43,9 @@ define void @stur_int(i32 %a, i32 %b, i3
 ; STUR_CHK: stur_int
 ; STUR_CHK: stp w{{[0-9]+}}, {{w[0-9]+}}, [x{{[0-9]+}}, #-8]
 ; STUR_CHK-NEXT: ret
-  %p1 = getelementptr inbounds i32* %p, i32 -1
+  %p1 = getelementptr inbounds i32, i32* %p, i32 -1
   store i32 %a, i32* %p1, align 2
-  %p2 = getelementptr inbounds i32* %p, i32 -2
+  %p2 = getelementptr inbounds i32, i32* %p, i32 -2
   store i32 %b, i32* %p2, align 2
   ret void
 }
@@ -54,9 +54,9 @@ define void @stur_long(i64 %a, i64 %b, i
 ; STUR_CHK: stur_long
 ; STUR_CHK: stp x{{[0-9]+}}, {{x[0-9]+}}, [x{{[0-9]+}}, #-16]
 ; STUR_CHK-NEXT: ret
-  %p1 = getelementptr inbounds i64* %p, i32 -1
+  %p1 = getelementptr inbounds i64, i64* %p, i32 -1
   store i64 %a, i64* %p1, align 2
-  %p2 = getelementptr inbounds i64* %p, i32 -2
+  %p2 = getelementptr inbounds i64, i64* %p, i32 -2
   store i64 %b, i64* %p2, align 2
   ret void
 }
@@ -65,9 +65,9 @@ define void @stur_float(float %a, float
 ; STUR_CHK: stur_float
 ; STUR_CHK: stp s{{[0-9]+}}, {{s[0-9]+}}, [x{{[0-9]+}}, #-8]
 ; STUR_CHK-NEXT: ret
-  %p1 = getelementptr inbounds float* %p, i32 -1
+  %p1 = getelementptr inbounds float, float* %p, i32 -1
   store float %a, float* %p1, align 2
-  %p2 = getelementptr inbounds float* %p, i32 -2
+  %p2 = getelementptr inbounds float, float* %p, i32 -2
   store float %b, float* %p2, align 2
   ret void
 }
@@ -76,9 +76,9 @@ define void @stur_double(double %a, doub
 ; STUR_CHK: stur_double
 ; STUR_CHK: stp d{{[0-9]+}}, {{d[0-9]+}}, [x{{[0-9]+}}, #-16]
 ; STUR_CHK-NEXT: ret
-  %p1 = getelementptr inbounds double* %p, i32 -1
+  %p1 = getelementptr inbounds double, double* %p, i32 -1
   store double %a, double* %p1, align 2
-  %p2 = getelementptr inbounds double* %p, i32 -2
+  %p2 = getelementptr inbounds double, double* %p, i32 -2
   store double %b, double* %p2, align 2
   ret void
 }

Modified: llvm/trunk/test/CodeGen/AArch64/arm64-stur.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/arm64-stur.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/arm64-stur.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/arm64-stur.ll Fri Feb 27 13:29:02 2015
@@ -6,7 +6,7 @@ define void @foo1(i32* %p, i64 %val) nou
 ; CHECK: 	stur	w1, [x0, #-4]
 ; CHECK-NEXT: 	ret
   %tmp1 = trunc i64 %val to i32
-  %ptr = getelementptr inbounds i32* %p, i64 -1
+  %ptr = getelementptr inbounds i32, i32* %p, i64 -1
   store i32 %tmp1, i32* %ptr, align 4
   ret void
 }
@@ -15,7 +15,7 @@ define void @foo2(i16* %p, i64 %val) nou
 ; CHECK: 	sturh	w1, [x0, #-2]
 ; CHECK-NEXT: 	ret
   %tmp1 = trunc i64 %val to i16
-  %ptr = getelementptr inbounds i16* %p, i64 -1
+  %ptr = getelementptr inbounds i16, i16* %p, i64 -1
   store i16 %tmp1, i16* %ptr, align 2
   ret void
 }
@@ -24,7 +24,7 @@ define void @foo3(i8* %p, i64 %val) noun
 ; CHECK: 	sturb	w1, [x0, #-1]
 ; CHECK-NEXT: 	ret
   %tmp1 = trunc i64 %val to i8
-  %ptr = getelementptr inbounds i8* %p, i64 -1
+  %ptr = getelementptr inbounds i8, i8* %p, i64 -1
   store i8 %tmp1, i8* %ptr, align 1
   ret void
 }
@@ -33,7 +33,7 @@ define void @foo4(i16* %p, i32 %val) nou
 ; CHECK: 	sturh	w1, [x0, #-2]
 ; CHECK-NEXT: 	ret
   %tmp1 = trunc i32 %val to i16
-  %ptr = getelementptr inbounds i16* %p, i32 -1
+  %ptr = getelementptr inbounds i16, i16* %p, i32 -1
   store i16 %tmp1, i16* %ptr, align 2
   ret void
 }
@@ -42,7 +42,7 @@ define void @foo5(i8* %p, i32 %val) noun
 ; CHECK: 	sturb	w1, [x0, #-1]
 ; CHECK-NEXT: 	ret
   %tmp1 = trunc i32 %val to i8
-  %ptr = getelementptr inbounds i8* %p, i32 -1
+  %ptr = getelementptr inbounds i8, i8* %p, i32 -1
   store i8 %tmp1, i8* %ptr, align 1
   ret void
 }
@@ -53,7 +53,7 @@ define void @foo(%struct.X* nocapture %p
 ; CHECK: stur    xzr, [x0, #12]
 ; CHECK-NEXT: stur    xzr, [x0, #4]
 ; CHECK-NEXT: ret
-  %B = getelementptr inbounds %struct.X* %p, i64 0, i32 1
+  %B = getelementptr inbounds %struct.X, %struct.X* %p, i64 0, i32 1
   %val = bitcast i64* %B to i8*
   call void @llvm.memset.p0i8.i64(i8* %val, i8 0, i64 16, i32 1, i1 false)
   ret void

Modified: llvm/trunk/test/CodeGen/AArch64/arm64-this-return.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/arm64-this-return.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/arm64-this-return.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/arm64-this-return.ll Fri Feb 27 13:29:02 2015
@@ -23,7 +23,7 @@ entry:
 ; CHECK: b {{_?B_ctor_base}}
   %0 = bitcast %struct.C* %this to %struct.A*
   %call = tail call %struct.A* @A_ctor_base(%struct.A* %0)
-  %1 = getelementptr inbounds %struct.C* %this, i32 0, i32 0
+  %1 = getelementptr inbounds %struct.C, %struct.C* %this, i32 0, i32 0
   %call2 = tail call %struct.B* @B_ctor_base(%struct.B* %1, i32 %x)
   ret %struct.C* %this
 }
@@ -37,7 +37,7 @@ entry:
 ; CHECK-NOT: b {{_?B_ctor_base_nothisret}}
   %0 = bitcast %struct.C* %this to %struct.A*
   %call = tail call %struct.A* @A_ctor_base_nothisret(%struct.A* %0)
-  %1 = getelementptr inbounds %struct.C* %this, i32 0, i32 0
+  %1 = getelementptr inbounds %struct.C, %struct.C* %this, i32 0, i32 0
   %call2 = tail call %struct.B* @B_ctor_base_nothisret(%struct.B* %1, i32 %x)
   ret %struct.C* %this
 }
@@ -65,7 +65,7 @@ entry:
 ; CHECK: bl {{_?B_ctor_complete}}
 ; CHECK-NOT: mov x0, {{x[0-9]+}}
 ; CHECK: b {{_?B_ctor_complete}}
-  %b = getelementptr inbounds %struct.D* %this, i32 0, i32 0
+  %b = getelementptr inbounds %struct.D, %struct.D* %this, i32 0, i32 0
   %call = tail call %struct.B* @B_ctor_complete(%struct.B* %b, i32 %x)
   %call2 = tail call %struct.B* @B_ctor_complete(%struct.B* %b, i32 %x)
   ret %struct.D* %this
@@ -75,9 +75,9 @@ define %struct.E* @E_ctor_base(%struct.E
 entry:
 ; CHECK-LABEL: E_ctor_base:
 ; CHECK-NOT: b {{_?B_ctor_complete}}
-  %b = getelementptr inbounds %struct.E* %this, i32 0, i32 0
+  %b = getelementptr inbounds %struct.E, %struct.E* %this, i32 0, i32 0
   %call = tail call %struct.B* @B_ctor_complete(%struct.B* %b, i32 %x)
-  %b2 = getelementptr inbounds %struct.E* %this, i32 0, i32 1
+  %b2 = getelementptr inbounds %struct.E, %struct.E* %this, i32 0, i32 1
   %call2 = tail call %struct.B* @B_ctor_complete(%struct.B* %b2, i32 %x)
   ret %struct.E* %this
 }

Modified: llvm/trunk/test/CodeGen/AArch64/arm64-triv-disjoint-mem-access.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/arm64-triv-disjoint-mem-access.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/arm64-triv-disjoint-mem-access.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/arm64-triv-disjoint-mem-access.ll Fri Feb 27 13:29:02 2015
@@ -9,9 +9,9 @@ entry:
 ; CHECK: ldr {{w[0-9]+}}, [x[[REG:[0-9]+]], #4]
 ; CHECK: str {{w[0-9]+}}, [x[[REG]], #8]
   %0 = load i32** @a, align 8, !tbaa !1
-  %arrayidx = getelementptr inbounds i32* %0, i64 2
+  %arrayidx = getelementptr inbounds i32, i32* %0, i64 2
   store i32 %i, i32* %arrayidx, align 4, !tbaa !5
-  %arrayidx1 = getelementptr inbounds i32* %0, i64 1
+  %arrayidx1 = getelementptr inbounds i32, i32* %0, i64 1
   %1 = load i32* %arrayidx1, align 4, !tbaa !5
   %add = add nsw i32 %k, %i
   store i32 %add, i32* @m, align 4, !tbaa !5

Modified: llvm/trunk/test/CodeGen/AArch64/arm64-trunc-store.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/arm64-trunc-store.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/arm64-trunc-store.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/arm64-trunc-store.ll Fri Feb 27 13:29:02 2015
@@ -28,7 +28,7 @@ bb:
   %.pre37 = load i32** @zptr32, align 8
   %dec = add nsw i32 %arg, -1
   %idxprom8 = sext i32 %dec to i64
-  %arrayidx9 = getelementptr inbounds i32* %.pre37, i64 %idxprom8
+  %arrayidx9 = getelementptr inbounds i32, i32* %.pre37, i64 %idxprom8
   %tmp = trunc i64 %var to i32
   store i32 %tmp, i32* %arrayidx9, align 4
   ret void
@@ -48,7 +48,7 @@ bb:
   %.pre37 = load i16** @zptr16, align 8
   %dec = add nsw i32 %arg, -1
   %idxprom8 = sext i32 %dec to i64
-  %arrayidx9 = getelementptr inbounds i16* %.pre37, i64 %idxprom8
+  %arrayidx9 = getelementptr inbounds i16, i16* %.pre37, i64 %idxprom8
   %tmp = trunc i64 %var to i16
   store i16 %tmp, i16* %arrayidx9, align 4
   ret void
@@ -68,7 +68,7 @@ bb:
   %.pre37 = load i8** @zptr8, align 8
   %dec = add nsw i32 %arg, -1
   %idxprom8 = sext i32 %dec to i64
-  %arrayidx9 = getelementptr inbounds i8* %.pre37, i64 %idxprom8
+  %arrayidx9 = getelementptr inbounds i8, i8* %.pre37, i64 %idxprom8
   %tmp = trunc i64 %var to i8
   store i8 %tmp, i8* %arrayidx9, align 4
   ret void

Modified: llvm/trunk/test/CodeGen/AArch64/arm64-vector-ldst.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/arm64-vector-ldst.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/arm64-vector-ldst.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/arm64-vector-ldst.ll Fri Feb 27 13:29:02 2015
@@ -13,7 +13,7 @@ entry:
 ; CHECK: ldr x[[REG:[0-9]+]], [x0]
 ; CHECK: str q0, [x[[REG]]]
   %tmp1 = load %type1** %argtable, align 8
-  %tmp2 = getelementptr inbounds %type1* %tmp1, i64 0, i32 0
+  %tmp2 = getelementptr inbounds %type1, %type1* %tmp1, i64 0, i32 0
   store <16 x i8> zeroinitializer, <16 x i8>* %tmp2, align 16
   ret void
 }
@@ -24,7 +24,7 @@ entry:
 ; CHECK: ldr x[[REG:[0-9]+]], [x0]
 ; CHECK: str d0, [x[[REG]]]
   %tmp1 = load %type2** %argtable, align 8
-  %tmp2 = getelementptr inbounds %type2* %tmp1, i64 0, i32 0
+  %tmp2 = getelementptr inbounds %type2, %type2* %tmp1, i64 0, i32 0
   store <8 x i8> zeroinitializer, <8 x i8>* %tmp2, align 8
   ret void
 }
@@ -51,10 +51,10 @@ entry:
 ; CHECK: ldr [[DEST:q[0-9]+]], [x0, [[SHIFTEDOFFSET]]
 ; CHECK: ldr [[BASE:x[0-9]+]],
 ; CHECK: str [[DEST]], {{\[}}[[BASE]], [[SHIFTEDOFFSET]]]
-  %arrayidx = getelementptr inbounds <2 x i64>* %array, i64 %offset
+  %arrayidx = getelementptr inbounds <2 x i64>, <2 x i64>* %array, i64 %offset
   %tmp = load <2 x i64>* %arrayidx, align 16
   %tmp1 = load <2 x i64>** @globalArray64x2, align 8
-  %arrayidx1 = getelementptr inbounds <2 x i64>* %tmp1, i64 %offset
+  %arrayidx1 = getelementptr inbounds <2 x i64>, <2 x i64>* %tmp1, i64 %offset
   store <2 x i64> %tmp, <2 x i64>* %arrayidx1, align 16
   ret void
 }
@@ -65,10 +65,10 @@ entry:
 ; CHECK: ldr [[DEST:q[0-9]+]], [x0, #48]
 ; CHECK: ldr [[BASE:x[0-9]+]],
 ; CHECK: str [[DEST]], {{\[}}[[BASE]], #80]
-  %arrayidx = getelementptr inbounds <2 x i64>* %array, i64 3
+  %arrayidx = getelementptr inbounds <2 x i64>, <2 x i64>* %array, i64 3
   %tmp = load <2 x i64>* %arrayidx, align 16
   %tmp1 = load <2 x i64>** @globalArray64x2, align 8
-  %arrayidx1 = getelementptr inbounds <2 x i64>* %tmp1, i64 5
+  %arrayidx1 = getelementptr inbounds <2 x i64>, <2 x i64>* %tmp1, i64 5
   store <2 x i64> %tmp, <2 x i64>* %arrayidx1, align 16
   ret void
 }
@@ -80,10 +80,10 @@ entry:
 ; CHECK: ldr [[DEST:q[0-9]+]], [x0, [[SHIFTEDOFFSET]]]
 ; CHECK: ldr [[BASE:x[0-9]+]],
 ; CHECK: str [[DEST]], {{\[}}[[BASE]], [[SHIFTEDOFFSET]]]
-  %arrayidx = getelementptr inbounds <4 x i32>* %array, i64 %offset
+  %arrayidx = getelementptr inbounds <4 x i32>, <4 x i32>* %array, i64 %offset
   %tmp = load <4 x i32>* %arrayidx, align 16
   %tmp1 = load <4 x i32>** @globalArray32x4, align 8
-  %arrayidx1 = getelementptr inbounds <4 x i32>* %tmp1, i64 %offset
+  %arrayidx1 = getelementptr inbounds <4 x i32>, <4 x i32>* %tmp1, i64 %offset
   store <4 x i32> %tmp, <4 x i32>* %arrayidx1, align 16
   ret void
 }
@@ -94,10 +94,10 @@ entry:
 ; CHECK: ldr [[DEST:q[0-9]+]], [x0, #48]
 ; CHECK: ldr [[BASE:x[0-9]+]],
 ; CHECK: str [[DEST]], {{\[}}[[BASE]], #80]
-  %arrayidx = getelementptr inbounds <4 x i32>* %array, i64 3
+  %arrayidx = getelementptr inbounds <4 x i32>, <4 x i32>* %array, i64 3
   %tmp = load <4 x i32>* %arrayidx, align 16
   %tmp1 = load <4 x i32>** @globalArray32x4, align 8
-  %arrayidx1 = getelementptr inbounds <4 x i32>* %tmp1, i64 5
+  %arrayidx1 = getelementptr inbounds <4 x i32>, <4 x i32>* %tmp1, i64 5
   store <4 x i32> %tmp, <4 x i32>* %arrayidx1, align 16
   ret void
 }
@@ -109,10 +109,10 @@ entry:
 ; CHECK: ldr [[DEST:q[0-9]+]], [x0, [[SHIFTEDOFFSET]]]
 ; CHECK: ldr [[BASE:x[0-9]+]],
 ; CHECK: str [[DEST]], {{\[}}[[BASE]], [[SHIFTEDOFFSET]]]
-  %arrayidx = getelementptr inbounds <8 x i16>* %array, i64 %offset
+  %arrayidx = getelementptr inbounds <8 x i16>, <8 x i16>* %array, i64 %offset
   %tmp = load <8 x i16>* %arrayidx, align 16
   %tmp1 = load <8 x i16>** @globalArray16x8, align 8
-  %arrayidx1 = getelementptr inbounds <8 x i16>* %tmp1, i64 %offset
+  %arrayidx1 = getelementptr inbounds <8 x i16>, <8 x i16>* %tmp1, i64 %offset
   store <8 x i16> %tmp, <8 x i16>* %arrayidx1, align 16
   ret void
 }
@@ -123,10 +123,10 @@ entry:
 ; CHECK: ldr [[DEST:q[0-9]+]], [x0, #48]
 ; CHECK: ldr [[BASE:x[0-9]+]],
 ; CHECK: str [[DEST]], {{\[}}[[BASE]], #80]
-  %arrayidx = getelementptr inbounds <8 x i16>* %array, i64 3
+  %arrayidx = getelementptr inbounds <8 x i16>, <8 x i16>* %array, i64 3
   %tmp = load <8 x i16>* %arrayidx, align 16
   %tmp1 = load <8 x i16>** @globalArray16x8, align 8
-  %arrayidx1 = getelementptr inbounds <8 x i16>* %tmp1, i64 5
+  %arrayidx1 = getelementptr inbounds <8 x i16>, <8 x i16>* %tmp1, i64 5
   store <8 x i16> %tmp, <8 x i16>* %arrayidx1, align 16
   ret void
 }
@@ -138,10 +138,10 @@ entry:
 ; CHECK: ldr [[DEST:q[0-9]+]], [x0, [[SHIFTEDOFFSET]]]
 ; CHECK: ldr [[BASE:x[0-9]+]],
 ; CHECK: str [[DEST]], {{\[}}[[BASE]], [[SHIFTEDOFFSET]]]
-  %arrayidx = getelementptr inbounds <16 x i8>* %array, i64 %offset
+  %arrayidx = getelementptr inbounds <16 x i8>, <16 x i8>* %array, i64 %offset
   %tmp = load <16 x i8>* %arrayidx, align 16
   %tmp1 = load <16 x i8>** @globalArray8x16, align 8
-  %arrayidx1 = getelementptr inbounds <16 x i8>* %tmp1, i64 %offset
+  %arrayidx1 = getelementptr inbounds <16 x i8>, <16 x i8>* %tmp1, i64 %offset
   store <16 x i8> %tmp, <16 x i8>* %arrayidx1, align 16
   ret void
 }
@@ -152,10 +152,10 @@ entry:
 ; CHECK: ldr [[DEST:q[0-9]+]], [x0, #48]
 ; CHECK: ldr [[BASE:x[0-9]+]],
 ; CHECK: str [[DEST]], {{\[}}[[BASE]], #80]
-  %arrayidx = getelementptr inbounds <16 x i8>* %array, i64 3
+  %arrayidx = getelementptr inbounds <16 x i8>, <16 x i8>* %array, i64 3
   %tmp = load <16 x i8>* %arrayidx, align 16
   %tmp1 = load <16 x i8>** @globalArray8x16, align 8
-  %arrayidx1 = getelementptr inbounds <16 x i8>* %tmp1, i64 5
+  %arrayidx1 = getelementptr inbounds <16 x i8>, <16 x i8>* %tmp1, i64 5
   store <16 x i8> %tmp, <16 x i8>* %arrayidx1, align 16
   ret void
 }
@@ -167,10 +167,10 @@ entry:
 ; CHECK: ldr [[DEST:d[0-9]+]], [x0, [[SHIFTEDOFFSET]]]
 ; CHECK: ldr [[BASE:x[0-9]+]],
 ; CHECK: str [[DEST]], {{\[}}[[BASE]], [[SHIFTEDOFFSET]]]
-  %arrayidx = getelementptr inbounds <1 x i64>* %array, i64 %offset
+  %arrayidx = getelementptr inbounds <1 x i64>, <1 x i64>* %array, i64 %offset
   %tmp = load <1 x i64>* %arrayidx, align 8
   %tmp1 = load <1 x i64>** @globalArray64x1, align 8
-  %arrayidx1 = getelementptr inbounds <1 x i64>* %tmp1, i64 %offset
+  %arrayidx1 = getelementptr inbounds <1 x i64>, <1 x i64>* %tmp1, i64 %offset
   store <1 x i64> %tmp, <1 x i64>* %arrayidx1, align 8
   ret void
 }
@@ -181,10 +181,10 @@ entry:
 ; CHECK: ldr [[DEST:d[0-9]+]], [x0, #24]
 ; CHECK: ldr [[BASE:x[0-9]+]],
 ; CHECK: str [[DEST]], {{\[}}[[BASE]], #40]
-  %arrayidx = getelementptr inbounds <1 x i64>* %array, i64 3
+  %arrayidx = getelementptr inbounds <1 x i64>, <1 x i64>* %array, i64 3
   %tmp = load <1 x i64>* %arrayidx, align 8
   %tmp1 = load <1 x i64>** @globalArray64x1, align 8
-  %arrayidx1 = getelementptr inbounds <1 x i64>* %tmp1, i64 5
+  %arrayidx1 = getelementptr inbounds <1 x i64>, <1 x i64>* %tmp1, i64 5
   store <1 x i64> %tmp, <1 x i64>* %arrayidx1, align 8
   ret void
 }
@@ -196,10 +196,10 @@ entry:
 ; CHECK: ldr [[DEST:d[0-9]+]], [x0, [[SHIFTEDOFFSET]]]
 ; CHECK: ldr [[BASE:x[0-9]+]],
 ; CHECK: str [[DEST]], {{\[}}[[BASE]], [[SHIFTEDOFFSET]]]
-  %arrayidx = getelementptr inbounds <2 x i32>* %array, i64 %offset
+  %arrayidx = getelementptr inbounds <2 x i32>, <2 x i32>* %array, i64 %offset
   %tmp = load <2 x i32>* %arrayidx, align 8
   %tmp1 = load <2 x i32>** @globalArray32x2, align 8
-  %arrayidx1 = getelementptr inbounds <2 x i32>* %tmp1, i64 %offset
+  %arrayidx1 = getelementptr inbounds <2 x i32>, <2 x i32>* %tmp1, i64 %offset
   store <2 x i32> %tmp, <2 x i32>* %arrayidx1, align 8
   ret void
 }
@@ -210,10 +210,10 @@ entry:
 ; CHECK: ldr [[DEST:d[0-9]+]], [x0, #24]
 ; CHECK: ldr [[BASE:x[0-9]+]],
 ; CHECK: str [[DEST]], {{\[}}[[BASE]], #40]
-  %arrayidx = getelementptr inbounds <2 x i32>* %array, i64 3
+  %arrayidx = getelementptr inbounds <2 x i32>, <2 x i32>* %array, i64 3
   %tmp = load <2 x i32>* %arrayidx, align 8
   %tmp1 = load <2 x i32>** @globalArray32x2, align 8
-  %arrayidx1 = getelementptr inbounds <2 x i32>* %tmp1, i64 5
+  %arrayidx1 = getelementptr inbounds <2 x i32>, <2 x i32>* %tmp1, i64 5
   store <2 x i32> %tmp, <2 x i32>* %arrayidx1, align 8
   ret void
 }
@@ -225,10 +225,10 @@ entry:
 ; CHECK: ldr [[DEST:d[0-9]+]], [x0, [[SHIFTEDOFFSET]]]
 ; CHECK: ldr [[BASE:x[0-9]+]],
 ; CHECK: str [[DEST]], {{\[}}[[BASE]], [[SHIFTEDOFFSET]]]
-  %arrayidx = getelementptr inbounds <4 x i16>* %array, i64 %offset
+  %arrayidx = getelementptr inbounds <4 x i16>, <4 x i16>* %array, i64 %offset
   %tmp = load <4 x i16>* %arrayidx, align 8
   %tmp1 = load <4 x i16>** @globalArray16x4, align 8
-  %arrayidx1 = getelementptr inbounds <4 x i16>* %tmp1, i64 %offset
+  %arrayidx1 = getelementptr inbounds <4 x i16>, <4 x i16>* %tmp1, i64 %offset
   store <4 x i16> %tmp, <4 x i16>* %arrayidx1, align 8
   ret void
 }
@@ -239,10 +239,10 @@ entry:
 ; CHECK: ldr [[DEST:d[0-9]+]], [x0, #24]
 ; CHECK: ldr [[BASE:x[0-9]+]],
 ; CHECK: str [[DEST]], {{\[}}[[BASE]], #40]
-  %arrayidx = getelementptr inbounds <4 x i16>* %array, i64 3
+  %arrayidx = getelementptr inbounds <4 x i16>, <4 x i16>* %array, i64 3
   %tmp = load <4 x i16>* %arrayidx, align 8
   %tmp1 = load <4 x i16>** @globalArray16x4, align 8
-  %arrayidx1 = getelementptr inbounds <4 x i16>* %tmp1, i64 5
+  %arrayidx1 = getelementptr inbounds <4 x i16>, <4 x i16>* %tmp1, i64 5
   store <4 x i16> %tmp, <4 x i16>* %arrayidx1, align 8
   ret void
 }
@@ -254,10 +254,10 @@ entry:
 ; CHECK: ldr [[DEST:d[0-9]+]], [x0, [[SHIFTEDOFFSET]]]
 ; CHECK: ldr [[BASE:x[0-9]+]],
 ; CHECK: str [[DEST]], {{\[}}[[BASE]], [[SHIFTEDOFFSET]]]
-  %arrayidx = getelementptr inbounds <8 x i8>* %array, i64 %offset
+  %arrayidx = getelementptr inbounds <8 x i8>, <8 x i8>* %array, i64 %offset
   %tmp = load <8 x i8>* %arrayidx, align 8
   %tmp1 = load <8 x i8>** @globalArray8x8, align 8
-  %arrayidx1 = getelementptr inbounds <8 x i8>* %tmp1, i64 %offset
+  %arrayidx1 = getelementptr inbounds <8 x i8>, <8 x i8>* %tmp1, i64 %offset
   store <8 x i8> %tmp, <8 x i8>* %arrayidx1, align 8
   ret void
 }
@@ -419,7 +419,7 @@ define <8 x i8> @fct16(i8* nocapture %sp
 ; CHECK: ldr b[[REGNUM:[0-9]+]], [x0, #1]
 ; CHECK-NEXT: mul.8b v0, v[[REGNUM]], v[[REGNUM]]
 entry:
-  %addr = getelementptr i8* %sp0, i64 1
+  %addr = getelementptr i8, i8* %sp0, i64 1
   %pix_sp0.0.copyload = load i8* %addr, align 1
   %vec = insertelement <8 x i8> undef, i8 %pix_sp0.0.copyload, i32 0
   %vmull.i = mul <8 x i8> %vec, %vec
@@ -431,7 +431,7 @@ define <16 x i8> @fct17(i8* nocapture %s
 ; CHECK: ldr b[[REGNUM:[0-9]+]], [x0, #1]
 ; CHECK-NEXT: mul.16b v0, v[[REGNUM]], v[[REGNUM]]
 entry:
-  %addr = getelementptr i8* %sp0, i64 1
+  %addr = getelementptr i8, i8* %sp0, i64 1
   %pix_sp0.0.copyload = load i8* %addr, align 1
   %vec = insertelement <16 x i8> undef, i8 %pix_sp0.0.copyload, i32 0
   %vmull.i = mul <16 x i8> %vec, %vec
@@ -443,7 +443,7 @@ define <4 x i16> @fct18(i16* nocapture %
 ; CHECK: ldr h[[REGNUM:[0-9]+]], [x0, #2]
 ; CHECK-NEXT: mul.4h v0, v[[REGNUM]], v[[REGNUM]]
 entry:
-  %addr = getelementptr i16* %sp0, i64 1
+  %addr = getelementptr i16, i16* %sp0, i64 1
   %pix_sp0.0.copyload = load i16* %addr, align 1
   %vec = insertelement <4 x i16> undef, i16 %pix_sp0.0.copyload, i32 0
   %vmull.i = mul <4 x i16> %vec, %vec
@@ -455,7 +455,7 @@ define <8 x i16> @fct19(i16* nocapture %
 ; CHECK: ldr h[[REGNUM:[0-9]+]], [x0, #2]
 ; CHECK-NEXT: mul.8h v0, v[[REGNUM]], v[[REGNUM]]
 entry:
-  %addr = getelementptr i16* %sp0, i64 1
+  %addr = getelementptr i16, i16* %sp0, i64 1
   %pix_sp0.0.copyload = load i16* %addr, align 1
   %vec = insertelement <8 x i16> undef, i16 %pix_sp0.0.copyload, i32 0
   %vmull.i = mul <8 x i16> %vec, %vec
@@ -467,7 +467,7 @@ define <2 x i32> @fct20(i32* nocapture %
 ; CHECK: ldr s[[REGNUM:[0-9]+]], [x0, #4]
 ; CHECK-NEXT: mul.2s v0, v[[REGNUM]], v[[REGNUM]]
 entry:
-  %addr = getelementptr i32* %sp0, i64 1
+  %addr = getelementptr i32, i32* %sp0, i64 1
   %pix_sp0.0.copyload = load i32* %addr, align 1
   %vec = insertelement <2 x i32> undef, i32 %pix_sp0.0.copyload, i32 0
   %vmull.i = mul <2 x i32> %vec, %vec
@@ -479,7 +479,7 @@ define <4 x i32> @fct21(i32* nocapture %
 ; CHECK: ldr s[[REGNUM:[0-9]+]], [x0, #4]
 ; CHECK-NEXT: mul.4s v0, v[[REGNUM]], v[[REGNUM]]
 entry:
-  %addr = getelementptr i32* %sp0, i64 1
+  %addr = getelementptr i32, i32* %sp0, i64 1
   %pix_sp0.0.copyload = load i32* %addr, align 1
   %vec = insertelement <4 x i32> undef, i32 %pix_sp0.0.copyload, i32 0
   %vmull.i = mul <4 x i32> %vec, %vec
@@ -490,7 +490,7 @@ define <1 x i64> @fct22(i64* nocapture %
 ; CHECK-LABEL: fct22:
 ; CHECK: ldr d0, [x0, #8]
 entry:
-  %addr = getelementptr i64* %sp0, i64 1
+  %addr = getelementptr i64, i64* %sp0, i64 1
   %pix_sp0.0.copyload = load i64* %addr, align 1
   %vec = insertelement <1 x i64> undef, i64 %pix_sp0.0.copyload, i32 0
    ret <1 x i64> %vec
@@ -500,7 +500,7 @@ define <2 x i64> @fct23(i64* nocapture %
 ; CHECK-LABEL: fct23:
 ; CHECK: ldr d[[REGNUM:[0-9]+]], [x0, #8]
 entry:
-  %addr = getelementptr i64* %sp0, i64 1
+  %addr = getelementptr i64, i64* %sp0, i64 1
   %pix_sp0.0.copyload = load i64* %addr, align 1
   %vec = insertelement <2 x i64> undef, i64 %pix_sp0.0.copyload, i32 0
   ret <2 x i64> %vec
@@ -513,7 +513,7 @@ define <8 x i8> @fct24(i8* nocapture %sp
 ; CHECK: ldr b[[REGNUM:[0-9]+]], [x0, x1]
 ; CHECK-NEXT: mul.8b v0, v[[REGNUM]], v[[REGNUM]]
 entry:
-  %addr = getelementptr i8* %sp0, i64 %offset
+  %addr = getelementptr i8, i8* %sp0, i64 %offset
   %pix_sp0.0.copyload = load i8* %addr, align 1
   %vec = insertelement <8 x i8> undef, i8 %pix_sp0.0.copyload, i32 0
   %vmull.i = mul <8 x i8> %vec, %vec
@@ -525,7 +525,7 @@ define <16 x i8> @fct25(i8* nocapture %s
 ; CHECK: ldr b[[REGNUM:[0-9]+]], [x0, x1]
 ; CHECK-NEXT: mul.16b v0, v[[REGNUM]], v[[REGNUM]]
 entry:
-  %addr = getelementptr i8* %sp0, i64 %offset
+  %addr = getelementptr i8, i8* %sp0, i64 %offset
   %pix_sp0.0.copyload = load i8* %addr, align 1
   %vec = insertelement <16 x i8> undef, i8 %pix_sp0.0.copyload, i32 0
   %vmull.i = mul <16 x i8> %vec, %vec
@@ -537,7 +537,7 @@ define <4 x i16> @fct26(i16* nocapture %
 ; CHECK: ldr h[[REGNUM:[0-9]+]], [x0, x1, lsl #1]
 ; CHECK-NEXT: mul.4h v0, v[[REGNUM]], v[[REGNUM]]
 entry:
-  %addr = getelementptr i16* %sp0, i64 %offset
+  %addr = getelementptr i16, i16* %sp0, i64 %offset
   %pix_sp0.0.copyload = load i16* %addr, align 1
   %vec = insertelement <4 x i16> undef, i16 %pix_sp0.0.copyload, i32 0
   %vmull.i = mul <4 x i16> %vec, %vec
@@ -549,7 +549,7 @@ define <8 x i16> @fct27(i16* nocapture %
 ; CHECK: ldr h[[REGNUM:[0-9]+]], [x0, x1, lsl #1]
 ; CHECK-NEXT: mul.8h v0, v[[REGNUM]], v[[REGNUM]]
 entry:
-  %addr = getelementptr i16* %sp0, i64 %offset
+  %addr = getelementptr i16, i16* %sp0, i64 %offset
   %pix_sp0.0.copyload = load i16* %addr, align 1
   %vec = insertelement <8 x i16> undef, i16 %pix_sp0.0.copyload, i32 0
   %vmull.i = mul <8 x i16> %vec, %vec
@@ -561,7 +561,7 @@ define <2 x i32> @fct28(i32* nocapture %
 ; CHECK: ldr s[[REGNUM:[0-9]+]], [x0, x1, lsl #2]
 ; CHECK-NEXT: mul.2s v0, v[[REGNUM]], v[[REGNUM]]
 entry:
-  %addr = getelementptr i32* %sp0, i64 %offset
+  %addr = getelementptr i32, i32* %sp0, i64 %offset
   %pix_sp0.0.copyload = load i32* %addr, align 1
   %vec = insertelement <2 x i32> undef, i32 %pix_sp0.0.copyload, i32 0
   %vmull.i = mul <2 x i32> %vec, %vec
@@ -573,7 +573,7 @@ define <4 x i32> @fct29(i32* nocapture %
 ; CHECK: ldr s[[REGNUM:[0-9]+]], [x0, x1, lsl #2]
 ; CHECK-NEXT: mul.4s v0, v[[REGNUM]], v[[REGNUM]]
 entry:
-  %addr = getelementptr i32* %sp0, i64 %offset
+  %addr = getelementptr i32, i32* %sp0, i64 %offset
   %pix_sp0.0.copyload = load i32* %addr, align 1
   %vec = insertelement <4 x i32> undef, i32 %pix_sp0.0.copyload, i32 0
   %vmull.i = mul <4 x i32> %vec, %vec
@@ -584,7 +584,7 @@ define <1 x i64> @fct30(i64* nocapture %
 ; CHECK-LABEL: fct30:
 ; CHECK: ldr d0, [x0, x1, lsl #3]
 entry:
-  %addr = getelementptr i64* %sp0, i64 %offset
+  %addr = getelementptr i64, i64* %sp0, i64 %offset
   %pix_sp0.0.copyload = load i64* %addr, align 1
   %vec = insertelement <1 x i64> undef, i64 %pix_sp0.0.copyload, i32 0
    ret <1 x i64> %vec
@@ -594,7 +594,7 @@ define <2 x i64> @fct31(i64* nocapture %
 ; CHECK-LABEL: fct31:
 ; CHECK: ldr d0, [x0, x1, lsl #3]
 entry:
-  %addr = getelementptr i64* %sp0, i64 %offset
+  %addr = getelementptr i64, i64* %sp0, i64 %offset
   %pix_sp0.0.copyload = load i64* %addr, align 1
   %vec = insertelement <2 x i64> undef, i64 %pix_sp0.0.copyload, i32 0
   ret <2 x i64> %vec

Modified: llvm/trunk/test/CodeGen/AArch64/arm64-virtual_base.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/arm64-virtual_base.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/arm64-virtual_base.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/arm64-virtual_base.ll Fri Feb 27 13:29:02 2015
@@ -39,9 +39,9 @@ define void @Precompute_Patch_Values(%st
 ; CHECK-NEXT: stur [[VAL2]], {{\[}}sp, #216]
 entry:
   %Control_Points = alloca [16 x [3 x double]], align 8
-  %arraydecay5.3.1 = getelementptr inbounds [16 x [3 x double]]* %Control_Points, i64 0, i64 9, i64 0
+  %arraydecay5.3.1 = getelementptr inbounds [16 x [3 x double]], [16 x [3 x double]]* %Control_Points, i64 0, i64 9, i64 0
   %tmp14 = bitcast double* %arraydecay5.3.1 to i8*
-  %arraydecay11.3.1 = getelementptr inbounds %struct.Bicubic_Patch_Struct* %Shape, i64 0, i32 12, i64 1, i64 3, i64 0
+  %arraydecay11.3.1 = getelementptr inbounds %struct.Bicubic_Patch_Struct, %struct.Bicubic_Patch_Struct* %Shape, i64 0, i32 12, i64 1, i64 3, i64 0
   %tmp15 = bitcast double* %arraydecay11.3.1 to i8*
   call void @llvm.memcpy.p0i8.p0i8.i64(i8* %tmp14, i8* %tmp15, i64 24, i32 1, i1 false)
   ret void

Modified: llvm/trunk/test/CodeGen/AArch64/arm64-volatile.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/arm64-volatile.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/arm64-volatile.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/arm64-volatile.ll Fri Feb 27 13:29:02 2015
@@ -4,9 +4,9 @@ define i64 @normal_load(i64* nocapture %
 ; CHECK: ldp
 ; CHECK-NEXT: add
 ; CHECK-NEXT: ret
-  %add.ptr = getelementptr inbounds i64* %bar, i64 1
+  %add.ptr = getelementptr inbounds i64, i64* %bar, i64 1
   %tmp = load i64* %add.ptr, align 8
-  %add.ptr1 = getelementptr inbounds i64* %bar, i64 2
+  %add.ptr1 = getelementptr inbounds i64, i64* %bar, i64 2
   %tmp1 = load i64* %add.ptr1, align 8
   %add = add nsw i64 %tmp1, %tmp
   ret i64 %add
@@ -18,9 +18,9 @@ define i64 @volatile_load(i64* nocapture
 ; CHECK-NEXT: ldr
 ; CHECK-NEXT: add
 ; CHECK-NEXT: ret
-  %add.ptr = getelementptr inbounds i64* %bar, i64 1
+  %add.ptr = getelementptr inbounds i64, i64* %bar, i64 1
   %tmp = load volatile i64* %add.ptr, align 8
-  %add.ptr1 = getelementptr inbounds i64* %bar, i64 2
+  %add.ptr1 = getelementptr inbounds i64, i64* %bar, i64 2
   %tmp1 = load volatile i64* %add.ptr1, align 8
   %add = add nsw i64 %tmp1, %tmp
   ret i64 %add

Modified: llvm/trunk/test/CodeGen/AArch64/arm64-zextload-unscaled.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/arm64-zextload-unscaled.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/arm64-zextload-unscaled.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/arm64-zextload-unscaled.ll Fri Feb 27 13:29:02 2015
@@ -6,7 +6,7 @@ define void @test_zextloadi1_unscaled(i1
 ; CHECK-LABEL: test_zextloadi1_unscaled:
 ; CHECK: ldurb {{w[0-9]+}}, [{{x[0-9]+}}, #-7]
 
-  %addr = getelementptr i1* %base, i32 -7
+  %addr = getelementptr i1, i1* %base, i32 -7
   %val = load i1* %addr, align 1
 
   %extended = zext i1 %val to i32
@@ -18,7 +18,7 @@ define void @test_zextloadi8_unscaled(i8
 ; CHECK-LABEL: test_zextloadi8_unscaled:
 ; CHECK: ldurb {{w[0-9]+}}, [{{x[0-9]+}}, #-7]
 
-  %addr = getelementptr i8* %base, i32 -7
+  %addr = getelementptr i8, i8* %base, i32 -7
   %val = load i8* %addr, align 1
 
   %extended = zext i8 %val to i32
@@ -30,7 +30,7 @@ define void @test_zextloadi16_unscaled(i
 ; CHECK-LABEL: test_zextloadi16_unscaled:
 ; CHECK: ldurh {{w[0-9]+}}, [{{x[0-9]+}}, #-14]
 
-  %addr = getelementptr i16* %base, i32 -7
+  %addr = getelementptr i16, i16* %base, i32 -7
   %val = load i16* %addr, align 2
 
   %extended = zext i16 %val to i32

Modified: llvm/trunk/test/CodeGen/AArch64/assertion-rc-mismatch.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/assertion-rc-mismatch.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/assertion-rc-mismatch.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/assertion-rc-mismatch.ll Fri Feb 27 13:29:02 2015
@@ -11,7 +11,7 @@ if:
   br label %end
 else:
   %tmp3 = call i8* @llvm.returnaddress(i32 0)
-  %ptr = getelementptr inbounds i8* %tmp3, i64 -16
+  %ptr = getelementptr inbounds i8, i8* %tmp3, i64 -16
   %ld = load i8* %ptr, align 4
   %tmp2 = inttoptr i8 %ld to i8*
   br label %end

Modified: llvm/trunk/test/CodeGen/AArch64/cmpwithshort.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/cmpwithshort.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/cmpwithshort.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/cmpwithshort.ll Fri Feb 27 13:29:02 2015
@@ -5,7 +5,7 @@ define i16 @test_1cmp_signed_1(i16* %ptr
 ; CHECK: ldrsh
 ; CHECK-NEXT: cmn
 entry:
-  %addr = getelementptr inbounds i16* %ptr1, i16 0
+  %addr = getelementptr inbounds i16, i16* %ptr1, i16 0
   %val = load i16* %addr, align 2
   %cmp = icmp eq i16 %val, -1
   br i1 %cmp, label %if, label %if.then
@@ -20,7 +20,7 @@ define i16 @test_1cmp_signed_2(i16* %ptr
 ; CHECK: ldrsh
 ; CHECK-NEXT: cmn
 entry:
-  %addr = getelementptr inbounds i16* %ptr1, i16 0
+  %addr = getelementptr inbounds i16, i16* %ptr1, i16 0
   %val = load i16* %addr, align 2
   %cmp = icmp sge i16 %val, -1
   br i1 %cmp, label %if, label %if.then
@@ -35,7 +35,7 @@ define i16 @test_1cmp_unsigned_1(i16* %p
 ; CHECK: ldrsh
 ; CHECK-NEXT: cmn
 entry:
-  %addr = getelementptr inbounds i16* %ptr1, i16 0
+  %addr = getelementptr inbounds i16, i16* %ptr1, i16 0
   %val = load i16* %addr, align 2
   %cmp = icmp uge i16 %val, -1
   br i1 %cmp, label %if, label %if.then

Modified: llvm/trunk/test/CodeGen/AArch64/combine-comparisons-by-cse.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/combine-comparisons-by-cse.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/combine-comparisons-by-cse.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/combine-comparisons-by-cse.ll Fri Feb 27 13:29:02 2015
@@ -237,7 +237,7 @@ declare %struct.Struct* @Update(%struct.
 ; no checks for this case, it just should be processed without errors
 define void @combine_non_adjacent_cmp_br(%struct.Struct* nocapture readonly %hdCall) #0 {
 entry:
-  %size = getelementptr inbounds %struct.Struct* %hdCall, i64 0, i32 0
+  %size = getelementptr inbounds %struct.Struct, %struct.Struct* %hdCall, i64 0, i32 0
   %0 = load i64* %size, align 8
   br label %land.rhs
 
@@ -374,7 +374,7 @@ entry:
   br i1 %cmp, label %land.lhs.true, label %if.end
 
 land.lhs.true:                                    ; preds = %entry
-  %arrayidx = getelementptr inbounds i8** %argv, i64 1
+  %arrayidx = getelementptr inbounds i8*, i8** %argv, i64 1
   %0 = load i8** %arrayidx, align 8
   %cmp1 = icmp eq i8* %0, null
   br i1 %cmp1, label %if.end, label %return

Modified: llvm/trunk/test/CodeGen/AArch64/complex-copy-noneon.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/complex-copy-noneon.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/complex-copy-noneon.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/complex-copy-noneon.ll Fri Feb 27 13:29:02 2015
@@ -8,13 +8,13 @@ define void @store_combine() nounwind {
   %src = alloca { double, double }, align 8
   %dst = alloca { double, double }, align 8
 
-  %src.realp = getelementptr inbounds { double, double }* %src, i32 0, i32 0
+  %src.realp = getelementptr inbounds { double, double }, { double, double }* %src, i32 0, i32 0
   %src.real = load double* %src.realp
-  %src.imagp = getelementptr inbounds { double, double }* %src, i32 0, i32 1
+  %src.imagp = getelementptr inbounds { double, double }, { double, double }* %src, i32 0, i32 1
   %src.imag = load double* %src.imagp
 
-  %dst.realp = getelementptr inbounds { double, double }* %dst, i32 0, i32 0
-  %dst.imagp = getelementptr inbounds { double, double }* %dst, i32 0, i32 1
+  %dst.realp = getelementptr inbounds { double, double }, { double, double }* %dst, i32 0, i32 0
+  %dst.imagp = getelementptr inbounds { double, double }, { double, double }* %dst, i32 0, i32 1
   store double %src.real, double* %dst.realp
   store double %src.imag, double* %dst.imagp
   ret void

Modified: llvm/trunk/test/CodeGen/AArch64/eliminate-trunc.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/eliminate-trunc.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/eliminate-trunc.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/eliminate-trunc.ll Fri Feb 27 13:29:02 2015
@@ -14,10 +14,10 @@ entry:
 
 for.body4.us:
   %indvars.iv = phi i64 [ 0, %for.body4.lr.ph.us ], [ %indvars.iv.next, %for.body4.us ]
-  %arrayidx6.us = getelementptr inbounds [8 x i8]* %a, i64 %indvars.iv26, i64 %indvars.iv
+  %arrayidx6.us = getelementptr inbounds [8 x i8], [8 x i8]* %a, i64 %indvars.iv26, i64 %indvars.iv
   %0 = load i8* %arrayidx6.us, align 1
   %idxprom7.us = zext i8 %0 to i64
-  %arrayidx8.us = getelementptr inbounds i8* %box, i64 %idxprom7.us
+  %arrayidx8.us = getelementptr inbounds i8, i8* %box, i64 %idxprom7.us
   %1 = load i8* %arrayidx8.us, align 1
   store i8 %1, i8* %arrayidx6.us, align 1
   %indvars.iv.next = add nuw nsw i64 %indvars.iv, 1

Modified: llvm/trunk/test/CodeGen/AArch64/extern-weak.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/extern-weak.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/extern-weak.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/extern-weak.ll Fri Feb 27 13:29:02 2015
@@ -31,7 +31,7 @@ define i32()* @foo() {
 @arr_var = extern_weak global [10 x i32]
 
 define i32* @bar() {
-  %addr = getelementptr [10 x i32]* @arr_var, i32 0, i32 5
+  %addr = getelementptr [10 x i32], [10 x i32]* @arr_var, i32 0, i32 5
 
 
 ; CHECK: adrp x[[ADDRHI:[0-9]+]], :got:arr_var

Modified: llvm/trunk/test/CodeGen/AArch64/f16-convert.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/f16-convert.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/f16-convert.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/f16-convert.ll Fri Feb 27 13:29:02 2015
@@ -29,7 +29,7 @@ define float @load2(i16* nocapture reado
 ; CHECK-NEXT: ret
 
   %idxprom = sext i32 %i to i64
-  %arrayidx = getelementptr inbounds i16* %a, i64 %idxprom
+  %arrayidx = getelementptr inbounds i16, i16* %a, i64 %idxprom
   %tmp = load i16* %arrayidx, align 2
   %tmp1 = tail call float @llvm.convert.from.fp16.f32(i16 %tmp)
   ret float %tmp1
@@ -42,7 +42,7 @@ define double @load3(i16* nocapture read
 ; CHECK-NEXT: ret
 
   %idxprom = sext i32 %i to i64
-  %arrayidx = getelementptr inbounds i16* %a, i64 %idxprom
+  %arrayidx = getelementptr inbounds i16, i16* %a, i64 %idxprom
   %tmp = load i16* %arrayidx, align 2
   %conv = tail call double @llvm.convert.from.fp16.f64(i16 %tmp)
   ret double %conv
@@ -54,7 +54,7 @@ define float @load4(i16* nocapture reado
 ; CHECK-NEXT: fcvt s0, [[HREG]]
 ; CHECK-NEXT: ret
 
-  %arrayidx = getelementptr inbounds i16* %a, i64 %i
+  %arrayidx = getelementptr inbounds i16, i16* %a, i64 %i
   %tmp = load i16* %arrayidx, align 2
   %tmp1 = tail call float @llvm.convert.from.fp16.f32(i16 %tmp)
   ret float %tmp1
@@ -66,7 +66,7 @@ define double @load5(i16* nocapture read
 ; CHECK-NEXT: fcvt d0, [[HREG]]
 ; CHECK-NEXT: ret
 
-  %arrayidx = getelementptr inbounds i16* %a, i64 %i
+  %arrayidx = getelementptr inbounds i16, i16* %a, i64 %i
   %tmp = load i16* %arrayidx, align 2
   %conv = tail call double @llvm.convert.from.fp16.f64(i16 %tmp)
   ret double %conv
@@ -78,7 +78,7 @@ define float @load6(i16* nocapture reado
 ; CHECK-NEXT: fcvt s0, [[HREG]]
 ; CHECK-NEXT: ret
 
-  %arrayidx = getelementptr inbounds i16* %a, i64 10
+  %arrayidx = getelementptr inbounds i16, i16* %a, i64 10
   %tmp = load i16* %arrayidx, align 2
   %tmp1 = tail call float @llvm.convert.from.fp16.f32(i16 %tmp)
   ret float %tmp1
@@ -90,7 +90,7 @@ define double @load7(i16* nocapture read
 ; CHECK-NEXT: fcvt d0, [[HREG]]
 ; CHECK-NEXT: ret
 
-  %arrayidx = getelementptr inbounds i16* %a, i64 10
+  %arrayidx = getelementptr inbounds i16, i16* %a, i64 10
   %tmp = load i16* %arrayidx, align 2
   %conv = tail call double @llvm.convert.from.fp16.f64(i16 %tmp)
   ret double %conv
@@ -102,7 +102,7 @@ define float @load8(i16* nocapture reado
 ; CHECK-NEXT: fcvt s0, [[HREG]]
 ; CHECK-NEXT: ret
 
-  %arrayidx = getelementptr inbounds i16* %a, i64 -10
+  %arrayidx = getelementptr inbounds i16, i16* %a, i64 -10
   %tmp = load i16* %arrayidx, align 2
   %tmp1 = tail call float @llvm.convert.from.fp16.f32(i16 %tmp)
   ret float %tmp1
@@ -114,7 +114,7 @@ define double @load9(i16* nocapture read
 ; CHECK-NEXT: fcvt d0, [[HREG]]
 ; CHECK-NEXT: ret
 
-  %arrayidx = getelementptr inbounds i16* %a, i64 -10
+  %arrayidx = getelementptr inbounds i16, i16* %a, i64 -10
   %tmp = load i16* %arrayidx, align 2
   %conv = tail call double @llvm.convert.from.fp16.f64(i16 %tmp)
   ret double %conv
@@ -152,7 +152,7 @@ define void @store2(i16* nocapture %a, i
 
   %tmp = tail call i16 @llvm.convert.to.fp16.f32(float %val)
   %idxprom = sext i32 %i to i64
-  %arrayidx = getelementptr inbounds i16* %a, i64 %idxprom
+  %arrayidx = getelementptr inbounds i16, i16* %a, i64 %idxprom
   store i16 %tmp, i16* %arrayidx, align 2
   ret void
 }
@@ -167,7 +167,7 @@ define void @store3(i16* nocapture %a, i
   %conv = fptrunc double %val to float
   %tmp = tail call i16 @llvm.convert.to.fp16.f32(float %conv)
   %idxprom = sext i32 %i to i64
-  %arrayidx = getelementptr inbounds i16* %a, i64 %idxprom
+  %arrayidx = getelementptr inbounds i16, i16* %a, i64 %idxprom
   store i16 %tmp, i16* %arrayidx, align 2
   ret void
 }
@@ -179,7 +179,7 @@ define void @store4(i16* nocapture %a, i
 ; CHECK-NEXT: ret
 
   %tmp = tail call i16 @llvm.convert.to.fp16.f32(float %val)
-  %arrayidx = getelementptr inbounds i16* %a, i64 %i
+  %arrayidx = getelementptr inbounds i16, i16* %a, i64 %i
   store i16 %tmp, i16* %arrayidx, align 2
   ret void
 }
@@ -193,7 +193,7 @@ define void @store5(i16* nocapture %a, i
 
   %conv = fptrunc double %val to float
   %tmp = tail call i16 @llvm.convert.to.fp16.f32(float %conv)
-  %arrayidx = getelementptr inbounds i16* %a, i64 %i
+  %arrayidx = getelementptr inbounds i16, i16* %a, i64 %i
   store i16 %tmp, i16* %arrayidx, align 2
   ret void
 }
@@ -205,7 +205,7 @@ define void @store6(i16* nocapture %a, f
 ; CHECK-NEXT: ret
 
   %tmp = tail call i16 @llvm.convert.to.fp16.f32(float %val)
-  %arrayidx = getelementptr inbounds i16* %a, i64 10
+  %arrayidx = getelementptr inbounds i16, i16* %a, i64 10
   store i16 %tmp, i16* %arrayidx, align 2
   ret void
 }
@@ -219,7 +219,7 @@ define void @store7(i16* nocapture %a, d
 
   %conv = fptrunc double %val to float
   %tmp = tail call i16 @llvm.convert.to.fp16.f32(float %conv)
-  %arrayidx = getelementptr inbounds i16* %a, i64 10
+  %arrayidx = getelementptr inbounds i16, i16* %a, i64 10
   store i16 %tmp, i16* %arrayidx, align 2
   ret void
 }
@@ -231,7 +231,7 @@ define void @store8(i16* nocapture %a, f
 ; CHECK-NEXT: ret
 
   %tmp = tail call i16 @llvm.convert.to.fp16.f32(float %val)
-  %arrayidx = getelementptr inbounds i16* %a, i64 -10
+  %arrayidx = getelementptr inbounds i16, i16* %a, i64 -10
   store i16 %tmp, i16* %arrayidx, align 2
   ret void
 }
@@ -245,7 +245,7 @@ define void @store9(i16* nocapture %a, d
 
   %conv = fptrunc double %val to float
   %tmp = tail call i16 @llvm.convert.to.fp16.f32(float %conv)
-  %arrayidx = getelementptr inbounds i16* %a, i64 -10
+  %arrayidx = getelementptr inbounds i16, i16* %a, i64 -10
   store i16 %tmp, i16* %arrayidx, align 2
   ret void
 }

Modified: llvm/trunk/test/CodeGen/AArch64/fast-isel-gep.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/fast-isel-gep.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/fast-isel-gep.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/fast-isel-gep.ll Fri Feb 27 13:29:02 2015
@@ -5,7 +5,7 @@
 define double* @test_struct(%struct.foo* %f) {
 ; CHECK-LABEL: test_struct
 ; CHECK:       add x0, x0, #24
-  %1 = getelementptr inbounds %struct.foo* %f, i64 0, i32 3
+  %1 = getelementptr inbounds %struct.foo, %struct.foo* %f, i64 0, i32 3
   ret double* %1
 }
 
@@ -13,21 +13,21 @@ define i32* @test_array1(i32* %a, i64 %i
 ; CHECK-LABEL: test_array1
 ; CHECK:       orr [[REG:x[0-9]+]], xzr, #0x4
 ; CHECK-NEXT:  madd  x0, x1, [[REG]], x0
-  %1 = getelementptr inbounds i32* %a, i64 %i
+  %1 = getelementptr inbounds i32, i32* %a, i64 %i
   ret i32* %1
 }
 
 define i32* @test_array2(i32* %a) {
 ; CHECK-LABEL: test_array2
 ; CHECK:       add  x0, x0, #16
-  %1 = getelementptr inbounds i32* %a, i64 4
+  %1 = getelementptr inbounds i32, i32* %a, i64 4
   ret i32* %1
 }
 
 define i32* @test_array3(i32* %a) {
 ; CHECK-LABEL: test_array3
 ; CHECK:       add x0, x0, #1, lsl #12
-  %1 = getelementptr inbounds i32* %a, i64 1024
+  %1 = getelementptr inbounds i32, i32* %a, i64 1024
   ret i32* %1
 }
 
@@ -35,7 +35,7 @@ define i32* @test_array4(i32* %a) {
 ; CHECK-LABEL: test_array4
 ; CHECK:       movz [[REG:x[0-9]+]], #0x1008
 ; CHECK-NEXR:  add x0, x0, [[REG]]
-  %1 = getelementptr inbounds i32* %a, i64 1026
+  %1 = getelementptr inbounds i32, i32* %a, i64 1026
   ret i32* %1
 }
 
@@ -44,6 +44,6 @@ define i32* @test_array5(i32* %a, i32 %i
 ; CHECK:       sxtw [[REG1:x[0-9]+]], w1
 ; CHECK-NEXT:  orr  [[REG2:x[0-9]+]], xzr, #0x4
 ; CHECK-NEXT:  madd  {{x[0-9]+}}, [[REG1]], [[REG2]], x0
-  %1 = getelementptr inbounds i32* %a, i32 %i
+  %1 = getelementptr inbounds i32, i32* %a, i32 %i
   ret i32* %1
 }

Modified: llvm/trunk/test/CodeGen/AArch64/func-argpassing.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/func-argpassing.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/func-argpassing.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/func-argpassing.ll Fri Feb 27 13:29:02 2015
@@ -34,8 +34,8 @@ define void @add_floats(float %val1, flo
 ; with memcpy.
 define void @take_struct(%myStruct* byval %structval) {
 ; CHECK-LABEL: take_struct:
-    %addr0 = getelementptr %myStruct* %structval, i64 0, i32 2
-    %addr1 = getelementptr %myStruct* %structval, i64 0, i32 0
+    %addr0 = getelementptr %myStruct, %myStruct* %structval, i64 0, i32 2
+    %addr1 = getelementptr %myStruct, %myStruct* %structval, i64 0, i32 0
 
     %val0 = load volatile i32* %addr0
     ; Some weird move means x0 is used for one access
@@ -55,8 +55,8 @@ define void @take_struct(%myStruct* byva
 define void @check_byval_align(i32* byval %ignore, %myStruct* byval align 16 %structval) {
 ; CHECK-LABEL: check_byval_align:
 
-    %addr0 = getelementptr %myStruct* %structval, i64 0, i32 2
-    %addr1 = getelementptr %myStruct* %structval, i64 0, i32 0
+    %addr0 = getelementptr %myStruct, %myStruct* %structval, i64 0, i32 2
+    %addr1 = getelementptr %myStruct, %myStruct* %structval, i64 0, i32 0
 
     %val0 = load volatile i32* %addr0
     ; Some weird move means x0 is used for one access
@@ -108,9 +108,9 @@ define [2 x i64] @return_struct() {
 ; if LLVM does it to %myStruct too. So this is the simplest check
 define void @return_large_struct(%myStruct* sret %retval) {
 ; CHECK-LABEL: return_large_struct:
-    %addr0 = getelementptr %myStruct* %retval, i64 0, i32 0
-    %addr1 = getelementptr %myStruct* %retval, i64 0, i32 1
-    %addr2 = getelementptr %myStruct* %retval, i64 0, i32 2
+    %addr0 = getelementptr %myStruct, %myStruct* %retval, i64 0, i32 0
+    %addr1 = getelementptr %myStruct, %myStruct* %retval, i64 0, i32 1
+    %addr2 = getelementptr %myStruct, %myStruct* %retval, i64 0, i32 2
 
     store i64 42, i64* %addr0
     store i8 2, i8* %addr1
@@ -129,7 +129,7 @@ define i32 @struct_on_stack(i8 %var0, i1
                           i32* %var6, %myStruct* byval %struct, i32* byval %stacked,
                           double %notstacked) {
 ; CHECK-LABEL: struct_on_stack:
-    %addr = getelementptr %myStruct* %struct, i64 0, i32 0
+    %addr = getelementptr %myStruct, %myStruct* %struct, i64 0, i32 0
     %val64 = load volatile i64* %addr
     store volatile i64 %val64, i64* @var64
     ; Currently nothing on local stack, so struct should be at sp

Modified: llvm/trunk/test/CodeGen/AArch64/global-merge-3.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/global-merge-3.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/global-merge-3.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/global-merge-3.ll Fri Feb 27 13:29:02 2015
@@ -12,8 +12,8 @@ define void @f1(i32 %a1, i32 %a2, i32 %a
;CHECK-APPLE-IOS: add	x8, x8, __MergedGlobals_x@PAGEOFF
;CHECK-APPLE-IOS: adrp	x9, __MergedGlobals_y@PAGE
;CHECK-APPLE-IOS: add	x9, x9, __MergedGlobals_y@PAGEOFF
-  %x3 = getelementptr inbounds [1000 x i32]* @x, i32 0, i64 3
-  %y3 = getelementptr inbounds [1000 x i32]* @y, i32 0, i64 3
+  %x3 = getelementptr inbounds [1000 x i32], [1000 x i32]* @x, i32 0, i64 3
+  %y3 = getelementptr inbounds [1000 x i32], [1000 x i32]* @y, i32 0, i64 3
   store i32 %a1, i32* %x3, align 4
   store i32 %a2, i32* %y3, align 4
   store i32 %a3, i32* @z, align 4

Modified: llvm/trunk/test/CodeGen/AArch64/i128-align.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/i128-align.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/i128-align.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/i128-align.ll Fri Feb 27 13:29:02 2015
@@ -8,7 +8,7 @@ define i64 @check_size() {
 ; CHECK-LABEL: check_size:
   %starti = ptrtoint %struct* @var to i64
 
-  %endp = getelementptr %struct* @var, i64 1
+  %endp = getelementptr %struct, %struct* @var, i64 1
   %endi = ptrtoint %struct* %endp to i64
 
   %diff = sub i64 %endi, %starti
@@ -20,7 +20,7 @@ define i64 @check_field() {
 ; CHECK-LABEL: check_field:
   %starti = ptrtoint %struct* @var to i64
 
-  %endp = getelementptr %struct* @var, i64 0, i32 1
+  %endp = getelementptr %struct, %struct* @var, i64 0, i32 1
   %endi = ptrtoint i128* %endp to i64
 
   %diff = sub i64 %endi, %starti

Modified: llvm/trunk/test/CodeGen/AArch64/intrinsics-memory-barrier.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/intrinsics-memory-barrier.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/intrinsics-memory-barrier.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/intrinsics-memory-barrier.ll Fri Feb 27 13:29:02 2015
@@ -22,7 +22,7 @@ define void @test_dmb_reordering(i32 %a,
 
   call void @llvm.aarch64.dmb(i32 15); CHECK: dmb sy
 
-  %d1 = getelementptr i32* %d, i64 1
+  %d1 = getelementptr i32, i32* %d, i64 1
   store i32 %b, i32* %d1             ; CHECK: str {{w[0-9]+}}, [{{x[0-9]+}}, #4]
 
   ret void
@@ -34,7 +34,7 @@ define void @test_dsb_reordering(i32 %a,
 
   call void @llvm.aarch64.dsb(i32 15); CHECK: dsb sy
 
-  %d1 = getelementptr i32* %d, i64 1
+  %d1 = getelementptr i32, i32* %d, i64 1
   store i32 %b, i32* %d1             ; CHECK: str {{w[0-9]+}}, [{{x[0-9]+}}, #4]
 
   ret void
@@ -46,7 +46,7 @@ define void @test_isb_reordering(i32 %a,
 
   call void @llvm.aarch64.isb(i32 15); CHECK: isb
 
-  %d1 = getelementptr i32* %d, i64 1
+  %d1 = getelementptr i32, i32* %d, i64 1
   store i32 %b, i32* %d1             ; CHECK: str {{w[0-9]+}}, [{{x[0-9]+}}, #4]
 
   ret void

Modified: llvm/trunk/test/CodeGen/AArch64/ldst-opt.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/ldst-opt.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/ldst-opt.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/ldst-opt.ll Fri Feb 27 13:29:02 2015
@@ -30,11 +30,11 @@ define void @load-pre-indexed-word(%stru
 ; CHECK-LABEL: load-pre-indexed-word
 ; CHECK: ldr w{{[0-9]+}}, [x{{[0-9]+}}, #32]!
 entry:
-  %a = getelementptr inbounds %struct.word* %ptr, i64 0, i32 1, i32 0
+  %a = getelementptr inbounds %struct.word, %struct.word* %ptr, i64 0, i32 1, i32 0
   %add = load i32* %a, align 4
   br label %bar
 bar:
-  %c = getelementptr inbounds %struct.word* %ptr, i64 0, i32 1
+  %c = getelementptr inbounds %struct.word, %struct.word* %ptr, i64 0, i32 1
   tail call void @bar_word(%s.word* %c, i32 %add)
   ret void
 }
@@ -43,11 +43,11 @@ define void @store-pre-indexed-word(%str
 ; CHECK-LABEL: store-pre-indexed-word
 ; CHECK: str w{{[0-9]+}}, [x{{[0-9]+}}, #32]!
 entry:
-  %a = getelementptr inbounds %struct.word* %ptr, i64 0, i32 1, i32 0
+  %a = getelementptr inbounds %struct.word, %struct.word* %ptr, i64 0, i32 1, i32 0
   store i32 %val, i32* %a, align 4
   br label %bar
 bar:
-  %c = getelementptr inbounds %struct.word* %ptr, i64 0, i32 1
+  %c = getelementptr inbounds %struct.word, %struct.word* %ptr, i64 0, i32 1
   tail call void @bar_word(%s.word* %c, i32 %val)
   ret void
 }
@@ -58,11 +58,11 @@ define void @load-pre-indexed-doubleword
 ; CHECK-LABEL: load-pre-indexed-doubleword
 ; CHECK: ldr x{{[0-9]+}}, [x{{[0-9]+}}, #32]!
 entry:
-  %a = getelementptr inbounds %struct.doubleword* %ptr, i64 0, i32 1, i32 0
+  %a = getelementptr inbounds %struct.doubleword, %struct.doubleword* %ptr, i64 0, i32 1, i32 0
   %add = load i64* %a, align 4
   br label %bar
 bar:
-  %c = getelementptr inbounds %struct.doubleword* %ptr, i64 0, i32 1
+  %c = getelementptr inbounds %struct.doubleword, %struct.doubleword* %ptr, i64 0, i32 1
   tail call void @bar_doubleword(%s.doubleword* %c, i64 %add)
   ret void
 }
@@ -71,11 +71,11 @@ define void @store-pre-indexed-doublewor
 ; CHECK-LABEL: store-pre-indexed-doubleword
 ; CHECK: str x{{[0-9]+}}, [x{{[0-9]+}}, #32]!
 entry:
-  %a = getelementptr inbounds %struct.doubleword* %ptr, i64 0, i32 1, i32 0
+  %a = getelementptr inbounds %struct.doubleword, %struct.doubleword* %ptr, i64 0, i32 1, i32 0
   store i64 %val, i64* %a, align 4
   br label %bar
 bar:
-  %c = getelementptr inbounds %struct.doubleword* %ptr, i64 0, i32 1
+  %c = getelementptr inbounds %struct.doubleword, %struct.doubleword* %ptr, i64 0, i32 1
   tail call void @bar_doubleword(%s.doubleword* %c, i64 %val)
   ret void
 }
@@ -86,11 +86,11 @@ define void @load-pre-indexed-quadword(%
 ; CHECK-LABEL: load-pre-indexed-quadword
 ; CHECK: ldr q{{[0-9]+}}, [x{{[0-9]+}}, #32]!
 entry:
-  %a = getelementptr inbounds %struct.quadword* %ptr, i64 0, i32 1, i32 0
+  %a = getelementptr inbounds %struct.quadword, %struct.quadword* %ptr, i64 0, i32 1, i32 0
   %add = load fp128* %a, align 4
   br label %bar
 bar:
-  %c = getelementptr inbounds %struct.quadword* %ptr, i64 0, i32 1
+  %c = getelementptr inbounds %struct.quadword, %struct.quadword* %ptr, i64 0, i32 1
   tail call void @bar_quadword(%s.quadword* %c, fp128 %add)
   ret void
 }
@@ -99,11 +99,11 @@ define void @store-pre-indexed-quadword(
 ; CHECK-LABEL: store-pre-indexed-quadword
 ; CHECK: str q{{[0-9]+}}, [x{{[0-9]+}}, #32]!
 entry:
-  %a = getelementptr inbounds %struct.quadword* %ptr, i64 0, i32 1, i32 0
+  %a = getelementptr inbounds %struct.quadword, %struct.quadword* %ptr, i64 0, i32 1, i32 0
   store fp128 %val, fp128* %a, align 4
   br label %bar
 bar:
-  %c = getelementptr inbounds %struct.quadword* %ptr, i64 0, i32 1
+  %c = getelementptr inbounds %struct.quadword, %struct.quadword* %ptr, i64 0, i32 1
   tail call void @bar_quadword(%s.quadword* %c, fp128 %val)
   ret void
 }
@@ -114,11 +114,11 @@ define void @load-pre-indexed-float(%str
 ; CHECK-LABEL: load-pre-indexed-float
 ; CHECK: ldr s{{[0-9]+}}, [x{{[0-9]+}}, #32]!
 entry:
-  %a = getelementptr inbounds %struct.float* %ptr, i64 0, i32 1, i32 0
+  %a = getelementptr inbounds %struct.float, %struct.float* %ptr, i64 0, i32 1, i32 0
   %add = load float* %a, align 4
   br label %bar
 bar:
-  %c = getelementptr inbounds %struct.float* %ptr, i64 0, i32 1
+  %c = getelementptr inbounds %struct.float, %struct.float* %ptr, i64 0, i32 1
   tail call void @bar_float(%s.float* %c, float %add)
   ret void
 }
@@ -127,11 +127,11 @@ define void @store-pre-indexed-float(%st
 ; CHECK-LABEL: store-pre-indexed-float
 ; CHECK: str s{{[0-9]+}}, [x{{[0-9]+}}, #32]!
 entry:
-  %a = getelementptr inbounds %struct.float* %ptr, i64 0, i32 1, i32 0
+  %a = getelementptr inbounds %struct.float, %struct.float* %ptr, i64 0, i32 1, i32 0
   store float %val, float* %a, align 4
   br label %bar
 bar:
-  %c = getelementptr inbounds %struct.float* %ptr, i64 0, i32 1
+  %c = getelementptr inbounds %struct.float, %struct.float* %ptr, i64 0, i32 1
   tail call void @bar_float(%s.float* %c, float %val)
   ret void
 }
@@ -142,11 +142,11 @@ define void @load-pre-indexed-double(%st
 ; CHECK-LABEL: load-pre-indexed-double
 ; CHECK: ldr d{{[0-9]+}}, [x{{[0-9]+}}, #32]!
 entry:
-  %a = getelementptr inbounds %struct.double* %ptr, i64 0, i32 1, i32 0
+  %a = getelementptr inbounds %struct.double, %struct.double* %ptr, i64 0, i32 1, i32 0
   %add = load double* %a, align 4
   br label %bar
 bar:
-  %c = getelementptr inbounds %struct.double* %ptr, i64 0, i32 1
+  %c = getelementptr inbounds %struct.double, %struct.double* %ptr, i64 0, i32 1
   tail call void @bar_double(%s.double* %c, double %add)
   ret void
 }
@@ -155,11 +155,11 @@ define void @store-pre-indexed-double(%s
 ; CHECK-LABEL: store-pre-indexed-double
 ; CHECK: str d{{[0-9]+}}, [x{{[0-9]+}}, #32]!
 entry:
-  %a = getelementptr inbounds %struct.double* %ptr, i64 0, i32 1, i32 0
+  %a = getelementptr inbounds %struct.double, %struct.double* %ptr, i64 0, i32 1, i32 0
   store double %val, double* %a, align 4
   br label %bar
 bar:
-  %c = getelementptr inbounds %struct.double* %ptr, i64 0, i32 1
+  %c = getelementptr inbounds %struct.double, %struct.double* %ptr, i64 0, i32 1
   tail call void @bar_double(%s.double* %c, double %val)
   ret void
 }
@@ -187,10 +187,10 @@ define i32 @load-pre-indexed-word2(%pre.
   br i1 %cond, label %if.then, label %if.end
 if.then:
   %load1 = load %pre.struct.i32** %this
-  %gep1 = getelementptr inbounds %pre.struct.i32* %load1, i64 0, i32 1
+  %gep1 = getelementptr inbounds %pre.struct.i32, %pre.struct.i32* %load1, i64 0, i32 1
   br label %return
 if.end:
-  %gep2 = getelementptr inbounds %pre.struct.i32* %load2, i64 0, i32 2
+  %gep2 = getelementptr inbounds %pre.struct.i32, %pre.struct.i32* %load2, i64 0, i32 2
   br label %return
 return:
   %retptr = phi i32* [ %gep1, %if.then ], [ %gep2, %if.end ]
@@ -205,10 +205,10 @@ define i64 @load-pre-indexed-doubleword2
   br i1 %cond, label %if.then, label %if.end
 if.then:
   %load1 = load %pre.struct.i64** %this
-  %gep1 = getelementptr inbounds %pre.struct.i64* %load1, i64 0, i32 1
+  %gep1 = getelementptr inbounds %pre.struct.i64, %pre.struct.i64* %load1, i64 0, i32 1
   br label %return
 if.end:
-  %gep2 = getelementptr inbounds %pre.struct.i64* %load2, i64 0, i32 2
+  %gep2 = getelementptr inbounds %pre.struct.i64, %pre.struct.i64* %load2, i64 0, i32 2
   br label %return
 return:
   %retptr = phi i64* [ %gep1, %if.then ], [ %gep2, %if.end ]
@@ -223,10 +223,10 @@ define <2 x i64> @load-pre-indexed-quadw
   br i1 %cond, label %if.then, label %if.end
 if.then:
   %load1 = load %pre.struct.i128** %this
-  %gep1 = getelementptr inbounds %pre.struct.i128* %load1, i64 0, i32 1
+  %gep1 = getelementptr inbounds %pre.struct.i128, %pre.struct.i128* %load1, i64 0, i32 1
   br label %return
 if.end:
-  %gep2 = getelementptr inbounds %pre.struct.i128* %load2, i64 0, i32 2
+  %gep2 = getelementptr inbounds %pre.struct.i128, %pre.struct.i128* %load2, i64 0, i32 2
   br label %return
 return:
   %retptr = phi <2 x i64>* [ %gep1, %if.then ], [ %gep2, %if.end ]
@@ -241,10 +241,10 @@ define float @load-pre-indexed-float2(%p
   br i1 %cond, label %if.then, label %if.end
 if.then:
   %load1 = load %pre.struct.float** %this
-  %gep1 = getelementptr inbounds %pre.struct.float* %load1, i64 0, i32 1
+  %gep1 = getelementptr inbounds %pre.struct.float, %pre.struct.float* %load1, i64 0, i32 1
   br label %return
 if.end:
-  %gep2 = getelementptr inbounds %pre.struct.float* %load2, i64 0, i32 2
+  %gep2 = getelementptr inbounds %pre.struct.float, %pre.struct.float* %load2, i64 0, i32 2
   br label %return
 return:
   %retptr = phi float* [ %gep1, %if.then ], [ %gep2, %if.end ]
@@ -259,10 +259,10 @@ define double @load-pre-indexed-double2(
   br i1 %cond, label %if.then, label %if.end
 if.then:
   %load1 = load %pre.struct.double** %this
-  %gep1 = getelementptr inbounds %pre.struct.double* %load1, i64 0, i32 1
+  %gep1 = getelementptr inbounds %pre.struct.double, %pre.struct.double* %load1, i64 0, i32 1
   br label %return
 if.end:
-  %gep2 = getelementptr inbounds %pre.struct.double* %load2, i64 0, i32 2
+  %gep2 = getelementptr inbounds %pre.struct.double, %pre.struct.double* %load2, i64 0, i32 2
   br label %return
 return:
   %retptr = phi double* [ %gep1, %if.then ], [ %gep2, %if.end ]
@@ -288,10 +288,10 @@ define void @store-pre-indexed-word2(%pr
   br i1 %cond, label %if.then, label %if.end
 if.then:
   %load1 = load %pre.struct.i32** %this
-  %gep1 = getelementptr inbounds %pre.struct.i32* %load1, i64 0, i32 1
+  %gep1 = getelementptr inbounds %pre.struct.i32, %pre.struct.i32* %load1, i64 0, i32 1
   br label %return
 if.end:
-  %gep2 = getelementptr inbounds %pre.struct.i32* %load2, i64 0, i32 2
+  %gep2 = getelementptr inbounds %pre.struct.i32, %pre.struct.i32* %load2, i64 0, i32 2
   br label %return
 return:
   %retptr = phi i32* [ %gep1, %if.then ], [ %gep2, %if.end ]
@@ -307,10 +307,10 @@ define void @store-pre-indexed-doublewor
   br i1 %cond, label %if.then, label %if.end
 if.then:
   %load1 = load %pre.struct.i64** %this
-  %gep1 = getelementptr inbounds %pre.struct.i64* %load1, i64 0, i32 1
+  %gep1 = getelementptr inbounds %pre.struct.i64, %pre.struct.i64* %load1, i64 0, i32 1
   br label %return
 if.end:
-  %gep2 = getelementptr inbounds %pre.struct.i64* %load2, i64 0, i32 2
+  %gep2 = getelementptr inbounds %pre.struct.i64, %pre.struct.i64* %load2, i64 0, i32 2
   br label %return
 return:
   %retptr = phi i64* [ %gep1, %if.then ], [ %gep2, %if.end ]
@@ -326,10 +326,10 @@ define void @store-pre-indexed-quadword2
   br i1 %cond, label %if.then, label %if.end
 if.then:
   %load1 = load %pre.struct.i128** %this
-  %gep1 = getelementptr inbounds %pre.struct.i128* %load1, i64 0, i32 1
+  %gep1 = getelementptr inbounds %pre.struct.i128, %pre.struct.i128* %load1, i64 0, i32 1
   br label %return
 if.end:
-  %gep2 = getelementptr inbounds %pre.struct.i128* %load2, i64 0, i32 2
+  %gep2 = getelementptr inbounds %pre.struct.i128, %pre.struct.i128* %load2, i64 0, i32 2
   br label %return
 return:
   %retptr = phi <2 x i64>* [ %gep1, %if.then ], [ %gep2, %if.end ]
@@ -345,10 +345,10 @@ define void @store-pre-indexed-float2(%p
   br i1 %cond, label %if.then, label %if.end
 if.then:
   %load1 = load %pre.struct.float** %this
-  %gep1 = getelementptr inbounds %pre.struct.float* %load1, i64 0, i32 1
+  %gep1 = getelementptr inbounds %pre.struct.float, %pre.struct.float* %load1, i64 0, i32 1
   br label %return
 if.end:
-  %gep2 = getelementptr inbounds %pre.struct.float* %load2, i64 0, i32 2
+  %gep2 = getelementptr inbounds %pre.struct.float, %pre.struct.float* %load2, i64 0, i32 2
   br label %return
 return:
   %retptr = phi float* [ %gep1, %if.then ], [ %gep2, %if.end ]
@@ -364,10 +364,10 @@ define void @store-pre-indexed-double2(%
   br i1 %cond, label %if.then, label %if.end
 if.then:
   %load1 = load %pre.struct.double** %this
-  %gep1 = getelementptr inbounds %pre.struct.double* %load1, i64 0, i32 1
+  %gep1 = getelementptr inbounds %pre.struct.double, %pre.struct.double* %load1, i64 0, i32 1
   br label %return
 if.end:
-  %gep2 = getelementptr inbounds %pre.struct.double* %load2, i64 0, i32 2
+  %gep2 = getelementptr inbounds %pre.struct.double, %pre.struct.double* %load2, i64 0, i32 2
   br label %return
 return:
   %retptr = phi double* [ %gep1, %if.then ], [ %gep2, %if.end ]
@@ -389,19 +389,19 @@ define void @load-post-indexed-word(i32*
 ; CHECK-LABEL: load-post-indexed-word
 ; CHECK: ldr w{{[0-9]+}}, [x{{[0-9]+}}], #16
 entry:
-  %gep1 = getelementptr i32* %array, i64 2
+  %gep1 = getelementptr i32, i32* %array, i64 2
   br label %body
 
 body:
   %iv2 = phi i32* [ %gep3, %body ], [ %gep1, %entry ]
   %iv = phi i64 [ %iv.next, %body ], [ %count, %entry ]
-  %gep2 = getelementptr i32* %iv2, i64 -1
+  %gep2 = getelementptr i32, i32* %iv2, i64 -1
   %load = load i32* %gep2
   call void @use-word(i32 %load)
   %load2 = load i32* %iv2
   call void @use-word(i32 %load2)
   %iv.next = add i64 %iv, -4
-  %gep3 = getelementptr i32* %iv2, i64 4
+  %gep3 = getelementptr i32, i32* %iv2, i64 4
   %cond = icmp eq i64 %iv.next, 0
   br i1 %cond, label %exit, label %body
 
@@ -413,19 +413,19 @@ define void @load-post-indexed-doublewor
 ; CHECK-LABEL: load-post-indexed-doubleword
 ; CHECK: ldr x{{[0-9]+}}, [x{{[0-9]+}}], #32
 entry:
-  %gep1 = getelementptr i64* %array, i64 2
+  %gep1 = getelementptr i64, i64* %array, i64 2
   br label %body
 
 body:
   %iv2 = phi i64* [ %gep3, %body ], [ %gep1, %entry ]
   %iv = phi i64 [ %iv.next, %body ], [ %count, %entry ]
-  %gep2 = getelementptr i64* %iv2, i64 -1
+  %gep2 = getelementptr i64, i64* %iv2, i64 -1
   %load = load i64* %gep2
   call void @use-doubleword(i64 %load)
   %load2 = load i64* %iv2
   call void @use-doubleword(i64 %load2)
   %iv.next = add i64 %iv, -4
-  %gep3 = getelementptr i64* %iv2, i64 4
+  %gep3 = getelementptr i64, i64* %iv2, i64 4
   %cond = icmp eq i64 %iv.next, 0
   br i1 %cond, label %exit, label %body
 
@@ -437,19 +437,19 @@ define void @load-post-indexed-quadword(
 ; CHECK-LABEL: load-post-indexed-quadword
 ; CHECK: ldr q{{[0-9]+}}, [x{{[0-9]+}}], #64
 entry:
-  %gep1 = getelementptr <2 x i64>* %array, i64 2
+  %gep1 = getelementptr <2 x i64>, <2 x i64>* %array, i64 2
   br label %body
 
 body:
   %iv2 = phi <2 x i64>* [ %gep3, %body ], [ %gep1, %entry ]
   %iv = phi i64 [ %iv.next, %body ], [ %count, %entry ]
-  %gep2 = getelementptr <2 x i64>* %iv2, i64 -1
+  %gep2 = getelementptr <2 x i64>, <2 x i64>* %iv2, i64 -1
   %load = load <2 x i64>* %gep2
   call void @use-quadword(<2 x i64> %load)
   %load2 = load <2 x i64>* %iv2
   call void @use-quadword(<2 x i64> %load2)
   %iv.next = add i64 %iv, -4
-  %gep3 = getelementptr <2 x i64>* %iv2, i64 4
+  %gep3 = getelementptr <2 x i64>, <2 x i64>* %iv2, i64 4
   %cond = icmp eq i64 %iv.next, 0
   br i1 %cond, label %exit, label %body
 
@@ -461,19 +461,19 @@ define void @load-post-indexed-float(flo
 ; CHECK-LABEL: load-post-indexed-float
 ; CHECK: ldr s{{[0-9]+}}, [x{{[0-9]+}}], #16
 entry:
-  %gep1 = getelementptr float* %array, i64 2
+  %gep1 = getelementptr float, float* %array, i64 2
   br label %body
 
 body:
   %iv2 = phi float* [ %gep3, %body ], [ %gep1, %entry ]
   %iv = phi i64 [ %iv.next, %body ], [ %count, %entry ]
-  %gep2 = getelementptr float* %iv2, i64 -1
+  %gep2 = getelementptr float, float* %iv2, i64 -1
   %load = load float* %gep2
   call void @use-float(float %load)
   %load2 = load float* %iv2
   call void @use-float(float %load2)
   %iv.next = add i64 %iv, -4
-  %gep3 = getelementptr float* %iv2, i64 4
+  %gep3 = getelementptr float, float* %iv2, i64 4
   %cond = icmp eq i64 %iv.next, 0
   br i1 %cond, label %exit, label %body
 
@@ -485,19 +485,19 @@ define void @load-post-indexed-double(do
 ; CHECK-LABEL: load-post-indexed-double
 ; CHECK: ldr d{{[0-9]+}}, [x{{[0-9]+}}], #32
 entry:
-  %gep1 = getelementptr double* %array, i64 2
+  %gep1 = getelementptr double, double* %array, i64 2
   br label %body
 
 body:
   %iv2 = phi double* [ %gep3, %body ], [ %gep1, %entry ]
   %iv = phi i64 [ %iv.next, %body ], [ %count, %entry ]
-  %gep2 = getelementptr double* %iv2, i64 -1
+  %gep2 = getelementptr double, double* %iv2, i64 -1
   %load = load double* %gep2
   call void @use-double(double %load)
   %load2 = load double* %iv2
   call void @use-double(double %load2)
   %iv.next = add i64 %iv, -4
-  %gep3 = getelementptr double* %iv2, i64 4
+  %gep3 = getelementptr double, double* %iv2, i64 4
   %cond = icmp eq i64 %iv.next, 0
   br i1 %cond, label %exit, label %body
 
@@ -519,18 +519,18 @@ define void @store-post-indexed-word(i32
 ; CHECK-LABEL: store-post-indexed-word
 ; CHECK: str w{{[0-9]+}}, [x{{[0-9]+}}], #16
 entry:
-  %gep1 = getelementptr i32* %array, i64 2
+  %gep1 = getelementptr i32, i32* %array, i64 2
   br label %body
 
 body:
   %iv2 = phi i32* [ %gep3, %body ], [ %gep1, %entry ]
   %iv = phi i64 [ %iv.next, %body ], [ %count, %entry ]
-  %gep2 = getelementptr i32* %iv2, i64 -1
+  %gep2 = getelementptr i32, i32* %iv2, i64 -1
   %load = load i32* %gep2
   call void @use-word(i32 %load)
   store i32 %val, i32* %iv2
   %iv.next = add i64 %iv, -4
-  %gep3 = getelementptr i32* %iv2, i64 4
+  %gep3 = getelementptr i32, i32* %iv2, i64 4
   %cond = icmp eq i64 %iv.next, 0
   br i1 %cond, label %exit, label %body
 
@@ -542,18 +542,18 @@ define void @store-post-indexed-doublewo
 ; CHECK-LABEL: store-post-indexed-doubleword
 ; CHECK: str x{{[0-9]+}}, [x{{[0-9]+}}], #32
 entry:
-  %gep1 = getelementptr i64* %array, i64 2
+  %gep1 = getelementptr i64, i64* %array, i64 2
   br label %body
 
 body:
   %iv2 = phi i64* [ %gep3, %body ], [ %gep1, %entry ]
   %iv = phi i64 [ %iv.next, %body ], [ %count, %entry ]
-  %gep2 = getelementptr i64* %iv2, i64 -1
+  %gep2 = getelementptr i64, i64* %iv2, i64 -1
   %load = load i64* %gep2
   call void @use-doubleword(i64 %load)
   store i64 %val, i64* %iv2
   %iv.next = add i64 %iv, -4
-  %gep3 = getelementptr i64* %iv2, i64 4
+  %gep3 = getelementptr i64, i64* %iv2, i64 4
   %cond = icmp eq i64 %iv.next, 0
   br i1 %cond, label %exit, label %body
 
@@ -565,18 +565,18 @@ define void @store-post-indexed-quadword
 ; CHECK-LABEL: store-post-indexed-quadword
 ; CHECK: str q{{[0-9]+}}, [x{{[0-9]+}}], #64
 entry:
-  %gep1 = getelementptr <2 x i64>* %array, i64 2
+  %gep1 = getelementptr <2 x i64>, <2 x i64>* %array, i64 2
   br label %body
 
 body:
   %iv2 = phi <2 x i64>* [ %gep3, %body ], [ %gep1, %entry ]
   %iv = phi i64 [ %iv.next, %body ], [ %count, %entry ]
-  %gep2 = getelementptr <2 x i64>* %iv2, i64 -1
+  %gep2 = getelementptr <2 x i64>, <2 x i64>* %iv2, i64 -1
   %load = load <2 x i64>* %gep2
   call void @use-quadword(<2 x i64> %load)
   store <2 x i64> %val, <2 x i64>* %iv2
   %iv.next = add i64 %iv, -4
-  %gep3 = getelementptr <2 x i64>* %iv2, i64 4
+  %gep3 = getelementptr <2 x i64>, <2 x i64>* %iv2, i64 4
   %cond = icmp eq i64 %iv.next, 0
   br i1 %cond, label %exit, label %body
 
@@ -588,18 +588,18 @@ define void @store-post-indexed-float(fl
 ; CHECK-LABEL: store-post-indexed-float
 ; CHECK: str s{{[0-9]+}}, [x{{[0-9]+}}], #16
 entry:
-  %gep1 = getelementptr float* %array, i64 2
+  %gep1 = getelementptr float, float* %array, i64 2
   br label %body
 
 body:
   %iv2 = phi float* [ %gep3, %body ], [ %gep1, %entry ]
   %iv = phi i64 [ %iv.next, %body ], [ %count, %entry ]
-  %gep2 = getelementptr float* %iv2, i64 -1
+  %gep2 = getelementptr float, float* %iv2, i64 -1
   %load = load float* %gep2
   call void @use-float(float %load)
   store float %val, float* %iv2
   %iv.next = add i64 %iv, -4
-  %gep3 = getelementptr float* %iv2, i64 4
+  %gep3 = getelementptr float, float* %iv2, i64 4
   %cond = icmp eq i64 %iv.next, 0
   br i1 %cond, label %exit, label %body
 
@@ -611,18 +611,18 @@ define void @store-post-indexed-double(d
 ; CHECK-LABEL: store-post-indexed-double
 ; CHECK: str d{{[0-9]+}}, [x{{[0-9]+}}], #32
 entry:
-  %gep1 = getelementptr double* %array, i64 2
+  %gep1 = getelementptr double, double* %array, i64 2
   br label %body
 
 body:
   %iv2 = phi double* [ %gep3, %body ], [ %gep1, %entry ]
   %iv = phi i64 [ %iv.next, %body ], [ %count, %entry ]
-  %gep2 = getelementptr double* %iv2, i64 -1
+  %gep2 = getelementptr double, double* %iv2, i64 -1
   %load = load double* %gep2
   call void @use-double(double %load)
   store double %val, double* %iv2
   %iv.next = add i64 %iv, -4
-  %gep3 = getelementptr double* %iv2, i64 4
+  %gep3 = getelementptr double, double* %iv2, i64 4
   %cond = icmp eq i64 %iv.next, 0
   br i1 %cond, label %exit, label %body
 
@@ -655,15 +655,15 @@ for.body:
   %phi1 = phi i32* [ %gep4, %for.body ], [ %b, %0 ]
   %phi2 = phi i32* [ %gep3, %for.body ], [ %a, %0 ]
   %i = phi i64 [ %dec.i, %for.body], [ %count, %0 ]
-  %gep1 = getelementptr i32* %phi1, i64 -1
+  %gep1 = getelementptr i32, i32* %phi1, i64 -1
   %load1 = load i32* %gep1
-  %gep2 = getelementptr i32* %phi2, i64 -1
+  %gep2 = getelementptr i32, i32* %phi2, i64 -1
   store i32 %load1, i32* %gep2
   %load2 = load i32* %phi1
   store i32 %load2, i32* %phi2
   %dec.i = add nsw i64 %i, -1
-  %gep3 = getelementptr i32* %phi2, i64 -2
-  %gep4 = getelementptr i32* %phi1, i64 -2
+  %gep3 = getelementptr i32, i32* %phi2, i64 -2
+  %gep4 = getelementptr i32, i32* %phi1, i64 -2
   %cond = icmp sgt i64 %dec.i, 0
   br i1 %cond, label %for.body, label %end
 end:
@@ -679,15 +679,15 @@ for.body:
   %phi1 = phi i64* [ %gep4, %for.body ], [ %b, %0 ]
   %phi2 = phi i64* [ %gep3, %for.body ], [ %a, %0 ]
   %i = phi i64 [ %dec.i, %for.body], [ %count, %0 ]
-  %gep1 = getelementptr i64* %phi1, i64 -1
+  %gep1 = getelementptr i64, i64* %phi1, i64 -1
   %load1 = load i64* %gep1
-  %gep2 = getelementptr i64* %phi2, i64 -1
+  %gep2 = getelementptr i64, i64* %phi2, i64 -1
   store i64 %load1, i64* %gep2
   %load2 = load i64* %phi1
   store i64 %load2, i64* %phi2
   %dec.i = add nsw i64 %i, -1
-  %gep3 = getelementptr i64* %phi2, i64 -2
-  %gep4 = getelementptr i64* %phi1, i64 -2
+  %gep3 = getelementptr i64, i64* %phi2, i64 -2
+  %gep4 = getelementptr i64, i64* %phi1, i64 -2
   %cond = icmp sgt i64 %dec.i, 0
   br i1 %cond, label %for.body, label %end
 end:
@@ -703,15 +703,15 @@ for.body:
   %phi1 = phi <2 x i64>* [ %gep4, %for.body ], [ %b, %0 ]
   %phi2 = phi <2 x i64>* [ %gep3, %for.body ], [ %a, %0 ]
   %i = phi i64 [ %dec.i, %for.body], [ %count, %0 ]
-  %gep1 = getelementptr <2 x i64>* %phi1, i64 -1
+  %gep1 = getelementptr <2 x i64>, <2 x i64>* %phi1, i64 -1
   %load1 = load <2 x i64>* %gep1
-  %gep2 = getelementptr <2 x i64>* %phi2, i64 -1
+  %gep2 = getelementptr <2 x i64>, <2 x i64>* %phi2, i64 -1
   store <2 x i64> %load1, <2 x i64>* %gep2
   %load2 = load <2 x i64>* %phi1
   store <2 x i64> %load2, <2 x i64>* %phi2
   %dec.i = add nsw i64 %i, -1
-  %gep3 = getelementptr <2 x i64>* %phi2, i64 -2
-  %gep4 = getelementptr <2 x i64>* %phi1, i64 -2
+  %gep3 = getelementptr <2 x i64>, <2 x i64>* %phi2, i64 -2
+  %gep4 = getelementptr <2 x i64>, <2 x i64>* %phi1, i64 -2
   %cond = icmp sgt i64 %dec.i, 0
   br i1 %cond, label %for.body, label %end
 end:
@@ -727,15 +727,15 @@ for.body:
   %phi1 = phi float* [ %gep4, %for.body ], [ %b, %0 ]
   %phi2 = phi float* [ %gep3, %for.body ], [ %a, %0 ]
   %i = phi i64 [ %dec.i, %for.body], [ %count, %0 ]
-  %gep1 = getelementptr float* %phi1, i64 -1
+  %gep1 = getelementptr float, float* %phi1, i64 -1
   %load1 = load float* %gep1
-  %gep2 = getelementptr float* %phi2, i64 -1
+  %gep2 = getelementptr float, float* %phi2, i64 -1
   store float %load1, float* %gep2
   %load2 = load float* %phi1
   store float %load2, float* %phi2
   %dec.i = add nsw i64 %i, -1
-  %gep3 = getelementptr float* %phi2, i64 -2
-  %gep4 = getelementptr float* %phi1, i64 -2
+  %gep3 = getelementptr float, float* %phi2, i64 -2
+  %gep4 = getelementptr float, float* %phi1, i64 -2
   %cond = icmp sgt i64 %dec.i, 0
   br i1 %cond, label %for.body, label %end
 end:
@@ -751,15 +751,15 @@ for.body:
   %phi1 = phi double* [ %gep4, %for.body ], [ %b, %0 ]
   %phi2 = phi double* [ %gep3, %for.body ], [ %a, %0 ]
   %i = phi i64 [ %dec.i, %for.body], [ %count, %0 ]
-  %gep1 = getelementptr double* %phi1, i64 -1
+  %gep1 = getelementptr double, double* %phi1, i64 -1
   %load1 = load double* %gep1
-  %gep2 = getelementptr double* %phi2, i64 -1
+  %gep2 = getelementptr double, double* %phi2, i64 -1
   store double %load1, double* %gep2
   %load2 = load double* %phi1
   store double %load2, double* %phi2
   %dec.i = add nsw i64 %i, -1
-  %gep3 = getelementptr double* %phi2, i64 -2
-  %gep4 = getelementptr double* %phi1, i64 -2
+  %gep3 = getelementptr double, double* %phi2, i64 -2
+  %gep4 = getelementptr double, double* %phi1, i64 -2
   %cond = icmp sgt i64 %dec.i, 0
   br i1 %cond, label %for.body, label %end
 end:

Modified: llvm/trunk/test/CodeGen/AArch64/ldst-regoffset.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/ldst-regoffset.ll?rev=230786&r1=230785&r2=230786&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/ldst-regoffset.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/ldst-regoffset.ll Fri Feb 27 13:29:02 2015
@@ -12,13 +12,13 @@
 define void @ldst_8bit(i8* %base, i32 %off32, i64 %off64) minsize {
 ; CHECK-LABEL: ldst_8bit:
 
-   %addr8_sxtw = getelementptr i8* %base, i32 %off32
+   %addr8_sxtw = getelementptr i8, i8* %base, i32 %off32
    %val8_sxtw = load volatile i8* %addr8_sxtw
    %val32_signed = sext i8 %val8_sxtw to i32
    store volatile i32 %val32_signed, i32* @var_32bit
 ; CHECK: ldrsb {{w[0-9]+}}, [{{x[0-9]+}}, {{[wx][0-9]+}}, sxtw]
 
-  %addr_lsl = getelementptr i8* %base, i64 %off64
+  %addr_lsl = getelementptr i8, i8* %base, i64 %off64
   %val8_lsl = load volatile i8* %addr_lsl
   %val32_unsigned = zext i8 %val8_lsl to i32
   store volatile i32 %val32_unsigned, i32* @var_32bit
@@ -40,13 +40,13 @@ define void @ldst_8bit(i8* %base, i32 %o
 define void @ldst_16bit(i16* %base, i32 %off32, i64 %off64) minsize {
 ; CHECK-LABEL: ldst_16bit:
 
-   %addr8_sxtwN = getelementptr i16* %base, i32 %off32
+   %addr8_sxtwN = getelementptr i16, i16* %base, i32 %off32
    %val8_sxtwN = load volatile i16* %addr8_sxtwN
    %val32_signed = sext i16 %val8_sxtwN to i32
    store volatile i32 %val32_signed, i32* @var_32bit
 ; CHECK: ldrsh {{w[0-9]+}}, [{{x[0-9]+}}, {{[xw][0-9]+}}, sxtw #1]
 
-  %addr_lslN = getelementptr i16* %base, i64 %off64
+  %addr_lslN = getelementptr i16, i16* %base, i64 %off64
   %val8_lslN = load volatile i16* %addr_lslN
   %val32_unsigned = zext i16 %val8_lslN to i32
   store volatile i32 %val32_unsigned, i32* @var_32bit
@@ -94,12 +94,12 @@ define void @ldst_16bit(i16* %base, i32
 define void @ldst_32bit(i32* %base, i32 %off32, i64 %off64) minsize {
 ; CHECK-LABEL: ldst_32bit:
 
-   %addr_sxtwN = getelementptr i32* %base, i32 %off32
+   %addr_sxtwN = getelementptr i32, i32* %base, i32 %off32
    %val_sxtwN = load volatile i32* %addr_sxtwN
    store volatile i32 %val_sxtwN, i32* @var_32bit
 ; CHECK: ldr {{w[0-9]+}}, [{{x[0-9]+}}, {{[xw][0-9]+}}, sxtw #2]
 
-  %addr_lslN = getelementptr i32* %base, i64 %off64
+  %addr_lslN = getelementptr i32, i32* %base, i64 %off64
   %val_lslN = load volatile i32* %addr_lslN
   store volatile i32 %val_lslN, i32* @var_32bit
 ; CHECK: ldr {{w[0-9]+}}, [{{x[0-9]+}}, {{x[0-9]+}}, lsl #2]
@@ -146,12 +146,12 @@ define void @ldst_32bit(i32* %base, i32
 define void @ldst_64bit(i64* %base, i32 %off32, i64 %off64) minsize {
 ; CHECK-LABEL: ldst_64bit:
 
-   %addr_sxtwN = getelementptr i64* %base, i32 %off32
+   %addr_sxtwN = getelementptr i64, i64* %base, i32 %off32
    %val_sxtwN = load volatile i64* %addr_sxtwN
    store volatile i64 %val_sxtwN, i64* @var_64bit
 ; CHECK: ldr {{x[0-9]+}}, [{{x[0-9]+}}, {{[xw][0-9]+}}, sxtw #3]
 
-  %addr_lslN = getelementptr i64* %base, i64 %off64
+  %addr_lslN = getelementptr i64, i64* %base, i64 %off64
   %val_lslN = load volatile i64* %addr_lslN
   store volatile i64 %val_lslN, i64* @var_64bit
 ; CHECK: ldr {{x[0-9]+}}, [{{x[0-9]+}}, {{x[0-9]+}}, lsl #3]
@@ -194,13 +194,13 @@ define void @ldst_64bit(i64* %base, i32
 define void @ldst_float(float* %base, i32 %off32, i64 %off64) minsize {
 ; CHECK-LABEL: ldst_float:
 
-   %addr_sxtwN = getelementptr float* %base, i32 %off32
+   %addr_sxtwN = getelementptr float, float* %base, i32 %off32
    %val_sxtwN = load volatile float* %addr_sxtwN
    store volatile float %val_sxtwN, float* @var_float
 ; CHECK: ldr {{s[0-9]+}}, [{{x[0-9]+}}, {{[xw][0-9]+}}, sxtw #2]
 ; CHECK-NOFP-NOT: ldr {{s[0-9]+}},
 
-  %addr_lslN = getelementptr float* %base, i64 %off64
+  %addr_lslN = getelementptr float, float* %base, i64 %off64
   %val_lslN = load volatile float* %addr_lslN
   store volatile float %val_lslN, float* @var_float
 ; CHECK: ldr {{s[0-9]+}}, [{{x[0-9]+}}, {{x[0-9]+}}, lsl #2]
@@ -247,13 +247,13 @@ define void @ldst_float(float* %base, i3
 define void @ldst_double(double* %base, i32 %off32, i64 %off64) minsize {
 ; CHECK-LABEL: ldst_double:
 
-   %addr_sxtwN = getelementptr double* %base, i32 %off32
+   %addr_sxtwN = getelementptr double, double* %base, i32 %off32
    %val_sxtwN = load volatile double* %addr_sxtwN
    store volatile double %val_sxtwN, double* @var_double
 ; CHECK: ldr {{d[0-9]+}}, [{{x[0-9]+}}, {{[xw][0-9]+}}, sxtw #3]
 ; CHECK-NOFP-NOT: ldr {{d[0-9]+}},
 
-  %addr_lslN = getelementptr double* %base, i64 %off64
+  %addr_lslN = getelementptr double, double* %base, i64 %off64
   %val_lslN = load volatile double* %addr_lslN
   store volatile double %val_lslN, double* @var_double
 ; CHECK: ldr {{d[0-9]+}}, [{{x[0-9]+}}, {{x[0-9]+}}, lsl #3]
@@ -301,13 +301,13 @@ define void @ldst_double(double* %base,
 define void @ldst_128bit(fp128* %base, i32 %off32, i64 %off64) minsize {
 ; CHECK-LABEL: ldst_128bit:
 
-   %addr_sxtwN = getelementptr fp128* %base, i32 %off32
+   %addr_sxtwN = getelementptr fp128, fp128* %base, i32 %off32
    %val_sxtwN = load volatile fp128* %addr_sxtwN
    store volatile fp128 %val_sxtwN, fp128* %base
 ; CHECK: ldr {{q[0-9]+}}, [{{x[0-9]+}}, {{[xw][0-9]+}}, sxtw #4]
 ; CHECK-NOFP-NOT: ldr {{q[0-9]+}}, [{{x[0-9]+}}, {{[xw][0-9]+}}, sxtw #4]
 
-  %addr_lslN = getelementptr fp128* %base, i64 %off64
+  %addr_lslN = getelementptr fp128, fp128* %base, i64 %off64
   %val_lslN = load volatile fp128* %addr_lslN
   store volatile fp128 %val_lslN, fp128* %base
 ; CHECK: ldr {{q[0-9]+}}, [{{x[0-9]+}}, {{x[0-9]+}}, lsl #4]




